SPEC SFS®2014_vda Result

Copyright © 2016-2020 Standard Performance Evaluation Corporation

DATATOM Corp., Ltd. SPEC SFS2014_vda = 2800 Streams
DATATOM INFINITY Overall Response Time = 16.72 msec


Performance

Business Metric (Streams) | Average Latency (msec) | Streams Ops/Sec | Streams MB/Sec
 280 |  6.359 |  2801 |  1291
 560 |  6.709 |  5603 |  2587
 840 |  7.773 |  8405 |  3873
1120 |  9.304 | 11207 |  5166
1400 | 11.800 | 14009 |  6470
1680 | 16.703 | 16810 |  7746
1960 | 20.201 | 19612 |  9049
2240 | 23.961 | 22413 | 10331
2520 | 31.016 | 25214 | 11633
2800 | 39.741 | 28016 | 12919
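
As a quick sanity check on the table above, the delivered bandwidth per stream stays nearly constant across the load points, which is what the fixed per-stream bit rate of the SFS2014 VDA workload (on the order of 36 Mb/s per stream) should produce. A one-line sketch using the peak-load row, with the values copied from the table:

    # MB/s per stream and the corresponding bit rate at the 2800-stream load point
    awk 'BEGIN { mbps = 12919 / 2800; printf "%.2f MB/s (~%.1f Mb/s) per stream\n", mbps, mbps * 8 }'
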
Performance Graph


Product and Test Information

DATATOM INFINITY
Tested by: DATATOM Corp., Ltd.
Hardware Available: 04/2020
Software Available: 04/2020
Date Tested: 04/2020
License Number: 6039
Licensee Locations: Chengdu, China

INFINITY is a new generation of distributed cluster cloud storage developed by DATATOM. It makes full use of software-defined storage concepts and provides unified storage that includes file, block and object storage. It can also be used as cloud storage for Internet applications and as back-end storage for cloud platforms; the cloud platforms that INFINITY supports include VMware, OpenStack and Docker. INFINITY's advantages in on-demand scaling, performance aggregation and data security have made it a high-performance cluster storage platform for a wide range of business applications, with numerous deployments in the broadcasting, financial, government, education, medical and other industries.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 360 | 4TB 7200RPM SATA HDD | Western Digital | HUS726T4TALE6L4 | Each server's storage data pool consists of 36 disks. The specification of this SATA HDD is: size: 3.5 inch; interface: SATA 6Gb/s; capacity: 4TB; speed: 7200 RPM.
2 | 40 | 240GB SATA SSD | Seagate | XA240LE10003 | 40 SSDs in total. 10 of them are used by the storage server nodes (1/node) to store metadata. 20 of them are used by the storage server nodes (2/node) to create RAID-1 pairs for the OS. 10 of them are used by the clients (2/node) to create RAID-1 pairs for the OS. The specification of this SSD is: size: 2.5 inch; interface: SATA 6Gb/s; capacity: 240GB.
3 | 10 | 800GB NVMe SSD | Intel | P3700 | One NVMe SSD resides in each of the 10 server nodes, so there are 10 NVMe SSDs in total. All of them are used to save storage logs. The specification of this SSD is: interface: PCIe NVMe 3.0 x4; capacity: 800GB.
4 | 1 | Ethernet Switch | Dell EMC | S4048-ON | 48 x 10GbE and 6 x 40GbE ports. 40 of the 10GbE ports are used to connect the storage server nodes; 5 of the 40GbE ports are used to connect the storage client nodes.
5 | 15 | Chassis | Chenbro | RM41736 Plus | Supports 36 x 3.5" SAS/SATA drives with front/rear access for high-density storage servers. Economical cooling: 7 x 8038 hot-swap fans, 7500 RPM. High power efficiency: 1+1 CRPS 1200W 80 PLUS Platinum power supplies.
6 | 15 | Motherboard | Supermicro | X10DRL-i | Motherboard model used in the server and client nodes.
7 | 15 | Host Bus Adapter | Broadcom | SAS 9311-8i | There is 1 HBA per server and client node. Data transmission rate: 12 Gb/s SAS-3; I/O controller: LSI SAS 3008.
8 | 30 | Processor | Intel | E5-2630 v4 | There are 2 CPUs per server and client node. Clock speed: 2.2 GHz; turbo clock speed: 3.1 GHz; cores: 10; architecture: x86-64; threads: 20; Smart Cache: 25 MB; max CPUs: 2.
9 | 120 | Memory | Micron | MTA36ASF4G72PZ-2G6D1QG | There are 8 x 32GB DIMMs per server and client node. DDR4 functionality and operations supported as defined in the component data sheet; 32GB (4 Gig x 72); 288-pin registered dual in-line memory module (RDIMM); supports ECC error detection and correction.
10 | 5 | 40GbE NIC | Intel | XL710-QDA2 | There is 1 40GbE NIC per client node. Each client node generates load, and one of the nodes acts as the primary client.
11 | 20 | 10GbE NIC | Intel | X520-DA2 | There are 2 10GbE NICs (4 network ports) per server node. Every two network ports from different NICs form a dual 10 Gigabit bond. The two bonds are used as the internal data sync network and the external access network of the storage cluster.

Configuration Diagrams

  1. Configuration Diagram

Component Software

Item No | Component | Type | Name and Version | Description
1 | Storage Server Nodes | INFINITY filesystem | 3.5.0 | INFINITY is a unified storage system including file, block and object storage. It adopts a global unified namespace and is internally compatible with a variety of NAS protocols, including CIFS/SMB, NFS, FTP, AFP and WebDAV.
2 | Storage Server and Client Nodes | Operating System | CentOS 7.4 | The operating system on each storage server and client node was 64-bit CentOS 7.4.

Hardware Configuration and Tuning - Physical

Storage Node
Parameter Name | Value | Description
block.db | None | Specify the '--block.db' parameter when adding NVMe SSDs. The NVMe SSDs are used to save storage log information.
Network port bond | mode=6 | Every two network ports of a storage server form a bond port with mode=6 (balance-alb). One bond is used for the external access network and one bond for the data sync network (a configuration sketch follows this table).
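
The report does not include the actual bonding configuration files. The following is a minimal sketch of how a mode-6 (balance-alb) bond is typically defined on CentOS 7; the interface names (bond0, ens1f0) and the address are illustrative assumptions, not values from the report.

    # /etc/sysconfig/network-scripts/ifcfg-bond0   (illustrative)
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=6 miimon=100"
    BOOTPROTO=static
    IPADDR=192.168.10.11        # example address only
    PREFIX=24
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-ens1f0  (repeat for the second port in the bond)
    DEVICE=ens1f0
    TYPE=Ethernet
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes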

Hardware Configuration and Tuning Notes

In the INFINITY storage system, network performance is increased by network port bonding. For storage, read/write performance is increased by using high-performance data disks. For security, SSDs are paired in RAID 1 to protect the OS. The storage must be configured manually through the INFINITY management web page to ensure that data is stored on the designated storage devices.

Software Configuration and Tuning - Physical

Server Node
Parameter Name | Value | Description
vm.dirty_writeback_centisecs | 100 | Controls how often the kernel BDI flush thread wakes up to check whether cached data needs to be written to disk, in units of 1/100 second. The default is 500, meaning the thread wakes every 5 seconds. For workloads with a large amount of continuous buffered writes, the value should be reduced so that the flush condition is detected sooner, avoiding the accumulation of excessive dirty data and the resulting write peaks.
vm.dirty_expire_centisecs | 100 | The age, in units of 1/100 second, after which dirty data in the kernel write buffer becomes eligible for writeback by the BDI flush thread. The default is 3000, meaning data that has been dirty for 30 seconds is flushed to disk. For workloads with a large amount of continuous buffered writes, the value should be reduced to trigger writeback earlier, smoothing the original write peak across multiple submissions. Setting it too small causes I/O to be submitted too frequently. (An example sysctl.conf excerpt follows this table.)
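
The tuning notes below state that these two parameters are set in /etc/sysctl.conf. A minimal sketch of the corresponding entries and the standard way to apply them on CentOS 7 (the path and values come from this report; the comments only summarize the descriptions above):

    # /etc/sysctl.conf (excerpt): wake the BDI flush thread every second and
    # consider dirty pages expired after one second
    vm.dirty_writeback_centisecs = 100
    vm.dirty_expire_centisecs = 100

    # apply without rebooting
    sysctl -p
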
Client Node
Parameter Name | Value | Description
rsize,wsize | 1048576 | NFS mount options for data block size.
protocol | tcp | NFS mount option for protocol.
tcp_fin_timeout | 600 | TCP time to wait for final packet before socket closed.
nfsvers | 4.1 | NFS mount option for NFS version.

Software Configuration and Tuning Notes

For the flush mechanism of kernel dirty pages, the kernel BDI thread periodically checks whether any dirty pages are too old or too numerous; if so, the dirty pages are flushed. These two parameters are located in /etc/sysctl.conf. The mount command "mount -t nfs ServerIp:/infinityfs1/nfs /nfs" was used in the test. The resulting mount information is ServerIp:/infinityfs1/nfs on /nfs type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=12049,timeo=600,retrans=2,sec=sys,clientaddr=ClientIp,local_lock=none,addr=ServerIp).
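
Combining the client parameter table with the mount line above, a sketch of an equivalent mount invocation with the options spelled out explicitly (ServerIp is the placeholder used in the report; passing the options on the command line is an assumption, since the report only shows them in the resulting mount information):

    mkdir -p /nfs
    mount -t nfs -o nfsvers=4.1,proto=tcp,rsize=1048576,wsize=1048576,hard \
          ServerIp:/infinityfs1/nfs /nfs

    # verify the negotiated options
    mount | grep /nfs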

Service SLA Notes

There were no opaque services in use.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 360 4TB HDDs form the file system data storage pool | duplication | Yes | 1
2 | Ten 240GB SSDs form the file system metadata storage pool | duplication | Yes | 1
3 | Two 240GB SSDs form a RAID 1 set used to store the OS in each server and client node | RAID-1 | Yes | 15
4 | Each server node independently uses an NVMe SSD to store log information | None | No | 10
Number of Filesystems: 1
Total Capacity: 624.6 TiB
Filesystem Type: infinityfs

Filesystem Creation Notes

INFINITY provides two data protection modes: erasure coding and replication. Users can choose an appropriate data protection strategy according to their business requirements. In this test we chose the duplication mode (one of the replication modes), which means that every piece of data written to storage is stored as two copies at the same time, so the usable capacity is half of the raw disk capacity. Reserved space holds disk metadata information, so the actual usable space is less than half of the total disk space. The product default parameters were used when creating the file system.
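
As a rough cross-check of the reported 624.6 TiB against the 360 data HDDs in the bill of materials, assuming two-copy replication over the data pool only and ignoring the metadata pool (the arithmetic is derived from the tables in this report, not quoted from it):

    awk 'BEGIN {
      raw_tib = 360 * 4e12 / 2^40      # 360 x 4 TB data HDDs, about 1309.7 TiB raw
      usable  = raw_tib / 2            # two copies of every write, about 654.8 TiB
      printf "theoretical usable: %.1f TiB (reported, after reserved space: 624.6 TiB)\n", usable
    }'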

Storage and Filesystem Notes

Ten servers form a storage cluster. Each storage server provides 36 4TB SATA HDDs (360 4TB SATA HDDs in total across the cluster) to form a data storage pool, and each storage server provides one 240GB SATA SSD (10 240GB SATA SSDs in total) to form a metadata storage pool. The two storage pools together form one file system. In addition, a separate 800GB NVMe SSD in each storage server node is used to store storage log information, which speeds up log writes and thus improves storage performance.

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 10GbE Network | 40 | Each storage server uses four 10GbE network ports; every two ports form a dual 10 Gigabit bond (mode=6).
2 | 40GbE Network | 5 | Each client uses one 40GbE network port to communicate with the servers.

Transport Configuration Notes

Each storage server uses four 10GbE network ports, and every two ports form a dual 10 Gigabit bond. The two bonds are used as the internal data sync network and the external access network of the storage cluster. The internal network provides communication between the nodes of the cluster, and the external network communicates with the clients. Each client uses a single 40GbE network port to communicate with the servers.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Dell EMC Networking S4048-ON | 10/40GbE | 54 | 45 | The storage servers use 40 10GbE ports, and the storage clients use 5 40GbE ports.

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 20 | CPU | File System Server Nodes | Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, 10-core | File System Server Nodes
2 | 10 | CPU | File System Client Nodes | Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, 10-core | File System Client Nodes, load generator

Processing Element Notes

There are 2 physical processors in each server and client node. Each processor has 10 cores with two threads per core. No Spectre/Meltdown patches were installed.
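
A sketch of how the topology and patch status described above can be checked on a CentOS 7 node, using standard tools (the vulnerabilities directory under sysfs is only present on kernels that carry the Spectre/Meltdown reporting patches, so its absence is consistent with the statement above):

    # expect: 2 sockets x 10 cores x 2 threads = 40 logical CPUs per node
    lscpu | egrep 'Socket|Core|Thread|Model name'

    # present only on kernels with the mitigation patches applied
    ls /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null \
        || echo "no mitigation reporting (unpatched kernel)"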

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
System memory on server node | 256 | 10 | V | 2560
System memory on client node | 256 | 5 | V | 1280
Grand Total Memory Gibibytes: 3840

Memory Notes

None

Stable Storage

Infinity Cluster Storage does not use any internal memory to temporarily cache write data to the underlying storage system. All writes are committed directly to the storage disk, therefore there is no need for any RAM battery protection. Data is protected on the storage media using replicas of data. In the event of a power failure a write in transit would not be acknowledged.

Solution Under Test Configuration Notes

The storage cluster uses large-capacity SATA HDDs to form the data storage pool and high-performance SATA SSDs to form the metadata storage pool. In addition, each storage server node uses one NVMe SSD to save storage logs. The priority of mixed read and write I/O is adjusted according to the I/O model to improve cluster storage performance under mixed I/O.

Other Solution Notes

None

Dataflow

The storage client nodes and the server nodes are connected to the same switch. The 10 server nodes use 40 10GbE network ports: 20 of them serve as external service network ports to communicate with the clients, and the other 20 serve as internal data network ports for communication between the server nodes. The five client nodes use five 40GbE network ports; each client generates load, the load is evenly distributed across the client nodes, and the traffic is evenly distributed across the servers' 20 external network ports.

Other Notes

None

Other Report Notes

None


Generated on Tue Jun 2 13:59:35 2020 by SpecReport
Copyright © 2016-2020 Standard Performance Evaluation Corporation