SPEC SFS(R)2014_vda Result

DATATOM Corp., Ltd. : DATATOM INFINITY
SPEC SFS2014_vda = 2800 Streams (Overall Response Time = 16.72 msec)

===============================================================================

Performance
===========

    Business      Average
      Metric      Latency      Streams      Streams
   (Streams)       (msec)      Ops/Sec       MB/Sec
------------ ------------ ------------ ------------
         280          6.4         2801         1291
         560          6.7         5603         2587
         840          7.8         8405         3873
        1120          9.3        11207         5166
        1400         11.8        14009         6470
        1680         16.7        16810         7746
        1960         20.2        19612         9049
        2240         24.0        22413        10331
        2520         31.0        25214        11633
        2800         39.7        28016        12919

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|                        DATATOM INFINITY                        |
+---------------------------------------------------------------+
Tested by              DATATOM Corp., Ltd.
Hardware Available     04/2020
Software Available     04/2020
Date Tested            04/2020
License Number         6039
Licensee Locations     Chengdu, China

INFINITY is a new generation of distributed cluster cloud storage developed by
DATATOM that makes full use of the idea of software-defined storage. It is a
unified storage system offering file, block and object interfaces. It can also
be used as cloud storage for Internet applications and as back-end storage for
cloud platforms; the cloud platforms that INFINITY supports include VMware,
OpenStack and Docker. INFINITY's advantages in on-demand scaling, performance
aggregation and data security make it a high-performance cluster storage
solution for a wide range of business applications, with numerous deployments
in the broadcasting, financial, government, education, medical and other
industries.

Solution Under Test Bill of Materials
=====================================

Item
 No   Qty  Type         Vendor      Model/Name    Description
----  ---  -----------  ----------  ------------  ------------------------------------------
   1  360  4TB 7200RPM  Western     HUS726T4TA    Each server's storage data pool consists
           SATA HDD     Digital     LE6L4         of 36 disks. The specification of this
                                                  SATA HDD is: size: 3.5 inch; interface:
                                                  SATA 6Gb/s; capacity: 4TB; speed: 7200RPM.
   2   40  240GB SATA   Seagate     XA240LE100    40 SSDs in total. 10 of them are used by
           SSD                      03            the storage server nodes (1/node) to
                                                  store metadata. 20 of them are used by
                                                  the storage server nodes (2/node) to
                                                  create RAID-1 pairs for the OS. 10 of
                                                  them are used by the clients (2/node) to
                                                  create RAID-1 pairs for the OS. The
                                                  specification of this SSD is: size: 2.5
                                                  inch; interface: SATA 6Gb/s; capacity:
                                                  240GB.
   3   10  800GB NVMe   Intel       P3700         One NVMe SSD resides in each server node;
           SSD                                    there are 10 server nodes, so there are
                                                  10 NVMe SSDs in total. All of them are
                                                  used to save storage logs. The
                                                  specification of this SSD is: interface:
                                                  PCIe NVMe 3.0 x4; capacity: 800GB.
   4    1  Ethernet     Dell EMC    S4048-ON      48 x 10GbE and 6 x 40GbE ports. 40 x
           Switch                                 10GbE ports are used to connect the
                                                  storage server nodes; 5 x 40GbE ports
                                                  are used to connect the storage client
                                                  nodes.
   5   15  Chassis      Chenbro     RM41736 Plus  Supports 36 x 3.5" SAS/SATA drives with
                                                  front/rear access for a high-density
                                                  storage server. Economical cooling: 7 x
                                                  8038 hot-swap fans, 7500RPM. High power
                                                  efficiency: 1+1 CRPS 1200W 80 PLUS
                                                  Platinum power supply.
   6   15  Motherboard  Supermicro  X10DRL-i      Motherboard model used in the server and
                                                  client nodes.
   7   15  Host Bus     Broadcom    SAS 9311-8i   There is 1 HBA per server and client
           Adapter                                node. Data transmission rate: 12 Gb/s
                                                  SAS-3; I/O controller: LSI SAS 3008.
   8   30  Processor    Intel       E5-2630 v4    There are 2 CPUs per server and client
                                                  node. Clock speed: 2.2 GHz; turbo clock
                                                  speed: 3.1 GHz; cores: 10; architecture:
                                                  x86-64; threads: 20; Smart Cache: 25 MB;
                                                  max CPUs: 2.
   9  120  Memory       Micron      MTA36ASF4G    There are 8 x 32GB DIMMs per server and
                                    72PZ-2G6D1QG  client node. DDR4 functionality and
                                                  operations supported as defined in the
                                                  component data sheet; 32GB (4 Gig x 72);
                                                  288-pin registered dual in-line memory
                                                  module (RDIMM); supports ECC error
                                                  detection and correction.
  10    5  40GbE NIC    Intel       XL710-QDA2    There is 1 40GbE NIC per client node.
                                                  Each client node generates load, and one
                                                  of the nodes acts as the primary client.
  11   20  10GbE NIC    Intel       X520-DA2      There are 2 10GbE NICs (4 network ports)
                                                  per server node. Every two network ports
                                                  from different NICs form a dual 10
                                                  Gigabit bond. The two bonds are used as
                                                  the internal data sync and external
                                                  access networks of the storage cluster.

Configuration Diagrams
======================
1) sfs2014-20200528-00074.config1.png (see SPEC SFS2014 results webpage)

Component Software
==================

Item
 No   Component       Type        Name and Version  Description
----  --------------  ----------  ----------------  --------------------------------------
   1  Storage Server  INFINITY    3.5.0             INFINITY is a unified storage
      Nodes                                         filesystem including file, block and
                                                    object storage. It adopts a global
                                                    unified namespace and is internally
                                                    compatible with a variety of NAS
                                                    protocols, including CIFS/SMB, NFS,
                                                    FTP, AFP and WebDAV.
   2  Storage Server  Operating   CentOS 7.4        The operating system on each storage
      and Client      System                        server and client node was 64-bit
      Nodes                                         CentOS 7.4.

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                              Storage Node                             |
+----------------------------------------------------------------------+
Parameter Name     Value      Description
-----------------  ---------  ---------------------------------------------
block.db           None       Specify the '--block.db' parameter when
                              adding NVMe SSDs. The NVMe SSDs will be used
                              to save storage log information.
Network port bond  mode=6     Every two network ports of the storage
                              server form a bond port with mode=6. One
                              bond is used for the external access network
                              and one bond for the data sync network.

Hardware Configuration and Tuning Notes
---------------------------------------
In the INFINITY storage system, network performance is increased by network
port bonding. For storage, read/write performance is increased by using
high-performance data disks. For security, SSDs are combined into RAID 1
arrays to protect the OS. The storage must be configured manually through the
INFINITY management webpage to ensure that data is stored on the designated
storage devices.
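
The report describes the bond ports only at the level of the mode 6
(balance-alb) bonding parameter. The following is a minimal illustrative
sketch, assuming CentOS 7 sysconfig-style network scripts, of how one
dual-10GbE bond of this kind could be defined; the interface names (ens1f0,
ens1f1), the bond name and the IP address are assumptions for illustration
and are not taken from the tested configuration.

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (bond master, mode 6 = balance-alb)
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=6 miimon=100"
    BOOTPROTO=none
    # Assumed address on the data sync network; the external access bond would
    # be defined the same way with its own address.
    IPADDR=192.168.10.11
    PREFIX=24
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-ens1f0  (first 10GbE member; repeat for ens1f1)
    DEVICE=ens1f0
    TYPE=Ethernet
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

    # Apply with the legacy network service on CentOS 7:
    # systemctl restart network

Mode 6 (balance-alb) balances transmit and receive traffic across both member
ports without requiring link aggregation support on the switch, which matches
the two-ports-per-bond layout described above.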
Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                              Server Node                              |
+----------------------------------------------------------------------+
Parameter Name                Value   Description
----------------------------  ------  ---------------------------------------
vm.dirty_writeback_centisecs  100     Controls how often the kernel BDI
                                      thread wakes up to check whether the
                                      data in the cache needs to be written
                                      to disk, in units of 1/100 second. The
                                      default is 500, which means the BDI
                                      thread is woken up every 5 seconds.
                                      For workloads with a large amount of
                                      continuous buffered writes, the value
                                      should be reduced so that the BDI
                                      thread detects the flush condition
                                      sooner, avoiding the accumulation of
                                      excessive dirty data and the resulting
                                      write peaks.
vm.dirty_expire_centisecs     100     Age after which data in the Linux
                                      kernel write buffer is considered
                                      'old', at which point the BDI thread
                                      starts writing it to disk, in units of
                                      1/100 second. The default is 3000,
                                      which means data that has been dirty
                                      for 30 seconds is flushed to disk. For
                                      workloads with a large amount of
                                      continuous buffered writes, the value
                                      should be reduced to trigger the BDI
                                      flush condition as early as possible,
                                      so that the original write peak is
                                      smoothed out over multiple
                                      submissions. If the value is set too
                                      small, I/O is submitted too
                                      frequently.

+----------------------------------------------------------------------+
|                              Client Node                              |
+----------------------------------------------------------------------+
Parameter Name   Value     Description
---------------  --------  --------------------------------------------------
rsize,wsize      1048576   NFS mount options for the read and write data
                           block size.
protocol         tcp       NFS mount option for the transport protocol.
tcp_fin_timeout  600       TCP time to wait for the final packet before the
                           socket is closed.
nfsvers          4.1       NFS mount option for the NFS version.

Software Configuration and Tuning Notes
---------------------------------------
In short, the flush mechanism for kernel dirty pages works as follows: the
kernel BDI thread periodically checks whether dirty pages are too old or too
numerous and, if so, flushes them. The two vm.* parameters above are set in
/etc/sysctl.conf. The mount command "mount -t nfs ServerIp:/infinityfs1/nfs
/nfs" was used in the test. The resulting mount information is
ServerIp:/infinityfs1/nfs on /nfs type nfs4 (rw,relatime,sync,vers=4.1,
rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=12049,timeo=600,
retrans=2,sec=sys,clientaddr=ClientIp,local_lock=none,addr=ServerIp).
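
The kernel and NFS client settings above can be summarised as the following
illustrative commands on CentOS 7. The two vm.* values and the mount command
are taken from this section; applying tcp_fin_timeout as the sysctl
net.ipv4.tcp_fin_timeout is an assumption about how that parameter was set,
and ServerIp is the same placeholder used in the mount information above.

    # Storage server nodes: dirty-page flushing tuned via /etc/sysctl.conf
    # (vm.dirty_writeback_centisecs = 100, vm.dirty_expire_centisecs = 100),
    # applied here with sysctl for illustration:
    sysctl -w vm.dirty_writeback_centisecs=100
    sysctl -w vm.dirty_expire_centisecs=100

    # Client nodes: TCP FIN timeout (assumed to be the net.ipv4 sysctl of the
    # same name):
    sysctl -w net.ipv4.tcp_fin_timeout=600

    # Client nodes: NFS mount used in the test; vers=4.1, proto=tcp and
    # rsize/wsize=1048576 appear in the resulting mount options:
    mount -t nfs ServerIp:/infinityfs1/nfs /nfs

Lowering the two vm.dirty_* values makes the kernel flush dirty pages earlier
and in smaller batches, which is the behaviour described in the parameter
table for continuous buffered-write workloads.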
Service SLA Notes
-----------------
There were no opaque services in use.

Storage and Filesystems
=======================

Item                                                             Stable
 No   Description                             Data Protection    Storage  Qty
----  --------------------------------------  -----------------  -------  -----
   1  360 4TB HDDs form the file system       duplication        Yes      1
      data storage pool
   2  Ten 240GB SSDs form a file system       duplication        Yes      1
      metadata storage pool
   3  Two 240GB SSDs form a RAID 1 array      RAID-1             Yes      15
      that stores the OS in each server
      and client node
   4  Each server node independently uses     None               No       10
      an NVMe SSD to store log information

Number of Filesystems    1
Total Capacity           624.6 TiB
Filesystem Type          infinityfs

Filesystem Creation Notes
-------------------------
INFINITY provides two data protection modes: erasure coding and replication.
Users can choose the appropriate data protection strategy according to their
business requirements. In this test we chose the duplication mode (one of the
replication modes), which means that when a piece of data is written to the
storage it is written as two copies at the same time, so the usable capacity
is half the raw disk capacity. Reserved space holds disk metadata
information, so the actual usable space is slightly less than half of the
total raw disk space. The product default parameters were used when creating
the file system.
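
As a rough consistency check of the capacity figures (not part of the
original submission), the sketch below compares the raw capacity of the HDD
data pool with the reported total capacity, assuming the 624.6 TiB figure
refers essentially to the duplicated HDD data pool:

    # 360 data HDDs x 4 TB (decimal) each, converted to TiB and halved for the
    # two-copy duplication mode; the difference from the reported 624.6 TiB is
    # the reserved/metadata overhead mentioned above.
    awk 'BEGIN {
        raw_tib  = 360 * 4 * 1e12 / 2^40;   # ~1309.7 TiB raw HDD capacity
        two_copy = raw_tib / 2;             # ~654.8 TiB if exactly half were usable
        reported = 624.6;                   # Total Capacity reported above
        printf "raw %.1f TiB, half %.1f TiB, reported %.1f TiB, reserved ~%.1f TiB\n",
               raw_tib, two_copy, reported, two_copy - reported;
    }'

The roughly 30 TiB gap (about 5% of the halved capacity) is consistent with
the note that reserved metadata space makes the usable capacity slightly less
than half of the raw disk space.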
Storage and Filesystem Notes
----------------------------
Ten servers form a storage cluster. Each storage server provides 36 4TB SATA
HDDs (360 4TB SATA HDDs in total across the cluster) to form a data storage
pool, and each storage server provides one 240GB SATA SSD (10 240GB SATA SSDs
in total across the cluster) to form a metadata storage pool. The two storage
pools together form one file system. In addition, a separate 800GB NVMe SSD
in each storage server node is used to store storage log information, which
improves log write speed and therefore storage performance.

Transport Configuration - Physical
==================================

Item                   Number of
 No   Transport Type   Ports Used  Notes
----  ---------------  ----------  -----------------------------------------------
   1  10GbE Network    40          Each storage server uses four 10GbE network
                                   ports, and every two network ports form a dual
                                   10 Gigabit bond with mode=6.
   2  40GbE Network    5           Each client uses one 40GbE network port to
                                   communicate with the servers.

Transport Configuration Notes
-----------------------------
Each storage server uses four 10GbE network ports, and every two network
ports form a dual 10 Gigabit bond. The two bonds are used as the internal
data sync network and the external access network of the storage cluster.
The internal network provides communication between the nodes of the
cluster, and the external network communicates with the clients. Each client
uses a single 40GbE network port to communicate with the servers.

Switches - Physical
===================

                                              Total  Used
Item                                          Port   Port
 No   Switch Name           Switch Type       Count  Count  Notes
----  --------------------  ----------------  -----  -----  ------------------------
   1  Dell EMC Networking   10/40GbE          54     45     The storage servers use
      S4048-ON                                              40 10GbE ports, and the
                                                            storage clients use 5
                                                            40GbE ports.

Processing Elements - Physical
==============================

Item
 No   Qty  Type  Location            Description                 Processing Function
----  ---  ----  ------------------  --------------------------  --------------------
   1   20  CPU   File System Server  Intel(R) Xeon(R) CPU        File System Server
                 Nodes               E5-2630 v4 @ 2.20GHz,       Nodes
                                     10-core
   2   10  CPU   File System Client  Intel(R) Xeon(R) CPU        File System Client
                 Nodes               E5-2630 v4 @ 2.20GHz,       Nodes, load
                                     10-core                     generator

Processing Element Notes
------------------------
There are 2 physical processors in each server and client node. Each
processor has 10 cores with two threads per core. No Spectre/Meltdown patches
were installed.

Memory - Physical
=================

                               Size in   Number of
Description                    GiB       Instances   Nonvolatile   Total GiB
-----------------------------  --------  ----------  ------------  ------------
System memory on server node   256       10          V             2560
System memory on client node   256       5           V             1280
Grand Total Memory Gibibytes                                        3840

Memory Notes
------------
None

Stable Storage
==============
INFINITY cluster storage does not use any internal memory to temporarily
cache write data before it reaches the underlying storage system. All writes
are committed directly to the storage disks, so no battery-backed RAM
protection is needed. Data is protected on the storage media using replicas
of the data. In the event of a power failure, a write in transit would not be
acknowledged.

Solution Under Test Configuration Notes
=======================================
The storage cluster uses large-capacity SATA HDDs to form a data storage pool
and high-performance SATA SSDs to form a metadata storage pool. In addition,
each storage server node uses one NVMe SSD to save storage logs. The priority
of mixed read and write I/O is adjusted according to the I/O model to improve
cluster storage performance under mixed I/O.

Other Solution Notes
====================
None

Dataflow
========
The storage client nodes and the server nodes are connected to the same
switch. The 10 server nodes use 40 10GbE network ports: 20 of them are used
as external service network ports to communicate with the clients, and the
other 20 are used as internal data network ports for communication between
the server nodes. The 5 client nodes use five 40GbE network ports. Each
client generates load; the load is evenly distributed across the client
nodes, and the traffic load is evenly distributed across the servers' 20
external network ports.

Other Notes
===========
None

Other Report Notes
==================
None

===============================================================================

Generated on Tue Jun 2 13:59:35 2020 by SpecReport
Copyright (C) 2016-2020 Standard Performance Evaluation Corporation