SPEC SFS(R)2014_swbuild Result

WekaIO : WekaIO Matrix 3.1.8.5 with Supermicro BigTwin Servers
SPEC SFS2014_swbuild = 5700 Builds (Overall Response Time = 0.26 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency      Builds       Builds
   (Builds)      (msec)       Ops/Sec      MB/Sec
------------ ------------ ------------ ------------
         570          0.2       285012         3687
        1140          0.2       570024         7374
        1710          0.2       855036        11063
        2280          0.2      1140048        14750
        2850          0.2      1425038        18436
        3420          0.2      1710074        22125
        3990          0.3      1995086        25811
        4560          0.3      2280098        29497
        5130          0.4      2565077        33187
        5700          0.6      2849898        36872
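The headline Overall Response Time can be reproduced from the table above. A
minimal Python sketch, assuming the SPEC SFS2014 convention of taking the
trapezoidal area under the latency-versus-load curve (held flat from zero
load to the first measured point) and dividing by the peak business metric:

    # Sketch only: reproduce the headline ORT from the performance table.
    # Assumption: ORT = trapezoidal area under the latency-vs-load curve,
    # flat from zero load to the first point, divided by the peak load.
    loads  = [570, 1140, 1710, 2280, 2850, 3420, 3990, 4560, 5130, 5700]
    lat_ms = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.3, 0.3, 0.4, 0.6]

    pts = [(0, lat_ms[0])] + list(zip(loads, lat_ms))
    area = sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    print(round(area / loads[-1], 2))   # -> 0.26 msec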
===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
| WekaIO Matrix 3.1.8.5 with Supermicro BigTwin Servers          |
+---------------------------------------------------------------+
Tested by            WekaIO
Hardware Available   July 2017
Software Available   November 2018
Date Tested          December 2018
License Number       4553
Licensee Locations   San Jose, California

WekaIO Matrix is a flash-native, parallel, distributed, scale-out file system
designed to solve the challenges of the most demanding workloads, including
AI and machine learning, genomic sequencing, real-time analytics, media
rendering, EDA, software development, and technical compute. Matrix software
is a POSIX-compliant parallel file system that delivers industry-leading
performance and scale at a fraction of the price of traditional storage
products. The software can support billions of files and scales to hundreds
of petabytes in a single namespace. Matrix can be deployed on commodity
servers as a dedicated storage appliance or in a hyperconverged mode with
zero additional storage footprint. The same software runs on-premises and in
the public cloud. WekaIO Matrix is a software-only solution that runs on any
standard x86 hardware infrastructure, delivering large savings compared to
proprietary all-flash appliances. This test platform is deployed as a
dedicated storage implementation on Supermicro BigTwin servers.

Solution Under Test Bill of Materials
=====================================

Item
 No  Qty  Type          Vendor      Model/Name         Description
---- ---- ------------- ----------- ------------------ ------------------------
1    1    Parallel      WekaIO      Matrix Software    WekaIO Matrix is a
          File System               V3.1.8.5           parallel and distributed
                                                       POSIX file system that
                                                       scales across compute
                                                       nodes and distributes
                                                       data and metadata evenly
                                                       across the nodes for
                                                       parallel access.
2    6    Storage       Supermicro  SYS-2029BT-HNR     Supermicro BigTwin
          Server                                       chassis, each with 4
          Chassis                                      nodes per 2U chassis,
                                                       populated with 6 NVMe
                                                       drives per node. A total
                                                       of 23 nodes were used in
                                                       the testing.
3    138  3.84TB U.2    Micron      MTFDHAL3T8TCT1AR   Micron 9200 Pro U.2 NVMe
          NVMe SSD                                     Enterprise Class drives.
4    46   Processor     Intel       SR3B3              Intel Xeon Gold 6126
                                                       processor, 12 cores,
                                                       2.6GHz.
5    23   Network       Mellanox    MCX456A-ECAT       100Gbit ConnectX-4
          Interface                                    Ethernet dual-port PCI-E
          Card                                         adapters, one per node.
6    276  DIMM          Supermicro  DIMM 2667MHz       16GB System Memory DDR4
                                    2Rx8 ECC           2667MHz ECC.
7    23   Boot Drive    Micron      MTFDDAV240TCB1AR   Micron 5100 Pro SATA
                                                       M.2, 240GB.
8    23   Network       Supermicro  AOC-MHIBE-M1CGM-O  SIOM Single Port
          Interface                                    InfiniBand EDR QSFP28
          Card                                         VPI running in Ethernet
                                                       mode.
9    23   BIOS Module   Supermicro  SFT-OOB-LIC        Out-of-Band Firmware
                                                       Management BIOS-Flash.
10   10   Switch        Mellanox    MSN2700-CS2FC      32-port 100GbE switch.
11   5    Clients       Supermicro  SYS-2029BT-HNR     Clients are built-to-
                                                       order from Supermicro.
                                                       The base build is a
                                                       BigTwin SYS-2029BT-HNR
                                                       2U/4-node chassis with
                                                       X11DPT-B motherboards.
                                                       Each node in the
                                                       SYS-2029BT-HNR
                                                       represents one client.
                                                       The built-to-order
                                                       components in each
                                                       client include 2
                                                       Intel(R) Xeon(R) Gold
                                                       6126 12-core CPUs, 24
                                                       DDR4-2666 16GB ECC
                                                       RDIMMs, 1 100GbE
                                                       connection to the switch
                                                       fabric via 1 Mellanox
                                                       ConnectX-4 PCIe Ethernet
                                                       adapter, and 1
                                                       AOC-MHIBE-M1CGM-O SIOM
                                                       Single Port InfiniBand
                                                       EDR QSFP28 VPI adapter
                                                       that is not
                                                       used/connected. Of the
                                                       20 clients, 1 was used
                                                       as prime and 19 were
                                                       used to generate the
                                                       workload.

Configuration Diagrams
======================

1) sfs2014-20181218-00055.config1.png (see SPEC SFS2014 results webpage)

Component Software
==================

Item               Name and
 No   Component    Type          Version     Description
---- ------------- ------------- ----------- ----------------------------------
1    Storage Node  MatrixFS      3.1.8.5     WekaIO Matrix is a distributed and
                   File System               parallel POSIX file system that
                                             runs on any NVMe-, SAS- or
                                             SATA-enabled commodity server or
                                             cloud compute instance and forms a
                                             single storage cluster. The file
                                             system presents a POSIX-compliant,
                                             high-performance, scalable global
                                             namespace to the applications.
2    Storage Node  Operating     CentOS 7.4  The operating system on each
                   System                    storage node was 64-bit CentOS
                                             Version 7.4.
3    Client        Operating     CentOS 7.4  The operating system on the load
                   System                    generator clients was 64-bit
                                             CentOS Version 7.4.
4    Client        MatrixFS      3.1.8.5     MatrixFS client software is
                   Client                    mounted on the load generator
                                             clients and presents a
                                             POSIX-compliant file system.

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                             Storage Node                              |
+----------------------------------------------------------------------+
Parameter Name    Value      Description
----------------  ---------  ----------------------------------------
SR-IOV            Enabled    Enables Single Root I/O Virtualization
HyperThreading    Disabled   Intel Hyper-Threading Technology

Hardware Configuration and Tuning Notes
---------------------------------------

None.

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                             Storage Node                              |
+----------------------------------------------------------------------+
Parameter Name    Value   Description
----------------  ------  ----------------------------------------
Jumbo Frames      4190    Enables Ethernet frames of up to 4190
                          bytes

+----------------------------------------------------------------------+
|                                Client                                 |
+----------------------------------------------------------------------+
Parameter Name            Value    Description
------------------------  -------  ------------------------------------
WriteAmplification        0        WekaIO MatrixFS install setting;
OptimizationLevel                  Write Amplification Optimization
                                   level
MAX_OPEN_FILES            66M      WekaIO MatrixFS client install-time
                                   parameter; maximum number of open
                                   files
nofile                    500000   Client-side Linux
                                   /etc/security/limits.conf nofile
                                   setting
MTU                       4190     Client OS NIC setting, MTU

Software Configuration and Tuning Notes
---------------------------------------

The MTU is set to 4190; this setting is required and is valid for all
environments and workloads.

Service SLA Notes
-----------------

Not applicable.

Storage and Filesystems
=======================

Item                                       Data        Stable
 No  Description                           Protection  Storage  Qty
---- ------------------------------------- ----------- -------- -----
1    3.84TB U.2 Micron 9200 Pro NVMe SSD   16+2        Yes      138
     in the Supermicro BigTwin chassis
2    240GB M.2 Micron 5100 SATA SSD in     None        Yes      23
     the Supermicro BigTwin, used to
     store and boot the OS

Number of Filesystems   1
Total Capacity          342.73 TiB
Filesystem Type         MatrixFS

Filesystem Creation Notes
-------------------------

A single WekaIO Matrix file system was created and distributed evenly across
all 138 NVMe drives in the cluster (23 storage nodes x 6 drives/node). Data
was protected to a 16+2 failure level. The file system overprovisions an
additional 20% of capacity for performance quality of service at high water
mark.

Storage and Filesystem Notes
----------------------------

WekaIO MatrixFS was created and distributed evenly across all 23 storage
nodes in the cluster. The deployment model is a dedicated server cluster
protected with the Matrix Distributed Data Coding schema of 16+2. All data
and metadata are distributed evenly across the 23 storage nodes.
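The reported Total Capacity is consistent with the drive count, the 16+2
protection overhead, and the 20% overprovisioning noted above. A
back-of-the-envelope Python sketch, assuming the 3.84TB drive size is decimal
terabytes:

    # Sketch only: sanity-check the reported 342.73 TiB total capacity.
    # Assumptions: 3.84TB per drive in decimal bytes; 16+2 protection keeps
    # 16 of every 18 strips as data; 20% is held back for QoS headroom.
    drives  = 23 * 6                         # 138 NVMe drives
    raw_tib = drives * 3.84e12 / 2**40       # raw capacity in TiB
    usable  = raw_tib * (16 / 18) * (1 - 0.20)
    print(f"{usable:.2f} TiB")               # -> 342.73 TiB, as reported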
Transport Configuration - Physical
==================================

Item                  Number of
 No  Transport Type   Ports Used  Notes
---- ---------------- ----------- --------------------------------------------
1    100GbE NIC       46          The solution used a total of 46 100GbE
                                  ports from the storage nodes to the network
                                  switches.
2    100GbE NIC       19          The solution used a total of 19 100GbE
                                  ports from the clients to the network
                                  switches.
3    100GbE NIC       1           The solution used a total of 1 100GbE port
                                  from the prime to the network switch.

Transport Configuration Notes
-----------------------------

The solution under test had a total of 320 100GbE ports across the 10
Mellanox MSN2700 switches. The switches were configured in a leaf-spine
topology with 4 spines and 6 leaves. Each spine switch used 24 ports for
connections to the leaf switches. At the leaf switches, the storage nodes
consumed a total of 46 100GbE ports, while the clients and prime used 20
100GbE ports. The leaf-to-spine connections consumed a total of 96 100GbE
ports. Combined, the storage nodes, clients, and leaf-to-spine links used a
total of 162 100GbE ports.
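Re-adding the figures above as a quick Python check (the leaf-to-spine count
is taken as the 4 x 24 spine-side ports described in the notes):

    # Sketch only: re-add the 100GbE port accounting quoted above.
    storage_ports = 23 * 2      # two ports per storage node  -> 46
    client_ports  = 19 + 1      # one per client plus prime   -> 20
    leaf_spine    = 4 * 24      # 4 spines x 24 ports each    -> 96
    print(storage_ports + client_ports + leaf_spine, "of", 10 * 32)
    # -> 162 of 320 switch ports used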
Switches - Physical
===================

                                          Total  Used
Item                                      Port   Port
 No  Switch Name         Switch Type      Count  Count  Notes
---- ------------------- ---------------  -----  -----  -----------------------
1    Qty. 10, Mellanox   100Gb Ethernet   320    162    Switches have jumbo
     MSN2700                                            frames enabled with
                                                        MTU set to 4190.

Processing Elements - Physical
==============================

Item
 No  Qty  Type  Location        Description                Processing Function
---- ---- ----- --------------- -------------------------- --------------------
1    46   CPU   SYS-2029BT-HNR  Intel(R) Xeon(R) Gold      WekaIO MatrixFS,
                                6126, 12 cores, 2.6GHz     Data Protection,
                                                           device driver CPU
2    38   CPU   SYS-2029BT-HNR  Intel(R) Xeon(R) Gold      WekaIO MatrixFS
                                6126, 12 cores, 2.6GHz     client CPU
3    2    CPU   SYS-2029BT-HNR  Intel(R) Xeon(R) Gold      SPEC SFS2014 prime
                                6126, 12 cores, 2.6GHz     CPU

Processing Element Notes
------------------------

Each storage node has 2 processors; each processor has 12 cores at 2.6GHz.
Each client has 2 processors; each processor has 12 cores. WekaIO Matrix used
3 of the 24 available cores on each client to run Matrix functions.

Memory - Physical
=================

                           Size in   Number of
Description                GiB       Instances   Nonvolatile   Total GiB
-------------------------  --------  ----------  ------------  ----------
Storage node memory        192       23          V             4416
Client memory              384       19          V             7296
Prime memory               384       1           V             384
Grand Total Memory Gibibytes                                   12096

Memory Notes
------------

Each storage node has 192GiB of memory, for a total of 4,416GiB. Each client
has 384GiB of memory, for a total of 7,296GiB. The prime has 384GiB of
memory.

Stable Storage
==============

WekaIO does not use any internal memory to temporarily cache write data on
its way to the underlying storage system. All writes are committed directly
to the storage media, so no battery-backed RAM is required. Data is protected
on the storage media using WekaIO Matrix Distributed Data Protection (16+2).
In the event of a power failure, a write in transit would not be
acknowledged.

Solution Under Test Configuration Notes
=======================================

The solution under test was a standard WekaIO Matrix enabled cluster in
dedicated server mode. The solution handles large-file I/O as well as
small-file random I/O and metadata-intensive applications. No specialized
tuning is required for different or mixed workloads. None of the components
used to perform the test were patched with Spectre or Meltdown patches
(CVE-2017-5754, CVE-2017-5753, CVE-2017-5715).

Other Solution Notes
====================

None.

Dataflow
========

5 x Supermicro BigTwin SYS-2029BT-HNR chassis (20 nodes: 19 load-generating
clients plus the prime) were used to generate the benchmark workload. Each
client had 1 x 100GbE network connection to a Mellanox MSN2700 switch. 6 x
Supermicro BigTwin SYS-2029BT-HNR storage chassis (23 nodes) were
benchmarked. Each storage node had 2 x 100GbE network connections to a
Mellanox MSN2700 switch. The clients had the MatrixFS native NVMe POSIX
client mounted, giving direct and parallel access to all 23 storage nodes.

Other Notes
===========

None.

Other Report Notes
==================

None.

===============================================================================

Generated on Wed Mar 13 16:18:51 2019 by SpecReport
Copyright (C) 2016-2019 Standard Performance Evaluation Corporation