SPEC SFS(R)2014_swbuild Result

WekaIO : WekaIO Matrix 3.1 with Supermicro BigTwin Servers
SPEC SFS2014_swbuild = 1200 Builds (Overall Response Time = 1.02 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency      Builds       Builds
   (Builds)      (msec)       Ops/Sec      MB/Sec
 ------------ ------------ ------------ ------------
          120          0.6        60002          887
          240          0.6       120004         1776
          360          0.6       180007         2663
          480          0.7       240009         3551
          600          0.7       300011         4440
          720          0.9       360013         5328
          840          1.1       420015         6216
          960          1.5       480017         7104
         1080          1.7       540019         7992
         1200          2.0       600015         8880

(An approximate recalculation of the Overall Response Time from this table
is sketched after the bill of materials below.)

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
| WekaIO Matrix 3.1 with Supermicro BigTwin Servers              |
+---------------------------------------------------------------+
Tested by                WekaIO
Hardware Available       July 2017
Software Available       November 2017
Date Tested              February 2018
License Number           4553
Licensee Locations       San Jose, California

WekaIO Matrix is a flash-native, parallel and distributed, scale-out file
system designed to solve the challenges of the most demanding workloads,
including AI and machine learning, genomic sequencing, real-time analytics,
media rendering, EDA, software development and technical computing. Matrix
software manages and dynamically scales data stores of up to hundreds of
petabytes as a single-namespace, globally shared, POSIX-compliant file
system that delivers industry-leading performance and scale at a fraction of
the price of traditional storage products. The software can be deployed as a
dedicated storage appliance or in a hyperconverged mode with zero additional
storage footprint, and can be used on-premises as well as in the public
cloud. WekaIO Matrix is a software-only solution that runs on any standard
x86 hardware infrastructure, delivering significant savings compared to
proprietary all-flash appliances. This test platform is a dedicated storage
implementation on Supermicro BigTwin servers.

Solution Under Test Bill of Materials
=====================================

Item
 No   Qty Type       Vendor     Model/Name     Description
---- ---- ---------- ---------- -------------- ---------------------------------
   1    1 Parallel   WekaIO     Matrix         WekaIO Matrix is a parallel and
          File                  Software V3.1  distributed POSIX file system
          System                               that scales across compute nodes
                                               and distributes data and
                                               metadata across the nodes for
                                               parallel access.
   2    4 Storage    Supermicro SYS-2029BT-HNR Supermicro BigTwin chassis, each
          Server                               with 4 nodes per 2U chassis,
          Chassis                              populated with 4 NVMe drives per
                                               node.
   3   64 1.2TB U.2  Micron     MTFDHAL1T2MCF  Micron 9100 U.2 NVMe Enterprise
          NVMe SSD                             Class Drives.
   4   32 Processor  Intel      BX806735122    Intel Xeon Gold 5122 4C 3.6GHz
                                               Processor
   5   16 Network    Mellanox   MCX456A-ECAT   100Gbit ConnectX-4 Ethernet dual
          Interface                            port PCI-E adapters, one per
          Card                                 node.
   6  192 DIMM       Supermicro DIMM           8GB System Memory DDR4 2667MHz
                                               SRx4 ECC
   7   16 Boot Drive Supermicro MEM-IDSAVM8-   SATA DOM Boot Drive, 128G
                                128G
   8   16 Network    Intel      AOC-MGP-I2M-O  2 Port Intel i350 1GbE RJ45 SIOM
          Interface                            Card
   9   16 BIOS       Supermicro SFT-OOB-LIC    Out of Band Firmware Management
          Module                               BIOS-Flash
  10    1 Switch     Mellanox   MSN2700-CS2FC  32-port 100GbE Switch
  11   11 Clients    AIC        HP201-AD       AIC chassis, each with 4 servers
                                               per 2U chassis. Each server had
                                               2 Intel(R) Xeon(R) E5-2640 v4
                                               CPUs and 128GB memory. A total
                                               of 11 of the 12 available
                                               servers were used in the
                                               testing.
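Note: the Overall Response Time reported above can be approximated from the
Performance table. The Python sketch below is illustrative only: it assumes
SPEC's area-under-the-curve method (trapezoidal integration of latency
against the business metric, divided by the peak load, with the curve
anchored at zero load using the first measured latency), and it works from
the rounded latencies printed in the table, so it yields roughly 0.97 msec
rather than the official 1.02 msec computed from unrounded data.

    # Approximate the Overall Response Time (ORT) from the published table.
    builds  = [120, 240, 360, 480, 600, 720, 840, 960, 1080, 1200]
    latency = [0.6, 0.6, 0.6, 0.7, 0.7, 0.9, 1.1, 1.5, 1.7, 2.0]

    # Anchor the curve at zero load with the first measured latency (an
    # assumption of this sketch), then integrate with the trapezoid rule.
    points = [(0, latency[0])] + list(zip(builds, latency))
    area = sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

    ort = area / builds[-1]  # area under the curve over the peak load
    print(f"approximate ORT = {ort:.2f} msec")  # ~0.97 msec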
Configuration Diagrams
======================
1) sfs2014-20180225-00038.config1.png (see SPEC SFS2014 results webpage)

Component Software
==================

Item                          Name and
 No  Component   Type         Version      Description
---- ----------- ------------ ------------ -----------------------------------
   1 Storage     File System  MatrixFS 3.1 WekaIO Matrix is a distributed and
     Node                                  parallel POSIX file system that
                                           runs on any NVMe, SAS or SATA
                                           enabled commodity server or cloud
                                           compute instance and forms a
                                           single storage cluster. The file
                                           system presents a POSIX compliant,
                                           high performance, scalable global
                                           namespace to the applications.
   2 Storage     Operating    CENTOS 7.3   The operating system on each
     Node        System                    storage node was 64-bit CENTOS
                                           Version 7.3.
   3 Client      Operating    CENTOS 7.3   The operating system on the load
                 System                    generator clients was 64-bit
                                           CENTOS Version 7.3.
   4 Client      MatrixFS     3.1          MatrixFS Client software is
                 Client                    mounted on the load generator
                                           clients and presents a POSIX
                                           compliant file system.

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                             Storage Node                             |
+----------------------------------------------------------------------+
Parameter Name  Value           Description
--------------- --------------- ----------------------------------------
SR-IOV          Enabled         Enables CPU virtualization technology

Hardware Configuration and Tuning Notes
---------------------------------------
SR-IOV was enabled in the node BIOS. Hyper-threading was disabled. No
additional hardware tuning was required.

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                             Storage Node                             |
+----------------------------------------------------------------------+
Parameter Name  Value           Description
--------------- --------------- ----------------------------------------
Jumbo Frames    4190            Enables Ethernet frames of up to 4190
                                bytes

+----------------------------------------------------------------------+
|                                Client                                |
+----------------------------------------------------------------------+
Parameter Name  Value           Description
--------------- --------------- ----------------------------------------
WriteAmplificat 0               Write amplification optimization level
ionOptimization
Level

Software Configuration and Tuning Notes
---------------------------------------
The jumbo frame MTU setting is required, and is valid for all environments
and workloads.

Service SLA Notes
-----------------
Not applicable

Storage and Filesystems
=======================

Item                                                         Stable
 No  Description                           Data Protection   Storage    Qty
---- ------------------------------------- ----------------- -------- -----
   1 1.2TB U.2 Micron 9100 Pro NVMe SSD in 14+2              Yes         64
     the Supermicro BigTwin node
   2 128G SATA DOM in the Supermicro                         Yes         16
     BigTwin node to store OS

Number of Filesystems    1
Total Capacity           48.8 TiB
Filesystem Type          MatrixFS

Filesystem Creation Notes
-------------------------
A single WekaIO Matrix file system was created and distributed evenly across
all 64 NVMe drives in the cluster (16 storage nodes x 4 drives/node). Data
was protected to a 14+2 failure level.

Storage and Filesystem Notes
----------------------------
WekaIO MatrixFS was created and distributed evenly across all 16 storage
nodes in the cluster. The deployment model is a dedicated storage server
protected with the Matrix Distributed Data Coding schema of 14+2. All data
and metadata are distributed evenly across the 16 storage nodes.
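Note: for context on the 14+2 protection level, each stripe is split into 16
pieces, of which 14 carry data and 2 carry protection information, so 2/16 =
12.5% of raw capacity goes to protection. The Python sketch below shows this
arithmetic only; it is not WekaIO's allocator, and the reported 48.8 TiB
total capacity is lower than the resulting upper bound for reasons the
sketch does not model (provisioned file system size, metadata, reserves).

    # Raw capacity: 16 storage nodes x 4 NVMe drives x 1.2 TB per drive.
    TB, TIB = 1000**4, 1024**4   # decimal terabyte vs binary tebibyte

    raw = 16 * 4 * 1.2 * TB      # 76.8 TB of raw NVMe capacity
    usable_fraction = 14 / 16    # 14+2: 2 of every 16 pieces are parity
    after_protection = raw * usable_fraction

    print(f"raw capacity: {raw / TIB:5.1f} TiB")               # ~69.9 TiB
    print(f"after 14+2:   {after_protection / TIB:5.1f} TiB")  # ~61.1 TiB
    # The report lists 48.8 TiB total capacity, below this upper bound;
    # the sketch models protection overhead only.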
Transport Configuration - Physical
==================================

Item                 Number of
 No  Transport Type  Ports Used Notes
---- --------------- ---------- -----------------------------------------------
   1 100GbE NIC              16 The solution used a total of 16 100GbE ports
                                from the storage nodes to the network switch.
   2 50GbE NIC               11 The solution used a total of 11 50GbE ports
                                from the clients to the network switch.

Transport Configuration Notes
-----------------------------
The solution under test utilized 16 100Gbit Ethernet ports from the storage
nodes to the network switch. The clients utilized 11 50GbE connections to the
same network switch.

Switches - Physical
===================

                                          Total  Used
Item                                      Port   Port
 No  Switch Name          Switch Type     Count  Count Notes
---- -------------------- --------------- ------ ----- ------------------------
   1 Mellanox MSN 2700    100Gb Ethernet      32    27 Switch has Jumbo Frames
                                                       enabled

Processing Elements - Physical
==============================

Item
 No   Qty Type     Location       Description               Processing Function
---- ---- -------- -------------- ------------------------- -------------------
   1   32 CPU      Supermicro     Intel(R) Xeon(R) Gold     WekaIO MatrixFS,
                   BigTwin node   5122, 3.6GHz, 4 core CPU  Data Protection,
                                                            device driver
   2   22 CPU      AIC HP201-AD   Intel(R) Xeon(R) E5-2640  WekaIO MatrixFS
                                  v4, 2.4GHz, 10 core CPU   client

Processing Element Notes
------------------------
Each storage node has 2 processors; each processor has 4 cores at 3.6GHz.
Each client has 2 processors; each processor has 10 cores. WekaIO Matrix
utilized 8 of the 20 available cores on each client to run Matrix functions.
The Intel Spectre and Meltdown patches were not applied to any element of the
SUT, including the processors.

Memory - Physical
=================

                          Size in   Number of
Description                  GiB    Instances  Nonvolatile  Total GiB
------------------------- -------- ----------- ------------ ------------
Storage node memory             96           16 V                   1536
Client node memory             128           11 V                   1408
Grand Total Memory Gibibytes                                        2944

Memory Notes
------------
Each storage node has 96GBytes of memory, for a total of 1,536GBytes. Each
client has 128GBytes of memory; the Matrix client software utilized 20GBytes
of memory on each client.

Stable Storage
==============
WekaIO does not use any internal memory to temporarily cache write data to
the underlying storage system. All writes are committed directly to the
storage disks, so there is no need for battery-backed RAM protection. Data
is protected on the storage media using WekaIO Matrix Distributed Data
Protection (14+2). In the event of a power failure, a write in transit would
not be acknowledged.

Solution Under Test Configuration Notes
=======================================
The solution under test was a standard WekaIO Matrix enabled cluster in
dedicated server mode. The solution handles large file I/O as well as small
file random I/O and metadata intensive applications. No specialized tuning
is required for different or mixed workloads.

Other Solution Notes
====================
None

Dataflow
========
3 x AIC HP201-AD client chassis (11 clients) were used to generate the
benchmark workload. Each client had 1 x 50GbE network connection to a
Mellanox MSN 2700 switch. 4 x Supermicro BigTwin 2029BT-HNR storage chassis
(16 nodes) were benchmarked. Each storage node had 1 x 100GbE network
connection to the same Mellanox MSN 2700 switch. The clients (AIC) had the
MatrixFS native NVMe POSIX client mounted and had direct and parallel access
to all 16 storage nodes.
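Note: as a sanity check on the dataflow above, the aggregate link bandwidth
on each side of the switch can be compared with the peak measured throughput
of 8,880 MB/s at 1,200 builds. The Python sketch below assumes full-duplex
line rate and ignores Ethernet and protocol overhead.

    # Compare peak benchmark throughput against aggregate link capacity on
    # both sides of the Mellanox MSN2700 switch.
    GBPS = 1e9 / 8                   # bytes per second per Gbit/s

    client_links  = 11 * 50 * GBPS   # 11 clients x 50GbE
    storage_links = 16 * 100 * GBPS  # 16 storage nodes x 100GbE
    peak_measured = 8880 * 1e6       # 8,880 MB/s at 1,200 builds

    print(f"client side:   {client_links / 1e9:6.1f} GB/s")   #  68.8 GB/s
    print(f"storage side:  {storage_links / 1e9:6.1f} GB/s")  # 200.0 GB/s
    print(f"peak measured: {peak_measured / 1e9:6.1f} GB/s")  #   8.9 GB/s
    # Both sides provide ample headroom over the measured peak, so the
    # network links were not the constraint at 1,200 builds.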
Other Notes
===========
None

Other Report Notes
==================
None

===============================================================================

Generated on Wed Mar 13 16:41:47 2019 by SpecReport
Copyright (C) 2016-2019 Standard Performance Evaluation Corporation