SPEC SFS(R)2014_vda Result

DDN Storage : SFA14KXE with EXAScaler
SPEC SFS2014_vda = 3400 Streams (Overall Response Time = 50.07 msec)

===============================================================================

Performance
===========
   Business      Average
    Metric       Latency      Streams      Streams
  (Streams)      (msec)       Ops/Sec       MB/Sec
------------ ------------ ------------ ------------
        340          6.3         3402         1575
        680          7.6         6804         3147
       1020         10.3        10206         4710
       1360         18.9        13609         6284
       1700         30.6        17010         7839
       2040         43.2        20411         9399
       2380         69.6        23815        10987
       2720         84.1        27218        12556
       3060        106.5        30620        14136
       3400        153.4        34022        15703

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|                    SFA14KXE with EXAScaler                    |
+---------------------------------------------------------------+

Tested by            DDN Storage
Hardware Available   08 2018
Software Available   09 2018
Date Tested          09 2018
License Number       4722
Licensee Locations   Santa Clara

To address the comprehensive needs of High Performance Computing and
Analytics environments, the DDN SFA14KXE Hybrid Storage Platform delivers
up to 60GB/s of throughput and extreme IOPS at low latency, with
industry-leading density in a single 4U appliance. By integrating the
latest high-performance technologies, from silicon to interconnect, memory
and flash, along with DDN's SFAOS, a real-time storage engine designed for
scalable performance, the SFA14KXE outperforms everything on the market.

Leveraging over a decade of leadership at the highest end of Big Data, the
EXAScaler Lustre parallel file system solution running on the SFA14KXE
provides flexible choices for the most demanding parallel file system
workloads, coupled with DDN's deep expertise and history of supporting
highly efficient, large-scale deployments.
Solution Under Test Bill of Materials
=====================================

Item
 No  Qty  Type                   Vendor      Model/Name                    Description
---- ---- ---------------------- ----------- ----------------------------- -----------------------------------------------------------------
1    1    Storage Appliance      DDN         SFA14KXE                      2x Dual Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
2    17   Network Adapter        Mellanox    ConnectX-VPI (MCX4121A-XCAT)  Dual-port QSFP, EDR IB (100Gb/s) / 100GigE, PCIe 3.0 x16 8GT/s
3    1    Lustre MDS/MGS Server  Supermicro  SYS-1027R-WC1RT               Dual Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz, 96GB of memory per server
4    16   EXAScaler Clients      Supermicro  SYS-1027R-WC1RT               Dual Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz, 128GB of memory per client
5    2    Switch                 Mellanox    SB7700                        36-port EDR InfiniBand switch
6    420  Drives                 HGST        HUH721010AL4200               HGST Ultrastar He10 HUH721010AL4200 hard drive, 10 TB, SAS 12Gb/s
7    1    Enclosure              DDN         EF4024                        External FC-connected HDD JBOD enclosure
8    4    Drives                 HGST        HUC156030CSS200               HGST Ultrastar C15K600 HUC156030CSS200 hard drive, 300 GB, SAS 12Gb/s
9    34   Drives                 Toshiba     AL14SEB030N                   AL14SEB-N Enterprise Performance boot HDD
10   1    FC Adapter             QLogic      QLE2742                       QLogic 32Gb 2-port FC, PCIe Gen3 x8 adapter

Configuration Diagrams
======================
1) sfs2014-20181005-00047.config1.png (see SPEC SFS2014 results webpage)

Component Software
==================

Item
 No  Component             Type                    Version                                        Description
---- --------------------- ----------------------- ---------------------------------------------- ----------------------------------------------------------
1    MDS/MGS Server Nodes  Distributed Filesystem  ES 4.0.0, lustre-2.10.4_ddn4                   Distributed file system software that runs on the MDS/MGS node.
2    OSS Server Nodes      Distributed Filesystem  ES 4.0.0, lustre-2.10.4_ddn4                   Distributed file system software that runs on the virtual OSS VMs.
3    Client Nodes          Distributed Filesystem  lustre-client-2.10.4_ddn4-1.el7.centos.x86_64  Distributed file system software that runs on client nodes.
4    Client Nodes          Operating System        RHEL 7.4                                       The client operating system: 64-bit Red Hat Enterprise Linux version 7.4.
5    Storage Appliance     Storage Appliance       SFAOS 11.1.0                                   SFAOS, a real-time storage operating system designed for scalable performance.

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                    EXAScaler common configuration                    |
+----------------------------------------------------------------------+

Parameter Name                                     Value  Description
-------------------------------------------------  -----  ----------------------------------------------------
lctl set_param osc.*.max_pages_per_rpc             16M    maximum number of pages per RPC
lctl set_param osc.*.max_dirty_mb                  1024   maximum amount of outstanding dirty data not yet synced by the application
lctl set_param llite.*.max_read_ahead_mb           2048   maximum amount of memory reserved for the readahead cache
lctl set_param osc.*.max_rpcs_in_flight            16     maximum number of parallel RPCs in flight
lctl set_param llite.*.max_read_ahead_per_file_mb  1024   maximum readahead buffer per file
lctl set_param osc.*.checksums                     0      disable data checksums

Hardware Configuration and Tuning Notes
---------------------------------------
Please check the following pages for detailed documentation on the supported parameters:
for osc.* parameters --> https://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#TuningClientIORPCStream
for llite.* parameters --> https://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#TuningClientReadahead

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                          EXAScaler Clients                           |
+----------------------------------------------------------------------+

Parameter Name         Value  Description
---------------------  -----  ------------------------------------------------------
processor.max_cstate   0      Defines the maximum allowed C-state of the CPU.
intel_idle.max_cstate  0      Defines the maximum allowed C-state of the CPU in idle mode.
idle                   poll   Sets idle mode to polling for maximum performance.

Software Configuration and Tuning Notes
---------------------------------------
No special tuning was applied beyond what is listed above.

Service SLA Notes
-----------------
None

Storage and Filesystems
=======================

Item                                                                    Stable
 No  Description                                     Data Protection    Storage  Qty
---- ----------------------------------------------- ------------------ -------- -----
1    420 HDDs in SFA14KXE                            DCR 8+2p           Yes      420
2    34 mirrored internal 300GB 10K SAS boot drives  RAID-1             No       34
3    2 mirrored 300GB 15K SAS drives (MGS)           RAID-1             Yes      2
4    2 mirrored 300GB 15K SAS drives (MDT)           RAID-1             Yes      2

Number of Filesystems    1
Total Capacity           2863.4 TiB
Filesystem Type          Lustre

Filesystem Creation Notes
-------------------------
A single filesystem spanning all MDT and OST disks was created; no
additional settings were applied.

Storage and Filesystem Notes
----------------------------
The SFA14KXE has 8x virtual disks with a 128KB stripe size and 8+2P RAID
protection, created from 8 DCR storage pools in a 51/1 configuration (51
drives per pool with 1 drive's worth of spare space).

Transport Configuration - Physical
==================================

Item                      Number of
 No  Transport Type       Ports Used  Notes
---- -------------------- ----------- -----------------------------------------------
1    100Gb EDR            16          client IB ports
2    100Gb EDR            4           SFA IB ports
3    100Gb EDR            1           MDS IB port
4    100Gb EDR            8           ISL between switches
5    16Gb FC              2           direct-attached FC to the JBOD enclosure

Transport Configuration Notes
-----------------------------
2x 36-port switches in a single InfiniBand fabric, with 8 ISLs between
them. Management traffic used IPoIB; data traffic used IB Verbs on the
same physical adapter.
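For reference, the client-side tuning listed in the Hardware and Software
Configuration and Tuning sections above can be applied as shown in the
sketch below. This is illustrative only and not part of the submitted
configuration: the lctl commands require root on a node with a mounted
Lustre client, and the C-state/idle settings are kernel boot parameters
rather than runtime commands.

```shell
# Lustre client RPC and readahead tuning (values taken from the tables
# above). Requires root on a node with the Lustre client mounted; these
# settings are runtime-only and do not persist across reboots.
lctl set_param osc.*.max_pages_per_rpc=16M
lctl set_param osc.*.max_dirty_mb=1024
lctl set_param llite.*.max_read_ahead_mb=2048
lctl set_param osc.*.max_rpcs_in_flight=16
lctl set_param llite.*.max_read_ahead_per_file_mb=1024
lctl set_param osc.*.checksums=0

# CPU power-state tuning: these are kernel boot parameters, typically
# appended to GRUB_CMDLINE_LINUX in /etc/default/grub and activated by
# regenerating the grub configuration and rebooting:
#   processor.max_cstate=0 intel_idle.max_cstate=0 idle=poll
```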
Switches - Physical
===================

Item                                               Total  Used
 No  Switch Name  Switch Type                      Ports  Ports  Notes
---- ------------ -------------------------------- ------ ------ ------------------------
1    Client/SFA   Mellanox SB7700 100Gb EDR        36     8      The default configuration was used on the switch
2    MDS          Mellanox SB7700 100Gb EDR        36     27     The default configuration was used on the switch

Processing Elements - Physical
==============================

Item
 No  Qty  Type  Location      Description                                     Processing Function
---- ---- ----- ------------- ----------------------------------------------- ---------------------------------
1    2    CPU   SFA14KXE      Dual Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz  Storage unit
2    16   CPU   Client nodes  Dual Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz  Filesystem client, load generator
3    1    CPU   Server nodes  Dual Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz  EXAScaler Server

Processing Element Notes
------------------------
None

Processing Elements - Virtual
=============================

Item
 No  Qty  Type  Location      Description                              Processing Function
---- ---- ----- ------------- ---------------------------------------- -------------------
1    4    CPU   Server nodes  8 virtual cores from the SFA controller  EXAScaler Server

Processing Element Notes
------------------------
None

Memory - Physical
=================

                                                    Size in  Number of
Description                                         GiB      Instances  Nonvolatile  Total GiB
--------------------------------------------------- -------- ---------- ------------ ----------
Memory in SFA System, divided between OS and cache  76       2          NV           152
Memory in SFA System for VM memory                  90       4          V            360
EXAScaler client node system memory                 64       16         V            1024
EXAScaler MDS/MGS Server node system memory         96       1          V            96

Grand Total Memory Gibibytes                                                         1632

Memory Notes
------------
The EXAScaler filesystem uses local filesystem cache/memory as its
caching mechanism on both clients and OSTs. All resources in the system
and local filesystem are available for use by Lustre. In the SFA14KXE, a
portion of memory is used by the SFAOS operating system as well as for
data caching.
Memory - Virtual
================

                                                    Size in  Number of
Description                                         GiB      Instances  Nonvolatile  Total GiB
--------------------------------------------------- -------- ---------- ------------ ----------
Memory assigned to each OSS VM within the SFA       90       4          V            360
Controller

Grand Total Memory Gibibytes                                                         360

Memory Notes
------------
Each of the EXAScaler OSS VMs has 90 GB of memory assigned for the OS and
caching.

Stable Storage
==============
SFAOS with Declustered RAID performs rapid rebuilds, spreading the
rebuild process across many drives. SFAOS also supports a range of
features that improve uptime for large-scale systems, including partial
rebuilds, enclosure redundancy, dual active-active controllers, online
upgrades and more.

The SFA14KXE has built-in backup battery power to allow destaging of
cached data to persistent storage in case of a power outage. The system
does not require further battery power after the destage process has
completed. All OSS servers and the SFA14KXE are redundantly configured,
and all 4 servers have access to all data shared by the SFA14KXE. In the
event of the loss of a server, that server's data is failed over
automatically to a remaining server with continued production service.

Stable writes and commit operations in EXAScaler are not acknowledged
until the OSS server receives an acknowledgment of write completion from
the underlying storage system (SFA14KXE).

Solution Under Test Configuration Notes
=======================================
The solution under test used an EXAScaler cluster optimized for
large-file, sequential streaming workloads. The clients served as
filesystem clients as well as load generators for the benchmark. The
benchmark was executed from one of the server nodes. None of the
components used to perform the test were patched for Spectre or Meltdown
(CVE-2017-5754, CVE-2017-5753, CVE-2017-5715).
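As an aside to the Spectre/Meltdown note above: Linux kernels that carry
the vulnerability-reporting patches expose mitigation status under sysfs,
so the patch state of a node can be inspected with a short shell helper.
This is an illustrative sketch, not part of the SPEC submission; on an
unpatched kernel such as those used in this test, the sysfs directory may
be absent entirely.

```shell
#!/bin/sh
# Print the kernel's reported CPU vulnerability mitigation status.
# Illustrative only: /sys/devices/system/cpu/vulnerabilities exists only
# on kernels that include the vulnerability-reporting patches.
print_vuln_status() {
    dir=/sys/devices/system/cpu/vulnerabilities
    if [ -d "$dir" ]; then
        # One world-readable file per known vulnerability, e.g. meltdown,
        # spectre_v1, spectre_v2; its content describes the mitigation.
        for f in "$dir"/*; do
            printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
        done
    else
        echo "kernel does not expose vulnerability status"
    fi
}

print_vuln_status
```

On a patched kernel this prints one line per vulnerability with its
mitigation state; the fallback message is consistent with a kernel that
predates the reporting patches.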
Other Solution Notes
====================
None

Dataflow
========
All 16 clients were used to generate workload against a single filesystem
mountpoint (a single namespace) accessible as a local mount on all
clients. The EXAScaler servers received the requests from the clients and
processed the read or write operations against all connected DCR-backed
virtual disks in the SFA14KXE.

Other Notes
===========
EXAScaler is a trademark of DataDirect Networks in the U.S. and/or other
countries. Intel and Xeon are trademarks of Intel Corporation in the U.S.
and/or other countries. Mellanox is a registered trademark of Mellanox
Ltd.

Other Report Notes
==================
None

===============================================================================

Generated on Wed Mar 13 16:26:24 2019 by SpecReport
Copyright (C) 2016-2019 Standard Performance Evaluation Corporation