SPECstorage(TM) Solution 2020_genomics Result

UBIX TECHNOLOGY CO., LTD.            :  UbiPower 18000 distributed all-flash storage system
SPECstorage Solution 2020_genomics   =  1680 Jobs (Overall Response Time = 0.25 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency        Jobs         Jobs
    (Jobs)        (msec)       Ops/Sec       MB/Sec
------------ ------------ ------------ ------------
         140          0.2       140003        11888
         280          0.2       280006        23788
         420          0.2       420010        35673
         560          0.2       560012        47567
         700          0.2       700015        59468
         840          0.2       840021        71342
         980          0.2       980025        83231
        1120          0.2      1120029        95124
        1260          0.3      1260031       107018
        1400          0.3      1400036       118910
        1540          0.3      1540040       130793
        1680          0.7      1680045       142695
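Note: the Overall Response Time quoted above can be roughly cross-checked
against this table. The sketch below is an approximation and is not taken
from this report: it assumes the overall response time is the trapezoid-rule
area under the latency-versus-throughput curve, integrated from zero ops to
the peak load point and divided by the peak ops rate. Under that assumption
it reproduces the reported 0.25 msec.

    # Rough reconstruction of the Overall Response Time from the table above.
    # Assumption (not stated in this report): ORT = trapezoid-rule area under
    # the latency-vs-ops curve, from zero ops to the peak point, divided by
    # the peak ops rate.
    ops = [140003, 280006, 420010, 560012, 700015, 840021,
           980025, 1120029, 1260031, 1400036, 1540040, 1680045]
    latency = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
               0.2, 0.2, 0.3, 0.3, 0.3, 0.7]   # msec

    # Treat the curve as flat at the first measured latency between 0 ops and
    # the first load point, then integrate trapezoidally between points.
    area = ops[0] * latency[0]
    for i in range(1, len(ops)):
        area += (ops[i] - ops[i - 1]) * (latency[i] + latency[i - 1]) / 2

    print(round(area / ops[-1], 2))   # ~0.25 msec, matching the reported ORT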
===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
| UbiPower 18000 distributed all-flash storage system            |
+---------------------------------------------------------------+
Tested by                UBIX TECHNOLOGY CO., LTD.
Hardware Available       July 2022
Software Available       July 2022
Date Tested              September 2022
License Number           6513
Licensee Locations       Shenzhen, China

UbiPower 18000 is a new-generation, ultra-high-performance distributed
all-flash storage system dedicated to providing high-performance data
services for HPC/HPDA workloads, including AI and machine learning, genomics
sequencing, EDA, CAD/CAE, real-time analytics, and media rendering. UbiPower
18000 combines high-performance hyperscale NVMe SSDs and Storage Class Memory
with storage services, all connected over RDMA networks, to create a
low-latency, high-throughput, scale-out architecture.

Solution Under Test Bill of Materials
=====================================

Item
 No   Qty  Type       Vendor    Model/Name        Description
---- ----  ---------  --------  ----------------  -------------------------------------
  1    14  Storage    UBIX      UbiPower 18000    UbiPower 18000 High-Performance X
           Node                 High-Performance  Node, including 16 slots for U.2
                                X Node            drives.
  2    28  Client     Intel     M50CYP2UR208      The M50CYP2UR208 is a 2U 2-socket
           Server                                 rack server with two Intel 32-core
                                                  processors @ 2.0 GHz and 512 GiB of
                                                  system memory. Each server has 1x
                                                  Mellanox ConnectX-5 100GbE dual-port
                                                  network card. One server is used as
                                                  the Prime Client; all 28 servers,
                                                  including the Prime Client, generate
                                                  the workload.
  3    56  100GbE     Mellanox  MCX516A-CDAT      ConnectX-5 Ex EN network interface
           Card                                   card, 100GbE dual-port QSFP28,
                                                  PCIe Gen 4.0 x16.
  4   224  SSD        Samsung   PM9A3             1.92TB NVMe SSD
  5     2  Switch     Huawei    8850-64CQ-EI      CloudEngine 8850 delivers high
                                                  performance, high port density, and
                                                  low latency for cloud-oriented data
                                                  center networks and high-end campus
                                                  networks. It supports 64 x 100 GE
                                                  QSFP28 ports.

Configuration Diagrams
======================

1) storage2020-20220917-00043.config1.png (see SPECstorage Solution 2020
   results webpage)

Component Software
==================

Item                              Name and
 No   Component      Type         Version        Description
----  -------------  -----------  -------------  --------------------------------
  1   Clients        Client OS    CentOS 7.9     Operating System (OS) for the
                                                 clients in the M50CYP2UR208
                                                 servers.
  2   Storage Node   Storage OS   UbiPower OS    Storage Operating System
                                  1.1.0

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
| Storage Server                                                        |
+----------------------------------------------------------------------+
Parameter Name   Value      Description
---------------  ---------  ----------------------------------------------
Port Speed       100Gb      Each storage node has 4x 100GbE Ethernet ports
                            connected to the switches.
MTU              4200       Jumbo frames configured on the 100Gb ports.

+----------------------------------------------------------------------+
| Clients                                                               |
+----------------------------------------------------------------------+
Parameter Name   Value      Description
---------------  ---------  ----------------------------------------------
Port Speed       100Gb      Each client has 2x 100GbE Ethernet ports
                            connected to the switches.
MTU              4200       Jumbo frames configured on the 100Gb ports.

Hardware Configuration and Tuning Notes
---------------------------------------

None

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
| Clients                                                               |
+----------------------------------------------------------------------+
Parameter Name   Value      Description
---------------  ---------  ----------------------------------------------
bond             bond2      Bond2 bonds the 2x 100GbE interfaces of each
                            network card; the bond algorithm is
                            balance-xor. UbiPower configures the bonding
                            of the storage nodes automatically.

Software Configuration and Tuning Notes
---------------------------------------

The single filesystem was attached via a single mount per client. The mount
string used was "mount -t ubipfs /pool/spec-fs /mnt/spec-test".

Service SLA Notes
-----------------

None

Storage and Filesystems
=======================

Item                                                        Stable
 No   Description                        Data Protection    Storage   Qty
----  ---------------------------------  -----------------  -------  -----
  1   Samsung PM9A3 1.92TB used for the  8+2                Yes       224
      UbiPower 18000 Storage System
  2   Micron 480GB SSD for storage       RAID-1             Yes        84
      nodes and clients to store and
      boot the OS

Number of Filesystems    1
Total Capacity           308 TiB
Filesystem Type          ubipfs

Filesystem Creation Notes
-------------------------

Each storage node has 16x Samsung PM9A3 SSDs attached to it, which are
dedicated to the UbiPower filesystem. The single filesystem consumed all of
the SSDs across all of the nodes.

Storage and Filesystem Notes
----------------------------

The UbiPower filesystem was created and distributed evenly across all 14
storage nodes in the cluster with an 8+2 EC (erasure coding) configuration.
All data and metadata are distributed evenly across the 14 storage nodes.
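As a rough cross-check of the 308 TiB total capacity, the sketch below
estimates usable space from the drive count and the 8+2 erasure-coding
layout. It is an approximation only; metadata, spare, and formatting
reservations are not detailed in this report.

    # Rough capacity check for the single ubipfs filesystem (a sketch; the
    # exact usable figure depends on metadata and spare reservations).
    drives = 224                 # Samsung PM9A3 SSDs dedicated to the filesystem
    drive_bytes = 1.92e12        # 1.92 TB (decimal) per drive
    ec_efficiency = 8 / 10       # 8+2 erasure coding: 8 data strips per 10 written

    raw_tib = drives * drive_bytes / 2**40
    usable_tib = raw_tib * ec_efficiency

    print(f"raw ~{raw_tib:.0f} TiB, usable ~{usable_tib:.0f} TiB")
    # ~391 TiB raw, ~313 TiB usable -- consistent with the 308 TiB reported
    # once metadata and spare reservations are subtracted.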
Transport Configuration - Physical
==================================

Item                   Number of
 No   Transport Type   Ports Used   Notes
----  ---------------  -----------  ---------------------------------------------
  1   100GbE Network       56       Each client is connected to a single port
                                    of each switch.
  2   100GbE Network       56       Each storage node is connected to two
                                    ports of each switch.

Transport Configuration Notes
-----------------------------

For each client server, the two 100GbE interfaces are bonded into one logical
port, with an MTU of 4200. For each storage node, the two 100GbE interfaces
of each network card are bonded into one logical port, with an MTU of 4200.
PFC and ECN are configured between the switches, client servers, and storage
nodes.

Switches - Physical
===================

Item                                 Total  Used
 No   Switch Name      Switch Type    Port   Port  Notes
                                     Count  Count
----  ---------------  ------------  -----  -----  --------------------------
  1   CloudEngine      100GbE          128    128  2x CloudEngine switches
      8850-64CQ-EI                                 connected together with an
                                                   800Gb LAG that uses 8
                                                   ports of each switch.
                                                   Across the two switches,
                                                   56 ports are used for
                                                   client connections and 56
                                                   ports for storage node
                                                   connections.

Processing Elements - Physical
==============================

Item
 No   Qty   Type  Location        Description               Processing Function
----  ----  ----  --------------  ------------------------  ----------------------
  1    56   CPU   Client Server   Intel Xeon Gold 6338      UbiPower storage
                                  CPU @ 2.00GHz             client, Linux OS, load
                                                            generator, and device
                                                            driver
  2    28   CPU   Storage Node    Intel Xeon Gold 6338      UbiPower Storage OS
                                  CPU @ 2.00GHz

Processing Element Notes
------------------------

None

Memory - Physical
=================

                              Size in   Number of
Description                     GiB     Instances   Nonvolatile   Total GiB
--------------------------   --------  ----------  ------------  ------------
28x client servers with         512        28           V           14336
512GB
14x storage nodes with          512        14           V            7168
512GB
14x storage nodes with         2048        14           NV          28672
2048GB of storage class
memory

Grand Total Memory Gibibytes                                         50176

Memory Notes
------------

Each storage node has main memory that is used for the operating system and
for caching filesystem read data. Each storage node also has storage class
memory; see "Stable Storage" for more information.

Stable Storage
==============

In the UbiPower 18000, all writes are committed directly to nonvolatile
storage class memory before being written to the NVMe SSDs. All data are
protected by UbiPower OS distributed erasure coding (8+2 in this test) across
the storage nodes in the cluster. In the case of a storage class memory
failure, data is no longer written to the storage class memory but is written
to the NVMe SSDs in a write-through manner.

Solution Under Test Configuration Notes
=======================================

None

Other Solution Notes
====================

None

Dataflow
========

The 28 client servers are the load generators for the benchmark. Each load
generator has access to the single namespace of the UbiPower filesystem. The
benchmark tool accesses a single mount point on each load generator; each
mount point corresponds to a single shared base directory in the filesystem.
The clients process the file operations and issue the data requests to and
from the 14 UbiPower storage nodes.

Other Notes
===========

None

Other Report Notes
==================

None

===============================================================================

Generated on Wed Sep 28 16:43:47 2022 by SpecReport
Copyright (C) 2016-2022 Standard Performance Evaluation Corporation