SPECstorage™ Solution 2020_eda_blended Result

Copyright © 2016-2022 Standard Performance Evaluation Corporation

Nettrix R620 G40 with 24 NVMe Storage Server
SPECstorage Solution 2020_eda_blended = 456 Job_Sets
Overall Response Time = 0.18 msec


Performance

Business Metric (Job_Sets) | Average Latency (msec) | Job_Sets Ops/Sec | Job_Sets MB/Sec
 24 | 0.152 |  10800 |  174
 48 | 0.128 |  21601 |  348
 72 | 0.124 |  32401 |  522
 96 | 0.130 |  43202 |  696
120 | 0.132 |  54003 |  870
144 | 0.138 |  64803 | 1045
168 | 0.138 |  75604 | 1219
192 | 0.146 |  86404 | 1394
216 | 0.153 |  97205 | 1568
240 | 0.160 | 108005 | 1742
264 | 0.168 | 118806 | 1917
288 | 0.180 | 129607 | 2091
312 | 0.191 | 140407 | 2264
336 | 0.200 | 151208 | 2440
360 | 0.223 | 162009 | 2613
384 | 0.240 | 172809 | 2788
408 | 0.259 | 183610 | 2963
432 | 0.279 | 194411 | 3136
456 | 0.321 | 205211 | 3310
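The reported throughput scales almost linearly with the business metric. As a quick arithmetic sanity check (the per-job-set rate below is derived here from the first data point; it is not stated in the report):

```shell
# Derive the per-job-set op rate from the first row of the results table.
ops_per_jobset=$((10800 / 24))                   # 450 ops/sec per job set
echo "ops/sec per job set: ${ops_per_jobset}"

# Extrapolate to the peak load of 456 job sets and compare with the table.
echo "expected ops/sec at 456 job sets: $((456 * ops_per_jobset))"   # 205200, vs 205211 achieved
```

The close match between the extrapolated and achieved rates at every load point indicates the SUT sustains the requested op rate all the way to the peak, with only latency rising.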
Performance Graph


Product and Test Information

R620 G40 with 24 NVMe Storage Server
Tested by: Nettrix
Hardware Available: May 2022
Software Available: May 2022
Date Tested: Jun 2022
License Number: 6138
Licensee Locations: Beijing, China

The R620 G40 makes full use of computing, storage, and network resources in a limited space. It can flexibly allocate resources according to business needs to achieve excellent cost performance and energy consumption ratio. It is widely applicable to various industries such as the internet, finance, communication, and transportation to meet the needs of different business models.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Storage Server | Nettrix | R620 G40 | The R620 G40 storage server contains 2 x Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz and 32 x 32GB memory on a dual-socket motherboard, serving the NFSv4 protocol over a 100GbE network card. It provides IO/s from 24 filesystems built on 24 x 7.68TB PCIe Gen4 NVMe SSDs, with 1 x 1.6TB SAS 4.0 SSD for the OS. It runs RHEL 8.3 from the SAS 4.0 SSD and uses 1 x 100GbE Ethernet network.
2 | 1 | Client Server | Nettrix | R620 G40 | The R620 G40 client server contains 2 x Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz and 32 x 32GB memory on a dual-socket motherboard, using the NFSv4 protocol over a 100GbE network card. It uses 1 x 1.92TB SATA SSD for the OS and 6 x 1.6TB SAS 4.0 SSD for VM storage, runs ESXi 7.0.2 from the SATA SSD, and uses 1 x 100GbE Ethernet network. It hosts 8 virtual machines; each VM client has 16 vCPUs, 120GB memory, 300GB storage, and a 100GbE network port provided by the SR-IOV function.
3 | 1 | Disk | PHISON | ESM1720-1920G | The client server's operating system is installed on a 1.92TB SATA SSD (ESM1720-1920G).
4 | 7 | Disk | KIOXIA | KPM61VUG1T60 | A total of 7 SAS 4.0 SSD disks (KPM61VUG1T60), each with a capacity of 1.6TB: 1 is installed in the storage server for the OS, and 6 are installed in the client server for VM storage.
5 | 24 | Disk | Intel | SSDPF2KX076TZ | A total of 24 PCIe Gen4 NVMe SSD disks (SSDPF2KX076TZ), each with a capacity of 7.68TB, installed in the storage server.
6 | 2 | Ethernet Card | NVIDIA (Mellanox) | ConnectX-6 Dx 100GbE Dual-port QSFP56 | ConnectX-6 Dx EN adapter card, 100GbE, OCP3.0, with host management, dual-port QSFP56, no crypto, thumbscrew (pull tab) bracket (Part Number MCX623436AN-CDAB). One is installed in the storage server and one in the client server.
7 | 2 | Ethernet Card | BITLAND | I350 1G RJ45 4-Ports PCIe NIC | Four 10/100/1000 copper ports with RJ45 connectors (Part Number EGI4-I350-US), used for management. One is installed in the storage server and one in the client server.

Configuration Diagrams

  1. Nettrix SUT Diagram

Component Software

Item No | Component | Type | Name and Version | Description
1 | Storage Server | Linux | RHEL 8.3 | Storage server operating system.
2 | Client Server | ESXi Server | ESXi 7.0.2 | The client OS version is VMware ESXi 7.0.2 build-17630552.
3 | VM Clients | Linux | RHEL 8.3 | The client server is configured with 8 VM clients running RHEL 8.3.

Hardware Configuration and Tuning - Physical

Storage Server
Parameter Name | Value | Description
MTU | 9000 | Network jumbo frames.
Client Server
Parameter Name | Value | Description
Hyper-Threading [ALL] | Enabled | Hyper-Threading doubles the client server's logical core count (160 logical cores).
SR-IOV | Enabled | The network card's SR-IOV function is enabled.
VM Clients
Parameter Name | Value | Description
MTU | 9000 | Network jumbo frames.

Hardware Configuration and Tuning Notes

The System Under Test has 100GbE Ethernet port set to MTU 9000.
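As a sketch of this setting on a Linux host using iproute2 (the interface name enp0s0 is a placeholder; the report does not state how the MTU was configured):

```shell
# Enable jumbo frames on the 100GbE interface (interface name is a placeholder).
ip link set dev enp0s0 mtu 9000

# Verify the setting took effect.
ip link show dev enp0s0 | grep 'mtu 9000'
```

Both ends of the point-to-point 100GbE link, and the VM clients' SR-IOV ports, must agree on the MTU for jumbo frames to be effective.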

Software Configuration and Tuning - Physical

Client Server
Parameter Name | Value | Description
vers | 4 | NFS mount option setting the protocol version to 4.
rsize, wsize | 1048576 | NFS mount options for the read and write buffer sizes, changed in all 8 VM clients.

Software Configuration and Tuning Notes

The NFS mount options for the read and write buffer sizes are set to 1048576 bytes. Each of the 8 VM clients has a 100GbE Ethernet port provided by the SR-IOV function.
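On a VM client, a mount with these options would look roughly like the following (the server address, export path, and mount point are placeholders; only vers, rsize, and wsize come from the report):

```shell
# Mount one NFS export with the options described above.
# 192.168.1.1 and both paths are placeholders, not values from the report.
mount -t nfs -o vers=4,rsize=1048576,wsize=1048576 \
    192.168.1.1:/mnt/nvme0n1 /mnt/nvme0n1
```

The 1 MiB buffer sizes let each NFSv4 READ/WRITE carry a full megabyte, reducing per-op overhead on the 100GbE link.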

Service SLA Notes

None.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 1 x 1.6TB SAS 4.0 SSD disk installed in the storage server for the OS; 6 x 1.6TB SAS 4.0 SSD disks installed in the client server for VM storage. | None | Yes | 7
2 | 1 x 1.92TB SATA SSD disk installed in the client server for the VM OS. | None | Yes | 1
3 | 24 x 7.68TB PCIe Gen4 NVMe SSD disks for storage server IOs. | None | Yes | 24
Number of Filesystems: 24
Total Capacity: 184.32 TiB
Filesystem Type: xfs

Filesystem Creation Notes

The filesystems are created with default options. Each filesystem is created on a single 7.68TB PCIe Gen4 NVMe SSD disk. The 24 PCIe Gen4 NVMe SSD disks in the storage server are attached directly to PCIe with no controller.
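A sketch of how the 24 filesystems could be created and exported, assuming device names nvme0n1 through nvme23n1; the mount points and export options are assumptions, not stated in the report:

```shell
# Create one XFS filesystem per NVMe disk with default options,
# then mount and export it (paths and export options are assumptions).
for i in $(seq 0 23); do
    mkfs.xfs "/dev/nvme${i}n1"
    mkdir -p "/mnt/nvme${i}n1"
    mount "/dev/nvme${i}n1" "/mnt/nvme${i}n1"
    echo "/mnt/nvme${i}n1 *(rw,sync,no_root_squash)" >> /etc/exports
done
exportfs -ra
```

One filesystem per disk keeps the 24 NVMe devices independent, so the workload spreads across all of them with no shared controller or volume manager in the path.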

Storage and Filesystem Notes

The 184.32 TiB capacity is equally allocated across all 24 filesystems.
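The reported total is simply the 24 disks multiplied by the 7.68 per-disk capacity:

```shell
# 24 filesystems, one per 7.68TB NVMe disk, as reported in the tables above.
awk 'BEGIN { print 24 * 7.68 }'   # prints 184.32
```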

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 100GbE network | 2 | One port of the dual-port 100GbE network card in the storage server is directly connected to one port of the dual-port 100GbE network card in the client server.

Transport Configuration Notes

None.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | N/A | N/A | N/A | N/A | N/A

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 2 | CPU | Storage Server | Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz, 40 cores, Hyper-Threading [ALL] enabled (160 logical cores total) | Storage Server
2 | 2 | CPU | Client Server | Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz, 40 cores, Hyper-Threading [ALL] enabled (160 logical cores total) | Client Server

Processing Element Notes

The client server uses the same CPU model as the storage server, with Hyper-Threading [ALL] enabled. There are 8 VM clients on the client server; each VM client has 16 vCPUs.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
System memory in the storage server | 1024 | 1 | V | 1024
System memory in the client server | 1024 | 1 | V | 1024
Grand Total Memory Gibibytes: 2048

Memory Notes

None.

Stable Storage

The storage server does not use a write cache, so writes are committed to disk immediately. The entire SUT, both storage server and client server, is protected by redundant power supplies.
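The report does not state how write caching was disabled; one way to inspect and turn off an NVMe drive's volatile write cache on Linux is via nvme-cli (a sketch; feature 0x6 is the NVMe Volatile Write Cache feature, and /dev/nvme0 stands in for each of the 24 controllers):

```shell
# Query the volatile write cache setting of one NVMe controller (placeholder device).
nvme get-feature /dev/nvme0 -f 0x6

# Disable the volatile write cache so writes are committed to media immediately.
nvme set-feature /dev/nvme0 -f 0x6 -v 0
```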

Solution Under Test Configuration Notes

In the client server there are 8 VM clients. Each VM client has 16 vCPUs, 120GB memory, a 300GB disk, a 100GbE network port provided by the SR-IOV function, and a management port provided by the I350 1G RJ45 4-port PCIe NIC. In the storage server, the 24 PCIe Gen4 NVMe SSD disks are attached directly to PCIe with no controller, and there are 24 filesystems, one per NVMe disk. The storage server uses a 100GbE network port provided by the ConnectX-6 Dx 100GbE dual-port QSFP56 adapter and a management port provided by the I350 1G RJ45 4-port PCIe NIC.

Other Solution Notes

None.

Dataflow

One 100GbE network port in the storage server is directly connected to one 100GbE network port in the client server. The 24 filesystems are created on the storage server and shared to the 8 VM clients in the client server: the 24 filesystems (nvme0n1, nvme1n1, ..., nvme23n1) are mounted by the 8 VM clients (test1, test2, ..., test8).
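The client-side view of this dataflow can be sketched as follows, run on each VM client test1 through test8 (the storage server address and mount-point layout are assumptions; the export names follow the NVMe device names given above):

```shell
# On each VM client, mount all 24 exported filesystems from the storage server.
# The storage server address 192.168.1.1 is a placeholder.
for i in $(seq 0 23); do
    mkdir -p "/mnt/nvme${i}n1"
    mount -t nfs -o vers=4,rsize=1048576,wsize=1048576 \
        "192.168.1.1:/mnt/nvme${i}n1" "/mnt/nvme${i}n1"
done
```

With every client mounting every filesystem, the benchmark's job sets can be distributed evenly over all 24 NVMe-backed exports.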

Other Notes

None.

Other Report Notes

None of the components used to perform the test were patched with Spectre or Meltdown patches (CVE-2017-5754, CVE-2017-5753, CVE-2017-5715).


Generated on Thu Jun 30 22:45:52 2022 by SpecReport