SPEC SFS®2014_swbuild Result
Huawei | SPEC SFS2014_swbuild = 200 Builds |
---|---|
Huawei OceanStor 5500 V5 | Overall Response Time = 0.58 msec |
Huawei OceanStor 5500 V5

Tested by | Hardware Available | Software Available | Date Tested | License Number | Licensee Locations |
---|---|---|---|---|---|
Huawei | 04/2018 | 04/2018 | 07/2018 | 3175 | Chengdu, China |
Huawei's OceanStor 5500 V5 Storage System is the new generation of mid-range hybrid flash storage, dedicated to providing reliable and efficient data services for enterprises. Its cloud-ready operating system, flash-enabled performance, and intelligent management software deliver top-of-the-line functionality, performance, efficiency, reliability, and ease of use. It satisfies the data storage requirements of large-database OLTP/OLAP, cloud computing, and many other applications, making it a perfect choice for sectors such as government, finance, telecommunications, and manufacturing.
Item No | Qty | Type | Vendor | Model/Name | Description |
---|---|---|---|---|---|
1 | 1 | Storage Array | Huawei | OceanStor 5500 V5 System (Two Active-Active Controllers) | A single Huawei OceanStor 5500 V5 engine includes 2 controllers; the OceanStor 5500 V5 is a fully redundant 2-controller system. Each controller includes 128GiB of memory and one 4-port 10GbE Smart I/O Module, with all 4 ports used for data (connections to load generators). Each controller also includes two 2-port onboard SAS ports. The Premium Bundle was included, which provides NFS, CIFS, NDMP, SmartQuota, HyperClone, HyperSnap, HyperReplication, HyperMetro, SmartQoS, SmartPartition, SmartDedupe, and SmartCompression; only the NFS protocol license was used in the test. |
2 | 24 | Disk drive | Huawei | SSDM-900G2S-02 | 900GB SAS SSD (2.5"); all 24 SSDs are installed in the engine. |
3 | 4 | 10GbE HBA card | Intel | Intel Corporation 82599ES 10-Gigabit SFI/SFP+ | Used in the clients for data connections to the storage; each client used two 10GbE cards, and each card has 2 ports. |
4 | 2 | Client | Huawei | Huawei FusionServer RH2288 V3 servers | Huawei servers, each with 128GiB of main memory. One server acted as the Prime Client; both servers, including the Prime Client, were used to generate the workload. |
Item No | Component | Type | Name and Version | Description |
---|---|---|---|---|
1 | Linux | OS | SUSE Linux Enterprise Server 12 SP3 with the kernel 4.4.73-5-default | OS for the 2 clients |
2 | OceanStor | Storage OS | V500R007 | Storage Operating System |
Client

Parameter Name | Value | Description |
---|---|---|
None | None | None |
None
Clients

Parameter Name | Value | Description |
---|---|---|
rsize,wsize | 1048576 | NFS mount options for data block size |
protocol | tcp | NFS mount options for protocol |
nfsvers | 3 | NFS mount options for NFS version |
tcp_fin_timeout | 600 | TCP time to wait for final packet before socket closed |
somaxconn | 65536 | Max tcp backlog an application can request |
tcp_fin_timeout | 5 | TCP time to wait for final packet before socket closed |
tcp_slot_table_entries | 256 | number of simultaneous TCP Remote Procedure Call (RPC) requests |
tcp_rmem | 10000000 20000000 40000000 | receive buffer size, min, default, max |
tcp_wmem | 10000000 20000000 40000000 | send buffer size; min, default, max |
netdev_max_backlog | 300000 | max number of packets allowed to queue |
Used the mount command "mount -t nfs -o nfsvers=3 31.31.31.1:/fs_1 /mnt/fs_1" in the test. The mount information is 31.31.31.1:/fs_1 on /mnt/fs_1 type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=31.31.31.1,mountvers=3,mountport=2050,mountproto=udp,local_lock=none,addr=31.31.31.1).
None
Item No | Description | Data Protection | Stable Storage | Qty |
---|---|---|---|---|
1 | 900GB SSD drives used for data; one RAID5-9 group across the 24 drives, including 4 coffer disks | RAID-5 | Yes | 1 |
2 | 2x 64GB 7200 RPM SATA drives used for system data for the engine | RAID-1 | Yes | 2 |
Number of Filesystems | Total Capacity | Filesystem Type |
---|---|---|
8 | 8192GiB | thin |
The file system block size was 8KB.
One engine of the OceanStor 5500 V5 was used in the test, and that engine included two controllers. The engine had 25 disk slots, and 24 SSDs were installed in the enclosure for the test. All 24 disks were formed into a single storage pool with RAID5-9, where RAID5-9 is an 8+1 layout. Eight filesystems were created in the storage pool, with four filesystems owned by each controller. RAID5-9 was applied per stripe, and the stripes were distributed across all 24 drives by a distribution algorithm; for example, stripe 1 was a RAID5-9 stripe on disks 1 through 9, stripe 2 on disks 2 through 10, and so on for all stripes.
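To make the rotation concrete, the following is a minimal Python sketch of a round-robin stripe placement matching the example above (stripe 1 on disks 1-9, stripe 2 on disks 2-10). The modulo-based rotation is an assumption for illustration only; the actual OceanStor distribution algorithm is not disclosed here.

```python
# Illustrative sketch only: round-robin placement of 8+1 RAID5-9 stripes
# across 24 drives, matching the "stripe 1 -> disks 1-9, stripe 2 -> disks 2-10"
# pattern described above. The real OceanStor algorithm is not disclosed.

NUM_DRIVES = 24      # SSDs in the storage pool
STRIPE_WIDTH = 9     # RAID5-9 = 8 data + 1 parity

def stripe_members(stripe_index):
    """Return the 1-based drive numbers that hold the given stripe."""
    start = stripe_index - 1  # stripe 1 starts on drive 1
    return [(start + offset) % NUM_DRIVES + 1 for offset in range(STRIPE_WIDTH)]

if __name__ == "__main__":
    for stripe in (1, 2, 24):
        print(f"stripe {stripe}: drives {stripe_members(stripe)}")
```

Under such a rotation every drive participates in an equal share of stripes, which is consistent with the statement that the stripes were distributed across all 24 drives.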
Item No | Transport Type | Number of Ports Used | Notes |
---|---|---|---|
1 | 10GbE | 8 | For the client-to-storage network, the clients connected to the storage directly; no switch was used. There were 8 10GbE connections in total, communicating with NFSv3 over TCP/IP between the 2 clients and the storage. |
Each controller used one 10GbE card, and each 10GbE card included 4 ports, so 8 10GbE ports in total on the storage were used for data transport connectivity to the clients. In total, 8 ports on the 2 clients and 8 ports on the 2 storage controllers were used, and the clients were connected to the storage directly. The 2 controllers were interconnected over PCIe to form an HA pair.
Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes |
---|---|---|---|---|---|
1 | None | None | None | None | None |
Item No | Qty | Type | Location | Description | Processing Function |
---|---|---|---|---|---|
1 | 2 | CPU | Storage Controller | Intel(R) Xeon(R) Gold 4109T @ 2.0GHz, 8 core | NFS, TCP/IP, RAID and Storage Controller functions |
2 | 4 | CPU | Client | Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz | NFS Client, SUSE Linux Enterprise Server 12 SP3 |
Each OceanStor 5500 V5 Storage Controller contains 1 Intel(R) Xeon(R) Gold 4109T @ 2.0GHz processor. Each client contains 2 Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz processors.
Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB |
---|---|---|---|---|
Main Memory for each OceanStor 5500 V5 Storage Controller | 128 | 2 | V | 256 |
Memory for each client | 128 | 2 | V | 256 |
Grand Total Memory Gibibytes | | | | 512 |
Main memory in each storage controller was used for the operating system and for caching filesystem data, including the read and write caches.
1. There are three ways in which data is protected. For a disk failure, the OceanStor 5500 V5 uses RAID to protect data. For a controller failure, the OceanStor 5500 V5 uses a cache mirror, meaning data is also written to the other controller's cache. For a power failure, BBUs supply power so that the storage can flush the cached data to disks.
2. No persistent memory was used in the storage; the BBUs supply power during failure recovery, and the 128 GiB of memory in each controller includes the mirror cache. Data was mirrored between the two controllers.
3. The write cache was less than 64GB, so the 64GB SATA drives could hold the user write data.
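As a conceptual illustration of the controller cache mirroring described above, the sketch below models a write that is acknowledged only after it has been placed in both controllers' caches. This is a simplified model using hypothetical Controller objects; it is not the OceanStor implementation.

```python
# Conceptual sketch only: a write is buffered in the owning controller's
# cache and mirrored to the partner controller's cache before it is
# acknowledged, so an acknowledged write survives a single controller failure.
# Simplified illustration; not the OceanStor implementation.

class Controller:
    def __init__(self, name):
        self.name = name
        self.cache = {}        # block address -> data
        self.partner = None    # set once both controllers exist

    def write(self, address, data):
        self.cache[address] = data           # local write cache
        self.partner.cache[address] = data   # mirror copy on the partner
        return "ack"                         # acknowledge after both copies exist

ctrl_a, ctrl_b = Controller("A"), Controller("B")
ctrl_a.partner, ctrl_b.partner = ctrl_b, ctrl_a

print(ctrl_a.write(0x1000, b"payload"))  # ack
print(0x1000 in ctrl_b.cache)            # True: mirrored on the partner controller
```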
None
None
Please reference the configuration diagram. 2 clients were used to generate the workload, and 1 client also acted as the Prime Client to control the other client. Each client had 4 ports, with two ports connected to each controller. In total there were 8 ports and 8 filesystems, and each port mounted one filesystem.
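A minimal sketch of the one-to-one port-to-filesystem mapping implied above, assuming client names client1/client2, controller names A/B, and filesystem names fs_1 through fs_8 (only fs_1 appears in the mount example earlier; the remaining names and the exact port-to-controller assignment are assumptions for illustration):

```python
# Illustrative mapping only: 2 clients x 4 ports = 8 ports, each port mounting
# exactly one of the 8 filesystems, with two ports per client going to each
# controller. Names other than fs_1 are assumed for illustration.

CLIENTS = ["client1", "client2"]
CONTROLLERS = ["A", "B"]

mapping = []
fs_index = 1
for client in CLIENTS:
    for port in range(1, 5):                        # 4 ports per client
        controller = CONTROLLERS[(port - 1) // 2]   # two ports per controller
        mapping.append((client, f"port{port}", controller, f"fs_{fs_index}"))
        fs_index += 1

for client, port, controller, fs in mapping:
    print(f"{client} {port} -> controller {controller} -> /mnt/{fs}")
```

Each controller ends up serving four of the eight filesystems, which matches the filesystem layout described earlier.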
There were no Spectre/Meltdown patches applied to any component in the Solution Under Test.
None
Generated on Wed Mar 13 17:00:03 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation