SPEC SFS®2014_swbuild Result
E8 Storage | SPEC SFS2014_swbuild = 600 Builds |
---|---|
E8 Storage D24 with IBM Spectrum Scale 5.0 | Overall Response Time = 0.69 msec |
E8 Storage D24 with IBM Spectrum Scale 5.0 | |
---|---|
Tested by | E8 Storage |
Hardware Available | December 2016 |
Software Available | December 2017 |
Date Tested | December 2017 |
License Number | 4847 |
Licensee Locations | Santa Clara, CA, USA |
E8 Storage is a pioneer in shared accelerated storage for data-intensive,
high-performance applications that drive business revenue. E8 Storage's
affordable, reliable and scalable solution is ideally suited for the most
demanding low-latency workloads, including real-time analytics, financial and
trading applications, transactional processing and large-scale file systems.
Driven by the company's patented architecture, E8 Storage's high-performance
shared NVMe storage solution delivers 10 times the performance at half the cost
of existing storage products. With E8 Storage, enterprise data centers can
enjoy unprecedented storage performance density and scale, delivering NVMe
performance without compromising on reliability and availability.
IBM
Spectrum Scale helps solve the challenge of explosive growth of unstructured
data against a flat IT budget. Spectrum Scale provides unified file and object
software-defined storage for high performance, large scale workloads
on-premises or in the cloud. Spectrum Scale includes the protocols, services
and performance required by many industries and use cases, including Technical
Computing, Big Data, HDFS and business-critical content repositories. IBM
Spectrum Scale provides
world-class storage management with extreme scalability, flash accelerated
performance, and automatic policy-based storage tiering from flash through disk
to tape, reducing storage costs up to 90% while improving security and
management efficiency in cloud, big data & analytics environments.
Item No | Qty | Type | Vendor | Model/Name | Description |
---|---|---|---|---|---|
1 | 16 | Spectrum Scale Client | IBM | X3650-M4 | Spectrum Scale 5.0 client nodes |
2 | 16 | Network Interface Card | Mellanox | ConnectX-5 VPI | Dual-port 100GbE adapters, one installed in each Spectrum Scale client node |
3 | 1 | Storage Appliance | E8 Storage | E8-D24 | Dual controller storage appliance with 24 HGST SN200 1.6TB dual-port NVMe SSDs, 2 x Intel Xeon 2.0GHz 14-core CPU and 128GB RAM per controller. 2 x Mellanox ConnectX-4 EN network interface cards are installed per controller |
4 | 1 | Switch | Mellanox | SN2700 | 32-port 100GbE switch |
5 | 1 | Switch | Juniper | EX4200 Series 8PoE | 48-port 1GbE switch |
Item No | Component | Type | Name and Version | Description |
---|---|---|---|---|
1 | Client Nodes | Spectrum Scale File System | 5.0.0 | The Spectrum Scale File System is a distributed file system that runs on both the Elastic Storage Server nodes and client nodes to form a cluster. The cluster allows for the creation and management of single namespace file systems. |
2 | Client Nodes | Operating System | RHEL 7.4 | The operating system on the client nodes was 64-bit Red Hat Enterprise Linux version 7.4. |
3 | Client Nodes | E8 Storage Agent | 2.1.1 | The E8 Storage Agent is a client driver which manages communication and data transfer between the client and storage appliance |
4 | Storage Appliance | E8 Storage Software | 2.1.1 | E8 Storage software provides centralized management and high availability functionality for the E8 Storage solution |
5 | Storage Appliance | Operating System | RHEL 7.4 | The operating system on the storage appliance was 64-bit Red Hat Enterprise Linux version 7.4. |
Spectrum Scale Client Nodes
Parameter Name | Value | Description |
---|---|---|
numaMemoryInterleave | yes | Enables memory interleaving on NUMA based systems. |
verbsRdma | enable | Enables Ethernet RDMA transfers between Spectrum Scale client nodes and E8 Storage controllers |
verbsRdmaSend | yes | Enables the use of Ethernet RDMA for most Spectrum Scale daemon-to-daemon communication. |
verbsPorts | mlx5_1/1/1 mlx5_1/1/26001 | Ethernet device names and port numbers. |
txqueuelen | 10000 | Defines the transmission queue length for the Mellanox adapter |
The Spectrum Scale configuration parameters in the table above were set using the mmchconfig command on one of the nodes in the cluster. The verbs settings allow for efficient use of the RoCE infrastructure: they determine when data is transferred over IP and when it is transferred using the verbs protocol.
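As a rough illustration (not taken verbatim from the submission), settings of this kind are applied with the mmchconfig command plus standard Linux networking tools; the node target, abbreviated port list, and interface name "ens1" below are assumptions:

```
# Hedged sketch: apply the verbs/RDMA settings from the table above.
# The port list is abbreviated and the interface name "ens1" is an assumption.
mmchconfig numaMemoryInterleave=yes,verbsRdma=enable,verbsRdmaSend=yes -N all
mmchconfig verbsPorts="mlx5_1/1/1" -N all

# txqueuelen is a Linux interface attribute rather than a Spectrum Scale one:
ip link set dev ens1 txqueuelen 10000

# The verbs settings take effect after the Spectrum Scale daemons restart.
mmshutdown -a && mmstartup -a
```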
Spectrum Scale Client Nodes
Parameter Name | Value | Description |
---|---|---|
maxStatCache | 0 | Specifies the number of inodes to keep in the stat cache. |
workerThreads | 128 | Controls the maximum number of concurrent file operations at any one instant, as well as the degree of concurrency for flushing dirty data and metadata in the background and for prefetching data and metadata. |
maxMBpS | 10k | Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node. |
pagepool | 16g | Specifies the size of the cache on each node. |
maxFilesToCache | 7m | Specifies the number of inodes to cache for recently used files that have been closed. |
ignorePrefetchLUNCount | yes | Specifies that only maxMBpS and not the number of LUNs should be used to dynamically allocate prefetch threads. |
prefetchAggressiveness | 1 | Defines how aggressively to prefetch data. 1 means prefetch on 2nd access if sequential. |
prefetchPct | 5 | Specifies what percent of the pagepool (cache) can be used for prefetching. |
syncInterval | 30 | Specifies the interval (in seconds) at which data that has not been explicitly committed by the client is synced system-wide. |
E8 Storage Agent
Parameter Name | Value | Description |
---|---|---|
e8block threads | 3 | Specifies the number of threads used by the e8block process on the host servers. |
The configuration parameters for the Spectrum Scale file system were set using
the mmchconfig command on one of the nodes in the cluster. The nodes used
mostly default tuning parameters. A discussion of Spectrum Scale tuning can be
found in the official documentation for the mmchconfig command and on the IBM
developerWorks wiki (for additional information see
http://files.gpfsug.org/presentations/2014/UG10_GPFS_Performance_Session_v10.pdf).
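As a hedged sketch of how the non-default values in the table above could be applied (the immediate-activation flag -i and the expanded value syntax are assumptions):

```
# Hedged sketch: set the non-default Spectrum Scale tuning values cluster-wide
# (values as listed in the table above; 10k/16g/7m written out explicitly here).
mmchconfig maxStatCache=0,workerThreads=128,maxMBpS=10000 -i
mmchconfig pagepool=16G,maxFilesToCache=7M -i
mmchconfig ignorePrefetchLUNCount=yes,prefetchAggressiveness=1,prefetchPct=5 -i
mmchconfig syncInterval=30 -i

# Confirm the resulting configuration.
mmlsconfig
```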
The E8 Storage controller used default parameters for all volumes.
There were no opaque services in use.
Item No | Description | Data Protection | Stable Storage | Qty |
---|---|---|---|---|
1 | 24 x 1.6TB NVMe SSDs in the E8-D24 | RAID-6 | Yes | 1 |
2 | 2 x 300GB 10K SAS HDD internal drives per Spectrum Scale client node, used to store the OS | RAID-1 | No | 32 |
Number of Filesystems | Total Capacity | Filesystem Type |
---|---|---|
1 | 24TB | Spectrum Scale File System |
A single Spectrum Scale file system was created with a 4 MiB block size for
data and metadata, and a 4 KiB inode size. The 24TB file system has one data
volume and one metadata volume (also referred to as pools). Each client node
mounted the file system.
The nodes each had an ext4 file system that hosted the operating
system.
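For illustration only, a file system with these characteristics could be created roughly as follows; the NSD names, device paths, pool name and mount point are assumptions, while the 4 MiB block size, 4 KiB inode size and the data/metadata split come from the description above:

```
# Hedged sketch: NSD names, device paths, pool name and mount point are assumptions.
cat > nsd.stanza <<'EOF'
%nsd: nsd=e8_meta1 device=/dev/e8b0 usage=metadataOnly pool=system
%nsd: nsd=e8_data1 device=/dev/e8b1 usage=dataOnly     pool=data
EOF

mmcrnsd -F nsd.stanza                                 # register the shared E8 volumes as NSDs
mmcrfs fs1 -F nsd.stanza -B 4M -i 4096 -T /gpfs/fs1   # 4 MiB blocks, 4 KiB inodes
mmmount fs1 -a                                        # mount the file system on every node
```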
The E8 Storage appliance has 24 1.6TB drives configured as a 22+2 RAID group
for data protection, with a 16+2 stripe size. The data and metadata volumes
were provisioned from this single RAID group, with the volumes spanning all
drives in the RAID group. All client nodes had shared read / write access to
both volumes.
The cluster used a single-tier architecture. The Spectrum
Scale nodes performed both file and block level operations. Each node had
access to shared volumes, so any file operation on a node was translated to a
block operation and serviced on the same node.
Item No | Transport Type | Number of Ports Used | Notes |
---|---|---|---|
1 | 1 GbE cluster network | 18 | Each Spectrum Scale node and E8 Storage controller connects to a 1 GbE administration network with MTU=1500. |
2 | 100 GbE cluster network | 16 | Client nodes each have a single port connected to the switch via a 50GbE split cable and each E8 Storage controller has 4 100GbE ports connected to a shared 100GbE cluster network, set to MTU=4200. The ring buffers (rx and tx) were set to 8192 on the network adapters |
The 1GbE network was used for administrative purposes and for Spectrum Scale inter-node communication. All benchmark traffic flowed through the Mellanox SN2700 100Gb Ethernet switch. Each client node had a single active 50Gb Ethernet port, connected to the switch with a split cable (two 50GbE client connections per 100GbE switch port).
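As an illustrative sketch, the per-port settings described above map onto standard Linux tools roughly as follows (the interface name "ens1" is an assumption):

```
# Hedged sketch: interface name "ens1" is an assumption.
ip link set dev ens1 mtu 4200            # jumbo frames on the 100GbE/50GbE data network
ethtool -G ens1 rx 8192 tx 8192          # enlarge the adapter ring buffers
ip link set dev ens1 txqueuelen 10000    # transmit queue length (see the tuning table above)
```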
Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes |
---|---|---|---|---|---|
1 | Mellanox SN2700 | 100Gb Ethernet | 32 | 16 | The default configuration was used on the switch |
2 | Juniper EX4200 Series 8PoE | 1Gb Ethernet | 48 | 18 | Administrative network only. The default configuration was used on the switch. |
Item No | Qty | Type | Location | Description | Processing Function |
---|---|---|---|---|---|
1 | 32 | CPU | Spectrum Scale client nodes | Intel(R) Xeon(R) CPU E5-2630 v2 2.60GHz 6-core | Spectrum Scale client, E8 Agent, load generator, device drivers |
2 | 4 | CPU | E8-D24 controller | Intel(R) Xeon(R) CPU E5-2660 v4 2.00GHz 14-core | E8 Storage server, E8 Storage RAID, device drivers |
Each of the Spectrum Scale client nodes had 2 physical processors. Each
processor had 6 physical cores with one thread per core by default.
The E8-D24 is a dual controller appliance, each controller has 2 physical
processors. Each processor had 14 cores with one thread per core.
Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB |
---|---|---|---|---|
Spectrum Scale client node system memory | 128 | 16 | V | 2048 |
E8 Storage controller system memory | 16 | 16 | V | 256 |
Grand Total Memory Gibibytes | 2304 |
In the client nodes, Spectrum Scale reserves a portion of the physical memory
(per the pagepool setting above) for file data and metadata caching. Some
additional memory is dynamically allocated for buffers used for node-to-node
communication and for stat information for up to 7 million inodes per node.
In
the E8 Storage controller, a portion of the physical memory is reserved for
block write data and system metadata caching.
The E8 Storage controller uses a portion of internal memory to temporarily cache write data and modified data before it is written to the SSDs. Writes are acknowledged as successful once they are stored in the controller write cache, and a redundant copy is kept by the E8 agent on the host. In the event of a controller failure, the hosts replay the write cache to the surviving controller. In the event of a power failure, each controller has backup battery power, which is combined with power-fail protection on the SSDs to ensure data is committed to the SSDs prior to shutdown.
The solution under test was a Spectrum Scale cluster optimized for small file, metadata intensive environments. The Spectrum Scale nodes were also the load generators for the benchmark. The benchmark was executed from one of the nodes.
None
The 16 Spectrum Scale nodes were the load generators for the benchmark. Each load generator had access to the single-namespace Spectrum Scale file system. The benchmark accessed a single mount point on each load generator; each of these mount points corresponded to a single shared base directory in the file system. The nodes processed the file operations, and the data requests to and from the backend storage were serviced locally on each node by the E8 Storage Agent.
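To illustrate how such a layout is presented to the benchmark, a fragment of a SPEC SFS 2014 rc file might look like the sketch below; the host names, paths and load values are placeholders rather than the values used for this result:

```
# Hedged sketch: host names, paths and load values are placeholders.
BENCHMARK=SWBUILD
# Initial number of BUILDS, increment between runs, and number of runs:
LOAD=60
INCR_LOAD=60
NUM_RUNS=10
EXEC_PATH=/opt/specsfs2014/binaries/linux_x86_64/netmist
USER=root
# One entry per load generator, all pointing at the shared Spectrum Scale mount point:
CLIENT_MOUNTPOINTS=node01:/gpfs/fs1/specsfs node02:/gpfs/fs1/specsfs node03:/gpfs/fs1/specsfs
```

The run itself is then started from a single node, for example with `python SfsManager -r <rc file> -s <suffix>` as described in the SPEC SFS 2014 user guide.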
IBM and IBM Spectrum Scale are trademarks of International Business Machines
Corp., registered in many jurisdictions worldwide.
Intel and Xeon are
trademarks of the Intel Corporation in the U.S. and/or other countries.
Mellanox is a registered trademark of Mellanox Ltd.
None
Generated on Wed Mar 13 16:56:57 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation