SPEC SFS®2014_database Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

DDN Storage SFA14KX with GridScaler
SPEC SFS2014_database = 750 Databases
Overall Response Time = 0.61 msec


Performance

Business Metric (Databases) | Average Latency (msec) | Databases Ops/Sec | Databases MB/Sec
 75 | 0.201 |  14402 |  216
150 | 0.219 |  28805 |  433
225 | 0.236 |  43208 |  648
300 | 0.348 |  57611 |  864
375 | 0.505 |  72013 | 1081
450 | 0.631 |  86416 | 1297
525 | 0.769 | 100819 | 1514
600 | 0.974 | 115222 | 1730
675 | 1.125 | 129625 | 1946
750 | 1.220 | 144028 | 2162
Performance Graph
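
As a rough cross-check of the table against the 0.61 msec Overall Response Time reported in the header, the sketch below trapezoidally integrates latency over the reported load points and divides by the spanned load range. This is only an illustration of how the summary metric relates to the per-point data under that assumption, not the official SPEC SFS 2014 calculation.

```python
# Rough check of the reported Overall Response Time (ORT).
# Assumption: ORT ~ area under the latency-vs-load curve (trapezoid rule)
# divided by the load range covered; an illustration only, not the official
# SPEC SFS 2014 formula.
databases = [75, 150, 225, 300, 375, 450, 525, 600, 675, 750]
latency_ms = [0.201, 0.219, 0.236, 0.348, 0.505, 0.631,
              0.769, 0.974, 1.125, 1.220]

area = sum((latency_ms[i] + latency_ms[i + 1]) / 2 * (databases[i + 1] - databases[i])
           for i in range(len(databases) - 1))
ort = area / (databases[-1] - databases[0])
print(f"approximate ORT: {ort:.2f} msec")   # ~0.61 msec
```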


Product and Test Information

SFA14KX with GridScaler
Tested by | DDN Storage
Hardware Available | 08 2018
Software Available | 09 2018
Date Tested | 09 2018
License Number | 4722
Licensee Locations | Santa Clara

To address the comprehensive needs of High Performance Computing and Analytics environments, the revolutionary DDN SFA14KX Hybrid Storage Platform is the industry's highest-performance architecture, delivering up to 60 GB/s of throughput and extreme IOPS at low latency with industry-leading density in a single 4U appliance. By integrating the latest high-performance technologies from silicon to interconnect, memory, and flash, along with DDN's SFAOS, a real-time storage engine designed for scalable performance, the SFA14KX outperforms everything on the market. Leveraging over a decade of leadership at the highest end of Big Data, the GRIDScaler parallel file system solution running on the SFA14KX provides flexible choices for Enterprise-grade data protection and availability features, together with the performance of a parallel file system coupled with DDN's deep expertise and history of supporting highly efficient, large-scale deployments.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Storage Appliance | DDN Storage | SFA14KX (FC) | Dual Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
2 | 25 | Network Adapter | Mellanox | ConnectX- VPI (MCX4121A-XCAT) | Dual-port QSFP, EDR IB (100Gb/s) / 100GigE, PCIe 3.0 x16 8GT/s
3 | 6 | GridScaler Server | Supermicro | SYS-1027R-WC1RT | Dual Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz, 96GB of memory per Server
4 | 19 | GridScaler Clients | Supermicro | SYS-1027R-WC1RT | Dual Intel Xeon(R) CPU E5-2650 v2 @ 2.60GHz, 128GB of memory per client
5 | 2 | Switch | Mellanox | SB7700 | 36 port EDR Infiniband switch
6 | 72 | Drives | HGST | SDLL1DLR400GCCA1 | HGST Hitachi Ultrastar SS200 400GB MLC SAS 12Gbps Mixed Use (SE) 2.5-inch Internal Solid State Drive (SSD)
7 | 50 | Drives | Toshiba | AL14SEB030N | AL14SEB-N Enterprise Performance Boot HDD
8 | 6 | FC Adapter | QLogic | QLogic QLE2742 | QLogic 32Gb 2-port FC to PCIe Gen3 x8 Adapter

Configuration Diagrams

  1. Configuration Diagram

Component Software

Item No | Component | Type | Name and Version | Description
1 | Server Nodes | Distributed Filesystem | GRIDScaler 5.0.2 | Distributed file system software that runs on server nodes.
2 | Client Nodes | Distributed Filesystem | GRIDScaler 5.0.2 | Distributed file system software that runs on client nodes.
3 | Client Nodes | Operating System | RHEL 7.4 | The client operating system - 64-bit Red Hat Enterprise Linux version 7.4.
4 | Storage Appliance | Storage Appliance | SFAOS 11.1.0 | SFAOS - real-time storage operating system designed for scalable performance.

Hardware Configuration and Tuning - Physical

GRIDScaler NSD Server configuration
Parameter Name | Value | Description
nsdBufSpace | 70 | The parameter nsdBufSpace specifies the percent of pagepool which can be utilized for NSD IO buffers.
nsdMaxWorkerThreads | 1536 | The parameter nsdMaxWorkerThreads sets the maximum number of NSD threads on an NSD server that will be concurrently transferring data with NSD clients.
pagepool | 4g | The pagepool parameter determines the size of the file data cache.
nsdSmallThreadRatio | 3 | The parameter nsdSmallThreadRatio determines the ratio of NSD server queues for small IOs (default less than 64KiB) to the number of NSD server queues that handle large IOs (> 64KiB).
nsdThreadsPerQueue | 8 | The parameter nsdThreadsPerQueue determines the number of threads assigned to process each NSD server IO queue.
GRIDScaler common configuration
Parameter Name | Value | Description
maxMBpS | 30000 | The maxMBpS option is an indicator of the maximum throughput in megabytes that can be submitted per second into or out of a single node.
ignorePrefetchLUNCount | yes | Specifies that only maxMBpS and not the number of LUNs should be used to dynamically allocate prefetch threads.
verbsRdma | enable | Enables the use of RDMA for data transfers.
verbsRdmaSend | yes | Enables the use of verbs send/receive for data transfers.
verbsPorts | mlx5_0/1 8x | Lists the ports used for communication between the nodes.
workerThreads | 1024 | The workerThreads parameter controls an integrated group of variables that tune file system performance in environments that are capable of high sequential and random read and write workloads and small file activity.
maxReceiverThreads | 64 | The maxReceiverThreads parameter is the number of threads used to handle incoming network packets.
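
GRIDScaler is based on IBM Spectrum Scale (GPFS), where parameters such as those in the tables above are normally applied with mmchconfig. The sketch below only assembles and prints plausible command lines for the NSD-server and cluster-wide settings listed above; the nsdNodes node class is an assumption, and the exact commands used for this submission are not disclosed.

```python
# Hedged sketch: assemble the mmchconfig command lines implied by the tuning
# tables above. The "nsdNodes" node class is hypothetical; parameter values
# are copied verbatim from this report. Nothing is executed here.
nsd_server_params = {
    "nsdBufSpace": "70",
    "nsdMaxWorkerThreads": "1536",
    "pagepool": "4g",
    "nsdSmallThreadRatio": "3",
    "nsdThreadsPerQueue": "8",
}
common_params = {
    "maxMBpS": "30000",
    "ignorePrefetchLUNCount": "yes",
    "verbsRdma": "enable",
    "verbsRdmaSend": "yes",
    "verbsPorts": "mlx5_0/1 8x",
    "workerThreads": "1024",
    "maxReceiverThreads": "64",
}

def mmchconfig_line(params, node_class=None):
    """Build one mmchconfig invocation; values containing spaces are quoted."""
    settings = ",".join(
        f'{k}="{v}"' if " " in v else f"{k}={v}" for k, v in params.items()
    )
    return f"mmchconfig {settings}" + (f" -N {node_class}" if node_class else "")

print(mmchconfig_line(nsd_server_params, node_class="nsdNodes"))  # NSD servers only
print(mmchconfig_line(common_params))                             # cluster-wide
```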

Hardware Configuration and Tuning Notes

None

Software Configuration and Tuning - Physical

GridScaler Clients
Parameter Name | Value | Description
maxFilesToCache | 4m | The maxFilesToCache (MFTC) parameter controls how many file descriptors (inodes) each node can cache.
maxStatCache | 40m | The maxStatCache parameter sets aside pageable memory to cache attributes of files that are not currently in the regular file cache.
pagepool | 16g | The pagepool parameter determines the size of the file data cache.
openFileTimeout | 86400 | Determines the maximum number of seconds that inode information may stay in cache after the last open of the file before being discarded.
maxActiveIallocSegs | 8 | Determines how many inode allocation segments an individual client is allowed to select free inodes from in parallel.
syncInterval | 30 | Specifies the interval (in seconds) at which data that has not been explicitly committed by the client is synced system-wide.
prefetchAggressiveness | 1 | Defines how aggressively to prefetch data.
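
The client-side settings above would be applied in the same way, typically restricted to the client nodes with -N. A brief continuation of the earlier sketch, again with a hypothetical node class (clientNodes) and values copied from the table:

```python
# Continuation of the sketch above for the GridScaler client settings.
# "clientNodes" is a hypothetical node class; values are copied from the table.
client_params = {
    "maxFilesToCache": "4m",
    "maxStatCache": "40m",
    "pagepool": "16g",
    "openFileTimeout": "86400",
    "maxActiveIallocSegs": "8",
    "syncInterval": "30",
    "prefetchAggressiveness": "1",
}
settings = ",".join(f"{k}={v}" for k, v in client_params.items())
print(f"mmchconfig {settings} -N clientNodes")
print("mmlsconfig pagepool,maxFilesToCache")  # one way to verify the active values afterwards
```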

Software Configuration and Tuning Notes

Detailed descriptions of the configuration and tuning options can be found at https://www.ibm.com/developerworks/community/wikis/home?lang=ja#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Tuning%20Parameters

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 72 SSDs in SFA14KX (FC) | DCR 8+2p | Yes | 72
2 | 50 mirrored internal 300GB 10K SAS Boot drives | RAID-1 | No | 50

Number of Filesystems | 1
Total Capacity | 20.3 TiB
Filesystem Type | GRIDScaler

Filesystem Creation Notes

A single filesystem was created with a 1MB blocksize (no separate metadata disks) in scatter allocation mode. The filesystem inode limit was set to 1.5 billion.
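
Such a filesystem would typically be created with the Spectrum Scale mmcrfs command, using -B for the blocksize, -j scatter for the allocation map type, and --inode-limit for the inode ceiling. The sketch below only prints a plausible invocation; the device name and NSD stanza file are placeholders, and the actual command used is not part of this disclosure.

```python
# Hedged sketch of an mmcrfs invocation matching the description above.
# "gsfs0" and "nsd_stanzas.txt" are placeholders, not values from this report.
blocksize = "1M"             # 1 MB data blocksize, no separate metadata disks
allocation = "scatter"       # scatter block allocation map type
inode_limit = 1_500_000_000  # filesystem inode limit of 1.5 billion

print(f"mmcrfs gsfs0 -F nsd_stanzas.txt -B {blocksize} "
      f"-j {allocation} --inode-limit {inode_limit}")
```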

Storage and Filesystem Notes

The SFA14KX had 24 8+2p virtual disks with a 128KB strip size created out of a single 72 drive DCR storage pool.
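
As a rough sanity check of the 20.3 TiB figure above, the usable data fraction of an 8+2p layout over the 72 x 400 GB SSDs can be estimated as below; the remaining gap is plausibly DCR spare space and filesystem overhead, which this back-of-the-envelope calculation ignores.

```python
# Back-of-the-envelope capacity check for the 72-drive DCR pool (8+2p).
# Ignores DCR spare space, metadata and filesystem overhead.
drives = 72
drive_capacity_tb = 0.400           # 400 GB (decimal) per SSD
data_fraction = 8 / 10              # 8 data strips out of every 10 in 8+2p

usable_tb = drives * drive_capacity_tb * data_fraction   # ~23.0 TB
usable_tib = usable_tb * 1e12 / 2**40                    # ~21 TiB
print(f"approximate usable capacity: {usable_tib:.1f} TiB (reported: 20.3 TiB)")
```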

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 100Gb EDR | 25 | mlx5_0
2 | 16 Gb FC | 24 | FC0-3

Transport Configuration Notes

Two 36-port switches were configured in a single InfiniBand fabric, with 8 ISLs between them. Management traffic used IPoIB; data traffic used RDMA on the same physical adapter.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Client Mellanox SB7700 | 100Gb EDR | 36 | 19 | The default configuration was used on the switch
2 | Server Mellanox SB7700 | 100Gb EDR | 36 | 6 | The default configuration was used on the switch

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 2 | CPU | SFA14KX | Dual Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz | Storage unit
2 | 19 | CPU | client nodes | Dual Intel Xeon(R) CPU E5-2650 v2 @ 2.60GHz | Filesystem client, load generator
3 | 6 | CPU | server nodes | Dual Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz | GRIDScaler Server

Processing Element Notes

None

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Cache in SFA System | 128 | 2 | NV | 256
GRIDScaler client node system memory | 128 | 19 | V | 2432
GRIDScaler Server node system memory | 96 | 6 | V | 576
Grand Total Memory Gibibytes | 3264

Memory Notes

The GRIDScaler clients use a portion of their memory (configured via the pagepool and file cache parameters) to cache metadata and data. The GRIDScaler servers use a portion of their memory (configured via pagepool) for write buffers. In the SFA14KX, a portion of memory is used for the SFAOS operating system as well as for data caching.
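
The grand total in the memory table is simply the sum of size times instance count per row; a one-line check:

```python
# Check of the memory table's grand total (GiB): size per instance x instances.
rows = [("SFA cache", 128, 2), ("client memory", 128, 19), ("server memory", 96, 6)]
print(sum(size * count for _, size, count in rows))  # 3264
```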

Stable Storage

SFAOS with Declustered RAID performs rapid rebuilds, spreading the rebuild process across many drives. SFAOS also supports a range of features which improve uptime for large-scale systems, including partial rebuilds, enclosure redundancy, dual active-active controllers, online upgrades and more. The SFA14KX (FC) has built-in backup battery power support to allow destaging of cached data to persistent storage in case of a power outage. The system does not require further battery power after the destage process has completed. All servers and the SFA14KX are redundantly configured. All 6 servers have access to all data shared by the SFA14KX. In the event of the loss of a server, that server's data will be failed over automatically to a remaining server with continued production service. Stable writes and commit operations in GRIDScaler are not acknowledged until the NSD server receives an acknowledgment of write completion from the underlying storage system (SFA14KX).
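
The acknowledgment rule described above is the same contract a client sees from a synchronous write: success is not reported until the data is durable. A minimal, generic illustration in Python (not DDN or GRIDScaler code):

```python
import os

# Generic illustration of stable-write semantics: the write is only
# "acknowledged" (the function returns) once the data has been flushed to
# stable storage. This is not DDN/GRIDScaler code.
def stable_write(path: str, payload: bytes) -> None:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, payload)
        os.fsync(fd)          # do not report success before the data is durable
    finally:
        os.close(fd)

stable_write("/tmp/stable_write_demo.dat", b"committed only after fsync\n")
```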

Solution Under Test Configuration Notes

The solution under test used a GRIDScaler cluster optimized for small-file, metadata-intensive workloads. The clients served as filesystem clients as well as load generators for the benchmark. The benchmark was executed from one of the server nodes. None of the components used to perform the test were patched with Spectre or Meltdown patches (CVE-2017-5754, CVE-2017-5753, CVE-2017-5715).

Other Solution Notes

None

Dataflow

All 19 clients were used to generate workload against a single filesystem mountpoint (single namespace) accessible as a local mount on all clients. The GRIDScaler servers received the requests from the clients and processed the read or write operations against all connected DCR-backed VDs in the SFA14KX.

Other Notes

GRIDScaler is a trademark of DataDirect Networks in the U.S. and/or other countries. Intel and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries. Mellanox is a registered trademark of Mellanox Ltd.

Other Report Notes

None


Generated on Wed Mar 13 16:26:02 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation