SPEC SFS®2014_swbuild Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

NetApp, Inc. SPEC SFS2014_swbuild = 520 Builds
NetApp FAS8200 with FlexGroup Overall Response Time = 1.04 msec


Performance

Business Metric (Builds) | Average Latency (msec) | Builds Ops/Sec | Builds MB/Sec
52  | 0.410 | 26002  | 413
104 | 0.560 | 52004  | 826
156 | 0.630 | 78006  | 1239
208 | 0.730 | 104007 | 1652
260 | 0.800 | 130010 | 2065
312 | 0.860 | 156013 | 2478
364 | 1.010 | 182014 | 2891
416 | 1.320 | 208017 | 3304
468 | 1.850 | 234019 | 3717
520 | 2.740 | 260020 | 4130
Performance Graph


Product and Test Information

NetApp FAS8200 with FlexGroup
Tested by: NetApp, Inc.
Hardware Available: September 2017
Software Available: September 2017
Date Tested: August 2017
License Number: 33
Licensee Locations: Sunnyvale, CA USA

Powered by ONTAP and optimized for scale, the FAS8200 hybrid-flash storage system enables you to quickly respond to changing needs across flash, disk, and cloud with industry-leading data management. FAS8200 systems, with integrated NVMe memory for flash acceleration, are engineered to deliver on core IT requirements for high performance and scalability as well as uptime, data protection, and cost-efficiency.

The FlexGroup feature of ONTAP 9 enables you to scale massively in a single namespace to over 20PB and over 400 billion files while evenly spreading the performance across the cluster. This makes the FAS8200 a great system for engineering and design applications as well as DevOps, especially workloads for chip development and software builds, which are typically high file-count environments with heavy metadata traffic.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 2 | Storage Array | NetApp | FAS8200 Hybrid Flash System (HA Pair, Active-Active Dual Controller) | A single NetApp FAS8200 system is a single chassis with 2 controllers. Each set of 2 controllers comprises a High-Availability (HA) Pair. The words "controller" and "node" are used interchangeably in this document. Each FAS8200 HA Pair includes 256GB of ECC memory, 4 PCIe expansion slots, and a set of onboard I/O ports: 8 UTA2 ports configured as 10GbE, used for data (connections to load generators); 4 10GbE ports, used for cluster interconnect; 4 10GbE Base-T ports; and 8 12Gb SAS ports, used to connect to disk shelves. Includes the Premium Bundle, which contains All Protocols, SnapRestore, SnapMirror, SnapVault, FlexClone, SnapManager Suite, Single Mailbox Recovery (SMBR), and SnapCenter Foundation. Only the NFS protocol license is active in the test; NFS is also available in the BASE bundle.
2 | 12 | Disk Shelves | NetApp | Disk Shelf DS212C | 12 drive bays in each shelf; 6 shelves per HA Pair
3 | 4 | 12Gb SAS HBA | NetApp | X2069-R6 HBA SAS 3/6/12Gbps QSFP PCIe | Used for connectivity to disk shelves; 2 cards per HA pair; each card has 4 12Gb SAS ports. In the tested configuration, the onboard SAS ports were unused (connected to unused and inactive drive shelves) because the configuration was part of a shared-infrastructure lab. No benchmark data flowed to or through the unused drive shelves. The PCIe-slot SAS HBA card was used to connect to the disk shelves mentioned above, which were an active part of the tested configuration.
4 | 4 | Network Interface Card | NetApp | X1117A-EN-R6 NIC 2-port bare cage SFP+ 10GbE PCIe Card | Used for cluster interconnect; 2 cards per HA pair; each card has 2 ports
5 | 53 | SFP Transceiver | Cisco | Cisco Avago 10Gbase-SR part SFBR-7702SDZ | Used in both switches; 21 to clients, 16 to data ports on storage controllers, and 16 to cluster interconnect ports on storage controllers
6 | 21 | SFP Transceiver | NetApp | 10GbE SFP X6569-R6 | Used in client NICs for data connection to switch
7 | 24 | SFP Transceiver | NetApp | 10GbE SFP X6599A-R6 | Used in controllers for onboard 10Gb data connections (8 per HA pair) and in 4 onboard Cluster Interconnect ports per HA pair
8 | 8 | SFP Transceiver | NetApp | SFP X6569-R6 | Used in controllers' 10GbE NIC cards for cluster interconnect; 4 per HA pair
9 | 8 | NVMe Flash Cache | NetApp | Flash Cache X3311A | 1024 GB NVMe PCIe M.2 Flash Cache modules; 4 per HA pair
10 | 144 | Disk Drives | NetApp | 4TB Disk X336A | 4TB 7200 RPM 12Gb SAS 3.5 inch HDD; 72 disks per HA pair (including 2 spares)
11 | 1 | Switch | Cisco | Cisco Nexus 7018 Switch | Used for 10GbE data connections between clients and storage systems. A large switch was in use because testing was done in a large shared-infrastructure lab. Only the ports used for this test are listed in this report.
12 | 21 | Client | IBM | IBM 3650m3 | IBM servers, each with 24GB main memory; 1 used as Prime Client, 20 used to generate the workload
13 | 1 | Switch | Cisco | Cisco Nexus 5596 | Used for 10GbE cluster interconnections
14 | 5760 | Software Enablement/License | NetApp | OS-ONTAP1-CAP1-PREM-2P | ONTAP Enablement Fee, Per 0.1TB, Capacity-based License

Configuration Diagrams

  1. NetApp FAS8200 4-Node Cluster with FlexGroup

Component Software

Item No | Component | Type | Name and Version | Description
1 | Linux | OS | Red Hat Enterprise Linux 6.6 for x86_64 | OS for the 21 clients
2 | ONTAP | Storage OS | 9.2 | Storage Operating System
3 | Data Switch | Operating System | 6.2(16) | Cisco switch NX-OS (kickstart and system software)
4 | Cluster Switch | Operating System | 7.3(1) | Cisco switch NX-OS (kickstart and system software)

Hardware Configuration and Tuning - Physical

Storage
Parameter Name | Value | Description
MTU | 9000 | Jumbo Frames configured for Cluster Interconnect ports

Hardware Configuration and Tuning Notes

The NetApp FAS8200 storage controller 10Gb Ethernet ports used for cluster interconnections (8 per HA pair) are configured with an MTU of 9000 (Jumbo Frames).
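
As an illustration only, port MTU in ONTAP can be set per node and port roughly as sketched below; the node name and port names are assumptions and are not taken from the submission.

  # Illustrative sketch; node name (fas8200-01) and port names (e0e-e0h) are assumed.
  network port modify -node fas8200-01 -port e0e,e0f,e0g,e0h -mtu 9000
  # Confirm the configured MTU on the cluster-interconnect ports
  network port show -node fas8200-01 -fields mtu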

Software Configuration and Tuning - Physical

Clients
Parameter Name | Value | Description
rsize,wsize | 65536 | NFS mount options for data block size
protocol | tcp | NFS mount option for transport protocol
nfsvers | 3 | NFS mount option for NFS version
somaxconn | 65536 | Maximum TCP backlog an application can request
tcp_fin_timeout | 5 | TCP time to wait for final packet before the socket is closed
tcp_slot_table_entries | 128 | Number of simultaneous TCP Remote Procedure Call (RPC) requests
tcp_max_slot_table_entries | 128 | Maximum number of simultaneous TCP RPC requests
udp_slot_table_entries | 128 | Number of simultaneous UDP RPC requests
tcp_rmem | 4096 87380 8388608 | Receive buffer size: min, default, max
tcp_wmem | 4096 87380 8388608 | Send buffer size: min, default, max
aio-max-nr | 1048576 | Maximum allowed number of events in all active async I/O contexts
netdev_max_backlog | 300000 | Maximum number of packets allowed to queue

Software Configuration and Tuning Notes

The parameters shown above were tuned on the clients to optimize data transfer and minimize overhead for communication with the storage controllers over 10Gb Ethernet.
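
For illustration, the client-side settings above could be applied on a RHEL 6 client roughly as sketched below; the server address, export path, and mount point are assumptions rather than values from the submission, and the short parameter names are mapped onto their usual sysctl keys.

  # Illustrative sketch only; server address, export, and mount point are assumed.
  # NFSv3 mount using the options listed above:
  mount -t nfs -o nfsvers=3,proto=tcp,rsize=65536,wsize=65536 192.0.2.11:/fg01 /mnt/fg01

  # Kernel and sunrpc tunings (typically persisted in /etc/sysctl.conf and loaded with sysctl -p):
  sysctl -w net.core.somaxconn=65536
  sysctl -w net.ipv4.tcp_fin_timeout=5
  sysctl -w sunrpc.tcp_slot_table_entries=128
  sysctl -w sunrpc.tcp_max_slot_table_entries=128
  sysctl -w sunrpc.udp_slot_table_entries=128
  sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
  sysctl -w net.ipv4.tcp_wmem="4096 87380 8388608"
  sysctl -w fs.aio-max-nr=1048576
  sysctl -w net.core.netdev_max_backlog=300000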

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 4TB 7200 RPM drives used for data; 2x 14+2 RAID-DP RAID groups per storage controller node in the cluster | RAID-DP | Yes | 128
2 | 4TB 7200 RPM drives used by the ONTAP operating system; 1x 1+2 RAID-DP RAID group per storage controller node in the cluster | RAID-DP | Yes | 12
3 | 128GB mSATA flash cards, 2 per HA pair; used as boot media and also for de-staging NVMEM data from main memory to flash in the event of a failure condition | none | Yes | 4

Number of Filesystems: 1
Total Capacity: 330.4 TiB
Filesystem Type: NetApp FlexGroup

Filesystem Creation Notes

Each element (RAID Groups, aggregates) comprising the exported FlexGroup, along with the FlexGroup itself, was created using default values.

Storage and Filesystem Notes

The storage configuration consisted of 2 FAS8200 HA pairs (4 FAS8200 controller nodes in total). The two controllers in each HA pair were connected in a storage failover (SFO) configuration. Together, the 4 controllers, configured as 2 HA Pairs of 2 controllers (also referred to as nodes) each, comprise the tested FAS8200 cluster.

Each storage controller was connected to its own and its partner's disks in a multi-path HA configuration. Each storage controller was the primary owner of 36 disks, distributed across 6 drive shelves, and was configured with 3 RAID groups. One aggregate was created on each RAID group. The first two aggregates on each node held data for the file system; they were configured as 14+2 RAID-DP RAID groups. The third aggregate on each controller, built on a 1+2 RAID-DP RAID group and referred to as the "root aggregate", held ONTAP operating system related files. Additionally, each storage controller was allocated 1 spare disk, for a total of 4 spare disks in the tested FAS8200 cluster (the spare drives are not included in the "Storage and Filesystems" table because they held no data during the benchmark execution).

A storage virtual machine or "SVM" was created on the cluster, spanning all storage controller nodes. Within the SVM, a single FlexGroup volume was created using the two data aggregates on each controller. A FlexGroup volume is a scale-out NAS container that provides high performance along with automatic load distribution and scalability.
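
A minimal ONTAP CLI sketch of this layout is shown below for illustration only; the node, aggregate, SVM, and volume names, the per-command disk count, and the volume size are assumptions rather than values taken from the submission, and the actual elements were created with default values as noted above.

  # Illustrative sketch only; all names, counts, and sizes are assumed.
  # One 14+2 RAID-DP data aggregate (repeated for the second data aggregate on each of the 4 nodes):
  storage aggregate create -aggregate n1_aggr1 -node fas8200-01 -diskcount 16 -raidtype raid_dp -maxraidsize 16

  # Single FlexGroup volume built from the two data aggregates on every node:
  volume create -vserver svm1 -volume fg01 -size 300TB -junction-path /fg01 -security-style unix -aggr-list n1_aggr1,n1_aggr2,n2_aggr1,n2_aggr2,n3_aggr1,n3_aggr2,n4_aggr1,n4_aggr2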

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 10GbE | 36 | For the client-to-storage network, the FAS8200 cluster used a total of 16 10GbE connections from storage, communicating with NFSv3 over TCP/IP to 20 clients. The benchmark was conducted in a large shared-infrastructure lab; only the ports shown and documented were used on the Cisco Nexus 7018 switch for this benchmark test.
2 | 10GbE | 16 | The Cluster Interconnect network is connected via 10GbE to a Cisco Nexus 5596 switch, with 8 connections to each HA pair.

Transport Configuration Notes

Each NetApp FAS8200 HA Pair used 8 10Gb Ethernet ports for data transport connectivity to clients (through the Cisco 7018 switch), Item 1 above. Each of the 20 clients driving workload used one 10Gb Ethernet port for data transport. All ports on the Item 1 network had MTU=1500 (default). The Cluster Interconnect network, Item 2 above, utilized MTU=9000 (Jumbo Frames).

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Cisco Nexus 7018 | 10Gb Ethernet Switch | 48 | 37 | 21 client-side data connections; 16 storage-side data connections. Only the ports on the Cisco Nexus 7018 Ethernet modules needed for the solution under test are included in the Total Port Count.
2 | Cisco Nexus 5596 | 10Gb Ethernet Switch | 96 | 16 | For Cluster Interconnect; MTU=9000

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 4 | CPU | Storage Controller | 1.70 GHz Intel Xeon D-1587 | NFS, TCP/IP, RAID and Storage Controller functions
2 | 21 | CPU | Client | 3.06 GHz Intel Xeon X5675 | NFS Client, Red Hat Linux OS

Processing Element Notes

Each NetApp FAS8200 Storage Controller contains 1 Intel Xeon D-1587 processor with 16 cores at 1.70 GHz. Each client contains 1 Intel Xeon X5675 processor with 6 cores at 3.06 GHz.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Main Memory for each NetApp FAS8200 HA Pair | 256 | 2 | V | 512
NVMe Flash Cache memory; 4TB per HA Pair | 1024 | 8 | NV | 8192
Memory for each client; 20 of these drove the workload | 24 | 21 | V | 504

Grand Total Memory Gibibytes: 9208

Memory Notes

Each storage controller has main memory that is used for the operating system and caching filesystem data. The FlashCache module is a read cache used for caching filesystem data. See "Stable Storage" for more information.

Stable Storage

The WAFL filesystem logs writes, and other filesystem data-modifying transactions, to a portion of controller main memory which we call NVMEM. In a storage-failover configuration (HA Pair), as in the system under test, such transactions are also logged (mirrored) to the NVMEM portion of main memory on the partner storage controller so that, in the event of a storage controller failure, any transactions on the failed controller can be completed by the partner controller. Filesystem operations are not acknowledged until after the storage system has confirmed that the related data are stored in the NVMEM portion of memory of both storage controllers (when both controllers are active). In case of power loss, an integrated battery ensures stable storage by providing power to the necessary components of the system long enough for data to be encrypted and securely flushed to an mSATA non-volatile flash device. NVMEM data does not flow over the Cluster Interconnect network. Data for NVMEM mirroring flows over an internal interconnect.

Solution Under Test Configuration Notes

The cluster network comprised four 10GbE ports per controller (8 per HA pair), connected via a Cisco Nexus 5596 switch. This provides high availability for the cluster network in case of port or link failure. Each storage controller had four 10GbE ports connected to the Cisco Nexus 7018 data switch; each client had one such 10GbE connection. The data and cluster networks were separate networks. All ports and interfaces on the data network were configured with default 1500-byte frames. All ports on the cluster interconnect network were configured with an MTU of 9000. All clients accessed all filesystems from all the available network interfaces.

Other Solution Notes

All standard data protection features, including background RAID and media error scrubbing, software validated RAID checksum, and double disk failure protection via double parity RAID (RAID-DP) were enabled during the test.

Dataflow

Please reference the configuration diagram. 20 clients were used to generate the workload; 1 client acted as Prime Client to control the 20 other clients. Each client had one 10GbE connection, through a Cisco Nexus 7018 switch. Each storage HA pair had 8 10GbE connections to the data switch. The filesystem consisted of one NetApp FlexGroup. The clients mounted the FlexGroup volume as an NFSv3 filesystem. The ONTAP cluster provided access to the FlexGroup volume on every 10GbE port connected to the data switch (16 ports total). Each client created mount points across those 16 ports symmetrically.
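
As a minimal sketch of how each client might spread its mount points across the 16 data ports, the commands below illustrate one approach; the LIF IP addresses and mount-point names are assumptions, not values from the submission.

  # Illustrative sketch only; LIF IP addresses and mount points are assumed.
  # Mount the same FlexGroup export once per data LIF so traffic is spread across all 16 ports.
  for i in $(seq 1 16); do
      mkdir -p /mnt/fg01_${i}
      mount -t nfs -o nfsvers=3,proto=tcp,rsize=65536,wsize=65536 192.0.2.${i}:/fg01 /mnt/fg01_${i}
  done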

Other Notes

None

Other Report Notes

NetApp is a registered trademark and "Data ONTAP", "FlexGroup", and "WAFL" are trademarks of NetApp, Inc. in the United States and other countries. All other trademarks belong to their respective owners and should be treated as such.


Generated on Wed Mar 13 16:49:38 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation