NetApp, Inc. | : | FAS8020 |
SPECsfs2008_nfs.v3 | = | 110281 Ops/Sec (Overall Response Time = 1.18 msec) |
|
Tested By | NetApp, Inc. |
---|---|
Product Name | FAS8020 |
Hardware Available | Feb 2014 |
Software Available | Feb 2014 |
Date Tested | Nov 2013 |
SFS License Number | 33 |
Licensee Locations | Sunnyvale, CA USA |
Powered by Data ONTAP and optimized for scale out, the FAS8000 series unifies your storage infrastructure and has the flexibility to keep up with changing business needs while delivering on core IT requirements for uptime, scalability and cost-efficiency. The FAS8000 features a multi-processor Intel chip set and leverages high-performance memory modules, NVRAM to accelerate and optimize writes, and an I/O-tuned PCIe gen3 architecture that maximizes application throughput. Building on a decade of multi-core optimization, Data ONTAP drives the latest cores and increased core counts to keep up with continuous growth in storage demands. The result is a flexible, efficient I/O design capable of supporting large numbers of high-speed network connections and massive capacity scaling. By delivering more onboard ports to support drive, cluster, and host connectivity, the FAS8000 offers exceptional flexibility and expandability in an extremely dense package. Integrated unified target adapter (UTA) ports support 16Gb Fibre Channel, 10GbE, or FCoE, so your storage is ready on day one for whatever choices the future holds.
Item No | Qty | Type | Vendor | Model/Name | Description |
---|---|---|---|---|---|
1 | 2 | Storage Controller | NetApp | FAS8020 | FAS8020 Controller |
2 | 2 | 4-port SAS IO card | NetApp | SAS Adapter X2065A-R6 | 4-port SAS IO Card |
3 | 6 | Disk Shelves with SAS Disk Drives | NetApp | DS2246-1014-24S-QS-R5 | DS2246 disk shelf with 24x600GB, 10K, SAS HDD |
4 | 2 | Flash Cache 2 Module | NetApp | X1973A-R6 | Flash Cache 2 Module 512GB |
5 | 1 | Software License | NetApp | SW-8020-Cluster License | Clustered Data ONTAP 8.2.1 License |
6 | 1 | Software License | NetApp | SW-2-8020A-NFS-C | NFS software license |
OS Name and Version | Clustered Data ONTAP 8.2.1 |
---|---|
Other Software | None |
Filesystem Software | Clustered Data ONTAP 8.2.1 |
Name | Value | Description |
---|---|---|
vol modify <volume_name> -atime-update | false | Disable access time updates (applied to all volumes) |
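For illustration, a hedged example of how this tuning might be applied to one data volume in clustered Data ONTAP; the SVM and volume names below are hypothetical and not taken from this report:

  vol modify -vserver svm1 -volume data1 -atime-update false    # disable access-time updates on one volume (names assumed); repeated for each data volume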
N/A
Description | Number of Disks | Usable Size |
---|---|---|
600GB SAS 10K RPM Disk Drives | 144 | 64.1 TB |
Total | 144 | 64.1 TB |
Number of Filesystems | 4 |
---|---|
Total Exported Capacity | 56 TB |
Filesystem Type | WAFL |
Filesystem Creation Options | The file system was created using default values. An export policy granting full access was created and applied to the file system |
Filesystem Config | 8 RAID-DP (Double Parity) groups of 17 disks each were created across all the disks |
Fileset Size | 12772.9 GB |
The storage configuration consisted of 2 storage controller nodes connected in an SFO (storage failover) configuration. Each storage controller was connected to its own and its partner's disks in a multi-path HA configuration. Each storage controller was the primary owner of 72 disks (3 shelves, each containing 24 disks). Each storage controller contained 2 disk pools, or aggregates. The first aggregate held data for the file system; it was composed of 4 RAID-DP RAID groups, each made up of 15 data disks and 2 parity disks. The second aggregate held Clustered Data ONTAP operating system files; it was composed of a single RAID-DP RAID group of 3 disks. Additionally, each storage controller node was allocated 1 spare disk. A storage virtual machine, or "SVM", was created on the cluster, spanning both storage controller nodes. Within the SVM, two FlexVols, the containers for user data, were then created on the data aggregate of each storage controller node (for a total of 4 flexible volumes); each FlexVol was striped across all the disks in the data aggregate and was primarily owned by its local storage controller node. In the event of a controller failure, the partner storage controller node would take over ownership and manage the FlexVols. FlexVols can be accessed as a single namespace within the SVM, but in this test the FlexVols were accessed as separate file systems.
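The following is a hedged sketch of the kind of clustered Data ONTAP commands that could produce a layout like the one described above (shown for one node only); the aggregate, SVM, and volume names and the volume size are illustrative assumptions, not values taken from this report:

  storage aggregate create -aggregate aggr_data_n1 -node node1 -diskcount 68 -raidtype raid_dp -maxraidsize 17    # data aggregate: 4 RAID-DP groups of 17 disks (15 data + 2 parity)
  vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr_data_n1 -rootvolume-security-style unix      # SVM spanning the cluster (names assumed)
  volume create -vserver svm1 -volume data1 -aggregate aggr_data_n1 -size 14TB -junction-path /data1              # one of two FlexVols on this node's data aggregate (size assumed)
  volume create -vserver svm1 -volume data2 -aggregate aggr_data_n1 -size 14TB -junction-path /data2
  vserver export-policy rule create -vserver svm1 -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any    # full-access export policy applied to the file systems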
Item No | Network Type | Number of Ports Used | Notes |
---|---|---|---|
1 | Jumbo frame 10GbE | 6 | 4 ports (2 per node) were used for the cluster network and 2 ports (1 per node) were used for the data network |
The two cluster ports from each node were connected directly to each other in a switchless configuration. This provides high availability for the cluster network in case of port or link failure. One port from each node and each load generator was connected to a Cisco Nexus 5596 switch for the data network. The data and cluster networks were on separate subnets. All the ports (cluster and data) were configured to use jumbo frames.
Each load generator was connected to the Cisco Nexus 5596 switch via a single 10GbE port. Jumbo frames were enabled for all connections to the switch.
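As a hedged illustration of the client side, jumbo frames and an NFSv3 mount on a RHEL 6.4 load generator might be set up as follows; the interface name, server address, and mount paths are assumptions for the example, not values from this report:

  ip link set dev eth2 mtu 9000                                    # enable jumbo frames on the 10GbE data port (interface name assumed)
  mount -t nfs -o vers=3,proto=tcp 192.0.2.10:/data1 /mnt/data1    # NFSv3 mount of one exported FlexVol (address and paths assumed)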
Item No | Qty | Type | Description | Processing Function |
---|---|---|---|---|
1 | 2 | CPU | 2.0GHz Intel Xeon(R) Processor E5-2620 | Networking, NFS protocol, WAFL filesystem, RAID/Storage drivers, Clustering |
Each storage controller has one physical processor and each physical processor is made up of six cores.
Description | Size in GB | Number of Instances | Total GB | Nonvolatile |
---|---|---|---|---|
Storage Controller Main Memory | 24 | 2 | 48 | V |
NVRAM Non-Volatile Memory on PCIe Adapter | 4 | 2 | 8 | NV |
Flash Cache 2 Module Memory | 512 | 2 | 1024 | NV |
Grand Total Memory Gigabytes | 1080 |
Each storage controller has main memory that is used for the operating system and for caching filesystem data. The Flash Cache module is a read cache used for caching filesystem data. A separate, integrated battery-backed RAM module is used to provide stable storage for writes that may not yet have been written to disk.
The WAFL filesystem logs writes, and other filesystem data modifying transactions, to the NVRAM adapter. In a storage-failover configuration, as in the system under test, such transactions are also logged to the NVRAM on the partner storage controller so that, in the event of a storage controller failure, any transactions on the failed controller can be completed by the partner controller. Filesystem modifying CIFS/NFS operations are not acknowledged until after the storage system has confirmed that the related data are stored in NVRAM adapters of both storage controllers (when both controllers are active). The battery-backed NVRAM ensures that any uncommitted transactions are preserved for at least 72 hours.
The system under test consisted of 2 FAS8020 storage controllers and 6 storage shelves, each with 24 x 600GB SAS disk drives. The controllers were running Clustered Data ONTAP 8.2.1 software. The 2 storage controllers (nodes) were configured in a storage failover (SFO) configuration and connected to their respective disk shelves in a multi-path high-availability (MPHA) configuration. The SFO was provided by the storage failover software option in conjunction with a 10GbE backplane connection between controllers. The cluster network consists of 2 10GbE ports per node, connected directly in a switchless configuration to provide redundancy. The data network is built around a Cisco Nexus 5596 switch; each node and each load generator has a single 10GbE port connected to this switch. The data and cluster networks are on separate subnets. All ports and interfaces on the data and cluster networks have jumbo frames enabled.
All standard data protection features, including background RAID and media error scrubbing, software validated RAID checksum, and double disk failure protection via double parity RAID (RAID-DP) were enabled during the test.
Item No | Qty | Vendor | Model/Name | Description |
---|---|---|---|---|
1 | 3 | Fujitsu | Fujitsu Primergy RX300 S7 | Fujitsu RX300-S7 rack server with 124GB memory running RHEL 6.4 |
2 | 1 | Cisco | Nexus 5596 | Cisco Nexus 5596UP Switch |
LG Type Name | LG1 |
---|---|
BOM Item # | 1 |
Processor Name | Intel Xeon(R) E5-2630 |
Processor Speed | 2.30GHz |
Number of Processors (chips) | 2 |
Number of Cores/Chip | 6 |
Memory Size | 124 GB |
Operating System | RHEL 6.4 Kernel 2.6.32-358.18.1.el6.x86_64 |
Network Type | 10GbE |
Network Attached Storage Type | NFS V3 |
---|---|
Number of Load Generators | 3 |
Number of Processes per LG | 200 |
Biod Max Read Setting | 2 |
Biod Max Write Setting | 2 |
Block Size | AUTO |
LG No | LG Type | Network | Target Filesystems | Notes |
---|---|---|---|---|
1..3 | LG1 | Data Network | /data1 /data2 /data3 /data4 | See UAR Notes |
All clients accessed all file-systems from all the available network interfaces.
Each load-generating client hosted 200 processes, accessing each filesystem from all the network interfaces and network paths to the storage controllers. The flexible volumes were striped evenly across all the disks in the aggregate, using all the SAS adapters connected to the storage backend.
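A hedged sketch of how the load parameters reported above might appear in a SPECsfs2008 sfs_rc configuration file; the client hostnames and server name in the mount-point list are placeholders, and only parameters reported above are shown:

  PROCS=200                                                               # processes per load generator, as reported
  CLIENTS="lg1 lg2 lg3"                                                   # three load-generating clients (hostnames assumed)
  MNT_POINTS="server:/data1 server:/data2 server:/data3 server:/data4"    # four target filesystems (server name assumed)
  BIOD_MAX_READS=2                                                        # Biod max read setting
  BIOD_MAX_WRITES=2                                                       # Biod max write setting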
NetApp is a registered trademark and "Data ONTAP", "FlexVol", and "WAFL" are trademarks of NetApp, Inc. in the United States and other countries. All other trademarks belong to their respective owners and should be treated as such.
Generated on Wed Feb 19 07:56:38 2014 by SPECsfs2008 HTML Formatter
Copyright © 1997-2008 Standard Performance Evaluation Corporation
First published at SPEC.org on 19-Feb-2014