SPECsfs2008_nfs.v3 Result
================================================================================
EMC Isilon : S210-2U-Dual-256GB-2x1GE-2x10GE SFP+-14TB-800GB SSD - 14 Nodes
SPECsfs2008_nfs.v3 = 253357 Ops/Sec (Overall Response Time = 1.18 msec)
================================================================================

Performance
===========

   Throughput   Response
    (ops/sec)     (msec)
   ----------   --------
        25504        0.7
        51054        0.6
        76667        0.7
       102288        0.8
       127879        0.9
       153497        1.0
       179261        1.2
       205226        1.4
       231069        2.0
       253357        5.7

================================================================================

Product and Test Information
============================

Tested By            EMC Isilon
Product Name         S210-2U-Dual-256GB-2x1GE-2x10GE SFP+-14TB-800GB SSD -
                     14 Nodes
Hardware Available   July 2014
Software Available   July 2014
Date Tested          June 2014
SFS License Number   47
Licensee Locations   Hopkinton, MA

The Isilon S210, built on Isilon's proven scale-out storage platform, provides
enterprises with industry-leading IO/s from a single file system and single
volume. The S210 accelerates business and increases speed-to-market by
providing scalable, high-performance storage for mission-critical and highly
transactional applications. In addition, the single file system, single
volume, and linear scalability of the OneFS operating system enable
enterprises to scale storage seamlessly with their environment and
applications while maintaining flat operational expenses. The S210 is based on
enterprise-class 2.5" 10,000 RPM Serial Attached SCSI drive technology, 10
Gigabit Ethernet networking, and a high-performance InfiniBand back-end. The
S210 scales from as few as 3 nodes to as many as 144 nodes in a single file
system, single volume.
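The reported Overall Response Time can be reproduced from the Performance
table above: SPECsfs2008 summarizes the curve as the area under the
response-vs-throughput plot divided by the peak throughput. A minimal sketch,
assuming the curve is anchored at the origin (that anchoring is an assumption
here, but it reproduces the published 1.18 msec):

```python
# Reproduce the Overall Response Time from the published throughput/response
# curve using trapezoidal integration. Assumption: the curve starts at (0, 0).

points = [  # (throughput ops/sec, response msec) from the Performance table
    (25504, 0.7), (51054, 0.6), (76667, 0.7), (102288, 0.8), (127879, 0.9),
    (153497, 1.0), (179261, 1.2), (205226, 1.4), (231069, 2.0), (253357, 5.7),
]

def overall_response_time(curve):
    """Trapezoidal area under the curve, from the origin, over peak throughput."""
    pts = [(0, 0.0)] + curve
    area = sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    return area / curve[-1][0]

print(round(overall_response_time(points), 2))  # → 1.18
```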
Configuration Bill of Materials
===============================

Item
No    Qty   Type               Vendor   Model/Name            Description
----  ----  -----------------  -------  --------------------  ------------------
1     14    Storage Node       Isilon   S210-2U-Dual-256GB-   S210 14TB SAS +
                                        2x1GE-2x10GE          800GB SSD storage
                                        SFP+-14TB-800GB SSD   node
2     14    Software License   Isilon   OneFS 7.1.1           OneFS 7.1.1
                                                              license
3     1     Infiniband Switch  QLogic   12200-18              18-port QDR
                                                              Infiniband switch

Server Software
===============

OS Name and Version   OneFS 7.1.1
Other Software        N/A
Filesystem Software   OneFS

Server Tuning
=============

Name                     Value    Description
-----------------------  -------  ---------------------------------------------
efs.journal.flush_idle   0        avoid background journal flushes under low
                                  load
access pattern           random   disable data prefetch

Server Tuning Notes
-------------------
N/A

Disks and Filesystems
=====================

Description                     Number of Disks   Usable Size
------------------------------  ---------------   -----------
600GB SAS 10k RPM Disk Drives   322               173.0 TB
800GB SSD                       14                10.0 TB
Total                           336               183.0 TB

Number of Filesystems         1
Total Exported Capacity       145 TB
Filesystem Type               IFS
Filesystem Creation Options   Default
Filesystem Config             16+2/2 parity protected (default)
Fileset Size                  29880.5 GB

The SSD policy is set to metadata-write acceleration with L3 cache disabled.
File data is striped across all 14 nodes using at most 2 drives per node for a
given protection group, thus protecting against 2 drive failures or one full
node failure.

Network Configuration
=====================

Item                        Number of
No    Network Type          Ports Used   Notes
----  --------------------  ----------   ----------------------------------
1     10GbE SFP+ PCIe NIC   14           Single 10GbE interface configured
                                         on each node, 1500 MTU

Network Configuration Notes
---------------------------
All nodes and clients are connected to an Arista Networks DCS-7150S-64-CL
switch.

Benchmark Network
=================

Each load generator and each S210 storage node was configured with a single
10GbE, 1500 MTU connection to the Arista switch.
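The Disks and Filesystems totals above can be checked with simple arithmetic;
a small sketch, using only the figures from the table:

```python
# Verify the Disks and Filesystems totals reported in the table above.

rows = [
    # (description, number_of_disks, usable_tb)
    ("600GB SAS 10k RPM Disk Drives", 322, 173.0),
    ("800GB SSD", 14, 10.0),
]

total_disks = sum(n for _, n, _ in rows)
total_usable_tb = sum(tb for _, _, tb in rows)

# With 14 nodes, this works out to 23 SAS drives + 1 SSD = 24 drives per node.
print(total_disks, total_usable_tb)  # → 336 183.0
```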
Processing Elements
===================

Item
No    Qty   Type   Description                     Processing Function
----  ----  -----  ------------------------------  ---------------------------
1     28    CPU    Intel(R) Xeon(R) CPU E5-2620    Network, NFS, Filesystem,
                   v2 @ 2.10GHz                    Device Drivers

Processing Element Notes
------------------------
Each storage node has 2 physical processors, each with 6 cores and SMT
enabled.

Memory
======

Description                           Size    Number of
                                      in GB   Instances   Total GB   Nonvolatile
------------------------------------  ------  ----------  ---------  -----------
Storage Node System Memory            256     14          3584       V
Storage Node Integrated NVRAM module  2       14          28         NV
with Vault-to-Flash
Grand Total Memory Gigabytes                              3612

Memory Notes
------------
Each storage controller has main memory that is used for the operating system
and for caching filesystem data. A separate, integrated battery-backed RAM
module provides stable storage for writes that have not yet been written to
disk.

Stable Storage
==============

Each storage node is equipped with an NVRAM journal that stores writes to the
local disks. The NVRAM has backup power to save data to dedicated on-card
flash in the event of power loss.

System Under Test Configuration Notes
=====================================

The system under test consisted of 14 S210 storage nodes, 2U each, connected
by QDR Infiniband. Each storage node was configured with a single 10GbE
network interface connected to a 10GbE switch.
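The Memory table's grand total follows directly from per-instance size times
instance count; a quick check of the arithmetic:

```python
# Verify the Memory table totals: size_in_gb * instances per row, summed,
# should equal the reported grand total of 3612 GB.

rows = [
    # (description, size_gb, instances)
    ("Storage Node System Memory", 256, 14),
    ("Storage Node Integrated NVRAM module", 2, 14),
]

totals = {desc: size * count for desc, size, count in rows}
grand_total = sum(totals.values())
print(totals, grand_total)  # 3584 GB + 28 GB = 3612 GB
```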
Other System Notes
==================

Test Environment Bill of Materials
==================================

Item
No    Qty   Vendor            Model/Name        Description
----  ----  ----------------  ----------------  ------------------------------
1     7     Intel             S2600WP           Blade system, dual 6-core
                                                E5-2630 0 @ 2.30GHz, 48GB RAM
2     1     Arista Networks   DCS-7150S-64-CL   Arista Networks
                                                DCS-7150S-64-CL switch

Load Generators
===============

LG Type Name                   LG1
BOM Item #                     1
Processor Name                 Intel Xeon E5-2630 0
Processor Speed                2.30GHz
Number of Processors (chips)   2
Number of Cores/Chip           6
Memory Size                    48 GB
Operating System               CentOS release 6.5, kernel
                               2.6.32-431.el6.x86_64
Network Type                   Intel Corporation 82599ES 10-Gigabit

Load Generator (LG) Configuration
=================================

Benchmark Parameters
--------------------

Network Attached Storage Type   NFS V3
Number of Load Generators       7
Number of Processes per LG      192
Biod Max Read Setting           2
Biod Max Write Setting          2
Block Size                      AUTO

Testbed Configuration
---------------------

LG No   LG Type   Network   Target Filesystems   Notes
------  --------  --------  -------------------  -----
1..7    LG1       1         /ifs/data

Load Generator Configuration Notes
----------------------------------
All clients were connected to a single filesystem through all storage nodes.

Uniform Access Rule Compliance
==============================

Each load-generating client hosted 192 processes. The assignment of processes
to network interfaces was done such that they were evenly divided across all
network paths to the storage controllers. The filesystem data was striped
evenly across all disks and storage nodes.

Other Notes
===========

================================================================================
Generated on Mon Jul 07 08:59:18 2014 by SPECsfs2008 ASCII Formatter
Copyright (C) 1997-2008 Standard Performance Evaluation Corporation