SPECsfs2008_nfs.v3 Result
================================================================================
EMC Corporation        : Celerra Gateway NS-G8 Server Failover Cluster, 3
                         Datamovers (1 stdby) / Symmetrix V-Max
SPECsfs2008_nfs.v3     = 110621 Ops/Sec (Overall Response Time = 2.32 msec)
================================================================================

Performance
===========
   Throughput       Response
    (ops/sec)         (msec)
   ----------       --------
        11030            0.7
        22073            0.8
        33151            1.1
        44193            1.3
        55264            2.2
        66540            2.4
        77592            2.9
        88317            3.4
        99560            4.7
       110621            7.3

(An arithmetic cross-check of the overall response time appears after the
Server Tuning section below.)

================================================================================

Product and Test Information
============================
Tested By              EMC Corporation
Product Name           Celerra Gateway NS-G8 Server Failover Cluster, 3
                       Datamovers (1 stdby) / Symmetrix V-Max
Hardware Available     December 2009
Software Available     December 2009
Date Tested            December 2009
SFS License Number     47
Licensee Locations     Hopkinton, MA

The NS-G8 Gateway Server system is a consolidation of file-based servers with
NAS (NFS, CIFS) applications configured in a 2+1 high-availability cluster.
The servers deliver network services over high-speed Gigabit Ethernet. The
cluster tested here consists of two active datamovers, each providing 4
Jumbo-frame-capable Gigabit Ethernet interfaces, and one standby datamover to
provide high availability. The NS-G8 is a gateway to the shared Symmetrix
V-Max storage array. The Symmetrix V-Max is a new storage architecture; the
tested array uses 4 V-Max engines and connects to the fabric through a set of
multiple active 4 Gbit/s FC HBAs.

Configuration Bill of Materials
===============================
Item
 No  Qty  Type             Vendor   Model/Name   Description
---- ---- ---------------- -------  -----------  -------------------------------
  1    1  Enclosure        EMC      NSG8-DME0    Celerra NS-G8 empty datamover
                                                 add-on enclosure
  2    1  Enclosure        EMC      NSG8-DME1    Celerra NS-G8 empty datamover
                                                 add-on enclosure
  3    3  Datamover        EMC      NSG8-DM-8A   Celerra NS-G8 datamover, 4 GbE
                                                 ports, 4 FC ports
  4    1  Control station  EMC      NSG8-CSB     Celerra NS-G8 control station
                                                 (administration only)
  5    3  Software         EMC      NSG8-UNIX-L  Celerra NS-G8 UNIX License
  6    1  Intelligent      EMC      SB-64-BASE   Symmetrix V-Max Base Engine,
          Storage Array                          64 GB Cache
          Engine
  7    3  Intelligent      EMC      SB-ADD64NDE  Symmetrix V-Max Add Engine,
          Storage Array                          64 GB Cache
          Engine
  8   16  FE IO Module     EMC      SB-FE80000   Symmetrix V-Max Front End IO
                                                 Module with multimode SFPs
  9   16  Drive Enclosure  EMC      SB-DE15-DIR  V-Max Direct Connect Storage
                                                 Bay, Drive Enclosure
 10   96  Flash Disk       STEC     NF4F14001B   V-Max Enterprise Flash Drive
                                                 (EFD), 400 GB, 4 Gbit/s FC,
                                                 optical
 11    4  FC Disk          SEAGATE  NS4154501B   V-Max Cheetah 450 GB 15K.6
                                                 4 Gbit/s FC Disks
 12    4  Standby Power    EMC      SB-DB-SPS    V-Max Standby Power Supply
          Supply
 13    1  FC Switch        EMC      DS-300B      24 port Fibre Channel Switch

Server Software
===============
OS Name and Version    DART 5.6.46.4
Other Software         EMC Celerra Control Station Linux 2.6.9-67.0.4.5611
Filesystem Software    Celerra UxFS File System

Server Tuning
=============
Name                               Value       Description
----                               -----       -----------
ufs syncInterval                   22500       Timeout between UxFS log flushes
file async                         30          Total cached dirty blocks for
  asyncThresholdPercentage                     NFSv3 writes
ufs cgHighWaterMark                131071      Defines the system's CG cache
                                               size
ufs inoBlkHashSize                 170669      Inode block hash size
ufs updateAccTime                  0           Disable access-time updates
ufs nFlushDir                      80          Number of UxFS directory and
                                               indirect blocks flush threads
file prefetch                      0           Disable DART read prefetch
ufs inoHighWaterMark               65536       Number of dirty inode buffers
                                               per filesystem
nfs thrdToStream                   7           Number of NFS flush threads per
                                               stream
ufs inoHashTableSize               2005027     Inode hash table size
mkfsArgs dirType                   DIR_COMPAT  Compatibility-mode directory
                                               style
kernel maxStrToBeProc              24          Number of network streams to
                                               process at once
ufs nFlushIno                      128         Number of UxFS inode blocks
                                               flush threads
kernel outerLoop                   16          Number of consecutive iterations
                                               of network packet processing
ufs nFlushCyl                      40          Number of UxFS cylinder group
                                               blocks flush threads
nfs withoutCollector               1           Enable NFS-to-CPU thread
                                               affinity
kernel buffersWatermarkPercentage  5           Flushing buffer cache threshold
file initialize nodes              1000000     Number of inodes
file initialize dnlc               3676000     Number of dynamic name lookup
                                               cache entries
nfs start openfiles                1200000     Number of open files
nfs start nfsd                     4           Number of NFS daemons

Server Tuning Notes
-------------------
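Overall Response Time Cross-Check (illustration, not part of the submission)
----------------------------------------------------------------------------
The short Python sketch below cross-checks the published overall response
time against the Performance table above. It assumes, as one plausible
reading of the SPECsfs2008 reporting rules, that the overall response time is
the trapezoidal area under the response-time-versus-throughput curve,
anchored at the origin, divided by the peak throughput; under that assumption
it reproduces the published 2.32 msec.

    # Data points copied from the Performance table: (ops/sec, msec).
    points = [
        (11030, 0.7), (22073, 0.8), (33151, 1.1), (44193, 1.3), (55264, 2.2),
        (66540, 2.4), (77592, 2.9), (88317, 3.4), (99560, 4.7), (110621, 7.3),
    ]

    # Assumption: the curve is anchored at the origin before integrating.
    curve = [(0, 0.0)] + points

    # Trapezoidal area under the curve, divided by peak throughput.
    area = sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(curve, curve[1:]))
    ort = area / curve[-1][0]
    print(f"Overall response time ~= {ort:.2f} msec")   # prints ~2.32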
Disks and Filesystems
=====================
                                                           Number    Usable
Description                                                of Disks  Size
---------------------------------------------------------  --------  -------
This set of 96 EFD disks is divided into 48 2-disk RAID1      96      18.8 TB
pairs, each pair bound with 4 LUs, exported as 192 logical
volumes. All data file systems reside on these disks.
This set of FC disks consists of 2 2-disk RAID1 pairs,         4      900.0 GB
each with 2 LUs per drive, exported as 8 logical volumes.
These disks are reserved for Celerra system use.
Total                                                        100      19.6 TB

Number of Filesystems        8
Total Exported Capacity      17600 GB
Filesystem Type              UxFS
Filesystem Creation Options  Default
Filesystem Config            Each filesystem is striped (32 KB element size)
                             across 48 disks (192 logical volumes); this
                             applies to fs1, fs2, fs3, fs4, fs5, fs6, fs7,
                             and fs8.
Fileset Size                 12889.8 GB

The stripe size for all RAID1 logical volumes was 32 KB. Each logical volume
was 100 GB. The filesystem fs1 was built on a Celerra metavolume made by
striping across 24 logical volumes on the first V-Max engine; fs2 was built on
a metavolume striped across 24 logical volumes on the same engine. fs3 and fs4
were configured similarly on the second V-Max engine, fs5 and fs6 on the
third, and fs7 and fs8 on the fourth. (An arithmetic cross-check of this
layout appears after the Memory section below.)

Network Configuration
=====================
Item                       Number of
 No  Network Type          Ports Used  Notes
---- --------------------  ----------  -------------------------------------
  1  Jumbo Gigabit             8       This is the Gigabit network interface
     Ethernet                          used for both datamovers.

Network Configuration Notes
---------------------------
All Gigabit network interfaces were connected to a Cisco 6509 switch.

Benchmark Network
=================
An MTU size of 9000 was set for all connections to the switch. Each datamover
was connected to the network via 4 ports. The LG1 class workload machines
were each connected with one port.

Processing Elements
===================
Item                                                         Processing
 No  Qty  Type  Description                                  Function
---- ---- ----  -------------------------------------------  ----------------
  1    4  CPU   Dual Intel quad-core 2.3 GHz Xeon E5345,     NFS protocol,
                8 MB L2 cache, per datamover server. 4       UxFS filesystem
                chips were active for the workload (the 2
                chips in the standby datamover are not
                included in the quantity).

Processing Element Notes
------------------------
Each datamover has two physical processors.

Memory
======
                                                Size in  Number of  Total  Nonvo-
Description                                     GB       Instances  GB     latile
----------------------------------------------  -------  ---------  -----  ------
Each datamover's main memory (the 4 GB in the      4         2         8   V
standby datamover is not included in the
quantity)
V-Max storage array battery-backed global         64         4       256   NV
memory (64 GB per V-Max engine)

Grand Total Memory Gigabytes                                          264

Memory Notes
------------
The Symmetrix V-Max was configured with a total of 256 GB of memory. The
memory is backed by sufficient battery power to safely destage all cached
data to disk in the event of a power failure.
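Capacity and Striping Cross-Check (illustration, not part of the submission)
----------------------------------------------------------------------------
The Python sketch below reproduces the capacity and striping figures reported
in the Disks and Filesystems section from the stated inputs. The only
assumption beyond the reported numbers is the conversion 1 TB = 1024 GB,
which makes the published 18.8 TB usable figure come out.

    EFD_DISKS = 96
    RAID1_PAIRS = EFD_DISKS // 2   # 48 mirrored pairs
    LUS_PER_PAIR = 4               # "Each RAID1 pair had 4 LUs bound on it"
    LU_SIZE_GB = 100               # "Each logical volume was 100 GB"

    logical_volumes = RAID1_PAIRS * LUS_PER_PAIR      # 192 logical volumes
    usable_gb = logical_volumes * LU_SIZE_GB          # 19200 GB
    print(f"{logical_volumes} LVs, {usable_gb / 1024:.1f} TB usable")  # 18.8 TB

    # Eight filesystems, each a metavolume striped over 24 LVs,
    # two filesystems per V-Max engine:
    LVS_PER_FS = 24
    assert 8 * LVS_PER_FS == logical_volumes          # all 192 LVs consumed
    print(f"per-filesystem metavolume: {LVS_PER_FS * LU_SIZE_GB} GB")  # 2400 GB

    # The exported capacity (17600 GB total, i.e. 2200 GB per filesystem) is
    # below the 2400 GB metavolume size, consistent with filesystem overhead.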
Stable Storage
==============
8 NFS file systems were used. Each RAID1 pair had 4 LUs bound on it. Each file
system was striped over a quarter of the logical volumes. The storage array
had 8 Fibre Channel connections, 4 per datamover. In this configuration, NFS
stable write and commit operations are not acknowledged until the storage
array has acknowledged that the related data has been stored in stable
storage (i.e., NVRAM or disk).

System Under Test Configuration Notes
=====================================
The system under test consisted of 2 NS-G8 Gateway datamovers attached to a
Symmetrix V-Max storage array with 4 FC links per datamover. The datamovers
were running DART 5.6.46.4. 4 GbE ports per datamover were connected to the
network.

Other System Notes
==================
Failover is supported by an additional Celerra datamover that operates in
standby mode. In the event of a datamover failure, this unit takes over the
function of the failed unit. The standby datamover does not contribute to the
performance of the system and is not included in the components listed above.

Test Environment Bill of Materials
==================================
Item
 No  Qty  Vendor  Model/Name      Description
---- ---- ------  --------------  ----------------------------------------
  1   24  Dell    PowerEdge 1850  Dell server with 1 GB RAM and the Linux
                                  2.6.9-42.ELsmp operating system

Load Generators
===============
LG Type Name                  LG1
BOM Item #                    1
Processor Name                Intel(R) Xeon(TM) CPU 3.60GHz
Processor Speed               3.6 GHz
Number of Processors (chips)  2
Number of Cores/Chip          2
Memory Size                   1 GB
Operating System              Linux 2.6.9-42.ELsmp
Network Type                  1 x Broadcom BCM5704 NetXtreme Gigabit Ethernet

Load Generator (LG) Configuration
=================================

Benchmark Parameters
--------------------
Network Attached Storage Type  NFS V3
Number of Load Generators      24
Number of Processes per LG     32
Biod Max Read Setting          5
Biod Max Write Setting         5
Block Size                     AUTO

Testbed Configuration
---------------------
LG No  LG Type  Network  Target Filesystems          Notes
-----  -------  -------  --------------------------  -----
1..24  LG1      1        /fs1,/fs2,/fs3.../fs7,/fs8  N/A

Load Generator Configuration Notes
----------------------------------
All filesystems were mounted on all clients, which were connected to the same
physical and logical network.

Uniform Access Rule Compliance
==============================
Each client has the same file systems mounted from each of the two active
datamovers. (A sketch of the resulting per-filesystem process distribution is
appended after this report.)

Other Notes
===========
Failover is supported by an additional Celerra datamover that operates in
standby mode. In the event of a datamover failure, this unit takes over the
function of the failed unit. The standby datamover does not contribute to the
performance of the system and is not included in the components listed above.

The Symmetrix V-Max was configured with 256 GB of memory, 64 GB per V-Max
engine. The memory is backed by sufficient battery power to safely destage
all cached data to disk in the event of a power failure.

================================================================================
Generated on Wed Jan 27 10:56:58 2010 by SPECsfs2008 ASCII Formatter
Copyright (C) 1997-2008 Standard Performance Evaluation Corporation
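Load Distribution Sketch (illustration, not part of the submission)
--------------------------------------------------------------------
The Python sketch below works out the per-filesystem process distribution
implied by the Benchmark Parameters and Uniform Access Rule sections. The
round-robin assignment is an assumption: it is one plausible way to realize
uniform access over the 8 filesystems, not the benchmark's documented
mechanism.

    NUM_LGS = 24        # "Number of Load Generators"
    PROCS_PER_LG = 32   # "Number of Processes per LG"
    FILESYSTEMS = [f"/fs{i}" for i in range(1, 9)]   # /fs1 .. /fs8

    total_procs = NUM_LGS * PROCS_PER_LG             # 768 processes
    per_fs = total_procs // len(FILESYSTEMS)         # 96 processes each

    # Assumed round-robin assignment of processes to filesystems.
    assignment = {fs: 0 for fs in FILESYSTEMS}
    for p in range(total_procs):
        assignment[FILESYSTEMS[p % len(FILESYSTEMS)]] += 1

    assert all(count == per_fs for count in assignment.values())
    print(f"{total_procs} processes -> {per_fs} per filesystem")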