SPECsfs2008_nfs.v3 Result
================================================================================
Avere Systems, Inc. : FXT 2500 (6 Node Cluster)
SPECsfs2008_nfs.v3 = 131591 Ops/Sec (Overall Response Time = 1.38 msec)
================================================================================

Performance
===========
   Throughput    Response
    (ops/sec)      (msec)
   ----------    --------
        13082         0.5
        26185         0.7
        39341         0.9
        52474         1.0
        65602         1.1
        78844         1.3
        91961         1.5
       105042         1.8
       118413         2.7
       131591         4.6
================================================================================

Product and Test Information
============================
Tested By             Avere Systems, Inc.
Product Name          FXT 2500 (6 Node Cluster)
Hardware Available    October 2009
Software Available    October 2009
Date Tested           September 2009
SFS License Number    9020
Licensee Locations    Pittsburgh, PA USA

The Avere Systems FXT 2500 appliance provides tiered NAS storage that allows
performance to scale independently of capacity. The FXT 2500 is built on a
64-bit architecture that is managed by Avere OS software. FXT clusters scale to
as many as 25 nodes, support millions of IO operations per second, and deliver
multiple GB/s of bandwidth. The Avere OS software dynamically organizes data
into tiers: active data is placed on the FXT appliances, while inactive data is
placed on slower mass storage servers. The system accelerates the performance
of read, write, and metadata operations. The FXT 2500 supports both the NFSv3
and CIFS protocols. The tested configuration consisted of (6) FXT 2500 cluster
nodes; the system also included a Linux server that acted as the mass storage
system.

Configuration Bill of Materials
===============================
Item
 No  Qty  Type                Vendor               Model/Name       Description
---- ---- ------------------- -------------------- ---------------- -----------
  1    6  Storage Appliance   Avere Systems, Inc.  FXT 2500         Avere Systems Tiered NAS Appliance running
                                                                    Avere OS 1.0 software. Includes (8) 450 GB
                                                                    SAS disks.
  2    1  Server Motherboard  Supermicro           X7DWN+           Motherboard for Linux mass storage NFS server.
  3    1  Server Chassis      Supermicro           CSE-846E1-R900B  4U 24-drive chassis for Linux mass storage
                                                                    NFS server.
  4   24  Disk Drive          Seagate              ST31000340NS     1 TB Barracuda ES.2 SATA 7200 RPM disks for
                                                                    Linux mass storage NFS server.
  5    1  System Disk         Seagate              ST3500320NS      500 GB Barracuda ES.2 SATA 7200 RPM disk for
                                                                    Linux mass storage NFS server.
  6    1  RAID Controller     LSI                  SAS8888ELP       SAS/SATA RAID controller with 512 MB cache
                                                                    and external battery backup unit for Linux
                                                                    mass storage NFS server.
  7    1  NVRAM Adapter       Curtiss-Wright       MM-5453CN        Micro Memory PCIe 1 GB battery-backed NVRAM
                                                                    adapter for Linux mass storage NFS server.

Server Software
===============
OS Name and Version    Avere OS 1.0
Other Software         Mass storage server runs CentOS 5.3 with a Linux 2.6.30 kernel
Filesystem Software    Avere OS 1.0

Server Tuning
=============
Name                                 Value     Description
-----------------------------------  --------  -----------
Slider Age                           8 hours   Files may be modified up to 8 hours before being
                                               written back to the mass storage system.
net.core.rmem_max                    1048576   Maximum network receive socket buffer size (in bytes)
                                               on the Linux mass storage system.
net.core.rmem_default                1048576   Default network receive socket buffer size (in bytes)
                                               on the Linux mass storage system.
net.core.wmem_max                    1048576   Maximum network send socket buffer size (in bytes)
                                               on the Linux mass storage system.
net.core.wmem_default                1048576   Default network send socket buffer size (in bytes)
                                               on the Linux mass storage system.
vm.min_free_kbytes                   40960     Force the Linux VM system to keep 40 MB free on the
                                               Linux mass storage system.
vm.dirty_background_ratio            9         Percentage of dirty pages at which the Linux VM
                                               system starts writing out dirty pages on the Linux
                                               mass storage system.
vm.dirty_expire_centisecs            99        Age (in centiseconds) at which a buffer will be
                                               written out on the Linux mass storage server.
vm.dirty_writeback_centisecs         100       Interval (in centiseconds) at which the Linux flush
                                               daemons wake and write out dirty data on the Linux
                                               mass storage server.
vm.dirty_ratio                       10        Point (expressed as a percentage of total memory) at
                                               which a process writing data will start writing out
                                               dirty data on the Linux mass storage server.
fs.xfs.xfssyncd_centisecs            200       XFS tunable. The interval at which the xfssyncd
                                               thread flushes metadata out to disk. This thread
                                               flushes out log activity and does some processing on
                                               unlinked inodes. This setting is for the Linux mass
                                               storage system.
fs.xfs.xfsbufd_centisecs             100       XFS tunable. The interval at which xfsbufd scans the
                                               dirty metadata buffers list. This setting is for the
                                               Linux mass storage system.
fs.xfs.age_buffer_centisecs          200       XFS tunable. The age at which xfsbufd flushes dirty
                                               metadata buffers to disk. This setting is for the
                                               Linux mass storage system.
/sys/block/sdc/queue/scheduler       noop      Turn off the elevator scheduler for the RAID array on
                                               the Linux mass storage server. The hardware RAID
                                               controller provides sufficient scheduling.
/sys/block/sdc/queue/read_ahead_kb   0         Turn off block read-ahead for the RAID array on the
                                               Linux mass storage server. The hardware RAID
                                               controller provides sufficient read-ahead.
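The Linux entries above are standard sysctl and sysfs tunables. The following is
a minimal sketch, for illustration only, of how the listed values could be
applied on the mass storage server; the report does not state how they were
actually set, and the device name sdc is taken from the table above.

    # Illustrative sketch only -- the report does not show how these values were applied.
    # Network socket buffer sizes (bytes) on the Linux mass storage server:
    sysctl -w net.core.rmem_max=1048576
    sysctl -w net.core.rmem_default=1048576
    sysctl -w net.core.wmem_max=1048576
    sysctl -w net.core.wmem_default=1048576

    # VM writeback behavior:
    sysctl -w vm.min_free_kbytes=40960
    sysctl -w vm.dirty_background_ratio=9
    sysctl -w vm.dirty_expire_centisecs=99
    sysctl -w vm.dirty_writeback_centisecs=100
    sysctl -w vm.dirty_ratio=10

    # XFS metadata flushing intervals:
    sysctl -w fs.xfs.xfssyncd_centisecs=200
    sysctl -w fs.xfs.xfsbufd_centisecs=100
    sysctl -w fs.xfs.age_buffer_centisecs=200

    # Block-layer settings for the RAID array (/dev/sdc):
    echo noop > /sys/block/sdc/queue/scheduler
    echo 0 > /sys/block/sdc/queue/read_ahead_kb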
Server Tuning Notes
-------------------
A 23+1 RAID5 array was created on the Linux mass storage server using the
following LSI MegaCli command line: 'MegaCli64 -CfgLdAdd -r5 '[10:0,10:1,10:2,
10:3,10:4,10:5,10:6,10:7,10:8,10:9,10:10,10:11,10:12,10:13,10:14,10:15,10:16,
10:17,10:18,10:19,10:20,10:21,10:22,10:23]' WB ADRA Cached NoCachedBadBBU
-strpsz32 -a0', and the disk caches were disabled as follows: 'MegaCli64
-LDSetProp -DisDskCache -L0 -a0'. This MegaCli command line creates a 23+1
RAID5 array with write-back caching enabled in the controller (WB), adaptive
read-ahead (ADRA), controller caching (Cached), caching disabled if the
battery-backup unit fails (NoCachedBadBBU), and a 32 KB stripe unit size
(-strpsz32), on controller zero (-a0). Complete documentation for the MegaCli
command line tool is available at
http://www.lsi.com/DistributionSystem/AssetDocument/80-00156-01_RevH_SAS_SW_UG.pdf.

Disks and Filesystems
=====================
Description                                                          Number of Disks  Usable Size
-------------------------------------------------------------------  ---------------  -----------
Each FXT 2500 node contains (8) 450 GB 15K RPM SAS disks. All FXT                 48      18.5 TB
data resides on these disks.
Each FXT 2500 node contains (1) 250 GB SATA disk. System disk.                     6       1.5 TB
The mass storage system contains (24) 1 TB SATA disks. The XFS                    24      20.9 TB
filesystem is used to manage these disks, and the FXT nodes access
them via NFSv3.
The mass storage system contains (1) 500 GB SATA disk. System disk.                1     500.0 GB
Total                                                                             79      41.4 TB

Number of Filesystems        1
Total Exported Capacity      21399 GB (Linux mass storage system capacity)
Filesystem Type              TFS (Tiered File System)
Filesystem Creation Options  Default on FXT nodes. The Linux mass storage server
                             XFS filesystem was created with 'mkfs.xfs -d
                             agsize=44876890112,sunit=64,swidth=1472,unwritten=0
                             -l logdev=/dev/umema,size=128m,version=2,sunit=8,lazy-count=1
                             -i align=1 -f /dev/sdc' and mounted using 'mount -t xfs
                             -o logbufs=8,logbsize=262144,allocsize=4096,nobarrier,logdev=/dev/umema
                             /dev/sdc /export'.
Filesystem Config            23+1 RAID5 configuration on the Linux mass storage
                             server.
Fileset Size                 15320.6 GB
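The single exported filesystem resides on the Linux mass storage server and is
accessed by the FXT nodes over NFSv3. For illustration only, a generic Linux
NFSv3 export of the /export mount point might look like the sketch below; the
export options, hostname, and subnet are assumptions rather than details from
this report, and the FXT nodes themselves are configured through Avere OS
rather than a client-side mount command.

    # Illustrative sketch only -- export options, hostname, and subnet are assumed.
    # On the mass storage server, export the XFS filesystem mounted at /export:
    echo '/export 10.0.0.0/24(rw,no_root_squash)' >> /etc/exports
    exportfs -ra

    # A generic NFSv3 client could then reach the same export with:
    mount -t nfs -o vers=3,proto=tcp,hard massstore:/export /mnt/massstore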
Network Configuration
=====================
Item                        Number of
 No   Network Type          Ports Used  Notes
----  --------------------  ----------  -----
  1   10 Gigabit Ethernet            6  One 10 Gigabit Ethernet port used for each
                                        FXT 2500 appliance.
  2   Gigabit Ethernet                1  The mass storage system is connected via
                                        1 Gigabit Ethernet.

Network Configuration Notes
---------------------------
Each FXT 2500 was attached via a single 10 GbE port to an Arista DCS-7124S
24-port 10 GbE switch. The load-generating clients were attached to (2) HP
ProCurve 2900-48G switches, each client via a single 1 GbE link. Each HP switch
had a 10 GbE uplink to the Arista switch. The mass storage server was attached
to the HP switch via a single 1 GbE link. A 1500 byte MTU was used throughout
the network.

Benchmark Network
=================
An MTU size of 1500 was set for all connections to the switch. Each load
generator was connected to the network via a single 1 GbE port. The SUT was
configured with 12 separate IP addresses on one subnet. Each cluster node was
connected via a 10 GbE NIC and sponsored 2 IP addresses.

Processing Elements
===================
Item
 No   Qty  Type  Description                             Processing Function
----  ---  ----  --------------------------------------  -------------------
  1    12  CPU   Intel Xeon E5420 2.50 GHz Quad-Core     FXT 2500: Avere OS, Network, NFS/CIFS,
                 Processor                               Filesystem, Device Drivers
  2     2  CPU   Intel Xeon E5420 2.50 GHz Quad-Core     Linux mass storage system
                 Processor

Processing Element Notes
------------------------
Each file server has two physical processors.

Memory
======
                                                    Size   Number of            Non-
Description                                         in GB  Instances  Total GB  volatile
--------------------------------------------------  -----  ---------  --------  --------
FXT System Memory                                      64          6     384    V
Mass storage system memory                             32          1      32    V
FXT NVRAM module                                        1          6       6    NV
Mass storage system RAID Controller NVRAM Module      0.5          1     0.5    NV
Mass storage system NVRAM Module for XFS log            1          1       1    NV
Grand Total Memory Gigabytes                                            423.5

Memory Notes
------------
Each FXT node has main memory that is used for the operating system and for
caching filesystem data. A separate, battery-backed NVRAM module is used to
provide stable storage for writes that have not yet been written to disk.

Stable Storage
==============
The Avere filesystem logs writes and metadata updates to the NVRAM module.
Filesystem-modifying NFS operations are not acknowledged until the data has
been safely stored in NVRAM. The battery backing the NVRAM ensures that any
uncommitted transactions persist for at least 72 hours.

System Under Test Configuration Notes
=====================================
The system under test consisted of (6) Avere FXT 2500 nodes. Each node was
attached to the network via 10 Gigabit Ethernet. Each FXT 2500 node contains
(8) 450 GB SAS disks. The Linux mass storage system was attached to the network
via a single 1 Gigabit Ethernet link. The mass storage server was a 4U
Supermicro server configured with a (24) 1 TB SATA disk RAID5 array.

Other System Notes
==================
N/A

Test Environment Bill of Materials
==================================
Item
 No   Qty  Vendor           Model/Name  Description
----  ---  ---------------  ----------  -----------
  1    13  Supermicro       X7DWU       Supermicro server with 8 GB of RAM running
                                        CentOS 5.3 (Linux 2.6.18-128.el5)
  2     5  Supermicro       X7DWU       Supermicro server with 8 GB of RAM running
                                        CentOS 5.3 (Linux 2.6.18-92.el5)
  3     1  Arista Networks  DCS-7124S   Arista 24-port 10 GbE switch
  4     2  HP               2900-48G    HP ProCurve 48-port 1 GbE switch

Load Generators
===============
LG Type Name                   LG1
BOM Item #                     1
Processor Name                 Intel Xeon E5420 2.50 GHz Quad-Core Processor
Processor Speed                2.50 GHz
Number of Processors (chips)   2
Number of Cores/Chip           4
Memory Size                    8 GB
Operating System               CentOS 5.3 (Linux 2.6.18-128.el5)
Network Type                   1 x Intel Gigabit Ethernet Controller 82575EB

LG Type Name                   LG2
BOM Item #                     2
Processor Name                 Intel Xeon E5420 2.50 GHz Quad-Core Processor
Processor Speed                2.50 GHz
Number of Processors (chips)   2
Number of Cores/Chip           4
Memory Size                    8 GB
Operating System               CentOS 5.3 (Linux 2.6.18-92.el5)
Network Type                   1 x Intel Gigabit Ethernet Controller 82575EB

Load Generator (LG) Configuration
=================================

Benchmark Parameters
--------------------
Network Attached Storage Type   NFS V3
Number of Load Generators       18
Number of Processes per LG      48
Biod Max Read Setting           2
Biod Max Write Setting          2
Block Size                      0

Testbed Configuration
---------------------
LG No   LG Type  Network  Target Filesystems  Notes
------  -------  -------  ------------------  -----
1..7    LG1      1        /export/3           LG1 nodes attached to the first 1 GbE switch
8..9    LG2      1        /export/3           LG2 nodes attached to the first 1 GbE switch
10..15  LG1      1        /export/3           LG1 nodes attached to the second 1 GbE switch
16..18  LG2      1        /export/3           LG2 nodes attached to the second 1 GbE switch

Load Generator Configuration Notes
----------------------------------
All clients were connected to a single filesystem through all FXT nodes.

Uniform Access Rule Compliance
==============================
Each load-generating client hosted 48 processes. The assignment of processes to
network interfaces was done such that they were evenly divided across all
network paths to the FXT appliances. The filesystem data was evenly distributed
across all disks and FXT appliances.
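For illustration only, the sketch below shows one way the even division
described above could be expressed: the 48 processes on each load-generating
client are assigned round-robin across the 12 SUT IP addresses, so every
address receives the same number of processes. The addresses and the assignment
method are assumptions, not details disclosed in this report.

    # Illustrative sketch only -- actual SUT addresses and assignment tooling are not disclosed.
    SUT_IPS=(10.0.0.{1..12})   # 12 addresses, 2 sponsored by each of the 6 FXT nodes
    PROCS_PER_LG=48

    for ((p = 0; p < PROCS_PER_LG; p++)); do
        ip=${SUT_IPS[$((p % ${#SUT_IPS[@]}))]}
        echo "process $p -> $ip:/export/3"   # 48 / 12 = 4 processes per address per client
    done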
Other Notes
===========
N/A

================================================================================
Generated on Fri Oct 09 08:48:59 2009 by SPECsfs2008 ASCII Formatter
Copyright (C) 1997-2008 Standard Performance Evaluation Corporation