SPECsfs2008_nfs.v3 Result
================================================================================
Hitachi Data Systems : Hitachi NAS Platform 3090-G2, powered by BlueArc, One
                       Node (with Hitachi NAS Performance Accelerator feature)
SPECsfs2008_nfs.v3 = 95757 Ops/Sec (Overall Response Time = 1.73 msec)
================================================================================

Performance
===========

   Throughput   Response
    (ops/sec)     (msec)
   ----------   --------
         9439        0.5
        18913        0.7
        28365        0.8
        37850        1.0
        47336        1.2
        56839        1.3
        66395        1.6
        75741        2.0
        85326        3.5
        95757        8.6

================================================================================

Product and Test Information
============================

Tested By            Hitachi Data Systems
Product Name         Hitachi NAS Platform 3090-G2, powered by BlueArc, One Node
                     (with Hitachi NAS Performance Accelerator feature)
Hardware Available   February 2011
Software Available   February 2012
Date Tested          January 2012
SFS License Number   276
Licensee Locations   Santa Clara, CA, USA

The Hitachi NAS Platform, powered by BlueArc, continues to deliver best-in-class
performance and scalability, now with a new Performance Accelerator feature. The
Hitachi NAS Performance Accelerator is an optional, license-key-based feature
that optimizes and improves the overall performance of an HNAS 3090 server by
enabling very large scale integration (VLSI) features within the server. When
combined with additional storage, the Hitachi NAS Performance Accelerator
feature can increase performance levels by up to 30%. For efficient data
management, the Hitachi NAS Platform provides Intelligent File Tiering,
Clustered Namespace, large 256TB file systems, enterprise search enhancements
and integration with Hitachi storage and management products.
The Hitachi NAS Platform uses a Hybrid Core Architecture that accelerates
processing to achieve the industry's best performance in both throughput and
operations per second. Availability and scalability are further enhanced by the
ability to grow to up to four nodes per cluster. The Hitachi NAS Platform family
delivers the highest scalability in the market, which enables organizations to
consolidate file servers and other NAS devices into fewer nodes and storage
arrays for simplified management, improved space efficiency and lower energy
consumption. The HNAS midrange model 3090-G2 can scale up to 8PB of usable data
storage and supports simultaneous 1GbE and 10GbE LAN access, and 4Gbps FC
storage connectivity.

Configuration Bill of Materials
===============================

Item
No    Qty  Type               Vendor  Model/Name          Description
----  ---  -----------------  ------  ------------------  -----------
1     1    Server             HDS     SX345321.P          Hitachi NAS 3090-G2 Base System
2     1    Server             HDS     SX345278.P          System Management Unit (SMU)
3     1    Software           HDS     SX440029.P          Hitachi NAS SW Lic - Entry NC Unix (NFS)
4     1    Software           HDS     Accelerator SW Lic  Hitachi NAS SW Lic - Hitachi Performance Accelerator
5     4    FC Interface       HDS     FTLF8524P2BNV.P     SFP 4G SWL FINISAR 1-PK
6     4    Network Interface  HDS     FTLX8511D3.P        10G 850nm XFP
7     1    Storage            HDS     VSP-A0001.S         VSP Hardware Product
8     1    Disk Controller    HDS     DKC710I-CBXA.P      Primary Controller Chassis
9     1    Disk Controller    HDS     DKC710I-CBXB.P      Second Controller Chassis
10    16   Cache              HDS     DKC-F710I-C32G.P    Cache Memory Module (32GB)
11    2    Cache              HDS     DKC-F710I-BM128.P   Cache Flash Memory Module (in use during power outage)
12    168  Disk Drives        HDS     DKC-F710I-146KCM.P  SFF 146GB Disk Drive 2.5inch
13    3    Chassis            HDS     DKC-F710I-SBX.P     SFF Drive Chassis
14    4    FC Interface       HDS     DKC-F710I-16UFC.P   Fibre 16-Port HOST Adapter (8Gbps)
15    4    Disk Adapter       HDS     DKC-F710I-SCA.P     Disk Adapter
16    4    Processor Blade    HDS     DKC-F710I-MP.P      Processor Blade
17    2    Switch Adapter     HDS     DKC-F710I-ESW.P     PCI-Express Switch Adapter
18    2    Hub                HDS     DKC-F710I-HUB.P     Hub Kit
19    2    Rack               HDS     DKC-F710I-RK42.P    Rack-42U
20    1    Cable              HDS     DKC-F710I-MDEXC.P   Inter-Controller Connecting Kit
21    1    Software           HDS     044-230001-03.P     VSP Basic Operating System 20TB Base License
22    1    Software           HDS     044-230001-04B.P    VSP Basic Operating System 4-VSD Pair Base License

Server Software
===============

OS Name and Version   10.0.3067.11
Other Software        None
Filesystem Software   SiliconFS 10.0.3067.11

Server Tuning
=============

Name               Value        Description
----               -----        -----------
security-mode      UNIX         Security mode is native UNIX
cifs_auth          off          Disable CIFS security authorization
cache-bias         small-files  Set metadata cache bias to small files
fs-accessed-time   off          Accessed-time management was turned off
shortname          off          Disable short name generation for CIFS clients
read-ahead         0            Disable file read-ahead

Server Tuning Notes
-------------------
None

Disks and Filesystems
=====================

                                                 Number
Description                                      of Disks  Usable Size
-----------                                      --------  -----------
146GB SAS 15K RPM Disks                          168       16.5 TB
160GB SATA 5400 RPM Disks. These two drives      2         160.0 GB
are used for storing the core operating system
and management logs. No cache or data storage.
Total                                            170       16.6 TB

Number of Filesystems         2
Total Exported Capacity       16870.1 GB
Filesystem Type               WFS-2
Filesystem Creation Options   4K filesystem block size; dsb-count (dynamic
                              system block) set at 768
Filesystem Config             Each filesystem was striped across 21 x 3D+1P
                              RAID-5 LUNs (84 disks)
Fileset Size                  11073.7 GB

The storage configuration consisted of one Virtual Storage Platform (VSP)
storage system configured in a dual chassis with up to 512GB of allocated cache
memory. There were 168 15K RPM SAS disks in use for these tests. There were 42
LUNs created using RAID-5, 3D+1P. There were sixteen 4Gbps FC ports in use
across 2 FED features located in different clusters. The FC ports were connected
to the 3090-G2 server via a redundant pair of Brocade 5320 switches.
The 3090-G2 server was connected to each Brocade 5320 switch via two 4Gbps FC
connections, such that a completely redundant path exists from the server to the
storage.

The Hitachi NAS Platform server has two internal mirrored hard disk drives,
which are used to store the core operating software and system logs. These
drives are not used for cache space or for storing data.

Network Configuration
=====================

Item                       Number of
No    Network Type         Ports Used  Notes
----  -------------------  ----------  -----
1     10 Gigabit Ethernet  2           Integrated 1GbE / 10GbE Ethernet
                                       controller

Network Configuration Notes
---------------------------
Two 10GbE network interfaces from the 3090-G2 server were connected to a Brocade
TurboIron 24X switch, which provided network connectivity to the clients. The
interfaces were configured to use jumbo frames (MTU size of 8000 bytes).

Benchmark Network
=================

Each LG has an Intel XF SR 10GbE single-port PCIe network interface. Each LG
connects via a single 10GbE connection to the ports on the Brocade TurboIron 24X
network switch.

Processing Elements
===================

Item
No    Qty  Type  Description                    Processing Function
----  ---  ----  -----------------------------  -------------------
1     2    FPGA  Altera Stratix III EP3SE260    Storage Interface, Filesystem
2     2    FPGA  Altera Stratix III EP3SL340    Network Interface, NFS,
                                                Filesystem
3     1    CPU   Intel E8400 3.0GHz, Dual Core  Management
4     8    VSD   Intel Xeon Quad-Core CPU       VSP unit

Processing Element Notes
------------------------
The HNAS 3090-G2 server has 2 FPGAs of each type (4 in total) used for benchmark
processing functions. The VSD is the VSP's I/O processor board. There are two
pairs of these installed per chassis. Each board includes an Intel Xeon
Quad-Core CPU.
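As a rough sketch of the client side of this network setup (the interface name
eth2, the server hostname hnas1, and the 64KB rsize/wsize mount options are
assumptions; the report does not name them), jumbo frames and the NFSv3 mounts
on a Linux client could look like this:

```shell
#!/bin/sh
# Sketch only. The MTU of 8000 bytes and NFSv3 come from the report;
# the interface name (eth2), server hostname (hnas1) and rsize/wsize
# values are hypothetical.

# Print the command that enables jumbo frames on a client interface.
jumbo_cmd() {
  echo "ifconfig $1 mtu 8000 up"
}

# Print the NFSv3 mount command for one exported filesystem.
mount_cmd() {
  echo "mount -t nfs -o vers=3,proto=tcp,rsize=65536,wsize=65536 hnas1:$1 $1"
}

jumbo_cmd eth2
mount_cmd /w/d0
mount_cmd /w/d1
```

Running the printed commands (as root) would configure one client; the actual
load generators may have used different interface names and option values.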
Memory
======

                                      Size in  Number of
Description                           GB       Instances  Total GB  Nonvolatile
-----------                           -------  ---------  --------  -----------
Server Main Memory                    12       1          12        V
Server Filesystem and Storage Cache   14       1          14        V
Server Battery-backed NVRAM           2        1          2         NV
Cache Memory Module (VSP)             32       16         512       NV
Grand Total Memory Gigabytes                              540

Memory Notes
------------
The 3090-G2 node has 12GB of main memory that is used for the operating system
and in support of the FPGA functions. 14GB of memory is dedicated to filesystem
metadata and sector caches. A separate, integrated battery-backed NVRAM module
on the filesystem board provides stable storage for writes that have not yet
been written to disk. The VSP storage system was configured with 512GB of
memory.

Stable Storage
==============

The Hitachi NAS Platform server writes first to the battery-backed (72-hour)
NVRAM internal to the server. Data from NVRAM is then written to the storage
system at the earliest opportunity, but always within a few seconds of arrival
in the NVRAM. In an active-active cluster configuration, the contents of the
NVRAM are synchronously mirrored to ensure that in the event of a single node
failover, any pending transactions can be completed by the remaining node. Data
from the HNAS is first written to the battery-backed VSP cache and is backed up
to the Cache Flash Memory modules in the event of a power outage. The Cache
Flash Memory modules in the VSP are part of the total solution, but are used
only during a power outage and not as cache space.

System Under Test Configuration Notes
=====================================

The system under test consisted of a Hitachi NAS Platform 3090-G2 server
connected to a VSP storage system via two Brocade 5320 FC switches. The VSP
storage system consisted of 168 15K RPM SAS drives. All the connectivity from
the server to the storage was via a 4Gbps switched FC fabric. For these tests,
there were 2 zones created on each FC switch.
The Hitachi NAS 3090-G2 server was connected to each zone via 2 integrated
4Gbps FC ports (corresponding to 2 H-ports). The VSP storage system was
connected to the 2 zones (corresponding to 16 FC ports), providing the I/O path
from the server to the storage. The System Management Unit (SMU) is part of the
total system solution, but is used for management purposes only and was not
active during the test.

Other System Notes
==================
None

Test Environment Bill of Materials
==================================

Item
No    Qty  Vendor   Model/Name     Description
----  ---  -------  -------------  -----------
1     16   Oracle   Sun Fire       RHEL 5 clients, two dual-core processors,
                    x2200          8GB RAM
2     1    Brocade  TurboIron      Brocade TurboIron 24X, 24-port 10GbE switch

Load Generators
===============

LG Type Name                  LG1
BOM Item #                    1
Processor Name                AMD Opteron
Processor Speed               2.6 GHz
Number of Processors (chips)  2
Number of Cores/Chip          2
Memory Size                   8 GB
Operating System              Red Hat Enterprise Linux 5, 2.6.18-8.el5 kernel
Network Type                  1 x Intel XF SR PCIe 10GbE

Load Generator (LG) Configuration
=================================

Benchmark Parameters
--------------------

Network Attached Storage Type  NFS V3
Number of Load Generators      16
Number of Processes per LG     64
Biod Max Read Setting          2
Biod Max Write Setting         2
Block Size                     64

Testbed Configuration
---------------------

LG No   LG Type  Network  Target Filesystems  Notes
------  -------  -------  ------------------  -----
62..77  LG1      1        /w/d0, /w/d1        None

Load Generator Configuration Notes
----------------------------------
All the file systems were mounted on all the clients, which were connected to
the same physical and logical network.

Uniform Access Rule Compliance
==============================

Each load-generating client hosted 64 processes.
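A minimal sketch of how 64 processes per client can be divided uniformly over
the two target filesystems (/w/d0, /w/d1): the loop below is illustrative only;
the benchmark harness performs the real mapping.

```shell
#!/bin/sh
# Illustrative round-robin assignment: 64 processes per load generator
# cycled in sequence over the two target filesystems, so each
# filesystem ends up serving 32 processes.
NPROCS=64
i=0
while [ "$i" -lt "$NPROCS" ]; do
  echo "process $i -> /w/d$(( i % 2 ))"
  i=$(( i + 1 ))
done
```

With more filesystems or network paths, the same modulo cycling keeps the load
uniform, which is what the uniform access rule requires.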
The assignment of processes to file systems and network interfaces was done in
such a way that they were uniformly divided across all the file systems and
network paths. Each load generator mounted each filesystem target
(/w/d0, /w/d1) and cycled through all the file systems in sequence.

Other Notes
===========
None

Hitachi NAS Platform, powered by BlueArc, and Virtual Storage Platform are
registered trademarks of Hitachi Data Systems, Inc. in the United States, other
countries, or both. All other trademarks belong to their respective owners and
should be treated as such.

================================================================================
Generated on Wed Feb 22 15:32:03 2012 by SPECsfs2008 ASCII Formatter
Copyright (C) 1997-2008 Standard Performance Evaluation Corporation