Hitachi Data Systems: Hitachi Unified Storage File, Model 4100, Four Node Cluster
SPECsfs2008_nfs.v3 = 607647 Ops/Sec (Overall Response Time = 0.89 msec)
Tested By | Hitachi Data Systems |
---|---|
Product Name | Hitachi Unified Storage File, Model 4100, Four Node Cluster |
Hardware Available | July 2013 |
Software Available | September 2013 |
Date Tested | August 2013 |
SFS License Number | 276 |
Licensee Locations | Santa Clara, CA, USA |
The Hitachi Unified Storage (HUS) and Hitachi NAS (HNAS) Platform family of products provides multiprotocol support to store and share block, file and object data types. The 4000 series delivers best-in-class performance, scalability, clustering with automated failover, 99.999% availability, non-disruptive upgrades, smart primary deduplication, intelligent file tiering, automated migration, 256TB file system pools, and a single namespace up to the maximum usable capacity, and is integrated with the Hitachi Command suite of management and data protection software. The HUS file module uses a hardware-accelerated "Hybrid Core" architecture that accelerates network and file protocol processing to achieve the industry's best performance in terms of both throughput and operations per second. The HUS file module uses an object-based file system (Silicon File System) and virtualization to deliver the highest scalability in the market, enabling organizations to consolidate file servers and other NAS devices into fewer nodes and storage arrays for simplified management, improved space efficiency and lower energy consumption. Each 4100 node or cluster can scale up to 16PB of usable data storage and supports 10GbE LAN access and 8Gbps FC storage connectivity.
Hitachi Unified Storage VM includes flash optimized system software and patented Hitachi Accelerated Flash to improve application performance. Utilizing external storage virtualization and automated tiering, HUS VM centralizes storage management of multiple storage tiers for the highest economic value.
Item No | Qty | Type | Vendor | Model/Name | Description |
---|---|---|---|---|---|
1 | 4 | Server | HDS | SX345384.P | Hitachi NAS 4100 Base System |
2 | 1 | Server | HDS | SX345278.P | System Management Unit (SMU) |
3 | 1 | Software | HDS | SX365131.P | Hitachi NAS SW - Software Bundle |
4 | 16 | FC Interface | HDS | FTLF8528P3BNV.P | SFP+ 8Gbps FC |
5 | 24 | Interface | HDS | FTLX8571D3BCV.P | SFP+ 10GE |
6 | 1 | Storage | HDS | HUS-VM-SOLUTION.S | HUS VM Storage Platform |
7 | 1 | Storage | HDS | HUS-VM-A0001.S | HUS VM Product |
8 | 1 | Chassis | HDS | DW700-CBX.P | HUS VM Controller Chassis |
9 | 16 | Cache | HDS | HDW-F700-16GB.P | HUS VM 16GB Cache Module |
10 | 1 | Cache | HDS | DW-F700-BM256.P | HUS VM Cache Flash Memory Module (in use only during power outage) |
11 | 64 | Drives | HDS | HDW-F700-1R6FM.P | HUS VM 1.6TB Flash Module Drive |
12 | 6 | Drive Chassis | HDS | DW-F700-DBF.P | HUS VM Drive Box (Flash) |
13 | 4 | FC Interface | HDS | HDW-F700-HF8G.P | HUS VM 4x8Gbps FC Interface Adapter |
14 | 4 | Disk Adapter | HDS | DW-F700-BS6G.P | HUS VM B/E I/O Module |
15 | 1 | Rack | HDS | A3BF-SOLUTION.P | Solution 19 in rack ROW MIN |
16 | 5 | Software | HDS | 304-232001-03.P | SVC Mo HUS VM Hitachi BOS Base Lic (20TB) - SW Sppt |
17 | 2 | Switch | Brocade | HD-5320-0008.P | Brocade 5320 FC Switch |
18 | 2 | Switch | Brocade | TI-24X-AC.P | Brocade TurboIron 24x 10GbE switch for Cluster Interconnect |
OS Name and Version | 11.2.3319.09 |
---|---|
Other Software | None |
Filesystem Software | SiliconFS 11.2.3319.09 |
Name | Value | Description |
---|---|---|
security-mode | UNIX | Security mode is native UNIX |
cifs_auth | off | Disable CIFS security authorization |
cache-bias | small-files | Set metadata cache bias to small files |
fs-accessed-time | off | Accessed time management was turned off |
shortname | off | Disable short name generation for CIFS clients |
read-ahead | 0 | Disable file read-ahead |
Server tuning notes: None
Description | Number of Disks | Usable Size |
---|---|---|
1.6TB Hitachi Accelerated Flash module drive | 64 | 89.6 TB |
250GB SATA Disks. These eight drives (two mirrored drives per node) are used for storing the core operating system and management logs. No cache or data storage. | 8 | 1000.0 GB |
Total | 72 | 90.5 TB |
Number of Filesystems | 8 |
---|---|
Total Exported Capacity | 89.6TB |
Filesystem Type | WFS-2 |
Filesystem Creation Options | 4KB filesystem block size |
Filesystem Config | Each file system was striped across 8 LUNs from a single 7D+1P RAID-5 group consisting of 8 Flash module drives.
Fileset Size | 70907.5 GB |
The storage configuration consisted of one Hitachi Unified Storage VM All Flash storage system (HUS VM) configured with a single chassis and 256GB of cache memory. There were 64 1.6TB Hitachi Accelerated Flash module drives in use to meet the capacity and performance requirements of the benchmark. There were 64 LUNs created using RAID-5, 7D+1P. There were sixteen 8Gbps FC ports in use across two FED features located in different clusters. The FC ports were connected to the 4100 nodes via a redundant pair of Brocade 5320 switches. The 4100 nodes were connected to each Brocade 5320 switch via two 8Gbps FC connections, such that a completely redundant path exists from each node to the storage. Each Hitachi Unified Storage file module node has two internal mirrored hard disk drives that are used to store the core operating software and system logs. These drives are not used for cache space or for storing data.
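As a cross-check, the usable flash capacity and the LUN fan-out reported above follow directly from this RAID layout. A minimal arithmetic sketch (all values taken from this report; nothing here is additional measured data):

```python
# Sketch: capacity and LUN arithmetic for the flash tier, using the
# figures reported above (64 drives, RAID-5 7D+1P, 1.6TB per drive).
DRIVES = 64
DRIVES_PER_GROUP = 8       # 7D+1P: 7 data + 1 parity drives
DATA_PER_GROUP = 7
DRIVE_TB = 1.6
LUNS = 64
FILESYSTEMS = 8

groups = DRIVES // DRIVES_PER_GROUP             # 8 RAID-5 groups
usable_tb = groups * DATA_PER_GROUP * DRIVE_TB  # 8 * 7 * 1.6
luns_per_fs = LUNS // FILESYSTEMS               # LUNs striped per filesystem

assert abs(usable_tb - 89.6) < 1e-9  # matches the 89.6 TB usable figure
assert luns_per_fs == 8              # matches the Filesystem Config entry
```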
Item No | Network Type | Number of Ports Used | Notes |
---|---|---|---|
1 | 10 Gigabit Ethernet | 4 | Integrated 10GbE Ethernet controller |
One 10GbE network interface from each 4100 node was connected to a Hitachi Apresia 15000-32XL-PSR switch, which provided network connectivity to the clients. The interfaces were configured to use Jumbo frames.
Each LG has an Intel XF SR 10GbE single-port PCIe network interface. Each LG connects via a single 10GbE connection to a port on the Hitachi Apresia 15000-32XL-PSR switch.
Item No | Qty | Type | Description | Processing Function |
---|---|---|---|---|
1 | 4 | FPGA | Altera Stratix IV EP4SE530 | Filesystem |
2 | 12 | FPGA | Altera Stratix IV EP4SGX360 | Network Interface, Storage Interface, NFS |
3 | 4 | CPU | Intel Xeon Quad-Core CPU | Management |
4 | 2 | CPU | Intel Xeon 8-Core CPU | HUS VM I/O Management |
5 | 2 | ASIC | Hitachi Custom ASIC | HUS VM data engine |
Each 4100 node has 4 FPGAs that are used for processing functions. The HUS VM storage system is equipped with two Intel Xeon 8-Core CPUs and Hitachi custom ASICs.
Description | Size in GB | Number of Instances | Total GB | Nonvolatile |
---|---|---|---|---|
Server Main Memory | 32 | 4 | 128 | V |
Server Filesystem and Storage Cache | 68 | 4 | 272 | V |
Server Battery-backed NVRAM | 8 | 4 | 32 | NV |
Cache Memory Module (HUS VM) | 16 | 16 | 256 | NV
Grand Total Memory Gigabytes | 688 |
Each 4100 node has 32GB of main memory that is used for the operating system and in support of the FPGA functions. A further 68GB of memory per node is dedicated to filesystem metadata, sector cache and other purposes. A separate, integrated battery-backed NVRAM module (8GB) on the filesystem board is used to provide stable storage for writes that have not yet been written to disk. The HUS VM storage system was configured with 256GB of cache memory.
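The grand total in the memory table is simply the sum of the per-instance figures; a quick check using the table's own numbers:

```python
# Sketch: verifying the 688GB grand total from the memory table above.
entries = [
    ("Server main memory",              32,  4),  # GB/instance, instances
    ("Server filesystem/storage cache", 68,  4),
    ("Server battery-backed NVRAM",      8,  4),
    ("HUS VM cache memory module",      16, 16),
]
total_gb = sum(size * count for _, size, count in entries)
assert total_gb == 688  # matches the table's grand total
```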
The Hitachi Unified Storage File node writes first to the battery-backed (72-hour) NVRAM internal to the server. The data from NVRAM is then written to the backend storage system at the earliest opportunity, but always within a few seconds of arrival in the NVRAM. In a four-node active-active cluster configuration, the contents of the NVRAM are synchronously mirrored (in a round-robin fashion) to ensure that in the event of a one- or two-node failover, any pending transactions can be completed by the remaining nodes. The data from the node is then written onto the battery-backed backend storage system cache (a second layer of NVRAM in the entire solution) and is backed up onto the Cache Flash Memory modules in the event of a power outage. The Cache Flash Memory modules in the backend storage system are part of the total solution, but are used only during a power outage and never as cache space.
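The report states only that NVRAM contents are mirrored synchronously in a round-robin fashion; the concrete assignment below (each node keeping copies on its next two ring neighbors) is an assumption for illustration, chosen because it is the simplest scheme that survives the one- or two-node failover case described above:

```python
# Sketch: ring-style NVRAM mirroring in a 4-node active-active cluster.
# The two-neighbor assignment is an illustrative assumption, not the
# documented HNAS algorithm.
from itertools import combinations

NODES = 4

def mirror_targets(node: int) -> set[int]:
    """Ring partners holding synchronous copies of this node's NVRAM."""
    return {(node + 1) % NODES, (node + 2) % NODES}

def recoverable(failed: set[int]) -> bool:
    """Pending writes survive if every failed node still has at least one
    mirror copy on a surviving node."""
    return all(mirror_targets(n) - failed for n in failed)

# Any one- or two-node failure leaves a live copy to replay from:
assert all(recoverable(set(c))
           for k in (1, 2)
           for c in combinations(range(NODES), k))
```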
The system under test consisted of four Hitachi Unified Storage File 4100 nodes connected to a HUS VM All Flash storage system via two Brocade 5320 FC switches. The nodes were configured in active-active cluster mode and were connected by a redundant pair of 10GbE connections to the cluster interconnect ports via two Brocade TurboIron 24X 10GbE switches (cluster interconnect switches). The HUS VM All Flash storage system consisted of 64 1.6TB Hitachi Accelerated Flash module drives. All connectivity from the servers to the storage was via two 8Gbps switched FC fabrics. For these tests, there were 2 zones created on each FC switch. Each 4100 server was connected to each zone via 2 integrated 8Gbps FC ports. The HUS VM storage system was connected to the 2 zones (corresponding to 16 FC ports), providing the I/O path from the servers to the storage. The System Management Unit (SMU) is part of the total system solution, but is used for management purposes only and was not active during the test.
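The port counts above imply a balanced, fully redundant fabric; a small sketch of the arithmetic (counts from this report):

```python
# Sketch: FC connectivity arithmetic for the system under test.
NODES = 4
SWITCHES = 2                   # redundant Brocade 5320 fabrics
PORTS_PER_NODE_PER_SWITCH = 2  # 8Gbps FC connections per node per switch
STORAGE_PORTS = 16             # HUS VM front-end 8Gbps FC ports

node_ports = NODES * SWITCHES * PORTS_PER_NODE_PER_SWITCH  # 16 ports
assert node_ports == STORAGE_PORTS  # server- and storage-side counts match

# Losing an entire switch still leaves every node 2 paths to storage:
assert PORTS_PER_NODE_PER_SWITCH * (SWITCHES - 1) == 2
```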
Other system notes: None
Item No | Qty | Vendor | Model/Name | Description |
---|---|---|---|---|
1 | 16 | Oracle | Sun Fire x2200 | RHEL 5 clients, two dual-core processors, 8GB RAM |
2 | 1 | Hitachi | Apresia | Hitachi Apresia 15000-32XL-PSR 10GbE Switch |
LG Type Name | LG1 |
---|---|
BOM Item # | 1 |
Processor Name | AMD Opteron |
Processor Speed | 2.6 GHz |
Number of Processors (chips) | 2 |
Number of Cores/Chip | 2 |
Memory Size | 8 GB |
Operating System | Red Hat Enterprise Linux 5, 2.6.18-8.el5 kernel
Network Type | 1 x Intel XF SR PCIe 10GbE |
Network Attached Storage Type | NFS V3 |
---|---|
Number of Load Generators | 16 |
Number of Processes per LG | 200 |
Biod Max Read Setting | 2 |
Biod Max Write Setting | 2 |
Block Size | 64 |
LG No | LG Type | Network | Target Filesystems | Notes |
---|---|---|---|---|
62..77 | LG1 | 1 | /w/d0, /w/d1, /w/d2, /w/d3, /w/d4, /w/d5, /w/d6, /w/d7 | None |
All the target filesystems from each node were accessed by all the clients.
All the filesystems from each node were mounted on all the clients. Each load-generating client hosted 200 processes, accessing all 8 target file systems (/w/d0, /w/d1, /w/d2, /w/d3, /w/d4, /w/d5, /w/d6, /w/d7).
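A minimal sketch of one way to produce such a uniform mapping (the round-robin assignment is illustrative; the report states only that every client accessed all 8 filesystems):

```python
# Sketch: spreading each client's 200 load-generating processes evenly
# over the 8 target filesystems, per the uniform access rule. The
# round-robin choice is illustrative, not the SPECsfs2008 internals.
CLIENTS = 16
PROCS_PER_CLIENT = 200
TARGETS = [f"/w/d{i}" for i in range(8)]

def per_client_assignment() -> dict[str, int]:
    """Process count per filesystem; identical for every client."""
    counts = {t: 0 for t in TARGETS}
    for p in range(PROCS_PER_CLIENT):
        counts[TARGETS[p % len(TARGETS)]] += 1
    return counts

# 200 processes over 8 filesystems -> exactly 25 per filesystem per client,
# so all 16 clients together drive 400 processes at each filesystem.
assert all(n == 25 for n in per_client_assignment().values())
assert CLIENTS * PROCS_PER_CLIENT // len(TARGETS) == 400
```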
Other test notes: None
Hitachi Unified Storage, Hitachi Unified Storage VM, Hitachi NAS Platform and Virtual Storage Platform are registered trademarks of Hitachi Data Systems, Inc. in the United States, other countries, or both. All other trademarks belong to their respective owners and should be treated as such.
Generated on Wed Oct 02 17:00:30 2013 by SPECsfs2008 HTML Formatter
Copyright © 1997-2008 Standard Performance Evaluation Corporation
First published at SPEC.org on 02-Oct-2013