SPECsfs2008_cifs Result

EMC Corporation : Celerra VG8 Server Failover Cluster, 2 Data Movers (1 stdby) / Symmetrix VMAX
SPECsfs2008_cifs = 142979 Ops/Sec (Overall Response Time = 1.92 msec)


Performance

Throughput (ops/sec)   Response (msec)
       14347                 0.9
       28727                 1.1
       43118                 1.2
       57508                 1.6
       71991                 1.8
       86441                 2.0
      100782                 2.6
      115884                 2.9
      130104                 3.3
      142979                 3.9

Performance Graph
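
The reported Overall Response Time can be reproduced from the ten load points above. The sketch below assumes ORT is the area under the response-time-vs-throughput curve (trapezoidal rule, with the curve taken from the origin) divided by the peak throughput; under that assumption it matches the published 1.92 msec.

    # Reproduce the Overall Response Time (ORT) from the load points.
    points = [(14347, 0.9), (28727, 1.1), (43118, 1.2), (57508, 1.6),
              (71991, 1.8), (86441, 2.0), (100782, 2.6), (115884, 2.9),
              (130104, 3.3), (142979, 3.9)]

    curve = [(0, 0.0)] + points                # extend the curve from the origin
    area = sum((x2 - x1) * (y1 + y2) / 2       # trapezoid between adjacent points
               for (x1, y1), (x2, y2) in zip(curve, curve[1:]))
    ort = area / points[-1][0]                 # normalize by peak throughput
    print(f"ORT = {ort:.2f} msec")             # -> ORT = 1.92 msec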


Product and Test Information

Tested By EMC Corporation
Product Name Celerra VG8 Server Failover Cluster, 2 Data Movers (1 stdby) / Symmetrix VMAX
Hardware Available August 2010
Software Available August 2010
Date Tested October 2010
SFS License Number 47
Licensee Locations Hopkinton, MA

The Celerra VG8 Gateway Server system is a consolidation of file-based servers running NAS (NFS, CIFS) applications, configured as a 1+1 high-availability cluster. The servers deliver network services over high-speed Gigabit Ethernet. The cluster tested here consists of one active Data Mover, which provides 8 Jumbo-frame-capable Gigabit Ethernet interfaces, and one stand-by Data Mover for high availability. The Celerra VG8 is a gateway to a shared Symmetrix VMAX storage array. The Symmetrix VMAX is a storage architecture built from 4 interconnected VMAX engines, attached to the fabric through multiple active 4 Gbit/s FC connections.

Configuration Bill of Materials

Item No  Qty  Type                              Vendor   Model/Name   Description
1        1    Enclosure                         EMC      VG8-DME0     Celerra VG8 empty Data Mover add-on enclosure
2        1    Enclosure                         EMC      VG8-DME1     Celerra VG8 empty Data Mover add-on enclosure
3        2    Data Mover                        EMC      VG8-DM-8A    Celerra VG8 Data Mover, 8 GbE ports, 4 FC ports
4        1    Control Station                   EMC      VG8-CSB      Celerra VG8 control station (administration only)
5        2    Software                          EMC      VG8-CIFS-L   Celerra VG8 CIFS license
6        1    Intelligent Storage Array Engine  EMC      SB-64-BASE   Symmetrix VMAX base engine, 64 GB cache
7        3    Intelligent Storage Array Engine  EMC      SB-ADD64NDE  Symmetrix VMAX add-on engine, 64 GB cache
8        16   FE IO Module                      EMC      SB-FE80000   Symmetrix VMAX front-end IO module with multimode SFPs
9        32   Drive Enclosure                   EMC      SB-DE15-DIR  Symmetrix VMAX direct-connect storage bay, drive enclosure
10       312  FC Disk                           SEAGATE  ST330655FCV  Symmetrix VMAX Cheetah 300 GB 15K.5 4 Gbit/s FC disks
11       4    Standby Power Supply              EMC      SB-DB-SPS    VMAX standby power supply
12       1    FC Switch                         EMC      DS-5100B     40-port Fibre Channel switch

Server Software

OS Name and Version DART 6.0.36.4
Other Software EMC Celerra Control Station Linux 2.6.18-128.1.1.6005.EMC
Filesystem Software Celerra UxFS File System

Server Tuning

Name             Value   Description
netd cifs start  600     600 threads to service CIFS requests.
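
For context, a derived figure (arithmetic only, from the peak result and the tuning value above):

    # Rough average peak load carried by each of the 600 CIFS threads.
    peak_ops = 142979
    cifs_threads = 600
    print(f"{peak_ops / cifs_threads:.0f} ops/sec per thread")   # -> ~238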

Server Tuning Notes


Disks and Filesystems

Description                                                          Number of Disks  Usable Size
Data disks: 152 2-disk RAID1 pairs (2 LUs per pair), exported as     304              44.5 TB
304 logical volumes; all data file systems reside on these disks.
System disks: 4 2-disk RAID1 pairs (2 LUs per pair), exported as     8                584.0 GB
8 logical volumes; reserved for Celerra system use.
Total                                                                312              45.1 TB
Number of Filesystems 4
Total Exported Capacity 44384 GB
Filesystem Type UxFS
Filesystem Creation Options Default
Filesystem Config Each file system was striped (32 KB element size) across 38 2-disk RAID1 pairs; fs1, fs2, fs3, and fs4 together consumed a total of 304 logical volumes
Fileset Size 16742.5 GB

The drives were configured as 152 2-disk RAID1 pairs, with 38 pairs assigned to each Symmetrix VMAX engine. Each RAID1 pair had 2 LUs created on it, of which only 1 LU was used per pair. 4 stripes were created, one per 38-pair group, with a 32 KB stripe element, and 4 file systems were created, 1 per stripe, so that each Symmetrix VMAX engine provided access to the drives of exactly 1 file system. All 4 file systems were mounted and shared by the Celerra Data Mover, and each client mapped all 4 file systems through each network interface of the Celerra.
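
A quick arithmetic check of the layout described above. The 300 GB figure is the nominal per-drive capacity from the bill of materials; treating 1 TB as 1024 GB is an assumption about the report's unit convention.

    # Sanity-check the data-disk layout: pairs, LUs, stripe groups, capacity.
    data_disks = 304
    pairs = data_disks // 2              # 152 RAID1 mirrored pairs
    logical_volumes = pairs * 2          # 2 LUs per pair -> 304 logical volumes
    pairs_per_engine = pairs // 4        # 38 pairs per VMAX engine / stripe group
    usable_gb = pairs * 300              # RAID1 usable capacity = one drive per pair
    print(pairs, logical_volumes, pairs_per_engine)   # -> 152 304 38
    print(f"{usable_gb / 1024:.1f} TB")               # -> 44.5 TB, matching the table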

Network Configuration

Item No  Network Type            Number of Ports Used  Notes
1        Jumbo Gigabit Ethernet  8                     These are the Gigabit Ethernet interfaces used by the active Data Mover.

Network Configuration Notes

All Gigabit network interfaces were connected to a Cisco 6509 switch.

Benchmark Network

An MTU size of 9000 was set for all connections to the switch. Each Data Mover was connected to the network via 8 ports. Each LG1-class load-generator machine was connected with one port.

Processing Elements

Item No  Qty  Type  Description                                                    Processing Function
1        1    CPU   Single-socket Intel six-core Westmere (Xeon X5660) 2.8 GHz,    CIFS protocol, UxFS filesystem
                    6.4 GT/s QPI, per Data Mover server; 1 chip active for the
                    workload (the stand-by Data Mover's chip is not counted in
                    the quantity).

Processing Element Notes

Each Data Mover has one physical processor.

Memory

Description                                                 Size in GB  Number of Instances  Total GB  Nonvolatile
Each Data Mover's main memory (the 24 GB in the stand-by    24          1                    24        V
Data Mover is not counted in the quantity)
Symmetrix VMAX storage array battery-backed global memory   64          4                    256       NV
(64 GB per Symmetrix VMAX engine)
Grand Total Memory Gigabytes                                                                 280

Memory Notes

The Symmetrix VMAX was configured with a total of 256 GB of memory, backed by sufficient battery power to safely destage all cached data to disk in the event of a power failure.

Stable Storage

4 CIFS file systems were used. Each RAID1 pair had 2 LUs bound on it, and each file system was striped over half of the logical volumes in its 38-pair group (one LU per pair). The storage array had 8 Fibre Channel connections, 4 per Data Mover.

System Under Test Configuration Notes

The system under test consisted of one active Celerra VG8 Gateway Data Mover attached to a Symmetrix VMAX storage array over 8 FC links. The Data Movers were running DART 6.0.36.4, and the active Data Mover's 8 Gigabit Ethernet ports were connected to the network.

Other System Notes

Failover is supported by an additional Celerra Data Mover that operates in stand-by mode. In the event of a Data Mover failure, the stand-by unit takes over the function of the failed unit. The stand-by Data Mover does not contribute to the performance of the system and is not included in the components listed above.

Test Environment Bill of Materials

Item No Qty Vendor Model/Name Description
1 18 Dell PowerEdge 1850 Dell server with 1 GB RAM and the Linux 2.6.9-42.ELsmp operating system

Load Generators

LG Type Name LG1
BOM Item # 1
Processor Name Intel(R) Xeon(TM) CPU 3.60GHz
Processor Speed 3.6 GHz
Number of Processors (chips) 2
Number of Cores/Chip 2
Memory Size 1 GB
Operating System Linux 2.6.9-42.ELsmp
Network Type 1 x Broadcom BCM5704 NetXtreme Gigabit Ethernet

Load Generator (LG) Configuration

Benchmark Parameters

Network Attached Storage Type CIFS
Number of Load Generators 18
Number of Processes per LG 32
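
For scale, the peak load implies the following per-client and per-process rates (derived arithmetic only; all inputs are from the tables above):

    # Distribute the peak throughput across the load-generator processes.
    peak_ops = 142979
    load_generators = 18
    procs_per_lg = 32
    total_procs = load_generators * procs_per_lg                      # 576 processes
    print(f"{peak_ops / load_generators:.0f} ops/sec per client")     # -> ~7943
    print(f"{peak_ops / total_procs:.0f} ops/sec per process")        # -> ~248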

Testbed Configuration

LG No LG Type Network Target Filesystems Notes
1..18 LG1 1 /fs1,/fs2,/fs3,/fs4 N/A

Load Generator Configuration Notes

All filesystems were mapped on all clients, which were connected to the same physical and logical network.

Uniform Access Rule Compliance

Each client has the same file systems mapped from the active Data Mover.
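
A miniature illustration of the uniform-access idea follows. The round-robin assignment is hypothetical (including the interface names): the report states only that every client maps all 4 file systems through every Data Mover interface, not how the benchmark actually places processes.

    # Hypothetical round-robin spread of each client's processes over the
    # 4 file systems and 8 Data Mover ports (interface names are invented).
    filesystems = ["/fs1", "/fs2", "/fs3", "/fs4"]
    interfaces = [f"ge{i}" for i in range(8)]

    def target(client: int, proc: int) -> tuple[str, str]:
        """Pick a file system and port so access stays uniform."""
        return (filesystems[(client + proc) % len(filesystems)],
                interfaces[(client + proc) % len(interfaces)])

    # Client 0's first four processes hit four different file systems:
    print([target(0, p) for p in range(4)])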

Other Notes


Config Diagrams


Generated on Wed Nov 17 13:39:22 2010 by SPECsfs2008 HTML Formatter
Copyright © 1997-2008 Standard Performance Evaluation Corporation

First published at SPEC.org on 17-Nov-2010