SPEC SFS®2014_swbuild Result
Copyright © 2014-2016 Standard Performance Evaluation Corporation
|Oracle Corporation||SPEC SFS2014_swbuild = 240 Builds|
|Oracle ZFS Storage ZS3-2||Overall Response Time = 1.71 msec|
|Oracle ZFS Storage ZS3-2|
|Tested by||Oracle Corporation|
|Hardware Available||August 2016|
|Software Available||August 2016|
|Date Tested||August 2016|
|License Number||6|
|Licensee Locations||Broomfield, CO USA|
The Oracle ZFS Storage ZS3-2 is a mid-range, high-performance storage system that offers enterprise-class NAS and SAN capabilities with industry-leading Oracle Database integration, in a cost-effective high-availability configuration. It offers simplified setup and management combined with industry-leading storage analytics and a performance-optimized platform that uses specialized read and write flash caching devices. The Oracle ZFS Storage ZS3-2 can scale to 512 GB of memory, 32 CPU cores, and 1.5 PB of capacity, with up to 12.8 TB of flash cache in a high-availability configuration. Oracle ZFS Storage Appliances deliver additional economic value through bundled data services such as file- and block-level protocols including connectivity over InfiniBand, compression, deduplication, thin provisioning, DTrace Analytics, virus scan, snapshots, triple mirror, triple-parity RAID, phone-home, NDMP, clustering, etc.
|1||2||Storage Controller||Oracle||Oracle ZFS Storage ZS3-2:controller||Oracle ZFS Storage ZS3-2: controllers part #7103829 includes 1 - SAS2 PCIE 16 port HBA|
|2||32||Controller Memory||Oracle||Memory DIMM||16 GB DDR3-1600 registered DIMM (for factory installation) part #7102984|
|3||6||10 Gigabit Ethernet Adapter||Oracle||Sun PCI-E Dual 10GbE Fiber||Sun Dual 10GbE SFP+ PCIe 2.0 Low Profile adapter incorporating Intel 82599 10 Gigabit Ethernet controller and supporting pluggable SFP+ Transceivers. ROHS-5. ATO option (2 installed in Client Oracle X5-2) part #7051223|
|4||12||Short Wave Pluggable Transceiver||Oracle||10Gbps Short Wave Pluggable Transceiver (SFP+)||Dual rate transceiver: SFP+ SR. Support 1 Gb/sec and 10 Gb/sec dual rate (for factory installation both ZS3-2 storage server and X5-2 client) part #2129A|
|5||6||Storage Drive Enclosure||Oracle||Oracle Storage Drive Enclosure DE2-24P||Oracle Storage Drive Enclosure DE2-24P: base chassis (for factory installation) part #7103910 Note: 4 of the DE2-24P enclosures are populated with 24 disk drives and 2 of the DE2-24P enclosures have 20 disk drives and 4 log devices|
|6||136||Disk Drives||Oracle||Disk Drives 300GB 10K RPM 2.5 inch SAS-2 HDD||300 GB 10000 rpm 2.5 inch SAS-2 HDD (for factory installation) part #7103911|
|7||8||SSD Drives||Oracle||SAS-2 73GB 2.5-inch SSD Write Flash Accelerator||2.5 inch SAS-2 SSD write flash accelerator with evo bracket (for factory installation) part #7048983|
|8||2||HBA||Oracle||SAS-2 PCIE 6Gbs 16-port HBA||SAS-2 back end HBA part #7103790 (for factory installation)|
|9||12||Cables||Oracle||SAS-2 Cables||SAS-2 back end cables part #7104928|
|10||1||Switch||Arista||Arista 7124SX 10Gb Switch||10Gb Ethernet Optical Switch *Note: the switch is currently not available for order from the manufacturer but is available from other vendors. Factory support for the Arista switch will continue through 2017.|
|11||1||Client||Oracle||Oracle X5-2||with factory installed memory of 128GB|
|Item No||Component||Type||Name and Version||Description|
|1||Oracle ZFS Storage||Storage Controllers||8.6||Oracle ZFS Storage OS for storage controllers|
|2||Oracle Solaris OS||Client Node||11.3||Oracle Operating System on client node Solaris 11.3|
|Oracle ZS3-2 storage controllers||Parameter Name||Value||Description|
|MTU||9000||Jumbo Frames setup|
|Oracle X5-2 Client||Parameter Name||Value||Description|
|MTU||9000||Jumbo Frames setup|
The Oracle ZS3-2 storage controllers' 10Gb Ethernet ports are set to an MTU of 9000 (jumbo frames). The Oracle X5-2 client's 10Gb Ethernet ports are likewise set to an MTU of 9000.
|Oracle X5-2 Client||Parameter Name||Value||Description|
|vers||3||NFS mount option to set NFS mount version 3|
|rsize,wsize||16384||NFS mount option for data block size|
|forcedirectio||forcedirectio||NFS mount option for directio to storage server|
|max_buf||16777216||TCP max send receive buffer size|
|send_buf||4194304||TCP send buffer size|
|recv_buf||4194304||TCP receive buffer size|
|Oracle ZS3-2 Controllers||Parameter Name||Value||Description|
|Database record size||16KB||Record size for each filesystem of both Oracle ZS3-2 Controllers|
|Maximum # of server threads||1000||Sets up maximum number of NFS server threads used by Oracle ZS3-2 Controllers|
Communications between the Oracle X5-2 client and the Oracle ZS3-2 controllers over 10Gb Ethernet are tuned to maximize the amount of data transferred with minimum overhead. This includes mounting the Oracle ZS3-2 filesystems on the Oracle X5-2 client with forcedirectio and read and write sizes of 16384, along with increasing the Oracle X5-2 client's send and receive buffer sizes to 4194304.
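As an illustration only (not part of the disclosure), the client-side parameters from the tables above could be assembled into a Solaris NFSv3 mount command roughly as follows; the server, share, and mount-point names are hypothetical:

```python
# Hypothetical helper that assembles a Solaris NFSv3 mount command line
# from the client-side mount options listed above. Option names and values
# restate the tables; host/share/mount-point names are made up.
def nfs_mount_cmd(server, share, mountpoint):
    opts = ",".join([
        "vers=3",            # NFS protocol version 3
        "rsize=16384",       # read transfer size in bytes
        "wsize=16384",       # write transfer size in bytes
        "forcedirectio",     # bypass the client page cache
    ])
    return f"mount -F nfs -o {opts} {server}:{share} {mountpoint}"

cmd = nfs_mount_cmd("zs3-2a-p1", "/export/fs001", "/mnt/fs001")
print(cmd)
```

The TCP buffer sizes (max_buf, send_buf, recv_buf) are system-wide Solaris TCP tunables rather than mount options, so they are set separately on the client.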
|Description||Data Protection||Stable Storage||Qty||Usable GiB|
|300GB SAS 10K RPM Disk Drives||RAID-10||Yes||136||18.5 TiB|
|73GB SAS-2 SSD Write Flash Accelerator Used for the ZFS Intent Log (ZIL).||None||Yes||8||584.0 GiB|
|500GB SATA 7.2K RPM Disk Drives Oracle ZS3-2 Controllers OS disk drives||Mirrored||No||4||838.0 GiB|
|500GB SATA 7.2K RPM Disk Drives Oracle X5-2 Client OS||Mirrored||No||2||899.0 GiB|
|Total||20.8 TiB|
|Number of Filesystems||240||Total Capacity||17.36 TiB||Filesystem Type||ZFS|
Both controllers are set up with 8 storage pools in total (4 storage pools per Oracle ZS3-2 controller). Each storage pool is configured with 16 disk drives, 1 write flash accelerator (1 log device), and 1 spare disk drive. When configuring a storage pool via the administrative HTML interface of each ZS3-2 storage controller, you are first asked to select the number of disk drives and log devices to use per tray. Select 6 drives on 2 of the trays, and 5 drives plus 1 log device on the tray that contains the log devices, for a total of 17 disk drives and 1 log device. The system configures the spare into the storage pool after you select the data profile. The storage pools are then set up to mirror the data (RAID-10) across all 16 drives (when configuring storage pools on the Oracle ZS3-2 controllers, this is the Mirrored data profile). The write flash accelerator in each storage pool is used for the ZFS Intent Log (ZIL). Repeat these steps until 4 storage pools per ZS3-2 storage controller have been configured. After the storage pools are created, each is configured with 30 ZFS filesystems. Since each controller has 4 storage pools and each storage pool contains 30 ZFS filesystems, each controller has 120 ZFS filesystems, and both controllers together have 240 ZFS filesystems in total. Each Oracle ZS3-2 controller also has 2 internal mirrored system disk drives, used only for the controller's core operating system; these drives are not used for data cache or for storing user data.
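As a quick sanity check, the drive and filesystem counts above can be tallied; the constants below simply restate the disclosure's numbers:

```python
# Tally the storage layout described above. Constants restate the
# disclosure's figures; nothing here queries real hardware.
POOLS_PER_CONTROLLER = 4
CONTROLLERS = 2
DATA_DRIVES_PER_POOL = 16      # mirrored across all 16 (RAID-10)
SPARES_PER_POOL = 1
LOG_DEVICES_PER_POOL = 1       # write flash accelerator used for the ZIL
FILESYSTEMS_PER_POOL = 30

total_pools = POOLS_PER_CONTROLLER * CONTROLLERS
total_disks = total_pools * (DATA_DRIVES_PER_POOL + SPARES_PER_POOL)
total_log_devices = total_pools * LOG_DEVICES_PER_POOL
total_filesystems = total_pools * FILESYSTEMS_PER_POOL

print(total_pools, total_disks, total_log_devices, total_filesystems)
# 8 pools, 136 disk drives, 8 log devices, 240 filesystems
```

These totals match the component list: 136 HDDs and 8 SSD write flash accelerators.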
All filesystems on both Oracle ZS3-2 controllers are created with a Database Record Size setting of 16KB, which is a common practice for storage solutions using the Oracle ZS3-2 storage controllers.
|Item No||Transport Type||Number of Ports Used||Notes|
|1||10 Gigabit Ethernet||12||Each Oracle ZS3-2 controller has 2 Dual port 10 Gigabit Ethernet cards. Oracle X5-2 client also has 2 Dual port 10 Gigabit Ethernet cards.|
Each Oracle ZS3-2 controller uses 2 active 10Gb ports and 2 standby ports; in total the Oracle ZS3-2 controllers use 4 active ports and 4 standby ports. All ports, active and standby, are set to an MTU of 9000.
The single Oracle X5-2 client uses both ports of its 2 dual-port 10Gb Ethernet cards. The 4 ports used are set to an MTU of 9000.
|Item No||Switch Name||Switch Type||Total Port Count||Used Port Count||Notes|
|1||Arista 7124 10Gb Switch||10Gb Ethernet||24||12||All ports set up to do Jumbo Frames on the Arista 7124SX 10Gb Switch|
|Item No||Qty||Type||Location||Description||Processing Function|
|1||4||CPU||Oracle ZS3-2 Storage Server||2.1GHz Intel Xeon E5-2658||NFS, ZFS, TCP/IP, RAID/Storage Drivers|
|2||2||CPU||Oracle X5-2 Client||2.6GHz Intel Xeon E5-2660||NFS Client Solaris OS|
Each Oracle ZFS Storage ZS3-2 controller contains 2 physical processors, each
with 8 processing cores.
The Oracle X5-2 client contains 2 physical processors, each with 10 processing cores. SMT on the client's processors is left at the default settings of Solaris 11.3.
|Description||Size in GiB||Number of Instances||Nonvolatile||Total GiB|
|Memory for Oracle ZFS ZS3-2 Storage Controllers||256||2||V||512|
|Memory for Oracle X5-2 Client||128||1||V||128|
|Grand Total Memory Gibibytes||640|
The Oracle ZFS Storage ZS3-2 controllers' main memory is used for the Adaptive Replacement Cache (ARC), the data cache, and operating system memory. The Oracle X5-2 client's memory is not used for storage or caching by the Oracle ZFS ZS3-2 storage controllers; it is used only by the client.
The Stable Storage requirement is guaranteed by the ZFS Intent Log (ZIL) which logs writes and other filesystem changing transactions to either a write flash accelerator or a disk drive. Writes and other filesystem changing transactions are not acknowledged until the data is written to stable storage. Since this is an active-active cluster high availability system, in the event of a controller failing or power loss, the other active controller can take over for the failed controller. Since the write flash accelerators or disk drives are located in the disk shelves and can be accessed via the 4 backend SAS channels from both controllers, the remaining active controller can complete any outstanding transactions using the ZIL. In the event of power loss to both controllers, the ZIL is used after power is restored to reinstate any writes and other filesystem changes.
The system under test consists of two Oracle ZFS Storage ZS3-2 storage controllers set up in an active-active cluster configuration with failover capabilities.
Please reference the configuration diagram. A single client is used as the benchmark load generator. The Oracle X5-2 client mounts all of the Oracle ZS3-2 storage controllers' filesystems via NFSv3. The filesystems are numbered 1-240: ZS3-2a has filesystems 1-120 and ZS3-2b has filesystems 121-240, for a total of 240 filesystems across the 2 ZS3-2 controllers. The active storage controller ports 1, 2, 3, and 4 are assigned separate subnets. Filesystem 1 of ZS3-2a is mounted using 10Gb Ethernet port 1, then filesystem 121 of ZS3-2b using port 3, then filesystem 31 of ZS3-2a using port 2, then filesystem 151 of ZS3-2b using port 4. The next set is filesystem 61 of ZS3-2a on port 1, filesystem 181 of ZS3-2b on port 3, filesystem 91 of ZS3-2a on port 2, and filesystem 211 of ZS3-2b on port 4; then filesystems 2, 122, 32, and 152, and so on, until all 240 filesystems are mounted. In effect, this round-robin mounts the filesystems to spread the load across the storage pools and the Oracle ZS3-2 controllers.
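The round-robin mounting order described above can be sketched as a small generator; this is a hypothetical illustration, with the per-pool starting filesystems and port assignments restating the paragraph:

```python
# Sketch of the round-robin mount ordering described above (illustrative).
# Each entry in BASES is the first filesystem of one storage pool; PORTS
# gives the 10GbE port used for that pool's mounts. ZS3-2a holds
# filesystems 1-120 (ports 1 and 2), ZS3-2b holds 121-240 (ports 3 and 4).
BASES = [1, 121, 31, 151, 61, 181, 91, 211]
PORTS = [1, 3, 2, 4, 1, 3, 2, 4]

def mount_order():
    order = []
    for offset in range(30):                 # 30 filesystems per pool
        for base, port in zip(BASES, PORTS):
            fs = base + offset
            controller = "ZS3-2a" if fs <= 120 else "ZS3-2b"
            order.append((fs, controller, port))
    return order

order = mount_order()
# First pass: fs 1 (a, port 1), fs 121 (b, port 3), fs 31 (a, port 2), ...
```

Every pass through the inner loop touches all 8 storage pools and all 4 active ports, which is what spreads the load evenly across pools and controllers.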
Oracle is a registered trademark of Oracle Corporation. Intel and Xeon are registered trademarks of Intel Corporation in the U.S. and/or other countries. Arista is a registered trademark of Arista Networks.
Generated on Fri Sep 9 15:36:48 2016 by SpecReport