SPEC SFS®2014_database Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

Oracle SPEC SFS2014_database = 2240 Databases
Oracle ZFS Storage ZS7-2 Overall Response Time = 0.78 msec


Performance

Business Metric (Databases) | Average Latency (msec) | Databases Ops/Sec | Databases MB/Sec
 224 | 0.264 |  43016 |  645
 448 | 0.317 |  86032 | 1292
 672 | 0.356 | 129049 | 1937
 896 | 0.395 | 172065 | 2583
1120 | 0.478 | 215081 | 3229
1344 | 0.531 | 258098 | 3875
1568 | 0.679 | 301115 | 4521
1792 | 0.930 | 344131 | 5166
2016 | 1.231 | 387148 | 5811
2240 | 3.859 | 430167 | 6459
Performance Graph
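The load points scale linearly: a quick cross-check (a sketch using the table values above) shows that each database load unit contributes a roughly constant ops/sec rate.

```shell
# Sanity-check of the performance table: ops/sec divided by the number of
# databases should be near-constant across all ten load points.
awk 'BEGIN {
  split("224 448 672 896 1120 1344 1568 1792 2016 2240", db, " ")
  split("43016 86032 129049 172065 215081 258098 301115 344131 387148 430167", ops, " ")
  for (i = 1; i <= 10; i++)
    printf "%4d databases -> %.1f ops/sec per database\n", db[i], ops[i] / db[i]
}'
# each line prints approximately 192.0 ops/sec per database
```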


Product and Test Information

Oracle ZFS Storage ZS7-2
Tested by: Oracle
Hardware Available: November 13, 2018
Software Available: November 13, 2018
Date Tested: October 2018
License Number: 00073
Licensee Locations: Redwood Shores, CA, USA

The Oracle ZFS Storage ZS7-2 is a high-end, high-performance all-flash storage system that offers enterprise-class NAS and SAN capabilities with industry-leading Oracle Database integration, in a cost-effective high-availability configuration. The Oracle ZFS Storage ZS7-2 provides simplified setup, management, and industry-leading storage analytics. The performance-optimized platform uses specialized read and write flash caching devices in the hybrid storage configuration for high-performance throughput and low latency. The high-end Oracle ZFS Storage ZS7-2 can scale to 1.5TB of memory and 48 CPU cores per controller, and to 3.6 PB of all-flash storage. Oracle ZFS Storage Appliances deliver economic value with bundled data services for file and block-level protocols, with connectivity over 40GbE, 10GbE, InfiniBand, and 32Gb FC. Data may be managed using compression, deduplication, encryption, thin provisioning, real-time analytics, virus scan, snapshots, ZFS RAID data protection, remote replication, NDMP, and high-availability clustering.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 2 | Storage Controller | Oracle | Oracle ZFS Storage ZS7-2 | Oracle ZFS Storage ZS7-2, 2 x 2.10GHz Intel Xeon Platinum 8160 CPU. 1.5TB DDR4-2666 LRDIMM. 2 x 10TB SAS3 HGST boot drives. Support for SAS3, IB, 10GbE.
2 | 48 | Memory | Oracle | Oracle ZFS Storage ZS7-2 | Oracle ZFS Storage ZS7-2, 48 x 64GB DDR4-2666 LRDIMM. Memory is order configurable; a total of 1.5TB was installed in each storage controller.
3 | 6 | Storage Drive Enclosure | Oracle | Oracle Storage Drive Enclosure DE3-24P | 24 drive slot enclosure, SAS3 connected, 24 x 3TB HGST SSD. Dual PSU.
4 | 6 | Storage Drive Enclosure | Oracle | Oracle Storage Drive Enclosure DE3-24P | 24 drive slot enclosure, SAS3 connected, 20 x 3TB HGST SSD and 4 x 200GB HGST SSD. Dual PSU.
5 | 264 | SAS3 SSD | Oracle | 7118008 | 3TB HGST SSD. Drive selection is order configurable; a total of 264 x 3TB HGST SSD drives were installed across all Oracle Storage Drive Enclosure DE3-24P.
6 | 24 | SAS3 SSD | Oracle | 7115942 | 200GB HGST SSD. Drive selection is order configurable; a total of 24 x 200GB HGST SSD drives were installed across all Oracle Storage Drive Enclosure DE3-24P. These drives are used as write flash accelerators.
7 | 8 | Client | Oracle | Oracle X6-2 | Oracle X6-2 Client Node, 2 x 2.20GHz Intel Xeon CPU E5-2699 v4. 512GB RAM. 2 x 10GbE. Used for benchmark load generation.
8 | 8 | OS Drive | Oracle | 7093013 | 600GB HGST hard drive. 8 x 600GB HGST hard drives, one installed in each Oracle X6-2 Client Node as the OS boot drive.
9 | 1 | Switch | Oracle | Oracle Switch ES2-64 | Oracle Switch ES2-64, high-performance, low-latency 10/40 Gb/sec Ethernet switch.

Configuration Diagrams

  1. Oracle ZFS Storage ZS7-2 Cluster SUT
  2. Oracle ZFS Storage ZS7-2 Cluster vnic Configuration

Component Software

Item No | Component | Type | Name and Version | Description
1 | Oracle ZFS Storage | Storage Controller OS | 8.8 | Oracle ZFS Storage OS for storage controllers.
2 | Oracle Linux | Client Node OS | 7.3 | Oracle Linux OS for client nodes.

Hardware Configuration and Tuning - Physical

Oracle ZFS Storage ZS7-2
Parameter Name | Value | Description
MTU | 9000 | Network jumbo frames

Hardware Configuration and Tuning Notes

Oracle ZFS Storage ZS7-2 controllers and Oracle X6-2 client nodes both had 10GbE Ethernet ports set up to MTU of 9000 jumbo frames.
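On the Linux client side, a jumbo-frame setting of this kind is typically applied as below (a sketch; the interface name eth0 is a hypothetical placeholder).

```shell
# Set a 10GbE interface to MTU 9000 (jumbo frames) on an Oracle Linux client.
# The interface name is illustrative; to persist across reboots, set MTU=9000
# in the corresponding ifcfg file as well.
ip link set dev eth0 mtu 9000

# Verify the active MTU.
ip link show dev eth0 | grep -o 'mtu [0-9]*'
```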

Software Configuration and Tuning - Physical

Oracle X6-2 Client Nodes
Parameter Name | Value | Description
vers | 3 | NFS mount option set to version 3
rsize,wsize | 1048576 | NFS mount option for data block size
sync | sync | NFS mount option set to sync I/O
net.ipv4.tcp_rmem, net.ipv4.tcp_wmem | 10000000 | Linux kernel TCP send and receive buffers
net.core.somaxconn | 65536 | Linux kernel maximum socket connections

Software Configuration and Tuning Notes

Communication between the Oracle X6-2 client nodes and the Oracle ZFS Storage ZS7-2 controllers over 10GbE Ethernet was tuned to maximize data transfer while minimizing overhead. This included setting the Oracle X6-2 clients' mounts of the Oracle ZFS Storage ZS7-2 filesystems to use sync I/O with read and write sizes of 1048576, and increasing the Oracle X6-2 client send and receive buffer sizes to 10000000.
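The tunings above could be applied roughly as follows (a sketch; the server name zs7 and the mount paths are hypothetical, while the option values are those listed in the table).

```shell
# NFSv3 mount with the options from the table above (sync I/O, 1 MiB r/w size).
# Hostname and paths are illustrative placeholders.
mount -t nfs -o vers=3,rsize=1048576,wsize=1048576,sync \
    zs7:/export/fs01 /mnt/fs01

# Kernel TCP buffer and socket backlog tuning from the table above.
# tcp_rmem/tcp_wmem take min/default/max triples; the report lists a single
# value, applied here to all three fields as an assumption.
sysctl -w net.ipv4.tcp_rmem="10000000 10000000 10000000"
sysctl -w net.ipv4.tcp_wmem="10000000 10000000 10000000"
sysctl -w net.core.somaxconn=65536
```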

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 3.2TB SSD Oracle ZFS Storage ZS7-2 Data Pool Drives | RAID-10 | Yes | 264
2 | 200GB SSD Oracle ZFS Storage ZS7-2 Log Drives | None | Yes | 24
3 | 10TB HGST Oracle ZFS Storage ZS7-2 OS Drives | Mirrored | No | 4
4 | 600GB HGST Oracle X6-2 Client Node OS Drives | None | No | 8
Number of Filesystems: 64
Total Capacity: 366TiB
Filesystem Type: ZFS

Filesystem Creation Notes

Two ZFS storage pools are created in the SUT (1 storage pool per Oracle ZFS Storage ZS7-2 controller). Each controller's storage pool is configured with 128 SSD data drives, 12 write flash accelerators (log devices), and 4 spare SSD drives. When configuring a storage pool via the administrative HTML interface of each Oracle ZFS Storage ZS7-2 storage controller, you are first asked to select the number of disk drives and log devices to use per tray. The storage pools are set up to mirror the data (RAID-10) across all 128 data SSD drives (note: when configuring storage pools on the Oracle ZFS Storage ZS7-2 controllers, this is the data profile called Mirrored). The write flash accelerators in each storage pool are used for the ZFS Intent Log (ZIL). Each storage pool is configured with 32 ZFS filesystems; since each controller has 1 storage pool, the SUT has 64 ZFS filesystems in total. There are 2 internal mirrored system disk drives per Oracle ZFS Storage ZS7-2 controller, used only for the controller's core operating system. These drives are not used for data cache or for storing user data.
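The report states the pools were built through the appliance's administrative interface; on a generic ZFS system, an equivalent layout could be sketched from the CLI as follows (device names are hypothetical placeholders, and the appliance itself does not expose this shell).

```shell
# Hypothetical generic-ZFS equivalent of one controller's pool.
# RAID-10: 64 mirrored pairs across 128 data SSDs, plus 12 log devices and
# 4 spares. The "mirror dX dY" group repeats for all 64 pairs; two are shown.
zpool create pool1 \
    mirror c0d0 c0d1 \
    mirror c0d2 c0d3 \
    log c1d0 c1d1 c1d2 c1d3 c1d4 c1d5 c1d6 c1d7 c1d8 c1d9 c1d10 c1d11 \
    spare c2d0 c2d1 c2d2 c2d3

# 32 ZFS filesystems per pool (64 across the two controllers).
for i in $(seq -w 1 32); do
    zfs create pool1/fs${i}
done
```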

Storage and Filesystem Notes

All filesystems on both Oracle ZFS Storage ZS7-2 controllers are created with a database record size of 128KB. The logbias setting is set to latency for each filesystem. This is common practice for storage solutions built on the Oracle ZFS Storage ZS7-2 storage controllers.
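In generic ZFS CLI terms these two settings look as below (a sketch; the appliance exposes them as share properties in its management interface, and the pool/filesystem names are illustrative).

```shell
# 128KB record size and latency-optimized log bias, applied per filesystem.
zfs set recordsize=128K pool1/fs01
zfs set logbias=latency pool1/fs01
```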

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 10GbE Ethernet | 16 | Each Oracle ZFS Storage ZS7-2 controller uses 8 x 10GbE Ethernet physical ports for dataflow
2 | 10GbE Ethernet | 2 | Each Oracle ZFS Storage ZS7-2 controller uses 1 x 10GbE Ethernet physical port for management
3 | 10GbE Ethernet | 16 | Each Oracle X6-2 client node uses 2 x 10GbE Ethernet physical ports for dataflow
4 | 10GbE Ethernet | 8 | Each Oracle X6-2 client node uses 1 x 10GbE Ethernet physical port for management

Transport Configuration Notes

Each Oracle ZFS Storage ZS7-2 controller uses 8 active 10GbE Ethernet ports; across both controllers, 16 ports are active. In the event of a controller failure, its IP addresses are taken over by the surviving controller. All 10GbE ports are set up with an MTU size of 9000. One 10GbE port per controller is assigned to the management interface; this interface is used only to manage the controller and does not take part in dataflow.

The Oracle X6-2 client nodes each use 2 x 10GbE Ethernet ports for dataflow, each set to an MTU of 9000. Each client node also uses 1 x 10GbE Ethernet port for management; these interfaces are not used for dataflow.

Each of the 16 active physical 10GbE Ethernet ports on the Oracle X6-2 client nodes is assigned 6 vnic IP addresses, as is each of the 16 active physical 10GbE Ethernet ports on the Oracle ZFS Storage ZS7-2 controllers. On the Oracle ZFS Storage ZS7-2, vnics are configured through the management BUI; on the Oracle X6-2 client nodes, vnics are configured in the Linux OS under /etc/sysconfig/network-scripts. Please reference the vnic diagram for the IP layout.
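On the Linux clients, per-port vnic addresses of this kind are commonly defined as interface alias files (a sketch; the interface name, IP address, and file name are hypothetical placeholders).

```shell
# Hypothetical alias file for one of the 6 vnic IPs on a client data port:
# /etc/sysconfig/network-scripts/ifcfg-eth0:1
cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<'EOF'
DEVICE=eth0:1
BOOTPROTO=static
IPADDR=192.168.10.11
NETMASK=255.255.255.0
MTU=9000
ONBOOT=yes
EOF
```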

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Oracle Switch ES2-64 | 10/40GbE Ethernet Switch | 46 | 32 | All ports set up for jumbo frame support

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 4 | CPU | Oracle ZFS Storage ZS7-2 | 2.10GHz Intel Xeon Platinum 8160 CPU | ZFS, TCP/IP, RAID/Storage Drivers, NFS
2 | 16 | CPU | Oracle X6-2 Client Node | 2.20GHz Intel Xeon CPU E5-2699 v4 | TCP/IP, NFS

Processing Element Notes

Each Oracle ZFS Storage ZS7-2 controller contains 2 physical processors, each with 24 processing cores.

Each Oracle X6-2 client node contains 2 physical processors, each with 22 processing cores.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Memory in Oracle ZFS Storage ZS7-2 | 1500 | 2 | V | 3000
Memory in Oracle X6-2 clients | 512 | 8 | V | 4096
Grand Total Memory Gibibytes: 7096
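The grand total can be cross-checked from the two rows above it:

```shell
# 2 controllers x 1500 GiB + 8 clients x 512 GiB = 3000 + 4096 GiB
echo $(( 2 * 1500 + 8 * 512 ))
# prints 7096
```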

Memory Notes

The Oracle ZFS Storage ZS7-2 controllers' main memory is used for the Adaptive Replacement Cache (ARC), the data cache, and operating system memory.

Oracle X6-2 client memory is used only by the clients themselves; it is not used for storage or caching on behalf of the Oracle ZFS Storage ZS7-2 controllers.

Stable Storage

The Stable Storage requirement is guaranteed by the ZFS Intent Log (ZIL), which logs writes and other filesystem-changing transactions to either a write flash accelerator or a disk drive. Writes and other filesystem-changing transactions are not acknowledged until the data is written to stable storage. Since this is an active-active high-availability cluster, in the event of a controller failure or power loss the other active controller can take over for the failed controller. Because the write flash accelerators and disk drives are located in the disk shelves and can be accessed via the 4 backend SAS channels from both controllers, the remaining active controller can complete any outstanding transactions using the ZIL. In the event of power loss to both controllers, the ZIL is used after power is restored to reinstate any writes and other filesystem changes.

Solution Under Test Configuration Notes

The system under test is a pair of Oracle ZFS Storage ZS7-2 high-end storage controllers set up in an active-active cluster configuration with failover capabilities.

The Oracle X6-2 client nodes reached end of life (EOL) in February 2018. Third parties still sell the model with the original Oracle warranty and support. In addition, this model line has been refreshed and is available from Oracle as the Oracle X7-2.

Other Solution Notes

None

Dataflow

Please reference the SUT diagram. The 8 Oracle X6-2 client nodes are used for benchmark load generation. The Oracle X6-2 client nodes each mount 8 of the 64 ZFS filesystems of the Oracle ZFS Storage ZS7-2 controllers via NFSv3; half of the filesystems are shared from each Oracle ZFS Storage ZS7-2 controller. Each of the two Oracle ZFS Storage ZS7-2 controllers has 8 x 10GbE Ethernet ports active for I/O dataflow, all assigned separate subnets. Each Oracle X6-2 client node has 2 x 10GbE Ethernet ports and accesses half of its NFS mounts through each port. There is a one-to-one match between the 16 total 10GbE Ethernet client ports and the 16 total 10GbE Ethernet controller ports used for I/O dataflow (non-management ports). In effect, this spreads the I/O load evenly across the filesystem mounts, network interfaces, and storage pools of the Oracle ZFS Storage ZS7-2 Cluster SUT.
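The mount and port layout described above can be sketched as follows (host and export names are illustrative placeholders; the point is the even one-to-one spread of 64 mounts across 16 client ports and 16 controller ports).

```shell
# Print a hypothetical client -> controller mount layout: 8 clients x 8
# filesystems each, half from each controller, split across the client's
# two 10GbE data ports. Names are illustrative, not the SUT's real names.
total=0
for client in 1 2 3 4 5 6 7 8; do
  for fs in 1 2 3 4 5 6 7 8; do
    ctrl=$(( (fs - 1) / 4 + 1 ))   # first 4 mounts from controller 1, last 4 from controller 2
    port=$(( (fs - 1) % 2 + 1 ))   # alternate between the client's two data ports
    echo "client${client}/port${port} -> controller${ctrl}:/export/fs$(( (client - 1) * 8 + fs ))"
    total=$(( total + 1 ))
  done
done
echo "total mounts: ${total}"   # 8 clients x 8 mounts = 64
```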

Other Notes

Oracle and ZFS are registered trademarks of Oracle Corporation in the U.S. and/or other countries. Intel and Xeon are registered trademarks of the Intel Corporation in the U.S. and/or other countries.

Other Report Notes

The test sponsor attests, as of the date of publication, that CVE-2017-5754 (Meltdown), CVE-2017-5753 (Spectre variant 1), and CVE-2017-5715 (Spectre variant 2) are not mitigated in the system as tested and documented. The Oracle ZFS Storage ZS7-2 product supports mitigating CVE-2017-5754 (Meltdown), CVE-2017-5753 (Spectre variant 1), and CVE-2017-5715 (Spectre variant 2); however, the mitigations were not enabled for this tested run.


Generated on Wed Mar 13 16:27:01 2019 by SpecReport