SPEC® MPIM2007 Result

Copyright 2006-2010 Standard Performance Evaluation Corporation

Hewlett Packard Enterprise

SGI 8600
(Intel Xeon Gold 6148, 2.40 GHz)

SPECmpiM_peak2007 = Not Run

MPI2007 license: 1
Test sponsor: HPE
Tested by: HPE
Test date: Oct-2017
Hardware Availability: Jul-2017
Software Availability: Nov-2017

Results Table

Benchmark        Base                                                       Peak
                 Ranks  Seconds  Ratio  Seconds  Ratio  Seconds  Ratio     (Not Run)
Results appear in the order in which they were run. Bold underlined text indicates a median measurement.
104.milc 80 95.6 16.4 95.6 16.4 95.9 16.3
107.leslie3d 80 217   24.0 219   23.8 217   24.0
113.GemsFDTD 80 193   32.7 190   33.2 192   32.8
115.fds4 80 104   18.7 104   18.8 105   18.6
121.pop2 80 147   28.0 147   28.1 147   28.0
122.tachyon 80 123   22.7 124   22.6 150   18.6
126.lammps 80 152   19.2 152   19.2 152   19.1
127.wrf2 80 193   40.5 194   40.3 194   40.1
128.GAPgeofem 80 59.6 34.7 59.8 34.5 59.7 34.6
129.tera_tf 80 117   23.7 117   23.7 117   23.6
130.socorro 80 97.6 39.1 98.0 38.9 97.5 39.2
132.zeusmp2 80 118   26.2 118   26.2 118   26.3
137.lu 80 112   32.7 113   32.6 114   32.3
Hardware Summary
Type of System: Homogeneous
Compute Node: HPE XA730i Gen10 Server Node
Interconnect: InfiniBand (MPI and I/O)
File Server Node: Lustre FS
Total Compute Nodes: 2
Total Chips: 4
Total Cores: 80
Total Threads: 160
Total Memory: 384 GB
Base Ranks Run: 80
Minimum Peak Ranks: --
Maximum Peak Ranks: --
Software Summary
C Compiler: Intel C Composer XE for Linux, Version 18.0.0.128 Build 20170811
C++ Compiler: Intel C++ Composer XE for Linux, Version 18.0.0.128 Build 20170811
Fortran Compiler: Intel Fortran Composer XE for Linux, Version 18.0.0.128 Build 20170811
Base Pointers: 64-bit
Peak Pointers: Not Applicable
MPI Library: HPE Performance Software - Message Passing Interface 2.17
Other MPI Info: OFED 3.2.2
Pre-processors: None
Other Software: None

Node Description: HPE XA730i Gen10 Server Node

Hardware
Number of nodes: 2
Uses of the node: compute
Vendor: Hewlett Packard Enterprise
Model: SGI 8600 (Intel Xeon Gold 6148, 2.40 GHz)
CPU Name: Intel Xeon Gold 6148
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 40
Cores per chip: 20
Threads per core: 2
CPU Characteristics: Intel Turbo Boost Technology up to 3.70 GHz
CPU MHz: 2400
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 1 MB I+D on chip per core
L3 Cache: 27.5 MB I+D on chip per chip
Other Cache: None
Memory: 192 GB (12 x 16 GB 2Rx4 PC4-2666V-R)
Disk Subsystem: None
Other Hardware: None
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Number of Adapters: 2
Slot Type: PCIe x16 Gen3 8GT/s
Data Rate: InfiniBand 4X EDR
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Adapter Driver: OFED-3.4-2.1.8.0
Adapter Firmware: 12.18.1000
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo), Kernel 3.10.0-514.2.2.el7.x86_64
Local File System: LFS
Shared File System: LFS
System State: Multi-user, run level 3
Other Software: SGI Management Center Compute Node 3.5.0, Build 716r171.rhel73-1705051353

Node Description: Lustre FS

Hardware
Number of nodes: 4
Uses of the node: fileserver
Vendor: Hewlett Packard Enterprise
Model: Rackable C1104-GP2 (Intel Xeon E5-2690 v3, 2.60 GHz)
CPU Name: Intel Xeon E5-2690 v3
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 24
Cores per chip: 12
Threads per core: 1
CPU Characteristics: Intel Turbo Boost Technology up to 3.50 GHz; Hyper-Threading Technology disabled
CPU MHz: 2600
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 256 KB I+D on chip per core
L3 Cache: 30 MB I+D on chip per chip
Other Cache: None
Memory: 128 GB (8 x 16 GB 2Rx4 PC4-2133P-R)
Disk Subsystem: 684 TB RAID 6 (48 x 8+2 2TB 7200 RPM)
Other Hardware: None
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Number of Adapters: 2
Slot Type: PCIe x16 Gen3
Data Rate: InfiniBand 4X EDR
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Adapter Driver: OFED-3.3-1.0.0.0
Adapter Firmware: 12.14.2036
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo), Kernel 3.10.0-514.2.2.el7.x86_64
Local File System: ext3
Shared File System: LFS
System State: Multi-user, run level 3
Other Software: None

Interconnect Description: InfiniBand (MPI and I/O)

Hardware
Vendor: Mellanox Technologies and SGI
Model: SGI P0002145
Switch Model: SGI P0002145
Number of Switches: 1
Number of Ports: 36
Data Rate: InfiniBand 4X EDR
Firmware: 11.0350.0394
Topology: Enhanced Hypercube
Primary Use: MPI and I/O traffic

Base Tuning Notes

src.alt used: 129.tera_tf->add_rank_support
src.alt used: 130.socorro->nullify_ptrs

Submit Notes

The config file option 'submit' was used.
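
The exact submit command is not disclosed in this report. As an illustrative
sketch only (the launcher and options below are assumptions, not the tested
configuration), a SPEC MPI2007 config file typically defines 'submit' to wrap
each benchmark binary in an MPI launcher, with the SPEC tools substituting
$ranks and $command for each run:

   submit = mpiexec_mpt -np $ranks $command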

General Notes



 Software environment:
   export MPI_REQUEST_MAX=65536
   export MPI_TYPE_MAX=32768
   export MPI_IB_RAILS=2
   export MPI_IB_IMM_UPGRADE=false
   export MPI_CONNECTIONS_THRESHOLD=0
   export MPI_IB_DCIS=2
   export MPI_IB_HYPER_LAZY=false
   ulimit -s unlimited

 BIOS settings:
   AMI BIOS version SAED7177, 07/17/2017

 Job Placement:
   Each MPI job was assigned to a topologically compact set
   of nodes.

 Additional notes regarding interconnect:
   The InfiniBand network consists of two independent planes,
   with half the switches in the system allocated to each plane.
   I/O traffic is restricted to one plane, while MPI traffic can
   use both planes.

Base Compiler Invocation

C benchmarks:

 icc 

C++ benchmarks:

126.lammps:  icpc 

Fortran benchmarks:

 ifort 

Benchmarks using both Fortran and C:

 icc   ifort 

Base Portability Flags

121.pop2:  -DSPEC_MPI_CASE_FLAG 
127.wrf2:  -DSPEC_MPI_CASE_FLAG   -DSPEC_MPI_LINUX 
130.socorro:  -assume nostd_intent_in 

Base Optimization Flags

C benchmarks:

 -O3   -xCORE-AVX512   -no-prec-div   -ipo 

C++ benchmarks:

126.lammps:  -O3   -xCORE-AVX512   -no-prec-div   -ansi-alias   -ipo 

Fortran benchmarks:

 -O3   -xCORE-AVX512   -no-prec-div   -ipo 

Benchmarks using both Fortran and C:

 -O3   -xCORE-AVX512   -no-prec-div   -ipo 

Base Other Flags

C benchmarks:

 -lmpi 

C++ benchmarks:

126.lammps:  -lmpi 

Fortran benchmarks:

 -lmpi 

Benchmarks using both Fortran and C:

 -lmpi 
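
Putting the invocation, optimization, and linker flags together, a
representative build line for one of the Fortran benchmarks would resemble
the sketch below (illustrative only; the actual commands are generated by
the SPEC tools from the config file, and the source and output names here
are placeholders):

   ifort -O3 -xCORE-AVX512 -no-prec-div -ipo -o benchmark source.f90 -lmpi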

The flags file that was used to format this result can be browsed at
http://www.spec.org/mpi2007/flags/HPE_x86_64_Intel18_flags.html.

You can also download the XML flags source by saving the following link:
http://www.spec.org/mpi2007/flags/HPE_x86_64_Intel18_flags.xml.