MPI2007 license: | 14 | Test date: | Mar-2016 |
---|---|---|---|
Test sponsor: | SGI | Hardware Availability: | Mar-2016 |
Tested by: | SGI | Software Availability: | May-2016 |
SPEC has determined that this result was not in compliance with the SPEC
MPI2007 run and reporting rules. Specifically, the submitter reported
that the result used an unsupported version of the Operating System.
Benchmark | Base Ranks | Seconds | Ratio | Seconds | Ratio | Seconds | Ratio | Peak Ranks | Seconds | Ratio | Seconds | Ratio | Seconds | Ratio
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
121.pop2 | NC | NC | NC | NC | NC | NC | NC | | | | | | |
122.tachyon | NC | NC | NC | NC | NC | NC | NC | | | | | | |
125.RAxML | NC | NC | NC | NC | NC | NC | NC | | | | | | |
126.lammps | NC | NC | NC | NC | NC | NC | NC | | | | | | |
128.GAPgeofem | NC | NC | NC | NC | NC | NC | NC | | | | | | |
129.tera_tf | NC | NC | NC | NC | NC | NC | NC | | | | | | |
132.zeusmp2 | NC | NC | NC | NC | NC | NC | NC | | | | | | |
137.lu | NC | NC | NC | NC | NC | NC | NC | | | | | | |
142.dmilc | NC | NC | NC | NC | NC | NC | NC | | | | | | |
143.dleslie | NC | NC | NC | NC | NC | NC | NC | | | | | | |
145.lGemsFDTD | NC | NC | NC | NC | NC | NC | NC | | | | | | |
147.l2wrf2 | NC | NC | NC | NC | NC | NC | NC | | | | | | |
Results appear in the order in which they were run. Bold underlined text indicates a median measurement. NC: the result was marked Not Compliant, so no valid measurements are reported.
Hardware Summary | |
---|---|
Type of System: | Homogeneous |
Compute Node: | SGI ICE XA IP-125 Compute Node |
Interconnect: | InfiniBand (MPI and I/O) |
File Server Node: | SGI MIS Server |
Total Compute Nodes: | 32 |
Total Chips: | 64 |
Total Cores: | 896 |
Total Threads: | 896 |
Total Memory: | 4 TB |
Base Ranks Run: | 896 |
Minimum Peak Ranks: | -- |
Maximum Peak Ranks: | -- |
Software Summary | |
---|---|
C Compiler: | Intel C++ Composer XE 2013 for Linux, Version 14.0.3.174 Build 20140422 |
C++ Compiler: | Intel C++ Composer XE 2013 for Linux, Version 14.0.3.174 Build 20140422 |
Fortran Compiler: | Intel Fortran Composer XE 2013 for Linux, Version 14.0.3.174 Build 20140422 |
Base Pointers: | 64-bit |
Peak Pointers: | Not Applicable |
MPI Library: | SGI MPT 2.14 |
Other MPI Info: | MLNX_OFED_LINUX-3.0-1.0.1 |
Pre-processors: | None |
Other Software: | None |
Hardware | |
---|---|
Number of nodes: | 32 |
Uses of the node: | compute |
Vendor: | SGI |
Model: | SGI ICE XA (Intel Xeon E5-2690 v4, 2.6 GHz) |
CPU Name: | Intel Xeon E5-2690 v4 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 28 |
Cores per chip: | 14 |
Threads per core: | 1 |
CPU Characteristics: | 14 Core, 2.60 GHz, 9.6 GT/s QPI; Intel Turbo Boost Technology up to 3.50 GHz; Hyper-Threading Technology disabled |
CPU MHz: | 2600 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 256 KB I+D on chip per core |
L3 Cache: | 35 MB I+D on chip per chip |
Other Cache: | None |
Memory: | 128 GB (8 x 16 GB 2Rx4 PC4-2400T-R) |
Disk Subsystem: | None |
Other Hardware: | None |
Adapter: | Mellanox MT27500 with ConnectX-3 ASIC (PCIe x8 Gen3 8 GT/s) |
Number of Adapters: | 1 |
Slot Type: | PCIe x8 Gen3 |
Data Rate: | InfiniBand 4x FDR |
Ports Used: | 2 |
Interconnect Type: | InfiniBand |
Software | |
---|---|
Adapter: | Mellanox MT27500 with ConnectX-3 ASIC (PCIe x8 Gen3 8 GT/s) |
Adapter Driver: | OFED-3.0-1.0.1 |
Adapter Firmware: | 10.10.5054 |
Operating System: | SUSE Linux Enterprise Server 11 SP3 (x86_64), Kernel 3.0.101-0.47.52.1.8418.0.PTF-default |
Local File System: | NFSv3 |
Shared File System: | NFSv3 IPoIB |
System State: | Multi-user, run level 3 |
Other Software: | SGI Tempo Service Node 3.1.1, Build 712r32.sles11sp3-1505082100 |
Hardware | |
---|---|
Number of nodes: | 1 |
Uses of the node: | fileserver |
Vendor: | SGI |
Model: | SGI MIS Server |
CPU Name: | Intel Xeon E5-2670 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 16 |
Cores per chip: | 8 |
Threads per core: | 1 |
CPU Characteristics: | Intel Turbo Boost Technology up to 3.30 GHz; Hyper-Threading Technology disabled |
CPU MHz: | 2600 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 256 KB I+D on chip per core |
L3 Cache: | 20 MB I+D on chip per chip |
Other Cache: | None |
Memory: | 128 GB (12 x 8 GB 2Rx4 PC3-10600R-9, ECC) |
Disk Subsystem: | 45 TB RAID 6 |
Other Hardware: | None |
Adapter: | Mellanox MT27500 with ConnectX-3 ASIC |
Number of Adapters: | 2 |
Slot Type: | PCIe x8 Gen3 |
Data Rate: | InfiniBand 4x FDR |
Ports Used: | 2 |
Interconnect Type: | InfiniBand |
Software | |
---|---|
Adapter: | Mellanox MT27500 with ConnectX-3 ASIC |
Adapter Driver: | OFED-1.5.4.1 |
Adapter Firmware: | 2.30.3000 |
Operating System: | SUSE Linux Enterprise Server 11 SP1 (x86_64), Kernel 3.0.101-0.46-default |
Local File System: | xfs |
Shared File System: | -- |
System State: | Multi-user, run level 3 |
Other Software: | SGI Foundation Software 2.9, Build 709r19.sles11sp3-1310222002 |
Hardware | |
---|---|
Vendor: | Mellanox Technologies and SGI |
Model: | None |
Switch Model: | LCE-FDR-SWITCH-STD |
Number of Switches: | 4 |
Number of Ports: | 36 |
Data Rate: | InfiniBand 4x FDR |
Firmware: | 9.3.4000 |
Topology: | Enhanced Hypercube |
Primary Use: | MPI and I/O traffic |
The config file option 'submit' was used.
143.dleslie (base): "integer_overflow" src.alt was used.

Software environment:
export MPI_REQUEST_MAX=65536
export MPI_TYPE_MAX=32768
export MPI_IB_RAILS=2
export MPI_CONNECTIONS_THRESHOLD=0
export MPI_IB_UPGRADE_SENDS=50
export MPI_IB_IMM_UPGRADE=false
export MPI_IB_HYPER_LAZY=false
ulimit -s unlimited

BIOS settings:
AMI BIOS version HAAE6125
Hyper-Threading Technology disabled
Transparent HugePages enabled
Intel Turbo Boost Technology enabled (default)

Intel Turbo Boost Technology activated with:
modprobe acpi_cpufreq
cpupower frequency-set -u 2601MHz -d 2601MHz -g performance

Job Placement:
There were 4 switches used with a topologically compact configuration.

Additional notes regarding interconnect:
The InfiniBand network consists of two independent planes, with half the switches in the system allocated to each plane. I/O traffic is restricted to one plane, while MPI traffic can use both planes.
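The software environment above amounts to a small setup script run before launching the benchmarks. A minimal sketch is shown below; the commented launch line and binary name are assumptions for illustration, not part of this report:

```shell
#!/bin/sh
# Environment settings from the notes above (SGI MPT tuning variables).
export MPI_REQUEST_MAX=65536        # allow more simultaneously active nonblocking requests
export MPI_TYPE_MAX=32768           # allow more derived-datatype slots
export MPI_IB_RAILS=2               # let MPI traffic use both InfiniBand planes
export MPI_CONNECTIONS_THRESHOLD=0
export MPI_IB_UPGRADE_SENDS=50
export MPI_IB_IMM_UPGRADE=false
export MPI_IB_HYPER_LAZY=false
ulimit -s unlimited                 # unlimited stack size, per the notes

# Hypothetical launch line (not taken from this report):
# mpiexec_mpt -np 896 ./benchmark_binary

echo "MPI_REQUEST_MAX=$MPI_REQUEST_MAX"
```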
Compiler Invocation | |
---|---|
C benchmarks: | icc |
126.lammps: | icpc |
Fortran benchmarks: | ifort |
Benchmarks using both Fortran and C: | icc ifort |

Portability Flags | |
---|---|
121.pop2: | -DSPEC_MPI_CASE_FLAG |