MPI2007 license: | 4 | Test date: | Feb-2009 |
---|---|---|---|
Test sponsor: | SGI | Hardware Availability: | Mar-2009 |
Tested by: | SGI | Software Availability: | Jan-2009 |
Results appear in the order in which they were run. In the original report, bold underlined text marks the median measurement. Only base results are reported; the peak columns are empty.

Benchmark | Ranks | Base Seconds (run 1) | Base Ratio (run 1) | Base Seconds (run 2) | Base Ratio (run 2) |
---|---|---|---|---|---|
104.milc | 64 | 108 | 14.5 | 108 | 14.6 |
107.leslie3d | 64 | 438 | 11.9 | 437 | 12.0 |
113.GemsFDTD | 64 | 294 | 21.4 | 294 | 21.5 |
115.fds4 | 64 | 167 | 11.7 | 166 | 11.7 |
121.pop2 | 64 | 349 | 11.8 | 347 | 11.9 |
122.tachyon | 64 | 294 | 9.50 | 293 | 9.55 |
126.lammps | 64 | 293 | 9.95 | 293 | 9.96 |
127.wrf2 | 64 | 345 | 22.6 | 345 | 22.6 |
128.GAPgeofem | 64 | 136 | 15.2 | 136 | 15.2 |
129.tera_tf | 64 | 275 | 10.1 | 275 | 10.1 |
130.socorro | 64 | 244 | 15.6 | 245 | 15.6 |
132.zeusmp2 | 64 | 231 | 13.4 | 231 | 13.4 |
137.lu | 64 | 241 | 15.2 | 242 | 15.2 |
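Each per-benchmark ratio follows the usual SPEC convention: the benchmark's reference runtime divided by the measured runtime, with the overall MPI2007 metric taken as the geometric mean of these per-benchmark ratios. A sketch, where the reference time shown for 104.milc is back-calculated from the table above rather than quoted from SPEC:

$$
\text{Ratio} = \frac{T_{\text{ref}}}{T_{\text{measured}}},
\qquad
\text{104.milc: } \frac{T_{\text{ref}}}{108\ \text{s}} \approx 14.5
\;\Rightarrow\; T_{\text{ref}} \approx 1.57 \times 10^{3}\ \text{s}
$$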
Software Summary | |
---|---|
C Compiler: | Intel C Compiler for Linux Version 10.1, Build 20080801 |
C++ Compiler: | Intel C++ Compiler for Linux Version 10.1, Build 20080801 |
Fortran Compiler: | Intel Fortran Compiler for Linux Version 10.1, Build 20080801 |
Base Pointers: | 64-bit |
Peak Pointers: | 64-bit |
MPI Library: | SGI MPT 1.23 |
Other MPI Info: | OFED 1.3.1 |
Pre-processors: | None |
Other Software: | None |
Hardware | |
---|---|
Number of nodes: | 8 |
Uses of the node: | compute |
Vendor: | SGI |
Model: | SGI Altix ICE 8200EX (Intel Xeon X5570, 2.93 GHz) |
CPU Name: | Intel Xeon X5570 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 8 |
Cores per chip: | 4 |
Threads per core: | 2 |
CPU Characteristics: | Intel Turbo Boost Technology up to 3.33 GHz, 6.4 GT/s QPI, Hyper-Threading enabled |
CPU MHz: | 2934 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 256 KB I+D on chip per core |
L3 Cache: | 8 MB I+D on chip per chip |
Other Cache: | None |
Memory: | 48 GB (12 x 4 GB DDR3-1066 CL7 RDIMMs) |
Disk Subsystem: | None |
Other Hardware: | None |
Adapter: | Mellanox MT26418 ConnectX IB DDR (PCIe x8 Gen2 5 GT/s) |
Number of Adapters: | 1 |
Slot Type: | PCIe x8 Gen2 |
Data Rate: | InfiniBand 4x DDR |
Ports Used: | 2 |
Interconnect Type: | InfiniBand |
Software | |
---|---|
Adapter: | Mellanox MT26418 ConnectX IB DDR (PCIe x8 Gen2 5 GT/s) |
Adapter Driver: | OFED-1.3.1 |
Adapter Firmware: | 2.5.0 |
Operating System: | SUSE Linux Enterprise Server 10 (x86_64) SP2 Kernel 2.6.16.60-0.30-smp |
Local File System: | NFSv3 |
Shared File System: | NFSv3 IPoIB |
System State: | Multi-user, run level 3 |
Other Software: | SGI ProPack 6 for Linux Service Pack 2 |
Hardware | |
---|---|
Number of nodes: | 1 |
Uses of the node: | fileserver |
Vendor: | SGI |
Model: | SGI Altix XE 240 (Intel Xeon 5140, 2.33 GHz) |
CPU Name: | Intel Xeon 5140 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 4 |
Cores per chip: | 2 |
Threads per core: | 1 |
CPU Characteristics: | 1333 MHz FSB |
CPU MHz: | 2328 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 4 MB I+D on chip per chip |
L3 Cache: | None |
Other Cache: | None |
Memory: | 24 GB (6 x 4 GB DDR2-400 DIMMs) |
Disk Subsystem: | 7 TB RAID 5 (48 x 147 GB SAS, Seagate Cheetah 15,000 RPM) |
Other Hardware: | None |
Adapter: | Mellanox MT25208 InfiniHost III Ex (PCIe x8 Gen1 2.5 GT/s) |
Number of Adapters: | 2 |
Slot Type: | PCIe x8 Gen1 |
Data Rate: | InfiniBand 4x DDR |
Ports Used: | 2 |
Interconnect Type: | InfiniBand |
Software | |
---|---|
Adapter: | Mellanox MT25208 InfiniHost III Ex (PCIe x8 Gen1 2.5 GT/s) |
Adapter Driver: | OFED-1.3 |
Adapter Firmware: | 5.3.0 |
Operating System: | SUSE Linux Enterprise Server 10 (x86_64) SP1 Kernel 2.6.16.54-0.2.5-smp |
Local File System: | xfs |
Shared File System: | -- |
System State: | Multi-user, run level 3 |
Other Software: | SGI ProPack 5 for Linux Service Pack 5 |
Hardware | |
---|---|
Vendor: | Mellanox Technologies |
Model: | MT26418 ConnectX |
Switch Model: | Mellanox MT47396 InfiniScale III |
Number of Switches: | 8 |
Number of Ports: | 24 |
Data Rate: | InfiniBand 4x DDR |
Firmware: | 2020001 |
Topology: | Bristle hypercube with express links |
Primary Use: | MPI traffic |
Hardware | |
---|---|
Vendor: | Mellanox Technologies |
Model: | MT26418 ConnectX |
Switch Model: | Mellanox MT47396 InfiniScale III |
Number of Switches: | 8 |
Number of Ports: | 24 |
Data Rate: | InfiniBand 4x DDR |
Firmware: | 2020001 |
Topology: | Bristle hypercube with express links |
Primary Use: | I/O traffic |
The config file option 'submit' was used.
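As an illustration of what that looks like, a submit line in an MPI2007 config file wraps each benchmark command with the MPI launcher. The line below is a hypothetical sketch, not quoted from the tested config file; the launcher and its options are assumptions about this SGI MPT setup:

```
# Hypothetical sketch of a 'submit' entry in a SPEC MPI2007 config file.
# $ranks and $command are SPEC substitution variables; the mpirun options
# shown here are illustrative assumptions, not taken from the report.
submit = mpirun -np $ranks $command
```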
Software environment:

* setenv MPI_REQUEST_MAX 65536: Determines the maximum number of nonblocking sends and receives that can simultaneously exist for any single MPI process. MPI generates an error message if this limit (or the default, if not set) is exceeded. Default: 16384
* setenv MPI_TYPE_MAX 32768: Determines the maximum number of data types that can simultaneously exist for any single MPI process. MPI generates an error message if this limit (or the default, if not set) is exceeded. Default: 1024
* setenv MPI_BUFS_THRESHOLD 1: Determines whether MPT uses per-host or per-process message buffers for communicating with other hosts. Per-host buffers are generally faster, but for jobs running across many hosts they can consume a prodigious amount of memory. MPT uses per-host buffers for jobs using up to and including this many hosts and per-process buffers for larger host counts. Default: 64
* setenv MPI_DSM_DISTRIBUTE: Activates NUMA job placement mode. This mode ensures that each MPI process gets a unique CPU and physical memory on the node with which that CPU is associated. Currently, the CPUs are chosen by simply starting at relative CPU 0 and incrementing until all MPI processes have been forked.
* limit stacksize unlimited: Removes limits on the maximum size of the automatically extended stack region of the current process and each process it creates.

The PBS Pro batch scheduler (www.altair.com) is used with placement sets to ensure that each MPI job is assigned to a topologically compact set of nodes.

BIOS settings (AMI BIOS version 8.15):

* Hyper-Threading Technology enabled (default)
* Intel Turbo Boost Technology enabled (default)
* Intel Turbo Boost Technology activated in the OS via: /etc/init.d/acpid start; /etc/init.d/powersaved start; powersave -f
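For readability, here is the same environment setup collected into a single csh fragment (csh matches the setenv and limit commands quoted above). The environment lines restate the documented settings; the final launch line is purely illustrative, with an assumed rank count and binary name:

```csh
#!/bin/csh
# MPT environment settings documented in the notes above (csh syntax).
setenv MPI_REQUEST_MAX 65536    # allow up to 65536 outstanding nonblocking requests
setenv MPI_TYPE_MAX 32768       # allow up to 32768 derived datatypes per process
setenv MPI_BUFS_THRESHOLD 1     # per-process message buffers beyond 1 host
setenv MPI_DSM_DISTRIBUTE       # pin each MPI process to its own CPU and local memory
limit stacksize unlimited       # unlimited stack for this process and its children

# Illustrative launch only: the rank count, launcher options, and binary name
# are assumptions for this sketch, not taken from the report.
mpirun -np 64 ./benchmark.exe
```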
Compiler Invocation | |
---|---|
C benchmarks: | icc |
C++ benchmarks (126.lammps): | icpc |
Fortran benchmarks: | ifort |
Benchmarks using both Fortran and C: | icc ifort |

Portability Flags | |
---|---|
121.pop2: | -DSPEC_MPI_CASE_FLAG |
127.wrf2: | -DSPEC_MPI_CASE_FLAG -DSPEC_MPI_LINUX |
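These portability flags are ordinary preprocessor defines passed on the compile line. The command below is a hypothetical illustration; the source file name and the absence of other options are assumptions, not taken from the report:

```
# Hypothetical compile line showing how the 127.wrf2 portability defines
# would be passed to the Fortran compiler; the file name is illustrative.
ifort -DSPEC_MPI_CASE_FLAG -DSPEC_MPI_LINUX -c some_wrf_source.F90
```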