MPI2007 license: | 4 | Test date: | Feb-2009 |
---|---|---|---|
Test sponsor: | SGI | Hardware Availability: | Mar-2009 |
Tested by: | SGI | Software Availability: | Jan-2009 |
Benchmark | Base Ranks | Seconds | Ratio | Seconds | Ratio
---|---|---|---|---|---
104.milc | 256 | 32.3 | 48.5 | 31.1 | 50.4
107.leslie3d | 256 | 127 | 41.2 | 127 | 41.2
113.GemsFDTD | 256 | 282 | 22.4 | 282 | 22.4
115.fds4 | 256 | 31.5 | 61.9 | 31.5 | 61.9
121.pop2 | 256 | 182 | 22.7 | 239 | 17.3
122.tachyon | 256 | 76.5 | 36.6 | 76.5 | 36.6
126.lammps | 256 | 137 | 21.3 | 137 | 21.3
127.wrf2 | 256 | 104 | 75.3 | 104 | 75.2
128.GAPgeofem | 256 | 40.2 | 51.4 | 40.5 | 50.9
129.tera_tf | 256 | 85.1 | 32.5 | 85.3 | 32.5
130.socorro | 256 | 102 | 37.4 | 104 | 36.6
132.zeusmp2 | 256 | 63.3 | 49.0 | 63.1 | 49.2
137.lu | 256 | 59.2 | 62.1 | 59.4 | 61.9

Results appear in the order in which they were run. Bold underlined text indicates a median measurement. No peak results were submitted for this run (see the Minimum/Maximum Peak Ranks entries below).
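For readers unfamiliar with SPEC scoring, each Ratio above is the benchmark's reference time divided by the measured run time, and the median measurement is the one that counts toward the overall metric. A minimal sketch, using hypothetical reference and run times rather than values from this result:

```python
# Sketch of SPEC-style scoring: ratio = reference_time / measured_time,
# and the median run is the one reported. All numbers here are
# hypothetical illustrations, not values from the SPEC MPI2007 suite.
from statistics import median

reference_time = 1000.0           # hypothetical reference seconds
measured = [25.0, 20.0, 24.0]     # hypothetical seconds from three runs

ratios = [reference_time / t for t in measured]
reported = median(ratios)         # the median measurement is reported

print([round(r, 1) for r in ratios], round(reported, 1))
# prints: [40.0, 50.0, 41.7] 41.7
```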
Hardware Summary | |
---|---|
Type of System: | Homogeneous |
Compute Node: | SGI Altix ICE 8200EX Compute Node |
Interconnects: | InfiniBand (MPI), InfiniBand (I/O) |
File Server Node: | SGI InfiniteStorage Nexis 2000 NAS |
Total Compute Nodes: | 32 |
Total Chips: | 64 |
Total Cores: | 256 |
Total Threads: | 512 |
Total Memory: | 1536 GB |
Base Ranks Run: | 256 |
Minimum Peak Ranks: | -- |
Maximum Peak Ranks: | -- |
Software Summary | |
---|---|
C Compiler: | Intel C Compiler for Linux Version 10.1, Build 20080801 |
C++ Compiler: | Intel C++ Compiler for Linux Version 10.1, Build 20080801 |
Fortran Compiler: | Intel Fortran Compiler for Linux Version 10.1, Build 20080801 |
Base Pointers: | 64-bit |
Peak Pointers: | 64-bit |
MPI Library: | SGI MPT 1.23 |
Other MPI Info: | OFED 1.3.1 |
Pre-processors: | None |
Other Software: | None |
Compute Node Hardware | |
---|---|
Number of nodes: | 32 |
Uses of the node: | compute |
Vendor: | SGI |
Model: | SGI Altix ICE 8200EX (Intel Xeon X5570, 2.93 GHz) |
CPU Name: | Intel Xeon X5570 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 8 |
Cores per chip: | 4 |
Threads per core: | 2 |
CPU Characteristics: | Intel Turbo Boost Technology up to 3.33 GHz, 6.4 GT/s QPI, Hyper-Threading enabled |
CPU MHz: | 2934 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 256 KB I+D on chip per core |
L3 Cache: | 8 MB I+D on chip per chip |
Other Cache: | None |
Memory: | 48 GB (12*4GB DDR3-1066 CL7 RDIMMs) |
Disk Subsystem: | None |
Other Hardware: | None |
Adapter: | Mellanox MT26418 ConnectX IB DDR (PCIe x8 Gen2 5 GT/s) |
Number of Adapters: | 1 |
Slot Type: | PCIe x8 Gen2 |
Data Rate: | InfiniBand 4x DDR |
Ports Used: | 2 |
Interconnect Type: | InfiniBand |
Compute Node Software | |
---|---|
Adapter: | Mellanox MT26418 ConnectX IB DDR (PCIe x8 Gen2 5 GT/s) |
Adapter Driver: | OFED-1.3.1 |
Adapter Firmware: | 2.5.0 |
Operating System: | SUSE Linux Enterprise Server 10 (x86_64) SP2 Kernel 2.6.16.60-0.30-smp |
Local File System: | NFSv3 |
Shared File System: | NFSv3 IPoIB |
System State: | Multi-user, run level 3 |
Other Software: | SGI ProPack 6 for Linux Service Pack 2 |
File Server Node Hardware | |
---|---|
Number of nodes: | 1 |
Uses of the node: | fileserver |
Vendor: | SGI |
Model: | SGI Altix XE 240 (Intel Xeon 5140, 2.33 GHz) |
CPU Name: | Intel Xeon 5140 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 4 |
Cores per chip: | 2 |
Threads per core: | 1 |
CPU Characteristics: | 1333 MHz FSB |
CPU MHz: | 2328 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 4 MB I+D on chip per chip |
L3 Cache: | None |
Other Cache: | None |
Memory: | 24 GB (6*4GB DDR2-400 DIMMs) |
Disk Subsystem: | 7 TB RAID 5 (48 x 147 GB SAS, Seagate Cheetah 15,000 rpm) |
Other Hardware: | None |
Adapter: | Mellanox MT25208 InfiniHost III Ex (PCIe x8 Gen1 2.5 GT/s) |
Number of Adapters: | 2 |
Slot Type: | PCIe x8 Gen1 |
Data Rate: | InfiniBand 4x DDR |
Ports Used: | 2 |
Interconnect Type: | InfiniBand |
File Server Node Software | |
---|---|
Adapter: | Mellanox MT25208 InfiniHost III Ex (PCIe x8 Gen1 2.5 GT/s) |
Adapter Driver: | OFED-1.3 |
Adapter Firmware: | 5.3.0 |
Operating System: | SUSE Linux Enterprise Server 10 (x86_64) SP1 Kernel 2.6.16.54-0.2.5-smp |
Local File System: | xfs |
Shared File System: | -- |
System State: | Multi-user, run level 3 |
Other Software: | SGI ProPack 5 for Linux Service Pack 5 |
MPI Interconnect Hardware | |
---|---|
Vendor: | Mellanox Technologies |
Model: | MT26418 ConnectX |
Switch Model: | Mellanox MT47396 InfiniScale III |
Number of Switches: | 8 |
Number of Ports: | 24 |
Data Rate: | InfiniBand 4x DDR |
Firmware: | 2020001 |
Topology: | Bristle hypercube with express links |
Primary Use: | MPI traffic |
I/O Interconnect Hardware | |
---|---|
Vendor: | Mellanox Technologies |
Model: | MT26418 ConnectX |
Switch Model: | Mellanox MT47396 InfiniScale III |
Number of Switches: | 8 |
Number of Ports: | 24 |
Data Rate: | InfiniBand 4x DDR |
Firmware: | 2020001 |
Topology: | Bristle hypercube with express links |
Primary Use: | I/O traffic |
The config file option 'submit' was used.
Software environment:

setenv MPI_REQUEST_MAX 65536
Determines the maximum number of nonblocking sends and receives that can simultaneously exist for any single MPI process. MPI generates an error message if this limit (or the default, if not set) is exceeded. Default: 16384

setenv MPI_TYPE_MAX 32768
Determines the maximum number of data types that can simultaneously exist for any single MPI process. MPI generates an error message if this limit (or the default, if not set) is exceeded. Default: 1024

setenv MPI_BUFS_THRESHOLD 1
Determines whether MPT uses per-host or per-process message buffers for communicating with other hosts. Per-host buffers are generally faster, but for jobs running across many hosts they can consume a prodigious amount of memory. MPT will use per-host buffers for jobs using up to and including this many hosts, and will use per-process buffers for larger host counts. Default: 64

setenv MPI_DSM_DISTRIBUTE
Activates NUMA job placement mode. This mode ensures that each MPI process gets a unique CPU and physical memory on the node with which that CPU is associated. Currently, the CPUs are chosen by simply starting at relative CPU 0 and incrementing until all MPI processes have been forked.

limit stacksize unlimited
Removes limits on the maximum size of the automatically-extended stack region of the current process and each process it creates.

The PBS Pro batch scheduler (www.altair.com) is used with placement sets to ensure each MPI job is assigned to a topologically compact set of nodes.

BIOS settings:

AMI BIOS version 8.15
Hyper-Threading Technology enabled (default)
Intel Turbo Boost Technology enabled (default)
Intel Turbo Boost Technology activated in the OS via:
/etc/init.d/acpid start
/etc/init.d/powersaved start
powersave -f
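The MPI_BUFS_THRESHOLD rule described above (per-host buffers up to and including the threshold host count, per-process buffers beyond it) can be sketched as follows; the function name and structure are illustrative, not SGI MPT source:

```python
# Sketch of the MPI_BUFS_THRESHOLD selection rule described in the notes
# above. The helper name is ours; this is not SGI MPT code.
def buffer_mode(num_hosts: int, bufs_threshold: int = 64) -> str:
    # Per-host buffers are faster, but their memory use grows with host
    # count; MPT switches to per-process buffers past the threshold.
    return "per-host" if num_hosts <= bufs_threshold else "per-process"

print(buffer_mode(32))      # default threshold of 64 -> per-host
print(buffer_mode(32, 1))   # MPI_BUFS_THRESHOLD=1, as set here -> per-process
```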
Base Compiler Invocation | |
---|---|
C benchmarks: | icc |
C++ benchmarks (126.lammps): | icpc |
Fortran benchmarks: | ifort |
Benchmarks using both Fortran and C: | icc ifort |

Base Portability Flags | |
---|---|
121.pop2: | -DSPEC_MPI_CASE_FLAG |
127.wrf2: | -DSPEC_MPI_CASE_FLAG -DSPEC_MPI_LINUX |