MPI2007 license: | 4 | Test date: | Jan-2010 |
---|---|---|---|
Test sponsor: | SGI | Hardware Availability: | Sep-2009 |
Tested by: | SGI | Software Availability: | Dec-2009 |
Benchmark | Ranks | Seconds | Ratio | Seconds | Ratio | Seconds | Ratio |
---|---|---|---|---|---|---|---|
121.pop2 | 512 | 228 | 17.1 | 224 | 17.4 | 224 | 17.4 |
122.tachyon | 512 | 278 | 7.00 | 250 | 7.79 | 249 | 7.79 |
125.RAxML | 512 | 494 | 5.91 | 494 | 5.91 | 501 | 5.83 |
126.lammps | 512 | 194 | 12.6 | 195 | 12.6 | 196 | 12.5 |
128.GAPgeofem | 512 | 319 | 18.6 | 320 | 18.6 | 312 | 19.0 |
129.tera_tf | 512 | 162 | 6.80 | 162 | 6.77 | 161 | 6.83 |
132.zeusmp2 | 512 | 129 | 16.5 | 129 | 16.4 | 128 | 16.5 |
137.lu | 512 | 119 | 35.3 | 119 | 35.3 | 119 | 35.3 |
142.dmilc | 512 | 99.3 | 37.1 | 99.4 | 37.1 | 99.2 | 37.1 |
143.dleslie | 512 | 151 | 20.6 | 151 | 20.6 | 149 | 20.8 |
145.lGemsFDTD | 512 | 196 | 22.5 | 196 | 22.5 | 195 | 22.6 |
147.l2wrf2 | 512 | 356 | 23.1 | 356 | 23.0 | 356 | 23.0 |

Results appear in the order in which they were run. Bold underlined text indicates a median measurement. Only base results were run; no peak results are reported.
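As a reading aid only: assuming the standard SPEC convention (not restated in this report), each Ratio is the benchmark's fixed reference time divided by the measured run time, so a row's reference time can be recovered approximately from its Seconds and Ratio values.

$$\text{Ratio} = \frac{t_{\text{reference}}}{t_{\text{measured}}}, \qquad \text{e.g. for 137.lu:}\quad t_{\text{reference}} \approx 119\,\text{s} \times 35.3 \approx 4200\,\text{s}.$$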
Hardware Summary | |
---|---|
Type of System: | Homogeneous |
Compute Node: | SGI Altix ICE 8200EX Compute Node |
Interconnects: | InfiniBand (MPI), InfiniBand (I/O) |
File Server Node: | SGI InfiniteStorage Nexis 2000 NAS |
Total Compute Nodes: | 64 |
Total Chips: | 128 |
Total Cores: | 512 |
Total Threads: | 1024 |
Total Memory: | 2304 GB |
Base Ranks Run: | 512 |
Minimum Peak Ranks: | -- |
Maximum Peak Ranks: | -- |
Software Summary | |
---|---|
C Compiler: | Intel C Compiler for Linux Version 11.1, Build 20091130 |
C++ Compiler: | Intel C++ Compiler for Linux Version 11.1, Build 20091130 |
Fortran Compiler: | Intel Fortran Compiler for Linux Version 11.1, Build 20091130 |
Base Pointers: | 64-bit |
Peak Pointers: | 64-bit |
MPI Library: | SGI MPT 1.25 |
Other MPI Info: | OFED-1.4.1 |
Pre-processors: | None |
Other Software: | None |
Compute Node Hardware | |
---|---|
Number of nodes: | 64 |
Uses of the node: | compute |
Vendor: | SGI |
Model: | SGI Altix ICE 8200EX (Intel Xeon X5560, 2.80 GHz) |
CPU Name: | Intel Xeon X5560 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 8 |
Cores per chip: | 4 |
Threads per core: | 2 |
CPU Characteristics: | Intel Turbo Boost Technology up to 3.2 GHz, 6.4 GT/s QPI, Hyper-Threading enabled |
CPU MHz: | 2800 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 256 KB I+D on chip per core |
L3 Cache: | 8 MB I+D on chip per chip |
Other Cache: | None |
Memory: | 36 GB (6*4GB + 6*2GB DDR3-1333 CL9 RDIMMs running at 1066 MHz. The 4GB DIMM is installed in DIMM slot A in each channel.) |
Disk Subsystem: | None |
Other Hardware: | None |
Adapter: | Mellanox MT26418 ConnectX IB DDR (PCIe x8 Gen2 5 GT/s) |
Number of Adapters: | 1 |
Slot Type: | PCIe x8 Gen2 |
Data Rate: | InfiniBand 4x DDR |
Ports Used: | 2 |
Interconnect Type: | InfiniBand |
Compute Node Software | |
---|---|
Adapter: | Mellanox MT26418 ConnectX IB DDR (PCIe x8 Gen2 5 GT/s) |
Adapter Driver: | OFED-1.4.1 |
Adapter Firmware: | 2.6.0 |
Operating System: | SUSE Linux Enterprise Server 10 (x86_64) SP2 Kernel 2.6.16.60-0.30-smp |
Local File System: | None |
Shared File System: | NFSv3 IPoIB |
System State: | Multi-user, run level 3 |
Other Software: | SGI ProPack 6 for Linux Service Pack 5, SGI Tempo V 1.9 |
File Server Node Hardware | |
---|---|
Number of nodes: | 1 |
Uses of the node: | fileserver |
Vendor: | SGI |
Model: | SGI Altix XE 240 (Intel Xeon 5140, 2.33 GHz) |
CPU Name: | Intel Xeon 5140 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 4 |
Cores per chip: | 2 |
Threads per core: | 1 |
CPU Characteristics: | 1333 MHz FSB |
CPU MHz: | 2333 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 4 MB I+D on chip per chip |
L3 Cache: | None |
Other Cache: | None |
Memory: | 16 GB (8*2GB DDR2-667 MHz DIMMs) |
Disk Subsystem: | 4.3 TB RAID 5, 48 x 146 GB SAS (Seagate Cheetah 15K.5) |
Other Hardware: | None |
Adapter: | Mellanox MT26418 ConnectX IB DDR (PCIe x8 Gen2 5 GT/s) |
Number of Adapters: | 2 |
Slot Type: | PCIe x8 Gen2 |
Data Rate: | InfiniBand 4x DDR |
Ports Used: | 2 |
Interconnect Type: | InfiniBand |
File Server Node Software | |
---|---|
Adapter: | Mellanox MT26418 ConnectX IB DDR (PCIe x8 Gen2 5 GT/s) |
Adapter Driver: | OFED-1.4.1 |
Adapter Firmware: | 2.3.0 |
Operating System: | SUSE Linux Enterprise Server 10 (x86_64) SP1 Kernel 2.6.16.54-0.2.5-smp |
Local File System: | xfs |
Shared File System: | -- |
System State: | Multi-user, run level 3 |
Other Software: | SGI ProPack 5 for Linux Service Pack 5 |
MPI Interconnect Hardware | |
---|---|
Vendor: | Mellanox Technologies |
Model: | MT26418 ConnectX |
Switch Model: | Mellanox MT48436 InfiniScale-IV |
Number of Switches: | 128 |
Number of Ports: | 36 |
Data Rate: | InfiniBand 4x QDR |
Firmware: | 4020001 |
Topology: | Bristle hypercube with double dimensional links |
Primary Use: | MPI traffic |
I/O Interconnect Hardware | |
---|---|
Vendor: | Mellanox Technologies |
Model: | MT26418 ConnectX |
Switch Model: | Mellanox MT48436 InfiniScale-IV |
Number of Switches: | 64 |
Number of Ports: | 36 |
Data Rate: | InfiniBand 4x QDR |
Firmware: | 4020001 |
Topology: | Bristle hypercube with double dimensional links |
Primary Use: | I/O traffic |
The config file option 'submit' was used.
Software environment:

    export MPI_REQUEST_MAX=65536
    export MPI_TYPE_MAX=32768
    export MPI_BUFS_THRESHOLD=1
    export MPI_DSM_DISTRIBUTE=yes
    export MPI_IB_RAILS=2
    ulimit -s unlimited

BIOS settings:

    AMI BIOS version 8.15
    Hyper-Threading Technology enabled (default)
    Intel Turbo Boost Technology enabled (default)
    Intel Turbo Boost Technology activated in the OS via:
        /etc/init.d/acpid start
        /etc/init.d/powersaved start
        powersave -f

Interconnect Data Rate: The system interconnect has DDR InfiniBand HCAs, while the switches and cables run at up to QDR rate.

Job Placement: Each MPI job was assigned to a topologically compact set of nodes, i.e. the minimal number of switches needed for the job: 2 switches for 16/32/64 ranks, 4 switches for 128 ranks, 8 switches for 256 ranks, 16 switches for 512 ranks, 32 switches for 1024 ranks, 64 switches for 2048 ranks, and 128 switches for 4096 ranks.
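For convenience, the sketch below collects the documented environment settings into a single runnable script and appends an illustrative launch line. It is not taken from the report: the actual run commands were generated by the SPEC harness through the 'submit' option, and the binary name, rank count, and launcher syntax shown here are placeholders.

```bash
#!/bin/bash
# Sketch only: the MPT environment settings documented in the notes above,
# followed by a hypothetical 512-rank launch (the real launch line is not
# given in the report).
export MPI_REQUEST_MAX=65536      # allow more simultaneously active MPI requests
export MPI_TYPE_MAX=32768         # allow more MPI derived datatypes
export MPI_BUFS_THRESHOLD=1       # MPT buffer-allocation threshold (see MPT docs)
export MPI_DSM_DISTRIBUTE=yes     # pin MPI ranks to CPUs on each node
export MPI_IB_RAILS=2             # use both InfiniBand rails (two ports per node)
ulimit -s unlimited               # remove the stack-size limit

# Placeholder launch; exact syntax depends on the MPI launcher in use.
mpirun -np 512 ./benchmark_binary
```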
Base Compiler Invocation | |
---|---|
C benchmarks: | icc |
C++ benchmarks (126.lammps): | icpc |
Fortran benchmarks: | ifort |
Benchmarks using both Fortran and C: | icc ifort |

Base Portability Flags | |
---|---|
121.pop2: | -DSPEC_MPI_CASE_FLAG |
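As an illustration of how the drivers and the portability flag above fit together, the lines below show hypothetical compile commands. The real command lines (including any optimization flags) are generated by the SPEC tools and are not reproduced in this report; the file names are invented for the example.

```bash
# Hypothetical compile lines using the drivers listed above; file names and
# the placement of the 121.pop2 define are illustrative, not from the report.
icc   -c some_c_file.c          # C benchmarks
icpc  -c some_cxx_file.cpp      # 126.lammps (the only C++ benchmark listed)
ifort -c some_fortran_file.f90  # Fortran benchmarks

# 121.pop2 uses both C and Fortran and needs the portability define above;
# it is shown on the C line here, but the report does not specify which
# compile lines carry it.
icc -c -DSPEC_MPI_CASE_FLAG some_pop2_file.c
```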