MPI2007 license: | 13 | Test date: | Mar-2010 |
---|---|---|---|
Test sponsor: | Intel Corporation | Hardware Availability: | Mar-2010 |
Tested by: | Pavel Shelepugin | Software Availability: | Feb-2010 |
Results appear in the order in which they were run. In the original report, bold underlined text indicates the median measurement. Only base results were measured; the peak columns of the original table are empty (see "Minimum/Maximum Peak Ranks: --" below).

Benchmark | Ranks | Run 1 Seconds | Run 1 Ratio | Run 2 Seconds | Run 2 Ratio | Run 3 Seconds | Run 3 Ratio
---|---|---|---|---|---|---|---
104.milc | 96 | 138 | 11.3 | 136 | 11.5 | 136 | 11.5
107.leslie3d | 96 | 466 | 11.2 | 466 | 11.2 | 466 | 11.2
113.GemsFDTD | 96 | 368 | 17.1 | 371 | 17.0 | 372 | 17.0
115.fds4 | 96 | 165 | 11.8 | 165 | 11.8 | 164 | 11.9
121.pop2 | 96 | 400 | 10.3 | 404 | 10.2 | 409 | 10.1
122.tachyon | 96 | 285 | 9.80 | 285 | 9.82 | 284 | 9.86
126.lammps | 96 | 278 | 10.5 | 280 | 10.4 | 278 | 10.5
127.wrf2 | 96 | 359 | 21.7 | 362 | 21.5 | 363 | 21.5
128.GAPgeofem | 96 | 146 | 14.1 | 146 | 14.1 | 146 | 14.1
129.tera_tf | 96 | 278 | 9.97 | 275 | 10.1 | 276 | 10.0
130.socorro | 96 | 227 | 16.8 | 222 | 17.2 | 219 | 17.4
132.zeusmp2 | 96 | 229 | 13.6 | 229 | 13.6 | 228 | 13.6
137.lu | 96 | 199 | 18.5 | 198 | 18.5 | 199 | 18.5
Hardware Summary | |
---|---|
Type of System: | Homogeneous |
Compute Node: | Discovery Node |
Interconnects: | IB Switch, Gigabit Ethernet |
File Server Node: | HOME |
Total Compute Nodes: | 3 |
Total Chips: | 12 |
Total Cores: | 96 |
Total Threads: | 96 |
Total Memory: | 384 GB |
Base Ranks Run: | 96 |
Minimum Peak Ranks: | -- |
Maximum Peak Ranks: | -- |
Software Summary | |
---|---|
C Compiler: | Intel C++ Compiler 11.1.064 for Linux |
C++ Compiler: | Intel C++ Compiler 11.1.064 for Linux |
Fortran Compiler: | Intel Fortran Compiler 11.1.064 for Linux |
Base Pointers: | 64-bit |
Peak Pointers: | 64-bit |
MPI Library: | Intel MPI Library 3.2.2.006 for Linux |
Other MPI Info: | None |
Pre-processors: | No |
Other Software: | None |
Hardware | |
---|---|
Number of nodes: | 3 |
Uses of the node: | compute |
Vendor: | Quanta |
Model: | QSSC-S4R |
CPU Name: | Intel Xeon X7560 |
CPU(s) orderable: | 1-4 chips |
Chips enabled: | 4 |
Cores enabled: | 32 |
Cores per chip: | 8 |
Threads per core: | 1 |
CPU Characteristics: | Intel Turbo Boost Technology up to 2.66 GHz, 6.4 GT/s QPI, Hyper-Threading disabled |
CPU MHz: | 2261 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 256 KB I+D on chip per core |
L3 Cache: | 24 MB I+D on chip per chip, 24 MB shared / 8 cores |
Other Cache: | None |
Memory: | 128 GB (dual-rank RDIMM 32x4-GB DDR3-1066 MHz) |
Disk Subsystem: | Seagate 400 GB ST3400755SS |
Other Hardware: | None |
Adapter: | Intel (ESB2) 82575EB Dual-Port Gigabit Ethernet Controller |
Number of Adapters: | 1 |
Slot Type: | PCI-Express x8 |
Data Rate: | 1Gbps Ethernet |
Ports Used: | 2 |
Interconnect Type: | Ethernet |
Adapter: | Mellanox MHQH29-XTC |
Number of Adapters: | 1 |
Slot Type: | PCIe x8 Gen2 |
Data Rate: | InfiniBand 4x QDR |
Ports Used: | 1 |
Interconnect Type: | InfiniBand |
Software | |
---|---|
Adapter: | Intel (ESB2) 82575EB Dual-Port Gigabit Ethernet Controller |
Adapter Driver: | e1000 |
Adapter Firmware: | None |
Adapter: | Mellanox MHQH29-XTC |
Adapter Driver: | OFED 1.4.2 |
Adapter Firmware: | 2.7.000 |
Operating System: | Red Hat EL 5.4, kernel 2.6.18-164 |
Local File System: | Linux/ext2 |
Shared File System: | NFS |
System State: | Multi-User |
Other Software: | PBS Pro 10.1 |
Hardware | |
---|---|
Number of nodes: | 1 |
Uses of the node: | fileserver |
Vendor: | Intel |
Model: | SSR212CC |
CPU Name: | Intel Xeon CPU |
CPU(s) orderable: | 2 chips |
Chips enabled: | 2 |
Cores enabled: | 2 |
Cores per chip: | 1 |
Threads per core: | 1 |
CPU Characteristics: | -- |
CPU MHz: | 2800 |
Primary Cache: | 12 KB I + 16 KB D on chip per chip |
Secondary Cache: | 1 MB I+D on chip per chip |
L3 Cache: | None |
Other Cache: | None |
Memory: | 6 GB |
Disk Subsystem: | 10 disks, 320 GB/disk, 2.6 TB total |
Other Hardware: | None |
Adapter: | Intel 82546GB Dual-Port Gigabit Ethernet Controller |
Number of Adapters: | 1 |
Slot Type: | PCI-Express x8 |
Data Rate: | 1Gbps Ethernet |
Ports Used: | 1 |
Interconnect Type: | Ethernet |
Software | |
---|---|
Adapter: | Intel 82546GB Dual-Port Gigabit Ethernet Controller |
Adapter Driver: | e1000 |
Adapter Firmware: | N/A |
Operating System: | Red Hat EL 4 Update 4 |
Local File System: | None |
Shared File System: | NFS |
System State: | Multi-User |
Other Software: | None |
Hardware | |
---|---|
Vendor: | Mellanox |
Model: | Mellanox MTS3600Q-1UNC |
Switch Model: | Mellanox MTS3600Q-1UNC |
Number of Switches: | 46 |
Number of Ports: | 36 |
Data Rate: | InfiniBand 4x QDR |
Firmware: | 7.1.000 |
Topology: | Fat tree |
Primary Use: | MPI traffic |
Hardware | |
---|---|
Vendor: | Force10 Networks |
Model: | Force10 S50, Force10 C300 |
Switch Model: | Force10 S50, Force10 C300 |
Number of Switches: | 15 |
Number of Ports: | 48 |
Data Rate: | 1Gbps Ethernet, 10Gbps Ethernet |
Firmware: | 8.2.1.0 |
Topology: | Fat tree |
Primary Use: | Cluster File System |
The config file option 'submit' was used.
MPI startup command: The mpirun command was used to start MPI jobs. This command starts an independent ring of mpd daemons, launches an MPI job, and shuts down the mpd ring when the job terminates.

BIOS settings:
Intel Hyper-Threading Technology (SMT): Disabled (default is Enabled)
Intel Turbo Boost Technology (Turbo): Enabled (default is Enabled)

RAM configuration: Compute nodes have 2x4-GB RDIMMs on each memory channel.

Network: Forty-six 36-port switches: 18 core switches and 28 leaf switches. Each leaf switch has one link to each core switch. The remaining 18 ports on 25 of the 28 leaf switches are used for compute nodes; on the remaining 3 leaf switches, the ports are used for file-server nodes and other peripherals.

Job placement: Each MPI job was assigned to a topologically compact set of nodes, i.e. the minimal number of leaf switches was used for each job: 1 switch for 32/64/96/128 ranks.

PBS Pro was used for job submission. It has no impact on performance. PBS Pro can be found at: http://www.altair.com
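The 'submit' option and mpirun startup described above could be expressed in a SPEC MPI2007 config file roughly as follows. This is a minimal sketch, not the tested configuration: `$ranks` and `$command` are the standard SPEC substitution variables, but the exact submit line used in this run is not reproduced in the report.

```
# Sketch of the config-file 'submit' option (assumed form, not the
# tested invocation). SPEC replaces $ranks with the rank count and
# $command with the benchmark command line at run time.
submit = mpirun -np $ranks $command
```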
Base Compiler Invocation | |
---|---|
C benchmarks: | mpiicc |
C++ benchmarks (126.lammps): | mpiicpc |
Fortran benchmarks: | mpiifort |
Benchmarks using both Fortran and C: | mpiicc mpiifort |

Base Portability Flags | |
---|---|
121.pop2: | -DSPEC_MPI_CASE_FLAG |
126.lammps: | -DMPICH_IGNORE_CXX_SEEK |
127.wrf2: | -DSPEC_MPI_CASE_FLAG -DSPEC_MPI_LINUX |
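In a SPEC MPI2007 config file, per-benchmark portability flags like those listed above are typically set in named benchmark sections. The fragment below is a sketch under that assumption; the section-header form and the choice of `PORTABILITY` vs. the language-specific `CXXPORTABILITY` variable here are illustrative, not copied from the tested config.

```
# Sketch only: per-benchmark portability settings matching the flags above.
121.pop2=default=default=default:
PORTABILITY = -DSPEC_MPI_CASE_FLAG

126.lammps=default=default=default:
CXXPORTABILITY = -DMPICH_IGNORE_CXX_SEEK

127.wrf2=default=default=default:
PORTABILITY = -DSPEC_MPI_CASE_FLAG -DSPEC_MPI_LINUX
```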