SPEC(R) MPIM2007 Summary
Linux Networx
Linux Networx LS-1
Fri Oct 12 21:09:29 2007

MPI2007 License: 021            Test date: Sep-2007
Test sponsor: Scali, Inc        Hardware availability: Apr-2007
Tested by: Scali, Inc           Software availability: Aug-2007

                 Base     Base       Base      Peak     Peak       Peak
Benchmarks       Ranks    Run Time   Ratio     Ranks    Run Time   Ratio
--------------   ------   ---------  -------   ------   ---------  -------
104.milc             32        545      2.87 *
104.milc             32        544      2.88 S
104.milc             32        545      2.87 S
107.leslie3d         32       1608      3.25 S
107.leslie3d         32       1610      3.24 S
107.leslie3d         32       1609      3.24 *
113.GemsFDTD         32       1057      5.97 S
113.GemsFDTD         32       1062      5.94 S
113.GemsFDTD         32       1057      5.97 *
115.fds4             32        662      2.95 *
115.fds4             32        662      2.95 S
115.fds4             32        662      2.95 S
121.pop2             32        939      4.39 S
121.pop2             32        942      4.38 S
121.pop2             32        941      4.39 *
122.tachyon          32        910      3.07 *
122.tachyon          32        910      3.07 S
122.tachyon          32        910      3.07 S
126.lammps           32        973      2.99 *
126.lammps           32        973      2.99 S
126.lammps           32        975      2.99 S
127.wrf2             32       1393      5.60 S
127.wrf2             32       1394      5.59 *
127.wrf2             32       1399      5.57 S
128.GAPgeofem        32        565      3.65 S
128.GAPgeofem        32        565      3.66 *
128.GAPgeofem        32        564      3.66 S
129.tera_tf          32       1007      2.75 S
129.tera_tf          32       1008      2.75 S
129.tera_tf          32       1008      2.75 *
130.socorro          32        803      4.76 *
130.socorro          32        802      4.76 S
130.socorro          32        803      4.76 S
132.zeusmp2          32        903      3.43 *
132.zeusmp2          32        902      3.44 S
132.zeusmp2          32        904      3.43 S
137.lu               32       1381      2.66 S
137.lu               32       1375      2.67 S
137.lu               32       1379      2.67 *
==============================================================================
104.milc             32        545      2.87 *
107.leslie3d         32       1609      3.24 *
113.GemsFDTD         32       1057      5.97 *
115.fds4             32        662      2.95 *
121.pop2             32        941      4.39 *
122.tachyon          32        910      3.07 *
126.lammps           32        973      2.99 *
127.wrf2             32       1394      5.59 *
128.GAPgeofem        32        565      3.66 *
129.tera_tf          32       1008      2.75 *
130.socorro          32        803      4.76 *
132.zeusmp2          32        903      3.43 *
137.lu               32       1379      2.67 *
(* = median of the three runs, used for the overall metric)

SPECmpiM_base2007 = 3.59
SPECmpiM_peak2007 = Not Run

BENCHMARK DETAILS
-----------------
Type of System:      Homogeneous
Total Compute Nodes: 8
Total Chips:         16
Total Cores:         32
Total Threads:       32
Total Memory:        64 GB
Base Ranks Run:      32
Minimum Peak Ranks:  --
Maximum Peak Ranks:  --
C Compiler:          QLogic PathScale C Compiler 3.0
C++ Compiler:        QLogic PathScale C++ Compiler 3.0
Fortran Compiler:    QLogic PathScale Fortran Compiler 3.0
Base Pointers:       64-bit
Peak Pointers:       Not Applicable
MPI Library:         Scali MPI Connect 5.5
Other MPI Info:      IB Gold VAPI
Pre-processors:      None
Other Software:      None

Node Description: Linux Networx LS-1
====================================

HARDWARE
--------
Number of nodes:      8
Uses of the node:     compute
Vendor:               Linux Networx, Inc.
Model:                LS-1
CPU Name:             Intel Xeon 5160
CPU(s) orderable:     1-2 chips
Chips enabled:        2
Cores enabled:        4
Cores per chip:       2
Threads per core:     1
CPU Characteristics:  1333 MHz FSB
CPU MHz:              3000
Primary Cache:        32 KB I + 32 KB D on chip per core
Secondary Cache:      4 MB I+D on chip per chip
L3 Cache:             None
Other Cache:          None
Memory:               8 GB (8 x 1 GB DIMMs)
Disk Subsystem:       250 GB SAS hard drive
Other Hardware:       None
Adapter:              Mellanox MHGA28-XTC
Number of Adapters:   1
Slot Type:            PCIe x8
Data Rate:            InfiniBand 4x DDR
Ports Used:           1
Interconnect Type:    InfiniBand

SOFTWARE
--------
Adapter:              Mellanox MHGA28-XTC
Adapter Driver:       IBGD 1.8.2
Adapter Firmware:     5.1.4
Operating System:     SLES9 SP3
Local File System:    Not applicable
Shared File System:   GPFS
System State:         multi-user
Other Software:       None

Node Description: Linux Networx Evolocity 1
===========================================

HARDWARE
--------
Number of nodes:      8
Uses of the node:     file server
Vendor:               Linux Networx, Inc.
Model:                Evolocity 1
CPU Name:             AMD Opteron 248
CPU(s) orderable:     1-2 chips
Chips enabled:        2
Cores enabled:        2
Cores per chip:       1
Threads per core:     1
CPU Characteristics:  --
CPU MHz:              2200
Primary Cache:        64 KB I + 64 KB D on chip per core
Secondary Cache:      1 MB I+D on chip per core
L3 Cache:             None
Other Cache:          None
Memory:               8 GB (8 x 1 GB DIMMs)
Disk Subsystem:       18 TB SAN interconnected by FC2
Other Hardware:       --
Adapter:              Mellanox MHXL-CF128-T
Number of Adapters:   1
Slot Type:            PCI-X
Data Rate:            InfiniBand 4x SDR
Ports Used:           1
Interconnect Type:    InfiniBand

SOFTWARE
--------
Adapter:              Mellanox MHXL-CF128-T
Adapter Driver:       IBGD 1.8.2
Adapter Firmware:     3.5.0
Operating System:     SLES9 SP3
Local File System:    Not applicable
Shared File System:   GPFS
System State:         multi-user
Other Software:       --

Interconnect Description: InfiniBand
====================================

HARDWARE
--------
Vendor:               QLogic
Model:                QLogic Silverstorm 9120 Fabric Director
Switch Model:         9120
Number of Switches:   1
Number of Ports:      144
Data Rate:            InfiniBand 4x SDR and InfiniBand 4x DDR
Firmware:             4.0.0.5.5
Topology:             Single switch (star)
Primary Use:          MPI and filesystem traffic

Submit Notes
------------
Scali MPI Connect's mpirun wrapper was used to submit the jobs.
Description of the switches:
  -aff manual:0x1:0x2:0x4:0x8   bind ranks N..N+3 to the cores corresponding
                                to the masks 0x1, 0x2, 0x4, and 0x8,
                                respectively, on each node
  -npn 4                        launch 4 processes per node
  -rsh rsh                      use rsh as the method to connect to nodes
  -mstdin none                  do not connect the processes' STDIN to anything
  -q                            quiet mode; no output from the launcher
  -machinefile                  file selecting the hosts to run on
  -net smp,ib                   prioritized list of networks used for
                                communication between processes

General Notes
-------------
Scali, Inc executed the benchmark at Linux Networx's Solution Center.
We are grateful to Linux Networx, and in particular Justin Wood, for their
support in finalizing the submissions.
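Taken together, the switches described in the Submit Notes assemble into a launch command along the following lines. This is only an illustrative sketch: the machine-file name and the benchmark binary are placeholders, and the exact command line used by the tester is not part of this report.

```shell
# Hypothetical reconstruction of the Scali MPI Connect submit command from
# the switches documented above. MACHINEFILE and BINARY are placeholders.
MACHINEFILE=machines.txt     # placeholder: list of the 8 compute-node hosts
BINARY=./benchmark_binary    # placeholder: a SPEC-built benchmark executable

# 8 nodes x 4 processes per node (-npn 4) gives the 32 base ranks reported.
MPIRUN_CMD="/opt/scali/bin/mpirun \
  -aff manual:0x1:0x2:0x4:0x8 \
  -npn 4 \
  -rsh rsh \
  -mstdin none \
  -q \
  -machinefile $MACHINEFILE \
  -net smp,ib \
  $BINARY"

echo "$MPIRUN_CMD"
```

The `-aff manual` masks pin one rank to each of the four cores per node, which keeps ranks from migrating between the two dual-core chips.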
Base Compiler Invocation
------------------------
C benchmarks:
  /opt/scali/bin/mpicc -ccl pathcc

C++ benchmarks:
  126.lammps: /opt/scali/bin/mpicc -ccl pathCC

Fortran benchmarks:
  /opt/scali/bin/mpif77 -ccl pathf90

Benchmarks using both Fortran and C:
  /opt/scali/bin/mpicc -ccl pathcc
  /opt/scali/bin/mpif77 -ccl pathf90

Base Portability Flags
----------------------
  104.milc:      -DSPEC_MPI_LP64
  115.fds4:      -DSPEC_MPI_LC_TRAILING_DOUBLE_UNDERSCORE -DSPEC_MPI_LP64
  121.pop2:      -DSPEC_MPI_DOUBLE_UNDERSCORE -DSPEC_MPI_LP64
  122.tachyon:   -DSPEC_MPI_LP64
  127.wrf2:      -DF2CSTYLE -DSPEC_MPI_DOUBLE_UNDERSCORE -DSPEC_MPI_LINUX
                 -DSPEC_MPI_LP64
  128.GAPgeofem: -DSPEC_MPI_LP64
  130.socorro:   -fno-second-underscore -DSPEC_MPI_LP64
  132.zeusmp2:   -DSPEC_MPI_LP64

Base Optimization Flags
-----------------------
C benchmarks:
  -march=core -Ofast -OPT:malloc_alg=1

C++ benchmarks:
  126.lammps: -march=core -O3 -OPT:Ofast -CG:local_fwd_sched=on

Fortran benchmarks:
  -march=core -O3 -OPT:Ofast -OPT:malloc_alg=1 -LANG:copyinout=off

Benchmarks using both Fortran and C:
  -march=core -Ofast -OPT:malloc_alg=1 -O3 -OPT:Ofast -LANG:copyinout=off

Base Other Flags
----------------
C benchmarks:
  -IPA:max_jobs=4

C++ benchmarks:
  126.lammps: -IPA:max_jobs=4

Fortran benchmarks:
  -IPA:max_jobs=4

Benchmarks using both Fortran and C:
  -IPA:max_jobs=4

The flags file that was used to format this result can be browsed at
  http://www.spec.org/mpi2007/flags/MPI2007_flags.20071107.00.html
You can also download the XML flags source by saving the following link:
  http://www.spec.org/mpi2007/flags/MPI2007_flags.20071107.00.xml

SPEC and SPEC MPI are registered trademarks of the Standard Performance
Evaluation Corporation. All other brand and product names appearing in this
result are trademarks or registered trademarks of their respective holders.

-----------------------------------------------------------------------------
For questions about this result, please contact the tester.
For other inquiries, please contact webmaster@spec.org.
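For illustration, the base invocation, portability, optimization, and other-flag sections above combine into one complete compile line per source file. The sketch below assembles such a line for a C benchmark (104.milc); the source and output file names are hypothetical and the real build is driven by the SPEC tools, not by hand.

```shell
# Hypothetical worked example: full base compile line for 104.milc,
# assembled from the flag sections of this report. File names are placeholders.
CC="/opt/scali/bin/mpicc -ccl pathcc"               # base C invocation
PORTABILITY="-DSPEC_MPI_LP64"                       # 104.milc portability flag
OPTIMIZE="-march=core -Ofast -OPT:malloc_alg=1"     # base optimization, C
OTHER="-IPA:max_jobs=4"                             # base other flags, C

COMPILE_CMD="$CC $PORTABILITY $OPTIMIZE $OTHER -o milc source.c"
echo "$COMPILE_CMD"
```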
Copyright 2006-2010 Standard Performance Evaluation Corporation
Tested with SPEC MPI2007 v1.0.
Report generated on Tue Jul 22 13:33:09 2014 by MPI2007 ASCII formatter v1463.
Originally published on 7 November 2007.