IBM Corporation IBM Cloud AMD EPYC 7F72 Bare Metal Server
185083 SPECjbb2015-MultiJVM max-jOPS
72471 SPECjbb2015-MultiJVM critical-jOPS
Tested by: IBM Corporation Test Sponsor: IBM Corporation Test location: Austin, TX Test date: Apr 15, 2020
SPEC license #: 11 Hardware Availability: Jun-2020 Software Availability: Jan-2020 Publication: Tue May 19 11:13:37 EDT 2020
Benchmark Results Summary
 
Overall Throughput RT curve
Overall SUT (System Under Test) Description
Vendor IBM Corporation
Vendor URL https://cloud.ibm.com
System Source Single Supplier
System Designation Server Rack
Total Systems 1
All SUT Systems Identical YES
Total Nodes 1
All Nodes Identical YES
Nodes Per System 1
Total Chips 2
Total Cores 48
Total Threads 96
Total Memory Amount (GB) 512
Total OS Images 1
SW Environment Non-virtual
 
Hardware hw_1
Name IBM Cloud AMD EPYC 7F72 Bare Metal Server
Vendor IBM Corporation
Vendor URL https://cloud.ibm.com
Available Jun-2020
Model None
Form Factor 2U
CPU Name AMD EPYC 7F72
CPU Characteristics 24 Core, 3.20 GHz, 192 MB L3 cache (Turbo Boost Technology up to 3.70 GHz)
Number of Systems 1
Nodes Per System 1
Chips Per System 2
Cores Per System 48
Cores Per Chip 24
Threads Per System 96
Threads Per Core 2
Version 1.0 03/25/2020
CPU Frequency (MHz) 3200
Primary Cache 32 KB (I) + 32 KB (D) per core
Secondary Cache 512 KB (I+D) per core
Tertiary Cache 192 MB (I+D) on chip per chip
Other Cache None
Disk 1 x 960 GB SSD
File System ext4
Memory Amount (GB) 512
# and size of DIMM(s) 16 x 32 GB
Memory Details 32 GB 2Rx4 PC4-2933-R
# and type of Network Interface Cards (NICs) 4 x 10 GbE
Power Supply Quantity and Rating (W) 2 x 1600W
Other Hardware None
Cabinet/Housing/Enclosure None
Shared Description None
Shared Comment None
Notes

  • NA: The test sponsor attests, as of date of publication, that CVE-2017-5754 (Meltdown) is mitigated in the system as tested and documented.
  • Yes: The test sponsor attests, as of date of publication, that CVE-2017-5753 (Spectre variant 1) is mitigated in the system as tested and documented.
  • Yes: The test sponsor attests, as of date of publication, that CVE-2017-5715 (Spectre variant 2) is mitigated in the system as tested and documented.
    Other Hardware network_1
    Name None
    Vendor None
    Vendor URL None
    Version None
    Available None
    Bitness None
    Notes None

    Operating System os_1
    Name Ubuntu 18.04.4 LTS
    Vendor Ubuntu
    Vendor URL http://ubuntu.com/
    Version 4.15.0-91-generic
    Available Feb-2019
    Bitness 64
    Notes None

    Java Virtual Machine jvm_1
    Name Oracle Java SE 13.0.2
    Vendor Oracle
    Vendor URL http://oracle.com/
    Version Java HotSpot 64-bit Server VM, version 13.0.2
    Available Jan-2020
    Bitness 64
    Notes None

    Other Software other_1
    Name None
    Vendor None
    Vendor URL None
    Version None
    Available None
    Bitness None
    Notes None

    Hardware
    OS Images os_Image_1(1)
    Hardware Description hw_1
    Number of Systems 1
    SW Environment non-virtual
    Tuning

    BIOS Settings:

    • Set Nodes per Socket to 2 [NPS2]
    • SMT Control = Auto [Auto means SMT Enabled]
    • Determinism Control = Manual
    • Determinism Slider = Power
    • L1 Stream HW Prefetcher = Disable
    • L2 Stream HW Prefetcher = Disable

    Notes None
    OS Image os_Image_1
    JVM Instances jvm_Ctr_1(1), jvm_Backend_1(12), jvm_TxInjector_1(12)
    OS Image Description os_1
    Tuning

    • cpupower -c all frequency-set -g performance
    • tuned-adm profile latency-performance
    • echo 10000000 > /proc/sys/kernel/sched_min_granularity_ns
    • echo 15000000 > /proc/sys/kernel/sched_wakeup_granularity_ns
    • echo 3000 > /proc/sys/kernel/sched_migration_cost_ns
    • echo 990000 > /proc/sys/kernel/sched_rt_runtime_us
    • echo 100000 > /proc/sys/kernel/sched_latency_ns
    • echo 10000 > /proc/sys/vm/dirty_expire_centisecs
    • echo 1500 > /proc/sys/vm/dirty_writeback_centisecs
    • echo 40 > /proc/sys/vm/dirty_ratio
    • echo 10 > /proc/sys/vm/dirty_background_ratio
    • echo 10 > /proc/sys/vm/swappiness
    • echo 0 > /proc/sys/kernel/numa_balancing
    • echo always > /sys/kernel/mm/transparent_hugepage/enabled
    • echo always > /sys/kernel/mm/transparent_hugepage/defrag
    • ulimit -n 1024000
    • UserTasksMax=970000
    • DefaultTasksMax=970000

    Notes None
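
    For reference only, a rough sketch (an assumption about packaging, not part of the submitted configuration) of how the /proc/sys values listed above could be made persistent on Ubuntu 18.04 via sysctl; the cpupower, tuned-adm, transparent_hugepage and ulimit entries have no sysctl equivalent and would remain boot-time commands or limits settings:

    • Contents of a hypothetical /etc/sysctl.d/99-specjbb.conf:
        kernel.sched_min_granularity_ns = 10000000
        kernel.sched_wakeup_granularity_ns = 15000000
        kernel.sched_migration_cost_ns = 3000
        kernel.sched_rt_runtime_us = 990000
        kernel.sched_latency_ns = 100000
        vm.dirty_expire_centisecs = 10000
        vm.dirty_writeback_centisecs = 1500
        vm.dirty_ratio = 40
        vm.dirty_background_ratio = 10
        vm.swappiness = 10
        kernel.numa_balancing = 0
    • UserTasksMax=970000 and DefaultTasksMax=970000 correspond to systemd settings, typically placed in /etc/systemd/logind.conf and /etc/systemd/system.conf respectively.
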
    JVM Instance jvm_Ctr_1
    Parts of Benchmark Controller
    JVM Instance Description jvm_1
    Command Line

    -Xms3g -Xmx3g -Xmn2g -XX:+UseParallelOldGC -XX:ParallelGCThreads=1 -XX:CICompilerCount=2

    Tuning None
    Notes

    Used numactl to interleave memory across all NUMA nodes on the driver system

    • numactl --interleave=all
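
    For illustration only, a minimal sketch of how the interleave policy and the Controller options above would combine into a single launch line. The specjbb2015.jar name and the MULTICONTROLLER mode are assumed from the kit's usual MultiJVM conventions; the actual run scripts are not part of this disclosure:

    • numactl --interleave=all java -Xms3g -Xmx3g -Xmn2g -XX:+UseParallelOldGC -XX:ParallelGCThreads=1 -XX:CICompilerCount=2 -jar specjbb2015.jar -m MULTICONTROLLER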

    JVM Instance jvm_Backend_1
    Parts of Benchmark Backend
    JVM Instance Description jvm_1
    Command Line

    -Xms31g -Xmx31g -Xmn29g -server -XX:MetaspaceSize=256m -XX:AllocatePrefetchInstr=2 -XX:LargePageSizeInBytes=2m -XX:-UsePerfData -XX:-UseAdaptiveSizePolicy -XX:+AlwaysPreTouch -XX:-UseBiasedLocking -XX:+UseLargePages -XX:+UseParallelOldGC -XX:SurvivorRatio=23 -XX:TargetSurvivorRatio=98 -XX:ParallelGCThreads=8 -XX:MaxTenuringThreshold=5 -XX:InitialCodeCacheSize=25m -XX:MaxInlineSize=900 -XX:FreqInlineSize=900 -XX:LoopUnrollLimit=30 -XX:LoopMaxUnroll=6 -XX:CICompilerCount=2 -XX:+UseGCTaskAffinity -XX:+UseTransparentHugePages -XX:ParGCArrayScanChunk=3584 -XX:InlineSmallCode=3000 -XX:AutoBoxCacheMax=5000

    Tuning

    Used numactl to affinitize each Backend JVM to 8 threads

    • numactl --physcpubind=0-3,48-51 --localalloc
    • numactl --physcpubind=4-7,52-55 --localalloc
    • numactl --physcpubind=8-11,56-59 --localalloc
    • numactl --physcpubind=12-15,60-63 --localalloc
    • numactl --physcpubind=16-19,64-67 --localalloc
    • numactl --physcpubind=20-23,68-71 --localalloc
    • numactl --physcpubind=24-27,72-75 --localalloc
    • numactl --physcpubind=28-31,76-79 --localalloc
    • numactl --physcpubind=32-35,80-83 --localalloc
    • numactl --physcpubind=36-39,84-87 --localalloc
    • numactl --physcpubind=40-43,88-91 --localalloc
    • numactl --physcpubind=44-47,92-95 --localalloc

    Notes "Notes here"
    JVM Instance jvm_TxInjector_1
    Parts of Benchmark TxInjector
    JVM Instance Description jvm_1
    Command Line

    -Xms3g -Xmx3g -Xmn2g -XX:+UseParallelOldGC -XX:ParallelGCThreads=1 -XX:CICompilerCount=2

    Tuning None
    Notes

    Used numactl to affinitize each Transaction Injector JVM to 8 threads

    • numactl --physcpubind=0-3,48-51 --localalloc
    • numactl --physcpubind=4-7,52-55 --localalloc
    • numactl --physcpubind=8-11,56-59 --localalloc
    • numactl --physcpubind=12-15,60-63 --localalloc
    • numactl --physcpubind=16-19,64-67 --localalloc
    • numactl --physcpubind=20-23,68-71 --localalloc
    • numactl --physcpubind=24-27,72-75 --localalloc
    • numactl --physcpubind=28-31,76-79 --localalloc
    • numactl --physcpubind=32-35,80-83 --localalloc
    • numactl --physcpubind=36-39,84-87 --localalloc
    • numactl --physcpubind=40-43,88-91 --localalloc
    • numactl --physcpubind=44-47,92-95 --localalloc
    max-jOPS = jOPS passed before the First Failure
    jOPS        183093   185083   187073   189063   191053
    Pass/Fail   Pass     Pass     Fail     Fail     Fail
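
    Reading the step levels above: 185083 jOPS is the last level that passed before the first failing level (187073 jOPS), so max-jOPS = 185083.
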
    critical-jOPS = Geomean ( jOPS @ 10000; 25000; 50000; 75000; 100000; SLAs )
    Response time percentile is 99-th
    SLA (us)    10000    25000    50000    75000    100000   Geomean
    jOPS        50748    68660    78610    80601    90551    72471
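
    Written out, critical-jOPS is the geometric mean of the five SLA-level jOPS values above:
    critical-jOPS = (50748 × 68660 × 78610 × 80601 × 90551)^(1/5) ≈ 72471
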
    Percentile (entries are Probes jOPS / Total jOPS)
                  10-th             50-th             90-th             95-th             99-th             100-th
    500us         3980 / 5970       - / 1990          - / 1990          - / 1990          - / 1990          - / 1990
    1000us        19901 / 21892     7961 / 9951       3980 / 5970       1990 / 3980       - / 1990          - / 1990
    5000us        119408 / 121399   57714 / 59704     35823 / 37813     31842 / 33832     25872 / 27862     5970 / 1990
    10000us       125379 / 127369   107468 / 109458   71645 / 73635     63684 / 65675     49753 / 51744     13931 / 1990
    25000us       129359 / 131349   123389 / 125379   107468 / 109458   101497 / 103487   71645 / 65675     19901 / 1990
    50000us       131349 / 133339   127369 / 129359   115428 / 117418   107468 / 109458   77615 / 79606     19901 / 1990
    75000us       133339 / 135330   129359 / 131349   119408 / 121399   113438 / 115428   79606 / 81596     19901 / 1990
    100000us      135330 / 137320   131349 / 133339   123389 / 125379   117418 / 119408   89556 / 91546     19901 / 1990
    200000us      163191 / 151251   137320 / 139310   131349 / 133339   129359 / 131349   123389 / 125379   33832 / 1990
    500000us      181103 / 183093   163191 / 165182   149260 / 151251   145280 / 147270   139310 / 141300   105477 / 93537
    1000000us     185083 / -        183093 / 185083   177122 / 179113   173142 / 175132   163191 / 161201   133339 / 135330
    Request Mix Accuracy
    Note: (Actual % in the Mix - Expected % in the Mix) must be within:
    • 'Main Tx' limit of +/-5.0% for the requests whose expected % in the mix is >= 10.0%
    • 'Minor Tx' limit of +/-1.0% for the requests whose expected % in the mix is < 10.0%
    There were no non-critical failures in Response Time curve building
    Delay between status pings
    IR/PR Accuracy
    This section lists only the properties explicitly set by the user
    Property Name Default Controller Group1.Backend.beJVM Group1.TxInjector.txiJVM1 Group10.Backend.beJVM Group10.TxInjector.txiJVM1 Group11.Backend.beJVM Group11.TxInjector.txiJVM1 Group12.Backend.beJVM Group12.TxInjector.txiJVM1 Group2.Backend.beJVM Group2.TxInjector.txiJVM1 Group3.Backend.beJVM Group3.TxInjector.txiJVM1 Group4.Backend.beJVM Group4.TxInjector.txiJVM1 Group5.Backend.beJVM Group5.TxInjector.txiJVM1 Group6.Backend.beJVM Group6.TxInjector.txiJVM1 Group7.Backend.beJVM Group7.TxInjector.txiJVM1 Group8.Backend.beJVM Group8.TxInjector.txiJVM1 Group9.Backend.beJVM Group9.TxInjector.txiJVM1
    specjbb.comm.connect.client.pool.size 256 16 24 32 24 32 24 32 24 32 24 32 24 32 24 32 24 32 24 32 24 32 24 32 24 32
    specjbb.comm.connect.selector.runner.count 0 1
    specjbb.comm.connect.timeouts.connect 60000 90000
    specjbb.comm.connect.timeouts.read 60000 90000
    specjbb.comm.connect.timeouts.write 60000 90000
    specjbb.comm.connect.worker.pool.max 256 16 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32
    specjbb.controller.handshake.period 5000 20000
    specjbb.controller.handshake.timeout 600000 90000
    specjbb.customerDriver.threads 64 {probe=95, saturate=350, service=64}
    specjbb.forkjoin.workers 96 {Tier1=64, Tier2=4, Tier3=8}
    specjbb.group.count 1 12
    specjbb.heartbeat.period 10000 2000
    specjbb.heartbeat.threshold 100000 90000
    specjbb.mapreducer.pool.size 96 4
    specjbb.txi.pergroup.count 1 1
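
    As a hedged illustration of how such non-default values are supplied (the exact mechanism used for this run is not shown in the disclosure), SPECjbb2015 configuration properties are typically given either in the kit's property file or as Java system properties on the launch line, for example:

    • java -Dspecjbb.group.count=12 -Dspecjbb.heartbeat.period=2000 -jar specjbb2015.jar -m MULTICONTROLLER
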
     
    Level: COMPLIANCE
    Check                            Agent   Result
    Check properties on compliance   All     PASSED

    Level: CORRECTNESS
    Check                            Agent   Result
    Compare SM and HQ Inventory      All     PASSED
    High-bound (max attempted) is 199014 IR
    High-bound (settled) is 192668 IR