Quanta Cloud Technology QuantaGrid D44N-1U
1144744 SPECjbb2015-MultiJVM max-jOPS
526162 SPECjbb2015-MultiJVM critical-jOPS
Tested by: Quanta Computer Inc.  Test sponsor: Quanta Computer Inc.  Test location: Taoyuan, Taiwan (R.O.C.)  Test date: September 18, 2024
SPEC license #: 9050  Hardware availability: Oct-2024  Software availability: Sep-2024  Publication: MMM DD, YYYY
Benchmark Results Summary
 
Overall Throughput RT curve
Overall SUT (System Under Test) Description
Vendor: Quanta Cloud Technology
Vendor URL: http://qct.io
System Source: Single Supplier
System Designation: Server Rack
Total Systems: 1
All SUT Systems Identical: YES
Total Nodes: 1
All Nodes Identical: YES
Nodes Per System: 1
Total Chips: 2
Total Cores: 256
Total Threads: 512
Total Memory Amount (GB): 1536
Total OS Images: 1
SW Environment: Non-virtual
 
Hardware hw_1
Name: QuantaGrid D44N-1U
Vendor: Quanta Cloud Technology
Vendor URL: http://qct.io
Available: Oct-2024
Model: QuantaGrid D44N-1U
Form Factor: 1U
CPU Name: AMD EPYC 9755
CPU Characteristics: 128 cores, 2.7 GHz, 512 MB L3 cache (max. boost clock up to 4.1 GHz)
Number of Systems: 1
Nodes Per System: 1
Chips Per System: 2
Cores Per System: 256
Cores Per Chip: 128
Threads Per System: 512
Threads Per Core: 2
Version: 3B02 09/09/2024
CPU Frequency (MHz): 2700
Primary Cache: 32 KB (I) + 48 KB (D) per core
Secondary Cache: 1 MB (I+D) per core
Tertiary Cache: 512 MB (I+D) per chip
Other Cache: None
Disk: 1 x 960 GB NVMe SSD
File System: btrfs
Memory Amount (GB): 1536
# and size of DIMM(s): 24 x 64 GB
Memory Details: 64 GB 2Rx4 DDR5-6400, running at 6000 MHz
# and type of Network Interface Cards (NICs): Intel 10 Gigabit Ethernet X710-AT2, 2-port
Power Supply Quantity and Rating (W): 1 x 1600
Other Hardware: None
Cabinet/Housing/Enclosure: None
Shared Description: None
Shared Comment: None
Notes
  • NA: The test sponsor attests, as of date of publication, that CVE-2017-5754 (Meltdown) is mitigated in the system as tested and documented.
  • Yes: The test sponsor attests, as of date of publication, that CVE-2017-5753 (Spectre variant 1) is mitigated in the system as tested and documented.
  • Yes: The test sponsor attests, as of date of publication, that CVE-2017-5715 (Spectre variant 2) is mitigated in the system as tested and documented.
Other Hardware network_1
Name: None
Vendor: None
Vendor URL: None
Version: None
Available: None
Bitness: None
Notes: None
Operating System os_1
Name: SUSE Linux Enterprise Server 15 SP6
Vendor: SUSE
Vendor URL: http://suse.com/
Version: 6.4.0-150600.21-default
Available: Jun-2024
Bitness: 64
Notes: None
Java Virtual Machine jvm_1
Name: Oracle Java SE 22.0.2
Vendor: Oracle
Vendor URL: http://oracle.com/
Version: Java HotSpot 64-bit Server VM, version 22.0.2
Available: Jul-2024
Bitness: 64
Notes: None
Other Software other_1
Name: None
Vendor: None
Vendor URL: None
Version: None
Available: None
Bitness: None
Notes: None
Hardware
OS Images: os_Image_1(1)
Hardware Description: hw_1
Number of Systems: 1
SW Environment: non-virtual
Tuning
  • L1 Stream HW Prefetcher : Disabled
  • L2 Stream HW Prefetcher : Disabled
  • NUMA Nodes Per Socket : NPS4
  • Power Profile Selection : High Performance Mode
  • Determinism Slider : Power Determinism
  • ACPI CST C2 Latency : 18
  • PPT Control : 500
  • TDP Control : 500
Notes: None
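With NPS4 on a two-socket system, the OS should enumerate eight NUMA nodes (four per socket). A quick way to confirm that the BIOS setting took effect, as a minimal sketch using standard Linux tools (not part of the benchmark kit):

  # Expect 8 "node N cpus:" lines with NPS4 on 2 sockets
  numactl --hardware | grep -c '^node [0-9]* cpus:'
  # Socket count and per-node CPU lists
  lscpu | grep -iE 'socket|numa'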
OS Image os_Image_1
JVM Instances: jvm_Ctr_1(1), jvm_Backend_1(32), jvm_TxInjector_1(32)
OS Image Description: os_1
Tuning

  • cpupower -c all frequency-set -g performance
  • tuned-adm profile throughput-performance

  • echo 960000 > /proc/sys/kernel/sched_rt_runtime_us
  • echo 800000000 > /proc/sys/kernel/sched_latency_ns
  • echo 40000 > /proc/sys/kernel/sched_migration_cost_ns
  • echo 910000000 > /proc/sys/kernel/sched_min_granularity_ns
  • echo 2000000 > /proc/sys/kernel/sched_wakeup_granularity_ns
  • echo 9000 > /proc/sys/kernel/sched_nr_migrate
  • echo 10000 > /proc/sys/vm/dirty_expire_centisecs
  • echo 1500 > /proc/sys/vm/dirty_writeback_centisecs
  • echo 40 > /proc/sys/vm/dirty_ratio
  • echo 10 > /proc/sys/vm/dirty_background_ratio
  • echo 10 > /proc/sys/vm/swappiness
  • echo 0 > /proc/sys/kernel/numa_balancing
  • echo 0 > /proc/sys/vm/numa_stat
  • echo always > /sys/kernel/mm/transparent_hugepage/enabled
  • echo always > /sys/kernel/mm/transparent_hugepage/defrag

Notes: None
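The tuning list above mixes one-shot commands with procfs/sysfs writes. A minimal consolidated sketch of the persistent sysctl equivalent, assuming a kernel that still exposes these scheduler knobs under /proc/sys (on mainline kernels >= 5.13 several of them moved to /sys/kernel/debug/sched, so SUSE-specific behavior is assumed here):

  # /etc/sysctl.d/99-benchmark.conf -- sketch; values copied from the list above
  kernel.sched_rt_runtime_us = 960000
  kernel.sched_latency_ns = 800000000
  kernel.sched_migration_cost_ns = 40000
  kernel.sched_min_granularity_ns = 910000000
  kernel.sched_wakeup_granularity_ns = 2000000
  kernel.sched_nr_migrate = 9000
  vm.dirty_expire_centisecs = 10000
  vm.dirty_writeback_centisecs = 1500
  vm.dirty_ratio = 40
  vm.dirty_background_ratio = 10
  vm.swappiness = 10
  kernel.numa_balancing = 0
  vm.numa_stat = 0

The transparent-hugepage settings live in sysfs rather than sysctl, so the two "echo always" writes (and the cpupower and tuned-adm commands) still have to run separately, e.g. from a boot script.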
JVM Instance jvm_Ctr_1
Parts of Benchmark: Controller
JVM Instance Description: jvm_1
Command Line:

-Xms8g -Xmx8g -Xmn6g -XX:+UseParallelGC -XX:ParallelGCThreads=1 -XX:CICompilerCount=2

Tuning

Used numactl to interleave the Controller's memory across all NUMA nodes

  • numactl --interleave=all

Notes: None
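Combining the command line and tuning above, the Controller launch plausibly looks like the line below. This is a sketch: specjbb2015.jar and the MULTICONTROLLER mode are standard SPECjbb2015 kit conventions, but the actual run script is not part of this report.

  numactl --interleave=all java -Xms8g -Xmx8g -Xmn6g -XX:+UseParallelGC \
      -XX:ParallelGCThreads=1 -XX:CICompilerCount=2 \
      -jar specjbb2015.jar -m MULTICONTROLLER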
JVM Instance jvm_Backend_1
Parts of Benchmark: Backend
JVM Instance Description: jvm_1
Command Line:

-Xms31g -Xmx31g -Xmn29g -XX:AllocatePrefetchInstr=2 -XX:+UseParallelGC -XX:ParallelGCThreads=16 -XX:LargePageSizeInBytes=2m -XX:-UseAdaptiveSizePolicy -XX:+AlwaysPreTouch -XX:+UseLargePages -XX:SurvivorRatio=12 -XX:TargetSurvivorRatio=95 -XX:MaxTenuringThreshold=15 -XX:InlineSmallCode=11k -XX:MaxGCPauseMillis=100 -XX:LoopUnrollLimit=200 -XX:+UseTransparentHugePages -XX:TLABAllocationWeight=2 -XX:ThreadStackSize=140 -XX:CompileThresholdScaling=120 -XX:CICompilerCount=4 -XX:AutoBoxCacheMax=32 -XX:OnStackReplacePercentage=100 -XX:TLABSize=1m -XX:MinTLABSize=1m -XX:-ResizeTLAB -XX:TLABWasteTargetPercent=1 -XX:TLABWasteIncrement=1 -XX:YoungPLABSize=1m -XX:OldPLABSize=1m

Tuning

Used numactl to bind each Backend JVM to 8 cores / 16 hardware threads; the 32 bindings below follow a fixed stride (see the generator sketch after the list)

  • Group1: numactl --physcpubind=0-7,256-263 --localalloc
  • Group2: numactl --physcpubind=8-15,264-271 --localalloc
  • Group3: numactl --physcpubind=16-23,272-279 --localalloc
  • Group4: numactl --physcpubind=24-31,280-287 --localalloc
  • Group5: numactl --physcpubind=32-39,288-295 --localalloc
  • Group6: numactl --physcpubind=40-47,296-303 --localalloc
  • Group7: numactl --physcpubind=48-55,304-311 --localalloc
  • Group8: numactl --physcpubind=56-63,312-319 --localalloc
  • Group9: numactl --physcpubind=64-71,320-327 --localalloc
  • Group10: numactl --physcpubind=72-79,328-335 --localalloc
  • Group11: numactl --physcpubind=80-87,336-343 --localalloc
  • Group12: numactl --physcpubind=88-95,344-351 --localalloc
  • Group13: numactl --physcpubind=96-103,352-359 --localalloc
  • Group14: numactl --physcpubind=104-111,360-367 --localalloc
  • Group15: numactl --physcpubind=112-119,368-375 --localalloc
  • Group16: numactl --physcpubind=120-127,376-383 --localalloc
  • Group17: numactl --physcpubind=128-135,384-391 --localalloc
  • Group18: numactl --physcpubind=136-143,392-399 --localalloc
  • Group19: numactl --physcpubind=144-151,400-407 --localalloc
  • Group20: numactl --physcpubind=152-159,408-415 --localalloc
  • Group21: numactl --physcpubind=160-167,416-423 --localalloc
  • Group22: numactl --physcpubind=168-175,424-431 --localalloc
  • Group23: numactl --physcpubind=176-183,432-439 --localalloc
  • Group24: numactl --physcpubind=184-191,440-447 --localalloc
  • Group25: numactl --physcpubind=192-199,448-455 --localalloc
  • Group26: numactl --physcpubind=200-207,456-463 --localalloc
  • Group27: numactl --physcpubind=208-215,464-471 --localalloc
  • Group28: numactl --physcpubind=216-223,472-479 --localalloc
  • Group29: numactl --physcpubind=224-231,480-487 --localalloc
  • Group30: numactl --physcpubind=232-239,488-495 --localalloc
  • Group31: numactl --physcpubind=240-247,496-503 --localalloc
  • Group32: numactl --physcpubind=248-255,504-511 --localalloc
Notes: None
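The stride is regular: group i (1-based) is pinned to physical cores 8(i-1)..8(i-1)+7 plus their SMT siblings offset by 256, with node-local memory allocation. A short sketch that regenerates the exact list above:

  # Prints the Group1..Group32 numactl bindings shown above
  for i in $(seq 0 31); do
      lo=$((i * 8)); hi=$((lo + 7))
      echo "Group$((i + 1)): numactl --physcpubind=${lo}-${hi},$((lo + 256))-$((hi + 256)) --localalloc"
  done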
JVM Instance jvm_TxInjector_1
Parts of Benchmark: TxInjector
JVM Instance Description: jvm_1
Command Line:

-Xms8g -Xmx8g -Xmn6g -XX:+UseParallelGC -XX:ParallelGCThreads=1 -XX:CICompilerCount=2

Tuning

Used numactl to bind each TxInjector JVM to the same 8 cores / 16 hardware threads as its corresponding Backend JVM

  • Group1: numactl --physcpubind=0-7,256-263 --localalloc
  • Group2: numactl --physcpubind=8-15,264-271 --localalloc
  • Group3: numactl --physcpubind=16-23,272-279 --localalloc
  • Group4: numactl --physcpubind=24-31,280-287 --localalloc
  • Group5: numactl --physcpubind=32-39,288-295 --localalloc
  • Group6: numactl --physcpubind=40-47,296-303 --localalloc
  • Group7: numactl --physcpubind=48-55,304-311 --localalloc
  • Group8: numactl --physcpubind=56-63,312-319 --localalloc
  • Group9: numactl --physcpubind=64-71,320-327 --localalloc
  • Group10: numactl --physcpubind=72-79,328-335 --localalloc
  • Group11: numactl --physcpubind=80-87,336-343 --localalloc
  • Group12: numactl --physcpubind=88-95,344-351 --localalloc
  • Group13: numactl --physcpubind=96-103,352-359 --localalloc
  • Group14: numactl --physcpubind=104-111,360-367 --localalloc
  • Group15: numactl --physcpubind=112-119,368-375 --localalloc
  • Group16: numactl --physcpubind=120-127,376-383 --localalloc
  • Group17: numactl --physcpubind=128-135,384-391 --localalloc
  • Group18: numactl --physcpubind=136-143,392-399 --localalloc
  • Group19: numactl --physcpubind=144-151,400-407 --localalloc
  • Group20: numactl --physcpubind=152-159,408-415 --localalloc
  • Group21: numactl --physcpubind=160-167,416-423 --localalloc
  • Group22: numactl --physcpubind=168-175,424-431 --localalloc
  • Group23: numactl --physcpubind=176-183,432-439 --localalloc
  • Group24: numactl --physcpubind=184-191,440-447 --localalloc
  • Group25: numactl --physcpubind=192-199,448-455 --localalloc
  • Group26: numactl --physcpubind=200-207,456-463 --localalloc
  • Group27: numactl --physcpubind=208-215,464-471 --localalloc
  • Group28: numactl --physcpubind=216-223,472-479 --localalloc
  • Group29: numactl --physcpubind=224-231,480-487 --localalloc
  • Group30: numactl --physcpubind=232-239,488-495 --localalloc
  • Group31: numactl --physcpubind=240-247,496-503 --localalloc
  • Group32: numactl --physcpubind=248-255,504-511 --localalloc
Notes: None
max-jOPS = jOPS passed before the First Failure
The RT curve steps by 13158 jOPS per point; max-jOPS is the last point that passed.
Pass/Fail  Pass      Pass      Pass      Fail      Fail
jOPS       1118428   1131586   1144744   1157902   1171060
critical-jOPS = Geomean ( jOPS @ 10000; 25000; 50000; 75000; 100000 us SLAs )
Response-time percentile is 99-th
SLA (us)   10000    25000    50000    75000    100000   Geomean
jOPS       357458   450660   556394   651320   690794   526162
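Substituting the five SLA points into the geometric mean reproduces the published score:

  critical-jOPS = (357458 × 450660 × 556394 × 651320 × 690794)^(1/5) ≈ 526162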
SLA / Percentile   10-th              50-th              90-th              95-th              99-th              100-th
500us              26316 / 39474      13158 / 26316      - / 13158          - / 13158          - / 13158          - / 13158
1000us             171054 / 184212    39474 / 52632      26316 / 39474      26316 / 39474      13158 / 26316      - / 13158
5000us             842111 / 855269    736847 / 750005    276318 / 289476    223686 / 236844    197370 / 210528    - / 13158
10000us            855269 / 868427    802637 / 815795    671057 / 684215    605267 / 618425    368423 / 302634    13158 / 26316
25000us            868427 / 881585    828953 / 842111    750005 / 763163    697373 / 710531    473687 / 381581    13158 / 26316
50000us            881585 / 894743    842111 / 855269    776321 / 789479    736847 / 750005    578951 / 500003    13158 / 26316
75000us            881585 / 894743    855269 / 868427    802637 / 815795    776321 / 789479    657899 / 644741    13158 / 26316
100000us           894743 / 907901    868427 / 881585    815795 / 828953    776321 / 789479    697373 / 684215    184212 / 26316
200000us           934217 / 947375    881585 / 894743    855269 / 868427    842111 / 855269    802637 / 789479    486845 / 197370
500000us           1144744 / -        986848 / 1000006   921059 / 934217    907901 / 921059    868427 / 881585    736847 / 210528
1000000us          1144744 / -        1144744 / -        1105270 / 1118428  1078954 / 1092112  1013164 / 986848   894743 / 302634
Cell format: Probes jOPS / Total jOPS
Request Mix Accuracy
Note: (Actual % in the mix - Expected % in the mix) must be within:
  • the 'Main Tx' limit of +/-5.0% for requests whose expected % in the mix is >= 10.0%
  • the 'Minor Tx' limit of +/-1.0% for requests whose expected % in the mix is < 10.0%
There were no non-critical failures in Response Time curve building
Delay between status pings
IR/PR Accuracy
This section lists only the properties explicitly set by the user
Per-JVM values are identical across all 32 groups, so the per-group columns are collapsed below.
Property Name                                Default   User Setting(s)
specjbb.comm.connect.client.pool.size        256       Controller: 305; each Backend: 220; each TxInjector: 230
specjbb.comm.connect.selector.runner.count   0         Controller: 5; each Backend: 2; each TxInjector: 3
specjbb.comm.connect.worker.pool.max         256       Controller: 350; each Backend: 220; each TxInjector: 230
specjbb.comm.connect.worker.pool.min         1         Controller: 1; each Backend: 12; each TxInjector: 8
specjbb.controller.maxir.maxFailedPoints     3         1
specjbb.controller.rtcurve.warmup.step       0.1       0.2
specjbb.customerDriver.threads               64        {probe=75, saturate=186, service=90}
specjbb.forkjoin.workers                     512       {Tier1=620, Tier2=32, Tier3=134}
specjbb.group.count                          1         32
specjbb.heartbeat.period                     10000     9000000
specjbb.heartbeat.threshold                  100000    9000000
specjbb.mapreducer.pool.size                 512       20
specjbb.txi.pergroup.count                   1         1
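In the SPECjbb2015 kit, user overrides like these normally live in the run's property file as plain key=value lines (the file name config/specjbb2015.props is the kit's convention; the actual file used is not shown in this report). A sketch of the global overrides above, with the tiered values written one tier per line as the kit expects:

  # Excerpt -- assumed property-file form of the overrides listed above
  specjbb.group.count=32
  specjbb.txi.pergroup.count=1
  specjbb.controller.maxir.maxFailedPoints=1
  specjbb.controller.rtcurve.warmup.step=0.2
  specjbb.heartbeat.period=9000000
  specjbb.heartbeat.threshold=9000000
  specjbb.mapreducer.pool.size=20
  specjbb.forkjoin.workers.Tier1=620
  specjbb.forkjoin.workers.Tier2=32
  specjbb.forkjoin.workers.Tier3=134
  specjbb.customerDriver.threads.probe=75
  specjbb.customerDriver.threads.saturate=186
  specjbb.customerDriver.threads.service=90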
 
Level: COMPLIANCE
Check                            Agent   Result
Check properties on compliance   All     PASSED
 
Level: CORRECTNESS
Check                            Agent   Result
Compare SM and HQ Inventory      All     PASSED
High-bound (max attempted) is 1315798 IR
High-bound (settled) is 1219711 IR