SPEC CPU2017 Platform Settings for Supermicro Systems
GOMP_CPU_AFFINITY
Binds threads to specific CPUs. The variable should contain a space-separated or comma-separated list of CPUs.
This list may contain different kinds of entries: either single CPU numbers in any order, a range of CPUs (M-N) or a range with some stride (M-N:S).
CPU numbers are zero based. For example, GOMP_CPU_AFFINITY="0 3 1-2 4-15:2" will bind the initial thread to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12, and 14 respectively and then start assigning back from the beginning of the list.
GOMP_CPU_AFFINITY=0 binds all threads to CPU 0.
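The effect of a binding list can be observed from inside an OpenMP program. The sketch below is illustrative only (the file name, build line, and affinity list are assumptions, not part of the SPEC configuration); each thread reports the CPU that libgomp bound it to.

    /* affinity_check.c -- hypothetical example: observe GOMP_CPU_AFFINITY bindings.
     * Build:  gcc -fopenmp affinity_check.c -o affinity_check
     * Run:    GOMP_CPU_AFFINITY="0 3 1-2 4-15:2" ./affinity_check
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sched.h>   /* sched_getcpu() */
    #include <omp.h>

    int main(void)
    {
        #pragma omp parallel
        {
            /* Each thread prints the CPU it is currently running on,
             * which reflects the binding list supplied in the environment. */
            printf("thread %d runs on CPU %d\n",
                   omp_get_thread_num(), sched_getcpu());
        }
        return 0;
    }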
OMP_DYNAMIC
The OMP_DYNAMIC environment variable enables or disables dynamic adjustment of the number of threads available for running parallel regions.
If it is set to TRUE, the number of threads available for executing parallel regions can be adjusted at run time to make the best use of system resources.
If it is set to FALSE, dynamic adjustment is disabled.
The default setting is TRUE.
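As a small illustration (the file name and build line are assumptions), the program below queries the setting that OMP_DYNAMIC controls and reports how many threads the runtime actually started.

    /* dynamic_check.c -- hypothetical example: inspect OMP_DYNAMIC at run time.
     * Build:  gcc -fopenmp dynamic_check.c -o dynamic_check
     * Run:    OMP_DYNAMIC=TRUE ./dynamic_check
     */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* omp_get_dynamic() returns non-zero when dynamic adjustment of the
         * number of threads is enabled. */
        printf("dynamic adjustment: %s\n",
               omp_get_dynamic() ? "enabled" : "disabled");

        #pragma omp parallel
        {
            #pragma omp single
            {
                /* With dynamic adjustment enabled, the runtime may start
                 * fewer threads than the requested maximum. */
                printf("parallel region uses %d threads\n",
                       omp_get_num_threads());
            }
        }
        return 0;
    }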
OMP_SCHEDULE
The OMP_SCHEDULE environment variable specifies the scheduling algorithm (and optional chunk size) used for loops whose schedule is deferred to run time with the omp schedule(runtime) clause.
Valid options for algorithm are: auto, dynamic, guided, static
If specifying a chunk size with n, the value of n must be a positive integer.
The default scheduling algorithm is auto.
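The sketch below (the file name, build line, and chunk size are assumptions) defers the loop schedule to run time with schedule(runtime), so whatever OMP_SCHEDULE specifies determines how iterations are divided among threads.

    /* schedule_check.c -- hypothetical example: OMP_SCHEDULE controls this loop.
     * Build:  gcc -fopenmp schedule_check.c -o schedule_check
     * Run:    OMP_SCHEDULE="dynamic,4" ./schedule_check
     */
    #include <stdio.h>
    #include <omp.h>

    #define N 32

    int main(void)
    {
        int owner[N];

        /* schedule(runtime) takes the schedule type and chunk size from
         * the OMP_SCHEDULE environment variable. */
        #pragma omp parallel for schedule(runtime)
        for (int i = 0; i < N; i++)
            owner[i] = omp_get_thread_num();

        for (int i = 0; i < N; i++)
            printf("iteration %2d executed by thread %d\n", i, owner[i]);
        return 0;
    }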
OMP_THREAD_LIMIT
The OMP_THREAD_LIMIT environment variable sets the maximum number of OpenMP threads to use in a contention group by setting the thread-limit-var ICV.
The value of this environment variable must be a positive integer.
The behavior of the program is implementation defined if the requested value of OMP_THREAD_LIMIT is greater than the number of threads an implementation can support, or if the value is not a positive integer.
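A minimal check (the file name and the particular limit are assumptions) is shown below: the thread limit caps the team size even when OMP_NUM_THREADS requests more threads.

    /* thread_limit_check.c -- hypothetical example: observe OMP_THREAD_LIMIT.
     * Build:  gcc -fopenmp thread_limit_check.c -o thread_limit_check
     * Run:    OMP_THREAD_LIMIT=8 OMP_NUM_THREADS=64 ./thread_limit_check
     */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* omp_get_thread_limit() reports the thread-limit-var ICV. */
        printf("thread limit: %d\n", omp_get_thread_limit());

        #pragma omp parallel
        {
            #pragma omp single
            {
                printf("team size: %d\n", omp_get_num_threads());
            }
        }
        return 0;
    }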
MALLOC_CONF = "retain:true"
Retain unused virtual memory for later reuse rather than discarding it by calling munmap or equivalent.
It also makes jemalloc use mmap in a more greedy way, mapping larger chunks in one go.
This option is disabled by default unless discarding virtual memory is known to trigger platform-specific performance problems, e.g. for [64-bit] Linux, which has a quirk in its virtual memory allocation algorithm that causes semi-permanent VM map holes under normal jemalloc operation.
Although munmap causes issues on 32-bit Linux as well, retaining virtual memory for 32-bit Linux is disabled by default due to the practical possibility of address space exhaustion.
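Besides the MALLOC_CONF environment variable, an application linked against jemalloc can provide the same option string through the global malloc_conf variable, which jemalloc reads in addition to the environment variable. The sketch below (the file name and build line are assumptions, and it assumes jemalloc is installed) enables retain:true that way.

    /* retain_conf.c -- hypothetical example: enable retain without the environment.
     * Build (assumes jemalloc is available):
     *   gcc retain_conf.c -o retain_conf -ljemalloc
     */
    #include <stdio.h>
    #include <stdlib.h>

    /* jemalloc reads this application-provided option string in addition to
     * the MALLOC_CONF environment variable. */
    const char *malloc_conf = "retain:true";

    int main(void)
    {
        /* Ordinary allocations are served by jemalloc with retain enabled,
         * so freed virtual memory is kept for reuse rather than unmapped. */
        void *p = malloc(64 * 1024 * 1024);
        printf("allocated 64 MiB at %p\n", p);
        free(p);
        return 0;
    }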
- Determinism Control:
-
This BIOS option allows the user to choose the AGESA determinism control mode.
AGESA is an acronym for "AMD Generic Encapsulated Software Architecture."
AGESA is a bootstrap protocol by which system devices on AMD64-architecture mainboards are initialized; it is responsible for the initialization of the processor cores, memory, and the HyperTransport controller.
Available settings are:
- Manual: The user can customize the Determinism Slider setting.
- Auto (Default setting): Use the processor fused determinism control.
- Determinism Slider:
-
This BIOS option enables or disables AGESA determinism to control performance.
AGESA is an acronym for "AMD Generic Encapsulated Software Architecture."
AGESA is a bootstrap protocol by which system devices on AMD64-architecture mainboards are initialized; it is responsible for the initialization of the processor cores, memory, and the HyperTransport controller.
Available settings are:
- Performance: AGESA will enable 100% deterministic performance control.
- Power: AGESA will not enable deterministic performance control.
- Auto (Default setting): Use default value for deterministic performance control.
- cTDP Control:
-
This BIOS option configures "Configurable TDP (cTDP)"; it allows the user to set a customized TDP value. Available settings are:
- Auto (Default setting): Use the fused TDP value.
- Manual: Lets the user specify a customized TDP value.
- cTDP:
-
TDP is an acronym for “Thermal Design Power.” TDP is the recommended target for power used when designing the cooling capacity for a server.
EPYC processors are able to control this target power consumption within certain limits. This capability is referred to as “configurable TDP” or "cTDP."
cTDP can be used to reduce power consumption for greater efficiency, or in some cases, increase power consumption above the default value to provide additional performance.
cTDP is controlled using a BIOS option.
The default EPYC cTDP value corresponds with the microprocessor’s nominal TDP. For the EPYC 7702, the default value is 200W.
The default cTDP value is set at a good balance between performance and energy efficiency.
The EPYC 7702 cTDP can be reduced as low as 165W, which will minimize the power consumption for the processor under load, but at the expense of peak performance.
Increasing the EPYC 7742 cTDP to 240W will maximize peak performance by allowing the CPU to maintain higher dynamic clock speeds, but will make the microprocessor less energy efficient.
Note that at maximum cTDP, the CPU thermal solution must be capable of dissipating at least 240W or the EPYC 7742 processor might engage in thermal throttling under load.
The available cTDP ranges for each EPYC model are in the table below:
Model | Nominal TDP | Minimum cTDP | Maximum cTDP** |
EPYC 7742 | 225W | 225W | 240W |
EPYC 7702 | 200W | 165W | 200W |
EPYC 7702P | 200W | 165W | 200W |
EPYC 7601 | 180W | 165W | 200W |
EPYC 7551 | 180W | 165W | 200W |
EPYC 7501 | 155/170W | 135W | 155/170W* |
EPYC 7451 | 180W | 165W | 200W |
EPYC 7401 | 155/170W | 135W | 155/170W* |
EPYC 7351 | 155/170W | 135W | 155/170W* |
EPYC 7301 | 155/170W | 135W | 155/170W* |
EPYC 7281 | 155/170W | 135W | 155/170W* |
EPYC 7251 | 120W | 105W | 120W |
*Max TDP is 170W when DDR4 is operating at 2667 MT/sec, or 155W when DDR4 is operating at lower frequencies.
** cTDP must remain below the thermal solution design parameters or thermal throttling could be frequently encountered.
- IOMMU:
-
The I/O Memory Management Unit (IOMMU) extends the AMD64 system architecture by adding support for address translation and system memory access protection on DMA transfers from peripheral devices.
IOMMU also helps filter and remap interrupts from peripheral devices.
Available settings are:
- Disabled: Disable IOMMU support.
- Enabled: Enable IOMMU support.
- Auto (Default setting): Use the default value for IOMMU. The default value is Disabled.
- Package Power Limit Control:
-
This is a per-processor Package Power Limit (PPT) value applicable to all populated processors in the system.
This can be set to limit the PPT to a certain value.
Available settings are:
- Auto (Default setting): Use the fused processor PPT value.
- Manual: Lets the user specify a customized processor PPT value.
- Package Power Limit:
-
Sets a customized processor Package Power Limit (PPT) value to be used on all populated processors in the system.
For example, setting the value to 240 applies a 240W PPT. Note that the PPT is used as the ASIC power limit.
- APBDIS:
-
APBDis is an I/O boost disable on the uncore.
For users who need to block these uncore optimizations, which can impact the base core clock speed, APBDis provides a method to disable this behavior.
This locks the fabric clock to the non-boosted speeds.
Available settings are:
- 0: APBDis is disabled; the uncore boost behavior remains active and the fabric clock may change dynamically.
- 1: APBDis is enabled; the fabric clock is locked to the non-boosted speeds.
- Auto (Default setting): Use the default value for APBDIS. The default value is 0.
- NUMA Nodes Per Socket:
-
Specifies the number of desired NUMA nodes per socket.
This option allows the user to divide the memory that each socket has into a certain number of NUMA memory nodes for optimal memory bandwidth.
Available settings are:
- NPS0: Attempt to interleave memory across both processor sockets, presenting them as a single NUMA node.
- NPS1: Each processor socket will have one NUMA memory node.
- NPS2: Each processor socket will have two NUMA memory nodes.
- NPS4: Each processor socket will have four NUMA memory nodes.
- Auto (Default setting): Use default value for NUMA nodes per socket. The default value is NPS1.
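The NPS choice is visible to the operating system as the number of NUMA nodes. The sketch below (the file name is an assumption; it assumes libnuma and its development headers are installed) reports the node count, which on a two-socket system would typically be 2 with NPS1, 4 with NPS2, and 8 with NPS4.

    /* nps_check.c -- hypothetical example: report the NUMA node count.
     * Build:  gcc nps_check.c -o nps_check -lnuma
     */
    #include <stdio.h>
    #include <numa.h>

    int main(void)
    {
        if (numa_available() < 0) {
            printf("NUMA is not available on this system\n");
            return 1;
        }
        /* The number of configured nodes reflects the NPS BIOS setting. */
        printf("configured NUMA nodes: %d\n", numa_num_configured_nodes());
        return 0;
    }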