The AMDGPU backend provides ISA code generation for AMD GPUs, starting with the R600 family and continuing through the current GCN families. It lives in the lib/Target/AMDGPU directory.
Use the clang -target <Architecture>-<Vendor>-<OS>-<Environment>
option to specify the target triple:
AMDGPU Architectures

  Architecture  Description
  r600          AMD GPUs HD2XXX-HD6XXX for graphics and compute shaders.
  amdgcn        AMD GPUs GCN GFX6 onwards for graphics and compute shaders.

AMDGPU Vendors

  Vendor   Description
  amd      Can be used for all AMD GPU usage.
  mesa3d   Can be used if the OS is mesa3d.

AMDGPU Operating Systems

  OS        Description
  <empty>   Defaults to the unknown OS.
  amdhsa    Compute kernels executed on HSA [HSA] compatible runtimes such as AMD's ROCm [AMD-ROCm].
  amdpal    Graphic shaders and compute kernels executed on AMD PAL runtime.
  mesa3d    Graphic shaders and compute kernels executed on Mesa 3D runtime.

AMDGPU Environments

  Environment  Description
  <empty>      Default.
Use the clang -mcpu=<Processor>
option to specify the AMD GPU processor. The names from both the Processor and Alternative Processor columns can be used.
AMDGPU Processors

(Target features supported are listed with their default value in brackets.)

Radeon HD 2000/3000 Series (R600) [AMD-RADEON-HD-2000-3000]

  Processors: r600, r630, rs880, rv670. Target triple architecture: r600. dGPU.

Radeon HD 4000 Series (R700) [AMD-RADEON-HD-4000]

  Processors: rv710, rv730, rv770. Target triple architecture: r600. dGPU.

Radeon HD 5000 Series (Evergreen) [AMD-RADEON-HD-5000]

  Processors: cedar, cypress, juniper, redwood, sumo. Target triple architecture: r600. dGPU.

Radeon HD 6000 Series (Northern Islands) [AMD-RADEON-HD-6000]

  Processors: barts, caicos, cayman, turks. Target triple architecture: r600. dGPU.

GCN GFX6 (Southern Islands (SI)) [AMD-GCN-GFX6]

  gfx600 (alternative: tahiti). amdgcn. dGPU.
  gfx601 (alternative: hainan, oland, pitcairn, verde). amdgcn. dGPU.

GCN GFX7 (Sea Islands (CI)) [AMD-GCN-GFX7]

  gfx700 (alternative: kaveri). amdgcn. APU.
    Example products: A6-7000, A6 Pro-7050B, A8-7100, A8 Pro-7150B, A10-7300, A10 Pro-7350B, FX-7500, A8-7200P, A10-7400P, FX-7600P.
  gfx701 (alternative: hawaii). amdgcn. dGPU. ROCm.
    Example products: FirePro W8100, FirePro W9100, FirePro S9150, FirePro S9170, Radeon R9 290, Radeon R9 290x, Radeon R390, Radeon R390x.
  gfx702. amdgcn. dGPU. ROCm.
  gfx703 (alternative: kabini, mullins). amdgcn. APU.
    Example products: E1-2100, E1-2200, E1-2500, E2-3000, E2-3800, A4-5000, A4-5100, A6-5200, A4 Pro-3340B.
  gfx704 (alternative: bonaire). amdgcn. dGPU.
    Example products: Radeon HD 7790, Radeon HD 8770, R7 260, R7 260X.

GCN GFX8 (Volcanic Islands (VI)) [AMD-GCN-GFX8]

  gfx801 (alternative: carrizo). amdgcn. APU. Target features: xnack [on]. ROCm.
    Example products: A6-8500P, Pro A6-8500B, A8-8600P, Pro A8-8600B, FX-8800P, Pro A12-8800B, A10-8700P, Pro A10-8700B, A10-8780P, A10-9600P, A10-9630P, A12-9700P, A12-9730P, FX-9800P, FX-9830P, E2-9010, A6-9210, A9-9410.
  gfx802 (alternative: iceland, tonga). amdgcn. dGPU. Target features: xnack [off]. ROCm.
    Example products: FirePro S7150, FirePro S7100, FirePro W7100, Radeon R285, Radeon R9 380, Radeon R9 385, Mobile FirePro M7170.
  gfx803 (alternative: fiji). amdgcn. dGPU. Target features: xnack [off]. ROCm.
    Example products: Radeon R9 Nano, Radeon R9 Fury, Radeon R9 FuryX, Radeon Pro Duo, FirePro S9300x2, Radeon Instinct MI8.
  gfx803 (alternative: polaris10). amdgcn. dGPU. Target features: xnack [off]. ROCm.
    Example products: Radeon RX 470, Radeon RX 480, Radeon Instinct MI6.
  gfx803 (alternative: polaris11). amdgcn. dGPU. Target features: xnack [off]. ROCm.
    Example products: Radeon RX 460.
  gfx810 (alternative: stoney). amdgcn. APU. Target features: xnack [on].

GCN GFX9 [AMD-GCN-GFX9]

  gfx900. amdgcn. dGPU. Target features: xnack [off]. ROCm.
    Example products: Radeon Vega Frontier Edition, Radeon RX Vega 56, Radeon RX Vega 64, Radeon RX Vega 64 Liquid, Radeon Instinct MI25.
  gfx902. amdgcn. APU. Target features: xnack [on].
    Example products: Ryzen 3 2200G, Ryzen 5 2400G.
  gfx904. amdgcn. dGPU. Target features: xnack [off].
    Example products: TBA.
  gfx906. amdgcn. dGPU. Target features: xnack [off].
    Example products: TBA.
Target features control how code is generated to support certain processor specific features. Not all target features are supported by all processors. The runtime must ensure that the features supported by the device used to execute the code match the features enabled when generating the code. A mismatch of features may result in incorrect execution, or a reduction in performance.
The target features supported by each processor, and the default value used if not specified explicitly, are listed in amdgpu-processor-table.
Use the clang -m[no-]<TargetFeature>
option to specify the AMD GPU target features.
For example:

-mxnack
  Enable the xnack feature.

-mno-xnack
  Disable the xnack feature.

AMDGPU Target Features

-m[no-]xnack
  Enable/disable generating code that has memory clauses that are compatible with having XNACK replay enabled. This is used for demand paging and page migration. If XNACK replay is enabled in the device, then if a page fault occurs the code may execute incorrectly if the xnack feature is not enabled. Executing code that has the feature enabled on a device that does not have XNACK replay enabled will execute correctly, but may be less performant than code with the feature disabled.
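For example, an illustrative invocation combining the target triple, processor and a target feature (the input file name is hypothetical): clang -c -target amdgcn-amd-amdhsa -mcpu=gfx803 -mno-xnack test.cl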
The AMDGPU backend uses the following address space mappings.
The memory space names used in the table, aside from the region memory space, are from the OpenCL standard.

The LLVM address space number is used throughout LLVM (for example, in LLVM IR).

Address Space Mapping

  LLVM Address Space   Memory Space
  0                    Generic (Flat)
  1                    Global
  2                    Region (GDS)
  3                    Local (group/LDS)
  4                    Constant
  5                    Private (Scratch)
  6                    Constant 32-bit
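As an illustrative sketch (the kernel name and arguments are hypothetical), LLVM IR spells these address spaces with the addrspace qualifier using the numbers from the table above:

  ; Copies a float from local (LDS, address space 3) to global (address space 1).
  define amdgpu_kernel void @copy_lds_to_global(float addrspace(1)* %out,
                                                float addrspace(3)* %lds) {
    %v = load float, float addrspace(3)* %lds
    store float %v, float addrspace(1)* %out
    ret void
  }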
This section provides LLVM memory synchronization scopes supported by the AMDGPU backend memory model when the target triple OS is amdhsa
(see amdgpu-amdhsa-memory-model
and amdgpu-target-triples
).
The memory model supported is based on the HSA memory model [HSA] which is based in turn on HRF-indirect with scope inclusion [HRF]. The happens-before relation is transitive over the synchronizes-with relation independent of scope, and synchronizes-with allows the memory scope instances to be inclusive (see table amdgpu-amdhsa-llvm-sync-scopes-table
).
This differs from the OpenCL [OpenCL] memory model, which does not have scope inclusion and requires the memory scopes to match exactly. However, this is conservatively correct for OpenCL.
AMDHSA LLVM Sync Scopes

none
  The default: system.

system
  Synchronizes with, and participates in modification and seq_cst total orderings with, other operations (except image operations) for all address spaces (except private, or generic that accesses private) provided the other operation's sync scope is:
  - system.
  - agent and executed by a thread on the same agent.
  - workgroup and executed by a thread in the same workgroup.
  - wavefront and executed by a thread in the same wavefront.

agent
  Synchronizes with, and participates in modification and seq_cst total orderings with, other operations (except image operations) for all address spaces (except private, or generic that accesses private) provided the other operation's sync scope is:
  - system or agent and executed by a thread on the same agent.
  - workgroup and executed by a thread in the same workgroup.
  - wavefront and executed by a thread in the same wavefront.

workgroup
  Synchronizes with, and participates in modification and seq_cst total orderings with, other operations (except image operations) for all address spaces (except private, or generic that accesses private) provided the other operation's sync scope is:
  - system, agent or workgroup and executed by a thread in the same workgroup.
  - wavefront and executed by a thread in the same wavefront.

wavefront
  Synchronizes with, and participates in modification and seq_cst total orderings with, other operations (except image operations) for all address spaces (except private, or generic that accesses private) provided the other operation's sync scope is:
  - system, agent, workgroup or wavefront and executed by a thread in the same wavefront.

singlethread
  Only synchronizes with, and participates in modification and seq_cst total orderings with, other operations (except image operations) running in the same thread for all address spaces (for example, in signal handlers).
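These scopes appear in LLVM IR as the syncscope argument of atomic instructions. A minimal hypothetical fragment (the pointers %p and %q are assumed to be defined elsewhere):

  ; Acquire load at agent scope and release store at workgroup scope,
  ; both on global (address space 1) pointers.
  %v = load atomic i32, i32 addrspace(1)* %p syncscope("agent") acquire, align 4
  store atomic i32 %v, i32 addrspace(1)* %q syncscope("workgroup") release, align 4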
The AMDGPU backend implements the following LLVM IR intrinsics.
This section is WIP.
The AMDGPU backend supports the following LLVM IR attributes.
AMDGPU LLVM IR Attributes
"amdgpu-flat-work-group-size"="min,max"
  Specify the minimum and maximum flat work group sizes that will be specified when the kernel is dispatched. Generated by the amdgpu_flat_work_group_size CLANG attribute [CLANG-ATTR].

"amdgpu-implicitarg-num-bytes"="n"
  Number of kernel argument bytes to add to the kernel argument block size for the implicit arguments. This varies by OS and language (for OpenCL see opencl-kernel-implicit-arguments-appended-for-amdhsa-os-table).

"amdgpu-max-work-group-size"="n"
  Specify the maximum work-group size that will be specified when the kernel is dispatched.

"amdgpu-num-sgpr"="n"
  Specifies the number of SGPRs to use. Generated by the amdgpu_num_sgpr CLANG attribute [CLANG-ATTR].

"amdgpu-num-vgpr"="n"
  Specifies the number of VGPRs to use. Generated by the amdgpu_num_vgpr CLANG attribute [CLANG-ATTR].

"amdgpu-waves-per-eu"="m,n"
  Specify the minimum and maximum number of waves per execution unit. Generated by the amdgpu_waves_per_eu CLANG attribute [CLANG-ATTR].
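For illustration, a sketch of how such attributes attach to a kernel definition in LLVM IR (the kernel name and attribute values are arbitrary):

  define amdgpu_kernel void @k() #0 {
    ret void
  }
  attributes #0 = { "amdgpu-flat-work-group-size"="64,256" "amdgpu-waves-per-eu"="2,4" }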
The AMDGPU backend generates a standard ELF [ELF] relocatable code object that can be linked by lld
to produce a standard ELF shared code object which can be loaded and executed on an AMDGPU target.
The AMDGPU backend uses the following ELF header:
AMDGPU ELF Header

  Field                    Value
  e_ident[EI_CLASS]        ELFCLASS32 or ELFCLASS64
  e_ident[EI_DATA]         ELFDATA2LSB
  e_ident[EI_OSABI]        ELFOSABI_NONE, ELFOSABI_AMDGPU_HSA, ELFOSABI_AMDGPU_PAL or ELFOSABI_AMDGPU_MESA3D
  e_ident[EI_ABIVERSION]   ELFABIVERSION_AMDGPU_HSA, ELFABIVERSION_AMDGPU_PAL or ELFABIVERSION_AMDGPU_MESA3D
  e_type                   ET_REL or ET_DYN
  e_machine                EM_AMDGPU
  e_entry                  0
  e_flags                  See amdgpu-elf-header-e_flags-table.
e_ident[EI_CLASS]
  The ELF class is:
  - ELFCLASS32 for r600 architecture.
  - ELFCLASS64 for amdgcn architecture which only supports 64 bit applications.

e_ident[EI_DATA]
  All AMDGPU targets use ELFDATA2LSB for little-endian byte ordering.

e_ident[EI_OSABI]
  One of the following AMD GPU architecture specific OS ABIs (see amdgpu-os-table):
  - ELFOSABI_NONE for unknown OS.
  - ELFOSABI_AMDGPU_HSA for amdhsa OS.
  - ELFOSABI_AMDGPU_PAL for amdpal OS.
  - ELFOSABI_AMDGPU_MESA3D for mesa3d OS.

e_ident[EI_ABIVERSION]
  The ABI version of the AMD GPU architecture specific OS ABI to which the code object conforms:
  - ELFABIVERSION_AMDGPU_HSA is used to specify the version of AMD HSA runtime ABI.
  - ELFABIVERSION_AMDGPU_PAL is used to specify the version of AMD PAL runtime ABI.
  - ELFABIVERSION_AMDGPU_MESA3D is used to specify the version of AMD MESA 3D runtime ABI.

e_type
  Can be one of the following values:

  ET_REL
    The type produced by the AMDGPU backend compiler as it is a relocatable code object.

  ET_DYN
    The type produced by the linker as it is a shared code object.

  The AMD HSA runtime loader requires a ET_DYN code object.

e_machine
  The value EM_AMDGPU is used for the machine for all processors supported by the r600 and amdgcn architectures (see amdgpu-processor-table). The specific processor is specified in the EF_AMDGPU_MACH bit field of the e_flags (see amdgpu-elf-header-e_flags-table).

e_entry
  The entry point is 0 as the entry points for individual kernels must be selected in order to invoke them through AQL packets.
e_flags
The AMDGPU backend uses the following ELF header flags:
AMDGPU ELF Header e_flags

  Name             Value       Description
  EF_AMDGPU_MACH   0x000000ff  AMDGPU processor selection mask for EF_AMDGPU_MACH_xxx values defined in amdgpu-ef-amdgpu-mach-table.
  EF_AMDGPU_XNACK  0x00000100  Indicates if the xnack target feature is enabled for all code contained in the code object. If the processor does not support the xnack target feature then must be 0. See amdgpu-target-features.

EF_AMDGPU_MACH Values

  Name                           Value        Description (see amdgpu-processor-table)
  EF_AMDGPU_MACH_NONE            0x000        not specified
  EF_AMDGPU_MACH_R600_R600       0x001        r600
  EF_AMDGPU_MACH_R600_R630       0x002        r630
  EF_AMDGPU_MACH_R600_RS880      0x003        rs880
  EF_AMDGPU_MACH_R600_RV670      0x004        rv670
  EF_AMDGPU_MACH_R600_RV710      0x005        rv710
  EF_AMDGPU_MACH_R600_RV730      0x006        rv730
  EF_AMDGPU_MACH_R600_RV770      0x007        rv770
  EF_AMDGPU_MACH_R600_CEDAR      0x008        cedar
  EF_AMDGPU_MACH_R600_CYPRESS    0x009        cypress
  EF_AMDGPU_MACH_R600_JUNIPER    0x00a        juniper
  EF_AMDGPU_MACH_R600_REDWOOD    0x00b        redwood
  EF_AMDGPU_MACH_R600_SUMO       0x00c        sumo
  EF_AMDGPU_MACH_R600_BARTS      0x00d        barts
  EF_AMDGPU_MACH_R600_CAICOS     0x00e        caicos
  EF_AMDGPU_MACH_R600_CAYMAN     0x00f        cayman
  EF_AMDGPU_MACH_R600_TURKS      0x010        turks
  reserved                       0x011-0x01f  Reserved for r600 architecture processors.
  EF_AMDGPU_MACH_AMDGCN_GFX600   0x020        gfx600
  EF_AMDGPU_MACH_AMDGCN_GFX601   0x021        gfx601
  EF_AMDGPU_MACH_AMDGCN_GFX700   0x022        gfx700
  EF_AMDGPU_MACH_AMDGCN_GFX701   0x023        gfx701
  EF_AMDGPU_MACH_AMDGCN_GFX702   0x024        gfx702
  EF_AMDGPU_MACH_AMDGCN_GFX703   0x025        gfx703
  EF_AMDGPU_MACH_AMDGCN_GFX704   0x026        gfx704
  reserved                       0x027        Reserved.
  EF_AMDGPU_MACH_AMDGCN_GFX801   0x028        gfx801
  EF_AMDGPU_MACH_AMDGCN_GFX802   0x029        gfx802
  EF_AMDGPU_MACH_AMDGCN_GFX803   0x02a        gfx803
  EF_AMDGPU_MACH_AMDGCN_GFX810   0x02b        gfx810
  EF_AMDGPU_MACH_AMDGCN_GFX900   0x02c        gfx900
  EF_AMDGPU_MACH_AMDGCN_GFX902   0x02d        gfx902
  EF_AMDGPU_MACH_AMDGCN_GFX904   0x02e        gfx904
  EF_AMDGPU_MACH_AMDGCN_GFX906   0x02f        gfx906
  reserved                       0x030        Reserved.
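For example, given the values above, an e_flags value of 0x0000012a decodes as EF_AMDGPU_MACH_AMDGCN_GFX803 (0x02a) in the EF_AMDGPU_MACH field with EF_AMDGPU_XNACK (0x100) set; that is, gfx803 code generated with the xnack target feature enabled.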
An AMDGPU target ELF code object has the standard ELF sections which include:
AMDGPU ELF Sections

  Name         Type          Attributes
  .bss         SHT_NOBITS    SHF_ALLOC + SHF_WRITE
  .data        SHT_PROGBITS  SHF_ALLOC + SHF_WRITE
  .debug_*     SHT_PROGBITS  none
  .dynamic     SHT_DYNAMIC   SHF_ALLOC
  .dynstr      SHT_PROGBITS  SHF_ALLOC
  .dynsym      SHT_PROGBITS  SHF_ALLOC
  .got         SHT_PROGBITS  SHF_ALLOC + SHF_WRITE
  .hash        SHT_HASH      SHF_ALLOC
  .note        SHT_NOTE      none
  .rela<name>  SHT_RELA      none
  .rela.dyn    SHT_RELA      none
  .rodata      SHT_PROGBITS  SHF_ALLOC
  .shstrtab    SHT_STRTAB    none
  .strtab      SHT_STRTAB    none
  .symtab      SHT_SYMTAB    none
  .text        SHT_PROGBITS  SHF_ALLOC + SHF_EXECINSTR
These sections have their standard meanings (see [ELF]) and are only generated if needed.
.debug
*The standard DWARF sections. See
amdgpu-dwarf
for information on the DWARF produced by the AMDGPU backend..dynamic
,.dynstr
,.dynsym
,.hash
The standard sections used by a dynamic loader.
.note
See
amdgpu-note-records
for the note records supported by the AMDGPU backend..rela
name,.rela.dyn
For relocatable code objects, name is the name of the section that the relocation records apply. For example,
.rela.text
is the section name for relocation records associated with the.text
section.For linked shared code objects,
.rela.dyn
contains all the relocation records from each of the relocatable code object's.rela
name sections.See
amdgpu-relocation-records
for the relocation records supported by the AMDGPU backend..text
The executable machine code for the kernels and functions they call. Generated as position independent code. See
amdgpu-code-conventions
for information on conventions used in the isa generation.
As required by ELFCLASS32
and ELFCLASS64
, minimal zero byte padding must be generated after the name
field to ensure the desc
field is 4 byte aligned. In addition, minimal zero byte padding must be generated to ensure the desc
field size is a multiple of 4 bytes. The sh_addralign
field of the .note
section must be at least 4 to indicate at least 4 byte alignment.
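For example, a note name of "AMD" occupies 4 bytes including its terminating null, so no padding is needed before the desc field; a 13 byte desc would be padded with 3 zero bytes to make its size a multiple of 4.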
The AMDGPU backend code object uses the following ELF note records in the .note
section. The Description column specifies the layout of the note record's desc
field. All fields are consecutive bytes. Note records with variable size strings have a corresponding *_size
field that specifies the number of bytes, including the terminating null character, in the string. The string(s) come immediately after the preceding fields.
Additional note records can be present.
AMDGPU ELF Note Records

  Name    Type                        Description
  "AMD"   NT_AMD_AMDGPU_HSA_METADATA  <metadata null terminated string>

NT_AMD_AMDGPU_HSA_METADATA
  Specifies extensible metadata associated with the code objects executed on HSA [HSA] compatible runtimes such as AMD's ROCm [AMD-ROCm]. It is required when the target triple OS is amdhsa (see amdgpu-target-triples). See amdgpu-amdhsa-code-object-metadata for the syntax of the code object metadata string.
Symbols include the following:
AMDGPU ELF Symbols

  Name             Type        Section               Description
  <link-name>      STT_OBJECT  .data, .rodata, .bss  Global variable
  <link-name>.kd   STT_OBJECT  .rodata               Kernel descriptor
  <link-name>      STT_FUNC    .text                 Kernel entry point

Global variable
  Global variables both used and defined by the compilation unit.

  If the symbol is defined in the compilation unit then it is allocated in the appropriate section according to whether it has initialized data or is readonly.

  If the symbol is external then its section is STN_UNDEF and the loader will resolve relocations using the definition provided by another code object or explicitly defined by the runtime.

  All global symbols, whether defined in the compilation unit or external, are accessed by the machine code indirectly through a GOT table entry. This allows them to be preemptable. The GOT table is only supported when the target triple OS is amdhsa (see amdgpu-target-triples).

Kernel descriptor
  Every HSA kernel has an associated kernel descriptor. It is the address of the kernel descriptor that is used in the AQL dispatch packet used to invoke the kernel, not the kernel entry point. The layout of the HSA kernel descriptor is defined in amdgpu-amdhsa-kernel-descriptor.

Kernel entry point
  Every HSA kernel also has a symbol for its machine code entry point.
The AMDGPU backend generates Elf64_Rela relocation records. Supported relocatable fields are:

word32
  This specifies a 32-bit field occupying 4 bytes with arbitrary byte alignment. These values use the same byte order as other word values in the AMD GPU architecture.

word64
  This specifies a 64-bit field occupying 8 bytes with arbitrary byte alignment. These values use the same byte order as other word values in the AMD GPU architecture.

The following notations are used for specifying relocation calculations:

A
  Represents the addend used to compute the value of the relocatable field.

G
  Represents the offset into the global offset table at which the relocation entry's symbol will reside during execution.

GOT
  Represents the address of the global offset table.

P
  Represents the place (section offset for et_rel or address for et_dyn) of the storage unit being relocated (computed using r_offset).

S
  Represents the value of the symbol whose index resides in the relocation entry. Relocations not using this must specify a symbol index of STN_UNDEF.

B
  Represents the base address of a loaded executable or shared object which is the difference between the ELF address and the actual load address. Relocations using this are only valid in executable or shared objects.
The following relocation types are supported:
AMDGPU ELF Relocation Records

  Relocation Type         Kind             Value  Field   Calculation
  R_AMDGPU_NONE                            0      none    none
  R_AMDGPU_ABS32_LO       Static, Dynamic  1      word32  (S + A) & 0xFFFFFFFF
  R_AMDGPU_ABS32_HI       Static, Dynamic  2      word32  (S + A) >> 32
  R_AMDGPU_ABS64          Static, Dynamic  3      word64  S + A
  R_AMDGPU_REL32          Static           4      word32  S + A - P
  R_AMDGPU_REL64          Static           5      word64  S + A - P
  R_AMDGPU_ABS32          Static, Dynamic  6      word32  S + A
  R_AMDGPU_GOTPCREL       Static           7      word32  G + GOT + A - P
  R_AMDGPU_GOTPCREL32_LO  Static           8      word32  (G + GOT + A - P) & 0xFFFFFFFF
  R_AMDGPU_GOTPCREL32_HI  Static           9      word32  (G + GOT + A - P) >> 32
  R_AMDGPU_REL32_LO       Static           10     word32  (S + A - P) & 0xFFFFFFFF
  R_AMDGPU_REL32_HI       Static           11     word32  (S + A - P) >> 32
  reserved                                 12
  R_AMDGPU_RELATIVE64     Dynamic          13     word64  B + A
R_AMDGPU_ABS32_LO and R_AMDGPU_ABS32_HI are only supported by the mesa3d OS, which does not support R_AMDGPU_ABS64.

There is no current OS loader support for 32 bit programs and so R_AMDGPU_ABS32 is not used.
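As a worked example of the calculations above: for R_AMDGPU_REL32_LO with hypothetical values S = 0x2000, A = 4 and P = 0x1100, the word32 field receives (0x2000 + 4 - 0x1100) & 0xFFFFFFFF = 0xF04.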
Standard DWARF [DWARF] Version 5 sections can be generated. These contain information that maps the code object executable code and data to the source language constructs. It can be used by tools such as debuggers and profilers.
The following address space mapping is used:
AMDGPU DWARF Address Space Mapping

  DWARF Address Space  Memory Space
  1                    Private (Scratch)
  2                    Local (group/LDS)
  omitted              Global
  omitted              Constant
  omitted              Generic (Flat)
  not supported        Region (GDS)

See amdgpu-address-spaces for information on the memory space terminology used in the table.
An address_class
attribute is generated on pointer type DIEs to specify the DWARF address space of the value of the pointer when it is in the private or local address space. Otherwise the attribute is omitted.
An XDEREF operation is generated in location list expressions for variables that are allocated in the private and local address space. Otherwise no XDEREF operation is generated.
This section is WIP.
Source text for online-compiled programs (e.g. those compiled by the OpenCL runtime) may be embedded into the DWARF v5 line table using the clang -gembed-source
option, described in table amdgpu-debug-options
.
For example:

-gembed-source
  Enable the embedded source DWARF v5 extension.

-gno-embed-source
  Disable the embedded source DWARF v5 extension.

AMDGPU Debug Options

-g[no-]embed-source
  Enable/disable embedding source text in DWARF debug sections. Useful for environments where source cannot be written to disk, such as when performing online compilation.

This option enables one extended content type in the DWARF v5 Line Number Program Header, which is used to encode embedded source.
AMDGPU DWARF Line Number Program Header Extended Content Types

  Content Type         Form
  DW_LNCT_LLVM_source  DW_FORM_line_strp

The source field will contain the UTF-8 encoded, null-terminated source text with '\n' line endings. When the source field is present, consumers can use the embedded source instead of attempting to discover the source on disk. When the source field is absent, consumers can access the file to get the source text.

The above content type appears in the file_name_entry_format field of the line table prologue, and its corresponding value appears in the file_names field. The current encoding of the content type is documented in table amdgpu-dwarf-extended-content-types-encoding.

AMDGPU DWARF Line Number Program Header Extended Content Types Encoding

  Content Type         Value
  DW_LNCT_LLVM_source  0x2001
This section provides code conventions used for each supported target triple OS (see amdgpu-target-triples
).
This section provides code conventions used when the target triple OS is amdhsa
(see amdgpu-target-triples
).
The AMDHSA OS uses the following syntax to specify the code object target as a single string:
<Architecture>-<Vendor>-<OS>-<Environment>-<Processor><Target Features>
Where:
- <Architecture>, <Vendor>, <OS> and <Environment> are the same as the Target Triple (see amdgpu-target-triples).
- <Processor> is the same as the Processor (see amdgpu-processors).
- <Target Features> is a list of the enabled Target Features (see amdgpu-target-features), each prefixed by a plus, that apply to Processor. The list must be in the same order as listed in the table amdgpu-target-feature-table. Note that Target Features must be included in the list if they are enabled even if that is the default for Processor.
For example:
"amdgcn-amd-amdhsa--gfx902+xnack"
The code object metadata specifies extensible metadata associated with the code objects executed on HSA [HSA] compatible runtimes such as AMD's ROCm [AMD-ROCm]. It is specified by the NT_AMD_AMDGPU_HSA_METADATA
note record (see amdgpu-note-records
) and is required when the target triple OS is amdhsa
(see amdgpu-target-triples
). It must contain the minimum information necessary to support the ROCm kernel queries. For example, the segment sizes needed in a dispatch packet. In addition, a high level language runtime may require other information to be included. For example, the AMD OpenCL runtime records kernel argument information.
The metadata is specified as a YAML formatted string (see [YAML] and YamlIO
).
The metadata is represented as a single YAML document comprised of the mapping defined in table amdgpu-amdhsa-code-object-metadata-mapping-table
and referenced tables.
For boolean values, the string values of false
and true
are used for false and true respectively.
Additional information can be added to the mappings. To avoid conflicts, any non-AMD key names should be prefixed by "vendor-name.".
AMDHSA Code Object Metadata Mapping

"Version" (sequence of 2 integers; required)
  - The first integer is the major version. Currently 1.
  - The second integer is the minor version. Currently 0.

"Printf" (sequence of strings)
  Each string is encoded information about a printf function call. The encoded information is organized as fields separated by colon (':'):

    ID:N:S[0]:S[1]:...:S[N-1]:FormatString

  where:

  ID
    A 32 bit integer as a unique id for each printf function call.

  N
    A 32 bit integer equal to the number of arguments of the printf function call minus 1.

  S[i] (where i = 0, 1, ..., N-1)
    32 bit integers for the size in bytes of the i-th FormatString argument of the printf function call.

  FormatString
    The format string passed to the printf function call.

"Kernels" (sequence of mapping; required)
  Sequence of the mappings for each kernel in the code object. See amdgpu-amdhsa-code-object-kernel-metadata-mapping-table for the definition of the mapping.
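For example, a call printf("index %d %f\n", i, x) with a 32 bit i and a 64 bit x could be encoded (assuming a hypothetical id of 2) as: 2:2:4:8:index %d %f\n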
The HSA architected queuing language (AQL) defines a user space memory interface that can be used to control the dispatch of kernels, in an agent independent way. An agent can have zero or more AQL queues created for it using the ROCm runtime, in which AQL packets (all of which are 64 bytes) can be placed. See the HSA Platform System Architecture Specification [HSA] for the AQL queue mechanics and packet layouts.
The packet processor of a kernel agent is responsible for detecting and dispatching HSA kernels from the AQL queues associated with it. For AMD GPUs the packet processor is implemented by the hardware command processor (CP), asynchronous dispatch controller (ADC) and shader processor input controller (SPI).
The ROCm runtime can be used to allocate an AQL queue object. It uses the kernel mode driver to initialize and register the AQL queue with CP.
To dispatch a kernel the following actions are performed. This can occur in the CPU host program, or from an HSA kernel executing on a GPU.
- A pointer to an AQL queue for the kernel agent on which the kernel is to be executed is obtained.
- A pointer to the kernel descriptor (see
amdgpu-amdhsa-kernel-descriptor
) of the kernel to execute is obtained. It must be for a kernel that is contained in a code object that was loaded by the ROCm runtime on the kernel agent with which the AQL queue is associated. - Space is allocated for the kernel arguments using the ROCm runtime allocator for a memory region with the kernarg property for the kernel agent that will execute the kernel. It must be at least 16 byte aligned.
- Kernel argument values are assigned to the kernel argument memory allocation. The layout is defined in the HSA Programmer's Language Reference [HSA]. For AMDGPU the kernel execution directly accesses the kernel argument memory in the same way constant memory is accessed. (Note that the HSA specification allows an implementation to copy the kernel argument contents to another location that is accessed by the kernel.)
- An AQL kernel dispatch packet is created on the AQL queue. The ROCm runtime api uses 64 bit atomic operations to reserve space in the AQL queue for the packet. The packet must be set up, and the final write must use an atomic store release to set the packet kind to ensure the packet contents are visible to the kernel agent. AQL defines a doorbell signal mechanism to notify the kernel agent that the AQL queue has been updated. These rules, and the layout of the AQL queue and kernel dispatch packet, are defined in the HSA System Architecture Specification [HSA].
- A kernel dispatch packet includes information about the actual dispatch, such as grid and work-group size, together with information from the code object about the kernel, such as segment sizes. The ROCm runtime queries on the kernel symbol can be used to obtain the code object values which are recorded in the
amdgpu-amdhsa-code-object-metadata
. - CP executes micro-code and is responsible for detecting and setting up the GPU to execute the wavefronts of a kernel dispatch.
- CP ensures that when a wavefront starts executing the kernel machine code, the scalar general purpose registers (SGPR) and vector general purpose registers (VGPR) are set up as required by the machine code. The required setup is defined in the
amdgpu-amdhsa-kernel-descriptor
. The initial register state is defined inamdgpu-amdhsa-initial-kernel-execution-state
. - The prolog of the kernel machine code (see
amdgpu-amdhsa-kernel-prolog
) sets up the machine state as necessary before continuing executing the machine code that corresponds to the kernel. - When the kernel dispatch has completed execution, CP signals the completion signal specified in the kernel dispatch packet if not 0.
The memory space properties are:

AMDHSA Memory Spaces

  Memory Space Name  HSA Segment Name  Hardware Name   Address Size  NULL Value
  Private            private           scratch         32            0x00000000
  Local              group             LDS             32            0xFFFFFFFF
  Global             global            global          64            0x0000000000000000
  Constant           constant          same as global  64            0x0000000000000000
  Generic            flat              flat            64            0x0000000000000000
  Region             N/A               GDS             32            not implemented for AMDHSA
The global and constant memory spaces both use global virtual addresses, which are the same virtual address space used by the CPU. However, some virtual addresses may only be accessible to the CPU, some only accessible by the GPU, and some by both.
Using the constant memory space indicates that the data will not change during the execution of the kernel. This allows scalar read instructions to be used. The vector and scalar L1 caches are invalidated of volatile data before each kernel dispatch execution to allow constant memory to change values between kernel dispatches.
The local memory space uses the hardware Local Data Store (LDS) which is automatically allocated when the hardware creates work-groups of wavefronts, and freed when all the wavefronts of a work-group have terminated. The data store (DS) instructions can be used to access it.
The private memory space uses the hardware scratch memory support. If the kernel uses scratch, then the hardware allocates memory that is accessed using wavefront lane dword (4 byte) interleaving. The mapping used from private address to physical address is:
wavefront-scratch-base + (private-address * wavefront-size * 4) + (wavefront-lane-id * 4)
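As a worked example with hypothetical values: for a wavefront size of 64, private-address 2 and wavefront-lane-id 5, the physical address is wavefront-scratch-base + (2 * 64 * 4) + (5 * 4) = wavefront-scratch-base + 532. Adjacent lanes accessing the same private address therefore land in adjacent dwords.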
There are different ways that the wavefront scratch base address is determined by a wavefront (see amdgpu-amdhsa-initial-kernel-execution-state
). This memory can be accessed in an interleaved manner using buffer instruction with the scratch buffer descriptor and per wavefront scratch offset, by the scratch instructions, or by flat instructions. If each lane of a wavefront accesses the same private address, the interleaving results in adjacent dwords being accessed and hence requires fewer cache lines to be fetched. Multi-dword access is not supported except by flat and scratch instructions in GFX9.
The generic address space uses the hardware flat address support available in GFX7-GFX9. This uses two fixed ranges of virtual addresses (the private and local apertures), that are outside the range of addressable global memory, to map from a flat address to a private or local address.

FLAT instructions can take a flat address and access global, private (scratch) and group (LDS) memory depending on whether the address is within one of the aperture ranges. Flat access to scratch requires hardware aperture setup and setup in the kernel prologue (see amdgpu-amdhsa-flat-scratch). Flat access to LDS requires hardware aperture setup and M0 (GFX7-GFX8) register setup (see amdgpu-amdhsa-m0).

To convert between a segment address and a flat address the base address of the apertures can be used. For GFX7-GFX8 these are available in the amdgpu-amdhsa-hsa-aql-queue the address of which can be obtained with Queue Ptr SGPR (see amdgpu-amdhsa-initial-kernel-execution-state). For GFX9 the aperture base addresses are directly available as inline constant registers SRC_SHARED_BASE/LIMIT and SRC_PRIVATE_BASE/LIMIT. In 64 bit address mode the aperture sizes are 2^32 bytes and the base is aligned to 2^32 which makes it easier to convert from flat to segment or segment to flat.
Image and sample handles created by the ROCm runtime are 64 bit addresses of a hardware 32 byte V# and 48 byte S# object respectively. In order to support the HSA query_sampler
operations two extra dwords are used to store the HSA BRIG enumeration values for the queries that are not trivially deducible from the S# representation.
HSA signal handles created by the ROCm runtime are 64 bit addresses of a structure allocated in memory accessible from both the CPU and GPU. The structure is defined by the ROCm runtime and subject to change between releases (see [AMD-ROCm-github]).
The HSA AQL queue structure is defined by the ROCm runtime and subject to change between releases (see [AMD-ROCm-github]). For some processors it contains fields needed to implement certain language features such as the flat address aperture bases. It also contains fields used by CP such as managing the allocation of scratch memory.
A kernel descriptor consists of the information needed by CP to initiate the execution of a kernel, including the entry point address of the machine code that implements the kernel.
CP microcode requires the Kernel descriptor to be allocated on 64 byte alignment.
Kernel Descriptor for GFX6-GFX9
  Bits     Size      Field Name                          Description
  31:0     4 bytes   GROUP_SEGMENT_FIXED_SIZE            The amount of fixed local address space memory required for a work-group in bytes. This does not include any dynamically allocated local address space memory that may be added when the kernel is dispatched.
  63:32    4 bytes   PRIVATE_SEGMENT_FIXED_SIZE          The amount of fixed private address space memory required for a work-item in bytes. If is_dynamic_callstack is 1 then additional space must be added to this value for the call stack.
  127:64   8 bytes                                       Reserved, must be 0.
  191:128  8 bytes   KERNEL_CODE_ENTRY_BYTE_OFFSET       Byte offset (possibly negative) from base address of kernel descriptor to kernel's entry point instruction which must be 256 byte aligned.
  383:192  24 bytes                                      Reserved, must be 0.
  415:384  4 bytes   COMPUTE_PGM_RSRC1                   Compute Shader (CS) program settings used by CP to set up the COMPUTE_PGM_RSRC1 configuration register. See amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table.
  447:416  4 bytes   COMPUTE_PGM_RSRC2                   Compute Shader (CS) program settings used by CP to set up the COMPUTE_PGM_RSRC2 configuration register. See amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table.
  448      1 bit     ENABLE_SGPR_PRIVATE_SEGMENT_BUFFER  Enable the setup of the SGPR user data registers (see amdgpu-amdhsa-initial-kernel-execution-state). The total number of SGPR user data registers requested must not exceed 16 and must match the value in compute_pgm_rsrc2.user_sgpr.user_sgpr_count. Any requests beyond 16 will be ignored.
  449      1 bit     ENABLE_SGPR_DISPATCH_PTR            see above
  450      1 bit     ENABLE_SGPR_QUEUE_PTR               see above
  451      1 bit     ENABLE_SGPR_KERNARG_SEGMENT_PTR     see above
  452      1 bit     ENABLE_SGPR_DISPATCH_ID             see above
  453      1 bit     ENABLE_SGPR_FLAT_SCRATCH_INIT       see above
  454      1 bit     ENABLE_SGPR_PRIVATE_SEGMENT_SIZE    see above
  455      1 bit                                         Reserved, must be 0.
  511:456  7 bytes                                       Reserved, must be 0.

  Total size 64 bytes.
This section defines the register state that will be set up by the packet processor prior to the start of execution of every wavefront. This is limited by the constraints of the hardware controllers of CP/ADC/SPI.
The order of the SGPR registers is defined, but the compiler can specify which ones are actually set up in the kernel descriptor using the enable_sgpr_*
bit fields (see amdgpu-amdhsa-kernel-descriptor
). The register numbers used for enabled registers are dense starting at SGPR0: the first enabled register is SGPR0, the next enabled register is SGPR1 etc.; disabled registers do not have an SGPR number.
The initial SGPRs comprise up to 16 User SGPRs that are set by CP and apply to all wavefronts of the grid. It is possible to specify more than 16 User SGPRs using the enable_sgpr_*
bit fields, in which case only the first 16 are actually initialized. These are then immediately followed by the System SGPRs that are set up by ADC/SPI and can have different values for each wavefront of the grid dispatch.
SGPR register initial state is defined in amdgpu-amdhsa-sgpr-register-set-up-order-table
.
SGPR Register Set Up Order
  SGPR Order   Name (kernel descriptor enable field)   Number of SGPRs   Description

First
Private Segment Buffer (enable_sgpr_private _segment_buffer)
4
V# that can be used, together with Scratch Wavefront Offset as an offset, to access the private memory space using a segment address.
CP uses the value provided by the runtime.
then
Dispatch Ptr (enable_sgpr_dispatch_ptr)
2
64 bit address of AQL dispatch packet for kernel dispatch actually executing.
then
Queue Ptr (enable_sgpr_queue_ptr)
2
64 bit address of amd_queue_t object for AQL queue on which the dispatch packet was queued.
then
Kernarg Segment Ptr (enable_sgpr_kernarg _segment_ptr)
2
64 bit address of Kernarg segment. This is directly copied from the kernarg_address in the kernel dispatch packet.
Having CP load it once avoids loading it at the beginning of every wavefront.
then
Dispatch Id (enable_sgpr_dispatch_id)
2
64 bit Dispatch ID of the dispatch packet being executed.
then
Flat Scratch Init (enable_sgpr_flat_scratch _init)
2
This is 2 SGPRs:
- GFX6
Not supported.
- GFX7-GFX8
The first SGPR is a 32 bit byte offset from SH_HIDDEN_PRIVATE_BASE_VIMID to per SPI base of memory for scratch for the queue executing the kernel dispatch. CP obtains this from the runtime. (The Scratch Segment Buffer base address is SH_HIDDEN_PRIVATE_BASE_VIMID plus this offset.) The value of Scratch Wavefront Offset must be added to this offset by the kernel machine code, right shifted by 8, and moved to the FLAT_SCRATCH_HI SGPR register. FLAT_SCRATCH_HI corresponds to SGPRn-4 on GFX7, and SGPRn-6 on GFX8 (where SGPRn is the highest numbered SGPR allocated to the wavefront). FLAT_SCRATCH_HI is multiplied by 256 (as it is in units of 256 bytes) and added to SH_HIDDEN_PRIVATE_BASE_VIMID to calculate the per wavefront FLAT SCRATCH BASE in flat memory instructions that access the scratch aperture.

The second SGPR is the 32 bit byte size of a single work-item's scratch memory usage. CP obtains this from the runtime, and it is always a multiple of DWORD. CP checks that the value in the kernel dispatch packet Private Segment Byte Size is not larger, and requests the runtime to increase the queue's scratch size if necessary. The kernel code must move it to FLAT_SCRATCH_LO which is SGPRn-3 on GFX7 and SGPRn-5 on GFX8. FLAT_SCRATCH_LO is used as the FLAT SCRATCH SIZE in flat memory instructions. Having CP load it once avoids loading it at the beginning of every wavefront.
- GFX9
This is the 64 bit base address of the per SPI scratch backing memory managed by SPI for the queue executing the kernel dispatch. CP obtains this from the runtime (and divides it if there are multiple Shader Arrays each with its own SPI). The value of Scratch Wavefront Offset must be added by the kernel machine code and the result moved to the FLAT_SCRATCH SGPR pair, which is SGPRn-6 and SGPRn-5. It is used as the FLAT SCRATCH BASE in flat memory instructions.
then
Private Segment Size (enable_sgpr_private _segment_size)

1

The 32 bit byte size of a single work-item's scratch memory allocation. This is the value from the kernel dispatch packet Private Segment Byte Size rounded up by CP to a multiple of DWORD.

Having CP load it once avoids loading it at the beginning of every wavefront.

This is not used for GFX7-GFX8 since it is the same value as the second SGPR of Flat Scratch Init. However, it may be needed for GFX9 which changes the meaning of the Flat Scratch Init value.
then
Grid Work-Group Count X (enable_sgpr_grid _workgroup_count_X)
1
32 bit count of the number of work-groups in the X dimension for the grid being executed. Computed from the fields in the kernel dispatch packet as ((grid_size.x + workgroup_size.x - 1) / workgroup_size.x).
then
Grid Work-Group Count Y (enable_sgpr_grid _workgroup_count_Y && less than 16 previous SGPRs)
1
32 bit count of the number of work-groups in the Y dimension for the grid being executed. Computed from the fields in the kernel dispatch packet as ((grid_size.y + workgroup_size.y - 1) / workgroup_size.y).
Only initialized if <16 previous SGPRs initialized.
then
Grid Work-Group Count Z (enable_sgpr_grid _workgroup_count_Z && less than 16 previous SGPRs)
1
32 bit count of the number of work-groups in the Z dimension for the grid being executed. Computed from the fields in the kernel dispatch packet as ((grid_size.z + workgroup_size.z - 1) / workgroup_size.z).
Only initialized if <16 previous SGPRs initialized.
then
Work-Group Id X (enable_sgpr_workgroup_id _X)
1
32 bit work-group id in X dimension of grid for wavefront.
then
Work-Group Id Y (enable_sgpr_workgroup_id _Y)
1
32 bit work-group id in Y dimension of grid for wavefront.
then
Work-Group Id Z (enable_sgpr_workgroup_id _Z)
1
32 bit work-group id in Z dimension of grid for wavefront.
then
Work-Group Info (enable_sgpr_workgroup _info)
1
{first_wavefront, 14'b0000, ordered_append_term[10:0], threadgroup_size_in_wavefronts[5:0]}
then
Scratch Wavefront Offset (enable_sgpr_private _segment_wavefront_offset)
1
32 bit byte offset from base of scratch base of queue executing the kernel dispatch. Must be used as an offset with Private segment address when using Scratch Segment Buffer. It must be used to set up FLAT SCRATCH for flat addressing (see
amdgpu-amdhsa-flat-scratch
).
The order of the VGPR registers is defined, but the compiler can specify which ones are actually set up in the kernel descriptor using the enable_vgpr*
bit fields (see amdgpu-amdhsa-kernel-descriptor
). The register numbers used for enabled registers are dense starting at VGPR0: the first enabled register is VGPR0, the next enabled register is VGPR1 etc.; disabled registers do not have a VGPR number.
VGPR register initial state is defined in amdgpu-amdhsa-vgpr-register-set-up-order-table
.
VGPR Register Set Up Order
  VGPR Order   Name (kernel descriptor enable field)   Number of VGPRs   Description

First
Work-Item Id X (Always initialized)
1
32 bit work item id in X dimension of work-group for wavefront lane.
then
Work-Item Id Y (enable_vgpr_workitem_id > 0)
1
32 bit work item id in Y dimension of work-group for wavefront lane.
then
Work-Item Id Z (enable_vgpr_workitem_id > 1)
1
32 bit work item id in Z dimension of work-group for wavefront lane.
The setting of registers is done by GPU CP/ADC/SPI hardware as follows:
- SGPRs before the Work-Group Ids are set by CP using the 16 User Data registers.
- Work-group Id registers X, Y, Z are set by ADC which supports any combination including none.
- Scratch Wavefront Offset is set by SPI on a per wavefront basis, which is why its value cannot be included with the flat scratch init value, which is per queue.
- The VGPRs are set by SPI which only supports specifying either (X), (X, Y) or (X, Y, Z).
The Flat Scratch register pair are adjacent SGPRs so they can be moved as a 64 bit value to the hardware required SGPRn-3 and SGPRn-4 respectively.
The global segment can be accessed either using buffer instructions (GFX6 which has V# 64 bit address support), flat instructions (GFX7-GFX9), or global instructions (GFX9).
If buffer operations are used then the compiler can generate a V# with the following properties:
- base address of 0
- no swizzle
- ATC: 1 if IOMMU present (such as APU)
- ptr64: 1
- MTYPE set to support memory coherence that matches the runtime (such as CC for APU and NC for dGPU).
- GFX6-GFX8
The M0 register must be initialized with a value at least the total LDS size if the kernel may access LDS via DS or flat operations. Total LDS size is available in the dispatch packet. For M0, it is also possible to use the maximum possible value of LDS for the given target (0x7FFF for GFX6 and 0xFFFF for GFX7-GFX8).
- GFX9
The M0 register is not used for range checking LDS accesses and so does not need to be initialized in the prolog.
If the kernel may use flat operations to access scratch memory, the prolog code must set up FLAT_SCRATCH register pair (FLAT_SCRATCH_LO/FLAT_SCRATCH_HI which are in SGPRn-4/SGPRn-3). Initialization uses Flat Scratch Init and Scratch Wavefront Offset SGPR registers (see amdgpu-amdhsa-initial-kernel-execution-state
):
- GFX6
Flat scratch is not supported.
- GFX7-GFX8
- The low word of Flat Scratch Init is a 32 bit byte offset from SH_HIDDEN_PRIVATE_BASE_VIMID to the base of scratch backing memory being managed by SPI for the queue executing the kernel dispatch. This is the same value used in the Scratch Segment Buffer V# base address. The prolog must add the value of Scratch Wavefront Offset to get the wavefront's byte scratch backing memory offset from SH_HIDDEN_PRIVATE_BASE_VIMID. Since FLAT_SCRATCH_LO is in units of 256 bytes, the offset must be right shifted by 8 before moving into FLAT_SCRATCH_LO.
- The second word of Flat Scratch Init is the 32 bit byte size of a single work-item's scratch memory usage. This is directly loaded from the kernel dispatch packet Private Segment Byte Size and rounded up to a multiple of DWORD. Having CP load it once avoids loading it at the beginning of every wavefront. The prolog must move it to FLAT_SCRATCH_HI for use as FLAT SCRATCH SIZE.
- GFX9
The Flat Scratch Init is the 64 bit address of the base of scratch backing memory being managed by SPI for the queue executing the kernel dispatch. The prolog must add the value of Scratch Wavefront Offset and move the result to the FLAT_SCRATCH pair for use as the flat scratch base in flat memory instructions.
This section describes the mapping of LLVM memory model onto AMDGPU machine code (see memmodel
). The implementation is WIP.
The AMDGPU backend supports the memory synchronization scopes specified in amdgpu-memory-scopes
.
The code sequences used to implement the memory model are defined in table amdgpu-amdhsa-memory-model-code-sequences-gfx6-gfx9-table
.
The sequences specify the order of instructions that a single thread must execute. The s_waitcnt
and buffer_wbinvl1_vol
are defined with respect to other memory instructions executed by the same thread. This allows them to be moved earlier or later which can allow them to be combined with other instances of the same instruction, or hoisted/sunk out of loops to improve performance. Only the instructions related to the memory model are given; additional s_waitcnt
instructions are required to ensure registers are defined before being used. These may be able to be combined with the memory model s_waitcnt
instructions as described above.
The AMDGPU backend supports the following memory models:
- HSA Memory Model [HSA]
The HSA memory model uses a single happens-before relation for all address spaces (see
amdgpu-address-spaces
).- OpenCL Memory Model [OpenCL]
The OpenCL memory model has separate happens-before relations for the global and local address spaces. Only a fence specifying both global and local address space, and seq_cst instructions, join the relationships. Since the LLVM memfence instruction does not allow an address space to be specified, the OpenCL fence has to conservatively assume both local and global address space was specified. However, optimizations can often be done to eliminate the additional s_waitcnt instructions when there are no intervening memory instructions which access the corresponding address space. The code sequences in the table indicate what can be omitted for the OpenCL memory model. The target triple environment is used to determine if the source language is OpenCL (see amdgpu-opencl).
ds/flat_load/store/atomic
instructions to local memory are termed LDS operations.
buffer/global/flat_load/store/atomic
instructions to global memory are termed vector memory operations.
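For orientation, a hypothetical LLVM IR fragment of the kind whose mappings appear in the code sequences table below (the pointer %p is assumed to be defined elsewhere):

  ; An agent-scope acquire fence followed by an agent-scope monotonic
  ; atomicrmw on a global (address space 1) pointer.
  fence syncscope("agent") acquire
  %old = atomicrmw add i32 addrspace(1)* %p, i32 1 syncscope("agent") monotonic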
For GFX6-GFX9:
- Each agent has multiple compute units (CU).
- Each CU has multiple SIMDs that execute wavefronts.
- The wavefronts for a single work-group are executed in the same CU but may be executed by different SIMDs.
- Each CU has a single LDS memory shared by the wavefronts of the work-groups executing on it.
- All LDS operations of a CU are performed as wavefront wide operations in a global order and involve no caching. Completion is reported to a wavefront in execution order.
- The LDS memory has multiple request queues shared by the SIMDs of a CU. Therefore, the LDS operations performed by different wavefronts of a work-group can be reordered relative to each other, which can result in reordering the visibility of vector memory operations with respect to LDS operations of other wavefronts in the same work-group. A
s_waitcnt lgkmcnt(0)
is required to ensure synchronization between LDS operations and vector memory operations between wavefronts of a work-group, but not between operations performed by the same wavefront. - The vector memory operations are performed as wavefront wide operations and completion is reported to a wavefront in execution order. The exception is that for GFX7-GFX9
flat_load/store/atomic
instructions can report out of vector memory order if they access LDS memory, and out of LDS operation order if they access global memory. - The vector memory operations access a single vector L1 cache shared by all SIMDs a CU. Therefore, no special action is required for coherence between the lanes of a single wavefront, or for coherence between wavefronts in the same work-group. A
buffer_wbinvl1_vol
is required for coherence between wavefronts executing in different work-groups as they may be executing on different CUs. - The scalar memory operations access a scalar L1 cache shared by all wavefronts on a group of CUs. The scalar and vector L1 caches are not coherent. However, scalar operations are used in a restricted way so do not impact the memory model. See
amdgpu-amdhsa-memory-spaces
. - The vector and scalar memory operations use an L2 cache shared by all CUs on the same agent.
- The L2 cache has independent channels to service disjoint ranges of virtual addresses.
- Each CU has a separate request queue per channel. Therefore, the vector and scalar memory operations performed by wavefronts executing in different work-groups (which may be executing on different CUs) of an agent can be reordered relative to each other. A
s_waitcnt vmcnt(0)
is required to ensure synchronization between vector memory operations of different CUs. It ensures a previous vector memory operation has completed before executing a subsequent vector memory or LDS operation and so can be used to meet the requirements of acquire and release. - The L2 cache can be kept coherent with other agents on some targets, or ranges of virtual addresses can be set up to bypass it to ensure system coherence.
Private address space uses buffer_load/store
using the scratch V# (GFX6-GFX8), or scratch_load/store
(GFX9). Since only a single thread is accessing the memory, atomic memory orderings are not meaningful and all accesses are treated as non-atomic.
Constant address space uses buffer/global_load instructions (or equivalent scalar memory instructions). Since the constant address space contents do not change during the execution of a kernel dispatch it is not legal to perform stores, and atomic memory orderings are not meaningful and all accesses are treated as non-atomic.
A memory synchronization scope wider than work-group is not meaningful for the group (LDS) address space and is treated as work-group.
The memory model does not support the region address space which is treated as non-atomic.
Acquire memory ordering is not meaningful on store atomic instructions and is treated as non-atomic.
Release memory ordering is not meaningful on load atomic instructions and is treated as non-atomic.
Acquire-release memory ordering is not meaningful on load or store atomic instructions and is treated as acquire and release respectively.
AMDGPU backend only uses scalar memory operations to access memory that is proven to not change during the execution of the kernel dispatch. This includes constant address space and global address space for program scope const variables. Therefore the kernel machine code does not have to maintain the scalar L1 cache to ensure it is coherent with the vector L1 cache. The scalar and vector L1 caches are invalidated between kernel dispatches by CP since constant address space data may change between kernel dispatch executions. See amdgpu-amdhsa-memory-spaces
.
The one exception is if scalar writes are used to spill SGPR registers. In this case the AMDGPU backend ensures the memory location used to spill is never accessed by vector memory operations at the same time. If scalar writes are used then a s_dcache_wb
is inserted before the s_endpgm
and before a function return since the locations may be used for vector memory instructions by a future wavefront that uses the same scratch area, or a function call that creates a frame at the same address, respectively. There is no need for a s_dcache_inv
as all scalar writes are write-before-read in the same thread.
Scratch backing memory (which is used for the private address space) is accessed with MTYPE NC_NV (non-coherent non-volatile). Since the private address space is only accessed by a single thread, and is always write-before-read, there is never a need to invalidate these entries from the L1 cache. Hence all cache invalidates are done as *_vol
to only invalidate the volatile cache lines.
On dGPU the kernarg backing memory is accessed as UC (uncached) to avoid needing to invalidate the L2 cache. This also causes it to be treated as non-volatile and so is not invalidated by *_vol
. On APU it is accessed as CC (cache coherent) and so the L2 cache will be coherent with the CPU and other agents.
AMDHSA Memory Model Code Sequences GFX6-GFX9
LLVM Instr
LLVM Memory Ordering
LLVM Memory Sync Scope
AMDGPU Address Space
AMDGPU Machine Code
Non-Atomic

load
none
none
- global
- generic
- private
- constant
- !volatile & !nontemporal
- buffer/global/flat_load
- volatile & !nontemporal
- buffer/global/flat_load glc=1
- nontemporal
- buffer/global/flat_load glc=1 slc=1
load          none          none            - local       1. ds_load

store
none
none
- global
- generic
- private
- constant
- !nontemporal
- buffer/global/flat_store
- nontemporal
- buffer/global/flat_store glc=1 slc=1
store         none          none            - local       1. ds_store

Unordered Atomic

load atomic   unordered     any             any           Same as non-atomic.
store atomic  unordered     any             any           Same as non-atomic.
atomicrmw     unordered     any             any           Same as monotonic atomic.

Monotonic Atomic
load atomic
monotonic
- singlethread
- wavefront
- workgroup
- global
- generic
- buffer/global/flat_load
load atomic
monotonic
- singlethread
- wavefront
- workgroup
- local
- ds_load
load atomic
monotonic
- agent
- system
- global
- generic
- buffer/global/flat_load glc=1
store atomic
monotonic
- singlethread
- wavefront
- workgroup
- agent
- system
- global
- generic
- buffer/global/flat_store
store atomic
monotonic
- singlethread
- wavefront
- workgroup
- local
- ds_store
atomicrmw
monotonic
- singlethread
- wavefront
- workgroup
- agent
- system
- global
- generic
- buffer/global/flat_atomic
atomicrmw
**Acquire Ato
monotonic
mic**
- singlethread
- wavefront
- workgroup
- local
- ds_atomic
**Acquire Atomic**

load atomic
acquire
- singlethread
- wavefront
- global
- local
- generic
- buffer/global/ds/flat_load
load atomic
acquire
- workgroup
- global
- buffer/global/flat_load

load atomic
acquire
- workgroup
- local
- ds_load
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen before any following global/generic load/load atomic/store/store atomic/atomicrmw.
- Ensures any following global data read is no older than the load atomic value being acquired.
load atomic
acquire
- workgroup
- generic
- flat_load
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen before any following global/generic load/load atomic/store/store atomic/atomicrmw.
- Ensures any following global data read is no older than the load atomic value being acquired.
load atomic
acquire
- agent
- system
- global
- buffer/global/flat_load glc=1
- s_waitcnt vmcnt(0)
- Must happen before following buffer_wbinvl1_vol.
- Ensures the load has completed before invalidating the cache.
- buffer_wbinvl1_vol
- Must happen before any following global/generic load/load atomic/atomicrmw.
- Ensures that following loads will not see stale global data.
load atomic
acquire
- agent
- system
- generic
- flat_load glc=1
- s_waitcnt vmcnt(0) & lgkmcnt(0)
- If OpenCL omit lgkmcnt(0).
- Must happen before following buffer_wbinvl1_vol.
- Ensures the flat_load has completed before invalidating the cache.
- buffer_wbinvl1_vol
- Must happen before any following global/generic load/load atomic/atomicrmw.
- Ensures that following loads will not see stale global data.
atomicrmw
acquire
- singlethread
- wavefront
- global
- local
- generic
- buffer/global/ds/flat_atomic
atomicrmw
acquire
- workgroup
- global
- buffer/global/flat_atomic

atomicrmw
acquire
- workgroup
- local
- ds_atomic
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen before any following global/generic load/load atomic/store/store atomic/atomicrmw.
- Ensures any following global data read is no older than the atomicrmw value being acquired.
atomicrmw
acquire
- workgroup
- generic
- flat_atomic
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen before any following global/generic load/load atomic/store/store atomic/atomicrmw.
- Ensures any following global data read is no older than the atomicrmw value being acquired.
atomicrmw
acquire
- agent
- system
- global
- buffer/global/flat_atomic
- s_waitcnt vmcnt(0)
- Must happen before following buffer_wbinvl1_vol.
- Ensures the atomicrmw has completed before invalidating the cache.
- buffer_wbinvl1_vol
- Must happen before any following global/generic load/load atomic/atomicrmw.
- Ensures that following loads will not see stale global data.
atomicrmw
acquire
- agent
- system
- generic
- flat_atomic
- s_waitcnt vmcnt(0) & lgkmcnt(0)
- If OpenCL, omit lgkmcnt(0).
- Must happen before following buffer_wbinvl1_vol.
- Ensures the atomicrmw has completed before invalidating the cache.
- buffer_wbinvl1_vol
- Must happen before any following global/generic load/load atomic/atomicrmw.
- Ensures that following loads will not see stale global data.
fence
acquire
- singlethread
- wavefront
none
none
fence
acquire
- workgroup
none
- s_waitcnt lgkmcnt(0)
- If OpenCL and address space is not generic, omit.
- However, since LLVM currently has no address space on the fence, it must conservatively always be generated. If the fence had an address space then set it to the address space of the OpenCL fence flag, or to generic if both local and global flags are specified.
- Must happen after any preceding local/generic load atomic/atomicrmw with an equal or wider sync scope and memory ordering stronger than unordered (this is termed the fence-paired-atomic).
- Must happen before any following global/generic load/load atomic/store/store atomic/atomicrmw.
- Ensures any following global data read is no older than the value read by the fence-paired-atomic.
fence
acquire
- agent
- system
none
- s_waitcnt lgkmcnt(0) & vmcnt(0)
- If OpenCL and address space is not generic, omit lgkmcnt(0).
- However, since LLVM currently has no address space on the fence, it must conservatively always be generated (see comment for previous fence).
- Could be split into separate s_waitcnt vmcnt(0) and s_waitcnt lgkmcnt(0) to allow them to be independently moved according to the following rules.
- s_waitcnt vmcnt(0) must happen after any preceding global/generic load atomic/atomicrmw with an equal or wider sync scope and memory ordering stronger than unordered (this is termed the fence-paired-atomic).
- s_waitcnt lgkmcnt(0) must happen after any preceding local/generic load atomic/atomicrmw with an equal or wider sync scope and memory ordering stronger than unordered (this is termed the fence-paired-atomic).
- Must happen before the following buffer_wbinvl1_vol.
- Ensures that the fence-paired atomic has completed before invalidating the cache. Therefore any following locations read must be no older than the value read by the fence-paired-atomic.
- buffer_wbinvl1_vol
- Must happen before any following global/generic load/load atomic/store/store atomic/atomicrmw.
- Ensures that following loads will not see stale global data.
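Putting one of the rows above together, a sketch of the machine code for an agent-scope acquire load from the global address space (register choices are illustrative):

flat_load_dword v0, v[2:3] glc ; glc=1 bypasses the vector L1 cache
s_waitcnt vmcnt(0)             ; the load must complete before the invalidate
buffer_wbinvl1_vol             ; following loads will not see stale global data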
**Release Atomic**

store atomic
release
- singlethread
- wavefront
- global
- local
- generic
- buffer/global/ds/flat_store
store atomic
release
- workgroup
- global
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following store.
- Ensures that all memory operations to local have completed before performing the store that is being released.
- buffer/global/flat_store
store atomic
release
- workgroup
- local
- ds_store

store atomic
release
- workgroup
- generic
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following store.
- Ensures that all memory operations to local have completed before performing the store that is being released.
- flat_store
store atomic
release
- agent
- system
- global
- generic
- s_waitcnt lgkmcnt(0) & vmcnt(0)
- If OpenCL, omit lgkmcnt(0).
- Could be split into separate s_waitcnt vmcnt(0) and s_waitcnt lgkmcnt(0) to allow them to be independently moved according to the following rules.
- s_waitcnt vmcnt(0) must happen after any preceding global/generic load/store/load atomic/store atomic/atomicrmw.
- s_waitcnt lgkmcnt(0) must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following store.
- Ensures that all memory operations to memory have completed before performing the store that is being released.
- buffer/global/ds/flat_store
atomicrmw
release
- singlethread
- wavefront
- global
- local
- generic
- buffer/global/ds/flat_atomic
atomicrmw
release
- workgroup
- global
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following atomicrmw.
- Ensures that all memory operations to local have completed before performing the atomicrmw that is being released.
- buffer/global/flat_atomic
atomicrmw
release
- workgroup
- local
- ds_atomic

atomicrmw
release
- workgroup
- generic
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following atomicrmw.
- Ensures that all memory operations to local have completed before performing the atomicrmw that is being released.
- flat_atomic
atomicrmw
release
- agent
- system
- global
- generic
- s_waitcnt lgkmcnt(0) & vmcnt(0)
- If OpenCL, omit lgkmcnt(0).
- Could be split into separate s_waitcnt vmcnt(0) and s_waitcnt lgkmcnt(0) to allow them to be independently moved according to the following rules.
- s_waitcnt vmcnt(0) must happen after any preceding global/generic load/store/load atomic/store atomic/atomicrmw.
- s_waitcnt lgkmcnt(0) must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following atomicrmw.
- Ensures that all memory operations to global and local have completed before performing the atomicrmw that is being released.
- buffer/global/ds/flat_atomic
fence
release
- singlethread
- wavefront
none
none
fence
release
- workgroup
none
- s_waitcnt lgkmcnt(0)
- If OpenCL and address space is not generic, omit.
- However, since LLVM currently has no address space on the fence, it must conservatively always be generated. If the fence had an address space then set it to the address space of the OpenCL fence flag, or to generic if both local and global flags are specified.
- Must happen after any preceding local/generic load/load atomic/store/store atomic/atomicrmw.
- Must happen before any following store atomic/atomicrmw with an equal or wider sync scope and memory ordering stronger than unordered (this is termed the fence-paired-atomic).
- Ensures that all memory operations to local have completed before performing the following fence-paired-atomic.
fence
release
- agent
- system
none
- s_waitcnt lgkmcnt(0) & vmcnt(0)
- If OpenCL and address space is not generic, omit lgkmcnt(0).
- If OpenCL and address space is local, omit vmcnt(0).
- However, since LLVM currently has no address space on the fence, it must conservatively always be generated. If the fence had an address space then set it to the address space of the OpenCL fence flag, or to generic if both local and global flags are specified.
- Could be split into separate s_waitcnt vmcnt(0) and s_waitcnt lgkmcnt(0) to allow them to be independently moved according to the following rules.
- s_waitcnt vmcnt(0) must happen after any preceding global/generic load/store/load atomic/store atomic/atomicrmw.
- s_waitcnt lgkmcnt(0) must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before any following store atomic/atomicrmw with an equal or wider sync scope and memory ordering stronger than unordered (this is termed the fence-paired-atomic).
- Ensures that all memory operations have completed before performing the following fence-paired-atomic.
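As a sketch, an agent-scope release store to the generic address space from the rows above could be emitted as (register choices are illustrative):

s_waitcnt vmcnt(0) & lgkmcnt(0) ; all prior memory operations have completed
flat_store_dword v[2:3], v0     ; the store being released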
**Acquire-Release Atomic**

atomicrmw
acq_rel
- singlethread
- wavefront
- global
- local
- generic
- buffer/global/ds/flat_atomic
atomicrmw
acq_rel
- workgroup
- global
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following atomicrmw.
- Ensures that all memory operations to local have completed before performing the atomicrmw that is being released.
- buffer/global/flat_atomic
atomicrmw
acq_rel
- workgroup
- local
- ds_atomic
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen before any following global/generic load/load atomic/store/store atomic/atomicrmw.
- Ensures any following global data read is no older than the load atomic value being acquired.
atomicrmw
acq_rel
- workgroup
- generic
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following atomicrmw.
- Ensures that all memory operations to local have completed before performing the atomicrmw that is being released.
- flat_atomic
- s_waitcnt lgkmcnt(0)
- If OpenCL, omit.
- Must happen before any following global/generic load/load atomic/store/store atomic/atomicrmw.
- Ensures any following global data read is no older than the load atomic value being acquired.
atomicrmw
acq_rel
- agent
- system
- global
- s_waitcnt lgkmcnt(0) & vmcnt(0)
- If OpenCL, omit lgkmcnt(0).
- Could be split into separate s_waitcnt vmcnt(0) and s_waitcnt lgkmcnt(0) to allow them to be independently moved according to the following rules.
- s_waitcnt vmcnt(0) must happen after any preceding global/generic load/store/load atomic/store atomic/atomicrmw.
- s_waitcnt lgkmcnt(0) must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following atomicrmw.
- Ensures that all memory operations to global have completed before performing the atomicrmw that is being released.
- buffer/global/flat_atomic
- s_waitcnt vmcnt(0)
- Must happen before following buffer_wbinvl1_vol.
- Ensures the atomicrmw has completed before invalidating the cache.
- buffer_wbinvl1_vol
- Must happen before any following global/generic load/load atomic/atomicrmw.
- Ensures that following loads will not see stale global data.
atomicrmw
acq_rel
- agent
- system
- generic
- s_waitcnt lgkmcnt(0) & vmcnt(0)
- If OpenCL, omit lgkmcnt(0).
- Could be split into separate s_waitcnt vmcnt(0) and s_waitcnt lgkmcnt(0) to allow them to be independently moved according to the following rules.
- s_waitcnt vmcnt(0) must happen after any preceding global/generic load/store/load atomic/store atomic/atomicrmw.
- s_waitcnt lgkmcnt(0) must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following atomicrmw.
- Ensures that all memory operations to global have completed before performing the atomicrmw that is being released.
- flat_atomic
- s_waitcnt vmcnt(0) & lgkmcnt(0)
- If OpenCL, omit lgkmcnt(0).
- Must happen before following buffer_wbinvl1_vol.
- Ensures the atomicrmw has completed before invalidating the cache.
- buffer_wbinvl1_vol
- Must happen before any following global/generic load/load atomic/atomicrmw.
- Ensures that following loads will not see stale global data.
fence
acq_rel
- singlethread
- wavefront
none
none
fence
acq_rel
- workgroup
none
- s_waitcnt lgkmcnt(0)
- If OpenCL and address space is not generic, omit.
- However, since LLVM currently has no address space on the fence, it must conservatively always be generated (see comment for previous fence).
- Must happen after any preceding local/generic load/load atomic/store/store atomic/atomicrmw.
- Must happen before any following global/generic load/load atomic/store/store atomic/atomicrmw.
- Ensures that all memory operations to local have completed before performing any following global memory operations.
- Ensures that the preceding local/generic load atomic/atomicrmw with an equal or wider sync scope and memory ordering stronger than unordered (this is termed the acquire-fence-paired-atomic) has completed before following global memory operations. This satisfies the requirements of acquire.
- Ensures that all previous memory operations have completed before a following local/generic store atomic/atomicrmw with an equal or wider sync scope and memory ordering stronger than unordered (this is termed the release-fence-paired-atomic). This satisfies the requirements of release.
fence
acq_rel
- agent
- system
none
- s_waitcnt lgkmcnt(0) & vmcnt(0)
- If OpenCL and address space is not generic, omit lgkmcnt(0).
- However, since LLVM currently has no address space on the fence, it must conservatively always be generated (see comment for previous fence).
- Could be split into separate s_waitcnt vmcnt(0) and s_waitcnt lgkmcnt(0) to allow them to be independently moved according to the following rules.
- s_waitcnt vmcnt(0) must happen after any preceding global/generic load/store/load atomic/store atomic/atomicrmw.
- s_waitcnt lgkmcnt(0) must happen after any preceding local/generic load/store/load atomic/store atomic/atomicrmw.
- Must happen before the following buffer_wbinvl1_vol.
- Ensures that the preceding global/local/generic load atomic/atomicrmw with an equal or wider sync scope and memory ordering stronger than unordered (this is termed the acquire-fence-paired-atomic) has completed before invalidating the cache. This satisfies the requirements of acquire.
- Ensures that all previous memory operations have completed before a following global/local/generic store atomic/atomicrmw with an equal or wider sync scope and memory ordering stronger than unordered (this is termed the release-fence-paired-atomic). This satisfies the requirements of release.
- buffer_wbinvl1_vol
- Must happen before any following global/generic load/load atomic/store/store atomic/atomicrmw.
- Ensures that following loads will not see stale global data. This satisfies the requirements of acquire.
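For example, an agent-scope fence acq_rel (and likewise seq_cst) reduces to the short sequence below:

s_waitcnt vmcnt(0) & lgkmcnt(0) ; prior global and local accesses have completed
buffer_wbinvl1_vol              ; following loads will not see stale global data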
**Sequential Consistent Atomic**

load atomic
seq_cst
- singlethread
- wavefront
- global
- local
- generic
Same as corresponding load atomic acquire, except must generate all instructions even for OpenCL.
load atomic
seq_cst
- workgroup
- global
- generic
- s_waitcnt lgkmcnt(0)
- Must happen after preceding global/generic load atomic/store atomic/atomicrmw with memory ordering of seq_cst and with equal or wider sync scope. (Note that seq_cst fences have their own s_waitcnt lgkmcnt(0) and so do not need to be considered.)
- Ensures any preceding sequential consistent local memory instructions have completed before executing this sequentially consistent instruction. This prevents reordering a seq_cst store followed by a seq_cst load. (Note that seq_cst is stronger than acquire/release as the reordering of load acquire followed by a store release is prevented by the waitcnt of the release, but there is nothing preventing a store release followed by load acquire from completing out of order.)
- Following instructions are the same as the corresponding load atomic acquire, except must generate all instructions even for OpenCL.
load atomic
seq_cst
- workgroup
- local
Same as corresponding load atomic acquire, except must generate all instructions even for OpenCL.
load atomic
seq_cst
- agent
- system
- global
- generic
- s_waitcnt lgkmcnt(0) & vmcnt(0)
- Could be split into separate s_waitcnt vmcnt(0) and s_waitcnt lgkmcnt(0) to allow them to be independently moved according to the following rules.
- s_waitcnt lgkmcnt(0) must happen after preceding global/generic load atomic/store atomic/atomicrmw with memory ordering of seq_cst and with equal or wider sync scope. (Note that seq_cst fences have their own s_waitcnt lgkmcnt(0) and so do not need to be considered.)
- s_waitcnt vmcnt(0) must happen after preceding global/generic load atomic/store atomic/atomicrmw with memory ordering of seq_cst and with equal or wider sync scope. (Note that seq_cst fences have their own s_waitcnt vmcnt(0) and so do not need to be considered.)
- Ensures any preceding sequential consistent global memory instructions have completed before executing this sequentially consistent instruction. This prevents reordering a seq_cst store followed by a seq_cst load. (Note that seq_cst is stronger than acquire/release as the reordering of load acquire followed by a store release is prevented by the waitcnt of the release, but there is nothing preventing a store release followed by load acquire from completing out of order.)
- Following instructions are the same as the corresponding load atomic acquire, except must generate all instructions even for OpenCL.
store atomic
seq_cst
- singlethread
- wavefront
- workgroup
- global
- local
- generic
Same as corresponding store atomic release, except must generate all instructions even for OpenCL.
store atomic
seq_cst
- agent
- system
- global
- generic
Same as corresponding store atomic release, except must generate all instructions even for OpenCL.
atomicrmw
seq_cst
- singlethread
- wavefront
- workgroup
- global
- local
- generic
Same as corresponding atomicrmw acq_rel, except must generate all instructions even for OpenCL.
atomicrmw
seq_cst
- agent
- system
- global
- generic
Same as corresponding atomicrmw acq_rel, except must generate all instructions even for OpenCL.
fence
seq_cst
- singlethread
- wavefront
- workgroup
- agent
- system
none
Same as corresponding fence acq_rel, except must generate all instructions even for OpenCL.
The memory order also adds the single thread optimization constraints defined in table amdgpu-amdhsa-memory-model-single-thread-optimization-constraints-gfx6-gfx9-table.
AMDHSA Memory Model Single Thread Optimization Constraints GFX6-GFX9
LLVM Memory Ordering
Optimization Constraints
unordered
none

monotonic
none

acquire
- If a load atomic/atomicrmw then no following load/load atomic/store/ store atomic/atomicrmw/fence instruction can be moved before the acquire.
- If a fence then same as load atomic, plus no preceding associated fence-paired-atomic can be moved after the fence.
release
- If a store atomic/atomicrmw then no preceding load/load atomic/store/ store atomic/atomicrmw/fence instruction can be moved after the release.
- If a fence then same as store atomic, plus no following associated fence-paired-atomic can be moved before the fence.
acq_rel
Same constraints as both acquire and release.

seq_cst
- If a load atomic then same constraints as acquire, plus no preceding sequentially consistent load atomic/store atomic/atomicrmw/fence instruction can be moved after the seq_cst.
- If a store atomic then the same constraints as release, plus no following sequentially consistent load atomic/store atomic/atomicrmw/fence instruction can be moved before the seq_cst.
- If an atomicrmw/fence then same constraints as acq_rel.
For code objects generated by the AMDGPU backend for HSA [HSA] compatible runtimes (such as ROCm [AMD-ROCm]), the runtime installs a trap handler that supports the s_trap
instruction with the following usage:
AMDGPU Trap Handler for AMDHSA OS
Usage
Code Sequence
Trap Handler Inputs
Description
reserved
s_trap 0x00
Reserved by hardware.
debugtrap(arg)
s_trap 0x01
SGPR0-1: queue_ptr
VGPR0: arg
Reserved for HSA debugtrap intrinsic (not implemented).
llvm.trap
s_trap 0x02
SGPR0-1: queue_ptr
Causes dispatch to be terminated and its associated queue put into the error state.

llvm.debugtrap
s_trap 0x03
- If debugger not installed then behaves as a no-operation. The trap handler is entered and immediately returns to continue execution of the wavefront.
- If the debugger is installed, causes the debug trap to be reported by the debugger and the wavefront is put in the halt state until resumed by the debugger.

reserved
s_trap 0x04
Reserved.

reserved
s_trap 0x05
Reserved.

reserved
s_trap 0x06
Reserved.

debugger breakpoint
s_trap 0x07
Reserved for debugger breakpoints.

reserved
s_trap 0x08
Reserved.

reserved
s_trap 0xfe
Reserved.

reserved
s_trap 0xff
Reserved.
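As a hedged sketch, a debugtrap(arg) site would materialize the trap handler inputs listed above before trapping (the argument value is hypothetical, and SGPR0-1 are assumed to already hold queue_ptr):

v_mov_b32 v0, 42 ; VGPR0 carries the debugtrap argument
s_trap 0x01      ; enter the runtime's trap handler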
This section provides code conventions used when the target triple OS is amdpal
(see amdgpu-target-triples
) for passing runtime parameters from the application/runtime to each invocation of a hardware shader. These parameters include both generic, application-controlled parameters called user data as well as system-generated parameters that are a product of the draw or dispatch execution.
Each hardware stage has a set of 32-bit user data registers which can be written from a command buffer and then loaded into SGPRs when waves are launched via a subsequent dispatch or draw operation. This is the way most arguments are passed from the application/runtime to a hardware shader.
Compute shader user data mappings are simpler than graphics shaders, and have a fixed mapping.
Note that there are always 10 available user data entries in registers; entries beyond that limit must be fetched from memory (via the spill table pointer) by the shader.
PAL Compute Shader User Data Registers
User Register   Description
0               Global Internal Table (32-bit pointer)
1               Per-Shader Internal Table (32-bit pointer)
2 - 11          Application-Controlled User Data (10 32-bit values)
12              Spill Table (32-bit pointer)
13 - 14         Thread Group Count (64-bit pointer)
15              GDS Range
Graphics pipelines support a much more flexible user data mapping:
PAL Graphics Shader User Data Registers
User Register   Description
0               Global Internal Table (32-bit pointer)
+               Per-Shader Internal Table (32-bit pointer)
1-15            Application Controlled User Data (1-15 Contiguous 32-bit Values in Registers)
+               Spill Table (32-bit pointer)
+               Draw Index (First Stage Only)
+               Vertex Offset (First Stage Only)
+               Instance Offset (First Stage Only)

The placement of the global internal table remains fixed in the first user data SGPR register. Otherwise all parameters are optional, and can be mapped to any desired user data SGPR register, with the following restrictions:
- Draw Index, Vertex Offset, and Instance Offset can only be used by the first active hardware stage in a graphics pipeline (i.e. where the API vertex shader runs).
- Application-controlled user data must be mapped into a contiguous range of user data registers.
- The application-controlled user data range supports compaction remapping, so only entries that are actually consumed by the shader must be assigned to corresponding registers. Note that in order to support an efficient runtime implementation, the remapping must pack registers in the same order as entries, with unused entries removed.
The global internal table is a table of shader resource descriptors (SRDs) that define how certain engine-wide, runtime-managed resources should be accessed from a shader. The majority of these resources have HW-defined formats, and it is up to the compiler to write/read data as required by the target hardware.
The following table illustrates the required format:
PAL Global Internal Table
Offset  Description
0-3     Graphics Scratch SRD
4-7     Compute Scratch SRD
8-11    ES/GS Ring Output SRD
12-15   ES/GS Ring Input SRD
16-19   GS/VS Ring Output #0
20-23   GS/VS Ring Output #1
24-27   GS/VS Ring Output #2
28-31   GS/VS Ring Output #3
32-35   GS/VS Ring Input SRD
36-39   Tessellation Factor Buffer SRD
40-43   Off-Chip LDS Buffer SRD
44-47   Off-Chip Param Cache Buffer SRD
48-51   Sample Position Buffer SRD
52      vaRange::ShadowDescriptorTable High Bits

The pointer to the global internal table passed to the shader as user data is a 32-bit pointer. The top 32 bits should be assumed to be the same as the top 32 bits of the pipeline, so the shader may use the program counter's top 32 bits.
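To illustrate the 32-bit pointer convention, here is a sketch of fetching the Compute Scratch SRD (dwords 4-7) from the global internal table, assuming the table pointer arrives in user SGPR s0 and GFX8+ byte-offset scalar loads:

s_getpc_b64 s[2:3]                  ; s3 = the program counter's top 32 bits
s_mov_b32 s2, s0                    ; s[2:3] = full 64-bit table address
s_load_dwordx4 s[4:7], s[2:3], 0x10 ; byte offset 0x10 = dwords 4-7
s_waitcnt lgkmcnt(0)                ; wait for the scalar load to complete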
This section provides code conventions used when the target triple OS is empty (see amdgpu-target-triples).
For code objects generated by the AMDGPU backend for a non-amdhsa OS, the runtime does not install a trap handler. The llvm.trap
and llvm.debugtrap
instructions are handled as follows:
AMDGPU Trap Handler for Non-AMDHSA OS
Usage           Code Sequence  Description
llvm.trap       s_endpgm       Causes wavefront to be terminated.
llvm.debugtrap  none           Compiler warning given that there is no trap handler installed.
When the language is OpenCL the following differences occur:
- The OpenCL memory model is used (see amdgpu-amdhsa-memory-model).
- The AMDGPU backend appends additional arguments to the kernel's explicit arguments for the AMDHSA OS (see opencl-kernel-implicit-arguments-appended-for-amdhsa-os-table).
- Additional metadata is generated (see amdgpu-amdhsa-code-object-metadata).
OpenCL kernel implicit arguments appended for AMDHSA OS
Position  Byte Size  Byte Alignment  Description
1         8          8               OpenCL Global Offset X
2         8          8               OpenCL Global Offset Y
3         8          8               OpenCL Global Offset Z
4         8          8               OpenCL address of printf buffer
5         8          8               OpenCL address of virtual queue used by enqueue_kernel.
6         8          8               OpenCL address of AqlWrap struct used by enqueue_kernel.
When the language is HCC the following differences occur:
- The HSA memory model is used (see amdgpu-amdhsa-memory-model).
The AMDGPU backend has an LLVM-MC-based assembler which is currently in development. It supports AMDGCN GFX6-GFX9.
This section describes general syntax for instructions and operands.
An instruction has the following syntax:
<opcode> <operand0>, <operand1>,... <modifier0> <modifier1>...
Note that operands are normally comma-separated while modifiers are space-separated.
The order of operands and modifiers is fixed. Most modifiers are optional and may be omitted.
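For example (instructions taken from the examples later in this section):

ds_add_u32 v2, v4 offset:16              ; two comma-separated operands, one modifier
v_sin_f32 v0, v0 row_shl:1 bank_mask:0x1 ; two operands, two space-separated modifiers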
See the detailed instruction syntax description for GFX7 <AMDGPUAsmGFX7>, GFX8 <AMDGPUAsmGFX8> and GFX9 <AMDGPUAsmGFX9>.
Note that features under development are not included in this description.
For more information about instructions, their semantics and supported combinations of operands, refer to one of instruction set architecture manuals [AMD-GCN-GFX6], [AMD-GCN-GFX7], [AMD-GCN-GFX8] and [AMD-GCN-GFX9].
The following syntax for register operands is supported:
- SGPR registers: s0, ... or s[0], ...
- VGPR registers: v0, ... or v[0], ...
- TTMP registers: ttmp0, ... or ttmp[0], ...
- Special registers: exec (exec_lo, exec_hi), vcc (vcc_lo, vcc_hi), flat_scratch (flat_scratch_lo, flat_scratch_hi)
- Special trap registers: tba (tba_lo, tba_hi), tma (tma_lo, tma_hi)
- Register pairs, quads, etc: s[2:3], v[10:11], ttmp[5:6], s[4:7], v[12:15], ttmp[4:7], s[8:15], ...
- Register lists: [s0, s1], [ttmp0, ttmp1, ttmp2, ttmp3]
- Register index expressions: v[2*2], s[1-1:2-1]
- 'off' indicates that an operand is not enabled
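A few examples of these operand forms in use (instruction choices are illustrative):

s_mov_b64 s[2:3], exec                ; register pair and a special register
v_mov_b32 v[2*2], v1                  ; register index expression (equivalent to v[4])
buffer_load_dword v1, off, s[4:7], s1 ; 'off' marks a disabled operand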
A detailed description of modifiers may be found here <AMDGPUOperandSyntax>.
ds_add_u32 v2, v4 offset:16
ds_write_src2_b64 v2 offset0:4 offset1:8
ds_cmpst_f32 v2, v4, v6
ds_min_rtn_f64 v[8:9], v2, v[4:5]
For a full list of supported instructions, refer to "LDS/GDS instructions" in the ISA Manual.
flat_load_dword v1, v[3:4]
flat_store_dwordx3 v[3:4], v[5:7]
flat_atomic_swap v1, v[3:4], v5 glc
flat_atomic_cmpswap v1, v[3:4], v[5:6] glc slc
flat_atomic_fmax_x2 v[1:2], v[3:4], v[5:6] glc
For a full list of supported instructions, refer to "FLAT instructions" in the ISA Manual.
buffer_load_dword v1, off, s[4:7], s1
buffer_store_dwordx4 v[1:4], v2, ttmp[4:7], s1 offen offset:4 glc tfe
buffer_store_format_xy v[1:2], off, s[4:7], s1
buffer_wbinvl1
buffer_atomic_inc v1, v2, s[8:11], s4 idxen offset:4 slc
For a full list of supported instructions, refer to "MUBUF Instructions" in the ISA Manual.
s_load_dword s1, s[2:3], 0xfc
s_load_dwordx8 s[8:15], s[2:3], s4
s_load_dwordx16 s[88:103], s[2:3], s4
s_dcache_inv_vol
s_memtime s[4:5]
For a full list of supported instructions, refer to "Scalar Memory Operations" in the ISA Manual.
s_mov_b32 s1, s2
s_mov_b64 s[0:1], 0x80000000
s_cmov_b32 s1, 200
s_wqm_b64 s[2:3], s[4:5]
s_bcnt0_i32_b64 s1, s[2:3]
s_swappc_b64 s[2:3], s[4:5]
s_cbranch_join s[4:5]
For a full list of supported instructions, refer to "SOP1 Instructions" in the ISA Manual.
s_add_u32 s1, s2, s3
s_and_b64 s[2:3], s[4:5], s[6:7]
s_cselect_b32 s1, s2, s3
s_andn2_b32 s2, s4, s6
s_lshr_b64 s[2:3], s[4:5], s6
s_ashr_i32 s2, s4, s6
s_bfm_b64 s[2:3], s4, s6
s_bfe_i64 s[2:3], s[4:5], s6
s_cbranch_g_fork s[4:5], s[6:7]
For a full list of supported instructions, refer to "SOP2 Instructions" in the ISA Manual.
s_cmp_eq_i32 s1, s2
s_bitcmp1_b32 s1, s2
s_bitcmp0_b64 s[2:3], s4
s_setvskip s3, s5
For a full list of supported instructions, refer to "SOPC Instructions" in the ISA Manual.
s_barrier
s_nop 2
s_endpgm
s_waitcnt 0 ; Wait for all counters to be 0
s_waitcnt vmcnt(0) & expcnt(0) & lgkmcnt(0) ; Equivalent to above
s_waitcnt vmcnt(1) ; Wait for vmcnt counter to be 1.
s_sethalt 9
s_sleep 10
s_sendmsg 0x1
s_sendmsg sendmsg(MSG_INTERRUPT)
s_trap 1
For a full list of supported instructions, refer to "SOPP Instructions" in the ISA Manual.
Unless otherwise mentioned, little verification is performed on the operands of SOPP instructions, so it is up to the programmer to be familiar with the range of acceptable values.
For vector ALU instruction opcodes (VOP1, VOP2, VOP3, VOPC, VOP_DPP, VOP_SDWA), the assembler will automatically use optimal encoding based on its operands. To force specific encoding, one can add a suffix to the opcode of the instruction:
- _e32 for 32-bit VOP1/VOP2/VOPC
- _e64 for 64-bit VOP3
- _dpp for VOP_DPP
- _sdwa for VOP_SDWA
VOP1/VOP2/VOP3/VOPC examples:
v_mov_b32 v1, v2
v_mov_b32_e32 v1, v2
v_nop
v_cvt_f64_i32_e32 v[1:2], v2
v_floor_f32_e32 v1, v2
v_bfrev_b32_e32 v1, v2
v_add_f32_e32 v1, v2, v3
v_mul_i32_i24_e64 v1, v2, 3
v_mul_i32_i24_e32 v1, -3, v3
v_mul_i32_i24_e32 v1, -100, v3
v_addc_u32 v1, s[0:1], v2, v3, s[2:3]
v_max_f16_e32 v1, v2, v3
VOP_DPP examples:
v_mov_b32 v0, v0 quad_perm:[0,2,1,1]
v_sin_f32 v0, v0 row_shl:1 row_mask:0xa bank_mask:0x1 bound_ctrl:0
v_mov_b32 v0, v0 wave_shl:1
v_mov_b32 v0, v0 row_mirror
v_mov_b32 v0, v0 row_bcast:31
v_mov_b32 v0, v0 quad_perm:[1,3,0,1] row_mask:0xa bank_mask:0x1 bound_ctrl:0
v_add_f32 v0, v0, |v0| row_shl:1 row_mask:0xa bank_mask:0x1 bound_ctrl:0
v_max_f16 v1, v2, v3 row_shl:1 row_mask:0xa bank_mask:0x1 bound_ctrl:0
VOP_SDWA examples:
v_mov_b32 v1, v2 dst_sel:BYTE_0 dst_unused:UNUSED_PRESERVE src0_sel:DWORD
v_min_u32 v200, v200, v1 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_1 src1_sel:DWORD
v_sin_f32 v0, v0 dst_unused:UNUSED_PAD src0_sel:WORD_1
v_fract_f32 v0, |v0| dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
v_cmpx_le_u32 vcc, v1, v2 src0_sel:BYTE_2 src1_sel:WORD_0
For a full list of supported instructions, refer to "Vector ALU instructions" in the ISA Manual.
The AMDGPU assembler defines and updates some symbols automatically. These symbols do not affect code generation.
.amdgcn.gfx_generation_number
Set to the GFX generation number of the target being assembled for. For example, when assembling for a "GFX9" target this will be set to the integer value "9". The possible GFX generation numbers are presented in amdgpu-processors.
.amdgcn.next_free_vgpr
Set to zero before assembly begins. At each instruction, if the current value of this symbol is less than or equal to the maximum VGPR number explicitly referenced within that instruction, then the symbol value is updated to equal that VGPR number plus one.
May be used to set the .amdhsa_next_free_vgpr directive in amdhsa-kernel-directives-table.
May be set at any time, e.g. manually set to zero at the start of each kernel.
.amdgcn.next_free_sgpr
Set to zero before assembly begins. At each instruction, if the current value of this symbol is less than or equal to the maximum SGPR number explicitly referenced within that instruction, then the symbol value is updated to equal that SGPR number plus one.
May be used to set the .amdhsa_next_free_sgpr directive in amdhsa-kernel-directives-table.
May be set at any time, e.g. manually set to zero at the start of each kernel.
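A sketch of resetting and later consuming these symbols around a kernel, assuming the standard .set directive:

.set .amdgcn.next_free_vgpr, 0 // reset before assembling the next kernel
.set .amdgcn.next_free_sgpr, 0
// ... kernel instructions update the symbols automatically ...
// then, inside the kernel descriptor:
//   .amdhsa_next_free_vgpr .amdgcn.next_free_vgpr
//   .amdhsa_next_free_sgpr .amdgcn.next_free_sgpr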
Directives which begin with .amdgcn
are valid for all amdgcn
architecture processors, and are not OS-specific. Directives which begin with .amdhsa
are specific to amdgcn
architecture processors when the amdhsa
OS is specified. See amdgpu-target-triples
and amdgpu-processors
.
Optional directive which declares the target supported by the containing assembler source file. Valid values are described in amdgpu-amdhsa-code-object-target-identification
. Used by the assembler to validate command-line options such as -triple
, -mcpu
, and those which specify target features.
Creates a correctly aligned AMDHSA kernel descriptor and a symbol, <name>.kd
, in the current location of the current section. Only valid when the OS is amdhsa
. <name>
must be a symbol that labels the first instruction to execute, and does not need to be previously defined.
Marks the beginning of a list of directives used to generate the bytes of a kernel descriptor, as described in amdgpu-amdhsa-kernel-descriptor
. Directives which may appear in this list are described in amdhsa-kernel-directives-table
. Directives may appear in any order, must be valid for the target being assembled for, and cannot be repeated. Directives support the range of values specified by the field they reference in amdgpu-amdhsa-kernel-descriptor
. If a directive is not specified, it is assumed to have its default value, unless it is marked as "Required", in which case it is an error to omit the directive. This list of directives is terminated by an .end_amdhsa_kernel
directive.
AMDHSA Kernel Assembler Directives
Directive Default Supported On Description
.amdhsa_group_segment_fixed_size
0
GFX6-GFX9
Controls GROUP_SEGMENT_FIXED_SIZE in
amdgpu-amdhsa-kernel-descriptor-gfx6-gfx9-table
.
.amdhsa_private_segment_fixed_size
0
GFX6-GFX9
Controls PRIVATE_SEGMENT_FIXED_SIZE in
amdgpu-amdhsa-kernel-descriptor-gfx6-gfx9-table
.
.amdhsa_user_sgpr_private_segment_buffer
0
GFX6-GFX9
Controls ENABLE_SGPR_PRIVATE_SEGMENT_BUFFER in
amdgpu-amdhsa-kernel-descriptor-gfx6-gfx9-table
.
.amdhsa_user_sgpr_dispatch_ptr
0
GFX6-GFX9
Controls ENABLE_SGPR_DISPATCH_PTR in
amdgpu-amdhsa-kernel-descriptor-gfx6-gfx9-table
.
.amdhsa_user_sgpr_queue_ptr
0
GFX6-GFX9
Controls ENABLE_SGPR_QUEUE_PTR in
amdgpu-amdhsa-kernel-descriptor-gfx6-gfx9-table
.
.amdhsa_user_sgpr_kernarg_segment_ptr
0
GFX6-GFX9
Controls ENABLE_SGPR_KERNARG_SEGMENT_PTR in
amdgpu-amdhsa-kernel-descriptor-gfx6-gfx9-table
.
.amdhsa_user_sgpr_dispatch_id
0
GFX6-GFX9
Controls ENABLE_SGPR_DISPATCH_ID in
amdgpu-amdhsa-kernel-descriptor-gfx6-gfx9-table
.
.amdhsa_user_sgpr_flat_scratch_init
0
GFX6-GFX9
Controls ENABLE_SGPR_FLAT_SCRATCH_INIT in
amdgpu-amdhsa-kernel-descriptor-gfx6-gfx9-table
.
.amdhsa_user_sgpr_private_segment_size
0
GFX6-GFX9
Controls ENABLE_SGPR_PRIVATE_SEGMENT_SIZE in
amdgpu-amdhsa-kernel-descriptor-gfx6-gfx9-table
.
.amdhsa_system_sgpr_private_segment_wavefront_offset
0
GFX6-GFX9
Controls ENABLE_SGPR_PRIVATE_SEGMENT_WAVEFRONT_OFFSET in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_system_sgpr_workgroup_id_x
1
GFX6-GFX9
Controls ENABLE_SGPR_WORKGROUP_ID_X in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_system_sgpr_workgroup_id_y
0
GFX6-GFX9
Controls ENABLE_SGPR_WORKGROUP_ID_Y in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_system_sgpr_workgroup_id_z
0
GFX6-GFX9
Controls ENABLE_SGPR_WORKGROUP_ID_Z in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_system_sgpr_workgroup_info
0
GFX6-GFX9
Controls ENABLE_SGPR_WORKGROUP_INFO in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_system_vgpr_workitem_id
0
GFX6-GFX9
Controls ENABLE_VGPR_WORKITEM_ID in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
. Possible values are defined in amdgpu-amdhsa-system-vgpr-work-item-id-enumeration-values-table.
.amdhsa_next_free_vgpr
Required
GFX6-GFX9
Maximum VGPR number explicitly referenced, plus one. Used to calculate GRANULATED_WORKITEM_VGPR_COUNT in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
.
.amdhsa_next_free_sgpr
Required
GFX6-GFX9
Maximum SGPR number explicitly referenced, plus one. Used to calculate GRANULATED_WAVEFRONT_SGPR_COUNT in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
.
.amdhsa_reserve_vcc
1
GFX6-GFX9
Whether the kernel may use the special VCC SGPR. Used to calculate GRANULATED_WAVEFRONT_SGPR_COUNT in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
.
.amdhsa_reserve_flat_scratch
1
GFX7-GFX9
Whether the kernel may use flat instructions to access scratch memory. Used to calculate GRANULATED_WAVEFRONT_SGPR_COUNT in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
.
.amdhsa_reserve_xnack_mask
Target Feature Specific (+xnack)
GFX8-GFX9
Whether the kernel may trigger XNACK replay. Used to calculate GRANULATED_WAVEFRONT_SGPR_COUNT in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
.
.amdhsa_float_round_mode_32
0
GFX6-GFX9
Controls FLOAT_ROUND_MODE_32 in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
. Possible values are defined in amdgpu-amdhsa-floating-point-rounding-mode-enumeration-values-table.
.amdhsa_float_round_mode_16_64
0
GFX6-GFX9
Controls FLOAT_ROUND_MODE_16_64 in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
. Possible values are defined in amdgpu-amdhsa-floating-point-rounding-mode-enumeration-values-table.
.amdhsa_float_denorm_mode_32
0
GFX6-GFX9
Controls FLOAT_DENORM_MODE_32 in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
. Possible values are defined in amdgpu-amdhsa-floating-point-denorm-mode-enumeration-values-table.
.amdhsa_float_denorm_mode_16_64
3
GFX6-GFX9
Controls FLOAT_DENORM_MODE_16_64 in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
. Possible values are defined in amdgpu-amdhsa-floating-point-denorm-mode-enumeration-values-table.
.amdhsa_dx10_clamp
1
GFX6-GFX9
Controls ENABLE_DX10_CLAMP in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
.
.amdhsa_ieee_mode
1
GFX6-GFX9
Controls ENABLE_IEEE_MODE in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
.
.amdhsa_fp16_overflow
0
GFX9
Controls FP16_OVFL in
amdgpu-amdhsa-compute_pgm_rsrc1-gfx6-gfx9-table
.
.amdhsa_exception_fp_ieee_invalid_op
0
GFX6-GFX9
Controls ENABLE_EXCEPTION_IEEE_754_FP_INVALID_OPERATION in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_exception_fp_denorm_src
0
GFX6-GFX9
Controls ENABLE_EXCEPTION_FP_DENORMAL_SOURCE in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_exception_fp_ieee_div_zero
0
GFX6-GFX9
Controls ENABLE_EXCEPTION_IEEE_754_FP_DIVISION_BY_ZERO in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_exception_fp_ieee_overflow
0
GFX6-GFX9
Controls ENABLE_EXCEPTION_IEEE_754_FP_OVERFLOW in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_exception_fp_ieee_underflow
0
GFX6-GFX9
Controls ENABLE_EXCEPTION_IEEE_754_FP_UNDERFLOW in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_exception_fp_ieee_inexact
0
GFX6-GFX9
Controls ENABLE_EXCEPTION_IEEE_754_FP_INEXACT in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
.amdhsa_exception_int_div_zero
0
GFX6-GFX9
Controls ENABLE_EXCEPTION_INT_DIVIDE_BY_ZERO in
amdgpu-amdhsa-compute_pgm_rsrc2-gfx6-gfx9-table
.
Here is an example of a minimal assembly source file, defining one HSA kernel:
.amdgcn_target "amdgcn-amd-amdhsa--gfx900+xnack" // optional
.text
.globl hello_world
.p2align 8
.type hello_world,@function
hello_world:
s_load_dwordx2 s[0:1], s[0:1] 0x0
v_mov_b32 v0, 3.14159
s_waitcnt lgkmcnt(0)
v_mov_b32 v1, s0
v_mov_b32 v2, s1
flat_store_dword v[1:2], v0
s_endpgm
.Lfunc_end0:
.size hello_world, .Lfunc_end0-hello_world
.rodata
.p2align 6
.amdhsa_kernel hello_world
.amdhsa_user_sgpr_kernarg_segment_ptr 1
.amdhsa_next_free_vgpr .amdgcn.next_free_vgpr
.amdhsa_next_free_sgpr .amdgcn.next_free_sgpr
.end_amdhsa_kernel
- AMD-GCN-GFX6
- AMD-GCN-GFX7
- AMD-GCN-GFX8
- AMD-GCN-GFX9
- AMD-RADEON-HD-2000-3000
- AMD-RADEON-HD-4000
- AMD-RADEON-HD-5000
- AMD-RADEON-HD-6000
- AMD-ROCm: ROCm: Open Platform for Development, Discovery and Education Around GPU Computing
- AMD-ROCm-github
- CLANG-ATTR
- DWARF
- ELF
- HRF
- HSA
- OpenCL
- YAML