|
164 | 164 | <td class="status"> |
165 | 165 | <svg height="11.92" overflow="visible" version="1.1" width="11.92"><g transform="translate(0,11.92) matrix(1 0 0 -1 0 0) translate(5.96,0) translate(0,5.96)" fill="#85924E" stroke="#000000" stroke-width="0.4pt" color="#000000"><path d="M 0 0 M 5.96 0 C 5.96 3.29 3.29 5.96 0 5.96 C -3.29 5.96 -5.96 3.29 -5.96 0 C -5.96 -3.29 -3.29 -5.96 0 -5.96 C 3.29 -5.96 5.96 -3.29 5.96 0 Z M 0 0" style="stroke:none"></path></g></svg><sup class="footnote" title="Intel has extensive support for OpenMP through their latest compilers"><a href="#desc-intelopenmp">35</a></sup></td> |
166 | 166 | <td class="status"> |
167 | | - <svg height="9.45" overflow="visible" version="1.1" width="9.45"><g transform="translate(0,9.45) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)" fill="#000000" stroke="#EB5F73" stroke-width="0.8pt" color="#000000"><path d="M 0 0 L 8.34 8.34" style="fill:none"></path></g></svg><sup class="footnote" title="Intel supports pSTL algorithms through their DPC++ Library (oneDPL; GitHub). It's heavily namespaced and not yet on the same level as NVIDIA"><a href="#desc-prettyok">36</a></sup></td> |
| 167 | + <svg height="12.64" overflow="visible" version="1.1" width="12.64"><g transform="translate(0,12.64) matrix(1 0 0 -1 0 0) translate(6.32,0) translate(0,6.32)" fill="#FBBC6A" stroke="#000000" stroke-width="0.4pt" color="#000000"><path d="M 6.32 6.32 L -6.32 6.32 L -6.32 -6.32 L 6.32 -6.32 Z" style="stroke:none"></path></g></svg><sup class="footnote" title="Intel supports pSTL algorithms through their DPC++ Library (oneDPL; GitHub). It's heavily namespaced and not yet on the same level as NVIDIA"><a href="#desc-intelstandardc">36</a></sup></td> |
168 | 168 | <td class="status"> |
169 | 169 | <svg height="12.64" overflow="visible" version="1.1" width="12.64"><g transform="translate(0,12.64) matrix(1 0 0 -1 0 0) translate(6.32,0) translate(0,6.32)" fill="#FBBC6A" stroke="#000000" stroke-width="0.4pt" color="#000000"><path d="M 6.32 6.32 L -6.32 6.32 L -6.32 -6.32 L 6.32 -6.32 Z" style="stroke:none"></path></g></svg><sup class="footnote" title="With Intel oneAPI 2022.3, Intel supports DO CONCURRENT with GPU offloading"><a href="#desc-intelstandardfortran">37</a></sup></td> |
170 | 170 | <td class="status"> |
|
216 | 216 | <li id="desc-intelsyclc"><span class="number">33:</span> <span class="description"><a href='https://www.khronos.org/sycl/'>SYCL</a> is the prime programming model for Intel GPUs; actually, SYCL is only a standard, while Intel's implementation of it is called <a href='https://www.intel.com/content/www/us/en/developer/tools/oneapi/data-parallel-c-plus-plus.html'>DPC++</a> (<em>Data Parallel C++</em>), which extends the SYCL standard in various places; actually actually, Intel namespaces everything <em>oneAPI</em> these days, so the <em>full</em> proper name is Intel oneAPI DPC++ (which incorporates a C++ compiler and also a library)</span><a href="#compat-table" class="back" title="Back to table">↺</a></li> |
217 | 217 | <li id="desc-intelopenacc"><span class="number">34:</span> <span class="description">OpenACC can be used on Intel GPUs by translating the code to OpenMP with <a href='https://github.com/intel/intel-application-migration-tool-for-openacc-to-openmp'>Intel's Source-to-Source translator</a></span><a href="#compat-table" class="back" title="Back to table">↺</a></li> |
218 | 218 | <li id="desc-intelopenmp"><span class="number">35:</span> <span class="description">Intel has <a href='https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-cpp-fortran-compiler-openmp/top.html'>extensive support for OpenMP</a> through their latest compilers</span><a href="#compat-table" class="back" title="Back to table">↺</a></li> |
219 | | - <li id="desc-prettyok"><span class="number">36:</span> <span class="description">Intel supports pSTL algorithms through their <a href='https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-library.html#gs.fifrh5'>DPC++ Library</a> (oneDPL; <a href='https://github.com/oneapi-src/oneDPL'>GitHub</a>). It's heavily namespaced and not yet on the same level as NVIDIA</span><a href="#compat-table" class="back" title="Back to table">↺</a></li> |
| 219 | + <li id="desc-intelstandardc"><span class="number">36:</span> <span class="description">Intel supports pSTL algorithms through their <a href='https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-library.html#gs.fifrh5'>DPC++ Library</a> (oneDPL; <a href='https://github.com/oneapi-src/oneDPL'>GitHub</a>). It's heavily namespaced and not yet on the same level as NVIDIA</span><a href="#compat-table" class="back" title="Back to table">↺</a></li> |
220 | 220 | <li id="desc-intelstandardfortran"><span class="number">37:</span> <span class="description">With <a href='https://www.intel.com/content/www/us/en/developer/articles/release-notes/fortran-compiler-release-notes.html'>Intel oneAPI 2022.3</a>, Intel supports DO CONCURRENT with GPU offloading</span><a href="#compat-table" class="back" title="Back to table">↺</a></li> |
221 | 221 | <li id="desc-intelkokkosc"><span class="number">38:</span> <span class="description">Kokkos supports Intel GPUs through SYCL</span><a href="#compat-table" class="back" title="Back to table">↺</a></li> |
222 | 222 | <li id="desc-intelalpakac"><span class="number">39:</span> <span class="description"><a href='https://github.com/alpaka-group/alpaka/releases/tag/0.9.0'>Alpaka v0.9.0</a> introduces experimental SYCL support; also, Alpaka can use OpenMP backends</span><a href="#compat-table" class="back" title="Back to table">↺</a></li> |
|