LLM.SYCL

This project is a partial translation of the llm.c repository from C/CUDA to C++ SYCL.
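As a rough illustration of what such a translation involves (a generic sketch, not code taken from this repository; the kernel name and data layout are hypothetical), a CUDA kernel launch typically becomes a SYCL parallel_for submitted to a queue, with the device pointers assumed to be USM allocations:

#include <sycl/sycl.hpp>

// CUDA style: add_bias<<<grid, block>>>(out, bias, N, C);
// with each thread computing i = blockIdx.x * blockDim.x + threadIdx.x.
// SYCL style: one work-item per output element, submitted to a queue.
void add_bias(sycl::queue &q, float *out, const float *bias, int N, int C) {
    q.parallel_for(sycl::range<1>(static_cast<size_t>(N) * C), [=](sycl::id<1> idx) {
        size_t i = idx[0];
        out[i] += bias[i % C];   // out and bias are device (USM) pointers
    }).wait();
}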

How to

Prepare

You need to have the oneAPI and CUDA SDKs installed. The code has been tested with the following versions:

  • oneAPI: 2021.4
  • CUDA: 12.2

Furthermore, you need python3 with numpy and torch installed to run the training scripts. The dataset is fetched automatically.
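A typical way to install the Python dependencies (assuming a pip-based environment; adjust to your setup):

pip install numpy torch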

Train

Refer to the README file in data/ for training the model. This step is required before running either the CUDA or the SYCL implementation.

Build

Source the oneAPI and CUDA environments, then:

mkdir build && cd build
CC=icx CXX=icpx cmake ..
# optional: run ccmake .. to adjust build options interactively
make -j

This will give you the LLM_SYCL, OrigTrain, and TestAll executables.

Run

To run the original CUDA code (with minor modifications that disable training and dump intermediate tensors as gold values):

./OrigTrain -b 1

To run the SYCL code:

./LLM_SYCL -s --batch 1 -x -g 10 -y

Set -g to a larger value to generate more text; see -h for more details.
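For example, a longer generation run (same flags as above, with an arbitrarily larger -g value):

./LLM_SYCL -s --batch 1 -x -g 100 -y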

To run the test suite:

./TestAll

Verify

The output of the SYCL code should be similar to the output of the CUDA code. For a more detailed comparison against the gold (CUDA) implementation, use the data/compare.py script:

./build/OrigTrain -b 1
./build/LLM_SYCL -s --batch 1 -g 10
python data/compare.py

Note that the SYCL implementation is run here with profiling and intermediate tensor dumping enabled; this matches the default configuration of the modified CUDA implementation.

Notes

List the available VTune collection types:

vtune -help collect
vtune -help collect gpu-hotspots

Profile the compute tasks (GPU kernels):

vtune -collect gpu-hotspots -- ./LLM_SYCL <OPTIONS>
vtune -report hotspots -group-by=computing-instance -format=csv > out.csv

Analyze the roofline with Intel Advisor:

advisor -collect=roofline --profile-gpu --project-dir=./dir -- ./LLM_SYCL <OPTIONS>
advisor -report=roofline --gpu --project-dir=./dir --report-output=./roofline.html

Credits

This repository was developed as the final project for the 2024 HPC course of Prof. B. Cosenza at the University of Salerno. It builds on several open-source projects, most notably llm.c.
