TinyGPT

A tiny C++11 GPT-2 inference implementation written from scratch, based mainly on the project picoGPT.

Accompanying blog post: Write a GPT from scratch (TinyGPT)

Core class

Build and Run

1. Get the code

git clone --recurse-submodules https://github.com/keith2018/TinyGPT.git

2. Install Intel MKL (Math Kernel Library)

Official website: Intel®-Optimized Math Library for Numerical Computing on CPUs & GPUs

3. Download GPT-2 model file

python3 tools/download_gpt2_model.py

If successful, you'll see the file model_file.data in the directory assets/gpt2

4. Build and Run

mkdir build
cmake -B ./build -DCMAKE_BUILD_TYPE=Release
cmake --build ./build --config Release

This will generate the executable and copy the assets to the directory app/bin. You can then run the demo:

cd app/bin
./TinyGPT_demo
[DEBUG] TIMER TinyGPT::Model::loadModelGPT2: cost: 800 ms
[DEBUG] TIMER TinyGPT::Encoder::getEncoder: cost: 191 ms
INPUT:Alan Turing theorized that computers would one day become
GPT:the most powerful machines on the planet.
INPUT:exit

Dependencies

License

This code is licensed under the MIT License (see LICENSE).