Lc0 is a UCI-compliant chess engine designed to play chess using neural networks, specifically those of the LeelaChessZero project.
Lc0 can be acquired either via a git clone or an archive download from GitHub. Be aware that there is a required submodule which isn't included in source archives.
For essentially all purposes, including selfplay game generation and match play, we highly recommend using the latest release/version branch (for example `release/0.19`), which is equivalent to using the latest version tag.
Versioning follows the Semantic Versioning guidelines, with major, minor and patch sections. The training server enforces game quality using the versions output by the client and engine.
Download using git:
git clone -b release/0.19 --recurse-submodules https://github.com/LeelaChessZero/lc0.git
If downloading an archive, you need to also download and place the submodule:
- Download https://github.com/LeelaChessZero/lc0/archive/release/0.19.zip (.tar.gz archive is also available)
- Extract
- Download https://github.com/LeelaChessZero/lczero-common/archive/master.zip (also available as .tar.gz)
- Move the second archive into the first archive's `libs/lczero-common/` folder and extract it
- The final form should look like `<TOP>/libs/lczero-common/proto/` (a scripted sketch of these steps follows this list)
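As a rough scripted sketch of the archive route (the extracted directory names, such as `lc0-release-0.19`, are assumptions based on how GitHub names branch archives; adjust them to whatever your archives actually unpack to):

```
# Download and extract the lc0 source archive for release/0.19
wget https://github.com/LeelaChessZero/lc0/archive/release/0.19.zip -O lc0.zip
unzip lc0.zip && mv lc0-release-0.19 lc0

# Download the lczero-common archive and place it under libs/lczero-common/
wget https://github.com/LeelaChessZero/lczero-common/archive/master.zip -O lczero-common.zip
unzip lczero-common.zip
mkdir -p lc0/libs/lczero-common
cp -r lczero-common-master/. lc0/libs/lczero-common/

# The protobuf definitions should now live in <TOP>/libs/lczero-common/proto/
ls lc0/libs/lczero-common/proto/
```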
Having successfully acquired Lc0 via either of these methods, proceed to the build section below and follow the instructions for your OS.
Building should be easier now than it was in the past. Please report any problems you have.
Aside from the git submodule, lc0 requires the Meson build system and at least one backend library for evaluating the neural network, as well as the required libraries `protobuf` and `zlib`. (`gtest` is optionally used for the test suite.) If your system already has those two libraries installed, they will be used; otherwise Meson will generate its own copy of the two (a "subproject"), which in turn requires that git is installed (yes, separately from cloning the actual lc0 repository). Meson also requires python and Ninja.
Backend support includes (in theory) any CBLAS-compatible library for CPU usage, such as OpenBLAS or Intel's MKL. For GPUs, OpenCL and CUDA+cudnn are supported.
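Which backends get compiled is controlled through Meson options, and `./build.sh` passes `-D` options through to Meson (the `-Ddefault_library=static` example further below uses the same mechanism). A hedged sketch, assuming option names like `blas`, `opencl` and `cudnn`; check `meson_options.txt` in your checkout for the exact names and defaults:

```
# Configure a CPU-only build: enable the BLAS backend, disable GPU backends.
# The option names below are assumptions; verify them in meson_options.txt.
./build.sh -Dblas=true -Dopencl=false -Dcudnn=false
```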
Given those basics, the OS and backend specific instructions are below.
For Linux:
- Install a backend (see the backend options above).
- Install ninja build (`ninja-build`), meson, and (optionally) gtest (`libgtest-dev`); an example apt command is sketched below this list.
- Go to `lc0/`
- Run `./build.sh`
- `lc0` will be in the `lc0/build/release/` directory
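On Debian or Ubuntu, the tools from the list above can typically be installed in one step (package names are the ones used elsewhere in this guide; use pip for meson if your distribution's version is too old, as in the Ubuntu notes below):

```
# Install the build tools referenced in the list above (Debian/Ubuntu names)
sudo apt-get install ninja-build meson libgtest-dev pkg-config \
    protobuf-compiler libprotobuf-dev
```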
If you want to build with a different compiler, pass the `CC` and `CXX` environment variables:
CC=clang-6.0 CXX=clang++-6.0 ./build.sh
For the CUDA backend on Ubuntu, Nvidia provides .deb packages. CUDA will be installed in `/usr/local/cuda-10.0` and requires 3 GB of disk space. If your `/usr/local` partition doesn't have that much space left, you can create a symbolic link before doing the install; for example: `sudo ln -s /opt/cuda-10.0 /usr/local/cuda-10.0`
The instructions on the Nvidia website tell you to finish with `apt install cuda`. However, this might not work (missing dependencies). In that case use `apt install cuda-10-0`. Afterwards you can install the meta package `cuda`, which will cause an automatic upgrade to a newer version when one becomes available (assuming you use Installer Type deb (network)), if you want that; just `cuda-10-0` on its own will stay at version 10. If you don't know what to do, only install `cuda-10-0`.
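After the CUDA install you can sanity-check the setup with the standard Nvidia tools; the path below assumes the `/usr/local/cuda-10.0` install location mentioned above:

```
# Check that the driver sees the GPU and that the toolkit is installed
nvidia-smi
/usr/local/cuda-10.0/bin/nvcc --version
```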
cuDNN consists of two packages, the Runtime Library and the Developer Library (both .deb packages).
Before you can download the latter you need to create a (free) "developer" account with nvidia for which at least a legit email address is required (their website says: The e-mail address is not made public and will only be used if you wish to receive a new password or wish to receive certain news or notifications by e-mail.). Further they ask for a name, date of birth (not visible later on), country, organisation ("LeelaZero" if you have none), primary industry segment ("Other"/none) and which development areas you are interested in ("Deep Learning").
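Once both .deb files are downloaded they can be installed with dpkg. The package and file names below are only illustrative placeholders; substitute the exact names of the files you downloaded for your CUDA version:

```
# Install the cuDNN runtime and developer packages
# (placeholder file names; substitute the ones you actually downloaded)
sudo dpkg -i libcudnn7_<version>+cuda10.0_amd64.deb
sudo dpkg -i libcudnn7-dev_<version>+cuda10.0_amd64.deb
```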
For Ubuntu 18.04 you need the latest version of meson and clang-6.0 before performing the steps above:
sudo apt-get install clang-6.0 ninja-build pkg-config protobuf-compiler libprotobuf-dev meson
CC=clang-6.0 CXX=clang++-6.0 INSTALL_PREFIX=~/.local ./build.sh
Make sure that `~/.local/bin` is in your `PATH` environment variable. You can now type `lc0 --help` and start.
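If `~/.local/bin` is not yet in your `PATH`, one way to add it (assuming a bash shell) is:

```
# Add ~/.local/bin to PATH for future shells, then reload and test
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
lc0 --help
```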
For Ubuntu 16.04 you need the latest version of meson and clang-6.0 before performing the steps above:
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
sudo apt-add-repository 'deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-6.0 main'
sudo apt-get update
sudo apt-get install clang-6.0 ninja-build protobuf-compiler libprotobuf-dev
pip3 install meson --user
CC=clang-6.0 CXX=clang++-6.0 INSTALL_PREFIX=~/.local ./build.sh
Make sure that `~/.local/bin` is in your `PATH` environment variable. You can now type `lc0 --help` and start.
Use https://github.com/vochicong/lc0-docker to run the latest releases of lc0 and the client inside a Docker container.
On Windows:
- Install Microsoft Visual Studio
- Install CUDA (v9.2 is fine)
- Install cuDNN.
- Install Python3
- Install Meson:
pip3 install --upgrade meson
- Edit `build-cuda.cmd`:
  - If you use a version of MSVS other than 2015 (or if it's installed in a non-standard location):
    - `C:\Program Files (x86)\Microsoft Visual Studio 14.0\`: replace 14.0 with your version
    - `--backend 2015`: replace 2015 with your version
  - `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\lib\x64`: replace with your CUDA path
  - `C:\dev\cuDNN\`: replace with your cuDNN directory
- Run `build-cuda.cmd`. It will generate the MSVS project and pause.

Then either:

- Hit `<Enter>` to build it.
- The resulting binary will be `build/lc0.exe`

Or:

- Open the generated solution `build/lc0.sln` in Visual Studio and build it yourself (a command-line `msbuild` sketch follows below).
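If you prefer the command line to the Visual Studio GUI, the generated solution can also be built with `msbuild` from a Developer Command Prompt; the configuration name below is an assumption, so pick one the solution actually defines:

```
:: Build the generated solution without opening the Visual Studio GUI
:: ("Release" is an assumed configuration name)
msbuild build\lc0.sln /p:Configuration=Release /m
```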
- Install brew as per the instructions at https://brew.sh/
- Install python3:
brew install python3
- Install meson:
brew install meson
- Install ninja:
brew install ninja
- When using Mojave, install the SDK headers: `installer -pkg /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg -target /`
- Run `./build.sh`
- The resulting binary will be in build/release
On ARM Linux (for example a Raspberry Pi):
- Install OpenBLAS
git clone https://github.com/xianyi/OpenBLAS.git
cd OpenBLAS/
make
sudo make PREFIX=/usr install
cd ..
- Install Meson
pip3 install meson
- Install clang
wget http://releases.llvm.org/6.0.0/clang+llvm-6.0.0-armv7a-linux-gnueabihf.tar.xz
tar -xf clang+llvm-6.0.0-armv7a-linux-gnueabihf.tar.xz
rm clang+llvm-6.0.0-armv7a-linux-gnueabihf.tar.xz
mv clang+llvm-6.0.0-armv7a-linux-gnueabihf clang_6.0.0
sudo mv clang_6.0.0 /usr/local
echo 'export PATH=/usr/local/clang_6.0.0/bin:~/.local/bin:$PATH' >> .bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/clang_6.0.0/lib:$LD_LIBRARY_PATH' >> .bashrc
source .bashrc
- Clone lc0 and compile
git clone https://github.com/LeelaChessZero/lc0.git
cd lc0
git submodule update --init --recursive
CC=clang CXX=clang++ ./build.sh -Ddefault_library=static
- The resulting binary will be in build/release
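Whichever platform you built on, a quick way to check the resulting binary is to ask for help and then point it at a network file. The weights path below is just a placeholder; download a network from the project and substitute its real location:

```
# Confirm the engine starts
./build/release/lc0 --help

# Start the engine with a network (placeholder path; use a real weights file)
./build/release/lc0 --weights=/path/to/weights.pb.gz
```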
Leela Chess is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
Leela Chess is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with Leela Chess. If not, see http://www.gnu.org/licenses/.
The source files of Lc0, with the exception of the BLAS and OpenCL backends (all files in the `blas` and `opencl` sub-directories), have the following additional permission, as allowed under GNU GPL version 3 section 7:
If you modify this Program, or any covered work, by linking or combining it with NVIDIA Corporation's libraries from the NVIDIA CUDA Toolkit and the NVIDIA CUDA Deep Neural Network library (or a modified version of those libraries), containing parts covered by the terms of the respective license agreement, the licensors of this Program grant you additional permission to convey the resulting work.