
This archive contains 12 of the 14 games played against four KBA-associated professionals in April 2018 (see this post). ELF OpenGo won all games by resignation.

At the players' request, two of the games are not published, and all games have been anonymized. Each player is represented in at least two games in this archive.

The first two games were played using a weaker prerelease version of the ELF OpenGo model weights. For each move, ELF OpenGo used 2 threads with 10000 rollouts per thread (grouped into batches of 4).

All other games were played using the v0 pretrained model (publicly available for download). For each move, ELF OpenGo used 2 threads with 40000 rollouts per thread (grouped into batches of 16). This took around 50 seconds per move on a V100 GPU.
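For a sense of scale, the per-move search budget described above can be worked out directly. This is a minimal arithmetic sketch based only on the numbers stated here; the batch size affects GPU utilization, not the totals:

```python
# Per-move search budget for the v0 configuration described above.
threads = 2
rollouts_per_thread = 40000
seconds_per_move = 50  # approximate, on a V100 GPU

total_rollouts = threads * rollouts_per_thread
rollouts_per_second = total_rollouts / seconds_per_move

print(total_rollouts)       # 80000 rollouts per move
print(rollouts_per_second)  # 1600.0 rollouts per second
```

The earlier prerelease games used the same thread count with 10000 rollouts per thread, i.e. a quarter of this budget per move.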

For all games, no constraints were imposed on human thinking time.

There is no source code associated with this release. All files herein are covered under this repository's BSD-style LICENSE.


Small update of our pretrained model for ELF OpenGo. This is v0 after finetuning with approximately 250000 additional minibatches (learning rate 1e-4).

Download the .bin file to get started.

There is no source code associated with this release. All files herein are covered under this repository's LICENSE file.


Initial release of our pretrained model for ELF OpenGo.

Download the .bin file to get started.

There is no source code associated with this release. All files herein are covered under this repository's LICENSE file.


This archive contains 998 games played against LeelaZero on May 3, 2018. ELF OpenGo won 980 games and LeelaZero won 18 games.
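For reference, the headline result works out as follows (simple arithmetic on the counts above):

```python
# Head-to-head results from the 998-game LeelaZero match.
elf_wins = 980
leela_wins = 18
total_games = elf_wins + leela_wins  # 998

win_rate = elf_wins / total_games
print(f"{win_rate:.1%}")  # 98.2%
```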

Two additional games hung on a LeelaZero GPU tuning step and were therefore excluded.

All games were played using the following configuration:

  • ELF OpenGo used the v0 pretrained model (publicly available for download). For each move, ELF OpenGo used 2 threads with 40000 rollouts per thread (grouped into batches of 16). This took around 50 seconds per move on a V100 GPU.
  • LeelaZero used model 158603eb. LeelaZero was fully tuned with the full-tuner option. For each move, LeelaZero used 50 seconds of thinking time on a V100 GPU.

Neither bot benefited from pondering.

There is no source code associated with this release. All files herein are covered under this repository's BSD-style LICENSE.