
Koivisto v5.0

@Luecx released this 07 Jul 18:20
· 400 commits to master since this release

It has been a long time since we released Koivisto 4.0, and many things have happened since. We are actively developing Koi, while not wanting Koivisto to be tested by third parties who also list obvious clones, which makes their ratings lose credibility.

We dislike the popularity of neural networks inside chess engines, not because we do not understand how they work, but mostly because those who use them do not seem to understand what they are actually doing.
Using neural networks does not require any understanding of chess, which you do need when writing a hand-crafted evaluation or, as we like to refer to it, a real-men-evaluation (RME). Using neural networks has become more of an engineering challenge than anything else. Three components are required: a good tuner, good data and a good implementation inside the engine. Since a good tuner requires some understanding of how neural networks work, most engines out there seem to be using other people's tuners; effectively there are just a few tuners out there but a lot more NN engines. Secondly, generating data seems to be a privilege of the big projects which gather computing resources around them. The easiest part is probably the NN implementation inside the engines themselves, although even here many people seem to ctrl+c, ctrl+v popular implementations.
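For readers unfamiliar with what such a tuner does, the core idea is usually a texel-style loss: the evaluation is squashed through a sigmoid and compared against the game result. The sketch below only illustrates that general idea; it is not Koivisto's actual trainer, and the scaling constant k is a placeholder that would normally be fitted to the data.

```cpp
#include <cmath>

// Texel-style loss sketch: squash an evaluation (in centipawns) into a
// win probability and compare it to the game result (0.0, 0.5 or 1.0).
// The scaling constant k is a placeholder, not a Koivisto value.
double sigmoid(double evalCp, double k) {
    return 1.0 / (1.0 + std::exp(-k * evalCp));
}

double mseLoss(const double* evals, const double* results, int n, double k) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double err = results[i] - sigmoid(evals[i], k);
        sum += err * err;
    }
    return sum / n;
}
```

A tuner then adjusts the evaluation parameters, or the network weights, to minimise this loss over the training positions, for example with gradient descent.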

Since we personally work with neural networks beside chess engine development, we decided to write our own tuner... from scratch... We had already done this a few months ago, but only a few days ago did we decide to give it a shot and actually tune a few networks. We generated around 1.5M self-play games with Koivisto, extracted positions from them and initially ended up with around 50M positions. Later we realised that the filtering mechanism we applied was bad and simply wrong. This led to a neural network which beat our master branch by just 40 elo. Other parties are helping out with Koivisto as well, such as @justNo4b, who generated some data with Drofa himself, used the tuner and produced a network which was suddenly +80 elo above master.
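The release does not spell out which filtering criteria were used, so treat the following purely as an illustration of the kind of step that can go wrong: pipelines that extract training positions from self-play games commonly drop noisy positions before training.

```cpp
#include <cstdlib>

// Hypothetical filter for positions extracted from self-play games.
// These criteria are common in such pipelines but are NOT Koivisto's
// actual (or fixed) filtering rules.
struct Sample {
    int  ply;                // move number in the game
    bool inCheck;            // side to move is in check
    bool bestMoveIsCapture;  // best move changes material
    int  scoreCp;            // search score in centipawns
};

bool keepPosition(const Sample& s) {
    if (s.ply < 10)                  return false;  // skip opening/book noise
    if (s.inCheck)                   return false;  // static eval unreliable
    if (s.bestMoveIsCapture)         return false;  // tactical, not quiet
    if (std::abs(s.scoreCp) > 2000)  return false;  // game already decided
    return true;
}
```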

The result seemed slightly surprising, so we rechecked the data generation and filtering and found a bug. After redoing the training process, which barely took one hour, we tested a new network which showed the following result:

ELO   | 103.93 +- 5.74 (95%)
CONF  | 10.0+0.10s Threads=1 Hash=16MB
GAMES | N: 10240 W: 5019 L: 2044 D: 3177

A network trained on 100M positions of Ethereal data provided by Andrew Grant was ~150 elo above master.

Koivisto, while standing on the shoulders of giants, has implemented many of its own specialties in both search and classical eval on top of well-known concepts. We are now taking the path of the sloth and replacing our beloved RME with a silly neural network. We want to maintain our distance from other engines, so it was important that our NN development kept the same 'Koivisto touch' that we already had before. All three aspects of our development have been done internally and are our own: we have written our own trainer, generated our own data, and have our own NN probing code. We strive to be as original as possible and will not veer from this path moving forward.

Generation of higher-quality data is ongoing and might lead to additional elo being gained here. Since the NN branch in our project started as a small test to verify the integrity of our NN tuner, we have chosen a very simple, non-relative, 2-layer, 12x64-input network. A new topology is high on our list, since we consider the current one to be very far from optimal.
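To make the topology concrete: a 12x64 input means one binary feature per (piece type, colour, square) combination, i.e. 768 inputs, feeding a single hidden layer and a scalar output. The sketch below shows what evaluating such a network could look like; the hidden size, activation and output scale are illustrative assumptions, not Koivisto's actual parameters.

```cpp
#include <algorithm>

// Sketch of a 12x64-input, 2-layer network as described above.
// HIDDEN, the ReLU activation and the output scale are assumptions;
// the release does not state Koivisto's actual values.
constexpr int INPUT  = 12 * 64;  // one feature per (piece, square)
constexpr int HIDDEN = 256;      // hypothetical hidden size

float w1[INPUT][HIDDEN];  // input -> hidden weights
float b1[HIDDEN];         // hidden biases
float w2[HIDDEN];         // hidden -> output weights
float b2;                 // output bias

// inputs[i] is 1.0f if the i-th (piece, square) feature is present, else 0.0f
float evaluate(const float inputs[INPUT]) {
    float hidden[HIDDEN];
    for (int h = 0; h < HIDDEN; h++) {
        float sum = b1[h];
        for (int i = 0; i < INPUT; i++)
            sum += inputs[i] * w1[i][h];
        hidden[h] = std::max(sum, 0.0f);  // ReLU
    }
    float out = b2;
    for (int h = 0; h < HIDDEN; h++)
        out += hidden[h] * w2[h];
    return out;  // centipawn-like score
}
```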


Beside the addition of neural network code inside Koivisto, we have made 83 further elo-gaining patches since 4.0. Many RME patches have gone in which will eventually be invalidated; more on that later. Our search has also made a lot of progress: by adding further unique ideas not found in any other engine so far, we have gained a large amount of elo inside our search since then.

Together with the neural network code, our results against Koivisto 4.0 look like this:

ELO   | 367.3 +- 30.5 (95%)
CONF  | 10.0+0.10s Threads=1 Hash=16MB
GAMES | N: 919 W: 767 L: 46 D: 106

Due to the size of the networks, we will keep them separate in a submodule of our repository. Further information can be found on our GitHub page. Furthermore, the only compiles we offer are AVX2 compiles. Any machine which does not support AVX2 will not be able to run Koivisto from now on.
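The AVX2 requirement presumably stems from the 256-bit SIMD instructions used during network inference. As a hedged illustration (not Koivisto's actual inference code), the first layer of such a network is often kept as an int16 accumulator that is updated with 256-bit integer adds whenever a feature is switched on, and 256-bit integer operations are exactly what AVX2 adds over plain AVX:

```cpp
#include <immintrin.h>
#include <cstdint>

// Illustrative accumulator update; sizes and data types are assumptions,
// not Koivisto's actual implementation.
constexpr int INPUT  = 12 * 64;
constexpr int HIDDEN = 256;

alignas(32) int16_t w1[INPUT][HIDDEN];    // first-layer weight columns
alignas(32) int16_t accumulator[HIDDEN];  // running first-layer sums

// Add the weight column of a newly activated feature to the accumulator.
// _mm256_add_epi16 processes 16 int16 lanes at once and requires AVX2.
void activateFeature(int feature) {
    for (int i = 0; i < HIDDEN; i += 16) {
        __m256i acc = _mm256_load_si256((const __m256i*) &accumulator[i]);
        __m256i col = _mm256_load_si256((const __m256i*) &w1[feature][i]);
        _mm256_store_si256((__m256i*) &accumulator[i], _mm256_add_epi16(acc, col));
    }
}
```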


Beside Koivisto 5.0 being released, we will also ship binaries for 4.83. Since many people, especially the author of Berserk, have helped with our classical evaluation, we want to make a final release which marks the end of development for RME inside Koivisto.


We want to thank all the contributors to the project: especially the author of Berserk for his massive contributions to our search and the classical evaluation, @justNo4b for helping and supporting us with various topics and with the training of neural networks, and Andrew Grant for the many discussions we had to improve parts of the code, for sharing scripts and much more. Beside that, we thank the official OpenBench Discord and all its members (especially noobpwnftw) for answering any questions we have as soon as possible and supporting us whenever possible. We also want to thank the author of Seer for offering to share training resources with us and for giving us ideas for training both our classical and our neural network evaluation.