
RocAlphaGo

(Previously known just as "AlphaGo," renamed to clarify that we are not affiliated with DeepMind)

This project is a student-led replication/reference implementation of DeepMind's 2016 Nature publication, "Mastering the game of Go with deep neural networks and tree search," details of which can be found on DeepMind's website. This implementation uses Python and Keras, a choice made to prioritize code clarity, at least in the early stages.
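For a rough idea of what "Python and Keras" means in practice, the sketch below shows a heavily reduced convolutional policy network over the paper's 19x19 board with 48 input feature planes. It is an illustration only, not this repository's actual model: the layer count, filter sizes, and training settings here are simplified assumptions.

```python
from keras.models import Sequential
from keras.layers import Conv2D, Activation, Flatten

# Hypothetical, reduced policy network in the spirit of the paper's
# architecture; NOT the model used in this repository.
model = Sequential([
    # 48 binary/integer feature planes over a 19x19 board, as described in the paper
    Conv2D(64, (5, 5), padding="same", activation="relu", input_shape=(19, 19, 48)),
    Conv2D(64, (3, 3), padding="same", activation="relu"),
    # 1x1 convolution collapses to a single plane of per-intersection move logits
    Conv2D(1, (1, 1), padding="same"),
    Flatten(),
    # softmax over the 361 board positions gives a move probability distribution
    Activation("softmax"),
])
model.compile(optimizer="sgd", loss="categorical_crossentropy")
```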


Documentation

See the project wiki.

Current project status

This is not yet a full implementation of AlphaGo. Development is carried out on the develop branch. The current emphasis is on speed optimizations, which are needed to complete training of the value network and to make tree search fast enough to be practical. See the cython-optimization branch for more.

Selected data (i.e. trained models) are released in our data repository.

This project has primarily focused on the neural network training aspect of DeepMind's AlphaGo. We also have a simple single-threaded implementation of their tree search algorithm, though it is not yet fast enough to be competitive.
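As background, the paper's tree search selects, at each node, the action maximizing Q(s, a) + u(s, a), where u(s, a) is proportional to the policy network's prior and shrinks with the visit count. The snippet below is a minimal, hypothetical sketch of that selection rule; the attribute names (children, Q, P, N) are assumptions for illustration, not this repository's API.

```python
import math

def select_action(node, c_puct=5.0):
    """Sketch of the paper's selection rule:
    a = argmax_a Q(s, a) + c_puct * P(s, a) * sqrt(sum_b N(s, b)) / (1 + N(s, a)).
    Assumes `node.children` maps actions to objects carrying Q (mean value),
    P (prior probability from the policy network), and N (visit count)."""
    total_visits = sum(child.N for child in node.children.values())
    return max(
        node.children.items(),
        key=lambda item: item[1].Q
        + c_puct * item[1].P * math.sqrt(total_visits) / (1 + item[1].N),
    )[0]
```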

See the wiki page on the training pipeline for information on how to run the training commands.

How to contribute

See the 'Contributing' document and join the Gitter chat.
