
Merge branch 'master' into cao/master
gcao committed Jun 2, 2020
2 parents 7e5de68 + d04fdf4 commit c20948a
Showing 91 changed files with 6,642 additions and 472 deletions.
167 changes: 72 additions & 95 deletions README.md

Large diffs are not rendered by default.

62 changes: 62 additions & 0 deletions SelfplayTraining.md
@@ -0,0 +1,62 @@
## Selfplay Training:
If you'd also like to run the full self-play loop and train your own neural nets then, in addition to probably wanting to [compile KataGo yourself](https://github.com/lightvector/KataGo#compiling-katago), you must have [Python3](https://www.python.org/) and [TensorFlow](https://www.tensorflow.org/install/) installed. The version of TensorFlow known to work with the current code, and with which KataGo's main run was trained, is 1.15. Versions earlier than 1.15 will probably not work, and KataGo has NOT been tested with TF 2.0. You'll also probably need a decent amount of GPU power.
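
For example, a minimal environment setup might look like the following. This is only a sketch assuming a pip-based install in a virtualenv; the package names and versions are one reasonable choice rather than a requirement, and note that TF 1.15 wheels are only published for older Python versions (roughly 3.5-3.7):

```bash
python3 -m venv katago-train              # optional: keep the training environment isolated
source katago-train/bin/activate
pip install tensorflow-gpu==1.15          # GPU build; use tensorflow==1.15 for CPU-only experiments
pip install numpy                         # used by the python training scripts
```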

There are five processes that all need to run concurrently to form a closed self-play training loop:
* Selfplay engine (C++ - `cpp/katago selfplay`) - continuously plays games using the latest neural net in some directory of accepted models, writing the data to some directory.
* Shuffler (python - `python/shuffle.py`) - scans directories of data from selfplay and shuffles it to produce TFRecord files to write to some directory.
* Training (python - `python/train.py`) - continuously trains a neural net using TFRecord files from some directory, saving models periodically to some directory.
* Exporter (python - `python/export_model.py`) - scans a directory of saved models and converts them from TensorFlow's format to the format that the C++ side uses, exporting to some directory.
* Gatekeeper (C++ - `cpp/katago gatekeeper`) - polls a directory of newly exported models, plays games against the latest model in an accepted-models directory, and if the new model passes, moves it to the accepted-models directory. This component is OPTIONAL; it is also possible to train while simply accepting every new model.
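
To make the data flow concrete, here is a sketch of the directory layout under a shared base directory (called `$BASEDIR` in the commands below). This is only for orientation - the programs and scripts will generally create the directories they need, and the names simply follow the example commands later in this document:

```bash
BASEDIR=/path/to/shared/basedir             # on the shared filesystem
mkdir -p "$BASEDIR"/selfplay                # selfplay engine writes game data here
mkdir -p "$BASEDIR"/models                  # accepted models, read by the selfplay engine
mkdir -p "$BASEDIR"/tfsavedmodels_toexport  # training saves TensorFlow models here for the exporter
mkdir -p "$BASEDIR"/modelstobetested        # newly exported models awaiting gating (polled by the gatekeeper)
mkdir -p "$BASEDIR"/rejectedmodels          # models that fail gating
mkdir -p "$BASEDIR"/gatekeepersgf           # SGFs of the gatekeeper's test games
mkdir -p "$BASEDIR"/logs                    # shuffler and exporter logs
```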

On the cloud, a reasonable small-scale setup for all these things might be:
* A machine with a decent amount of cores and memory to run the shuffler and exporter.
* A machine with one or two powerful GPUs and a lot of CPUs and memory to run the selfplay engine.
* A machine with a medium GPU and a lot of CPUs and memory to run the gatekeeper.
* A machine with a modest GPU to run the training.
* A well-performing shared filesystem accessible by all four of these machines.

You may need to play with learning rates, batch sizes, and the balance between training and self-play. If the training GPU is too strong, you may overfit, since training will pass over the same data again and again while self-play fails to generate new data fast enough; in that case you may want to adjust hyperparameters or even add an artificial delay to each loop of training. Overshooting in the other direction, with too much GPU power on self-play, is harder, since you generally need at least an order of magnitude more power on self-play than on training. If you do overshoot, you may start seeing diminishing returns as training becomes the limiting factor in improvement.
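
For instance, one crude way to add such a delay (this is just one possible approach, not a built-in feature) is to run bounded training passes in a shell loop, using the `train.sh` script and flags described below; all values here are placeholders:

```bash
# Throttle training so selfplay data generation can keep up.
cd python
while true; do
  ./selfplay/train.sh "$BASEDIR"/ "$TRAININGNAME" b6c96 256 main -max-epochs-this-instance 1
  sleep 1800    # wait 30 minutes before the next pass over the data
done
```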

Example instructions for starting up these components (assuming you have appropriate machines set up), with some base directory $BASEDIR, which should have a few hundred GB of disk space, to hold all the models and training data generated. The commands below assume you're running from the root of the repo and that you can run bash scripts; a consolidated cheat sheet of these commands follows this list.
* `cpp/katago selfplay -output-dir $BASEDIR/selfplay -models-dir $BASEDIR/models -config cpp/configs/training/SELFPLAYCONFIG.cfg >> log.txt 2>&1 & disown`
* Some example configs for different numbers of GPUs are: `cpp/configs/training/selfplay{1,2,4,8a,8b,8c}.cfg`. You may want to edit them depending on your specs, for example to change the sizes of various tables depending on how much memory you have, or to specify GPU indices if you're putting some mix of training, gatekeeper, and self-play on the same machines or GPUs instead of on separate ones. Note that the number of game threads in these configs is very large, probably far larger than the number of cores on your machine. This is intentional, since each thread currently runs synchronously with respect to neural net queries, so a large number of parallel games is needed to take advantage of batching.
* Take a look at the generated `log.txt` for any errors and/or for running stats on started games and occasional neural net query stats.
* Edit the config to change the number of playouts used or other parameters, or to set a cap on the number of games generated after which selfplay should terminate.
* If `models-dir` is empty, selfplay will use a random number generator in place of a neural net to produce data, so selfplay is the **starting point** for setting up the full closed loop.
* Multiple selfplay processes across many machines can coexist using the same output dirs on a shared filesystem. This is the intended way to run selfplay across a cluster.
* `cd python; ./selfplay/shuffle_and_export_loop.sh $NAMEOFRUN $BASEDIR/ $SCRATCH_DIRECTORY $NUM_THREADS $BATCH_SIZE $USE_GATING`
* `$NAMEOFRUN` should be a short alphanumeric string, ideally globally unique, to distinguish models from your run if you choose to share your results with others. It will get prefixed onto the internal names of exported models, which will appear in log messages when KataGo loads the model.
* This starts both the shuffler and exporter. The shuffler will use the scratch directory with the specified number of threads to shuffle in parallel. Make sure you have some disk space. You probably want as many threads as you have cores. If not using the gatekeeper, specify `0` for `$USE_GATING`, else specify `1`.
* KataGo uses a batch size of 256, but you might have to use a smaller batch size if your GPU has less memory or you are training a very big net.
* Also, if you're low on disk space, take a look at the `./selfplay/shuffle.sh` script (which is called by `shuffle_and_export_loop.sh`). Right now it's *very* conservative about cleaning up old shuffles, but you could tweak it to be a bit more aggressive.
* You can also edit `./selfplay/shuffle.sh` if you want to change any details about the lookback window for training data; see `shuffle.py` for more possible arguments.
* The loop script will write `$BASEDIR/logs/outshuffle.txt` and `$BASEDIR/logs/outexport.txt`; take a look at these to see the output of the shuffle and export programs and/or any errors they encountered.
* `cd python; ./selfplay/train.sh $BASEDIR/ $TRAININGNAME b6c96 $BATCH_SIZE main -lr-scale 1.0 >> log.txt 2>&1 & disown`
* This starts the training. You may want to look at or edit the `train.sh` script; it also snapshots the state of the repo for logging and contains some training parameters that can be tweaked.
* `$TRAININGNAME` is a name prefix for the neural net, whose name will follow the convention `$NAMEOFRUN-$TRAININGNAME-s(# of samples trained on)-d(# of data samples generated)`.
* The batch size specified here MUST match the batch size given to the shuffle script.
* The fourth argument controls some export behavior:
* `main` - this is the main net for selfplay, save it regularly to `$BASEDIR/tfsavedmodels_toexport` which the export loop will export regularly for gating.
* `extra` - save models to `$BASEDIR/tfsavedmodels_toexport_extra`, which the export loop will then export to `$BASEDIR/models_extra`, a directory that does not feed into gating or selfplay.
* `trainonly` - train the neural net without exporting anything. This is useful when you are jointly training additional models of different sizes and there's no point in having them export anything yet (maybe they're too weak to bother testing).
* Any additional arguments, like `-lr-scale 1.0` to adjust the learning rate, will simply get forwarded on to train.py. The argument `-max-epochs-this-instance` can be used to make training terminate after a few epochs instead of running forever. Run train.py with `-help` for other arguments.
* Take a look at the generated `log.txt` for any possible errors, as well as running stats on training and loss statistics.
* You can choose a different size than b6c96 if desired. Configuration is in `python/modelconfigs.py`, which you can also edit to add other sizes.
* `cpp/katago gatekeeper -rejected-models-dir $BASEDIR/rejectedmodels -accepted-models-dir $BASEDIR/models/ -sgf-output-dir $BASEDIR/gatekeepersgf/ -test-models-dir $BASEDIR/modelstobetested/ -config cpp/configs/training/GATEKEEPERCONFIG.cfg >> log.txt 2>&1 & disown`
* This starts the gatekeeper. Some example configs for different numbers of GPUs are: `cpp/configs/training/gatekeeper{1,2a,2b,2c}.cfg`. Again, you may want to edit these. The number of simultaneous game threads here is also large, for the same reasons as for selfplay. There is no need to start this if specifying `0` for `$USE_GATING`.
* Take a look at the generated `log.txt` for any errors and/or for the game-by-game progress of each testing match that the gatekeeper runs.
* The argument `-quit-if-no-nets-to-test` can make gatekeeper terminate after testing all nets queued for testing, instead of running forever and waiting for more. Run with -help to see other arguments as well.
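
Putting the pieces together, here is the consolidated cheat sheet of the commands above for the asynchronous setup. Each block is intended to be run on its own appropriate machine (all of them seeing the same `$BASEDIR` on the shared filesystem), and the config names, scratch directory, thread count, and batch size are placeholders taken from the examples above:

```bash
# Shared settings (set on every machine):
BASEDIR=/path/to/shared/basedir
NAMEOFRUN=myrun                 # short alphanumeric, ideally globally unique
TRAININGNAME=mynet
BATCH_SIZE=256                  # must match between the shuffler and training
USE_GATING=1                    # 0 to accept every exported model without gating

# On the selfplay machine(s):
cpp/katago selfplay -output-dir "$BASEDIR"/selfplay -models-dir "$BASEDIR"/models \
  -config cpp/configs/training/SELFPLAYCONFIG.cfg >> log.txt 2>&1 & disown

# On the shuffler/exporter machine:
cd python; ./selfplay/shuffle_and_export_loop.sh "$NAMEOFRUN" "$BASEDIR"/ /tmp/shufflescratch 16 "$BATCH_SIZE" "$USE_GATING"

# On the training machine:
cd python; ./selfplay/train.sh "$BASEDIR"/ "$TRAININGNAME" b6c96 "$BATCH_SIZE" main -lr-scale 1.0 >> log.txt 2>&1 & disown

# On the gatekeeper machine (only if USE_GATING=1):
cpp/katago gatekeeper -rejected-models-dir "$BASEDIR"/rejectedmodels \
  -accepted-models-dir "$BASEDIR"/models/ -sgf-output-dir "$BASEDIR"/gatekeepersgf/ \
  -test-models-dir "$BASEDIR"/modelstobetested/ \
  -config cpp/configs/training/GATEKEEPERCONFIG.cfg >> log.txt 2>&1 & disown
```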

To manually pause a run, the recommended method is to send `SIGINT` (or, if necessary, `SIGKILL`) to all the relevant processes. On `SIGINT`, the selfplay and gatekeeper processes will terminate gracefully and finish writing all pending data (this may take a minute or two); any python or bash scripts will be terminated abruptly, but they are all implemented to write to disk in a way that is safe if killed at any point. To resume the run, just restart everything again with the same `$BASEDIR` and everything will continue where it left off.
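
For example, a rough sketch of pausing everything on a single machine, assuming the processes were launched as in the commands above; the process-matching patterns are illustrative and may need adjusting, and child processes started by the shell scripts may need to be signaled as well:

```bash
# Ask the C++ engines to shut down cleanly; they finish writing pending data first.
pkill -INT -f "katago selfplay"
pkill -INT -f "katago gatekeeper"

# The python/bash scripts are safe to stop abruptly at any point.
pkill -f "shuffle_and_export_loop.sh"
pkill -f "shuffle.py"
pkill -f "export_model.py"
pkill -f "train.py"
```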

### Synchronous vs Asynchronous
The normal pipeline, and the method that all scripts and configs are geared for by default, is to have all steps run simultaneously and _asynchronously_ without ever stopping: selfplay continuously produces data and polls for new nets, the shuffler repeatedly takes the data and shuffles it, training continuously uses the data to produce new nets, and so on. This is by far the simplest and most efficient method when using more than one machine in the training loop, since each process can simply keep running on its own machine without waiting for steps on any other. To do so, just start up each separate process as described above, each one on an appropriate machine.

It is also possible to run _synchronously_, with each step sequentially following the previous, which could be suitable for attempting a run on only one machine with only one GPU. An example script is provided in `python/selfplay/synchronous_loop.sh` for how to do this; a heavily simplified sketch of the overall shape also follows this list. In particular it:
* Provides a `-max-games-total` to the selfplay so it terminates after a certain number of games.
* Provides a smaller value of `-keep-target-rows` to the shuffler to reduce the data per cycle, and `-samples-per-epoch` along with `-max-epochs-this-instance 1` to the training so that it terminates after training on a smaller number of samples instead of going forever.
* If using the gatekeeper at all, provides `-quit-if-no-nets-to-test` to it so that it terminates after gatekeeping any nets produced by training. Not using gating (passing in 0 for `USEGATING`) is faster and saves compute power, and the whole loop works perfectly fine without it, but having it at first can be nice for debugging and for confirming that things are working and that the net is actually getting stronger.
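
The following is only a schematic sketch of the shape of such a loop, not the actual `synchronous_loop.sh`. The shuffle/export step is elided because its exact arguments are not covered here, and the paths, config names, game counts, and sample counts are placeholders to adjust for your hardware:

```bash
#!/bin/bash
# Schematic synchronous loop: each step finishes before the next begins.
# See python/selfplay/synchronous_loop.sh for the real, tested version.
BASEDIR=/path/to/basedir
NAMEOFRUN=myrun
TRAININGNAME=mynet

for i in $(seq 1 100); do
  # 1. Generate a bounded batch of selfplay data, then exit.
  cpp/katago selfplay -max-games-total 500 -output-dir "$BASEDIR"/selfplay \
    -models-dir "$BASEDIR"/models -config cpp/configs/training/SELFPLAYCONFIG.cfg

  # 2. Shuffle the new data and export the latest net for the next cycle.
  #    (See python/selfplay/shuffle.sh and python/export_model.py for the exact arguments.)

  # 3. Train on a bounded number of samples, then exit.
  #    (The batch size, 128 here, must match whatever is given to the shuffle step.)
  (cd python && ./selfplay/train.sh "$BASEDIR"/ "$TRAININGNAME" b6c96 128 main \
      -samples-per-epoch 250000 -max-epochs-this-instance 1)

  # 4. Optionally, gate the newly exported net, then exit.
  cpp/katago gatekeeper -quit-if-no-nets-to-test \
    -rejected-models-dir "$BASEDIR"/rejectedmodels -accepted-models-dir "$BASEDIR"/models/ \
    -sgf-output-dir "$BASEDIR"/gatekeepersgf/ -test-models-dir "$BASEDIR"/modelstobetested/ \
    -config cpp/configs/training/GATEKEEPERCONFIG.cfg
done
```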

The default parameters in the example synchronous loop script are NOT heavily tested, and unlike the asynchronous setup they have NOT been used for KataGo's primary training runs, so it is quite possible that they are suboptimal and will need some experimentation. The right parameters may also vary depending on what you're training: for example, a 9x9-only run may prefer a different number of samples and windowing policy than a 19x19 run.

With either a synchronous OR an asynchronous setup, it's recommended to spend anywhere from 4x to 40x more GPU power on selfplay than on training. For the normal asynchronous setup, this is done simply by using more and/or stronger GPUs on the selfplay processes than on training. For synchronous, this can be done by playing around with the various parameters (number of games, visits per move, samples per epoch, etc.) and seeing how long each step takes, to find a good balance for your hardware. Note, however, that timing these steps very early in a run may be misleading, since games with early, barely-better-than-random nets last a lot longer than they do a little further into a run.
2 changes: 1 addition & 1 deletion cpp/README.md
@@ -29,6 +29,7 @@ Summary of source folders, in approximate dependency order, from lowest level to
* `playutils.{cpp,h}` - Miscellaneous: handicap placement, ownership and final stone status, computing high-level stats to report, benchmarking.
* `play.{cpp,h}` - Running matches and self-play games.
* `tests` - A variety of tests.
* `models` - A directory with a small number of small-sized (and not very strong) models for running tests.
* `command` - Top-level subcommands callable by users. GTP, analysis commands, benchmarking, selfplay data generation, etc.
* `commandline.{cpp,h}` - Common command line logic shared by all subcommands.
* `gtp.cpp` - Main GTP engine.
@@ -41,4 +42,3 @@ Summary of source folders, in approximate dependency order, from lowest level to
Other folders:

* `configs` - Default or example configs for many of the different subcommands.
* `models` - A small number of small-sized (and not very strong) models for running tests.
37 changes: 31 additions & 6 deletions cpp/command/analysis.cpp
@@ -1,6 +1,8 @@
#include "../core/global.h"
#include "../core/config_parser.h"
#include "../core/timer.h"
#include "../core/datetime.h"
#include "../core/makedir.h"
#include "../search/asyncbot.h"
#include "../program/setup.h"
#include "../program/playutils.h"
@@ -67,23 +69,32 @@ int MainCmds::analysis(int argc, const char* const* argv) {
}

Logger logger;
logger.addFile(cfg.getString("logFile"));
if(cfg.contains("logFile") && cfg.contains("logDir"))
throw StringError("Cannot specify both logFile and logDir in config");
else if(cfg.contains("logFile"))
logger.addFile(cfg.getString("logFile"));
else {
MakeDir::make(cfg.getString("logDir"));
Rand rand;
logger.addFile(cfg.getString("logDir") + "/" + DateTime::getCompactDateTimeString() + "-" + Global::uint32ToHexString(rand.nextUInt()) + ".log");
}

logger.setLogToStderr(true);

logger.write("Analysis Engine starting...");
logger.write(Version::getKataGoVersionForHelp());

auto loadParams = [](ConfigParser& config, SearchParams& params, Player& perspective) {
auto loadParams = [](ConfigParser& config, SearchParams& params, Player& perspective, Player defaultPerspective) {
params = Setup::loadSingleParams(config);
perspective = Setup::parseReportAnalysisWinrates(config,C_EMPTY);
perspective = Setup::parseReportAnalysisWinrates(config,defaultPerspective);
//Set a default for conservativePass that differs from matches or selfplay
if(!config.contains("conservativePass") && !config.contains("conservativePass0"))
params.conservativePass = true;
};

SearchParams defaultParams;
Player defaultPerspective;
loadParams(cfg, defaultParams, defaultPerspective);
loadParams(cfg, defaultParams, defaultPerspective, C_EMPTY);

const int analysisPVLen = cfg.contains("analysisPVLen") ? cfg.getInt("analysisPVLen",1,100) : 15;
const bool assumeMultipleStartingBlackMovesAreHandicap =
@@ -102,6 +113,19 @@ int MainCmds::analysis(int argc, const char* const* argv) {
);
}

int nnMaxBatchSizeTotal = nnEval->getNumGpus() * nnEval->getMaxBatchSize();
int numThreadsTotal = defaultParams.numThreads * numAnalysisThreads;
if(nnMaxBatchSizeTotal * 1.5 <= numThreadsTotal) {
logger.write(
Global::strprintf(
"Note: nnMaxBatchSize * number of GPUs (%d) is smaller than numSearchThreads * numAnalysisThreads (%d)",
nnMaxBatchSizeTotal, numThreadsTotal
)
);
logger.write("The number of simultaneous threads that might query the GPU could be larger than the batch size that the GPU will handle at once.");
logger.write("It may improve performance to increase nnMaxBatchSize, unless you are constrained on GPU memory.");
}

//Check for unused config keys
cfg.warnUnusedKeys(cerr,&logger);

@@ -346,6 +370,7 @@ int MainCmds::analysis(int argc, const char* const* argv) {

//Defaults
rbase.params = defaultParams;
rbase.perspective = defaultPerspective;
rbase.analysisPVLen = analysisPVLen;
rbase.includeOwnership = false;
rbase.includePolicy = false;
@@ -597,7 +622,7 @@ int MainCmds::analysis(int argc, const char* const* argv) {
}
std::map<string,string> overrideSettings;
for(auto it = settings.begin(); it != settings.end(); ++it) {
overrideSettings[it.key()] = it.value().is_string() ? std::string(it.value()): it.value().dump(); // always convert to string
overrideSettings[it.key()] = it.value().is_string() ? it.value().get<string>(): it.value().dump(); // always convert to string
}

// Reload settings to allow overrides
@@ -607,7 +632,7 @@ int MainCmds::analysis(int argc, const char* const* argv) {
//Ignore any unused keys in the ORIGINAL config
localCfg.markAllKeysUsedWithPrefix("");
localCfg.overrideKeys(overrideSettings);
loadParams(localCfg, rbase.params, rbase.perspective);
loadParams(localCfg, rbase.params, rbase.perspective, defaultPerspective);
SearchParams::failIfParamsDifferOnUnchangeableParameter(defaultParams,rbase.params);
//Hard failure on unused override keys newly present in the config
vector<string> unusedKeys = localCfg.unusedKeys();
7 changes: 7 additions & 0 deletions cpp/command/evalsgf.cpp
@@ -22,6 +22,7 @@ int MainCmds::evalsgf(int argc, const char* const* argv) {
int moveNum;
string printBranch;
string extraMoves;
string hintLoc;
int64_t maxVisits;
int numThreads;
float overrideKomi;
@@ -44,6 +45,7 @@ int MainCmds::evalsgf(int argc, const char* const* argv) {
TCLAP::ValueArg<string> printArg("p","print","Alias for -print-branch",false,string(),"MOVE MOVE ...");
TCLAP::ValueArg<string> extraMovesArg("","extra-moves","Extra moves to force-play before doing search",false,string(),"MOVE MOVE ...");
TCLAP::ValueArg<string> extraArg("e","extra","Alias for -extra-moves",false,string(),"MOVE MOVE ...");
TCLAP::ValueArg<string> hintLocArg("","hint-loc","Hint loc",false,string(),"MOVE");
TCLAP::ValueArg<long> visitsArg("v","visits","Set the number of visits",false,-1,"VISITS");
TCLAP::ValueArg<int> threadsArg("t","threads","Set the number of threads",false,-1,"THREADS");
TCLAP::ValueArg<float> overrideKomiArg("","override-komi","Artificially set komi",false,std::numeric_limits<float>::quiet_NaN(),"KOMI");
@@ -65,6 +67,7 @@ int MainCmds::evalsgf(int argc, const char* const* argv) {
cmd.add(printArg);
cmd.add(extraMovesArg);
cmd.add(extraArg);
cmd.add(hintLocArg);
cmd.add(visitsArg);
cmd.add(threadsArg);
cmd.add(overrideKomiArg);
@@ -84,6 +87,7 @@ int MainCmds::evalsgf(int argc, const char* const* argv) {
string print = printArg.getValue();
extraMoves = extraMovesArg.getValue();
string extra = extraArg.getValue();
hintLoc = hintLocArg.getValue();
maxVisits = (int64_t)visitsArg.getValue();
numThreads = threadsArg.getValue();
overrideKomi = overrideKomiArg.getValue();
@@ -244,6 +248,9 @@ int MainCmds::evalsgf(int argc, const char* const* argv) {
AsyncBot* bot = new AsyncBot(params, nnEval, &logger, searchRandSeed);

bot->setPosition(nextPla,board,hist);
if(hintLoc != "") {
bot->setRootHintLoc(Location::ofString(hintLoc,board));
}

//Print initial state----------------------------------------------------------------
const Search* search = bot->getSearchStopAndWait();