
Mixed precision training support #2191

Merged
merged 3 commits into leela-zero:next on Apr 2, 2019

Conversation

@godmoves (Contributor) commented Feb 3, 2019

Add mixed precision training support. With this method we do all computation in fp16 except the loss calculation and the weight update/storage. This speeds up the training procedure and lets us use a larger batch size.
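
For readers unfamiliar with the pattern, here is a minimal TF1-style sketch of what "fp16 compute, fp32 storage" typically looks like; it is not the PR's actual code, and `fp32_storage_getter` is an illustrative name:

```python
import tensorflow as tf

def fp32_storage_getter(getter, name, shape=None, dtype=None,
                        *args, **kwargs):
    # Keep the master copy of each weight in fp32 and hand an fp16 view
    # to the compute graph: convolutions and matmuls then run in half
    # precision while the optimizer updates full-precision values.
    var = getter(name, shape, tf.float32, *args, **kwargs)
    if dtype == tf.float16:
        var = tf.cast(var, tf.float16)
    return var

# Build the network under a scope that requests fp16 variables:
# with tf.variable_scope('model', custom_getter=fp32_storage_getter,
#                        dtype=tf.float16):
#     ...
```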

@godmoves (Contributor, Author) commented Feb 3, 2019

There is a tunable parameter, loss_scale, which is used to ensure the gradient falls within the fp16 range. I chose this value based on some tests, and the results suggest that a value of 128 works very well.
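
As an illustration, a minimal sketch of how a fixed loss scale like this is usually applied in TF1-style code (the function name and `var_list` argument are assumptions, not the PR's actual code):

```python
import tensorflow as tf

LOSS_SCALE = 128  # keeps small fp16 gradients away from underflow

def apply_scaled_gradients(optimizer, total_loss, var_list):
    # Differentiate the scaled loss so tiny gradients survive in fp16,
    # then divide the scale back out before the fp32 weight update.
    grads = tf.gradients(total_loss * LOSS_SCALE, var_list)
    grads = [g / LOSS_SCALE for g in grads]
    return optimizer.apply_gradients(zip(grads, var_list))
```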

This is the TensorBoard of my test. The full precision (dark blue) and mixed precision (light blue) losses are identical, but mixed precision training is faster (improving from 1260 pos/s to 1480 pos/s on a 1080 Ti for 10b128f). I ran the full training test only for the 10b128f network due to hardware limitations.

[TensorBoard screenshot: fp32 vs. fp16 loss curves]

@godmoves (Contributor, Author) commented Feb 3, 2019

Here is the speed test. We can see a relatively large speedup for large networks. A GPU with Tensor Cores will use them automatically for fp16 computation, so the speed improvement for smaller nets is larger there than on a GPU without them.

1080 Ti (no Tensor Cores)

| Architecture | Batch Size | FP32 Speed (pos/s) | FP16 Speed (pos/s) | Speed Boost |
| --- | --- | --- | --- | --- |
| 10x128 | 128 | 1300 | 1490 | 14.6% |
| 10x128 | 256 | 1195 | 1397 | 16.9% |
| 10x128 | 512 | OOM* | 1358 | n/a |
| 20x256 | 128 | 205 | 337 | 64.4% |
| 20x256 | 256 | OOM* | 278 | n/a |

2080 Ti (Tensor Cores)

| Architecture | Batch Size | FP32 Speed (pos/s) | FP16 Speed (pos/s) | Speed Boost |
| --- | --- | --- | --- | --- |
| 10x128 | 128 | 1752 | 2543 | 45.1% |
| 10x128 | 256 | 1810 | 2753 | 52.1% |
| 10x128 | 512 | OOM* | 2529 | n/a |
| 20x256 | 128 | 381 | 562 | 47.5% |
| 20x256 | 256 | OOM* | 251 | n/a |

*OOM: out of memory.

@gcp (Member) commented Feb 4, 2019

I'll try testing it on Windows & with an RTX.

@gcp force-pushed the next branch 2 times, most recently from d5e3539 to 6aaa5bb on February 19, 2019 16:09
@roy7 (Collaborator) commented Feb 19, 2019

Any test results to report @gcp? :)

@gcp (Member) commented Mar 2, 2019

I don't. I can no longer get TensorFlow working on Windows at all.

The problem is that I've upgraded to CUDA 10.0 to get Turing features, but the official TensorFlow releases, for god knows what reason, are still on CUDA 9.0 and MSVC2015.

Nightlies do seem to be on CUDA 10.0 but are too broken and crash with obscure error messages.

So I have no good way to test this right now.

@apetresc (Contributor) commented Mar 4, 2019

> The problem is that I've upgraded to CUDA 10.0 to get Turing features, but the official TensorFlow releases, for god knows what reason, are still on CUDA 9.0 and MSVC2015.

This doesn't appear to be the case anymore since 1.13.1, so maybe it's worth a try now?

@gcp (Member) commented Mar 6, 2019

I am pretty sure it was downloading 1.13.x. The Windows packages still seem to link against CUDA 9.

@godmoves (Contributor, Author) commented

For networks larger than 40b256f, the reg_term of the initial random weights might be too large and fall outside the fp16 range, so I cast the L2 loss of each weight back to fp32 before summing them up to prevent a possible overflow.
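
A minimal sketch of that fix (the function name and `beta` coefficient are illustrative, not the PR's exact code):

```python
import tensorflow as tf

def reg_term_fp32(weights, beta=1e-4):
    # Each per-weight L2 term fits in fp16, but their sum over a
    # 40b256f-sized network can exceed fp16's ~65504 maximum, so cast
    # every term to fp32 before accumulating.
    return beta * tf.add_n(
        [tf.cast(tf.nn.l2_loss(w), tf.float32) for w in weights])
```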

@gcp (Member) commented Mar 25, 2019

I'll try this again now; the docs really do seem to claim CUDA 10 support for the latest TensorFlow.

@gcp mentioned this pull request Mar 26, 2019
@gcp (Member) commented Apr 2, 2019

I can only confirm what I already said: TensorFlow 1.13.1 does not support CUDA 10 on Windows.

@gcp (Member) commented Apr 2, 2019

Current TensorFlow nightlies "work" in the sense that they start up, but they hang after loading the shuffle buffers and nothing seems to happen on the GPU.

TensorFlow 2.0 alpha is not backwards compatible and can't run the code.

@gcp (Member) commented Apr 2, 2019

@vipmath That one won't work (wrong CUDA, wrong Python), but that repo does have a version of 1.12.0 that might be suitable.

@gcp merged commit 7949a21 into leela-zero:next on Apr 2, 2019
@gcp (Member) commented Apr 2, 2019

Works, about 33% speedup on a Turing card.

gcp pushed a commit that referenced this pull request Apr 2, 2019
* Add mixed precision training support.
* Do not use loss scale if training with fp32
* Fix potential reg_term overflow of large networks.

Pull request #2191.
roy7 pushed a commit to roy7/leela-zero that referenced this pull request Apr 11, 2019
godmoves added a commit to godmoves/leela-zero that referenced this pull request Apr 27, 2019
* Command line parsing : OPENGL --> OPENCL

* Asynchronous simulation / evaluation+backup for batching.

* temp commit.

* New fractional backup implementation.

* reorder children after Dirichlet noise + minor fix.

* Fix for compiler syntax nitpick.

* Once again...

* Output max queue length.

* One queue for each GPU.

* Limit max queue size to twice gpucount*batchsize and Serialize OpenCL commandqueue. (Reverted "one queue for each GPU".)

* temp commits.

* Less variation in speed (pos/s) but seems ~5% slower than max performance.

* Use accumulated virtual losses to avoid visiting expanding nodes.

* Fix missing header leading to errors with some compilers.

* Fast conclusion of think().

* Solve problem with root node expansion when it's in NNCache; Fix error with some compilers.

* Cleanup loop code.

Pull request leela-zero#2033.

* always output tuning result

* fixes.

* Tensor core support for half precision

* Bugfixes

* Use m32n8k16 format instead of m16n16k16 - seems to be a bit faster

* Merge fixes.

* Code cleanup for tuning for tensorcores

* Change default to try SA=0 / SA=1 for tensorcore cases

* Update UCTSearch.cpp

* Clear NNCache when clear_board or loadsgf is issued.

* Fixes.

* Queue insertion/vl undo improvements.

* Half precision by default.

* hgemm : Added m16n16k16/m32n8k16/m8n32k16 tuning

Tuner will see which shaped multiplication is fastest.
MDIMA represents the M dimension, NDIMB represents the N dimension.

* Tuner : adjusted range for tensorcore cases so that it covers all MDIMA/NDIMB dimensions

* Fix bug causing infinite wait.

* Fix bug causing infinite wait.

* Minor fixes.

* Minor fixes.

* Crucial fix: infinite wait_expanded.

* Tentative fixes.

* Follow-up fixes.

* Update UCTNode.cpp

* stupid typo.

* stupid typo.

* small fix.

* Fix crucial bug in frac-backup factor calculation.

* Fix crucial bug in frac-backup factor calculation.

* Better output stats.

* Defaulted frac-backup; better naming of pending backup stats.

* Small fix.

* Revert SEL -> WR for get_visits for selection.

* Forgotten comment text change.

* Make some debug variables atomic.

* Renaming a variable; static_cast -> load()

* virtual loss in numerator.

* Small output fix.

* Reorganize pending backup obligations.

* Move backup data insertion to Network::get_output0.

* Remove statics; bugfixes.

* Optimizations? Do not use m_return_queue.

* Corrected implementation of virtual loss accumulation.

* Missing include.

* Modifications that don't achieve good result.

* WIP; implemented readers-writer lock.

* A snapshot as basis of further changes.

* Checkpoint.

* Checkpoint: Seamless think/ponder transition implemented.
NOT for actual use: This version sends positions to GPUs without limit for stress-testing purposes; will eat up your memory.

* Bugfixes and better debug outputs; usable version.

* Checkpoint: changes are not done but it compiles.

* Checkpoint: moved some members from OpenCLScheduler and OpenCL_Network to OpenCL; compiles.

* temp

* temp commit; won't compile.

* Checkpoint: implementation unfinished, now switch to another design.

* Mostly lock-free OpenCLScheduler.
Ensure minimal latency when there are enough positions to feed the GPUs.
Compiles. Pending debug.

* Seems working now.

* Fixes.

* Worker thread = search thread.

* Tweak conversion script for ELF v2.

Small tweak to conversion script for ELF v2 weights.

Pull request leela-zero#2213.

* Bugfix: accumulated virtual loss removal.

* Work around inexplicable reported bug.

* Endgame/Double-pass bugfix.

* Fix some cv race conditions.

* Update OpenCL.h

* Correctly initialize board when reading SGF.

Even though SGF defaults to size 19 boards, we should not try
to set up a board that size if LZ has not been compiled to support
it.

Pull request leela-zero#1964.

* Increase memory limit for 32-bit builds.

Without this, it's empirically not possible to load the current 256x40
networks on a 32-bit machine.

* Never select a CPU during OpenCL autodetection.

If we are trying to auto-select the best device for OpenCL, never select
a CPU. This will cause the engine to refuse to run when people are
trying to run the OpenCL version without a GPU or without GPU drivers,
instead of selecting any slow and suboptimal (and empirically extremely
broken) OpenCL-on-CPU drivers.

Falling back to CPU-only would be another reasonable alternative, but
doesn't provide an alert in case the GPU drivers are missing.

Improves behavior of issue leela-zero#1994.

* Fix tuner for heterogeneous GPUs and auto precision.

Fix full tuner for heterogeneous GPUs and auto precision detection.

--full-tuner implies --tune-only
--full-tuner requires an explicit precision

Fixes leela-zero#1973.

Pull request leela-zero#2004.

* Optimized out and out_in kernels.

Very minor speedup of about 2% with batch size of 1.
With batch size of 5 there is a speedup of about 5% with half precision
and 12% with single precision.

Out transformation memory accesses are almost completely coalesced
with the new kernel.

Pull request leela-zero#2014.

* Update OpenCL C++ headers.

From upstream a807dcf0f8623d40dc5ce9d1eb00ffd0e46150c7.

* CPU-only eval performance optimization.

* CPUPipe : change winograd transformation constants to an equation.

Combined with a series of strength reduction changes, 
improves netbench by about 8%.

* Convert some std::array into individual variables

For some reason this allows gcc to optimize the code better,
improving netbench by 2%.

Pull request leela-zero#2021.

* Convolve in/out performance optimization.

Use hard-coded equations instead of matrix multiplication.

Pull request leela-zero#2023.

* Validation: fix -k option.

Fix Validation -k option by reading its value before the parser is reused.

Pull request leela-zero#2024.

* Add link to Azure free trial instructions.

See pull request leela-zero#2031.

* Cleanup loop code.

Pull request leela-zero#2033.

* Cleanup atomics and dead if.

Pull request leela-zero#2034.

* Const in SGFTree.

Pull request leela-zero#2035.

* Make the README more clear.

Simplify instructions, especially related to building and running
when wanting to contribute.

Based on pull request leela-zero#1983.

* Refactor to allow AutoGTP to use Engine.

* Move Engine to Game.h and refactor autogtp to use it too.
* Fix initialization of job engines.

Pull request leela-zero#2029.

* Fix printf call style.

Generally speaking, providing character pointers as the first argument 
directly might cause FSB (Format String Bug).

Pull request leela-zero#2063.

* Add O(sqrt(log(n))) scaling to tree search.

Pull request leela-zero#2072.

* Update Khronos OpenCL C++ headers.

Update from upstream f0b7045.

Fixes warnings related to CL_TARGET_OPENCL_VERSION.

* AutoGTP: allow specifying an SGF as initial position.

* Make AutoGTP URL parametric.
* Support for the sgfhash and movescount parameters in get-task.
* Automatic downloading of sgf and training files.
* Fix Management.cpp for older Qt5 versions.
* Added starting match games from specified initial position
* Tidy ValidationJob::init() like ProductionJob::init()
* Use existing QUuid method of generating random file 
  names instead of QTemporaryFile when fetching game data.

Moreover, we do not load training data in LeelaZ since it is not needed to start from
an arbitrary position.

Pull request leela-zero#2052.

* Support separate options for white in match games.

* Add optional separate options for white in match game.
* Fixed loading of saved match order with optionsSecond.

Pull request leela-zero#2078.

* Option to get network output without writing to cache. 

Pull request leela-zero#2093.

* Add permission to link with NVIDIA libs. Update year.

See issue leela-zero#2032.

All contributors to the core engine have given their permission to
add an additional permission to link with NVIDIA's CUDA/cuDNN/TensorRT
libraries. This makes it possible to distribute the engine when built to
use those libraries.

Update the copyright notices to 2019.

* Add link to GoReviewPartner.

Pull request leela-zero#2147.

* Reminder to install OpenCL driver if separate.

Although the OpenCL driver is generally installed as part of the driver
install, mention the requirement explicitly in case it wasn't.

See pull request leela-zero#2138.

* Fixed leelaz_file on Android.

Pull request leela-zero#2135.

* Fix 'catching polymorphic type by value' warning.

Pull request leela-zero#2134.

* Fixed converter script for minigo removing bias.

Fixes leela-zero#2020.

Pull request leela-zero#2133.

* Add zlib to the mac OS X build instructions.

See pull request leela-zero#2122.

* UCTNodePtr rare race condition fix.

Calling get_eval() on zero-visit node will assert-fail.
The original code could assert-fail on b.get_eval() if 'a' and 'b' both
had zero visits but suddenly 'a' gained an additional visit.

Pull request leela-zero#2110.

* Make sure analysis is printed at least once.

Fixes issue leela-zero#2001.

Pull request leela-zero#2114.

* Don't post if not requested.

Follow up fix for pull request leela-zero#2114.

* AutoGTP: Allow specifying initial GTP commands.

* AutoGTP: Allow specifying initial GTP commands.
  Also add support for white taking the first move in handicapped job games.
* AutoGTP: Refactored core loop for match games to avoid code duplication.
* Fixed white using black's match game settings after loading from an SGF by
  moving SGF loading into Game::gameStart() to before sending GTP commands
  (except handicap commands).
* Changed so that when an SGF file is loaded, AutoGTP determines whether
  handicap is in use from the SGF rather than from any starting GTP commands.

Pull request leela-zero#2096.

* Update Eigen to 3.3.7. 

This includes some optimization improvements for newer GCC/Clang that
may be relevant to a lot of our users.

Pull request leela-zero#2151.

* Fix lz-setoption name playouts.

Fixes issue leela-zero#2167.

I could swear I fixed this before. Maybe I forgot to push?

* AutoGTP: More info in SGF comments.

* AutoGTP: Added full engine options and starting GTP commands 
  to SGF comments that are produced.
* Refactored Game::fixSgf().

Pull request leela-zero#2160.

* Truncate and compress minigo weights.

Truncate to 4-digit precision and compress converted minigo weights.

Pull request leela-zero#2173.

* Add gomill-explain_last_move.

Add gomill-explain_last_move for additional output in ringmaster
competitions.

Pull request leela-zero#2174.

* Add a feature to exclude moves from the search.

* The "avoid" command is now a param for lz-analyze and for
  lz-genmove_analyze.

New syntax is:

  `lz-analyze ARGS [avoid <color> <coords> <number_of_moves>] [avoid ...]`
  `lz-genmove_analyze ARGS [avoid <color> <coords> <number_of_moves>] [avoid ...]`

The number_of_moves is now always relative to the current move number.

Example:

  `lz-analyze b 200 avoid b q16 1 avoid b q4 1 avoid b d16 1 avoid b d4 1`

* Re-organize the parser for the "analyze" commands.

  * New tag "interval"; old syntax "100" is now short for "interval 100"
  * Tags can be specified in any arbitrary order
  * Moved all of the parsing code for "lz-analyze" and
    "lz-genmove_analyze" into the parse_analyze_tags function
  * parse_analyze_tags uses its return value instead of side effects

* Implement the "allow" tag for lz-analyze.

It works similar to "avoid".  Adding moves to the "allow" list is the
same as adding all other moves (except pass and resign) to the "avoid" list.

* "Avoid" and "allow" moves can be specified as a comma-separated list.

Example:

  `lz-analyze b 100 avoid w q4,q16,d4,d16 2 avoid b pass 50`

Pull request leela-zero#1949.

* Removed --cpu-only option from USE_CPU_ONLY build. 

Generalized output displayed in cases where potentially referring to a CPU 
instead of or as well as a GPU.

Pull request leela-zero#2161.

* Tensor Core support with PTX inline assembly.

* Tensor core support for half precision
* hgemm : Added m16n16k16/m32n8k16/m8n32k16 tuning

Tuner will see which shaped multiplication is fastest.
MDIMA represents the M dimension, NDIMB represents the N dimension.

* tensorcore : Test m16n16k16 types only for checking tensorcore availability

It seems that there are cases where only m16n16k16 is supported.
If other formats are not available they will be auto-disabled on tuning.

Pull request leela-zero#2049.

* Update TODO list.

We support avoid tags now. Clarify batching work needs
changes in the search.

* Remove an unnecessary std::move().

Which inhibits RVO. See e.g. https://stackoverflow.com/a/19272035

* Add contributor (and maintainer) guidelines. 

* Add contributor (and maintainer) guidelines.

Spell out the existing code style, C++ usage, git workflow,
commit message requirements, and give guidelines regarding reviewing,
merging and adding configuration options and GTP extensions.

Pull request leela-zero#2186.

* Add several simple GTP commands.

Added several simple GTP commands useful for building interfaces to LZ.

Added the following GTP commands.

    last_move
    move_history

The output of these commands is in line with that of the corresponding
commands in GNU Go when such commands existed.

Pull request leela-zero#2170.

* Minor style fixups.

Minor fixups for pull request leela-zero#2170.

* Remark about move assignment in style guideline.

Emphasize use of emplace_back and move semantics.

* Add lz-analyze minmoves tag.

Add an lz-analyze tag to suggest the minimum amount of moves the
engine should post info about (rather than only those it considers
interesting, i.e. the ones with at least a visit).

This allows some very flexible constructs:

Getting a heatmap:

    lz-setoption name visits value 1
    lz-analyze interval 1 minmoves 361

Forcing a move among the top policy moves only:

    lz-setoption name visits value 1
    lz-analyze interval 1 minmoves 2
    (store those moves, e.g. A1, B1)
    lz-setoption name visits value 0
    lz-genmove_analyze b interval 1 allow b A1 1 allow b B1 1

* Fix style, extra spaces in PV output.

Adding the minmoves tag exposes a small bug in the PV
output formatting. Avoid extra blank spaces.

Small style fixups.

* Rework test regex for MSVC limits.

Seems like the previous test regex is causing MSVC's regex engine to run
out of stack space.

* .gitignore: Add build.

leela-zero's default build directory is `build`.

It is very annoying when using leela as a git submodule that 
the repository updates whenever it builds.

Pull request leela-zero#2199.

* Batched neural net evaluations

Group evaluations and run them in parallel. Roughly 50% speedup on my setup, but there are a couple of points that are debatable.

- Thread / batch sizing heuristics : This PR changes how the default threads / default batch sizes are picked.  See Leela.cpp
- Batch-forming heuristic : See OpenCLScheduler.cpp for the batch forming heuristic : the heuristic exists so that we can wait for the rest of the engine to create more NN evaluations so that we can run larger batches.  We can't wait indefinitely since there are cases we enter 'serial' paths.  Since heuristics are heuristics, these might need some tests on a larger variety of types of systems.

Did make sure that winrate improves when running default vs. default command line `./leelaz -w (weight file)` on time parity.

Pull request leela-zero#2188.

* Autogtp: Tune for batchsize 1

Self-play games specify `-t 1` for playing, which implies a batch size of 1, but tuning was done with default settings since the number of threads was not specified.

Pull request leela-zero#2206.

* Update README.md.

Update links to leela-zero instead of gcp.
Update badge and link to the new AppVeyor project
under leela-zero instead of gcp ownership.

* Remove unused lambda capture.

Pull request leela-zero#2231.

* README.md: link to mentioned pull requests.

Pull request leela-zero#2229.

* Minor cleanup involving Network::get_output. 

Pull request leela-zero#2228.

* Set up default batch size and threads.

Fixes issue leela-zero#2214.

Pull request leela-zero#2256.

* Shuffle tuner parameters to find good parameters quicker.

Parameters are searched in a linear fashion currently. By shuffling them,
we will find a good instance more quickly.

Also, shuffling could help reduce possible bias due to grouped, similar
parameters that affect the environment (e.g. cache, branch predictor, ...),
leading to more accurate/fair results.

Additionally, this is a preparation for exiting the tuner during the search,
which becomes a possible option.

Pull request leela-zero#2225.

* Refactor tree_stats_helper to lambda.

Pull request leela-zero#2244.

* Enable batching for self-play.

Pull request leela-zero#2253.

* Allow configuring default komi at compile-time.

Pull request leela-zero#2257.

* Make chunkparser more robust.

Some clients are sending corrupted data, make the
chunk parser resilient against it.

* Fix thread count error message.

Pull request leela-zero#2287.

* Fix small style nits.

* Add support for time controls in loadsgf/printsgf.

Added extra support for "TM" and "OT" and other sgf time control
properties on printsgf and loadsgf GTP commands.

* Added parsing and loading of "TM" and "OT" sgf properties on GTP command
  loadsgf. Only supports "OT" syntax matching output from a printsgf GTP
  command.
* Change SGFTree to have a shared_ptr for a time control.
* Added saving and loading of "BL", "WL", "OB" and "OW" sgf properties on
  GTP commands printsgf and loadsgf.
* Change to make TimeControl::make_from_text_sgf() a time control factory
  and other minor tidying.

Pull request leela-zero#2172.

* Fix inconsistent default timecontrol.

As noted in pull request leela-zero#2172, the default
constructor set byo yomi stones but no time or
periods.

* Error out if weights are for wrong board size.

We currently will either crash or do strange things if we're
fed a weights file that doesn't match the board size we're compiled
for.

See issue leela-zero#2289.

* Ignore passing moves unless they make sense.

Only pass when winning or low on legal moves.
Disabled in self-play.

Fixes issue leela-zero#2273.
Based on pull request leela-zero#2277.

Pull request leela-zero#2301.

* Always allow passing when low on moves.

As pointed out by @gjm11 in leela-zero#2277, when there are few legal moves we
might want to allow passing even if this loses on the board count. The
alternative might be to self-destruct large groups and carry the game
on endlessly even if the policy wouldn't want to.

No difference in "dumbpass" mode.

* Report root visits in gomill-explain_last_move.

See issue leela-zero#2280.

Pull request leela-zero#2302.

* Choose move based on normal distribution LCB.

* Calculate node variance.
* Use normal distribution LCB to choose the played move.
* Cached student-t.
* Sort lz-analyze output according to LCB.
* Don't choose nodes with very few visits even if LCB is better.

Guard against NN misevaluations when the top move has a lot of visits.
Without this it's possible for a move with a few hundred visits to be
picked over a move with over ten thousand visits.

The problem is that the evaluation distribution isn't really a normal
distribution. Evaluations correlate, and the distribution can change if
a better alternative is found deeper in the tree.

Pull request leela-zero#2290.

* Mixed precision training support.

* Add mixed precision training support.
* Do not use loss scale if training with fp32
* Fix potential reg_term overflow of large networks.

Pull request leela-zero#2191.

* Update AUTHORS.

* Don't detect precision with Tensor Cores. 

Don't autodetect or default to fp32 when all cards have
Tensor Cores. We will assume fp16 is the fastest.

This avoids problems in tune-only mode which does not
detect the precision to use and would use fp32 on such cards.

Pull request leela-zero#2312.

* Update README.md.

We have a first implementation of batching now.

* Ignore --batchsize in CPU only compiles.

AutoGTP will always send --batchsize, but CPU only
compiles don't support the option. Ignore the option
in those builds.

The same problem exists with --tune-only, but quitting
immediately happens to be sane behavior so we don't need
to fix that.

Pull request leela-zero#2313.

* Don't include OpenCL scheduler in CPU build.

It will recursively include OpenCL.h and that
is bad.

Pull request leela-zero#2314.

* Bump version numbers.

* Fix: batch sizes were not set according to command line.
Vandertic pushed a commit to CuriosAI/sai that referenced this pull request Jun 10, 2019
ihavnoid pushed a commit that referenced this pull request Jul 27, 2019
Vandertic pushed a commit to CuriosAI/sai that referenced this pull request Dec 14, 2019