Add permission under GPL v3 section 7 for linking with cuDNN/CUDA/TensorRT. #2032

Closed

gcp opened this issue Nov 19, 2018 · 39 comments

@gcp
Member

gcp commented Nov 19, 2018

For some background, see #2007 (comment).

In order to distribute Leela Zero binaries with cuDNN/CUDA/TensorRT support, together with the required DLLs, we need to comply with both the GPL and NVIDIA's license terms. The GPL requires that all source for the program be freely available, whereas NVIDIA's libraries, including their headers, impose a number of additional restrictions.

There is a way around this: adding an additional permission, as allowed by GPL v3 section 7. An example of such an additional permission for CUDA can be found here: Gentoo Example

This can be done for all files currently in Leela Zero with the agreement of all contributors, so that Leela Zero with CUDA support can be distributed as a binary, together with the required DLLs.

The suggested text is the following:

Additional permission under GNU GPL version 3 section 7

If you modify this Program, or any covered work, by linking or
combining it with NVIDIA Corporation's libraries from the
NVIDIA CUDA Toolkit and/or the NVIDIA CUDA Deep Neural
Network library and/or the NVIDIA TensorRT inference library
(or a modified version of those libraries), containing parts covered 
by the terms of the respective license agreement, the licensors of 
this Program grant you additional permission to convey the resulting 
work.
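
For illustration, with this permission adopted, a per-file license header would combine the standard GPLv3 notice with the text above. A sketch only (the copyright line is an example, not necessarily the exact header in the tree):

    /*
        This file is part of Leela Zero.
        Copyright (C) 2018 Gian-Carlo Pascutto and contributors

        Leela Zero is free software: you can redistribute it and/or modify
        it under the terms of the GNU General Public License as published by
        the Free Software Foundation, either version 3 of the License, or
        (at your option) any later version.

        Leela Zero is distributed in the hope that it will be useful,
        but WITHOUT ANY WARRANTY; without even the implied warranty of
        MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
        GNU General Public License for more details.

        You should have received a copy of the GNU General Public License
        along with Leela Zero. If not, see <http://www.gnu.org/licenses/>.

        Additional permission under GNU GPL version 3 section 7

        If you modify this Program, or any covered work, by linking or
        combining it with NVIDIA Corporation's libraries from the
        NVIDIA CUDA Toolkit and/or the NVIDIA CUDA Deep Neural
        Network library and/or the NVIDIA TensorRT inference library
        (or a modified version of those libraries), containing parts covered
        by the terms of the respective license agreement, the licensors of
        this Program grant you additional permission to convey the resulting
        work.
    */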
@gcp
Member Author

gcp commented Nov 19, 2018

These are the people whose code would be involved. So if you are listed here, read the above and say whether you're OK with it or not.

@gcp - OK
@marcocalignano - OK
@Ttl - OK
@sethtroisi - OK
@TFiFiE - OK
@ihavnoid - OK
@killerducky - OK
@bood - OK
@ywrt - OK
@WhiteHalmos - OK
@Hersmunch - OK
@earthengine - OK
@akdtg - OK
@barrybecker4 - OK
@kuba97531 - OK
@alreadydone - OK
@roy7 - OK
@bittsitt - OK
@thynson - OK
@tux3 - OK
@nerai - OK
@OmnipotentEntity - OK
@tterava - OK
@Alderi-Tokori - OK
@eddh - OK

@thynson
Contributor

thynson commented Nov 19, 2018

Absolutely agreed. 👍

@bood
Collaborator

bood commented Nov 19, 2018

I'm okay with this additional permission.

@nerai
Contributor

nerai commented Nov 19, 2018

I'm ok with this.

@kuba97531
Contributor

I'm OK with this.

@ihavnoid
Member

I'm okay.

@Ttl
Member

Ttl commented Nov 19, 2018

Ok.

@tux3
Contributor

tux3 commented Nov 19, 2018

I'm okay with this.

@roy7
Collaborator

roy7 commented Nov 19, 2018

Ok with me!

@killerducky
Contributor

Ok.

@tterava
Contributor

tterava commented Nov 19, 2018

Ok

@Alderi-Tokori
Contributor

I'm ok with this as well.

@marcocalignano
Member

I am OK with this.

@sethtroisi
Member

sethtroisi commented Nov 19, 2018

I'm OK and excited about this

@roy7
Collaborator

roy7 commented Nov 19, 2018

I pinged those I could locate on Discord to help out.

@OmnipotentEntity
Contributor

Sorry for the delay. I'm OK with this. Though I honestly hope you wouldn't hold up this change for people (such as myself) whose contributions can be considered de minimis.

@alreadydone
Contributor

alreadydone commented Nov 19, 2018

Of course I'm OK with this! Glad to see this happening!

@Hersmunch
Member

Ok from me

@TFiFiE
Contributor

TFiFiE commented Nov 19, 2018

Permission granted.

@ywrt
Contributor

ywrt commented Nov 19, 2018

Permission granted.

@remdu
Contributor

remdu commented Nov 19, 2018

I'm OK with this!

@barrybecker4
Contributor

It's fine with me.

@earthengine
Contributor

Ok.

@hred6

hred6 commented Nov 21, 2018

I'm OK with this!

@gcp
Member Author

gcp commented Nov 27, 2018

I sent an email to the 3 people that are still missing.

@ghost

ghost commented Nov 27, 2018

I'm OK with this. Thanks for sending the email!

@akdtg
Contributor

akdtg commented Nov 28, 2018

I'm ready to co-sign every word of @OmnipotentEntity's comment #2032 (comment).
Agreed, go ahead.

@wonderingabout
Contributor

wonderingabout commented Dec 4, 2018

When this issue is solved, please @ me.

I already wrote instructions on how to install CUDA, cuDNN, and TensorRT on Ubuntu,
and will make a video tutorial.

@roy7
Collaborator

roy7 commented Dec 4, 2018

@bittsitt I hope you are still out there someplace. :)

@gcp
Member Author

gcp commented Dec 7, 2018

I hope so too. The problem is that his contribution was pretty important: it's all the lz-analyze code that makes Lizzie work. We could do it the ffmpeg way and condition that code on not using CUDA, so people analyzing would be using the OpenCL version (perhaps with the PTX hack that @ihavnoid made), and clients running the training could just use CUDA. But obviously that's far from ideal and doesn't really make things easier for users.
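
For concreteness, the ffmpeg-style option would amount to a compile-time guard in the lz-analyze handler, roughly like this (a sketch only; USE_CUDA is a hypothetical build flag here, and gtp_fail_printf is an assumed GTP error helper):

    // Sketch: condition the analysis commands on a non-CUDA build,
    // so CUDA builds remain distributable without the affected code.
    #ifdef USE_CUDA
        gtp_fail_printf(id, "lz-analyze is not available in CUDA builds");
        return;
    #else
        // ... existing lz-analyze implementation (OpenCL builds) ...
    #endif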

Rewriting the code isn't trivial either, especially as you should absolutely avoid using the current implementation as a reference (ideally it should be clean-roomed after writing a GTP spec for it).

So yeah. Finding @bittsitt would be preferable by a few orders of magnitude.

@roy7
Collaborator

roy7 commented Dec 9, 2018

I'm unable to extract their email address from GitHub, I think because their last commit was so long ago. I tried pinging them in a different repository.

@alreadydone
Contributor

alreadydone commented Dec 9, 2018

Nice find! It seems he's still watching featurecat/lizzie, though no longer watching his fork of lizzie and leela-zero. You can find the email addresses associated with commits by searching the name in gitk, and in one case bittsitt shows up with a Gmail address; I think @gcp did something like this. BTW, I found his Twitch account and sent him a message, but haven't gotten a response yet.

@OmnipotentEntity
Contributor

Re: Cleanroom implementation. I'm about to graduate in a few days, and I'll finally have free time again. If someone writes up a specification and creates a fork without this feature, I'll be happy to re-add it before attempting to work on the binary file format shit that I've had on the backburner for months.

Though I have not read or looked at bittsitt's code, I cannot prove that; considering that the code is published and freely available, it would be difficult for anyone to demonstrate it either way.

@roy7
Collaborator

roy7 commented Dec 10, 2018

I tried dropping a note to the Gmail address. Let's hope they still check it. :)

@gcp
Member Author

gcp commented Dec 11, 2018

Though I have not read or looked at bittsitt's code, I cannot prove that; considering that the code is published and freely available, it would be difficult for anyone to demonstrate it either way.

I'm sure that'd be fine. It's not like we'd expect to be sued over this and have to prove it. But it's only proper to respect another person's copyright, which means no peeking.

I'll take a look at documenting the GTP commands; this should be done anyhow.

@bittsitt
Contributor

Sorry, I haven't been around much lately, but Junyan (@alreadydone) found me on Twitch. OK by me!

@roy7
Collaborator

roy7 commented Dec 31, 2018

Yay!

@wonderingabout
Contributor

What shall be done now, @gcp?

@gcp
Member Author

gcp commented Jan 4, 2019

I'll update the licenses in the tree. This is awesome news.

gcp added a commit that referenced this issue Jan 4, 2019
Add permission to link with NVIDIA libs. Update year.

See issue #2032.

All contributors to the core engine have given their permission to
add an additional permission to link with NVIDIA's CUDA/cuDNN/TensorRT
libraries. This makes it possible to distribute the engine when built to
use those libraries.

Update the copyright notices to 2019.
@gcp gcp closed this as completed Jan 4, 2019
gcp added a commit that referenced this issue Apr 2, 2019
Add permission to link with NVIDIA libs. Update year.

See issue #2032.
godmoves added a commit to godmoves/leela-zero that referenced this issue Apr 27, 2019
* Command line parsing : OPENGL --> OPENCL

* Asynchronous simulation / evaluation+backup for batching.

* temp commit.

* New fractional backup implementation.

* reorder children after Dirichlet noise + minor fix.

* Fix for compiler syntax nitpick.

* Once again...

* Output max queue length.

* One queue for each GPU.

* Limit max queue size to twice gpucount*batchsize and Serialize OpenCL commandqueue. (Reverted "one queue for each GPU".)

* temp commits.

* Less variation in speed (pos/s) but seems ~5% slower than max performance.

* Use accumulated virtual losses to avoid visiting expanding nodes.

* Fix missing header leading to error with some compiler.

* Fast conclusion of think().

* Solve problem with root node expansion when it's in NNCache; Fix error with some compilers.

* Cleanup loop code.

Pull request leela-zero#2033.

* always output tuning result

* fixes.

* Tensor core support for half precision

* Bugfixes

* Use m32n8k16 format instead of m16n16k16 - seems to be a bit faster

* Merge fixes.

* Code cleanup for tuning for tensorcores

* Change default to try SA=0 / SA=1 for tensorcore cases

* Update UCTSearch.cpp

* Clear NNCache when clear_board or loadsgf is issued.

* Fixes.

* Queue insertion/vl undo improvements.

* Half precision by default.

* hgemm : Added m16n16k16/m32n8k16/m8n32k16 tuning

Tuner will see which shaped multiplication is fastest.
MDIMA represents the M dimension, NDIMB represents the N dimension.

* Tuner : adjusted range for tensorcore cases so that it covers all MDIMA/NDIMB dimensions

* Fix bug causing infinite wait.

* Fix bug causing infinite wait.

* Minor fixes.

* Minor fixes.

* Crucial fix: infinite wait_expanded.

* Tentative fixes.

* Follow-up fixes.

* Update UCTNode.cpp

* stupid typo.

* stupid typo.

* small fix.

* Fix crucial bug in frac-backup factor calculation.

* Fix crucial bug in frac-backup factor calculation.

* Better output stats.

* Defaulted frac-backup; better naming of pending backup stats.

* Small fix.

* Revert SEL -> WR for get_visits for selection.

* Forgotten comment text change.

* Make some debug variables atomic.

* Renaming a variable; static_cast -> load()

* virtual loss in numerator.

* Small output fix.

* Reorganize pending backup obligations.

* Move backup data insertion to Network::get_output0.

* Remove statics; bugfixes.

* Optimizations? Do not use m_return_queue.

* Corrected implementation of virtual loss accumulation.

* Missing include.

* Modifications that don't achieve good result.

* WIP; implemented readers-writer lock.

* A snapshot as basis of further changes.

* Checkpoint.

* Checkpoint: Seamless think/ponder transition implemented.
NOT for actual use: This version sends positions to GPUs without limit for stress-testing purposes; will eat up your memory.

* Bugfixes and better debug outputs; usable version.

* Checkpoint: changes are not done but it compiles.

* Checkpoint: moved some members from OpenCLScheduler and OpenCL_Network to OpenCL; compiles.

* temp

* temp commit; won't compile.

* Checkpoint: implementation unfinished, now switch to another design.

* Mostly lock-free OpenCLScheduler.
Ensure minimal latency when there're enough positions to feed the GPUs.
Compiles. Pending debug.

* Seems working now.

* Fixes.

* Worker thread = search thread.

* Tweak conversion script for ELF v2.

Small tweak to conversion script for ELF v2 weights.

Pull request leela-zero#2213.

* Bugfix: accumulated virtual loss removal.

* Work around inexplicable reported bug.

* Endgame/Double-pass bugfix.

* Fix some cv race conditions.

* Update OpenCL.h

* Correctly initialize board when reading SGF.

Even though SGF defaults to size 19 boards, we should not try
to set up a board that size if LZ has not been compiled to support
it.

Pull request leela-zero#1964.

* Increase memory limit for 32-bit builds.

Without this, it's empirically not possible to load the current 256x40
networks on a 32-bit machine.

* Never select a CPU during OpenCL autodetection.

If we are trying to auto-select the best device for OpenCL, never select
a CPU. This will cause the engine to refuse to run when people are
trying to run the OpenCL version without a GPU or without GPU drivers,
instead of selecting any slow and suboptimal (and empirically extremely
broken) OpenCL-on-CPU drivers.

Falling back to CPU-only would be another reasonable alternative, but
doesn't provide an alert in case the GPU drivers are missing.

Improves behavior of issue leela-zero#1994.

* Fix tuner for heterogeneous GPUs and auto precision.

Fix full tuner for heterogeneous GPUs and auto precision detection.

--full-tuner implies --tune-only
--full-tuner requires an explicit precision

Fixes leela-zero#1973.

Pull request leela-zero#2004.

* Optimized out and out_in kernels.

Very minor speedup of about 2% with batch size of 1.
With batch size of 5 there is a speedup of about 5% with half precision
and 12% with single precision.

Out transformation memory accesses are almost completely coalesced
with the new kernel.

Pull request leela-zero#2014.

* Update OpenCL C++ headers.

From upstream a807dcf0f8623d40dc5ce9d1eb00ffd0e46150c7.

* CPU-only eval performance optimization.

* CPUPipe : change winograd transformation constants to an equation.

Combined with a series of strength reduction changes, 
improves netbench by about 8%.

* Convert some std::array into individual variables

For some reason this allows gcc to optimize the code better,
improving netbench by 2%.

Pull request leela-zero#2021.

* Convolve in/out performance optimization.

Use hard-coded equations instead of matrix multiplication.

Pull request leela-zero#2023.

* Validation: fix -k option.

Fix Validation -k option by reading its value before the parser is reused.

Pull request leela-zero#2024.

* Add link to Azure free trial instructions.

See pull request leela-zero#2031.

* Cleanup loop code.

Pull request leela-zero#2033.

* Cleanup atomics and dead if.

Pull request leela-zero#2034.

* Const in SGFTree.

Pull request leela-zero#2035.

* Make the README more clear.

Simplify instructions, especially related to building and running
when wanting to contribute.

Based on pull request leela-zero#1983.

* Refactor to allow AutoGTP to use Engine.

* Move Engine to Game.h and refactor autogtp to use it too.
* Fix initialization of job engines.

Pull request leela-zero#2029.

* Fix printf call style.

Generally speaking, providing character pointers directly as the first
argument might cause a format string bug (FSB).

Pull request leela-zero#2063.

* Add O(sqrt(log(n))) scaling to tree search.

Pull request leela-zero#2072.

* Update Khronos OpenCL C++ headers.

Update from upstream f0b7045.

Fixes warnings related to CL_TARGET_OPENCL_VERSION.

* AutoGTP: allow specifying an SGF as initial position.

* Make AutoGTP URL parametric.
* Support for the sgfhash and movescount parameters in get-task.
* Automatic downloading of sgf and training files.
* Fix Management.cpp for older Qt5 versions.
* Added starting match games from specified initial position
* Tidy ValidationJob::init() like ProductionJob::init()
* Use existing QUuid method of generating random file 
  names instead of QTemporaryFile when fetching game data.

Moreover, we do not load training data in LeelaZ since it is not needed to start from
an arbitrary position.

Pull request leela-zero#2052.

* Support separate options for white in match games.

* Add optional separate options for white in match game.
* Fixed loading of saved match order with optionsSecond.

Pull request leela-zero#2078.

* Option to get network output without writing to cache. 

Pull request leela-zero#2093.

* Add permission to link with NVIDIA libs. Update year.

See issue leela-zero#2032.

All contributors to the core engine have given their permission to
add an additional permission to link with NVIDIA's CUDA/cuDNN/TensorRT
libraries. This makes it possible to distribute the engine when built to
use those libraries.

Update the copyright notices to 2019.

* Add link to GoReviewPartner.

Pull request leela-zero#2147.

* Reminder to install OpenCL driver if separate.

Although the OpenCL driver is generally installed as part of the driver
install, mention the requirement explicitly in case it wasn't.

See pull request leela-zero#2138.

* Fixed leelaz_file on Android.

Pull request leela-zero#2135.

* Fix 'catching polymorphic type by value' warning.

Pull request leela-zero#2134.

* Fixed converter script for minigo removing bias.

Fixes leela-zero#2020.

Pull request leela-zero#2133.

* Add zlib to the mac OS X build instructions.

See pull request leela-zero#2122.

* UCTNodePtr rare race condition fix.

Calling get_eval() on a zero-visit node will assert-fail.
The original code could assert-fail on b.get_eval() if 'a' and 'b' both
had zero visits but 'a' suddenly gained an additional visit.

Pull request leela-zero#2110.

* Make sure analysis is printed at least once.

Fixes issue leela-zero#2001.

Pull request leela-zero#2114.

* Don't post if not requested.

Follow up fix for pull request leela-zero#2114.

* AutoGTP: Allow specifying initial GTP commands.

* AutoGTP: Allow specifying initial GTP commands.
  Also add support for white taking the first move in handicapped job games.
* AutoGTP: Refactored core loop for match games to avoid code duplication.
* Fixed white using black's match game settings after loading from an SGF by
  moving SGF loading into Game::gameStart() to before sending GTP commands
  (except handicap commands).
* Changed so that when an SGF file is loaded, AutoGTP determines whether
  handicap is in use from the SGF rather than from any starting GTP commands.

Pull request leela-zero#2096.

* Update Eigen to 3.3.7. 

This includes some optimization improvements for newer GCC/Clang that
may be relevant to a lot of our users.

Pull request leela-zero#2151.

* Fix lz-setoption name playouts.

Fixes issue leela-zero#2167.

I could swear I fixed this before. Maybe I forgot to push?

* AutoGTP: More info in SGF comments.

* AutoGTP: Added full engine options and starting GTP commands 
  to SGF comments that are produced.
* Refactored Game::fixSgf().

Pull request leela-zero#2160.

* Truncate and compress minigo weights.

Truncate to 4 digits of precision and compress converted minigo weights.

Pull request leela-zero#2173.

* Add gomill-explain_last_move.

Add gomill-explain_last_move for additional output in ringmaster
competitions.

Pull request leela-zero#2174.

* Add a feature to exclude moves from the search.

* The "avoid" command is now a param for lz-analyze and for
  lz-genmove_analyze.

New syntax is:

  `lz-analyze ARGS [avoid <color> <coords> <number_of_moves>] [avoid ...]`
  `lz-genmove_analyze ARGS [avoid <color> <coords> <number_of_moves>] [avoid ...]`

The number_of_moves is now always relative to the current move number.

Example:

  `lz-analyze b 200 avoid b q16 1 avoid b q4 1 avoid b d16 1 avoid b d4 1`

* Re-organize the parser for the "analyze" commands.

  * New tag "interval"; old syntax "100" is now short for "interval 100"
  * Tags can be specified in any arbitrary order
  * Moved all of the parsing code for "lz-analyze" and
    "lz-genmove_analyze" into the parse_analyze_tags function
  * parse_analyze_tags uses its return value instead of side effects

* Implement the "allow" tag for lz-analyze.

It works similarly to "avoid". Adding moves to the "allow" list is the
same as adding all other moves (except pass and resign) to the "avoid" list.

* "Avoid" and "allow" moves can be specified as a comma-separated list.

Example:

  `lz-analyze b 100 avoid w q4,q16,d4,d16 2 avoid b pass 50`

Pull request leela-zero#1949.

* Removed --cpu-only option from USE_CPU_ONLY build. 

Generalized output displayed in cases where potentially referring to a CPU 
instead of or as well as a GPU.

Pull request leela-zero#2161.

* Tensor Core support with PTX inline assembly.

* Tensor core support for half precision
* hgemm : Added m16n16k16/m32n8k16/m8n32k16 tuning

Tuner will see which shaped multiplication is fastest.
MDIMA represents the M dimension, NDIMB represents the N dimension.

* tensorcore : Test m16n16k16 types only for checking tensorcore availability

It seems that there are cases where only m16n16k16 is supported.
If other formats are not available they will be auto-disabled on tuning.

Pull request leela-zero#2049.

* Update TODO list.

We support avoid tags now. Clarify batching work needs
changes in the search.

* Remove an unnecessary std::move().

Which inhibits RVO. See e.g. https://stackoverflow.com/a/19272035

* Add contributor (and maintainer) guidelines. 

* Add contributor (and maintainer) guidelines.

Spell out the existing code style, C++ usage, git workflow,
commit message requirements, and give guidelines regarding reviewing,
merging and adding configuration options and GTP extensions.

Pull request leela-zero#2186.

* Add several simple GTP commands.

Added several simple GTP commands useful for building interfaces to LZ.

Added the following GTP commands.

    last_move
    move_history

The output of these commands is in line with that of the corresponding
commands in GNU Go when such commands existed.

Pull request leela-zero#2170.

* Minor style fixups.

Minor fixups for pull request leela-zero#2170.

* Remark about move assignment in style guideline.

Emphasize use of emplace_back and move semantics.

* Add lz-analyze minmoves tag.

Add an lz-analyze tag to suggest the minimum number of moves the
engine should post info about (rather than only those it considers
interesting, i.e. the ones with at least a visit).

This allows some very flexible constructs:

Getting a heatmap:

    lz-setoption name visits value 1
    lz-analyze interval 1 minmoves 361

Forcing a move among the top policy moves only:

    lz-setoption name visits value 1
    lz-analyze interval 1 minmoves 2
    (store those moves, e.g. A1, B1)
    lz-setoption name visits value 0
    lz-genmove_analyze b interval 1 allow b A1 1 allow b B1 1

* Fix style, extra spaces in PV output.

Adding the minmoves tag exposes a small bug in the PV
output formatting. Avoid extra blank spaces.

Small style fixups.

* Rework test regex for MSVC limits.

Seems like the previous test regex is causing MSVC's regex engine to run
out of stack space.

* .gitignore: Add build.

leela-zero's default build directory is `build`.

It is very annoying when using leela as a git submodule that 
the repository updates whenever it builds.

Pull request leela-zero#2199.

* Batched neural net evaluations

Group evaluations and run them in parallel. Roughly 50% speedup on my setup, but there are a couple of points that are debatable.

- Thread / batch sizing heuristics : This PR changes how the default threads / default batch sizes are picked.  See Leela.cpp
- Batch-forming heuristic : See OpenCLScheduler.cpp. The heuristic exists so that we can wait for the rest of the engine to create more NN evaluations, letting us run larger batches. We can't wait indefinitely, since there are cases where we enter 'serial' paths. Since heuristics are heuristics, these might need some tests on a larger variety of systems.

I did make sure that winrate improves when running default vs. default with the command line `./leelaz -w (weight file)` at time parity.

Pull request leela-zero#2188.
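
For readers skimming this entry, the batch-forming heuristic boils down to "wait briefly for stragglers, but never stall". A minimal sketch of that pattern, with invented names, not the actual OpenCLScheduler code:

    // Illustrative batch former: wait up to max_wait for enough requests
    // to fill a batch, then run whatever has accumulated (possibly a
    // partial batch), since some engine paths produce requests serially.
    #include <chrono>
    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <queue>
    #include <vector>

    struct EvalRequest { /* input planes, result promise, ... */ };

    class BatchQueue {
        std::mutex m_mutex;
        std::condition_variable m_cv;
        std::queue<EvalRequest> m_pending;
    public:
        void push(EvalRequest req) {
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                m_pending.push(std::move(req));
            }
            m_cv.notify_one();
        }

        // May return fewer than batch_size requests (even zero) if the
        // timeout expires first; the caller just runs a smaller batch.
        std::vector<EvalRequest> pop_batch(std::size_t batch_size,
                                           std::chrono::milliseconds max_wait) {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait_for(lock, max_wait,
                          [&] { return m_pending.size() >= batch_size; });
            std::vector<EvalRequest> batch;
            while (!m_pending.empty() && batch.size() < batch_size) {
                batch.push_back(std::move(m_pending.front()));
                m_pending.pop();
            }
            return batch;
        }
    };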

* Autogtp: Tune for batchsize 1

Self-play games specify `-t 1` for playing, which implies a batch size of 1, but tuning was done for default settings since the number of threads was not specified.

Pull request leela-zero#2206.

* Update README.md.

Update links to leela-zero instead of gcp.
Update badge and link to the new AppVeyor project
under leela-zero instead of gcp ownership.

* Remove unused lambda capture.

Pull request leela-zero#2231.

* README.md: link to mentioned pull requests.

Pull request leela-zero#2229.

* Minor cleanup involving Network::get_output. 

Pull request leela-zero#2228.

* Set up default batch size and threads.

Fixes issue leela-zero#2214.

Pull request leela-zero#2256.

* Shuffle tuner parameters to find good parameters quicker.

Parameters are searched in a linear fashion currently. By shuffling them,
we will find a good instance more quickly.

Also, shuffling could help reduce possible bias due to grouped, similar
parameters that affect the environment (e.g. cache, branch predictor, ...),
leading to more accurate/fair results.

Additionally, this is preparation for making it possible to exit the
tuner during the search.

Pull request leela-zero#2225.

* Refactor tree_stats_helper to lambda.

Pull request leela-zero#2244.

* Enable batching for self-play.

Pull request leela-zero#2253.

* Allow configuring default komi at compile-time.

Pull request leela-zero#2257.

* Make chunkparser more robust.

Some clients are sending corrupted data; make the
chunk parser resilient against it.

* Fix thread count error message.

Pull request leela-zero#2287.

* Fix small style nits.

* Add support for time controls in loadsgf/printsgf.

Added extra support for "TM" and "OT" and other sgf time control
properties on printsgf and loadsgf GTP commands.

* Added parsing and loading of "TM" and "OT" sgf properties on GTP command
  loadsgf. Only supports "OT" syntax matching output from a printsgf GTP
  command.
* Change SGFTree to have a shared_ptr for a time control.
* Added saving and loading of "BL", "WL", "OB" and "OW" sgf properties on
  GTP commands printsgf and loadsgf.
* Change to make TimeControl::make_from_text_sgf() a time control factory
  and other minor tidying.

Pull request leela-zero#2172.
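
For example, an SGF fragment carrying these properties might look like the following (values are illustrative; TM is main time in seconds, OT describes the overtime system, BL/WL are time left for each player, and OB/OW are byo-yomi moves left):

    (;GM[1]FF[4]SZ[19]KM[7.5]TM[1800]OT[5x30 byo-yomi]
    ;B[pd]BL[1792.5]
    ;W[dp]WL[1788]OW[5])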

* Fix inconsistent default timecontrol.

As noted in pull request leela-zero#2172, the default
constructor set byo yomi stones but no time or
periods.

* Error out if weights are for wrong board size.

We currently will either crash or do strange things if we're
fed a weights file that doesn't match the board size we're compiled
for.

See issue leela-zero#2289.

* Ignore passing moves unless they make sense.

Only pass when winning or low on legal moves.
Disabled in self-play.

Fixes issue leela-zero#2273.
Based on pull request leela-zero#2277.

Pull request leela-zero#2301.

* Always allow passing when low on moves.

As pointed out by @gjm11 in leela-zero#2277, when there are few legal moves we might
want to allow passing even if this loses on the board count. The
alternative might be to self-destruct large groups and carry the game
on endlessly even if the policy wouldn't want to.

No difference in "dumbpass" mode.

* Report root visits in gomill-explain_last_move.

See issue leela-zero#2280.

Pull request leela-zero#2302.

* Choose move based on normal distribution LCB.

* Calculate node variance.
* Use normal distribution LCB to choose the played move.
* Cached student-t.
* Sort lz-analyze output according to LCB.
* Don't choose nodes with very few visits even if LCB is better.

Guard against NN misevaluations when the top move has a lot of visits.
Without this it's possible for a move with a few hundred visits to be
picked over a move with over ten thousand visits.

The problem is that the evaluation distribution isn't really a normal
distribution. Evaluations correlate, and the distribution can change if
the search finds a better alternative deeper in the tree.

Pull request leela-zero#2290.
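
In symbols, the selection rule sketched above picks the move m with the highest lower confidence bound, roughly (a sketch of the bound being described; the exact constants and guards live in the PR):

    \mathrm{LCB}(m) = \hat{\mu}_m - t_{\alpha,\,n_m-1}\,\sqrt{\hat{\sigma}_m^2 / n_m}

where \hat{\mu}_m and \hat{\sigma}_m^2 are the mean and variance of the winrate over the move's n_m visits, and t is the (cached) Student-t quantile, subject to the minimum-visit guard mentioned in the entry above.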

* Mixed precision training support.

* Add mixed precision training support.
* Do not use loss scale if training with fp32
* Fix potential reg_term overflow of large networks.

Pull request leela-zero#2191.

* Update AUTHORS.

* Don't detect precision with Tensor Cores. 

Don't autodetect or default to fp32 when all cards have
Tensor Cores. We will assume fp16 is the fastest.

This avoids problems in tune-only mode which does not
detect the precision to use and would use fp32 on such cards.

Pull request leela-zero#2312.

* Update README.md.

We have a first implementation of batching now.

* Ignore --batchsize in CPU only compiles.

AutoGTP will always send --batchsize, but CPU only
compiles don't support the option. Ignore the option
in those builds.

The same problem exists with --tune-only, but quitting
immediately happens to be sane behavior so we don't need
to fix that.

Pull request leela-zero#2313.

* Don't include OpenCL scheduler in CPU build.

It will recursively include OpenCL.h and that
is bad.

Pull request leela-zero#2314.

* Bump version numbers.

* Fix: batch sizes were not set according to the command line.
Vandertic pushed a commit to CuriosAI/sai that referenced this issue Jun 10, 2019
Add permission to link with NVIDIA libs. Update year.

See issue leela-zero#2032.
ihavnoid pushed a commit that referenced this issue Jul 27, 2019
Vandertic pushed a commit to CuriosAI/sai that referenced this issue Dec 14, 2019
* Correctly initialize board when reading SGF.

Even though SGF defaults to size 19 boards, we should not try
to set up a board that size if LZ has not been compiled to support
it.

Pull request leela-zero#1964.

* Increase memory limit for 32-bit builds.

Without this, it's empirically not possible to load the current 256x40
networks on a 32-bit machine.

* Never select a CPU during OpenCL autodetection.

If we are trying to auto-select the best device for OpenCL, never select
a CPU. This will cause the engine to refuse to run when people are
trying to run the OpenCL version without a GPU or without GPU drivers,
instead of selecting any slow and suboptimal (and empirically extremely
broken) OpenCL-on-CPU drivers.

Falling back to CPU-only would be another reasonable alternative, but
doesn't provide an alert in case the GPU drivers are missing.

Improves behavior of issue leela-zero#1994.

* Fix tuner for heterogeneous GPUs and auto precision.

Fix full tuner for heterogeneous GPUs and auto precision detection.

--full-tuner implies --tune-only
--full-tuner requires an explicit precision

Fixes leela-zero#1973.

Pull request leela-zero#2004.

* Optimized out and out_in kernels.

Very minor speedup of about 2% with batch size of 1.
With batch size of 5 there is a speedup of about 5% with half precision
and 12% with single precision.

Out transformation memory accesses are almost completely coalesced
with the new kernel.

Pull request leela-zero#2014.

* Update OpenCL C++ headers.

From upstream a807dcf0f8623d40dc5ce9d1eb00ffd0e46150c7.

* CPU-only eval performance optimization.

* CPUPipe : change winograd transformation constants to an equation.

Combined with a series of strength reduction changes, 
improves netbench by about 8%.

* Convert some std::array into individual variables

For some reason this allows gcc to optimize the code better,
improving netbench by 2%.

Pull request leela-zero#2021.

* Convolve in/out performance optimization.

Use hard-coded equations instead of matrix multiplication.

Pull request leela-zero#2023.

* Validation: fix -k option.

Fix Validation -k option by reading its value before the parser is reused.

Pull request leela-zero#2024.

* Add link to Azure free trial instructions.

See pull request leela-zero#2031.

* Cleanup atomics and dead if.

Pull request leela-zero#2034.

* Const in SGFTree.

Pull request leela-zero#2035.

* Make the README more clear.

Simplify instructions, especially related to building and running
when wanting to contribute.

Based on pull request leela-zero#1983.

* Refactor to allow AutoGTP to use Engine.

* Move Engine to Game.h and refactor autogtp to use it too.
* Fix initialization of job engines.

Pull request leela-zero#2029.

* Fix printf call style.

Generally speaking, providing character pointers as the first argument 
directly might cause FSB (Format String Bug).

Pull request leela-zero#2063.

* Update Khronos OpenCL C++ headers.

Update from upstream f0b7045.

Fixes warnings related to CL_TARGET_OPENCL_VERSION.

* Cleanup loop code.

Pull request leela-zero#2033.

* AutoGTP: allow specifying an SGF as initial position.

* Make AutoGTP URL parametric.
* Support for the sgfhash and movescount parameters in get-task.
* Automatic downloading of sgf and training files.
* Fix Management.cpp for older Qt5 versions.
* Added starting match games from specified initial position
* Tidy ValidationJob::init() like ProductionJob::init()
* Use existing QUuid method of generating random file 
  names instead of QTemporaryFile when fetching game data.

Moreover, we do not load training data in LeelaZ since it is not needed to start from
an arbitrary position.

Pull request leela-zero#2052.

* Support separate options for white in match games.

* Add optional separate options for white in match game.
* Fixed loading of saved match order with optionsSecond.

Pull request leela-zero#2078.

* Add O(sqrt(log(n))) scaling to tree search.

Pull request leela-zero#2072.

* Option to get network output without writing to cache. 

Pull request leela-zero#2093.

* Add permission to link with NVIDIA libs. Update year.

See issue leela-zero#2032.

All contributors to the core engine have given their permission to
add an additional permission to link with NVIDIA's CUDA/cuDNN/TensorRT
libraries. This makes it possible to distribute the engine when built to
use those libraries.

Update the copyright notices to 2019.

* Add link to GoReviewPartner.

Pull request leela-zero#2147.

* Reminder to install OpenCL driver if seperate.

Although the OpenCL driver is generally installed as part of the driver
install, mention the requirement explicitly in case it wasn't.

See pull request leela-zero#2138.

* Fixed leelaz_file on Android.

Pull request leela-zero#2135.

* Fix 'catching polymorphic type by value' warning.

Pull request leela-zero#2134.

* Fixed converter script for minigo removing bias.

Fixes leela-zero#2020.

Pull request leela-zero#2133.

* Add zlib to the mac OS X build instructions.

See pull request leela-zero#2122.

* UCTNodePtr rare race condition fix.

Calling get_eval() on zero-visit node will assert-fail.
The original code could assert-fail on b.get_eval() if 'a' and 'b' both
had zero visits but suddenly 'a' gained an additional visit.

Pull request leela-zero#2110.

* Make sure analysis is printed at least once.

Fixes issue leela-zero#2001.

Pull request leela-zero#2114.

* Don't post if not requested.

Follow up fix for pull request leela-zero#2114.

* AutoGTP: Allow specifying initial GTP commands.

* AutoGTP: Allow specifying initial GTP commands.
  Also add support for white taking the first move in handicapped job games.
* AutoGTP: Refactored core loop for match games to avoid code duplication.
* Fixed white using black's match game settings after loading from an SGF by
  moving SGF loading into Game::gameStart() to before sending GTP commands
  (except handicap commands).
* Changed so that when an SGF file is loaded, AutoGTP determines whether
  handicap is in use from the SGF rather than from any starting GTP commands.

Pull request leela-zero#2096.

* Update Eigen to 3.3.7. 

This includes some optimization improvements for newer GCC/Clang that
may be relevant to a lot of our users.

Pull request leela-zero#2151.

* Fix lz-setoption name playouts.

Fixes issue leela-zero#2167.

I could swear I fixed this before. Maybe I forgot to push?

* AutoGTP: More info in SGF comments.

* AutoGTP: Added full engine options and starting GTP commands 
  to SGF comments that are produced.
* Refactored Game::fixSgf().

Pull request leela-zero#2160.

* Truncate and compress minigo weights.

Truncate to 4 precision and compress converted minigo weights.

Pull request leela-zero#2173.

* Add gomill-explain_last_move.

Add gomill-explain_last_move for additional output in ringmaster
competitions.

Pull request leela-zero#2174.

* Add a feature to exclude moves from the search.

* The "avoid" command is now a param for lz-analyze and for
  lz-genmove_analyze.

New syntax is:

  `lz-analyze ARGS [avoid <color> <coords> <number_of_moves>] [avoid ...]`
  `lz-genmove_analyze ARGS [avoid <color> <coords> <number_of_moves>] [avoid ...]`

The number_of_moves is now always relative to the current move number.

Example:

  `lz-analyze b 200 avoid b q16 1 avoid b q4 1 avoid b d16 1 avoid b d4 1`

* Re-organize the parser for the "analyze" commands.

  * New tag "interval"; old syntax "100" is now short for "interval 100"
  * Tags can be specified in any arbitrary order
  * Moved all of the parsing code for "lz-anaylze" and
    "lz-genmove_analyze" into the parse_analyze_tags function
  * parse_analyze_tags uses its return value instead of side effects

* Implement the "allow" tag for lz-analyze.

It works similar to "avoid".  Adding moves to the "allow" list is the
same as adding all other moves (except pass and resign) to the "avoid" list.

* "Avoid" and "allow" moves can be specified as a comma-separated list.

Example:

  `lz-analyze b 100 avoid w q4,q16,d4,d16 2 avoid b pass 50`

Pull request leela-zero#1949.

* Removed --cpu-only option from USE_CPU_ONLY build. 

Generalized output displayed in cases where potentially referring to a CPU 
instead of or as well as a GPU.

Pull request leela-zero#2161.

* Tensor Core support with PTX inline assembly.

* Tensor core support for half precision
* hgemm : Added m16n16k16/m32n8k16/m8n32k16 tuning

Tuner will see which shaped multiplication is fastest.
MDIMA represents the M dimension, NDIMB represents the N dimension.

* tensorcore : Test m16n16k16 typs only for checking tensorcore availability

It seems that there are cases where only m16n16k16 is supported.
If other formats are not available they will be auto-disabled on tuning.

Pull request leela-zero#2049.

* Update TODO list.

We support avoid tags now. Clarify batching work needs
changes in the search.

* Remove an unnecessary std::move().

Which inhibits RVO. See e.g. https://stackoverflow.com/a/19272035

* Add contributor (and maintainer) guidelines. 

* Add contributor (and maintainer) guidelines.

Spell out the existing code style, C++ usage, git workflow,
commit message requirements, and give guidelines regarding reviewing,
merging and adding configuration options and GTP extensions.

Pull request leela-zero#2186.

* Add several simple GTP commands.

Added several simple GTP commands useful for building interfaces to LZ.

Added the following GTP commands.

    last_move
    move_history

The output of these commands is in line with that of the corresponding
commands in GNU Go when such commands existed.

Pull request leela-zero#2170.

* Minor style fixups.

Minor fixups for pull request leela-zero#2170.

* Remark about move assignment in style guideline.

Emphasize use of emplace_back and move semantics.

* Add lz-analyze minmoves tag.

Add an lz-analyze tag to suggest the minimum amount of moves the
engine should post info about (rather than only those it considers
interesting, i.e. the ones with at least a visit).

This allows some very flexible constructs:

Getting a heatmap:

    lz-setoption name visits value 1
    lz-analyze interval 1 minmoves 361

Forcing a move among the top policy moves only:

    lz-setoption name visits value 1
    lz-analyze interval 1 minmoves 2
    (store those moves, e.g. A1, B1)
    lz-setoption name visits value 0
    lz-genmove_analyze b interval 1 allow b A1 1 allow b B1 1

* Fix style, extra spaces in PV output.

Adding the minmoves tag exposes a small bug in the PV
output formatting. Avoid extra blank spaces.

Small style fixups.

* Rework test regex for MSVC limits.

The previous test regex was causing MSVC's regex engine to run
out of stack space.

* .gitignore: Add build.

leela-zero's default build directory is `build`.

When using leela-zero as a git submodule, it is very annoying that the
repository shows up as modified whenever it builds.
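
The resulting ignore entry is simply the default build directory name,
along the lines of:

    build/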

Pull request leela-zero#2199.

* Batched neural net evaluations.

Group evaluations and run them in parallel. Roughly 50% speedup on my
setup, but there are a couple of points that are debatable:

- Thread / batch sizing heuristics : This PR changes how the default
  threads / default batch sizes are picked. See Leela.cpp.
- Batch-forming heuristic : See OpenCLScheduler.cpp. The heuristic
  exists so that we can wait for the rest of the engine to create more
  NN evaluations, letting us run larger batches. We can't wait
  indefinitely since there are cases where we enter 'serial' paths.
  Since heuristics are heuristics, these might need some tests on a
  larger variety of systems. A schematic sketch follows below.

Made sure that winrate improves when running default vs. default with
the command line `./leelaz -w (weight file)` on time parity.
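
A schematic sketch of such a batch-forming heuristic (not the actual
OpenCLScheduler code; the `Request` type, batch size and wait time are
made up for illustration): collect requests, wait briefly for a full
batch, but time out so serial paths keep making progress.

    #include <chrono>
    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <vector>

    struct Request { /* position to evaluate, result callback, ... */ };

    class BatchCollector {
        std::mutex m_mutex;
        std::condition_variable m_cv;
        std::deque<Request> m_pending;
        static constexpr size_t BATCH_SIZE = 8;  // illustrative only
    public:
        void push(Request r) {
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                m_pending.push_back(std::move(r));
            }
            m_cv.notify_one();
        }

        // Returns up to BATCH_SIZE requests. Waits a short time for a
        // full batch to form, but never blocks indefinitely.
        std::vector<Request> next_batch() {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait_for(lock, std::chrono::milliseconds(1),
                          [this] { return m_pending.size() >= BATCH_SIZE; });
            std::vector<Request> batch;
            while (!m_pending.empty() && batch.size() < BATCH_SIZE) {
                batch.push_back(std::move(m_pending.front()));
                m_pending.pop_front();
            }
            return batch;
        }
    };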

Pull request leela-zero#2188.

* Autogtp: Tune for batchsize 1

Self-play games specify `-t 1` for playing, which implies a batch size
of 1, but tuning was done with default settings since the number of
threads was not specified.

Pull request leela-zero#2206.

* Tweak conversion script for ELF v2.

Small tweak to conversion script for ELF v2 weights.

Pull request leela-zero#2213.

* Update README.md

Update links to leela-zero instead of gcp.

* Update README.md

Appveyor link still needs to be 'gcp'.

* Update README.md

Update badge and link to the new AppVeyor project under leela-zero instead of gcp ownership.

* Update README.md.

Update links to leela-zero instead of gcp.
Update badge and link to the new AppVeyor project
under leela-zero instead of gcp ownership.

* Remove unused lambda capture.

Pull request leela-zero#2231.

* README.md: link to mentioned pull requests.

Pull request leela-zero#2229.

* Minor cleanup involving Network::get_output. 

Pull request leela-zero#2228.

* Set up default batch size and threads.

Fixes issue leela-zero#2214.

Pull request leela-zero#2256.

* Shuffle tuner parameters to find good parameters quicker.

Parameters are searched in a linear fashion currently. By shuffling them,
we will find a good instance more quickly.

Also, shuffling could help reduce possible bias due to grouped, similar
parameters that affect the environment (e.g. cache, branch predictor, ...),
leading to more accurate/fair results.

Additionally, this is preparation for making it possible to exit the
tuner early during the search.
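
The shuffling itself is the standard idiom; a sketch (the real tuner's
configuration type is more involved than a plain string):

    #include <algorithm>
    #include <random>
    #include <string>
    #include <vector>

    // Visit candidate tuner configurations in random order instead of
    // linearly, so a good one is found earlier on average.
    void shuffle_configs(std::vector<std::string>& configs) {
        std::mt19937 rng{std::random_device{}()};
        std::shuffle(configs.begin(), configs.end(), rng);
    }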

Pull request leela-zero#2225.

* Refactor tree_stats_helper to lambda.

Pull request leela-zero#2244.

* Enable batching for self-play.

Pull request leela-zero#2253.

* Allow configuring default komi at compile-time.

Pull request leela-zero#2257.

* Update README.md

Update links to leela-zero instead of gcp.

* Make chunkparser more robust.

Some clients are sending corrupted data; make the
chunk parser resilient against it.

* Fix thread count error message.

Pull request leela-zero#2287.

* Fix small style nits.

* Add support for time controls in loadsgf/printsgf.

Added support for "TM", "OT" and other SGF time control properties
in the printsgf and loadsgf GTP commands (see the example after the
list below).

* Added parsing and loading of "TM" and "OT" sgf properties on GTP command
  loadsgf. Only supports "OT" syntax matching output from a printsgf GTP
  command.
* Change SGFTree to have a shared_ptr for a time control.
* Added saving and loading of "BL", "WL", "OB" and "OW" sgf properties on
  GTP commands printsgf and loadsgf.
* Change to make TimeControl::make_from_text_sgf() a time control factory
  and other minor tidying.
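
A hypothetical SGF fragment using these properties (all values made up
for illustration; the exact OT text emitted by printsgf may differ):

    (;GM[1]FF[4]SZ[19]TM[300]OT[5x30 byo-yomi]
    ;B[pd]BL[295.5]OB[5]
    ;W[dp]WL[292.1]OW[5])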

Pull request leela-zero#2172.

* Fix inconsistent default timecontrol.

As noted in pull request leela-zero#2172, the default
constructor set byo-yomi stones but no main time or
periods.

* Error out if weights are for wrong board size.

We currently will either crash or do strange things if we're
fed a weights file that doesn't match the board size we're compiled
for.

See issue leela-zero#2289.

* Ignore passing moves unless they make sense.

Only pass when winning or low on legal moves.
Disabled in self-play.

Fixes issue leela-zero#2273.
Based on pull request leela-zero#2277.

Pull request leela-zero#2301.

* Always allow passing when low on moves.

As pointed out by @gjm11 in leela-zero#2277, when there are few legal
moves we might want to allow passing even if this loses on the board
count. The alternative might be to self-destruct large groups and carry
the game on endlessly even if the policy wouldn't want to.

No difference in "dumbpass" mode.

* Report root visits in gomill-explain_last_move.

See issue leela-zero#2280.

Pull request leela-zero#2302.

* Choose move based on normal distribution LCB.

* Calculate node variance.
* Use normal distribution LCB to choose the played move.
* Cached student-t.
* Sort lz-analyze output according to LCB.
* Don't choose nodes with very few visits even if LCB is better.

Guard against NN misevaluations when the top move has a lot of visits.
Without this it's possible for a move with a few hundred visits to be
picked over a move with over ten thousand visits.

The problem is that the evaluation distribution isn't really a normal
distribution. Evaluations correlate, and the distribution can change
if the search finds a better alternative deeper in the tree.
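
A simplified sketch of the idea (function name and accumulator layout
are made up; the real code also caches the Student-t critical values):
the bound is the mean winrate minus a critical value times the
standard error of the mean.

    #include <algorithm>
    #include <cmath>

    // Lower confidence bound on a node's winrate from accumulated
    // visit statistics; 'z' is the cached Student-t critical value.
    float node_lcb(double eval_sum, double eval_sq_sum, int visits, double z) {
        if (visits < 2) {
            return -1e6f;  // too few visits: never prefer this node
        }
        const double mean = eval_sum / visits;
        const double variance = std::max(
            0.0, (eval_sq_sum - visits * mean * mean) / (visits - 1));
        const double std_err = std::sqrt(variance / visits);
        return static_cast<float>(mean - z * std_err);
    }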

Pull request leela-zero#2290.

* Mixed precision training support.

* Add mixed precision training support.
* Do not use loss scaling when training with fp32.
* Fix potential reg_term overflow for large networks.

Pull request leela-zero#2191.

* Update AUTHORS.

* Don't detect precision with Tensor Cores. 

Don't autodetect or default to fp32 when all cards have
Tensor Cores. We will assume fp16 is the fastest.

This avoids problems in tune-only mode, which does not detect the
precision to use and would use fp32 on such cards.

Pull request leela-zero#2312.

* Update README.md.

We have a first implementation of batching now.

* Ignore --batchsize in CPU only compiles.

AutoGTP will always send --batchsize, but CPU-only
compiles don't support the option. Ignore the option
in those builds.

The same problem exists with --tune-only, but quitting
immediately happens to be sane behavior so we don't need
to fix that.

Pull request leela-zero#2313.

* Don't include OpenCL scheduler in CPU build.

It would recursively include OpenCL.h, which must not be pulled into
a CPU-only build.

Pull request leela-zero#2314.

* Bump version numbers.

* Address GitHub security alert.

* Match upstream.