
Script for converting minigo weights #1538

Merged: 1 commit into leela-zero:next on Jun 11, 2018

Conversation

@Ttl (Member) commented Jun 9, 2018

Converts minigo weights to work with LZ. Some minigo weights are available here: https://console.cloud.google.com/storage/browser/minigo-pub

Some tests:

LZ weights 147 vs. LZ with minigo weights 303. 1000 visits; the minigo-weights engine takes 3 handicap stones:

lz_minigo_303 v lz_147 (15/100 games)
board size: 19   handicap: 3 (free)   komi: 7.5
                wins                  avg cpu
lz_minigo_303      4 26.67%   (black)  945.66
lz_147            11 73.33%   (white)  615.45

Minigo vs LZ using the same 303-olympus weights with 100 playouts. LZ arguments: -g -d -t 1 --noponder -p 100 --timemanage off -r 5. Minigo used the default settings.

lz_303 v minigo_303 (100/100 games)
board size: 19   komi: 7.5
             wins              black         white       avg cpu
lz_303         86 86.00%       42 84.00%     44 88.00%    107.65
minigo_303     14 14.00%       6  12.00%     8  16.00%   4449.02
                               48 48.00%     52 52.00%

@amj commented Jun 9, 2018

was this vs minigo's python code or the c++ code?

It obviously seems a little off that the same weights should be so bad in a different engine and it makes me wonder if we have a bug. Can you show me the command used to invoke minigo?

@Ttl (Member, Author) commented Jun 9, 2018

Python code. Command was BOARD_SIZE=19 python3 main.py gtp -l v7-19x19_models_000303-olympus -v 3.

At least some of the strength difference is caused by batching and lack of FPU reduction.

Also I noticed that minigo has a lot of batch norm gamma values that are zero, causing the whole output plane to be zero. For example, 65 planes of the input convolution have zero gamma, and the next two convolutions have 12 and 25 zeroed planes. This seems a little extreme to me.
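For readers wanting to reproduce this check: counting zeroed batch-norm gammas is a few lines of numpy once the gamma vectors are read out of the checkpoint (a minimal sketch with a toy array; the checkpoint-reading step is omitted and the helper name is mine):

```python
import numpy as np

def count_zero_planes(gamma):
    """Count batch-norm gamma entries that are exactly zero.

    A zero gamma multiplies its whole output plane by zero, so the
    corresponding convolution filter is effectively dead weight.
    """
    return int(np.sum(gamma == 0.0))

# Toy example standing in for one layer's gamma vector:
gamma = np.array([0.0, 1.2, 0.0, 0.3, 0.0, 0.0, 0.9, 1.1])
print(count_zero_planes(gamma))  # → 4
```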

@brilee commented Jun 9, 2018

To summarize what I think are possible differences in MCTS fiddly bits, there's:

  • parallelism in tree search: how many virtual losses are allowed before waiting for the GPU to return batched board evaluations?
  • cPUCT params
  • Q initialization (FPU stuff)
  • anything else?
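For context, these knobs all feed into the same PUCT child-selection rule; a minimal Python sketch (not either engine's actual code; the child fields and the fpu_value parameter are illustrative), using LZ's c_puct of 0.8 as the default:

```python
import math

def puct_select(children, parent_visits, c_puct=0.8, fpu_value=0.0):
    """Pick the child maximizing Q + U (AlphaZero-style PUCT).

    children: list of dicts with 'prior', 'visits', 'value_sum'.
    fpu_value: Q assigned to unvisited children (the "FPU stuff");
    engines differ on whether this is reduced from the parent's value.
    """
    def score(ch):
        q = ch["value_sum"] / ch["visits"] if ch["visits"] else fpu_value
        u = c_puct * ch["prior"] * math.sqrt(parent_visits) / (1 + ch["visits"])
        return q + u
    return max(children, key=score)

children = [
    {"prior": 0.6, "visits": 10, "value_sum": 5.5},  # visited, Q = 0.55
    {"prior": 0.3, "visits": 0, "value_sum": 0.0},   # unvisited
]
best = puct_select(children, parent_visits=10)
```

With an FPU value of 0 the unvisited child wins the selection here; raising or lowering that initial Q is exactly the FPU difference between the engines.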

@brilee commented Jun 9, 2018

@amj was saying that LZ might be working with parallel reads = 1 whereas we're using parallel reads=8 by default. If that's correct, would you be open to trying the matchup again with --parallel_readouts=1?

@Ttl (Member, Author) commented Jun 9, 2018

I just wanted to verify that the weights were converted correctly. The match was run on a laptop and even with just 100 playouts it took over a day to play 100 games. I have other uses for it right now and don't want to repeat the match.

@amj commented Jun 9, 2018

Ok, I can definitely understand that -- if you can compile our cc version it should be much faster :)

Edit-to-add: the disparity in perf numbers makes me wonder whether minigo had the GPU enabled?

@brilee commented Jun 9, 2018

Okay - thank you very much for reporting the results here.

@Ttl (Member, Author) commented Jun 10, 2018

The laptop this was played on had only an Intel integrated GPU. As far as I know TensorFlow can't use it, but LZ's OpenCL code supports it. That's why there is such a large difference in CPU use.

@sethtroisi (Member) commented Jun 10, 2018

I have everything running on my computer (which has a GPU)

Control File

competition_type = 'playoff'
description = """ Testing MiniGo models in leelaz src code """

record_games = True
stderr_to_log = True

players = {
    'lz-minigo-303' : Player("./leelaz -g -d -t 1 --noponder -p 100 --timemanage off -r 5 -w 000303-olympus_converted.txt"),
    'minigo-303' : Player(
        "python3 main.py gtp -l models/000303-olympus -v 3 --num_readouts 100",
        cwd="~/Projects/minigo",
        environ={'BOARD_SIZE' : '19'},
    ),
}

board_size = 19
komi = 7.5

matchups = [
    Matchup('lz-minigo-303', 'minigo-303', alternating=True, scorer='players', number_of_games=20),
]

Results


lz-minigo-303 v minigo-303 (60/60 games)
board size: 19   komi: 7.5
                wins              black         white       avg cpu
lz-minigo-303     46 76.67%       22 73.33%     24 80.00%      8.58
minigo-303        14 23.33%       6  20.00%     8  26.67%     26.36
                                  28 46.67%     32 53.33%

I tested with --parallel_readouts=2, which is slower than the default batch size of 8 but still faster than --parallel_readouts=1.

lz-minigo-303 v minigo-303 (48/60 games)
board size: 19   komi: 7.5
                wins              black         white       avg cpu
lz-minigo-303     41 85.42%       22 91.67%     19 79.17%      9.66
minigo-303         7 14.58%       5  20.83%     2   8.33%     48.23
                                  27 56.25%     21 43.75%

Finally, I tested with --parallel_readouts=1 & --c_puct=0.8 (the LZ value), which had an even score:

lz-minigo-303 v minigo-303 (100/100 games)
board size: 19   komi: 7.5
                wins              black         white       avg cpu
lz-minigo-303     53 53.00%       25 50.00%     28 56.00%      9.91
minigo-303        47 47.00%       22 44.00%     25 50.00%     75.54
                                  47 47.00%     53 53.00%

I'm running two tests now: one with just --parallel_readouts=1 and a second with --parallel_readouts=2 --c_puct=0.8.

@Ttl (Member, Author) commented Jun 11, 2018

Shouldn't the equivalent puct be twice as large in minigo, because minigo uses winrate on the scale [-1, 1] while LZ uses [0, 1]?
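The arithmetic behind this question, as a sketch: a winrate difference on [0, 1] doubles when mapped onto minigo's [-1, 1] value scale, so for the exploration term to keep the same weight relative to Q, c_puct would also need to double:

```python
# LZ scores winrate w on [0, 1]; minigo stores v = 2*w - 1 on [-1, 1].
w1, w2 = 0.60, 0.45
v1, v2 = 2 * w1 - 1, 2 * w2 - 1

# The same positional difference is twice as large on the [-1, 1] scale,
# so the U term (scaled by c_puct) needs twice the constant to balance it.
assert abs((v1 - v2) - 2 * (w1 - w2)) < 1e-12
print(round(w1 - w2, 2), round(v1 - v2, 2))  # → 0.15 0.3
```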

@alreadydone (Contributor) commented

If you compile leelaz with USE_TUNER defined
https://github.com/gcp/leela-zero/blob/d362ee86cf7ead85831c8a947c7fed580f63c8eb/src/config.h#L81
you should be able to specify options --puct (0.69 for Minigo) and --fpu_reduction (0 for Minigo). Also LZ uses net_eval for FPU (before reduction). To use get_pure_eval (dynamic parent Q) instead, use my branch patch-4, but revert the changes to fpu_reduction (L275) and puct (L294) in the first commit.

@gcp merged commit 54e130e into leela-zero:next on Jun 11, 2018
@amj commented Jun 11, 2018

@Ttl yes, I would expect the equivalent puct to need to be twice as large as well -- very strange that it worked that way.

@sethtroisi (Member) commented Jun 12, 2018

With @alreadydone's help I enabled USE_TUNER and set --puct 0.69 --fpu_reduction 0. I see very similar move selection now.

Disabling rotation, I get nearly identical results (I actually have to apply LZ rotation 1, then flip over the X direction).
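The rotation-plus-flip observation can be sketched with numpy on a 19x19 plane (the function name is mine, and whether this matches each engine's internal index order is an assumption to verify against real network outputs):

```python
import numpy as np

def remap_plane(plane):
    """Apply one 90-degree rotation, then a left-right flip.

    The composition of a rotation and a reflection is itself a
    reflection, so applying the remap twice returns the original plane.
    """
    return np.fliplr(np.rot90(plane))

plane = np.arange(19 * 19).reshape(19, 19)
out = remap_plane(plane)
assert np.array_equal(remap_plane(out), plane)  # involution
```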

ChinChangYang added a commit to ChinChangYang/leela-zero that referenced this pull request on Aug 25, 2018:
* Add multi GPU training support.

Pull request leela-zero#1386.

* Extend GTP to support real time search info.

* Extend GTP to add support for displaying winrates and variations
  from LZ while LZ is thinking.
* Use UCI format for lz-analyze and lz-genmove-analyze.
* Don't sort gtp lz-analyze output because it is not thread-safe.

Pull request leela-zero#1388.

* Remove virtual loss from eval for live stats.

For discussion see pull request leela-zero#1412.

* Make analysis output use one move per line.

More in line with UCI, cleaner, easier to parse, smaller code.

* Remove versioned clang from Makefile.

Don't hardcode the clang version in the Makefile.

* Fix varargs usage.

Regression from leela-zero#1388. Fixes issue leela-zero#1424.

* AutoGTP: send leelaz version to server.

Send leelaz version embedded in the URL used to ask for a new job.

Pull request leela-zero#1430.

* Multi GPU: fix split and variable placement.

* Fix split in net_to_model.
* Add soft placement of variables.
* Fixes Windows issues.

Pull request leela-zero#1443.

* Mutex optimization.

* Updated Mutex implementation to use TTS instead of TS.
* Explicitly relax memory order (no behavior change, it's the default) 
  and attempt TS before TTS loop. 
  (improves performance in low contention locks)

Pull request leela-zero#1432.

* Update leela-zero.vcxproj for VS2015.

Pull request leela-zero#1439.

* Add order to analysis data.

See discussion in issue leela-zero#1425.

Pull request leela-zero#1478.

* Fix misleading comments & naming.

The Alpha (Go) Zero outputs use TanH nonlinearities, not sigmoids. The
code comments and variable naming refer to an earlier version that used
sigmoids and that is confusing people.

See issue leela-zero#1484.

* Add Lizzie and LeelaSabaki to README.

Pull request leela-zero#1513.

* Make Debian package with CMake.

* Create debian package by cpack

We can create a Debian leelaz package with "make package" via CPack.

* Find leelaz if ./leelaz does not exist

If leelaz is installed at /usr/bin, then autogtp should find it as
leelaz instead of ./leelaz.

* Generate package dependency list

Use dpkg-shlibdeps to generate better package dependency list

* Use git tags as version strings

Pull request leela-zero#1445.

* Look for symmetry on NNCache lookup.

* Look for symmetrical position in cache.
* Disable NNCache symmetry in self-play.

To increase randomness from rotational asymmetry.

* Only check symmetry in opening. Refactor TimeControl.

Only check for symmetries in the NNCache when we are in the 
opening (fast moving zone). Refactor TimeControl to take the 
boardsize out.

* Change bench to asymmetric position.

Avoids rotation symmetry speedups, they are not typical.

* Rename rotation to symmetry, limit to early opening.

Be consistent and don't call symmetries rotations. Limit the symmetry
lookups to the first half of the opening (the first 30 moves on
19 x 19).

Based on pull request leela-zero#1275, but without keeping the rotation array in
every board instance.

Pull request leela-zero#1421.

* Symmetry calculation cleanup.

Pull request leela-zero#1522.

* Non-pruning (simple) time management.

See issue leela-zero#1416.

Pull request leela-zero#1497.

* Clean up some constants.

* Remove unused 'BIG' constant.
* Capture "N/A" vertex value in constant.

Pull request leela-zero#1528.

* Duplicate line removal.

Pull request leela-zero#1529.

* Script for converting minigo weights.

Pull request leela-zero#1538.

* Update README.md.

Added q+Enter instructions.

Pull request leela-zero#1542.

* Fix Validation checking on Windows.

Fix Validation checking if binary exists on Windows.

Pull request leela-zero#1544.

* Constant for the unchanged symmetry index.

Pull request leela-zero#1548.

* Update README.md.

Update the TODO list.

* Removed unused class KeyPress. 

Pull request leela-zero#1560.

* Allow 3 AutoGTP quitting conditions.

Pull request leela-zero#1580.

* More draw handling.

Pull request leela-zero#1577.

* Suppress upstream warnings in Makefile.

Pull request leela-zero#1605.

* Fix TF update operations.

The real update operation should be the computation of the gradient 
rather than the assignment of it.

Pull request leela-zero#1614.

Fixes issue leela-zero#1502.

* Code restructuring: less globals.

* Remove thread_local variables for OpenCL subsystem.
  (this is to allow many different OpenCL implementations
   to exist concurrently)
* OpenCLScheduler: task queue cleanup.
* Change static Network methods to instance methods and
  replace it with global Network instance.
* All weights moved from Network.cpp static variables to class Network.
* NNCache is now a member variable of Network, not a global.
* Network filename now comes from external call, not a global variable.
* Removed global g_network object,
  instead it is member of UCTSearch class.
* UCTNode is now a static member variable of GTP.
  (instead of a static of a function)
* Rename ThreadData to OpenCLContext.
  (it's no longer a thread-specific structure).

Pull request leela-zero#1558.

* Removed unused types. 

Pull request leela-zero#1621.

* Resurrect GPU autodetection.

Fixes issue leela-zero#1632.

Pull request leela-zero#1633.

* Restrict the use of "score".

Using "score" as a nonspecific term (and not when it, for example,
refers to the count at the end of game) makes it unnecessarily hard
to understand the code and see how it matches with the literature.

Pull request leela-zero#1635.

* Code restructuring: Create ForwardPipe interface.

Code restructuring: Create abstract class ForwardPipe,
which represents a class that has a forward() call.

* Moved network initialization code to OpenCLScheduler.
* Created abstract class ForwardPipe which will be the base interface
  of all forward() calls.
* Moved CPU-based forward() code to class CPUPipe.
* Added --cpu-only option.

This command line option will run a CPU-only implementation on a
OpenCL build. Can be used for testing and running fallback modes
rather than switching binaries.

Pull request leela-zero#1620.

* Coding style consistency cleanups.

* Remove use of "new".

Prefer make_unique instead.

* Give ForwardPipe a virtual destructor.

Silence clang warning.

Pull request leela-zero#1644.

* Replace if-else chain with switch statement.

Pull request leela-zero#1638.

* Use Winograd F(4x4, 3x3).

* Winograd F(4x4, 3x3) for CPU
* Winograd F(4x4, 3x3) for OpenCL 
* OpenCL batching support.

Pull request leela-zero#1643.

* Increase error budget in tuner.

The 256 channel network exceeds 1% error in the tuner,
but the network output seems accurate enough during play.

Fixes leela-zero#1645.

Pull request leela-zero#1647.

* Get rid of more "network" globals and pointers. 

Keep a single "network" global in GTP, owned by a unique_ptr and move
things around when needed.

Pull request leela-zero#1650.

* Runtime selection of fp16/fp32.

* OpenCL half precision is now command-line option, 
  support compiled in by default.
  This converts the OpenCL code into a gigantic template library.
* Update Network self-check.
 - Final output is used for self-check.
 - Criterion is 20% error, while ignoring values smaller than 1/361.
 - Throws an exception when three of the last ten checks fail.

Pull request leela-zero#1649.

* Minor code cleanups.

Slight style edits of code and comments.

* Clean up SGFTree style.

Modernize some parts of SGFTree's style.

* Remove separate USE_HALF build from CI.

This is integrated into the main build now.

Pull request leela-zero#1655.

* Don't assume alternating colors in SGF.

Fix a bug that an SGF file/string cannot contain
2 consecutive moves of the same color.

Fixes issue leela-zero#1469.

Pull request leela-zero#1654.

* Remove separate half precision kernel.

Use the preprocessor defines to make a single kernel support 
both single precision and half precision storage.

Pull request leela-zero#1661.

* Compress duplicate evaluation code. 

Pull request leela-zero#1660.

* Consistent header guard naming. 

Pull request leela-zero#1664.

* Replace macros with proper constants.

Pull request leela-zero#1671.

* Implement NN eval fp16/fp32 autodetect.

Implemented NN eval fp16/fp32 autodetect.
Runs both precisions for 1 second; if fp16 is faster than
fp32 by more than 5%, fp16 is used.
Removes --use-half, replacing it with a
--precision [auto|single|half] option (default auto).

Pull request leela-zero#1657.

* Resign analysis: search for the highest resign threshold. 

Added resign analysis option to search for the highest 
resign threshold that should be set.

Pull request leela-zero#1606.

* Half precision compute support.

Use half precision computation on cards that support it.

Pull request leela-zero#1672.

* Thread scalability improvements.

- On OpenCLScheduler, don't use condvars, which tend to be slow
  because of thread sleep/wake.
- Instead, use spinlocks and just have enough contexts to avoid sleeping.
- Allow more threads than the CPU physically has. 
  This is required in many multi-GPU setups with low core counts 
  (e.g., quad-core non-hyperthread with 2 GPUs)

Pull request leela-zero#1669.

* Use L2-norm in self check.

The previous method is too strict for fp16 compute. 

Since fp16's lower precision is still good enough to play at
the same strength as fp32, relax the self check.

Pull request leela-zero#1698.

* OpenCL tuner fixes.

* Fix error calculation (Missing batch_size divider).
* Better error reporting when no working configuration could be found.
* Change reference data to have less rounding errors with half precision.
* Replace BLAS reference SGEMM with custom code that gives transposed 
  output like the OpenCL SGEMM.

Pull request leela-zero#1710.

* Change policy vector to array.

Should save a tiny bit of memory.

Pull request leela-zero#1716.

* Fall back to single precision net on breakage.

Fall back to single precision net when half precision is broken, 
at least when detection mode is auto.

Pull request leela-zero#1726.

* AutoGTP: use compressed weights networks.

Pull request leela-zero#1721.

* Fix OpenCL buffer sizes.

Some OpenCL buffers were allocated too big. 
Tested with oclgrind that the new sizes are correct.

Pull request leela-zero#1727.

* Script for quantizing weights.

Use smaller precision to store the weights to decrease the file size.

See discussion in issue leela-zero#1733.

Pull request leela-zero#1736.

* Network initialization restructuring.

* Network initialization restructuring

- Create one net at a time when doing fp16/fp32 autodetect.
  Saves some GPU memory.
- Create an internal lambda which initializes the nets.
- Use std::copy to copy vectors to reduce runtime.

* zeropad_U : loop reordering for performance optimization.

Plus other optimizations for zero-copying initialization.

Pull request leela-zero#1750.

* Fix comments, code style.

Minor fixes to incorrect comments, and reduce some excessively long
lines.

* Validation: support GTP commands for each binary.

* Changed Validation and Game to support multiple GTP commands
  at start up but left the Validations options untouched.
* Separated engine options (as positional arguments) from match options.
  Replaced time settings option with ability to specify any GTP commands.
* Added --gtp-command options using the existing option parser.
  Also changed default binary options from -p 1600 to -v 3200.
* Each binary argument has to be preceded by "--".
* Changes to use Engine Objects.
* Exits on failed GTP command.

Added printing of GTP commands in gameStart() so users can see what
commands are actually sent to each engine.

Pull request leela-zero#1652.

* Don't refer to stone locations as "squares".

* Don't refer to stone locations as "squares".

* Use "vertex" for those in the "letterbox" representation.
* Otherwise, mostly use "intersection".
* Also, capture all possible moves (i.e. including pass) in its own
  explicit constant.

* Clean up network constants.

Pull request leela-zero#1723.

@fmscole commented Sep 16, 2018

How do I convert weights from LZ to minigo? Where is the code?

@sethtroisi (Member) commented

I don't think there's code that converts backwards (LZ to MG) yet.
