Script for converting minigo weights #1538
Conversation
Was this vs. minigo's Python code or the C++ code? It seems a little off that the same weights should be so much weaker in a different engine, and it makes me wonder if we have a bug. Can you show me the command used to invoke minigo?
Python code. Command was

At least some of the strength difference is caused by batching and lack of FPU reduction. Also, I noticed that minigo has a lot of batch norm gamma values that are zero, causing the whole output plane to be zero. For example, 65 planes of the input convolution have zero gamma, and the two next convolutions have 12 and 25 zeroed planes. This seems a little extreme to me.
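A quick way to confirm the zeroed-gamma observation is to count, per batch-norm layer, how many gamma entries are exactly zero (a zero gamma kills the whole output plane). A minimal sketch, with hypothetical toy data standing in for an actual checkpoint loader:

```python
# Hypothetical sketch: count batch-norm gamma values that are exactly zero
# per layer. The `gammas` data below is made up for illustration; how the
# per-layer gamma vectors are read from a minigo checkpoint is not shown.

def count_zero_planes(gammas):
    """Return, per layer, how many output planes are dead (gamma == 0)."""
    return [sum(1 for g in layer if g == 0.0) for layer in gammas]

# Toy example: 3 layers with a few zeroed gammas each.
gammas = [
    [0.0, 1.2, 0.0, 0.9],   # 2 dead planes
    [0.5, 0.0, 0.7, 0.3],   # 1 dead plane
    [1.0, 1.1, 0.8, 0.6],   # none
]
print(count_zero_planes(gammas))  # [2, 1, 0]
```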
To summarize what I think are possible differences in MCTS fiddly bits, there's:
@amj was saying that LZ might be working with parallel reads = 1 whereas we're using parallel reads = 8 by default. If that's correct, would you be open to trying the matchup again with
I just wanted to verify that the weights were converted correctly. The match was run on a laptop and even with just 100 playouts it took over a day to play 100 games. I have other uses for it right now and don't want to repeat the match.
OK, I can definitely understand that -- if you can compile our cc version it should be much faster :) Edit to add: the disparity in perf numbers makes me wonder if we had the GPU enabled for minigo?
Okay - thank you very much for reporting the results here.
The laptop this was played on had only an Intel integrated GPU. As far as I know TensorFlow can't use it, but LZ's OpenCL code supports it. That's why there is such a large difference in CPU use.
I have everything running on my computer (which has a GPU).

Control File
Results
I tested with
Finally I tested with
I'm running two tests now, one with just
Shouldn't the equivalent puct be twice as large in minigo, because minigo uses a winrate on the scale [-1, 1] while LZ uses [0, 1]?
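The doubling intuition can be checked numerically: mapping q to 2q - 1 turns every child's PUCT score into the same increasing affine function of the old one exactly when c is also doubled, so the selected child is unchanged. A sketch with a textbook PUCT formula (the FPU and virtual-loss details of either engine are not modeled here):

```python
import math

# With q' = 2q - 1 and c' = 2c, each child's selection score becomes
# 2*(q + c*U) - 1, an increasing affine map applied uniformly to all
# children, so the argmax (the selected child) is identical.

def puct_score(q, p, n_parent, n_child, c):
    # Standard PUCT: Q(s,a) + c * P(s,a) * sqrt(N(s)) / (1 + N(s,a))
    return q + c * p * math.sqrt(n_parent) / (1 + n_child)

# Toy children: (q on the [0, 1] scale, prior, visit count)
children = [(0.55, 0.4, 10), (0.48, 0.5, 3), (0.60, 0.1, 20)]
n_parent = sum(n for _, _, n in children)
c = 0.8

pick_01 = max(range(3), key=lambda i: puct_score(
    children[i][0], children[i][1], n_parent, children[i][2], c))
pick_11 = max(range(3), key=lambda i: puct_score(
    2 * children[i][0] - 1, children[i][1], n_parent, children[i][2], 2 * c))
assert pick_01 == pick_11  # same child selected on either scale
print(pick_01, pick_11)
```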
If you compile leelaz with USE_TUNER defined
@Ttl yes, I would expect the equivalent puct to need to be twice as large as well -- very strange that it worked that way.
With @alreadydone's help I disabled rotation, and I get nearly identical results (I actually have to apply the LZ rotation 1, a flip over the X direction).
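For illustration, a 90-degree rotation followed by a flip over the X direction (one of the eight dihedral board symmetries) can be written as below; the mapping to LZ's symmetry numbering is an assumption here, not taken from the source:

```python
# One of the 8 board symmetries: rotate a plane 90 degrees clockwise,
# then mirror it over the X direction. Which index LZ assigns to this
# composite transform is assumed, not verified against the LZ source.

def rot90(plane):
    """Rotate a square plane 90 degrees clockwise."""
    n = len(plane)
    return [[plane[n - 1 - c][r] for c in range(n)] for r in range(n)]

def flip_x(plane):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in plane]

board = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
print(flip_x(rot90(board)))  # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```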
* Add multi GPU training support. Pull request leela-zero#1386.
* Extend GTP to support real time search info: add support for displaying winrates and variations from LZ while LZ is thinking. Use UCI format for lz-analyze and lz-genmove-analyze. Don't sort GTP lz-analyze output because it is not thread-safe. Pull request leela-zero#1388.
* Remove virtual loss from eval for live stats. For discussion see pull request leela-zero#1412.
* Make analysis output use one move per line. More in line with UCI, cleaner, easier to parse, smaller code.
* Remove versioned clang from Makefile. Don't hardcode the clang version in the Makefile.
* Fix varargs usage. Regression from leela-zero#1388. Fixes issue leela-zero#1424.
* AutoGTP: send the leelaz version to the server, embedded in the URL used to ask for a new job. Pull request leela-zero#1430.
* Multi GPU: fix split and variable placement. Fix split in net_to_model, add soft placement of variables, and fix Windows issues. Pull request leela-zero#1443.
* Mutex optimization. Updated Mutex implementation to use TTS instead of TS. Explicitly relax memory order (no behavior change, it's the default) and attempt TS before the TTS loop (improves performance in low-contention locks). Pull request leela-zero#1432.
* Update leela-zero.vcxproj for VS2015. Pull request leela-zero#1439.
* Add order to analysis data. See discussion in issue leela-zero#1425. Pull request leela-zero#1478.
* Fix misleading comments & naming. The Alpha (Go) Zero outputs use TanH nonlinearities, not sigmoids. The code comments and variable naming refer to an earlier version that used sigmoids, and that is confusing people. See issue leela-zero#1484.
* Add Lizzie and LeelaSabaki to README. Pull request leela-zero#1513.
* Make Debian package with CMake. Create a Debian leelaz package via CPack with "make package". Find leelaz if ./leelaz does not exist: if leelaz is installed at /usr/bin, then autogtp should find it via leelaz instead of ./leelaz. Use dpkg-shlibdeps to generate a better package dependency list. Use git tags as version strings. Pull request leela-zero#1445.
* Look for symmetry on NNCache lookup. Look for symmetrical positions in the cache. Disable NNCache symmetry in self-play, to increase randomness from rotational asymmetry. Only check for symmetries in the NNCache when we are in the opening (fast moving zone). Refactor TimeControl to take the boardsize out. Change bench to an asymmetric position, avoiding rotation symmetry speedups (they are not typical). Rename rotation to symmetry: be consistent, don't call symmetries rotations, and limit the symmetry lookups to halfway through the opening (the first 30 moves on 19 x 19). Based on pull request leela-zero#1275, but without keeping the rotation array in every board instance. Pull request leela-zero#1421.
* Symmetry calculation cleanup. Pull request leela-zero#1522.
* Non-pruning (simple) time management. See issue leela-zero#1416. Pull request leela-zero#1497.
* Clean up some constants. Remove the unused 'BIG' constant and capture the "N/A" vertex value in a constant. Pull request leela-zero#1528.
* Duplicate line removal. Pull request leela-zero#1529.
* Script for converting minigo weights. Pull request leela-zero#1538.
* Update README.md. Added q+Enter instructions. Pull request leela-zero#1542.
* Fix Validation checking whether the binary exists on Windows. Pull request leela-zero#1544.
* Constant for the unchanged symmetry index. Pull request leela-zero#1548.
* Update README.md. Update the TODO list.
* Removed unused class KeyPress. Pull request leela-zero#1560.
* Allow 3 AutoGTP quitting conditions. Pull request leela-zero#1580.
* More draw handling. Pull request leela-zero#1577.
* Suppress upstream warnings in Makefile. Pull request leela-zero#1605.
* Fix TF update operations. The real update operation should be the computation of the gradient rather than the assignment of it. Pull request leela-zero#1614. Fixes issue leela-zero#1502.
* Code restructuring: fewer globals. Remove thread_local variables for the OpenCL subsystem (to allow many different OpenCL implementations to exist concurrently). OpenCLScheduler: task queue cleanup. Change static Network methods to instance methods on a global Network instance. All weights moved from Network.cpp static variables to class Network. NNCache is now a member variable of Network, not a global. The network filename now comes from an external call, not a global variable. Removed the global g_network object; instead it is a member of the UCTSearch class. UCTNode is now a static member variable of GTP (instead of a static of a function). Rename ThreadData to OpenCLContext (it's no longer a thread-specific structure). Pull request leela-zero#1558.
* Removed unused types. Pull request leela-zero#1621.
* Resurrect GPU autodetection. Fixes issue leela-zero#1632. Pull request leela-zero#1633.
* Restrict the use of "score". Using "score" as a nonspecific term (and not when it, for example, refers to the count at the end of the game) makes it unnecessarily hard to understand the code and see how it matches the literature. Pull request leela-zero#1635.
* Code restructuring: create abstract class ForwardPipe, the base interface of everything with a forward() call. Moved network initialization code to OpenCLScheduler. Moved CPU-based forward() code to class CPUPipe. Added a --cpu-only option: this command line option runs a CPU-only implementation on an OpenCL build, and can be used for testing and running fallback modes rather than switching binaries. Pull request leela-zero#1620.
* Coding style consistency cleanups. Remove use of "new"; prefer make_unique instead. Give ForwardPipe a virtual destructor to silence a clang warning. Pull request leela-zero#1644.
* Replace if-else chain with switch statement. Pull request leela-zero#1638.
* Use Winograd F(4x4, 3x3) for CPU and OpenCL, and add OpenCL batching support. Pull request leela-zero#1643.
* Increase error budget in tuner. The 256 channel network exceeds 1% error in the tuner, but the network output seems accurate enough during play. Fixes leela-zero#1645. Pull request leela-zero#1647.
* Get rid of more "network" globals and pointers. Keep a single "network" global in GTP, owned by a unique_ptr, and move things around when needed. Pull request leela-zero#1650.
* Runtime selection of fp16/fp32. OpenCL half precision is now a command-line option, with support compiled in by default. This converts the OpenCL code into a gigantic template library. Update Network self-check: the final output is used for the self-check, the criterion is 20% error while ignoring values smaller than 1/361, and an exception is thrown when three of the last ten checks fail. Pull request leela-zero#1649.
* Minor code cleanups. Slight style edits of code and comments.
* Clean up SGFTree style. Modernize some parts of SGFTree's style.
* Remove separate USE_HALF build from CI. This is integrated into the main build now. Pull request leela-zero#1655.
* Don't assume alternating colors in SGF. Fix a bug where an SGF file/string could not contain 2 consecutive moves of the same color. Fixes issue leela-zero#1469. Pull request leela-zero#1654.
* Remove separate half precision kernel. Use preprocessor defines to make a single kernel support both single precision and half precision storage. Pull request leela-zero#1661.
* Compress duplicate evaluation code. Pull request leela-zero#1660.
* Consistent header guard naming. Pull request leela-zero#1664.
* Replace macros with proper constants. Pull request leela-zero#1671.
* Implement NN eval fp16/fp32 autodetect. Runs both precisions for 1 second, and if fp16 is faster than fp32 by more than 5%, fp16 is used. Removes --use-half, replacing it with a --precision [auto|single|half] option, default auto. Pull request leela-zero#1657.
* Resign analysis: added an option to search for the highest resign threshold that should be set. Pull request leela-zero#1606.
* Half precision compute support. Use half precision computation on cards that support it. Pull request leela-zero#1672.
* Thread scalability improvements. On OpenCLScheduler, don't use condvars, which tend to be slow because of thread sleep/wake; instead, use spinlocks and have enough contexts to avoid sleeping. Allow more threads than the CPU physically has; this is required in many multi-GPU setups with low core counts (e.g., quad-core non-hyperthreaded with 2 GPUs). Pull request leela-zero#1669.
* Use L2-norm in self check. The previous method is too strict for fp16 compute. Since the lower precision of fp16 is still good enough to play at the same strength as fp32, relax the self check. Pull request leela-zero#1698.
* OpenCL tuner fixes. Fix error calculation (missing batch_size divider). Better error reporting when no working configuration could be found. Change reference data to have fewer rounding errors with half precision. Replace BLAS reference SGEMM with custom code that gives transposed output like the OpenCL SGEMM. Pull request leela-zero#1710.
* Change policy vector to array. Should save a tiny bit of memory. Pull request leela-zero#1716.
* Fall back to the single precision net when half precision is broken, at least when detection mode is auto. Pull request leela-zero#1726.
* AutoGTP: use compressed weights networks. Pull request leela-zero#1721.
* Fix OpenCL buffer sizes. Some OpenCL buffers were allocated too big; tested with oclgrind that the new sizes are correct. Pull request leela-zero#1727.
* Script for quantizing weights. Use smaller precision to store the weights and decrease the file size. See discussion in issue leela-zero#1733. Pull request leela-zero#1736.
* Network initialization restructuring. Create one net at a time when doing fp16/fp32 autodetect (saves some GPU memory). Create an internal lambda which initializes the nets. Use std::copy to copy vectors to reduce runtime. zeropad_U: loop reordering for performance optimization, plus other optimizations for zero-copy initialization. Pull request leela-zero#1750.
* Fix comments, code style. Minor fixes to incorrect comments, and reduce some excessively long lines.
* Validation: support GTP commands for each binary. Changed Validation and Game to support multiple GTP commands at startup but left the Validation options untouched. Separated engine options (as positional arguments) from match options. Replaced the time settings option with the ability to specify any GTP commands. Added --gtp-command options using the existing option parser. Also changed default binary options from -p 1600 to -v 3200. Each binary argument has to be preceded by "--". Changed to use Engine objects. Exits on a failed GTP command, and prints GTP commands in gameStart() so users can see what commands are actually sent to each engine. Pull request leela-zero#1652.
* Don't refer to stone locations as "squares". Use "vertex" for those in the "letterbox" representation; otherwise, mostly use "intersection". Also, capture all possible moves (i.e. including pass) in an explicit constant. Clean up network constants. Pull request leela-zero#1723.
How does one convert weights from LZ to minigo?
I don't think there's code that converts backwards (LZ to MG) yet. |
Converts minigo weights to work with LZ. Some minigo weights are available here: https://console.cloud.google.com/storage/browser/minigo-pub
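One detail such a conversion has to handle: minigo's batch norm carries learned gamma/beta, while a format with plain normalization stores only means and variances. A hedged sketch of the standard folding identity (assuming the target applies (x - mean) / sqrt(var + eps) with no learned scale, and assuming gamma > 0; this is not a transcript of the actual script):

```python
import math

# minigo's BN computes   y = gamma * (x - mean) / sqrt(var + eps) + beta.
# To express the same function as plain normalization
#   y = (x - mean') / sqrt(var' + eps),
# fold gamma and beta into the statistics:
#   var'  = (var + eps) / gamma^2 - eps
#   mean' = mean - beta * sqrt(var + eps) / gamma
# This breaks down when gamma == 0, which is one reason zeroed gammas in
# the minigo checkpoints are worth flagging. Assumes gamma > 0.

def fold_bn(gamma, beta, mean, var, eps=1e-5):
    if gamma <= 0.0:
        raise ValueError("folding assumes a strictly positive gamma")
    std = math.sqrt(var + eps)
    var_f = (std / gamma) ** 2 - eps
    mean_f = mean - beta * std / gamma
    return mean_f, var_f

# Check: both forms give the same output for a sample activation.
gamma, beta, mean, var, eps, x = 0.8, -0.1, 0.3, 2.0, 1e-5, 1.7
mean_f, var_f = fold_bn(gamma, beta, mean, var, eps)
orig = gamma * (x - mean) / math.sqrt(var + eps) + beta
folded = (x - mean_f) / math.sqrt(var_f + eps)
assert abs(orig - folded) < 1e-9
```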
Some tests:
LZ weights 147 vs. LZ with minigo weights 303. 1000 visits; the minigo weights have 3 handicap stones:
Minigo vs. LZ using the same 303-olympus weights with 100 playouts. LZ arguments: `-g -d -t 1 --noponder -p 100 --timemanage off -r 5`. Minigo used the default settings.