Winograd F(4x4, 3x3) #1643
Conversation
Force-pushed from 6f70a99 to d54a305.
Winograd F(4x4, 3x3) and OpenCL batching support.
Nice work. Does combining this with USE_HALF start showing rounding issues? |
Does this mean the architecture will be different? |
No, it's just a more efficient way of doing the computation. The outputs are still 3x3, i.e. equivalent to 3x3 convolutions. |
OK. I guess this will be particularly useful on android :) |
Nice work! Regarding batching, what I did was simply increase the number of threads to a huge number. I didn't change anything in the tree search code itself. If there are enough threads needing an NN evaluation, then batching is used; if not, it simply uses the non-batch implementation. Something to gain in heavily threaded situations, nothing to lose in low-thread situations. I do wonder... did you test USE_HALF? I want to put up a PR that supports runtime selection of precisions, and that probably needs to be pushed after this gets merged... |
IIRC this was done with some slight delay? We should discuss batching strategies in an issue. It occurred to me that we might be able to use a global "number of threads in a lock" counter to detect when we lack tree parallelism for a batch. |
FWIW: I did some tests and I get consistently worse performance with this PR (7cfbb72) vs one commit before (6333b66). |
You should file an issue. Also try the default number of threads. I got a 15-25% speedup on all NVIDIA cards I tested, and about 20-25% on AMD too. There is a built-in --benchmark command. |
Another thing to note is that this requires re-tuning. If you forced the full tuner, you might want to do so again. |
Ubuntu 18.04 + 1070 Ti |
* Winograd F(4x4, 3x3) for CPU.
* Winograd F(4x4, 3x3) for OpenCL.
* OpenCL batching support.
Pull request #1643.
* Add multi GPU training support. Pull request leela-zero#1386.
* Extend GTP to support real-time search info.
* Extend GTP to add support for displaying winrates and variations from LZ while LZ is thinking.
* Use UCI format for lz-analyze and lz-genmove-analyze.
* Don't sort GTP lz-analyze output because it is not thread-safe. Pull request leela-zero#1388.
* Remove virtual loss from eval for live stats. For discussion see pull request leela-zero#1412.
* Make analysis output use one move per line. More in line with UCI, cleaner, easier to parse, smaller code.
* Remove versioned clang from Makefile. Don't hardcode the clang version in the Makefile.
* Fix varargs usage. Regression from leela-zero#1388. Fixes issue leela-zero#1424.
* AutoGTP: send leelaz version to server. Send the leelaz version embedded in the URL used to ask for a new job. Pull request leela-zero#1430.
* Multi GPU: fix split and variable placement. Fix split in net_to_model. Add soft placement of variables. Fixes Windows issues. Pull request leela-zero#1443.
* Mutex optimization. Updated Mutex implementation to use TTS instead of TS. Explicitly relax memory order (no behavior change, it's the default) and attempt TS before the TTS loop (improves performance in low-contention locks). Pull request leela-zero#1432.
* Update leela-zero.vcxproj for VS2015. Pull request leela-zero#1439.
* Add order to analysis data. See discussion in issue leela-zero#1425. Pull request leela-zero#1478.
* Fix misleading comments & naming. The Alpha (Go) Zero outputs use TanH nonlinearities, not sigmoids. The code comments and variable naming refer to an earlier version that used sigmoids, which is confusing people. See issue leela-zero#1484.
* Add Lizzie and LeelaSabaki to README. Pull request leela-zero#1513.
* Make Debian package with CMake. Create a Debian leelaz package via "make package" using CPack.
* Find leelaz if ./leelaz does not exist. If leelaz is installed at /usr/bin, then autogtp should find it via leelaz instead of ./leelaz.
* Generate package dependency list. Use dpkg-shlibdeps to generate a better package dependency list.
* Use git tags as version strings. Pull request leela-zero#1445.
* Look for symmetry on NNCache lookup. Look for symmetrical positions in the cache.
* Disable NNCache symmetry in self-play, to increase randomness from rotational asymmetry.
* Only check symmetry in the opening; refactor TimeControl. Only check for symmetries in the NNCache when we are in the opening (fast-moving zone). Refactor TimeControl to take the board size out.
* Change bench to an asymmetric position. Avoids rotation symmetry speedups; they are not typical.
* Rename rotation to symmetry, limit to early opening. Be consistent and don't call symmetries rotations. Limit the symmetry lookups to halfway through the opening (the first 30 moves on 19x19). Based on pull request leela-zero#1275, but without keeping the rotation array in every board instance. Pull request leela-zero#1421.
* Symmetry calculation cleanup. Pull request leela-zero#1522.
* Non-pruning (simple) time management. See issue leela-zero#1416. Pull request leela-zero#1497.
* Clean up some constants. Remove unused 'BIG' constant. Capture "N/A" vertex value in a constant. Pull request leela-zero#1528.
* Duplicate line removal. Pull request leela-zero#1529.
* Script for converting minigo weights. Pull request leela-zero#1538.
* Update README.md. Added q+Enter instructions. Pull request leela-zero#1542.
* Fix Validation checking whether the binary exists on Windows. Pull request leela-zero#1544.
* Constant for the unchanged symmetry index. Pull request leela-zero#1548.
* Update README.md. Update the TODO list.
* Removed unused class KeyPress. Pull request leela-zero#1560.
* Allow 3 AutoGTP quitting conditions. Pull request leela-zero#1580.
* More draw handling. Pull request leela-zero#1577.
* Suppress upstream warnings in Makefile. Pull request leela-zero#1605.
* Fix TF update operations. The real update operation should be the computation of the gradient rather than the assignment of it. Pull request leela-zero#1614. Fixes issue leela-zero#1502.
* Code restructuring: fewer globals. Remove thread_local variables for the OpenCL subsystem (to allow many different OpenCL implementations to exist concurrently). OpenCLScheduler: task queue cleanup. Change static Network methods to instance methods backed by a global Network instance. All weights moved from Network.cpp static variables to class Network. NNCache is now a member variable of Network, not a global. The network filename now comes from an external call, not a global variable. Removed the global g_network object; instead it is a member of the UCTSearch class. UCTNode is now a static member variable of GTP (instead of a static of a function). Rename ThreadData to OpenCLContext (it's no longer a thread-specific structure). Pull request leela-zero#1558.
* Removed unused types. Pull request leela-zero#1621.
* Resurrect GPU autodetection. Fixes issue leela-zero#1632. Pull request leela-zero#1633.
* Restrict the use of "score". Using "score" as a nonspecific term (and not when it, for example, refers to the count at the end of the game) makes it unnecessarily hard to understand the code and see how it matches the literature. Pull request leela-zero#1635.
* Code restructuring: create a ForwardPipe interface. Created abstract class ForwardPipe, which represents a class that has a forward() call and will be the base interface of all forward() calls. Moved network initialization code to OpenCLScheduler. Moved CPU-based forward() code to class CPUPipe.
* Added --cpu-only option. This command-line option will run a CPU-only implementation on an OpenCL build. Can be used for testing and running fallback modes rather than switching binaries. Pull request leela-zero#1620.
* Coding style consistency cleanups. Remove use of "new"; prefer make_unique instead. Give ForwardPipe a virtual destructor to silence a clang warning. Pull request leela-zero#1644.
* Replace if-else chain with a switch statement. Pull request leela-zero#1638.
* Use Winograd F(4x4, 3x3). Winograd F(4x4, 3x3) for CPU and OpenCL, plus OpenCL batching support. Pull request leela-zero#1643.
* Increase error budget in tuner. The 256-channel network exceeds 1% error in the tuner, but the network output seems accurate enough during play. Fixes leela-zero#1645. Pull request leela-zero#1647.
* Get rid of more "network" globals and pointers. Keep a single "network" global in GTP, owned by a unique_ptr, and move things around when needed. Pull request leela-zero#1650.
* Runtime selection of fp16/fp32. OpenCL half precision is now a command-line option, with support compiled in by default. This converts the OpenCL code into a gigantic template library.
* Update Network self-check. The final output is used for the self-check. The criterion is 20% error, while ignoring values smaller than 1/361. Throws an exception when three out of the last ten checks fail. Pull request leela-zero#1649.
* Minor code cleanups. Slight style edits of code and comments.
* Clean up SGFTree style. Modernize some parts of SGFTree's style.
* Remove separate USE_HALF build from CI. This is integrated into the main build now. Pull request leela-zero#1655.
* Don't assume alternating colors in SGF. Fix a bug where an SGF file/string could not contain 2 consecutive moves of the same color. Fixes issue leela-zero#1469. Pull request leela-zero#1654.
* Remove separate half precision kernel. Use preprocessor defines to make a single kernel support both single precision and half precision storage. Pull request leela-zero#1661.
* Compress duplicate evaluation code. Pull request leela-zero#1660.
* Consistent header guard naming. Pull request leela-zero#1664.
* Replace macros with proper constants. Pull request leela-zero#1671.
* Implement NN eval fp16/fp32 autodetect. Runs both precisions for 1 second, and if fp16 is faster than fp32 by more than 5%, fp16 is used. Removes --use-half, replacing it with a --precision [auto|single|half] option, default auto. Pull request leela-zero#1657.
* Resign analysis: search for the highest resign threshold. Added a resign analysis option to search for the highest resign threshold that should be set. Pull request leela-zero#1606.
* Half precision compute support. Use half precision computation on cards that support it. Pull request leela-zero#1672.
* Thread scalability improvements. On OpenCLScheduler, don't use condvars, which tend to be slow because of thread sleep/wake; instead, use spinlocks and just have enough contexts to avoid sleeping. Allow more threads than the CPU physically has; this is required in many multi-GPU setups with low core counts (e.g., a quad-core non-hyperthreaded CPU with 2 GPUs). Pull request leela-zero#1669.
* Use L2 norm in self-check. The previous method is too strict for fp16 compute. Since the lower precision of fp16 is still good enough to play at the same strength as fp32, relax the self-check. Pull request leela-zero#1698.
* OpenCL tuner fixes. Fix error calculation (missing batch_size divider). Better error reporting when no working configuration could be found. Change reference data to have fewer rounding errors with half precision. Replace the BLAS reference SGEMM with custom code that gives transposed output like the OpenCL SGEMM. Pull request leela-zero#1710.
* Change policy vector to array. Should save a tiny bit of memory. Pull request leela-zero#1716.
* Fall back to the single precision net when half precision is broken, at least when detection mode is auto. Pull request leela-zero#1726.
* AutoGTP: use compressed weights networks. Pull request leela-zero#1721.
* Fix OpenCL buffer sizes. Some OpenCL buffers were allocated too big. Tested with oclgrind that the new sizes are correct. Pull request leela-zero#1727.
* Script for quantizing weights. Use smaller precision to store the weights to decrease the file size. See discussion in issue leela-zero#1733. Pull request leela-zero#1736.
* Network initialization restructuring. Create one net at a time when doing fp16/fp32 autodetect; saves some GPU memory. Create an internal lambda which initializes the nets. Use std::copy to copy vectors to reduce runtime. zeropad_U: loop reordering for performance optimization, plus other optimizations for zero-copy initialization. Pull request leela-zero#1750.
* Fix comments, code style. Minor fixes to incorrect comments, and reduce some excessively long lines.
* Validation: support GTP commands for each binary. Changed Validation and Game to support multiple GTP commands at startup but left the Validation options untouched. Separated engine options (as positional arguments) from match options. Replaced the time settings option with the ability to specify any GTP commands. Added --gtp-command options using the existing option parser. Also changed default binary options from -p 1600 to -v 3200. Each binary argument has to be preceded by "--". Changes to use Engine objects. Exits on a failed GTP command. Added printing of GTP commands in gameStart() so users can see what commands are actually sent to each engine. Pull request leela-zero#1652.
* Don't refer to stone locations as "squares". Use "vertex" for those in the "letterbox" representation; otherwise, mostly use "intersection". Also, capture all possible moves (i.e. including pass) in their own explicit constant.
* Clean up network constants. Pull request leela-zero#1723.
Replace the F(2x2, 3x3) Winograd transformations with F(4x4, 3x3) transformations, which have a bigger tile size. F(2x2, 3x3) is theoretically 2.25 times faster than direct convolution, and F(4x4, 3x3) is 4 times faster. The transformations themselves are bigger, though, so in practice the gain is probably not as large.
Benchmarks on GTX 1050 Ti:
Batching is much more useful with the larger tile size. There are only 25 tiles on a 19x19 board with the larger tile size, where there were previously 100. For SGEMM the tile count needs to be padded to 32, so about 20% of the computation is wasted on the padding. With a batch size of 4 I get 367 n/s, which would be 80% faster than the current code.
The tree search part of batching is still missing, so it can't be used even though the OpenCL part supports it. The good news is that, because of the padding required even at batch size one, a simple fixed-size batching implementation should be faster on pretty much all devices. I'm not familiar with that part of the code, and I hope someone else can implement the tree search part of the batching support (@ihavnoid?).
Intel iGPU users will also be pleasantly surprised: I spent some time vectorizing the transformations, so they run much faster on Intel iGPUs. Performance on my laptop increased from 24 n/s to 49 n/s with this PR and a batch size of 4 (34 n/s with a batch size of 1).