Update default net to nn-1337b1adec5b.nnue #4387

Closed

Conversation

@linrock (Contributor) commented on Feb 9, 2023

Created by retraining the master net on a dataset composed of:

  • Most of the previous best dataset, filtered to remove positions likely having only one good move
  • Additional training data from Leela T77 dec2021, rescored with 16tb of 7-piece tablebases

Trained with end lambda 0.7 and max epoch 900. Positions with ply <= 28 were removed from most of the previous best dataset before training began. A new nnue-pytorch trainer parameter for skipping early plies was used to skip plies <= 24 in the unfiltered and additional Leela T77 parts of the dataset.

python easy_train.py \
  --experiment-name leela96-dfrc99-T80octnovT79aprmayT60novdec-eval-filt-v2-T78augsep-12tb-T77dec-16tb-lambda7-sk24 \
  --training-dataset /data/leela96-dfrc99-T80octnovT79aprmayT60novdec-eval-filt-v2-T78augsep-12tb-T77dec-16tb.binpack \
  --nnue-pytorch-branch linrock/nnue-pytorch/easy-train-early-fen-skipping \
  --early-fen-skipping 24 \
  --gpus "0," \
  --start-from-engine-test-net True \
  --start-lambda 1.0 \
  --end-lambda 0.7 \
  --gamma 0.995 \
  --lr 4.375e-4 \
  --tui False \
  --seed $RANDOM \
  --max_epoch 900
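
For context, here is a minimal Python sketch of what the --early-fen-skipping and start/end lambda options roughly correspond to. The function names and the linear lambda schedule are illustrative assumptions for this description, not the actual nnue-pytorch trainer code.

EARLY_FEN_SKIPPING = 24   # --early-fen-skipping 24
START_LAMBDA = 1.0        # --start-lambda 1.0
END_LAMBDA = 0.7          # --end-lambda 0.7
MAX_EPOCH = 900           # --max_epoch 900

def keep_position(ply: int) -> bool:
    # Early plies are dropped at data-loading time, so opening positions
    # at or below the threshold never contribute to training.
    return ply > EARLY_FEN_SKIPPING

def lambda_for_epoch(epoch: int) -> float:
    # Assumed linear interpolation from start lambda to end lambda.
    t = min(epoch / MAX_EPOCH, 1.0)
    return START_LAMBDA + t * (END_LAMBDA - START_LAMBDA)

def training_target(search_eval_wdl: float, game_result_wdl: float, epoch: int) -> float:
    # lambda = 1.0 trains purely on the search eval, lambda = 0.0 purely on
    # the game outcome; end lambda 0.7 mixes in 30% game result by the end.
    lam = lambda_for_epoch(epoch)
    return lam * search_eval_wdl + (1.0 - lam) * game_result_wdl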

The depth6 multipv2 search filtering method is the same as the one used for filtering recent best datasets, with a lower eval difference threshold to remove slightly more positions than before (a rough sketch of the idea follows the list below). These parts of the dataset were filtered:

  • 96% of T60T70wIsRightFarseerT60T74T75T76.binpack
  • 99% of dfrc_n5000.binpack
  • T80 oct + nov 2022 data, no positions with castling flags, rescored with ~600gb 7p tablebases
  • T79 apr + may 2022 data, rescored with 12tb 7p tablebases
  • T60 nov + dec 2021 data, rescored with 12tb 7p tablebases
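
A simplified sketch of the filtering idea, using python-chess for illustration. The threshold constant and helper name below are hypothetical; the exact C++ implementation and eval thresholds are in the linked filter sources.

import chess
import chess.engine

EVAL_DIFF_THRESHOLD = 100  # hypothetical centipawn threshold, not the real value

def has_only_one_good_move(engine: chess.engine.SimpleEngine, board: chess.Board) -> bool:
    # Run a depth-6 multipv=2 search; if the best move evaluates far above
    # the second-best move, the position likely has only one good move and
    # is removed from the training data.
    infos = engine.analyse(board, chess.engine.Limit(depth=6), multipv=2)
    if len(infos) < 2:
        return True  # only one legal move available
    best = infos[0]["score"].relative.score(mate_score=10000)
    second = infos[1]["score"].relative.score(mate_score=10000)
    return best - second > EVAL_DIFF_THRESHOLD

# Usage sketch: keep only positions that survive the filter.
# engine = chess.engine.SimpleEngine.popen_uci("stockfish")
# kept = [board for board in positions if not has_only_one_good_move(engine, board)]
# engine.quit()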

These parts of the dataset were not filtered. Positions with ply <= 24 were skipped during training:

  • T78 aug + sep 2022 data, rescored with 12tb 7p tablebases
  • 84% of T77 dec 2021 data, rescored with 16tb 7p tablebases

The code and exact evaluation thresholds used for data filtering can be found at:
https://github.com/linrock/Stockfish/tree/tools-filter-multipv2-eval-diff-t2/src/filter

The exact training data used can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move:
nn-epoch859.nnue : 3.5 +/- 1.2

Passed STC:
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
https://tests.stockfishchess.org/tests/view/63dfeefc73223e7f52ad769f
Total: 219744 W: 58572 L: 58002 D: 103170
Ptnml(0-2): 609, 24446, 59284, 24832, 701

Passed LTC:
https://tests.stockfishchess.org/tests/view/63e268fc73223e7f52ade7b6
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 91256 W: 24528 L: 24121 D: 42607
Ptnml(0-2): 48, 8863, 27390, 9288, 39
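
For readers less familiar with fishtest output, the Elo point estimate implied by these totals can be recovered with the standard logistic formula. This back-of-the-envelope Python sketch is only an approximation: fishtest's SPRT itself is computed from the pentanomial (game-pair) counts, which this ignores.

import math

def elo_from_wld(wins: int, losses: int, draws: int) -> float:
    # Convert a match score fraction into an Elo difference.
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games
    return -400.0 * math.log10(1.0 / score - 1.0)

print(elo_from_wld(58572, 58002, 103170))  # STC totals above, roughly +0.9 Elo
print(elo_from_wld(24528, 24121, 42607))   # LTC totals above, roughly +1.5 Elo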

bench 3841998

@vondele added the "to be merged" (Will be merged shortly) label on Feb 9, 2023
@vondele closed this in 05dea2c on Feb 9, 2023