Record: 8192 Vocab Size, NorMuon, Selective Quantization; 1.186 val_bpb (#78)
Open
mtybadger wants to merge 1 commit into openai:main from
Conversation
phaesoo added a commit to phaesoo/parameter-golf that referenced this pull request on Mar 19, 2026:
openai#77, openai#78) Analyzed techniques, ablations, and individual BPB contributions. Key finding: sliding window eval (~0.034) and int6+wider MLP (~0.029) are the dominant validated techniques. Several promising combinations remain untested across submissions. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
xskuy pushed a commit to xskuy/parameter-golf that referenced this pull request on Mar 19, 2026:
Major improvements based on competition intelligence (day 2 PRs):

1. Sliding window eval (stride=256): overlapping windows give each token more context. Free ~0.03 bpb improvement, zero artifact cost. Based on PRs openai#70, openai#77, openai#65.
2. Int6 quantization: configurable WEIGHT_QUANT_BITS (default 6) and EMBED_QUANT_BITS (default 8). Saves ~25% artifact space vs int8, allowing bigger models. Based on PRs openai#78, openai#70.
3. MLP 3x expansion: MLP_MULT_NUM=3 (up from 8/3). Wider MLP gives ~0.019 bpb improvement. Based on PRs openai#70, openai#66.
4. Default dim=512 with LR=0.03 (best config from experiments).
5. forward_logits() helper for sliding window (avoids model.forward, which returns loss, not logits).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
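The overlapping-window idea in item 1 can be sketched as pure index arithmetic: each token after the first full window is scored only once, but from a window that supplies extra left context. This is a minimal reconstruction, not the commit's actual code; the window length, stride, and generator name are my assumptions.

```python
# Sketch of stride-based sliding-window evaluation spans.
# window/stride defaults and the (start, end, score_from) convention
# are assumptions, not taken from the referenced commits.

def sliding_windows(n_tokens, window=1024, stride=256):
    """Yield (start, end, score_from) spans covering n_tokens.

    Each window spans [start, end); only tokens in [score_from, end)
    contribute to the loss, so every token after the first window is
    scored with at least (window - stride) tokens of left context.
    """
    if n_tokens <= window:
        yield 0, n_tokens, 0
        return
    yield 0, window, 0                 # first window scores all its tokens
    pos = window
    while pos < n_tokens:
        end = min(pos + stride, n_tokens)
        yield end - window, end, pos   # overlap provides context only
        pos = end

# Every token is scored exactly once, no window exceeds its length budget:
spans = list(sliding_windows(2300, window=1024, stride=256))
covered = sum(end - score_from for _, end, score_from in spans)
assert covered == 2300
assert all(end - start <= 1024 for start, end, _ in spans)
```

The eval loop would then run the model once per span and accumulate loss only over the `[score_from, end)` slice, which is why the improvement costs nothing in artifact size.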
2023-03-19: Vocab Size, NorMuon, Selective Quantization; 1.186 val_bpb
Day 1! This record contains three main new ideas, plus some tweaks to the baseline, most notably vocab size. I tried several ideas today; these are the ones that worked, and I want to chase quantization further in the coming days.
Changes in this model:
with this tokenizer_spec:

```json
{
  "tokenizers": [
    { "name": "sp_bpe_1024", "dataset_suffix": "sp1024", "vocab_size": 1024 },
    { "name": "sp_bpe_8192", "dataset_suffix": "sp8192", "vocab_size": 8192 }
  ]
}
```

with a 50/50 val/train split as a result. Tokenizers for sp1024, 2048, 4096, and 8192, with data, are publicly available on my Hugging Face.
modded-nanogpt, replacing Muon. Configuration:

step_avg: 43.67ms and final_int8_zlib_roundtrip_exact val_bpb: 1.22731147 immediately before. Command:
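The `final_int8_zlib_roundtrip_exact` metric combines two checks: the quantized weights are zlib-compressed to measure artifact size, and decompression must reproduce the stored integers bit-for-bit before dequantized eval. A minimal stdlib-only reconstruction of that pipeline (my sketch, not the submission's code; function names, per-tensor symmetric scaling, and the default bit width are assumptions):

```python
# Sketch of quantize -> zlib -> exact-roundtrip artifact accounting.
# Assumes a non-empty weight list with a nonzero max magnitude.
import zlib

def quantize(weights, bits=8):
    """Symmetric per-tensor quantization to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def pack_check(weights, bits=8):
    q, scale = quantize(weights, bits)
    raw = bytes(v & 0xFF for v in q)        # int8 as two's complement bytes
    blob = zlib.compress(raw, level=9)      # this is what the artifact stores
    restored = zlib.decompress(blob)
    assert restored == raw                  # "roundtrip_exact": bit-for-bit
    dequant = [(b - 256 if b > 127 else b) * scale for b in restored]
    return len(blob), dequant

# Quantization error is bounded by half a quantization step:
size, dequant = pack_check([0.5, -1.0, 0.25, 1.0])
assert max(abs(a - b) for a, b in zip([0.5, -1.0, 0.25, 1.0], dequant)) \
    <= 0.5 / 127 + 1e-9
```

Dropping `bits` to 6 (as in the int6 follow-ups above) shrinks the pre-compression range to [-32, 31], which is where the ~25% artifact savings over int8 comes from.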
Key metrics (from train.log):

- Stopped at step 9359/20000 due to the wallclock cap
- val_loss: 3.0261, val_bpb: 1.1717
- Final (int8 zlib roundtrip-exact): val_loss: 3.06233041, val_bpb: 1.18576208
- Training time: 600075ms (step_avg: 64.12ms)
- Artifact size: 14743224 bytes + 53612 bytes = 14796836 bytes total

Training volume:

- 524288 tokens/step
- 7224688640 tokens total

Included files:

- train_gpt.py (code snapshot used for the run)
- train.log (exact remote training log)
- submission.json (leaderboard metadata)
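For readers relating the val_loss and val_bpb pairs above: bits-per-byte is the mean per-token cross-entropy (in nats) converted to bits and rescaled by the tokens-to-bytes ratio of the validation set, which is why a larger vocab (fewer, longer tokens) changes val_bpb even at equal val_loss. A minimal sketch of the conversion; the token and byte counts below are hypothetical, not this run's figures:

```python
import math

def loss_to_bpb(val_loss_nats, n_tokens, n_bytes):
    """Convert mean per-token cross-entropy (nats) to bits per byte."""
    total_bits = val_loss_nats * n_tokens / math.log(2)
    return total_bits / n_bytes

# Hypothetical counts for illustration only (roughly 3.9 bytes/token,
# plausible for an 8192-entry BPE vocab; not from this run's logs):
bpb = loss_to_bpb(3.0261, n_tokens=1_000_000, n_bytes=3_900_000)
```

Sanity check of the formula: a loss of ln(2) nats per token on data with one token per byte is exactly 1.0 bpb.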