
docs: bump versions1.json to 0.17.0 (latest)#4360

Merged
ko3n1g merged 1 commit into NVIDIA:main from ko3n1g:ko3n1g/docs/bump-versions1-main
Apr 17, 2026

Conversation

ko3n1g (Contributor) commented Apr 17, 2026

Summary

  • Adds 0.17.0 (latest) entry pointing to latest/ URL
  • Demotes 0.16.0 to a versioned entry pointing to 0.16.0/ URL

Before / After

Before:

{ "name": "0.16.0 (latest)", "version": "0.16.0", "url": ".../latest/" }

After:

{ "name": "0.17.0 (latest)", "version": "0.17.0", "url": ".../latest/" },
{ "name": "0.16.0",          "version": "0.16.0",  "url": ".../0.16.0/" }
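The promote-and-demote step above can be sketched as a small script. This is an illustrative helper, not part of the repository; the `bump_latest` name and the example base URL are hypothetical, and only the `versions1.json` entry shape (`name` / `version` / `url`) is taken from the PR.

```python
import json

def bump_latest(versions, new_version, base_url):
    """Return a new version list with `new_version` as the latest entry.

    The previous "(latest)" entry is demoted: its display name loses the
    "(latest)" suffix and its URL switches from .../latest/ to its own
    versioned path.
    """
    out = []
    for entry in versions:
        if entry["name"].endswith("(latest)"):
            old = entry["version"]
            out.append({
                "name": old,
                "version": old,
                "url": f"{base_url}/{old}/",
            })
        else:
            out.append(entry)
    # The new latest entry goes first and keeps the stable .../latest/ URL,
    # so bookmarks to "latest" keep working across releases.
    return [{
        "name": f"{new_version} (latest)",
        "version": new_version,
        "url": f"{base_url}/latest/",
    }] + out

before = [{"name": "0.16.0 (latest)", "version": "0.16.0",
           "url": "https://docs.example.com/latest/"}]
after = bump_latest(before, "0.17.0", "https://docs.example.com")
print(json.dumps(after, indent=2))
```

Keeping `latest/` as a stable URL (rather than pointing it at a versioned path) is what lets the version picker and external links survive each release bump unchanged.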

Signed-off-by: oliver könig <okoenig@nvidia.com>
copy-pr-bot (Bot) commented Apr 17, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


ko3n1g (Contributor, Author) commented Apr 17, 2026

/ok to test

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Apr 17, 2026
@ko3n1g ko3n1g marked this pull request as ready for review April 17, 2026 10:22
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 17, 2026 10:22
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the docs-only documentation only (docs or docstrings) label Apr 17, 2026
@ko3n1g ko3n1g merged commit 30bc230 into NVIDIA:main Apr 17, 2026
46 checks passed
Victarry added a commit to yanring/Megatron-LM that referenced this pull request Apr 20, 2026
* origin/main: (286 commits)
  Rename MambaModel/MambaStack to HybridModel/HybridStack (NVIDIA#4099)
  Fix Megatron initialization with extra_args_provider (NVIDIA#4327)
  Fix RL to once again work with --skip-train (NVIDIA#4249)
  Add activation logging and tokens per expert logging (NVIDIA#3842)
  Make param_index_map always use unpacked (full numel) offsets (NVIDIA#4328)
  FA4 Inference (NVIDIA#4186)
  Fix RL reward due to stop token (NVIDIA#4096)
  cp: Fix UT timeout (NVIDIA#4310) (NVIDIA#4373)
  feat(ckpt): add --async-ckpt-use-cpu-shm argument (NVIDIA#4355)
  Update copy-pr-bot.yaml [skip ci]
  Docs: improve docstrings and comments in example training loop (NVIDIA#4041)
  Add QK layernorm support for dot-product attention in MambaModel (NVIDIA#4067)
  Fix bug with non-partial rollouts (NVIDIA#3964)
  [docs] ci: use parent-relative json_url for version picker (NVIDIA#4367)
  Add tables and histogram for RL staleness (NVIDIA#4097)
  Port DeepSeek Sparse Attention to `MambaModel` (NVIDIA#3553)
  docs: bump versions1.json to 0.17.0 (latest) (NVIDIA#4360)
  Fix potential coredump issue that occurs when saving a checkpoint (NVIDIA#1871)
  ci(gb200): add 1-node mr-github functional test variants (NVIDIA#4334)
  fix: wait for async P2P send before deallocating output tensor (NVIDIA#4047)
  ...

# Conflicts:
#	megatron/core/transformer/cuda_graphs.py

Labels

docs-only documentation only (docs or docstrings)


3 participants