
gpt-oss 20b support #889

Merged
chochowski merged 14 commits into NVIDIA:dkorzekwa/any_model from chochowski:mchochowski/any_model_gptoss
Feb 27, 2026

Conversation


@chochowski chochowski commented Feb 13, 2026

What does this PR do?

Adds gpt-oss-20b support for puzzle any-model pruning.

Type of change:
new feature

Overview:
Adds descriptor, converter, and YAML configuration files for expert removal. Introduces slight changes to the conversion step to account for the MXFP4-quantized gpt-oss checkpoint.
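
The MXFP4 detail is the main conversion wrinkle: the public gpt-oss checkpoint stores the MoE expert weights in the microscaling FP4 format (4-bit E2M1 values packed two per byte, with a shared power-of-two scale per 32-element block), so the converter has to expand those tensors before the usual weight remapping. Below is a minimal sketch of that dequantization step, assuming the checkpoint's paired `*_blocks` / `*_scales` layout; the standalone `dequantize_mxfp4` helper is illustrative, not the code added in this PR.

```python
import torch

# E2M1 (FP4) code points: eight magnitudes times sign, indexed by the 4-bit code.
FP4_VALUES = torch.tensor(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0]
)


def dequantize_mxfp4(blocks: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    """Illustrative MXFP4 -> fp32 expansion.

    blocks: uint8 [..., n_blocks, 16], two FP4 codes packed per byte (32 values/block).
    scales: uint8 [..., n_blocks], E8M0 exponent shared by each 32-element block.
    """
    lo = blocks & 0x0F                                    # first FP4 code in each byte
    hi = (blocks >> 4) & 0x0F                             # second FP4 code in each byte
    codes = torch.stack([lo, hi], dim=-1).flatten(-2)     # [..., n_blocks, 32]
    values = FP4_VALUES.to(blocks.device)[codes.long()]   # decode to real magnitudes
    scale = torch.exp2(scales.float() - 127.0).unsqueeze(-1)  # E8M0: 2 ** (x - 127)
    return (values * scale).flatten(-2)                   # [..., n_blocks * 32]
```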

Usage

# Add a code snippet demonstrating how to use this
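
Since the snippet above was left as a placeholder, here is a rough sketch of what an expert-removal configuration for gpt-oss-20b might express. All keys are hypothetical stand-ins rather than the YAML shipped in this PR; the block only illustrates the descriptor/converter/config split described in the overview.

```python
# Hypothetical config shape for expert-removal pruning of gpt-oss-20b.
# These keys are illustrative guesses, not the actual Puzzletron schema.
import yaml

expert_removal_cfg = {
    "model": "openai/gpt-oss-20b",
    "pruning": {
        "dimension": "moe_experts",               # prune whole experts, not channels
        "num_experts_choices": [32, 24, 16, 8],   # candidate expert counts per layer
    },
    "checkpoint": {
        "dequantize_mxfp4": True,                 # expand MXFP4 expert weights on load
        "target_dtype": "bfloat16",
    },
}

print(yaml.safe_dump(expert_removal_cfg, sort_keys=False))
```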

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes/No
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

@chochowski chochowski requested review from a team as code owners February 13, 2026 11:18
@chochowski chochowski requested review from jingyu-ml and removed request for a team February 13, 2026 11:18

copy-pr-bot bot commented Feb 13, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.


coderabbitai bot commented Feb 13, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

🗂️ Base branches to auto review (3)
  • main
  • release/.*
  • feature/.*

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


@kevalmorabia97 kevalmorabia97 requested review from danielkorzekwa and kevalmorabia97 and removed request for jingyu-ml February 13, 2026 19:33
@chochowski chochowski requested review from a team as code owners February 21, 2026 16:32
@chochowski chochowski requested review from kevalmorabia97 and removed request for a team February 21, 2026 16:32
chochowski and others added 9 commits February 23, 2026 01:17
Signed-off-by: mchochowski <mchochowski@nvidia.com>
Signed-off-by: mchochowski <mchochowski@nvidia.com>
Signed-off-by: mchochowski <mchochowski@nvidia.com>
…uator (NVIDIA#894)

This PR adds Nemo Evaluator support to the AnyModel branch. It includes
documentation and a deployment script that allow for evaluation of
AnyModel Puzzletron checkpoints with Nemo Evaluator.

We assume development on a GPU node, following the current tutorial
style, so we don't rely on Slurm-based deployment/evaluation, but
instead use direct evaluation via `eval-factory run_eval`.

---------

Signed-off-by: jrausch <jrausch@nvidia.com>
Signed-off-by: mchochowski <mchochowski@nvidia.com>
## What does this PR do?

**Overview:**

- Update the AnyModel Puzzletron tutorial to use lm-eval. We add a script that monkey-patches lm-eval to use the patched AnyModel model loading (see the sketch after this list)
- No need to run Ray deployments or to replace the NeMo Export-Deploy deployment script with a patched version
- Moved the instructions for using NeMo Evaluator to an alternative readme file
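
A minimal sketch of the monkey-patching idea (illustrative only, not the script added by this commit; it assumes lm-eval >= 0.4, where `HFLM._create_model` instantiates the underlying transformers model, and `load_anymodel_checkpoint` is a hypothetical stand-in for the patched AnyModel loading):

```python
# Illustrative sketch of patching lm-eval's model loading; not the actual script.
import lm_eval
from lm_eval.models.huggingface import HFLM


def load_anymodel_checkpoint(pretrained, **_kwargs):
    # Placeholder: the real script would apply the AnyModel/Puzzletron
    # loading patches here instead of the plain transformers path.
    from transformers import AutoModelForCausalLM
    return AutoModelForCausalLM.from_pretrained(pretrained)


def _create_anymodel(self, pretrained, **kwargs):
    # Swap the default AutoModelForCausalLM path for the AnyModel loader
    # so pruned Puzzletron checkpoints deserialize correctly.
    self._model = load_anymodel_checkpoint(pretrained, **kwargs)


HFLM._create_model = _create_anymodel

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=/path/to/anymodel_checkpoint",
    tasks=["mmlu"],
)
```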

---------

Signed-off-by: jrausch <jrausch@nvidia.com>
Signed-off-by: mchochowski <mchochowski@nvidia.com>
## What does this PR do?

**Overview:**
Updated license of examples/puzzletron/evaluation/lm_eval_anymodel.py to
match that of reference examples/llm_eval/lm_eval_hf.py.

Signed-off-by: jrausch <jrausch@nvidia.com>
Signed-off-by: mchochowski <mchochowski@nvidia.com>
Signed-off-by: mchochowski <mchochowski@nvidia.com>
…ml config

Signed-off-by: mchochowski <mchochowski@nvidia.com>
Signed-off-by: mchochowski <mchochowski@nvidia.com>
@chochowski chochowski force-pushed the mchochowski/any_model_gptoss branch from e07dbaa to ee182b5 on February 23, 2026 09:17
Signed-off-by: mchochowski <mchochowski@nvidia.com>
Collaborator

@kevalmorabia97 kevalmorabia97 left a comment


Left some comments. Also seeing that pre-commit formatting was not applied. Please run `pre-commit run --all-files`.

Comment on lines 149 to 151
Collaborator


Why do we need this env variable and to add the workdir to sys.path?

Comment on lines 297 to 305
Collaborator


Why do we need to broadcast_list twice instead of reusing output of first call for 2nd one?

Author


This is actually nemo-deploy code with a patch; I didn't want to touch the internals, only update the model loading.

chochowski and others added 4 commits February 26, 2026 05:57
Signed-off-by: mchochowski <mchochowski@nvidia.com>
Signed-off-by: mchochowski <mchochowski@nvidia.com>
Signed-off-by: chochowski <Marcin.Chochowski@gmail.com>
Co-authored-by: Keval Morabia <28916987+kevalmorabia97@users.noreply.github.com>
Signed-off-by: chochowski <Marcin.Chochowski@gmail.com>
@chochowski chochowski merged commit 2409ac8 into NVIDIA:dkorzekwa/any_model Feb 27, 2026
2 checks passed
danielkorzekwa pushed a commit that referenced this pull request Mar 4, 2026
Adds gpt-oss-20b support for puzzle any-model pruning: descriptor, converter, and YAML configuration files for expert removal, plus slight changes to the conversion step to account for the MXFP4-quantized gpt-oss checkpoint.

---------
Signed-off-by: mchochowski <mchochowski@nvidia.com>
Signed-off-by: jrausch <jrausch@nvidia.com>
Signed-off-by: chochowski <Marcin.Chochowski@gmail.com>
Co-authored-by: J Rausch <38429553+j-rausch@users.noreply.github.com>
Co-authored-by: Keval Morabia <28916987+kevalmorabia97@users.noreply.github.com>
Signed-off-by: Daniel Korzekwa <dkorzekwa@nvidia.com>