Remove unsafe torch.load(weights_only=False) and legacy PyTorch model loading#2389

Merged
jambayk merged 2 commits into main from jambayk/torch-load
Apr 8, 2026

Conversation

@jambayk
Contributor

@jambayk jambayk commented Apr 7, 2026

Describe your changes

Motivation

torch.load(path, weights_only=False) deserializes arbitrary Python objects via pickle and is a known security risk. PyTorch has deprecated this behavior. Olive's PYTORCH_ENTIRE_MODEL format relied on this unsafe call as its default loading path, silently exposing users to deserialization attacks.

Changes

Removed PYTORCH_ENTIRE_MODEL format: the enum value, the torch.load(path, weights_only=False) call in PyTorchModelHandler.load_model(), and all references across the codebase.

PyTorchModelHandler now requires model_loader, except for PYTORCH_TORCH_SCRIPT (uses torch.jit.load) and PYTORCH_SLICE_GPT_MODEL (has a dedicated loader). The default format is now PYTORCH_STATE_DICT.

Deleted TorchTRTConversion pass. This pass was unused and saved entire models with torch.save(model), which depended on the removed PYTORCH_ENTIRE_MODEL loading path. Files removed:

  • olive/passes/pytorch/torch_trt_conversion.py
  • olive/passes/pytorch/trt_utils.py
  • test/passes/pytorch/test_torch_trt_conversion.py

Removed PyTorchModelHandler support from GptqQuantizer (AutoGPTQ). The PyTorch branch saved entire models via torch.save and loaded via inherit_pytorch_from_pytorch. Only HfModelHandler is now accepted. Cleaned up related imports, config descriptions, and the isinstance check.

Removed inherit_pytorch_from_pytorch from olive/passes/pytorch/common.py. Its only caller was the AutoGPTQ PyTorch branch. Made model_file_format a required parameter in inherit_pytorch_from_hf (no more PYTORCH_ENTIRE_MODEL default).

CLI now requires _model_loader. olive/cli/base.py previously allowed model_path + model_script without _model_loader, which fell through to the unsafe torch.load. Now _model_loader is always required for PyTorch models.

Added PYTORCH_DIFFUSERS format. DiffusersModelHandler previously used PYTORCH_ENTIRE_MODEL as a metadata tag. Replaced with a dedicated enum value since diffusers has its own loading logic.

Checklist before requesting a review

  • Add unit tests for this change.
  • Make sure all tests can pass.
  • Update documents if necessary.
  • Lint and apply fixes to your code by running lintrunner -a
  • Is this a user-facing change? If yes, give a description of this change to be included in the release notes.

Release notes

PyTorchModelHandler no longer supports loading models directly via torch.load. A model_loader function is now required. The TorchTRTConversion pass has been removed. GptqQuantizer (AutoGPTQ) no longer accepts PyTorchModelHandler input; use HfModelHandler instead.

(Optional) Issue link

… loading

Remove PYTORCH_ENTIRE_MODEL format which used torch.load(weights_only=False)
to deserialize arbitrary pickled models. This is a known security risk and
is deprecated by PyTorch. PyTorchModelHandler now requires model_loader for
all formats except TorchScript and SliceGPT.

Also removes the unused TorchTRTConversion pass, removes PyTorchModelHandler
support from GptqQuantizer (AutoGPTQ), and updates CLI/docs/tests accordingly.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings April 7, 2026 22:06
Contributor

Copilot AI left a comment


Pull request overview

This PR hardens Olive’s PyTorch model loading by removing the insecure torch.load(..., weights_only=False) path (and the associated PYTORCH_ENTIRE_MODEL format), requiring explicit safe loaders, and deleting now-incompatible legacy functionality (TorchTRTConversion, AutoGPTQ PyTorch branch).

Changes:

  • Removed ModelFileFormat.PYTORCH_ENTIRE_MODEL and the unsafe torch.load(weights_only=False) loading path; default PyTorch format is now PYTORCH_STATE_DICT and model_loader is required (except TorchScript / SliceGPT).
  • Removed the unused TorchTRTConversion pass (and its utilities/tests) and removed PyTorchModelHandler support from GptqQuantizer (AutoGPTQ).
  • Updated CLI/docs/tests to align with the new requirement for explicit PyTorch model loading functions, and introduced PYTORCH_DIFFUSERS format for Diffusers metadata.

Reviewed changes

Copilot reviewed 17 out of 17 changed files in this pull request and generated no comments.

Per-file summary:
olive/model/handler/pytorch.py Enforces safer PyTorch loading by requiring model_loader and removing unsafe torch.load(..., weights_only=False) branch.
olive/constants.py Removes PYTORCH_ENTIRE_MODEL; adds PYTORCH_DIFFUSERS.
olive/cli/base.py Requires _model_loader in model scripts for CLI PyTorch models to avoid falling back to unsafe loading.
olive/passes/pytorch/autogptq.py Drops PyTorchModelHandler branch; quantizer now operates on HfModelHandler only.
olive/passes/pytorch/common.py Removes inherit_pytorch_from_pytorch; makes model_file_format required for inherit_pytorch_from_hf.
olive/olive_config.json Removes TorchTRTConversion registration.
olive/model/handler/diffusers.py Switches Diffusers metadata tag to PYTORCH_DIFFUSERS.
olive/passes/pytorch/torch_trt_conversion.py Deleted (legacy pass depended on unsafe whole-model serialization).
olive/passes/pytorch/trt_utils.py Deleted (utilities for TorchTRTConversion).
test/passes/pytorch/test_torch_trt_conversion.py Deleted (tests for removed TorchTRTConversion pass).
test/model/test_pytorch_model.py Updates tests to supply model_loader and removes unsafe-load unit test.
test/systems/test_local.py Updates test to construct PyTorchModelHandler with a model_loader.
docs/source/reference/pass.rst Removes TorchTRTConversion reference docs.
docs/source/reference/options.md Removes TorchTRTConversion from pass options list.
docs/source/reference/cli.rst Updates CLI docs to no longer claim PyTorch.EntireModel as the default.
docs/source/how-to/configure-workflows/how-to-configure-model.md Updates PyTorch model configuration guidance to require model_loader.
docs/source/features/model-conversion/convert-pytorch.md Removes TorchTRTConversion documentation section.
Comments suppressed due to low confidence (2)

olive/model/handler/pytorch.py:146

  • The validation logic allows model_loader to be a string even when model_script is not provided if model_path is set (because of the trailing or model_path). This will fail later in load_model() when UserModuleLoader has no user_module (assertion in load_object). Tighten the check so that a string model_loader always requires model_script, and only allow model_path alone for the TorchScript/SliceGPT formats where model_loader is intentionally optional.
```python
        if model_loader is None and model_file_format not in (
            ModelFileFormat.PYTORCH_TORCH_SCRIPT,
            ModelFileFormat.PYTORCH_SLICE_GPT_MODEL,
        ):
            raise ValueError(
                "model_loader is required for PyTorchModelHandler. Either provide a callable model_loader,"
                " or specify model_script with a model_loader function name."
            )
        if not (isinstance(model_loader, Callable) or (isinstance(model_loader, str) and model_script) or model_path):
            raise ValueError(
                "model_path is required since model_loader is not callable or model_script is not provided"
            )
```

docs/source/how-to/configure-workflows/how-to-configure-model.md:38

  • The example code block is labeled as JSON but contains a trailing comma after the last property, which makes it invalid JSON. Please remove the trailing comma (or relabel the block as pseudo-JSON) so users can copy/paste the config successfully.
A `model_loader` function is required to load PyTorch models. The `model_script` specifies the script containing the loading function, and `model_loader` is the name of that function.
```json
{
    "type": "PytorchModel",
    "model_path": "model_dir",
    "model_script": "load_model.py",
    "model_loader": "load_model",
}
```

devang-ml
devang-ml previously approved these changes Apr 7, 2026
@jambayk jambayk enabled auto-merge (squash) April 7, 2026 22:26
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@jambayk jambayk merged commit fd660c1 into main Apr 8, 2026
11 checks passed
@jambayk jambayk deleted the jambayk/torch-load branch April 8, 2026 01:08