Remove unsafe torch.load(weights_only=False) and legacy PyTorch model loading#2389
Merged
Conversation
… Remove the `PYTORCH_ENTIRE_MODEL` format, which used `torch.load(weights_only=False)` to deserialize arbitrary pickled models. This is a known security risk and the behavior is deprecated by PyTorch. `PyTorchModelHandler` now requires `model_loader` for all formats except TorchScript and SliceGPT. Also removes the unused `TorchTRTConversion` pass, removes `PyTorchModelHandler` support from `GptqQuantizer` (AutoGPTQ), and updates CLI/docs/tests accordingly.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Contributor
Pull request overview
This PR hardens Olive’s PyTorch model loading by removing the insecure torch.load(..., weights_only=False) path (and the associated PYTORCH_ENTIRE_MODEL format), requiring explicit safe loaders, and deleting now-incompatible legacy functionality (TorchTRTConversion, AutoGPTQ PyTorch branch).
Changes:
- Removed `ModelFileFormat.PYTORCH_ENTIRE_MODEL` and the unsafe `torch.load(weights_only=False)` loading path; the default PyTorch format is now `PYTORCH_STATE_DICT` and `model_loader` is required (except TorchScript / SliceGPT).
- Removed the unused `TorchTRTConversion` pass (and its utilities/tests) and removed `PyTorchModelHandler` support from `GptqQuantizer` (AutoGPTQ).
- Updated CLI/docs/tests to align with the new requirement for explicit PyTorch model loading functions, and introduced the `PYTORCH_DIFFUSERS` format for Diffusers metadata.
Reviewed changes
Copilot reviewed 17 out of 17 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| `olive/model/handler/pytorch.py` | Enforces safer PyTorch loading by requiring `model_loader` and removing the unsafe `torch.load(..., weights_only=False)` branch. |
| `olive/constants.py` | Removes `PYTORCH_ENTIRE_MODEL`; adds `PYTORCH_DIFFUSERS`. |
| `olive/cli/base.py` | Requires `_model_loader` in model scripts for CLI PyTorch models to avoid falling back to unsafe loading. |
| `olive/passes/pytorch/autogptq.py` | Drops the `PyTorchModelHandler` branch; the quantizer now operates on `HfModelHandler` only. |
| `olive/passes/pytorch/common.py` | Removes `inherit_pytorch_from_pytorch`; makes `model_file_format` required for `inherit_pytorch_from_hf`. |
| `olive/olive_config.json` | Removes the `TorchTRTConversion` registration. |
| `olive/model/handler/diffusers.py` | Switches the Diffusers metadata tag to `PYTORCH_DIFFUSERS`. |
| `olive/passes/pytorch/torch_trt_conversion.py` | Deleted (legacy pass depended on unsafe whole-model serialization). |
| `olive/passes/pytorch/trt_utils.py` | Deleted (utilities for `TorchTRTConversion`). |
| `test/passes/pytorch/test_torch_trt_conversion.py` | Deleted (tests for the removed `TorchTRTConversion` pass). |
| `test/model/test_pytorch_model.py` | Updates tests to supply `model_loader` and removes the unsafe-load unit test. |
| `test/systems/test_local.py` | Updates the test to construct `PyTorchModelHandler` with a `model_loader`. |
| `docs/source/reference/pass.rst` | Removes `TorchTRTConversion` reference docs. |
| `docs/source/reference/options.md` | Removes `TorchTRTConversion` from the pass options list. |
| `docs/source/reference/cli.rst` | Updates CLI docs to no longer claim `PyTorch.EntireModel` as the default. |
| `docs/source/how-to/configure-workflows/how-to-configure-model.md` | Updates PyTorch model configuration guidance to require `model_loader`. |
| `docs/source/features/model-conversion/convert-pytorch.md` | Removes the `TorchTRTConversion` documentation section. |
Comments suppressed due to low confidence (2)
olive/model/handler/pytorch.py:146
- The validation logic allows `model_loader` to be a string even when `model_script` is not provided, as long as `model_path` is set (because of the trailing `or model_path`). This will fail later in `load_model()` when `UserModuleLoader` has no `user_module` (assertion in `load_object`). Tighten the check so that a string `model_loader` always requires `model_script`, and only allow `model_path` alone for the TorchScript/SliceGPT formats where `model_loader` is intentionally optional.
```python
if model_loader is None and model_file_format not in (
    ModelFileFormat.PYTORCH_TORCH_SCRIPT,
    ModelFileFormat.PYTORCH_SLICE_GPT_MODEL,
):
    raise ValueError(
        "model_loader is required for PyTorchModelHandler. Either provide a callable model_loader,"
        " or specify model_script with a model_loader function name."
    )
if not (isinstance(model_loader, Callable) or (isinstance(model_loader, str) and model_script) or model_path):
    raise ValueError(
        "model_path is required since model_loader is not callable or model_script is not provided"
    )
```
docs/source/how-to/configure-workflows/how-to-configure-model.md:38
- The example code block is labeled as JSON but contains a trailing comma after the last property, which makes it invalid JSON. Please remove the trailing comma (or relabel the block as pseudo-JSON) so users can copy/paste the config successfully.
A `model_loader` function is required to load PyTorch models. The `model_script` specifies the script containing the loading function, and `model_loader` is the name of that function.
```json
{
  "type": "PytorchModel",
  "model_path": "model_dir",
  "model_script": "load_model.py",
  "model_loader": "load_model",
}
```
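A loading script matching that config might look like the following sketch. The file name `load_model.py` comes from the config above, but `MyModel` is a hypothetical architecture class; the point is that the architecture is rebuilt in code and only tensor weights come from disk via `weights_only=True`.

```python
# load_model.py -- illustrative sketch; MyModel stands in for your architecture
import torch
from torch import nn


class MyModel(nn.Module):
    """Hypothetical model; replace with the real architecture definition."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x)


def load_model(model_path):
    """Rebuild the model in code, then load only a state dict from disk."""
    model = MyModel()
    # weights_only=True restricts deserialization to tensors/containers,
    # so no arbitrary pickled code can run during loading
    state_dict = torch.load(model_path, weights_only=True)
    model.load_state_dict(state_dict)
    model.eval()
    return model
```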
devang-ml previously approved these changes (Apr 7, 2026)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
shaahji approved these changes (Apr 8, 2026)
Describe your changes

Motivation

`torch.load(path, weights_only=False)` deserializes arbitrary Python objects via pickle and is a known security risk. PyTorch has deprecated this behavior. Olive's `PYTORCH_ENTIRE_MODEL` format relied on this unsafe call as its default loading path, silently exposing users to deserialization attacks.

Changes
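The pickle risk can be demonstrated in a few lines of plain Python, with no PyTorch involved: any class can override `__reduce__` so that the unpickler calls an arbitrary function. Here the payload calls a harmless `eval`; a real attacker would substitute `os.system` or similar.

```python
import pickle


class Malicious:
    """Any class can hijack unpickling via __reduce__."""

    def __reduce__(self):
        # On unpickling, the unpickler calls eval("40 + 2") --
        # a stand-in for arbitrary code execution
        return (eval, ("40 + 2",))


payload = pickle.dumps(Malicious())
restored = pickle.loads(payload)  # runs eval; returns 42, not a Malicious instance
```

This is exactly why `weights_only=False` is dangerous: the model file controls what code runs at load time.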
- Removed the `PYTORCH_ENTIRE_MODEL` format: the enum value, the `torch.load(path, weights_only=False)` call in `PyTorchModelHandler.load_model()`, and all references across the codebase. `PyTorchModelHandler` now requires `model_loader`, except for `PYTORCH_TORCH_SCRIPT` (uses `torch.jit.load`) and `PYTORCH_SLICE_GPT_MODEL` (has a dedicated loader). The default format is now `PYTORCH_STATE_DICT`.
- Deleted the `TorchTRTConversion` pass. This pass was unused and saved entire models with `torch.save(model)`, which depended on the removed `PYTORCH_ENTIRE_MODEL` loading path. Files removed: `olive/passes/pytorch/torch_trt_conversion.py`, `olive/passes/pytorch/trt_utils.py`, `test/passes/pytorch/test_torch_trt_conversion.py`.
- Removed `PyTorchModelHandler` support from `GptqQuantizer` (AutoGPTQ). The PyTorch branch saved entire models via `torch.save` and loaded via `inherit_pytorch_from_pytorch`. Only `HfModelHandler` is now accepted. Cleaned up related imports, config descriptions, and the `isinstance` check.
- Removed `inherit_pytorch_from_pytorch` from `olive/passes/pytorch/common.py`. Its only caller was the AutoGPTQ PyTorch branch. Made `model_file_format` a required parameter in `inherit_pytorch_from_hf` (no more `PYTORCH_ENTIRE_MODEL` default).
- CLI now requires `_model_loader`. `olive/cli/base.py` previously allowed `model_path` + `model_script` without `_model_loader`, which fell through to the unsafe `torch.load`. Now `_model_loader` is always required for PyTorch models.
- Added the `PYTORCH_DIFFUSERS` format. `DiffusersModelHandler` previously used `PYTORCH_ENTIRE_MODEL` as a metadata tag. Replaced with a dedicated enum value since diffusers has its own loading logic.

Checklist before requesting a review
- `lintrunner -a`

Release notes

`PyTorchModelHandler` no longer supports loading models directly via `torch.load`. A `model_loader` function is now required. The `TorchTRTConversion` pass has been removed. `GptqQuantizer` (AutoGPTQ) no longer accepts `PyTorchModelHandler` input; use `HfModelHandler` instead.

(Optional) Issue link
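For users migrating from the removed entire-model path, a model config updated to the new requirement follows the `PytorchModel` shape shown in the docs (file and function names here are illustrative):

```json
{
  "type": "PytorchModel",
  "model_path": "model_dir",
  "model_script": "load_model.py",
  "model_loader": "load_model"
}
```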