Hi team,
I'm facing an issue loading Bark model checkpoints under PyTorch 2.6+ due to the new `weights_only=True` default in `torch.load`. Below is a summary of my setup and what I've tried:
Context and what I tried
- I use Bark directly from this repository, inside a FastAPI server (example below).
- On startup, Bark internally calls `torch.load(...)` through its `preload_models()` logic.
- Since PyTorch 2.6, I get an error about `weights_only` and unsafe globals (`numpy.core.multiarray.scalar`, etc.).
To solve this, I tried using both:
```python
from torch.serialization import add_safe_globals, safe_globals
import numpy as np

add_safe_globals([np.core.multiarray.scalar])

# During model preloading:
with safe_globals([np.core.multiarray.scalar]):
    preload_models()
```

The problem
- Despite using both `add_safe_globals` and the `safe_globals` context manager, I still get the same `UnpicklingError` / `weights_only` error.
- I have no way of passing `weights_only=False` to `torch.load` from user code, and I don't want to patch Bark's source code in the repo.
- The model weights are downloaded directly from the official repo (not third-party or modified).
What I would like to know
- Is there a workaround to allow loading checkpoints with Bark (without editing the repo code) under these new PyTorch restrictions?
- Is there any officially supported way to override or bypass `weights_only`, or am I missing a recommended approach here?
- Any advice for users who just want to run Bark as intended without risking model/package integrity?
BTW: if this issue sounds AI-generated, that's because it is, but the problem is real; please help.
Thanks for your help!