Is there a way to swap a set of parameters inside of an .onnx / .ort graph with an identically shaped set of parameters? #6090
Comments
If you use the external-data format, you can replace the data file representing external tensors with new values as you wish. Alternatively, you can make the weights input parameters of the model and then vary them for each invocation. However, this will incur a performance penalty (potentially huge) if ORT has to do things like move the weights to GPU or transpose them, etc. (which is done once at session creation if the weights are not inputs).
@gramalingam After reading the docs and tinkering with some of those functions, I am still not sure I quite understand the purpose of the external-data format, or whether it is compatible with the onnxruntime API (as opposed to onnx). What is the purpose of the format, and could you provide pseudo code to show how to load a subset of params with onnxruntime?
Yes, onnxruntime also supports the external-data format, which is part of the onnx standard. The external-data format serves a couple of purposes. First, the protobuf format has a limit of 2GB on the size of a protobuf object (in terms of the size of the serialized representation). Models which exceed this size can exploit the external-data format to get around this limitation. Second, even if the model size is less than 2GB, weights end up dominating the size of the model representation. Hence, it is convenient and efficient to load these weights only if required. It helps analysis/optimization tools that care about the graph, and not so much about the weights.
Is there a way to specify which parameters in the graph to load weights into? Or does this capability not exist yet?
In my application I am adding an initializer with the […] Because I am not supplying a model path (I'm initializing from an array), does this imply the two methods are not compatible?
This issue is not resolved! I still think this is very necessary functionality that ONNX is completely missing. There have been multiple papers showing the effectiveness of LoRA-switching ensembles, with LoRA Land being just one recent one off the top of my head. For myself and a couple of others, this missing functionality is a deal breaker for ONNX, so I would not brush it off so lightly! If it really is very difficult to implement, please let me know, as I will move away from my work in ONNX. This issue on a different repo appeared to have managed it, but looking at their code, it looks very janky and may require TensorRT.
For loading the ONNX model using the LoRA method, perhaps you can refer to my code: AIDB
Thanks for the link. Because this seems of interest to others, here's what I ended up doing; I hope it helps someone else.

Let's call the base model […]. To prevent […] from storing […]:

```python
import onnx
from pathlib import Path

def export_model_as_external_data(model_onnx_path, model_save_path, gt_size=1024):
    '''Exports the model at `model_onnx_path` to one called `model_save_path`,
    with all parameter sets over size `gt_size` becoming external data.'''
    print("[LoRA Export] Converting model to external data format...", end=' ')
    loc = Path(model_onnx_path).parent  # directory of the source model
    model = onnx.load(model_onnx_path)
    onnx.save_model(
        model,
        f=model_save_path,
        save_as_external_data=True,
        all_tensors_to_one_file=False,
        size_threshold=gt_size,
    )
    print(" Success")
```

Then for creating the […]. Similar story for […].

Two gotchas I had: […]
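Building on that export, the actual "swap" at deployment time can be as simple as rewriting an external tensor file before creating the session. A sketch, assuming a tensor was exported (with `all_tensors_to_one_file=False`) to a file named `lora_A.weight`; the file name, shape, and directory here are hypothetical:

```python
import os
import tempfile

import numpy as np

# ONNX external data is the tensor's raw little-endian bytes, which is
# exactly what numpy's tofile() writes for a contiguous array.
weights_dir = tempfile.mkdtemp()  # stand-in for the exported model's directory
weight_file = os.path.join(weights_dir, "lora_A.weight")

set_a = np.zeros((64, 8), dtype=np.float32)
set_b = np.ones((64, 8), dtype=np.float32)  # identically shaped replacement

set_a.tofile(weight_file)  # initial export wrote parameter set A
set_b.tofile(weight_file)  # swap: overwrite in place with set B

# A session created after this point picks up set B; the dtype and shape
# must match exactly what the graph expects.
swapped = np.fromfile(weight_file, dtype=np.float32).reshape(64, 8)
```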
Ask a Question
Question
I want to be able to swap params at inference time to facilitate a LoRA deployment.
E.g., in torch I could do
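For comparison, an in-place parameter swap in PyTorch (a hypothetical standalone layer here; any identically shaped tensor works) looks like:

```python
import torch

# Hypothetical layer standing in for a module inside a larger model.
layer = torch.nn.Linear(4, 2, bias=False)

new_weight = torch.ones_like(layer.weight)  # identically shaped parameter set
with torch.no_grad():
    layer.weight.copy_(new_weight)          # swap the weights in place
```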
Notes
I am using an ORT file for inference, if that matters.