
model.ByteSize() is negative when converting microsoft/phi2 model #1642

Open
guotuofeng opened this issue Jan 12, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@guotuofeng

guotuofeng commented Jan 12, 2024

System Info

* optimum: 1.16.1
* Windows amd64
* Python 3.8.18
* onnxruntime nightly build
* onnx 1.15.0
* protobuf 3.20.3
* torch 2.1.2

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction (minimal, reproducible, runnable)

Run the following command:

python -m optimum.exporters.onnx -m microsoft/phi-2 --library-name transformers .

The following error is raised:

(screenshot of the error output)

I added a one-line print in graph_transformations.py:

(screenshot of the printed output)

Expected behavior

We might need to add the following check:

(screenshot of the proposed check)

I'm just not sure why model.ByteSize() returns -1765569341.
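A plausible explanation for the negative value: protobuf caches the serialized size in a signed 32-bit integer, so any model past 2 GiB wraps around to a negative number. A minimal sketch of that wraparound (the 2_529_397_955-byte input here is back-computed from the reported value, not measured from the actual phi-2 export):

```python
import ctypes

def wrap_int32(n: int) -> int:
    """Simulate storing a byte count in a signed 32-bit integer,
    which is how a >2 GiB size can come back negative."""
    return ctypes.c_int32(n & 0xFFFFFFFF).value

# 2_529_397_955 bytes (~2.36 GiB) wraps to exactly the value
# seen in the error: 2_529_397_955 - 2**32 == -1765569341.
print(wrap_int32(2_529_397_955))  # -1765569341
```

If this is the cause, the real serialized size of the exported model would be roughly 2.5 GB, just over the 2 GiB signed-int limit.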

guotuofeng added the bug label on Jan 12, 2024
@fxmarty
Collaborator

fxmarty commented Jan 17, 2024

Thank you, related to #1044 & https://github.com/microsoft/Olive/blob/697948c2a1f7fe938609e1c97060d17f255c322e/olive/passes/onnx/optimum_merging.py#L44-L49

This is a bug in ModelProto.ByteSize() on Windows only.

As a workaround, can you try: python -m optimum.exporters.onnx -m microsoft/phi-2 --library-name transformers . --no-post-process

It would be great if you could open an issue at https://github.com/onnx/onnx, sharing the ONNX model there along with a small reproduction like:

import onnx

model = onnx.load(model_path)
print(model.ByteSize())

@guotuofeng
Author

Thanks for the info. I just created the issue in the ONNX repo.

@xadupre

xadupre commented Jan 25, 2024

How do you use ByteSize()? Maybe we can implement a function that returns the result you compute with it. I don't think protobuf will update its API, since it is not meant to support models bigger than 2 GB.

The other option is to export the model with external weights enabled. A new API https://onnx.ai/onnx/api/model_container.html was introduced to make it easier to build such model with external weights without serialization of the weights. That would be the direction I would recommend.
