Barracuda with savedmodel file format #326

Open · romygt opened this issue May 10, 2023 · 2 comments

romygt commented May 10, 2023

Can I use Barracuda with the SavedModel format instead of ONNX?

dilne commented Jun 28, 2023

ONNX is required. You can convert a PyTorch saved model to ONNX as shown below. I've included the ONNX checker, which should confirm everything worked:

import torch
import torch.onnx
import onnx

# Load path
load_path = r"your/model/directory/name_of_model.pt"

# Output path
output_path = r"your/output/directory/name_of_model.onnx"

# Assumes the full model object was saved with torch.save(model), not just a state_dict
model = torch.load(load_path)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
model.eval()

# Input to the model (you need to define the size for the model inputs).
# This example is for a single-batch, single-channel tensor of size 28 x 28.
x = torch.randn(1, 1, 28, 28, requires_grad=True).to(device)
torch_out = model(x)  # run a forward pass so we have a reference output

# Export the model
torch.onnx.export(model,                     # model being run
                  x,                         # model input (or a tuple for multiple inputs)
                  output_path,               # where to save the model (can be a file or file-like object)
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=9,           # the ONNX opset version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names=['input'],     # the model's input names
                  output_names=['output'],   # the model's output names
                  )

onnx_model = onnx.load(output_path)
onnx.checker.check_model(onnx_model)
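
If you want to go a step further than the structural check, you can run the exported model with ONNX Runtime and compare its output against the PyTorch output. A minimal sketch, assuming onnxruntime is installed (it is not part of the snippet above) and reusing x, torch_out, and output_path from it:

import numpy as np
import onnxruntime as ort

# Assumption: onnxruntime is installed (pip install onnxruntime).
session = ort.InferenceSession(output_path, providers=["CPUExecutionProvider"])
onnx_out = session.run(None, {"input": x.detach().cpu().numpy()})[0]

# The two outputs should agree up to small numerical differences.
np.testing.assert_allclose(torch_out.detach().cpu().numpy(), onnx_out, rtol=1e-3, atol=1e-5)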

hayden-donnelly commented

Not sure if the TensorFlow SavedModel format is any different, but I use this code to convert my TF SavedModels to ONNX:

import tensorflow as tf
import tf2onnx

input_path = "../data/models/"
model_name = "simple_upsampler_bilinear"
file_type = ''
output_path = "../data/models/onnx_models/"

# Load the model.
pre_model = tf.keras.models.load_model(input_path + model_name + file_type)

# Convert the Keras model to ONNX.
tf2onnx.convert.from_keras(pre_model, output_path=output_path + model_name + ".onnx", opset=9)
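
If you have a raw SavedModel directory rather than a model that loads back as a Keras model, tf2onnx can also convert it directly from the command line. A minimal sketch, assuming the SavedModel directory sits at the path below (adjust to your own layout):

python -m tf2onnx.convert --saved-model ../data/models/simple_upsampler_bilinear --output ../data/models/onnx_models/simple_upsampler_bilinear.onnx --opset 9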
