
Pyinstaller: Error loading openvino model after packaging the scripts #8096

Closed

srbhgjr opened this issue Nov 12, 2023 · 2 comments
Labels
triage Please triage and relabel this issue

Comments

srbhgjr commented Nov 12, 2023

Problem

I have set up a simple project that loads an OpenVINO model from an XML file in a Python script. When I run the script directly, everything works fine, but when I run the exe produced by packaging the project, the script fails to load the model.
I have already adjusted the path handling so that the data files are resolved correctly whether the script is running in a frozen or unfrozen state.

Details

Here's the project directory structure

.
├── cleanup.sh
├── model_assets
│   ├── v1
│   │   ├── ir_aip.py
│   │   ├── new_frcnn.bin
│   │   └── new_frcnn.xml
│   └── v2
│       ├── classifier_model.bin
│       ├── classifier_model.xml
│       ├── ir_aip.py
│       └── localization.png
├── models
│   ├── v1.py
│   └── v2.py
├── run.py
└── setup.py

Here's run.py

from PIL import Image
from models.v2 import process_image

if __name__ == '__main__':
  image = Image.open("./model_assets/v2/localization.png")

  image_pil = image.convert("RGB")
  process_image(image_pil)
  print("processed")

and here's v2.py

from torchvision import transforms
import openvino as ov
import torch
import os
import sys

# Resolve the application root for both frozen (bundled) and normal runs.
if getattr(sys, 'frozen', False):
    print("in frozen")
    # application_path = os.path.dirname(sys.executable)  # if bundled using cx_Freeze
    application_path = sys._MEIPASS  # if bundled using pyinstaller
else:
    print("not frozen", os.getcwd())
    application_path = os.getcwd()

def process_image(original_img):
    try:
        # Paths to the OpenVINO IR model (xml) and its weights (bin).
        iropt_model01 = os.path.join(application_path, "model_assets", "v2", "classifier_model.xml")
        iropt_weights01 = os.path.join(application_path, "model_assets", "v2", "classifier_model.bin")
        print('__file__', __file__)
        print("root list", os.listdir(os.path.dirname(application_path)))
        print('iropt_model01', iropt_model01)

        if os.path.exists(iropt_model01):
            print(f"The file {iropt_model01} exists.")
        else:
            print(f"The file {iropt_model01} does not exist.")

        # Read and compile the model for CPU inference.
        ie = ov.Core()
        model = ie.read_model(model=iropt_model01, weights=iropt_weights01)
        compiled_model = ie.compile_model(model=model, device_name="CPU")

        data_transform = transforms.Compose([transforms.ToTensor()])
        original_img = original_img.resize((224, 224))
        img = data_transform(original_img)
        # expand batch dimension
        img = torch.unsqueeze(img, dim=0)

        final_result = compiled_model(img)
        return original_img, final_result
    except Exception as e:
        print("exception occurred in model", e)
        return None, None
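
The frozen/unfrozen branching above is the standard PyInstaller resource-path pattern; as a self-contained sketch (the resource_path helper name is illustrative, not part of the project):

import os
import sys

def resource_path(relative_path):
    # PyInstaller's --onefile bootloader unpacks bundled data files to the
    # temporary directory sys._MEIPASS; in a normal run, fall back to the
    # current working directory.
    base_path = getattr(sys, '_MEIPASS', os.getcwd())
    return os.path.join(base_path, relative_path)

# e.g. resource_path(os.path.join("model_assets", "v2", "classifier_model.xml"))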

Script output logs

not frozen /home/azureuser/repos/testing/model_tests
__file__ /home/azureuser/repos/testing/model_tests/models/v2.py
root list ['exe_tests', 'readme.md', 'model_tests']
iropt_model01 /home/azureuser/repos/testing/model_tests/model_assets/v2/classifier_model.xml
The file /home/azureuser/repos/testing/model_tests/model_assets/v2/classifier_model.xml exists.
processed

Command I used to generate the exe

pyinstaller --onefile --add-data="./model_assets/:model_assets" --add-data="./models/:models" run.py

Spec file

# -*- mode: python ; coding: utf-8 -*-


a = Analysis(
    ['run.py'],
    pathex=[],
    binaries=[],
    datas=[('./model_assets/', 'model_assets'), ('./models/', 'models')],
    hiddenimports=[],
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
)
pyz = PYZ(a.pure)

exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    [],
    name='run',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)

Exe output logs

torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
torch/_jit_internal.py:857: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x7f33eb09ff70>.
  warnings.warn(
torch/_jit_internal.py:857: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x7f33eb040940>.
  warnings.warn(
in frozen
__file__ /tmp/_MEIsnn22m/models/v2.pyc
root list ['.font-unix', 'appInsights-nodeAIF-d9b70cd4-b9f9-4d70-929b-a071c400b217', '.Test-unix', 'model.model.ce94127b', 'tmux-1000', '.XIM-unix', 'snap-private-tmp', 'vscode-typescript1000', 'tmp8n5nctpl', '.ICE-unix', 'pyright-760509-fT8Zvg5wrUc0', 'python-languageserver-cancellation', 'migrate_db', 'tmppg2rdcrz', 'systemd-private-5c0adf38851c4f4db92772706bd8d827-systemd-logind.service-kYsd8h', 'hsperfdata_jenkins', '_MEIsnn22m', 'jetty-0_0_0_0-9090-war-_-any-11329012959618428727', 'systemd-private-5c0adf38851c4f4db92772706bd8d827-systemd-resolved.service-YQZQxj', '.X11-unix', 'systemd-private-5c0adf38851c4f4db92772706bd8d827-chrony.service-SHtjG9', 'winstone14475318769676059656.jar']
iropt_model01 /tmp/_MEIsnn22m/model_assets/v2/classifier_model.xml
The file /tmp/_MEIsnn22m/model_assets/v2/classifier_model.xml exists.
exception occurred in model Exception from src/inference/src/core.cpp:100:
[ NETWORK_NOT_READ ] Unable to read the model: /tmp/_MEIsnn22m/model_assets/v2/classifier_model.xml Please check that model format: xml is supported and the model is correct. Available frontends: tf pytorch 

processed

As you can see from the exe logs, the xml file is present in the bundle, yet OpenVINO still fails to read it (the error reports only tf and pytorch as available frontends).

srbhgjr added the triage label Nov 12, 2023
rokm (Member) commented Nov 12, 2023

My first guess would be that a lazily-loaded plugin that handles your model type is not collected.

Does adding --collect-submodules openvino --collect-binaries openvino --collect-data openvino to your PyInstaller command help?
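
For a spec-file build, the rough equivalent of those three flags is PyInstaller's collect_all hook utility, which gathers the package's submodules, shared libraries, and data files (including the lazily-loaded frontend plugins); a sketch against the spec file above, with illustrative variable names:

# run.spec (sketch)
from PyInstaller.utils.hooks import collect_all

# collect_all returns (datas, binaries, hiddenimports) for the package
ov_datas, ov_binaries, ov_hiddenimports = collect_all('openvino')

a = Analysis(
    ['run.py'],
    pathex=[],
    binaries=ov_binaries,
    datas=[('./model_assets/', 'model_assets'), ('./models/', 'models')] + ov_datas,
    hiddenimports=ov_hiddenimports,
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
)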

srbhgjr (Author) commented Nov 12, 2023

@rokm Thank you very much, this worked.
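
For reference, the full working command would then presumably be:

pyinstaller --onefile --add-data="./model_assets/:model_assets" --add-data="./models/:models" --collect-submodules openvino --collect-binaries openvino --collect-data openvino run.py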

rokm closed this as completed Nov 12, 2023
github-actions bot locked as resolved and limited conversation to collaborators Jan 12, 2024