Possible to run on Jetson Nano? #54

Open
peteh opened this issue Dec 11, 2022 · 2 comments

peteh commented Dec 11, 2022

I'm trying to run the GPU-accelerated version on a Jetson Nano, although I'm not sure whether it's supposed to work there.

I updated Docker to the latest version. Unfortunately, the GPU version of the package does not support the ARM architecture, so I tried to build it myself.

When trying to build Dockerfile.gpu, I run into the following errors:

#0 47.29 Collecting pycparser
#0 47.33   Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
#0 47.41      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 118.7/118.7 kB 1.6 MB/s eta 0:00:00
#0 48.74 Installing collected packages: webencodings, pylev, ptyprocess, msgpack, lockfile, distlib, zipp, urllib3, tomlkit, six, shellingham, pyrsistent, pycparser, poetry-core, platformdirs, pkginfo, pexpect, packaging, more-itertools, jeepney, idna, filelock, crashtest, charset-normalizer, certifi, cachy, attrs, virtualenv, requests, jsonschema, jaraco.classes, importlib-metadata, html5lib, dulwich, cleo, cffi, requests-toolbelt, cryptography, cachecontrol, SecretStorage, keyring, poetry-plugin-export, poetry
#0 58.98 Successfully installed SecretStorage-3.3.3 attrs-22.1.0 cachecontrol-0.12.11 cachy-0.3.0 certifi-2022.12.7 cffi-1.15.1 charset-normalizer-2.1.1 cleo-1.0.0a5 crashtest-0.3.1 cryptography-38.0.4 distlib-0.3.6 dulwich-0.20.50 filelock-3.8.2 html5lib-1.1 idna-3.4 importlib-metadata-4.13.0 jaraco.classes-3.2.3 jeepney-0.8.0 jsonschema-4.17.3 keyring-23.11.0 lockfile-0.12.2 more-itertools-9.0.0 msgpack-1.0.4 packaging-22.0 pexpect-4.8.0 pkginfo-1.9.2 platformdirs-2.6.0 poetry-1.2.0 poetry-core-1.1.0 poetry-plugin-export-1.1.2 ptyprocess-0.7.0 pycparser-2.21 pylev-1.4.0 pyrsistent-0.19.2 requests-2.28.1 requests-toolbelt-0.9.1 shellingham-1.5.0 six-1.16.0 tomlkit-0.11.6 urllib3-1.26.13 virtualenv-20.17.1 webencodings-0.5.1 zipp-3.11.0
#0 61.34 Looking in links: https://download.pytorch.org/whl/torch
#0 63.51 ERROR: Could not find a version that satisfies the requirement torch==1.13.0+cu117 (from versions: 1.8.0, 1.8.1, 1.9.0, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0)
#0 63.51 ERROR: No matching distribution found for torch==1.13.0+cu117
------
failed to solve: executor failed running [/bin/sh -c python3 -m venv $POETRY_VENV     && $POETRY_VENV/bin/pip install -U pip setuptools     && $POETRY_VENV/bin/pip install poetry==${POETRY_VERSION}     && $POETRY_VENV/bin/pip install torch==1.13.0+cu117 -f https://download.pytorch.org/whl/torch]: exit code: 1
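
If I read the error correctly, the +cu117 builds on download.pytorch.org are only published for x86_64; on aarch64 only the plain (CPU) wheels resolve, which is why pip lists 1.13.0 but not 1.13.0+cu117. A minimal way to reproduce this on the Nano itself, outside of Docker (just a sketch):

# On the Jetson (aarch64) this fails the same way as inside the Docker build,
# because no +cu117 wheel exists for this architecture.
python3 -m pip install "torch==1.13.0+cu117" -f https://download.pytorch.org/whl/torch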


hslr4 commented Jul 20, 2023

For me it worked on a Jetson Orin using the following Dockerfile:

# Swagger UI assets are taken from the official image; the runtime image is
# NVIDIA's L4T PyTorch build, so PyTorch can use the Jetson GPU out of the box.
FROM swaggerapi/swagger-ui:v4.18.2 AS swagger-ui
FROM nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3

RUN export DEBIAN_FRONTEND=noninteractive \
    && apt-get -qq update \
    && apt-get -qq install --no-install-recommends \
    python3-pip \
    ffmpeg \
    && rm -rf /var/lib/apt/lists/*

RUN pip3 install -U pip setuptools

WORKDIR /app

COPY requirements.txt ./

RUN pip3 install -r requirements.txt

COPY . .
COPY --from=swagger-ui /usr/share/nginx/html/swagger-ui.css swagger-ui-assets/swagger-ui.css
COPY --from=swagger-ui /usr/share/nginx/html/swagger-ui-bundle.js swagger-ui-assets/swagger-ui-bundle.js

CMD gunicorn --bind 0.0.0.0:9000 --workers 1 --timeout 0 app.webservice:app -k uvicorn.workers.UvicornWorker

It is based on NVIDIA's PyTorch container, so it can easily make use of the GPU. However, this container ships with Python 3.8, so some additional adjustments are necessary. Since I haven't looked into Poetry yet, I replaced it with the following requirements.txt:

unidecode >= 1.3.4, == 1.*
uvicorn [standard] >= 0.18.2, == 0.*
gunicorn >= 20.1.0, == 20.*
tqdm >= 4.64.1, == 4.*
transformers >= 4.22.1, == 4.*
python-multipart >= 0.0.5, == 0.*
ffmpeg-python >= 0.2.0, == 0.*
fastapi >= 0.95.1, == 0.*
llvmlite >= 0.39.1, == 0.*
numba >= 0.56.4, == 0.*
openai-whisper == 20230124
faster-whisper >= 0.4.1, == 0.*

Finally, I removed the importlib dependency from webservice.py to fix an error.

I guess there are better ways to solve this that keep this repository's structure, so I'm providing this just as a reference for one possible way to achieve your goal.
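
In case it is useful, this is roughly how I build and run the image on the Jetson (the image name is just a placeholder I picked; the NVIDIA container runtime is needed so the GPU is visible inside the container, and port 9000 matches the gunicorn bind above):

docker build -t whisper-asr-jetson .
docker run --runtime nvidia --rm -p 9000:9000 whisper-asr-jetson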

@xiechengmude

faster-whisper

Thanks for your script here.

How do I set the model type to large-v3?
