
Importing onnxruntime on AWS Lambdas with ARM64 processor causes crash #10038

Open
glefundes opened this issue Dec 14, 2021 · 44 comments

Comments

@glefundes

glefundes commented Dec 14, 2021

Describe the bug
I'm currently migrating a service deployed as a serverless function on AWS Lambda to the new ARM64 Graviton2 processor. Importing onnxruntime throws a cpuinfo error and crashes the code with the following messages:

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/onnxruntime/core/common/cpuid_info.cc:62 onnxruntime::CPUIDInfo::CPUIDInfo() Failed to initialize CPU info.

The files /sys/devices/system/cpu/possible and /sys/devices/system/cpu/present don't exist, and apparently this causes the crash. Is this expected behaviour? I'm not sure how to proceed. Is onnxruntime currently not supported on Graviton2 processors? The contents of /proc/cpuinfo are as follows:


processor	: 0
--
BogoMIPS	: 243.75
Features	: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
CPU implementer	: 0x41
CPU architecture: 8
CPU variant	: 0x3
CPU part	: 0xd0c
CPU revision	: 1
processor	: 1
BogoMIPS	: 243.75
Features	: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
CPU implementer	: 0x41
CPU architecture: 8
CPU variant	: 0x3
CPU part	: 0xd0c
CPU revision	: 1

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux (AWS Lambda python runtime)
  • ONNX Runtime installed from (source or binary): binary (with pip)
  • ONNX Runtime version: 1.10.0
  • Python version: 3.8.5
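
For anyone wanting to reproduce the check, something along these lines inside the Lambda handler shows whether the files cpuinfo reads are present (a minimal sketch):

import os

# Paths that the cpuinfo library tries to parse on Linux; on this Lambda they are absent.
for path in ("/sys/devices/system/cpu/possible", "/sys/devices/system/cpu/present"):
    print(path, "->", "exists" if os.path.exists(path) else "missing")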
@jcreinhold

I'm also experiencing this issue with a similar setup (see "System information" below). The error message is below as well (the same as the OP). I can add more details if needed/helpful.

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what(): /onnxruntime_src/onnxruntime/core/common/cpuid_info.cc:62 onnxruntime::CPUIDInfo::CPUIDInfo() Failed to initialize CPU info.

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux (AWS Lambda Python arm64 Docker container)
  • ONNX Runtime installed from (source or binary): binary (with pip)
  • ONNX Runtime version: 1.10.0
  • Python version: 3.9

@skottmckay
Contributor

@chenfucn is this a known issue?

Should we handle cpuinfo failing more gracefully? If it's not critical to have the cpu info maybe logging and ignoring the error is an option.

@chenfucn
Contributor

chenfucn commented Jan 5, 2022

Thanks for the info. This is a surprise. Here we are actually leveraging pytorch cpuinfo; this library is used in both pytorch and tensorflow. Do you know whether the pytorch cpuinfo library faces similar issues elsewhere?

Currently we are using cpuinfo lib to detect hybrid cores and SDOT UDOT instruction support. Ignoring cpuinfo failure means we lose these functionalities and will cause performance degradation. Especially with DOT instructions, the matrix multiplication can be multiple times slower if we don't use DOT instructions and fall back to neon cores.

I can implement a very crude DOT detection logic in case of cpuinfo failure. However, the best solution would be for the cpuinfo library authors to fix this problem.
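
For illustration, a crude fallback of that kind on Linux/aarch64 could query the ELF auxiliary vector instead of sysfs. This is only a sketch (the constants below are the standard kernel values, not existing onnxruntime code); on the Graviton2 /proc/cpuinfo dump above, which lists asimddp, it should report True:

import ctypes

AT_HWCAP = 16            # auxv key for the hardware capability bitmask
HWCAP_ASIMDDP = 1 << 20  # aarch64 hwcap bit for the SDOT/UDOT (dot product) extension

libc = ctypes.CDLL(None)                    # assumes Linux with glibc
libc.getauxval.restype = ctypes.c_ulong
libc.getauxval.argtypes = [ctypes.c_ulong]

has_dot = bool(libc.getauxval(AT_HWCAP) & HWCAP_ASIMDDP)
print("SDOT/UDOT available:", has_dot)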

@chenfucn
Contributor

chenfucn commented Jan 5, 2022

@glefundes and @jcreinhold could you also file this issue to pytorch cpuinfo repo while I prepare a PR to get around this?

@jcreinhold

Thanks for the fast response. I filed the issue on cpuinfo here: pytorch/cpuinfo#76

Let me know if you need me to test anything.

@chenfucn
Contributor

chenfucn commented Jan 5, 2022

#10199

@workdd

workdd commented Jan 14, 2022

Could I know if this issue has been resolved? I'm currently having the same problem.

@chenfucn
Contributor

The above PR has already been merged, can you try it out?

@workdd

workdd commented Feb 7, 2022

Thanks for the response.
Was it published to pip as well? I installed the onnxruntime package using just the command below.
pip install onnxruntime
And I'm still facing the same issue.

@skottmckay
Contributor

It would be in the nightly package until the next official release. https://test.pypi.org/project/ort-nightly/
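Installing from that index is usually something along the lines of pip install -i https://test.pypi.org/simple/ ort-nightly (the exact package name and version pin may differ).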

@workdd

workdd commented Feb 8, 2022

Thank you for the fast response.
Then I'll wait for the next official release.

@jcreinhold

Thanks for the quick response to this issue. I'm happy to test out the implementation when there is a release candidate, but I've already deployed the model on x86 hardware and want as little downtime as possible.

Will PR #10199 fix what @chenfucn brought up in the below comment?

Currently we are using cpuinfo lib to detect hybrid cores and SDOT UDOT instruction support. Ignoring cpuinfo failure means we lose these functionalities and will cause performance degradation. Especially with DOT instructions, the matrix multiplication can be multiple times slower if we don't use DOT instructions and fall back to neon cores.

Or does pytorch/cpuinfo#76 need to be resolved to fix that problem?

@yufenglee
Member

You need to include both #10199 and #10334.

@stale

stale bot commented Apr 16, 2022

This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@stale stale bot added the stale issues that have not been addressed in a while; categorized by a bot label Apr 16, 2022
@glefundes
Author

Just got the chance to test release 1.11.1 on Graviton2 instances on AWS and can confirm that while the cpuinfo error messages still show, execution is no longer halted and the Lambda call finishes as expected. Thank you all :)

@stale stale bot removed the stale issues that have not been addressed in a while; categorized by a bot label May 25, 2022
@jcampbell05

Good afternoon, we are suddenly getting this error for 1.14 on Graviton2. I'm not sure if there has been a regression?

@skottmckay
Contributor

@jcampbell05 There has been no change that I can see; the current code prints a warning instead of failing with an exception. What error exactly are you seeing?

LOGS_DEFAULT(WARNING) << "Failed to init pytorch cpuinfo library, may cause CPU EP performance degradation due to undetected CPU features.";

@jcampbell05

So I'm seeing the following from Python. Rolling back to 1.11.1 fixes it for us.

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/include/onnxruntime/core/common/logging/logging.h:294 static const onnxruntime::logging::Logger& onnxruntime::logging::LoggingManager::DefaultLogger() Attempt to use DefaultLogger but none has been registered.

@skottmckay
Contributor

There's no exception thrown in the latest code, so the failure is most likely coming from somewhere else. The problem is there's no default logger, so the real error isn't clear. The Environment needs to be created prior to calling into other ORT code, as that provides the default logger. However, it's weird that that hasn't happened if you're calling from Python, as we typically create the environment internally so that it's available when needed.

Can you share the python code using ORT up to where it breaks?

@jcampbell05

@skottmckay it took a while to track it down but it appears it's simply just this, since none of our other code has executed yet.

import onnxruntime

@DoctorSlimm

TL;DR: I don't see any solution to this issue for using ONNX in AWS Lambda. The Docker image builds and runs fine locally on my M1 Mac, but in the cloud this happens:

Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/include/onnxruntime/core/common/logging/logging.h:294 static const onnxruntime::logging::Logger& onnxruntime::logging::LoggingManager::DefaultLogger() Attempt to use DefaultLogger but none has been registered.

Please help... I really need to run inference in AWS Lambda 🥲

@tianleiwu
Contributor

@DoctorSlimm, @jcampbell05

Could you try the following package (built with #15661) to see whether the issue is resolved? You can rename the .zip file to a .whl file and install it like the following:

mv ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.zip ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl

pip uninstall onnxruntime

pip install ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl

ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.zip

@DoctorSlimm

DoctorSlimm commented Apr 28, 2023

@tianleiwu

Still the same error when I run it in the cloud. It works totally fine when I run the function locally, but fails when I invoke it in AWS.

NOTE: I am building it locally on an M1 Mac and then pushing it to the ECR registry.

Local build command, run in the same directory as the other files:

docker build --platform linux/arm64 -t FUNCTION-NAME .

Here is my Dockerfile:

FROM public.ecr.aws/lambda/python:3.9-arm64 AS model


# Install the runtime interface client
RUN python3.9 -m pip install --target . awslambdaric
RUN python3.9 -m pip install python-dotenv onnxruntime "transformers[torch]"

# https://github.com/microsoft/onnxruntime/issues/10038
ADD ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.zip ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.zip
RUN mv ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.zip ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
RUN python3.9 -m pip uninstall -y onnxruntime
RUN python3.9 -m pip install ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl

# Set Production Environment
ENV ENV=prod

# Copy files
COPY app.py ./

# Copy onnx directory
COPY onnx onnx


# Set Up Entrypoints
COPY ./entry_script.sh /entry_script.sh
ADD aws-lambda-rie-arm64 /usr/local/bin/aws-lambda-rie-arm64
ENTRYPOINT ["/entry_script.sh"]
CMD [ "app.handler" ]

app.py

import json
import traceback
from time import time
import numpy as np
from dotenv import load_dotenv
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

load_dotenv()

# Worth Investigating
# https://blog.ml6.eu/the-art-of-pooling-embeddings-c56575114cf8
# https://github.com/UKPLab/sentence-transformers/issues/46#issuecomment-1152816277

tokenizer = AutoTokenizer.from_pretrained('onnx')
session = InferenceSession("onnx/model.onnx")


def lambda_handler(event, context):
    try:
        if 'ping' in event:
            print('Pinging')
            t0 = time()
            return {
                'total_time': time() - t0,
            }
        if 'modelInputs' in event:
            print('Inference\n')
            model_inputs = event['modelInputs']
            text = model_inputs['text']
            encoded_inputs = tokenizer(text, return_tensors="np")
            model_outputs = session.run(
                None, input_feed=dict(encoded_inputs)
            )  # (1, 1, 11, 768)

            token_embeddings = model_outputs[0]  # (1, 11, 768)
            special_token_ids = [
                tokenizer.cls_token_id,
                tokenizer.unk_token_id,
                tokenizer.sep_token_id,
                tokenizer.pad_token_id,
                tokenizer.mask_token_id,
            ]

            # Mask to exclude special tokens from pooling calculation
            mask = np.ones(token_embeddings.shape[:-1], dtype=bool)

            # Max Pooling Sentence Embedding
            for special_token_id in special_token_ids:
                mask &= encoded_inputs['input_ids'] != special_token_id  # compare against the token ids, not the BatchEncoding object
            max_pooled_embeddings = np.max(token_embeddings * mask[..., np.newaxis], axis=1)
            max_pooled_embeddings = np.mean(max_pooled_embeddings, axis=0)

            # Mean Pooling Sentence Embedding
            for special_token_id in special_token_ids:
                mask &= encoded_inputs['input_ids'] != special_token_id  # Exclude special tokens from mask
            mean_pooled_embeddings = np.sum(token_embeddings * mask[..., np.newaxis], axis=1)  # Apply mask and take sum over sequence dimension
            mean_pooled_embeddings = np.mean(mean_pooled_embeddings, axis=0)  # Take mean over batch dimension

            return {
                'statusCode': 200,
                'body': json.dumps(
                    {
                        'modelOutputs': {
                            # 'raw': model_outputs.tolist(),
                            'token_embeddings': token_embeddings.tolist(),
                            'max_pooled_embeddings': max_pooled_embeddings.tolist(),
                            'mean_pooled_embeddings': mean_pooled_embeddings.tolist(),
                        }
                    }
                )
            }

    except Exception as e:
        return {
            'error': str(traceback.format_exc()) + str(e)
        }

Response when run in AWS

{
  "errorType": "Runtime.ExitError",
  "errorMessage": "RequestId: dd954162-257e-448e-824e-0b78342f503a Error: Runtime exited with error: signal: aborted"
}

Log output

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/include/onnxruntime/core/common/logging/logging.h:294 static const onnxruntime::logging::Logger& onnxruntime::logging::LoggingManager::DefaultLogger() Attempt to use DefaultLogger but none has been registered.
Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/include/onnxruntime/core/common/logging/logging.h:294 static const onnxruntime::logging::Logger& onnxruntime::logging::LoggingManager::DefaultLogger() Attempt to use DefaultLogger but none has been registered.
START RequestId: dd954162-257e-448e-824e-0b78342f503a Version: $LATEST
RequestId: dd954162-257e-448e-824e-0b78342f503a Error: Runtime exited with error: signal: aborted
Runtime.ExitError
END RequestId: dd954162-257e-448e-824e-0b78342f503a
REPORT RequestId: dd954162-257e-448e-824e-0b78342f503a	Duration: 2625.36 ms	Billed Duration: 2626 ms	Memory Size: 128 MB	Max Memory Used: 39 MB	

@johnsonchau-bulb

johnsonchau-bulb commented Jun 26, 2023

@DoctorSlimm is there any update on your solution? I tried out your method of installing from nightly builds and it still leads to the same error:

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors

Note: I'm also using AWS Lambda with ARM architecture

@DoctorSlimm

DoctorSlimm commented Jun 26, 2023

@DoctorSlimm is there any update on your solution? I tried out your method of installing from nightly builds and it still leads to the same error:

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors

Note: I'm also using AWS Lambda with ARM architecture

Hello my dude! Using the x86 architecture (or whatever the OTHER architecture is, maybe it's called AMD64), plus maybe a few other tweaks including increasing the memory of the function to at least a few GB, I think solved it!

Will be getting back into this stuff later this week, so I will likely have more concrete answers then. But for now I'm pretty sure that using x86 and increasing memory gets you 97% of the way there. Good luck!

@johnsonchau-bulb

@DoctorSlimm I see, I was experimenting with the x86 architecture but the docker buildx build took incredibly long. I'm also on an M1 Mac, which I saw you are also on. Will keep trying this x86 method out! Thank you +++++

@MengLinMaker

MengLinMaker commented Sep 13, 2023

@johnsonchau-bulb It's likely that AWS Lambda ARM does not populate CPU info into the "/sys" folder. So essentially onnxruntime is trying to read nonexistent files and directories.

The following test confirms this:

# Script to test the existence of folders 
import os
print(os.listdir('/'))
print(os.listdir('/sys'))
print(os.listdir('/sys/devices'))

Result - "/sys" has no content:

['bin', 'boot', 'dev', 'etc', 'home', 'lib', 'media', 'mnt', 'opt', 'proc', 'root', 'run', 'sbin', 'srv', 'sys', 'tmp', 'usr', 'var']
[]
[ERROR] FileNotFoundError: [Errno 2] No such file or directory: '/sys/devices'

@johnsonchau-bulb

@MengLinMaker thanks!

As a side note, I would not recommend deploying Hugging Face models in AWS Lambda, as it takes a long time to download models. Furthermore, even when connecting EFS to Lambda to cache the model, the read/write speeds are not fast enough to load LLMs in Lambda quickly. Leaving this here to help anyone who wants to build an AI API microservice.

@MengLinMaker

MengLinMaker commented Sep 13, 2023

@chenfucn, Referencing your PR #10199:
I located the file reader code in pytorch/cpuinfo that may be causing the file read issues for AWS Lambda ARM64.

My AWS Lambda directory probing tests confirm that these files do not exist, so read attempts lead to an error:

  • /sys/devices/system/cpu/possible
  • /sys/devices/system/cpu/present

I also agree that the fix should be made in pytorch/cpuinfo, as this is a cleaner solution.
Looking at the code, a failure should return a null pointer.

Actually, it may be this logger in pytorch/cpuinfo that's throwing the exception.

@MengLinMaker

As a side note, I would not recommend deploying Hugging Face models in AWS Lambda, as it takes a long time to download models. Furthermore, even when connecting EFS to Lambda to cache the model, the read/write speeds are not fast enough to load LLMs in Lambda quickly. Leaving this here to help anyone who wants to build an AI API microservice.

@johnsonchau-bulb Thanks, almost dove down that rabbit hole.

Currently trying to decrease a 1.8GB Docker image to 1.1GB by replacing pytorch with onnxruntime. My model is around 150MB.
Lambda cold start times are horrible though, up to 20 seconds. So I'm breaking the model into sections so I can cold start them at the same time.

@MengLinMaker

MengLinMaker commented Sep 17, 2023

@DoctorSlimm I see, I was experimenting with the x86 architecture but the docker buildx build took incredibly long. I'm also on an M1 Mac, which I saw you are also on. Will keep trying this x86 method out! Thank you +++++

Can confirm that x86_64 is compatible with onnxruntime.

@johnsonchau-bulb, I found that creating a CI/CD pipeline with GitHub Actions is a nice solution for deploying x86_64 Lambdas from Apple Silicon. I'm using the Serverless Framework to deploy a dev app on commits.

For smaller ONNX models, it is possible to deploy without Docker by quantising the ONNX model to reduce its size and using the serverless-python-requirements zip dependency option (rough sketch below).
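
For reference, the quantisation step is roughly the following (a sketch using onnxruntime's dynamic quantisation API; the file names are placeholders):

from onnxruntime.quantization import QuantType, quantize_dynamic

# Rough sketch: shrink an FP32 ONNX model by quantising weights to INT8 so the
# artefact fits within Lambda's zipped deployment limits. Paths are placeholders.
quantize_dynamic(
    model_input="model.onnx",
    model_output="model.quant.onnx",
    weight_type=QuantType.QInt8,
)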

@satyajit-bagchi

I also ran into this error today on lambda ARM64 architecture, with onnxruntime==1.16.1.

Is there a recommended version of onnxruntime which works on AWS Arm64 devices?

@chenfucn
Contributor

chenfucn commented Oct 13, 2023

@chenfucn, Referencing your PR #10199: I located the file reader code in pytorch/cpuinfo that may be causing the file read issues for AWS Lambda ARM64.

My AWS Lambda directory probing tests confirm that these files do not exist, so read attempts lead to an error:

  • /sys/devices/system/cpu/possible
  • /sys/devices/system/cpu/present

I also agree that the fix should be made in pytorch/cpuinfo, as this is a cleaner solution. Looking at the code, a failure should return a null pointer.

Actually, it may be this logger in pytorch/cpuinfo that's throwing the exception.

Thank you for the investigation!

I am not sure this is on cpuinfo's shoulders anymore. CPU feature detection is already a complex matter, and the cpuinfo library already has lots of code handling many different platforms. Why is AWS Lambda missing two files that are present on most other Linux platforms? This makes cross-platform programming unnecessarily complex. Why?

These two system files provide CPU information such as the instruction set, the number of big/little cores, cache sizes, etc. This information is vital for onnxruntime to provide the necessary performance on ARM64 systems. Without it, onnxruntime could run an order of magnitude slower. At that point I doubt onnxruntime is useful.

@jcampbell05

jcampbell05 commented Oct 13, 2023

Thank you for the investigation!

I am not sure this is on cpuinfo's shoulders anymore. CPU feature detection is already a complex matter, and the cpuinfo library already has lots of code handling many different platforms. Why is AWS Lambda missing two files that are present on most other Linux platforms? This makes cross-platform programming unnecessarily complex. Why?

These two system files provide CPU information such as the instruction set, the number of big/little cores, cache sizes, etc. This information is vital for onnxruntime to provide the necessary performance on ARM64 systems. Without it, onnxruntime could run an order of magnitude slower. At that point I doubt onnxruntime is useful.

It's because ARM Lambda uses the custom Amazon Graviton processor and, to get the best support, also runs Amazon Linux 2. I'm not sure if this is the exact reason they don't have /sys/devices/system/cpu/possible.

But I have found that Amazon Linux, and in particular the build used for AWS Lambda, is very heavily restricted, with many things disabled.

For example, you can't use multiprocessing from Python because /dev/shm isn't provided by Amazon Linux on ARM Lambda, even though almost any other OS provides it.

Regardless, the main issue is just that the latest onnxruntime crashes; even running very, very slowly would be an upgrade.

@MengLinMaker

I also ran into this error today on lambda ARM64 architecture, with onnxruntime==1.16.1.

Is there a recommended version of onnxruntime which works on AWS Arm64 devices?

@satyajit-bagchi, to summarise, ARM Lambda is missing features required by onnxruntime that are standard across other Linux machines. As such, supporting ARM Lambda is unlikely and probably not worth the effort.

On a positive note, onnxruntime does work on x86_64 Lambda.
If you have an ARM machine, one deployment solution is to configure a deployment pipeline on an x86_64 machine, e.g. GitHub Actions.

Hope this saves weeks of debugging.

Luflosi added a commit to Luflosi/nixpkgs that referenced this issue Oct 27, 2023
After the 0.2.0 version, there are even more possible cases to consider.
I found it too annoying to do all the testing manually.
Add some tests to make it easy to test everything automatically.

The test python3Packages.invisible-watermark.tests.withOnnx-rivaGan would fail in the nix sandbox on aarch64-linux:
```
Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
  what():  /build/source/include/onnxruntime/core/common/logging/logging.h:294 static const onnxruntime::logging::Logger& onnxruntime::logging::LoggingManager::DefaultLogger() Attempt to use DefaultLogger but none has been registered.

/build/.attr-0l2nkwhif96f51f4amnlf414lhl4rv9vh8iffyp431v6s28gsr90: line 9:     5 Aborted                 (core dumped) invisible-watermark --verbose --action encode --type bytes --method 'rivaGan' --watermark 'asdf' --output output.png '/nix/store/srl698a32n9d2pmyf5zqfk65gjzq3mhp-source/test_vectors/original.jpg'
Exit code of invisible-watermark was 134 while 0 was expected.
```
so I have disabled that test. I believe microsoft/onnxruntime#10038 describes the same issue.
Noodlez1232 pushed a commit to Noodlez1232/nixpkgs that referenced this issue Oct 28, 2023
DrymarchonShaun pushed a commit to DrymarchonShaun/nixpkgs that referenced this issue Oct 28, 2023
@jcampbell05

jcampbell05 commented Nov 2, 2023

So, interestingly, the Intel Lambda also doesn't mount /sys, and as a result cpuinfo has to implement a bunch of workarounds to detect the features. It will still emit a warning in this case.

So it looks like this will be fixed as soon as cpuinfo is also fixed for arm64.

But the fact that it emits an error sounds like it might be a case of it not realising it needs to use those workarounds:

pytorch/cpuinfo#14

mexisme pushed a commit to mexisme/nixpkgs that referenced this issue Nov 3, 2023
@dandiep

dandiep commented Dec 20, 2023

Would just like to add that this is definitely a real issue for us... Migrating back to x86 introduces a whole separate set of problems that we're not really prepared to take on right now.

@ctippur

ctippur commented Feb 11, 2024

Yes, this is still an issue for us as well. I see this in the Lambda CloudWatch logs. I am running Flask-based API code using aws-lambda-adapter. I tried using x86_64 but the API wouldn't even get called, giving me the error "The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again." Not sure if this is related.

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/include/onnxruntime/core/common/logging/logging.h:294 static const onnxruntime::logging::Logger& onnxruntime::logging::LoggingManager::DefaultLogger() Attempt to use DefaultLogger but none has been registered.

@jcampbell05

jcampbell05 commented Feb 12, 2024

I was wondering if there wouldn't at least be a way of providing CPU info as a fallback, since onnxruntime is looking at the /sys folder for CPU info flags.

Wondering if a workaround for us is to manually create these files in Lambda.
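
For what it's worth, the naive version of that probably fails, because the Lambda filesystem is read-only outside /tmp. A quick sketch of the attempt (values are illustrative):

import os

# Try to create the entries cpuinfo expects; expected to fail with a read-only
# filesystem or permission error inside the Lambda sandbox.
try:
    os.makedirs("/sys/devices/system/cpu", exist_ok=True)
    with open("/sys/devices/system/cpu/possible", "w") as f:
        f.write("0-1\n")
except OSError as e:
    print("cannot create sysfs entries:", e)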

@wbudd

wbudd commented Feb 23, 2024

Wondering if a workaround for us is to manually create these files in Lambda.

@jcampbell05 Not exactly straightforward, but I feel like it should nonetheless be possible: zip up a tool like fakechroot as a Lambda layer, and generate the needed /sys paths inside some fake chroot destination within /tmp (being Lambda user-writable) such as /tmp/soveryfake (via a subprocess call from the main Lambda script): mkdir -p /tmp/soveryfake/sys/devices/system/cpu && echo "0-5" > /tmp/soveryfake/sys/devices/system/cpu/cpu_possible. Then, wrap the actual ORT-based inference subprocess within the fakechroot: /opt/bin/fakechroot --use-system-libs fakeroot chroot /tmp/soveryfake /opt/bin/my_ort_infer_prog.py.

Unfortunately, some quick and dirty testing suggests that the above method doesn't work as-is...
The reason (probably) being that /sys/* (and /proc/*) aren't ordinary files. fakechroot seems to show the wrapped process the real /sys/* file tree as-is rather than whatever is defined in the chroot origin (whereas emulating stuff inside /usr works just fine, for example).

Maybe tools like unshare or bubblewrap are better capable at faking /sys as a plain user, but I have my doubts about that.

(In my case, the better solution was to create a dedicated C++ inference mini-program statically compiled with the ONNX library parts needed. When done inside a multi-stage Dockerfile while building ONNX from source too, one can then grep and replace the /sys/* invocations with their own /tmp/* paths of choice, while also enabling a whole bunch of optimizations in terms of model size, binary/layer size, memory usage. All of this together greatly reduces the overall runtime of the Lambda.)

@jcampbell05

(In my case, the better solution was to create a dedicated C++ inference mini-program statically compiled with the ONNX library parts needed. When done inside a multi-stage Dockerfile while building ONNX from source too, one can then grep and replace the /sys/* invocations with their own /tmp/* paths of choice, while also enabling a whole bunch of optimizations in terms of model size, binary/layer size, memory usage. All of this together greatly reduces the overall runtime of the Lambda.)

Nice, I didn't realise this was an option. If you have any pointers around specifically building this runtime for Lambda, then I would very much appreciate links to where I can read about that.

@astahlman

I'm able to run inference on an arm64 Lambda by building without cpuinfo via the CMake flag onnxruntime_ENABLE_CPUINFO, i.e.,

python tools/ci_build/build.py \
    --build_dir build/Linux \
    --config RelWithDebInfo \
    --build_shared_lib \
    --parallel \
    --compile_no_warning_as_error \
    --skip_submodule_sync \
    --build_wheel \
    --allow_running_as_root \
    --cmake_extra_defines onnxruntime_ENABLE_CPUINFO=OFF

@neo

neo commented Apr 10, 2024

I'm able to run inference on an arm64 Lambda by building without cpuinfo via the CMake flag onnxruntime_ENABLE_CPUINFO, i.e.,

python tools/ci_build/build.py \
    --build_dir build/Linux \
    --config RelWithDebInfo \
    --build_shared_lib \
    --parallel \
    --compile_no_warning_as_error \
    --skip_submodule_sync \
    --build_wheel \
    --allow_running_as_root \
    --cmake_extra_defines onnxruntime_ENABLE_CPUINFO=OFF

I was able to get it running with the above custom build; however, it seems very slow... is it because of the following that @chenfucn mentioned?

I am not sure this is on cpuinfo's shoulders anymore. CPU feature detection is already a complex matter, and the cpuinfo library already has lots of code handling many different platforms. Why is AWS Lambda missing two files that are present on most other Linux platforms? This makes cross-platform programming unnecessarily complex. Why?

These two system files provide CPU information such as the instruction set, the number of big/little cores, cache sizes, etc. This information is vital for onnxruntime to provide the necessary performance on ARM64 systems. Without it, onnxruntime could run an order of magnitude slower. At that point I doubt onnxruntime is useful.


and I do wonder if we can do detection without cpuinfo 🤔

@MengLinMaker

and I do wonder if we can do detection without cpuinfo 🤔

@neo AWS Arm64 does not provide the files required for "detection":

As suggested by the maintainers, Arm64 Lambda deviates from the norm by not providing these typical Linux files:

  • /sys/devices/system/cpu/possible
  • /sys/devices/system/cpu/present

So the conclusion is that this issue should ideally be fixed by AWS instead. The issue is outside the scope of onnxruntime and pytorch/cpuinfo.

A possible workaround could be replacing any references to /sys/devices in the code and adding your own files (no idea how that would work). If time is a priority, then getting onnxruntime working on Arm64 Lambda is probably a waste of time.
