pthread_setaffinity_np failed error while running PEPPER #10
Comments
@kishwarshafin any ideas?
@LYC-vio, yes, I think you are using an older version that uses quantization.
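For context on the failure itself: onnxruntime typically raises pthread_setaffinity_np errors when it cannot pin its thread-pool threads to CPU cores, which is common in containers or cgroup-limited environments where fewer cores are visible than the thread pool requests. Below is a minimal sketch of a generic workaround that caps the onnxruntime thread pool to the CPUs actually available; the model path is a hypothetical placeholder, not a PEPPER file:

```python
import os

import onnxruntime as ort

# Limit the thread pool to the CPUs this process may actually use;
# os.sched_getaffinity respects container/cgroup CPU masks, unlike
# os.cpu_count, so onnxruntime never tries to pin a missing core.
available_cpus = len(os.sched_getaffinity(0))

opts = ort.SessionOptions()
opts.intra_op_num_threads = available_cpus
opts.inter_op_num_threads = 1

# "pepper_model.onnx" is a placeholder path for illustration only.
session = ort.InferenceSession("pepper_model.onnx", sess_options=opts)
```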
@kishwarshafin the current pepper version is … @LYC-vio I'll post here once the new release with pepper 0.7 is ready.
The new hapdup 0.5 should now work for you - it contains the updated PEPPER 0.7.
Hi, thank you very much for your help!
Hi, thank you for updating to hapdup 0.5. However, I still get the MODEL QUANTIZATION ENABLED message. After checking the pepper-variant (r0.7) code, I found:
I think quantization was still turned on. By the way, could you give me some information on why this quantization step takes so long? (It was stuck at this step for days, even with 64 threads.) Thank you very much.
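For context on the quantization question: dynamic quantization of an LSTM-based model in PyTorch is typically a single call like the sketch below, and a wrapper that fails to pass its no-quantization default down to this point would leave the quantized path on, which matches the symptom above. All names here are hypothetical illustrations, not PEPPER's actual code:

```python
import torch
import torch.nn as nn

class TinyLstm(nn.Module):
    """Hypothetical stand-in for an LSTM-based caller model."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=10, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, 5)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out)

model = TinyLstm().eval()

# Dynamic quantization: LSTM/Linear weights become int8 and activations
# are quantized on the fly; this is the usual call a "quantized" flag
# would gate on or off.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)

print(quantized)
```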
@kishwarshafin could you take a look?
@fenderglass, sorry, I made no-quantization the default in the wrapper that calls all of "PEPPER-Margin-DeepVariant". Can you please add …
@kishwarshafin thanks! @LYC-vio please try this docker image, let me know if it fixes the issue:
@fenderglass @kishwarshafin
Glad to hear, thanks!
Hi,
I'm trying to run HapDup on my assembly from Flye. However, an error occurred while running PEPPER:
RuntimeError: /onnxruntime_src/onnxruntime/core/platform/posix/env.cc:173 onnxruntime::{anonymous}::PosixThread::PosixThread(const char*, int, unsigned int (*)(int, Eigen::ThreadPoolInterface*), Eigen::ThreadPoolInterface*, const onnxruntime::ThreadOptions&) pthread_setaffinity_np failed, error code: 0 error msg:
There's also a warning before this runtime error:
/usr/local/lib/python3.8/dist-packages/torch/onnx/symbolic_opset9.py:2095: UserWarning: Exporting a model to ONNX with a batch_size other than 1, with a variable length with LSTM can cause an error when running the ONNX model with a different batch size. Make sure to save the model with a batch size of 1, or define the initial states (h0/c0) as inputs of the model. warnings.warn("Exporting a model to ONNX with a batch_size other than 1, " +
Do you have any idea why this happens?
The commands that I use are like:
Thank you
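Regarding the torch.onnx UserWarning quoted above: it is emitted when an LSTM is traced with a batch size other than 1, and the usual fix it suggests is to export with a batch-1 dummy input and declare the batch dimension dynamic, or to pass the initial states (h0/c0) as explicit model inputs. A minimal sketch under those assumptions, with hypothetical names and paths, not PEPPER's actual export code:

```python
import torch
import torch.nn as nn

# Hypothetical LSTM model; PEPPER's real architecture differs.
model = nn.LSTM(input_size=10, hidden_size=32, batch_first=True).eval()

# Trace with batch size 1, as the warning recommends, and declare the
# batch dimension dynamic so other batch sizes still work at runtime.
dummy = torch.randn(1, 100, 10)  # (batch=1, seq_len, features)
torch.onnx.export(
    model,
    dummy,
    "model.onnx",  # hypothetical output path
    input_names=["input"],
    output_names=["output", "hn", "cn"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```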