
pthread_setaffinity_np failed Error while running pepper #10

Closed · LYC-vio opened this issue Feb 2, 2022 · 11 comments
Labels: bug Something isn't working

Comments


LYC-vio commented Feb 2, 2022

Hi,

I'm trying to run HapDup on my assembly from Flye. However, an error has occurred while running Pepper:
RuntimeError: /onnxruntime_src/onnxruntime/core/platform/posix/env.cc:173 onnxruntime::{anonymous}::PosixThread::PosixThread(const char*, int, unsigned int (*)(int, Eigen::ThreadPoolInterface*), Eigen::ThreadPoolInterface*, const onnxruntime::ThreadOptions&) pthread_setaffinity_np failed, error code: 0 error msg:
There's also a warning before this runtime error:
/usr/local/lib/python3.8/dist-packages/torch/onnx/symbolic_opset9.py:2095: UserWarning: Exporting a model to ONNX with a batch_size other than 1, with a variable length with LSTM can cause an error when running the ONNX model with a different batch size. Make sure to save the model with a batch size of 1, or define the initial states (h0/c0) as inputs of the model. warnings.warn("Exporting a model to ONNX with a batch_size other than 1, " +
Do you have any idea why this happens?

The commands I use are:

reads=NA24385_ONT_Promethion.fastq
outdir=`pwd`
assembly=${outdir}/assembly.fasta
hapdup_sif=../HapDup/hapdup_0.4.sif

time minimap2 -ax map-ont -t 30 ${assembly} ${reads} | samtools sort -@ 4 -m 4G > assembly_lr_mapping.bam
samtools index -@ 4 assembly_lr_mapping.bam

time singularity exec --bind ${outdir} ${hapdup_sif} \
	hapdup --assembly ${assembly} --bam ${outdir}/assembly_lr_mapping.bam --out-dir ${outdir}/hapdup -t 64 --rtype ont

Thank you

@fenderglass (Collaborator)

@kishwarshafin any ideas?

@kishwarshafin (Collaborator)

@LYC-vio,

Yes, I think you are using an older version that enables quantization by default. Can you please check whether the pipeline you are using has a --no_pepper_quantized option? Since r0.7 I have turned quantization off by default.
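
For context, the RuntimeError in the original report comes out of onnxruntime's CPU thread-pool setup. A minimal sketch, assuming plain onnxruntime Python usage rather than PEPPER's actual code, of the session configuration that reaches that path:

    # Minimal sketch, not PEPPER's code: a CPU InferenceSession whose
    # intra-op thread count exceeds the CPUs actually allotted to the
    # process. The Eigen/PosixThread pool named in the traceback pins
    # worker threads to CPUs, and pthread_setaffinity_np fails for any
    # CPU outside the allowed set.
    import onnxruntime as ort

    opts = ort.SessionOptions()
    opts.intra_op_num_threads = 64  # like hapdup -t 64 on a node with fewer allotted CPUs

    # "model.onnx" is a placeholder path, not a real PEPPER model file.
    session = ort.InferenceSession("model.onnx", sess_options=opts)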

@fenderglass (Collaborator)

@kishwarshafin the current pepper version is 0.6, and I am in the process of updating to 0.7. Thanks!

@LYC-vio I'll post here once the new release with pepper 0.7 is ready.


fenderglass commented Feb 6, 2022

The new hapdup 0.5 should now work for you - it contains the updated PEPPER 0.7.

fenderglass added the bug label on Feb 6, 2022

LYC-vio commented Feb 7, 2022

Hi,
Sorry for the late reply. I found out that this error may be caused by assigning fewer CPUs than indicated by -t (a quick check is sketched below). After increasing the number of CPUs, the error no longer appeared. The run then got stuck at the MODEL QUANTIZATION ENABLED step. I think you are right that something may have gone wrong with --no_pepper_quantized; I'll try again with hapdup 0.5.
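
That diagnosis matches the traceback. An illustrative check, in plain Python and not part of HapDup, comparing the requested thread count against the CPUs the scheduler actually grants the process:

    # Illustrative check, not part of HapDup: under a batch scheduler or
    # cgroup limit, the CPU set granted to the process can be smaller
    # than the machine, and smaller than the thread count requested.
    import os

    requested_threads = 64                  # what hapdup -t 64 asks for
    allowed_cpus = os.sched_getaffinity(0)  # CPUs this process may run on (Linux)

    print(f"requested {requested_threads} threads, allowed {len(allowed_cpus)} CPUs")

    if requested_threads > len(allowed_cpus):
        # Pinning worker i to CPU i fails once i falls outside the allowed
        # set, which is essentially what pthread_setaffinity_np reports.
        print("thread count exceeds allotted CPUs; affinity pinning would fail")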

Thank you very much for your help


LYC-vio commented Feb 9, 2022

Hi,

Thank you for updating to hapdup 0.5. However, I still get the MODEL QUANTIZATION ENABLED message. After checking the pepper-variant (r0.7) code, I found:

    parser.add_argument(
        "--quantized",
        default=True,
        action='store_true',
        help="PEPPER: Use quantization for inference while on CPU inference mode. Speeds up inference. Default is True."
    )
    parser.add_argument(
        "--no_quantized",
        dest='quantized',
        default=False,
        action='store_false',
        help="Do not use quantization for inference while on CPU inference mode. Speeds up inference."
    )

I think the quantization was still turned on.
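
That reading is consistent with how argparse handles the pair of flags above: both actions write to the same quantized destination, and argparse assigns a default to a destination only once, in registration order, so default=True from --quantized wins over default=False from --no_quantized. A standalone repro sketch, not PEPPER's code:

    # Standalone repro, not PEPPER's code: two flags sharing one dest.
    # argparse applies defaults in registration order and skips a dest
    # that is already set, so the first default (True) is what you get
    # unless --no_quantized is passed explicitly.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--quantized", default=True, action="store_true")
    parser.add_argument("--no_quantized", dest="quantized",
                        default=False, action="store_false")

    print(parser.parse_args([]).quantized)                  # True: quantization stays on
    print(parser.parse_args(["--no_quantized"]).quantized)  # False only when the flag is passed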

By the way, could you give me some information about why this quantization takes such a long time? (It has been stuck at this step for days, even with 64 threads.)

Thank you very much

@fenderglass (Collaborator)

@kishwarshafin could you take a look?

@kishwarshafin (Collaborator)

@fenderglass, sorry, I made no-quantization the default in the wrapper that runs the full "PEPPER-Margin-DeepVariant" pipeline. Can you please add --no_quantized to pepper's parameters in your script? Sorry for the confusion.
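
For illustration, the shape of that fix on the caller's side. The command name and arguments below are hypothetical placeholders rather than HapDup's actual internals; only the --no_quantized flag itself comes from this thread:

    # Hypothetical sketch of the fix: append --no_quantized when the
    # wrapper builds the PEPPER command. Entry point and arguments are
    # illustrative placeholders, not HapDup's real invocation.
    import subprocess

    pepper_cmd = [
        "pepper_variant_caller",         # placeholder entry point
        "-b", "assembly_lr_mapping.bam",
        "-f", "assembly.fasta",
        "-o", "pepper_out",
        "-t", "64",
        "--no_quantized",                # disable CPU quantization (the fix)
    ]
    subprocess.run(pepper_cmd, check=True)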

@fenderglass (Collaborator)

@kishwarshafin thanks!

@LYC-vio please try this Docker image and let me know if it fixes the issue: mkolmogo/hapdup:0.5-iss10


LYC-vio commented Feb 22, 2022

@fenderglass @kishwarshafin
Thank you very much! It works smoothly now (hapdup:0.5-iss10)!

LYC-vio closed this as completed on Feb 22, 2022
@fenderglass (Collaborator)

Glad to hear, thanks!
