
Unusually long running for generating site_level_m6A #45

@gsukrit

Description

Hello Team,

I am trying to run CHEUI on my RNA modification data, but I am stuck at the following step:

/CHEUI/scripts/CHEUI_predict_model2.py -i read_level_m6A_predictions_sorted.txt -m /home/CHEUI/CHEUI_trained_models/CHEUI_m6A_model2.h5 -o site_level_m6A_predictions.txt

The command runs for an unusually long time (20+ hours) and then goes idle without any error, or is sometimes abruptly killed. My input file, "read_level_m6A_predictions_sorted.txt", is 3.38 GB. In the last run I attempted, the output file site_level_m6A_predictions.txt had reached 10.782 MB before the process was killed.

I don't know why this is happening. I ran the same script successfully for m5C modifications, where the read-level input file was ~954 MB and produced a 3.5 MB site-level output. I don't know what could have gone wrong with the m6A site-level calling. Any help in this regard would be greatly appreciated.
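One way to sanity-check the input before re-running is a quick streaming pass over the sorted read-level file, to see whether a single site accumulates a pathologically large number of reads (which could explain a stall at site-level aggregation). Below is a minimal sketch, assuming a tab-separated layout whose first column joins contig, position, k-mer, and read ID with underscores; the actual CHEUI column layout may differ, so the `split`/`rsplit` logic would need adjusting:

```python
import tempfile
from collections import Counter

def reads_per_site(path):
    """Count read-level predictions per site.

    Assumes (hypothetically) a tab-separated file whose first column
    looks like contig_position_kmer_readID; adjust the parsing to the
    real CHEUI read-level output format.
    """
    counts = Counter()
    with open(path) as fh:
        for line in fh:
            read_id = line.split("\t", 1)[0]
            # Drop the trailing read identifier to get a per-site key.
            site = read_id.rsplit("_", 1)[0]
            counts[site] += 1
    return counts

# Tiny synthetic demo: three reads over two sites.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("chr1_100_AAGCT_read1\t0.91\n")
    tmp.write("chr1_100_AAGCT_read2\t0.12\n")
    tmp.write("chr2_555_GGACT_read1\t0.77\n")
    demo_path = tmp.name

print(reads_per_site(demo_path).most_common(2))
# → [('chr1_100_AAGCT', 2), ('chr2_555_GGACT', 1)]
```

If one site dominates (millions of reads), running on a subset of the file first would quickly confirm whether the hang is data-driven rather than an installation problem.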
