
Is it possible to repack pod5 files with a specific read number? #130

Closed
Shians opened this issue May 31, 2024 · 4 comments
Labels
question Further information is requested

Comments

@Shians

Shians commented May 31, 2024

I have a situation where I have pod5 files of extremely variable size, some only 40MB while others are over 20GB. This makes it incredibly difficult to run parallel pipelines over the pod5 files, as some jobs take 50 times longer than others. Is it possible to use pod5 subset or pod5 repack to split up the larger files so that each contains, say, 4000 reads? Ideally there would be a command that traverses a folder and repacks its contents into pod5 files with some defined read limit.

@HalfPhoton
Collaborator

Hi @Shians, yes absolutely.

If approximately similar sizes are acceptable, you can get a more performant workflow than splitting into exactly N reads.
Depending on your pipeline (assuming it involves basecalling), I'd recommend more than 4,000 reads per file, as dorado has a non-zero setup time: it needs to load the model, reference, etc.

Tip

If you're just basecalling these records with dorado, then instead of subsetting files and cloning the data (which is very IO-intensive and can be slow), use the -l, --read-ids argument ("A file with a newline-delimited list of reads to basecall."). It will search for the read ids across the whole dataset, letting you distribute jobs by indexing ids instead of providing completely separate inputs.

Approximate sizes suggestion

This will be much quicker, as merging is simple and requires no searching for specific records.
Please edit for your specific needs. This is an untested example, but it should be sufficient to show what to do.

pod5 view data/ --include "filename" -H | sort | uniq -c > records_per_file.txt

head records_per_file.txt
100 file1.pod5
1000 file2.pod5
1234 file3.pod5
...

# Writes the filenames for each batch of ~N reads to data/output_X.txt
awk -v N=<VALUE> 'BEGIN { file_count = 0 }
{
    sum += $1
    printf "%s\n", $2 > ("data/output_" file_count ".txt")
    if (sum >= N) {
        sum = 0
        file_count++
    }
}' records_per_file.txt

# Run from the directory containing the pod5 files (or prefix the paths as needed)
for OUT in $(find . -iname "output_*.txt"); do
  NEW_POD5="${OUT%.txt}.pod5"
  pod5 merge $(cat "$OUT") -o "$NEW_POD5"
done
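
As a quick sanity check on the batching, the per-file counts in records_per_file.txt can be summed for each batch file. This is an untested, illustrative addition in the same spirit as the sketch above (it assumes the batch files were written to data/ as output_X.txt):

for OUT in data/output_*.txt; do
  # sum the per-file read counts for every filename listed in this batch
  total=$(grep -Ff "$OUT" records_per_file.txt | awk '{s += $1} END {print s}')
  echo "$OUT: ${total:-0} reads"
done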

Exact subsetting into N equally sized batches

pod5 view data/ -IH > all_read_ids.txt
split all_read_ids.txt -n l/${BATCHES} batch. -a 4 -d --additional-suffix .txt
echo "read_id,dest" > mapping.csv
for BATCH in $(find . -iname "batch.*txt"); do
  NEW_POD5="${BATCH%.txt}.pod5"
  awk -v dest="$NEW_POD5" '{print $1 "," dest}' "$BATCH" >> mapping.csv
done

pod5 subset data/ --table mapping.csv --columns dest
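
Before running the subset, it may be worth checking that the mapping covers every read id exactly once, since duplicates or gaps would carry through to the output files. An untested, illustrative check:

# duplicate read ids in the mapping (should print 0)
tail -n +2 mapping.csv | cut -d, -f1 | sort | uniq -d | wc -l
# mapped rows vs. total read ids (the two numbers should match)
echo "$(tail -n +2 mapping.csv | wc -l) mapped / $(wc -l < all_read_ids.txt) total"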

@HalfPhoton HalfPhoton added the question Further information is requested label May 31, 2024
@Shians
Author

Shians commented Jun 4, 2024

Thanks for the fast reply! My current workflow batches jobs up by folders: I take all the existing pod5 files and create symbolic links into folders, each containing an equal number of pod5 files, then run dorado on the folders containing the links so I don't duplicate the data.

Your solutions are helpful, and I think I will adapt them into a strategy where I identify all files >1GB and break only those files up into 1GB pod5s. Perhaps I will also aggregate files that are too small (<100MB) with merge.
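
For the size-based triage described above, something along these lines could be a starting point; the data/ directory, the 1GB/100MB thresholds, and the output names are assumptions for illustration, and the large files would then go through the exact-subsetting recipe earlier in the thread:

# pod5 files larger than 1GB: candidates for splitting into smaller batches
find data/ -name "*.pod5" -size +1G > large_files.txt
# pod5 files smaller than 100MB: candidates for merging
find data/ -name "*.pod5" -size -100M > small_files.txt
# merge the small files into a single pod5
pod5 merge $(cat small_files.txt) -o data/merged_small.pod5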

@Shians Shians closed this as completed Jun 4, 2024
@HalfPhoton
Collaborator

HalfPhoton commented Jun 4, 2024

@Shians,
Please try using the -l, --read-ids dorado argument.
You can pass a symbolic link to the same pod5 file but instruct dorado to basecall only the first half of read ids and then have another worker basecall the other half. This way you don't need to duplicate any input data.

Something like this - but please make sure you're not missing or duplicating read_ids when you split them.

# get all read_ids from pod5 quickly
pod5 view my.pod5 -IH > all_read_ids.txt
# calculate total number of read_ids
NUM_READS=$(wc -l < all_read_ids.txt)
# split into two parts (the tail offset avoids dropping a read when the count is odd)
head -n $((NUM_READS / 2)) all_read_ids.txt > first_half.txt
tail -n +$((NUM_READS / 2 + 1)) all_read_ids.txt > second_half.txt

# first worker
dorado basecaller <model> my.pod5 --read-ids first_half.txt ..... > first_half.bam
# second worker
dorado basecaller <model> my.pod5 --read-ids second_half.txt ..... > second_half.bam
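
The two-way split above generalises to N workers using split, as in the exact-subsetting recipe earlier in the thread. The value N=8 and the worker. prefix below are illustrative (other dorado arguments are omitted), and in practice each dorado call would typically be submitted as its own job:

N=8
split all_read_ids.txt -n l/${N} worker. -a 2 -d --additional-suffix .txt
for IDS in worker.*.txt; do
  dorado basecaller <model> my.pod5 --read-ids "$IDS" > "${IDS%.txt}.bam"
done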

@Shians
Author

Shians commented Jun 5, 2024

Thanks for the suggestion. Unfortunately, I think for my personal workflow it would be too messy to selectively process some pod5s like this; my current process creates folders like this:

├── data
    └── pod5_links
        ├── block_1  # full of symlinks to pod5s
        ├── block_2  # full of symlinks to pod5s
        └── ...

That way I can just use a block_# folder as the argument to each dorado call. The read-ids approach would add too much complexity for my liking.
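
For reference, the block layout described here can be built with a small loop; FILES_PER_BLOCK, the data/pod5/ source directory, and the block naming below are assumptions made for illustration:

FILES_PER_BLOCK=20
i=0; block=1
mkdir -p "data/pod5_links/block_${block}"
for f in data/pod5/*.pod5; do
  if (( i == FILES_PER_BLOCK )); then
    i=0; block=$((block + 1))
    mkdir -p "data/pod5_links/block_${block}"
  fi
  # symlink by absolute path so the link resolves from any working directory
  ln -s "$(readlink -f "$f")" "data/pod5_links/block_${block}/"
  i=$((i + 1))
done

Each block_# directory can then be passed as the input to its own dorado call.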
