Is it possible to repack pod5 files with a specific read number? #130
Hi @Shians, yes absolutely. If you only need approximately similar sizes (rather than exactly N reads per file) you can use a more performant workflow.

Tip: if you're just basecalling these records with dorado, use the approximate-sizes suggestion below rather than subsetting files and cloning the data, which is very IO intensive and can be slow. This will be much quicker, as merging is simple and requires no searching for specific records.

Approximate sizes

pod5 view data/ --include "filename" | sort | uniq -c > records_per_file.txt
head records_per_file.txt
100 file1.pod5
1000 file2.pod5
1234 file3.pod5
...
# Write the filenames for each batch to data/output_<X>.txt,
# starting a new batch once the running read count reaches N
awk -v N=<VALUE> 'BEGIN { file_count = 0 }
{
    sum += $1
    print $2 > ("data/output_" file_count ".txt")
    if (sum >= N) {
        sum = 0
        file_count++
    }
}' records_per_file.txt
# Merge the files listed in each output_<X>.txt into one pod5 per batch
for OUT in $(find . -iname "output_*.txt"); do
    NEW_POD5="${OUT%.txt}.pod5"
    pod5 merge $(cat "$OUT") -o "$NEW_POD5"
done

Exact subsetting into N equally sized batches

pod5 view data/ -IH > all_read_ids.txt
split -n l/${BATCHES} -a 4 -d --additional-suffix .txt all_read_ids.txt batch.
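# (set BATCHES to the number of batches you want; this produces batch.0000.txt, batch.0001.txt, ...)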
echo "read_id,dest" >> mapping.csv
for BATCH in $(find . -iname "batch.*txt"); do
NEW_POD5="${BATCH%.txt}.pod5"
awk '{print $1 "," $NEW_POD5}' >> mapping.csv
done
pod5 subset data/ --table mapping.csv --columns dest
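For reference, the resulting mapping.csv should look roughly like this (read ids shown as placeholders):
head mapping.csv
read_id,dest
<read_id_1>,./batch.0000.pod5
<read_id_2>,./batch.0000.pod5
...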
Thanks for the fast reply! My current workflow batches jobs up by folders: I take all the existing pod5 files and create symbolic links into folders each containing an equal number of pod5 files, then run dorado on the folders of links so I don't duplicate the data. Your solutions are helpful, and I think I will adapt them into a strategy where I identify all files over 1GB and break only those up into roughly 1GB pod5 files, and perhaps also aggregate files smaller than 100MB with merge.
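A rough, untested sketch of that strategy, reusing the commands from above (the 1GB/100MB thresholds, the 4000-read batch size and all file names are illustrative, and the pod5 subset call simply mirrors the --table/--columns pattern shown earlier):

# split every pod5 over 1GB into ~4000-read pieces via a per-file mapping table
for BIG in $(find data/ -name "*.pod5" -size +1G); do
    PREFIX="${BIG%.pod5}"
    pod5 view "$BIG" -IH > "${PREFIX}_ids.txt"
    split -l 4000 -a 4 -d --additional-suffix .txt "${PREFIX}_ids.txt" "${PREFIX}_batch."
    echo "read_id,dest" > "${PREFIX}_mapping.csv"
    for IDS in "${PREFIX}"_batch.*.txt; do
        awk -v dest="${IDS%.txt}.pod5" '{print $1 "," dest}' "$IDS" >> "${PREFIX}_mapping.csv"
    done
    pod5 subset "$BIG" --table "${PREFIX}_mapping.csv" --columns dest
done

# merge everything smaller than 100MB into a single pod5
pod5 merge $(find data/ -name "*.pod5" -size -100M) -o merged_small.pod5

The original oversized files (and the small files once merged) would then need to be excluded from the batch folders so reads aren't basecalled twice.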
@Shians, something like this, but please make sure you're not missing or duplicating read_ids when splitting them up.

# get all read_ids from pod5 quickly
pod5 view my.pod5 -IH > all_read_ids.txt
# calculate total number of read_ids
NUM_READS=$(wc -l < all_read_ids.txt)
# split into two parts
head -n $((NUM_READS / 2)) all_read_ids.txt > first_half.txt
# tail takes the remainder, so no read id is lost or duplicated if NUM_READS is odd
tail -n $((NUM_READS - NUM_READS / 2)) all_read_ids.txt > second_half.txt
# first worker
dorado basecaller <model> my.pod5 --read-ids first_half.txt ..... > first_half.bam
# second worker
dorado basecaller <model> my.pod5 --read-ids second_half.txt ..... > second_half.bam
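If more than two workers are needed, the same split command from the batching example above generalises this. A rough sketch (the worker count and file names are illustrative, and each dorado invocation would normally be submitted as its own job):

# split all read ids into K lists, one per worker
K=4
split -n l/${K} -a 2 -d --additional-suffix .txt all_read_ids.txt worker.
for IDS in worker.*.txt; do
    dorado basecaller <model> my.pod5 --read-ids "$IDS" > "${IDS%.txt}.bam"
done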
Thanks for the suggestion. Unfortunately, I think for my personal workflow it would be too messy to selectively process some pod5 files like this; my current process creates folders of symlinked pod5 files as described above. That way I can just use the folders directly as dorado inputs.
I have a situation where I have pod5 files of extremely variable size, some only 40MB while some are over 20GB. This makes it incredibly difficult to run parallel pipelines over the pod5 files, as some jobs take 50 times longer than others. Is it possible to use pod5 subset or pod5 repack to split up the larger files to contain, say, 4000 reads each? Ideally there would be a command that traverses a folder and repacks its contents into pod5 files with some defined read limit.