Hi,
I was interested in running pstools with Omni-C data, but I got the following segmentation fault when running hic_mapping_haplo:
...
[M::tb_pipeline::10452.864*42.22] processed 1333334 sequences;
[M::tb_pipeline::10462.076*42.23] processed 1333334 sequences;
[M::tb_pipeline::10470.184*42.24] processed 1333334 sequences;
[M::tb_pipeline::10474.139*42.25] processed 671892 sequences;
/var/spool/slurmd/job106890/slurm_script: line 11: 12235 Segmentation fault (core dumped) ./pstools_1 hic_mapping_haplo -t64 02.4_Nypro_Flye_sspace_lr1_gc1_SLRr1_gc1.scaff_seqs <(zcat /cluster/home/shared/nypro/08.8_ALLHiC/fastq/RacoonDogHiC_EKDL200002762-1a_R1.fastq.gz) <(zcat /cluster/home/shared/nypro/08.8_ALLHiC/fastq/RacoonDogHiC_EKDL200002762-1a_R2.fastq.gz) -o scaff_connections.txt
/var/spool/slurmd/job106890/slurm_script: line 12: 48458 Killed
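For reference, the job was submitted through a SLURM batch script along these lines. The #SBATCH resource values below are illustrative assumptions rather than the exact header; the pstools command itself is the one shown in the log above.

```bash
#!/bin/bash
#SBATCH --job-name=pstools_hic    # illustrative header; actual resource requests may differ
#SBATCH --cpus-per-task=64        # matches the -t64 thread count passed to pstools
#SBATCH --mem=1000G               # the node had roughly 1 TB of RAM

# Map the Omni-C read pairs against the scaffolded assembly with pstools;
# the gzipped FASTQs are decompressed on the fly via process substitution.
./pstools_1 hic_mapping_haplo -t64 \
    02.4_Nypro_Flye_sspace_lr1_gc1_SLRr1_gc1.scaff_seqs \
    <(zcat /cluster/home/shared/nypro/08.8_ALLHiC/fastq/RacoonDogHiC_EKDL200002762-1a_R1.fastq.gz) \
    <(zcat /cluster/home/shared/nypro/08.8_ALLHiC/fastq/RacoonDogHiC_EKDL200002762-1a_R2.fastq.gz) \
    -o scaff_connections.txt
```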
"hic_mapping_haplo" ran out of RAM with a 1 TB server. I tried to follow shilpagarg/DipAsm#16 like you suggested in a previous issue but it didn't work.
Do you know how could I solve it?
Cheers,
Luis