Hi Atul,

Thank you for developing this tool.

I have been using sPARTA to identify miRNA targets in both genic and intergenic regions of bread wheat, with two PARE libraries. The genic run finished successfully, but the intergenic run keeps hitting memory errors even though I allocated approximately 450 GB of RAM for the job.
Do you have any suggestions for handling this? Would you recommend splitting the genome into several smaller files and running the analysis multiple times? Or are there any resource-allocation parameters in the script that I could adjust to reduce peak memory usage?
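For the splitting idea, something like the following hypothetical helper (not part of sPARTA) is what I had in mind: one output file per chromosome, so each run only has to index a fraction of the genome.

```python
# Hypothetical helper (not part of sPARTA): split a multi-record FASTA genome
# into per-chromosome files, so each sPARTA run indexes a smaller sequence set.
# Assumes record headers start with '>'; filenames come from the first word
# of each header.
import os

def split_fasta(genome_path, out_dir):
    """Write each FASTA record in genome_path to its own file in out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    out = None
    paths = []
    with open(genome_path) as fh:
        for line in fh:
            if line.startswith(">"):
                if out:
                    out.close()
                name = line[1:].split()[0]
                path = os.path.join(out_dir, f"{name}.fa")
                paths.append(path)
                out = open(path, "w")
            if out:
                out.write(line)
    if out:
        out.close()
    return paths
```

Each chunk could then be passed to a separate sPARTA run via -genomeFile, with the per-chunk results merged afterwards.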
Thanks,
Jo-Wei
============
Here is my command for the intergenic regions:

nohup python3 ~/bin/sPARTA/sPARTA.py \
  -genomeFile iwgsc_refseqTwoPointOne_assembly.fa \
  -annoType GFF -annoFile iwgsc_refseqTwoPointOne.gff3 \
  -genomeFeature 1 \
  -miRNAFile miRNA_combined_noDup.fa \
  -libs 11787.processed.txt 11788.processed.txt \
  -tarPred H -tarScore N \
  --tag2FASTA --map2DD --validate \
  -accel 60 -minTagLen 18 &
Here is the error:
Creating PAGe Index dictionary for lib 11787.processed
PAGe Indexing took 248.73 seconds
Writing PAGeIndex file...
median = 6.0
seventyFivePercentile = 13.0
ninetyPercentile = 49.0
File written. Process took 2.07 seconds
Finding the validated targets
Exception in thread Thread-16 (_handle_workers):
Traceback (most recent call last):
File "/work4/home/joweihsieh/miniconda3/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
self.run()
File "/work4/home/joweihsieh/miniconda3/lib/python3.12/threading.py", line 1010, in run
self._target(*self._args, **self._kwargs)
File "/work4/home/joweihsieh/miniconda3/lib/python3.12/multiprocessing/pool.py", line 516, in _handle_workers
cls._maintain_pool(ctx, Process, processes, pool, inqueue,
File "/work4/home/joweihsieh/miniconda3/lib/python3.12/multiprocessing/pool.py", line 340, in _maintain_pool
Pool._repopulate_pool_static(ctx, Process, processes, pool,
File "/work4/home/joweihsieh/miniconda3/lib/python3.12/multiprocessing/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/work4/home/joweihsieh/miniconda3/lib/python3.12/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
^^^^^^^^^^^^^^^^^
File "/work4/home/joweihsieh/miniconda3/lib/python3.12/multiprocessing/context.py", line 282, in _Popen
return Popen(process_obj)
^^^^^^^^^^^^^^^^^^
File "/work4/home/joweihsieh/miniconda3/lib/python3.12/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/work4/home/joweihsieh/miniconda3/lib/python3.12/multiprocessing/popen_fork.py", line 66, in _launch
self.pid = os.fork()
^^^^^^^^^
OSError: [Errno 12] Cannot allocate memory
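For context, the Errno 12 is raised by os.fork() inside multiprocessing.Pool: each worker is forked from the parent process, and once the parent holds the large PAGe index, the kernel can refuse to duplicate its address space. If I understand correctly, fewer concurrent workers (a lower -accel) should reduce that pressure; the relevant standard-library knobs look roughly like this (an illustration only, not sPARTA's actual code):

```python
# Illustration only: multiprocessing.Pool options that bound memory pressure.
# 'processes' caps the number of concurrent forks (analogous to lowering
# sPARTA's -accel); 'maxtasksperchild' recycles workers so per-worker memory
# cannot grow without bound.
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(processes=4, maxtasksperchild=10) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```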
I'm running into the same problem with Hordeum vulgare: finding intergenic targets uses more than 1.5 TB of memory. Would it be possible to add an option to limit memory usage?
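In the meantime, a possible stopgap is to cap the process's address space with the standard resource module, so a run fails fast instead of exhausting the whole node. This is just a sketch using resource.setrlimit (POSIX only), not an existing sPARTA option:

```python
# Stopgap sketch (not a sPARTA feature): cap this process's virtual address
# space before heavy work, using only the standard 'resource' module (POSIX).
import resource

def cap_address_space(max_bytes):
    """Limit the virtual memory this process (and its forks) may allocate."""
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    # Raise only the soft limit; the hard limit stays untouched.
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))
    return resource.getrlimit(resource.RLIMIT_AS)
```

Child processes created by fork() inherit the limit, so it would also apply to the pool workers.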