jplephem should cleanup resources better #24
Comments
brandon-rhodes
Jun 26, 2018
Owner
Thanks for the kudos, that was very nice of you to include! :)
The library is designed to open the file exactly once, when open() is called, so I'm surprised that you're getting a "too many open files" error. Maybe your system also returns that error when some other limit is reached, like the number of memory maps? The library does map each segment into memory and then keep it there, so that code like:
a = kernel[p].compute(t0)
a = kernel[p].compute(t1)
does not have to perform the expensive memory-map operation twice, but can keep using the same memory map.
How many segments does your kernel have?
What is the full traceback, so that we can see which line of code is erroring? That might tell us which resource is running out on your system.
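The caching described above can be sketched in miniature. This is not jplephem's actual code, just a minimal illustration of the pattern: each segment performs its expensive load (in the real library, an mmap) at most once, and later compute() calls reuse the cached array.

```python
import numpy as np

class Segment:
    """Toy stand-in for an SPK segment that caches its loaded data."""

    def __init__(self, data):
        self._source = data      # stands in for the file region to map
        self._array = None       # cache, filled on first access

    def _load(self):
        if self._array is None:  # the expensive "map" happens only once
            self._array = np.asarray(self._source, dtype=float)
        return self._array

    def compute(self, t):
        coeffs = self._load()    # later calls reuse the cached array
        return coeffs * t        # placeholder for the real evaluation

seg = Segment([1.0, 2.0, 3.0])
a0 = seg.compute(2457061.5)
a1 = seg.compute(2457062.5)          # no second load is performed
assert seg._load() is seg._load()    # the very same cached array
```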
kghose
Jun 26, 2018
In [50]: import numpy as np
...: from jplephem.spk import SPK
...: kernel = SPK.open('ast343de430.bsp')
...: pos = np.array([kernel[p].compute(2457061.5) for p in kernel.pairs.keys()])
...:
...:
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
Traceback (most recent call last):
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2963, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-50-08a22ec2a65b>", line 4, in <module>
pos = np.array([kernel[p].compute(2457061.5) for p in kernel.pairs.keys()])
File "<ipython-input-50-08a22ec2a65b>", line 4, in <listcomp>
pos = np.array([kernel[p].compute(2457061.5) for p in kernel.pairs.keys()])
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/site-packages/jplephem/spk.py", line 114, in compute
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/site-packages/jplephem/spk.py", line 161, in generate
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/site-packages/jplephem/spk.py", line 136, in _load
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/site-packages/jplephem/daf.py", line 154, in map_array
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/site-packages/jplephem/daf.py", line 112, in map_words
OSError: [Errno 24] Too many open files
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 1863, in showtraceback
stb = value._render_traceback_()
AttributeError: 'OSError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/site-packages/IPython/core/ultratb.py", line 1095, in get_records
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/site-packages/IPython/core/ultratb.py", line 311, in wrapped
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/site-packages/IPython/core/ultratb.py", line 345, in _fixed_getinnerframes
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/inspect.py", line 1480, in getinnerframes
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/inspect.py", line 1438, in getframeinfo
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/inspect.py", line 693, in getsourcefile
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/inspect.py", line 722, in getmodule
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/inspect.py", line 706, in getabsfile
File "/Users/kghose/miniconda2/envs/kaggle/lib/python3.6/posixpath.py", line 374, in abspath
OSError: [Errno 24] Too many open files
kghose
Jun 26, 2018
In [2]: print(kernel)
File type DAF/SPK and format LTL-IEEE with 343 segments:
2287184.50..2688976.50 Sun (10) -> Unknown Target (2000276)
2287184.50..2688976.50 Sun (10) -> Unknown Target (2000145)
2287184.50..2688976.50 Sun (10) -> Unknown Target (2000268)
...
kghose
Jun 26, 2018
> launchctl limit maxfiles
maxfiles 256 unlimited
Restricting the keys to the first 256 gets rid of this problem, naturally.
Sorry, that number can vary: restricting it to the first 200 seems to always run, while getting closer to 256 gets me more and more failures (i.e. there is some variability in when it actually fails).
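As a stopgap, the soft open-file limit shown by launchctl above can be raised from inside the process toward the hard limit (reported there as "unlimited"). This is ordinary POSIX resource handling, nothing jplephem-specific, and the 4096 target below is an arbitrary choice:

```python
import resource

# Read the current soft/hard limits for open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Pick a higher soft limit, never exceeding the hard limit.
target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)

# Never lower an already-generous soft limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, target), hard))
```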
brandon-rhodes
Jun 27, 2018
Owner
Wow — 343 segments!
The library's current approach, of only mmap'ing the segments that the user actually accesses, is designed to minimize how much of the process's address space the library consumes when someone only needs a few segments. On 32-bit machines, with kernels holding only a few huge segments, that trade-off seemed important.
But with modern 64-bit address spaces it probably would make more sense to simply mmap() the entire kernel, instead of making a separate map for each segment — and, of course, that approach would prevent you from running out of file descriptors in the case where the number of segments is large.
I wonder if I should switch the default behavior for everyone, or make the new behavior an option? I'm thinking of kernels like jup310.bsp whose 931M would be a pretty big dent in the address space of a 32-bit program.
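The whole-file idea floated above can be sketched as follows. This is a hypothetical illustration, not jplephem's implementation, and the word offsets are simplified rather than the real DAF layout: one mmap covers the entire kernel, and each segment becomes a zero-copy NumPy view into that single map, so the number of descriptors and maps no longer grows with the number of segments.

```python
import mmap
import os
import tempfile
import numpy as np

class WholeFileKernel:
    """Hypothetical kernel that maps the whole file exactly once."""

    def __init__(self, path):
        self._file = open(path, 'rb')
        # One mmap for the whole kernel: one descriptor, one address range.
        self._map = mmap.mmap(self._file.fileno(), 0,
                              access=mmap.ACCESS_READ)

    def segment_words(self, start_word, word_count):
        # 8-byte little-endian doubles, as in an LTL-IEEE DAF file.
        return np.frombuffer(self._map, dtype='<f8',
                             count=word_count, offset=start_word * 8)

    def close(self):
        self._map.close()
        self._file.close()

# Demo on a throwaway file holding ten doubles.
with tempfile.NamedTemporaryFile(delete=False, suffix='.bin') as f:
    np.arange(10, dtype='<f8').tofile(f)
    path = f.name

k = WholeFileKernel(path)
words = k.segment_words(2, 3)      # zero-copy view, no extra descriptor
assert words.tolist() == [2.0, 3.0, 4.0]
del words                          # release the view before closing the map
k.close()
os.unlink(path)
```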
brandon-rhodes
self-assigned this
Jun 27, 2018
brandon-rhodes
Jul 22, 2018
Owner
I've just released version 2.8, which should hopefully fix your problem. I'm going to close the issue, but feel free to re-open if you run into this problem again!
kghose
commented Jun 25, 2018 (edited)
Use case
I have a file (ast343de430.bsp) containing about 300 bodies. I want to plot the positions of all the bodies at one given instant of time.
Problem
When I use the following code:
import numpy as np
from jplephem.spk import SPK
kernel = SPK.open('ast343de430.bsp')
pos = np.array([kernel[p].compute(2457061.5) for p in kernel.pairs.keys()])
I get:
OSError: [Errno 24] Too many open files
Expected behavior
Each call to kernel[p].compute(2457061.5) should clean up such that we don't run into this issue.
Kudos
BTW, library writers often just get gripes from users, so I'd like to point out here that I really like jplephem and it is my go-to Python solution for all things SPK related. Thank you for all the effort you have put into it!