Memory issue with satellite.at()? #373
Good question! On the one hand, with a quick script I cannot confirm performance worse than linear — in fact I show slightly better than linear, with the cost per point generated dropping as the size of the time vector grows larger:
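The quick script and its output were lost when this page was extracted. A stdlib-only sketch of the kind of timing harness that can show cost per point versus vector size — here the `work` function is just a stand-in for the real vectorized `satellite.at(t)` call:

```python
# Hypothetical re-creation of the quick benchmark: time a vectorized
# computation for growing input sizes and report the cost per point.
import time

def work(n):
    # stand-in for an O(n) vectorized computation such as satellite.at(t)
    return sum(i * i for i in range(n))

for n in (1_000, 10_000, 100_000):
    start = time.perf_counter()
    work(n)
    elapsed = time.perf_counter() - start
    print(f"{n:>7} points: {elapsed / n * 1e6:.3f} us per point")
```

If the per-point figure drops as `n` grows, scaling is slightly better than linear, matching the observation above.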
But, on the other hand, my laptop runs out of memory trying to generate a mere 100k positions!
It should only take about 4.8 MB to hold 100,000 points × 3 coordinates (x, y, z) × 8 bytes per floating-point number × 2 for both position and velocity, so something unexpected is definitely happening with memory allocation. Let me go look at the code.
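The back-of-the-envelope estimate above can be checked directly:

```python
# Memory needed for 100k position+velocity vectors of 8-byte floats
points = 100_000
coords = 3           # x, y, z
bytes_per_float = 8
arrays = 2           # one array for position, one for velocity
total = points * coords * bytes_per_float * arrays
print(total / 1e6, "MB")  # 4.8 MB
```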
Thanks, I appreciate it. (Note this is not a real blocker for me, although I could definitely use some speed-up.) I know anecdotal evidence isn't really useful for a bug report, but I am also under the impression that when I do the same thing twice, the second time is much faster. Does that make sense?
Yes, it makes perfect sense given the results of my investigation! Here's what I found: the cost here is not in computing the satellite positions themselves, but in building the precession-nutation matrices for the time object. The reason you are seeing a speedup the second time is probably that you are re-using the same time object, whose matrices are cached after the first computation. So we should think about three things: first, immediately getting you a workaround; second, whether the workaround should kick in automatically for Earth satellite coordinates; and third, looking into the IAU 2000A implementation to see whether those huge intermediate arrays can be avoided. So, first: here's a script that measures the degradation in coordinates if you switch to the less accurate but much less time- and memory-intensive IAU 2000B nutation:
And the result: given that satellite coordinates are generally only accurate to a kilometer or so, this tiny 1 mm or 2 mm difference in position is negligible. I encourage you to immediately update your script to create time objects whose nutation angles are set with the faster IAU 2000B routine. Second: delivering a quicker, less accurate ICRF rotation matrix in cases where Earth satellite objects are involved will be tricky. The actual coordinate transform routine just asks the time object for its rotation matrix, without knowing what kind of target is being observed. So would we invent a new rotation matrix attribute for satellite code to ask for instead? Third: I am not sure why those intermediate matrices get so large. I'll have to debug the nutation code and find out.
Wow! I haven't made extensive tests yet, but at first glance the time/memory footprint is well below the level at which I would notice it. I don't know if mine is really a peculiar use case, but it might be worth noting this in the docs. Thanks!
@lucabaldini: I spent yesterday looking at the nutation routines, and I found several spots where very large intermediate results were being produced, and figured out how to avoid them. Here's how to try out the new version, if you would like to let me know ahead of the next release whether you see a difference in your own calculations:
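The exact installation command was lost in extraction; a typical way to try a not-yet-released version straight from the project's GitHub repository (my assumption, not the author's verbatim instruction) would be:

```shell
# Assumed invocation: install the development version of Skyfield from GitHub
pip install https://github.com/skyfielders/python-skyfield/archive/master.zip
```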
The new code has now been released in Skyfield 1.21. I'm going to close the issue for now, but please re-open it if you run into further problems!
I have an application where I am trying to calculate the position of a satellite on a very fine time grid (say millions of points).
The simple call to
geocentric = satellite.at(t)
seems to run into performance issues: for large n it scales much worse than linearly, and by the time n is O(1 M) it is essentially requiring >10 GB of RAM and bringing my terminal down.
I can make a more detailed report, including all the related information, but I want to make a quick check first of whether I am doing something wrong and/or you have any immediate insight into this.
Thanks in advance!