
TDB TT conversion accuracy versus CPU time #26

Closed
bgodard opened this issue May 20, 2018 · 8 comments
bgodard commented May 20, 2018

The conversion between TDB and TT in AstroTime.jl, taken from ERFA/SOFA, is time consuming because it evaluates a large series.

The accuracy is probably of the order of a few nanoseconds, but to what end? The difference between TDB and TT is a quantity that differs for each observer and depends on their trajectory (their position and velocity in the GCRS). It can vary by a few microseconds from one ground station on the surface of the Earth to another.

In the routine dtdb(jd1, jd2, ut, elong, u, v), the arguments ut, elong, u and v are only used to compute the observer position correction (the observer position is defined by elong, u and v, but the correction additionally needs the value of ut), neglecting the radial velocity correction (which only matters for observers not on the surface, such as an Earth satellite). In the rest of the code these arguments are set to zero (correction disabled) because there is no observer trajectory information, so the quantity computed is TT minus TDB for an observer at the geocenter.

While it is good to have the routine dtdb available for high-accuracy computation, for most use cases, where the observer location is unspecified, a faster and lower-accuracy routine would probably be better.

Additionally, the difference between TDB and TT can be computed faster and more accurately using a state-of-the-art numerically integrated geocenter time ephemeris. Example:

using BenchmarkTools

using AstroTime
using CALCEPH

# TT - TDB from the ERFA/SOFA series, observer corrections disabled
astrotime_TTminusTDB(tdb1, tdb2) = -AstroTime.Epochs.dtdb(tdb1, tdb2, 0.0, 0.0, 0.0, 0.0)

# TT - TDB interpolated from the INPOP time ephemeris via CALCEPH
eph1 = Ephem("inpop13c_TDB_m100_p100_tt.dat")
calceph_TTminusTDB(tdb1, tdb2) = compute(eph1, tdb1, tdb2, naifId.id[:ttmtdb], naifId.id[:timecenter], unitSec + useNaifId, 0)[1]

# Same, with the ephemeris file prefetched into memory
eph2 = Ephem("inpop13c_TDB_m100_p100_tt.dat")
prefetch(eph2)
calceph_TTminusTDB_prefetched(tdb1, tdb2) = compute(eph2, tdb1, tdb2, naifId.id[:ttmtdb], naifId.id[:timecenter], unitSec + useNaifId, 0)[1]


tdb = 2458257.0:2458257.0 + 366 * 10

difference(tdb) = astrotime_TTminusTDB(tdb, 0.0) - calceph_TTminusTDB(tdb, 0.0)

using Plots
pyplot()

plot(tdb,difference.(tdb))
xlabel!("JD")
ylabel!("seconds")

tdb = 2414105.5:0.49:2488984.5

function test1(fun, tab)
   for x in tab
      fun(floor(x), x - floor(x))   # split JD into integer and fractional parts
   end
end


b1 = @benchmark test1(astrotime_TTminusTDB,$tdb)
display(b1)
println()
b2 = @benchmark test1(calceph_TTminusTDB,$tdb)
display(b2)
println()
b3 = @benchmark test1(calceph_TTminusTDB_prefetched,$tdb)
display(b3)
println()

Result:

BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     2.024 s (0.00% GC)
  median time:      2.026 s (0.00% GC)
  mean time:        2.026 s (0.00% GC)
  maximum time:     2.029 s (0.00% GC)
  --------------
  samples:          3
  evals/sample:     1

BenchmarkTools.Trial: 
  memory estimate:  30.31 MiB
  allocs estimate:  916890
  --------------
  minimum time:     45.561 ms (3.81% GC)
  median time:      46.358 ms (3.98% GC)
  mean time:        46.627 ms (5.28% GC)
  maximum time:     50.704 ms (3.63% GC)
  --------------
  samples:          108
  evals/sample:     1

BenchmarkTools.Trial: 
  memory estimate:  30.31 MiB
  allocs estimate:  916890
  --------------
  minimum time:     39.932 ms (4.32% GC)
  median time:      41.296 ms (4.55% GC)
  mean time:        41.379 ms (6.10% GC)
  maximum time:     46.952 ms (9.21% GC)
  --------------
  samples:          121
  evals/sample:     1

[figure_1: plot of the difference between the two TT−TDB computations over the ten-year span]
The computed values differ by only a few nanoseconds, but the runtime is much better.
If AstroTime could target microsecond TDB−TT conversion accuracy (better accuracy doesn't make sense without a location specification) while being much faster than interpolating the time ephemeris, that would be very useful.

EDIT: I mixed up nanoseconds and picoseconds. This is fixed now!

helgee commented May 21, 2018

Thanks for your input, Bernard! I agree and this is something I wanted to tackle eventually.

Orekit uses the following formula:

By convention, TDB = TT + 0.001658 sin(g) + 0.000014 sin(2g) seconds, where g = 357.53 + 0.9856003 (JD − 2451545) degrees.

bgodard commented May 21, 2018

Orekit uses the following formula:

By convention, TDB = TT + 0.001658 sin(g) + 0.000014 sin(2g) seconds, where g = 357.53 + 0.9856003 (JD − 2451545) degrees.

I don't think the term "convention" should be used here.

function orekitlike_TTminusTDB(tdb1, tdb2)
   T = (tdb1 - 2451545) + tdb2                 # days since J2000 (TDB)
   g = 357.53 + 0.9856003 * T                  # mean anomaly of the Sun, degrees
   -0.001658 * sind(g) - 0.000014 * sind(2g)   # TT - TDB in seconds
end

difference2(tdb) = orekitlike_TTminusTDB(tdb,0.0)-calceph_TTminusTDB(tdb,0.0)

plot(tdb,difference2.(tdb))

b4 = @benchmark test1(orekitlike_TTminusTDB,$tdb)
display(b4)
println()

Results

BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     10.339 ms (0.00% GC)
  median time:      10.402 ms (0.00% GC)
  mean time:        10.600 ms (0.00% GC)
  maximum time:     12.613 ms (0.00% GC)
  --------------
  samples:          472
  evals/sample:     1

[plot: residuals of the two-term formula versus the time ephemeris]

In summary:

  • 4 times faster than ephemeris interpolation
  • accuracy about 30 microseconds (us)

Not bad for using only two periodic terms (amplitudes 1.6 ms and 14 us) due to the Earth's motion.
Neglecting the smaller of the two terms:

BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     6.022 ms (0.00% GC)
  median time:      6.065 ms (0.00% GC)
  mean time:        6.182 ms (0.00% GC)
  maximum time:     7.720 ms (0.00% GC)
  --------------
  samples:          809
  evals/sample:     1

while the error is within 40-50 us.
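The single-term variant was not shown explicitly; a minimal sketch, assuming the same sign convention as orekitlike_TTminusTDB above:

```julia
# One-term approximation of TT - TDB (sketch; same convention as
# orekitlike_TTminusTDB, with the smaller 14 us term dropped).
function orekitlike_TTminusTDB_1term(tdb1, tdb2)
    T = (tdb1 - 2451545) + tdb2          # days since J2000 (TDB)
    g = 357.53 + 0.9856003 * T           # mean anomaly of the Sun, degrees
    -0.001658 * sind(g)                  # TT - TDB in seconds
end
```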

Plotting the FFT of the residuals:

using FFTW, LinearAlgebra   # for fft and norm

tdb = 2414105.5:1.0:2488984.5

diff_fft = norm.(fft(difference2.(tdb)))
N = length(diff_fft)
T = (tdb[end] - tdb[1]) / 365.25   # time span in years
Δf = 1 / T                         # frequency resolution in year^-1

fmax = Δf * N / 2
fgrid = 0.0:Δf:fmax
diff_fft = diff_fft[1:length(fgrid)]

indices = 0.9 .< fgrid .< 0.975
plot(fgrid[indices], diff_fft[indices])

indices = 0.0 .< fgrid .< 0.3
plot(fgrid[indices], diff_fft[indices])

we see a clear peak at frequency ~0.92 year^-1 (corresponding to the synodic period of Jupiter) and smaller peaks at:

  • ~0.96 year^-1 (synodic period of Saturn)
  • ~0.08 year^-1 (orbital period of Jupiter)
  • ~0.03 year^-1 (orbital period of Saturn)
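The peak frequencies can also be picked out programmatically rather than read off the plot; a sketch (the helper name is illustrative, not from this thread):

```julia
using FFTW, LinearAlgebra

# Return the frequency of the largest spectral line in the band (fmin, fmax),
# given residual samples and the frequency resolution Δf in year^-1.
# Illustrative helper, not part of the thread's code.
function dominant_frequency(residuals, Δf, fmin, fmax)
    spec = norm.(fft(residuals))
    n = length(spec) ÷ 2                        # keep positive frequencies only
    freqs = (0:n-1) .* Δf
    band = findall(f -> fmin < f < fmax, collect(freqs))
    _, k = findmax(spec[band])
    freqs[band[k]]
end
```

Applied to difference2.(tdb) with the Δf defined above, this should return a value near 0.92 year^-1.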

A fit of an additional sine function, starting from the approximate frequency of the Jupiter synodic period, improves the error by a factor of 2 over the 200-year range covered by the INPOP ephemeris used.

# Broadcast over the time vector so curve_fit can evaluate the model on xdata.
function model(tdb, x)
   T = tdb .- 2451545
   g = 357.53 .+ 0.9856003 .* T
   -0.001658 .* sind.(g) .- 0.000014 .* sind.(2 .* g) .+ x[1] .* sin.(x[2] .* T .+ x[3])
end
xdata = collect(tdb)
ydata = calceph_TTminusTDB.(tdb, 0.0)
x0 = [30e-6, 2π / 365.25 * 0.915, 0.0]   # initial guess near the Jupiter synodic frequency

using LsqFit
fit = curve_fit(model, xdata, ydata, x0)

display(fit)
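For readers unfamiliar with LsqFit, the curve_fit call pattern can be checked on synthetic data first (a self-contained sketch; the names and numbers are illustrative, unrelated to the ephemeris fit above):

```julia
using LsqFit

# Fit a single sine to noiseless synthetic data and recover its parameters;
# this demonstrates the curve_fit(model, xdata, ydata, x0) pattern used above.
model_demo(t, p) = p[1] .* sin.(p[2] .* t .+ p[3])
tdemo = collect(0.0:0.5:400.0)                       # roughly one period of the signal
ydemo = model_demo(tdemo, [30e-6, 0.0157, 0.5])
fit_demo = curve_fit(model_demo, tdemo, ydemo, [25e-6, 0.016, 0.4])
fit_demo.param   # should converge close to [30e-6, 0.0157, 0.5]
```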

helgee commented May 23, 2018

Great analysis @bgodard! Could you open a PR that implements this difference function?

I would then integrate it with the transformation tree and possibly implement a new time scale, e.g. TDBObserver, that enables the user to use the full corrections.

bgodard commented May 23, 2018

I could not find a reference in the Orekit source code, but I think I found the source:
https://www.cv.nrao.edu/~rfisher/Ephemerides/times.html
which itself references the Explanatory Supplement to the Astronomical Almanac.

I will implement the same as Orekit. The accuracy is about 30 us, while the maximum difference between two fixed observers on the Earth's surface is about 4.3 us (two observers on opposite sides of the equator, aligned with the Earth's barycentric velocity, with the Earth at pericentre).

I would then integrate it with the transformation tree and possibly implement a new time scale, e.g. TDBObserver, that enables the user to use the full corrections.

I am not sure about defining multiple versions of TDB; then you would also need a TCBObserver. But you could also make it such that there is only one TDB and a TTObserver... Or have multiple versions of the timescale transformation (for different accuracies) but only one TDB and one TT timescale?

I think that for accurate observation modelling, the user should not in general rely on automatic timescale conversion, because they need to specify additional information about the observer. They should only be given the tools to perform the transformation.

helgee commented May 23, 2018

Good point, having an explicit constructor would be the cleanest solution and convenient enough for advanced users:

function TDBEpoch(ep::TTEpoch, ut, elong, u, v)
    ...
end
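One way such a constructor body could look, as a pseudocode sketch: `julian_twopart` and the ΔT-taking `TDBEpoch` constructor are hypothetical names, not confirmed AstroTime.jl API.

```julia
# Pseudocode sketch only: `julian_twopart` and TDBEpoch(ep, ΔT)
# are assumed names, not confirmed AstroTime.jl API.
function TDBEpoch(ep::TTEpoch, ut, elong, u, v)
    jd1, jd2 = julian_twopart(ep)              # two-part Julian day of ep
    ΔT = -dtdb(jd1, jd2, ut, elong, u, v)      # TDB - TT at the observer
    TDBEpoch(ep, ΔT)                           # offset-based constructor
end
```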

bgodard commented May 23, 2018

I guess the inverse function is needed too:

TTEpoch(ep::TDBEpoch, ut, elong, u, v)

When modelling two-way range for a deep-space probe, you need the quantity TDB−TT at the ground station at the transmission and reception events.
This difference needs to be accurate (e.g. you would not want to compute it as the difference of two epochs given as Julian days in two different timescales, TDB and TT, even when using a higher-precision scalar type).
The TDB or TT timetags themselves do not need the same level of accuracy as the quantity TDB−TT.
So it is more important to give user access to the method:

dtdb(jd1, jd2, ut, elong, u, v)

Then if the methods:

TDBEpoch(ep::TTEpoch, ut, elong, u, v)
TTEpoch(ep::TDBEpoch, ut, elong, u, v)

are exported, then dtdb(jd1, jd2, ut, elong, u, v) should be too.

Similar methods, which would be even more useful, are:

TDBEpoch(ep::TTEpoch, ΔT)
TTEpoch(ep::TDBEpoch, ΔT)

where ΔT is TDB−TT (or TT−TDB) computed by the user, because there are many different ways to compute a TDB−TT difference (e.g. time ephemeris interpolation, functions of planetary ephemeris state vectors, time series...), both for the geocenter and for the observer correction.

helgee commented May 23, 2018

@prakharcode See above. This is something we should consider when we start refactoring the conversion functions.

helgee commented Jun 4, 2018

@bgodard Using a custom TDB/TT offset should work now, e.g. by manually calling the TDB constructor like this:

tt = TTEpoch(now())
tdb_tt = ???
tdb = TDBEpoch(tt, tdb_tt)

@helgee helgee closed this as completed Jun 4, 2018