ORCS fitting performance can certainly be improved. There is a lot of processing around the pure fitting procedure which can be optimized, and the fitting procedure itself may be too time consuming.
To help you decide whether something is wrong, the following gives an idea of how fast a fit should be.
import pylab as pl
import numpy as np
import scipy.optimize
import timeit
def gaussian1d(x, h, a, dx, fwhm):
    w = fwhm / (2. * np.sqrt(2. * np.log(2.)))
    return h + a * np.exp(-(x - dx)**2. / (2. * w**2.))
size = 1000
preal = 0, 1, size/2. + np.random.standard_normal(), 1.25
spectrum = gaussian1d(np.arange(size), *preal)
p0 = (0, 1, np.argmax(spectrum), 1)
pfit = scipy.optimize.curve_fit(gaussian1d, np.arange(spectrum.size), spectrum, p0=p0)[0]
%timeit scipy.optimize.curve_fit(gaussian1d, np.arange(spectrum.size), spectrum, p0=p0)[0]
pl.plot(gaussian1d(np.arange(spectrum.size), *pfit))
pl.plot(spectrum)
pl.xlim((size/2 - size*0.05, size/2 + size*0.05))
print('input parameters', p0)
print('fitted parameters', pfit)
print('real parameters', preal)
The output on my machine is:
1.59 ms ± 110 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
input parameters (0, 1, 501, 1)
fitted parameters [-9.42471045e-15 9.99999999e-01 5.00700400e+02 1.25000000e+00]
real parameters (0, 1, 500.70040028696957, 1.25)
So about 1.6 ms per fit for a single noiseless Gaussian line, without using any of the orb or orcs machinery. This can be considered the best performance obtainable for an ideally clean and easy-to-fit spectrum. I am not sure that a pure C implementation would make it much faster.
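If you are not running inside IPython (where the %timeit magic is available), the timeit module imported above gives an equivalent measurement. Here is a minimal, self-contained sketch of the same benchmark as a plain script; the exact per-fit time will of course depend on your machine:

```python
import timeit

import numpy as np
import scipy.optimize

def gaussian1d(x, h, a, dx, fwhm):
    # same model as above: Gaussian with height offset h, amplitude a,
    # center dx and full width at half maximum fwhm
    w = fwhm / (2. * np.sqrt(2. * np.log(2.)))
    return h + a * np.exp(-(x - dx)**2. / (2. * w**2.))

size = 1000
preal = 0, 1, size / 2. + np.random.standard_normal(), 1.25
x = np.arange(size)
spectrum = gaussian1d(x, *preal)
p0 = (0, 1, np.argmax(spectrum), 1)

# time n fits and report the mean time per fit
n = 100
total = timeit.timeit(
    lambda: scipy.optimize.curve_fit(gaussian1d, x, spectrum, p0=p0),
    number=n)
print('%.2f ms per fit' % (1000. * total / n))
```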
Using the model orb.utils.spectrum.gaussian1d directly instead of the model defined above makes virtually no difference (even though gvar is used instead of numpy, most certainly because it redirects directly to numpy when the input vector is a numpy.ndarray instance and not a gvar.GVar instance).
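The redirection described above can be sketched as a simple type dispatch. This is an illustrative sketch only, not orb's actual implementation (gaussian1d_fast is a hypothetical name): plain numpy arrays take the fast pure-numpy path, anything else would fall through to a gvar-aware path.

```python
import numpy as np

def gaussian1d_fast(x, h, a, dx, fwhm):
    # Hypothetical sketch of the dispatch idea (not orb's actual code):
    # when x is a plain numpy.ndarray, evaluate with pure numpy, which
    # is as fast as the hand-written model above.
    if isinstance(x, np.ndarray):
        w = fwhm / (2. * np.sqrt(2. * np.log(2.)))
        return h + a * np.exp(-(x - dx)**2. / (2. * w**2.))
    # a gvar.GVar-aware implementation would go here instead
    raise TypeError('expected a numpy.ndarray')
```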