Some benchmarking showed that the SAXS and XPCS functions take up the majority of the processing time; the profiling results can be seen here:
low: 10 trains, 2 jobs, 2 procs:
med: 50 trains, 4 jobs, 4 procs:
high: 100 trains, 8 jobs, 8 procs:
SAXS is largely bottlenecked by pyFAI, and there isn't much we can do about that (apart from perhaps avoiding potentially unneeded repeated calls into pyFAI), but the XPCS functions come from Xana, which we can more easily work on.
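One cheap way to cut repeated pyFAI cost is to make sure the integrator object is built once per detector geometry and reused across frames, since constructing it (and its lookup tables) is the expensive part. This is a sketch only: the stub class below stands in for pyFAI's real `pyFAI.azimuthalIntegrator.AzimuthalIntegrator`, and the geometry parameters shown are illustrative, not the ones our pipeline actually passes.

```python
from functools import lru_cache


class FakeIntegrator:
    """Stub standing in for pyFAI's AzimuthalIntegrator.

    In the real code, constructing the integrator triggers expensive
    setup, which is what we want to avoid doing per call.
    """

    n_created = 0  # count constructions so reuse is observable

    def __init__(self, dist, wavelength):
        type(self).n_created += 1
        self.dist = dist
        self.wavelength = wavelength


@lru_cache(maxsize=None)
def get_integrator(dist, wavelength):
    # Geometry parameters are hashable scalars, so identical calls
    # return the cached instance instead of rebuilding it each time.
    return FakeIntegrator(dist, wavelength)
```

With this pattern, `get_integrator(0.1, 1e-10)` called from a per-frame loop constructs the integrator once; only a genuinely new geometry triggers another construction.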
The main hotspot is Xana/XpcsAna/mp_corr3_err.py; finding a better-optimised multi-tau autocorrelation function could give a nice speedup in execution time.
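For reference, the core of a multi-tau autocorrelator is small. The sketch below is a generic textbook-style implementation, not Xana's actual algorithm from mp_corr3_err.py (which also propagates errors and runs multiprocessed): linear lags at the finest time resolution, then the signal is coarsened by averaging adjacent samples and the lag spacing doubles, giving quasi-logarithmic lag coverage at roughly O(N log N) cost instead of O(N·T) for a direct correlation at every lag.

```python
import numpy as np


def multi_tau_g2(intensity, lags_per_level=8, n_levels=4):
    """Generic multi-tau autocorrelation sketch (not Xana's algorithm).

    Returns (lags, g2) with g2(tau) = <I(t) I(t+tau)> / (<I(t)> <I(t+tau)>)
    at quasi-logarithmic lags: lags 1..m at level 0, then only the upper
    half m//2+1..m at each coarser level, since smaller lags were already
    covered at finer resolution.
    """
    x = np.asarray(intensity, dtype=float)
    lags, g2 = [], []
    spacing = 1  # current lag step, in units of the original sampling
    for level in range(n_levels):
        first = 1 if level == 0 else lags_per_level // 2 + 1
        for k in range(first, lags_per_level + 1):
            if k >= len(x):
                break  # coarsened signal too short for this lag
            num = np.mean(x[:-k] * x[k:])
            den = np.mean(x[:-k]) * np.mean(x[k:])
            lags.append(k * spacing)
            g2.append(num / den)
        # Coarsen: average adjacent pairs, doubling the lag spacing.
        n = len(x) // 2 * 2
        x = 0.5 * (x[:n:2] + x[1:n:2])
        spacing *= 2
    return np.array(lags), np.array(g2)
```

At level 0 this matches a direct g2 estimate exactly; at coarser levels the pairwise averaging trades a small amount of smoothing at long lags for the large reduction in work, which is the usual multi-tau compromise. Whether a drop-in replacement along these lines beats mp_corr3_err would need benchmarking against the same inputs.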