
Contribution from Pierre #8

Closed
petitcactusorange opened this issue Oct 7, 2015 · 5 comments

Comments

@petitcactusorange

I discussed with Pierre (Billoir, whom some of you already know), who has been thinking about potential speed-ups and improvements to the upgrade tracking since he started working on LHCb (and for a while before). For the time being we can summarize the discussion in two points:

  • Weight matrix formalism for the Kalman Filter.
  • Attempts to reduce the computations in some steps of the reconstruction.

It seems to me this could fit nicely into the Event Model. The way I see it, he could share his thoughts with us in a parallel session. @betatim @GerhardRaven @manuelschiller Thoughts?

@GerhardRaven
Contributor

The BaBar track fit flipped 'on demand' between covariance and weight matrices: at measurement nodes one needs weights, but (IIRC!) transport typically wants covariances... I think I discussed this with Wouter (he is the right person to ask!), and the conclusion was either that it didn't match the LHCb coding convention of strict separation between 'data' and 'algorithms', or that it didn't gain us enough to bother. Regardless of that, we should reconsider this as an option, as maybe things have changed and the balance has shifted.
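To make the on-demand flip concrete, here is a minimal sketch (my own illustration, not BaBar or LHCb code; the 5-parameter state, the function names and the use of Eigen are all just assumptions for brevity): transport is cheapest with the covariance C, while a measurement update is cheapest with the weight (information) matrix W = C⁻¹, so a fit can switch representation depending on which step comes next.

```cpp
#include <Eigen/Dense>

using Vec5 = Eigen::Matrix<double, 5, 1>;
using Mat5 = Eigen::Matrix<double, 5, 5>;
using Row5 = Eigen::Matrix<double, 1, 5>;

struct State {
    Vec5 x;  // track parameters
    Mat5 C;  // covariance (a weight-form state would store W = C.inverse() instead)
};

// Transport (prediction) is most natural in covariance form:
//   x' = F x,   C' = F C F^T + Q
State transport(const State& s, const Mat5& F, const Mat5& Q) {
    State out;
    out.x = F * s.x;
    out.C = F * s.C * F.transpose() + Q;
    return out;
}

// A 1D measurement m with weight G = 1/sigma^2 and projection H is most
// natural in weight (information) form:
//   W' = W + H^T G H,   W' x' = W x + H^T G m
// so one inverts C -> W here, updates, and only goes back to covariance
// form when the next transport step needs it.
State update(const State& s, const Row5& H, double m, double G) {
    Mat5 W  = s.C.inverse();
    Mat5 Wn = W + H.transpose() * G * H;
    State out;
    out.x = Wn.inverse() * (W * s.x + H.transpose() * G * m);
    out.C = Wn.inverse();
    return out;
}
```

In weight form, consecutive measurement updates are just additions, so a fit that processes several nearby nodes before the next transport could stay in weight form across all of them, which is (as I understand it) the flipping described above.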

As for the 2nd point, one class of speedups we have implemented in a few hot spots is to not compute with more accuracy than needed (e.g. Manuel's improvement on the T0 walk correction, which doesn't need to be known with 53 bits (16 digits) of precision when the TDC that it corrects has only 8 bits in the first place). This should (IMHO) be propagated throughout the code.

There is precious little information on how accurate individual bits of code have to be. In many cases we just spend time computing 'random digits' beyond the required accuracy; I've suggested a few times that we should sometimes explicitly truncate our precision to make things more reproducible. Conversely, I'd also like to know which parts of the code are 'borderline' on precision: I can imagine there are algorithms where the precision isn't good enough. In a perfect world, I'd like each bit of reconstruction code to come with a specification on how accurate it should be. I doubt we'll ever get to that stage, but it is something to strive for.
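To illustrate the 'explicitly truncate our precision' idea, here is a minimal sketch (the numbers and names are hypothetical, not the actual OT/T0 code): a correction that feeds an 8-bit TDC can be quantized to the TDC granularity, or simply carried in single precision, without losing anything physical, and the explicit truncation makes the result bit-reproducible across compilers and platforms.

```cpp
#include <cmath>

// Purely illustrative numbers: assume an 8-bit TDC covering a 75 ns window,
// i.e. an LSB of 75/256 ns. (The real detector parameters may differ.)
constexpr double tdcLsbNs = 75.0 / 256.0;

// Quantize a walk correction to an integer number of TDC counts; this avoids
// pretending to 16-digit accuracy and pins the value to a reproducible grid.
inline double roundToTdcLsb(double correctionNs) {
    return std::round(correctionNs / tdcLsbNs) * tdcLsbNs;
}

// Alternatively, simply carry the correction in single precision: 24 bits of
// mantissa are already far more than the 8 bits the TDC provides.
inline float toSinglePrecision(double correctionNs) {
    return static_cast<float>(correctionNs);
}
```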

@petitcactusorange
Author

Regarding the first point, we put Pierre in contact with Wouter, so hopefully they are exchanging valuable information.

Regarding the second point: "There is precious little information on how accurate individual bits of code have to be." I agree; I failed miserably to find this information when Pierre asked me questions about it. Maybe we can be optimistic and decide that from now on the "specification on how accurate it should be" shall be quoted together with the efficiencies, ghost rate, etc.? This is probably algorithm dependent, but maybe @manuelschiller could teach us how to work this out?

@petitcactusorange
Author

@GerhardRaven P.S.: what does (IIRC!) mean?

@GerhardRaven
Contributor

IIRC = If I Remember Correctly

@petitcactusorange
Author

Chatted with @manuelschiller; it seems tough to have a talk about code accuracy, since this is too algorithm dependent. We'll stick to the talk from Pierre.
