feat: Support multi-output models in OnnxRuntimeBase #2171
Conversation
📊 Physics performance monitoring for 6dd6afa: Vertexing, Seeding, CKF, Ambiguity resolution, Truth tracking (Kalman Filter), Truth tracking (GSF)
Failure in CI Bridge 😢
This seems good to me. If it passes the CI, that means it doesn't change the output of the MLSolver.
It is up to you, but I think the effort to change all the uses of runONNXInference to runONNXInferenceMultiOutput in ACTS is probably minor, so if you don't mind doing it, keeping only the multi-output one would make sense to me.
I think we should still keep the "single output" runOnnxInference as a convenience, since I think the majority of use cases would still be single-output. But under the hood it is now implemented in terms of the multi-output version.
Codecov Report

```
@@            Coverage Diff            @@
##             main    #2171     +/-   ##
=========================================
+ Coverage   49.41%   49.48%   +0.07%
=========================================
  Files         441      439       -2
  Lines       25176    25130      -46
  Branches    11617    11600      -17
=========================================
- Hits        12441    12436       -5
+ Misses       4481     4443      -38
+ Partials     8254     8251       -3
```

see 17 files with indirect coverage changes
Looks great, once we understand what is going on with the CI this can go in!
Maybe rebasing will fix the Bridges issue?
Let's try... 🤞
Just fyi @gagnonlg (and others): if you @-mention folks in the PR description, this will propagate into the commit message, and will cause lots of notifications when the commit makes it into a release etc.
I think that was a transient failure due to the CI being unable to fetch G4 data content. This happens from time to time 🤷 |
This PR implements the Mixture Density Network measurement calibration that is used in the ATLAS experiment to calibrate pixel cluster positions. For documentation at the moment, there is a comment in the .hpp with a description of the method, which I reproduce here:

```
/// Measurement position calibration based on mixture density network
/// (MDN) model. The model takes as input:
///
/// - A 7x7 charge matrix centered on the center pixel of the cluster;
/// - The volume and layer identifiers from
///   the GeometryIdentifier of the containing surface;
/// - The bound phi and theta angles of the predicted track state;
/// - The initial estimated position
/// - The initial estimated variance
///
/// Given these inputs, a mixture density network estimates
/// the parameters of a Gaussian mixture model:
///
/// P(Y|X) = \sum_i P(Prior_i) N(Y|Mean_i(X), Variance_i(X))
///
/// These are translated to a single position + variance estimate by
/// taking the most probable value based on the estimated priors.
/// The measurements are assumed to be 2-dimensional.
///
/// This class implements the MeasurementCalibrator interface, and
/// therefore internally computes the network input and runs the
/// inference engine itself.
```

A few other things to note:

- This PR does not include a reference .onnx model; I plan to publish that for the Generic detector at least in a standalone repository, along with the training script and the dataset creation code.
- At the moment, this is only implemented for the Generic detector's pixel volume. The calibrator class is currently configured with an allow-list of volume ids and a fallback PassThroughCalibrator. In the future, the calibrators should probably be set up as a GeometryHierarchyMap<MeasurementCalibrator>, but since this is a wider change (it will also make sense for the ScalingCalibrator, for instance) I propose to leave that to a subsequent PR. This will also be an opportunity to clean up the implementation of the calibrator configuration on the Python side.
- This PR depends on #2171
As mentioned in #2170, multi-output models are increasingly used, for instance when training models with multiple auxiliary objectives. However, our ONNX plugin does not support this at the moment.

This PR adds a runOnnxInferenceMultiOutput method that supports this use case. The regular runOnnxInference method can still be used on multi-output models; the convention is that the output of the first node is returned. This roughly matches the current behavior, but it was buggy in a subtle way: the output of the first node was returned, but with the dimensions of the last node assumed. This is now fixed.

The change should be backwards compatible; hopefully the CI agrees.
closes #2170