Pushing custom information to the InferenceResult #216

Open
ThijsvdLaar opened this issue Jan 16, 2024 · 3 comments
Labels: documentation (Improvements or additions to documentation), enhancement (New feature or request), good first issue (Good for newcomers)

Comments

@ThijsvdLaar (Contributor)

I'm looking for a way to insert "probes" into update rules so that custom information can be passed back to the InferenceResult object. This would allow a user to inspect (and visualize) information at the rule/message level, which would be helpful when developing custom rule implementations.

I currently have two use-cases in mind:

  1. Message statistics can be written to the probe so that the user can inspect (specific) messages;
  2. Custom information during rule computations can be returned, for example to check for convergence when iterating within a rule computation.

In a similar way that Meta and Pipeline objects offer context for inbounds collection and rule computation, a probe might be passed to an update rule. Custom information can then be written to the probe, which is returned to the InferenceResult object (perhaps similar to how posteriors are returned).
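The proposed probe mechanism could be sketched roughly as follows (plain Python rather than RxInfer's Julia API; `Probe`, its `write` method, and the toy Gaussian update rule are all hypothetical names used only for illustration): a probe object is handed to an update rule, the rule writes diagnostics into it, and the caller inspects them afterwards, much as posteriors are collected into the InferenceResult.

```python
from dataclasses import dataclass, field

# Hypothetical "probe" sketch (not RxInfer API): an object handed to an
# update rule so the rule can report custom per-message diagnostics.
@dataclass
class Probe:
    records: dict = field(default_factory=dict)

    def write(self, key, value):
        """Append a diagnostic value under the given key."""
        self.records.setdefault(key, []).append(value)

def gaussian_mean_update(prior_mean, prior_var, obs, noise_var, probe=None):
    """Toy update rule: posterior mean of a Gaussian mean with known noise."""
    gain = prior_var / (prior_var + noise_var)
    post_mean = prior_mean + gain * (obs - prior_mean)
    if probe is not None:
        # Custom information exposed to the caller alongside the result
        probe.write("gain", gain)
        probe.write("posterior_mean", post_mean)
    return post_mean

probe = Probe()
m = gaussian_mean_update(0.0, 1.0, 2.0, 1.0, probe=probe)
print(m)               # 1.0
print(probe.records)   # {'gain': [0.5], 'posterior_mean': [1.0]}
```

The probe is optional, so existing rules remain usable without one; the same pattern would also cover the convergence-monitoring use case by writing per-iteration residuals to the probe.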

@bvdmitri (Member)

We offer partial support for this functionality through addons, although the documentation is currently limited. In essence, addons enable the insertion of custom steps both before and after invoking the rule function. The existing use cases for addons align closely with the points you've proposed:

  • Debugging information can be incorporated into the messages to trace how, where, and with what arguments they were computed. An illustrative example is available here.
  • Addons can also be utilized to compute the log-scale of the messages, proving useful in subsequent stages of the inference process. (not documented)
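The before/after structure described above can be illustrated with a small conceptual sketch (again in Python, not RxInfer's actual Julia API; `apply_rule` and the log-scale addon are hypothetical stand-ins): an addon contributes a hook that runs before the rule function and one that runs after it, and the after-hook can attach extra payload, such as a log-scale, to the computed message.

```python
import math

# Conceptual sketch of the addon idea (not RxInfer's actual API):
# an addon is a (before, after) pair of hooks around a rule function.

def logscale_addon():
    """Toy addon: attaches a 'log-scale' to each computed message."""
    def before(args):
        return args  # could preprocess rule inputs here
    def after(message, args):
        # Augment the message with custom information computed post hoc
        return {"message": message, "logscale": math.log(abs(message))}
    return before, after

def apply_rule(rule, args, addons=()):
    """Invoke a rule function with addon hooks before and after it."""
    for before, _ in addons:
        args = before(args)
    out = rule(*args)
    for _, after in addons:
        out = after(out, args)
    return out

result = apply_rule(lambda a, b: a * b, (2.0, 3.0), addons=(logscale_addon(),))
print(result["message"])  # 6.0
```

Because the hooks wrap the rule invocation itself, the rule body stays unchanged, which is what lets addons cover both debugging traces and log-scale computation without touching individual rule implementations.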

@wouterwln (Member)

@bvdmitri is this a matter of updating documentation or is there additional functionality required?

@wouterwln added the enhancement label Mar 15, 2024
@bvdmitri (Member)

I think this is a matter of updating the documentation, because both of the proposed use cases can be covered by addons.

@wouterwln wouterwln added this to the RxInfer 3.1.0 release milestone Apr 12, 2024
@wouterwln added the documentation and good first issue labels Apr 12, 2024