
Support FastDifferentiation.jl #34

Merged
merged 2 commits into from
Apr 3, 2024
Conversation


@gdalle gdalle commented Apr 3, 2024

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break the public API
  • All documentation related to code changes was updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC
  • Any new documentation only uses public API

Additional context

FastDifferentiation.jl is an efficient symbolic backend that DifferentiationInterface.jl can use.

cc @brianguenter: do you think there should be other parameters here to specify how the backend is configured, similar to e.g. the chunk size in ForwardDiff or the compiled-mode toggle in ReverseDiff?

@Vaibhavdixit02 Vaibhavdixit02 merged commit ec248df into main Apr 3, 2024
4 checks passed
@Vaibhavdixit02 Vaibhavdixit02 deleted the gd/new_types branch April 3, 2024 06:14

gdalle commented Apr 3, 2024

@Vaibhavdixit02 let's maybe wait until @brianguenter weighs in on the content of the new struct before registering v0.2.8?

@brianguenter

@Vaibhavdixit02 @gdalle I'm happy to take a look. Which struct are you referring to?


Chooses [FastDifferentiation.jl](https://github.com/brianguenter/FastDifferentiation.jl).
"""
struct AutoFastDifferentiation <: AbstractSymbolicDifferentiationMode end
gdalle (Collaborator Author) commented:

@brianguenter this one. Right now it has no fields, but it would be the right spot to decide "settings" for the FastDifferentiation backend. You can take a look at the other structs to get an idea of what goes in there.
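For illustration, here is a hypothetical sketch of what a parameterized version of this struct could look like once such settings exist. The field names (`init_with_zeros`, `in_place`) are invented for this sketch, loosely mirroring the code-generation options discussed later in this thread; they are not part of the actual ADTypes/DifferentiationInterface API:

```julia
# Hypothetical sketch only: fields are invented for illustration and
# are not part of the actual package API.

# Stand-in for the abstract type defined in the package.
abstract type AbstractSymbolicDifferentiationMode end

struct AutoFastDifferentiation <: AbstractSymbolicDifferentiationMode
    init_with_zeros::Bool  # zero the result, or assume the caller already did
    in_place::Bool         # generate in-place code vs. allocating code
end

# Keyword constructor with defaults, so `AutoFastDifferentiation()`
# keeps working like the current field-free struct.
function AutoFastDifferentiation(; init_with_zeros::Bool=true, in_place::Bool=true)
    return AutoFastDifferentiation(init_with_zeros, in_place)
end
```

The keyword constructor keeps the zero-argument call working, so adding fields later would not break code that already constructs the backend with no settings.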

@brianguenter

Currently FastDifferentiation has three distinct phases that are visible to the end user: derivative graph analysis (to the user this looks like symbolic derivative calculation), code generation, and code execution.

I plan to make a new version that will collapse these into one. The system will do something similar to trace compilation.

In this system there might be parameters associated with the number of conditionals the compiler will trace through, or the number of traces that will be cached. How would that fit into your framework? Would those parameters go in this struct?


gdalle commented Apr 4, 2024

Yes, but since those parameters don't exist yet, we can leave the struct empty for now.

@brianguenter

Should code generation parameters go in this struct? These are the code generation parameters currently available:

  • initialize the result with zeros, or assume the user has already initialized it with zeros
  • write into an in-place result, or allocate a new result

How about sparsity? Should this be specified here as well?

@Vaibhavdixit02
Member

Do you mean like a boolean, or are there choices for how the sparsity detection is done?

Also note that AutoSparseFastDifferentiation exists.


gdalle commented Apr 5, 2024

I think for sparsity the easiest approach is to split the backend in two, sparse and non-sparse, as was done for other backends.

For in-place vs out-of-place, in the DifferentiationInterface framework the same backend object can be used in both ways, so I would leave it out.

For incrementation vs zeroing, I think the default will be zeroing so again I would leave it out.
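The split described here could be sketched as two sibling field-free structs. The supertype below is a stand-in for the one defined in the package, and `uses_sparsity` is an invented helper to show how dispatch on the backend type would decide behavior; only `AutoSparseFastDifferentiation` itself is confirmed to exist earlier in the thread:

```julia
# Sketch of the dense/sparse backend split, under the assumption that
# both structs subtype the package's abstract symbolic mode.
abstract type AbstractSymbolicDifferentiationMode end

# Dense symbolic differentiation via FastDifferentiation.jl
struct AutoFastDifferentiation <: AbstractSymbolicDifferentiationMode end

# Same backend, but exploiting sparsity in Jacobians/Hessians
struct AutoSparseFastDifferentiation <: AbstractSymbolicDifferentiationMode end

# Illustrative helper (not part of any package): dispatch on the
# backend type decides whether sparsity detection should run.
uses_sparsity(::AutoFastDifferentiation) = false
uses_sparsity(::AutoSparseFastDifferentiation) = true
```

Encoding the choice in the type rather than in a boolean field lets downstream code dispatch on it at compile time, which matches how other sparse/dense backend pairs are handled.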
