Computational graph toolchain #73
Seems like in
I think there are several options; e.g. in vp2 we use reportengine, which does the DAG evaluation. Other, simpler options are tf, networkx and dash. More sophisticated options are airflow and Luigi. Maybe you just need to code a simple DAG in Python with numba precompilation.
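As an illustration of the last suggestion, a minimal DAG evaluator can be coded in a few lines of pure Python. This is a hypothetical sketch using the standard-library `graphlib`; the node names and functions are invented, and numba precompilation of the individual node functions would be an orthogonal layer on top:

```python
from graphlib import TopologicalSorter

def evaluate_dag(funcs, deps):
    """funcs: node -> callable(*dep_values); deps: node -> tuple of dependency nodes.

    Evaluates every node once, in topological order, feeding each node
    the already-computed values of its dependencies.
    """
    order = TopologicalSorter(deps).static_order()
    values = {}
    for node in order:
        values[node] = funcs[node](*(values[d] for d in deps[node]))
    return values

# invented example graph: out = sqrt(x**2 + y**2)
funcs = {
    "x": lambda: 3.0,
    "y": lambda: 4.0,
    "sq": lambda x, y: x * x + y * y,  # depends on x and y
    "out": lambda s: s ** 0.5,         # depends on sq
}
deps = {"x": (), "y": (), "sq": ("x", "y"), "out": ("sq",)}

print(evaluate_dag(funcs, deps)["out"])  # → 5.0
```

Each node is computed exactly once, so shared intermediate results are reused for free, which is the main point of evaluating through a DAG rather than through nested calls.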
We looked into your proposals:
On the other hand, we investigated further and came up with an alternative to
In conclusion, we will probably first try the combination
Thank you very much for your proposals @scarrazza; as said, we will consider the others, but for a further goal.
Yes,
Another question arises after looking at their documentation and the big "research" statement.
When we have time, maybe, we should read https://numba.pydata.org/numba-doc/latest/developer/inlining.html more carefully ...
The second paragraph of the cited numba page is exactly:
Of course numba was the first try; the reason why we gave up at the time was that it seemed not to be able to inline nested functions at all. If you have any deeper understanding of numba, of course it would be the perfect tool, since it's the only one that provides only a compiler and nothing more.
Relevant advice from @scarlehoff
Another relevant piece of information would be to know in advance what happens when you are using closures together with
In particular, the functions we are going to decorate will be only the very small coefficient functions (small, but there is a plethora of them), and they will usually depend on a single argument,
Perhaps you (@scarlehoff, @scarrazza) already know?
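For illustration, the closure pattern in question might look like the following pure-Python sketch. All names here are invented; in the real case each small coefficient function would carry the JIT decorator under discussion, and the open question is how the compiler handles the captured variables:

```python
# Hypothetical sketch: small coefficient functions that close over
# parameters. A JIT compiler must either freeze the captured values
# as constants or reject the capture.
def make_coefficient(k, alpha):
    def f_k(x):
        # the closure captures k and alpha from the enclosing scope
        return alpha ** k * x ** k
    return f_k

coefficients = [make_coefficient(k, alpha=0.5) for k in range(4)]

def f(x):
    # the full result is the sum over all coefficient functions
    return sum(c(x) for c in coefficients)

print(f(2.0))  # → 4.0  (each term equals 1 for x = 2, alpha = 0.5)
```

The practical worry is whether the decorator recompiles for every captured `(k, alpha)` pair, given that there is a plethora of such small functions.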
Looks like
So the following alternatives:
More investigation is needed, but this is probably the most promising direction.

References
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This time @Stale is correct, this one is not going anywhere: yadism is sufficiently fast on its own, light-speed (on light). The way to improve is simply to compile more ingredients, and maybe to parallelize a bit (if really needed). |
Continuing the investigation from #29, we decided to look into a proper way to replace and speed up the hard part of this library.
The goal
The situation is the following:

    f(x) = sum(alpha**k * f_k(x))

that involves multiplying and summing functions.

Proposals?
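In pure lambdas, that composition might look like the following minimal sketch (the `f_k` here are invented placeholders; the real coefficient functions are of course more involved):

```python
# Minimal sketch of a lambda-only composition of
# f(x) = sum(alpha**k * f_k(x)); the f_k are invented placeholders.
alpha = 0.25
f_ks = [lambda x, k=k: x ** k for k in range(3)]  # k=k defeats late binding

# every product/sum of functions adds one more layer of nested lambdas
terms = [lambda x, k=k, g=g: alpha ** k * g(x) for k, g in enumerate(f_ks)]
f = lambda x: sum(t(x) for t in terms)

print(f(2.0))  # → 1 + 0.5 + 0.25 = 1.75
```

Every evaluation of `f` pays one Python call per nested lambda, which is exactly the overhead a compiling toolchain is supposed to remove.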
We are searching for a proper tool. Currently:

- `lambda`s, and the only efficiency measure is to try to keep nesting as minimal as possible, based on a manual type check (the best we can achieve with `lambda`s only)

What we would like:

- `tensorflow`, but maybe what we need is just the compile part, which maybe is `xla` (I'm still trying to understand the `tf` internals), and so `jax` would be enough (it just ships `autograd`, which we don't need, together with `xla` and nothing more, rather minimal compared with `tf`)
- `tfp.math.ode.Solver` to target performance?
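As a toy illustration of "just the compile part" (a hypothetical pure-Python sketch, not using tf/jax/xla): keep the series symbolic, then collapse it once into a single flat callable, so that repeated evaluations no longer walk a chain of nested lambdas:

```python
import math

# Toy "compile" step (hypothetical, no tf/jax/xla): precompute the
# alpha**k weights once and return a single flat callable, instead of
# paying one Python call per nested lambda at every evaluation.
def compile_series(alpha, f_ks):
    weights = [alpha ** k for k in range(len(f_ks))]
    pairs = list(zip(weights, f_ks))
    def f(x):
        # one flat loop over (weight, function) pairs
        return math.fsum(w * g(x) for w, g in pairs)
    return f

f = compile_series(0.25, [lambda x, k=k: x ** k for k in range(4)])
print(f(2.0))  # → 1.875
```

A real toolchain (e.g. `jax.jit` over the same expression) would additionally fuse and machine-compile the loop body, but the structural idea, build once and evaluate many times, is the same.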