Tensorflow array printers #15462
Conversation
✅ Hi, I am the SymPy bot (v134). I'm here to help you write a release notes entry. Please read the guide on how to write release notes. Your release notes are in good order. Here is what the release notes will look like. This will be added to https://github.com/sympy/sympy/wiki/Release-Notes-for-1.4. Note: This comment will be updated with the latest check if you edit the pull request.
Update: The release notes on the wiki have been updated.
This seems to modify the numpy printers too. I don't see any tests added for tensorflow.
I guess they are changes from https://github.com/sympy/sympy/pull/15452/files. GitHub isn't noticing that the commit is in master now.
Yeah, maybe I should merge the master branch.
N = tensorflow.placeholder(tensorflow.float32, (3, 3))
feed_dict = {M: M_, N: N_}
return tensorflow.InteractiveSession().run(M + N, feed_dict=feed_dict)
"""
@asmeurer I use this trick to call tensorflow. Tensorflow requires the input data to be declared as placeholders, and there was no way I could put this in a lambda, as I need to define some local variables. Furthermore, the objects in sympy.codegen are of little use to me, as they all appear to be optimized for C and Fortran code generation. This could probably be done in a better way, but I'd keep this trick as a temporary solution so I can concentrate on the code generation.
That is fine. Even lambdify doesn't use a lambda anymore.
Regarding codegen, what is missing? We would like it to be as language-agnostic as possible, but so far only C and Fortran have been in mind, so some of the abstractions may be wrong. CC @bjodah
From what I've seen, the function prototypes need to declare the type up front, and the same type applies to all variables of the function. This is a rather rigid concept; I think we should instead have the possibility of defining the type for each variable.
Anyway, there are other problems beyond functions that may arise, so I'll stick with this solution.
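The per-variable typing idea could be sketched like this (all names hypothetical; this is not sympy.codegen's API, just an illustration of the abstraction being asked for): each argument carries its own type instead of the prototype imposing one type on every variable.

```python
from dataclasses import dataclass

# Hypothetical sketch of a prototype where each argument has its own type.
@dataclass
class TypedArg:
    name: str
    type: str

@dataclass
class Prototype:
    name: str
    args: list

    def render_c(self):
        # Render a C-style declaration with per-argument types.
        params = ", ".join(f"{a.type} {a.name}" for a in self.args)
        return f"void {self.name}({params});"

proto = Prototype("matmul", [TypedArg("M", "float*"), TypedArg("N", "double*")])
print(proto.render_c())  # -> void matmul(float* M, double* N);
```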
It looks like there is a test that randomly fails on Python 3.4.
Can you open a new issue for the failure, along with the test header (which includes the random seeds)?
def test_codegen_extra():
    if not np:
Should this be tf, assuming the commented-out tests are uncommented?
Still uncertain; we probably need both tensorflow and numpy.
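A common shape for such a guard (sketched here with plain try/except; sympy itself routes optional imports through its own helper, so treat this as an illustration, not the project's code) is to bind each optional backend to None when unavailable and skip when either is missing:

```python
# Hypothetical sketch of guarding a test on two optional dependencies.
try:
    import numpy as np
except ImportError:
    np = None

try:
    import tensorflow as tf
except ImportError:
    tf = None

def test_codegen_extra():
    if not (np and tf):      # these tests need both backends
        return "skipped"
    return "ran"

print(test_codegen_extra())
```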
I've restarted the build.
So, apparently the current generated function for the command lambdify((M, N), M*N, 'tensorflow') is

def _lambdifygenerated(M, N):
    return (M*N)

which means it does not use tensorflow at all. This has to be fixed.
Does tensorflow still not use operator overloading?
I wasn't referring to operator overloading. Tensorflow uses placeholders as input streams in the computational graph; the generated lambda does not perform the computation immediately. But maybe that's the correct way to go.
That's probably the most useful. The user can always perform the final calculation on the result.
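The deferred-evaluation model under discussion can be sketched in a few lines (all names hypothetical; this mimics the tensorflow-1.x placeholder workflow without importing tensorflow): operators build a graph, and nothing is computed until `run()` is called with a feed dict.

```python
# Hypothetical sketch of deferred evaluation via a tiny expression graph.
class Placeholder:
    def __init__(self, name):
        self.name = name
    def __mul__(self, other):
        return Node("*", self, other)
    def run(self, feed):
        return feed[self.name]          # look up the fed-in value

class Node:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def __mul__(self, other):
        return Node("*", self, other)
    def run(self, feed):
        # Evaluate children first, then apply this node's operator.
        l, r = self.left.run(feed), self.right.run(feed)
        return l * r

M, N = Placeholder("M"), Placeholder("N")
graph = M * N * N                       # builds nodes, computes nothing yet
print(graph.run({"M": 2, "N": 5}))      # -> 50
```

The user then performs the final calculation by calling `run()` with concrete inputs, as suggested above.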
References to other Issues or PRs
Brief description of what is fixed or changed
Other comments
Release Notes