Getting TypeError: float() argument must be a string or a number
#58
Thanks for the encouragement, and for making a minimal working example! This is indeed a known limitation of autograd - indexed assignment isn't yet supported, unfortunately. You can usually get around this by building a list and calling np.concatenate(). We'll add a try/catch block to give a more informative error message when this occurs. Also, we'd love to hear about what you're using autograd for, and if you have any other feature requests.
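For example (a hypothetical sketch, not code from the thread), a loop that would otherwise assign into a preallocated array can collect its pieces in a Python list and concatenate once at the end:

```python
import autograd.numpy as np
from autograd import grad

def squares(x):
    # Instead of out = np.zeros(3); out[i] = x[i] ** 2  (unsupported),
    # build a list of length-1 arrays and concatenate them.
    pieces = [x[i:i+1] ** 2 for i in range(3)]
    return np.concatenate(pieces)

g = grad(lambda x: np.sum(squares(x)))
print(g(np.array([1.0, 2.0, 3.0])))   # [2. 4. 6.]
```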
I'm writing an auto-differentiating component for NASA's OpenMDAO framework. This will make it much easier to quickly and automatically provide numerical derivatives across coupled engineering codes (without the potential instability of finite-difference approximations), which is very important for numerical optimization or sensitivity analysis. There are other alternatives as well (complex-step approximation, or closed-form specification of derivatives, of course), but AD has been on our radar for a while. Unfortunately, some of our disciplinary analysis codes (engine cycle analysis in particular) do use indexed assignment in a few places, so my AD component isn't quite as general as I had hoped at the moment. But not all of our components require this. Also, looking through the source, it looks like you guys have implemented a jacobian function (which I did as well). I wish I had noticed that before! :) Again, awesome library 👍
Wow, sounds very cool! I hope autograd continues to help. Also, the more reasons we have to make indexed assignment work, the more impetus we'll have to figure out an implementation. By the way, there are currently two …
Another by the way: on the current master branch, the error message produced by the code in your original test case should be somewhat clearer!
@duvenaud what do you mean by "building a list and calling np.concatenate()"?
In the case of the OP's code, I think he means rewriting lines like this:

```python
B = np.zeros((4, 4))
B[:2, :2] = A
```

as something like this:

```python
row1 = np.concatenate([A, np.zeros((2, 2))], axis=1)
row2 = np.zeros((2, 4))
B = np.concatenate([row1, row2], axis=0)
```

In general, instead of building arrays by allocating zeros and then assigning into blocks, you can instead build the blocks as separate arrays and then concatenate them. The latter method works with autograd, but the former method doesn't.
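As a runnable sketch of that rewrite (the function name and test value are illustrative, not from the thread), the gradient flows through the concatenated construction:

```python
import autograd.numpy as np
from autograd import grad

def f(A):
    # Build B block by block instead of assigning into a zeros array.
    top = np.concatenate([A, np.zeros((2, 2))], axis=1)   # [A | 0]
    bottom = np.zeros((2, 4))                             # all-zero bottom half
    B = np.concatenate([top, bottom], axis=0)
    return np.sum(B)

print(grad(f)(np.array([[1.0, 2.0], [3.0, 4.0]])))   # d(sum B)/dA is a 2x2 array of ones
```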
@mattjj Can I give B a data type such that it can hold the autograd nodes? (Or is the data type the cause of the issue?)
No, autograd doesn't support assignment into arrays. We could overload indexed assignment, but it would require a substantial amount of internal bookkeeping, and it might even obfuscate what happens when the program runs. In particular, to do reverse-mode autodiff, the intermediate values computed during the evaluation of the function need to be stored, because they usually need to be read during the backward pass computation. Since assignment into arrays might clobber intermediate values, whenever an assignment happens we'd need to copy that data somewhere for use in the backward pass. Instead, by not supporting assignment, we basically force the user to be explicit about when this kind of copying happens.

If you need to use assignment, one thing you can do is mark your function as a primitive and then define its vjp manually. Within a primitive you're just using naked numpy, so assignment works. In the case of the OP's code, which might be too simplified to be interesting, that might look something like:

```python
import autograd.numpy as np
from autograd import grad
from autograd.core import primitive

@primitive
def f(A):
    B = np.zeros((4, 4))
    B[:2, :2] = A
    return B.sum()

def f_vjp(g, ans, vs, gvs, A):
    return g * np.ones((2, 2))

f.defvjp(f_vjp)
```
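Continuing that sketch, calling grad on the primitive then uses the hand-written vjp (the test value below is illustrative; note also that newer autograd releases expose primitive/defvjp through autograd.extend with a different vjp signature):

```python
A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(f(A))          # 10.0 -- plain numpy runs inside the primitive, so assignment is fine here
print(grad(f)(A))    # f_vjp returns g * ones((2, 2)), so the gradient is a 2x2 array of ones
```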
Dear Sir, in my function `def interpolation(noisy, SNR, Number_of_pilot, interp):` I get:
TypeError: float() argument must be a string or a number, not 'dict'
First of all, great library! I'm finding it very useful for some projects that I am working on.
However, in some instances I am running into a TypeError in models where an array is sliced or assigned as a block within a larger array (even though the functions are ultimately scalar-valued).
I can reproduce it with a stripped-down example:
https://gist.github.com/thearn/faba933208316d71cdb9
which gives a traceback ending in the TypeError in the title.
Is this expected (i.e. a known and accepted limitation)?
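For reference, the failing pattern boils down to in-place block assignment inside the differentiated function, something like this (a reconstruction from the discussion above, not the gist itself):

```python
import autograd.numpy as np
from autograd import grad

def f(A):
    B = np.zeros((4, 4))
    B[:2, :2] = A        # indexed assignment into a plain float array
    return np.sum(B)

grad(f)(np.eye(2))       # raises the TypeError above -- autograd can't differentiate through this
```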