
Getting TypeError: float() argument must be a string or a number #58

Closed
thearn opened this issue Nov 5, 2015 · 9 comments
thearn commented Nov 5, 2015

First of all, great library! I'm finding it very useful for some projects that I am working on.
However, in some cases I run into a TypeError in models where an array is sliced or assigned as a block within a larger array (even though the function is ultimately scalar-valued).

I can reproduce it with a stripped-down example:

https://gist.github.com/thearn/faba933208316d71cdb9

```python
import autograd.numpy as np
from autograd import grad

def f(A):
    B = np.zeros((4, 4))
    B[:2, :2] = A
    return B.sum()

A = np.random.randn(2, 2)
df = grad(f)

print df(A)  # expected: [[ 1.,  1.],[ 1.,  1.]]
```

which gives the traceback:

```
Traceback (most recent call last):
  File "/Users/tristanhearn/Dropbox/code/adcomponent/src/adcomponent/test.py", line 17, in <module>
    print df(A)
  File "/Users/tristanhearn/Documents/thearn_repos/autograd/autograd/core.py", line 20, in gradfun
    return backward_pass(*forward_pass(fun,args,kwargs,argnum))
  File "/Users/tristanhearn/Documents/thearn_repos/autograd/autograd/core.py", line 61, in forward_pass
    end_node = fun(*args, **kwargs)
  File "/Users/tristanhearn/Dropbox/code/adcomponent/src/adcomponent/test.py", line 9, in f
    B[:2, :2] = A
TypeError: float() argument must be a string or a number
```

Is this expected (i.e. a known and accepted limitation)?


duvenaud commented Nov 5, 2015

Thanks for the encouragement, and for making a minimal working example!

This is indeed a known limitation of autograd - indexed assignment isn't yet supported, unfortunately. You can usually get around this by building a list and calling np.concatenate().

We'll add a try/catch block to give a more informative error message when this occurs.

Also, we'd love to hear about what you're using autograd for, and if you have any other feature requests.


thearn commented Nov 6, 2015

Also, we'd love to hear about what you're using autograd for, and if you have any other feature requests.

I'm writing an auto-differentiating component for NASA's OpenMDAO framework. This will make it much easier to quickly and automatically provide numerical derivatives across coupled engineering codes (without the potential instability of finite-difference approximations), which is very important for numerical optimization and sensitivity analysis. There are other alternatives as well (complex-step approximation, or closed-form specification of derivatives, of course), but AD has been on our radar for a while.

Unfortunately, some of our disciplinary analysis codes (engine cycle analysis in particular) do use indexed assignment in a few places, so my AD component isn't quite as general as I had hoped at the moment. But not all of our components require this.

Also, looking through the source, it looks like you guys have implemented a jacobian function (which I did as well). I wish I had noticed that before! :)

Again, awesome library 👍


mattjj commented Nov 6, 2015

Wow, sounds very cool! I hope autograd continues to help. Also, the more reasons we have to make indexed assignment work, the more impetus we'll have to figure out an implementation.

By the way, there are currently two jacobian functions, one in core.py and another in convenience_wrappers.py. The one in core.py is better in at least three ways: it's faster (only one forward pass is done), it avoids repeated prints or side-effects (also because of the single forward pass), and it's more general (it can take jacobians of jacobians, which I think the wrapper version can't do). I'll probably delete the version in convenience_wrappers.py sometime today (unless someone protests).


mattjj commented Nov 6, 2015

Another by the way: on the current master branch, the error message produced by the code in your original test case should be somewhat clearer!

[... snip traceback ...]

```
/Users/mattjj/packages/autograd/issue58.py in f(x)
      6 def f(x):
      7     A = np.zeros((4,4))
----> 8     A[:2,:2] = x
      9     return A.sum()
     10

AutogradException: autograd doesn't support assigning into arrays
Sub-exception:
TypeError: float() argument must be a string or a number
```


ziyuang commented Feb 20, 2017

@duvenaud what does it mean by "building a list and calling np.concatenate()"?


mattjj commented Feb 20, 2017

In the case of the OP's code, I think he means rewriting lines like this:

```python
B = np.zeros((4, 4))
B[:2, :2] = A
```

as something like this:

```python
row1 = np.concatenate([A, np.zeros((2, 2))], axis=1)
row2 = np.zeros((2, 4))
B = np.concatenate([row1, row2], axis=0)
```

In general, instead of building arrays by allocating zeros and then assigning into blocks, you can instead build the blocks as separate arrays and then concatenate them. The latter method works with autograd, but the former method doesn't.
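As a quick sanity check (plain NumPy, using the 2x2 block shapes from the example above), the concatenation-based construction produces exactly the array that the indexed assignment would have built:

```python
import numpy as np

A = np.arange(4.0).reshape(2, 2)

# Indexed-assignment construction (fine in plain NumPy, fails under autograd)
B_assign = np.zeros((4, 4))
B_assign[:2, :2] = A

# Concatenation-based construction (works under autograd)
row1 = np.concatenate([A, np.zeros((2, 2))], axis=1)
row2 = np.zeros((2, 4))
B_concat = np.concatenate([row1, row2], axis=0)

assert np.array_equal(B_assign, B_concat)
```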


ziyuang commented Feb 21, 2017

@mattjj Can I give B a correct data type so that it can hold autograd nodes? (Or is the data type the cause of the issue?)


mattjj commented Feb 21, 2017

No, autograd doesn't support assignment into arrays. We could overload indexed assignment, but it would require a substantial amount of internal bookkeeping, and it might even obfuscate what happens when the program runs. In particular, to do reverse-mode autodiff, the intermediate values computed during the evaluation of the function need to be stored because they usually need to be read during the backward pass computation. Since assignment into arrays might clobber intermediate values, whenever an assignment happens we'd need to copy that data somewhere for use in the backward pass. Instead, by not supporting assignment, we basically force the user to be explicit about when this kind of copying happens.
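A tiny plain-NumPy illustration of the clobbering problem described above (the names are illustrative, not autograd internals): once an in-place assignment overwrites an intermediate value, the value the backward pass would need to read survives only if it was copied beforehand:

```python
import numpy as np

x = np.array([1.0, 2.0])
y = 3.0 * x        # intermediate value; reverse mode would need to read y later
saved = y.copy()   # the kind of defensive copy autograd would have to make

y[0] = 7.0         # in-place assignment clobbers y[0]

assert y[0] == 7.0        # original y[0] is gone from y itself...
assert saved[0] == 3.0    # ...and survives only in the explicit copy
```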

If you need to use assignment, one thing you can do is mark your function as a primitive and then define its vjp manually. Within a primitive you're just using naked numpy, so assignment works. In the case of the OP's code, which might be too simplified to be interesting, that might look something like

```python
import autograd.numpy as np
from autograd import grad
from autograd.core import primitive

@primitive
def f(A):
    B = np.zeros((4, 4))
    B[:2, :2] = A
    return B.sum()

def f_vjp(g, ans, vs, gvs, A):
    return g * np.ones((2, 2))

f.defvjp(f_vjp)
```
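
As a plain-NumPy sanity check (independent of autograd), a central finite difference on the undecorated f confirms that the gradient of B.sum() with respect to each entry of A is 1, matching the g * np.ones((2, 2)) returned by f_vjp above:

```python
import numpy as np

def f(A):
    B = np.zeros((4, 4))
    B[:2, :2] = A
    return B.sum()

A = np.random.randn(2, 2)
eps = 1e-6
fd_grad = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        Ap, Am = A.copy(), A.copy()
        Ap[i, j] += eps
        Am[i, j] -= eps
        # central difference approximation of df/dA[i, j]
        fd_grad[i, j] = (f(Ap) - f(Am)) / (2 * eps)

assert np.allclose(fd_grad, np.ones((2, 2)))
```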

@BassantTolba1234

Dear Sir,
Thank you for your efforts. This is part of my code; when I try to run these lines, the error below appears. Can you kindly help me solve it?

```python
def interpolation(noisy, SNR, Number_of_pilot, interp):

    noisy_image = np.zeros((40000, 72, 14, 2))

    noisy_image[:,:,:,0] = np.real(noisy)
    noisy_image[:,:,:,1] = np.imag(noisy)
```

```
TypeError: float() argument must be a string or a number, not 'dict'
```
