Issue description
A function f(x): R^n -> R^m will have Jacobian w.r.t. x as [df_1(x)/dx, df_2(x)/dx, ..., df_m(x)/dx], where each df_i(x)/dx is an R^n vector.
As far as I know, PyTorch's autograd library doesn't provide a "one-shot" solution for this calculation. The current solution is to call torch.autograd.grad multiple times on different parts of the output. This can be slow, since it (presumably) doesn't make use of GPU parallelization.
Code example
The current solution I know is:
J = []
F = f(x)
for i in range(len(F)):  # one grad call per output component f_i
    # retain_graph=True keeps the graph alive for the next call
    J.append(torch.autograd.grad(F[i], x, retain_graph=True)[0])
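A self-contained sketch of this per-output workaround; the function f and the input sizes here are hypothetical, chosen only to make the snippet runnable:

```python
import torch

# Hypothetical example function f: R^3 -> R^2
def f(x):
    return torch.stack([x[0] * x[1], x[1] + x[2] ** 2])

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
F = f(x)

# One autograd.grad call per output component; retain_graph=True
# keeps the graph alive between the calls.
rows = [torch.autograd.grad(F[i], x, retain_graph=True)[0]
        for i in range(len(F))]
J = torch.stack(rows)  # Jacobian of shape (m, n) = (2, 3)
```

Each of the m grad calls does a full backward pass, which is why this scales poorly with the output dimension.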