Running the following C# code:

```csharp
var fgrad = AD.Grad(v => -v[0] - v[1] * 2.0);
var fgradValue = fgrad(new DV(new double[] { 1, 1 }));
```

gives the unexpected result (0, -2.0); the first value should be -1.0. The result is the same when the first line is modified to:

```csharp
var fgrad = AD.Grad(v => -v[0] * 5.0 - v[1] * 2.0); // still gives (0, -2.0) at (1, 1)
```

The correct result is given when the code is changed to:

```csharp
var fgrad = AD.Grad(v => -1.0 * v[0] - v[1] * 2.0);
// and
var fgrad = AD.Grad(v => -5.0 * v[0] - v[1] * 2.0);
```

DiffSharp version: 0.7.7.0
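As an independent sanity check (not using DiffSharp), a central finite-difference approximation confirms that the gradient of f(v) = -v[0] - 2·v[1] at (1, 1) should be (-1, -2); this is a minimal Python sketch, since the analytic partial derivatives are constants -1 and -2:

```python
def f(v):
    # Same function as in the bug report: f(v) = -v[0] - v[1] * 2.0
    return -v[0] - v[1] * 2.0

def numeric_grad(f, v, h=1e-6):
    # Central finite differences: (f(v + h*e_i) - f(v - h*e_i)) / (2h)
    grad = []
    for i in range(len(v)):
        vp = list(v); vp[i] += h
        vm = list(v); vm[i] -= h
        grad.append((f(vp) - f(vm)) / (2 * h))
    return grad

print(numeric_grad(f, [1.0, 1.0]))  # approximately [-1.0, -2.0]
```

The first component is clearly -1, not 0, so the reported (0, -2.0) indicates the unary negation applied directly to an AD variable was being dropped.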
I checked in the dev branch and the result is correct:
```fsharp
open DiffSharp

let fgrad = dsharp.grad (fun v -> -v.[0] - v.[1] * 2.0)
let fgradValue = fgrad (dsharp.tensor [ 1.0; 1.0 ])
printfn "fgradValue = %A" fgradValue
```
gives
fgradValue = Tensor [-1.000000, -2.000000]
I'll close this out for that reason. See the README for instructions on building the dev branch.