Fix _init_identity_matrix in bfgs.jl #1089

Open · wants to merge 1 commit into master

Conversation

@odow (Contributor) commented Mar 28, 2024

codecov bot commented Mar 28, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 85.43%. Comparing base (7cc8328) to head (84ca7f0).

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1089   +/-   ##
=======================================
  Coverage   85.43%   85.43%           
=======================================
  Files          45       45           
  Lines        3276     3276           
=======================================
  Hits         2799     2799           
  Misses        477      477           


@devmotion (Contributor) commented Mar 28, 2024

I think it might be more consistent to apply the pattern in

`initial_scale = T(method.initial_stepnorm) * inv(norm(gradient(obj), Inf))`

to

`initial_scale = method.initial_stepnorm * inv(norm(gradient(d), Inf))`.

Generally, isn't the error caused by an incorrectly typed `initial_stepnorm` provided by the user? If you deal with Float32 data, you can avoid the problem by also specifying `initial_stepnorm` as a Float32. That being said, I guess

`initial_scale = T(method.initial_stepnorm) * inv(norm(gradient(obj), Inf))`

makes it a bit more convenient for users.
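For concreteness, here is a minimal, self-contained sketch of the promotion issue under discussion. The helper names are hypothetical, not Optim.jl internals; they only illustrate why wrapping the user-supplied step norm in `T(...)` keeps the initial scale in the gradient's element type.

```julia
using LinearAlgebra

# Without conversion: a Float64 step norm (e.g. the literal 0.1) promotes
# the product to Float64 even when the gradient is Float32.
init_scale_unconverted(initial_stepnorm, g) =
    initial_stepnorm * inv(norm(g, Inf))

# With the T(...) pattern: the step norm is converted to the gradient's
# element type first, so the resulting scale stays in Float32.
init_scale_converted(initial_stepnorm, g::AbstractArray{T}) where {T} =
    T(initial_stepnorm) * inv(norm(g, Inf))

g = Float32[0.5, -2.0, 1.5]
typeof(init_scale_unconverted(0.1, g))  # Float64
typeof(init_scale_converted(0.1, g))    # Float32
```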

@pkofod (Member) commented Apr 17, 2024

> I think it might be more consistent to apply the pattern in

I agree

@pkofod (Member) commented Apr 17, 2024

> Generally, isn't the error caused by an incorrectly typed `initial_stepnorm` provided by the user? If you deal with Float32 data, you can avoid the problem by also specifying `initial_stepnorm` as a Float32. That being said, I guess

Sure, but usually I think it should be enough to supply `x0` as the type you want, even if there might be some cases where it's harder to prepare for that. But I think we can apply the "correct" pattern here for the user.
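As a hedged usage sketch of the point above, assuming the `BFGS(initial_stepnorm = ...)` keyword and the three-argument `optimize(f, x0, method)` form (exact behaviour should be checked against the installed Optim.jl version):

```julia
using Optim

# Objective written with Float32 literals so it returns Float32 for Float32 input.
rosenbrock(x) = (1f0 - x[1])^2 + 100f0 * (x[2] - x[1]^2)^2
x0 = Float32[-1.2, 1.0]

# Workaround available today: supply x0 and initial_stepnorm in the same precision.
res = optimize(rosenbrock, x0, BFGS(initial_stepnorm = 1f-1))

# With the T(method.initial_stepnorm) conversion discussed above, a Float64
# literal such as 0.1 would no longer clash with a Float32 x0:
# res = optimize(rosenbrock, x0, BFGS(initial_stepnorm = 0.1))
```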
