Remove pytorch warning message #828

Merged: 195 commits into neurostuff:main on Sep 15, 2023

Conversation

yifan0330 (Contributor):

Closes # .

Changes proposed in this pull request:

  • Replace functorch.hessian with torch.func.hessian in the CBMR code, to remove the deprecation warning emitted when the PyTorch version is >= 2.0 (see the sketch below).
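
For context, a minimal sketch of the swap, assuming a version-guarded import; the toy objective below stands in for the CBMR log-likelihood and is not the actual NiMARE code:

```python
# Minimal sketch, not the actual NiMARE code: use torch.func.hessian on
# PyTorch >= 2.0 (no deprecation warning) and fall back to functorch.hessian
# only where torch.func is unavailable.
import torch

try:
    from torch.func import hessian  # PyTorch >= 2.0
except ImportError:
    from functorch import hessian  # older PyTorch installs

def neg_log_likelihood(beta):
    # toy scalar objective standing in for the CBMR log-likelihood
    return torch.sum(beta ** 2)

beta = torch.zeros(3)
hess = hessian(neg_log_likelihood)(beta)  # 3x3 Hessian evaluated at beta
```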

yifan0330 and others added 30 commits June 16, 2022 17:07
codecov bot commented Aug 26, 2023

Codecov Report

Patch coverage: 100.00% and project coverage change: +0.17% 🎉

Comparison is base (0864fd2) 88.67% compared to head (8347750) 88.84%.
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #828      +/-   ##
==========================================
+ Coverage   88.67%   88.84%   +0.17%     
==========================================
  Files          47       47              
  Lines        6020     5997      -23     
==========================================
- Hits         5338     5328      -10     
+ Misses        682      669      -13     
| Files Changed | Coverage Δ |
| --- | --- |
| nimare/diagnostics.py | 98.72% <100.00%> (ø) |
| nimare/meta/cbmr.py | 78.32% <100.00%> (ø) |
| nimare/meta/models.py | 90.29% <100.00%> (+2.47%) ⬆️ |

... and 1 file with indirect coverage changes


```diff
-    if len(np.where((mask_ijk == coord).all(axis=1))[0])
-]
+# Only retain coordinates inside the brain mask
+keep_idx = [i for i, coord in enumerate(dset_ijk) if masker_array[coord[0], coord[1], coord[2]] == 1]
```
Member:

nice! this is several orders of magnitude faster! 🚀
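
To illustrate with toy data why the lookup is so much faster (the array names mirror the snippet above, but the shapes and values are made up): the old comprehension scans every in-mask voxel for every coordinate, while the new one is a single array index per coordinate.

```python
# Toy illustration, not NiMARE data: compare a per-coordinate np.where scan
# against direct indexing into the mask array.
import numpy as np

rng = np.random.default_rng(0)
masker_array = (rng.random((50, 50, 50)) > 0.5).astype(int)  # stand-in brain mask
mask_ijk = np.argwhere(masker_array == 1)                    # voxel indices inside the mask
dset_ijk = rng.integers(0, 50, size=(1000, 3))               # stand-in dataset coordinates

# Old approach: search all mask voxels for every coordinate (O(n_coords * n_mask_voxels))
keep_old = [
    i for i, coord in enumerate(dset_ijk)
    if len(np.where((mask_ijk == coord).all(axis=1))[0])
]

# New approach: one array lookup per coordinate (O(n_coords))
keep_new = [
    i for i, coord in enumerate(dset_ijk)
    if masker_array[coord[0], coord[1], coord[2]] == 1
]

assert keep_old == keep_new
```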

Comment on lines +113 to 115:

```python
n_iter=2000,
lr=1,
lr_decay=0.999,
```
Member:

what's the rationale for the changes to the defaults? is it because of the changes to the optimization procedure?

Member:

the rationale is to take a full step for the initial iteration, then the learning rate decay will exponentially reduce the step size.
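
A minimal sketch of that schedule under assumed components (the optimizer and objective here are placeholders, not the PR's actual training loop): with lr=1 the first update is a full step, and a per-iteration decay of 0.999 shrinks it exponentially afterwards.

```python
# Illustrative only: lr starts at 1 (full first step) and decays as 0.999**k.
import torch

params = [torch.zeros(3, requires_grad=True)]
optimizer = torch.optim.Adam(params, lr=1)  # assumed optimizer, for illustration
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.999)

for _ in range(2000):  # n_iter=2000
    optimizer.zero_grad()
    loss = torch.sum(params[0] ** 2)  # placeholder objective
    loss.backward()
    optimizer.step()
    scheduler.step()  # learning rate after k iterations: 1 * 0.999**k
```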

Comment on lines +46 to +49:

```diff
+    lr=1,
+    lr_decay=0.999,
-    n_iter=1000,
-    tol=1e-2,
+    n_iter=2000,
+    tol=1e-9,
```
Member:

same here, how were defaults decided?

@jdkent (Member) left a comment:

LGTM!

@jdkent merged commit a6c21d3 into neurostuff:main on Sep 15, 2023
19 checks passed
Labels: none · Projects: none · 4 participants