Move AtomRef Fitting to numpy to avoid bug #267
Conversation
Walkthrough: The changes transition certain tensor operations from PyTorch to NumPy within the AtomRef fitting code.
I think I also ran into this before. It seems to be an issue only when using the CPU, while results are consistent on CUDA. Not sure why.
@JiQi535 This may be the reason; see https://github.com/pytorch/pytorch/issues/71222. I should also say that I didn't encounter the problem when using a GPU. In any case, I think it is better to have consistent outputs between CPU and CUDA. Thanks Bowen for finding the problem!
Summary
Change the fitting method of AtomRef from `torch.linalg.lstsq` to the `np.linalg.pinv` method.

Issue with previous implementation
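The original reproduction snippet did not survive in this thread. The following is a minimal sketch of the kind of fit AtomRef performs (a matrix of per-structure element counts times per-element reference energies); the shapes and variable names are illustrative, not the project's actual code.

```python
import torch

torch.manual_seed(0)

# Illustrative AtomRef-style fit: each row is a structure, each column the
# count of one element, and we solve composition @ atom_refs ≈ energies.
# Columns for elements that never occur in the dataset are all zero,
# which makes the matrix rank-deficient.
composition = torch.randint(0, 5, (10, 89)).float()
composition[:, 20:] = 0.0  # elements 20..88 never appear in the data
energies = torch.randn(10, 1)

# For rank-deficient inputs on CPU, separate executions of this script
# have been observed to return wildly different (and very large) values
# for the undetermined entries; see
# https://github.com/pytorch/pytorch/issues/71222.
solution = torch.linalg.lstsq(composition, energies).solution
print(solution.abs().max().item())
```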
The output of the above block (omitted here) shows that the fitted parameters have very large deviations across several runs.
`np.linalg.pinv` should solve the issue
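As a sketch under the same illustrative setup: the Moore-Penrose pseudoinverse yields the unique minimum-norm least-squares solution, so the fit is deterministic across runs and machines, and elements absent from the dataset get a (numerically) zero reference energy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same illustrative setup as above, in NumPy.
composition = rng.integers(0, 5, (10, 89)).astype(float)
composition[:, 20:] = 0.0  # elements 20..88 never appear in the data
energies = rng.standard_normal(10)

# pinv gives the unique minimum-norm least-squares solution, so repeated
# fits agree exactly instead of varying between runs.
atom_refs = np.linalg.pinv(composition) @ energies
print(atom_refs[:5])     # fitted values for elements present in the data
print(atom_refs[20:25])  # (numerically) zero for absent elements
```

Because the minimum-norm solution lies in the row space of the composition matrix, the all-zero columns cannot contribute, which is exactly the stable behavior the PR relies on.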