
Update PhysNet.py #197

Open · wants to merge 1 commit into base: main
Conversation

McJackTang (Collaborator)
No description provided.

@yahskapar (Collaborator) left a comment:

I'm a bit confused by this change: isn't this exact same standardization already done in the trainer file here? What's the point of performing the standardization twice?

If this is necessary, I think some comments should be added somewhere (preferably in the second place that standardization occurs, in the trainer file) explaining why. If it's a mistake to do this twice and the code in the trainer file that already does this is fine as is, I'd recommend not making this change.
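For context, the standardization being discussed is the usual zero-mean, unit-variance scaling of the predicted signal. A minimal NumPy sketch (illustrative only, not the toolbox's exact code) shows why doing it twice is redundant rather than harmful: standardization is idempotent, so the second pass is a no-op.

```python
import numpy as np

def standardize(x):
    # Zero-mean, unit-variance scaling, as done in the trainer file.
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=5.0, size=128)  # arbitrary raw signal

once = standardize(x)
twice = standardize(once)

# After the first pass the mean is 0 and the std is 1, so the
# second pass divides by 1 and subtracts ~0: the values are unchanged.
assert np.allclose(once, twice)
```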

@KegangWangCCNU

The main issue is that the testing section in trainer.py does not use normalization. Adding this directly to the end of the model can ensure that normalization is not overlooked.
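The idea described above can be sketched as follows. This is a hypothetical stand-in, not the real PhysNet (which is a 3D-CNN); the class and attribute names are illustrative. The point is only where the standardization lives: at the end of `forward()`, so both the training and testing paths in the trainer receive a normalized signal and the step cannot be overlooked.

```python
import torch
import torch.nn as nn

class PhysNetLike(nn.Module):
    """Illustrative stand-in for PhysNet; the real backbone is a 3D-CNN,
    replaced here by a single linear layer for brevity."""
    def __init__(self, frames: int = 32):
        super().__init__()
        self.backbone = nn.Linear(frames, frames)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rPPG = self.backbone(x)
        # Standardize inside the model so every caller (training *and*
        # testing code in the trainer) gets a zero-mean, unit-variance
        # signal; the small epsilon guards against division by zero.
        rPPG = (rPPG - torch.mean(rPPG)) / (torch.std(rPPG) + 1e-8)
        return rPPG

model = PhysNetLike()
out = model(torch.randn(4, 32))
```

With this in place, the duplicate standardization in the training portion of trainer.py could be removed, as suggested below.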

@yahskapar (Collaborator)

> The main issue is that the testing section in trainer.py does not use normalization. Adding this directly to the end of the model can ensure that normalization is not overlooked.

That makes sense. In that case, maybe @McJackTang can remove the redundant normalization in the training portion of trainer.py such that normalization only takes place at the end of the model.

@xliucs (Member)

xliucs commented Sep 5, 2023

@yahskapar @McJackTang Is it ready to merge? I think it looks fine to me.

@xliucs (Member) left a comment:

LGTM

@yahskapar (Collaborator)

> @yahskapar @McJackTang Is it ready to merge? I think it looks fine to me.

I think it's fine to merge. @McJackTang, when you have time and prior to merging, can you also consider adding a comment in the model file explaining why that normalization is there (e.g., so normalization is not overlooked when testing)? Alternatively, you could add normalization to the test function in the trainer file instead.

I'll approve now either way since the aforementioned can be up to you.

@yahskapar yahskapar self-requested a review September 5, 2023 04:07
@yahskapar (Collaborator)

@McJackTang, any plans to merge this? Was there something further you wanted to do with this PR before merging?

@McJackTang (Collaborator, Author)

> @McJackTang, any plans to merge this? Was there something further you wanted to do with this PR before merging?

Sorry for the late response. I remember that this change helps with training; I just need to double-check the details of this branch's performance.

4 participants