
Add PyTorch Finetuning Capability, Examples #59

Merged
merged 18 commits into from Nov 15, 2022

Conversation

mwalmsley
Owner

@mwalmsley mwalmsley commented Nov 15, 2022

Key change is adding the pytorch.training.finetune() method. It works on either classification data (e.g. labels 0, 1) or count data (e.g. 12 volunteers said yes, 4 said no).

Includes three working examples:

  • Binary classification, with tiny rings subset
  • Counts for single question, with full internal rings data
  • Counts for all questions, with GZ Cosmic Dawn schema
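To illustrate the kind of workflow finetuning enables, here is a minimal head-only finetuning sketch in plain PyTorch. The model, data, and hyperparameters are illustrative stand-ins, not the actual pytorch.training.finetune() API:

```python
# Minimal head-only finetuning sketch, assuming a pretrained encoder.
# All names (encoder, head) and shapes here are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained trunk (e.g. an EfficientNet encoder).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64, 16), nn.ReLU())
for p in encoder.parameters():
    p.requires_grad = False  # freeze pretrained weights

# New trainable head: one logit for binary classification.
head = nn.Linear(16, 1)
model = nn.Sequential(encoder, head)

x = torch.randn(8, 64)                    # 8 fake "images"
y = torch.randint(0, 2, (8, 1)).float()   # binary labels (0/1)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(5):                        # a few steps; real runs train longer
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# For count data (k of n volunteers said yes), swap the loss for a
# binomial-style likelihood over (successes, totals) instead of BCE.
```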

Also updates various imports for the galaxy-datasets refactor, fixes the prediction method to work on unlabelled data, and makes minor QoL improvements.

Finally, changes the PyTorch dense layer initialisation to a custom high-uncertainty initialisation - see efficientnet_custom.py.
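One way to realise a high-uncertainty initialisation is to zero the final dense layer, so every softmax output starts uniform (maximum entropy). This is only a sketch of the idea; the actual scheme in efficientnet_custom.py may differ:

```python
# Sketch of a "high-uncertainty" dense-layer initialisation: zeroed weights
# and bias make every softmax output uniform. Layer sizes are illustrative.
import torch
import torch.nn as nn

dense = nn.Linear(16, 4)  # e.g. 4 possible answers to one question
nn.init.zeros_(dense.weight)
nn.init.zeros_(dense.bias)

# Regardless of the input, predictions start maximally uncertain.
probs = torch.softmax(dense(torch.randn(3, 16)), dim=-1)
```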

cc @camallen

@mwalmsley mwalmsley merged commit 85a975c into benchmarks Nov 15, 2022
@mwalmsley mwalmsley deleted the finetune branch November 15, 2022 16:00
@mwalmsley mwalmsley mentioned this pull request Nov 15, 2022
@mwalmsley
Owner Author

PyTorch benchmarks train successfully - punching merge
