I think it might be possible to provide an evo-tuning API.
I think the end-user should be concerned with as few things as possible, and we can provide sane defaults for the rest. My thoughts on what a user should be concerned with providing are:
starter weights (whether it’s the pre-loaded UniRef50 weights, or randomly generated, or other previously evotuned weights)
sequences of varying lengths
Then, there are other things we can probably just set to some sane defaults. For example, we probably need an automated protocol (probably using optuna) to figure out how many epochs to fine-tune for. (I read the pre-print supplement, it’s on the order of 10^4 steps, “until outer validation loss began to increase”. My prior experience with simple DL models and Optuna is that we probably can offload babysitting the model to Optuna.)
So a possible API would look like:
```python
def evotune(weights, sequences: List[str]):
    # Rough steps:
    # 1. Find the number of epochs N using parallel Optuna.
    # 2. Optimize weights using N epochs.
    # 3. Save weights at the end of N epochs of training.
```
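The stopping criterion from the pre-print ("until outer validation loss began to increase") can be sketched as a simple early-stopping check; an Optuna study would essentially wrap this same criterion. This is only a sketch with hypothetical names (`find_num_epochs`, the per-epoch loss list), not an actual jax-unirep or Optuna API:

```python
from typing import List


def find_num_epochs(val_losses: List[float]) -> int:
    """Pick the number of epochs to fine-tune for.

    `val_losses` holds one outer-validation loss per epoch (hypothetical
    input). We stop at the first epoch where the loss rises, mirroring
    the "until outer validation loss began to increase" protocol.
    """
    for epoch in range(1, len(val_losses)):
        if val_losses[epoch] > val_losses[epoch - 1]:
            return epoch
    # Loss never rose within the budget: use all epochs observed.
    return len(val_losses)
```

For example, `find_num_epochs([0.9, 0.7, 0.6, 0.65])` returns `3`, since validation loss first rises at the fourth epoch.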
What do you think?
Implementing evotuning (training in general) is a logical next step, I think. Using Optuna for further automation sounds like an interesting idea! I'll open a new branch.