given atom lengths and a sequence, do an average pool based on those lengths (atom -> token)
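A minimal sketch of that length-based average pool, assuming PyTorch tensors and a hypothetical function name `mean_pool_with_lens` (the per-batch loop and the zero-length guard are illustrative choices, not the repo's actual implementation):

```python
import torch

def mean_pool_with_lens(atom_feats, atom_lens):
    # atom_feats: (batch, total_atoms, dim) atom-level features
    # atom_lens:  (batch, num_tokens) atoms per token, assumed to sum to total_atoms
    batch, _, dim = atom_feats.shape
    num_tokens = atom_lens.shape[-1]

    pooled = atom_feats.new_zeros(batch, num_tokens, dim)

    for b in range(batch):
        # token id for every atom, e.g. lens [2, 3] -> [0, 0, 1, 1, 1]
        token_ids = torch.repeat_interleave(
            torch.arange(num_tokens, device = atom_lens.device), atom_lens[b]
        )
        # sum atom features into their owning token slot
        pooled[b].index_add_(0, token_ids, atom_feats[b])

    # divide the sums by atom counts, guarding against zero-length tokens
    return pooled / atom_lens.clamp(min = 1).unsqueeze(-1)
```

For example, features `[1, 2, 3, 4]` with lengths `[2, 2]` pool to `[1.5, 3.5]`.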
given atom lengths and a sequence, expand the sequence consecutively by those lengths (token -> atom)
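The inverse direction can be sketched with `torch.repeat_interleave`; the function name `repeat_consecutively_with_lens` and the returned padding mask are assumptions for illustration:

```python
import torch

def repeat_consecutively_with_lens(token_feats, atom_lens):
    # token_feats: (batch, num_tokens, dim) token-level features
    # atom_lens:   (batch, num_tokens) atoms per token
    # returns (batch, max_atoms, dim) atom-level features plus a padding mask
    batch, _, dim = token_feats.shape
    max_atoms = int(atom_lens.sum(dim = -1).amax())

    out = token_feats.new_zeros(batch, max_atoms, dim)
    mask = torch.zeros(batch, max_atoms, dtype = torch.bool)

    for b in range(batch):
        # each token feature is repeated once per atom it owns
        expanded = torch.repeat_interleave(token_feats[b], atom_lens[b], dim = 0)
        out[b, :expanded.shape[0]] = expanded
        mask[b, :expanded.shape[0]] = True

    return out, mask
```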
fix the packed atom representation when going from the token-level to the atom-level pairwise repr
packed repr - make sure repeating the pairwise repr is done in one specialized function; also take care of curtailing or padding the mask through some kwarg
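One way such a specialized function could look, with the mask curtail/pad behavior handled through a kwarg; the name `repeat_pairwise_with_lens` and the exact masking semantics are assumptions:

```python
import torch
import torch.nn.functional as F

def repeat_pairwise_with_lens(pairwise, atom_lens, mask = None):
    # pairwise:  (batch, num_tokens, num_tokens, dim) token-level pairwise repr
    # atom_lens: (batch, num_tokens) atoms per token
    # mask: optional (batch, atoms) bool mask, curtailed or padded to max_atoms
    batch, dim = pairwise.shape[0], pairwise.shape[-1]
    max_atoms = int(atom_lens.sum(dim = -1).amax())

    out = pairwise.new_zeros(batch, max_atoms, max_atoms, dim)

    for b in range(batch):
        # repeat each pairwise entry along rows, then along columns
        rows = torch.repeat_interleave(pairwise[b], atom_lens[b], dim = 0)
        full = torch.repeat_interleave(rows, atom_lens[b], dim = 1)
        n = full.shape[0]
        out[b, :n, :n] = full

    if mask is not None:
        # curtail or pad the provided mask to the atom resolution
        if mask.shape[-1] >= max_atoms:
            mask = mask[:, :max_atoms]
        else:
            mask = F.pad(mask, (0, max_atoms - mask.shape[-1]), value = False)
        out = out * mask[:, :, None, None] * mask[:, None, :, None]

    return out
```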
be able to pass in residue indices for protein-only training, with everything else derived; test with sidechainnet
atom transformer attention bias needs to be calculated efficiently in the Alphafold3 module; use asserts to make sure shapes are correct within the local_attn fn
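A toy sketch of a windowed (block-diagonal) attention bias with the shape assert in place; the function name `local_attn_bias` and the non-overlapping-window scheme are illustrative assumptions, not the module's actual biasing:

```python
import torch

def local_attn_bias(seq_len, window_size, device = None):
    # block-diagonal bias: positions attend only within their own window
    ids = torch.arange(seq_len, device = device) // window_size
    allowed = ids[:, None] == ids[None, :]

    bias = torch.zeros(seq_len, seq_len, device = device)
    bias.masked_fill_(~allowed, float('-inf'))

    # shape assert, as the todo suggests for the local_attn fn
    assert bias.shape == (seq_len, seq_len)
    return bias
```

Building the bias once per sequence length and adding it to the attention logits avoids recomputing the window membership inside every attention call.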
take care of residue identities / indices -> atom feats + atom bonds + attention biasing for atom transformers
training
validation and test dataset
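Carving out validation and test splits could start from `torch.utils.data.random_split` (fractional lengths are supported in recent PyTorch versions); the toy `TensorDataset` and 80/10/10 ratios here are placeholder assumptions:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# toy stand-in; a real dataset would yield residue indices, atom feats, etc.
dataset = TensorDataset(torch.randn(100, 8))

# seeded generator keeps the split reproducible across runs
train_ds, val_ds, test_ds = random_split(
    dataset, [0.8, 0.1, 0.1],
    generator = torch.Generator().manual_seed(42)
)

assert len(train_ds) + len(val_ds) + len(test_ds) == len(dataset)
```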
add config-driven training with pydantic validation for constructing the trainer and base model
saving and loading for both the base Alphafold3 model as well as the trainer + optimizer states
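The usual PyTorch pattern for bundling both kinds of state into one checkpoint, sketched with a stand-in `nn.Linear` in place of the real model (the checkpoint keys and path are illustrative):

```python
import torch
from torch.optim import Adam

model = torch.nn.Linear(8, 8)  # stand-in for the Alphafold3 module
optimizer = Adam(model.parameters(), lr = 1e-4)

# save model weights and trainer/optimizer state together
checkpoint = dict(
    model = model.state_dict(),
    optimizer = optimizer.state_dict(),
    step = 1000,
)
torch.save(checkpoint, 'checkpoint.pt')

# restore both from the same file
loaded = torch.load('checkpoint.pt')
model.load_state_dict(loaded['model'])
optimizer.load_state_dict(loaded['optimizer'])
```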
add a trainer orchestrator config that contains many training configs and one model
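The two config todos above could be sketched with pydantic roughly as follows; every class and field name here (`ModelConfig`, `TrainerConfig`, `ConductorConfig`, `lr`, `depth`, ...) is a hypothetical placeholder, not the repo's actual schema:

```python
from pydantic import BaseModel

class ModelConfig(BaseModel):
    dim: int = 384
    depth: int = 48

class TrainerConfig(BaseModel):
    lr: float = 1e-4
    batch_size: int = 4
    num_steps: int = 10000

class ConductorConfig(BaseModel):
    # one model, many training stages - mirroring the orchestrator idea
    model: ModelConfig
    trainers: list[TrainerConfig]

# pydantic validates and coerces the nested dicts into typed configs
config = ConductorConfig(
    model = dict(dim = 384, depth = 48),
    trainers = [dict(lr = 1e-4), dict(lr = 3e-5, num_steps = 50000)],
)
assert config.trainers[1].lr == 3e-5
```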
modules

- `f_tokenbond` embedding to pairwise init (default to one single chain for starters if not passed in)
- `atom_mask` (variable number of atoms per batch sample)
- `MSAModule`

miscellaneous

- @lucidrains take care of the `Alphafold3` module; use asserts to make sure shapes are correct within the `local_attn` fn
- training
- datasets
- improvisations
- cleanup