Releases: lucidrains/alphafold2
0.0.21
add setting to row-tie attention for MSA, but with a strict warning that the batches of MSA contain no padding or masked-out tokens
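A minimal sketch of what row-tied attention means here (illustration only, with no learned projections; not the repo's actual implementation): the query-key logits are averaged across all MSA rows, so every sequence shares one attention map. A padded or masked-out row would pollute that shared average, which is why the warning is strict.

```python
import torch

def row_tied_attention(msa):
    # msa: (batch, rows, cols, dim). Tied row attention computes
    # query-key logits per row, then averages them across all rows,
    # so every sequence in the MSA shares a single attention map.
    # A padded row would corrupt this shared average, hence the
    # no-padding warning in the release note above.
    b, r, c, d = msa.shape
    logits = torch.einsum('brid,brjd->brij', msa, msa) * d ** -0.5
    tied = logits.mean(dim=1)          # (b, cols, cols) shared map
    attn = tied.softmax(dim=-1)
    return torch.einsum('bij,brjd->brid', attn, msa)
```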
0.0.20
use axial attention for MSA processing, in line with the MSA Transformer paper released two days ago by @rmrao
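A hedged sketch of the axial idea (projection-free, for illustration; the repo's real module is parameterized): full 2D attention over an (rows × cols) MSA is factorized into two 1D passes, one along each axis, cutting cost from O((r·c)²) to roughly O(r·c·(r + c)).

```python
import torch

def axial_attention(x):
    # x: (batch, rows, cols, dim). Attend along each axis in turn
    # instead of over all rows*cols tokens jointly.
    b, r, c, d = x.shape
    scale = d ** -0.5

    # pass 1: attend along the row axis (positions within a sequence)
    logits = torch.einsum('brid,brjd->brij', x, x) * scale
    x = torch.einsum('brij,brjd->brid', logits.softmax(dim=-1), x)

    # pass 2: attend along the column axis (same position across sequences)
    logits = torch.einsum('bicd,bjcd->bcij', x, x) * scale
    x = torch.einsum('bcij,bjcd->bicd', logits.softmax(dim=-1), x)
    return x
```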
0.0.19
add ability to toggle inter MSA self attention
0.0.18
add an assert to make sure total sequence length of MSA or AA does not exceed the set length of the sparse attention kernel
0.0.17
make all masking optional
0.0.16
add memory compressed attention to make cross attention more efficient
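A minimal sketch of memory-compressed cross attention, under the assumption (not taken from this repo) that compression is done by mean-pooling the key/value sequence: pooling by a factor shrinks the attention matrix from n × m to n × ⌈m / factor⌉.

```python
import torch
import torch.nn.functional as F

def memory_compressed_attention(q, kv, compress=2):
    # q: (batch, n, dim) queries; kv: (batch, m, dim) context.
    # Mean-pool keys/values along the sequence axis by `compress`,
    # shrinking the attention matrix by that factor.
    b, m, d = kv.shape
    pad = (-m) % compress                   # pad to a multiple of compress
    kv = F.pad(kv, (0, 0, 0, pad))          # pad the sequence axis
    kv = kv.reshape(b, -1, compress, d).mean(dim=2)
    logits = torch.einsum('bnd,bmd->bnm', q, kv) * d ** -0.5
    return torch.einsum('bnm,bmd->bnd', logits.softmax(dim=-1), kv)
```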
0.0.11
0.0.10
complete reversibility
0.0.9
bump
0.0.8
more cleanup