Molecule similarity aware jax.vmap(nanoDFT) #65

Open
AlexanderMath opened this issue Sep 4, 2023 · 1 comment
Comments

@AlexanderMath
Contributor

AlexanderMath commented Sep 4, 2023

Our DFT function considers only a single molecule. We could compute DFT for a batch of molecules using jax.vmap.
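A minimal sketch of batching a per-molecule energy with jax.vmap. The function `toy_energy` is a stand-in assumption for the real nanoDFT energy (sum of inverse pairwise atomic distances), used only to show the batching pattern:

```python
import jax
import jax.numpy as jnp

def toy_energy(positions):
    # Stand-in for a single-molecule DFT energy; positions: (n_atom, 3).
    # Sum of inverse pairwise distances between distinct atoms.
    diffs = positions[:, None, :] - positions[None, :, :]
    # Add eye() on the diagonal so the sqrt is well-defined there;
    # the diagonal is excluded by triu(k=1) below anyway.
    dists = jnp.sqrt(jnp.sum(diffs**2, axis=-1) + jnp.eye(positions.shape[0]))
    return jnp.sum(jnp.triu(1.0 / dists, k=1))

# vmap over a leading molecule axis: (batch_size, n_atom, 3) -> (batch_size,)
batched_energy = jax.vmap(toy_energy)

key = jax.random.PRNGKey(0)
batch = jax.random.normal(key, (64, 8, 3))  # 64 molecules, 8 atoms each
energies = batched_energy(batch)            # shape (64,)
```

This is exactly where the memory problem below comes from: every vmapped intermediate gains a leading batch axis.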

Problem: Naive batching multiplies on-chip memory consumption by the batch size.

Idea: Take a molecule a. Construct molecule b by modifying the position of a single atom. Most floats we need to store for a and b overlap: only O(N^3) floats differ. If the molecule has n_atom atoms we could then make batch_size=64 such "one-atom" perturbations while increasing memory consumption by only batch_size*n_atom^3 floats, instead of batch_size times the full memory footprint.
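One way to construct such a batch of one-atom perturbations from a single geometry. The helper name, displacement scale, and atom-selection rule are illustrative assumptions, not part of nanoDFT:

```python
import jax
import jax.numpy as jnp

def make_one_atom_batch(positions, key, batch_size=8, scale=0.1):
    # positions: (n_atom, 3). Each batch element displaces exactly one
    # atom, so most intermediate quantities overlap between elements.
    n_atom = positions.shape[0]
    atom_idx = jnp.arange(batch_size) % n_atom          # which atom to move
    deltas = scale * jax.random.normal(key, (batch_size, 3))
    batch = jnp.tile(positions[None], (batch_size, 1, 1))
    # Functional indexed update: add each delta to its chosen atom's row.
    batch = batch.at[jnp.arange(batch_size), atom_idx].add(deltas)
    return batch

positions = jnp.zeros((4, 3))
batch = make_one_atom_batch(positions, jax.random.PRNGKey(0))
# Each batch element differs from `positions` in exactly one atom's row.
```

The resulting (batch_size, n_atom, 3) array can be fed to a vmapped energy function; the memory saving itself would require sharing the overlapping intermediates rather than plain vmap.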

Note: From a deep learning perspective this feels similar to data augmentation. From a DFT perspective we are just re-using shared intermediate results across batch_size=64 similar DFT computations.

@AlexanderMath AlexanderMath changed the title Memory-aware jax.vmap(nanoDFT) Data augmentation aware jax.vmap(nanoDFT) Sep 4, 2023
@AlexanderMath AlexanderMath changed the title Data augmentation aware jax.vmap(nanoDFT) Molecule similarity aware jax.vmap(nanoDFT) Sep 4, 2023
@AlexanderMath
Contributor Author

AlexanderMath commented Oct 27, 2023

Note: This also reduces FLOPs.

Note: Consider gradient descent w.r.t. the density_matrix as in D4FT. If we instead do gradient descent w.r.t. a neural network NN(mol)=dm and evaluate DFT_energy(NN(mol)), a single DFT iteration is sufficient to evaluate how well NN did.

Note: This also works if we change an atom's element within {C, N, O, F}.
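A sketch of the NN(mol)=dm idea. A linear map stands in for the network and a toy quadratic energy stands in for DFT_energy; all names, shapes, and the energy itself are illustrative assumptions:

```python
import jax
import jax.numpy as jnp

N = 6  # number of basis functions (toy)

def dft_energy(dm):
    # Stand-in for one DFT energy evaluation of a density matrix:
    # a linear core term plus a quadratic self-interaction term.
    H = jnp.diag(jnp.arange(1.0, N + 1))  # toy core Hamiltonian
    return jnp.trace(dm @ H) + 0.5 * jnp.sum(dm * dm)

def nn(params, mol_features):
    # Toy "network": linear map from molecule features to a symmetric dm.
    raw = (params @ mol_features).reshape(N, N)
    return 0.5 * (raw + raw.T)  # symmetrize

def loss(params, mol_features):
    # A single energy evaluation scores the predicted density matrix.
    return dft_energy(nn(params, mol_features))

mol_features = jnp.ones(4)
params = jnp.zeros((N * N, 4))
grads = jax.grad(loss)(params, mol_features)  # GD w.r.t. network params
params = params - 0.1 * grads                 # one gradient-descent step
```

The point of the sketch: the inner loop of self-consistent iterations is replaced by one forward pass plus one energy evaluation, with gradients flowing into the network parameters.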
