FermiNet Training Complete (Backward pass + Forward) #3689
Conversation
```diff
@@ -462,7 +470,7 @@ def prepare_hf_solution(self):
         self.mf = pyscf.scf.UHF(self.mol)
         _ = self.mf.kernel()
 
-    def random_walk(self, x: np.ndarray) -> np.ndarray:
+    def random_walk(self, x: np.ndarray):
```
Doing this to avoid mypy errors.
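For illustration, a minimal sketch of the kind of return-type mismatch that makes mypy complain when a declared annotation no longer holds; the branch shapes here are assumptions, not the PR's actual code:

```python
import numpy as np


def walk(x: np.ndarray, pretraining: bool):
    # With `-> np.ndarray` declared, mypy rejects this function because
    # the pretraining branch returns a tuple (an assumed extra output);
    # dropping (or widening) the annotation avoids the error.
    if pretraining:
        return 2 * np.log(np.abs(x)), x
    return 2 * np.log(np.abs(x))
```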
```diff
@@ -476,42 +484,151 @@ def random_walk(self, x: np.ndarray) -> np.ndarray:
             A numpy array containing the joint probability of the hartree fock and the sampled electron's position coordinates
         """
         x_torch = torch.from_numpy(x).view(self.batch_no, -1, 3)
+        x_torch.requires_grad = True
+        if self.tasks == 'pretraining':
```
Adding the pretraining and training parts separately in the random_walk function.
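For readers following along, here is a minimal sketch of what such a branch can look like, written as a free function. `model_psi` and `hf_psi` are assumed callables returning wavefunction values as torch tensors, not DeepChem API, and sampling from the product of the two wavefunctions during pretraining is one possible choice, not necessarily the PR's:

```python
import numpy as np
import torch


def random_walk_sketch(x: np.ndarray, batch_no: int, task: str,
                       model_psi, hf_psi) -> np.ndarray:
    # Reshape the flat sampler state into (batch, n_electrons, 3).
    x_torch = torch.from_numpy(x).view(batch_no, -1, 3)
    x_torch.requires_grad = True
    if task == 'pretraining':
        # Pretraining walks over the product with the Hartree-Fock
        # wavefunction, so the samples cover both distributions while
        # the model's orbitals are fit against the HF solution.
        psi = hf_psi(x_torch) * model_psi(x_torch)
    else:
        # Training walks over the model's own wavefunction.
        psi = model_psi(x_torch)
    np_output = psi.detach().cpu().numpy()
    # The Metropolis sampler works with the log-probability log|psi|^2.
    return 2 * np.log(np.abs(np_output))
```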
```diff
                 (self.energy_sampled, energy.unsqueeze(0)))
         return 2 * np.log(np.abs(np_output))
 
+    def prepare_train(self, burn_in: int = 100):
```
This function performs the burn-in and updates the model parameters before training starts.
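A hedged sketch of the burn-in idea, assuming a sampler object with a `.move(stddev=...)` method as in the diff below; the helper and attribute names are assumptions:

```python
import torch


def prepare_train_sketch(sampler, model, batch_no: int,
                         burn_in: int = 100, std_init: float = 0.02):
    # Burn-in: discard the first moves so the walkers forget their
    # initial positions and reach the equilibrium distribution before
    # any gradient step uses the samples.
    for _ in range(burn_in):
        sampler.move(stddev=std_init)
    # Reset state accumulated during pretraining so the training
    # objective starts from a clean slate.
    model.running_diff = torch.zeros(batch_no)
```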
```diff
                 self.loss_value = (torch.mean(self.model.running_diff) /
                                    self.random_walk_steps)
                 self.loss_value.backward()
                 optimizer.step()
                 self.model.running_diff = torch.zeros(self.batch_no)
 
+        if (self.tasks == 'training'):
```
Adding the training part.
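For context on the training branch, below is a standard way to write a variational Monte Carlo gradient step. This shows the general technique, not the PR's literal code; the tensor names are assumptions:

```python
import torch


def vmc_training_step(optimizer: torch.optim.Optimizer,
                      local_energy: torch.Tensor,
                      log_psi: torch.Tensor) -> torch.Tensor:
    # Centering the local energy gives an unbiased, lower-variance
    # estimator of the energy gradient; the energies themselves are
    # detached because only log|psi| carries parameter gradients here.
    optimizer.zero_grad()
    centered = (local_energy - local_energy.mean()).detach()
    loss = 2.0 * torch.mean(centered * log_psi)
    loss.backward()
    optimizer.step()
    return loss
```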
```diff
@@ -164,12 +164,19 @@ def loss(self,
             indicates whether the model is pretraining
```
A List[bool] is very awkward; let's swap this to True/False in a subsequent PR
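For illustration, the suggested follow-up change would look roughly like this (the parameter name is an assumption):

```python
from typing import List


# Before: a one-element list used as a flag, which is easy to misuse
# (and a mutable default argument on top of that).
def loss_before(pretrain: List[bool] = [True]) -> bool:
    return pretrain[0]


# After: a plain boolean, as the review suggests.
def loss_after(pretrain: bool = True) -> bool:
    return pretrain
```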
```diff
                 weight_decay: float = 0,
                 std: float = 0.08,
                 std_init: float = 0.02,
                 steps_std: int = 100):
         """
         function to run training or pretraining.
```
This docstring should explain in detail why we need to override the TorchModel implementation.
Explain in detail how pretraining works vs. how training works. Multiple paragraphs, please.
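As a rough sketch of the two phases such a docstring should cover (method and attribute names here are assumptions; the generic `TorchModel.fit` loop iterates over a fixed dataset, whereas FermiNet must interleave MCMC sampling with optimization, which is presumably why it is overridden):

```python
def fit_sketch(model, pretrain_steps: int = 1000,
               train_steps: int = 1000, burn_in: int = 100):
    # Phase 1 (pretraining): walkers sample electron positions and the
    # model's orbitals are regressed toward the Hartree-Fock solution.
    model.tasks = 'pretraining'
    for _ in range(pretrain_steps):
        model.sample_and_step()  # hypothetical helper: walk + HF loss
    # Phase 2 (training): after a fresh burn-in, the walkers follow the
    # model's own wavefunction and the loss is the sampled local energy.
    model.tasks = 'training'
    model.prepare_train(burn_in=burn_in)
    for _ in range(train_steps):
        model.sample_and_step()  # hypothetical helper: walk + energy loss
```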
```diff
@@ -462,7 +470,7 @@ def prepare_hf_solution(self):
         self.mf = pyscf.scf.UHF(self.mol)
         _ = self.mf.kernel()
 
-    def random_walk(self, x: np.ndarray) -> np.ndarray:
+    def random_walk(self, x: np.ndarray):
         """
         Function to be passed on to electron sampler for random walk and gets called at each step of sampling
 
```
Document why the random walk is different for pretraining and finetuning
Also document burn-in as a phase.
```diff
+    def prepare_train(self, burn_in: int = 100):
+        """
+        Function to perform burn-in and to change the model parameters for training.
```
More details on why this is necessary
```diff
             accept = self.molecule.move(stddev=std_init)
             if iteration % steps_std == 0:
                 if accept > 0.55:
                     std_init *= 1.1
```
Magic numbers are bad; these need to be documented or made into tunable parameters.
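One way to address this, as a sketch: lift the constants into named parameters (the names are suggestions, not existing DeepChem options):

```python
def adapt_std(std: float, accept_rate: float,
              target_accept: float = 0.55,
              adjust_factor: float = 1.1) -> float:
    # Keep the Metropolis acceptance rate near `target_accept` by
    # widening the proposal when too many moves are accepted and
    # narrowing it when too few are.
    if accept_rate > target_accept:
        return std * adjust_factor
    return std / adjust_factor
```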
LGTM
@shaipranesh2 Please fix the documentation requests in a follow-up PR.
Description
Fix #(issue)
Type of change
Please check the option that is related to your PR.
Checklist
- Run `yapf -i <modified file>` and check no errors (yapf version must be 0.32.0)
- Run `mypy -p deepchem` and check no errors
- Run `flake8 <modified file> --count` and check no errors
- Run `python -m doctest <modified file>` and check no errors