Issues trying to reproduce atom typing recovery experiment #202
Comments
Figured out the issue; I should've realized that I wouldn't need to specify [...]. In case others might run into the same issue, the problem was resolved by using [...].
Glad you figured this out! Is there anything we can do to make this more clear in our documentation?
Thanks for following up! It might be helpful to update the atom typing recovery docs with some of the changes I mentioned. Specifically, changing [...] and modifying the last code block on the page to something like:

```python
# define optimizer
optimizer = torch.optim.Adam(net.parameters(), 1e-5)

# Uncomment below to use the GPU for training
# if torch.cuda.is_available():
#     net = net.cuda()

# train the model
for _ in range(3000):
    for g in ds_tr:
        optimizer.zero_grad()
        # Uncomment below to use the GPU for training
        # if torch.cuda.is_available():
        #     g = g.to("cuda:0")
        g = net(g)
        loss = loss_fn(g)
        loss.requires_grad = True
        loss.backward()
        optimizer.step()
```

Happy to submit a Pull Request if it's appropriate!
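For what it's worth, a device-agnostic variant of the same loop avoids the commented-out GPU lines entirely. This is only a sketch, assuming the same `net`, `ds_tr`, and `loss_fn` as in the docs:

```python
import torch

# use the GPU when available, otherwise fall back to the CPU
device = "cuda:0" if torch.cuda.is_available() else "cpu"
net = net.to(device)

# define optimizer
optimizer = torch.optim.Adam(net.parameters(), 1e-5)

# train the model
for _ in range(3000):
    for g in ds_tr:
        optimizer.zero_grad()
        g = g.to(device)  # move the graph to the same device as the model
        g = net(g)
        loss = loss_fn(g)
        loss.backward()
        optimizer.step()
```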
I'm trying to reproduce the atom typing recovery experiment from the docs and ran into some issues. I'm including the steps I've tried below, but I also had a couple of general questions:
Steps I've tried so far
First, in order to set up the environment I used

```
mamba create -n espaloma-032 -c conda-forge espaloma=0.3.2
```

as suggested in #195 (comment). The URL for the ZINC dataset was not working, so I replaced that chunk of code with the suggestion in #120.
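To double-check the environment after creating it, a quick version check like the one below can help; this is just a sketch, and I'm not certain `espaloma` exposes a `__version__` attribute in every release (hence the `getattr` fallback):

```python
import torch
import espaloma

# print the versions actually installed in the espaloma-032 environment
print("espaloma:", getattr(espaloma, "__version__", "unknown"))
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```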
Following along with the code after that, I ran into the following warnings and error:
At this point I tried referring to the docs for some of the other experiments and modified the following chunks of code:
With this I was able to get the model to train, but the training loss looks off, so I'm probably doing something wrong. Does anyone have any ideas/suggestions?
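In case it helps with diagnosing, a minimal way to watch the loss over training is sketched below; the per-pass averaging is my own addition for monitoring and is not part of the docs (same `net`, `ds_tr`, `loss_fn`, and `optimizer` as above):

```python
# track the average loss for each pass over ds_tr
for epoch in range(3000):
    epoch_losses = []
    for g in ds_tr:
        optimizer.zero_grad()
        g = net(g)
        loss = loss_fn(g)
        loss.backward()
        optimizer.step()
        epoch_losses.append(loss.item())
    if epoch % 100 == 0:
        print(f"epoch {epoch}: mean loss = {sum(epoch_losses) / len(epoch_losses):.4f}")
```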