
Pytorch version of the code #59

Open · kdmsit opened this issue Mar 12, 2022 · 1 comment

kdmsit commented Mar 12, 2022

Is it possible to have a plain PyTorch version of the code in place of torch-ignite? I want to dive deep into the actual code and make some changes to the loss function. I have tried to replace `trainer.run(train_loader, max_epochs=config.epochs)` with a simple iterative PyTorch version of it, as follows:

```python
for epoch in range(config.epochs):
    for batch in train_loader:
        net.train()
        optimizer.zero_grad()
        # each batch is (atom graph, line graph, target)
        graph, line_graph, target = batch
        if torch.cuda.is_available():
            graph = graph.to(device, non_blocking=True)
            line_graph = line_graph.to(device, non_blocking=True)
            target = target.to(device, non_blocking=True)
        g = (graph, line_graph)
        output = net(g)
        loss = criterion(output, target)
        mae_error = mae(output.data.cpu(), target.cpu())
        loss.backward()
        optimizer.step()
```

However, I am not able to reproduce the results. Could you please help me resolve the issue?

bdecost (Collaborator) commented Mar 15, 2022

Hi @kdmsit -- would you mind giving some more details about the loss function you're using and what is not working as expected?

One thing I noticed is that you aren't using a learning rate scheduler in this example. If you want to write the training loop out explicitly, note that the default scheduler we use is cosine annealing with linear warmup, and you'll want to call `scheduler.step()` on every iteration, just after `optimizer.step()`, in the inner loop.
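Something along these lines should work (a rough sketch; the 10% warmup fraction and the step bookkeeping here are illustrative assumptions, not our exact defaults):

```python
import math

import torch

# Illustrative schedule bookkeeping (not the repo's exact defaults):
total_steps = config.epochs * len(train_loader)
warmup_steps = int(0.1 * total_steps)  # assume 10% linear warmup

def warmup_cosine(step):
    """LR multiplier: linear warmup to 1.0, then cosine annealing to 0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_cosine)

for epoch in range(config.epochs):
    for batch in train_loader:
        # ... forward / loss / backward exactly as in your loop above ...
        optimizer.step()
        scheduler.step()  # once per iteration, not once per epoch
```

On newer PyTorch versions you can get the same schedule shape from `SequentialLR` combining `LinearLR` and `CosineAnnealingLR`.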

If you just want a different (or custom) loss, you can add it here (and to the configuration settings) and keep using ignite. I've found that ignite really helps reduce the duplication and complexity of evaluation logic, for example.
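For example (a sketch of the pattern, assuming a `criteria` dict keyed by a config string; the `WeightedMSELoss` below is hypothetical, just to show the shape):

```python
import torch
from torch import nn
from ignite.engine import create_supervised_trainer

class WeightedMSELoss(nn.Module):
    """Hypothetical custom loss: MSE weighted toward large-magnitude targets."""

    def forward(self, output, target):
        weights = 1.0 + target.abs()
        return (weights * (output - target) ** 2).mean()

# Map config strings to loss callables; register new losses here.
criteria = {
    "mse": nn.MSELoss(),
    "l1": nn.L1Loss(),
    "weighted_mse": WeightedMSELoss(),  # hypothetical addition
}
criterion = criteria[config.criterion]

# ignite just needs a callable loss_fn; the rest of the setup is unchanged.
# (Illustrative: the actual trainer construction may differ, e.g. a custom
# prepare_batch to unpack the (graph, line_graph, target) batches.)
trainer = create_supervised_trainer(net, optimizer, loss_fn=criterion, device=device)
```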

Do you have a working example you could share, maybe?
