
Proposal in "model/loss.py" - Use loss classes instead of functional API #21

Closed
borgesa opened this issue Sep 21, 2018 · 5 comments
borgesa commented Sep 21, 2018

Hi,

Currently, `get_loss_function` searches for local functions in the same file and returns the function object if it finds one with a matching name. I propose that we instead use the interface of the `torch.nn.modules.loss` classes and return an instantiated class object rather than a function reference.

These classes call the functional API internally anyway, and they are better documented.

As an example:

  • Current functionality:
    • `my_loss` returns `F.nll_loss(y_in, y_target)`
    • `get_loss_function` returns a reference to the function `my_loss`
  • Proposed functionality:
    • `get_loss_function` instead uses a combination of `getattr(torch.nn.modules.loss, loss_fn_name)` (covering all built-in loss classes in PyTorch) and a search for custom loss classes inside the file (`model/loss.py`)
    • An instantiated class object is returned

What do you think?
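A minimal sketch of what the proposed lookup could look like (assumptions: custom loss classes live in the same module as the function, and `torch.nn` is used as the lookup target since it re-exports the classes in `torch.nn.modules.loss`):

```python
import sys

import torch.nn as nn


def get_loss_function(loss_fn_name):
    """Return an *instantiated* loss object for the given name.

    Checks this module for a custom loss class first, then falls back
    to PyTorch's built-in loss classes (torch.nn.modules.loss).
    """
    this_module = sys.modules[__name__]
    if hasattr(this_module, loss_fn_name):
        return getattr(this_module, loss_fn_name)()
    # e.g. "NLLLoss" resolves to nn.NLLLoss, instantiated with defaults
    return getattr(nn, loss_fn_name)()
```

The caller would then do `loss = get_loss_function("NLLLoss")` and later `loss(y_input, y_target)`, exactly as with the functional version.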


borgesa commented Sep 21, 2018

Reference (example):

To clarify:

  • If we make the change that I propose, nothing changes inside the trainer:
    • Now, `loss` is a reference to the function `my_loss`; calling `loss(y_input, y_target)` returns `F.nll_loss(y_input, y_target)`
    • With my proposal, `loss` is a class instance with a `forward` function; calling `loss(y_input, y_target)` returns `F.nll_loss(y_input, y_target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)`

victoresque (Owner) commented:

Hi, given your examples, I agree that using loss classes is better than using the functional API. Although this part of the template is just an example, I think using loss objects in the example code could provide a better convention for someone who is not familiar with PyTorch.


borgesa commented Sep 22, 2018

Hi again,

I have already implemented this in an offline copy of the repository. Should I create a pull request with the proposal?

The pull request will then update:

  • `config.json`: set the "loss" parameter to "NLLLoss" (for the purpose of the example)
  • `README.md`: add text noting that all built-in loss functions are available by default (the only requirement is to change `config.json`)
  • `loss.py`: search for classes inside `torch.nn.modules.loss`
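For the `config.json` change, the relevant fragment would look something like the following (only the "loss" entry is shown; the surrounding keys are whatever the template already defines):

```json
{
  "loss": "NLLLoss"
}
```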

What I will likely not implement for this pull request (TODO for later):

  • The possibility to load custom loss functions, plus documentation for this

victoresque (Owner) commented:

Hi, since this is a relatively minor change, I think combining this and the future custom loss feature in a single PR should be fine. Thanks!

borgesa added a commit to borgesa/pytorch-template that referenced this issue Sep 23, 2018

borgesa commented Sep 23, 2018

I have created a commit with updates addressing this issue.

I agree that my last bullet is a small change; however, I am not certain how best to implement it, so I left it as a 'TODO' (which I also mention in the updated README.md).

PS: I need to fix "master" on my fork to synchronize it with this repository. If you merge pull request #15, I will pretty much be back on track, and I will then create a tidy pull request for this issue. Sorry for the inconvenience, and thanks.
