Weighted Multi-label Binary Cross-entropy Criterion

Implementation of the 'one-versus-all logistic loss' function described in the paper "Learning Visual Features from Large Weakly Supervised Data".
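The loss has the shape of a per-class weighted binary cross-entropy. As a sketch of one plausible formulation, consistent with the constructor arguments (per-class positive counts N_k and dataset size N) but not necessarily the exact weighting used in the paper or this code:

\ell(x, y) = -\frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{C} \left[ w_k \, y_{nk} \log \sigma(x_{nk}) + (1 - w_k)\,(1 - y_{nk}) \log\big(1 - \sigma(x_{nk})\big) \right], \qquad \sigma(t) = \frac{1}{1 + e^{-t}}

where, for example, w_k = 1 - N_k / N, so that classes with few positive examples receive a larger weight on their positive term.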

It has only been minimally tested, but I couldn't find any existing implementations of weighted multi-label loss functions, so it may be useful to someone else.

Usage:

criterion = nn.WeightedMultiLabelBinaryCrossEntropyCriterion([label_count_tensor], [dataset_size])

Here [label_count_tensor] is a 1xC tensor containing the number of positive examples for each of the C classes, and [dataset_size] is the total number of training examples.

As with any Torch loss: by default, the losses are averaged over observations for each minibatch. However, if the field sizeAverage is set to false, the losses are instead summed for each minibatch.
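For example, to sum instead of average (sizeAverage is the standard field on nn criteria mentioned above):

-- Sum the losses over each minibatch instead of averaging them.
criterion.sizeAverage = false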

Example:

require 'nn'
-- Load the criterion definition from this repository, e.g.:
-- dofile 'WeightedMultiLabelBinaryCrossEntropyCriterion.lua'

N = 100 -- 100 images
C = 10  -- 10 classes

-- Labels are 1 when a class is present and 0 when it is not;
-- here we generate random binary labels for demonstration.
labels = torch.Tensor(N, C):uniform():round()

-- Occurrence counts for each label; normally you'd obtain these from the training set.
NNK = torch.sum(labels, 1)

criterion = nn.WeightedMultiLabelBinaryCrossEntropyCriterion(NNK, N)
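From here the criterion follows the standard nn.Criterion interface. A minimal sketch, using random scores as a stand-in for the output of a network:

-- Random scores standing in for network outputs (N examples, C classes).
scores = torch.randn(N, C)

-- forward returns the loss, backward the gradient w.r.t. the scores.
loss = criterion:forward(scores, labels)
gradInput = criterion:backward(scores, labels)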
