
OVR

ONLY SPARSITY BASED LOSS FUNCTION FOR LEARNING REPRESENTATIONS

Abstract
We study the emergence of sparse representations in neural networks. We show that in unsupervised models with regularization, sparsity emerges because the input data samples are distributed along a highly non-linear or discontinuous manifold. We derive a similar argument for discriminatively trained networks and present experiments supporting this hypothesis. Based on our study of sparsity, we introduce a new loss function that can be used as a regularization term for models such as autoencoders and MLPs. The same loss function can also serve as the cost function of an unsupervised single-layer neural network model for learning efficient representations.
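
For illustration, the sketch below shows how a sparsity term can be combined with an autoencoder's reconstruction loss, as described in the abstract. The actual OVR loss is defined in the paper and implemented in ovr_encoder_git.py; here a plain L1 penalty on the hidden activations stands in as an assumed placeholder, and the names (SparseAutoencoder, loss_fn, input_size) are illustrative only.

```python
# Illustrative sketch only: a generic L1 sparsity penalty stands in for the
# OVR loss from the paper (arXiv:1903.02893); see ovr_encoder_git.py for the
# actual implementation.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, input_size, encoded_size):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_size, encoded_size), nn.ReLU())
        self.decoder = nn.Linear(encoded_size, input_size)

    def forward(self, x):
        h = self.encoder(x)       # hidden (encoded) representation
        x_hat = self.decoder(h)   # reconstruction
        return x_hat, h

def loss_fn(x, x_hat, h, lamda=1e-4):
    # Reconstruction error plus a sparsity term on the hidden code,
    # weighted by lamda (mirroring the --lamda command-line flag).
    recon = nn.functional.mse_loss(x_hat, x)
    sparsity = h.abs().mean()     # placeholder sparsity penalty
    return recon + lamda * sparsity

# Example usage on random data (hypothetical sizes).
model = SparseAutoencoder(input_size=784, encoded_size=8000)
x = torch.rand(32, 784)
x_hat, h = model(x)
loss = loss_fn(x, x_hat, h, lamda=1e-4)
loss.backward()
```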

Run the file
'python ovr_encoder_git.py --encoded_size=8000 --lamda=0.0001'
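Presumably, encoded_size sets the dimensionality of the learned representation and lamda weights the sparsity term; the flag spelling follows the script.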

Article
https://arxiv.org/abs/1903.02893

Contributors
Kishore Reddy Konda
Vivek Bakaraju
