Fitting mixtures of full-rank Gaussians using gradient descent

Local minima are a known problem for latent-variable models. Common remedies include random restarts and careful initialization (e.g., with K-means). In 2D, the problem can be bypassed by using a mixture with many components, as done here. This would also be a good motivation for placing a sparsity prior over the mixture weights, as in a Bayesian mixture of Gaussians, though that model adds considerable complexity. In higher dimensions, using many components may not be a good strategy due to the curse of dimensionality (how many components are enough?), though the code supports arbitrary dimensionality and runs fine there.
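
The repository's exact parameterization isn't reproduced on this page. As a minimal sketch (the class name `GradientGMM` and all details below are assumptions, not the repository's API), a full-rank mixture can be made amenable to unconstrained gradient descent by optimizing softmax logits for the weights and an unconstrained Cholesky factor for each covariance:

```python
import torch
import torch.nn as nn
import torch.distributions as D

class GradientGMM(nn.Module):
    """Mixture of full-rank Gaussians with all parameters unconstrained,
    so the whole model can be fit by plain gradient descent."""

    def __init__(self, num_components: int, dim: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_components))      # softmax -> mixture weights
        self.means = nn.Parameter(torch.randn(num_components, dim))  # component means
        # Unconstrained matrices; a valid lower-triangular Cholesky
        # factor (positive diagonal) is derived from them on the fly.
        self.pre_scale = nn.Parameter(torch.zeros(num_components, dim, dim))

    def distribution(self) -> D.MixtureSameFamily:
        diag = torch.diag_embed(
            nn.functional.softplus(torch.diagonal(self.pre_scale, dim1=-2, dim2=-1))
        )
        scale_tril = torch.tril(self.pre_scale, diagonal=-1) + diag
        return D.MixtureSameFamily(
            D.Categorical(logits=self.logits),
            D.MultivariateNormal(self.means, scale_tril=scale_tril),
        )

    def log_prob(self, x: torch.Tensor) -> torch.Tensor:
        return self.distribution().log_prob(x)
```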

Use case 1: density estimation given a dataset

Legend:

  • 1st image: true density (empirical, computed from the data)
  • 2nd image: learned density (empirical)
  • 3rd image: learning curve
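
For this use case, a plausible training loop (a sketch assuming the `GradientGMM` class above; `data` is a hypothetical `(N, 2)` tensor of samples, not something defined by the repository) simply maximizes the average log-likelihood:

```python
# Use case 1 sketch: maximum-likelihood density estimation on a dataset.
model = GradientGMM(num_components=100, dim=2)  # many components, per the note above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    optimizer.zero_grad()
    loss = -model.log_prob(data).mean()         # negative average log-likelihood
    loss.backward()
    optimizer.step()
```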

Use case 2: variational inference given the log of an unnormalized density

Legend:

  • 1st image: true (unnormalized) density
  • 2nd image: learned density (empirical), with arrows showing the initial and final position of each Gaussian mean
  • 3rd image: learned mixture weights
  • 4th image: learning curve
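
For this use case, one standard objective is the reverse KL divergence between the mixture and the target, which is computable from the unnormalized log-density up to an additive constant (log Z) that does not affect gradients. Here is a sketch (again assuming the `GradientGMM` class above; `log_p_tilde` stands for the user-supplied log of the unnormalized density), Rao-Blackwellized over components so that every sample is reparameterized:

```python
# Use case 2 sketch: variational inference by minimizing reverse KL.
def reverse_kl(model: GradientGMM, log_p_tilde, num_samples: int = 64) -> torch.Tensor:
    dist = model.distribution()
    weights = torch.softmax(model.logits, dim=0)              # (K,)
    z = dist.component_distribution.rsample((num_samples,))   # (S, K, dim), reparameterized
    integrand = dist.log_prob(z) - log_p_tilde(z)             # log q(z) - log p~(z), shape (S, K)
    # E_q[log q - log p~] = sum_k w_k * E_{q_k}[log q - log p~]
    return (weights * integrand.mean(dim=0)).sum()            # KL up to an additive constant
```

This loss can then be minimized with the same Adam loop as in use case 1, with `reverse_kl(model, log_p_tilde)` in place of the negative log-likelihood.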

[Figures: fitted results for potential functions U2, U3, U4, and U8]
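
The U-numbered potentials above appear to follow the naming of the 2-D test energies in Rezende & Mohamed (2015). As one concrete example of the `log_p_tilde` interface assumed in the sketch above, here is U2 from that paper (the exact potentials used in this repository, U8 in particular, may differ):

```python
import math

def U2(z: torch.Tensor) -> torch.Tensor:
    # Sinusoidal test energy U2 from Rezende & Mohamed (2015).
    # z: (..., 2) tensor; returns the potential, shape (...).
    z1, z2 = z[..., 0], z[..., 1]
    w1 = torch.sin(2.0 * math.pi * z1 / 4.0)
    return 0.5 * ((z2 - w1) / 0.4) ** 2

# The corresponding unnormalized log-density for reverse_kl:
# log_p_tilde = lambda z: -U2(z)
```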

About

Finite mixture of full-rank Gaussians for density estimation & variational inference.
