Examples/tutorials of Adaptive Metropolis for images using PyMC #653

Closed
giumas opened this issue Dec 4, 2014 · 5 comments
giumas commented Dec 4, 2014

I asked this question on Stack Overflow, but without any luck so far:

http://stackoverflow.com/questions/27261233/examples-tutorials-of-adaptive-metropolis-for-images-using-pymc

What's wrong with the question? Too generic?

Even if it is not specific to image processing, could somebody help me find a minimal code example using PyMC for Bayesian inference on data stored in numpy 2-D/3-D arrays?
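To make it concrete, even something as small as the sketch below would help; this is the kind of thing I am after (written from guesswork against PyMC 2.x, so the priors and the API use may well be wrong): inferring the mean and noise precision of a plain 2-D numpy array.

# A guessed-at minimal sketch (PyMC 2.x): infer the mean and precision
# of a 2-D numpy array. The priors here are placeholders, not advice.
import numpy as np
import pymc

data = np.random.normal(loc=5.0, size=(12, 18))

mu = pymc.Normal("mu", 0.0, 1e-4)        # vague prior on the mean (third arg is the precision tau)
tau = pymc.Gamma("tau", 0.001, 0.001)    # vague prior on the noise precision
obs = pymc.Normal("obs", mu, tau, observed=True, value=data)

M = pymc.MCMC([mu, tau, obs])
M.sample(10000, burn=2000)
print(M.stats()["mu"]["mean"])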


twiecki commented Dec 4, 2014

The closest I know of is here: http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter5_LossFunctions/LossFunctions.ipynb (see "Example: Kaggle contest on Observing Dark Worlds").

I think the question is a bit too vague. What kind of example are you looking for, specifically? Object recognition, etc.?

twiecki closed this as completed Dec 4, 2014

giumas commented Dec 6, 2014

I see. What about finding the peak in this simulated array?

import numpy as np
from matplotlib import pyplot as plt

# simulated input: Gaussian background with an elevated 3x4 region
sz = (12, 18)
data_input = np.random.normal(loc=5.0, size=sz)
data_input[7:10, 2:6] = np.random.normal(loc=100.0, size=(3, 4))

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
im = ax.imshow(data_input)
ax.set_title("input")

[figure: imshow of the simulated input array, with the elevated 3×4 region visible]


twiecki commented Dec 8, 2014

In that case it seems like you are trying to estimate a 2-D uniform (rectangular) signal with Gaussian noise. You'll have to translate this into an actual model, but this would be one idea:

lower_x ~ DiscreteUniform(0, 20)
upper_x ~ DiscreteUniform(0, 20)
lower_y ~ DiscreteUniform(0, 20)
upper_y ~ DiscreteUniform(0, 20)
height ~ Normal(100, 1)
noise ~ InvGamma(1, 1)
means = zeros((20, 20))
means[lower_x:upper_x, lower_y:upper_y] = height  # this needs to be a deterministic

data ~ Normal(mu=means, sd=noise)
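In PyMC 2.x this sketch might translate roughly as follows (untested; note that pymc.Normal takes a precision tau rather than an sd, so noise acts as a precision here, and data_input is assumed to hold the observed 20x20 array):

import numpy as np
import pymc

# priors on the bounding box, peak height, and noise
lower_x = pymc.DiscreteUniform("lower_x", 0, 19)
upper_x = pymc.DiscreteUniform("upper_x", 0, 19)
lower_y = pymc.DiscreteUniform("lower_y", 0, 19)
upper_y = pymc.DiscreteUniform("upper_y", 0, 19)
height = pymc.Normal("height", 100.0, 1.0)
noise = pymc.InverseGamma("noise", 1.0, 1.0)

@pymc.deterministic
def means(l_x=lower_x, u_x=upper_x, l_y=lower_y, u_y=upper_y, h=height):
    m = np.zeros((20, 20))
    m[l_x:u_x, l_y:u_y] = h   # rectangle of elevated means
    return m

# noise is used as the precision (tau) of the likelihood here
data = pymc.Normal("data", means, noise, observed=True, value=data_input)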


twiecki commented Dec 8, 2014

You can add a potential to enforce lower_x < upper_x and the same for y.
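For example (a rough sketch, with lower_x and upper_x being the DiscreteUniform stochastics from the sketch above; the return value is a log-probability term, so -inf makes the sampler reject any state with an inverted or empty box):

@pymc.potential
def x_order(l_x=lower_x, u_x=upper_x):
    # reject states where the box is inverted or empty
    return -np.inf if l_x >= u_x else 0.0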


giumas commented Dec 11, 2014

Thank you for the help! I've attempted to follow your suggestion and wrote this:

%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt

# simulated input on a 20x20 grid, with an elevated 3x4 region
sz = (20, 20)
data_input = np.random.normal(loc=5.0, size=sz)
data_input[15:18, 2:6] = np.random.normal(loc=100.0, size=(3, 4))

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
im = ax.imshow(data_input, interpolation='none')
ax.set_title("input")

import pymc

def make_model(data_input):

    # priors on the bounding box of the elevated region
    lower_x = pymc.DiscreteUniform("lower_x", 0, 19, value=0)
    upper_x = pymc.DiscreteUniform("upper_x", 0, 19, value=19)
    lower_y = pymc.DiscreteUniform("lower_y", 0, 19, value=0)
    upper_y = pymc.DiscreteUniform("upper_y", 0, 19, value=19)

    # priors on the peak height and on the noise
    height = pymc.Normal("height", 100, 1)
    noise = pymc.InverseGamma("noise", 1, 1)

    # mean image: background 0, with the box set to height
    @pymc.deterministic
    def calc_means(l_x=lower_x, u_x=upper_x, l_y=lower_y, u_y=upper_y, h=height):
        means = np.zeros((20, 20))
        means[l_x:u_x, l_y:u_y] = h
        return means

    # potentials enforcing lower < upper on each axis
    @pymc.potential
    def x_constraint(l_x=lower_x, u_x=upper_x):
        if l_x - u_x >= 0:
            return -np.inf
        return 0

    @pymc.potential
    def y_constraint(l_y=lower_y, u_y=upper_y):
        if l_y - u_y >= 0:
            return -np.inf
        return 0

    # likelihood; the third argument of pymc.Normal is the precision tau,
    # so noise is used as a precision here, not an sd
    data = pymc.Normal("data", calc_means, noise, observed=True, value=data_input)

    return locals()

model = pymc.Model(make_model(data_input), name='bbox_model')

M = pymc.MCMC(model, name='bbox_image')
M.sample(50000, burn=10000)

print('step methods:')
for i in M.step_method_dict.items():
    print('> ', i)

But it is clearly wrong: the model does not converge (the traces of the x and y bounds stay flat).
Should I expect an automatic selection of an Adaptive Metropolis step method?
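Or do I need to request one explicitly before sampling? Guessing at the PyMC 2.x API, something like this (height and noise being the continuous stochastics; the discrete bounds would presumably keep their default step methods):

M = pymc.MCMC(model, name='bbox_image')
M.use_step_method(pymc.AdaptiveMetropolis, [M.height, M.noise])
M.sample(50000, burn=10000)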

Incidentally, could you please also answer the question on Stack Overflow? I have a bounty on it that is about to expire!
