
Dev monad bayes #186

Merged: 1 commit merged into master on Feb 17, 2023

Conversation

turion (Owner) commented Sep 13, 2022

reubenharry commented

I have a question about runtime and concurrency. I notice that the framerate of the simulation slows when I have a large number of particles. It would be nice if these were somehow decoupled, so inference would happen in its own time but the simulation speed would never change. Rhine seems like it has the tools to do this sort of thing (i.e. is intended for this sort of thing) - does it seem conceptually feasible to you?

turion (Owner) commented Sep 14, 2022

> I have a question about runtime and concurrency. I notice that the framerate of the simulation slows when I have a large number of particles. It would be nice if these were somehow decoupled, so inference would happen in its own time but the simulation speed would never change. Rhine seems like it has the tools to do this sort of thing (i.e. is intended for this sort of thing) - does it seem conceptually feasible to you?

Yes, that's what rhine is for. One would put the simulation in a separate clock that runs on a busy loop (and thus samples as fast as the machine can do), and the graphics on the gloss clock which ticks every 30th of a second by default. I was planning to do this, I just didn't get around to it yet.

turion (Owner) commented Sep 14, 2022

Ah, maybe I've misunderstood. You want to decouple the simulation/sampling of the latent position from the inference/particle filtering step? That's more elaborate, but I believe it could also work. Really good idea, since sampling should be fast!

reubenharry commented

Btw, I refactored some of your code as follows, to give an alternate way of looking at it:

```haskell
prior :: (MonadSample m, Diff td ~ Float) => BehaviourF m td () Position
prior = fmap V.fromTuple $ model1D &&& model1D where

    model1D = proc _ -> do
        acceleration <- constM (normal 0 4) -< ()
        velocity <- decayIntegral 2 -< double2Float acceleration -- an integral, dying off exponentially
        position <- decayIntegral 2 -< velocity
        returnA -< float2Double position

    decayIntegral timeConstant = average timeConstant >>> arr (timeConstant *^)

generativeModel :: (MonadSample m, Diff td ~ Float) => BehaviourF m td Position Observation
generativeModel = proc p -> do
    n <- fmap V.fromTuple $ noise &&& noise -< ()
    returnA -< p + n

    where noise = constM (normal 0 std)

-- note: the posterior consumes observations and produces positions
posterior :: (MonadInfer m, Diff td ~ Float) => BehaviourF m td Observation Position
posterior = proc (V2 oX oY) -> do
  latent@(V2 trueX trueY) <- prior -< ()
  arrM factor -< normalPdf oY std trueY * normalPdf oX std trueX
  returnA -< latent
```



```haskell
----------
-- display
----------

gloss :: IO ()
gloss = sampleIO
      $ launchGlossThread defaultSettings
          { display = InWindow "rhine-bayes" (1024, 960) (10, 10) }
      $ runIdentityT $ reactimateCl glossClock $ proc () -> do
          actualPosition <- prior -< ()
          measuredPosition <- generativeModel -< actualPosition
          samples <- onlineSMC 100 resampleMultinomial posterior -< measuredPosition
          (withSideEffect_ (lift $ lift clearIO) >>> visualisation) -< Result
              { estimate = averageOf samples
              , stdDev = stdDevOf (first xCoord <$> samples) + stdDevOf (first yCoord <$> samples)
              , measured = measuredPosition
              , latent = actualPosition
              }
```
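For intuition about what the resampling step inside `onlineSMC 100 resampleMultinomial posterior` does, here is a self-contained sketch of one resampling scheme. This is not monad-bayes' actual implementation, and the name `systematicResample` and the example weights are made up for illustration: it shows systematic resampling, where a weighted particle population is mapped back to an unweighted one of the same size using a single uniform draw `u`.

```haskell
-- Systematic resampling: turn weighted particles into an unweighted
-- population of the same size, using one uniform draw u in [0, 1).
systematicResample :: Double -> [(a, Double)] -> [a]
systematicResample u particles = map pick points
  where
    n = length particles
    total = sum (map snd particles)
    -- cumulative normalised weights, e.g. [0.25, 1.0]
    cumulative = scanl1 (+) [w / total | (_, w) <- particles]
    -- n evenly spaced points in [0, 1), all offset by the same u
    points = [(fromIntegral i + u) / fromIntegral n | i <- [0 .. n - 1]]
    -- the particle whose cumulative-weight bracket contains p
    pick p = fst (head (dropWhile ((< p) . snd) (zip (map fst particles) cumulative)))

main :: IO ()
main = do
  -- particle "b" carries three times the weight of "a"
  print (systematicResample 0.5 [("a", 1), ("b", 3)])
  print (systematicResample 0.0 [("a", 1), ("b", 1), ("c", 2)])
```

Multinomial resampling, by contrast, draws an independent uniform per output particle; systematic resampling spends only one random number and has lower variance.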

turion (Owner) commented Sep 18, 2022

> Btw, I refactored some of your code as follows, to give an alternate way of looking at it:

That makes a lot of sense, I refactored it in that direction!

reubenharry commented

Nice! I've been writing a tutorial in a notebook, and trying a few variations of the model, which I'll quickly describe.

Variations:

  • have a constant parameter be jointly inferred (like the mean of the acceleration). Describing a constant parameter was slightly tricky, since I needed to write liftMSF :: m (MSF m a a) -> MSF m a a, but doing that seemed to work.
  • have two observation streams, one of position with high noise, one of acceleration with low noise. Use both. Would like to have the streams on separate clocks, but my Rhine skills are not yet up to it
  • control which things are shown in the simulation by toggling a button (I had some clock related trouble with this, because once I started using gloss input, everything ran on the gloss input clock).
  • duplicate the model, so there are two dots to track. Very easy to either duplicate the particle filter, or to have a single particle filter on the joint space. Interesting to consider the difference between those options.
  • random switching behaviour (e.g. the particle randomly changes its mean acceleration at random times). This is a bit trickier for me, because I'm still getting used to MSFExcept
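The `liftMSF` trick from the first bullet can be reproduced on a toy `MSF` type (a hand-rolled stand-in for dunai's, written here just for this sketch): the monadic action runs once, on the first tick, and its result then behaves as an ordinary stream function from that point on, which is exactly what a once-sampled constant parameter needs.

```haskell
import Data.IORef

-- a minimal stand-in for dunai's MSF, for illustration only
newtype MSF m a b = MSF { unMSF :: a -> m (b, MSF m a b) }

-- run the action once, on the first tick, then behave as its result
liftMSF :: Monad m => m (MSF m a b) -> MSF m a b
liftMSF action = MSF $ \a -> do
  sf <- action
  unMSF sf a

-- constantly emit the result of an action (like dunai's constM)
constM :: Monad m => m b -> MSF m a b
constM mb = MSF $ \_ -> fmap (\b -> (b, constM mb)) mb

-- run an MSF over a list of inputs (like dunai's embed)
embed :: Monad m => MSF m a b -> [a] -> m [b]
embed _ [] = pure []
embed sf (a : as) = do
  (b, sf') <- unMSF sf a
  fmap (b :) (embed sf' as)

main :: IO ()
main = do
  draws <- newIORef (0 :: Int)
  -- "sample" a constant parameter; the IORef counts how often we sample
  let drawParam = modifyIORef draws (+ 1) >> pure (42 :: Int)
      param = liftMSF (fmap (constM . pure) drawParam)
  out <- embed param [(), (), ()]
  n <- readIORef draws
  print out -- [42,42,42]
  print n   -- 1: the parameter was drawn exactly once
```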

reubenharry commented

(Btw I started putting together an example in a notebook, and will extend it to a tutorial. Draft here: https://monad-bayes-site.netlify.app/realtimeinference)

turion (Owner) commented Sep 20, 2022

> (Btw I started putting together an example in a notebook, and will extend it to a tutorial. Draft here: https://monad-bayes-site.netlify.app/realtimeinference)

Wow, awesome! If you want I can comment on the source.

I pushed a version with multiple clocks. Right now it doesn't have any advantages yet because the computation is all single threaded, but we can improve on that later with a little work in rhine. Also, it paves the way for interaction, because there you can add event listening & handling.

reubenharry commented

Cool! I think I understand how dunai's and Rhine's FRP works, and the implementation of the particle filter, but the details of how to combine clocks with schedules, how to use resampling buffers, and the use of SNs and Rhines are still tricky for me. The new code is helpful for that, though, so I'll read through it carefully.

reubenharry commented Sep 21, 2022

OK, I have a Rhine question: is there a simple way to shift a signal forward or backwards? I'd like to do sf >>> shift @Millisecond 300. (I can see that this would involve knowing the clock I'm on.) The reason I'm asking is I'd like to adapt the model so that we predict trajectories of the dot, rather than just track at the current time. I think that would be a cool application.

EDIT: ok, I just used accumulateWith, and that worked fine. But that's on the dunai level, not the Rhine level.
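The `accumulateWith` workaround can be illustrated on a toy `MSF` type (again a stand-in for dunai's, not the real definition, and `delayBy`/`mapMSF` here are names invented for the sketch): accumulate a bounded buffer of recent inputs and emit the oldest entry, which shifts the stream backwards by n samples.

```haskell
newtype MSF m a b = MSF { unMSF :: a -> m (b, MSF m a b) }

-- like dunai's accumulateWith: fold the inputs, emitting each new state
accumulateWith :: Monad m => (a -> s -> s) -> s -> MSF m a s
accumulateWith f s0 = MSF $ \a ->
  let s1 = f a s0 in pure (s1, accumulateWith f s1)

-- postcompose a pure function onto an MSF's output
mapMSF :: Monad m => (b -> c) -> MSF m a b -> MSF m a c
mapMSF g (MSF step) = MSF $ \a -> do
  (b, sf') <- step a
  pure (g b, mapMSF g sf')

-- shift a stream backwards by n samples, padding the start with `pad`:
-- keep a buffer of the last n+1 inputs (newest first), emit the oldest
delayBy :: Monad m => Int -> a -> MSF m a a
delayBy n pad =
  mapMSF last (accumulateWith (\a buf -> take (n + 1) (a : buf)) (replicate (n + 1) pad))

-- run an MSF over a list of inputs (like dunai's embed)
embed :: Monad m => MSF m a b -> [a] -> m [b]
embed _ [] = pure []
embed sf (a : as) = do
  (b, sf') <- unMSF sf a
  fmap (b :) (embed sf' as)

main :: IO ()
main = do
  out <- embed (delayBy 2 0) [1 .. 5 :: Int]
  print out -- [0,0,1,2,3]
```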

turion (Owner) commented Sep 22, 2022

> OK, I have a Rhine question: is there a simple way to shift a signal forward

Forecasting is a nontrivial task, both conceptually and computationally. I haven't implemented a general solution for that in rhine, but now that you ask for it, there actually should be a way to do this in general. For now you might probably hack something together with these building blocks:

  • embed executes an MSF for a given number of steps
  • eraseClock transforms a Rhine into an MSF, hiding all the clocks

The tricky bit here is to forecast from the current state of the signal function & clock. I'm guessing that there should be a function forecast :: Diff td -> Rhine m td a b -> Rhine m td a b.

But note that this may have unintended consequences regarding side effects: When we forecast e.g. arrMCl (print =<< sampleIO (normal 1 1)) for one second, should it also execute the side effects (e.g. under a Millisecond 100 clock, printing a random number 10 times) during every forecasting? Probably the right way is to keep side effects algebraic and discard them.
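In the same toy-MSF picture used above, the "forecast from the current continuation" idea can be sketched as follows (the `forecast` name, held input, and `sumFrom` example are hypothetical, and nothing here is rhine-specific): step a copy of the current continuation forward k ticks and then discard it, so the real stream is unaffected; in a pure monad like Identity the side-effect question does not even arise.

```haskell
import Data.Functor.Identity (Identity (..))

-- a minimal stand-in for dunai's MSF, for illustration only
newtype MSF m a b = MSF { unMSF :: a -> m (b, MSF m a b) }

-- a running sum, standing in for some evolving signal-function state
sumFrom :: Monad m => Int -> MSF m Int Int
sumFrom s0 = MSF $ \a -> let s1 = s0 + a in pure (s1, sumFrom s1)

-- advance an MSF over some inputs, keeping the continuation
steps :: Monad m => MSF m a b -> [a] -> m ([b], MSF m a b)
steps sf [] = pure ([], sf)
steps sf (a : as) = do
  (b, sf') <- unMSF sf a
  (bs, sf'') <- steps sf' as
  pure (b : bs, sf'')

-- hypothetical forecast: run k extra ticks with a held input,
-- then throw the evolved continuation away
forecast :: Monad m => Int -> MSF m a b -> a -> m [b]
forecast k sf a = fmap fst (steps sf (replicate k a))

main :: IO ()
main = do
  let Identity (past, now) = steps (sumFrom 0) [1, 1, 1]
      Identity future = forecast 3 now 1
  print past   -- [1,2,3]
  print future -- [4,5,6]; `now` itself is untouched by forecasting
```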

> or backwards?

Yes, that's delayBy

> I'd like to do sf >>> shift @Millisecond 300. (I can see that this would involve knowing the clock I'm on.) The reason I'm asking is I'd like to adapt the model so that we predict trajectories of the dot, rather than just track at the current time. I think that would be a cool application.

> EDIT: ok, I just used accumulateWith, and that worked fine.

Ah, how did that work out? Can I see the WIP code somewhere?

> But that's on the dunai level, not the Rhine level.

That's fine, all dunai functions are valid rhine functions as well. If you want to incorporate the time information ad-hoc into your accumulateWith, simply use timeInfo or similar.

turion (Owner) commented Sep 22, 2022

> the details of how to combine clocks with schedules, how to use resampling buffers, and the use of Sns and Rhines are still tricky for me. The new code is helpful for that though, so I'll read through carefully.

Yes, that is tricky. I'm going to get rid of schedules soon, dropping one piece of complexity.

reubenharry commented

> Ah, how did that work out? Can I see the WIP code somewhere?

I put my WIP at https://github.com/reubenharry/rhine-bayes-examples

Here's where I track the past/future: https://github.com/reubenharry/rhine-bayes-examples/blob/master/src/Future.hs

turion force-pushed the dev_monad_bayes branch 3 times, most recently from 41b9272 to 2b23ed5 on February 16, 2023
turion mentioned this pull request on Feb 16, 2023
turion (Owner) commented Feb 16, 2023

Leftovers will be addressed in #206.

turion merged commit 3db0e8c into master on Feb 17, 2023
turion deleted the dev_monad_bayes branch on February 17, 2023