Inefficient sampling of large categorical model #1018
I have been trying to implement a large categorical model in pymc3 but without much progress. The model I am trying to implement is detailed in the last chapter of Lee and Wagenmakers' Bayesian Cognitive Modeling book.
In brief, each entry of the response data is generated by a different categorical pdf (constructed under some hierarchical constraints). Thus the probability vector of these categorical pdfs is trial/subject specific. I have the following difficulties:
1. The model is very slow. I adapted the code from the book (JAGS and Stan), but both versions end up with too many nodes for Theano. I haven't managed to sample using NUTS, as it takes too long to compile.
Is there a working example of a similar problem? Or how should I build my model in this case (e.g., a categorical response with a large set of priors) to make it more efficient?
All my attempts can be found here: https://github.com/junpenglao/Bayesian-Cognitive-Modeling-in-Pymc3/blob/master/CaseStudies/NumberConceptDevelopment.ipynb
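For reference, my understanding is that the "too many nodes" problem comes from building one likelihood term per trial in a Python loop; a single vectorized expression over the whole data set keeps the graph small. Here is a minimal NumPy sketch of the indexing trick (the data and dimensions are made up for illustration; in pymc3 the same fancy indexing should work on Theano tensors, e.g. one `pm.Categorical` with a 2D `p`):

```python
import numpy as np

# Hypothetical dimensions: n_trials responses, k categories each.
rng = np.random.default_rng(0)
n_trials, k = 1000, 16

# Trial-specific probability vectors (rows sum to 1), standing in for
# the probabilities a hierarchical model would produce.
p = rng.dirichlet(np.ones(k), size=n_trials)

# Observed categorical responses, one per trial.
y = rng.integers(0, k, size=n_trials)

# Looping over trials creates one likelihood term (graph node) per trial:
loglik_loop = sum(np.log(p[i, y[i]]) for i in range(n_trials))

# A single vectorized expression covers the whole data set in one node:
loglik_vec = np.log(p[np.arange(n_trials), y]).sum()

assert np.allclose(loglik_loop, loglik_vec)
```

The two computations agree; the vectorized form is what should translate into one observed node in the model graph instead of thousands.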
edit: this might relate to #624