Welcome to the Turing.jl wiki!
Please use the Table of Contents in the sidebar on the right for navigation.
Several models are benchmarked against Stan, using the HMC sampler for 2,000 iterations.
| Model | Turing (s) | Stan (s) | Ratio (Turing/Stan) |
|---|---|---|---|
| Latent Dirichlet Allocation | 156.8 | 205.3 | 0.76 |
| Mixture of Categorical | 16.6 | 10.9 | 1.52 |
| Hidden Markov Model | 274.97 | 21.85 | 12.58 |
Numbers here are inference times in seconds; smaller numbers indicate better performance.
Results depend heavily on the model and the size of the data. Across different models, Turing.jl is generally 3 to 30 times slower than Stan. For a given model, the gap between Turing and Stan narrows considerably as the data size increases.
Turing.jl currently uses ForwardDiff.jl for automatic differentiation, which is believed to be the main cause of Turing.jl's slowness. This AD backend will be replaced by a reverse-mode implementation once a mature one exists for Julia.
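The difference between the two AD modes is the crux here: forward mode needs one pass per parameter to assemble a gradient, whereas reverse mode records the computation once and recovers the whole gradient in a single backward sweep, which is what matters for HMC gradients over models with many parameters. Below is a toy sketch of both modes, written in Python purely for illustration; it is hand-rolled and much simplified relative to ForwardDiff.jl and ReverseDiff.jl, and the function `f` is just an example.

```python
# Minimal sketch of forward- vs reverse-mode AD, for illustration only;
# the real ForwardDiff.jl / ReverseDiff.jl backends share none of this code.

class Dual:
    """Forward mode: each pass carries one directional derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)
    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def forward_grad(f, x):
    # Gradient of f: R^n -> R needs n forward passes, one per input.
    grad = []
    for i in range(len(x)):
        seeded = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(x)]
        grad.append(f(seeded).dot)
    return grad

tape = []  # records every Node in creation (i.e. topological) order

class Node:
    """Reverse mode: record the computation on a tape, then backprop once."""
    def __init__(self, val, parents=()):
        self.val, self.grad, self.parents = val, 0.0, parents
        tape.append(self)
    def __add__(self, other):
        return Node(self.val + other.val, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return Node(self.val * other.val,
                    [(self, other.val), (other, self.val)])

def reverse_grad(f, x):
    # One forward evaluation plus one backward sweep, regardless of n:
    # this is why reverse mode suits gradients over many parameters.
    tape.clear()
    inputs = [Node(v) for v in x]
    out = f(inputs)
    out.grad = 1.0
    for node in reversed(tape):
        for parent, local in node.parents:
            parent.grad += node.grad * local
    return [n.grad for n in inputs]

# Example: f(x) = x0*x1 + x1*x2, so grad = [x1, x0 + x2, x1]
f = lambda x: x[0] * x[1] + x[1] * x[2]
print(forward_grad(f, [1.0, 2.0, 3.0]))  # [2.0, 4.0, 2.0]
print(reverse_grad(f, [1.0, 2.0, 3.0]))  # [2.0, 4.0, 2.0]
```

Both modes return the same gradient; the cost difference only shows up as the number of parameters grows, since `forward_grad` loops over inputs while `reverse_grad` sweeps the tape once.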
A newer benchmark is based on ReverseDiff.jl. Note that when the data or model is very small, the constant overhead of Turing.jl makes the ratio appear very large.
- Commit: f4ca7bfc8a63e5a6825ec272e7dffed7be623b31
- Hardware: MacBook Pro (Retina, 13-inch, Late 2013)
  - Processor: 2.4 GHz Intel Core i5
  - Memory: 8 GB 1600 MHz DDR3
  - Graphics: Intel Iris 1536 MB (not used)