This tutorial is a guide to understanding the Zig-Zag Process, a new sampling method described in the paper "The Zig-Zag Process and Super-Efficient Sampling for Bayesian Analysis of Big Data" by Bierkens, Fearnhead and Roberts (23 Apr 2018), arXiv:1607.03188v2. This document is released with the aim of spreading and sharing knowledge. It presents the main algorithm, gives an overview of the underlying statistical theory, and includes some example code for the algorithm.
The author is an undergraduate student of Astronomy and Physics at the Universidad de Chile. She is deeply interested in Cosmology and in how the tools from Statistics and Computer Science currently used in astrophysics can be improved. During January 2019 she did an internship at the CMM (Center for Mathematical Modeling of the University of Chile) in Santiago, under the supervision of Claire Delplancke, a postdoctoral researcher at CMM, and this tutorial is one of its main results.
Big Data refers to collections of information whose volume, complexity and growth rate make their capture, management and analysis difficult. Machine Learning is nowadays the main tool used in scientific research to handle such data efficiently. There are high expectations for the next decade regarding the amount of data that will be released, for example, in astronomy.
If you have ever asked yourself how to estimate values from some sort of data, then this tutorial is for you: it was created to spread the latest advances in Bayesian Inference, a branch of Machine Learning used in almost every area of research in the STEM fields.
For any comments, questions or suggestions, you can send me an e-mail: bernarditariedg@gmail.com
Enjoy your reading.
Bernardita