This is the second edition of the QML reading group at ICFO. The archive of the first edition is here and here. This edition will be somewhat restructured, and we aim for more self-initiative than in the last edition. We will go back to the roots of this RG and simply discuss publications of interest, each chosen by a volunteer.
So for the first sessions we have the following structure in mind.
We will again define papers to read for each session, and we are happy if people come forward with interesting topics or papers. This edition will be more about keeping track of the latest advances in ML, QML, and ML-assisted physics.
Most people attending are already quite advanced in ML, so we will not start with ML basics in this group. However, we are aware that some ML beginners attend the RG, and we are happy to explain the basics when needed.
We are not sure yet whether there will be coding exercises; in the last edition, most people were too busy for them anyway. We are, however, always happy to discuss coding problems or suggestions.
Topics will be announced a week in advance. Coding will be done collaboratively through this repository.
The reading group requires commitment: apart from the 1.5-2 contact hours a week, at least another 2-3 hours must be dedicated to reading and coding. You are not expected to know machine learning or programming before joining the group, but you are expected to commit the time necessary to catch up and develop the relevant skills.
The language of choice is Python 3.
The broader QML community is still taking shape. We are attempting to organize it through the website quantummachinelearning.org. You can also sign up for the mailing list there. Please also consider contributing to the recently rewritten Wikipedia article on QML. Apart from new content, stylistic and grammatical edits, figures, and translations are all welcome.
The best way to learn machine learning is by doing it. The book Python Machine Learning is a good starter, along with its GitHub repository. For the deep learning part, you can have a look at the GitHub repository of the course we gave at the UPC, where we discuss convolutional neural networks, Boltzmann machines, and reinforcement learning. Kaggle is a welcoming community of data scientists. It is not only about competitions: several hundred datasets are hosted on Kaggle, along with notebooks and scripts (collectively known as kernels) that do interesting stuff with the data. These provide perfect stepping stones for beginners. Find a dataset that is close to your personal interests and dive in. For a sufficiently engaging theoretical introduction to machine learning, the book The Elements of Statistical Learning: Data Mining, Inference, and Prediction is highly recommended.
Anaconda is the recommended Python distribution if you are new to the language. It ships with most of the scientific and machine learning ecosystem around Python. It includes Scikit-learn, which is excellent for prototyping machine learning models. For scalable deep learning, Keras is easy to start with, but it uses the more opaque and complicated TensorFlow backend. We recommend it for simple implementations, but as soon as you want to implement more advanced neural networks, we recommend switching to PyTorch. QuTiP is an excellent quantum simulation library, and with the latest version (4.1), it is reasonably straightforward to install it in Anaconda with conda-forge. QuTiP is somewhat limited in scalability, so perhaps it is worth checking out other simulators, such as ProjectQ.
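To give a taste of how quickly one can prototype with Scikit-learn, here is a minimal sketch (a hypothetical example, not part of any session's material) that fits a baseline classifier on the iris dataset bundled with the library:

```python
# Hypothetical quickstart: fit a baseline classifier on the bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000)  # any Scikit-learn estimator can be swapped in here
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)          # mean accuracy on held-out data
print(acc)
```

The uniform fit/predict/score interface is what makes Scikit-learn so convenient for prototyping: swapping the model for, say, a random forest changes a single line.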
We will follow the good practices of software carpentry, that is, elementary IT skills that every scientist should have. In particular, we use git as a version control system and host the repository right here on GitHub. When editing text or Markdown documents like this one, please write every sentence in a new line to ensure that conflicts are efficiently resolved when several people edit the same file.
10.30-12.00, 18.October 2018, Aquarium Room (AMR) (280).
- We will have a look at this paper: Knowledge graph refinement: A survey of approaches and evaluation methods.
10.30-12.00, 8.November 2018, Aquarium Room (AMR) (280).
- We will go through the paper Bayesian Deep Learning on a Quantum Computer. This will include a short introduction to supervised learning in feedforward neural networks for the newcomers, and notes on Bayesian learning.
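For the newcomers, the supervised-learning setup for a feedforward network can be condensed into a toy example. The sketch below (our own illustration, not code from the paper) trains a one-hidden-layer network on XOR with plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic tiny supervised-learning task a linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of tanh units, a sigmoid output, plain gradient descent.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass for the mean squared error loss
    dp = 2.0 * (p - y) / len(X)
    dz2 = dp * p * (1.0 - p)            # sigmoid derivative
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dz1 = (dz2 @ W2.T) * (1.0 - h**2)   # tanh derivative
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    # gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
loss = float(np.mean((p - y) ** 2))
print(np.round(p.ravel(), 2), loss)
```

Everything a deep learning framework does is a scaled-up version of this loop: a forward pass, a loss, gradients by the chain rule, and a parameter update.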
10.30-12.00, 15.November 2018, Aquarium Room (AMR) (280).
- Our journey through Bayesian Deep Learning on a Quantum Computer continues. We will also review the equivalence between GP training and the training of deep neural networks, as well as the quantum-assisted training of GPs.
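As a reminder of what GP training and prediction involve, here is a minimal GP-regression sketch in numpy (an illustration with a squared-exponential kernel, not the paper's quantum-assisted procedure):

```python
import numpy as np

def kernel(A, B, length=1.0):
    """Squared-exponential kernel between two sets of points."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(1)
X = np.linspace(0.0, 2.0 * np.pi, 20)[:, None]   # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)  # noisy observations
Xs = np.array([[np.pi / 2]])                     # one test input

K = kernel(X, X) + 0.1**2 * np.eye(20)           # kernel matrix plus noise variance
Ks = kernel(Xs, X)

# GP posterior mean and variance at the test input
mean = Ks @ np.linalg.solve(K, y)
var = kernel(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
print(float(mean[0]), float(var[0, 0]))
```

The expensive step is the linear solve with the kernel matrix, which is exactly where quantum-assisted approaches claim a speedup.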
09.30-11.00, 13.December 2018, Aquarium Room (AMR) (280).
- In this session we will review the recent paper , where the author proposes a classical algorithm that mimics the quantum algorithm for recommendation systems  by using stochastic sampling . For preparation, it is recommended to go over  and to read the nice introduction of . Knowing ? Even better.
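The flavour of the stochastic-sampling technique can be illustrated with length-squared row sampling in the style of Frieze, Kannan, and Vempala (a toy sketch of the general idea, not the algorithm from the paper): rows of a tall matrix A are sampled with probability proportional to their squared norm to approximate A.T @ A from a small sample.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2000, 5))             # tall matrix, think "user-item data"

# Length-squared sampling: pick rows with probability ~ squared row norm.
row_norms2 = np.sum(A**2, axis=1)
p = row_norms2 / row_norms2.sum()

s = 1000                                   # number of sampled rows
idx = rng.choice(len(A), size=s, p=p)
S = A[idx] / np.sqrt(s * p[idx])[:, None]  # rescale so S.T @ S is unbiased

approx = S.T @ S                           # estimate of A.T @ A from s rows
exact = A.T @ A
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(rel_err)
```

The relative error shrinks as the sample size grows, and crucially the work after sampling depends on s, not on the full number of rows.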
10.30-12.00, 10.January 2019, Aquarium Room (AMR) (280).
- To kick-start the 2019 reading group, I will follow the recent trend of showing the loopholes of by presenting this paper.
The basic message is that there are major problems when trying to perform gradient descent on classically parametrised quantum circuits (i.e. 'quantum neural networks'), since the gradient is essentially zero everywhere.
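The effect is easy to reproduce numerically. The toy simulation below (our own sketch, not the construction from the paper) estimates the variance of one gradient component of the expectation of Z on the first qubit, via the parameter-shift rule, in random layered circuits of RY rotations and CZ gates; the variance shrinks quickly as qubits are added.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_ry(state, theta, qubit, n):
    """Apply an RY(theta) rotation to one qubit of an n-qubit state vector."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    psi = np.moveaxis(state.reshape([2] * n), qubit, 0)
    a, b = psi[0].copy(), psi[1].copy()
    psi[0] = c * a - s * b
    psi[1] = s * a + c * b
    return np.moveaxis(psi, 0, qubit).reshape(-1)

def apply_cz_chain(state, n):
    """Apply CZ between each pair of neighbouring qubits."""
    psi = state.reshape([2] * n).copy()
    for q in range(n - 1):
        idx = [slice(None)] * n
        idx[q] = 1
        idx[q + 1] = 1
        psi[tuple(idx)] *= -1.0
    return psi.reshape(-1)

def expval_z0(thetas, n, layers):
    """<Z> on qubit 0 after a layered RY + CZ circuit starting from |+...+>."""
    state = np.full(2**n, 1.0 / np.sqrt(2**n))
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_ry(state, thetas[k], q, n)
            k += 1
        state = apply_cz_chain(state, n)
    probs = np.abs(state.reshape(2, -1))**2
    return probs[0].sum() - probs[1].sum()

def grad_variance(n, layers=5, samples=300):
    """Variance of d<Z0>/d(theta_0) over random parameter draws (parameter shift)."""
    grads = []
    for _ in range(samples):
        th = rng.uniform(0.0, 2.0 * np.pi, n * layers)
        plus, minus = th.copy(), th.copy()
        plus[0] += np.pi / 2; minus[0] -= np.pi / 2
        grads.append(0.5 * (expval_z0(plus, n, layers) - expval_z0(minus, n, layers)))
    return float(np.var(grads))

variances = {n: grad_variance(n) for n in [2, 4, 6]}
print(variances)  # gradient variance for n = 2, 4, 6 qubits
```

This is only a small-scale illustration; the paper makes the exponential-concentration statement precise for circuits that approximate unitary 2-designs.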
10.30-12.00, 17.January 2019, Aquarium Room (AMR) (280).
- In this session we will focus on Reinforcement Learning. We will briefly review the basics of RL and Q-learning. You can take a look at this repo for an introduction to the topic. Then, we will see how this paper applies such algorithms to quantum control.
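As a warm-up for the basics, tabular Q-learning fits in a few lines. The sketch below uses a toy chain environment of our own (unrelated to the quantum-control setting of the paper): the agent learns that moving right towards the rewarded goal state is optimal.

```python
import random

random.seed(0)

# Toy environment: a chain of 5 states; action 1 moves right, action 0 left.
# Reaching the last state gives reward 1 and ends the episode.
n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for _ in range(500):                # episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the greedy value of the next state
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([round(max(q), 3) for q in Q])  # max Q-value per state; the terminal state stays at 0
```

The quantum-control application replaces this table with a function approximator and the chain with a controlled quantum system, but the update rule is the same.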
10.30-12.00, 24.January 2019, Aquarium Room (AMR) (280).
This week we look at methods of interpretability in classical machine learning. For this we start with an overview of activation maximization, sensitivity analysis, and layer-wise relevance propagation. The second paper, Learning Deep Features for Discriminative Localization, shows how heatmaps can be implemented in a very simple way. Finally, this and this Distill publication show what activation maximization looks like when applied to proper deep learning examples.
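Sensitivity analysis is the simplest of these methods: the saliency heatmap is just the gradient of the class score with respect to the input. Here is a minimal sketch with a toy linear "classifier" (our own illustration, not code from the papers), where the gradient is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=16)   # weights of a hypothetical toy linear "classifier"
x = rng.normal(size=16)   # one flattened 4x4 "image"

def score(inp):
    """Class probability of the toy model: a sigmoid over a linear score."""
    return 1.0 / (1.0 + np.exp(-w @ inp))

# Sensitivity analysis: the heatmap is the gradient of the class score with
# respect to the input pixels; large entries mark influential pixels.
s = score(x)
grad = s * (1.0 - s) * w            # closed-form gradient of sigmoid(w @ x)
heatmap = np.abs(grad).reshape(4, 4)
print(np.round(heatmap, 3))
```

For a real network the gradient comes from backpropagation instead of a formula, but the heatmap is built the same way.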
10.30-12.00, 31.January 2019, Seminar Room!!
We will have a look at this paper. It describes how to formulate division and matrix inversion as a QUBO problem that can be solved by a quantum annealer such as the D-Wave machine.
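The core trick can be sketched in a few lines: encode the unknown x in a fixed-point binary expansion, so that (b*x - a)^2 becomes a quadratic function of binary variables, i.e. a QUBO objective, which the annealer then minimizes stochastically. Below we simply brute-force the minimization (a toy sketch for division; the paper's matrix-inversion case works analogously):

```python
import itertools

# Target: x = a / b, here 3 / 4 = 0.75, encoded in four fixed-point bits.
a, b = 3.0, 4.0
weights = [2.0**-k for k in range(1, 5)]   # x = q1/2 + q2/4 + q3/8 + q4/16

def cost(q):
    """(b*x - a)^2 is quadratic in the bits q, hence a QUBO objective."""
    x = sum(qi * w for qi, w in zip(q, weights))
    return (b * x - a) ** 2

# Exhaustive search over the 2^4 bit strings stands in for the annealer.
best = min(itertools.product([0, 1], repeat=len(weights)), key=cost)
x = sum(qi * w for qi, w in zip(best, weights))
print(best, x)  # → (1, 1, 0, 0) 0.75
```

With more bits the precision improves, at the cost of more binary variables for the annealer to handle.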
10.30-12.00, 7.February 2019, Aquarium Room (AMR) (280).
This week we have a look at QAOA on continuous-variable (CV) systems.