NeurIPS 2020 Reading Group

[Program Synthesis](./Program%20Synthesis/Program%20Synthesis%20966cd25148d34502887de0ec105586dd.md)

- Nye, Maxwell I., Armando Solar-Lezama, Joshua B. Tenenbaum, and Brenden M. Lake. "Learning Compositional Rules via Neural Program Synthesis." arXiv preprint arXiv:2003.05562 (2020). link
- Pu, Yewen, Kevin Ellis, Marta Kryven, Josh Tenenbaum, and Armando Solar-Lezama. "Program Synthesis with Pragmatic Communication." arXiv preprint arXiv:2007.05060 (2020). link
- Tian, Lucas, Kevin Ellis, Marta Kryven, and Josh Tenenbaum. "Learning abstract structure for drawing by efficient motor program induction." Advances in Neural Information Processing Systems 33 (2020). link

[Structural Abstraction in RL](./Structural%20Abstraction%20in%20RL/Structural%20Abstraction%20in%20RL%20f6eaae97eaee49c6b939b07d3c31a710.md)

- Johnson, Daniel D., Hugo Larochelle, and Daniel Tarlow. "Learning graph structure with a finite-state automaton layer." arXiv preprint arXiv:2007.04929 (2020). link
- Pertsch, Karl, Oleh Rybkin, Frederik Ebert, Shenghao Zhou, Dinesh Jayaraman, Chelsea Finn, and Sergey Levine. "Long-horizon visual planning with goal-conditioned hierarchical predictors." Advances in Neural Information Processing Systems 33 (2020). link
- Laskin, Michael, Scott Emmons, Ajay Jain, Thanard Kurutach, Pieter Abbeel, and Deepak Pathak. "Sparse graphical memory for robust planning." arXiv preprint arXiv:2003.06417 (2020). link
- Anderson, Greg, Abhinav Verma, Isil Dillig, and Swarat Chaudhuri. "Neurosymbolic Reinforcement Learning with Formally Verified Exploration." arXiv preprint arXiv:2009.12612 (2020). link
- Tasse, Geraud Nangue, Steven James, and Benjamin Rosman. "A Boolean Task Algebra for Reinforcement Learning." arXiv preprint arXiv:2001.01394 (2020). link

[Self-Supervised Learning](./Self-Supervised%20Learning/NeurIPS-2020%20bf388a517513448fa51e5e4daec8f650.md)

- Xie, Qizhe, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. "Self-training with Noisy Student improves ImageNet classification." link
- Grill, Jean-Bastien, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. "Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning." link
- Chen, Ting, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. "Big Self-Supervised Models are Strong Semi-Supervised Learners." link
- Zoph, Barret, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin D. Cubuk, and Quoc V. Le. "Rethinking Pre-training and Self-training." link
- Orhan, A. Emin, Vaibhav V. Gupta, and Brenden M. Lake. "Self-supervised learning through the eyes of a child." link

[Social Learning](./Social%20Learning/Social%20Learning%20c56c5f3585014620a0472c8724570b59.md)

- Ndousse, Kamal, Douglas Eck, Sergey Levine, and Natasha Jaques. "Learning Social Learning." arXiv preprint arXiv:2010.00581 (2020). link
- Wang, Rose E., Sarah A. Wu, James A. Evans, Joshua B. Tenenbaum, David C. Parkes, and Max Kleiman-Weiner. "Too many cooks: Bayesian inference for coordinating multi-agent collaboration." arXiv preprint arXiv:2003.11778 (2020). link

Recordings

| Title | Recording |
| --- | --- |
| Recent Advances in Self-Supervised Learning | https://youtu.be/LO1dKIr-85s |
| Structural Abstractions for Reinforcement Learning | https://www.youtube.com/watch?v=fy7Dp1c2hUA |