
Workshop CogSci 2024: In-context learning in natural and artificial intelligence


About the workshop

In-context learning refers to the ability of a neural network to learn from information presented in its context. While traditional learning in neural networks requires adjusting network weights for every new task, in-context learning operates purely by updating internal activations, with the weights held fixed. The emergence of this ability in large language models has led to a paradigm shift in machine learning and has forced researchers to reconceptualize learning in neural networks. Looking beyond language models, in-context learning appears in many computational models relevant to cognitive science, including those that emerge from meta-learning.
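To make the distinction concrete, here is a minimal sketch in Python (our own illustration, not part of the workshop materials). The first model adapts to a new task by changing its stored weight; the second keeps its parameters frozen and recovers the task from the examples supplied in its input. The toy regression task, the learning rate, and the least-squares rule inside the hypothetical `frozen_model` are illustrative assumptions, not a claim about how language models implement in-context learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# A new "task": noisy observations of an unknown linear map y = a * x.
a_true = 2.5
xs = rng.normal(size=20)
ys = a_true * xs + 0.1 * rng.normal(size=20)

# In-weights learning: adapt a parameter with gradient steps,
# so the network itself changes for every new task.
w = 0.0
for x, y in zip(xs, ys):
    grad = 2.0 * (w * x - y) * x   # d/dw of the squared error (w*x - y)**2
    w -= 0.05 * grad               # the stored weight is updated
print(f"in-weights estimate of a: {w:.2f}")

# In-context learning: a fixed function receives the examples as part of
# its input; adaptation happens in the computation, not in stored weights.
def frozen_model(context_x, context_y, query_x):
    # Least squares on the context, computed inside the "forward pass".
    a_hat = context_x @ context_y / (context_x @ context_x)
    return a_hat * query_x         # no parameter was modified

print(f"in-context prediction at x = 1: {frozen_model(xs, ys, 1.0):.2f}")
```

In both cases the prediction improves with the examples seen; only the locus of adaptation differs: stored weights in the first case, the internal computation over the context in the second.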

This workshop, presented at CogSci 2024, aims to delineate and discuss the implications of this phenomenon for the cognitive sciences. To this end, we have invited a diverse group of researchers to map out the following questions:

  • How well can human learning be modeled using in-context learning?
  • Which neural architectures support in-context learning?
  • When and why do natural and artificial systems rely on in-context versus in-weights learning?
  • How does in-context learning relate to classical concepts from cognitive science?

The workshop proposal is available here.

Program

Time (CEST)    Speaker                      Title
9:00 – 9:30    Marcel Binz                  Introduction to in-context learning
9:30 – 10:00   Thomas McCoy                 Understanding and controlling neural networks through the problem they are trained to solve
10:00 – 10:30  Coffee break
10:30 – 11:00  Jacques Pesnot Lerousseau    Training data distribution drives in-context learning in humans and transformers
11:00 – 11:30  Greta Tuckute                LLMs as strong learners: applications to language neuroscience
11:30 – 12:00  Roma Patel                   Towards understanding the (conceptual) structure of language models
12:00 – 13:00  Lunch break
13:00 – 13:30  Akshay K. Jagadish           Ecologically rational meta-learned inference explains human category learning
13:30 – 14:00  James Whittington            Different algorithms for in-context learning in prefrontal cortex and the hippocampal formation
14:00 – 14:30  Stephanie Chan               What do you need for in-context learning in transformers?
14:30 – 15:00  Coffee break
15:00 – 15:30  Zhu Jianqiao                 Transforming language models into cognitive models
15:30 – 16:30  Panel discussion with Christopher Summerfield, Morgan Barense, Micha Heilbron, Thomas L. Griffiths, and Brenden M. Lake

Speakers

Stephanie Chan

Stephanie Chan is a senior research scientist at Google DeepMind. Having a background in both cognitive and computer science, she studies how data distributional properties drive emergent in-context learning.

Roma Patel

Roma Patel is a senior research scientist at Google DeepMind working on grounded language learning, interpretability, and safety of large language models.

James Whittington

James Whittington is a Sir Henry Wellcome postdoctoral fellow at Stanford University & the University of Oxford. He works on building models and theories for understanding structured neural representations in brains and machines.

Greta Tuckute

Greta Tuckute is a PhD candidate in the Department of Brain and Cognitive Sciences at MIT. She studies how language is processed in the biological brain, and how the representations and processes in artificial neural network models compare to those in humans.

Thomas McCoy

Thomas McCoy is an assistant professor in the Department of Linguistics at Yale University. He studies the computational principles that underlie human language using techniques from cognitive science, machine learning, and natural language processing.

Zhu Jianqiao

Zhu Jianqiao is a postdoctoral researcher in the Department of Computer Science at Princeton University. He explores the computational principles underlying human judgment and decision-making processes. His current research aims to bridge Bayesian and neural network models of human cognition.

Organisers

Marcel Binz

Marcel Binz is a research scientist at Helmholtz Munich. He works on modeling human behavior using ideas from meta-learning, resource rationality, and language models.

Ishita Dasgupta

Ishita Dasgupta is a research scientist at Google DeepMind. She uses advances in machine learning to build models of human reasoning, applies cognitive science approaches to understanding black-box AI systems, and combines these insights to build better, more human-like artificial intelligence.

Akshay K. Jagadish

Akshay K. Jagadish is a PhD student at the Max Planck Institute for Biological Cybernetics, Tübingen. His current research is dedicated to understanding the ingredients essential for explaining human adaptive behavior across multiple task domains.

Jacques Pesnot Lerousseau

Jacques Pesnot Lerousseau is a postdoc at the Institute for Language, Communication, and the Brain, Marseille. His current research addresses in-context learning in human brains and artificial neural networks, aiming to uncover the mechanisms behind rule generalization in both brains and algorithms.

Christopher Summerfield

Christopher Summerfield is a Professor of Cognitive Neuroscience at the University of Oxford. His work focuses on the neural and computational mechanisms by which humans make decisions.
