
Large Language Model-based Test Case Generation for GP Agents

Python implementation for the paper "Large Language Model-based Test Case Generation for GP Agents" (GECCO 2024) by Steven Jorgensen, Giorgia Nadizar, Gloria Pietropolli, Luca Manzoni, Eric Medvet, Una-May O'Reilly, and Erik Hemberg.

Abstract

Genetic programming (GP) is a popular problem-solving and optimization technique. However, generating effective test cases for training and evaluating GP programs requires strong domain knowledge. Furthermore, GP programs often prematurely converge on local optima when given excessively difficult problems early in their training. Curriculum learning (CL) has been effective in addressing similar issues across different reinforcement learning (RL) domains, but it requires the manual generation of progressively difficult test cases as well as their careful scheduling. In this work, we leverage the domain knowledge and the strong generative abilities of large language models (LLMs) to generate effective test cases of increasing difficulty and schedule them according to various curricula. We show that by integrating a curriculum scheduler with LLM-generated test cases, we can effectively train a GP agent player with environment-based curricula for a single-player game and opponent-based curricula for a multi-player game. Finally, we discuss the benefits and challenges of implementing this method for other problem domains.
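The repository contains the full training pipeline; as a rough illustration of the core idea only, the sketch below shows how a curriculum scheduler might feed test cases of increasing difficulty into a GP fitness-evaluation loop. All names here (llm_generate_test_cases, CurriculumScheduler, evaluate) are hypothetical and are not taken from this codebase, and the LLM call is stubbed with a deterministic placeholder so the example runs stand-alone.

```python
# Hypothetical sketch of curriculum-scheduled, LLM-generated test cases for GP.
# None of these names come from the LLM-Connect4 codebase; the LLM call is
# stubbed so the example is self-contained and runnable.
import random


def llm_generate_test_cases(difficulty: int, n: int = 5) -> list[int]:
    """Stand-in for an LLM prompt such as 'generate n game states of
    difficulty <difficulty>'. Here we simply return placeholder numbers."""
    return [difficulty * 10 + i for i in range(n)]


class CurriculumScheduler:
    """Advances to harder test cases once the population clears a threshold."""

    def __init__(self, max_difficulty: int, pass_threshold: float = 0.8):
        self.difficulty = 1
        self.max_difficulty = max_difficulty
        self.pass_threshold = pass_threshold

    def current_cases(self) -> list[int]:
        return llm_generate_test_cases(self.difficulty)

    def update(self, best_fitness: float) -> None:
        if best_fitness >= self.pass_threshold and self.difficulty < self.max_difficulty:
            self.difficulty += 1


def evaluate(individual: float, cases: list[int]) -> float:
    """Toy fitness: fraction of test cases the individual 'solves'."""
    return sum(individual > c / 100 for c in cases) / len(cases)


population = [random.random() for _ in range(20)]
scheduler = CurriculumScheduler(max_difficulty=3)
for gen in range(10):
    cases = scheduler.current_cases()
    fitnesses = [evaluate(ind, cases) for ind in population]
    best = max(fitnesses)
    scheduler.update(best)
    # (Selection and variation of the GP population would go here.)
    print(f"gen {gen}: difficulty {scheduler.difficulty}, best fitness {best:.2f}")
```

In the paper's setting, the stubbed generator corresponds to prompting an LLM for progressively harder environments (single-player game) or opponents (multi-player game), while the scheduler decides when the GP population is ready to move on.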
