maxandersen/adventofcode-2025

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

4 Commits
 
 
 
 
 
 
 
 

Repository files navigation

Advent of Code 2025: The LLM Experiment

  • I love the challenge and creativity of Advent of Code.
  • I definitely don’t have the hours (or caffeine) to do it all myself.
  • Luckily, large language models are here—and apparently, they like puzzles too.

So! This year I’m taking a hands-off approach: let the agents tackle Advent of Code and see how well AI fares when put to the test.

Scoring System

  • 100%: I give the LLM a puzzle, it produces working code, I do nothing. Pure AI magic.
  • -1 per hint: Each time I have to prod, nudge, or reword my request, 1 point vanishes from perfection.
  • -10 for hands-on: If I roll up my sleeves and write code myself, that’s a 10-point penalty (because the point was for the AI to do the work, right?).

At this stage, there’s no penalty for code style or efficiency—it’s all about solving the problem.
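The scoring rules above boil down to simple arithmetic. A minimal sketch (in Python, ironically — see Day 3), where the zero floor is my own assumption rather than an official rule:

```python
def day_score(hints: int = 0, hands_on: bool = False) -> int:
    """Score one day's puzzle: start at 100, lose 1 point per
    hint and 10 points if I had to write code myself."""
    score = 100 - hints - (10 if hands_on else 0)
    return max(score, 0)  # floor at zero (my assumption)

print(day_score())                        # 100: pure AI magic
print(day_score(hints=1))                 # 99: one nudge needed
print(day_score(hints=2, hands_on=True))  # 88: two hints plus hands-on
```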

Current Standings:

  • Day 1: 100% (Perfect run—AI did all the work!)
  • Day 2: 100% (No hints needed—AI solved both parts solo!)
  • Day 3: 99% (One hint needed—had to tell it not to choose Python.)
