# eusef-python-cram-harness

A throwaway Python practice rig, built in an afternoon by an AI agent and used over two days to cram Python syntax fluency before a 4-level, 90-minute timed Python coding screen. Posted as the artifact described in the blog post *I Built a Throwaway Tool to Cram Python in Two Days. The Lesson Wasn't About Python.*

The blog post is the "why." This repo is the "what."

## What's in here

| Path | What it is |
| --- | --- |
| `practice/` | The drill harness: idiom drills, level problems, mock generator, dashboard, queue, gap log. |
| `practice/BUILD.md` | The merged spec the harness was built from. |
| `practice/STATE.md` | Cross-session state. Useful if you hand the rig to an agent. |
| `practice/README.md` | Daily usage. |
| `practice/voice-drill.md` | A read-aloud recall script for off-keyboard practice (~30 min). |
| `docs/claude-python-crash-course-spec.md` | The Claude-generated curriculum spec. |
| `docs/chatgpt-python-crash-course-spec.md` | The ChatGPT-generated curriculum spec. |
| `start.sh` | One-command bootstrap: installs pytest if missing, launches the daily drill loop. |

## Quickstart

```sh
chmod +x start.sh
./start.sh
```

The first run seeds the queue. After that, `./start.sh` orients you with current state and drops you into the next drill.
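A drill is a single file you complete from recall, checked by pytest. This is a hypothetical example in that shape (the function, text, and assertions are illustrative, not taken from the repo):

```python
# Hypothetical idiom drill: fill in the stub from memory,
# then run pytest on this file. 2-4 assertions per drill.
from collections import Counter


def top_words(text, n):
    """Return the n most common words in text as (word, count) pairs."""
    return Counter(text.split()).most_common(n)


def test_top_words():
    assert top_words("a b a c a b", 2) == [("a", 3), ("b", 2)]


def test_top_words_single():
    assert top_words("x", 1) == [("x", 1)]
```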

Requirements: Python 3.10+ (built and tested on 3.12) and pytest. Beyond pytest, it uses only the standard library; there are no other dependencies.

## How the artifact was built

Two AI agents (Claude and ChatGPT) were asked the same question independently: given this candidate's profile and the test format, what specifically needs to be drilled? Their answers are preserved in `docs/`. A PM agent merged the two into a single spec (`practice/BUILD.md`). A dev agent then built the rig from that spec.

The full story, including what the harness did and didn't do, is in the blog post.

## What the rig does

- **30 idiom drills** — small Python idioms (list comprehensions, `defaultdict`, `Counter.most_common`, slicing, dict merge, etc.), one file each, with 2-4 pytest assertions apiece.
- **12 level problems** — three scenarios (bank, filesystem, taskmanager) × four progressive levels each. L1 is CRUD; L2 adds ranking; L3 adds timestamps and scheduled events; L4 adds backup/restore.
- **Mock generator** — `bin/mock new` produces a fresh 4-level mock with shuffled entity names so you can't memorize.
- **Dashboard** — `bin/dashboard` renders progress, weak hours, and time trends from the gap log.
- **Spaced repetition** — failures come back tomorrow, then +3 days, then +7, then +14.
- **Reset on every drill** — your prior solution is archived; the next attempt always starts from a pristine stub. Pure recall.

## Disclaimer

This was built for one person over two days. It worked for that purpose. It is not packaged for general use, has no test coverage of its own, and the "curriculum" is opinionated to one candidate's gaps.

If you want to use it: clone it, edit the candidate profile in `CLAUDE.md` and `practice/STATE.md`, possibly regenerate the curriculum by re-running the two-agent prompt against your own gap list, and rebuild from there. The blueprint is the value, not the specific scenarios.

## License

MIT.
