murnanedaniel/experiments-a

# Experiments A - AI Agent Playground

**TL;DR**: Public experimental workspace for testing tasks with Claude Code Web, Codex Cloud, and other AI coding tools. Each experiment lives in its own branch and subdirectory with self-contained documentation.

## Purpose

This repository serves as a sandbox for:

- Testing AI coding assistants (Claude Code, Cursor, etc.)
- Rapid prototyping and experimentation
- Learning new tools and techniques
- Documenting what works (and what doesn't)

## Current Experiments

_No experiments yet. Each new experiment will be listed here with a brief description and link to its branch._

---

## Agent Rules & Best Practices

### Workflow for New Experiments

When starting a new experiment or task:

1. **Create a descriptive branch**
   - Branch names should be kebab-case and descriptive
   - Format: `experiment/description-of-task` or `task/what-youre-doing`
   - Examples: `experiment/web-scraper`, `task/data-pipeline-poc`
2. **Create a dedicated subdirectory**
   - Directory name should match the branch name (minus the prefix)
   - All work for this experiment stays in this directory
   - Keeps experiments isolated and easy to find
3. **Work within the subdirectory**
   - Code, configs, data, everything goes in the experiment folder
   - Each experiment is self-contained
4. **Document as you go**
   - Create a single `README.md` in your experiment subdirectory
   - Keep a **TL;DR section at the top** (2-3 sentences max)
   - Update the TL;DR as the experiment evolves
   - Document decisions, learnings, and results
   - Don't over-document: focus on what matters
5. **Update the main README**
   - Add your experiment to the "Current Experiments" section above
   - Include: name, brief description, status, and branch link

### Code Quality Guidelines

- **Prioritize clarity over cleverness** - code should be readable
- **Use type hints** where applicable (Python, TypeScript, etc.)
- **Keep functions small** - one responsibility per function
- **Fail fast** - validate inputs early, fail with clear messages
- **No secrets in code** - use environment variables or config files

### Documentation Standards

- **TL;DR is sacred** - always keep it current and concise
- **Explain the "why"** - not just the "what"
- **Include examples** where helpful
- **Document failures** - they're learning opportunities
- **Link to resources** - papers, docs, tutorials you found useful

### Git Hygiene

- **Commit early, commit often** - small, focused commits
- **Descriptive commit messages** - future you will thank you
- **Don't commit secrets** - use `.gitignore` liberally
- **Keep branches focused** - one experiment per branch
- **Merge to main when complete** - clean up merged branches

### AI Agent Collaboration Tips

- **Be specific** - vague requests get vague results
- **Iterate** - start simple, add complexity
- **Review everything** - AI is smart but not infallible
- **Ask questions** - if something's unclear, ask the agent to explain
- **Learn from it** - understand what the agent builds, don't just copy

### Claude Code Web Optimization

This playground is designed for Claude Code Web. Here's how to use it effectively:

**Parallel Experiments**

- Claude Code Web excels at async execution and parallel work
- Run multiple experiments simultaneously in different branches
- Use the web interface for monitoring while multitasking
- Switch to CLI (via "Open in CLI") when you need hands-on control

**Experiment Setup Automation**

- Create `.claude/settings.json` in your experiment subdirectory
- Use SessionStart hooks to automate dependency installation
- Example: auto-install Python/Node packages when a session begins
- Persist environment variables with `CLAUDE_ENV_FILE` in hooks

**Documentation Patterns**

- Add `CLAUDE.md` to experiment directories for detailed specifications
- Use `@filename.md` syntax to reference shared guidelines
- Maintain a single source of truth across documentation
- Run `check-tools` before starting to verify available toolchains

**Security & Environment**

- Check the `CLAUDE_CODE_REMOTE` env var in scripts for conditional execution
- Use minimal network access (allowlist only what you need)
- Never commit secrets - use env files that are gitignored
- SessionStart hooks can source environment configs automatically

---

## Repository Structure

```
experiments_a/
├── README.md              # This file
├── .cursorrules           # AI agent guidelines
├── experiment-name-1/     # Each experiment in its own folder
│   ├── README.md          # Experiment-specific docs
│   ├── src/               # Code
│   └── ...
├── experiment-name-2/
└── ...
```

---

## Getting Started

1. Clone this repo
2. Create a new branch for your experiment
3. Create a subdirectory with the same name
4. Start experimenting!
5. Document what you learn

---

**This is a playground. Break things. Learn. Have fun.**
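The SessionStart automation described above might look roughly like this in an experiment's `.claude/settings.json`. The exact `hooks` schema is an assumption based on Claude Code's hooks feature, so check the current docs before relying on it:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "pip install -r requirements.txt"
          }
        ]
      }
    ]
  }
}
```

With something like this in place, dependencies install themselves each time a session starts in that experiment's directory, instead of being a manual first step.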
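The new-experiment workflow above can be sketched as a shell session. `web-scraper` is the hypothetical experiment name from the examples, and a fresh temporary repo stands in for your clone:

```shell
# Sketch of the new-experiment workflow. "web-scraper" is a hypothetical
# experiment name; the temp directory stands in for a fresh clone.
cd "$(mktemp -d)" && git init -q .

git checkout -b experiment/web-scraper   # 1. descriptive kebab-case branch
mkdir web-scraper && cd web-scraper      # 2. subdirectory matching the branch

cat > README.md <<'EOF'                  # 4. self-contained docs, TL;DR on top
# Web Scraper
**TL;DR**: Testing whether an AI agent can build a polite scraper. Status: just started.
EOF

# Commit early, commit often (identity set inline only for this sketch).
git -c user.name=agent -c user.email=agent@example.com \
    add README.md
git -c user.name=agent -c user.email=agent@example.com \
    commit -q -m "Start web-scraper experiment"
```

From there, everything for the experiment stays inside `web-scraper/`, and the branch merges to main once the experiment wraps up.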
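For the conditional-execution tip, a setup script can branch on `CLAUDE_CODE_REMOTE`. This is a minimal sketch; the assumption that the variable reads `"true"` in remote sessions should be verified against the current docs:

```shell
# Minimal sketch: branch on CLAUDE_CODE_REMOTE so one setup script works
# both in Claude Code Web and locally. Assumes the variable is "true" in
# remote sessions (an assumption - verify against the current docs).
session_kind() {
  if [ "${CLAUDE_CODE_REMOTE:-}" = "true" ]; then
    echo "remote"   # e.g. auto-install pinned dependencies here
  else
    echo "local"    # e.g. skip setup, assume the dev machine is ready
  fi
}
```

A SessionStart hook could then call a function like `session_kind` to decide whether to run the heavyweight install steps.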
