Claude Code Hooks - Quickly master how to use Claude Code hooks to add deterministic (or non-deterministic) control over Claude Code's behavior.
This requires:
- Astral UV - Fast Python package installer and resolver
- Claude Code - Anthropic's CLI for Claude AI
Optional:
- ElevenLabs - Text-to-speech provider
- OpenAI - Language model provider + Text-to-speech provider
- Anthropic - Language model provider
This demo captures all 5 Claude Code hook lifecycle events with their JSON payloads:
1. PreToolUse
Fires: Before any tool execution
Payload: tool_name, tool_input parameters
Enhanced: Blocks dangerous commands (rm -rf variants, .env access)
2. PostToolUse
Fires: After successful tool completion
Payload: tool_name, tool_input, tool_response with results
3. Notification
Fires: When Claude Code sends notifications (waiting for input, etc.)
Payload: message content
Enhanced: TTS alerts - "Your agent needs your input" (30% chance includes name)
4. Stop
Fires: When Claude Code finishes responding
Payload: stop_hook_active boolean flag
Enhanced: AI-generated completion messages with TTS playback
5. SubagentStop
Fires: When Claude Code subagents (Task tools) finish responding
Payload: stop_hook_active boolean flag
Enhanced: TTS playback - "Subagent Complete"
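Each hook receives its event payload as JSON on stdin. A minimal sketch of how a hook might read a PreToolUse payload - the field names follow the payloads above, but the .env check is simplified for illustration and is not this project's full logic:

# Hypothetical minimal PreToolUse hook: read the payload from stdin
import json
import sys

payload = json.load(sys.stdin)              # full event payload from Claude Code
tool_name = payload.get("tool_name", "")    # e.g. "Bash", "Write"
tool_input = payload.get("tool_input", {})  # the tool's parameters

# Illustrative check: block any Bash command that touches a .env file
if tool_name == "Bash" and ".env" in tool_input.get("command", ""):
    print("BLOCKED: .env file access detected", file=sys.stderr)
    sys.exit(2)  # exit code 2 blocks the tool call (see exit codes below)

sys.exit(0)  # allow the tool call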
- Complete hook lifecycle coverage - All 5 hook events implemented and logging
- Intelligent TTS system - AI-generated audio feedback with voice priority (ElevenLabs > OpenAI > pyttsx3)
- Security enhancements - Blocks dangerous commands and sensitive file access
- Personalized experience - Uses engineer name from environment variables
- Automatic logging - All hook events are logged as JSON to the logs/ directory
- Chat transcript extraction - PostToolUse hook converts JSONL transcripts to readable JSON format
Warning: The chat.json file contains only the most recent Claude Code conversation. It does not preserve conversations from previous sessions - each new conversation is fully copied and overwrites the previous one. This is unlike the other logs, which are appended to across every Claude Code session.
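As a rough sketch of that transcript conversion, a simplified version might read each JSONL line and write the collected messages out as chat.json. The function name, transcript path argument, and output location below are assumptions for illustration, not the project's exact implementation:

# Hypothetical sketch: convert a JSONL transcript into a readable chat.json
import json
from pathlib import Path

def convert_transcript(transcript_path: str, out_path: str = "logs/chat.json") -> None:
    # Each line of the transcript is one JSON object (JSONL format)
    messages = []
    with open(transcript_path) as f:
        for line in f:
            line = line.strip()
            if line:
                messages.append(json.loads(line))
    # Overwrites the previous chat.json, as noted in the warning above
    Path(out_path).parent.mkdir(parents=True, exist_ok=True)
    with open(out_path, "w") as f:
        json.dump(messages, f, indent=2)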
This project leverages UV single-file scripts to keep hook logic cleanly separated from your main codebase. All hooks live in .claude/hooks/ as standalone Python scripts with embedded dependency declarations.
Benefits:
- Isolation - Hook logic stays separate from your project dependencies
- Portability - Each hook script declares its own dependencies inline
- No Virtual Environment Management - UV handles dependencies automatically
- Fast Execution - UV's dependency resolution is lightning-fast
- Self-Contained - Each hook can be understood and modified independently
This approach ensures your hooks remain functional across different environments without polluting your main project's dependency tree.
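Concretely, each hook script can declare its dependencies inline using PEP 723 metadata, which uv run reads before executing the file. A minimal sketch - the listed dependency is illustrative, not a requirement of this project:

# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "python-dotenv",  # illustrative - e.g. for loading API keys
# ]
# ///
import json
import sys

# uv run resolves the declared dependencies automatically before running this file
payload = json.load(sys.stdin)

Running such a script with uv run installs its declared dependencies into a cached, isolated environment on first use, so no virtual environment needs to be managed by hand.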
- .claude/settings.json - Hook configuration with permissions
- .claude/hooks/ - Python scripts using uv for each hook type
  - pre_tool_use.py - Security blocking and logging
  - post_tool_use.py - Logging and transcript conversion
  - notification.py - Logging with optional TTS (--notify flag)
  - stop.py - AI-generated completion messages with TTS
  - subagent_stop.py - Simple "Subagent Complete" TTS
  - utils/ - Intelligent TTS and LLM utility scripts
    - tts/ - Text-to-speech providers (ElevenLabs, OpenAI, pyttsx3)
    - llm/ - Language model integrations (OpenAI, Anthropic)
- logs/ - JSON logs of all hook executions
  - pre_tool_use.json - Tool use events with security blocking
  - post_tool_use.json - Tool completion events
  - notification.json - Notification events
  - stop.json - Stop events with completion messages
  - subagent_stop.json - Subagent completion events
  - chat.json - Readable conversation transcript (generated by --chat flag)
- ai_docs/cc_hooks_docs.md - Complete hooks documentation from Anthropic
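For orientation, hooks are wired up in .claude/settings.json by mapping an event name and tool matcher to a command. A trimmed sketch of roughly what one registration looks like - the matcher and script path shown here are illustrative, not this project's exact configuration:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "uv run .claude/hooks/pre_tool_use.py" }
        ]
      }
    ]
  }
}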
Hooks provide deterministic control over Claude Code behavior without relying on LLM decisions.
- Command logging and auditing
- Automatic transcript conversion
- Permission-based tool access control
- Error handling in hook execution
Run any Claude Code command to see hooks in action via the logs/ files.
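The logging pattern those files come from is simple: each hook appends the event it receives to a JSON file. A hypothetical sketch - the exact file layout in this project may differ:

# Hypothetical minimal logging hook: append each incoming event to a file under logs/
import json
import sys
from pathlib import Path

def log_event(log_file: str) -> None:
    payload = json.load(sys.stdin)
    log_path = Path("logs") / log_file
    log_path.parent.mkdir(parents=True, exist_ok=True)

    # Keep a running JSON array of events: read, append, write back
    events = json.loads(log_path.read_text()) if log_path.exists() else []
    events.append(payload)
    log_path.write_text(json.dumps(events, indent=2))

if __name__ == "__main__":
    log_event("notification.json")
    sys.exit(0)  # always exit 0 so logging never interferes with Claude Code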
Claude Code hooks provide powerful mechanisms to control execution flow and provide feedback through exit codes and structured JSON output.
Hooks communicate status and control flow through exit codes:
Exit Code | Behavior | Description |
---|---|---|
0 | Success | Hook executed successfully. stdout shown to user in transcript mode (Ctrl-R) |
2 | Blocking Error | Critical: stderr is fed back to Claude automatically. See hook-specific behavior below |
Other | Non-blocking Error | stderr shown to user, execution continues normally |
Each hook type has different capabilities for blocking and controlling Claude Code's behavior:
PreToolUse Hook
- Primary Control Point: Intercepts tool calls before they execute
- Exit Code 2 Behavior: Blocks the tool call entirely, shows error message to Claude
- Use Cases: Security validation, parameter checking, dangerous command prevention
- Example: Our pre_tool_use.py blocks rm -rf commands with exit code 2
# Block dangerous commands
if is_dangerous_rm_command(command):
    print("BLOCKED: Dangerous rm command detected", file=sys.stderr)
    sys.exit(2)  # Blocks tool call, shows error to Claude
PostToolUse Hook
- Primary Control Point: Provides feedback after tool completion
- Exit Code 2 Behavior: Shows error to Claude (tool already ran, cannot be undone)
- Use Cases: Validation of results, formatting, cleanup, logging
- Limitation: Cannot prevent tool execution since it fires after completion
Notification Hook
- Primary Control Point: Handles Claude Code notifications
- Exit Code 2 Behavior: N/A - shows stderr to user only, no blocking capability
- Use Cases: Custom notifications, logging, user alerts
- Limitation: Cannot control Claude Code behavior, purely informational
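As a sketch of what a TTS-enhanced notification hook might do, the following uses pyttsx3 (the offline fallback provider listed above). The message text and --notify flag mirror this project's description, but the code itself is illustrative rather than the repo's actual implementation:

# Illustrative notification hook with optional offline TTS via pyttsx3
import json
import sys

def speak(text: str) -> None:
    try:
        import pyttsx3  # offline fallback; ElevenLabs/OpenAI take priority in this project
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    except Exception:
        pass  # never let TTS failures break the hook

payload = json.load(sys.stdin)  # contains the notification message content
if "--notify" in sys.argv:
    speak("Your agent needs your input")
sys.exit(0)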
Stop Hook
- Primary Control Point: Intercepts when Claude Code tries to finish responding
- Exit Code 2 Behavior: Blocks stoppage, shows error to Claude (forces continuation)
- Use Cases: Ensuring tasks complete, validating the final state; use this to FORCE CONTINUATION when work is unfinished
- Caution: Can cause infinite loops if not properly controlled
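Because of that risk, a Stop hook should check the stop_hook_active flag before blocking. A minimal sketch - the completion check is a placeholder, not this project's logic:

# Guard against infinite loops in a Stop hook (illustrative)
import json
import sys

def work_is_finished() -> bool:
    # Placeholder for a real completion check (e.g. run the test suite)
    return True

payload = json.load(sys.stdin)

# stop_hook_active is true when Claude is already continuing because of a stop hook
if payload.get("stop_hook_active"):
    sys.exit(0)  # never block a second time - this is what prevents infinite loops

if not work_is_finished():
    print(json.dumps({
        "decision": "block",
        "reason": "The task is not finished; please complete the remaining steps."
    }))
sys.exit(0)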
Beyond simple exit codes, hooks can return structured JSON for sophisticated control:
Common JSON fields (all hook types):
{
"continue": true, // Whether Claude should continue (default: true)
"stopReason": "string", // Message when continue=false (shown to user)
"suppressOutput": true // Hide stdout from transcript (default: false)
}
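For example, a hook could halt Claude entirely using the common fields above. A hypothetical sketch - the stop reason is illustrative:

# Stop Claude and surface a reason to the user (illustrative)
import json
import sys

output = {
    "continue": False,                        # overrides everything else
    "stopReason": "Manual review required before continuing",
    "suppressOutput": True                    # keep this hook's stdout out of transcript mode
}
print(json.dumps(output))
sys.exit(0)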
PreToolUse decision control:
{
"decision": "approve" | "block" | undefined,
"reason": "Explanation for decision"
}
- "approve": Bypasses permission system,
reason
shown to user - "block": Prevents tool execution,
reason
shown to Claude - undefined: Normal permission flow,
reason
ignored
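A hypothetical sketch of the "approve" path, auto-approving a known-safe, read-only command - the allow-list here is an assumption for illustration:

# Auto-approve a known-safe read-only command (illustrative)
import json
import sys

payload = json.load(sys.stdin)
command = payload.get("tool_input", {}).get("command", "")

SAFE_COMMANDS = ("git status", "ls", "pwd")  # illustrative allow-list
if payload.get("tool_name") == "Bash" and command.strip() in SAFE_COMMANDS:
    print(json.dumps({
        "decision": "approve",
        "reason": "Read-only command on the allow-list"
    }))
sys.exit(0)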
PostToolUse decision control:
{
"decision": "block" | undefined,
"reason": "Explanation for decision"
}
- "block": Automatically prompts Claude with
reason
- undefined: No action,
reason
ignored
Stop decision control:
{
"decision": "block" | undefined,
"reason": "Must be provided when blocking Claude from stopping"
}
- "block": Prevents Claude from stopping,
reason
tells Claude how to proceed - undefined: Allows normal stopping,
reason
ignored
When multiple control mechanisms are used, they follow this priority:
"continue": false
- Takes precedence over all other controls"decision": "block"
- Hook-specific blocking behavior- Exit Code 2 - Simple blocking via stderr
- Other Exit Codes - Non-blocking errors
# Block dangerous patterns
dangerous_patterns = [
    r'rm\s+.*-[rf]',  # rm -rf variants
    r'sudo\s+rm',     # sudo rm commands
    r'chmod\s+777',   # Dangerous permissions
    r'>\s*/etc/',     # Writing to system directories
]

for pattern in dangerous_patterns:
    if re.search(pattern, command, re.IGNORECASE):
        print(f"BLOCKED: {pattern} detected", file=sys.stderr)
        sys.exit(2)
# Validate file operations
if tool_name == "Write" and not tool_response.get("success"):
    output = {
        "decision": "block",
        "reason": "File write operation failed, please check permissions and retry"
    }
    print(json.dumps(output))
    sys.exit(0)
# Ensure critical tasks are complete
if not all_tests_passed():
    output = {
        "decision": "block",
        "reason": "Tests are failing. Please fix failing tests before completing."
    }
    print(json.dumps(output))
    sys.exit(0)
- Timeout: 60-second execution limit per hook
- Parallelization: All matching hooks run in parallel
- Environment: Inherits Claude Code's environment variables
- Working Directory: Runs in current project directory
- Input: JSON via stdin with session and tool data
- Output: Processed via stdout/stderr with exit codes
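Since every hook receives its input this way, it is worth parsing stdin defensively. A small, purely illustrative sketch:

# Parse hook input defensively so a malformed payload never crashes the hook
import json
import sys

try:
    payload = json.load(sys.stdin)
except json.JSONDecodeError as exc:
    # Non-zero, non-2 exit: shown to the user as a non-blocking error
    print(f"Hook received invalid JSON: {exc}", file=sys.stderr)
    sys.exit(1)

# ... hook logic using payload goes here ...
sys.exit(0)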
- Use PreToolUse for Prevention: Block dangerous operations before they execute
- Use PostToolUse for Validation: Check results and provide feedback
- Use Stop for Completion: Ensure tasks are properly finished
- Handle Errors Gracefully: Always provide clear error messages
- Avoid Infinite Loops: Check the stop_hook_active flag in Stop hooks
- Test Thoroughly: Verify hooks work correctly in safe environments
And prepare for Agentic Engineering
Learn to code with AI with foundational Principles of AI Coding
Follow the IndyDevDan YouTube channel for more AI coding tips and tricks.