This example demonstrates how to use the Hooks system in the Instructor library to monitor, log, and debug your LLM interactions.
Hooks provide a powerful mechanism for intercepting and handling events during the completion and parsing process. They allow you to add custom behavior, logging, or error handling at various stages of the API interaction.
The Instructor library supports several predefined hooks:
`completion:kwargs`
: Emitted when completion arguments are provided

`completion:response`
: Emitted when a completion response is received

`completion:error`
: Emitted when an error occurs during completion

`completion:last_attempt`
: Emitted when the last retry attempt is made

`parse:error`
: Emitted when an error occurs during response parsing
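Handlers for these events are attached with `client.on`. Here is a minimal sketch, assuming an OpenAI-backed client; the handler names and printed messages are illustrative:

```python
import instructor
from openai import OpenAI

client = instructor.from_openai(OpenAI())

def log_kwargs(*args, **kwargs):
    # completion:kwargs fires before the request is sent and receives
    # the arguments passed to the completion call.
    print(f"Request kwargs: model={kwargs.get('model')}")

def log_error(error: Exception):
    # completion:error fires when the underlying API call raises.
    print(f"Completion error: {error}")

client.on("completion:kwargs", log_kwargs)
client.on("completion:error", log_error)
```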
This example demonstrates:
- Basic Hook Registration: How to register handlers for different hook events
- Multiple Handlers: How to register multiple handlers for the same event
- Statistics Collection: How to collect and track API usage statistics
- Error Handling: How to catch and process different types of errors
- Hook Cleanup: How to remove hooks when they're no longer needed (see the sketch after this list)
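Continuing the client from the sketch above: several handlers can share one event, and `off`/`clear` detach them again. A sketch — the `stats` dict and handler names are illustrative:

```python
stats = {"calls": 0}

def count_calls(*args, **kwargs):
    stats["calls"] += 1

def log_model(*args, **kwargs):
    print(f"Calling model: {kwargs.get('model')}")

# Handlers registered on the same event run in registration order.
client.on("completion:kwargs", count_calls)
client.on("completion:kwargs", log_model)

# Detach a single handler...
client.off("completion:kwargs", log_model)
# ...or remove every handler for the event at once.
client.clear("completion:kwargs")
```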
The code demonstrates three scenarios:
- Successful Extraction: A basic example that works correctly
- Parse Error: An example that triggers a validation error (sketched after this list)
- Multiple Hooks: Shows how to attach multiple handlers to the same event
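The parse-error scenario can be reproduced with a response model whose validation deliberately fails. A sketch, where the response model, validator, model name, and prompt are all illustrative:

```python
from pydantic import BaseModel, field_validator

class User(BaseModel):
    name: str
    age: int

    @field_validator("age")
    @classmethod
    def age_must_be_non_negative(cls, v: int) -> int:
        if v < 0:
            raise ValueError("age must be non-negative")
        return v

def on_parse_error(error: Exception):
    # parse:error fires when the response fails Pydantic validation.
    print(f"Parse error: {error}")

client.on("parse:error", on_parse_error)

# A prompt that yields a negative age makes validation fail, emitting
# parse:error on each attempt and completion:last_attempt on the final retry.
try:
    user = client.chat.completions.create(
        model="gpt-4o-mini",
        response_model=User,
        messages=[{"role": "user", "content": "Extract: Jason is -25 years old"}],
        max_retries=2,
    )
except Exception as e:
    print(f"Extraction failed after retries: {e}")
```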
To run the example:

```bash
# Navigate to the hooks example directory
cd examples/hooks

# Run the example
python run.py
```
The example will print detailed information about each request, including:
- Request details (model, prompt)
- Approximate input token count
- Token usage statistics
- Successful responses
- Parse errors
- Completion errors
- Retry attempt notifications
At the end, it will print a summary of the statistics collected.
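One way to gather such statistics is to accumulate token counts from `completion:response` events. A sketch, assuming an OpenAI-style response object that exposes a `usage` attribute:

```python
usage_totals = {"prompt_tokens": 0, "completion_tokens": 0}

def track_usage(response):
    # completion:response receives the raw provider response; OpenAI
    # responses expose token counts on `response.usage`.
    usage = getattr(response, "usage", None)
    if usage is not None:
        usage_totals["prompt_tokens"] += usage.prompt_tokens
        usage_totals["completion_tokens"] += usage.completion_tokens

client.on("completion:response", track_usage)

# ... run extractions, then print the summary:
print(
    f"Prompt tokens: {usage_totals['prompt_tokens']}, "
    f"Completion tokens: {usage_totals['completion_tokens']}"
)
```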
For more information about hooks in Instructor, see the hooks documentation.