A small yet powerful helper package that turns error messages or unexpected program behaviour into structured, step‑by‑step troubleshooting guidance.
It sends your problem description to LLM7 (or any other LLM you provide), matches the LLM’s reply against a strict regex pattern using llmatch-messages, and returns a plain list of strings containing the recommended fix steps.
- **Main function:** `debugscribe()`
- **Package name:** `debugscribe`
| Feature | Details |
|---|---|
| LLM‑agnostic | Uses ChatLLM7 by default, but you can pass any langchain BaseChatModel. |
| Structured output | Output is filtered through a regex; you get clean, consistently formatted text. |
| Quick installation | One pip command. |
| Easy integration | Works in scripts, Jupyter notebooks, or any Python project. |
```shell
pip install debugscribe
```

The package pulls in its dependencies automatically:

- `langchain-core`
- `langchain-llm7` – the default LLM7 wrapper
- `llmatch-messages` – for regex-based validation
```python
from debugscribe import debugscribe

error_msg = """
Traceback (most recent call last):
  File "/home/user/app.py", line 5, in <module>
    main()
  File "/home/user/app.py", line 2, in main
    x = 1/0
ZeroDivisionError: division by zero
"""

steps = debugscribe(error_msg)
for i, step in enumerate(steps, 1):
    print(f"{i}. {step}")
```

You can hand off any langchain model.
Here are a few popular examples.
```python
from langchain_openai import ChatOpenAI
from debugscribe import debugscribe

llm = ChatOpenAI()  # automatically picks up the OpenAI API key from the environment
response = debugscribe(error_msg, llm=llm)
```

```python
from langchain_anthropic import ChatAnthropic
from debugscribe import debugscribe

llm = ChatAnthropic()  # needs ANTHROPIC_API_KEY
response = debugscribe(error_msg, llm=llm)
```

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from debugscribe import debugscribe

llm = ChatGoogleGenerativeAI()  # needs GOOGLE_API_KEY
response = debugscribe(error_msg, llm=llm)
```

**Tip** – The default `ChatLLM7` is imported with `from langchain_llm7 import ChatLLM7`; it will use the free-tier rate limits unless you provide your own `api_key` or set `LLM7_API_KEY` in the environment.
```python
debugscribe(
    user_input: str,
    api_key: Optional[str] = None,
    llm: Optional[BaseChatModel] = None,
) -> List[str]
```

| Parameter | Type | Description |
|---|---|---|
| `user_input` | `str` | The raw error message or behaviour description you want to debug. |
| `llm` | `Optional[BaseChatModel]` | Any langchain LLM instance. If omitted, a `ChatLLM7` instance is created. |
| `api_key` | `Optional[str]` | API key for LLM7. If omitted, the library checks the `LLM7_API_KEY` environment variable or falls back to the default free-tier key. |
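The documented key-lookup order (explicit argument, then the `LLM7_API_KEY` environment variable, then the free-tier default) can be sketched as a small helper. The function name and default value below are hypothetical, chosen only to illustrate the precedence:

```python
import os

def resolve_llm7_key(api_key=None, default_key="free-tier-key"):
    """Hypothetical sketch of the documented lookup order:
    explicit argument > LLM7_API_KEY env var > built-in free-tier key."""
    if api_key:
        return api_key
    return os.environ.get("LLM7_API_KEY", default_key)
```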
- **Prompt construction** – The function stitches together a system prompt and the user input according to a pattern defined in `prompts.py`.
- **LLM call** – `llmatch()` from `llmatch-messages` sends the request via the chosen LLM.
- **Regex validation** – The LLM's reply is matched against a compiled pattern; only well-formed replies are surfaced.
- **Return value** – A list of strings, each a distinct step for resolution.
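The regex-validation idea can be illustrated with plain `re`: pull numbered steps out of an LLM reply and return them as a list of strings. The pattern below is a simplified stand-in, not the actual one from `prompts.py`:

```python
import re

# Simulated LLM reply formatted as a numbered list of fix steps.
reply = """1. Check the divisor before dividing.
2. Wrap the division in a try/except ZeroDivisionError block.
3. Log the inputs that triggered the error."""

# Match "N. <step text>" at the start of each line; only
# well-formed numbered lines are captured.
pattern = re.compile(r"^\d+\.\s+(.*)$", re.MULTILINE)
steps = pattern.findall(reply)
```

A malformed reply that lacks the numbered format simply yields an empty list, which is the sense in which only well-formed replies are surfaced.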
The default free tier offers generous limits suitable for most developers.
If you require higher limits, acquire an API key at https://token.llm7.io/ and either export it:

```shell
export LLM7_API_KEY="my_awesome_key"
```

or pass it directly:

```python
response = debugscribe(error_msg, api_key="my_awesome_key")
```

Pull requests are welcome! For bug reports, questions, or suggestions, open an issue in the dedicated GitHub Issues tracker:
https://github.com/chigwell/debugscribe/issues
- Author: Eugene Evstafev
- Email: hi@eugene.plus
- GitHub: @chigwell
MIT © Eugene Evstafev