📝 Description
The LLM system prompt is currently hardcoded directly inside the build_prompt method in src/llm.py. If a user wants to change how the AI behaves or extracts data, they are forced to edit the core Python source code.
💡 Rationale
Separating configuration from application logic is a standard software engineering best practice. Externalizing the prompt allows different fire departments to easily customize the AI's instructions for their specific local jargon or formatting needs without risking syntax errors or breaking the app.
🛠️ Proposed Solution
- Create a new `src/prompt.txt` file to isolate and store the system instructions.
- Update `llm.py` to dynamically load this file at runtime using `os.path`.
- Inject the loaded text into the final payload string before querying Ollama.
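The loading step above could be sketched roughly as follows. This is a minimal, hypothetical illustration, not FireForm's actual code: the path, function names, and fallback prompt are all assumptions.

```python
import os

# Assumed location of the externalized prompt, relative to this module.
PROMPT_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "prompt.txt")

def load_system_prompt(path: str = PROMPT_PATH) -> str:
    """Read the system prompt from disk, with a minimal fallback if missing."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            return f.read().strip()
    except FileNotFoundError:
        # Illustrative fallback so the app still runs without the config file.
        return "You are a helpful assistant that extracts incident data."

def build_prompt(user_text: str) -> str:
    """Prepend the loaded system prompt to the user's input before querying Ollama."""
    return f"{load_system_prompt()}\n\n{user_text}"
```

Reading the file at call time (rather than once at import) means edits to `prompt.txt` take effect without restarting the app, at the cost of a small amount of I/O per request; caching is an easy follow-up if that matters.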
- Logic change in `src/`
- Update to `requirements.txt`
- New prompt for Mistral/Ollama
✅ Acceptance Criteria
- Feature works in the Docker container.
- Documentation updated in `docs/` (mentioning the new `prompt.txt` config file).
- JSON output validates against the schema.
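The last criterion could be checked with something like the sketch below. The field names are purely illustrative assumptions, not FireForm's real schema; a project using a formal JSON Schema would typically reach for the `jsonschema` library instead of this stdlib-only check.

```python
import json

# Hypothetical required fields for the extracted incident record.
REQUIRED_FIELDS = {"incident_type", "address", "timestamp"}

def validate_output(raw: str) -> bool:
    """Return True if raw parses as a JSON object containing every required field."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and REQUIRED_FIELDS.issubset(data)
```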
📌 Additional Context
By moving this out of the core script, we make FireForm much closer to a true "Zero-Config" plug-and-play tool for non-developers.