[FEAT]: Externalize LLM System Prompt to a Text File #259

@Cubix33

📝 Description

The LLM system prompt is currently hardcoded directly inside the build_prompt method in src/llm.py. If a user wants to change how the AI behaves or extracts data, they are forced to edit the core Python source code.

💡 Rationale

Separating configuration from application logic is a standard software engineering best practice. Externalizing the prompt allows different fire departments to easily customize the AI's instructions for their specific local jargon or formatting needs without risking syntax errors or breaking the app.

🛠️ Proposed Solution

  • Create a new src/prompt.txt file to isolate and store the system instructions.

  • Update llm.py to load this file at runtime, using os.path to resolve its location relative to the module.

  • Inject the loaded text into the final payload string before querying Ollama.

  • Logic change in src/

  • Update to requirements.txt

  • New prompt for Mistral/Ollama

✅ Acceptance Criteria

  • Feature works in Docker container.
  • Documentation updated in docs/ (mentioning the new prompt.txt config file).
  • JSON output validates against the schema.

📌 Additional Context

By moving this out of the core script, we make FireForm much closer to a true "Zero-Config" plug-and-play tool for non-developers.
