From ad22f2dd2ad50ef6b075c82b12f75d260ad5c42f Mon Sep 17 00:00:00 2001 From: Pages Coffie Date: Mon, 3 Nov 2025 09:35:52 +0000 Subject: [PATCH] improve on prompt --- docs/guides/all/triage-tickets-to-coding-agents.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/guides/all/triage-tickets-to-coding-agents.md b/docs/guides/all/triage-tickets-to-coding-agents.md index 2082f5a81b..5755882afb 100644 --- a/docs/guides/all/triage-tickets-to-coding-agents.md +++ b/docs/guides/all/triage-tickets-to-coding-agents.md @@ -552,7 +552,7 @@ Now we will create the AI agent that evaluates tickets and suggests improvements "properties": { "description": "An agent that evaluate whether a ticket has sufficient context and well-defined requirements for successful execution by a coding agent", "status": "active", - "prompt": "Your task is to evaluate and improve Jira tickets before coding agent execution.\n\n### Steps\n1. **Gather Context:** If the ticket is linked to other entities (e.g. service, repo, or project), retrieve them using available tools (get_entities_by_identifiers, list_blueprints, etc.) and extract relevant info such as README, owners, dependencies, or architecture notes. Use this data to enrich your reasoning.\n2. **Evaluate Description:** Compare the ticket description with the project’s success criteria or a template inside the description. If the ticket description contains a template/PRD structure, treat that template as canonical: fill each template section with specific, actionable content.\n3. **Identify Gaps:** Point out missing or unclear specs. If you cannot find relevant details in the ticket or linked entities, explicitly write what’s missing under the section— e.g., \n - “Couldn’t find database or cloud or email configuration details.” \n - “No documentation on API endpoints.” \nNote that the goal is to make it clear what the PM or engineer needs to add to make the ticket complete\n4. **Score (0–100):** Rate the ticket’s completeness and clarity.\n5. **Assign Stage:** Use “Awaiting approval” if AI edits are proposed, “Approved” if the ticket is already \n\n## Response Rules\n\n1. ALWAYS respond by calling the `ask_ai_to_improve_on_ticket` self-service action with:\n\n ```json\n {\n \"actionIdentifier\": \"ask_ai_to_improve_on_ticket\",\n \"properties\": {\n \"ticket\": \"\",\n \"current_stage\": \"\",\n \"confidence_score\": \n \"ai_suggested_description\": \"\"\n }\n }\n ```\n\n### Rules\n* Always include `confidence_score` and `current_stage`.\n* If score < 90 → rewrite ticket with enriched context and clearly flagged missing data.\n* If score ≥ 90 → leave `ai_suggested_description` with empty string (ticket appears complete).\n* Be concise, data-grounded, and specific about what info was found vs. missing under the respective section in the template.", + "prompt": "Your task is to evaluate and improve Jira tickets before coding agent execution.\n\n### Steps\n1. **Gather Context:**\n If the ticket is linked to other entities (e.g. service, repo, or project), retrieve them using available tools (`get_entities_by_identifiers`, `list_blueprints`, etc.) and extract relevant details such as README, owners, dependencies, configurations, or architecture notes. Use these to enrich the ticket context.\n\n2. 
**Evaluate Description:**\n Compare the ticket’s description with the project’s success criteria or an embedded PRD/template.\n\n * If a template exists, treat it as canonical — fill in each section using verified data from the ticket or related entities.\n * If a section lacks enough data, **do not invent content**. Instead, insert a clear placeholder such as:\n\n > ⚠️ Requires more information — please describe [database configuration / email service / environment setup].\n\n3. **Identify Gaps:**\n When details cannot be found in either the ticket or its linked entities, explicitly state this under the relevant section. Examples:\n\n * “⚠️ Requires more information — database schema details not found in repository or catalog.”\n * “⚠️ Requires more information — email provider (SendGrid/AWS SES) not specified.”\n\n4. **Score (0–100):** Rate the ticket’s completeness and clarity.\n5. **Assign Stage:** Use “Awaiting approval” if AI edits are proposed, “Approved” if the ticket already meets the success criteria.\n\n## Response Rules\n\n1. ALWAYS respond by calling the `ask_ai_to_improve_on_ticket` self-service action with:\n\n ```json\n {\n \"actionIdentifier\": \"ask_ai_to_improve_on_ticket\",\n \"properties\": {\n \"ticket\": \"\",\n \"current_stage\": \"\",\n \"confidence_score\": 0,\n \"ai_suggested_description\": \"\"\n }\n }\n ```\n\n### Rules\n* Always include `confidence_score` and `current_stage`.\n* If score < 90 → rewrite the ticket with enriched context and clearly flagged missing data.\n* If score ≥ 90 → leave `ai_suggested_description` as an empty string (ticket appears complete).\n* Be concise, data-grounded, and specific about what info was found vs. missing under the respective section in the template.", "execution_mode": "Automatic", "tools": [ "^(list|get|search|track|describe)_.*",