
feat(external-issues): Use LLM generated title/description for ticket creation #114760

Merged
leeandher merged 6 commits into master from leanderrodrigues/iswf-2529-allow-llm-proxy-to-generate-issue-title-and-description-when
May 8, 2026

Conversation

@leeandher
Member

Adds a flag that supports making an LLMGenerateRequest when a user attempts to create an issue link.

A few design decisions:

  • It adds a trade-off: a few seconds for an LLM call, so that the default response comes with the generated text, rather than having the browser make a form call, then an LLM call, then replace the content in the browser. This avoids a whole class of issues where the user eagerly makes form changes before the LLM call returns.
  • This does not apply to all ticket creation. If you set up Jira Tickets or GitHub Issues to be generated from alerts, those fire through a separate code path -- we are not introducing an LLM call on every issue firing, only on UI linkages:

    data["title"] = installation.get_group_title(event.group, event)
    workflow_id = data.get("workflow_id")
    if workflow_id is not None:
        data["description"] = build_description_workflow_engine_ui(
            event, workflow_id, installation, generate_footer
        )
  • We are omitting Sentry Apps from this change (unfortunately); those titles/descriptions are assembled on the frontend. That said, we could expose this LLMGenerateRequest through an API and service those as well in the future.
  • We are retaining the existing title + description inside the new description. In pseudocode, assuming the LLM generates a title and a description (see the Python sketch just below):

    Issue Title: LLM Title
    Issue Description: f"**{DefaultTitle}**\n\n" + LLM Description + "\n\n---\n\n" + DefaultDescription
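
A minimal sketch of that assembly (the helper name here is hypothetical; the exact template lives in the PR's utility code):

```python
def build_issue_fields(
    llm_title: str,
    llm_description: str,
    default_title: str,
    default_description: str,
) -> tuple[str, str]:
    """Combine the LLM output with the existing defaults, keeping both."""
    description = (
        f"**{default_title}**\n\n"  # keep the original title inline
        + llm_description
        + "\n\n---\n\n"  # divider before the original description
        + default_description
    )
    return llm_title, description
```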

Examples

(Screenshot example from May 4, 2026.)

leeandher and others added 2 commits May 4, 2026 10:17
…nal issues

When creating an external issue (Jira, GitHub, Linear, etc.), use the
LLM proxy to generate a more actionable title and description from the
Sentry error context. Falls back to the existing defaults when the
feature flag is disabled, the org hides AI features, or the LLM call
fails.

Gated behind the organizations:external-issues-ai-generate flag.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
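
A rough sketch of that gated fallback (only the flag name organizations:external-issues-ai-generate comes from the commit message; the opt-out option name and the proxy call are assumptions):

```python
import logging

from sentry import features

logger = logging.getLogger(__name__)


def _request_llm_details(group, event):
    """Stand-in for the PR's request through Sentry's LLM proxy."""
    raise NotImplementedError


def maybe_generate_external_issue_details(organization, group, event):
    """Return LLM-generated issue details, or None to fall back to the defaults."""
    if not features.has("organizations:external-issues-ai-generate", organization):
        return None
    if organization.get_option("sentry:hide_ai_features"):  # assumed option name
        return None
    try:
        return _request_llm_details(group, event)
    except Exception:
        logger.exception("external_issues.llm_generation_failed")
        return None
```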
…ssue helper

Rename generate_external_issue_details to maybe_generate_external_issue_details
across the mixin, tests, and utility module. Update test mocks to return
content as a JSON string matching the new json.loads parsing, fix the
except block to also catch TypeError for null content, update title/
description format assertions, and remove a redundant local import.

Co-Authored-By: Claude Opus 4.6 <noreply@example.com>
@leeandher leeandher requested review from a team as code owners May 4, 2026 19:41
@linear-code

linear-code Bot commented May 4, 2026

@github-actions github-actions Bot added the Scope: Backend label (automatically applied to PRs that change backend components) May 4, 2026
Comment thread src/sentry/integrations/utils/external_issues.py Outdated
cursor[bot]

This comment was marked as outdated.

Add missing gen-ai-features master flag check to match all other AI
features in the codebase. Wrap response.json() in try-except for
better observability on non-JSON responses. Pass event from caller
to avoid a duplicate Snuba query.

Co-Authored-By: Claude <noreply@anthropic.com>
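
A sketch of the defensive response parsing described above (the function name is illustrative; `response` is assumed to be a `requests` response object):

```python
import logging

logger = logging.getLogger(__name__)


def parse_llm_response(response):
    """Decode the LLM proxy response, logging non-JSON bodies instead of raising."""
    try:
        return response.json()
    except ValueError:
        # requests raises a ValueError subclass for non-JSON bodies; log the
        # status code so bad proxy responses show up in observability tooling.
        logger.warning(
            "external_issues.llm_response_not_json",
            extra={"status_code": response.status_code},
        )
        return None
```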
Contributor

@Christinarlong Christinarlong left a comment


I'd just watch out for increased latency on the modal loading

Comment thread src/sentry/integrations/utils/external_issues.py Outdated
default_title = self.get_group_title(group, event, **kwargs)
default_description = self.get_group_description(group, event, **kwargs)

llm_title, llm_description = maybe_generate_external_issue_details(
Contributor


Question in general, but will the LLM title/descr. generation be in addition to the integration requests to get things like repo and assignees?

My only callout here would be about latency, since I believe just opening this modal can currently take a while (unless we've already added improvements that I don't know about). Do we have tracing/observability on the issue-config fetching process?

Member Author


Yeah the latency will definitely increase for organizations opted into this flag. I wanted to test it with production times since I don't have a great gauge for how noticeable it will be when running it locally.

Since it's feature flagged, our SaaS org is probably a good test bed for whether this does need to be retooled once we have it enabled, but either way this is required to power the feature 🤷

Comment on lines +72 to +73
temperature=0.3,
max_tokens=750,
Contributor


Curious Qs, but what do temperature and max tokens do? Is temp analogous to effort? What happens if you hit max tokens? Will 750 be enough? Is there a way to test how many will be used?

Member Author


Honestly, it's not a science and kind of a best-guess approximation. These were the values I landed on to consistently get results locally without timeouts or token issues.

If you hit the max tokens the request fails, and we fall back to the default, but since these things aren't deterministic it's kind of guesswork.

The temperature is sort of an analog for creativity or randomness: closer to 0 is supposedly more deterministic, but we also want to avoid being boring (i.e. every title being "A sentry python issue occurred." or something like that).
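
For illustration, roughly how these two parameters ride along on a generation request (the endpoint and payload shape below are made up; the PR goes through Sentry's LLM proxy):

```python
import requests

response = requests.post(
    "https://llm-proxy.example.com/v1/generate",  # hypothetical endpoint
    json={
        "prompt": "Write an issue title and description for this Sentry error.",
        "temperature": 0.3,  # near 0 is close to deterministic; higher adds variety
        "max_tokens": 750,   # hard cap on generated tokens; output stops past it
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())
```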

Comment thread src/sentry/integrations/utils/external_issues.py Outdated
…al issue AI generation

Replace NamedTuple with TypedDict for GeneratedExternalIssueDetails to
better represent the dict-based return type from Seer. Add exc_info=True
to error and warning logs so tracebacks are captured in Sentry. Fix test
mock that would KeyError on empty TypedDict construction.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
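
A minimal sketch of the TypedDict described in this commit (fields inferred from the title/description usage elsewhere in the PR):

```python
from typing import TypedDict


class GeneratedExternalIssueDetails(TypedDict):
    """Dict-shaped return value for the LLM-generated issue fields."""

    title: str
    description: str
```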
Comment thread src/sentry/features/temporary.py
…-to-generate-issue-title-and-description-when
Contributor

@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.

Reviewed by Cursor Bugbot for commit 9410c88.

title = content.get("title")
description = content.get("description")
if title and description:
    return {"title": title.strip(), "description": description.strip()}
Contributor


Truthiness check before strip causes inconsistent fallback

Low Severity

The if title and description: truthiness check on line 112 runs on pre-stripped values, but the returned values on line 113 are post-strip(). If the LLM returns a whitespace-only title (truthy pre-strip, falsy post-strip) but a valid description, the function returns {"title": "", "description": "real desc"}. The caller in issues.py then independently checks each field's truthiness, causing the title to fall back to default while the description uses the LLM-formatted template — an inconsistent pairing.
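
One possible fix is to strip before the truthiness check, so whitespace-only values fall back consistently (a sketch of a drop-in replacement for the excerpted lines, inside the existing helper, assuming `content` is the parsed dict):

```python
# Strip first so a whitespace-only field counts as missing, keeping the
# title/description pair consistent: both LLM-generated or both defaults.
title = (content.get("title") or "").strip()
description = (content.get("description") or "").strip()
if title and description:
    return {"title": title, "description": description}
return None
```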

Additional Locations (1)

return None

title = content.get("title")
description = content.get("description")
Contributor


Non-dict JSON response causes uncaught AttributeError

Low Severity

If json.loads(content) succeeds but returns a non-dict type (e.g., a list, string, or number), the subsequent content.get("title") call raises an AttributeError. The local except block only catches json.JSONDecodeError, TypeError, and ValueError, so this error escapes to the generic except Exception in maybe_generate_external_issue_details, losing the specific logging context (group_id, viewer_context) and logging a misleading error message.
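
One possible guard is an `isinstance` check after parsing (a sketch; the wrapper function name and `raw_content` parameter are hypothetical, and the except clause mirrors the one described above):

```python
import json
import logging

logger = logging.getLogger(__name__)


def parse_generated_content(raw_content):
    """Parse the LLM content string into (title, description), or None."""
    try:
        content = json.loads(raw_content)
    except (json.JSONDecodeError, TypeError, ValueError):
        logger.warning("external_issues.llm_content_invalid_json", exc_info=True)
        return None
    if not isinstance(content, dict):
        # Valid JSON that is not an object (list, string, number) would
        # otherwise raise AttributeError on content.get(...), escaping the
        # local handler and losing the specific logging context.
        logger.warning("external_issues.llm_content_not_object")
        return None
    return content.get("title"), content.get("description")
```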


@leeandher leeandher enabled auto-merge (squash) May 8, 2026 20:46
@leeandher leeandher merged commit 8915308 into master May 8, 2026
77 checks passed
@leeandher leeandher deleted the leanderrodrigues/iswf-2529-allow-llm-proxy-to-generate-issue-title-and-description-when branch May 8, 2026 20:46