Conversation
…nal issues

When creating an external issue (Jira, GitHub, Linear, etc.), use the LLM proxy to generate a more actionable title and description from the Sentry error context. Falls back to the existing defaults when the feature flag is disabled, the org hides AI features, or the LLM call fails. Gated behind the organizations:external-issues-ai-generate flag.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ssue helper

Rename generate_external_issue_details to maybe_generate_external_issue_details across the mixin, tests, and utility module. Update test mocks to return content as a JSON string matching the new json.loads parsing, fix the except block to also catch TypeError for null content, update title/description format assertions, and remove a redundant local import.

Co-Authored-By: Claude Opus 4.6 <noreply@example.com>
Add missing gen-ai-features master flag check to match all other AI features in the codebase. Wrap response.json() in try-except for better observability on non-JSON responses. Pass event from caller to avoid a duplicate Snuba query.

Co-Authored-By: Claude <noreply@anthropic.com>
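For readers skimming the thread, here is a rough sketch of the gating and fallback flow those commits describe. The flag names and maybe_generate_external_issue_details come from the commits and the diff below; the wrapper function name, the org option name, and the exact call signature are illustrative guesses, not the actual code:

```python
from sentry import features


def resolve_external_issue_defaults(group, event, default_title, default_description):
    """Return LLM-generated title/description when allowed, otherwise the existing defaults."""
    org = group.organization

    # Master AI flag, feature-specific flag, and the org-level "hide AI features"
    # setting all gate the LLM call (the option name here is an assumption).
    if not (
        features.has("organizations:gen-ai-features", org)
        and features.has("organizations:external-issues-ai-generate", org)
        and not org.get_option("sentry:hide_ai_features")
    ):
        return default_title, default_description

    generated = maybe_generate_external_issue_details(group=group, event=event)
    if generated is None:
        # LLM call failed or returned unusable content: keep the defaults.
        return default_title, default_description

    return generated["title"], generated["description"]
```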
Christinarlong
left a comment
I'd just watch out for increased latency on the modal loading
```python
default_title = self.get_group_title(group, event, **kwargs)
default_description = self.get_group_description(group, event, **kwargs)

llm_title, llm_description = maybe_generate_external_issue_details(
```
Question in general, but will the LLM title/descr. generation be in addition to the integration requests to get things like repo and assignees?
My only callout here would be about latency, since I believe just opening this modal can currently take a while (unless we've already added improvements I don't know about). Do we have tracing/observability on the issue config fetching process?
Yeah the latency will definitely increase for organizations opted into this flag. I wanted to test it with production times since I don't have a great gauge for how noticeable it will be when running it locally.
Since it's feature flagged, our SaaS org is probably a good test bed for whether this does need to be retooled once we have it enabled, but either way this is required to power the feature 🤷
```python
temperature=0.3,
max_tokens=750,
```
Curious Qs, but what do temperature and max tokens do? Is temp analogous to effort? What happens if you hit max tokens? Will 750 be enough? Is there a way to test how many will be used?
Honestly, it's not a science and kind of a best-guess approximation. These were the values I landed on to consistently get results locally without timeouts or token issues.
If you hit the max tokens, the request fails and we fall back to the default, but since these things aren't deterministic it's kind of guesswork.
The temperature is sort of an analog for creativity or randomness: closer to 0 is supposedly more deterministic, but we also want to avoid being boring (i.e. every title being "A sentry python issue occurred." or something like that).
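To make those two knobs concrete, here is a generic OpenAI-style call using the same values as the diff above. This is purely illustrative: the PR actually goes through Sentry's LLM proxy/Seer, and the client, model name, and function name below are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # stand-in client; the real request goes through the LLM proxy


def generate_issue_details(prompt: str) -> str | None:
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0.3,  # low-ish: mostly deterministic output, with some variety
            max_tokens=750,   # output cap; per the discussion above, exceeding it fails the request
        )
    except Exception:
        # Any failure (token limits, timeouts, etc.) means the caller keeps the default title/description.
        return None
    return response.choices[0].message.content
```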
…al issue AI generation

Replace NamedTuple with TypedDict for GeneratedExternalIssueDetails to better represent the dict-based return type from Seer. Add exc_info=True to error and warning logs so tracebacks are captured in Sentry. Fix test mock that would KeyError on empty TypedDict construction.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
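A minimal sketch of the return type that commit describes; the field names are inferred from the title/description keys used elsewhere in this thread:

```python
from typing import TypedDict


class GeneratedExternalIssueDetails(TypedDict):
    title: str
    description: str
```

The exc_info=True part of the commit just means the error/warning logger calls attach the active traceback when generation fails.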
…-to-generate-issue-title-and-description-when
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Reviewed by Cursor Bugbot for commit 9410c88.
```python
title = content.get("title")
description = content.get("description")
if title and description:
    return {"title": title.strip(), "description": description.strip()}
```
Truthiness check before strip causes inconsistent fallback
Low Severity
The if title and description: truthiness check on line 112 runs on pre-stripped values, but the returned values on line 113 are post-strip(). If the LLM returns a whitespace-only title (truthy pre-strip, falsy post-strip) but a valid description, the function returns {"title": "", "description": "real desc"}. The caller in issues.py then independently checks each field's truthiness, causing the title to fall back to default while the description uses the LLM-formatted template — an inconsistent pairing.
Additional Locations (1)
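One possible fix (a sketch, not the actual patch) is to strip before the truthiness check, so the helper either returns a complete pair or None and the caller falls back to both defaults together:

```python
title = (content.get("title") or "").strip()
description = (content.get("description") or "").strip()
if title and description:
    return {"title": title, "description": description}
return None
```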
```python
return None

title = content.get("title")
description = content.get("description")
```
Non-dict JSON response causes uncaught AttributeError
Low Severity
If json.loads(content) succeeds but returns a non-dict type (e.g., a list, string, or number), the subsequent content.get("title") call raises an AttributeError. The local except block only catches json.JSONDecodeError, TypeError, and ValueError, so this error escapes to the generic except Exception in maybe_generate_external_issue_details, losing the specific logging context (group_id, viewer_context) and logging a misleading error message.
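A sketch of one way to guard against that inside the local handler, so the specific group_id/viewer_context logging is kept; the helper name and raw-content variable are assumptions, and the flow mirrors the excerpt above:

```python
import json


def parse_generated_details(raw_content):  # hypothetical helper name
    try:
        content = json.loads(raw_content)
    except (json.JSONDecodeError, TypeError, ValueError):
        return None

    # Guard against valid JSON that is not an object (a bare string, list, number, ...),
    # keeping the failure inside this handler instead of the generic except Exception.
    if not isinstance(content, dict):
        return None

    title = content.get("title")
    description = content.get("description")
    # ... continue as in the excerpt above
```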


Adds a flag which will support making an LLMGenerateRequest when a user attempts to create an issue link.
A few design decisions:
sentry/src/sentry/rules/actions/integrations/create_ticket/utils.py, lines 146 to 152 in 9644ccb
Examples