Replays Self-Serve Bulk Delete System#77

Open
camcalaquian wants to merge 32 commits into
tenki/base-5from
tenki/head-5

Conversation

@camcalaquian

@camcalaquian camcalaquian commented Apr 28, 2026

armenzg and others added 30 commits June 20, 2025 12:49
…o 'low' (getsentry#93927)"

This reverts commit 8d04522.

Co-authored-by: roaga <47861399+roaga@users.noreply.github.com>
Missed in the initial commit, leading to some relevant logs being
unannotated.
)

We have had a few tasks get killed at 10% rollout.
Also add a test so that this doesn't happen again.
Fixes DE-129 and DE-156

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
These transitions should be matching
getsentry#93946)

Use `project_id` on the replay record instead of the URL (where it does
not always exist).

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: getsantry[bot] <66042841+getsantry[bot]@users.noreply.github.com>
Also fixed `replay.view_html` -> `replay.view-html`

---------

Co-authored-by: Michelle Zhang <56095982+michellewzhang@users.noreply.github.com>
The conditions associated with a DCG can change over time, and it's good
if we can be completely confident that they're consistent within a given
task execution.
)

This is unused and most regex experiments have required broader changes
to ensure that regexes are evaluated in a specific order (ex:
traceparent). Removing this for now to simplify the code and very
slightly improve runtime performance.
From some testing (on feedback lists of all different lengths), this
prompt seems to work better. It doesn't write overly long sentences and
also does a better job at "summarizing" versus just mentioning a few
specific topics and leaving out others.
…try#93809)

Just remove a couple custom Flex* classes in favor of the Flex primitive
This has been killed a few times.

Refs SENTRY-42M7
…n table (getsentry#93892)


[ticket](https://linear.app/getsentry/issue/ID-156/grouping-info-remove-type-field-from-ui)
The Type field in the Grouping Info section of the issue details page
was redundant.
This removes the Type row from all variant types while keeping the
underlying data structure intact.

before
![Screenshot 2025-06-20 at 12 00
54 PM](https://github.com/user-attachments/assets/97ca72da-0a52-4446-9825-cd4fcb505adf)

after
![Screenshot 2025-06-20 at 11 59
29 AM](https://github.com/user-attachments/assets/a4284d2b-c9f5-442f-b010-7fe72a598e39)
…entry#93902)

### Changes
Related to this PR: getsentry#93810. This
is part 1 of the change, which is pulling out the new component and just
adding it to the repo. Also includes some simplification of the logic in
the base component.

Part 2 will be replacing tables in widgets.

### Before/After

There is no UI change as the table is not being used yet. There is a new
story page for the component.
…etsentry#93943)

To prevent this issue from becoming too noisy, add a noise config.
Unfortunately, 'event_data' went from being the variable for current
event context to being the complete parsed data from Redis, and we
continued logging it per group.
That's more data than we should arguably be logging even once, let alone
per group.
Co-authored-by: Abdullah Khan <abdullahkhan@PG9Y57YDXQ.local>
Adds some simple analytics to our endpoint so we can begin building a
dashboard in Amplitude.
Previously, explore supported multiple y-axes per chart, so each
visualize supported multiple y-axes. That functionality has since been
removed for simplicity, so update the types here to match. Keep in mind
that saved queries still store them as an array, so when
serializing/deserializing we still need to treat it as an array.
…try#93739)

We'll need the `useGetTraceItemAttributeKeys` hook in other places, so
this refactors it so that it can be exported.
mrduncan and others added 2 commits June 20, 2025 13:20
When the max segment ID is null, the process fails. We should exit early:
if there aren't any segments to delete, there's nothing to do.
@tenki-reviewer

tenki-reviewer Bot commented Apr 28, 2026

Tenki Code Review - Complete

Files Reviewed: 106
Findings: 17

By Severity:

  • 🔴 High: 7
  • 🟠 Medium: 6
  • 🟡 Low: 4

PR #77 introduces 26 significant issues spanning code-quality, performance, security, and logic bugs across 47 changed files. The most critical issues are hardcoded empty data in feature-flagged table components, validation bypass bugs in browser reporting, wrong dictionary keys preventing updates, and cross-org cache pollution.

Files Reviewed (106 files)
devservices/config.yml
src/sentry/constants.py
src/sentry/feedback/usecases/feedback_summaries.py
src/sentry/grouping/parameterization.py
src/sentry/hybridcloud/tasks/deliver_webhooks.py
src/sentry/integrations/github/integration.py
src/sentry/integrations/gitlab/integration.py
src/sentry/integrations/source_code_management/commit_context.py
src/sentry/issues/endpoints/browser_reporting_collector.py
src/sentry/issues/grouptype.py
src/sentry/migrations/0917_convert_org_saved_searches_to_views.py
src/sentry/migrations/0920_convert_org_saved_searches_to_views_revised.py
src/sentry/options/defaults.py
src/sentry/preprod/__init__.py
src/sentry/preprod/analytics.py
src/sentry/preprod/api/endpoints/organization_preprod_artifact_assemble.py
src/sentry/projectoptions/defaults.py
src/sentry/replays/endpoints/project_replay_summarize_breadcrumbs.py
src/sentry/replays/usecases/delete.py
src/sentry/snuba/ourlogs.py
src/sentry/tasks/auth/check_auth.py
src/sentry/workflow_engine/endpoints/validators/base/detector.py
src/sentry/workflow_engine/processors/delayed_workflow.py
src/sentry/workflow_engine/processors/workflow.py
static/app/components/codeSnippet.tsx
static/app/components/codecov/branchSelector/branchSelector.tsx
static/app/components/codecov/datePicker/dateSelector.tsx
static/app/components/codecov/integratedOrgSelector/integratedOrgSelector.tsx
static/app/components/codecov/repoPicker/repoSelector.tsx
static/app/components/core/button/styles.chonk.tsx
static/app/components/events/eventAttachments.tsx
static/app/components/events/groupingInfo/groupingVariant.tsx
static/app/components/events/interfaces/spans/newTraceDetailsHeader.tsx
static/app/components/feedback/feedbackSummary.tsx
static/app/components/feedback/list/useFeedbackSummary.tsx
static/app/components/group/times.tsx
static/app/components/replays/breadcrumbs/breadcrumbItem.tsx
static/app/components/replays/timeAndScrubberGrid.tsx
static/app/components/scrollCarousel.tsx
static/app/utils/analytics/replayAnalyticsEvents.tsx
static/app/views/alerts/list/rules/alertRuleStatus.tsx
static/app/views/alerts/list/rules/row.tsx
static/app/views/codecov/tests/onboardingSteps/addUploadToken.tsx
static/app/views/dashboards/widgetCard/chart.tsx
static/app/views/dashboards/widgets/common/types.tsx
static/app/views/dashboards/widgets/tableWidget/defaultTableCellRenderers.tsx
static/app/views/dashboards/widgets/tableWidget/fixtures/sampleHTTPRequestTableData.ts
static/app/views/dashboards/widgets/tableWidget/tableWidgetVisualization.spec.tsx
static/app/views/dashboards/widgets/tableWidget/tableWidgetVisualization.stories.tsx
static/app/views/dashboards/widgets/tableWidget/tableWidgetVisualization.tsx
static/app/views/explore/charts/index.tsx
static/app/views/explore/components/traceItemSearchQueryBuilder.tsx
static/app/views/explore/contexts/pageParamsContext/aggregateFields.tsx
static/app/views/explore/contexts/pageParamsContext/index.spec.tsx
static/app/views/explore/contexts/pageParamsContext/index.tsx
static/app/views/explore/contexts/pageParamsContext/sortBys.tsx
static/app/views/explore/contexts/pageParamsContext/visualizes.spec.tsx
static/app/views/explore/contexts/pageParamsContext/visualizes.tsx
static/app/views/explore/hooks/useAddToDashboard.tsx
static/app/views/explore/hooks/useAnalytics.tsx
static/app/views/explore/hooks/useExploreAggregatesTable.tsx
static/app/views/explore/hooks/useExploreTimeseries.tsx
static/app/views/explore/hooks/useGetTraceItemAttributeKeys.tsx
static/app/views/explore/hooks/useGetTraceItemAttributeValues.spec.tsx
static/app/views/explore/hooks/useGetTraceItemAttributeValues.tsx
static/app/views/explore/hooks/useTopEvents.tsx
static/app/views/explore/hooks/useTraceItemAttributeKeys.tsx
static/app/views/explore/spans/spansTab.tsx
static/app/views/explore/tables/aggregateColumnEditorModal.spec.tsx
static/app/views/explore/tables/aggregateColumnEditorModal.tsx
static/app/views/explore/toolbar/index.spec.tsx
static/app/views/explore/toolbar/toolbarSaveAs.tsx
static/app/views/explore/toolbar/toolbarSortBy.tsx
static/app/views/explore/toolbar/toolbarVisualize.tsx
static/app/views/explore/types.tsx
static/app/views/explore/utils.spec.tsx
static/app/views/explore/utils.tsx
static/app/views/feedback/feedbackListPage.tsx
static/app/views/insights/common/components/chartActionDropdown.tsx
static/app/views/insights/pages/transactionNameSearchBar.tsx
static/app/views/organizationStats/teamInsights/teamMisery.tsx
static/app/views/performance/newTraceDetails/traceDrawer/details/span/eapSections/attributes.tsx
static/app/views/performance/newTraceDetails/traceTabsAndVitals.tsx
static/app/views/performance/newTraceDetails/traceWaterfall.tsx
static/app/views/profiling/profileSummary/index.tsx
static/app/views/replays/detail/ai/index.tsx
static/app/views/settings/dynamicSampling/organizationSampleRateInput.tsx
static/app/views/settings/organizationAuditLog/auditLogList.tsx
static/app/views/settings/organizationIntegrations/detailedView/integrationLayout.tsx
static/app/views/settings/project/projectOwnership/codeOwnerFileTable.tsx
static/gsAdmin/views/instanceLevelOAuth/instanceLevelOAuthDetails.tsx
tests/js/fixtures/tabularColumn.ts
tests/js/fixtures/tabularColumns.ts
tests/sentry/api/endpoints/test_browser_reporting_collector.py
tests/sentry/api/endpoints/test_project_details.py
tests/sentry/api/serializers/test_project.py
tests/sentry/grouping/test_parameterization.py
tests/sentry/integrations/github/tasks/test_pr_comment.py
tests/sentry/integrations/gitlab/tasks/test_pr_comment.py
tests/sentry/migrations/test_0917_convert_org_saved_searches_to_views.py
tests/sentry/replays/tasks/test_delete_replays_bulk.py
tests/sentry/replays/test_project_replay_summarize_breadcrumbs.py
tests/sentry/workflow_engine/endpoints/test_organization_detector_details.py
tests/sentry/workflow_engine/endpoints/test_organization_detector_index.py
tests/sentry/workflow_engine/processors/test_delayed_workflow.py
tests/snuba/api/endpoints/test_organization_events_stats.py


@tenki-reviewer tenki-reviewer Bot left a comment


Review Summary

This PR (#77) modifies 47 files with changes to dashboards, explore views, integrations, and infrastructure services. The review identified 26 findings (9 high, 10 medium, 7 low severity) requiring attention before merging.

Critical Issues (High Severity)

1. Feature-Flagged Table Widget Renders Empty (finding-004)

  • Files: static/app/views/dashboards/widgetCard/chart.tsx (lines 165–174)
  • Impact: Users with the 'use-table-widget-visualization' feature flag see blank tables
  • Root Cause: Component receives hardcoded empty columns=[] and tableData={{ data: [], ... }} instead of actual query results
  • Fix: Pass result.data and result.meta to the component

2. Double Newlines in PR Comments (finding-001)

  • Files: src/sentry/integrations/source_code_management/commit_context.py (line 143)
  • Impact: GitHub/GitLab PR comments display malformed bullet lists with blank lines between items
  • Root Cause: Template string ends with `\n`, but callers already join items with `"\n"`, causing `\n\n` separators
  • Fix: Remove the trailing `\n` from MERGED_PR_SINGLE_ISSUE_TEMPLATE

3. Browser Report Validation Bypass on Zero Values (finding-002)

  • Files: src/sentry/issues/endpoints/browser_reporting_collector.py (lines 52, 58)
  • Impact: Malformed browser reports with both age and timestamp fields incorrectly pass validation when either is 0
  • Root Cause: Validation uses falsy checks (if get('age'):) instead of is not None checks
  • Fix: Replace with is not None comparisons

4. Detector Type Update Never Applied (finding-003)

  • Files: src/sentry/workflow_engine/endpoints/validators/base/detector.py (line 64)
  • Impact: Detector type updates via the API are silently dropped
  • Root Cause: update() method looks for key "detector_type" but serializer produces "type"
  • Fix: Change validated_data.get("detector_type", ...) to validated_data.get("type", ...)

5. Webhook Deadline Mismatch (finding-005)

  • Files: src/sentry/hybridcloud/tasks/deliver_webhooks.py (line 239)
  • Impact: Webhook delivery tasks are killed by the taskworker before completing their internal deadline
  • Root Cause: processing_deadline_duration=120s is less than internal loop timeout (180s)
  • Fix: Set processing_deadline_duration=300 to match the non-parallel variant

6. Cross-Organization Cache Pollution (finding-006)

  • Files: static/app/views/explore/hooks/useTraceItemAttributeKeys.tsx (lines 39–42)
  • Impact: Switching organizations may return cached attribute data from a different organization
  • Root Cause: Cache key omits organization slug; the queryFn URL includes the org slug but key doesn't
  • Fix: Include organization.slug in the queryKey array

Performance Issues (Medium/High Confidence)

7. N+1 Queries in Environment Info Fetching (finding-007)

  • Files: src/sentry/integrations/source_code_management/commit_context.py (lines 584–597)
  • Impact: PR comment generation fires up to 10 external queries (5 Snuba + 5 DB) for a single operation
  • Root Cause: get_environment_info(issue) called per-issue in a loop without pre-batching
  • Fix: Pre-fetch environment data in bulk or extract from cached group data

8. Event Data Paired with Wrong IDs (finding-008)

  • Files: src/sentry/replays/endpoints/project_replay_summarize_breadcrumbs.py
  • Impact: Error event titles/messages paired with wrong event IDs in AI breadcrumb summaries
  • Root Cause: nodestore.backend.get_multi() returns dict with uncertain ordering; zip(error_ids, events.values()) creates positional pairs that may not align
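
The pairing hazard described above can be reproduced with a plain dict (the data and ids here are hypothetical, not nodestore's actual return shape):

```python
# Sketch of the positional-pairing hazard: if a multi-get returns a dict
# whose iteration order differs from the requested id order, zip()
# attaches events to the wrong ids.
requested_ids = ["a", "b", "c"]
events = {  # response keyed by id, arriving in a different order
    "c": {"title": "error C"},
    "a": {"title": "error A"},
    "b": {"title": "error B"},
}

mispaired = dict(zip(requested_ids, events.values()))
# mispaired["a"]["title"] is "error C" — the wrong event under id "a"

# Safer: index by key instead of relying on iteration order.
paired = {eid: events[eid] for eid in requested_ids if eid in events}
# paired["a"]["title"] is "error A"
```

Looking each id up by key makes the pairing independent of whatever order the backend happens to return.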

9. Analytics Recorded Before Authorization (finding-009)

  • Files: src/sentry/preprod/api/endpoints/organization_preprod_artifact_assemble.py (lines 81–88)
  • Impact: Metrics recorded for unauthorized requests, inflating usage statistics
  • Root Cause: analytics.record() fires before feature flag check

Data Model & Logic Issues

10. Broken Optional Chaining on tableData?.meta (finding-011)

  • Files: static/app/views/dashboards/widgets/tableWidget/tableWidgetVisualization.tsx (lines 94–99)
  • Impact: Runtime TypeError if tableData is undefined
  • Root Cause: tableData?.meta.fields uses ?. on tableData but not on meta

11. Workflow Deleted Before Processing (finding-018, finding-019)

  • Files: src/sentry/workflow_engine/processors/delayed_workflow.py (lines 366, 422)
  • Impact: KeyError crash if workflow is deleted between queuing and processing
  • Root Cause: workflows_to_envs[workflow_id] assumes workflow exists; use .get() instead

12. Asymmetric Data Models (finding-002, finding-016)

  • Files: static/app/views/explore/contexts/pageParamsContext/visualizes.tsx, index.tsx
  • Impact: Potential ref-count mismatches between readable and writable field paths
  • Root Cause: New Visualize.yAxis (string) differs from legacy BaseVisualize.yAxes (array); asymmetric enumeration in field tracking

Minor Issues (Low Severity)

13. Misleading key Prop on React Root Element (finding-012)

  • Files: static/app/views/dashboards/widgets/tableWidget/defaultTableCellRenderers.tsx (line 84)
  • Impact: Developer confusion; prop has no effect
  • Fix: Remove the key prop

14. Dead Utility Function After Refactor (finding-016)

  • Files: src/sentry/grouping/parameterization.py (lines 335–343)
  • Impact: Unreachable code; maintenance burden
  • Fix: Delete the _handle_regex_match helper

15. Unnecessary Migration Hints (finding-017)

  • Files: src/sentry/migrations/0917_convert_org_saved_searches_to_views.py
  • Impact: No-op migration still acquires table locks on production
  • Fix: Clear the hints parameter

16. Overly Generic Query Guard (finding-023)

  • Files: static/app/components/feedback/list/useFeedbackSummary.tsx (line 37)
  • Impact: No-op condition; always evaluates to true
  • Fix: Remove redundant Boolean(normalizedDateRange) guard

Files with Multiple Issues

  • src/sentry/integrations/source_code_management/commit_context.py: 2 high + 1 medium
  • src/sentry/issues/endpoints/browser_reporting_collector.py: 1 high + 1 medium
  • src/sentry/workflow_engine/processors/delayed_workflow.py: 1 high + 2 medium
  • Frontend explore views: 5 issues across context, hooks, and models

Recommendation

Request Changes — The PR contains 9 high-severity bugs that must be fixed before merging. Most fixes are one-liners (wrong key, missing parameter, removed newline). Several require slightly more investigation (env info batching, event data ordering, cache invalidation).



ISSUE_TITLE_MAX_LENGTH = 50
MERGED_PR_SINGLE_ISSUE_TEMPLATE = "* ‼️ [**{title}**]({url}){environment}\n"

🔴 Double newline between issue list items due to `\n` in template and `"\n"` join
(bug)

In commit_context.py line 143, MERGED_PR_SINGLE_ISSUE_TEMPLATE is defined with a trailing `\n`:

MERGED_PR_SINGLE_ISSUE_TEMPLATE = "* ‼️ [**{title}**]({url}){environment}\n"

Both GitHubPRCommentWorkflow.get_comment_body (github/integration.py line 404) and GitlabPRCommentWorkflow.get_comment_body (gitlab/integration.py line 252) join the rendered items with "\n":

issue_list = "\n".join([self.get_merged_pr_single_issue_template(...) for issue in issues])

This produces `item1\n\nitem2\n\nitem3\n` — a blank line between each item and a trailing blank line after the last one. The old template (MERGED_PR_SINGLE_ISSUE_TEMPLATE = "- ‼️ {title} ...") had no trailing newline, so the `"\n".join(...)` rendered correctly. The new template breaks the tight bullet list rendering in both GitHub and GitLab PR comment markdown.

💡 Suggestion: Remove the trailing \n from MERGED_PR_SINGLE_ISSUE_TEMPLATE. The join already provides the separator between items; the template should not embed its own trailing newline.

Suggested change
MERGED_PR_SINGLE_ISSUE_TEMPLATE = "* ‼️ [**{title}**]({url}){environment}\n"
MERGED_PR_SINGLE_ISSUE_TEMPLATE = "* ‼️ [**{title}**]({url}){environment}"
📋 Prompt for AI Agents

In src/sentry/integrations/source_code_management/commit_context.py at line 143, remove the trailing \n from MERGED_PR_SINGLE_ISSUE_TEMPLATE. The current value is "* ‼️ [**{title}**]({url}){environment}\n" — change it to "* ‼️ [**{title}**]({url}){environment}". Both GitHub and GitLab callers already join items with "\n", so the embedded trailing newline in the template causes double blank lines between every item in the PR comment output.
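
The interaction can be demonstrated standalone (the template strings here are hypothetical stand-ins, not the real Sentry constants):

```python
# Minimal sketch of why a per-item trailing "\n" combined with a
# "\n".join() produces blank lines between markdown bullets.
TEMPLATE_WITH_NEWLINE = "* {title}\n"  # item embeds its own newline
TEMPLATE_FIXED = "* {title}"           # separator left entirely to join()

titles = ["issue one", "issue two", "issue three"]

broken = "\n".join(TEMPLATE_WITH_NEWLINE.format(title=t) for t in titles)
fixed = "\n".join(TEMPLATE_FIXED.format(title=t) for t in titles)

print("\n\n" in broken)  # True: rendered markdown gets a blank line per item
print("\n\n" in fixed)   # False: tight bullet list
```

Since `join()` already supplies one separator between items, any newline embedded in the items themselves doubles up.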

Comment on lines +52 to +59
if self.initial_data.get("age"):
raise serializers.ValidationError("If timestamp is present, age must be absent")
return value

def validate_age(self, value: int) -> int:
"""Validate that age is present, but not timestamp."""
if self.initial_data.get("timestamp"):
raise serializers.ValidationError("If age is present, timestamp must be absent")

🔴 validate_timestamp/validate_age silently skip mutual-exclusion check when age=0 or timestamp=0 (bug)

In BrowserReportSerializer, validate_timestamp uses self.initial_data.get('age') to decide whether to reject a report that carries both timestamp and age. If a browser sends age=0 (a report created 0 ms ago, which is technically valid per the spec), Python evaluates get('age') as 0, which is falsy, so the check is skipped. The same applies symmetrically in validate_age: a timestamp=0 value (Unix epoch, allowed by min_value=0) bypasses the guard. A request containing both fields is supposed to be rejected as malformed, but will pass validation silently.

Lines 52 and 58 are the two affected checks.

💡 Suggestion: Use is not None instead of truthiness tests so that 0 is treated as a present value:

if self.initial_data.get('age') is not None:
    raise serializers.ValidationError('If timestamp is present, age must be absent')

and

if self.initial_data.get('timestamp') is not None:
    raise serializers.ValidationError('If age is present, timestamp must be absent')
Suggested change
if self.initial_data.get("age"):
raise serializers.ValidationError("If timestamp is present, age must be absent")
return value
def validate_age(self, value: int) -> int:
"""Validate that age is present, but not timestamp."""
if self.initial_data.get("timestamp"):
raise serializers.ValidationError("If age is present, timestamp must be absent")
if self.initial_data.get("age") is not None:
raise serializers.ValidationError("If timestamp is present, age must be absent")
return value
def validate_age(self, value: int) -> int:
"""Validate that age is present, but not timestamp."""
if self.initial_data.get("timestamp") is not None:
raise serializers.ValidationError("If age is present, timestamp must be absent")
📋 Prompt for AI Agents

In src/sentry/issues/endpoints/browser_reporting_collector.py, lines 52 and 58, the mutual-exclusion checks for 'age' and 'timestamp' use falsy tests (.get('age') / .get('timestamp')), which silently skip the check when the value is 0. Change both to use is not None comparisons so that the value 0 is treated as 'field is present'. Specifically: line 52 if self.initial_data.get("age"):if self.initial_data.get("age") is not None: and line 58 if self.initial_data.get("timestamp"):if self.initial_data.get("timestamp") is not None:.
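
The falsy-zero pitfall is easy to reproduce with a plain dict standing in for the serializer's initial_data (the serializer itself is assumed, not reproduced):

```python
# Sketch of why a truthiness check misses a present-but-zero field.
initial_data = {"timestamp": 1718900000, "age": 0}  # both fields present

# Truthiness check: age == 0 is falsy, so the malformed payload slips through.
flagged_by_truthiness = bool(initial_data.get("age"))

# Presence check: 0 counts as present; only a missing key yields None.
flagged_by_presence = initial_data.get("age") is not None

print(flagged_by_truthiness)  # False — mutual-exclusion check skipped
print(flagged_by_presence)    # True — both fields correctly detected
```

The general rule: when 0, "", or [] are legitimate values, test for presence with `is not None` rather than truthiness.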

@@ -60,6 +62,22 @@ def data_conditions(self) -> BaseDataConditionValidator:
def update(self, instance: Detector, validated_data: dict[str, Any]):
instance.name = validated_data.get("name", instance.name)
instance.type = validated_data.get("detector_type", instance.group_type).slug

🔴 Wrong key 'detector_type' in update() prevents type from ever being updated (bug)

In BaseDetectorTypeValidator.update() at line 64, validated_data.get("detector_type", instance.group_type) will always fall back to instance.group_type because the serializer field is declared as type (line 35), so validated_data contains the key "type", not "detector_type". The create() method correctly uses validated_data["type"].slug (line 133), confirming the key is "type". As written, the detector's type can never be changed via the update endpoint — any type value in the request body is silently dropped.

💡 Suggestion: Replace "detector_type" with "type" to match the serializer field name used in create() and in validated_data.

Suggested change
instance.type = validated_data.get("detector_type", instance.group_type).slug
instance.type = validated_data.get("type", instance.group_type).slug
📋 Prompt for AI Agents

In src/sentry/workflow_engine/endpoints/validators/base/detector.py at line 64, the key 'detector_type' does not exist in validated_data (the serializer field is declared as type = serializers.CharField()). Change validated_data.get("detector_type", instance.group_type) to validated_data.get("type", instance.group_type) so the detector's type can actually be updated. The create() method on the same class already uses validated_data["type"].slug as the correct key.
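
The silent fallback can be shown with plain dicts (simplified to strings; the real code deals in group-type objects with a .slug):

```python
# Sketch of why looking up the wrong key always hits the default.
validated_data = {"type": "metric_issue"}  # key the serializer actually emits
current_type = "error"                     # stand-in for instance.group_type

never_updated = validated_data.get("detector_type", current_type)  # wrong key
updated = validated_data.get("type", current_type)                 # right key

print(never_updated)  # "error" — the requested update is silently dropped
print(updated)        # "metric_issue"
```

Because `dict.get` returns the default on any missing key rather than raising, a key-name mismatch like this produces no error, only silently unchanged data.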

Comment on lines +165 to +174
<TableWidgetVisualization
columns={[]}
tableData={{
data: [],
meta: {
fields: {},
units: {},
},
}}
/>

🔴 Feature-flagged TableWidgetVisualization renders with empty data instead of query results (bug)

In chart.tsx lines 165-174, when the 'use-table-widget-visualization' feature flag is enabled, TableWidgetVisualization is rendered with a hardcoded columns={[]} and stub tableData (data: [], meta: {fields: {}, units: {}}). The actual result variables (result.data, result.meta, fields, loading, etc.) are available in scope but never passed to the component. Users with this flag enabled will always see a blank table.

💡 Suggestion: Pass actual result data to TableWidgetVisualization. Use tableData={{ data: result.data, meta: { fields: result.meta?.fields ?? {}, units: result.meta?.units ?? {} } }} and remove the hardcoded columns={[]} to let the component derive columns from the data.

📋 Prompt for AI Agents

In static/app/views/dashboards/widgetCard/chart.tsx around lines 165-174, pass the actual result data to TableWidgetVisualization component. Change tableData from stub empty data to: tableData={{ data: result.data, meta: { fields: result.meta?.fields ?? {}, units: result.meta?.units ?? {} } }} and remove columns={[]} so it derives columns from the data.

silo_mode=SiloMode.CONTROL,
taskworker_config=TaskworkerConfig(
namespace=hybridcloud_control_tasks,
processing_deadline_duration=120,

🔴 drain_mailbox_parallel processing_deadline_duration (120s) is shorter than its internal loop timeout (180s) (bug)

drain_mailbox_parallel uses BATCH_SCHEDULE_OFFSET = timedelta(minutes=BACKOFF_INTERVAL) = timedelta(minutes=3) = 180 seconds as its internal deadline. The new processing_deadline_duration=120 tells the taskworker to consider the task failed after only 120 seconds. Because 120 < 180, the taskworker will terminate the task while the task's own loop still believes it has 60 more seconds to deliver webhooks. This can cause partial mailbox drains to be incorrectly treated as failures, potentially causing duplicate retries or message-ordering problems.

By contrast, drain_mailbox (the non-parallel variant) correctly sets processing_deadline_duration=300 > 180s, so only the parallel variant is broken.

💡 Suggestion: Set processing_deadline_duration for drain_mailbox_parallel to at least 180 seconds (matching BATCH_SCHEDULE_OFFSET) plus headroom. Using 300 (matching drain_mailbox) is appropriate:

processing_deadline_duration=300,

Alternatively, define a named constant derived from BACKOFF_INTERVAL to keep the two values in sync automatically.

Suggested change
processing_deadline_duration=120,
processing_deadline_duration=300,
📋 Prompt for AI Agents

In src/sentry/hybridcloud/tasks/deliver_webhooks.py at line 239, the TaskworkerConfig for drain_mailbox_parallel sets processing_deadline_duration=120, but the function's internal while-loop runs for up to BATCH_SCHEDULE_OFFSET (timedelta(minutes=3) = 180 seconds). Since 120 < 180, the taskworker kills the task before the internal deadline fires. Change processing_deadline_duration=120 to processing_deadline_duration=300 (matching the drain_mailbox task) to ensure the taskworker deadline is always longer than the internal loop timeout.

Comment on lines +599 to +606
@staticmethod
def get_merged_pr_single_issue_template(title: str, url: str, environment: str) -> str:
truncated_title = PRCommentWorkflow._truncate_title(title)
return MERGED_PR_SINGLE_ISSUE_TEMPLATE.format(
title=truncated_title,
url=url,
environment=environment,
)

🟡 @staticmethod get_merged_pr_single_issue_template hardcodes PRCommentWorkflow._truncate_title, bypassing subclass overrides (bug)

In commit_context.py lines 599–606, get_merged_pr_single_issue_template is decorated as @staticmethod and references PRCommentWorkflow._truncate_title directly by class name:

@staticmethod
def get_merged_pr_single_issue_template(title: str, url: str, environment: str) -> str:
    truncated_title = PRCommentWorkflow._truncate_title(title)  # hardcoded class ref
    return MERGED_PR_SINGLE_ISSUE_TEMPLATE.format(...)

Since _truncate_title is also a @staticmethod, the hardcoded class reference means any subclass that overrides _truncate_title will not have its override invoked when get_merged_pr_single_issue_template is called. The fix is to either call _truncate_title(title) as a bare function reference (which also bypasses inheritance but is at least explicit), or remove the @staticmethod decorator and call cls._truncate_title(title) via @classmethod, which correctly dispatches through the MRO.

💡 Suggestion: Convert get_merged_pr_single_issue_template from a @staticmethod to a @classmethod, accepting cls as first parameter, and call cls._truncate_title(title) so that subclass overrides of _truncate_title are respected. Alternatively, if subclassing _truncate_title is not a concern, call the module-level helper directly to make the intent explicit.

📋 Prompt for AI Agents

In src/sentry/integrations/source_code_management/commit_context.py at lines 599-606, get_merged_pr_single_issue_template is a @staticmethod that calls PRCommentWorkflow._truncate_title(title) by hardcoded class name. Change it to a @classmethod by replacing @staticmethod with @classmethod, adding cls as the first parameter, and calling cls._truncate_title(title) so that subclasses overriding _truncate_title have their implementation respected. Update all three call sites in github/integration.py (line 406) and gitlab/integration.py (line 254) — they already use self.get_merged_pr_single_issue_template(...) so instance dispatch will automatically route through cls.
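
The dispatch difference can be sketched with minimal classes (names here are illustrative, not the real Sentry classes):

```python
# Sketch of static-vs-class dispatch: a hardcoded class reference inside
# a @staticmethod pins the helper to the base class, while @classmethod
# routes through the MRO and picks up subclass overrides.
class Base:
    @staticmethod
    def _truncate(title: str) -> str:
        return title[:5]

    @staticmethod
    def render_static(title: str) -> str:
        return Base._truncate(title)  # pinned: always Base's helper

    @classmethod
    def render_class(cls, title: str) -> str:
        return cls._truncate(title)   # dispatches: subclass override wins

class Child(Base):
    @staticmethod
    def _truncate(title: str) -> str:
        return title[:2]

print(Child.render_static("abcdef"))  # "abcde" — override ignored
print(Child.render_class("abcdef"))   # "ab" — override respected
```

Note that call sites using `self.method(...)` work unchanged with either decorator; only the internal helper lookup determines whether the override is honored.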

Comment on lines +111 to +114
logger.warning(
"browser_report_validation_failed",
extra={"validation_errors": serializer.errors, "raw_report": report},
)

🟡 Raw attacker-controlled report data logged on validation failure (security)

In the validation-failure branch of BrowserReportingCollectorEndpoint.post (line 111-114), when a report fails serializer validation, the full raw report is logged as 'raw_report': report. This is an unvalidated, attacker-supplied dict from any unauthenticated caller. The body field is an unbounded DictField(), meaning an attacker can include arbitrarily nested/large JSON in the body of a crafted report, cause it to fail validation on another field (e.g. by omitting the required url), and have the entire payload written to logs. This allows log pollution with attacker-controlled content and potential data exposure if the logs are later analyzed or stored.

💡 Suggestion: Change the validation-failure log to only include serializer.errors and safe metadata (e.g., the report's type field if it is a string, or just the index), not the full raw report dict. Example: extra={'validation_errors': serializer.errors, 'report_index': raw_data.index(report)}.

Suggested change
logger.warning(
"browser_report_validation_failed",
extra={"validation_errors": serializer.errors, "raw_report": report},
)
logger.warning(
"browser_report_validation_failed",
extra={"validation_errors": serializer.errors},
)
📋 Prompt for AI Agents

In src/sentry/issues/endpoints/browser_reporting_collector.py at lines 111-114, the logger.warning call includes 'raw_report': report which is attacker-supplied unvalidated data on an unauthenticated endpoint. Replace extra={'validation_errors': serializer.errors, 'raw_report': report} with extra={'validation_errors': serializer.errors} to prevent attacker-controlled content from being written to logs. The raw report is unnecessary for diagnosing validation failures since serializer.errors already captures what went wrong.

const unit = tableData.meta.units?.[columnKey] as string;

return (
<div key={`${rowIndex}-${columnIndex}:${column.name}`}>

🟡 key prop on root element of renderBodyCell is silently ignored (bug)

In defaultTableCellRenderers.tsx line 84, the returned div has a key prop on the root element. React only uses key for reconciliation in arrays; a key on the root returned element of a render callback is ignored. GridEditable already handles keying with key={`${col.key}${i}`} at a higher level.

💡 Suggestion: Remove the key prop from the div on line 84 since it has no effect on the root returned element.

Suggested change

```tsx
// Before
<div key={`${rowIndex}-${columnIndex}:${column.name}`}>

// After
<div>
```
📋 Prompt for AI Agents

In static/app/views/dashboards/widgets/tableWidget/defaultTableCellRenderers.tsx at line 84, remove the key prop from the root div: change <div key={`${rowIndex}-${columnIndex}:${column.name}`}> to <div>. The key prop on a render callback's root return value is silently ignored; reconciliation keying is already handled by GridEditable.


return attributes;
},
[api, organization, selection, traceItemType, projectIds, type]

🟠 getTraceItemAttributeKeys callback has selection as dependency causing unnecessary re-creation (bug)

In useGetTraceItemAttributeKeys.tsx line 113, the useCallback dependency array includes selection (the full page filter selection object). usePageFilters typically returns a new selection reference on every render cycle. This causes getTraceItemAttributeKeys to be recreated on every render. While TanStack Query won't re-run the fetch just because queryFn changed (it uses queryKey to determine staleness), the repeated recreation of this callback has a small performance cost and could cause unexpected behavior if the callback is passed as a prop. The dependency should be narrowed to selection.datetime and selection.projects rather than the entire selection object.

💡 Suggestion: Replace selection with selection.datetime and selection.projects in the useCallback dependency array, so the callback is only recreated when the actually-used fields change.

Suggested change

```tsx
// Before
[api, organization, selection, traceItemType, projectIds, type]

// After
[api, organization, selection.datetime, selection.projects, traceItemType, projectIds, type]
```
📋 Prompt for AI Agents

In static/app/views/explore/hooks/useGetTraceItemAttributeKeys.tsx line 113, the useCallback dependency array includes the entire selection object ([api, organization, selection, traceItemType, projectIds, type]). Since usePageFilters returns a new selection reference on every render, this causes the callback to be recreated on every render. The callback only uses selection.datetime (line 69) and falls back to selection.projects when projectIds is not provided (line 70). Update the dependency array to [api, organization, selection.datetime, selection.projects, traceItemType, projectIds, type] to avoid unnecessary recreation.

{
staleTime: 5000,
enabled:
Boolean(normalizedDateRange) &&

🟡 enabled guard Boolean(normalizedDateRange) is always true — never disables the query (bug)

In useFeedbackSummary.tsx line 37, the query is gated on Boolean(normalizedDateRange). However, normalizeDateTimeParams always returns a non-empty object (it defaults to {statsPeriod: '14d'} when no date is provided), so Boolean(normalizedDateRange) is always true. The guard can never disable the query. If the intent was to skip the API call when no date range is selected, this check does not achieve it. The actual disabling guard is only organization.features.includes('user-feedback-ai-summaries').

💡 Suggestion: Remove the Boolean(normalizedDateRange) && guard since it's always true and provides no protection. If the intent is to only query when a non-default date range is selected, use a more specific check such as verifying that selection.datetime.period or selection.datetime.start is set.

Suggested change

```tsx
// Before
Boolean(normalizedDateRange) &&
organization.features.includes('user-feedback-ai-summaries'),

// After
organization.features.includes('user-feedback-ai-summaries'),
```
📋 Prompt for AI Agents

In static/app/components/feedback/list/useFeedbackSummary.tsx at line 36-38, the enabled option for useApiQuery includes Boolean(normalizedDateRange) as a guard. Remove this guard because normalizeDateTimeParams always returns a non-empty object (defaulting to {statsPeriod: '14d'}), so Boolean(normalizedDateRange) is always true and never prevents the query. The only effective guard is the feature flag check. Change the enabled option to just: organization.features.includes('user-feedback-ai-summaries').
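The vacuous guard can be reproduced with a few lines of Python, assuming normalizeDateTimeParams behaves as described in the finding (falling back to a default statsPeriod when nothing is supplied; the normalize helper below is an illustrative stand-in, not the real utility):

```python
# Sketch of why the guard is vacuous: a normalizer that always fills a
# default produces a non-empty dict, so a truthiness check on its result
# can never be False.

def normalize(params: dict) -> dict:
    out = {k: v for k, v in params.items() if k in ("start", "end", "statsPeriod")}
    if not out:
        out["statsPeriod"] = "14d"  # default applied when nothing is selected
    return out

print(bool(normalize({})))                     # truthy even with no selection
print(bool(normalize({"statsPeriod": "7d"})))  # truthy with a selection
# Either way the Boolean(normalizedDateRange) && guard passes, so it never
# disables the query.
```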

@camcalaquian camcalaquian marked this pull request as draft April 30, 2026 16:11
@camcalaquian camcalaquian marked this pull request as ready for review April 30, 2026 16:11

@devin-ai-integration devin-ai-integration Bot left a comment


Devin Review found 3 potential issues.

View 10 additional findings in Devin Review.


timestamp=data.get("timestamp", 0.0),
message=data.get("message", ""),
)
for event_id, data in zip(error_ids, events.values())

🔴 fetch_error_details incorrectly zips error_ids with nodestore dict values, causing data misalignment

fetch_error_details uses zip(error_ids, events.values()) at line 118 to pair event IDs with their data from nodestore. This is incorrect for two reasons:

  1. Missing entries cause misalignment: nodestore.backend.get_multi() returns a dict keyed by node_id. The Django backend (src/sentry/nodestore/django/backend.py:48-49) only includes entries found in the DB — missing entries are excluded. If any event is missing, events.values() is shorter than error_ids, and the zip pairs wrong error IDs with wrong data (e.g., error_id B gets paired with data for error_id C).

  2. Dict order is not guaranteed to match input order: With partial cache hits, get_multi (src/sentry/nodestore/base.py:198-205) builds the result by first inserting uncached items then calling items.update(cache_items), placing cached items at the end regardless of their original position. The Django backend's DB query returns results in arbitrary order. Either case breaks the positional alignment with error_ids.

This causes error titles, messages, and timestamps to be attributed to the wrong error events in replay AI summaries.

Prompt for agents
The bug is in the fetch_error_details function in src/sentry/replays/endpoints/project_replay_summarize_breadcrumbs.py. The function calls nodestore.backend.get_multi(node_ids) which returns a dict keyed by node_id (not error_id). Using zip(error_ids, events.values()) creates incorrect pairings because: (1) missing entries shrink the dict causing positional misalignment, and (2) dict ordering may not match the input list order due to cache behavior.

The fix should look up each node_id in the events dict individually, maintaining the correct mapping between error_ids and their data. For example:

```python
return [
    ErrorEvent(
        category="error",
        id=event_id,
        title=data.get("title", ""),
        timestamp=data.get("timestamp", 0.0),
        message=data.get("message", ""),
    )
    for event_id, node_id in zip(error_ids, node_ids)
    if (data := events.get(node_id)) is not None
]
```
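The shift is easy to reproduce with plain dicts. A hedged sketch (the events dict below is a stand-in for a get_multi-style result that silently omits misses, not the real nodestore backend):

```python
# Standalone demonstration of the misalignment caused by zipping IDs
# against the values() of a dict that may be missing entries.

error_ids = ["A", "B", "C"]
node_ids = ["node-A", "node-B", "node-C"]

# Simulate a miss: node-B was not found, so the result simply omits it.
events = {"node-A": {"title": "a"}, "node-C": {"title": "c"}}

# Buggy pairing: zipping against values() shifts every entry after the
# miss, so error B silently receives error C's data.
buggy = list(zip(error_ids, events.values()))
print(buggy)  # [('A', {'title': 'a'}), ('B', {'title': 'c'})]

# Correct pairing: look each node_id up explicitly and skip misses.
fixed = [
    (event_id, events[node_id])
    for event_id, node_id in zip(error_ids, node_ids)
    if node_id in events
]
print(fixed)  # [('A', {'title': 'a'}), ('C', {'title': 'c'})]
```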

Comment on lines +81 to +86
analytics.record(
"preprod_artifact.api.assemble",
organization_id=project.organization_id,
project_id=project.id,
user_id=request.user.id,
)

🟡 Analytics event recorded before feature flag check, tracking rejected requests

analytics.record() is called at line 81 before the feature flag check at line 88. This means every authenticated request to this endpoint records an analytics event, including those from organizations that don't have the preprod-artifact-assemble feature enabled and will receive a 404 response. This inflates analytics data with rejected requests.

Prompt for agents
In src/sentry/preprod/api/endpoints/organization_preprod_artifact_assemble.py, the analytics.record() call at line 81-86 should be moved to after the feature flag check at line 88-91. The analytics event currently fires for all requests, including those that are immediately rejected with a 404 because the organization doesn't have the feature enabled. Move the analytics.record() block to right after the feature check so only requests that pass the feature gate are tracked.
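The ordering fix in miniature (the handler, status codes, and event name are stand-ins sketching the control flow, not the real endpoint): with the gate first, a rejected request records no analytics event.

```python
# Illustrative ordering sketch: record analytics only after the feature
# gate, so rejected requests leave no event behind.

events: list[str] = []

def record(name: str) -> None:
    events.append(name)

def assemble(feature_enabled: bool) -> int:
    if not feature_enabled:
        return 404  # rejected before analytics fire
    record("preprod_artifact.api.assemble")
    return 200

print(assemble(False), events)  # rejected request leaves events empty
print(assemble(True), events)   # only the gated-in request is tracked
```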

Comment on lines +52 to +59
if self.initial_data.get("age"):
raise serializers.ValidationError("If timestamp is present, age must be absent")
return value

def validate_age(self, value: int) -> int:
"""Validate that age is present, but not timestamp."""
if self.initial_data.get("timestamp"):
raise serializers.ValidationError("If age is present, timestamp must be absent")

🟡 Browser report validator uses truthy check instead of key-existence check for mutual exclusion

validate_timestamp (line 52) and validate_age (line 58) use self.initial_data.get("age") and self.initial_data.get("timestamp") respectively to check for mutual exclusion. These are truthy checks — if age=0 is present alongside a timestamp, self.initial_data.get("age") returns 0 which is falsy, so validate_timestamp won't reject the invalid report. Per the Reporting API spec, age and timestamp are mutually exclusive regardless of their value. The correct check should be "age" in self.initial_data / "timestamp" in self.initial_data.

Suggested change

```python
# Before
if self.initial_data.get("age"):
    raise serializers.ValidationError("If timestamp is present, age must be absent")
return value

def validate_age(self, value: int) -> int:
    """Validate that age is present, but not timestamp."""
    if self.initial_data.get("timestamp"):
        raise serializers.ValidationError("If age is present, timestamp must be absent")

# After
if "age" in self.initial_data:
    raise serializers.ValidationError("If timestamp is present, age must be absent")
return value

def validate_age(self, value: int) -> int:
    """Validate that age is present, but not timestamp."""
    if "timestamp" in self.initial_data:
        raise serializers.ValidationError("If age is present, timestamp must be absent")
```
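The difference fits in two lines of plain Python (the report dict is an illustrative payload, not a full Reporting API report): age=0 is present but falsy, so a .get() truthiness check misses it while a membership check catches it.

```python
# Truthy check vs key-existence check for mutually exclusive fields.

report = {"timestamp": 1700000000, "age": 0}  # both present: invalid per spec

truthy_check = bool(report.get("age"))  # False -- 0 is falsy, bug slips through
membership_check = "age" in report      # True  -- correctly flags the conflict

print(truthy_check, membership_check)
```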
