Replays Self-Serve Bulk Delete System #77
Conversation
…y#93917) This validates both the [Working Draft](https://www.w3.org/TR/reporting-1/#concept-reports) and the [Editor's Draft](https://w3c.github.io/reporting/#concept-reports) formats. Fixes [ID-730 - Accept current and upcoming data model](https://linear.app/getsentry/issue/ID-730/accept-current-and-upcoming-data-model).
…o 'low' (getsentry#93927)" This reverts commit 8d04522. Co-authored-by: roaga <47861399+roaga@users.noreply.github.com>
Missed in the initial commit, leading to some relevant logs being unannotated.
Also add a test, so that this doesn't happen again
Fixes DE-129 and DE-156 --------- Co-authored-by: Cursor Agent <cursoragent@cursor.com>
These transitions should be matching
getsentry#93946) Use `project_id` on the replay record instead of the URL (where it does not always exist). --------- Co-authored-by: Cursor Agent <cursoragent@cursor.com> Co-authored-by: getsantry[bot] <66042841+getsantry[bot]@users.noreply.github.com>
Also fixed `replay.view_html` -> `replay.view-html` --------- Co-authored-by: Michelle Zhang <56095982+michellewzhang@users.noreply.github.com>
…sentry#93948) gets `npx @typescript/native-preview` passing again
This adds a mode for all things tracing. This encompasses transactions/metrics/spans. Taken from https://github.com/getsentry/sentry/blob/feeaf393deeca8b97675bff23039c6320270aab5/src/sentry/runner/commands/devserver.py#L370
The conditions associated with a DCG can change over time, and it's good if we can be completely confident that they're consistent within a given task execution.
From some testing (on feedback lists of all different lengths), this prompt seems to work better. It doesn't write overly long sentences and also does a better job at "summarizing" versus just mentioning a few specific topics and leaving out others.
…try#93809) Just remove a couple custom Flex* classes in favor of the Flex primitive
This has been killed a few times. Refs SENTRY-42M7
…n table (getsentry#93892) <!-- Describe your PR here. --> [ticket](https://linear.app/getsentry/issue/ID-156/grouping-info-remove-type-field-from-ui) The Type field in the Grouping Info section of the issue details page was redundant. This removes the Type row from all variant types while keeping the underlying data structure intact. before  after 
…entry#93902) ### Changes Related to this PR: getsentry#93810. This is part 1 of the change, which is pulling out the new component and just adding it to the repo. Also includes some simplification of the logic in the base component. Part 2 will be replacing tables in widgets. ### Before/After There is no UI change as the table is not being used yet. There is a new story page for the component.
…etsentry#93943) to prevent this issue from becoming too noisy, add a noise config
Unfortunately, 'event_data' went from being the variable for current event context to being the complete parsed data from Redis, and we continued logging it per group. That's more data than we should be logging even arguably once, let alone per group.
Co-authored-by: Abdullah Khan <abdullahkhan@PG9Y57YDXQ.local>
Adds some simple analytics to our endpoint so we can begin building a dashboard in Amplitude.
Previously, Explore supported multiple y-axes per chart, so each visualize supported multiple y-axes. That functionality has since been removed for simplicity, so update the types here to match. Keep in mind that saved queries still store them as an array, so when serializing/deserializing we still need to treat them as an array.
…try#93739) We'll need the `useGetTraceItemAttributeKeys` hook in other places, so refactor it so that it can be exported.
- getsentry#93894 removed usage - getsentry/sentry-options-automator#4243 removed the last override
When the max segment ID is null the process fails. We should exit early since if there aren't any segments to delete there's nothing to do.
Tenki Code Review - Complete
Files Reviewed: 106
PR #77 introduces 26 significant issues spanning code quality, performance, security, and logic bugs across 47 changed files. The most critical issues are hardcoded empty data in feature-flagged table components, validation-bypass bugs in browser reporting, wrong dictionary keys preventing updates, and cross-org cache pollution.
Review Summary
This PR (#77) modifies 47 files with changes to dashboards, explore views, integrations, and infrastructure services. The review identified 26 findings (9 high, 10 medium, 7 low severity) requiring attention before merging.
Critical Issues (High Severity)
1. Feature-Flagged Table Widget Renders Empty (finding-004)
- Files: static/app/views/dashboards/widgetCard/chart.tsx (lines 165–174)
- Impact: Users with the 'use-table-widget-visualization' feature flag see blank tables
- Root Cause: Component receives hardcoded empty columns=[] and tableData={{ data: [], ... }} instead of actual query results
- Fix: Pass result.data and result.meta to the component
2. Double Newlines in PR Comments (finding-001)
- Files: src/sentry/integrations/source_code_management/commit_context.py (line 143)
- Impact: GitHub/GitLab PR comments display malformed bullet lists with blank lines between items
- Root Cause: Template string ends with "\n", but callers already join with "\n", causing "\n\n" separators
- Fix: Remove the trailing "\n" from MERGED_PR_SINGLE_ISSUE_TEMPLATE
3. Browser Report Validation Bypass on Zero Values (finding-002)
- Files: src/sentry/issues/endpoints/browser_reporting_collector.py (lines 52, 58)
- Impact: Malformed browser reports carrying both age and timestamp fields incorrectly pass validation when either value is 0
- Root Cause: Validation uses falsy checks (if get('age'):) instead of is not None checks
- Fix: Replace with is not None comparisons
4. Detector Type Update Never Applied (finding-003)
- Files: src/sentry/workflow_engine/endpoints/validators/base/detector.py (line 64)
- Impact: Detector type updates via the API are silently dropped
- Root Cause: The update() method looks for key "detector_type" but the serializer produces "type"
- Fix: Change validated_data.get("detector_type", ...) to validated_data.get("type", ...)
5. Webhook Deadline Mismatch (finding-005)
- Files: src/sentry/hybridcloud/tasks/deliver_webhooks.py (line 239)
- Impact: Webhook delivery tasks are killed by the taskworker before completing their internal deadline
- Root Cause: processing_deadline_duration=120s is less than the internal loop timeout (180s)
- Fix: Set processing_deadline_duration=300 to match the non-parallel variant
6. Cross-Organization Cache Pollution (finding-006)
- Files: static/app/views/explore/hooks/useTraceItemAttributeKeys.tsx (lines 39–42)
- Impact: Switching organizations may return cached attribute data from a different organization
- Root Cause: The cache key omits the organization slug; the queryFn URL includes the org slug but the key doesn't
- Fix: Include organization.slug in the queryKey array
Performance Issues (Medium/High Confidence)
7. N+1 Queries in Environment Info Fetching (finding-007)
- Files: src/sentry/integrations/source_code_management/commit_context.py (lines 584–597)
- Impact: PR comment generation fires up to 10 external queries (5 Snuba + 5 DB) for a single operation
- Root Cause: get_environment_info(issue) is called per-issue in a loop without pre-batching
- Fix: Pre-fetch environment data in bulk or extract it from cached group data
8. Event Data Paired with Wrong IDs (finding-008)
- Files: src/sentry/replays/endpoints/project_replay_summarize_breadcrumbs.py
- Impact: Error event titles/messages are paired with the wrong event IDs in AI breadcrumb summaries
- Root Cause: nodestore.backend.get_multi() returns a dict with uncertain ordering; zip(error_ids, events.values()) creates positional pairs that may not align
9. Analytics Recorded Before Authorization (finding-009)
- Files: src/sentry/preprod/api/endpoints/organization_preprod_artifact_assemble.py (lines 81–88)
- Impact: Metrics are recorded for unauthorized requests, inflating usage statistics
- Root Cause: analytics.record() fires before the feature flag check
Data Model & Logic Issues
10. Broken Optional Chaining on tableData?.meta (finding-011)
- Files: static/app/views/dashboards/widgets/tableWidget/tableWidgetVisualization.tsx (lines 94–99)
- Impact: Runtime TypeError if tableData is undefined
- Root Cause: tableData?.meta.fields uses ?. on tableData but not on meta
11. Workflow Deleted Before Processing (finding-018, finding-019)
- Files: src/sentry/workflow_engine/processors/delayed_workflow.py (lines 366, 422)
- Impact: KeyError crash if a workflow is deleted between queuing and processing
- Root Cause: workflows_to_envs[workflow_id] assumes the workflow exists; use .get() instead
12. Asymmetric Data Models (finding-002, finding-016)
- Files: static/app/views/explore/contexts/pageParamsContext/visualizes.tsx, index.tsx
- Impact: Potential ref-count mismatches between readable and writable field paths
- Root Cause: The new Visualize.yAxis (string) differs from the legacy BaseVisualize.yAxes (array); asymmetric enumeration in field tracking
Minor Issues (Low Severity)
13. Misleading key Prop on React Root Element (finding-012)
- Files: static/app/views/dashboards/widgets/tableWidget/defaultTableCellRenderers.tsx (line 84)
- Impact: Developer confusion; the prop has no effect
- Fix: Remove the key prop
14. Dead Utility Function After Refactor (finding-016)
- Files: src/sentry/grouping/parameterization.py (lines 335–343)
- Impact: Unreachable code; maintenance burden
- Fix: Delete the _handle_regex_match helper
15. Unnecessary Migration Hints (finding-017)
- Files: src/sentry/migrations/0917_convert_org_saved_searches_to_views.py
- Impact: The no-op migration still acquires table locks on production
- Fix: Clear the hints parameter
16. Overly Generic Query Guard (finding-023)
- Files: static/app/components/feedback/list/useFeedbackSummary.tsx (line 37)
- Impact: No-op condition; always evaluates to true
- Fix: Remove the redundant Boolean(normalizedDateRange) guard
Files with Multiple Issues
- src/sentry/integrations/source_code_management/commit_context.py: 2 high + 1 medium
- src/sentry/issues/endpoints/browser_reporting_collector.py: 1 high + 1 medium
- src/sentry/workflow_engine/processors/delayed_workflow.py: 1 high + 2 medium
- Frontend explore views: 5 issues across context, hooks, and models
Recommendation
Request Changes — The PR contains 9 high-severity bugs that must be fixed before merging. Most fixes are one-liners (wrong key, missing parameter, removed newline). Several require slightly more investigation (env info batching, event data ordering, cache invalidation).
ISSUE_TITLE_MAX_LENGTH = 50
MERGED_PR_SINGLE_ISSUE_TEMPLATE = "* ‼️ [**{title}**]({url}){environment}\n"
🔴 Double newline between issue list items due to "\n" in template and "\n" join (bug)
In commit_context.py line 143, MERGED_PR_SINGLE_ISSUE_TEMPLATE is defined with a trailing "\n":
MERGED_PR_SINGLE_ISSUE_TEMPLATE = "* ‼️ [**{title}**]({url}){environment}\n"
Both GitHubPRCommentWorkflow.get_comment_body (github/integration.py line 404) and GitlabPRCommentWorkflow.get_comment_body (gitlab/integration.py line 252) join the rendered items with "\n":
issue_list = "\n".join([self.get_merged_pr_single_issue_template(...) for issue in issues])
This produces "item1\n\nitem2\n\nitem3\n" — a blank line between each item and a trailing blank line after the last one. The old template (which began with "- ") had no trailing newline, so the "\n".join(...) rendered correctly. The new template breaks the tight bullet list rendering in both GitHub and GitLab PR comment markdown.
💡 Suggestion: Remove the trailing \n from MERGED_PR_SINGLE_ISSUE_TEMPLATE. The join already provides the separator between items; the template should not embed its own trailing newline.
- MERGED_PR_SINGLE_ISSUE_TEMPLATE = "* ‼️ [**{title}**]({url}){environment}\n"
+ MERGED_PR_SINGLE_ISSUE_TEMPLATE = "* ‼️ [**{title}**]({url}){environment}"
📋 Prompt for AI Agents
In src/sentry/integrations/source_code_management/commit_context.py at line 143, remove the trailing \n from MERGED_PR_SINGLE_ISSUE_TEMPLATE. The current value is "* ‼️ [**{title}**]({url}){environment}\n" — change it to "* ‼️ [**{title}**]({url}){environment}". Both GitHub and GitLab callers already join items with "\n", so the embedded trailing newline in the template causes double blank lines between every item in the PR comment output.
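The mechanics of this bug can be sketched without any Sentry code — a stand-in template (the item text below is illustrative, not the real template) is enough to show why a trailing newline plus a newline join yields blank lines:

```python
# Each rendered item ends with "\n" (as in the new template), and the
# caller joins with "\n" — producing a blank line between bullets.
TEMPLATE_WITH_NEWLINE = "* item {n}\n"  # stand-in for MERGED_PR_SINGLE_ISSUE_TEMPLATE
items = [TEMPLATE_WITH_NEWLINE.format(n=i) for i in range(3)]
broken = "\n".join(items)
assert "\n\n" in broken  # blank line between every bullet

# Dropping the trailing "\n" from the template restores a tight list:
TEMPLATE_FIXED = "* item {n}"
fixed = "\n".join(TEMPLATE_FIXED.format(n=i) for i in range(3))
assert "\n\n" not in fixed
```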
if self.initial_data.get("age"):
    raise serializers.ValidationError("If timestamp is present, age must be absent")
return value

def validate_age(self, value: int) -> int:
    """Validate that age is present, but not timestamp."""
    if self.initial_data.get("timestamp"):
        raise serializers.ValidationError("If age is present, timestamp must be absent")
🔴 validate_timestamp/validate_age silently skip mutual-exclusion check when age=0 or timestamp=0 (bug)
In BrowserReportSerializer, validate_timestamp uses self.initial_data.get('age') to decide whether to reject a report that carries both timestamp and age. If a browser sends age=0 (a report created 0 ms ago, which is technically valid per the spec), Python evaluates get('age') → 0 → falsy, so the check is skipped. The same applies symmetrically in validate_age: a timestamp=0 value (Unix epoch, allowed by min_value=0) bypasses the guard. A request containing both fields is supposed to be rejected as malformed, but will pass validation silently.
Lines 52 and 58 are the two affected checks.
💡 Suggestion: Use is not None instead of truthiness tests so that 0 is treated as a present value:
if self.initial_data.get('age') is not None:
    raise serializers.ValidationError('If timestamp is present, age must be absent')
and
if self.initial_data.get('timestamp') is not None:
    raise serializers.ValidationError('If age is present, timestamp must be absent')
Suggested change:
- if self.initial_data.get("age"):
+ if self.initial_data.get("age") is not None:
- if self.initial_data.get("timestamp"):
+ if self.initial_data.get("timestamp") is not None:
📋 Prompt for AI Agents
In src/sentry/issues/endpoints/browser_reporting_collector.py, lines 52 and 58, the mutual-exclusion checks for 'age' and 'timestamp' use falsy tests (.get('age') / .get('timestamp')), which silently skip the check when the value is 0. Change both to use is not None comparisons so that the value 0 is treated as 'field is present'. Specifically: line 52 if self.initial_data.get("age"): → if self.initial_data.get("age") is not None: and line 58 if self.initial_data.get("timestamp"): → if self.initial_data.get("timestamp") is not None:.
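The falsy-vs-presence distinction can be demonstrated with a plain dict standing in for the serializer's initial_data (no DRF needed; the field names match the finding):

```python
# A report carrying both fields should be rejected as malformed.
report = {"timestamp": 1718000000, "age": 0}

# Truthy check (current code): age=0 is falsy, so the guard is skipped.
assert not report.get("age")            # 0 → falsy → no rejection

# Presence check (the fix): 0 still counts as "field is present".
assert report.get("age") is not None    # correctly flags the malformed report
```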
@@ -60,6 +62,22 @@ def data_conditions(self) -> BaseDataConditionValidator:
def update(self, instance: Detector, validated_data: dict[str, Any]):
    instance.name = validated_data.get("name", instance.name)
    instance.type = validated_data.get("detector_type", instance.group_type).slug
🔴 Wrong key 'detector_type' in update() prevents type from ever being updated (bug)
In BaseDetectorTypeValidator.update() at line 64, validated_data.get("detector_type", instance.group_type) will always fall back to instance.group_type because the serializer field is declared as type (line 35), so validated_data contains the key "type", not "detector_type". The create() method correctly uses validated_data["type"].slug (line 133), confirming the key is "type". As written, the detector's type can never be changed via the update endpoint — any type value in the request body is silently dropped.
💡 Suggestion: Replace "detector_type" with "type" to match the serializer field name used in create() and in validated_data.
- instance.type = validated_data.get("detector_type", instance.group_type).slug
+ instance.type = validated_data.get("type", instance.group_type).slug
📋 Prompt for AI Agents
In src/sentry/workflow_engine/endpoints/validators/base/detector.py at line 64, the key 'detector_type' does not exist in validated_data (the serializer field is declared as type = serializers.CharField()). Change validated_data.get("detector_type", instance.group_type) to validated_data.get("type", instance.group_type) so the detector's type can actually be updated. The create() method on the same class already uses validated_data["type"].slug as the correct key.
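A two-line sketch shows why the wrong key silently drops the update (the values below are illustrative stand-ins for serializer output):

```python
# validated_data produced by a serializer whose field is named "type":
validated_data = {"type": "metric_issue"}
current_type = "error"  # stand-in for instance.group_type

# Wrong key: the default is *always* returned, so updates are silently dropped.
assert validated_data.get("detector_type", current_type) == "error"

# Correct key: the requested type is actually applied.
assert validated_data.get("type", current_type) == "metric_issue"
```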
<TableWidgetVisualization
  columns={[]}
  tableData={{
    data: [],
    meta: {
      fields: {},
      units: {},
    },
  }}
/>
🔴 Feature-flagged TableWidgetVisualization renders with empty data instead of query results (bug)
In chart.tsx lines 165–174, when the 'use-table-widget-visualization' feature flag is enabled, TableWidgetVisualization is rendered with hardcoded columns={[]} and stub tableData (data: [], meta: {fields: {}, units: {}}). The actual result variables (result.data, result.meta, fields, loading, etc.) are available in scope but never passed to the component. Users with this flag enabled will always see a blank table.
💡 Suggestion: Pass actual result data to TableWidgetVisualization. Use tableData={{ data: result.data, meta: { fields: result.meta?.fields ?? {}, units: result.meta?.units ?? {} } }} and remove the hardcoded columns={[]} to let the component derive columns from data.
📋 Prompt for AI Agents
In static/app/views/dashboards/widgetCard/chart.tsx around lines 165-174, pass the actual result data to TableWidgetVisualization component. Change tableData from stub empty data to: tableData={{ data: result.data, meta: { fields: result.meta?.fields ?? {}, units: result.meta?.units ?? {} } }} and remove columns={[]} so it derives columns from the data.
silo_mode=SiloMode.CONTROL,
taskworker_config=TaskworkerConfig(
    namespace=hybridcloud_control_tasks,
    processing_deadline_duration=120,
🔴 drain_mailbox_parallel processing_deadline_duration (120s) is shorter than its internal loop timeout (180s) (bug)
drain_mailbox_parallel uses BATCH_SCHEDULE_OFFSET = timedelta(minutes=BACKOFF_INTERVAL) = timedelta(minutes=3) = 180 seconds as its internal deadline. The new processing_deadline_duration=120 tells the taskworker to consider the task failed after only 120 seconds. Because 120 < 180, the taskworker will terminate the task while the task's own loop still believes it has 60 more seconds to deliver webhooks. This can cause partial mailbox drains to be incorrectly treated as failures, potentially causing duplicate retries or message-ordering problems.
By contrast, drain_mailbox (the non-parallel variant) correctly sets processing_deadline_duration=300 > 180s, so only the parallel variant is broken.
💡 Suggestion: Set processing_deadline_duration for drain_mailbox_parallel to at least 180 seconds (matching BATCH_SCHEDULE_OFFSET) plus headroom. Using 300 (matching drain_mailbox) is appropriate:
processing_deadline_duration=300,
Alternatively, define a named constant derived from BACKOFF_INTERVAL to keep the two values in sync automatically.
- processing_deadline_duration=120,
+ processing_deadline_duration=300,
📋 Prompt for AI Agents
In src/sentry/hybridcloud/tasks/deliver_webhooks.py at line 239, the TaskworkerConfig for drain_mailbox_parallel sets processing_deadline_duration=120, but the function's internal while-loop runs for up to BATCH_SCHEDULE_OFFSET (timedelta(minutes=3) = 180 seconds). Since 120 < 180, the taskworker kills the task before the internal deadline fires. Change processing_deadline_duration=120 to processing_deadline_duration=300 (matching the drain_mailbox task) to ensure the taskworker deadline is always longer than the internal loop timeout.
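The "named constant" alternative could look roughly like this — BACKOFF_INTERVAL = 3 minutes comes from the finding, while the 120-second headroom is an illustrative choice that happens to land on the same 300s as drain_mailbox:

```python
from datetime import timedelta

BACKOFF_INTERVAL = 3  # minutes, per the source
BATCH_SCHEDULE_OFFSET = timedelta(minutes=BACKOFF_INTERVAL)

# Derive the taskworker deadline from the internal loop timeout plus headroom,
# so the two values can never drift apart again.
PROCESSING_DEADLINE_SECONDS = int(BATCH_SCHEDULE_OFFSET.total_seconds()) + 120

assert PROCESSING_DEADLINE_SECONDS > BATCH_SCHEDULE_OFFSET.total_seconds()
assert PROCESSING_DEADLINE_SECONDS == 300
```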
@staticmethod
def get_merged_pr_single_issue_template(title: str, url: str, environment: str) -> str:
    truncated_title = PRCommentWorkflow._truncate_title(title)
    return MERGED_PR_SINGLE_ISSUE_TEMPLATE.format(
        title=truncated_title,
        url=url,
        environment=environment,
    )
🟡 @staticmethod get_merged_pr_single_issue_template hardcodes PRCommentWorkflow._truncate_title, bypassing subclass overrides (bug)
In commit_context.py lines 599–606, get_merged_pr_single_issue_template is decorated as @staticmethod and references PRCommentWorkflow._truncate_title directly by class name:
@staticmethod
def get_merged_pr_single_issue_template(title: str, url: str, environment: str) -> str:
    truncated_title = PRCommentWorkflow._truncate_title(title)  # hardcoded class ref
    return MERGED_PR_SINGLE_ISSUE_TEMPLATE.format(...)
Since _truncate_title is also a @staticmethod, the hardcoded class reference means any subclass that overrides _truncate_title will not have its override invoked when get_merged_pr_single_issue_template is called. The fix is to either call _truncate_title(title) as a bare function reference (which also bypasses inheritance but is at least explicit), or remove the @staticmethod decorator and call cls._truncate_title(title) via @classmethod, which correctly dispatches through the MRO.
💡 Suggestion: Convert get_merged_pr_single_issue_template from a @staticmethod to a @classmethod, accepting cls as first parameter, and call cls._truncate_title(title) so that subclass overrides of _truncate_title are respected. Alternatively, if subclassing _truncate_title is not a concern, call the module-level helper directly to make the intent explicit.
📋 Prompt for AI Agents
In src/sentry/integrations/source_code_management/commit_context.py at lines 599-606, get_merged_pr_single_issue_template is a @staticmethod that calls PRCommentWorkflow._truncate_title(title) by hardcoded class name. Change it to a @classmethod by replacing @staticmethod with @classmethod, adding cls as the first parameter, and calling cls._truncate_title(title) so that subclasses overriding _truncate_title have their implementation respected. Update all three call sites in github/integration.py (line 406) and gitlab/integration.py (line 254) — they already use self.get_merged_pr_single_issue_template(...) so instance dispatch will automatically route through cls.
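The dispatch difference is easy to reproduce with toy classes (the names below are illustrative, not the real PRCommentWorkflow hierarchy):

```python
class Base:
    @staticmethod
    def _truncate_title(title: str) -> str:
        return title[:50]

    # Hardcoded class reference: subclass overrides are never consulted.
    @staticmethod
    def render_static(title: str) -> str:
        return Base._truncate_title(title)

    # classmethod dispatches through the MRO, so overrides apply.
    @classmethod
    def render_cls(cls, title: str) -> str:
        return cls._truncate_title(title)


class Child(Base):
    @staticmethod
    def _truncate_title(title: str) -> str:
        return title[:5] + "…"


long_title = "a" * 60
assert Child.render_static(long_title) == "a" * 50       # override ignored
assert Child.render_cls(long_title) == "a" * 5 + "…"     # override respected
```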
logger.warning(
    "browser_report_validation_failed",
    extra={"validation_errors": serializer.errors, "raw_report": report},
)
🟡 Raw attacker-controlled report data logged on validation failure (security)
In the validation-failure branch of BrowserReportingCollectorEndpoint.post (line 111-114), when a report fails serializer validation, the full raw report is logged as 'raw_report': report. This is an unvalidated, attacker-supplied dict from any unauthenticated caller. The body field is an unbounded DictField(), meaning an attacker can include arbitrarily nested/large JSON in the body of a crafted report, cause it to fail validation on another field (e.g. by omitting the required url), and have the entire payload written to logs. This allows log pollution with attacker-controlled content and potential data exposure if the logs are later analyzed or stored.
💡 Suggestion: Change the validation-failure log to only include serializer.errors and safe metadata (e.g., the report's type field if it is a string, or just the index), not the full raw report dict. Example: extra={'validation_errors': serializer.errors, 'report_index': raw_data.index(report)}.
- extra={"validation_errors": serializer.errors, "raw_report": report},
+ extra={"validation_errors": serializer.errors},
📋 Prompt for AI Agents
In src/sentry/issues/endpoints/browser_reporting_collector.py at lines 111-114, the logger.warning call includes 'raw_report': report which is attacker-supplied unvalidated data on an unauthenticated endpoint. Replace extra={'validation_errors': serializer.errors, 'raw_report': report} with extra={'validation_errors': serializer.errors} to prevent attacker-controlled content from being written to logs. The raw report is unnecessary for diagnosing validation failures since serializer.errors already captures what went wrong.
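A minimal sketch of the suggested safer logging payload — safe_log_extra is a hypothetical helper, not code from the PR:

```python
def safe_log_extra(validation_errors: dict, report_index: int) -> dict:
    # Include only serializer errors and a small index; never the raw,
    # attacker-controlled report body.
    return {"validation_errors": validation_errors, "report_index": report_index}


extra = safe_log_extra({"url": ["This field is required."]}, 0)
assert "raw_report" not in extra
assert extra["report_index"] == 0
```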
const unit = tableData.meta.units?.[columnKey] as string;

return (
  <div key={`${rowIndex}-${columnIndex}:${column.name}`}>
🟡 key prop on root element of renderBodyCell is silently ignored (bug)
In defaultTableCellRenderers.tsx line 84, the returned div has a key prop on the root element. React only uses key for reconciliation in arrays; a key on the root returned element of a render callback is ignored. GridEditable already handles keying with key={`${col.key}${i}`} at a higher level.
💡 Suggestion: Remove the key prop from the div on line 84 since it has no effect on the root returned element.
- <div key={`${rowIndex}-${columnIndex}:${column.name}`}>
+ <div>
📋 Prompt for AI Agents
In static/app/views/dashboards/widgets/tableWidget/defaultTableCellRenderers.tsx at line 84, remove the key prop from the root div, changing <div key={`${rowIndex}-${columnIndex}:${column.name}`}> to <div>.
return attributes;
},
[api, organization, selection, traceItemType, projectIds, type]
🟠 getTraceItemAttributeKeys callback has selection as dependency causing unnecessary re-creation (bug)
In useGetTraceItemAttributeKeys.tsx line 113, the useCallback dependency array includes selection (the full page filter selection object). usePageFilters typically returns a new selection reference on every render cycle. This causes getTraceItemAttributeKeys to be recreated on every render. While TanStack Query won't re-run the fetch just because queryFn changed (it uses queryKey to determine staleness), the repeated recreation of this callback has a small performance cost and could cause unexpected behavior if the callback is passed as a prop. The dependency should be narrowed to selection.datetime and selection.projects rather than the entire selection object.
💡 Suggestion: Replace selection with selection.datetime and selection.projects in the useCallback dependency array, so the callback is only recreated when the actually-used fields change.
- [api, organization, selection, traceItemType, projectIds, type]
+ [api, organization, selection.datetime, selection.projects, traceItemType, projectIds, type]
📋 Prompt for AI Agents
In static/app/views/explore/hooks/useGetTraceItemAttributeKeys.tsx line 113, the useCallback dependency array includes the entire selection object ([api, organization, selection, traceItemType, projectIds, type]). Since usePageFilters returns a new selection reference on every render, this causes the callback to be recreated on every render. The callback only uses selection.datetime (line 69) and falls back to selection.projects when projectIds is not provided (line 70). Update the dependency array to [api, organization, selection.datetime, selection.projects, traceItemType, projectIds, type] to avoid unnecessary recreation.
{
  staleTime: 5000,
  enabled:
    Boolean(normalizedDateRange) &&
🟡 enabled guard Boolean(normalizedDateRange) is always true — never disables the query (bug)
In useFeedbackSummary.tsx line 37, the query is gated on Boolean(normalizedDateRange). However, normalizeDateTimeParams always returns a non-empty object (it defaults to {statsPeriod: '14d'} when no date is provided), so Boolean(normalizedDateRange) is always true. The guard can never disable the query. If the intent was to skip the API call when no date range is selected, this check does not achieve it. The actual disabling guard is only organization.features.includes('user-feedback-ai-summaries').
💡 Suggestion: Remove the Boolean(normalizedDateRange) && guard since it's always true and provides no protection. If the intent is to only query when a non-default date range is selected, use a more specific check such as verifying that selection.datetime.period or selection.datetime.start is set.
- Boolean(normalizedDateRange) &&
  organization.features.includes('user-feedback-ai-summaries'),
📋 Prompt for AI Agents
In static/app/components/feedback/list/useFeedbackSummary.tsx at line 36-38, the enabled option for useApiQuery includes Boolean(normalizedDateRange) as a guard. Remove this guard because normalizeDateTimeParams always returns a non-empty object (defaulting to {statsPeriod: '14d'}), so Boolean(normalizedDateRange) is always true and never prevents the query. The only effective guard is the feature flag check. Change the enabled option to just: organization.features.includes('user-feedback-ai-summaries').
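Why the guard is a no-op can be shown with a Python stand-in for normalizeDateTimeParams — the function below is a hypothetical mock of its defaulting behavior, not the real implementation:

```python
def normalize_date_time_params(params: dict) -> dict:
    # Mimics the defaulting behavior: with no explicit dates, a default
    # statsPeriod is returned, so the result is never empty.
    normalized = {
        k: v for k, v in params.items()
        if k in ("start", "end", "statsPeriod", "utc") and v
    }
    return normalized or {"statsPeriod": "14d"}


assert bool(normalize_date_time_params({})) is True                     # default applies
assert bool(normalize_date_time_params({"start": "2024-01-01"})) is True
# Boolean(normalizedDateRange) can therefore never disable the query.
```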
timestamp=data.get("timestamp", 0.0),
message=data.get("message", ""),
)
for event_id, data in zip(error_ids, events.values())
🔴 fetch_error_details incorrectly zips error_ids with nodestore dict values, causing data misalignment
fetch_error_details uses zip(error_ids, events.values()) at line 118 to pair event IDs with their data from nodestore. This is incorrect for two reasons:
1. Missing entries cause misalignment: nodestore.backend.get_multi() returns a dict keyed by node_id. The Django backend (src/sentry/nodestore/django/backend.py:48-49) only includes entries found in the DB — missing entries are excluded. If any event is missing, events.values() is shorter than error_ids, and the zip pairs wrong error IDs with wrong data (e.g., error_id B gets paired with data for error_id C).
2. Dict order is not guaranteed to match input order: With partial cache hits, get_multi (src/sentry/nodestore/base.py:198-205) builds the result by first inserting uncached items then calling items.update(cache_items), placing cached items at the end regardless of their original position. The Django backend's DB query returns results in arbitrary order. Either case breaks the positional alignment with error_ids.
This causes error titles, messages, and timestamps to be attributed to the wrong error events in replay AI summaries.
Prompt for agents
The bug is in the fetch_error_details function in src/sentry/replays/endpoints/project_replay_summarize_breadcrumbs.py. The function calls nodestore.backend.get_multi(node_ids) which returns a dict keyed by node_id (not error_id). Using zip(error_ids, events.values()) creates incorrect pairings because: (1) missing entries shrink the dict causing positional misalignment, and (2) dict ordering may not match the input list order due to cache behavior.
The fix should look up each node_id in the events dict individually, maintaining the correct mapping between error_ids and their data. For example:
return [
ErrorEvent(
category="error",
id=event_id,
title=data.get("title", ""),
timestamp=data.get("timestamp", 0.0),
message=data.get("message", ""),
)
for event_id, node_id in zip(error_ids, node_ids)
if (data := events.get(node_id)) is not None
]
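The misalignment is easy to reproduce with a toy nodestore result (IDs and payloads below are invented for illustration):

```python
error_ids = ["a", "b", "c"]
node_ids = [f"node-{e}" for e in error_ids]

# get_multi-style result: "node-b" is missing, and insertion order is arbitrary.
events = {"node-c": {"title": "C"}, "node-a": {"title": "A"}}

# Positional pairing: "a" silently receives C's data, and "c" vanishes.
broken = dict(zip(error_ids, events.values()))
assert broken["a"] == {"title": "C"}  # wrong attribution
assert "c" not in broken

# Keyed lookup keeps IDs and data aligned, skipping missing entries.
fixed = {
    e: events[n]
    for e, n in zip(error_ids, node_ids)
    if n in events
}
assert fixed == {"a": {"title": "A"}, "c": {"title": "C"}}
```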
analytics.record(
    "preprod_artifact.api.assemble",
    organization_id=project.organization_id,
    project_id=project.id,
    user_id=request.user.id,
)
🟡 Analytics event recorded before feature flag check, tracking rejected requests
analytics.record() is called at line 81 before the feature flag check at line 88. This means every authenticated request to this endpoint records an analytics event, including those from organizations that don't have the preprod-artifact-assemble feature enabled and will receive a 404 response. This inflates analytics data with rejected requests.
Prompt for agents
In src/sentry/preprod/api/endpoints/organization_preprod_artifact_assemble.py, the analytics.record() call at line 81-86 should be moved to after the feature flag check at line 88-91. The analytics event currently fires for all requests, including those that are immediately rejected with a 404 because the organization doesn't have the feature enabled. Move the analytics.record() block to right after the feature check so only requests that pass the feature gate are tracked.
if self.initial_data.get("age"):
    raise serializers.ValidationError("If timestamp is present, age must be absent")
return value

def validate_age(self, value: int) -> int:
    """Validate that age is present, but not timestamp."""
    if self.initial_data.get("timestamp"):
        raise serializers.ValidationError("If age is present, timestamp must be absent")
🟡 Browser report validator uses truthy check instead of key-existence check for mutual exclusion
validate_timestamp (line 52) and validate_age (line 58) use self.initial_data.get("age") and self.initial_data.get("timestamp") respectively to check for mutual exclusion. These are truthy checks — if age=0 is present alongside a timestamp, self.initial_data.get("age") returns 0 which is falsy, so validate_timestamp won't reject the invalid report. Per the Reporting API spec, age and timestamp are mutually exclusive regardless of their value. The correct check should be "age" in self.initial_data / "timestamp" in self.initial_data.
- if self.initial_data.get("age"):
+ if "age" in self.initial_data:
- if self.initial_data.get("timestamp"):
+ if "timestamp" in self.initial_data: