feat(attributes): Add app.vitals.* attributes for mobile performance #313

Merged

buenaflor merged 10 commits into main from giancarlobuenaflor/add-app-start-ttid-ttfd-attributes on Apr 14, 2026

Conversation

@buenaflor
Contributor

@buenaflor buenaflor commented Apr 8, 2026

Add new app.vitals.* namespaced attributes for mobile app performance monitoring and deprecate their old counterparts.

New attributes:

  • app.vitals.start.cold.value — cold app start duration in milliseconds
  • app.vitals.start.warm.value — warm app start duration in milliseconds
  • app.vitals.start.type — app start type (cold/warm)
  • app.vitals.ttid.value — time to initial display in milliseconds
  • app.vitals.ttfd.value — time to full display in milliseconds
  • app.vitals.frames.total.count — total frames rendered during span lifetime
  • app.vitals.frames.slow.count — slow frames rendered during span lifetime
  • app.vitals.frames.frozen.count — frozen frames rendered during span lifetime
  • app.vitals.frames.delay.value — sum of delayed frame durations in seconds
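As a sketch of how these keys fit together (the attribute keys are from this PR; the plain dict stands in for span attributes, since the actual SDK API is not shown here), a cold app-start span might carry:

```python
# Sketch: the app.vitals.* keys added in this PR, with example values.
# The dict stands in for span attributes; any real set-attribute API
# is assumed, not shown.
APP_VITALS_START_COLD_VALUE = "app.vitals.start.cold.value"
APP_VITALS_START_TYPE = "app.vitals.start.type"
APP_VITALS_TTID_VALUE = "app.vitals.ttid.value"
APP_VITALS_TTFD_VALUE = "app.vitals.ttfd.value"
APP_VITALS_FRAMES_TOTAL_COUNT = "app.vitals.frames.total.count"
APP_VITALS_FRAMES_SLOW_COUNT = "app.vitals.frames.slow.count"
APP_VITALS_FRAMES_FROZEN_COUNT = "app.vitals.frames.frozen.count"
APP_VITALS_FRAMES_DELAY_VALUE = "app.vitals.frames.delay.value"

attributes = {
    APP_VITALS_START_COLD_VALUE: 1234,    # cold start duration, ms
    APP_VITALS_START_TYPE: "cold",        # "cold" or "warm"
    APP_VITALS_TTID_VALUE: 850,           # time to initial display, ms
    APP_VITALS_TTFD_VALUE: 2100,          # time to full display, ms
    APP_VITALS_FRAMES_TOTAL_COUNT: 120,   # frames rendered in span lifetime
    APP_VITALS_FRAMES_SLOW_COUNT: 3,
    APP_VITALS_FRAMES_FROZEN_COUNT: 1,
    APP_VITALS_FRAMES_DELAY_VALUE: 0.27,  # summed delayed-frame time, seconds
}
```

Note the unit split: start/ttid/ttfd values are milliseconds, while frames.delay.value is a fractional number of seconds.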

Deprecated (backfill status, with aliases on both sides):

  • app_start_type → app.vitals.start.type
  • time_to_initial_display → app.vitals.ttid.value
  • time_to_full_display → app.vitals.ttfd.value
  • frames.total → app.vitals.frames.total.count
  • frames.slow → app.vitals.frames.slow.count
  • frames.frozen → app.vitals.frames.frozen.count
  • frames.delay → app.vitals.frames.delay.value
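Because the deprecation uses backfill status with aliases on both sides, consumers can read either key during the transition. A minimal sketch of what that aliasing implies (the mapping is from the list above; the `backfill` helper name is hypothetical, not part of this PR):

```python
# Old -> new key mapping, taken from the deprecation list above.
BACKFILL_ALIASES = {
    "app_start_type": "app.vitals.start.type",
    "time_to_initial_display": "app.vitals.ttid.value",
    "time_to_full_display": "app.vitals.ttfd.value",
    "frames.total": "app.vitals.frames.total.count",
    "frames.slow": "app.vitals.frames.slow.count",
    "frames.frozen": "app.vitals.frames.frozen.count",
    "frames.delay": "app.vitals.frames.delay.value",
}

def backfill(attributes: dict) -> dict:
    """Hypothetical helper: copy each deprecated key to its replacement
    (and vice versa), so both sides of the alias end up populated."""
    out = dict(attributes)
    for old, new in BACKFILL_ALIASES.items():
        if old in out and new not in out:
            out[new] = out[old]
        elif new in out and old not in out:
            out[old] = out[new]
    return out
```

For example, `backfill({"frames.total": 120})` yields a dict containing both `frames.total` and `app.vitals.frames.total.count` with the value 120.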

buenaflor and others added 2 commits April 8, 2026 13:18
Add app.start.cold.value, app.start.warm.value, app.ttid.value, and
app.ttfd.value attributes for tracking cold/warm app start durations
and time to initial/full display metrics.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Per CONTRIBUTING.md policy, pii MUST be maybe or true unless scrubbing
would break product features. Duration values don't need exemption.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@github-actions

github-actions bot commented Apr 8, 2026

Semver Impact of This PR

🟡 Minor (new features)

📋 Changelog Preview

This is how your changes will appear in the changelog.
Entries from this PR are highlighted with a left border (blockquote style).


New Features ✨

Attributes

  • Add app.vitals.* attributes for mobile performance by buenaflor in #313
  • Add remaining app context attributes by buenaflor in #304
  • Add sentry.mobile and sentry.main_thread attributes by buenaflor in #317
  • Add more device context attributes by buenaflor in #303
  • Add OS context attributes by buenaflor in #301
  • Add gen_ai.function_id attribute by constantinius in #308
  • Add gen_ai.context.window_size and gen_ai.context.utilization for generation spans by constantinius in #315
  • Add db.driver.name attribute by alexander-alderman-webb in #295
  • Add network connection-related attributes by Lms24 in #279
  • Add cache.write attribute by adinauer in #292
  • Add device context attributes by buenaflor in #300
  • Add app context attributes for mobile by buenaflor in #296
  • Add device memory and core count attributes by Lms24 in #281
  • Add ui.element.* attributes by Lms24 in #284
  • Add remaining TTFB, FCP and FP web vital attributes by Lms24 in #235
  • Add LCP web vital meta attributes by Lms24 in #233
  • Add CLS web vital source attribute by Lms24 in #234
  • Add core web web vital value attributes by Lms24 in #229
  • Add allow_any_value field to attribute schema by vgrozdanic in #272

Other

  • (http) Add http.server.request.time_in_queue attribute by dingsdax in #267
  • (resource) Add resource.deployment.environment by mjq in #266
  • Add sentry.timestamp.sequence attribute to the spec by logaretm in #262
  • Add changelog tracking to attribute definitions by ericapisani in #270

Bug Fixes 🐛

  • (attributes) Remove allow_any_value boolean attribute and allow any as type by vgrozdanic in #273
  • (gen_ai) Input and output token description by obostjancic in #261
  • (release) Run yarn install before yarn generate in pre-release script by vgrozdanic in #316
  • (sentry) Deprecate sentry.trace.parent_span_id by mjq in #287
  • Don't run changelog generation on yarn generate by Lms24 in #277
  • Avoid changelog generation recursion by Lms24 in #274

Documentation 📚

  • (sentry) Add deprecated sentry.source by s1gr1d in #288
  • Redirect from old /generated pages to new routes by Lms24 in #291
  • Remove extra yarn run format instruction by mjq in #289
  • Update README with up-to-date links by ericapisani in #258

Internal Changes 🔧

Deps

  • Bump defu from 6.1.4 to 6.1.6 by dependabot in #309
  • Bump vite from 6.4.1 to 6.4.2 by dependabot in #310
  • Bump pygments from 2.19.2 to 2.20.0 in /python by dependabot in #307
  • Bump smol-toml from 1.6.0 to 1.6.1 by dependabot in #305
  • Bump h3 from 1.15.5 to 1.15.9 by dependabot in #299
  • Bump devalue from 5.6.3 to 5.6.4 by dependabot in #286
  • Bump dompurify from 3.3.1 to 3.3.2 by dependabot in #278
  • Bump svgo from 3.3.2 to 3.3.3 by dependabot in #275
  • Bump svelte from 5.51.5 to 5.53.5 by dependabot in #271
  • Bump rollup from 4.40.1 to 4.59.0 by dependabot in #269
  • Bump svelte from 5.48.1 to 5.51.5 by dependabot in #260

Deps Dev

  • Bump tar from 7.5.10 to 7.5.11 by dependabot in #285
  • Bump tar from 7.5.8 to 7.5.10 by dependabot in #276
  • Bump tar from 7.5.7 to 7.5.8 by dependabot in #259

Other

  • (ai) Deprecate rest of ai.* attributes by constantinius in #264
  • (attributes) Ensure each attribute json has a changelog entry by Lms24 in #282
  • (docs) Upgrade to Astro 6 by Lms24 in #283
  • (gen_ai) Deprecate gen_ai.tool.input, gen_ai.tool.message, gen_ai.tool.output by constantinius in #265
  • (publish) Bump next entries in changelog when releasing by Lms24 in #290
  • (repo) Populate changelog property when running yarn create:attribute by Lms24 in #280
  • Pin GitHub Actions to full-length commit SHAs by joshuarli in #302
  • Wrong link to CONTRIBUTING.md in PR template by sentrivana in #298

Other

  • deprecate(attributes): Mark gen_ai.tool.type as deprecated by ericapisani in #312


@buenaflor buenaflor changed the title from "feat(attributes): Add app start and display timing attributes" to "feat(attributes): Add app start and ttid/ttf attributes for mobile" on Apr 8, 2026
…old ones

Replace app.start.cold.value/app.start.warm.value with app.start.value
and app.start.type. Add app.frames.total.count, app.frames.slow.count,
app.frames.frozen.count, and app.frames.delay.value under the app
namespace. Deprecate frames.* and app_start_type with backfill status
per CONTRIBUTING.md policy.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@buenaflor buenaflor changed the title from "feat(attributes): Add app start and ttid/ttf attributes for mobile" to "feat(attributes): Add app start, display, and frames attributes" on Apr 9, 2026
@buenaflor buenaflor marked this pull request as ready for review April 13, 2026 08:37
Copilot AI review requested due to automatic review settings April 13, 2026 08:37
@buenaflor buenaflor requested review from a team, mjq and nsdeschenes as code owners April 13, 2026 08:37

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: Frames delay value typed as integer, not double
    • Changed the type from "integer" to "double" in the JSON model and regenerated the TypeScript and Python files to preserve fractional precision for frame delay durations.

Preview of the prepared fix (dd6d85fe8e):
diff --git a/javascript/sentry-conventions/src/attributes.ts b/javascript/sentry-conventions/src/attributes.ts
--- a/javascript/sentry-conventions/src/attributes.ts
+++ b/javascript/sentry-conventions/src/attributes.ts
@@ -766,7 +766,7 @@
  *
  * Aliases: {@link FRAMES_DELAY} `frames.delay`
  *
- * @example 5
+ * @example 5.5
  */
 export const APP_FRAMES_DELAY_VALUE = 'app.frames.delay.value';
 
@@ -10437,7 +10437,7 @@
   [AI_TOTAL_TOKENS_USED]: 'integer',
   [AI_WARNINGS]: 'string[]',
   [APP_BUILD]: 'string',
-  [APP_FRAMES_DELAY_VALUE]: 'integer',
+  [APP_FRAMES_DELAY_VALUE]: 'double',
   [APP_FRAMES_FROZEN_COUNT]: 'integer',
   [APP_FRAMES_SLOW_COUNT]: 'integer',
   [APP_FRAMES_TOTAL_COUNT]: 'integer',
@@ -11909,12 +11909,12 @@
   [APP_FRAMES_DELAY_VALUE]: {
     brief:
       'The sum of all delayed frame durations in seconds during the lifetime of the span. For more information see [frames delay](https://develop.sentry.dev/sdk/performance/frames-delay/).',
-    type: 'integer',
+    type: 'double',
     pii: {
       isPii: 'maybe',
     },
     isInOtel: false,
-    example: 5,
+    example: 5.5,
     aliases: [FRAMES_DELAY],
     sdks: ['sentry.cocoa', 'sentry.java.android', 'sentry.javascript.react-native', 'sentry.dart.flutter'],
     changelog: [{ version: 'next', prs: [313], description: 'Added app.frames.delay.value to replace frames.delay' }],

diff --git a/model/attributes/app/app__frames__delay__value.json b/model/attributes/app/app__frames__delay__value.json
--- a/model/attributes/app/app__frames__delay__value.json
+++ b/model/attributes/app/app__frames__delay__value.json
@@ -1,14 +1,14 @@
 {
   "key": "app.frames.delay.value",
   "brief": "The sum of all delayed frame durations in seconds during the lifetime of the span. For more information see [frames delay](https://develop.sentry.dev/sdk/performance/frames-delay/).",
-  "type": "integer",
+  "type": "double",
   "pii": {
     "key": "maybe"
   },
   "is_in_otel": false,
   "alias": ["frames.delay"],
   "sdks": ["sentry.cocoa", "sentry.java.android", "sentry.javascript.react-native", "sentry.dart.flutter"],
-  "example": 5,
+  "example": 5.5,
   "changelog": [
     {
       "version": "next",

diff --git a/python/src/sentry_conventions/attributes.py b/python/src/sentry_conventions/attributes.py
--- a/python/src/sentry_conventions/attributes.py
+++ b/python/src/sentry_conventions/attributes.py
@@ -5,13 +5,10 @@
 import warnings
 from dataclasses import dataclass
 from enum import Enum
-from typing import Dict, List, Literal, Optional, TypedDict, Union
+from typing import List, Union, Literal, Optional, Dict, TypedDict
 
-AttributeValue = Union[
-    str, int, float, bool, List[str], List[int], List[float], List[bool]
-]
+AttributeValue = Union[str, int, float, bool, List[str], List[int], List[float], List[bool]]
 
-
 class AttributeType(Enum):
     STRING = "string"
     BOOLEAN = "boolean"
@@ -23,84 +20,75 @@
     DOUBLE_ARRAY = "double[]"
     ANY = "any"
 
-
 class IsPii(Enum):
     TRUE = "true"
     FALSE = "false"
     MAYBE = "maybe"
 
-
 @dataclass
 class PiiInfo:
     """Holds information about PII in an attribute's values."""
-
     isPii: IsPii
     reason: Optional[str] = None
 
-
 class DeprecationStatus(Enum):
     BACKFILL = "backfill"
     NORMALIZE = "normalize"
 
-
 @dataclass
 class DeprecationInfo:
     """Holds information about a deprecation."""
-
     replacement: Optional[str] = None
     reason: Optional[str] = None
     status: Optional[DeprecationStatus] = None
 
-
 @dataclass
 class ChangelogEntry:
     """A changelog entry tracking a change to an attribute."""
 
     version: str
     """The sentry-conventions release version"""
-
+    
     prs: Optional[List[int]] = None
     """GitHub PR numbers"""
-
+    
     description: Optional[str] = None
     """Optional description of what changed"""
 
-
 @dataclass
 class AttributeMetadata:
     """The metadata for an attribute."""
 
     brief: str
     """A description of the attribute"""
-
+    
     type: AttributeType
     """The type of the attribute value"""
-
+    
     pii: PiiInfo
     """If an attribute can have pii. Is either true, false or maybe. Optionally include a reason about why it has PII or not"""
-
+    
     is_in_otel: bool
     """Whether the attribute is defined in OpenTelemetry Semantic Conventions"""
-
+    
     has_dynamic_suffix: Optional[bool] = None
     """If an attribute has a dynamic suffix, for example http.response.header.<key> where <key> is dynamic"""
-
+    
     example: Optional[AttributeValue] = None
     """An example value of the attribute"""
-
+    
     deprecation: Optional[DeprecationInfo] = None
     """If an attribute was deprecated, and what it was replaced with"""
-
+    
     aliases: Optional[List[str]] = None
     """If there are attributes that alias to this attribute"""
-
+    
     sdks: Optional[List[str]] = None
     """If an attribute is SDK specific, list the SDKs that use this attribute. This is not an exhaustive list, there might be SDKs that send this attribute that are is not documented here."""
-
+    
     changelog: Optional[List[ChangelogEntry]] = None
     """Changelog entries tracking how this attribute has changed across versions"""
 
-
 class _AttributeNamesMeta(type):
     _deprecated_names = {
         "AI_CITATIONS",
@@ -237,7 +225,6 @@
             )
         return super().__getattribute__(name)
 
-
 class ATTRIBUTE_NAMES(metaclass=_AttributeNamesMeta):
     """Contains all attribute names as class attributes with their documentation."""
 
@@ -253,9 +240,7 @@
     """
 
     # Path: model/attributes/ai/ai__completion_tokens__used.json
-    AI_COMPLETION_TOKENS_USED: Literal["ai.completion_tokens.used"] = (
-        "ai.completion_tokens.used"
-    )
+    AI_COMPLETION_TOKENS_USED: Literal["ai.completion_tokens.used"] = "ai.completion_tokens.used"
     """The number of tokens used to respond to the message.
 
     Type: int
@@ -640,17 +625,15 @@
     APP_FRAMES_DELAY_VALUE: Literal["app.frames.delay.value"] = "app.frames.delay.value"
     """The sum of all delayed frame durations in seconds during the lifetime of the span. For more information see [frames delay](https://develop.sentry.dev/sdk/performance/frames-delay/).
 
-    Type: int
+    Type: float
     Contains PII: maybe
     Defined in OTEL: No
     Aliases: frames.delay
-    Example: 5
+    Example: 5.5
     """
 
     # Path: model/attributes/app/app__frames__frozen__count.json
-    APP_FRAMES_FROZEN_COUNT: Literal["app.frames.frozen.count"] = (
-        "app.frames.frozen.count"
-    )
+    APP_FRAMES_FROZEN_COUNT: Literal["app.frames.frozen.count"] = "app.frames.frozen.count"
     """The number of frozen frames rendered during the lifetime of the span.
 
     Type: int
@@ -827,9 +810,7 @@
     """
 
     # Path: model/attributes/browser/browser__script__invoker_type.json
-    BROWSER_SCRIPT_INVOKER_TYPE: Literal["browser.script.invoker_type"] = (
-        "browser.script.invoker_type"
-    )
+    BROWSER_SCRIPT_INVOKER_TYPE: Literal["browser.script.invoker_type"] = "browser.script.invoker_type"
     """Browser script entry point type.
 
     Type: str
@@ -839,9 +820,7 @@
     """
 
     # Path: model/attributes/browser/browser__script__source_char_position.json
-    BROWSER_SCRIPT_SOURCE_CHAR_POSITION: Literal[
-        "browser.script.source_char_position"
-    ] = "browser.script.source_char_position"
+    BROWSER_SCRIPT_SOURCE_CHAR_POSITION: Literal["browser.script.source_char_position"] = "browser.script.source_char_position"
     """A number representing the script character position of the script.
 
     Type: int
@@ -862,9 +841,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__cls__source__[key].json
-    BROWSER_WEB_VITAL_CLS_SOURCE_KEY: Literal["browser.web_vital.cls.source.<key>"] = (
-        "browser.web_vital.cls.source.<key>"
-    )
+    BROWSER_WEB_VITAL_CLS_SOURCE_KEY: Literal["browser.web_vital.cls.source.<key>"] = "browser.web_vital.cls.source.<key>"
     """The HTML elements or components responsible for the layout shift. <key> is a numeric index from 1 to N
 
     Type: str
@@ -876,9 +853,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__cls__value.json
-    BROWSER_WEB_VITAL_CLS_VALUE: Literal["browser.web_vital.cls.value"] = (
-        "browser.web_vital.cls.value"
-    )
+    BROWSER_WEB_VITAL_CLS_VALUE: Literal["browser.web_vital.cls.value"] = "browser.web_vital.cls.value"
     """The value of the recorded Cumulative Layout Shift (CLS) web vital
 
     Type: float
@@ -889,9 +864,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__fcp__value.json
-    BROWSER_WEB_VITAL_FCP_VALUE: Literal["browser.web_vital.fcp.value"] = (
-        "browser.web_vital.fcp.value"
-    )
+    BROWSER_WEB_VITAL_FCP_VALUE: Literal["browser.web_vital.fcp.value"] = "browser.web_vital.fcp.value"
     """The time it takes for the browser to render the first piece of meaningful content on the screen
 
     Type: float
@@ -902,9 +875,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__fp__value.json
-    BROWSER_WEB_VITAL_FP_VALUE: Literal["browser.web_vital.fp.value"] = (
-        "browser.web_vital.fp.value"
-    )
+    BROWSER_WEB_VITAL_FP_VALUE: Literal["browser.web_vital.fp.value"] = "browser.web_vital.fp.value"
     """The time in milliseconds it takes for the browser to render the first pixel on the screen
 
     Type: float
@@ -915,9 +886,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__inp__value.json
-    BROWSER_WEB_VITAL_INP_VALUE: Literal["browser.web_vital.inp.value"] = (
-        "browser.web_vital.inp.value"
-    )
+    BROWSER_WEB_VITAL_INP_VALUE: Literal["browser.web_vital.inp.value"] = "browser.web_vital.inp.value"
     """The value of the recorded Interaction to Next Paint (INP) web vital
 
     Type: float
@@ -928,9 +897,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__lcp__element.json
-    BROWSER_WEB_VITAL_LCP_ELEMENT: Literal["browser.web_vital.lcp.element"] = (
-        "browser.web_vital.lcp.element"
-    )
+    BROWSER_WEB_VITAL_LCP_ELEMENT: Literal["browser.web_vital.lcp.element"] = "browser.web_vital.lcp.element"
     """The HTML element selector or component name for which LCP was reported
 
     Type: str
@@ -941,9 +908,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__lcp__id.json
-    BROWSER_WEB_VITAL_LCP_ID: Literal["browser.web_vital.lcp.id"] = (
-        "browser.web_vital.lcp.id"
-    )
+    BROWSER_WEB_VITAL_LCP_ID: Literal["browser.web_vital.lcp.id"] = "browser.web_vital.lcp.id"
     """The id of the dom element responsible for the largest contentful paint
 
     Type: str
@@ -954,9 +919,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__lcp__load_time.json
-    BROWSER_WEB_VITAL_LCP_LOAD_TIME: Literal["browser.web_vital.lcp.load_time"] = (
-        "browser.web_vital.lcp.load_time"
-    )
+    BROWSER_WEB_VITAL_LCP_LOAD_TIME: Literal["browser.web_vital.lcp.load_time"] = "browser.web_vital.lcp.load_time"
     """The time it took for the LCP element to be loaded
 
     Type: int
@@ -967,9 +930,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__lcp__render_time.json
-    BROWSER_WEB_VITAL_LCP_RENDER_TIME: Literal["browser.web_vital.lcp.render_time"] = (
-        "browser.web_vital.lcp.render_time"
-    )
+    BROWSER_WEB_VITAL_LCP_RENDER_TIME: Literal["browser.web_vital.lcp.render_time"] = "browser.web_vital.lcp.render_time"
     """The time it took for the LCP element to be rendered
 
     Type: int
@@ -980,9 +941,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__lcp__size.json
-    BROWSER_WEB_VITAL_LCP_SIZE: Literal["browser.web_vital.lcp.size"] = (
-        "browser.web_vital.lcp.size"
-    )
+    BROWSER_WEB_VITAL_LCP_SIZE: Literal["browser.web_vital.lcp.size"] = "browser.web_vital.lcp.size"
     """The size of the largest contentful paint element
 
     Type: int
@@ -993,9 +952,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__lcp__url.json
-    BROWSER_WEB_VITAL_LCP_URL: Literal["browser.web_vital.lcp.url"] = (
-        "browser.web_vital.lcp.url"
-    )
+    BROWSER_WEB_VITAL_LCP_URL: Literal["browser.web_vital.lcp.url"] = "browser.web_vital.lcp.url"
     """The url of the dom element responsible for the largest contentful paint
 
     Type: str
@@ -1006,9 +963,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__lcp__value.json
-    BROWSER_WEB_VITAL_LCP_VALUE: Literal["browser.web_vital.lcp.value"] = (
-        "browser.web_vital.lcp.value"
-    )
+    BROWSER_WEB_VITAL_LCP_VALUE: Literal["browser.web_vital.lcp.value"] = "browser.web_vital.lcp.value"
     """The value of the recorded Largest Contentful Paint (LCP) web vital
 
     Type: float
@@ -1019,9 +974,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__ttfb__request_time.json
-    BROWSER_WEB_VITAL_TTFB_REQUEST_TIME: Literal[
-        "browser.web_vital.ttfb.request_time"
-    ] = "browser.web_vital.ttfb.request_time"
+    BROWSER_WEB_VITAL_TTFB_REQUEST_TIME: Literal["browser.web_vital.ttfb.request_time"] = "browser.web_vital.ttfb.request_time"
     """The time it takes for the server to process the initial request and send the first byte of a response to the user's browser
 
     Type: float
@@ -1032,9 +985,7 @@
     """
 
     # Path: model/attributes/browser/browser__web_vital__ttfb__value.json
-    BROWSER_WEB_VITAL_TTFB_VALUE: Literal["browser.web_vital.ttfb.value"] = (
-        "browser.web_vital.ttfb.value"
-    )
+    BROWSER_WEB_VITAL_TTFB_VALUE: Literal["browser.web_vital.ttfb.value"] = "browser.web_vital.ttfb.value"
     """The value of the recorded Time To First Byte (TTFB) web vital in Milliseconds
 
     Type: float
@@ -1146,9 +1097,7 @@
     """
 
     # Path: model/attributes/cloudflare/cloudflare__d1__rows_read.json
-    CLOUDFLARE_D1_ROWS_READ: Literal["cloudflare.d1.rows_read"] = (
-        "cloudflare.d1.rows_read"
-    )
+    CLOUDFLARE_D1_ROWS_READ: Literal["cloudflare.d1.rows_read"] = "cloudflare.d1.rows_read"
     """The number of rows read in a Cloudflare D1 operation.
 
     Type: int
@@ -1158,9 +1107,7 @@
     """
 
     # Path: model/attributes/cloudflare/cloudflare__d1__rows_written.json
-    CLOUDFLARE_D1_ROWS_WRITTEN: Literal["cloudflare.d1.rows_written"] = (
-        "cloudflare.d1.rows_written"
-    )
+    CLOUDFLARE_D1_ROWS_WRITTEN: Literal["cloudflare.d1.rows_written"] = "cloudflare.d1.rows_written"
     """The number of rows written in a Cloudflare D1 operation.
 
     Type: int
@@ -1319,9 +1266,7 @@
     """
 
     # Path: model/attributes/culture/culture__is_24_hour_format.json
-    CULTURE_IS_24_HOUR_FORMAT: Literal["culture.is_24_hour_format"] = (
-        "culture.is_24_hour_format"
-    )
+    CULTURE_IS_24_HOUR_FORMAT: Literal["culture.is_24_hour_format"] = "culture.is_24_hour_format"
     """Whether the culture uses 24-hour time format.
 
     Type: bool
@@ -1407,9 +1352,7 @@
     """
 
     # Path: model/attributes/db/db__query__parameter__[key].json
-    DB_QUERY_PARAMETER_KEY: Literal["db.query.parameter.<key>"] = (
-        "db.query.parameter.<key>"
-    )
+    DB_QUERY_PARAMETER_KEY: Literal["db.query.parameter.<key>"] = "db.query.parameter.<key>"
     """A query parameter used in db.query.text, with <key> being the parameter name, and the attribute value being a string representation of the parameter value.
 
     Type: str
@@ -1557,9 +1500,7 @@
     """
 
     # Path: model/attributes/device/device__memory__estimated_capacity.json
-    DEVICE_MEMORY_ESTIMATED_CAPACITY: Literal["device.memory.estimated_capacity"] = (
-        "device.memory.estimated_capacity"
-    )
+    DEVICE_MEMORY_ESTIMATED_CAPACITY: Literal["device.memory.estimated_capacity"] = "device.memory.estimated_capacity"
     """The estimated total memory capacity of the device, only a rough estimation in gigabytes. Browsers report estimations in buckets of powers of 2, mostly capped at 8 GB
 
     Type: int
@@ -1633,9 +1574,7 @@
     """
 
     # Path: model/attributes/effectiveConnectionType.json
-    EFFECTIVECONNECTIONTYPE: Literal["effectiveConnectionType"] = (
-        "effectiveConnectionType"
-    )
+    EFFECTIVECONNECTIONTYPE: Literal["effectiveConnectionType"] = "effectiveConnectionType"
     """Specifies the estimated effective type of the current connection (e.g. slow-2g, 2g, 3g, 4g).
 
     Type: str
@@ -1883,9 +1822,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__cost__input_tokens.json
-    GEN_AI_COST_INPUT_TOKENS: Literal["gen_ai.cost.input_tokens"] = (
-        "gen_ai.cost.input_tokens"
-    )
+    GEN_AI_COST_INPUT_TOKENS: Literal["gen_ai.cost.input_tokens"] = "gen_ai.cost.input_tokens"
     """The cost of tokens used to process the AI input (prompt) in USD (without cached input tokens).
 
     Type: float
@@ -1895,9 +1832,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__cost__output_tokens.json
-    GEN_AI_COST_OUTPUT_TOKENS: Literal["gen_ai.cost.output_tokens"] = (
-        "gen_ai.cost.output_tokens"
-    )
+    GEN_AI_COST_OUTPUT_TOKENS: Literal["gen_ai.cost.output_tokens"] = "gen_ai.cost.output_tokens"
     """The cost of tokens used for creating the AI output in USD (without reasoning tokens).
 
     Type: float
@@ -1907,9 +1842,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__cost__total_tokens.json
-    GEN_AI_COST_TOTAL_TOKENS: Literal["gen_ai.cost.total_tokens"] = (
-        "gen_ai.cost.total_tokens"
-    )
+    GEN_AI_COST_TOTAL_TOKENS: Literal["gen_ai.cost.total_tokens"] = "gen_ai.cost.total_tokens"
     """The total cost for the tokens used.
 
     Type: float
@@ -1920,9 +1853,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__embeddings__input.json
-    GEN_AI_EMBEDDINGS_INPUT: Literal["gen_ai.embeddings.input"] = (
-        "gen_ai.embeddings.input"
-    )
+    GEN_AI_EMBEDDINGS_INPUT: Literal["gen_ai.embeddings.input"] = "gen_ai.embeddings.input"
     """The input to the embeddings model.
 
     Type: str
@@ -2006,9 +1937,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__available_tools.json
-    GEN_AI_REQUEST_AVAILABLE_TOOLS: Literal["gen_ai.request.available_tools"] = (
-        "gen_ai.request.available_tools"
-    )
+    GEN_AI_REQUEST_AVAILABLE_TOOLS: Literal["gen_ai.request.available_tools"] = "gen_ai.request.available_tools"
     """The available tools for the model. It has to be a stringified version of an array of objects.
 
     Type: str
@@ -2019,9 +1948,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__frequency_penalty.json
-    GEN_AI_REQUEST_FREQUENCY_PENALTY: Literal["gen_ai.request.frequency_penalty"] = (
-        "gen_ai.request.frequency_penalty"
-    )
+    GEN_AI_REQUEST_FREQUENCY_PENALTY: Literal["gen_ai.request.frequency_penalty"] = "gen_ai.request.frequency_penalty"
     """Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
 
     Type: float
@@ -2032,9 +1959,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__max_tokens.json
-    GEN_AI_REQUEST_MAX_TOKENS: Literal["gen_ai.request.max_tokens"] = (
-        "gen_ai.request.max_tokens"
-    )
+    GEN_AI_REQUEST_MAX_TOKENS: Literal["gen_ai.request.max_tokens"] = "gen_ai.request.max_tokens"
     """The maximum number of tokens to generate in the response.
 
     Type: int
@@ -2044,9 +1969,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__messages.json
-    GEN_AI_REQUEST_MESSAGES: Literal["gen_ai.request.messages"] = (
-        "gen_ai.request.messages"
-    )
+    GEN_AI_REQUEST_MESSAGES: Literal["gen_ai.request.messages"] = "gen_ai.request.messages"
     """The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
 
     Type: str
@@ -2068,9 +1991,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__presence_penalty.json
-    GEN_AI_REQUEST_PRESENCE_PENALTY: Literal["gen_ai.request.presence_penalty"] = (
-        "gen_ai.request.presence_penalty"
-    )
+    GEN_AI_REQUEST_PRESENCE_PENALTY: Literal["gen_ai.request.presence_penalty"] = "gen_ai.request.presence_penalty"
     """Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
 
     Type: float
@@ -2092,9 +2013,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__request__temperature.json
-    GEN_AI_REQUEST_TEMPERATURE: Literal["gen_ai.request.temperature"] = (
-        "gen_ai.request.temperature"
-    )
+    GEN_AI_REQUEST_TEMPERATURE: Literal["gen_ai.request.temperature"] = "gen_ai.request.temperature"
     """For an AI model call, the temperature parameter. Temperature essentially means how random the output will be.
 
     Type: float
@@ -2127,9 +2046,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__response__finish_reasons.json
-    GEN_AI_RESPONSE_FINISH_REASONS: Literal["gen_ai.response.finish_reasons"] = (
-        "gen_ai.response.finish_reasons"
-    )
+    GEN_AI_RESPONSE_FINISH_REASONS: Literal["gen_ai.response.finish_reasons"] = "gen_ai.response.finish_reasons"
     """The reason why the model stopped generating.
 
     Type: str
@@ -2162,9 +2079,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__response__streaming.json
-    GEN_AI_RESPONSE_STREAMING: Literal["gen_ai.response.streaming"] = (
-        "gen_ai.response.streaming"
-    )
+    GEN_AI_RESPONSE_STREAMING: Literal["gen_ai.response.streaming"] = "gen_ai.response.streaming"
     """Whether or not the AI model call's response was streamed back asynchronously
 
     Type: bool
@@ -2186,9 +2101,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__response__time_to_first_token.json
-    GEN_AI_RESPONSE_TIME_TO_FIRST_TOKEN: Literal[
-        "gen_ai.response.time_to_first_token"
-    ] = "gen_ai.response.time_to_first_token"
+    GEN_AI_RESPONSE_TIME_TO_FIRST_TOKEN: Literal["gen_ai.response.time_to_first_token"] = "gen_ai.response.time_to_first_token"
     """Time in seconds when the first response content chunk arrived in streaming responses.
 
     Type: float
@@ -2198,9 +2111,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__response__tokens_per_second.json
-    GEN_AI_RESPONSE_TOKENS_PER_SECOND: Literal["gen_ai.response.tokens_per_second"] = (
-        "gen_ai.response.tokens_per_second"
-    )
+    GEN_AI_RESPONSE_TOKENS_PER_SECOND: Literal["gen_ai.response.tokens_per_second"] = "gen_ai.response.tokens_per_second"
     """The total output tokens per seconds throughput
 
     Type: float
@@ -2210,9 +2121,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__response__tool_calls.json
-    GEN_AI_RESPONSE_TOOL_CALLS: Literal["gen_ai.response.tool_calls"] = (
-        "gen_ai.response.tool_calls"
-    )
+    GEN_AI_RESPONSE_TOOL_CALLS: Literal["gen_ai.response.tool_calls"] = "gen_ai.response.tool_calls"
     """The tool calls in the model's response. It has to be a stringified version of an array of objects.
 
     Type: str
@@ -2246,9 +2155,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__system_instructions.json
-    GEN_AI_SYSTEM_INSTRUCTIONS: Literal["gen_ai.system_instructions"] = (
-        "gen_ai.system_instructions"
-    )
+    GEN_AI_SYSTEM_INSTRUCTIONS: Literal["gen_ai.system_instructions"] = "gen_ai.system_instructions"
     """The system instructions passed to the model.
 
     Type: str
@@ -2259,9 +2166,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__tool__call__arguments.json
-    GEN_AI_TOOL_CALL_ARGUMENTS: Literal["gen_ai.tool.call.arguments"] = (
-        "gen_ai.tool.call.arguments"
-    )
+    GEN_AI_TOOL_CALL_ARGUMENTS: Literal["gen_ai.tool.call.arguments"] = "gen_ai.tool.call.arguments"
     """The arguments of the tool call. It has to be a stringified version of the arguments to the tool.
 
     Type: str
@@ -2272,9 +2177,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__tool__call__result.json
-    GEN_AI_TOOL_CALL_RESULT: Literal["gen_ai.tool.call.result"] = (
-        "gen_ai.tool.call.result"
-    )
+    GEN_AI_TOOL_CALL_RESULT: Literal["gen_ai.tool.call.result"] = "gen_ai.tool.call.result"
     """The result of the tool call. It has to be a stringified version of the result of the tool.
 
     Type: str
@@ -2285,9 +2188,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__tool__definitions.json
-    GEN_AI_TOOL_DEFINITIONS: Literal["gen_ai.tool.definitions"] = (
-        "gen_ai.tool.definitions"
-    )
+    GEN_AI_TOOL_DEFINITIONS: Literal["gen_ai.tool.definitions"] = "gen_ai.tool.definitions"
     """The list of source system tool definitions available to the GenAI agent or model.
 
     Type: str
@@ -2297,9 +2198,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__tool__description.json
-    GEN_AI_TOOL_DESCRIPTION: Literal["gen_ai.tool.description"] = (
-        "gen_ai.tool.description"
-    )
+    GEN_AI_TOOL_DESCRIPTION: Literal["gen_ai.tool.description"] = "gen_ai.tool.description"
     """The description of the tool being used.
 
     Type: str
@@ -2366,9 +2265,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__completion_tokens.json
-    GEN_AI_USAGE_COMPLETION_TOKENS: Literal["gen_ai.usage.completion_tokens"] = (
-        "gen_ai.usage.completion_tokens"
-    )
+    GEN_AI_USAGE_COMPLETION_TOKENS: Literal["gen_ai.usage.completion_tokens"] = "gen_ai.usage.completion_tokens"
     """The number of tokens used in the GenAI response (completion).
 
     Type: int
@@ -2380,9 +2277,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens.json
-    GEN_AI_USAGE_INPUT_TOKENS: Literal["gen_ai.usage.input_tokens"] = (
-        "gen_ai.usage.input_tokens"
-    )
+    GEN_AI_USAGE_INPUT_TOKENS: Literal["gen_ai.usage.input_tokens"] = "gen_ai.usage.input_tokens"
     """The number of tokens used to process the AI input (prompt) including cached input tokens.
 
     Type: int
@@ -2393,9 +2288,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens__cache_write.json
-    GEN_AI_USAGE_INPUT_TOKENS_CACHE_WRITE: Literal[
-        "gen_ai.usage.input_tokens.cache_write"
-    ] = "gen_ai.usage.input_tokens.cache_write"
+    GEN_AI_USAGE_INPUT_TOKENS_CACHE_WRITE: Literal["gen_ai.usage.input_tokens.cache_write"] = "gen_ai.usage.input_tokens.cache_write"
     """The number of tokens written to the cache when processing the AI input (prompt).
 
     Type: int
@@ -2405,9 +2298,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__input_tokens__cached.json
-    GEN_AI_USAGE_INPUT_TOKENS_CACHED: Literal["gen_ai.usage.input_tokens.cached"] = (
-        "gen_ai.usage.input_tokens.cached"
-    )
+    GEN_AI_USAGE_INPUT_TOKENS_CACHED: Literal["gen_ai.usage.input_tokens.cached"] = "gen_ai.usage.input_tokens.cached"
     """The number of cached tokens used to process the AI input (prompt).
 
     Type: int
@@ -2417,9 +2308,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__output_tokens.json
-    GEN_AI_USAGE_OUTPUT_TOKENS: Literal["gen_ai.usage.output_tokens"] = (
-        "gen_ai.usage.output_tokens"
-    )
+    GEN_AI_USAGE_OUTPUT_TOKENS: Literal["gen_ai.usage.output_tokens"] = "gen_ai.usage.output_tokens"
     """The number of tokens used for creating the AI output (including reasoning tokens).
 
     Type: int
@@ -2430,9 +2319,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__output_tokens__reasoning.json
-    GEN_AI_USAGE_OUTPUT_TOKENS_REASONING: Literal[
-        "gen_ai.usage.output_tokens.reasoning"
-    ] = "gen_ai.usage.output_tokens.reasoning"
+    GEN_AI_USAGE_OUTPUT_TOKENS_REASONING: Literal["gen_ai.usage.output_tokens.reasoning"] = "gen_ai.usage.output_tokens.reasoning"
     """The number of tokens used for reasoning to create the AI output.
 
     Type: int
@@ -2442,9 +2329,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__prompt_tokens.json
-    GEN_AI_USAGE_PROMPT_TOKENS: Literal["gen_ai.usage.prompt_tokens"] = (
-        "gen_ai.usage.prompt_tokens"
-    )
+    GEN_AI_USAGE_PROMPT_TOKENS: Literal["gen_ai.usage.prompt_tokens"] = "gen_ai.usage.prompt_tokens"
     """The number of tokens used in the GenAI input (prompt).
 
     Type: int
@@ -2456,9 +2341,7 @@
     """
 
     # Path: model/attributes/gen_ai/gen_ai__usage__total_tokens.json
-    GEN_AI_USAGE_TOTAL_TOKENS: Literal["gen_ai.usage.total_tokens"] = (
-        "gen_ai.usage.total_tokens"
-    )
+    GEN_AI_USAGE_TOTAL_TOKENS: Literal["gen_ai.usage.total_tokens"] = "gen_ai.usage.total_tokens"
     """The total number of tokens used to process the prompt. (input tokens plus output todkens)
 
     Type: int
@@ -2513,9 +2396,7 @@
     """
 
     # Path: model/attributes/http/http__decoded_response_content_length.json
-    HTTP_DECODED_RESPONSE_CONTENT_LENGTH: Literal[
-        "http.decoded_response_content_length"
-    ] = "http.decoded_response_content_length"
+    HTTP_DECODED_RESPONSE_CONTENT_LENGTH: Literal["http.decoded_response_content_length"] = "http.decoded_response_content_length"
     """The decoded body size of the response (in bytes).
 
     Type: int
@@ -2581,9 +2462,7 @@
     """
 
     # Path: model/attributes/http/http__request__connect_start.json
-    HTTP_REQUEST_CONNECT_START: Literal["http.request.connect_start"] = (
-        "http.request.connect_start"
-    )
+    HTTP_REQUEST_CONNECT_START: Literal["http.request.connect_start"] = "http.request.connect_start"
     """The UNIX timestamp representing the time immediately before the user agent starts establishing the connection to the server to retrieve the resource.
 
     Type: float
@@ -2593,9 +2472,7 @@
     """
 
     # Path: model/attributes/http/http__request__connection_end.json
-    HTTP_REQUEST_CONNECTION_END: Literal["http.request.connection_end"] = (
-        "http.request.connection_end"
-    )
+    HTTP_REQUEST_CONNECTION_END: Literal["http.request.connection_end"] = "http.request.connection_end"
     """The UNIX timestamp representing the time immediately after the browser finishes establishing the connection to the server to retrieve the resource. The timestamp value includes the time interval to establish the transport connection, as well as other time intervals such as TLS handshake and SOCKS authentication.
 
     Type: float
@@ -2605,9 +2482,7 @@
     """
 
     # Path: model/attributes/http/http__request__domain_lookup_end.json
... diff truncated: showing 800 of 9390 lines
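The regenerated bindings above collapse each black-formatted, parenthesized assignment into a single line; functionally, each entry is just a typed string constant. A minimal sketch of how such a generated constant behaves (the constant name and value are taken from the diff; the attribute-dict usage is hypothetical, not the SDK's actual API):

```python
from typing import Literal

# Generated-style constant: Literal pins the exact string at type-check time,
# while at runtime it is an ordinary str usable as an attribute key.
GEN_AI_USAGE_TOTAL_TOKENS: Literal["gen_ai.usage.total_tokens"] = "gen_ai.usage.total_tokens"

# Hypothetical span-attribute dict keyed by the constant
attributes = {GEN_AI_USAGE_TOTAL_TOKENS: 1536}
print(attributes["gen_ai.usage.total_tokens"])
```

Because the annotation is `Literal[...]`, a type checker will reject assigning any other string to the name, which is what keeps the generated Python and TypeScript bindings in lockstep with the JSON model files.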


Reviewed by Cursor Bugbot for commit ddf3b25.


Copilot AI left a comment


Pull request overview

Adds new namespaced mobile performance attributes under app.* (app start, initial/full display, and frames metrics) and deprecates legacy counterparts (app_start_type, frames.*) using the alias + backfill deprecation pattern.

Changes:

  • Introduce new app.start.*, app.tt{id|fd}.value, and app.frames.* attributes with SDK coverage metadata.
  • Deprecate legacy attributes (app_start_type, frames.total|slow|frozen|delay) and add symmetric aliases to the replacements.
  • Regenerate language bindings (Python + TypeScript) and the generated deprecated-attributes index.

Reviewed changes

Copilot reviewed 16 out of 16 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
shared/deprecated_attributes.json Adds generated deprecated entries for app_start_type and frames.* pointing to the new app.* keys.
python/src/sentry_conventions/attributes.py Regenerates Python constants/metadata/TypedDict to include new app.* attributes and deprecations.
javascript/sentry-conventions/src/attributes.ts Regenerates TS exports/types/metadata for new app.* attributes and deprecated legacy keys.
model/attributes/app/app__start__type.json New app.start.type attribute with alias to app_start_type.
model/attributes/app/app__start__value.json New app.start.value duration attribute (ms).
model/attributes/app/app__ttid__value.json New time-to-initial-display duration attribute (ms).
model/attributes/app/app__ttfd__value.json New time-to-full-display duration attribute (ms).
model/attributes/app/app__frames__total__count.json New app.frames.total.count attribute with alias to frames.total.
model/attributes/app/app__frames__slow__count.json New app.frames.slow.count attribute with alias to frames.slow.
model/attributes/app/app__frames__frozen__count.json New app.frames.frozen.count attribute with alias to frames.frozen.
model/attributes/app/app__frames__delay__value.json New app.frames.delay.value attribute with alias to frames.delay.
model/attributes/app_start_type.json Deprecates app_start_type in favor of app.start.type (backfill) and adds alias.
model/attributes/frames/frames__total.json Deprecates frames.total in favor of app.frames.total.count (backfill) and adds alias.
model/attributes/frames/frames__slow.json Deprecates frames.slow in favor of app.frames.slow.count (backfill) and adds alias.
model/attributes/frames/frames__frozen.json Deprecates frames.frozen in favor of app.frames.frozen.count (backfill) and adds alias.
model/attributes/frames/frames__delay.json Deprecates frames.delay in favor of app.frames.delay.value (backfill) and adds alias.


buenaflor and others added 3 commits April 13, 2026 10:44
…display

Add time_to_initial_display and time_to_full_display as deprecated
attributes with backfill status, pointing to app.ttid.value and
app.ttfd.value respectively. Adds aliases on both sides and SDK usages.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Change app.frames.delay.value type from integer to double since frame
  delays are fractional seconds
- Fix frames.* deprecation reasons to say "Old frames.* attribute" instead
  of "Old namespace-less attribute"
- Improve app.start.type brief to mention cold/warm examples

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Member

@Lms24 Lms24 left a comment


Had two optional suggestions/questions but otherwise LGTM!

… keys

Rename all new attributes to app.vitals.* namespace. Replace
app.start.value + app.start.type with app.vitals.start.cold.value and
app.vitals.start.warm.value. Add app.vitals.start.type deprecating
app_start_type. Improve deprecation reasons across all deprecated
attributes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@buenaflor buenaflor changed the title feat(attributes): Add app start, display, and frames attributes feat(attributes): Add app.vitals.* attributes for mobile performance Apr 13, 2026
Member

@Lms24 Lms24 left a comment


thanks!

…mes.* namespace

The changelog descriptions for deprecated frames.* attributes incorrectly
referenced the intermediate app.frames.* namespace instead of the final
app.vitals.frames.* namespace, which is what the deprecation.replacement
fields already correctly use.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…d/ttid namespace

The changelog descriptions for deprecated time_to_full_display and
time_to_initial_display attributes incorrectly referenced app.ttfd.value
and app.ttid.value instead of app.vitals.ttfd.value and
app.vitals.ttid.value.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@buenaflor buenaflor merged commit 7f94a46 into main Apr 14, 2026
12 checks passed
@buenaflor buenaflor deleted the giancarlobuenaflor/add-app-start-ttid-ttfd-attributes branch April 14, 2026 12:25
3 participants