Add dynamic sitemap and robots.txt generation#99
…back

- Remove `export const revalidate` from sitemap.ts and robots.ts — not supported in Next.js 16 metadata route files, caused build failure
- Add NEXT_PUBLIC_SITE_URL fallback when store.url is empty
- Fix Biome formatting in sitemap.ts
- Use single Date instance for static pages lastModified
- Add early return optimization for single-chunk sitemaps
- Remove unused SITEMAP_REVALIDATE_SECONDS env variable

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
robots.ts now imports generateSitemaps() and dynamically lists all sitemap chunk URLs (/sitemap/0.xml, /sitemap/1.xml, etc.) instead of the non-existent /sitemap.xml — Next.js does not auto-generate a sitemap index when using generateSitemaps(). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
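The chunk-listing idea can be sketched as a standalone snippet. This is not the PR's actual robots.ts: `generateSitemaps()` is faked here instead of imported from src/app/sitemap.ts, and the base URL is illustrative.

```typescript
// Stand-in for generateSitemaps() exported from src/app/sitemap.ts.
type SitemapId = { id: number };

async function generateSitemaps(): Promise<SitemapId[]> {
  return [{ id: 0 }, { id: 1 }, { id: 2 }];
}

// Next.js does not emit a sitemap index for generateSitemaps(),
// so robots.txt must list every chunk URL explicitly.
async function sitemapUrls(baseUrl: string): Promise<string[]> {
  const chunks = await generateSitemaps();
  return chunks.map(({ id }) => `${baseUrl}/sitemap/${id}.xml`);
}

sitemapUrls("https://example.com").then((urls) => console.log(urls));
// ["https://example.com/sitemap/0.xml", "https://example.com/sitemap/1.xml", "https://example.com/sitemap/2.xml"]
```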
- Add safeLastModified() helper that guards against null/undefined/invalid date strings in product.updated_at and taxon.updated_at — omits lastModified instead of emitting "Invalid Date" in the XML
- Validate SITEMAP_LOCALE_MODE against allowed values; log a warning and fall back to "default" on typos instead of silently hitting the "all" branch

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
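A plausible reconstruction of the safeLastModified() guard described above (the real helper lives in src/app/sitemap.ts and may differ in detail):

```typescript
function safeLastModified(value: string | null | undefined): Date | undefined {
  if (value == null) return undefined;
  const date = new Date(value);
  // new Date("garbage") yields an Invalid Date whose time is NaN;
  // returning undefined lets callers omit lastModified entirely.
  return Number.isNaN(date.getTime()) ? undefined : date;
}

console.log(safeLastModified("2024-05-01T12:00:00Z")); // a valid Date
console.log(safeLastModified("not-a-date")); // undefined
console.log(safeLastModified(null)); // undefined
```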
The type guard `url is string` asserted string type but the runtime check `!== null` allowed undefined to pass through. Changed to loose equality `!= null` which excludes both null and undefined. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
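A minimal demonstration of the mismatch and the fix:

```typescript
// The broken guard only excluded null, so undefined slipped through
// despite the `url is string` assertion:
const brokenGuard = (url: string | null | undefined): url is string =>
  url !== null; // undefined still passes!

// Loose equality `!= null` excludes both null and undefined:
const fixedGuard = (url: string | null | undefined): url is string =>
  url != null;

const urls: Array<string | null | undefined> = ["/a.jpg", null, undefined];
console.log(urls.filter(brokenGuard)); // ["/a.jpg", undefined] — wrong
console.log(urls.filter(fixedGuard)); // ["/a.jpg"]
```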
Co-authored-by: Damian Legawiec <damian@getvendo.com>
- Fix promise cache poisoning: clear cached promise on rejection so transient failures can be retried on subsequent calls
- Use original_url instead of large_url for sitemap product images (correct Spree 5.x field, consistent with MediaGallery)
- Memoize resolveCountryLocales to avoid repeated getCountries() API calls
- Add MAX_PAGES (1000) safety cap to pagination loops
- Replace IIFE with clearer conditional for locale mode validation
- Move VALID_LOCALE_MODES constant to top of module with other constants
- Fix GTM_ID table row missing trailing pipe in README
- Fix MD028: merge adjacent blockquotes into single contiguous block

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
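The promise-cache-poisoning fix can be sketched as follows. `createCachedFetcher` is an illustrative name, not the PR's exact code; the point is evicting a rejected promise so the next call retries.

```typescript
function createCachedFetcher<T>(fetcher: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | null = null;
  return () => {
    if (!cached) {
      cached = fetcher().catch((err) => {
        // Without this, a rejected promise stays cached forever and every
        // later call replays the same failure ("cache poisoning").
        cached = null;
        throw err;
      });
    }
    return cached;
  };
}

// Usage: a transient failure followed by a successful retry.
let calls = 0;
const flaky = async (): Promise<string[]> => {
  calls += 1;
  if (calls === 1) throw new Error("transient");
  return ["US", "DE"];
};

const getCountries = createCachedFetcher(flaky);
getCountries()
  .catch(() => getCountries()) // retry succeeds because the rejection was evicted
  .then((countries) => console.log(countries)); // ["US", "DE"]
```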
- Replace removed @/lib/data/store with getStoreUrl() from @/lib/seo
- Replace removed @/lib/data/taxonomies with listCategories from @spree/next
- Update StoreTaxon → Category type, /t/ → /c/ routes, /taxonomies → /c
- Use limit/expand params instead of per_page/includes (new SDK naming)
- Import directly from @spree/next with explicit locale options to avoid cookies() calls at build time
- Add try-catch in generateSitemaps/sitemap for graceful API unavailability
- Update country locale resolution to use market.default_locale
- Default AI crawlers to allowed (ROBOTS_DISALLOW_AI=true to block)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
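The graceful-unavailability idea can be sketched like this. The single-chunk `[{ id: 0 }]` fallback is taken from the review thread later in this PR; the function and parameter names here are illustrative.

```typescript
async function generateSitemapIds(
  countProducts: () => Promise<number>,
  chunkSize: number,
): Promise<Array<{ id: number }>> {
  try {
    const count = await countProducts();
    const chunks = Math.max(1, Math.ceil(count / chunkSize));
    return Array.from({ length: chunks }, (_, id) => ({ id }));
  } catch {
    // API unavailable at build time: fall back to a single chunk
    // instead of failing the build.
    return [{ id: 0 }];
  }
}

generateSitemapIds(async () => 12_500, 5_000).then((ids) => console.log(ids));
// [{ id: 0 }, { id: 1 }, { id: 2 }]
```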
Instead of requiring manual SITEMAP_LOCALE_MODE/SITEMAP_COUNTRIES env configuration, fetch country/locale pairs directly from the Spree Markets API via listMarkets(). This removes 3 env variables and simplifies the sitemap to always reflect the actual store setup. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The SDK's Product type doesn't include the `images` relation that comes back when using `expand: ["images"]`. Add a ProductWithImages type to bridge the gap and cast the API response accordingly. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
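An illustrative version of the bridging pattern — the real SDK `Product` type and the exact relation shape may differ; this only shows the cast described above:

```typescript
interface Product {
  id: string;
  slug: string;
}

interface ProductImage {
  original_url: string | null;
}

// Product plus the `images` relation returned when expand: ["images"] is set.
type ProductWithImages = Product & { images?: ProductImage[] };

const apiResponse: unknown = {
  id: "1",
  slug: "mug",
  images: [{ original_url: "https://cdn.example.com/mug.jpg" }],
};

// Cast the expanded response to the bridged type.
const product = apiResponse as ProductWithImages;
console.log(product.images?.map((img) => img.original_url));
// ["https://cdn.example.com/mug.jpg"]
```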
The SDK renamed Image → Media and product.images → product.media. Update sitemap expand param and type assertions accordingly. Also includes lockfile changes from npm install after rebase. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove the AI_CRAWLERS list, ROBOTS_DISALLOW_AI env var, and all related references in .env.example and README.md. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace removed listCategories/listMarkets/listProducts imports with getClient() pattern used across the codebase. Update StoreProduct to Product type, and remove category.updated_at (no longer in SDK type). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
These env vars are no longer used — sitemap now discovers locales from the Spree markets API automatically. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Walkthrough

Added Next.js App Router metadata routes for sitemap and robots, plus related env/docs updates.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Build as Next.js Build/Server
    participant Cache as Module Cache
    participant API as Store API
    participant SitemapGen as src/app/sitemap.ts
    participant Robots as src/app/robots.ts
    participant Output as Sitemap / robots.txt
    Build->>SitemapGen: request sitemap(id)
    SitemapGen->>Cache: check cached locales/counts/products/categories
    alt cache hit
        Cache-->>SitemapGen: return cached data
    else cache miss
        SitemapGen->>API: fetch country/locale pairs
        API-->>SitemapGen: return locales
        SitemapGen->>API: fetch counts (products, categories)
        API-->>SitemapGen: return counts
        SitemapGen->>API: fetch products (with media) & categories per locale
        API-->>SitemapGen: return product/category data
        SitemapGen->>Cache: store promises/results
    end
    SitemapGen->>SitemapGen: build URLs (static, products with images/lastModified, categories) and chunk slice
    SitemapGen-->>Output: return MetadataRoute.Sitemap for id
    Build->>Robots: request robots.txt
    Robots->>SitemapGen: call generateSitemaps()
    SitemapGen-->>Robots: return sitemap ids
    Robots->>Output: emit robots.txt (host, sitemap list, allow/disallow rules)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
The @spree/next package was dropped in main (PR #96), so sitemap.ts needs to use the internal getClient() from @/lib/spree instead. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
79-84: ⚠️ Potential issue | 🟡 Minor — Document the sitemap env vars more completely.

`NEXT_PUBLIC_SITE_URL` is still missing from this table, and `NEXT_PUBLIC_DEFAULT_COUNTRY`/`NEXT_PUBLIC_DEFAULT_LOCALE` are also used by `src/app/sitemap.ts` for build-time API calls, not just initial redirects. Right now the README understates the configuration needed to get non-empty sitemap/robots output.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 79 - 84, The README's "Optional variables" table omits NEXT_PUBLIC_SITE_URL and mislabels the purpose of NEXT_PUBLIC_DEFAULT_COUNTRY and NEXT_PUBLIC_DEFAULT_LOCALE; update the table to include NEXT_PUBLIC_SITE_URL and expand descriptions to state that NEXT_PUBLIC_DEFAULT_COUNTRY and NEXT_PUBLIC_DEFAULT_LOCALE are used at build time by src/app/sitemap.ts (not just for initial redirects) so the sitemap/robots output is populated, and list sensible defaults (e.g., NEXT_PUBLIC_SITE_URL required or empty, NEXT_PUBLIC_DEFAULT_COUNTRY=`us`, NEXT_PUBLIC_DEFAULT_LOCALE=`en`) so users know which env vars are needed to generate non-empty sitemap/robots.
🧹 Nitpick comments (2)
src/app/sitemap.ts (1)
24-29: Add explicit return types to these helpers.
`getDefaultLocaleOptions()` and `generateSitemaps()` both rely on inference right now. Please declare their return shapes explicitly so the route contract stays strict. As per coding guidelines, "Use strict TypeScript type checking. Always define explicit return types for functions, use 'satisfies' for type checking object literals, and avoid 'any' (use 'unknown' instead)."
Also applies to: 79-102
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/sitemap.ts` around lines 24 - 29, Add explicit TypeScript return types for getDefaultLocaleOptions and generateSitemaps: define a small type or interface (e.g., LocaleOptions with fields locale:string and country:string) and annotate getDefaultLocaleOptions(): LocaleOptions; for generateSitemaps declare its precise return type (e.g., SitemapEntry[] or a named SitemapResult type) and use that type in its signature; ensure any object literals inside use the satisfies operator for compile-time shape checking and replace any use of any with unknown or the proper types so the route contract is strictly typed (update exported types/interfaces as needed and adjust callers to the new signatures).

src/app/robots.ts (1)

3-3: Use the `@/` alias for this import.

The repo guidelines prefer absolute imports in `*.ts` files, so `@/app/sitemap` would be more consistent here. As per coding guidelines, "Use absolute imports with `@/` alias prefix instead of relative imports."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/robots.ts` at line 3, Replace the relative import in src/app/robots.ts with the repository's absolute alias: change the import that references generateSitemaps from "./sitemap" to use the "@/app/sitemap" alias so the symbol generateSitemaps is imported via the `@/` path; update any export/default references if needed to match the module export in sitemap.ts and run type checks to ensure the new path resolves.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/robots.ts`:
- Around line 18-22: The disallow list in src/app/robots.ts currently blocks
nested checkout paths but not the checkout root; update the disallow array (the
disallow variable/array in this file) to include the root checkout pattern
"/*/checkout" alongside the existing "/*/checkout/*" entry so that the route
"/[country]/[locale]/checkout" is also disallowed from crawling.
In `@src/app/sitemap.ts`:
- Around line 109-120: The baseUrl validation currently only checks for an empty
string; instead, parse the candidate URL (result of getStoreUrl() or
process.env.NEXT_PUBLIC_SITE_URL assigned to baseUrl) using new URL(...) and
verify its protocol is either "http:" or "https:" before proceeding; if parsing
fails or protocol is not http/https, log an error similar to the existing
message and return [] so sitemap generation (and robots.ts) never emits
malformed <loc> entries. Ensure this validation replaces the current if
(!baseUrl) check and that any subsequent code using baseUrl uses the validated
URL string.
- Around line 149-194: The sitemap currently stamps all entries with `now`/`new
Date()` (see `entries` pushes for basePath, `${basePath}/products`,
`${basePath}/c`, the loop over `allProducts` using
`product.slug`/`product.media`, and the loop over `nonRootCategories` using
`category.permalink`), which makes every URL appear freshly modified; instead,
for product entries use a stable product timestamp field (e.g.
`product.updated_at` or `product.updatedAt`) when present and omit
`lastModified` if the product has no stable timestamp, for category entries use
`category.updated_at`/`updatedAt` when available and otherwise omit
`lastModified`, and for static pages (basePath, `/products`, `/c`) remove
`lastModified` (or use a real site-level publish timestamp if you have one) so
the sitemap only contains real, stable modification dates.
- Around line 17-18: The MAX_PAGES constant is causing product/category fetchers
to stop early while generateSitemaps() still advertises chunks based on
meta.count; fix by making pagination consistent: either remove/raise MAX_PAGES
and paginate all pages derived from meta.count, or keep MAX_PAGES but use it to
cap the advertised chunk count. Concretely, update
fetchAllProducts()/fetchAllCategories() to accept page ranges (or iterate per
sitemap chunk) so each sitemap chunk fetches only its pages, or compute
advertisedChunks = Math.min(Math.ceil(meta.count / PAGE_SIZE), MAX_PAGES) and
use that when building entries so advertised sitemap files never exceed the
actual fetched pages; ensure references to MAX_PAGES, generateSitemaps(),
fetchAllProducts(), fetchAllCategories(), meta.count, and entries are updated
accordingly.
- Line 1: The import at the top of sitemap.ts references a non-existent package
"@spree/next"; update the import to the installed package (e.g., import {
getClient } from "@spree/sdk") so getClient resolves and TS2307 is fixed—locate
the getClient import statement in sitemap.ts and replace the module specifier
with the correct "@spree/sdk" entrypoint used across the repo.
---
Outside diff comments:
In `@README.md`:
- Around line 79-84: The README's "Optional variables" table omits
NEXT_PUBLIC_SITE_URL and mislabels the purpose of NEXT_PUBLIC_DEFAULT_COUNTRY
and NEXT_PUBLIC_DEFAULT_LOCALE; update the table to include NEXT_PUBLIC_SITE_URL
and expand descriptions to state that NEXT_PUBLIC_DEFAULT_COUNTRY and
NEXT_PUBLIC_DEFAULT_LOCALE are used at build time by src/app/sitemap.ts (not
just for initial redirects) so the sitemap/robots output is populated, and list
sensible defaults (e.g., NEXT_PUBLIC_SITE_URL required or empty,
NEXT_PUBLIC_DEFAULT_COUNTRY=`us`, NEXT_PUBLIC_DEFAULT_LOCALE=`en`) so users know
which env vars are needed to generate non-empty sitemap/robots.
---
Nitpick comments:
In `@src/app/robots.ts`:
- Line 3: Replace the relative import in src/app/robots.ts with the repository's
absolute alias: change the import that references generateSitemaps from
"./sitemap" to use the "@/app/sitemap" alias so the symbol generateSitemaps is
imported via the `@/` path; update any export/default references if needed to
match the module export in sitemap.ts and run type checks to ensure the new path
resolves.
In `@src/app/sitemap.ts`:
- Around line 24-29: Add explicit TypeScript return types for
getDefaultLocaleOptions and generateSitemaps: define a small type or interface
(e.g., LocaleOptions with fields locale:string and country:string) and annotate
getDefaultLocaleOptions(): LocaleOptions; for generateSitemaps declare its
precise return type (e.g., SitemapEntry[] or a named SitemapResult type) and use
that type in its signature; ensure any object literals inside use the satisfies
operator for compile-time shape checking and replace any use of any with unknown
or the proper types so the route contract is strictly typed (update exported
types/interfaces as needed and adjust callers to the new signatures).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b1b0d34b-8bcf-482c-ae48-a6c93318e1db
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (4)
- .env.example
- README.md
- src/app/robots.ts
- src/app/sitemap.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/sitemap.ts`:
- Around line 177-183: The sitemap image mapping currently only uses
img.original_url and drops images when original_url is null; update the mapping
in the product media block to use a fallback (e.g. img.original_url ||
img.large_url) consistent with src/lib/seo.ts:50 so that images with a large_url
are included; keep the existing filter type guard (.filter((url: string | null):
url is string => url != null)) to ensure the resulting images array is string[]
and unchanged behavior when both URLs are missing.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1def60b9-e428-41f1-a074-6271aad33421
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (1)
src/app/sitemap.ts
- Add /*/checkout to robots.txt disallow list
- Validate baseUrl with URL parsing and protocol check
- Use real updated_at timestamps for lastModified instead of new Date()
- Cap advertised sitemap chunks to match MAX_PAGES fetch limit
- Extract ITEMS_PER_PAGE constant for pagination
- Add NEXT_PUBLIC_SITE_URL to README env var table
- Add LocaleOptions type and return type annotations

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
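The chunk cap can be sketched with the formula the review suggested (`Math.min(Math.ceil(meta.count / PAGE_SIZE), MAX_PAGES)`); the constant values below are assumptions, not the PR's exact numbers.

```typescript
const PAGE_SIZE = 100;
const MAX_PAGES = 1000;

// Never advertise more sitemap files than the pagination loop will fetch.
function advertisedChunks(totalCount: number): number {
  return Math.min(Math.ceil(totalCount / PAGE_SIZE), MAX_PAGES);
}

console.log(advertisedChunks(250)); // 3
console.log(advertisedChunks(500_000)); // 1000 — capped, not 5000
```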
Use img.original_url || img.large_url so products without original_url but with large_url still appear in image sitemaps. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
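A small sketch of the fallback: prefer `original_url`, fall back to `large_url`, and drop entries where both are missing. The field names come from this PR; the surrounding SDK media shape is assumed.

```typescript
interface MediaUrls {
  original_url: string | null;
  large_url: string | null;
}

function sitemapImages(media: MediaUrls[]): string[] {
  return media
    .map((img) => img.original_url || img.large_url)
    .filter((url): url is string => url != null);
}

console.log(
  sitemapImages([
    { original_url: "/a-original.jpg", large_url: "/a-large.jpg" },
    { original_url: null, large_url: "/b-large.jpg" },
    { original_url: null, large_url: null },
  ]),
); // ["/a-original.jpg", "/b-large.jpg"]
```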
Remove STORE_URL/getStoreUrl() dependency from sitemap.ts and robots.ts so that preview deployments on Vercel (where only NEXT_PUBLIC_* env vars are typically configured) can generate sitemaps without extra setup. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/app/sitemap.ts (1)
93-93: Add an explicit return type to `generateSitemaps()`.

This is the only function in the file whose public contract is inferred instead of declared.
♻️ Suggested annotation

```diff
-export async function generateSitemaps() {
+export async function generateSitemaps(): Promise<Array<{ id: number }>> {
```

As per coding guidelines, "Use strict TypeScript type checking. Always define explicit return types for functions, use 'satisfies' for type checking object literals, and avoid 'any' (use 'unknown' instead)."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/sitemap.ts` at line 93, The function generateSitemaps() currently has an inferred return type—add an explicit TypeScript return annotation on the function signature (e.g., Promise<void> or the concrete Promise<...>/type that matches what the function actually returns) to satisfy strict type checking; update the export async function generateSitemaps() signature to include the correct explicit return type and ensure any returned values conform to that type.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/sitemap.ts`:
- Around line 163-214: The sitemap loop is reusing allProducts and
nonRootCategories fetched with getDefaultLocaleOptions(), causing localized
slugs/permalinks to be wrong; update the code so inside the for (const {
country, locale } of countryLocales) loop you call the locale-aware fetch
(instead of using allProducts/nonRootCategories) — e.g., invoke the product and
category fetch functions with the current locale (replace uses of allProducts
and nonRootCategories when building product/category entries) or build a
localized lookup map keyed by locale before the loop and use that (ensure you
pass locale into the API calls that currently live in
getDefaultLocaleOptions()).
- Around line 113-116: Add a top-level export to force request-time rendering so
the sitemap metadata route can regenerate missing catalog data: in
src/app/sitemap.ts export const dynamic = 'force-dynamic' (placed at module
top-level) so the catch/fallback path that returns [{ id: 0 }] will be resolved
at request time; alternatively, replace the fallback strategy with a
revalidation approach, but the immediate fix is adding the dynamic export to
ensure getServer-like behavior for the sitemap generation.
---
Nitpick comments:
In `@src/app/sitemap.ts`:
- Line 93: The function generateSitemaps() currently has an inferred return
type—add an explicit TypeScript return annotation on the function signature
(e.g., Promise<void> or the concrete Promise<...>/type that matches what the
function actually returns) to satisfy strict type checking; update the export
async function generateSitemaps() signature to include the correct explicit
return type and ensure any returned values conform to that type.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d51e80ab-dc40-4fd3-9a97-c609be8cba5c
📒 Files selected for processing (4)
- .env.example
- README.md
- src/app/robots.ts
- src/app/sitemap.ts
✅ Files skipped from review due to trivial changes (2)
- .env.example
- README.md
🚧 Files skipped from review as they are similar to previous changes (1)
- src/app/robots.ts
Products and categories were fetched once with the default locale, causing non-default locales to get wrong slugs/permalinks in the sitemap. Spree returns locale-dependent slugs, so the sitemap now fetches per locale using a Map-based cache keyed by locale:country. Also adds `export const dynamic = 'force-dynamic'` so the sitemap regenerates at request time, and an explicit return type on generateSitemaps(). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/sitemap.ts`:
- Around line 147-162: Robots generation currently constructs sitemap entries
without validating NEXT_PUBLIC_SITE_URL; replicate the validation used in
sitemap.ts: read candidate = (process.env.NEXT_PUBLIC_SITE_URL ||
"").replace(/\/$/, ""), attempt new URL(candidate), ensure protocol is "http:"
or "https:", then build baseUrl as parsed.origin +
parsed.pathname.replace(/\/$/, ""); if parsing/protocol check fails, return an
empty sitemap array from the robots.ts sitemap-building function (the same place
that currently creates entries like "/sitemap/0.xml") so robots.txt only
contains absolute, valid sitemap URLs consistent with sitemap.ts behavior.
```typescript
const candidate = (process.env.NEXT_PUBLIC_SITE_URL || "").replace(/\/$/, "");

let baseUrl: string;
try {
  const parsed = new URL(candidate);
  if (parsed.protocol !== "http:" && parsed.protocol !== "https:") {
    throw new Error(`Unsupported protocol: ${parsed.protocol}`);
  }
  baseUrl = parsed.origin + parsed.pathname.replace(/\/$/, "");
} catch {
  console.error(
    "Sitemap generation skipped: NEXT_PUBLIC_SITE_URL is missing or invalid. " +
      "Sitemaps require absolute http(s) URLs.",
  );
  return [];
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check robots.ts for URL validation
echo "=== robots.ts URL handling ==="
cat src/app/robots.ts | head -30
```

Repository: spree/storefront
Length of output: 792
🏁 Script executed:

```bash
#!/bin/bash
# Find and examine the generateSitemaps function
echo "=== Searching for generateSitemaps function ==="
rg -A 30 "export.*generateSitemaps" src/app/sitemap.ts | head -50
```

Repository: spree/storefront
Length of output: 1330
Apply consistent URL validation in robots.ts to avoid invalid sitemap URLs in robots.txt.
While sitemap.ts validates NEXT_PUBLIC_SITE_URL and returns an empty array when invalid, robots.ts lacks this validation. When the environment variable is missing or invalid, robots.ts constructs relative sitemap URLs (e.g., /sitemap/0.xml) instead of absolute URLs, resulting in malformed entries in robots.txt. The actual sitemap requests would be empty, but robots.txt would still advertise these invalid URLs.
Add equivalent URL validation in robots.ts to return an empty sitemap array when NEXT_PUBLIC_SITE_URL is invalid or missing, matching the behavior of sitemap.ts.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/app/sitemap.ts` around lines 147 - 162, Robots generation currently
constructs sitemap entries without validating NEXT_PUBLIC_SITE_URL; replicate
the validation used in sitemap.ts: read candidate =
(process.env.NEXT_PUBLIC_SITE_URL || "").replace(/\/$/, ""), attempt new
URL(candidate), ensure protocol is "http:" or "https:", then build baseUrl as
parsed.origin + parsed.pathname.replace(/\/$/, ""); if parsing/protocol check
fails, return an empty sitemap array from the robots.ts sitemap-building
function (the same place that currently creates entries like "/sitemap/0.xml")
so robots.txt only contains absolute, valid sitemap URLs consistent with
sitemap.ts behavior.
Summary
- `sitemap.xml` generation with products, categories, and static pages for all store locales (auto-discovered from Spree Markets API)
- `robots.txt` with proper disallow rules (account, cart, checkout, query params) and automatic sitemap linking

Test plan
- `npm run check` — no lint/format errors
- `npx tsc --noEmit` — no TypeScript errors
- `npm run build` — builds successfully
- `/robots.txt` returns correct rules and sitemap links
- `/sitemap/0.xml` returns valid XML with products, categories, static pages

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation