Conversation


@miparnisari miparnisari commented Sep 30, 2025

Description

  • Move dispatch to Operations and rename it to Performance
  • Add a section to this doc for Materialize
  • Add another section for the namespace cache

Testing

Please review for correctness and completeness.

References

Closes #307



@tstirrat15 tstirrat15 left a comment


LGTM!

```
no-hard-tabs: false
no-inline-html: false
no-multiple-blanks: false
max-one-sentence-per-line: false
```

🙏


Each SpiceDB node maintains an in-memory cache of the permissions queries it has resolved in the past. When one node encounters a new permissions query, the answer may already be cached on another node, so SpiceDB dispatches the request to that node to take advantage of the shared cache.

For more details on how dispatching works, see the [Consistent Hash Load Balancing for gRPC] article.
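The dispatch idea above can be sketched with a toy consistent-hash ring. This is illustrative only, not SpiceDB's actual implementation; names like `HashRing` and the query-key format are hypothetical:

```python
import hashlib
from bisect import bisect

class HashRing:
    """Toy consistent-hash ring: each node is placed at several points on
    the ring, and a request key routes to the first node clockwise from
    its hash. The same key always lands on the same node, so that node's
    cache is reused cluster-wide."""

    def __init__(self, nodes, replicas=100):
        self.ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes
            for i in range(replicas)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.sha256(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # First ring point at or after the key's hash (wrapping around).
        idx = bisect(self.keys, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
# The same permissions query always dispatches to the same node.
assert ring.node_for("check:doc:1#view@user:alice") == \
       ring.node_for("check:doc:1#view@user:alice")
```

A useful property of this scheme is that removing a node only remaps the keys that were on it; queries cached on the surviving nodes keep routing to the same place.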
@miparnisari miparnisari commented Oct 1, 2025

FYI I removed the architecture drawing since I felt it didn't add much value and looked oriented toward maintainers, not users.

I'm not entirely happy with the level of detail we've provided here, but I also don't want to overwhelm people. I still don't know enough about dispatching to feel comfortable writing more. Are there any implementation details that a user of SpiceDB should know about?

Comment on lines +74 to +75
1. **Just-In-Time (JIT) Caching**: The default mode, which loads definitions on demand. It uses less memory but incurs a cold-start penalty on first access to each definition.
2. **Watching Cache**: An experimental mode that proactively maintains an always-up-to-date cache. It uses more memory but avoids cold-start penalties, and is recommended when schema changes are frequent.
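The two modes above can be contrasted with a small sketch. This is a conceptual model, not SpiceDB code; `load_definition` and the class names are hypothetical stand-ins:

```python
def load_definition(name):
    # Stand-in for an expensive datastore read.
    return f"definition({name})"

class JITCache:
    """Loads each definition on demand; the first access to each
    definition pays the cold-start cost of load_definition."""
    def __init__(self):
        self.cache = {}

    def get(self, name):
        if name not in self.cache:  # cold-start penalty happens here
            self.cache[name] = load_definition(name)
        return self.cache[name]

class WatchingCache:
    """Eagerly loads all definitions up front and applies schema
    changes as they arrive, so reads are always warm."""
    def __init__(self, all_names):
        self.cache = {n: load_definition(n) for n in all_names}

    def on_schema_change(self, name):
        self.cache[name] = load_definition(name)  # keep cache current

    def get(self, name):
        return self.cache[name]  # never a cold start
```

The trade-off shows up in the constructors: the watching variant pays memory and startup cost for everything, while the JIT variant defers that cost to first use.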
@josephschorr to confirm if this is accurate

Member

It is, but it will all be going away soon-ish once the new cache lands.

```
@@ -0,0 +1,90 @@
import { Callout } from 'nextra/components'

# Improving Performance
```
@miparnisari miparnisari commented Oct 1, 2025

Note that this doc now has 3 sections: dispatch, Materialize, and the namespace cache.

@josephschorr any other sections worth adding? We could (later) add a section on schema considerations for better performance.

@miparnisari miparnisari marked this pull request as ready for review October 1, 2025 19:35
@miparnisari miparnisari merged commit 970ceb4 into main Oct 6, 2025
10 checks passed
@miparnisari miparnisari deleted the docs-on-performance branch October 6, 2025 19:53
@github-actions github-actions bot locked and limited conversation to collaborators Oct 6, 2025

Development

Successfully merging this pull request may close these issues.

Document the watching schema cache somewhere

4 participants