
Getting the count of objects is very slow in large PostgreSQL databases #291

Closed
jamadden opened this issue Jul 28, 2019 · 0 comments · Fixed by #292
Comments

@jamadden (Member)

We use the generic query `SELECT COUNT(*) FROM <table>`, but that performs a full table scan, which can be very slow. In a history-free database, `<table>` is `object_state`, which contains large rows, and this query can take many minutes to run (on my system, for 400,000 rows with a slow-ish disk, it takes 342.794s (5 minutes)).

This is probably not a bottleneck for most applications, but it does hinder certain `zodbshootout` operations (specifically, when `--min-objects` is used). Can we do better?
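One common PostgreSQL workaround (not necessarily what #292 implements) is to read the planner's row estimate from the `pg_class` catalog instead of scanning the table. A minimal sketch, assuming a DB-API-style connection (e.g. psycopg2) and the `object_state` table from the issue; the function name is hypothetical:

```python
# Hypothetical sketch: estimate the row count from planner statistics
# in pg_class rather than running a full table scan.
# reltuples is maintained by VACUUM/ANALYZE, so the value is
# approximate, but reading it is effectively O(1).
ESTIMATE_SQL = """
SELECT reltuples::bigint
FROM pg_class
WHERE oid = 'object_state'::regclass
"""

def estimated_object_count(conn):
    # conn is any DB-API connection whose cursor supports the
    # context-manager protocol (psycopg2 connections do).
    with conn.cursor() as cur:
        cur.execute(ESTIMATE_SQL)
        return cur.fetchone()[0]
```

The trade-off is accuracy: the estimate can drift between `ANALYZE` runs, which may or may not matter for `--min-objects`-style checks that only need a rough count.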
