Optimize DB for large data sets #981

Closed
trustmaster opened this issue Jun 29, 2012 · 2 comments

@trustmaster
Member

Things to take care of:

  • user_auth may explode on large sites;
  • cache_value may explode on large sites, specifically for these vars (a size-check sketch follows this list):
    • cot_cfg (high risk)
    • cot_extrafields (low risk)
    • cot_groups (low risk)
    • cot_plugins (high risk)
    • forumsall (rss module, high risk)
    • structure (high risk)
  • indexes in the cot_users table (and maybe some others) should be reconsidered.
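
A quick way to see which of these cached variables actually grows out of control is to check the byte size of each serialized entry in the cache table. This is only a sketch: the table and column names below (cot_cache, c_name, c_value) are assumptions about the default schema and should be adjusted to the real install.

```sql
-- List the 20 largest cached variables by serialized size.
-- Table/column names are assumed defaults; adjust to your schema.
SELECT c_name, LENGTH(c_value) AS size_bytes
FROM cot_cache
ORDER BY size_bytes DESC
LIMIT 20;
```

The entries flagged as high risk above (cot_cfg, cot_plugins, forumsall, structure) would be the ones to watch in that output.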
@macik
Member

macik commented Jun 30, 2012

Why would it «explode»? And what kind of «large sites» are we considering (100², 100³ users/records)?

@Kilandor
Member

From my current work on a site with over 2 million users: there are a lot of bottlenecks caused by missing indexes, which I had to add myself so the site could function.

Table cot_users:
user_password, user_regdate, user_name, user_lostpass, user_maingrp, user_sid, user_country, user_email
(some of these may already be indexed by default; I didn't look up which ones)

These are all the fields I had to index to prevent queries from taking ~30 seconds (a DDL sketch follows at the end of this comment).
I am quite sure there are numerous other SQL bottlenecks that would show up in other areas, since large numbers of pages/comments/forums are always possible on sites.

In my case none of those are currently being used, so I have not had to fix any issues with them.
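
For reference, here is a minimal sketch of the kind of index additions described above. It assumes the default cot_ table prefix and the column names listed in this comment; as noted, some of these columns may already be indexed in a stock install, so check the existing keys first and drop any duplicate clauses. The index names are purely illustrative.

```sql
-- Inspect the keys that already exist before adding anything:
--   SHOW INDEX FROM cot_users;

-- Add single-column indexes for the fields listed above.
ALTER TABLE cot_users
  ADD INDEX i_user_name     (user_name),
  ADD INDEX i_user_email    (user_email),
  ADD INDEX i_user_regdate  (user_regdate),
  ADD INDEX i_user_lostpass (user_lostpass),
  ADD INDEX i_user_maingrp  (user_maingrp),
  ADD INDEX i_user_sid      (user_sid),
  ADD INDEX i_user_country  (user_country),
  -- user_password only helps queries that actually filter on it:
  ADD INDEX i_user_password (user_password);
```

On older MySQL versions an ALTER TABLE may rebuild the whole table, so batching all the additions into one statement keeps that to a single pass, which matters on a table with millions of rows.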
