
Figure out Rate Limiting for Datasets on MySQL #55481

Closed
snickell opened this issue Dec 15, 2023 · 4 comments · Fixed by #58479
Labels: unfirebase https://github.com/orgs/code-dot-org/projects/4


snickell commented Dec 15, 2023

Rate Limiting

In Firebase we maintain a rate limit per project/channel: 300 writes per 15 seconds and 600 writes per 60 seconds.

  1. Where does this data rate come from? It's pretty high. Is this rate required by the microbit projects?
  2. On MySQL we see inserts taking 0.15s each, which, if we're not inserting in parallel, gives us an insert rate of about 6 inserts/s: that's 90 writes per 15 seconds, or 360 per 60 seconds. Both are below our existing limits.
  3. Should we be batching these together?
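On the batching question, a minimal sketch of the idea: submit many rows in one statement and one transaction instead of one round trip per row, so the per-insert cost is amortized across the batch. This uses sqlite3 purely so the example runs standalone; the table shape loosely follows the `unfirebase.records` schema discussed later in this thread, and none of the names here are from the real codebase.

```python
import sqlite3

# Standalone sketch (sqlite3 standing in for MySQL) of batched inserts.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE records (
    channel_id TEXT, table_name TEXT, record_id INTEGER, json TEXT)""")

rows = [("shared", "words", i, "{}") for i in range(1, 101)]
with conn:
    # One statement, one transaction: amortizes the ~0.15 s per-insert
    # cost mentioned above across the whole batch.
    conn.executemany("INSERT INTO records VALUES (?, ?, ?, ?)", rows)

(count,) = conn.execute("SELECT COUNT(*) FROM records").fetchone()
```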
```json
{
  "counters": {
    "limits": {
      "15": {
        "lastResetTime": 1654619184297,
        "writeCount": 175
      },
      "60": {
        "lastResetTime": 1654524726608,
        "writeCount": 188
      }
    }
  },
  "serverTime": 1661532798036
}
```
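A minimal sketch (an assumption about the scheme, not Firebase's actual implementation) of the fixed-window counters shown in the JSON above: each window stores a `lastResetTime` and a `writeCount`, and the count resets once the window elapses. The limits are the 300/15s and 600/60s quoted in this issue; times here are in seconds rather than the JSON's milliseconds.

```python
import time

LIMITS = {15: 300, 60: 600}  # window seconds -> max writes

def allow_write(counters, now=None):
    """Return True and bump every window's counter if the write fits in all windows."""
    now = time.time() if now is None else now
    for window, limit in LIMITS.items():
        c = counters.setdefault(window, {"lastResetTime": now, "writeCount": 0})
        if now - c["lastResetTime"] >= window:
            # Window elapsed: start a fresh counting period.
            c["lastResetTime"] = now
            c["writeCount"] = 0
        if c["writeCount"] >= limit:
            return False  # over the limit for this window: reject the write
    for window in LIMITS:
        counters[window]["writeCount"] += 1
    return True
```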

snickell commented Jan 2, 2024

We're using encrypted cookies for session storage, so it should be fast/performant to keep rate counters per-user if that's the approach we want to follow.


cnbrenci commented Jan 3, 2024

Firebase does async calls over a websocket to achieve its throughput; it doesn't have to do a round trip per transaction. We're investigating how to auto-increment the record_id part of the composite primary key without locking the entire table. Here's a working guess at a query to use:

```sql
BEGIN;
  SELECT @id := IFNULL(MAX(record_id), 0) + 1 FROM unfirebase.records
    WHERE channel_id='shared' AND table_name='words' FOR UPDATE;
  INSERT INTO unfirebase.records VALUES ('shared', 'words', @id, '{}');
COMMIT;

SELECT * FROM unfirebase.records
  WHERE channel_id='shared' AND table_name='words' AND record_id > 4983;
```

TODO: update the query when we figure out a better one.

```sql
# set profiling=1;
BEGIN;
  SELECT MAX(record_id) FROM unfirebase.records
    WHERE channel_id='shared' AND table_name='Rabbits' FOR UPDATE;
  SELECT @id := IFNULL(MAX(record_id), 0) + 1 FROM unfirebase.records
    WHERE channel_id='shared' AND table_name='Rabbits';
  DO SLEEP(10);
  INSERT INTO unfirebase.records VALUES ('shared', 'Rabbits', @id, '{}');
COMMIT;

# show profiles;
# SELECT * FROM unfirebase.records WHERE channel_id='shared' AND table_name='Rabbits';
```
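The manual auto-increment pattern above (SELECT MAX + 1, then INSERT, inside one transaction) can be shown as a runnable sketch. This uses sqlite3 so it runs standalone: sqlite serializes writers itself, whereas in MySQL the SELECT takes `FOR UPDATE` to lock just the matching `(channel_id, table_name)` rows. The helper name is illustrative, not from the codebase.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE records (
    channel_id TEXT, table_name TEXT, record_id INTEGER, json TEXT,
    PRIMARY KEY (channel_id, table_name, record_id))""")

def insert_record(conn, channel_id, table_name, json_value):
    """Insert a row, manually auto-incrementing record_id per (channel, table)."""
    with conn:  # BEGIN ... COMMIT; rolls back on error
        (next_id,) = conn.execute(
            "SELECT IFNULL(MAX(record_id), 0) + 1 FROM records "
            "WHERE channel_id=? AND table_name=?",
            (channel_id, table_name)).fetchone()
        conn.execute("INSERT INTO records VALUES (?, ?, ?, ?)",
                     (channel_id, table_name, next_id, json_value))
    return next_id
```

Note the counter is scoped to the composite key: each (channel, table) pair gets its own record_id sequence.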


snickell commented Jan 3, 2024

See #55554 (comment) for a follow-up to the above (implementing manual auto-increment).

@snickell snickell added the unfirebase https://github.com/orgs/code-dot-org/projects/4 label Feb 3, 2024
@cnbrenci

Current impl is client-side rate limiting.

We want to rate limit on blocks only (not in the UI). commands.js seems like the right place to put the rate limiting. Data blocks are re-implemented in separate commands.js files for Applab and Gamelab, and the implementations differ slightly. The right way (we think) would probably be to refactor the blocks into a shared spot used by both labs, but that can be broken out into a separate task/PR.
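A language-agnostic sketch of what a block-level limiter could look like (Python here for brevity; the real code would live in commands.js): a sliding window over recent write timestamps, checked before each data-block command issues a write. The class name and window structure are illustrative assumptions, not the actual implementation.

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter; limits are (window_seconds, max_writes) pairs."""

    def __init__(self, limits=((15, 300), (60, 600))):
        self.limits = limits
        self.timestamps = deque()  # times of recent allowed writes

    def try_write(self, now=None):
        now = time.time() if now is None else now
        # Drop timestamps older than the longest window; they can't matter.
        longest = max(w for w, _ in self.limits)
        while self.timestamps and now - self.timestamps[0] >= longest:
            self.timestamps.popleft()
        for window, limit in self.limits:
            recent = sum(1 for t in self.timestamps if now - t < window)
            if recent >= limit:
                return False  # reject: this window is full
        self.timestamps.append(now)
        return True
```

Unlike the fixed-window counters Firebase's JSON suggests, a sliding window avoids the burst at window boundaries; either approach would satisfy the limits quoted in this issue.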

snickell added a commit that referenced this issue May 9, 2024
* Implement rate limiting for datablock storage at the block level.
* Fixes #55481

---------
Co-authored-by: Cassi Brenci <cassi.brenci@code.org>