Design backend for Datablock Storage #55554
Performance and async characteristics of the existing Firebase blocks
Performance Characteristics of our MySQL DB
Implementation Ideas

If our target is 0.75s from invocation of `createRecord` to the record existing in the DB, we could easily batch calls, reducing load on our Rails backend. A strawman idea would be to batch into 0.5s or 1s windows. This would meet or exceed the existing latency-until-callback, and would take advantage of the improved performance of batched inserts (no need to re-lock every time; see below).
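The batching idea above can be sketched in plain Ruby. This is an illustrative sketch only; `RecordBatcher`, `create_record`, and `tick` are hypothetical names, not the actual Datablock Storage API. The idea: queue records as they arrive and flush them as one batch once the window elapses.

```ruby
# Hypothetical sketch of batching createRecord calls into fixed time windows
# before issuing one bulk write. Names are assumptions, not the real backend.
class RecordBatcher
  def initialize(window_seconds: 0.5, &flush)
    @window_seconds = window_seconds
    @flush = flush          # called with the accumulated batch
    @pending = []
    @mutex = Mutex.new
    @deadline = nil
  end

  # Queue a record; it is written on the next flush, not immediately.
  def create_record(channel_id, table_name, json)
    @mutex.synchronize do
      @pending << [channel_id, table_name, json]
      @deadline ||= Time.now + @window_seconds
    end
  end

  # Called periodically (e.g. from a timer thread): flush if the window elapsed.
  def tick(now = Time.now)
    batch = nil
    @mutex.synchronize do
      return if @deadline.nil? || now < @deadline
      batch = @pending
      @pending = []
      @deadline = nil
    end
    @flush.call(batch)
  end
end
```

A flush of N queued records then maps naturally onto one multi-row `INSERT`, which is where the batching win over per-record inserts comes from.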
Auto-increment of RecordIDs is Tricky

Our existing record IDs are auto-incremented using a Firebase "counters" table. MySQL InnoDB tables don't support auto-incrementing one column of a composite key relative to the other key columns (see, e.g., https://stackoverflow.com/questions/18120088/defining-composite-key-with-auto-increment-in-mysql).
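What we need, sketched in plain Ruby (names are illustrative, not the real implementation): a counter that is independent per `(channel_id, table_name)` pair, which is exactly the per-group behavior a single InnoDB `AUTO_INCREMENT` column cannot give us.

```ruby
# Illustrative sketch (not production code) of the per-(channel_id, table_name)
# counter behavior the Firebase counters table provides: each Datablock Storage
# table gets its own monotonically increasing record_id.
class CounterTable
  def initialize
    @counters = Hash.new(0)   # (channel_id, table_name) => last issued id
  end

  def next_record_id(channel_id, table_name)
    key = [channel_id, table_name]
    @counters[key] += 1
  end
end
```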
The problem with MAX is that with three concurrent inserts (one starts, a second starts and then hiccups, while a third starts), you can still get a duplicate-ID condition. Since we always increase the ID when inserting, we realized we could use MIN as a sort of "table lock" (not a SQL table lock, but a Datablock Storage table lock) that is more stable. There are probably still some race conditions with this (?), but under the common scenarios we thought of, this looks pretty good for manually incrementing the ID for an insert:

```sql
SET profiling = 1;

BEGIN;
SELECT MIN(record_id) FROM unfirebase.records
  WHERE channel_id = 'shared' AND table_name = 'words' LIMIT 1 FOR UPDATE;
SELECT @id := IFNULL(MAX(record_id), 0) + 1 FROM unfirebase.records
  WHERE channel_id = 'shared' AND table_name = 'words';
INSERT INTO unfirebase.records VALUES ('shared', 'words', @id, '{}');
COMMIT;

SHOW PROFILES;
```

Profiles of the above approach look good

We profiled the above lock-before-insert approach on a Datablock Storage table with 5000 rows (`shared/words`) and saw overall durations of about a millisecond for the combined steps of locking and inserting a row (the duration column is in seconds).
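The lock-then-`MAX(record_id)+1` pattern can also be sketched outside SQL. In this hedged sketch a Ruby `Mutex` per Datablock Storage table stands in for the `SELECT ... FOR UPDATE` lock; class and method names are assumptions, not the real backend. The point it demonstrates: as long as reading MAX and inserting happen under one exclusive per-table lock, concurrent inserters can never allocate the same ID.

```ruby
# Sketch of the lock-then-MAX(record_id)+1 insert pattern. A Mutex per
# (channel_id, table_name) stands in for the SELECT ... FOR UPDATE row lock.
class LockedInserter
  attr_reader :rows

  def initialize
    @rows = []                              # [channel_id, table_name, record_id, json]
    @locks = Hash.new { |h, k| h[k] = Mutex.new }
    @locks_guard = Mutex.new                # protects lazy lock creation
  end

  def insert(channel_id, table_name, json)
    lock = @locks_guard.synchronize { @locks[[channel_id, table_name]] }
    lock.synchronize do
      # Equivalent of SELECT @id := IFNULL(MAX(record_id),0)+1, under the lock:
      max = @rows.select { |c, t, _, _| c == channel_id && t == table_name }
                 .map { |_, _, id, _| id }.max || 0
      @rows << [channel_id, table_name, max + 1, json]
      max + 1
    end
  end
end
```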
Additionally we used
It appears that Firebase RTDB (what we are using) does in fact do optimistic updating: https://firebase.google.com/docs/database/admin/save-data#section-writes-offline There's very little reference to this, probably because it's fundamental to Firebase's design.
Enforcing limits in the backend:

Today, the data blocks do not use optimistic updating; e.g., if you call a
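To illustrate the distinction being drawn (all names here are hypothetical, not the actual blocks API): a non-optimistic client only applies a write to its local view after the server confirms it, so the local table lags by one round-trip, whereas an optimistic client would apply it immediately and reconcile later.

```ruby
# Hypothetical illustration of non-optimistic updates: local state changes
# only once the (simulated) server confirms the write, mirroring how the
# current data blocks wait on the server round-trip.
class FakeServer
  def write(record, &on_confirm)
    @pending = [record, on_confirm]   # confirmation arrives "later"
  end

  def deliver_confirmation!
    record, on_confirm = @pending
    on_confirm.call(record)
  end
end

class NonOptimisticClient
  attr_reader :local_records

  def initialize(server)
    @server = server
    @local_records = []
  end

  # The local view updates only in the confirmation callback.
  def create_record(record)
    @server.write(record) { |r| @local_records << r }
  end
end
```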
We've basically finished this item, since we've already executed against the schema design. Breaking out remaining small issues and closing:
Datablock Storage (#55084) is switching project storage from a browser => Firebase connection to a browser => Rails => MySQL connection. We have tentatively determined a schema (see #55344 (comment)) and measured its performance characteristics (#55189); now we need to design the Rails backend.
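As one possible starting point for that design, here is a hedged, plain-Ruby sketch of the storage interface the Rails controllers might wrap, keyed the same way as the MySQL schema: `(channel_id, table_name, record_id) => json`. The class and method names are assumptions for illustration, not the decided API; an in-memory hash stands in for the `unfirebase.records` table.

```ruby
# Hypothetical sketch of a storage interface for the Rails backend to wrap.
# An in-memory Hash stands in for the MySQL records table; all names are
# assumptions, not the actual Datablock Storage implementation.
class DatablockStorage
  def initialize
    @records = {}   # [channel_id, table_name, record_id] => json string
  end

  def create_record(channel_id, table_name, json)
    ids = @records.keys.select { |c, t, _| c == channel_id && t == table_name }
                  .map { |_, _, id| id }
    record_id = (ids.max || 0) + 1        # mimics the MAX(record_id)+1 scheme
    @records[[channel_id, table_name, record_id]] = json
    record_id
  end

  def read_records(channel_id, table_name)
    @records.select { |(c, t, _), _| c == channel_id && t == table_name }
            .map { |(_, _, id), json| [id, json] }
  end

  def update_record(channel_id, table_name, record_id, json)
    key = [channel_id, table_name, record_id]
    return false unless @records.key?(key)
    @records[key] = json
    true
  end

  def delete_record(channel_id, table_name, record_id)
    !@records.delete([channel_id, table_name, record_id]).nil?
  end
end
```

Each method maps naturally to one REST route on the Rails side (POST/GET/PUT/DELETE on a channel's table), with the real implementation swapping the hash for MySQL queries.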