Restore is slow #266
hey @sirupsen! 👋 It's been a long time indeed! Still at Shopify? Restore performance is something that needs improvement, but I'm surprised it's 10s for such a small workload.

You can change the retention period, which controls how much WAL is kept and therefore how much has to be replayed on restore. If you do want to retain data longer, then you could set a more frequent snapshot interval instead, so restores start from a recent snapshot. Another option you could try is setting both together so the WAL between snapshots stays small.

Finally, there are plans for maintaining a hot backup so you can restore instantly, but that's still a few months away. I'm also working on a version that works in a serverless environment, but that's probably going to be ready later in the year.
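As a concrete sketch of that tuning (the path, bucket name, and interval values below are hypothetical, not taken from this thread), a `litestream.yml` replica can set `snapshot-interval` and `retention` so a restore replays at most a short window of WAL:

```yaml
dbs:
  - path: /var/lib/app.db          # hypothetical database path
    replicas:
      - url: s3://mybkt/app.db     # hypothetical replica URL
        # Snapshot every hour so a restore replays at most about
        # one hour of WAL (illustrative value).
        snapshot-interval: 1h
        # Keep snapshots and WAL for a day before pruning
        # (illustrative value).
        retention: 24h
```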
I stopped working at Shopify mid last year, doing infra consulting now :) Thank youuuu! FWIW I'm using Cloud Run, so for me, it's already working in serverless 😉 I'm stoked for hot standbys, and maybe one day the ability to 'merge' instances would be cool too.
Yes! That'd be awesome. Thanks, Simon.
Cool. I saw some folks talking about getting Litestream running on Cloud Run but I haven't had a chance to give it a go yet.

The idea of "serverless SQLite" that I'm thinking of is paging in data on-demand in a way that's transactionally safe. That way it'd give you zero startup time but also low-latency queries once data is hot on a serverless instance. I'm still toying around with the idea but I think it might have some legs.
@sirupsen I saw your comment in #223 (comment) but I'm moving the discussion back over to this ticket.
Can you grab a stack trace from the process while it's stuck?
@benbjohnson Sorry, I might have misused the word 'stuck'—it doesn't get stuck in a loop inside Litestream, just stuck in a loop trying to boot the container by restoring Litestream.
This is the stacktrace I get from sending it a signal. I have nothing sensitive in this database, so I've DM'ed you a zip of it.
@sirupsen I think the issue might be that GCR doesn't enforce a single instance at a time and there could be overlap, especially when deploying, that's causing issues. I think GCR isn't going to work well until I can get better support for serverless in Litestream.

I'm not sure if you're committed to GCR, but another good alternative is fly.io. If you attach a persistent disk on their instances then they enforce a single instance at a time.
Fair enough... I will consider migrating to fly.io. 👍🏻 How do I fix this error, though, even when nothing is running on GCP? I'd like to recover my dear database.
Unfortunately, with the missing initial WAL segment the best you can do is recover from the last snapshot:

```sh
# Copy out the last snapshot.
cp generations/f6d6d1e96d38dafb/snapshots/00000093.snapshot.lz4 db.lz4

# Uncompress it (writes the decompressed database to "db").
lz4 -d db.lz4

# Verify the database integrity.
sqlite3 db
sqlite> PRAGMA integrity_check;
ok
```
I don't deserve you, thank you :)
Thanks for going on this debugging journey with me, @sirupsen! The doc updates are incredibly helpful. 🎉
@hifi It's very fast for me these days despite the database being far larger. Probably with improper snapshot intervals and retention settings you might still be able to make it slow!
Can this be closed?
Hey @benbjohnson, long time no see!! Thank you for working on Litestream! 🙏🏻
I, too, love SQLite. I wanted to track a few events on my website, e.g. what people search for, and saw this as an opportunity to use Litestream. I loved the idea of tracking events in SQLite and just doing analysis on a local copy.
However, even though my db is only ~100kb on disk and ~1000 rows over a few days, it takes ~10 seconds to restore with `litestream restore`, and this is going up fast.

Is there a plan for a `litestream compress` or similar to avoid replaying the WAL from early on, similar to what databases do when the WAL gets big enough? Or am I doing something wrong? Unfortunately this will be a bit of a deal-breaker for me using this in production :(