HTTP interface to query S3 objects and SQLite databases
Project that allows you to query SQLite databases hosted on AWS S3 through a very simple HTTP API. This is useful when you have a static database that you want to make available to your applications but you cannot, or do not want to, host the file yourself.
Some use cases might be:
- Making the database available for JAMStack applications hosted on services like Cloudflare workers, Vercel, Netlify, or others
- Sharing a database with multiple services without the need to replicate it
- Something else... (I am not creative today)
I developed this based on the first use case: I needed to make a small database available for an application I deployed on Vercel. However, I could not ship the database itself (a couple hundred megabytes) with the built application, and I did not want to spin up an expensive SQL database for it.
It is very cheap, and fast enough if you do not need the full power of SQL. Besides, being deployed on AWS Lambda, it costs almost nothing to run. Of course, if your application gets VERY popular, you should consider migrating off this approach.
As you can imagine, this implementation has many drawbacks compared to hosting the file alongside your application. Depending on the queries you need to make, they become SLOW. Really slow. If your database is big, hundreds of megabytes, you MUST create the proper indexes for this approach to be usable. Yet, even with indexes, some queries, such as `LIKE`, force a full table scan, making this approach terribly slow.
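For example, if your application filters on a single column, creating a plain index before uploading the database lets SQLite fetch only a few pages over HTTP instead of scanning the whole table. A minimal sketch (the table and column names here are hypothetical, not part of this project):

```python
import sqlite3

# Hypothetical schema for illustration; adapt to your own database.
conn = sqlite3.connect("your.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)"
)
# An index on the lookup column turns a full table scan into a
# handful of B-tree page reads for queries like
# `SELECT * FROM users WHERE email = ?`.
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_users_email ON users (email)"
)
conn.commit()
conn.close()
```

Create the indexes locally before running `aws s3 cp`, since the service only reads the file.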
This application was created to run on Lambda functions. It assumes you have the S3 bucket deployed, with the `GetObject` permission properly set.
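A minimal IAM policy granting that permission might look like the following (the bucket name is a placeholder; scope it to your own bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
```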
```shell
pip install -r requirements.txt
```
If you want to run it locally:
```shell
pip install uvicorn
export READQL_S3_BUCKET_NAME=bucket-name
uvicorn handler:app
# Application should be available on `localhost:8000`
```
We use the Serverless Framework to deploy the project to AWS Lambda, so deploying it is as easy as:
```shell
npx serverless deploy --verbose
```
Docker deployment is used because APSW did not play well with the default Lambda Python environment.
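A container-based Serverless Framework configuration along these lines could work; this is only a sketch, and the service name, image path, and routing are assumptions rather than the project's actual file:

```yaml
# Hypothetical serverless.yml sketch for a container image deployment.
service: readql
provider:
  name: aws
  ecr:
    images:
      app:
        path: ./   # directory containing the Dockerfile
functions:
  api:
    image:
      name: app
    events:
      - httpApi: '*'   # route all HTTP requests to the function
```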
Using this service is as easy as uploading your SQLite database to your S3 bucket and querying it with your favourite HTTP client:
```shell
aws s3 cp your.db s3://bucket/your.db
xh /your.db q=='SELECT 123 AS num'
```

```json
[
  {
    "num": 123
  }
]
```
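For programmatic access, the request URL can be built from Python as well. The helper below only constructs the URL with the `q` query-string parameter shown above; the base URL and database key are placeholders for your own deployment:

```python
from urllib.parse import quote

# Placeholder for your deployment (local uvicorn, Lambda URL, etc.).
BASE_URL = "http://localhost:8000"

def query_url(db_key: str, sql: str) -> str:
    """Build a readql query URL: the SQL goes in the `q` parameter."""
    return f"{BASE_URL}/{db_key}?q={quote(sql)}"

print(query_url("your.db", "SELECT 123 AS num"))
# → http://localhost:8000/your.db?q=SELECT%20123%20AS%20num
```

Any HTTP client can then fetch that URL and parse the JSON array of rows.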
If you want, you can use the hosted version:
```shell
xh https://readql.jmeyer.dev type==CSV
```

```json
{
  "key": "UUID.csv",
  "url": "presigned.s3.url"  # expires in 10 minutes
}
```
And just upload to that URL. You can query the file using:
```shell
xh https://readql.jmeyer.dev/test.csv q=='SELECT * FROM s3Object'
```

```json
{
  "a": "1",
  "b": "2",
  "c": "3"
}
```
That is it! Have fun!
- @rogerbinns for creating APSW, which allowed me to do this.
- @uktrade for the implementation that I used as a base for my own VFS.