
Datasette Lite

Datasette running in your browser using WebAssembly and Pyodide

Live tool:

More about this project:

How this works

Datasette Lite runs the full server-side Datasette Python web application directly in your browser, using the Pyodide build of Python compiled to WebAssembly.

When you launch the demo, your browser will download and start executing a full Python interpreter, install the datasette package (and its dependencies), download one or more SQLite database files and start the application running in a browser window (actually a Web Worker attached to that window).

Load a different Datasette version

Datasette Lite uses the most recent stable Datasette release from PyPI.

To use the most recent preview version (alpha or beta) add ?ref=pre:

Or for a specific release pass the version number as ?ref=:

Loading CSV data

You can load data from a CSV file hosted online (provided it allows access-control-allow-origin: *) by passing that URL as a ?csv= parameter - or by clicking the "Load CSV by URL" button and pasting in a URL.

This example loads a CSV of college fight songs from the fivethirtyeight/data GitHub repository:

You can pass ?csv= multiple times to load more than one CSV file. You can then execute SQL joins to combine that data.
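Because the same parameter can repeat, a Datasette Lite URL with several CSVs can be assembled programmatically. A minimal Python sketch, assuming the tool is served at lite.datasette.io and using placeholder example.com CSV URLs:

```python
from urllib.parse import urlencode

# Hypothetical CSV URLs - substitute any files served with open CORS headers.
csv_urls = [
    "https://example.com/us-counties-recent.csv",
    "https://example.com/county-populations.csv",
]

# urlencode over a list of pairs emits one csv=... parameter per entry,
# which is how Datasette Lite accepts multiple files.
query = urlencode([("csv", url) for url in csv_urls])
lite_url = "https://lite.datasette.io/?" + query
print(lite_url)
```

Once both tables are loaded you can join them with ordinary SQL in the query editor.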

This example loads the latest Covid-19 per-county data from the NY Times and the 2019 county populations data from the US Census, joins them on FIPS code, and runs a query that calculates cases per million across that data.

Loading JSON data

If you have data in a JSON file that looks something like this, you can load it directly into Datasette Lite by passing its URL as the ?json= parameter:

    [
        {
            "id": 1,
            "name": "Item 1"
        },
        {
            "id": 2,
            "name": "Item 2"
        }
    ]

This also works with JSON documents where one of the keys is a list of objects, such as this one:

    {
        "rows": [
            {
                "id": 1,
                "name": "Item 1"
            },
            {
                "id": 2,
                "name": "Item 2"
            }
        ]
    }

In this case it will search for the first key that contains a list of objects.
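That "first key containing a list of objects" behavior can be illustrated with a short Python sketch (the function name is hypothetical; Datasette Lite's actual implementation may differ in detail):

```python
def find_first_list_of_objects(document):
    """Return the first value that is a non-empty list of objects,
    scanning the document's keys in order."""
    for value in document.values():
        if (
            isinstance(value, list)
            and value
            and all(isinstance(item, dict) for item in value)
        ):
            return value
    return None

doc = {
    "title": "Example",
    "rows": [{"id": 1, "name": "Item 1"}, {"id": 2, "name": "Item 2"}],
}
rows = find_first_list_of_objects(doc)
```

Here "title" is skipped because its value is a string, so the "rows" list is what gets imported.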

If a document is a JSON object where every value is a JSON object, like this:

    {
        "anchor-positioning": {
            "spec": ""
        },
        "array-at": {
            "spec": ""
        },
        "array-flat": {
            "caniuse": "array-flat",
            "spec": ""
        }
    }

Each of those objects will be loaded as a separate row, with a _key primary key column containing the object key.
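The transformation from an object-of-objects into rows with a _key column can be sketched in a few lines of Python (a simplified illustration, not Datasette Lite's actual code):

```python
def objects_to_rows(document):
    """Flatten {key: {...}} into a list of rows, storing each
    top-level key in a _key column."""
    return [{"_key": key, **value} for key, value in document.items()]

doc = {
    "anchor-positioning": {"spec": ""},
    "array-at": {"spec": ""},
}
rows = objects_to_rows(doc)
```

Each top-level key becomes one row, with the nested object's fields as the remaining columns.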

This example loads scraped data from this repo.

Newline-delimited JSON works too - for example a file that looks like this:

    {"id": 1, "name": "Item 1"}
    {"id": 2, "name": "Item 2"}
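Newline-delimited JSON is simple to parse: each non-empty line is a standalone JSON object. A Python sketch of the idea:

```python
import json

ndjson = '{"id": 1, "name": "Item 1"}\n{"id": 2, "name": "Item 2"}\n'

# Parse each non-empty line as its own JSON object.
rows = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
```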

Loading SQLite databases

You can use this tool to open any SQLite database file that is hosted online and served with an access-control-allow-origin: * CORS header. Files served by GitHub Pages automatically include this header, as do database files that have been published online using datasette publish.

Copy the URL to the .db file and either paste it into the "Load SQLite DB by URL" prompt, or construct a URL like the following:

Some examples to try out:

Loading Parquet

To load a Parquet file, pass a URL to ?parquet=.

For example this file:

Can be loaded like this:

Initializing with SQL

You can also initialize the data.db database by passing in the URL of a SQL file. The easiest way to do this is to create a GitHub Gist.

This example SQL file creates a table and populates it with three records. It's hosted in this Gist.

You can paste this URL into the "Load SQL by URL" prompt, or you can pass it as the ?sql= parameter like this.

SQL will be executed before any CSV imports, so you can use initial SQL to create a table and then use ?csv= to import data into it.

Starting with just an in-memory database

To skip loading the default databases and just provide /_memory - useful for demonstrating plugins - pass ?memory=1, for example:

Loading metadata

Datasette supports metadata, as a metadata.json or metadata.yml file.

You can load a metadata file in either of these formats by passing a URL to the ?metadata= query string option.

Special handling of GitHub URLs

A tricky thing about using Datasette Lite is that the files you load via URL need to be hosted somewhere that serves open CORS headers.

Both regular GitHub and GitHub Gists do this by default. This makes them excellent options to host data files that you want to load into Datasette Lite.

You can paste in the "raw" URL to a file, but Datasette Lite also has a shortcut: if you paste in the URL to a page on GitHub or a Gist it will automatically convert it to the "raw" URL for you.
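The conversion from a GitHub file page to its raw equivalent follows a predictable URL pattern. A Python sketch of the idea, using a hypothetical owner/repo path (Datasette Lite's own conversion, written in JavaScript, also handles Gist URLs and may cover more cases):

```python
import re

def to_raw_github_url(url):
    """Rewrite a github.com file-page URL to its
    raw.githubusercontent.com form; return other URLs unchanged."""
    match = re.match(r"https://github\.com/([^/]+)/([^/]+)/blob/(.+)", url)
    if match:
        owner, repo, path = match.groups()
        return f"https://raw.githubusercontent.com/{owner}/{repo}/{path}"
    return url

print(to_raw_github_url("https://github.com/owner/repo/blob/main/data.db"))
```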

Try the following to see this in action:

Installing plugins

Datasette has a number of plugins that enable new features.

You can install plugins into Datasette Lite by adding one or more ?install=name-of-plugin parameters to the URL.

Not all plugins are compatible with Datasette Lite yet. Plugins that load their own JavaScript and CSS, for example, do not currently work; see issue #8.

Here's a list of plugins that have been tested with Datasette Lite, plus demo links to see them in action:


Analytics

By default, hits to the site are logged using Plausible.

Plausible is a privacy-focused, cookie-free, GDPR-compliant analytics system.

Each navigation within Datasette Lite is logged as a separate event to Plausible, capturing the fragment hash and the URL to the currently loaded file.

The site is hosted on GitHub Pages, which does not offer any analytics that are visible to the site owner. GitHub Pages can only log visits to the root page - it will not have visibility into any subsequent # fragment navigation.

To opt out of analytics, you can add ?analytics=off or &analytics=off to the URL. This will prevent any analytics being sent to Plausible.