
Releases: qri-io/qri


10 May 16:06

v0.10.0 (2021-05-04)

Welcome to the long-awaited Qri 0.10.0 release! We've focused on usability and bug fixes, with massive improvements to saving a dataset, the HTTP API, and the lib package interface. We've also added a few new features (step-based transform execution, change reports over the API, progress bars on save, and a new component: Stats), and you should see an obvious improvement in the speed, reliability, and usability of Qri, especially when saving a new version of a dataset.

Massive Improvements to Save Performance

We've drastically improved the reliability and scalability of saving a dataset on Qri. Qri now uses a bounded block of memory while saving, consuming a maximum of roughly 150MB regardless of how large your dataset is. This means the maximum size of dataset you can save is no longer tied to your available memory.
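The bounded-memory approach works by streaming the body rather than loading it whole. As a rough illustration of the idea (not Qri's actual code), hashing a large input in fixed-size chunks keeps memory flat no matter how big the input is:

```python
import hashlib
import io

def hash_stream(reader, chunk_size=1024 * 1024):
    """Hash a stream in fixed-size chunks so memory use stays
    bounded by chunk_size, not by the total input size."""
    h = hashlib.sha256()
    while True:
        chunk = reader.read(chunk_size)
        if not chunk:
            break
        h.update(chunk)
    return h.hexdigest()

# Works the same for a 1KB body or a 100GB one:
digest = hash_stream(io.BytesIO(b"city,pop\nnyc,8500000\n"))
```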

We had to change some underlying functionality to get the scalability we want. To that end, we no longer calculate Structure.Checksum, we no longer generate commit messages for datasets over a certain size, and we no longer store every error value found when validating the body of a dataset.

API Overhaul

Our biggest change has been a complete overhaul of our API.

We wanted to make our API easier to work with by making it more consistent across endpoints. After a great deal of review & discussion, this overhaul introduces an RPC-centric API that expects JSON POST requests, plus a few GET requests we're calling "sugar" endpoints.

The RPC part of our API is an HTTP pass-through to our lib methods. This makes working with Qri over HTTP the same as working with Qri as a library. We've spent a lot of time building & organizing qri's lib interface, and now all of that same functionality is exposed over HTTP. The intended audience for the RPC API is folks who want to automate Qri across process boundaries while keeping very fine-grained control. Think "command line over HTTP".
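In practice, calling a lib method over HTTP is just a JSON POST to its endpoint. A minimal sketch (the `"ref"` field name and the port are illustrative assumptions; check the generated OpenAPI spec for each method's exact request shape):

```python
import json
from urllib import request

# Hypothetical payload; consult the OpenAPI spec for the exact
# field names each lib method expects.
body = json.dumps({"ref": "me/world_bank_population"}).encode()

req = request.Request(
    "http://localhost:2503/ds/get",  # local qri node's API address (assumed)
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would return the lib method's result as JSON.
```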

At the same time, however, we didn't want to lose a number of important-to-have endpoints, like being able to GET a dataset body via just a URL string, so we've moved all of these into a "sugar" API, and made lots of room to grow. We'll continue to add convenience-oriented endpoints that make it easy to work with Qri. The "sugar" API will be oriented to users who are prioritizing fetching data from Qri to use elsewhere.

We also noticed how quickly our OpenAPI spec fell out of date, so we decided to start generating the spec from the code itself. Take a look at our OpenAPI spec for a full list of supported JSON endpoints.

Here is our full API spec, supported in this release:

API Spec


The purpose of the API package is to expose the lib.RPC API and add syntactic sugar for mapping RESTful HTTP requests to lib method calls.

| endpoint | HTTP methods | Lib Method Name |
| --- | --- | --- |
| "/" | GET | api.HealthCheckHandler |
| "/health" | GET | api.HealthCheckHandler |
| "/qfs/ipfs/{path:.*}" | GET | qfs.Get |
| "/webui" | GET | api.WebuiHandler |
| "/ds/get/{username}/{name}" | GET | api.GetHandler |
| "/ds/get/{username}/{name}/at/{path}" | GET | api.GetHandler |
| "/ds/get/{username}/{name}/at/{path}/{component}" | GET | api.GetHandler |
| "/ds/get/{username}/{name}/at/{path}/body.csv" | GET | api.GetHandler |
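The sugar routes are plain URLs, so they compose with a simple string template. A sketch for the body.csv route (host, username, name, and version path here are placeholders):

```python
def body_csv_url(host, username, name, version_path):
    """Compose the sugar route "/ds/get/{username}/{name}/at/{path}/body.csv"
    for fetching a dataset body as CSV."""
    return f"{host}/ds/get/{username}/{name}/at/{version_path}/body.csv"

# Hypothetical dataset ref and version path:
url = body_csv_url("http://localhost:2503", "b5", "world_bank_population",
                   "ipfs/QmExampleHash")
```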


The purpose of the lib package is to expose a uniform interface for interacting with a qri instance

| endpoint | Return Type | Lib Method Name |
| --- | --- | --- |
| **Aggregate Endpoints** | | |
| "/list" | []VersionInfo | collection.List? |
| "/sql" | [][]any | sql.Exec |
| "/diff" | Diff | diff.Diff |
| "/changes" | ChangeReport | diff.Changes |
| **Access Endpoints** | | |
| "/access/token" | JSON Web Token | access.Token |
| **Automation Endpoints** | | |
| "/auto/apply" | ApplyResult | automation.apply |
| **Dataset Endpoints** | | |
| "/ds/componentstatus" | []Status | dataset.ComponentStatus |
| "/ds/get" | GetResult | dataset.Get |
| "/ds/activity" | []VersionInfo | dataset.History |
| "/ds/rename" | VersionInfo | dataset.Rename |
| "/ds/save" | dataset.Dataset | dataset.Save |
| "/ds/pull" | dataset.Dataset | dataset.Pull |
| "/ds/push" | DSRef | dataset.Push |
| "/ds/render" | []byte | dataset.Render |
| "/ds/remove" | RemoveResponse | dataset.Remove |
| "/ds/validate" | ValidateRes | dataset.Validate |
| "/ds/unpack" | Dataset | dataset.Unpack |
| "/ds/manifest" | Manifest | dataset.Manifest |
| "/ds/manifestmissing" | Manifest | dataset.ManifestMissing |
| "/ds/daginfo" | DagInfo | dataset.DagInfo |
| **Peer Endpoints** | | |
| "/peer" | Profile | peer.Info |
| "/peer/connect" | Profile | peer.Connect |
| "/peer/disconnect" | Profile | peer.Disconnect |
| "/peer/list" | []Profile | peer.Profiles |
| **Profile Endpoints** | | |
| "/profile" | Profile | profile.GetProfile |
| "/profile/set" | Profile | profile.SetProfile |
| "/profile/photo" | Profile | profile.ProfilePhoto |
| "/profile/poster" | Profile | profile.PosterPhoto |
| **Remote Endpoints** | | |
| "/remote/feeds" | Feed | remote.Feeds |
| "/remote/preview" | Dataset | remote.Preview |
| "/remote/remove" | - | remote.Remove |
| "/remote/registry/profile/new" | Profile | registry.CreateProfile |
| "/remote/registry/profile/prove" | Profile | registry.ProveProfile |
| "/remote/search" | SearchResult | remote.Search |
| **Working Directory Endpoints** | | |
| "/wd/status" | []StatusItem | fsi.Status |
| "/wd/init" | DSRef | ... |


12 Oct 21:24

Patch v0.9.13 brings improvements to the validate command and lays the groundwork for OAuth within qri core.

qri validate gets a little smarter this release, printing a cleaner, more readable list of human-friendly errors, and it now has flags to output validation error data in JSON and CSV formats.

A full description of changes is in the changelog.


10 Sep 20:42

Patch release 0.9.12 features a number of fixes to various qri features, most aimed at improving general quality-of-life of the tool, and some others that lay the groundwork for future changes.

HTTP API Changes

Changed the qri API so that the /get endpoint serves both dataset heads and bodies. /body still exists but is now deprecated.

P2P and Collaboration

A new way to resolve peers and references on the p2p network.
The start of access control added to our remote communication API.
Remotes serve a simple web ui.

General polish

Fix ref resolution with divergent logbook user data.
Working directories allow case-insensitive filenames.
Improve sql support so that dataset names don't need an explicit table alias.
The get command can fetch datasets from cloud.


10 Aug 15:59
@b5

This patch release addresses a critical error in qri setup, and removes overly-verbose output when running qri connect.


30 Jul 01:07
@b5

v0.9.10 (2020-07-27)

For this release we focused on clarity, reliability, major fixes, and communication (both between qri and the user, and the different working components of qri as well). The bulk of the changes surround the rename of publish and add to push and pull, as well as making the commands more reliable, flexible, and transparent.

push is the new publish

Although qri defaults to publishing datasets to our website (if you haven't checked it out recently, it's gone through a major facelift & has new features like dataset issues and vastly improved search!), we still give users tools to create their own services that can host data for others. We call these remotes (our website is technically a very large, very reliable remote). However, we needed a better way to keep track of where a dataset has been "published", and also to allow datasets to be published to different locations.

We weren't able to correctly convey "hey, this dataset has been published to remote A but not remote B" using a simple boolean published/unpublished paradigm. We are also working toward a system where you can push to a peer remote, or make your dataset private even though it has been sent to live at a public location.

In all these cases, the name publish wasn't cutting it, and was confusing users.

After debating a few new titles in RFC0030, we settled on push. It properly conveys what is happening: you are pushing the dataset from your node to a location that will accept and store it. Qri keeps track of where it has been pushed, so it can be pushed to multiple locations.

It also helps that git has a push command that fulfills a similar function in software version control, so using the verb push this way has precedent. We've also clarified the command help text: only one version of a dataset is pushed at a time.

pull is the new add

We decided that, for clarity, if we were renaming qri publish to qri push, we should also rename its mirrored action, qri add, to qri pull. Now it's clear: to send a dataset to another source use qri push; to get a dataset from another source use qri pull!

use get instead of export

qri export has been removed. Use qri get --format zip me/my_dataset instead. We want more folks to play with get: it's a far more powerful version of export, and too many folks missed out on it because they found export first and it didn't meet their expectations.

major fix: pushing & pulling historical versions

qri push without a specified version will still default to pushing the latest version and qri pull without a specified version will still default to pulling every version of the dataset that is available. However, we've added the ability to push or pull a dataset at specific versions by specifying the dataset version's path! You can see a list of a dataset's versions and each version's path by using the qri log command.

In the past this would error:

$ qri publish me/dataset@/ipfs/SpecificVersion

With the new push command, this will now work:

$ qri push me/dataset@/ipfs/SpecificVersion

You can use this to push old versions to a remote, same with pull!

events, websockets & progress

We needed a better way for the different internal qri processes to coordinate. So we beefed up our event system and piped the stream of events to a websocket. Now one qri process can subscribe and get notified about important events that occur in another process. This is also great for users, because we can use those events to communicate more information while resource-intensive or time-consuming actions are running! Check out our progress bars when you push and pull!

The websocket event API is still a work in progress, but it's a great way to build dynamic functionality on top of qri, using the same events qri uses internally to power things like progress bars and inter-subsystem communication.

other important changes

  • sql now properly handles dashes in dataset names
  • migrations now work on machines across multiple mounts. We fixed a bug that was causing the migration to fail. This was most prevalent on Linux.
  • the global --no-prompt flag disables all interactive prompts and now falls back on defaults for each interaction.
  • a global --migrate flag will auto-run a migration check before continuing with the given command
  • the default when we ask the user to run a migration is now "No". In order to auto-run a migration you need the --migrate flag (not the --no-prompt flag, but they can both be used together for "run all migrations and don't bother me")
  • the remove command now takes on the duties of the --unpublish flag. Run qri remove --all --remote=registry me/dataset instead of qri publish --unpublish me/dataset. More verbose? Yes. But you're deleting stuff, so it should be a think-before-you-hit-enter type of thing.
  • We've made some breaking changes to our API; they're listed in the YELLY CAPS TEXT below detailing breaking changes

full notes in the Changelog


01 Jul 22:20

Welcome to Qri 0.9.9! We've got a lot of internal changes that speed up the work you do on Qri every day, as well as a bunch of new features and key bug fixes!

Config Overhaul

We've taken a hard look at our config, wanting to make sure not only that every field is being used, but also that the config can serve us well as we progress down our roadmap and build future features.

To that effect, we removed many unused fields, switched to using multiaddresses for all network configuration (replacing any port fields), formalized the hierarchy of different configuration sources, and added a new Filesystems field.

This new Filesystems field allows users to choose the supported filesystems on which they want Qri to store their data. For example, in the future, when we support s3 storage, this Filesystems field is where the user can go to configure the path to the storage, if it's the default save location, etc. More immediately however, exposing the Filesystems configuration also allows folks to point to a non-default location for their IPFS storage. This leads directly to our next change: moving the default IPFS repo location.
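For illustration, a Filesystems section might look roughly like this in the config file (the field names here are a sketch of the shape, not the exact schema; check your actual qri config for the real field names):

```yaml
# hypothetical shape — inspect your qri config for the real schema
filesystems:
  - type: ipfs
    config:
      path: /home/user/.qri/ipfs   # point at a non-default IPFS repo here
  - type: local
  - type: http
```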


IPFS Upgrade

One big change we've been working on behind the scenes is upgrading our IPFS dependency. IPFS recently released version 0.6.0, and that's the version we are now relying on! This was a very important upgrade, as users relying on older versions of IPFS (below 0.5.0) would not be seen by the larger IPFS network.

We also wanted to move the Qri associated IPFS node off the default IPFS_PATH and into a location that advertises a bit more that this is the IPFS node we rely on. And since our new configuration allows users to explicitly set the path to the IPFS repo, if a user prefers to point their repo to the old location, we can still accommodate that. By default, the IPFS node that Qri relies on will now live on the QRI_PATH.

Migrations can be rough, so we took the time to ensure that upgrading to the newest version of IPFS, adjusting the Qri config, and moving the IPFS repo onto the QRI_PATH would go off without a hitch!

JSON schema

Qri now relies on a newer draft (draft2019_09) of JSON Schema. Our Go implementation of jsonschema now has better support for the spec, equal or better performance depending on the keyword, and the option to extend it with your own keywords.

Removed Update

This was a real kill-your-darlings situation! The functionality of update - scheduling and running qri saves - can be done more reliably using other schedulers/task managers. Our upcoming roadmap expands many Qri features, and we realized we couldn't justify the planning/engineering time needed to keep update up to our standards. Rather than letting this feature weigh us down, we realized it would be better to remove update and instead point users to docs on how to schedule updates. One day we may revisit updates as a plugin or wrapper.

Merkledag error

Some users were getting Merkledag not found errors when trying to add some popular datasets from Qri Cloud (for example nyc-transit-data/turnstile_daily_counts_2019). This should no longer be the case!

Specific Command Line Features/Changes

  • qri save - use the --drop flag to remove a component from that dataset version
  • qri log - use the --local flag to only get the logs of the dataset that are stored locally
    - use the --pull flag to only get the logs of the dataset from the network (explicitly not local)
    - use the --remote flag to specify a remote off of which you want to grab that dataset's log. This defaults to the qri cloud registry
  • qri get - use the --zip flag to export a zip of the dataset

Specific API Features/Changes

  • /fetch - removed, use /history?pull=true
  • /history - use the local=true param to only get the logs of a dataset that are stored locally
    - use the pull=true param to get the logs of a dataset from the network only (explicitly not local)
    - use the remote=REMOTE_NAME to specify a remote off of which you want to grab that dataset's log. This defaults to the qri cloud registry
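These params combine as an ordinary query string. For example (REMOTE_NAME is a placeholder, and any dataset-ref portion of the path is omitted here):

```python
from urllib.parse import urlencode

# Ask a specific remote for a dataset's log, skipping local data.
params = urlencode({"pull": "true", "remote": "REMOTE_NAME"})
url = f"http://localhost:2503/history?{params}"
# → "http://localhost:2503/history?pull=true&remote=REMOTE_NAME"
```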


  • update command and all api endpoints are removed
  • removed /fetch endpoint - use /history instead. The local=true param ensures that the logbook data is only what you have locally in your logbook


20 Apr 20:27
@b5

0.9.8 is a quick patch release to fix export for a few users who have been having trouble getting certain datasets out of qri.

Fixed Export

This patch release fixes a problem that was causing some datasets to not export properly while running qri connect.

Naming rules

This patch also clarifies what characters are allowed in a dataset name and a peername. From now on a legal dataset name and username must:

  • consist of only lowercase letters, numbers 0-9, the hyphen "-", and the underscore "_".
  • start with a letter

Length limits vary between usernames and dataset names, but qri now enforces these rules more consistently. Existing dataset names that violate these rules will continue to work, but will have to be renamed in a future version. New datasets with names that don't match these rules cannot be created.
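The two character rules above translate directly into a regular expression. A sketch of a validator under those rules (length limits are omitted, since they vary between usernames and dataset names):

```python
import re

# lowercase letter first, then lowercase letters, digits 0-9, "-", "_"
NAME_RULE = re.compile(r"^[a-z][a-z0-9_-]*$")

def is_valid_name(name: str) -> bool:
    """Check a dataset name or username against the character rules
    (length limits, which differ per kind, are not enforced here)."""
    return NAME_RULE.fullmatch(name) is not None

is_valid_name("turnstile_daily-counts")  # True
is_valid_name("9lives")                  # False: must start with a letter
```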

A full description of changes is in the changelog.


07 Apr 20:01
@b5

Qri CLI v0.9.7 is huge. This release adds SQL support, turning Qri into an ever-growing database of open datasets.

If that wasn't enough, we've added tab completion, nicer automatic commit messages, unified our command descriptions, and fixed a whole slew of bugs!

📊 Run SQL on datasets

Experimental support for SQL is here! Landing this feature brings qri full circle to the original whitepaper we published in 2017.

We want to live in a world where you can SELECT * FROM any_qri_dataset, and we're delighted to say that day is here.

We have plans to improve & build upon this crucial feature, and are marking it as experimental while we flesh out our SQL implementation. We'll drop the "experimental" flag when we support a healthy subset of the SQL spec.

We've been talking about SQL a bunch in our community calls:

🚗🏁 Autocomplete

The name says it all. After following the instructions in qri generate --help, type qri get, then press tab, and voilà: your list of datasets appears for the choosing. This makes working with datasets much easier, requiring you to remember and type less. 🎦 Here's a demo from our community call.

🤝📓 Friendlier Automatic Commit Messages

For a long time Qri has automatically generated commit messages for you, if one isn't supplied, by analyzing what's changed between versions. This release makes titles that look like this:

updated structure, viz, and transform

and adds detailed messages that look like this:

    updated schema.items.items.63.title
    updated scriptPath
    updated resources./ipfs/QmfQu6qBS3iJEE3ohUnhejb7vh5KwcS5j4pvNxZMi717pU.path
    added scriptBytes
    updated syntaxVersion

These automatic messages form a nice textual description of what's changed from version to version. Qri will automatically add these if you don't provide --title and/or --message values to qri save.
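Conceptually, these messages come from diffing the old and new versions and describing each change. A toy sketch of the idea (not Qri's actual diff logic), comparing two flat maps of dataset fields:

```python
def describe_changes(prev, curr):
    """Produce 'added/updated/removed <field>' lines by comparing
    two flat dicts of dataset fields. A toy version of the idea."""
    lines = []
    for key in sorted(set(prev) | set(curr)):
        if key not in prev:
            lines.append(f"added {key}")
        elif key not in curr:
            lines.append(f"removed {key}")
        elif prev[key] != curr[key]:
            lines.append(f"updated {key}")
    return lines

describe_changes(
    {"scriptPath": "a.star", "syntaxVersion": "1"},
    {"scriptPath": "b.star", "syntaxVersion": "2", "scriptBytes": "..."},
)
# → ["added scriptBytes", "updated scriptPath", "updated syntaxVersion"]
```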

📙 Uniform CLI help

Finally, a big shout out to one of our biggest open source contributions to date! @Mr0grog not only contributed a massive cleanup of our command line help text, they also wrote a style guide based on the existing help text for others to follow in the future!

A full description of changes is in the changelog.


05 Mar 21:25

This patch release fixes a number of small bugs, mainly in support of our Desktop app, and continues infrastructural improvements in preparation for larger feature releases. These include: our improved diff experience, significantly better filesystem integration, and a new method of dataset name resolution that better handles changes across a peer network.

A full description of changes is in the changelog.


27 Feb 17:17
@b5

This patch release is focused on a number of API refactors, and sets the stage for a new subsystem we're working on called dscache. It's a small release, but should help stabilize communication between peer remotes & the registry.