
Latest commit

* Propagating 'login_enabled' parameter to Web UI

* Extended runtime config with feature flags (login, logout) and the logged-in user.
Propagating the user's full name from the OS via runtime_config

* GraphQL: filtering datasets by owner account

* `kamu-adapter-oauth`: isolates GitHub interaction from GraphQL.
GraphQL: simplified access token to a string (future JWT), and added a password login endpoint stub

* Introduced `AuthenticationService` and `AuthenticationProvider` abstractions in the domain.
GitHub OAuth login is one of the authentication providers.
GraphQL: works via AuthenticationService, without linking the GitHub adapter directly.
GraphQL: minimized AccountInfo structure

* using AccountInfo in CurrentAccountSubject and WebUIServer

* Simplified authentication GraphQL

* CLI password authentication

Authentication provider based on predefined accounts (loaded via CLI config).
The password must match the login.
If an account is unknown, it is presumed to exist, without an avatar

* Emit login instructions in `kamu ui` mode

GraphQL: generate alias-based accounts instead of fakes in most contexts.
Use a random secret if no JWT secret is set in the env.
GitHub: query env vars only before use.
GraphQL: bad login method/credentials are no longer an internal error.
Relaxed local FS multi-tenancy ref checks if the account matches the current one.

* Emit login methods to UI instead of "enableLogin".
In a single-tenant workspace, use the "kamu" user

* Single-tenant workspace login limitations: no GitHub and no predefined accounts, just kamu

* GraphQL Bearer Authentication

- API server propagates Bearer token into GraphQL request data.
- Implemented LoggedInGuard for basic authorization checks against anonymous use or invalid JWT tokens.
- Annotated mutable operations (except login/get-account) with LoggedInGuard
- Mock version for GraphQL unit tests

* Better names of fields within AccountInfo structure

* Separated base & cli dependency catalogs.
Emitting CurrentAccountSubject for authenticated requests.

* Anonymous accounts:
- inject when not logged in
- inject internally when token expired
- logged in guards use new attribute
- access errors in dataset rename, delete
- access errors in event commit / readme update

* `TaskExecutorInMemory` dependency on `PullService` is delayed until running.

* GraphQL dataset create APIs don't need accountId

* Early stage of dataset permissions API

* GraphQL: added 'alias' to Dataset properties

* Dataset access error becomes runtime GqlError

* Reworked CurrentAccountSubject resolution as an Axum middleware.
Injecting final catalog into GraphQL schema in GraphQL web handlers, where middlewares already inserted a valid patched catalog

* Drafted Account API implementation (resolve account information by name)

* Account API implemented + naming clarifications around accounts

* Corrections for single-tenant API server mode

* New GraphQL endpoint that explains supported login methods. Removed static configuration in WebUI mode

* Flattened AccountRef & Account types using tokio::sync::OnceCell lazy loading util.
Restored old naming before introduction of AccountRef

* Version bumped + minor dependencies

* Major dependency updates

* CurrentAccountSubject became an enum

* Tests: OSO authorizer, get allowed actions
* Tests: Authentication layer in HTTP
* Tests + fix: repository iterate by unknown account
* Tests: AuthenticationService
* Tests: GraphQL LoggedInGuard
* Tests: GraphQL auth operations
* Tests: covered renaming changes in GraphQL operations

* UI image v0.10.0



Using the installer script (Linux / MacOSX / WSL2):

curl -s "" | sh

Get Started


kamu (pronounced kaˈmju) is an easy-to-use command-line tool for managing, transforming, and collaborating on structured data.

In short, it can be described as:

  • Git for data
  • Decentralized data warehouse
  • A peer-to-peer stream processing data pipeline
  • Blockchain-like ledger for data
  • Or even Kubernetes for data :)

Using kamu, any person or even the smallest organization can easily share structured data with the world. Data can be static or flow continuously. In all cases, kamu will ensure that it stays:

  • Reproducible - i.e. you can ask the publisher "Give me the same exact data you gave me a year ago"
  • Verifiable - i.e. you can ask the publisher "Is this the exact data you had a year ago?"
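In CLI terms, these guarantees can be sketched with a few commands. This is an illustration only: the dataset alias is invented, and exact subcommands and flags may differ between kamu versions.

```shell
# Hypothetical dataset alias, for illustration only
kamu pull kamu.dev/example/city-populations  # fetch the dataset and its metadata chain
kamu log city-populations                    # inspect the tamper-proof metadata blocks
kamu verify city-populations                 # re-check hashes to confirm data was not altered
```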

Teams and data communities can then collaborate on cleaning, enriching, and aggregating data by building arbitrarily complex decentralized data pipelines. Following the "data as code" philosophy, kamu doesn't let you touch data manually - instead, you transform it using Streaming SQL (we support multiple frameworks). This ensures that data supply chains are:

  • Autonomous - write query once and run it forever, no more babysitting fragile batch workflows
  • Low latency - get accurate results immediately, as new data arrives
  • Transparent - see where every single piece of data came from, who transformed it, and how
  • Collaborative - collaborate on data just like on Open Source Software
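As a concrete sketch of the "data as code" approach, a derivative dataset could be declared in a YAML manifest and registered with the CLI. The dataset names, manifest fields, and engine choice below are illustrative assumptions, not copied from the kamu documentation, and the schema may differ between versions:

```shell
# Illustrative manifest; field names and dataset aliases are assumptions
cat > daily-cases.yaml <<'EOF'
kind: DatasetSnapshot
content:
  name: covid.daily-cases
  kind: Derivative
  metadata:
    - kind: SetTransform
      inputs:
        - datasetRef: covid.raw-cases
      transform:
        kind: Sql
        engine: spark
        query: |
          SELECT reported_date, region, COUNT(*) AS cases
          FROM `covid.raw-cases`
          GROUP BY reported_date, region
EOF

kamu add daily-cases.yaml    # register the dataset in the workspace
kamu pull covid.daily-cases  # run the streaming query over newly arrived input data
```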

Data scientists, analysts, ML/AI researchers, and engineers can then:

  • Access fresh, clean, and trustworthy data in seconds
  • Easily keep datasets up-to-date
  • Safely reuse data created by the hard work of the community

The reuse is achieved by maintaining an unbreakable lineage and provenance trail in tamper-proof metadata, which lets you assess the trustworthiness of data no matter how many hands and transformation steps it went through.

In a larger context, kamu is a reference implementation of Open Data Fabric - a Web 3.0 protocol for providing timely, high-quality, and verifiable data for data science, smart contracts, web and applications.

Open Data Fabric

Use Cases

In general, kamu is a great fit for cases where data is exchanged between several independent parties, and for (low to moderate frequency & volume) mission-critical data where a high degree of trustworthiness and protection from malicious actors is required.


Open Data

To share data outside of your organization today you have limited options:

  • You can publish it on some open data portal, but lose ownership and control of your data
  • You can deploy and operate some open-source data portal (like CKAN or Dataverse), but you probably have neither time nor money to do so
  • You can self-host it as a CSV file on some simple HTTP/FTP server, but then you are making it extremely hard for others to discover and use your data

Let's acknowledge that for organizations that produce the most valuable data (governments, hospitals, NGOs), publishing data is not part of their business. They typically don't have the incentives, expertise, and resources to be good publishers.

This is why the goal of kamu is to make data publishing cheap and effortless:

  • It invisibly guides publishers towards best data management practices (preserving history, making data reproducible and verifiable)
  • It adds no more friction than exporting data to CSV
  • It lets you host your data on any storage (FTP, S3, GCS, etc.)
  • It lets you maintain full control and ownership of your data

As opposed to the mere download counter you get on most data portals, kamu brings publishers closer to their communities, allowing them to see who uses their data and how. You no longer send data into "the ether", but create a closed feedback loop with your consumers.

Science & Research

One of the driving forces behind kamu's design was the ongoing reproducibility crisis in science, which we believe to a large extent is caused by our poor data management practices.

After incidents like the Surgisphere scandal, the sentiment in research is changing from assuming that all research is done in good faith to considering any research unreliable until proven otherwise.

Data portals like Dataverse, Dryad, Figshare, and Zenodo are helping reproducibility by archiving data, but this approach:

  • Results in hundreds of millions of poorly systematized datasets
  • Tends to produce research based on stale and long-outdated data
  • Creates lineage and provenance trail that is very manual and hard to trace (through published papers)

In kamu we believe that the majority of valuable data (weather, census, health records, financial core data) flows continuously, and most of the interesting insights lie around the latest data, so we designed it to bring reproducibility and verifiability to near real-time data.

When using kamu:

  • Your data projects are 100% reproducible using a built-in stable references mechanism
  • Your results can be reproduced and verified by others in minutes
  • All the data prep work (that often accounts for 80% of a data scientist's time) can be shared and reused by others
  • Your data projects will continue to function long after you've moved on, so the work done years ago can continue to produce valuable insights with minimal maintenance on your part
  • Continuously flowing datasets are much easier to systematize than the exponentially growing number of snapshots

Data-driven Journalism

Data-driven journalism is on the rise and has proven to be extremely effective. In the world of misinformation and extremely polarized opinions data provides us an anchoring point to discuss complex problems and analyze cause and effect. Data itself is non-partisan and has no secret agenda, and arguments around different interpretations of data are infinitely more productive than ones based on gut feelings.

Unfortunately, too often data has issues that undermine its trustworthiness. And even if the data is correct, it's very easy to pose a question about its sources that will take too long to answer - the data will be dismissed, and the gut feelings will step in.

This is why kamu's goal is to make data verifiably trustworthy and make answering provenance questions a matter of seconds. Only when data cannot be easily dismissed will we start to pay proper attention to it.

And once we agree that source data can be trusted, we can build analyses and real-time dashboards that keep track of complex issues like corruption, inequality, climate, epidemics, refugee crises, etc.

kamu prevents good research from going stale the moment it's published!

Business core data

kamu aims to be the most reliable data management solution that provides recent data while maintaining the highest degree of accountability and tamper-proof provenance, without you having to put all data in some central database.

We're developing it with financial and pharmaceutical use cases in mind, where audit and compliance could be fully automated through our system.

Note that we currently focus on mission-critical data and kamu is not well suited for IoT or other high-frequency and high-volume cases, but can be a good fit for insights produced from such data that influence your company's decisions and strategy.

Personal analytics

Being data geeks, we use kamu for data-driven decision-making even in our personal lives.

Actually, our largest data pipelines so far were created for personal finance:

  • collect and harmonize data from multiple bank accounts
  • convert currencies
  • analyze stock trading data

We also scrape a lot of websites to make smarter purchasing decisions. kamu lets us keep all this data up-to-date with absolutely minimal effort.


kamu connects publishers and consumers of data through a decentralized network and lets people collaborate on extracting insight from data. It offers many perks for everyone who participates in this first-of-a-kind data supply chain:

For Data Publishers
  • Easily share your data with the world without moving it anywhere
  • Retain full ownership and control of your data
  • Close the feedback loop and see who uses your data and how
  • Provide real-time, verifiable and reproducible data that follows the best data management practices
For Data Scientists
  • Ingest any existing dataset from the web
  • Always stay up-to-date by pulling latest updates from the data sources with just one command
  • Use stable data references to make your data projects fully reproducible
  • Collaborate on cleaning and improving data of existing datasets
  • Create derivative datasets by transforming, enriching, and summarizing data others have published
  • Write query once and run it forever - our pipelines require nearly zero maintenance
  • Built-in support for GIS data
  • Share your results with others in a fully reproducible and reusable form
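The bullets above correspond roughly to a CLI workflow along these lines; the repository and dataset names are placeholders, and exact flags may vary by version:

```shell
kamu search weather                  # discover datasets in connected repositories
kamu pull kamu.dev/weather.stations  # ingest an existing dataset from the web
kamu pull --all                      # later: refresh every dataset with one command
```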
For Data Consumers
  • Download a dataset from a shared repository
  • Verify that all data comes from trusted sources using 100% accurate lineage
  • Audit the chain of transformations this data went through
  • Validate that downloaded data was not tampered with, using a single command
  • Trust your data by knowing where every single bit of information came from with our fine-grained provenance
For Data Exploration
  • Explore data and run ad-hoc SQL queries (backed by the power of Apache Spark)
  • Launch a Jupyter notebook with one command
  • Join, filter, and shape your data using SQL
  • Visualize the result using your favorite library
  • Explore complex pipelines in the Web UI
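The exploration features above map to a few commands; this sketch assumes a workspace with datasets already added, and the exact subcommands may differ between kamu versions:

```shell
kamu sql       # open an interactive SQL shell backed by Apache Spark
kamu notebook  # launch Jupyter with the workspace datasets available
kamu ui        # start the Web UI to explore datasets and pipelines
```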


If you like what we're doing - support us by starring the repo, this helps us a lot!

Subscribe to our YouTube channel to get fresh tech talks and deep dives.

Stop by and say "hi" in our Discord Server - we're always happy to chat about data.

If you'd like to contribute start here.