KCIDB

Kcidb is a package for submitting data to, and querying data from, the kernelci.org test execution database.

Setup

To install the package for the current user, run this command:

pip3 install --user <SOURCE>

Where <SOURCE> is the location of the package source, e.g. a git repo:

pip3 install --user git+https://github.com/spbnick/kcidb.git

or a directory path:

pip3 install --user .

If you want to hack on the source code, install the package in editable mode with the -e/--editable option, and with the "dev" extra included, e.g.:

pip3 install --user --editable '.[dev]'

The latter installs the kcidb executables so that they use the modules from the source directory; changes to the source take effect immediately, with no need to reinstall. It also installs extra development tools, such as flake8 and pylint.

In any case, make sure your PATH includes the ~/.local/bin directory, e.g. with:

export PATH="$PATH":~/.local/bin

BigQuery

Kcidb uses Google BigQuery for data storage. To be able to store or query anything, you first need to create a BigQuery dataset.

The documentation for setting up a BigQuery account with a dataset and a token can be found here: https://cloud.google.com/bigquery/docs/quickstarts/quickstart-client-libraries

Alternatively, you may follow these quick steps:

  1. Create a Google account if you don't already have one.
  2. Go to the "Google Cloud Console" for BigQuery: https://console.cloud.google.com/projectselector2/bigquery
  3. "CREATE" a new project, entering an arbitrary project name, e.g. kernelci001.
  4. "CREATE DATASET" in that new project, with an arbitrary ID, e.g. kernelci001a.
  5. Go to "Google Cloud Platform" -> "APIs & Services" -> "Credentials", or to this URL if you called your project kernelci001: https://console.cloud.google.com/apis/credentials?project=kernelci001
  6. Go to "Create credentials" and select "Service Account Key".
  7. Fill in these values:
  • Service Account Name: arbitrary, e.g. admin
  • Role: Owner
  • Format: JSON
  8. "Create" to automatically download the JSON file with your credentials.

Usage

Before executing any of the tools, make sure the GOOGLE_APPLICATION_CREDENTIALS environment variable contains the path to your BigQuery credentials file, e.g.:

export GOOGLE_APPLICATION_CREDENTIALS=~/.bq.json

To initialize the dataset, execute kcidb-init -d <DATASET>, where <DATASET> is the name of the dataset to initialize.
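
For example, with the dataset created above:

kcidb-init -d kernelci001a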

To submit records, use kcidb-submit; to query records, use kcidb-query. The former accepts JSON on standard input and the latter produces JSON on standard output, both following the same schema, which can be displayed with kcidb-schema.
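
For example, assuming kcidb-submit and kcidb-query accept the same -d <DATASET> option as kcidb-init, and that data.json contains schema-conforming data:

kcidb-schema > schema.json
kcidb-submit -d kernelci001a < data.json
kcidb-query -d kernelci001a > results.json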

To clean up the dataset (i.e. remove its tables), use kcidb-cleanup.
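
Again, assuming the same -d option:

kcidb-cleanup -d kernelci001a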

API

You can use the kcidb module to do everything the command-line tools do.

First, make sure you have the GOOGLE_APPLICATION_CREDENTIALS environment variable set and pointing at the Google Cloud credentials file. Then you can create the client with kcidb.Client(<dataset_name>) and call its init(), cleanup(), submit() and query() methods.

You can find the I/O schema in kcidb.io_schema.JSON and use kcidb.io_schema.validate() to validate your I/O data.
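
Below is a minimal sketch of a session. The method signatures are assumptions based on the description above: submit() is assumed to take I/O data as its argument, query() to return I/O data, and validate() to raise an exception on invalid input; check the source code for the exact API.

import json
import kcidb

# GOOGLE_APPLICATION_CREDENTIALS must already point at your credentials file
client = kcidb.Client("kernelci001a")  # your dataset name
client.init()                          # create the tables

with open("data.json") as json_file:   # JSON matching kcidb.io_schema.JSON
    data = json.load(json_file)
kcidb.io_schema.validate(data)         # assumed to raise on invalid data
client.submit(data)                    # assumed to take I/O data directly

results = client.query()               # assumed to return I/O data
kcidb.io_schema.validate(results)

client.cleanup()                       # remove the tables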

See the source code for additional documentation.
