Chainquery parses and syncs the LBRY blockchain data into structured SQL


LBRY Chainquery


Prerequisites

OS Specifics

OSX

  • To use wget (needed by build.sh), install it with brew install wget.
  • Chainquery is built for Linux by default in build.sh, so you will need to modify the cross compilation for an OSX build.
  • Be sure to give execute privileges to the scripts you plan to use.

Go

Make sure you have Go 1.10+ (required for go-releaser)

MySQL

  • Install and run MySQL (OSX: brew install mysql).
  • Create chainquery database.
  • Create user lbry with password lbry and grant it all permissions on chainquery db.
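The database and user setup above can be done with a few SQL statements. This is a sketch assuming the user connects from localhost; adjust the host (and ideally the password) for your environment.

```sql
-- Run as a MySQL user with administrative privileges, e.g. via `mysql -u root -p`.
CREATE DATABASE chainquery;
CREATE USER 'lbry'@'localhost' IDENTIFIED BY 'lbry';
GRANT ALL PRIVILEGES ON chainquery.* TO 'lbry'@'localhost';
FLUSH PRIVILEGES;
```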

Lbrycrd

  • Install lbrycrdd (https://github.com/lbryio/lbrycrd/releases)

  • Ensure ~/.lbrycrd/lbrycrd.conf file exists with username and password. If you don't have one, run:

    mkdir -p ~/.lbrycrd
    echo -e "rpcuser=lbryrpc\nrpcpassword=$(env LC_CTYPE=C LC_ALL=C tr -dc A-Za-z0-9 < /dev/urandom | head -c 16 | xargs)" >> ~/.lbrycrd/lbrycrd.conf
    
  • Run ./lbrycrdd -server -daemon -txindex -conf=$HOME/.lbrycrd/lbrycrd.conf. If you get an error about indexing, add the -reindex flag for one run. You will only need to reindex once.

Configuration

Chainquery can be configured via a toml file.
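A config file might look roughly like the following. The key names and values here are illustrative assumptions; consult the sample config shipped with the repository for the authoritative set of options.

```toml
# Illustrative example only - check the sample config in the repository
# for the actual key names and defaults.
lbrycrdurl = "rpc://lbryrpc:yourpassword@localhost:9245"
mysqldsn = "lbry:lbry@tcp(localhost:3306)/chainquery"
```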

Running from Source

go get -u github.com/lbryio/chainquery
cd "$(go env GOPATH)/src/github.com/lbryio/chainquery"
./dev.sh

Running from Release

This will likely eventually be the main supported method of running Chainquery in your environment, but this section's documentation is a work in progress, so your mileage may vary.

Get a download link for your operating system specific release from the releases page then use the following command with your download link.

  wget -O ~/chainquery.zip https://example.com/path/to/your/release.zip

Example:

  wget -O ~/chainquery.zip https://github.com/lbryio/chainquery/releases/download/v1.1.2/chainquery_1.1.2_Linux_x86_64.zip

Unzip the package you just downloaded with the following.

cd ~/
unzip ~/chainquery.zip

Your console should show you something similar to the following.

root@8fe4046b6d46:~# unzip chainquery.zip
Archive:  chainquery.zip
  inflating: LICENSE
  inflating: README.md
  inflating: chainquery

Of course, you don't have to extract all of this to your machine's home directory ~/; use whatever paths you prefer. Adding the executable to your system's $PATH can be convenient, but that is out of the scope of this README.

The main Chainquery binary should be marked as executable by default, but if it is not, you can run the following.

chmod +x ~/chainquery

Finally, running Chainquery should be as simple as:

~/chainquery serve

You can obtain information on the flags of the main Chainquery binary by running the following.

~/chainquery -help

The Model

The model of Chainquery at its foundation consists of the fundamental data types found in the blockchain. This information is then expanded with additional columns and tables that make querying the data much easier.

Latest Schema

What does Chainquery consist of?

Chainquery consists of four main parts: the API Server, the Daemon, the Job Scheduler, and the Upgrade Manager.

API Server

The API Server serves either structured queries via defined APIs or raw SQL against the Chainquery MySQL database. The APIs are documented via Chainquery APIs, which is a work in progress.

Daemon

The Daemon is responsible for updating the Chainquery database to keep it in sync with lbrycrd. It runs periodically to check whether newly created blocks need to be processed, and it processes each block and its transactions. It also handles blockchain reorganizations: it removes the orphaned block data and processes the new blocks from the height at which the chain diverged. The entry points are the daemon iteration (func daemonIteration()), block processing (func RunBlockProcessing(height *uint64)), and transaction processing (func ProcessTx(jsonTx *lbrycrd.TxRawResult, blockTime uint64)).

Job Scheduler

The Job Scheduler schedules the different types of jobs that update the Chainquery database. These jobs synchronize different areas of the data, either to make queries faster or to ascertain information that is not directly part of the raw blockchain. One example is the job that maintains the status of each claim, which is actually stored in the ClaimTrie of lbrycrd; it runs periodically to make sure Chainquery has the most up-to-date status of claims in the trie. The table job_status stores the current state of each job, such as when it last synced.

Upgrade Manager

The Upgrade Manager handles data upgrades between versions. The table application_status stores information about the state of the application as it relates to the data, API, and app versions. The upgrade manager leverages this information to know which scripts might need to be run to keep the data in sync across deployments. The scripts are the foundation of the upgrade manager.

Contributing

Contributions to this project are welcome, encouraged, and compensated. For more details, see lbry.io/faq/contributing

The master branch is regularly built and tested, but is not guaranteed to be completely stable. Releases are created regularly to indicate new official, stable release versions.

Developers are strongly encouraged to write unit tests for new code, and to submit new unit tests for old code. Unit tests can be compiled and run with go test ./... from the source directory, which should be $GOPATH/src/github.com/lbryio/chainquery.

Updating the generated models

We use sqlboiler to generate our data models based on the db schema. If you make schema changes, run ./gen_models.sh to regenerate the models.

A note of caution: the models are generated by connecting to the MySQL server and inspecting the current schema. If you made any db schema changes by hand, then the schema may be out of sync with the migrations. Here's the safe way to ensure that the models match the migrations:

  • Put all the schema changes you want to make into a migration.
  • In mysql, drop and recreate the db you're using, so that it's empty.
  • Run ./dev.sh. This will run all the migrations on the empty db.
  • Run ./gen_models.sh to update the models.

This process ensures that the generated models will match the updated schema exactly, so there are no surprises when the migrations are applied to the live db.

License

This project is MIT licensed. For the full license, see LICENSE.

Security

We take security seriously. Please contact security@lbry.io regarding any security issues. Our PGP key is here if you need it.

Contact

The primary contact for this project is @tiger5226 (beamer@lbry.io)