Data Cube Explorer

[Screenshot of the Explorer interface]

Developer Setup

These directions are for running Explorer from a local folder during development, but it will run under any typical Python WSGI server.

First, install the Open Data Cube. Using a Data Cube conda environment is recommended.

Test that you can run datacube system check, and that it connects to the correct datacube instance.
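For example (the output should report the database host, database name and user that Explorer will connect to):

# Verify the index connection that Explorer will use
datacube system check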

Dependencies

Now install the Explorer dependencies:

# These two should come from conda if you're using it, not PyPI
conda install fiona shapely

pip install -e .

Summary generation

Cache some product summaries:

nohup cubedash-gen --all &>> summary-gen.log &

(This can take a while the first time, depending on the size of your datacube. The nohup ... & wrapper runs it in the background.)
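You can watch its progress in the log file:

# Follow the summary generation log
tail -f summary-gen.log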

Run

Explorer can be run using any typical Python WSGI server, for example:

pip install gunicorn
gunicorn -b '127.0.0.1:8080' -w 4 cubedash:app

Convenience scripts are available for running in development with hot-reload (./run-dev.sh) or gunicorn (./run.sh). Install the optional deployment dependencies for the latter: pip install -e .[deployment]

Products will begin appearing one by one as the summaries are generated in the background. If you're impatient, you can navigate to a product manually using /<product_name> (e.g. /ls5_nbar_albers).

Code Style

All code is formatted using black, and checked with pyflakes.

Both are included when installing the test dependencies:

pip install -e .[test]

Run make lint to check your changes, and make format to format your code automatically.
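That is:

# Check your changes for style and common errors
make lint

# Reformat your code automatically
make format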

You may want to configure your editor to run black automatically on file save (see the Black page for directions), or install the pre-commit hook within Git:

Pre-commit setup

A pre-commit config is provided to automatically format and check your code changes. This lets you catch and fix issues immediately, rather than raising a pull request that fails the same checks under Travis.

If you don't use Conda, install pre-commit from pip:

pip install pre-commit

If you do use Conda, install it from conda-forge (required because the pip version uses virtualenvs, which are incompatible with Conda's environments):

conda install pre_commit

Now install the pre-commit hook to the current repository:

pre-commit install

Your code will now be formatted and validated before each commit. You can also invoke it manually by running pre-commit run.
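For example (--all-files is a standard pre-commit flag for checking the whole repository rather than just staged changes):

# Check the currently staged changes
pre-commit run

# Check every file in the repository
pre-commit run --all-files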

FAQ

Can I use a different datacube environment?

Set ODC's environment variable before running the server:

export DATACUBE_ENVIRONMENT=staging

You can always see which environment/settings will be used by running datacube system check.

See the ODC documentation for configuration and datacube environments.

Can I add custom scripts or text to the page (such as analytics)?

Create one of the following *.env.html files:

  • Global include: for <script> and other tags at the bottom of every page.

    cubedash/templates/include-global.env.html
    
  • Footer text include: for human-readable text such as copyright statements.

    echo "Server <strong>staging-1.test</strong>" > cubedash/templates/include-footer.env.html
    

(*.env.html is the naming convention used for environment-specific templates: they are ignored by Git)
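For example, a minimal global include that loads an analytics script (the URL is a placeholder; substitute your own):

<!-- Contents of cubedash/templates/include-global.env.html -->
<script async src="https://analytics.example.com/tracker.js"></script>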

How can I configure the deployment?

Add a file called settings.env.py to the current directory.

You can alter default Flask or Flask Cache settings (default "CACHE_TYPE: simple"), as well as some cubedash-specific settings:

# Default product to display (picks first available)
CUBEDASH_DEFAULT_PRODUCTS = ('ls8_nbar_albers', 'ls7_nbar_albers')

# Which field should we use when grouping products in the top menu?
CUBEDASH_PRODUCT_GROUP_BY_FIELD = 'product_type'
# Ungrouped products will be grouped together in this size.
CUBEDASH_PRODUCT_GROUP_SIZE = 5

# Maximum search results
CUBEDASH_HARD_SEARCH_LIMIT = 100
# Maximum number of source/derived datasets to show
CUBEDASH_PROVENANCE_DISPLAY_LIMIT = 20

# Include load performance metrics in http response.
CUBEDASH_SHOW_PERF_TIMES = False

# Which theme to use (in the cubedash/themes folder)
CUBEDASH_THEME = 'odc'

# Customise '/stac' endpoint information
STAC_ENDPOINT_ID = 'my-odc-explorer'
STAC_ENDPOINT_TITLE = 'My ODC Explorer'
STAC_ENDPOINT_DESCRIPTION = 'Optional Longer description of this endpoint'

STAC_DEFAULT_PAGE_SIZE = 20
STAC_PAGE_SIZE_LIMIT = 1000

Sentry error reporting is supported by adding a SENTRY_CONFIG section. See their documentation.
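For example, in settings.env.py (a sketch assuming the standard Raven/Sentry Flask configuration; the DSN is a placeholder to replace with your project's own):

# Placeholder DSN: use the value from your Sentry project settings
SENTRY_CONFIG = {
    'dsn': 'https://<key>@sentry.example.com/<project-id>',
}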

Why aren't stylesheets updating?

The CSS is compiled from Sass. Run make style to rebuild the stylesheets after a change, or use your editor to watch for changes (PyCharm will prompt to do so).

How do I run the integration tests?

The integration tests run against a real PostgreSQL database, which is dropped and recreated between each test method:

Install the test dependencies: pip install -e .[test]

Simple test setup

Set up a database on localhost that doesn't prompt for a password locally (e.g. add credentials to ~/.pgpass).
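A ~/.pgpass entry has the format hostname:port:database:username:password, and the file must have 0600 permissions. For example (placeholder credentials):

localhost:5432:dea_integration:your_username:your_password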

Then: createdb dea_integration

And the tests should be runnable with no configuration: pytest integration_tests

Custom test configuration (using other hosts or postgres servers)

Add a .datacube_integration.conf file to your home directory in the same format as datacube config files.

(You might already have one if you run datacube's integration tests)
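For example (a sketch using standard datacube config keys; the hostname and credentials are placeholders):

[datacube]
db_hostname: pgtest.example.com
db_port: 5432
db_database: dea_integration
db_username: your_username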

Then run pytest: pytest integration_tests

Warning: All data in this database will be dropped while the tests run. Use a separate database from your normal development one.
