Raster Vision

Raster Vision is an open source Python framework for building computer vision models on satellite, aerial, and other large imagery sets (including oblique drone imagery).

  • It allows users (who don't need to be experts in deep learning!) to quickly and repeatably configure experiments that execute a machine learning workflow, including: analyzing training data, creating training chips, training models, creating predictions, evaluating models, and bundling the model files and configuration for easy deployment.
  • There is built-in support for chip classification, object detection, and semantic segmentation, with backends using PyTorch and TensorFlow.
  • Experiments can be executed on CPUs and GPUs with built-in support for running in the cloud using AWS Batch.
  • The framework is extensible to new data sources, tasks (e.g., object detection), backends (e.g., the TF Object Detection API), and cloud providers.

See the documentation for more details.

Setup

There are several ways to set up Raster Vision:

  • To build Docker images from scratch, after cloning this repo, run docker/build, and run the container using docker/run.
  • Docker images are published to quay.io. The tag for the raster-vision image determines what type of image it is:
    • The tf-cpu-* tags are for running the TensorFlow CPU containers.
    • The tf-gpu-* tags are for running the TensorFlow GPU containers.
    • The pytorch-* tags are for running the PyTorch containers.
    • We publish a new tag per merge into master, tagged with the first 7 characters of the commit hash. To use the latest version, pull the tag with the latest suffix, e.g. raster-vision:pytorch-latest. Git tags are also published, with the GitHub tag name as the Docker tag suffix.
  • Raster Vision can be installed directly using pip install rastervision. However, some of its dependencies will have to be installed manually.

For more detailed instructions, see the Setup docs.
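As a minimal sketch of the published-image route above (the image path follows the quay.io tags described earlier; the actual pull and run lines are left commented out, since they require a working Docker installation):

```shell
# Pick a published tag; pytorch-latest tracks the latest master build.
TAG="pytorch-latest"
IMAGE="quay.io/azavea/raster-vision:${TAG}"
echo "Image to pull: ${IMAGE}"

# With Docker installed, fetch and start the container:
#   docker pull "${IMAGE}"
#   docker run --rm -it "${IMAGE}" /bin/bash
```

Swap the tag (e.g. tf-cpu-latest or tf-gpu-latest) to select a different backend image.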

Example

The best way to get a feel for what Raster Vision enables is to look at an example of how to configure and run an experiment. Experiments are configured using a fluent builder pattern that makes configuration easy to read, reuse and maintain.

# tiny_spacenet.py

import rastervision as rv

class TinySpacenetExperimentSet(rv.ExperimentSet):
    def exp_main(self):
        base_uri = ('https://s3.amazonaws.com/azavea-research-public-data/'
                    'raster-vision/examples/spacenet')
        train_image_uri = '{}/RGB-PanSharpen_AOI_2_Vegas_img205.tif'.format(base_uri)
        train_label_uri = '{}/buildings_AOI_2_Vegas_img205.geojson'.format(base_uri)
        val_image_uri = '{}/RGB-PanSharpen_AOI_2_Vegas_img25.tif'.format(base_uri)
        val_label_uri = '{}/buildings_AOI_2_Vegas_img25.geojson'.format(base_uri)
        channel_order = [0, 1, 2]
        background_class_id = 2

        # ------------- TASK -------------

        task = rv.TaskConfig.builder(rv.SEMANTIC_SEGMENTATION) \
                            .with_chip_size(300) \
                            .with_chip_options(chips_per_scene=50) \
                            .with_classes({
                                'building': (1, 'red'),
                                'background': (2, 'black')
                            }) \
                            .build()

        # ------------- BACKEND -------------

        backend = rv.BackendConfig.builder(rv.PYTORCH_SEMANTIC_SEGMENTATION) \
            .with_task(task) \
            .with_train_options(
                batch_size=2,
                num_epochs=1,
                debug=True) \
            .build()

        # ------------- TRAINING -------------

        train_raster_source = rv.RasterSourceConfig.builder(rv.RASTERIO_SOURCE) \
                                                   .with_uri(train_image_uri) \
                                                   .with_channel_order(channel_order) \
                                                   .with_stats_transformer() \
                                                   .build()

        train_label_raster_source = rv.RasterSourceConfig.builder(rv.RASTERIZED_SOURCE) \
                                                         .with_vector_source(train_label_uri) \
                                                         .with_rasterizer_options(background_class_id) \
                                                         .build()
        train_label_source = rv.LabelSourceConfig.builder(rv.SEMANTIC_SEGMENTATION) \
                                                 .with_raster_source(train_label_raster_source) \
                                                 .build()

        train_scene = rv.SceneConfig.builder() \
                                     .with_task(task) \
                                     .with_id('train_scene') \
                                     .with_raster_source(train_raster_source) \
                                     .with_label_source(train_label_source) \
                                     .build()

        # ------------- VALIDATION -------------

        val_raster_source = rv.RasterSourceConfig.builder(rv.RASTERIO_SOURCE) \
                                                 .with_uri(val_image_uri) \
                                                 .with_channel_order(channel_order) \
                                                 .with_stats_transformer() \
                                                 .build()

        val_label_raster_source = rv.RasterSourceConfig.builder(rv.RASTERIZED_SOURCE) \
                                                       .with_vector_source(val_label_uri) \
                                                       .with_rasterizer_options(background_class_id) \
                                                       .build()
        val_label_source = rv.LabelSourceConfig.builder(rv.SEMANTIC_SEGMENTATION) \
                                               .with_raster_source(val_label_raster_source) \
                                               .build()

        val_scene = rv.SceneConfig.builder() \
                                  .with_task(task) \
                                  .with_id('val_scene') \
                                  .with_raster_source(val_raster_source) \
                                  .with_label_source(val_label_source) \
                                  .build()

        # ------------- DATASET -------------

        dataset = rv.DatasetConfig.builder() \
                                  .with_train_scene(train_scene) \
                                  .with_validation_scene(val_scene) \
                                  .build()

        # ------------- EXPERIMENT -------------

        experiment = rv.ExperimentConfig.builder() \
                                        .with_id('tiny-spacenet-experiment') \
                                        .with_root_uri('/opt/data/rv') \
                                        .with_task(task) \
                                        .with_backend(backend) \
                                        .with_dataset(dataset) \
                                        .with_stats_analyzer() \
                                        .build()

        return experiment


if __name__ == '__main__':
    rv.main()

Raster Vision uses a unittest-like method for executing experiments. For instance, if the above were saved as tiny_spacenet.py, then with the proper setup you could run the experiment using:

> rastervision run local -p tiny_spacenet.py
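The unittest-like discovery can be pictured with a short sketch. This is not Raster Vision's actual implementation, only an illustration of the convention: the runner scans an ExperimentSet subclass for methods whose names start with exp_ and collects the experiment each one returns.

```python
# Illustrative sketch of unittest-style experiment discovery
# (hypothetical names; not Raster Vision's real internals).

class ExperimentSet:
    def collect_experiments(self):
        # Find every method named exp_* and call it to build its experiment.
        experiments = []
        for name in dir(self):
            if name.startswith('exp_'):
                experiments.append(getattr(self, name)())
        return experiments

class MySpacenetExperiments(ExperimentSet):
    def exp_small(self):
        return 'small-experiment-config'

    def exp_large(self):
        return 'large-experiment-config'

print(MySpacenetExperiments().collect_experiments())
# dir() lists attributes alphabetically, so this prints:
# ['large-experiment-config', 'small-experiment-config']
```

This is why tiny_spacenet.py only needs to define exp_main and call rv.main(): the runner finds the method by its prefix and builds the experiment it returns.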

See the Quickstart for a more complete description of running this example.

Contact and Support

You can find more information and talk to developers (let us know what you're working on!) in the Gitter channel at https://gitter.im/azavea/raster-vision.

Contributing

We are happy to take contributions! It is best to get in touch with the maintainers about larger features or design changes before starting the work, as it will make the process of accepting changes smoother.

Everyone who contributes code to Raster Vision will be asked to sign the Azavea CLA, which is based off of the Apache CLA.

  1. Download a copy of the Raster Vision Individual Contributor License Agreement or the Raster Vision Corporate Contributor License Agreement.

  2. Print out the CLAs and sign them, or use PDF software that allows placement of a signature image.

  3. Send the CLAs to Azavea by one of:

  • Scanning and emailing the document to cla@azavea.com
  • Faxing a copy to +1-215-925-2600.
  • Mailing a hardcopy to: Azavea, 990 Spring Garden Street, 5th Floor, Philadelphia, PA 19107 USA