This project is a continuation of the original version of DistrictBuilder, now called DistrictBuilder Classic, which is no longer being maintained. This repository is where active development of DistrictBuilder will continue to occur.
DistrictBuilder is web-based, open source software for collaborative redistricting.
Ensure that you have an AWS credential profile for `district-builder` configured on your host system. The server backend will use this profile to access S3 assets if present, and any `manage` commands that use S3 assets will require it.
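For reference, such a profile typically lives in `~/.aws/credentials`. A minimal sketch, with placeholder values you would replace with your own keys:

```ini
; ~/.aws/credentials
[district-builder]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```

Alternatively, `aws configure --profile district-builder` will prompt for these values interactively.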
The Docker containers used in development work very well on Linux, but require an additional layer of translation when running on non-Linux hosts. In particular, there are significant file-watching costs, which result in high CPU usage on macOS. On macOS, it is more efficient to run the containers within a Linux VM created with Vagrant.
On Linux, run `./scripts/setup` to prepare the development environment.
All other scripts can then be run natively from the host, e.g. `./scripts/update`.
On macOS, use the `--vagrant` flag to create a Vagrant VM instead:
```sh
$ ./scripts/setup --vagrant
```
All other scripts must be run from within the Vagrant VM, e.g.

```sh
$ vagrant ssh
vagrant@vagrant:/vagrant$ ./scripts/update
```

or as a one-liner:

```sh
$ vagrant ssh -c 'cd /vagrant && ./scripts/update'
```
For brevity, this document will use Linux examples throughout. You should run the scripts from the appropriate environment.
Note: It is recommended to configure your editor to auto-format your code via Prettier on save.
Once you've set up WSL and Docker, you can clone and set up this project from within your WSL2 environment by following the Linux installation instructions above.
Note: Environments that use Vagrant require the vagrant-notify-forwarder plugin for hot reloading. To install it, run

```sh
$ vagrant plugin install vagrant-notify-forwarder
$ vagrant reload
```
Run `scripts/server` to start the application.
Remote Server Proxy
If you want to develop the client locally against a server running in the AWS staging environment, you can configure a local proxy using the `BASE_URL` environment variable:
```sh
BASE_URL=https://app.staging.districtbuilder.org docker-compose up client
```

This will proxy all local requests directed at the backend server to the staging environment.
PlanScore API integration
You will need a PlanScore API token to test the PlanScore integration in development. Please email email@example.com to get a token, then run `./scripts/bootstrap` to create a `.env` file in the server directory and populate the `PLAN_SCORE_API_TOKEN` environment variable with your token.
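After bootstrapping, the server directory's `.env` should contain a line of this shape (the value shown is a placeholder for your actual token):

```
PLAN_SCORE_API_TOKEN=<your-planscore-token>
```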
Using pre-processed data for development and testing
- Sign up for an account in your local dev instance of the application at http://localhost:3003 (if you haven't already done so)
- Load testing data with `./scripts/load-dev-data`. This will:
  - Load region configs for Pennsylvania, Michigan, and Dane County WI.
  - Create an organization, accessible at
  - Set the user you just created as the organization administrator
- In order to use any of the organization templates, you will need to confirm your email. You will see a banner asking you to confirm your email; when you click "Resend Email", an email form will appear in your terminal. Copy and paste the activation link from that form into your browser to activate your account.
Processing your own data for custom regions
To have data to work with, you'll need to follow a two-step process:
- Process the GeoJSON for your state/region (this outputs all the static files DistrictBuilder needs to work in a local directory)
- Publish the resulting files (upload to S3 for use by the app)
To process PA data, first copy the GeoJSON file into the `src/manage/data` directory, create an output directory (e.g. `src/manage/data/output-pa`), and then run this command:
```sh
$ ./scripts/manage process-geojson data/PA.geojson -b -o data/output-pa -n 12,4,4 -x 12,12,12
```
Then publish the resulting files:

```sh
$ ./scripts/manage publish-region data/output-pa US PA Pennsylvania
```
Once your data is published, you should be able to run the app and create a new project through the UI using that region and begin building districts.
If you'd instead like to use the processed data to update S3 in place (without inserting a new region into the database), run:
```sh
$ ./scripts/manage update-region data/output-pa s3://previous/location/of/the/published/region
```
Note: when doing this, you will need to restart your server to see the new data, since it is cached on startup.
In order to allow for code-sharing across the frontend and backend in conjunction with an unejected Create React App (CRA), the simplest and least error-prone approach was to structure the code as follows:
```
.
├── package.json (Applies to the CRA frontend)
├── src
│   ├── client (Location for all CRA frontend code)
│   ├── index.tsx (This and another file need to be here for CRA purposes)
│   ├── manage (Command-line interface)
│   │   ├── package.json (Applies to the command-line interface)
│   ├── server (NestJS backend code)
│   │   ├── package.json (Applies to the NestJS backend)
│   └── shared (Code that is used by both the frontend and backend)
```
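To illustrate why `src/shared` sits inside `src`: CRA only compiles code under `src`, so placing shared modules there lets both the client and the NestJS server import them with plain relative paths. A sketch of what such a module might look like — the file name, interface, and function below are invented for this example:

```typescript
// src/shared/functions.ts (hypothetical file, for illustration only)

export interface DistrictsDefinition {
  // districts[i] is the district assigned to geounit i; 0 means unassigned
  readonly districts: readonly number[];
}

// A pure helper that both the React client and the NestJS server could
// import, e.g. `import { assignedCount } from "../shared/functions";`
export function assignedCount(definition: DistrictsDefinition): number {
  return definition.districts.filter(d => d !== 0).length;
}
```

Keeping such helpers free of browser- or Node-specific APIs is what makes them safe to use from both sides.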
- TypeScript for type safety
- React as a declarative view layer
- Redux for state management
- redux-loop for effect management (e.g. API calls)
- ts.data.json for JSON decoding
- PostgreSQL for a relational database
- NestJS for the backend web server
- TypeORM for database queries and migrations
- TopoJSON for fast, topologically-aware geospatial operations
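To make the redux-loop choice concrete: instead of performing side effects in middleware, a reducer returns the next state together with a description of the effect to run. A dependency-free sketch of that pattern follows — the types and names here are illustrative and are not redux-loop's actual API (the real library uses `loop()` and `Cmd`):

```typescript
// Illustrative pattern only; the project itself uses redux-loop.
type Effect = { kind: "fetchProjects" } | { kind: "none" };

interface State {
  readonly loading: boolean;
  readonly projects: readonly string[];
}

type Action =
  | { type: "PROJECTS_FETCH" }
  | { type: "PROJECTS_FETCH_SUCCESS"; projects: string[] };

// The reducer stays pure: it describes the effect rather than running it,
// which keeps API-call logic testable without mocking fetch.
function reducer(state: State, action: Action): [State, Effect] {
  switch (action.type) {
    case "PROJECTS_FETCH":
      return [{ ...state, loading: true }, { kind: "fetchProjects" }];
    case "PROJECTS_FETCH_SUCCESS":
      return [
        { loading: false, projects: action.projects },
        { kind: "none" },
      ];
  }
}
```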
| Port | Service |
| --- | --- |
| 3003 | Create React App |
| Script | Description |
| --- | --- |
| `cibuild` | Build application for staging or a release. |
| `cipublish` | Publish container images to Elastic Container Registry. |
| `dbshell` | Enter a database shell. |
| `infra` | Execute Terraform subcommands with remote state management. |
| `load-dev-data` | Load development data for testing. |
| `manage` | Execute commands with the command-line interface. |
| `migration` | Execute TypeORM migration CLI commands. |
| `server` | Bring up all of the services required for the project to function. |
| `setup` | Set up the project's development environment. |
| `test` | Run linters and tests. |
| `update` | Build container images, update dependencies, and run database migrations. |
| `yarn` | Execute Yarn CLI commands. |
Command Line Interface
A command line interface is available for performing data processing operations. See `src/manage/README.md` for more info.