Send exec status independent of report permissions #1423

Workflow file for this run
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: CI workflow

on: pull_request

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          if [ -f cli/requirements.txt ]; then pip install -e cli; fi
          pip install -r server/requirements.txt
          pip install -r server/test-requirements.txt
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --max-line-length 127 --select=E9,F63,F7 --show-source --statistics --exclude=cli/medperf/templates/
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          # ignore warnings about undefined names due to using future annotations
          # W503 is no longer recommended. https://www.flake8rules.com/rules/W503.html
          # Exclude examples folder as it doesn't contain code related to medperf tools
          # Exclude migrations folder as it contains autogenerated code
          # Ignore E231, as it is raising warnings with auto-generated code.
          flake8 . --count --max-complexity=10 --max-line-length=127 --ignore F821,W503,E231 --statistics --exclude=examples/,"*/migrations/*",cli/medperf/templates/
      - name: Test with pytest
        working-directory: ./cli
        run: |
          pytest
      - name: Set server environment vars
        working-directory: ./server
        run: cp .env.local.local-auth .env
      - name: Run migrations
        working-directory: ./server
        run: python manage.py migrate
      - name: Run server unit tests
        working-directory: ./server
        run: python manage.py test --parallel