The frontend code is located in the `assets` folder. It's a mixture of old jQuery code and newer React code. The code is built with webpack and transpiled using Babel.

Python 3 with Django is used as a backend and is mostly located in the `apps` folder. Django templates are in the `templates` folder. The API uses Django REST framework.

To find the list of available commands when using `python manage.py` (alternatively `./manage.py`), see the docs.
```bash
git config core.autocrlf false
git config user.name "<your github username>"
git config user.email your.github@email.com
git clone git@github.com:dotkom/onlineweb4.git
cd onlineweb4
```
Alternatively, we also recommend using GitHub's own Command Line Interface (CLI), which also eases the creation of pull requests with `gh pr create`.
By far the easiest way to start developing is to use Visual Studio Code (VS Code) with remote containers. The development environment should be pre-built as part of our GitHub Actions workflows, and should work automatically upon opening this repo locally in VS Code with the extension installed.

Note: the setup focuses on the Django backend; the frontend is likely to require more manual intervention. If you need help, feel free to reach out to someone in Dotkom!
We have two ways to run the development environment; which one suits you depends on how you want to interact with the project dependencies.

Note: regardless of base image, the remaining dependencies (and pre-commit) will be installed when you create the container.
Important disclaimer: Be wary of combining local and devcontainer environments; in particular, local `.venv` and `node_modules` folders can quickly make your environment confusing.
Uses the image built automatically whenever the dependencies change. It can be quite big and has a very long wait time, so it is not recommended if you want to actively develop with changing dependencies.

You can build this image locally by adding `"docker-compose.build.yml"` to the end of the `dockerComposeFile` array in `devcontainer.json`.
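For reference, the `dockerComposeFile` array might then look like the sketch below. The existing entries are illustrative: keep whatever files your `devcontainer.json` already lists and append the new one at the end.

```jsonc
// .devcontainer/devcontainer.json (excerpt; existing entries are illustrative)
{
  "dockerComposeFile": [
    "docker-compose.yml",
    "docker-compose.build.yml"
  ]
}
```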
Useful if your development often involves changing dependencies in `poetry`, but requires that you manually run `npm ci` and `poetry install`.

You can use this method by adding `"docker-compose.no-deps.yml"` to the end of the `dockerComposeFile` array in `devcontainer.json`.

You can also build the image locally instead of using our pre-built version by using `docker-compose.no-deps-build.yml` instead.
```bash
# in one terminal
npm run build

# in another terminal
# only required the first time
python manage.py migrate
python manage.py runserver
```
If you are using the devcontainer, onlineweb4 should then be available at http://localhost:8000. Please open an issue if that does not work!
The performance of the containers might be a little lackluster on macOS, in which case you can attempt to set up a local Python environment. Onlineweb4 is set up to use Poetry for dependency management.
The following commands should make `py.test` work out of the box; if they do not, please open an issue.
```bash
# static files are required for tests to pass
# we use Node 18 and npm, see e.g. https://github.com/nvm-sh/nvm
# for help with managing multiple Node versions on your system
npm ci
npm run build

# recommended for easier debugging
# saves the virtual environment and all packages to `.venv`
poetry config virtualenvs.in-project true

# if you do not have Python 3.11 installed, you can use e.g. pyenv to manage versions
poetry install

# use the virtual environment
poetry shell

py.test
```
To start the server, first run the database migrations with `python manage.py migrate`, then run the server with `python manage.py runserver`.

Next, fire up the frontend by running `npm install` followed by `npm start`.

The project should now be available at http://localhost:8000.
We use pre-commit to run linting before each commit.
`pre-commit` should be automatically available after running `poetry install`; you can then set up the git hooks locally:
```bash
pre-commit install --install-hooks
# or if you have not activated the Poetry environment
poetry run pre-commit install --install-hooks
```
And run the lints manually with:

```bash
pre-commit run --all-files
```
To run the tests you can call:

```bash
# most tests using Django templates require `webpack-stats.json` to exist
npm run build
py.test
```

This should work as long as you have properly set up the Python environment.
For linting and testing the frontend, we have the following commands, which are not currently set up with `pre-commit`:

```bash
npm run lint

# you can then open the report in a browser, or instead generate an XML report
# and use any tool to view it
py.test --cov= --cov-report html
```
CI will fail if our requirements for code style are not met. To ensure that you adhere to our code guidelines, we recommend you run linting tools locally before pushing your code. This can be done automatically by using the above-mentioned `pre-commit`.
For potentially improved productivity, you can integrate the linting and formatting tools we use in your editor:
- Black
- isort
- Flake8
- ESLint, with editor plugins available here.
- stylelint for our stylesheets, with editor plugins available here.
After making changes to a model, you will need to make a migration for the database schema. You can create it automatically by running:

```bash
python manage.py makemigrations
```

Note that migrations should be formatted and linted like other files, but this should be handled by our pre-commit hooks.
Pushes made to the `main` branch will trigger a redeployment of the application on dev.online.ntnu.no.
Pull requests trigger containerized builds that perform code style checks and tests. You can view the details of these tests by clicking the "detail" link in the pull request checks status area.
Important: We have integration tests with Stripe that require valid test API keys. These tests are not run by default locally, nor when creating a PR from a fork of this repository. To run them, first get ahold of the appropriate testing keys, then add them to an `.env` file in the project root; the names of the environment variables are in `.devcontainer/devcontainer.env`.
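For illustration only, such an `.env` file could look like the sketch below. The variable names here are made up; use the actual names from `.devcontainer/devcontainer.env`.

```shell
# .env — hypothetical variable names, check .devcontainer/devcontainer.env for the real ones
STRIPE_TEST_PUBLIC_KEY=pk_test_...
STRIPE_TEST_SECRET_KEY=sk_test_...
```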
The project is deployed to AWS Lambda with the use of Zappa. To deploy (this should be done automatically), build a Docker image and push it to AWS ECR. Then you can run `zappa update <stage> -d <docker-ecr-image>`. You'll also have to build the frontend with npm and deploy the static files first if they have changed since the last deploy.
```bash
# create git tag / github release with release notes first
# if this is prod add `-prod` suffix
VERSION=4.X.X-prod
# OR VERSION=4.X.X if dev
STAGE=prod
REGION=eu-north-1

# log in to AWS in some way first https://docs.aws.amazon.com/cli/latest/userguide/getting-started-prereqs.html#getting-started-prereqs-keys
# jq is just to extract the "Account" json-key automatically (-r strips the quotes)
AWS_ACCOUNT_ID=$(aws sts get-caller-identity | jq -r .Account)
DOCKER_REGISTRY=$AWS_ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com
TAG=$DOCKER_REGISTRY/onlineweb4-zappa:$VERSION
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $DOCKER_REGISTRY

# If zappa is not available you must install it, alternatively use the devcontainer:
poetry install -E prod
# then either run `poetry shell` first, or prepend `poetry run` before the commands
zappa save-python-settings-file $STAGE
docker build . --build-arg VERSION=$VERSION -t $TAG --no-cache
docker push $TAG
zappa update $STAGE -d $TAG

# If you have also changed static files you must run the following:
docker build . --target=static-files -t ow4-static
ID=$(docker create ow4-static)
docker cp $ID:/srv/app/static static
BUCKET_NAME=$(yq ".${STAGE}.environment_variables.OW4_S3_BUCKET_NAME" zappa_settings.yml)
aws s3 sync static "s3://${BUCKET_NAME}/static" --delete --acl=public-read
```
Onlineweb4 comes with an API located at `/api/v1`. Autogenerated Swagger docs can be found at `/api/v1/docs`.
Some endpoints require user authentication. See OAuth/OIDC.
Onlineweb4 has an OAuth2/OIDC provider built in, allowing external projects to authenticate users through Onlineweb4.
- Auth0: Which flow should I use?
- [Digital Ocean: Introduction to Oauth 2](https://www.digitalocean.com/community/tutorials/an-intr
- Auth0: Authorization Flow with PKCE - Single Page Apps
- PKCE: What it is and how to use it with OAuth 2.0
To authenticate users through https://online.ntnu.no, contact dotkom@online.ntnu.no to have a new client issued. Follow the steps in Usage in a project for how to use the client information.
Initialize OpenID Connect by creating an RSA key:

```bash
python manage.py creatersakey
```

Go to localhost:8000/admin and log in with your administrator user. Scroll to OpenID Connect Provider and add a new Client. Fill in the information related to the selected flow (see the linked documentation on flows). Upon saving, a client ID and client secret are generated for you. Revisit the client you created to retrieve the ID and secret.
Automated configuration and all endpoints
There are many packages that provide OAuth2/OIDC helpers. These can be quite helpful for automating much of the OAuth2 flow.
If you have configured your client correctly, the following `curl` commands are the equivalent of the authorization code flow and illustrate how the HTTP requests are set up.

```bash
curl -v "http://{url:port}/openid/authorize?\
client_id={your client_id}&\
redirect_uri={your configured callback url}&\
response_type={your configured response type}&\
scope={your configured scopes}&\
state={persistent state through the calls}"
```
Note: Two common scopes are `openid` and `onlineweb4`.
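The same authorize request can be sketched in Python with the standard library. The endpoint and all parameter values below are placeholders, not values from this repository; substitute your registered client's details.

```python
# Sketch: building the /openid/authorize URL for the authorization code flow.
# All values are placeholders; substitute your registered client's values.
from urllib.parse import urlencode

base_url = "http://localhost:8000/openid/authorize"
params = {
    "client_id": "your-client-id",                      # issued when the client was created
    "redirect_uri": "http://localhost:3000/callback",   # must match the registered callback
    "response_type": "code",                            # authorization code flow
    "scope": "openid onlineweb4 profile",
    "state": "random-opaque-string",                    # verify it is unchanged on the callback
}
authorize_url = f"{base_url}?{urlencode(params)}"
print(authorize_url)
```

Opening this URL in a browser starts the login redirect described below.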
This will trigger a 302 Redirect where the user will be asked to log in to Onlineweb4. Upon successful login, the user is redirected to your configured callback URL with `?code={authorization_code}&state={persistent state}` appended to the URL.
Retrieve the code from the URL (and check that the state has not been tampered with, i.e. is still the same one).
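Retrieving the code and checking the state can be sketched as follows; the callback URL here is a made-up example.

```python
# Sketch: extracting `code` and verifying `state` from the callback URL.
from urllib.parse import urlparse, parse_qs

expected_state = "random-opaque-string"  # the state sent in the authorize request

# made-up example of the URL your callback endpoint receives
callback = "http://localhost:3000/callback?code=abc123&state=random-opaque-string"
query = parse_qs(urlparse(callback).query)

code = query["code"][0]
state = query["state"][0]
if state != expected_state:
    raise RuntimeError("state mismatch: possible CSRF, abort the flow")
print(code)
```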
Exchange the code for an access token which will be used to identify the user in your application.
```bash
curl -v "http://{url:port}/openid/token?\
grant_type={grant, i.e. the content of the parenthesis of the response type}&\
code={code from previous step}"
```
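As a sketch, the token exchange can also be built in Python. Note that token endpoints conventionally expect a POST with a form-encoded body; all values below are placeholders and nothing is actually sent.

```python
# Sketch: the token exchange as a POST request; the request is only built, not sent.
from urllib.parse import urlencode
from urllib.request import Request

body = urlencode({
    "grant_type": "authorization_code",
    "code": "abc123",                                   # code from the previous step
    "redirect_uri": "http://localhost:3000/callback",
    "client_id": "your-client-id",
    "client_secret": "your-client-secret",
}).encode()

req = Request(
    "http://localhost:8000/openid/token",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(req.get_method())  # POST, because `data` is set
# urllib.request.urlopen(req) would perform the exchange against a running server
```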
This response will contain some basic information about the user, based on which scopes you requested, as well as an access token which can be added to the authorization header to request data from the API.
To use the access token, add `Authorization: Bearer {token}` to your requests to the API.
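A minimal sketch of attaching the header with the standard library; the token and endpoint are placeholders, and the request is only constructed, not sent.

```python
# Sketch: adding the Authorization header to an API request.
from urllib.request import Request

access_token = "your-access-token"  # obtained from the token exchange above
req = Request(
    "http://localhost:8000/api/v1/",  # any API endpoint
    headers={"Authorization": f"Bearer {access_token}"},
)
# urllib.request.urlopen(req) would send the authenticated request
print(req.get_header("Authorization"))
```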
As a last step, basic user information can be retrieved from the endpoint `/openid/userinfo`. This endpoint requires the `Authorization` header to be set as mentioned above and the `profile` scope to have been requested. More information, such as `email`, can be retrieved if the `email` scope has been requested.
The full set of scopes for userinfo is: [email, address, phone, offline_access]. More information about this endpoint can be found in the spec.