Your project uses Nat Pryce's adr-tools to document Architecture Decision Records (ADRs). Architectural decisions are kept track of, and the lightweight format of markdown files in git, combined with a small command line script to manage them, serves the project well.
The records are stored on GitHub (https://github.com/qwaneu/adr-spike).
There is one issue with the current setup: Pryce's tool does not support conversion to HTML, and it is hard to keep an overview of all the records. The teams want a better way of navigating and rendering them.
Specifically they want:
- the markdown records rendered as HTML.
- a link to the index in each HTML page.
- an index.html containing the list of pages as well as a graphical representation of the relations between the records.
The current structure of any repo containing records is like this:
```
└── doc
    └── adr
        ├── 0001-record-architecture-decisions.md
        ├── 000N-other-records.md
        ├── index.dot
        └── index.md
```
Each record is stored in a markdown file. There may be links between the records, which look like this:

`[4. decouple HR++](0004-decouple-hr.md)`

(i.e. the links point to markdown-based records)
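Since the rendered HTML should not link to the raw markdown files, the function will have to rewrite links of this form. Below is a minimal sketch of such a rewrite, assuming the function takes hypothetical `record` and `repo` query parameters (these names are not prescribed by the exercise):

```python
import re
from urllib.parse import quote


def rewrite_record_links(markdown_text: str, function_base_url: str, repo_url: str) -> str:
    """Rewrite links to other .md records so they point at the rendering function.

    The query parameter names ('record', 'repo') are assumptions for illustration.
    """
    def replace(match: re.Match) -> str:
        text, target = match.group(1), match.group(2)
        return f"[{text}]({function_base_url}?record={target}&repo={quote(repo_url, safe='')})"

    # Matches markdown links whose target is a local, numbered record, e.g.
    # [4. decouple HR++](0004-decouple-hr.md)
    return re.sub(r"\[([^\]]+)\]\((\d{4}[^)]*\.md)\)", replace, markdown_text)
```

Whether you rewrite the markdown before converting it, or the resulting HTML afterwards, is a design choice you can explore while test driving.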
An ADR structure always contains an index.md and an index.dot. The index.md is a bulleted list of all records, and the index.dot is a Graphviz file containing links to the records.
Note that the links in the dot file point to (non-existing) HTML files, not to markdown files.
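If you want to reuse that graph, the .html references need a similar treatment. A rough sketch, under the same assumptions about query parameters, and assuming the dot file stores its links in URL attributes (check the generated index.dot to see what it actually contains):

```python
import re
from urllib.parse import quote


def rewrite_dot_links(dot_text: str, function_base_url: str, repo_url: str) -> str:
    """Point URL attributes in the Graphviz index at the rendering function.

    Assumes node attributes like URL="0004-decouple-hr.html"; verify this
    against the index.dot in your repository.
    """
    def replace(match: re.Match) -> str:
        record = match.group(1).replace(".html", ".md")
        return f'URL="{function_base_url}?record={record}&repo={quote(repo_url, safe="")}"'

    return re.sub(r'URL="([^"]+\.html)"', replace, dot_text)
```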
There is an explicit wish not to store markdown-to-HTML converted files anywhere. Someone came up with the idea to convert the markdown files to HTML on request, through an Azure Function. You took on the challenge to do that.
So the idea is to create an Azure Function that:
- is triggered on an HTTP request.
- takes the name of the record and the GitHub repository's https URL as parameters.
- renders an HTML version of the record, with a header containing a 'home' link to the index.
- makes sure that potential links to other records in the HTML are adjusted so that they refer to the same function with the correct parameters.
- renders the index page as a list of pages and the graph of all records.
And of course, you'll test drive the function.
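To make the shape of the function a little more concrete, here is a minimal sketch using the function.json based Python programming model. The query parameter names, the `main` branch, and the mapping from the GitHub URL to raw.githubusercontent.com are all assumptions to revisit; rewriting record links and rendering the graph are left out.

```python
import azure.functions as func
import markdown
import requests


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Query parameter names are an assumption, not prescribed by the exercise.
    record = req.params.get("record", "index.md")
    repo = req.params.get("repo")
    if not repo:
        return func.HttpResponse("Missing 'repo' parameter", status_code=400)

    # Assumption: https://github.com/<org>/<repo> maps to
    # https://raw.githubusercontent.com/<org>/<repo>/main/doc/adr/<record>
    raw_base = repo.replace("https://github.com", "https://raw.githubusercontent.com")
    response = requests.get(f"{raw_base}/main/doc/adr/{record}")
    if response.status_code != 200:
        return func.HttpResponse(f"Record {record} not found", status_code=404)

    # Convert the markdown body and wrap it with a header linking back to the index.
    body = markdown.markdown(response.text)
    home = f'<a href="?record=index.md&repo={repo}">home</a>'
    html = f"<html><body><header>{home}</header>{body}</body></html>"
    return func.HttpResponse(html, mimetype="text/html")
```

The interesting, test-driven work is exactly in the parts this sketch skips.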
In order to be able to work on the exercise you will need to:
- install Python
- install the Azure CLI
- install the Azure Functions Core Tools
- install Visual Studio Code
- install the Azure Functions extension for Visual Studio Code
Install Python 3.11.6 from https://www.python.org/downloads/
It is important to use the suggested version: at the time of writing, Azure Functions does not support Python 3.12 yet.
Go to the How to install Azure CLI page for instructions.
Go to the Develop Azure Functions Locally using Core tools page for instructions.
Install Visual Studio Code from https://code.visualstudio.com/download
Open Visual Studio Code, hit Ctrl-P and type: ext install ms-azuretools.vscode-azurefunctions
Open this project in VS Code. Then open a terminal (preferably bash). Create a virtual environment:
python -m venv venv
Close the terminal and open it again. VS Code should have activated the environment by running
source <absolute path to>venv/bin/activate
If not, activate it yourself.
Now install the dependencies:
pip install -r requirements.txt -r requirements-dev.txt
You should now be able to
- Run the tests (see the test sketch after this list)
./run-test.sh watch
On Windows you may need to create your own PowerShell script; the way you set PYTHONPATH on Windows is a bit different.
- Run the function locally
./run-local.sh
- Create the function app in the cloud
First log in to Azure:
az login
./provision-funcion-app.sh
Maybe, while you're at it, you can 'terraform' the provisioning as well.
- Deploy the function app in the cloud
./publish-app.sh
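As referenced above, here is a sketch of what a test for the link rewriting could look like, using pytest, the hypothetical `rewrite_record_links` helper from the earlier sketch, and a made-up module name and function URL:

```python
from convert import rewrite_record_links  # hypothetical module name


def test_links_to_other_records_point_back_at_the_function():
    source = "See [4. decouple HR++](0004-decouple-hr.md) for the details."

    result = rewrite_record_links(
        source,
        function_base_url="https://adr-render.azurewebsites.net/api/render",
        repo_url="https://github.com/qwaneu/adr-spike",
    )

    assert "(0004-decouple-hr.md)" not in result
    assert "?record=0004-decouple-hr.md" in result
```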
The exercise uses these dependencies:
- requests https://pypi.org/project/requests/
- markdown https://pypi.org/project/Markdown/
- d3.js for rendering fancy stuff in JavaScript