This is a template repo that may serve as a starting point for new workflows based on R scripts and allow for the migration of applications to Azure. With very little configuration, the continuous integration and deployment of containerised R code is easily automated.

The CI/CD pipeline does the following:
- Deploy a container registry
- Build a Docker image running the provided R script
- Push the image to the registry
Commits to 'main' trigger a workflow run that deploys a container image for testing. The workflow may also be triggered manually.
A second workflow deploying a container instance for production use may be triggered manually.
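As a sketch, the trigger sections of the two workflow files might look like the following (the file names match those mentioned later in this README, but the actual contents of the workflows may differ):

```yaml
# .github/workflows/workflow.yml -- test release (illustrative trigger section)
on:
  push:
    branches:
      - main            # commits to 'main' deploy a test container
  workflow_dispatch:     # allows manual runs from the Actions tab

# .github/workflows/workflow_release_prod.yml -- production (manual only):
#
# on:
#   workflow_dispatch:
```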
If using RStudio, the simplest approach might be to create a new project via version control. Should this fail, you may not have the necessary permissions to clone repos within the organisation. You may need to generate a personal access token (PAT) and run `git clone <repo-url>` from a terminal.
- Create a new GitHub repo using this repo as a template
- In RStudio, go to File > New Project > Version Control > Git
- Paste in the URL for your new repo
- Click Create Project
- Open a terminal in RStudio (it should open in the current working directory) and run the following commands:

```
git remote set-url origin <url-for-your-new-repo>
git push
```
Your local project is now connected to your new GitHub repo. Changes can be committed through the RStudio interface or the command line, e.g. as shown below.
```
git add .
git commit -m "a short message describing any changes"
git push
```
Note that the GitHub Actions workflow, which builds and deploys your Docker container, is triggered by commits to the 'main' branch. You may therefore want to commit your work to a feature branch, e.g. using `git checkout -b feature_branch_name` to create and switch to a new branch, and `git switch main` to switch back.
Make sure to modify the following files:
- R script
  - The first few lines of the sample R script show how to connect to blob storage and retrieve a file. Change the resource names and file names as needed.
  - Similarly, at the bottom of the script, sample code shows how to upload a file to blob storage.
- Dockerfile
  - Uncomment the line with the `install.packages()` command OR use the script called install.packages.r to install the required packages.
  - Make sure that the COPY, RUN and CMD commands refer to the correct filename, e.g. if you call your main script something other than script.r.
- .github/workflows/workflow.yml and .github/workflows/workflow_release_prod.yml
  - Set the correct values for all variables under `env:`.
  - Note that if you're changing the cpu or memory settings, you will need to first delete the existing ACI(s) before running the workflow.
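As an illustration of what the blob-storage code in the sample script does, here is a minimal sketch using the AzureStor package. The account, container and file names below are placeholders, and the sample script's actual code may differ:

```r
library(AzureStor)

# Connect to the storage account (placeholder URL and key)
endp <- storage_endpoint("https://<account-name>.blob.core.windows.net",
                         key = "<access-key>")
cont <- storage_container(endp, "<container-name>")

# Download an input file from blob storage
storage_download(cont, src = "input.csv", dest = "input.csv")

# ... the actual work of the script happens here ...

# Upload the result back to blob storage
storage_upload(cont, src = "output.csv", dest = "output.csv")
```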
For the workflow to work, GitHub must be granted permission to make changes to your Azure environment. This can be done using the Azure CLI.
- First, log in to Azure. The command opens a browser window with the Azure login page.

```
az login
```

The value for "id" in the JSON output will be your subscription ID.
- Then, run the following command to create a Service Principal with access to your resource group. Please see the note below regarding permissions and roles.
```
az ad sp create-for-rbac \
  --name "appname" \
  --role Owner \
  --scopes /subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group-name> \
  --sdk-auth
```
Note: if you get an error saying "No connection adapters were found...", run the command below and try again.
```
export MSYS_NO_PATHCONV=1
```
Note that using the 'Owner' role is not considered best practice, as it allows the SP to do far more than is actually needed. You may therefore choose to create a more restricted custom role and assign it to the SP. This could be based on the Contributor role with added permissions to assign managed identities.
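As a rough sketch, such a custom role definition might look like the JSON below; it could be registered with `az role definition create --role-definition @custom-role.json` and its name then passed to `--role` instead of `Owner`. The role name, the exact actions and the scope here are assumptions; review them with whoever administers your Azure subscription.

```json
{
  "Name": "Workflow Deployer",
  "IsCustom": true,
  "Description": "Contributor-style role that can also create role assignments, e.g. to grant a managed identity access to a Key Vault.",
  "Actions": ["*"],
  "NotActions": [
    "Microsoft.Authorization/*/Delete",
    "Microsoft.Authorization/elevateAccess/Action"
  ],
  "AssignableScopes": [
    "/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group-name>"
  ]
}
```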
- Copy the JSON output.
- In your GitHub repo, go to Settings > Secrets and variables > Actions
- Create a new repository secret. It should be called AZURE_CREDENTIALS and its value should be the JSON string you just copied.
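For reference, the `--sdk-auth` JSON output has roughly the shape shown below (the values here are placeholders, and the real output includes a few additional endpoint URLs); the whole object, braces included, becomes the secret value:

```json
{
  "clientId": "<GUID>",
  "clientSecret": "<secret>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/"
}
```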
The below commands build and run a Docker image. You will need to have Docker Desktop installed.
```
docker build -t imagename .
docker run imagename
```
The -t (tag) parameter lets you provide a name for your Docker image. Make sure you're running these commands from the directory where the Dockerfile is located. The dot (.) indicates that the files and folders used to build the image are in the current directory.
The GitHub Actions workflow is set up such that test releases are triggered by commits to the main branch. The test release workflow may also be triggered manually. Deployment of container instances for production use can only be triggered manually. This setup is intended as a safeguard, allowing users to verify that the test ACI runs correctly before deploying the same changes to production.
The deployment workflow involves the following key actions, performed in sequence:
- Deploy the ACR
- Build and push the Docker image to the ACR
- Deploy an ACI
- Update the ACI with permission to access a Key Vault (KV)
This can be done using the portal or the Azure CLI:

```
az container logs --resource-group your-rg-name --name aci-name
```