An adapter to access a collection of CAS APIs
Layer | Technology |
---|---|
Microservices | C# (.NET 9) |
Authentication | OAuth |
Container Platform | OpenShift 4 |
Logging | Splunk and Serilog |
CI/CD Pipeline | GitHub Actions, Kubernetes Pipelines (Tekton) |
- test: Source for unit tests
- .openshift: Various OpenShift-related material, including setup instructions and templates.
This application is meant to be deployed to Red Hat OpenShift 4. Full instructions for deploying to OpenShift are in the .openshift directory.
docker build . -t cas-adapter
# http
docker run --rm -p 8080:8080 --name cas-adapter cas-adapter
# https
docker run --rm -p 8080:8080 -e ASPNETCORE_HTTP_PORTS=8080 -e ASPNETCORE_Kestrel__Certificates__Default__KeyPath=/ssl/tls.key -e ASPNETCORE_Kestrel__Certificates__Default__Path=/ssl/tls.crt -e OPENSHIFT_BUILD_NAME=1 --name cas-adapter cas-adapter
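Note that the HTTPS variant expects a TLS key and certificate at /ssl inside the container. A minimal sketch of supplying them from the host, assuming a local ssl/ directory containing tls.key and tls.crt (the directory name is illustrative):
# https, with a local certificate pair mounted into the container at /ssl
docker run --rm -p 8080:8080 -v $(pwd)/ssl:/ssl -e ASPNETCORE_HTTP_PORTS=8080 -e ASPNETCORE_Kestrel__Certificates__Default__KeyPath=/ssl/tls.key -e ASPNETCORE_Kestrel__Certificates__Default__Path=/ssl/tls.crt -e OPENSHIFT_BUILD_NAME=1 --name cas-adapter cas-adapter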
Public Application
- .NET SDK (9.0)
- A .NET IDE such as Visual Studio or VS Code
- JAG VPN with access to MS Dynamics
DevOps
- Red Hat OpenShift tools
- Docker/Podman
- Familiarity with GitHub Actions and Tekton Pipelines
There are two main categories of GitHub Actions used in this project:
- Continuous Integration - these pipelines integrate code from a fork into the trunk ("main").
- Continuous Delivery - these pipelines assist in building code for delivery (deployment) to OpenShift.
Other GitHub Actions are also used in the project; one example is the Codecov action used for coverage statistics.
- OPENSHIFT_NAMESPACE - set to the full project identifier where images are stored, for example proj-tools.
- DOCKER_USERNAME - the username for a Service Account with access to read/write OPENSHIFT_NAMESPACE images. Note that this username must have the following role binding set:
  oc policy add-role-to-user system:image-builder system:serviceaccount:<namespace>:<username>
- OPENSHIFT_PASSWORD - the TOKEN from the OpenShift secret for the username in DOCKER_USERNAME.
- OPENSHIFT_REGISTRY - the hostname for the public image repository. You can get this by viewing the details for an image in the given project; only put the hostname portion.
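A minimal sketch of provisioning such a Service Account and collecting the values above; the account name github-cicd is illustrative, and oc create token requires OpenShift 4.11+ (the requested token duration may be capped by the cluster):
# create a Service Account for CI/CD (the name is an example)
oc -n <namespace> create serviceaccount github-cicd
# grant it image push/pull rights, as described above
oc policy add-role-to-user system:image-builder system:serviceaccount:<namespace>:github-cicd
# issue a token to use as OPENSHIFT_PASSWORD
oc -n <namespace> create token github-cicd --duration=8760h
# print the public registry hostname to use as OPENSHIFT_REGISTRY
oc registry info --public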
There is also a series of Tekton (Kubernetes) pipelines:
NOTE: the Tekton Pipelines are not implemented yet.
To promote code to TEST, log in to OpenShift and start the Kubernetes Pipeline for Promote to Test.
To promote code to PROD, log in to OpenShift and start the Kubernetes Pipeline for Promote to Prod. Note that this pipeline will also make a backup of the current PROD deployment.
If you wish to revert to the previous PROD deployment, log in to OpenShift and start the Kubernetes Pipeline for Restore PROD from Backup.
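Once implemented, these pipelines can also be started from the command line with the Tekton CLI. A sketch, where the pipeline names are illustrative placeholders:
# start the TEST promotion pipeline (name is an example)
tkn pipeline start promote-to-test -n <namespace>
# follow the logs of the most recent run
tkn pipelinerun logs --last -f -n <namespace>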
Proxies, Policies, and Redirects
Please report any issues.
Pull requests are always welcome.
If you would like to contribute, please see our contributing guidelines.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
Copyright 2022 Province of British Columbia
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
This repository is maintained by BC Attorney General.
The following are example Splunk queries:
index=dev_emcr_dfa
index=dev_emcr_dfa Level=Error
Note that Splunk is only reachable from within OpenShift; you cannot connect to it from your local machine.
To run Splunk locally for testing, you can start a local Splunk container with the following command:
docker run -p 8000:8000 -p 8088:8088 --name splunk -e SPLUNK_START_ARGS='--accept-license' -e SPLUNK_PASSWORD='password' -e SPLUNK_HEC_TOKEN='efbb3565-5a2d-4277-80f8-9262b8362313' -e SPLUNK_HEC_SSL=false splunk/splunk:latest
Connect to Splunk at http://localhost:8000 with the username admin and the password supplied in the above command line.
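To verify that the local HTTP Event Collector is accepting events, you can send a test event with curl using the HEC token from the command above:
# send a test event to the local Splunk HEC endpoint (HEC SSL is disabled above)
curl http://localhost:8088/services/collector/event -H 'Authorization: Splunk efbb3565-5a2d-4277-80f8-9262b8362313' -d '{"event": "hello from cas-adapter"}'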
See the file DFA - CAS.postman_collection.json for a Postman collection of the CAS Adapter APIs. To authenticate:
- Import the environment files from a previous developer; these contain the secrets and values needed to authenticate and configure the correct environment.
- Send a "Get Token" request and copy the "access_token" value from the response.
- Paste the "access_token" value into the "Authorization" header of the collection "DFA - CAS Adapter".
- Send any request in the collection. If the token expires, repeat the above steps.
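The same flow works from the command line if you prefer it over Postman; a sketch of a standard OAuth client-credentials exchange, where the token URL, client id, and client secret are placeholders supplied by the imported environment file:
# request an access token (endpoint and credentials come from the environment file)
curl -X POST '<token-url>' -d 'grant_type=client_credentials' -d 'client_id=<client-id>' -d 'client_secret=<client-secret>'
# call an API in the collection with the returned access_token
curl '<api-url>' -H 'Authorization: Bearer <access_token>'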