
Transition Questions


Contacts… who will be around to answer questions? See the contact list below.

Based on the Support contract with Quartech, you should be able to submit a request for any bugs/issues to the number/email in the contract.

OpenShift - Questions will need to be directed to the Exchange Lab. This is best done through Rocket.Chat in the appropriate channels.

DevOps - This wiki has documentation related to the pipeline. Specific questions will need to be submitted to Quartech.

Geocoder - This service is provided by Data BC Warehouse. PIMS integration questions will need to be submitted to Quartech - link

React - An open-source library - link

Keycloak - This service is provided by the Exchange Lab. Use Rocket.Chat for specific questions. This is an open-source software solution - link

Leaflet/GIS - An open-source library - link and link

Database - The database is MS SQL Server 2019 - link. Specific questions will need to be submitted to Quartech.


What exactly are we responsible for maintaining? Are our DBAs going to manage the DB components? Bear in mind we use a SQL Server DB running in a Docker container. What about Keycloak, GIS services, DataBC services, and other dependencies? We need to update our contact list to include names for the folks managing these dependencies.

PIMS DevOps is a self-contained CI/CD process. How you want to manage it is up to you. It currently tightly integrates database migrations with release build/deploy.

Keycloak instances are set up and managed by the Exchange Lab; you only need to maintain the realm configuration. GIS service integration with DataBC is an external dependency; as such, you would need to contact the BC Data Warehouse.
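
For maintaining the realm configuration, the Keycloak admin CLI (kcadm.sh, which ships with the Keycloak server distribution) is one option for pulling the current configuration for review or backup. A minimal sketch; the realm name, host, and user are placeholders:

```sh
# Authenticate the admin CLI against the Keycloak instance (placeholders throughout).
kcadm.sh config credentials --server https://<keycloak-host>/auth --realm master --user <admin-user>

# Export the realm configuration to a local JSON file for review or backup.
kcadm.sh get realms/<realm> > realm-config.json
```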


What access do we have to the various environments? Can we add users to Rocket.Chat, Jira, Keycloak, OpenShift, Confluence, or the DB? With others managing solution components and restricting our access to them, this can affect our ability to troubleshoot problems. It also means PIMS is tied to any component version upgrades mandated by the ‘owners’ of those components (the OCP4 upgrade was an example of this).

PIMS is a cloud-based solution. It runs in an on-premises OpenShift offering owned by the Exchange Lab. Just like any cloud-based solution, applications are required to be updated appropriately when breaking changes are made.

Rocket.Chat is managed by the Exchange Lab.

The Jira and Confluence instance used by the PIMS team is owned by SRES.

Keycloak instances are managed by the Exchange Lab. Resources are assigned the Realm Administrator role in DEV and TEST. Currently they do not have access to Keycloak in PROD due to their limited familiarity with Keycloak.

OpenShift instances are managed by the Exchange Lab. Resources are assigned the Admin role within the TOOLS, DEV, and TEST namespaces/projects. Resources are assigned the View role within the PROD namespace/project due to their limited familiarity with OpenShift. However, DevOps allows releasing to all environments through the TOOLS namespace/project.
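
As a rough illustration, namespace roles can be inspected or granted with the oc CLI; namespace and user names are placeholders:

```sh
# List existing role bindings in a namespace.
oc get rolebindings -n <license-plate>-dev

# Grant the Admin role in DEV/TEST, and the View role in PROD, to a user.
oc policy add-role-to-user admin <username> -n <license-plate>-dev
oc policy add-role-to-user view <username> -n <license-plate>-prod
```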

The database runs in a Linux container within OpenShift - link
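
For local testing, the same SQL Server 2019 Linux image can be run with Docker. A minimal sketch using the official Microsoft image (password and port mapping are placeholders):

```sh
# Run SQL Server 2019 in a Linux container (official mcr.microsoft.com image).
docker run -d -p 1433:1433 \
  -e "ACCEPT_EULA=Y" \
  -e "SA_PASSWORD=<YourStrong@Passw0rd>" \
  mcr.microsoft.com/mssql/server:2019-latest
```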


How do we track change requests? Will the standard CAB setup be used for approval? The change management process for PIMS is largely handled internally via email (I think).

PIMS was developed through an Agile process. The Product Backlog was generated from the Product Owner's and Development Team's continuous interactions with users and stakeholders. Change management is handled through the Agile process by adding requests to the Product Backlog. How this is managed going forward is a decision for government.


How exactly are migrations managed? How are packages for migration prepared? DB scripts? Where do these go? Who runs these? The DBA?

PIMS uses Entity Framework Core to manage migrations. Migrations are managed in the Data Access Layer (/pims/backend/dal). Read more about EF migrations here.

PIMS has make commands that work in Windows to simplify the process of managing migrations.

make db-add n={version number} - Generates a new migration. Any related scripts that need to be run can be added to the appropriate folder structure /pims/backend/dal/Migrations/{version number}. You can review example migrations for additional information.

Migrations are automatically run when Jenkins deploys builds to each environment.
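
A minimal sketch of the local workflow, assuming the make target wraps the standard EF Core CLI (the version number and project path are examples):

```sh
# Generate a new migration using the PIMS make command (version number is an example).
make db-add n=01.05.00

# Assumed plain EF Core equivalents (require the dotnet-ef tool), run from /pims:
dotnet ef migrations add 01.05.00 --project backend/dal
dotnet ef database update --project backend/dal
```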


What about notifications (i.e. for unavailability of service or something similar). Are there templates for these in the db? If so, how are they used (sent)?

Set up your own monitoring solution with whatever you would normally use. PIMS currently uses a free account with UptimeRobot.
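
Any monitor that can poll a public URL will work. As a trivial sketch, assuming a hypothetical health endpoint:

```sh
# Poll the application; -f makes curl fail on HTTP errors (endpoint path is hypothetical).
curl -fsS https://<pims-host>/api/health || echo "PIMS appears to be down"
```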


Since we’re transitioning from Aporeto NetworkSecurityPolicy (NSP) to OpenShift / Kubernetes NetworkPolicy (KNP), will all the steps associated with the transition be completed prior to handoff? I note some deadlines occur post-handoff. Also note Aporeto will be decommissioned March 31, 2021. See the revised schedule below.

Only the DEV namespace/project has been transitioned at this point. As the remaining namespaces/projects are transitioned, additional modifications may be required. The Exchange Lab would be the primary point of contact if any issues arise.
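
To check where a given namespace stands, the oc CLI can list both policy types. A sketch, assuming the Aporeto NSP custom resource definition is still installed; the namespace name is a placeholder:

```sh
# List Kubernetes NetworkPolicy (KNP) objects in a namespace.
oc get networkpolicy -n <license-plate>-dev

# List any remaining Aporeto NetworkSecurityPolicy (NSP) objects.
oc get networksecuritypolicy -n <license-plate>-dev
```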


Need access (cannot authorize same user) to OpenShift and its associated logs.

Additional training will be required to understand OpenShift. This is outside the scope of PIMS.


Currently there are 2 ‘Snyk’ PRs for vulnerabilities. How high a priority is it to get these fixes in, and is there a way to ‘install’ these fixes locally to test before applying them to the DEV pipeline?

Automated PRs from Snyk and GitHub are prioritized like any other Product Backlog story. They are worked on when approved and applied after testing through the standard processes.
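
To test one of these fixes locally before it reaches the DEV pipeline, the PR branch can be checked out and exercised directly. A sketch with an illustrative branch name (Snyk PR branches are typically prefixed snyk-):

```sh
# Fetch and check out the automated PR branch (branch name is illustrative).
git fetch origin
git checkout snyk-fix-<id>

# Install and run the frontend checks before approving the PR.
npm ci
npm test
```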


What is the actual DevOps pipeline process for moving changes to DEV, from DEV to TEST, and from TEST to PROD?

By default, merging the dev branch triggers a build and deploy to the DEV namespace/project within OpenShift. There are OpenShift pipelines within the TOOLS namespace/project that can deploy to each environment. The master branch is used to provide the build for PROD.


Pipelines

  • dev-pipeline: dev branch build and deploy to DEV. This is automated.
  • dev-test-pipeline: Deploy the existing dev image to TEST.
  • master-pipeline: master branch build and deploy to TEST.
  • push-to-prod: Deploy the existing master tagged version image to PROD.

More information on PIMS DevOps here.
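
As a sketch, assuming the pipelines above are defined as OpenShift BuildConfigs (Jenkins pipeline strategy) in the TOOLS namespace/project, they could be started manually with the oc CLI; the namespace name is a placeholder:

```sh
# Promote the existing dev image to TEST.
oc start-build dev-test-pipeline -n <license-plate>-tools

# Deploy the existing master tagged version image to PROD.
oc start-build push-to-prod -n <license-plate>-tools
```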


What is the best way to maintain all the npm (package.json) packages, and how often do we need to check which versions to update? Presumably the best way forward is to update each package one by one locally and test locally?

NPM packages will need to be maintained as appropriate based on security warnings. Snyk and GitHub are generally reliable sources for providing Pull Requests when these libraries need to be updated.

NuGet packages will need to be maintained as appropriate based on security warnings. Currently Snyk and GitHub do not provide much insight into these, so you will need to review them manually.
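
For manual review, the standard package-manager commands can surface vulnerable or outdated packages. A sketch (the NuGet vulnerability check requires a reasonably recent .NET SDK):

```sh
# Frontend (npm) packages, run from /pims/frontend:
npm audit
npm outdated

# Backend (NuGet) packages, run from /pims/backend:
dotnet list package --vulnerable
dotnet list package --outdated
```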


What are some of the interrelationships or dependencies between map layers and other areas of the application that can get affected if there was a change in one area of the functionality in the map for example?

Too broad a question. The relationship dependencies are generally expressed here.


What would it take to modify an existing workflow for an existing ERP process and how is it configured?

The frontend does not support automatic modification of workflows. It would need to be modified manually to support any changes.

The backend does support automatic modification of workflows through changes in the database. More information here.
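
Purely to illustrate the idea of a database-driven workflow, something like the following could inspect the configuration; the table and column names here are entirely hypothetical, so consult the linked documentation for the real schema:

```sh
# Inspect workflow configuration (hypothetical table/column names).
sqlcmd -S localhost -d pims -U sa -Q "SELECT Id, Name, SortOrder FROM dbo.Workflows"
```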


Need an overview of Leaflet (the mapping component/library). Are there any current issues we should know about with this library?

Leaflet/GIS - An open-source library - link and link.

No known issues.


.env file breakdown would be nice: how overrides work in this environment, and the differences between the files required to enable local or component-specific debugging.

The .env files are a tool by which developers can set up their own environments to run the solution without adding real environment variables. These configuration files mimic real environment variables.
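
A sketch of what such a file might contain; the keys are illustrative, not the actual PIMS variable names:

```sh
# Hypothetical .env for local development (illustrative keys only).
ASPNETCORE_ENVIRONMENT=Development
ConnectionStrings__PIMS=Server=localhost,1433;Database=pims;User Id=sa;Password=<local-password>
Keycloak__Authority=https://<keycloak-host>/auth/realms/<realm>
```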


Docker memory/space limit management (e.g. docker system prune --all --force). Need to know more about memory/space allocation in Docker.

This requires a better understanding of Docker, Docker Desktop, Kubernetes, and OpenShift.
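
For reference, the standard Docker CLI commands for inspecting and reclaiming disk space are:

```sh
# Show disk space used by images, containers, and volumes.
docker system df

# Remove all unused images, containers, and networks (destructive).
docker system prune --all --force

# Also reclaim unused volumes (destructive).
docker volume prune --force
```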


Need an overview of subdivision management.

Subdivisions are a way to give ministries and agencies the ability to submit Disposal Projects for properties that don't yet exist. Subdivisions are identical to Land Parcels within PIMS; however, they have a different Type, and as such their PID and PIN are not relevant. It is best practice that when a Subdivision becomes a titled property, the Inventory is updated and its type is changed.

You can essentially manage Subdivisions the same way Land is managed within Inventory.


Sources of documentation… any we don’t know about aside from product documentation in GitHub? How about the stuff in Confluence not counting the testing plans?

The GitHub wiki is the primary source of documentation. Confluence contains Test Plans and Sprint related information. Jira contains the stories that were worked on.

