Enabling downstream PEPs to integrate with the DB

Context and Problem Statement
Downstream PEPs, bundled into the platform, sometimes need to interface with a database. This raises several concerns and considerations that should be addressed up front.
Modular binary approach to support multiple deployment modalities
The desire to deliver a product that can be deployed either as a monolith or as microservices via charts yields a number of challenges. Chiefly, we need to ensure that services remain loosely coupled while cohesion is enforced through gRPC and IPC.
Isolate PEPs
In addition to the isolation needed to meet the previous concern, we also want PEPs to have a level of guarantee that their data is accessed and mutated only by their own service. Other services and PEPs will be required to use published contracts (e.g., Protobuf/OpenAPI definitions over gRPC) to interact with the data.
Enforce consistency
It is very important that PEPs are as consistent as possible. This ensures PEPs are developed with quality software practices in mind, and it enables engineers to move throughout the platform with minimal onboarding.
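The "published contracts" idea above can be sketched in Go. The interface and types here are hypothetical stand-ins for generated protobuf stubs: other PEPs depend only on the contract, which a direct in-process implementation satisfies in the monolith build and a gRPC client stub satisfies in the microservice deployment.

```go
// Hedged sketch: services consume Policy through a published
// contract, never through Policy's tables. All names here
// (PolicyServiceClient, inProcessPolicy) are illustrative, not
// the platform's actual generated API.
package main

import "fmt"

// PolicyServiceClient is the only surface other PEPs may use.
// In practice this would be generated from the protobuf contract.
type PolicyServiceClient interface {
	GetAttribute(id string) (string, error)
}

// inProcessPolicy satisfies the contract inside a monolith build;
// a gRPC stub would satisfy the same interface when Policy runs
// as a separate service.
type inProcessPolicy struct{ attrs map[string]string }

func (p inProcessPolicy) GetAttribute(id string) (string, error) {
	a, ok := p.attrs[id]
	if !ok {
		return "", fmt.Errorf("attribute %q not found", id)
	}
	return a, nil
}

func main() {
	var client PolicyServiceClient = inProcessPolicy{
		attrs: map[string]string{"a1": "classification"},
	}
	got, err := client.GetAttribute("a1")
	fmt.Println(got, err)
}
```

Because callers only ever hold the interface, swapping the in-process implementation for a network stub changes the deployment modality without changing any PEP code.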
Decision Drivers
modular-binary to support monolith and microservice deployments
isolate PEPs so that data is protected from unauthorized mutations
enforce consistency among PEPs
Considered Options
Implement a common migration registration and heavily document the PEP development process
Decision Outcome
Chosen option: "Implement a migration service registration and heavily document PEP development process"
Confirmation
enhance Registration service to support binding the PEP/service migrations with the migration service
refactor Policy service to be fully configured in a way that would simulate the behavior
create an issue: refactor integration tests to support policy as an abstraction -- then move policy integration tests under the policy service
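The "binding the PEP/service migrations with the migration service" step above might look roughly like the following. This is a minimal sketch, assuming a central registry keyed by each PEP's scoped schema; `Registry`, `Migration`, and `RegisterPEP` are hypothetical names, not the platform's actual API.

```go
// Hypothetical sketch of migration registration: each PEP hands
// its migrations to a central registry under its own schema, and
// the platform replays them in registration order on migrate-up.
package main

import "fmt"

// Migration is one up/down step owned by a single PEP.
type Migration struct {
	Version int
	UpSQL   string
	DownSQL string
}

// Registry collects every PEP's migrations under its own schema.
type Registry struct {
	peps  map[string][]Migration
	order []string
}

func NewRegistry() *Registry {
	return &Registry{peps: map[string][]Migration{}}
}

// RegisterPEP binds a PEP's migrations to its scoped schema.
func (r *Registry) RegisterPEP(schema string, ms []Migration) {
	if _, ok := r.peps[schema]; !ok {
		r.order = append(r.order, schema)
	}
	r.peps[schema] = append(r.peps[schema], ms...)
}

// Up returns the SQL to run, schema by schema, in registration
// order, creating each PEP's schema before its migrations.
func (r *Registry) Up() []string {
	var stmts []string
	for _, schema := range r.order {
		stmts = append(stmts, fmt.Sprintf("CREATE SCHEMA IF NOT EXISTS %s;", schema))
		for _, m := range r.peps[schema] {
			stmts = append(stmts, m.UpSQL)
		}
	}
	return stmts
}

func main() {
	r := NewRegistry()
	r.RegisterPEP("policy", []Migration{{
		Version: 1,
		UpSQL:   "CREATE TABLE policy.attributes (id uuid PRIMARY KEY);",
		DownSQL: "DROP TABLE policy.attributes;",
	}})
	for _, s := range r.Up() {
		fmt.Println(s)
	}
}
```

In a real implementation the migrations would live as embedded SQL files in each PEP's directory and be applied through the platform's migration tooling rather than collected as strings.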
Implement a migration service registration and heavily document PEP development process
In scope:
need to isolate PEPs to a scoped schema
to ensure that they can make guarantees about their data and reduce coupling between PEPs
need to register with the migration handler
isolate migrations to their PEP directory and have them run when migrating up/down
enable any kind of database configuration -- whether strict schema or a simulated key-value store via JSON blobs
document common pattern for wrapping db handler as seen in policy
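The "wrap the db handler" pattern mentioned above could be documented along these lines. This is a hedged sketch, not the Policy service's actual wrapper: it pins the client to the PEP's schema via Postgres `search_path` (standard Postgres behavior) and defensively qualifies table names; `PEPClient`, `Scope`, and `Qualify` are illustrative names.

```go
// Minimal sketch of a schema-scoped DB handler wrapper: every
// connection is pinned to the PEP's schema so unqualified table
// names resolve inside that schema only.
package main

import (
	"fmt"
	"strings"
)

// PEPClient wraps a shared DB handle with a fixed schema scope.
type PEPClient struct {
	schema string
}

func NewPEPClient(schema string) *PEPClient {
	return &PEPClient{schema: schema}
}

// Scope returns the statement to run on checkout of every
// connection, pinning it to the PEP's schema.
func (c *PEPClient) Scope() string {
	return fmt.Sprintf("SET search_path TO %s;", c.schema)
}

// Qualify fully qualifies a table name defensively, so a query
// cannot silently resolve to another PEP's tables.
func (c *PEPClient) Qualify(table string) string {
	if strings.Contains(table, ".") {
		return table // already qualified
	}
	return c.schema + "." + table
}

func main() {
	pc := NewPEPClient("policy")
	fmt.Println(pc.Scope())               // SET search_path TO policy;
	fmt.Println(pc.Qualify("attributes")) // policy.attributes
}
```

Note that `search_path` alone is a convenience, not a security boundary; without the scoped DB users called out below as out of scope, a misbehaving PEP could still qualify another schema's tables explicitly.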
Out of scope:
how we can create multiple DB clients to support scoped users, preventing access to or mutation of data that is not permissible
by not supporting this, a poisoned PEP could mutate policy data and grant access
non-Postgres environments
this needs to be solved at the platform level first -- there are many considerations around stored procedures, triggers, and JSON data
handling distributed databases
enabling modular binary in ways that are not supported in the current state
any kind of sharding / caching / read replicas
read replicas will depend on the knowledge of multiple databases and db users to ensure the appropriate action is taken on the appropriate database
integration tests
utilizing the same functionality found in the platform to ensure resilient testing
SOLUTION: PEPs should manage their own integration tests and can add those jobs to the workflows
provisioning of fixtures
enable PEP developers to create scenarios within the platform for automated and manual testing
SOLUTION: utilize CLI to use policy as a service
@strantalis Yes, that's correct. Whereas the platform will most likely have a single schema, PEPs will have schemas of their own. This will allow DB admins to control access and partition the databases easily.
I do wonder if the services also need to use a unique schema. For instance, do we want to risk KAS accessing the Policy DB directly rather than going through the SDK / gRPC interface?
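The per-service isolation wondered about above would come down to pairing each schema with a role that can only touch that schema, so a service connecting as its own role cannot read another service's tables. The `ProvisionDDL` helper and the `_svc` role naming below are hypothetical; the `GRANT`/`REVOKE` semantics are standard Postgres.

```go
// Hedged sketch: emit the DDL that isolates one service behind
// its own schema and login role. Statement text is standard
// Postgres; the helper and naming convention are illustrative.
package main

import "fmt"

// ProvisionDDL returns the statements to isolate one service.
func ProvisionDDL(service string) []string {
	role := service + "_svc" // hypothetical naming convention
	return []string{
		fmt.Sprintf("CREATE SCHEMA IF NOT EXISTS %s;", service),
		fmt.Sprintf("CREATE ROLE %s LOGIN;", role),
		fmt.Sprintf("GRANT USAGE ON SCHEMA %s TO %s;", service, role),
		fmt.Sprintf("GRANT ALL ON ALL TABLES IN SCHEMA %s TO %s;", service, role),
		fmt.Sprintf("REVOKE ALL ON SCHEMA %s FROM PUBLIC;", service),
	}
}

func main() {
	for _, s := range ProvisionDDL("policy") {
		fmt.Println(s)
	}
}
```

With this in place, KAS connecting as `kas_svc` would get a permission error on any direct query against the `policy` schema, forcing it onto the SDK / gRPC interface.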
The implementation of this effort was more complicated than originally identified. Additionally, some related efforts did not pan out and led to the creation of #675.