Make driver/handler cache operations atomic across Client #189
Comments
willbeason pushed a commit to willbeason/frameworks that referenced this issue on Jan 27, 2022:
Targets sometimes need to maintain state about the system. For example, the Gatekeeper target needs to track the set of current Namespaces on the cluster in order to properly match objects to Constraints when Audit is called.

This commit adds a Cache interface which Targets may choose to implement. If they implement this interface, Client attempts to add and remove objects from the Target cache just as it does for Driver caches. These operations are not atomic, so it is possible for systems to get into an inconsistent state. There isn't a good solution to this now - I've opened open-policy-agent#189 to solve this in the future. The implications are quite complex and there are a lot of edge cases.

This commit also modifies the test target handler matchers - they now access the test target's cache in order to function. These matchers aren't called yet - we don't want to break Gatekeeper, since Gatekeeper Golang matchers are not yet implemented.

Signed-off-by: Will Beason <willbeason@google.com>
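A minimal sketch of the handler-side cache hook that commit message describes, assuming invented names rather than the actual frameworks API. The key property, per the later commits, is that adding may fail but removal may not:

```go
// Sketch only: a hypothetical handler/Target cache hook, not the real
// frameworks interface. A Target implements this when it needs to track
// cluster state (e.g. Namespaces) to match objects to Constraints.
package cachesketch

type Cache interface {
	// Add inserts or overwrites the object stored under key. It may fail,
	// for example if the object cannot be converted into the handler's
	// internal representation.
	Add(key string, obj interface{}) error

	// Remove deletes the object stored under key. It cannot fail, so a
	// caller is never left partway through a removal.
	Remove(key string)
}
```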
willbeason pushed a commit that referenced this issue on Feb 10, 2022:
* Implement Target cache hook

  Targets sometimes need to maintain state about the system. For example, the Gatekeeper target needs to track the set of current Namespaces on the cluster in order to properly match objects to Constraints when Audit is called.

  This commit adds a Cache interface which Targets may choose to implement. If they implement this interface, Client attempts to add and remove objects from the Target cache just as it does for Driver caches. These operations are not atomic, so it is possible for systems to get into an inconsistent state. There isn't a good solution to this now - I've opened #189 to solve this in the future. The implications are quite complex and there are a lot of edge cases.

  This commit also modifies the test target handler matchers - they now access the test target's cache in order to function. These matchers aren't called yet - we don't want to break Gatekeeper, since Gatekeeper Golang matchers are not yet implemented.

* Fix merge conflicts

* Remove ability for caches to fail deleting

  Otherwise it is easy to get into inconsistent cache states. There are lots of edge cases that can cause unpredictable behaviors that we don't want to allow.

* Make addition atomic

  Since adding data can fail in the target cache, remove data from the driver cache. Note that addition and deletion occur in opposite orders for AddData and RemoveData - this is because we want to prioritize reversible over potentially-irreversible operations. Removing data from the handler cache can't fail, so it is safe to add it first.

* Make it impossible for handler caches to fail deletion

  Otherwise we can easily end up in very annoying inconsistent states. If deletion really, really needs to fail, then the application should panic rather than allow things to get into an inconsistent state. Per discussion.

Signed-off-by: Will Beason <willbeason@google.com>
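One way to read the "Make addition atomic" ordering above, as a hedged sketch; the type and method names (client, handlerCache, driverCache) are invented and the actual frameworks code may order these steps differently. The idea is to perform the step whose undo cannot fail first, so a later failure can be rolled back, and to run removal in the opposite order:

```go
// Hedged sketch of the add-then-rollback ordering; all names are invented.
package atomicsketch

// handlerCache mirrors the handler-side cache: Add may fail, Remove may not.
type handlerCache interface {
	Add(key string, obj interface{}) error
	Remove(key string)
}

// driverCache mirrors the driver-side cache, where both operations may fail.
type driverCache interface {
	AddData(key string, obj interface{}) error
	RemoveData(key string) error
}

type client struct {
	handler handlerCache
	driver  driverCache
}

// AddData writes to the handler cache first because that step is reversible
// (handler removal cannot fail); a failed driver add is then rolled back.
func (c *client) AddData(key string, obj interface{}) error {
	if err := c.handler.Add(key, obj); err != nil {
		return err
	}
	if err := c.driver.AddData(key, obj); err != nil {
		// Roll back the handler add so the two caches stay consistent.
		c.handler.Remove(key)
		return err
	}
	return nil
}

// RemoveData runs in the opposite order: the potentially-failing driver
// removal goes first and the infallible handler removal last, so a driver
// error leaves both caches still holding the object.
func (c *client) RemoveData(key string) error {
	if err := c.driver.RemoveData(key); err != nil {
		return err
	}
	c.handler.Remove(key)
	return nil
}
```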
willbeason pushed a commit that referenced this issue on Feb 14, 2022:
* Implement Target cache hook

  Targets sometimes need to maintain state about the system. For example, the Gatekeeper target needs to track the set of current Namespaces on the cluster in order to properly match objects to Constraints when Audit is called.

  This commit adds a Cache interface which Targets may choose to implement. If they implement this interface, Client attempts to add and remove objects from the Target cache just as it does for Driver caches. These operations are not atomic, so it is possible for systems to get into an inconsistent state. There isn't a good solution to this now - I've opened #189 to solve this in the future. The implications are quite complex and there are a lot of edge cases.

  This commit also modifies the test target handler matchers - they now access the test target's cache in order to function. These matchers aren't called yet - we don't want to break Gatekeeper, since Gatekeeper Golang matchers are not yet implemented.

* Fix merge conflicts

* Remove ability for caches to fail deleting

  Otherwise it is easy to get into inconsistent cache states. There are lots of edge cases that can cause unpredictable behaviors that we don't want to allow.

* Make addition atomic

  Since adding data can fail in the target cache, remove data from the driver cache. Note that addition and deletion occur in opposite orders for AddData and RemoveData - this is because we want to prioritize reversible over potentially-irreversible operations. Removing data from the handler cache can't fail, so it is safe to add it first.

* Make it impossible for handler caches to fail deletion

  Otherwise we can easily end up in very annoying inconsistent states. If deletion really, really needs to fail, then the application should panic rather than allow things to get into an inconsistent state. Per discussion.

* Add type for matching Constraints

  Prep work for compiler sharding. This adds a store of the matchers corresponding to Constraints, by target and by kind. We could do constraint matching per-target, but it's better for us to fail fast rather than partially review objects when we're just going to return errors. Thus, ConstraintsFor iterates over all targets itself rather than requiring its caller to.

* Split out per-handler matching

  Each Handler potentially has its own review type to be matched against, so this makes requests for matching happen per-target.

* Resolve reviewer comments

* Fix Remove test code

* Fix other tests

Signed-off-by: Will Beason <willbeason@google.com>
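A sketch of the "store of matchers by target and by kind" described in the last commits above; the type names, field layout, and signature are assumptions for illustration, not the frameworks API. It shows the fail-fast behavior: ConstraintsFor iterates over every target itself and returns an error rather than a partial result:

```go
// Sketch only: a matcher store indexed by target, then Constraint kind,
// then Constraint name. All names here are invented.
package matchersketch

import "fmt"

// Matcher decides whether one target's review matches a single Constraint.
type Matcher interface {
	Match(review interface{}) (bool, error)
}

type constraintMatchers struct {
	// matchers[target][kind][name] holds the matcher for one Constraint.
	matchers map[string]map[string]map[string]Matcher
}

// ConstraintsFor returns, per target, the names of Constraints whose matchers
// match the supplied review. It fails fast on the first error rather than
// partially reviewing the object.
func (m *constraintMatchers) ConstraintsFor(reviews map[string]interface{}) (map[string][]string, error) {
	result := make(map[string][]string)
	for target, kinds := range m.matchers {
		review, ok := reviews[target]
		if !ok {
			continue
		}
		for kind, byName := range kinds {
			for name, matcher := range byName {
				matched, err := matcher.Match(review)
				if err != nil {
					return nil, fmt.Errorf("matching %v %v for target %v: %w", kind, name, target, err)
				}
				if matched {
					result[target] = append(result[target], name)
				}
			}
		}
	}
	return result, nil
}
```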
Resolved for now since Templates can't change targets any more if they have any existing Constraints.
Not part of the current performance optimization - we currently do not add/remove objects atomically across drivers/targets.
Right now it is possible for us to end up in inconsistent states - where some Drivers/Handlers in a Client are able to cache objects but others are not. We currently opt to leave things in an inconsistent state rather than try to get back to a baseline.
This work is fairly complex to do generally - it may need its own design, as there are a lot of edge cases to making these operations atomic.
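To make the inconsistency concrete, here is a purely illustrative sketch (not frameworks code; the cache interface and function name are invented) of adding an object to several driver/handler caches in a loop. If a mid-loop Add fails, the caches that already succeeded still hold the object; one possible baseline-restoring approach is to undo those earlier adds, which is the behavior this issue asks about:

```go
// Illustrative only: generalizing the rollback idea to N caches in a Client.
package rollbacksketch

// cache stands in for any per-Driver or per-Handler cache whose Add may fail
// but whose Remove may not.
type cache interface {
	Add(key string, obj interface{}) error
	Remove(key string)
}

// addToAll adds obj to every cache, undoing the adds that already succeeded
// if a later one fails. Without the undo loop, an error partway through
// leaves the earlier caches holding the object - the inconsistent state
// described above.
func addToAll(caches []cache, key string, obj interface{}) error {
	var added []cache
	for _, c := range caches {
		if err := c.Add(key, obj); err != nil {
			for _, done := range added {
				done.Remove(key)
			}
			return err
		}
		added = append(added, c)
	}
	return nil
}
```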