Store Permissions in Index #6303

Closed
5 tasks
stephen-crawford opened this issue Feb 13, 2023 · 3 comments
@stephen-crawford
Contributor

stephen-crawford commented Feb 13, 2023

We want to be able to store permissions in a system index instead of in a local map.

When this issue is resolved, we should be able to perform the following operations on the index:

  • Add permissions to the index store
  • Fetch permissions from the index store
  • Delete permissions from the index store
  • Use the replicated storage system native to OpenSearch
  • Support multi-node clusters
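The map-like operations above could be sketched as a small store contract. The class and method names below are hypothetical, not the actual Identity APIs; a real implementation would back these calls with the system index, while this in-memory stand-in only illustrates the put/get/delete surface:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the map-like operations listed above. A real
// implementation would back these calls with the Identity system index.
class PermissionStoreSketch {
    private final Map<String, Set<String>> grants = new ConcurrentHashMap<>();

    // Add a permission grant for a principal.
    void put(String principal, String permission) {
        grants.computeIfAbsent(principal, p -> ConcurrentHashMap.newKeySet())
              .add(permission);
    }

    // Fetch the permissions currently granted to a principal.
    Set<String> get(String principal) {
        return grants.getOrDefault(principal, Set.of());
    }

    // Remove a permission grant; a no-op for unknown principals.
    void delete(String principal, String permission) {
        Set<String> set = grants.get(principal);
        if (set != null) {
            set.remove(permission);
        }
    }
}
```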
@stephen-crawford stephen-crawford converted this from a draft issue Feb 13, 2023
@stephen-crawford stephen-crawford self-assigned this Feb 13, 2023
@stephen-crawford
Contributor Author

stephen-crawford commented Feb 16, 2023

Here is a tikz diagram of the permission storage workflow that I am considering. I would appreciate feedback or suggestions. I am particularly interested in what @peternied thinks, as they suggested that I make a diagram like this for feedback, and I want to know if this is what you had in mind.

[Image: TikZ diagram of the proposed permission storage workflow]

Basic Description

When a client makes a request, it is sent directly to the OpenSearch Permission Store interface. The interface allows the client, via the Permission Service, to treat any compatible data structure like a map with basic put, get, and delete operations (we may add update down the road if needed). When the user request reaches the Permission Store interface, the interface needs to handle several things before executing the request.

1st: The store needs to check whether the request is valid given the existing permissions (and principals?)
2nd: The Permission Store needs to verify that the user has the permissions to perform the request
3rd: The interface needs to pass along the request so the action can be executed on the index layer.

When a request is valid, the Permission Store interface is able to verify that the permission exists, or that it follows a pattern of permissions which can be granted during installation (e.g., permissions for plugin extensions may be granted before the plugin is installed, as long as this occurs during the cluster start process). Then the interface passes the request to the Identity OpenSearch System Index, which handles the actual configuration changes. By the time the request reaches the System Index, the action is already allowed, so the only failure that should result at this step is an unforeseen error of some kind.

Assuming the operation is successful, the System Index should notify the Permission Store. The Permission Store should then notify the client of the operation's success/failure state (and potentially add the permission to an LRU cache?).
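The end-to-end flow above (validate the request, check the caller's rights, execute on the index layer, notify the client, optionally cache) can be sketched as follows. All names here are illustrative assumptions, and the index layer is a simple in-memory stand-in:

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

// Illustrative sketch of the Permission Store request flow described above:
// (1) validate the requested permission, (2) check the caller's own rights,
// (3) execute against the index layer, then report success/failure back to
// the client, optionally filling a small LRU cache of known-good permissions.
class PermissionStoreFlow {
    private final Predicate<String> isValidPermission;      // step 1: request validity
    private final Predicate<String> callerIsAuthorized;     // step 2: caller's rights
    private final Set<String> indexLayer = new HashSet<>(); // stand-in for the system index
    private final Map<String, Boolean> lruCache =
        new LinkedHashMap<>(16, 0.75f, true) {              // access-order map = simple LRU
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > 128; // bound the cache
            }
        };

    PermissionStoreFlow(Predicate<String> isValidPermission,
                        Predicate<String> callerIsAuthorized) {
        this.isValidPermission = isValidPermission;
        this.callerIsAuthorized = callerIsAuthorized;
    }

    // Returns true on success, mirroring the notification sent to the client.
    boolean addPermission(String caller, String permission) {
        if (!isValidPermission.test(permission)) return false; // invalid request
        if (!callerIsAuthorized.test(caller)) return false;    // caller lacks rights
        indexLayer.add(permission);                            // step 3: index layer
        lruCache.put(permission, Boolean.TRUE);                // optional cache fill
        return true;
    }

    boolean isCached(String permission) {
        return lruCache.containsKey(permission);
    }
}
```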

What is currently in place?

Currently, permissions are handled as part of the Security Plugin. The plugin makes use of a class called PrivilegesEvaluator, which handles authorization based on roles.

The PrivilegesEvaluator also takes instances of SnapshotRestoreEvaluator, SecurityIndexAccessEvaluator, ProtectedIndexActionEvaluator, TermsAggregationEvaluator, and PitPrivilegesEvaluator. These attributes of the class are each used to handle specific scenarios of permission evaluation.

Inside the PrivilegesEvaluator, the evaluate method returns a PrivilegesEvaluatorResponse after parsing injected or mapped roles and then handling the unique cases for each of the specific evaluator instances above.

The PrivilegesInterceptor is also used to replace the dashboards index for cluster permissions.

Ultimately, the PrivilegesEvaluator.evaluate method returns a response that either allows the action, or denies it and lists the missing permissions required to perform it.
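A simplified sketch of that response shape may help; this is the pattern only, with hypothetical names, not the Security Plugin's actual API:

```java
import java.util.HashSet;
import java.util.Set;

// Simplified, hypothetical sketch of the evaluation described above: the
// response either allows the action or denies it and reports the missing
// privileges.
class EvaluationSketch {
    record Response(boolean allowed, Set<String> missingPrivileges) {}

    static Response evaluate(Set<String> granted, Set<String> required) {
        Set<String> missing = new HashSet<>(required); // start from what is required
        missing.removeAll(granted);                    // subtract what is granted
        return missing.isEmpty()
            ? new Response(true, Set.of())
            : new Response(false, missing);
    }
}
```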

What are the issues with the current setup?

There are a couple of limitations that come with the current setup.

First, the handling of the various privilege types is not very scalable. There are already five sub-evaluators being passed to the PrivilegesEvaluator, and more will be needed as new types of actions are added. It is generally preferable to avoid adding a specific handler whenever a new permission type is introduced, especially with Extensions likely bringing many new permissions.

Second, we are already facing an issue with the way that permissions are handled as part of the ConfigurationRepository. Because all of the configuration settings are stored together, any change that could impact permissions requires a full configuration update and reload. Whenever there is an AbstractApiAction, the DynamicConfigFactory invalidates the in-memory cache and then TransportConfigUpdateAction causes all nodes to reload. This means that there are multiple calls to reload nodes on every security-configuration or user-related update. This issue is discussed further here, with @peternied offering some detailed findings about the current system.

What other options are there?

On the issue discussing general configuration storage, a few alternative solutions are offered.

One option is looking into some type of improved caching, where we only invalidate the cache on very specific operations instead of on all of them. This would improve overall operation speed, since we would no longer need to reload on every operation, just on the few that truly require it. The issue with this solution is that it is a bit of wishful thinking: presumably, it would already be in place if it were feasible. It is not clear how to determine the cutoff for the changes that require a reload, and in the case of permission storage, nearly all of the operations would likely require one.

Another alternative is to clear all the caches on an update but only reload them when they are queried. In a scenario with numerous nodes, you could clear the caches of all the nodes on an update but only reload the security configuration when a particular node is queried. This would spread out the reload time as it appears to the end user, since you could reload a single node and serve the user from that node while the others were reloading. The issue with this alternative is that it is not clear it would save any actual wall-clock time: the nodes would simply sit idle while waiting for their first requests, which does not actually save the client any time in the end.

A final alternative worth mentioning is moving strictly to in-memory storage. This would save query time and reload time, and would allow direct access to the settings without needing a cache at all, but it poses significant scalability issues. Storing the permissions this way would tax the memory of the nodes, sacrifice persistence, and may also cause further synchronization issues. While OpenSearch indices are replicated across the different nodes, memory is not, so storing configurations in memory long-term may lead to divergence between nodes.

Other Details

Right now the intention is to use a standard 1-primary, 1-replica shard setup. This is open to change, but the idea would be to store the primary on one node and the replica on another. In the current system, read replicas do not always serve a purpose: if the configuration is going to be reloaded almost immediately, there is no point. Since the Identity index will be separate from the other configuration settings, however, it would be beneficial to use read replicas, which allow faster permission reads when checking permissions during operations on the standard configuration indices. Because the entire configuration index will no longer be reloaded every time the permission index changes, it seems reasonable to spend slight overhead up front to make the replica a read replica.
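As a concrete illustration, the shard layout above corresponds to an index-settings fragment like the following. The setting keys are standard OpenSearch index settings; the values reflect this proposal and are not final:

```json
{
  "settings": {
    "index": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    }
  }
}
```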

@peternied
Member

  • Distinguishing between the Lucene index and the system index seems a little too detailed; let's conflate these together as the 'system' index
  • Using the word 'client': is that a user creating a REST call, or does that represent a PermissionService? If the latter, call out the overall scenario to provide context for the broader audience
  • Include more 'whys' in this doc. What is implemented, why isn't it enough? Why a system index? What other options are used in the OpenSearch system, why not use them?
  • Is this strongly or eventually consistent?
  • Where do the replica(s) live? Do we use read replicas to make 'read' operations faster vs. writes?

@stephen-crawford
Contributor Author

Thank you for your great feedback, Peter.

I am going to write the responses down here as well as update the original, so that someone else who comes along does not run into the same questions you had, and so it is easy to see the specific answers.

  • Distinguishing between the Lucene index and the system index seems a little too detailed; let's conflate these together as the 'system' index
    • Will swap. The intention was to show the design down to the shard level, since that is what provides replication, but I agree it does not really matter for the design and is abstracted away from the user.
  • Using the word 'client': is that a user creating a REST call, or does that represent a PermissionService? If the latter, call out the overall scenario to provide context for the broader audience
    • The client is supposed to be the PermissionService, like you mention. I will clarify that in the document so it is clear.
  • Include more 'whys' in this doc. What is implemented, why isn't it enough? Why a system index? What other options are used in the OpenSearch system, why not use them?
    • I will update this in the original. Briefly, right now we just have a single store of all configurations. Likewise, we just have a hashmap holding permissions.
  • Is this strongly or eventually consistent?
    • Technically this will be eventually consistent. You can force it to be strongly consistent by using a request id, so that we can ensure the replica shard is not read before it has been updated.
  • Where do the replica(s) live? Do we use read replicas to make 'read' operations faster vs. writes?
    • Replica shards will be stored the same way as for normal indices, with the caveat that OpenSearch always distributes shards across different nodes. I think we should have read replicas, but we will need to use the request-id method to prevent consistency issues.
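The request-id idea mentioned above can be sketched as follows: each write to the primary is tagged with a monotonically increasing id, and a read that carries the id of the write it must observe falls back to the primary whenever the replica has not yet applied that write. This is a toy model of the technique, not OpenSearch's actual replication machinery, and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Toy model of read-your-writes consistency over a primary/replica pair,
// using a per-write request id as discussed above.
class ConsistentReads {
    private final Map<String, String> primary = new HashMap<>();
    private final Map<String, String> replica = new HashMap<>();
    private final AtomicLong writeId = new AtomicLong();
    private long replicaAppliedId = 0;

    // Write to the primary and return the request id for this write.
    long write(String key, String value) {
        primary.put(key, value);
        return writeId.incrementAndGet();
    }

    // Replica catches up to the primary (normally asynchronous).
    void replicate() {
        replica.putAll(primary);
        replicaAppliedId = writeId.get();
    }

    // Strongly consistent read: use the replica only if it has applied the
    // write identified by requiredId; otherwise fall back to the primary.
    String read(String key, long requiredId) {
        return replicaAppliedId >= requiredId ? replica.get(key) : primary.get(key);
    }
}
```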

@stephen-crawford stephen-crawford moved this from In Progress to Design Document Awaiting Feedback in Identity in OpenSearch Feb 23, 2023
@github-project-automation github-project-automation bot moved this from Design Document Awaiting Review to Done in Identity in OpenSearch Jan 2, 2024