A potential risk in piraeus which can be leveraged to make a cluster-level privilege escalation #449
Thank you for the report. We are aware of the potential impact of a compromised piraeus-operator-controller-manager deployment. However, we see limited possibilities to change that: the Piraeus Operator manages all resources related to Piraeus Datastore, which, at the base level, is a storage provider for Kubernetes using the CSI mechanism. One of the main operations a storage provider must be able to perform is the `mount()` system call, which already requires local root privileges on a node. We also need to manage DRBD resources, which likewise requires the SYS_ADMIN capability. This means we necessarily need to run highly privileged workloads. On to the reported issues:
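For context, the node-level storage components need privileges along these lines. This is a generic sketch of the security context a CSI node plugin typically requests, not the exact Piraeus manifest:

```yaml
# Generic sketch of a CSI node plugin container's security context.
# Not the exact Piraeus manifest; many drivers simply set privileged: true.
securityContext:
  capabilities:
    add: ["SYS_ADMIN"]  # needed for mount() and for managing DRBD resources
```

This is why the node-side workloads cannot simply be de-privileged: the capability requirement comes from the storage operations themselves.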
Thus, I do not see any way to mitigate this potential issue. All of the provided mitigations would impact the functionality in some way. If you do have further suggestions, we'd be interested in hearing them.
The harder part is mitigating most attack scenarios. To mitigate all of them, you would have to delegate risky RBAC permissions to a protected workload on a protected node, which verifies requests and performs those risky actions on behalf of the workloads running on the worker nodes to provide storage. To be fair:
I am Nanzi Yang, and I found a potential risk in Piraeus which can be leveraged to make a cluster-level privilege escalation.
Detailed analysis:
Piraeus has a deployment called piraeus-op-controller-manager, whose two pods are scheduled on arbitrary worker nodes. The pods' service account is piraeus-op, which is granted the piraeus-op-controller-manager cluster role via a cluster role binding. That cluster role has the get/list/watch verbs on secret resources, the create/patch/update verbs on clusterrolebindings.rbac.authorization.k8s.io, and the create/patch/update verbs on clusterroles.rbac.authorization.k8s.io.
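The relevant rules would look roughly like the following ClusterRole fragment. This is a sketch reconstructed from the permissions listed above, not the exact manifest shipped by the operator:

```yaml
# Sketch of the risky rules in the piraeus-op-controller-manager ClusterRole.
# Reconstructed from the report; the real manifest contains additional rules.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: piraeus-op-controller-manager
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings"]
    verbs: ["create", "patch", "update"]
```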
Thus, if a malicious user can access a worker node that runs piraeus-op-controller-manager and obtains the piraeus-op service account token, he/she can:
1. Read the secrets of the entire cluster (e.g., the cluster's admin token), resulting in cluster-level privilege escalation.
2. Bind any cluster role (e.g., the cluster-admin cluster role) to whatever service account he/she likes, resulting in cluster-level privilege escalation.
3. Create or update cluster roles with arbitrary permissions (e.g., GET verbs of secret resources), resulting in cluster-level privilege escalation.
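For illustration, the cluster role binding escalation only needs a single manifest created with the stolen service-account token. All names here are hypothetical:

```yaml
# Hypothetical attacker manifest: using the create verb on
# clusterrolebindings, bind cluster-admin to an attacker-controlled
# service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: totally-legitimate-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: attacker-sa
    namespace: default
```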
Mitigation Discussion:
1. Deploy the piraeus-op-controller-manager in a separate Kubernetes namespace, and use a RoleBinding, not a ClusterRoleBinding, so that the deployment can only access secrets in that separate namespace.
2. Use resource names in the role rules to restrict which secrets can be accessed by the deployment.
3. If some of the permissions are unnecessary, the best way to mitigate the risks is removing the related permissions. However, this needs a careful review of Piraeus' source code so as not to disrupt its functionality.
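As a sketch of the first two mitigations combined, the secret access could be narrowed to a namespaced Role with `resourceNames`. All names below are hypothetical, and whether Piraeus can still function under such a restriction is exactly the open question:

```yaml
# Sketch: namespace-scoped, name-restricted secret access instead of a
# cluster-wide grant. Names are hypothetical. Note that resourceNames
# cannot restrict list/watch requests, so only "get" is granted here.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: piraeus-op-secrets
  namespace: piraeus-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["linstor-client-tls", "linstor-api-tls"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: piraeus-op-secrets
  namespace: piraeus-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: piraeus-op-secrets
subjects:
  - kind: ServiceAccount
    name: piraeus-op
    namespace: piraeus-system
```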
A few questions: is this a real issue in Piraeus, and if so, do you have any mitigation suggestions?