This repository was archived by the owner on Dec 18, 2025. It is now read-only.

Kubernetes Policy WG Discussion: Formal Verification #196

@rficcaglia

Description


I recommend deprecating this issue and referring interested readers to a discussion on the Policy WG call documented here:

https://docs.google.com/document/d/1ihFfEfgViKlUMbY2NKxaJzBkgHh-Phk5hqKTzK-NEEs/edit#

DEPRECATED:

During the K8S Policy WG session today (6/5/2019) we discussed how (and whether) policies might be formally verified. We talked about using Datalog in formal verification (pros and lots of cons), and insofar as OPA's Rego is similar to Datalog (though not strictly Datalog), it might be possible. In any case, I volunteered to write up a strawman outline of what this might look like, in very hand-wavy terms, to get the discussion started. @hannibalhuang asked me to put it here in sig-security. I believe @patrick-east and @ericavonb were also interested in reviewing. Enjoy...

Formal Verification of Policy In Kubernetes

  • Human writes a policy
    • it might be a policy to grant or deny user access to a resource in a multi-tenant cluster,
    • or it might be a policy that requires certain syscall activity to be monitored on some pods with certain labels,
    • or it might be a policy that says network traffic that is regulated by PCI or HIPAA should be read-only to some microservices but writeable by others,
    • or it might be a policy that specifies some alert action to be triggered when a given audit event occurs;
  • The policy is essentially a specification of what the expected behavior should be for the system, i.e. System + Policy => Safety Properties (nothing bad happens)
  • A tool checks "validity" of the policy and that the system execution matches the specification.
    • produces verification "proof" (model) if the policy is correct
      – or generates a counterexample if the policy is not correct
  • Verification is completely automatic
    • Human can say with confidence, "this policy correctly implements the behavior I intended"
    • Software can use the proof/model and reason with it.
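The workflow above can be sketched as a toy verifier: a policy, a safety property, and an exhaustive check that either covers every state (the "proof") or returns a counterexample. This is a minimal illustration only; the names (`allow`, `safety_property`, the tenant/user values) are hypothetical and are not OPA, Rego, or Kubernetes APIs.

```python
# Toy sketch: verify a policy against a safety property by checking
# every state in a small, finite state space. All identifiers here are
# illustrative, not real Kubernetes or OPA constructs.
from itertools import product

USERS = ["alice", "bob"]
RESOURCES = ["tenant-a/pods", "tenant-b/pods"]
TENANT_OF = {"alice": "tenant-a", "bob": "tenant-b"}

def allow(user, resource):
    """The policy under test: users may only access their own tenant."""
    return resource.startswith(TENANT_OF[user] + "/")

def safety_property(user, resource):
    """The specification: cross-tenant access is never granted."""
    return (not allow(user, resource)) or resource.startswith(TENANT_OF[user] + "/")

def verify():
    """Exhaustively check the property. Returns (True, None) if every
    state satisfies it (the 'proof' is the completed enumeration), or
    (False, counterexample) for the first violating state."""
    for user, resource in product(USERS, RESOURCES):
        if not safety_property(user, resource):
            return False, (user, resource)
    return True, None

ok, counterexample = verify()
print("verified" if ok else f"counterexample: {counterexample}")
```

Because the state space is finite and fully enumerated, the check is completely automatic, and a human can read `verified` as "this policy correctly implements the behavior I intended" for that space; a real verifier would work symbolically rather than by enumeration.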

How is Verification Done?

  • Express the policy's conditions in logic (e.g. as Rego rules)
  • Given a set of logic rules, P, check whether there exists a proof/model of P
  • Given a model m (the verification conditions), use a solver to try to discharge them
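The "given a set of logic rules P, find a model of P" step can be illustrated with a naive bottom-up Datalog evaluation: start from the known facts and apply the rules until no new facts are derivable. The fixpoint reached is the minimal model, and a query is "proved" exactly when its fact appears in it. The relations (`parent`, `ancestor`) are a standard textbook example, not Rego syntax.

```python
# Toy bottom-up Datalog evaluation. Rules P:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
# Facts are stored as (relation, arg1, arg2) tuples.

facts = {("parent", "a", "b"), ("parent", "b", "c")}

def step(model):
    """Apply every rule once to the current set of facts."""
    derived = set(model)
    for rel, x, y in model:
        if rel == "parent":
            derived.add(("ancestor", x, y))
    for rel1, x, y in model:
        for rel2, y2, z in model:
            if rel1 == "parent" and rel2 == "ancestor" and y == y2:
                derived.add(("ancestor", x, z))
    return derived

model = facts
while True:  # iterate to a fixpoint: stop when no new facts appear
    nxt = step(model)
    if nxt == model:
        break
    model = nxt

# The fixpoint is the minimal model of P; a query holds iff its fact
# is in the model.
print(("ancestor", "a", "c") in model)  # True
```

A real solver (an SMT solver such as Z3, or a Datalog engine) does this far more efficiently, with negation and stratification handled carefully, but the shape is the same: rules in, model or counterexample out.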
