packit/sandcastle


Sandcastle

Run untrusted code in a castle (OpenShift pod), which stands in a sandbox.

Usage

The prerequisite is that you're logged into an OpenShift cluster:

$ oc status
 In project Local Project (myproject) on server https://localhost:8443

The simplest use case is to invoke a command in a new OpenShift pod:

from sandcastle import Sandcastle

s = Sandcastle(
    image_reference="docker.io/this-is-my/image:latest",
    k8s_namespace_name="myproject"
)
output = s.run(command=["ls", "-lha"])

These things will happen:

  • A new pod is created, using the image set in image_reference.
  • The library actively waits for the pod to finish.
  • If the pod terminates with a return code greater than 0, an exception is raised.
  • Output of the command is returned from the .run() method.
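
The second bullet, the active wait, can be pictured as a poll loop over the pod's phase. Below is a minimal, illustrative sketch only: get_pod_phase stands in for a Kubernetes API call, and this is not sandcastle's actual implementation.

```python
import time

def wait_for_pod(get_pod_phase, timeout=60, interval=0.1):
    """Poll get_pod_phase() until the pod reaches a terminal phase."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_pod_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not finish in time")

# simulate a pod that goes Pending -> Running -> Succeeded
phases = iter(["Pending", "Running", "Succeeded"])
print(wait_for_pod(lambda: next(phases)))  # prints: Succeeded
```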

Sharing data between sandbox and current pod

This library allows you to share volumes between sandbox and the pod it is running in.

There is a dedicated class and an interface to access this functionality:

  • VolumeSpec class
  • volume_mounts kwarg of Sandcastle constructor

An example is worth a thousand words:

from pathlib import Path
from sandcastle import Sandcastle, VolumeSpec

# the PVC whose name is set in the SANDCASTLE_PVC env var is
# expected to be mounted at /path in the current pod
vs = VolumeSpec(path="/path", pvc_from_env="SANDCASTLE_PVC")

s = Sandcastle(
    image_reference="docker.io/this-is-my/image:latest",
    k8s_namespace_name="myproject",
    volume_mounts=[vs]
)
s.run()
s.exec(command=["bash", "-c", "ls -lha /path"])    # will be empty
s.exec(command=["bash", "-c", "mkdir /path/dir"])  # will create a dir
assert Path("/path/dir").is_dir()                  # should pass
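
Under the hood, pvc_from_env is just an instruction to read the claim name from an environment variable. A minimal sketch of that lookup (resolve_pvc_name is a hypothetical helper for illustration, not sandcastle's API):

```python
import os

def resolve_pvc_name(env_var="SANDCASTLE_PVC"):
    """Return the PVC name stored in the given environment variable."""
    name = os.environ.get(env_var)
    if not name:
        raise RuntimeError(f"{env_var} is not set; is the PVC configured?")
    return name

os.environ["SANDCASTLE_PVC"] = "my-claim"  # normally set by your deployment
print(resolve_pvc_name())  # prints: my-claim
```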

Sharing data by copying it

Sandcastle is able to run the sandbox pod in a different namespace. This improves security since it's trivial to lock down networking of such a namespace: the pod won't be able to access the OpenShift API server or any of your services deployed in the cluster. For more info, check out egress rules and network policies.

When you set up this sandbox namespace, please make sure that the service account of the namespace your app is deployed in can manage pods in the sandbox namespace. This command should help:

$ oc adm -n ${SANDBOX_NAMESPACE} policy add-role-to-user edit system:serviceaccount:${APP_NAMESPACE}:default

Real code (imports added and placeholder values filled in for illustration):

from pathlib import Path
from sandcastle import MappedDir, Sandcastle

# illustrative values, adjust to your environment
local_dir = Path("./stuff")
sandbox_mountpoint = "/sandcastle"
container_image = "docker.io/this-is-my/image:latest"
namespace = "sandbox-namespace"

m_dir = MappedDir(
    local_dir,             # share this dir
    sandbox_mountpoint,    # make it available here in the sandbox
    with_interim_pvc=True  # the data will be placed in an interim volume
)

o = Sandcastle(
    image_reference=container_image,
    k8s_namespace_name=namespace,      # can be a different namespace
    mapped_dir=m_dir,
    working_dir=sandbox_mountpoint,
)
o.run()
# happy execing
o.exec(command=["ls", "-lha", f"{sandbox_mountpoint}/"])

Developing sandcastle

In order to develop this project (and run tests), there are several requirements which need to be met.

  1. Build the test container image using the makefile target make build-test-image.

  2. An OpenShift cluster that you are logged into

    This means that running oc status should show the cluster where you want to run the tests.

    The e2e test test_from_pod builds the current codebase and runs the other e2e tests in a pod to verify the end-to-end functionality. This expects the OpenShift cluster to be deployed in your current environment, so that OpenShift can access the local container images in your container engine daemon. Otherwise the image needs to be pushed somewhere OpenShift can access it.

  3. In the default oc cluster up environment, the tests create the sandbox pod using the default service account, which is assigned to every pod. This SA doesn't have permissions to create or delete pods (so the sandboxing would not work). With this command, the SA is allowed to change any objects in the namespace:

    oc adm policy add-role-to-user edit system:serviceaccount:myproject:default
    
  4. A container engine binary and a running daemon; this is implied by the first point. NOTE: running the tests requires either podman or docker.
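
Putting the requirements together, a typical local setup session might look like this (the myproject namespace name is taken from the example above; it is a cluster-side setup fragment, so adapt it to your environment):

```shell
# 1. build the test image
make build-test-image

# 2. verify you are logged into the cluster you want to test against
oc status

# 3. let the default service account manage objects in the namespace
oc adm policy add-role-to-user edit system:serviceaccount:myproject:default
```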