JuicyCTF, Multi User Juice Shop Platform

Running CTFs and security trainings with OWASP Juice Shop is usually quite tricky: Juice Shop just isn't intended to be used by multiple users at a time. Instructing everybody how to start Juice Shop on their own machine works OK, but takes away too much valuable time.

JuicyCTF gives you the ability to run a separate Juice Shop instance for every participant on a central Kubernetes cluster, so you can run events without the need for local Juice Shop installations.

What it does:

  • dynamically creates new Juice Shop instances when needed
  • runs on a single domain, with a LoadBalancer that routes traffic to each participant's Juice Shop instance
  • backs up challenge progress and automatically re-applies it in case of Juice Shop container restarts
  • cleans up old & unused instances automatically

JuicyCTF high-level architecture diagram

Installation

JuicyCTF runs on Kubernetes; to install it you'll need helm.

If you aren't familiar with helm, try out the helm 3 beta. It's easier to install and easier to use, it's pretty stable, and it no longer has a server-side component; it just runs on your local machine.

helm repo add juicy-ctf https://iteratec.github.io/juicy-ctf/

# for helm <= 2
helm install juicy-ctf/juicy-ctf --name juicy-ctf

# for helm >= 3
helm install juicy-ctf juicy-ctf/juicy-ctf

Installation Guides for specific Cloud Providers

Generally, JuicyCTF runs on pretty much any Kubernetes cluster, but to make things easier for anybody who is new to Kubernetes, we have guides on how to set up a Kubernetes cluster with JuicyCTF installed for some specific cloud providers.

Customizing the Setup

You have several options for setting up the stack, including options to customize the Juice Shop instances to your liking. You can find the default config values under: helm/juicy-ctf/values.yaml

Download & save the file, then tell helm to use your config file instead of the default by running:

helm install -f values.yaml juicy-ctf ./juicy-ctf/helm/juicy-ctf/
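For example, an override file might look like the sketch below. The key names (juiceShop.maxInstances, juiceShop.resources) are illustrative assumptions; verify them against the actual helm/juicy-ctf/values.yaml before use.

```yaml
# Example values.yaml overrides; key names are illustrative,
# check the shipped helm/juicy-ctf/values.yaml for the real ones.
juiceShop:
  maxInstances: 20        # hypothetical cap on simultaneous instances
  resources:
    requests:
      memory: 200Mi       # matches the per-instance defaults described below
      cpu: 200m
    limits:
      memory: 200Mi
      cpu: 200m
```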

Uninstallation

helm delete juicy-ctf
# Also delete all Juice Shop Deployments which still exist
kubectl delete deployment --selector app=juice-shop && kubectl delete service --selector app=juice-shop

FAQ

How much compute resources will the cluster require?

To be on the safe side, calculate with:

  • 1GB memory & 1 CPU overhead for the balancer, redis & co
  • 200MB memory & 0.2 CPU per participant for the individual Juice Shop instances

The numbers above reflect the default resource limits. These can be tweaked, see: Customizing the Setup
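As a quick sanity check, the arithmetic above can be scripted. This is just a back-of-the-envelope helper based on the default limits listed here, not part of JuicyCTF itself:

```shell
#!/bin/sh
# Rough cluster sizing from the default limits:
# 1GB memory & 1 CPU overhead, plus 200MB & 0.2 CPU per participant.
PARTICIPANTS=20
MEMORY_MB=$((1024 + 200 * PARTICIPANTS))
CPU=$(awk "BEGIN { printf \"%.1f\", 1 + 0.2 * $PARTICIPANTS }")
echo "Estimated memory: ${MEMORY_MB} MB"   # → Estimated memory: 5024 MB
echo "Estimated CPU:    ${CPU} cores"      # → Estimated CPU:    5.0 cores
```

So a 20-participant event needs roughly 5GB of memory and 5 CPU cores under the default limits.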

How many users can JuicyCTF handle?

There is no real fixed limit (even though you can configure one 😉). The custom LoadBalancer, through which all traffic for the individual instances flows, can be replicated as much as you'd like. You can also attach a Horizontal Pod Autoscaler to scale the LoadBalancer automatically.

When scaling up, also keep an eye on the redis instance. Make sure it is still able to handle the load.
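As a sketch, attaching a Horizontal Pod Autoscaler to the balancer deployment could look like the manifest below. The deployment name juice-balancer and the CPU target are assumptions; check which deployments your release actually created with kubectl get deployments.

```yaml
# Hypothetical HPA for the custom LoadBalancer; the target deployment
# name "juice-balancer" is an assumption, adjust it to your release.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: juice-balancer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: juice-balancer
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```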

Why a custom LoadBalancer?

There are some special requirements which we didn't find to be easily solved with any pre-built load balancer:

  • Restricting access to a team's deployment to only the members of that team.
  • The load balancer's cookie must be secure and hard to spoof, so it can't be used to access another team's instance.
  • Handling the startup of new instances.

If you have awesome ideas on how to overcome these issues without a custom load balancer, please write to us; we'd love to hear from you!

Why a separate kubernetes deployment for every team?

There are some pretty good reasons for this:

  • The ability to delete the instances of a team separately. Scaling down safely, without removing instances of active teams, is really tricky with a scaled deployment: you can only choose the desired scale, not which pods to keep and which to throw away.
  • Ensuring that pods are still properly associated with their team after a pod gets recreated. This is a non-issue with separate deployments and really hard with scaled deployments.
  • The ability to embed the team name in the deployment name. This seems like a stupid reason, but it makes debugging with plain kubectl SOOO much easier.

Did somebody actually ask any of these questions?

No 😉

Talk with Us!

You can reach us in the #project-juiceshop channel of the OWASP Slack Workspace. We'd love to hear any feedback or usage reports you have. If you are not already in the OWASP Slack Workspace, you can join via this link.
