
Demo: Introduction to Istio Security

This example demonstrates how to leverage Istio's identity and access control policies to help secure microservices running on GKE.

We'll use the Hipstershop sample application to cover:

  • Incrementally adopting Istio mutual TLS authentication across the service mesh
  • Enabling end-user (JWT) authentication for the frontend service
  • Using an Istio access control policy to secure access to the frontend service


Setup
Google Cloud Shell is a browser-based terminal that Google provides to interact with your GCP resources. It is backed by a free Compute Engine instance that comes with many useful tools already installed, including everything required to run this demo.

Click the button below to open the demo instructions in your Cloud Shell:

Open in Cloud Shell

  1. Change into the demo directory.
cd security-intro

Create a GKE Cluster

  1. From Cloud Shell, enable the Kubernetes Engine API.
gcloud services enable container.googleapis.com
  2. Create a GKE cluster using the Istio on GKE add-on. This add-on will provision your GKE cluster with Istio.
gcloud beta container clusters create istio-security-demo \
    --addons=Istio --istio-config=auth=MTLS_PERMISSIVE \
    --zone=us-central1-f \
    --machine-type=n1-standard-2

This Istio installation uses the default MTLS_PERMISSIVE mesh-wide security option, meaning that services in the cluster accept both plaintext and mutual TLS traffic, and send unencrypted traffic by default. In PERMISSIVE mode, you can still enforce strict mutual TLS for individual services, which we'll explore below.

  3. Once the cluster is provisioned, check that Istio is ready by ensuring that all pods are Running or Completed.
kubectl get pods -n istio-system

Deploy the sample application

  1. Label the default namespace for Istio sidecar proxy auto-injection.
kubectl label namespace default istio-injection=enabled
  2. Deploy the sample application.
kubectl apply -f ./hipstershop
  3. Run kubectl get pods -n default to ensure that all pods are Running and Ready.
NAME                                     READY     STATUS    RESTARTS   AGE
adservice-76b5c7bd6b-zsqb8               2/2       Running   0          1m
checkoutservice-86f5c7679c-8ghs8         2/2       Running   0          1m
currencyservice-5749fd7c6d-lv6hj         2/2       Running   0          1m
emailservice-6674bf75c5-qtnd8            2/2       Running   0          1m
frontend-56fdfb866c-tvdm6                2/2       Running   0          1m
loadgenerator-b64fcb8bc-m6nd2            2/2       Running   0          1m
paymentservice-67c6696c54-tgnc5          2/2       Running   0          1m
productcatalogservice-76c6454c57-9zj2v   2/2       Running   0          1m
recommendationservice-78c7676bfb-xqtp6   2/2       Running   0          1m
shippingservice-7bc4bc75bb-kzfrb         2/2       Running   0          1m

🔎 Each pod has 2 containers because each pod now includes the injected Istio sidecar proxy. (The cartservice and redis pods live in the cart namespace, but they will not be used in this demo.)

Now we're ready to enforce security policies for this application.

Authentication
Authentication refers to identity: Who is this service? Who is this end user? Can I trust that they are who they say they are?

One benefit of using Istio is that it provides uniformity for both service-to-service and end user-to-service authentication. Istio abstracts away authentication from your application code by tunneling all service-to-service communication through the Envoy sidecar proxies. And by using a centralized public-key infrastructure, Istio provides consistency to make sure authentication is set up properly across your mesh. Further, Istio allows you to adopt mTLS on a per-service basis, or easily toggle end-to-end encryption for your entire mesh. Let's see how.

Enable mTLS for the frontend service

Right now, the cluster is in PERMISSIVE mTLS mode, meaning all service-to-service ("east-west") mesh traffic is unencrypted by default. First, let's enable mTLS for the frontend microservice only.

To encrypt traffic to and from the frontend, we need two Istio resources: a Policy (to require mTLS for the frontend's inbound requests) and a DestinationRule (to have clients use mTLS when calling the frontend).
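
Together, these resources look roughly like the following sketch (the Policy name frontend-authn matches the one referenced later in this demo; the DestinationRule name is illustrative):

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: frontend-authn
  namespace: default
spec:
  targets:
  - name: frontend        # apply only to the frontend service
  peers:
  - mtls: {}              # require mutual TLS for inbound requests
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend-mtls     # illustrative name
  namespace: default
spec:
  host: frontend.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL  # clients present Istio-issued certs when calling the frontend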

  1. View both these resources in ./istio/mtls-frontend.yaml.

  2. Apply the resources to the cluster:

kubectl apply -f ./istio/mtls-frontend.yaml
  3. Verify that mTLS is enabled for the frontend by trying to reach it from the istio-proxy container of a different mesh service.

First, try to reach frontend from productcatalogservice with plain HTTP.

kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl http://frontend:80/ -o /dev/null -s -w '%{http_code}\n'

You should see:

command terminated with exit code 56

Exit code 56 means "failure to receive network data." This is expected, because the frontend now requires mTLS and rejects plaintext requests.

  4. Now run the same command, but with HTTPS, passing the client-side key and certs for productcatalogservice.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl https://frontend:80/ -o /dev/null -s -w '%{http_code}\n' --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k

You should now see a 200 - OK response code.

🔎 The TLS key and certificate for productcatalogservice come from Istio's Citadel component, running centrally. Citadel generates keys and certs for all mesh services, even when cluster-wide mTLS is set to PERMISSIVE.

Enable mTLS for the default namespace

Now that we've adopted mTLS for one service, let's enforce mTLS for the entire default namespace. Doing so will automatically encrypt service-to-service traffic for every Hipstershop service running in the default namespace.

  1. Open istio/mtls-default-ns.yaml. Notice that we're using the same resources (Policy and DestinationRule) for namespace-wide mTLS as we did for service-specific mTLS.

  2. Apply the resources:

kubectl apply -f ./istio/mtls-default-ns.yaml
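
For reference, the namespace-wide resources look roughly like this sketch (a namespace-wide Policy must be named default and omits the targets field, and the DestinationRule uses a wildcard host):

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default           # a namespace-wide Policy must be named "default"
  namespace: default
spec:
  peers:
  - mtls: {}              # no targets field: applies to every service in the namespace
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: default
spec:
  host: "*.default.svc.cluster.local"   # all services in the default namespace
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL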

From here, we could enable mTLS globally across the mesh using the gcloud container clusters update command, with the --istio-config=auth=MTLS_STRICT flag. Read more in the Istio on GKE documentation.

You can also manually enable mesh-wide mTLS by applying a MeshPolicy resource to the cluster.
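
A MeshPolicy looks roughly like this (a sketch; a matching mesh-wide DestinationRule is also needed so that clients send mutual TLS):

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default           # the mesh-wide policy must be named "default"
spec:
  peers:
  - mtls: {}              # require mutual TLS for every service in the mesh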

Overall, we hope this section showed how you can incrementally adopt encrypted service communication using Istio, at your own pace, without any code changes.

Add End-User JWT Authentication

Now that we've enabled service-to-service authentication in the default namespace, let's enforce end-user ("origin") authentication for the frontend service, using JSON Web Tokens (JWT).

⚠️ We recommend using JWT authentication only alongside mTLS (and not JWT by itself), because JWTs are signed, not encrypted; without transport security, an intercepted token could be replayed to compromise your service mesh. In this section, we're building on the mutual TLS authentication already configured for the default namespace.

First, we'll create an Istio Policy to enforce JWT authentication for inbound requests to the frontend service.

  1. Open the resource in ./istio/jwt-frontend.yaml.

🔎 This Policy uses Istio's test JSON Web Key Set (jwksUri), the set of public keys used to verify incoming JWTs. When we apply this Policy, Istio's Pilot component will push this key set down to the frontend's sidecar proxy, allowing it to accept or deny requests.

Also note that this resource updates the existing frontend-authn Policy we created in the last section; this is because Istio only allows one service-matching Policy to exist at a time.
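
The updated Policy looks roughly like this sketch (the issuer and jwksUri shown here are Istio's published test credentials; the exact values in the demo file may differ):

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: frontend-authn
  namespace: default
spec:
  targets:
  - name: frontend
  peers:
  - mtls: {}              # keep requiring mutual TLS (transport authentication)
  origins:
  - jwt:
      issuer: "testing@secure.istio.io"   # Istio's test issuer (assumed)
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/jwks.json"
  principalBinding: USE_ORIGIN            # use the JWT identity as the request principal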

  2. Apply the updated frontend Policy:
kubectl apply -f ./istio/jwt-frontend.yaml
  3. Set a local TOKEN variable by downloading Istio's sample demo JWT. We'll use this TOKEN on the client side to make requests to the frontend.
TOKEN=$(curl -k https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/demo.jwt -s); echo $TOKEN
  4. First, try to reach the frontend with TLS keys/certs but without a JWT.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl https://frontend:80/ -o /dev/null -s -w '%{http_code}\n' \
--key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k

You should see a 401 - Unauthorized response code.

  5. Now try to reach the frontend service with TLS keys/certs and a JWT:
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl --header "Authorization: Bearer $TOKEN" https://frontend:80/ -o /dev/null -s -w '%{http_code}\n' \
--key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k

You should see a 200 response code.

🎉 Well done! You just secured the frontend service with both transport and origin authentication.

Authorization
Unlike authentication, which refers to the "who," authorization refers to the "what": What is this service or user allowed to do?

Istio authorization is modeled on Kubernetes Role-Based Access Control (RBAC), which maps "subjects" (such as service accounts) to roles.

Istio's access control model consists of three building blocks:

  1. ServiceRole - an abstract persona with specific permissions (example: a frontend-viewer ServiceRole can make GET requests to the frontend.)
  2. Subject(s) - concrete users or services (example: the ServiceAccount running productcatalogservice)
  3. ServiceRoleBinding - a mapping of ServiceRole to Subject(s). (example: only the ServiceAccount running productcatalogservice can make GET requests to the frontend.)

Putting them together, we get:

ServiceRole + Subject(s) = ServiceRoleBinding
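
Translated into YAML, the example above might look like this sketch (assuming productcatalogservice runs under a ServiceAccount of the same name; resource names are illustrative):

apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: frontend-viewer
  namespace: default
spec:
  rules:
  - services: ["frontend.default.svc.cluster.local"]
    methods: ["GET"]      # this persona may only make GET requests to the frontend
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: frontend-viewer-binding
  namespace: default
spec:
  subjects:
  - user: "cluster.local/ns/default/sa/productcatalogservice"   # assumed ServiceAccount
  roleRef:
    kind: ServiceRole
    name: frontend-viewer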

Let's put this into action.

Enable authorization (RBAC) for the frontend

  1. Enable authorization for the frontend service only. (A sketch of the resource in this file appears after this list.)
kubectl apply -f ./istio/enable-authz.yaml
  2. Run the same GET request to the frontend as we did in the last section (with TLS key/cert and JWT).
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl --header "Authorization: Bearer $TOKEN" https://frontend:80/ -o /dev/null -s -w '%{http_code}\n' \
--key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k

You should receive a 403 - Forbidden response code. This is expected, because we just locked down the frontend service to allow only whitelisted subjects.
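
The resource in enable-authz.yaml looks roughly like this sketch (on Istio 1.1+ the kind is ClusterRbacConfig; on Istio 1.0 it is RbacConfig):

apiVersion: rbac.istio.io/v1alpha1
kind: ClusterRbacConfig
metadata:
  name: default              # a cluster may have only one, and it must be named "default"
spec:
  mode: ON_WITH_INCLUSION    # enforce authorization only for the services listed below
  inclusion:
    services: ["frontend.default.svc.cluster.local"]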

Control access to the frontend

  1. Open the YAML file at ./istio/rbac-frontend.yaml.

🔎 The ServiceRole resource, frontend-viewer, specifies an abstract persona that can make GET and HEAD requests to the frontend.

The ServiceRoleBinding maps the frontend-viewer role to only those subjects that have a hello:world request header. Also note how, instead of specifying an explicit ServiceAccount that can make requests, we're using Istio's constraints and properties feature, which allows us to dynamically select subjects based on abstract selectors. (See the sketch below.)
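
The binding portion looks roughly like this sketch (the ServiceRole has the same shape as the earlier authorization sketch, with HEAD added to methods):

apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: frontend-binding    # illustrative name
  namespace: default
spec:
  subjects:
  - properties:
      request.headers[hello]: "world"   # select callers by request header, not ServiceAccount
  roleRef:
    kind: ServiceRole
    name: frontend-viewer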

  2. Apply the RBAC resources to the cluster:
kubectl apply -f ./istio/rbac-frontend.yaml
  3. Make another request from productcatalogservice to the frontend. This time, pass the hello:world request header.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl --header "Authorization: Bearer $TOKEN" --header "hello:world" \
   https://frontend:80/ -o /dev/null -s -w '%{http_code}\n' \
   --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k

You should now see a 200 response code.

🔎 From here, if you wanted to expand authorization to the entire default namespace, you could apply similar resources. Learn more in the Istio authorization documentation.

🎉 Nice job! You just configured a fine-grained Istio access control policy for one service. We hope this section demonstrated how Istio can support specific, service-level authorization policies using a set of familiar, Kubernetes-based RBAC resources.

Cleanup
To avoid incurring additional costs, delete the GKE cluster created in this demo:

gcloud container clusters delete istio-security-demo --zone=us-central1-f

Or, to keep your GKE cluster with Istio and Hipstershop installed, delete only the Istio security resources:

kubectl delete -f ./istio

What's next?

If you're interested in learning more about Istio's security features, see the Istio security documentation at https://istio.io/docs/concepts/security/.
