feat: Update Istio 1.3.0 (#237)
lordofthejars committed Sep 20, 2019
1 parent 60589a7 commit bbdc66d
Showing 8 changed files with 243 additions and 48 deletions.
2 changes: 1 addition & 1 deletion documentation/antora.yml
@@ -1,6 +1,6 @@
name: istio-tutorial
title: Istio Tutorial
version: '1.1.x'
version: '1.3.x'
nav:
- modules/ROOT/nav.adoc
- modules/advanced/nav.adoc
10 changes: 5 additions & 5 deletions documentation/modules/ROOT/pages/1setup.adoc
@@ -109,13 +109,13 @@ NOTE: In this tutorial, you will often be polling the customer endpoint with `cu
#!/bin/bash
# Mac OS:
curl -L https://github.com/istio/istio/releases/download/1.1.9/istio-1.1.9-osx.tar.gz | tar xz
curl -L https://github.com/istio/istio/releases/download/1.3.0/istio-1.3.0-osx.tar.gz | tar xz
# Fedora/RHEL:
curl -L https://github.com/istio/istio/releases/download/1.1.9/istio-1.1.9-linux.tar.gz | tar xz
curl -L https://github.com/istio/istio/releases/download/1.3.0/istio-1.3.0-linux.tar.gz | tar xz
# Both:
cd istio-1.1.9
cd istio-1.3.0
export ISTIO_HOME=`pwd`
export PATH=$ISTIO_HOME/bin:$PATH
@@ -124,9 +124,9 @@ export PATH=$ISTIO_HOME/bin:$PATH
[source,bash,subs="+macros,+attributes"]
----
oc apply -f install/kubernetes/helm/istio-init/files/crd-11.yaml
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do oc apply -f $i; done
or
kubectl apply -f install/kubernetes/helm/istio-init/files/crd-11.yaml
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
oc apply -f install/kubernetes/istio-demo.yaml
178 changes: 171 additions & 7 deletions documentation/modules/ROOT/pages/5circuit-breaker.adoc
@@ -91,7 +91,9 @@ customer => preference => recommendation v1 from '2039379827-h58vw': 130
[#timeout]
== Timeout

Wait only N seconds before giving up and failing. At this point, no other virtual service nor destination rule (in `tutorial` namespace) should be in effect. To check it run `kubectl get virtualservice` `kubectl get destinationrule` and if so `kubectl delete virtualservice virtualservicename -n tutorial{namespace-suffix}` and `kubectl delete destinationrule destinationrulename -n tutorial{namespace-suffix}`
Wait only N seconds before giving up and failing. At this point, no other virtual service nor destination rule (in `tutorial` namespace) should be in effect.

To check, run `kubectl get virtualservice` and `kubectl get destinationrule`; if any are listed, delete them with `kubectl delete virtualservice virtualservicename -n tutorial{namespace-suffix}` and `kubectl delete destinationrule destinationrulename -n tutorial{namespace-suffix}`.
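
A compact way to run the same check in a single command (a sketch; adjust the namespace if yours differs):

[source,bash,subs="+macros,+attributes"]
----
# lists any leftover VirtualServices and DestinationRules in the tutorial namespace
kubectl get virtualservice,destinationrule -n tutorial{namespace-suffix}
----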

NOTE: You will deploy Docker images that were previously built. If you want to build recommendation to add a timeout, visit: xref:2build-microservices.adoc#buildrecommendationv2-timeout[Modify recommendation:v2 to have timeout]
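
Independently of how the application itself is built, an Istio-level timeout is usually expressed on the route of a `VirtualService`. A minimal sketch for illustration only — the host and the 1-second value are assumptions, not this repository's actual file:

[source,yaml]
----
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
    timeout: 1.000s   # give up and fail the request after 1 second
----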

@@ -157,11 +159,97 @@ or you can run:
[#nocircuitbreaker]
=== Load test without circuit breaker

Let's perform a load test in our system with `siege`. We'll have 40 clients sending 1 concurrent requests each:
Let's perform a load test in our system with `siege`. We'll have 4 concurrent clients, each sending 10 requests:

[source,bash,subs="+macros,+attributes"]
----
siege -r 10 -c 4 -v http://istio-ingressgateway-istio-system.{appdomain}/{path}
----

You should see an output similar to this:

image:siege_ok.png[siege output with all successful requests]

All of the requests to our system were successful.

Now let's make things a bit more interesting.

We will make pod `recommendation-v2` fail 100% of the time.
Get one of the pod names from your system and substitute it into the following command accordingly:

[source,bash,subs="+macros,+attributes"]
----
oc exec -it -n tutorial{namespace-suffix} $(oc get pods -n tutorial{namespace-suffix}|grep recommendation-v2|awk '{ print $1 }'|head -1) -c recommendation /bin/bash
or
kubectl exec -it -n tutorial{namespace-suffix} $(kubectl get pods -n tutorial{namespace-suffix}|grep recommendation-v2|awk '{ print $1 }'|head -1) -c recommendation /bin/bash
----

You will be inside the application container of your pod `recommendation-v2-2036617847-spdrb`. Now execute:

[source,bash,subs="+macros,+attributes"]
----
curl localhost:8080/misbehave
exit
----

Open a new terminal window and run the next commands to inspect the logs of this failing pod.

First you need the pod name:

[source,bash,subs="+macros,+attributes"]
----
oc get pods -n tutorial
or
kubectl get pods -n tutorial
NAME READY STATUS RESTARTS AGE
customer-3600192384-fpljb 2/2 Running 0 17m
preference-243057078-8c5hz 2/2 Running 0 15m
recommendation-v1-60483540-9snd9 2/2 Running 0 12m
recommendation-v2-2815683430-vpx4p 2/2 Running 0 15s
----

And get the pod name of `recommendation-v2`.
In the previous output, it is `recommendation-v2-2815683430-vpx4p`.
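
If you prefer not to copy the pod name by hand, a label selector can fetch it directly (a sketch, assuming the `app` and `version` labels used later in this chapter):

[source,bash,subs="+macros,+attributes"]
----
# grab the name of the first recommendation-v2 pod
kubectl get pods -n tutorial{namespace-suffix} -l app=recommendation,version=v2 -o jsonpath='{.items[0].metadata.name}'
----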

Then check its log:

[source,bash,subs="+macros,+attributes"]
----
oc logs recommendation-v2-2815683430-vpx4p -c recommendation -n tutorial{namespace-suffix}
or
kubectl logs recommendation-v2-2815683430-vpx4p -c recommendation -n tutorial{namespace-suffix}
recommendation request from '99634814-sf4cl': 10
recommendation request from '99634814-sf4cl': 11
recommendation request from '99634814-sf4cl': 12
recommendation request from '99634814-sf4cl': 13
----

Scale up the recommendation v2 service to two instances:

[source,bash,subs="+macros,+attributes"]
----
oc scale deployment recommendation-v2 --replicas=2 -n tutorial{namespace-suffix}
or
kubectl scale deployment recommendation-v2 --replicas=2 -n tutorial{namespace-suffix}
----

Now, you've got one instance of `recommendation-v2` that is misbehaving and another one that is working correctly.
Let's redirect all traffic to `recommendation-v2`:

[source,bash,subs="+macros,+attributes"]
----
siege -r 40 -c 1 -v http://customer-tutorial{namespace-suffix}.{appdomain}
kubectl create -f link:{github-repo}/{istiofiles-dir}/destination-rule-recommendation-v1-v2.yml[istiofiles/destination-rule-recommendation-v1-v2.yml] -n tutorial{namespace-suffix}
kubectl create -f link:{github-repo}/{istiofiles-dir}/virtual-service-recommendation-v2.yml[istiofiles/virtual-service-recommendation-v2.yml] -n tutorial{namespace-suffix}
----
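
The `virtual-service-recommendation-v2.yml` file itself is not shown in this diff; a minimal sketch of a `VirtualService` that sends all traffic to the v2 subset could look like the following (the host and subset name are assumptions based on the destination rule applied above):

[source,yaml]
----
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v2   # assumed subset name defined in destination-rule-recommendation-v1-v2.yml
      weight: 100
----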

Let's perform a load test in our system with `siege`.
We'll have 4 concurrent clients, each sending 10 requests:

[source,bash,subs="+macros,+attributes"]
----
siege -r 10 -c 4 -v http://istio-ingressgateway-istio-system.{appdomain}/{path}
----

You should see an output similar to this:
@@ -170,18 +258,94 @@ image:siege_ok.png[siege output with all successful requests]

All of the requests to our system were successful.

So the *automatic* retries are working as expected.
So far so good: the error is never sent back to the client.
But inspect the logs of the failing pod again:

IMPORTANT: Substitute the pod name with your own pod name.

[source,bash,subs="+macros,+attributes"]
----
oc logs recommendation-v2-2815683430-vpx4p -c recommendation -n tutorial{namespace-suffix}
or
kubectl logs recommendation-v2-2815683430-vpx4p -c recommendation -n tutorial{namespace-suffix}
recommendation request from '99634814-sf4cl': 35
recommendation request from '99634814-sf4cl': 36
recommendation request from '99634814-sf4cl': 37
recommendation request from '99634814-sf4cl': 38
----

Notice that the request counter has increased by roughly 20.
The reason is that requests are still able to reach the failing service: even though every request to the failing pod fails, Istio keeps sending traffic to it.

This is where the _Circuit Breaker_ comes into the scene.

[#circuitbreaker]
=== Load test with circuit breaker

Now let's see what is the behavior of the system running `siege` again but having 20 concurrent requests.
Circuit breaker and pool ejection are used to avoid reaching a failing pod for a specified amount of time.
In this way, when several consecutive errors occur, the failing pod is ejected from the set of eligible pods and further requests are no longer sent to that instance but only to healthy ones.
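
The referenced policy file is not shown in this diff; a minimal sketch of a `DestinationRule` that combines a connection-pool limit with outlier detection (pool ejection) might look like the following — the host, subset names and threshold values are assumptions for illustration, not the repository's exact contents:

[source,yaml]
----
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1   # open the circuit when more than one request is pending
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1           # eject a pod after a single consecutive error
      interval: 1s                   # how often pods are scanned for ejection
      baseEjectionTime: 10m          # how long an ejected pod stays out of the pool
      maxEjectionPercent: 100
  subsets:
  - name: version-v1
    labels:
      version: v1
  - name: version-v2
    labels:
      version: v2
----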

[source,bash,subs="+macros,+attributes"]
----
kubectl replace -f link:{github-repo}/{istiofiles-dir}/destination-rule-recommendation_cb_policy_version_v2.yml[istiofiles/destination-rule-recommendation_cb_policy_version_v2.yml] -n tutorial{namespace-suffix}
----

[source,bash,subs="+macros,+attributes"]
----
siege -r 10 -c 4 -v http://istio-ingressgateway-istio-system.{appdomain}/{path}
----

You should see an output similar to this:

image:siege_ok.png[siege output with all successful requests]

All of the requests to our system were successful.

But now inspect again the logs of the failing pod:

[source,bash,subs="+macros,+attributes"]
----
siege -r 2 -c 20 -v http://customer-tutorial{namespace-suffix}.{appdomain}
oc logs recommendation-v2-2815683430-vpx4p -c recommendation -n tutorial{namespace-suffix}
or
kubectl logs recommendation-v2-2815683430-vpx4p -c recommendation -n tutorial{namespace-suffix}
recommendation request from '99634814-sf4cl': 38
recommendation request from '99634814-sf4cl': 39
recommendation request from '99634814-sf4cl': 40
----

image:siege_cb_503.png[siege output with some 503 requests due to open circuit breaker]
IMPORTANT: Substitute the pod name with your own pod name.

Now requests are sent to this pod only once or twice before the circuit is tripped and the pod is ejected.
After that, no further requests are sent to the failing pod.

=== Clean up

You can run siege multiple times, but in all of the executions you should see some `503` errors being displayed in the results. That's the circuit breaker being opened whenever Istio detects more than 1 pending request being handled by the instance/pod.
Remove Istio resources:

[source,bash,subs="+macros,+attributes"]
----
kubectl delete -f link:{github-repo}/{istiofiles-dir}/destination-rule-recommendation_cb_policy_version_v2.yml[istiofiles/destination-rule-recommendation_cb_policy_version_v2.yml] -n tutorial{namespace-suffix}
kubectl delete -f link:{github-repo}/{istiofiles-dir}/virtual-service-recommendation-v2.yml[istiofiles/virtual-service-recommendation-v2.yml] -n tutorial{namespace-suffix}
----

Scale `recommendation-v2` back down to one instance:

[source,bash,subs="+macros,+attributes"]
----
oc scale deployment recommendation-v2 --replicas=1 -n tutorial{namespace-suffix}
or
kubectl scale deployment recommendation-v2 --replicas=1 -n tutorial{namespace-suffix}
----

Restart `recommendation-v2` pod:

[source,bash,subs="+macros,+attributes"]
----
oc delete pod -l app=recommendation,version=v2
or
kubectl delete pod -l app=recommendation,version=v2
----
28 changes: 26 additions & 2 deletions documentation/modules/ROOT/pages/8mTLS.adoc
@@ -18,6 +18,30 @@ In this chapter, we are going to see how to secure the communication between all
[#testingtls]
== Testing mTLS

Depending on how you install Istio, mTLS may or may not be enabled.
If you have followed this guide and installed the demo profile (xref:1setup.adoc[Setup]), then mTLS is not enabled.

To check whether mTLS is enabled, run the next command:

[source, bash]
----
istioctl authn tls-check $(oc get pods -n tutorial{namespace-suffix}|grep customer|awk '{ print $1 }'|head -1) customer.tutorial.svc.cluster.local
or
istioctl authn tls-check $(kubectl get pods -n tutorial{namespace-suffix}|grep customer|awk '{ print $1 }'|head -1) customer.tutorial.svc.cluster.local
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
customer.tutorial.svc.cluster.local:8000 OK mTLS HTTP default/ default/
----

If the `CLIENT` column shows `HTTP`, it means that mTLS is not enabled.

To enable mTLS:

[source,bash,subs="+macros,+attributes"]
----
kubectl create -f link:{github-repo}/{istiofiles-dir}/authentication-enable-tls.yml[istiofiles/authentication-enable-tls.yml] -n tutorial{namespace-suffix}
kubectl create -f link:{github-repo}/{istiofiles-dir}/destination-rule-tls.yml[istiofiles/destination-rule-tls.yml] -n tutorial{namespace-suffix}
----
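
Those two files are not shown in this diff; conceptually, the first typically defines an authentication `Policy` requiring mTLS from peers, and the second a `DestinationRule` telling clients to originate mTLS. A sketch under that assumption (names and the host pattern are illustrative):

[source,yaml]
----
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
spec:
  peers:
  - mtls: {}            # require mutual TLS on the server side
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
spec:
  host: "*.tutorial.svc.cluster.local"   # assumed: all services in the tutorial namespace
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL                 # clients originate mutual TLS
----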

You can also check mTLS by _sniffing_ traffic between services, which is a bit more tedious. Open a new terminal tab and run the next command:

@@ -29,7 +53,6 @@ or in Kubernetes:
CUSTOMER_POD=$(kubectl get pod | grep cust | awk '{ print $1}' )
oc exec -it $CUSTOMER_POD -c istio-proxy /bin/bash # <2>
or in Kubernetes:
@@ -79,7 +102,7 @@ Now, let's disable _TLS_:

[source, bash]
----
kubectl create -f istiofiles/disable-mtls.yml
kubectl replace -f istiofiles/disable-mtls.yml
----

And execute again:
@@ -115,6 +138,7 @@ Now, you can see that since there is no _TLS_ enabled, the information is not sh
[source,bash]
----
kubectl delete -f istiofiles/disable-mtls.yml
kubectl delete -f istiofiles/destination-rule-tls.yml
----

or you can run:
17 changes: 10 additions & 7 deletions istiofiles/acl-blacklist.yml
@@ -1,17 +1,20 @@
apiVersion: "config.istio.io/v1alpha2"
kind: denier
kind: handler
metadata:
  name: denycustomerhandler
spec:
  status:
    code: 7
    message: Not allowed
  compiledAdapter: denier
  params:
    status:
      code: 7
      message: Not allowed
---
apiVersion: "config.istio.io/v1alpha2"
kind: checknothing
kind: instance
metadata:
  name: denycustomerrequests
spec:
  compiledTemplate: checknothing
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
@@ -20,6 +23,6 @@ metadata:
spec:
  match: destination.labels["app"] == "preference" && source.labels["app"]=="customer"
  actions:
  - handler: denycustomerhandler.denier
    instances: [ denycustomerrequests.checknothing ]
  - handler: denycustomerhandler
    instances: [ denycustomerrequests ]

18 changes: 11 additions & 7 deletions istiofiles/acl-whitelist.yml
@@ -1,17 +1,21 @@
apiVersion: "config.istio.io/v1alpha2"
kind: listchecker
kind: handler
metadata:
  name: preferencewhitelist
spec:
  overrides: ["recommendation"]
  blacklist: false
  compiledAdapter: listchecker
  params:
    overrides: ["recommendation"]
    blacklist: false
---
apiVersion: "config.istio.io/v1alpha2"
kind: listentry
kind: instance
metadata:
  name: preferencesource
spec:
  value: source.labels["app"]
  compiledTemplate: listentry
  params:
    value: source.labels["app"]
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
@@ -20,6 +24,6 @@ metadata:
spec:
  match: destination.labels["app"] == "preference"
  actions:
  - handler: preferencewhitelist.listchecker
  - handler: preferencewhitelist
    instances:
    - preferencesource.listentry
    - preferencesource
