**Is this a BUG or FEATURE REQUEST?:** BUG
Did you review existing epics or issues to identify if this is already being worked on? (please try to add the correct labels and epics):
Bug:
Y
What Version of Istio and Kubernetes are you using, where did you get Istio from, Installation details
istioctl version 0.2.7
kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-14T06:55:55Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.4-1+1540c973d4ff9d", GitCommit:"1540c973d4ff9da2d6a204a7709084488a459ed4", GitTreeState:"clean", BuildDate:"2017-10-06T15:08:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Is Istio Auth enabled or not?
No. Installed using istio.yaml.
What happened:
I installed Istio on an IBM Container Service (Armada) cluster with 3 worker nodes; all Istio pods were scheduled on one of the nodes. I monitored the CPU and memory usage of the nodes and pods while driving load to a microservices benchmark app. In the example below, the load ran for about 40 minutes, and from the start the memory usage of the Mixer pod kept growing. If the load continues, the node runs out of memory and Kubernetes restarts all pods running on it. After stopping the load, the memory usage goes down only very slowly, and the CPU usage of the Mixer pod remains very high.
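For reference, the node- and pod-level usage was observed the usual way with `kubectl top` (assuming Heapster/metrics are available in the cluster); the `istio-system` namespace below is the default from istio.yaml:

```sh
# Node-level CPU/memory: the node hosting the Istio pods fills up over time
kubectl top nodes

# Per-pod usage in the Istio namespace: the mixer pod's memory keeps growing under load
kubectl top pods -n istio-system
```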


What you expected to happen:
Mixer memory usage should stay roughly stable (or be reclaimed) under sustained load, and its CPU usage should drop back down once the load stops, rather than growing until the node runs out of memory.
How to reproduce it:
Drive continuous load to a microservices application through Istio 0.2.7 with the Mixer enabled.
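A minimal sketch of such a load driver (the `GATEWAY_URL` variable and request path are placeholders for the benchmark app's actual ingress address; any HTTP load generator would do):

```sh
# Continuous request loop against the benchmark app's endpoint.
# GATEWAY_URL is a placeholder for the app's ingress address.
while true; do
  curl -s -o /dev/null "http://$GATEWAY_URL/"
done
```

Leaving this running for ~40 minutes while watching `kubectl top pods -n istio-system` reproduces the steadily growing Mixer memory usage described above.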
Feature Request:
N
Describe the feature: