
Problem when running IOChaos #2305

Closed
faraktingi opened this issue Sep 14, 2021 · 38 comments
Assignees
Labels
component/daemon lifecycle/frozen type/bug Report of an issue or malfunction.

Comments

@faraktingi

Bug Report

What version of Kubernetes are you using?
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0+4c3480d", GitCommit:"4c3480dcd4299c3b3e9a75e28d643177600e7d72", GitTreeState:"clean", BuildDate:"2021-07-09T00:02:08Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}

What version of Chaos Mesh are you using?
Controller manager Version: version.Info{GitVersion:"v2.0.1", GitCommit:"2989e66ed8c1a9815cba025664a72ed59422c73d", BuildDate:"2021-08-26T10:04:06Z", GoVersion:"go1.15.11", Compiler:"gc", Platform:"linux/amd64"}

What did you do?
Chaos Mesh was installed by executing the following command:
helm install chaos-mesh chaos-mesh/chaos-mesh -n=chaos-testing --set chaosDaemon.runtime=crio --set chaosDaemon.socketPath=/var/run/crio/crio.sock
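
The same runtime settings can also be kept in a values file instead of repeated --set flags; a minimal sketch (keys taken directly from the command above, filename illustrative):

```yaml
# values.yaml -- equivalent to the --set flags above
chaosDaemon:
  runtime: crio
  socketPath: /var/run/crio/crio.sock
```

which would be installed with `helm install chaos-mesh chaos-mesh/chaos-mesh -n chaos-testing -f values.yaml`.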

Trying to do an I/O experiment:
Here is the experiment yaml:

kind: IOChaos
apiVersion: chaos-mesh.org/v1alpha1
metadata:
  name: chaosiomqtest1
  namespace: cp4i
  annotations:
    experiment.chaos-mesh.org/pause: 'true'
spec:
  selector:
    namespaces:
      - cp4i
    labelSelectors:
      statefulset.kubernetes.io/pod-name: mq-ddd-qm-dev-ibm-mq-0
  mode: one
  action: latency
  delay: 100ms
  percent: 100
  volumePath: /opt/mqm/bin/
  duration: 5m
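
Since kubectl also accepts JSON manifests, an experiment like the one above can be generated programmatically; a sketch (field names taken from the manifest above, function name illustrative):

```python
import json

# Build an IOChaos experiment like the one above as a Python dict;
# the JSON output can be piped to `kubectl apply -f -`. Omitting the
# pause annotation lets the experiment start immediately.
def iochaos_manifest(name, namespace, pod_name, volume_path,
                     delay="100ms", percent=100, duration="5m"):
    return {
        "apiVersion": "chaos-mesh.org/v1alpha1",
        "kind": "IOChaos",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "selector": {
                "namespaces": [namespace],
                "labelSelectors": {
                    "statefulset.kubernetes.io/pod-name": pod_name,
                },
            },
            "mode": "one",
            "action": "latency",
            "delay": delay,
            "percent": percent,
            "volumePath": volume_path,
            "duration": duration,
        },
    }

manifest = iochaos_manifest("chaosiomqtest1", "cp4i",
                            "mq-ddd-qm-dev-ibm-mq-0", "/opt/mqm/bin/")
print(json.dumps(manifest, indent=2))
```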

What did you expect to see?
I/O latency running without error.

What did you see instead?
Got this error:

Failed to apply chaos: cannot find daemonIP on node 10.183.69.209 in related Endpoints {{Endpoints v1} {chaos-daemon chaos-testing /api/v1/namespaces/chaos-testing/endpoints/chaos-daemon c33f3b35-e841-458b-841e-e6edec3f7ad5 2108161 0 2021-09-13 08:17:55 +0000 UTC map[app.kubernetes.io/component:chaos-daemon app.kubernetes.io/instance:chaos-mesh app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:chaos-mesh app.kubernetes.io/part-of:chaos-mesh app.kubernetes.io/version:v2.0.1 helm.sh/chart:chaos-mesh-v2.0.1 service.kubernetes.io/headless:] map[endpoints.kubernetes.io/last-change-trigger-time:2021-09-13T08:17:55Z] [] [] [{kube-controller-manager Update v1 2021-09-13 08:17:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 101 110 100 112 111 105 110 116 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 99 104 97 110 103 101 45 116 114 105 103 103 101 114 45 116 105 109 101 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 109 112 111 110 101 110 116 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 105 110 115 116 97 110 99 101 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 110 97 103 101 100 45 98 121 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 97 109 101 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 112 97 114 116 45 111 102 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 118 101 114 115 105 111 110 34 58 123 125 44 34 102 58 104 101 108 109 46 115 104 47 99 104 97 114 116 34 58 123 125 44 34 102 58 115 101 114 118 105 99 101 46 107 117 98 
101 114 110 101 116 101 115 46 105 111 47 104 101 97 100 108 101 115 115 34 58 123 125 125 125 125],}}]} []}

Output of chaosctl

2021-09-14T15:05:01.964Z DEBUG controller-runtime.manager.events Warning {"object": {"kind":"PodIOChaos","namespace":"cp4i","name":"mq-ddd-qm-dev-ibm-mq-0","uid":"cf1ce43d-0dcb-4cff-9e4c-8aeabfcae68a","apiVersion":"chaos-mesh.org/v1alpha1","resourceVersion":"3482590"}, "reason": "Failed", "message": "cannot find daemonIP on node 10.183.69.209 in related Endpoints {{Endpoints v1} {chaos-daemon chaos-testing /api/v1/namespaces/chaos-testing/endpoints/chaos-daemon c33f3b35-e841-458b-841e-e6edec3f7ad5 2108161 0 2021-09-13 08:17:55 +0000 UTC map[app.kubernetes.io/component:chaos-daemon app.kubernetes.io/instance:chaos-mesh app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:chaos-mesh app.kubernetes.io/part-of:chaos-mesh app.kubernetes.io/version:v2.0.1 helm.sh/chart:chaos-mesh-v2.0.1 service.kubernetes.io/headless:] map[endpoints.kubernetes.io/last-change-trigger-time:2021-09-13T08:17:55Z] [] [] [{kube-controller-manager Update v1 2021-09-13 08:17:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 101 110 100 112 111 105 110 116 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 99 104 97 110 103 101 45 116 114 105 103 103 101 114 45 116 105 109 101 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 109 112 111 110 101 110 116 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 105 110 115 116 97 110 99 101 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 110 97 103 101 100 45 98 121 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 97 109 101 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 112 97 114 
116 45 111 102 34 58 123 125 44 34 102 58 97 112 112 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 118 101 114 115 105 111 110 34 58 123 125 44 34 102 58 104 101 108 109 46 115 104 47 99 104 97 114 116 34 58 123 125 44 34 102 58 115 101 114 118 105 99 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 101 97 100 108 101 115 115 34 58 123 125 125 125 125],}}]} []}"}

@iguoyr iguoyr self-assigned this Sep 15, 2021
@iguoyr
Member

iguoyr commented Sep 15, 2021

@faraktingi Hi, could you follow these suggestions? We need more information about the issue, thanks:

  1. the chaos-daemon endpoint info kubectl get ep -n chaos-testing chaos-daemon -o yaml
  2. the chaos-daemon pod info kubectl get pod -n chaos-testing -l app.kubernetes.io/component=chaos-daemon -o wide

@faraktingi
Author

Thanks for your help @iguoyr.
Here are the command results:

  1. kubectl get ep -n chaos-testing chaos-daemon -o yaml

apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2021-09-13T08:17:55Z"
  creationTimestamp: "2021-09-13T08:17:55Z"
  labels:
    app.kubernetes.io/component: chaos-daemon
    app.kubernetes.io/instance: chaos-mesh
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: chaos-mesh
    app.kubernetes.io/part-of: chaos-mesh
    app.kubernetes.io/version: v2.0.1
    helm.sh/chart: chaos-mesh-v2.0.1
    service.kubernetes.io/headless: ""
  managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:endpoints.kubernetes.io/last-change-trigger-time: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/component: {}
            f:app.kubernetes.io/instance: {}
            f:app.kubernetes.io/managed-by: {}
            f:app.kubernetes.io/name: {}
            f:app.kubernetes.io/part-of: {}
            f:app.kubernetes.io/version: {}
            f:helm.sh/chart: {}
            f:service.kubernetes.io/headless: {}
      manager: kube-controller-manager
      operation: Update
      time: "2021-09-13T08:17:55Z"
  name: chaos-daemon
  namespace: chaos-testing
  resourceVersion: "2108161"
  selfLink: /api/v1/namespaces/chaos-testing/endpoints/chaos-daemon
  uid: c33f3b35-e841-458b-841e-e6edec3f7ad5
  2. kubectl get pod -n chaos-testing -l app.kubernetes.io/component=chaos-daemon -o wide

No resources found in chaos-testing namespace.

@iguoyr
Member

iguoyr commented Sep 15, 2021

@faraktingi It's strange that there is no chaos-daemon pod in the chaos-testing namespace. Please provide more info so we can find the reason, thanks:

  1. the chaos-testing namespace pod kubectl get pod -n chaos-testing -o wide
  2. the chaos-daemon daemonset kubectl describe ds -n chaos-testing chaos-daemon

@faraktingi
Author

  1. kubectl get pod -n chaos-testing -o wide
NAME                                       READY   STATUS    RESTARTS   AGE    IP              NODE           NOMINATED NODE   READINESS GATES
chaos-controller-manager-878c96db7-lxx4x   1/1     Running   0          2d1h   172.30.227.93   10.189.92.10   <none>           <none>
chaos-dashboard-698c75dbd4-8wktj           1/1     Running   0          2d1h   172.30.227.78   10.189.92.10   <none>           <none>
  2. kubectl describe ds -n chaos-testing chaos-daemon
Name:           chaos-daemon
Selector:       app.kubernetes.io/component=chaos-daemon,app.kubernetes.io/instance=chaos-mesh,app.kubernetes.io/name=chaos-mesh
Node-Selector:  <none>
Labels:         app.kubernetes.io/component=chaos-daemon
                app.kubernetes.io/instance=chaos-mesh
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=chaos-mesh
                app.kubernetes.io/part-of=chaos-mesh
                app.kubernetes.io/version=v2.0.1
                helm.sh/chart=chaos-mesh-v2.0.1
Annotations:    deprecated.daemonset.template.generation: 1
                meta.helm.sh/release-name: chaos-mesh
                meta.helm.sh/release-namespace: chaos-testing
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/component=chaos-daemon
                    app.kubernetes.io/instance=chaos-mesh
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=chaos-mesh
                    app.kubernetes.io/part-of=chaos-mesh
                    app.kubernetes.io/version=v2.0.1
                    helm.sh/chart=chaos-mesh-v2.0.1
  Annotations:      rollme: j5199
  Service Account:  chaos-daemon
  Containers:
   chaos-daemon:
    Image:       pingcap/chaos-daemon:v2.0.1
    Ports:       31767/TCP, 31766/TCP
    Host Ports:  31767/TCP, 0/TCP
    Command:
      /usr/local/bin/chaos-daemon
      --runtime
      crio
      --http-port
      31766
      --grpc-port
      31767
      --pprof
      --ca
      /etc/chaos-daemon/cert/ca.crt
      --cert
      /etc/chaos-daemon/cert/tls.crt
      --key
      /etc/chaos-daemon/cert/tls.key
    Environment:
      TZ:  UTC
    Mounts:
      /etc/chaos-daemon/cert from chaos-daemon-cert (ro)
      /host-sys from sys-path (rw)
      /var/run/crio/crio.sock from socket-path (rw)
  Volumes:
   socket-path:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/crio/crio.sock
    HostPathType:
   sys-path:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:
   chaos-daemon-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  chaos-mesh-daemon-certs
    Optional:    false
Events:
  Type     Reason        Age                     From                  Message
  ----     ------        ----                    ----                  -------
  Warning  FailedCreate  4m46s (x49 over 3h24m)  daemonset-controller  Error creating: pods "chaos-daemon-" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used provider restricted: .spec.securityContext.hostIPC: Invalid value: true: Host IPC is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[0].securityContext.capabilities.add: Invalid value: "SYS_PTRACE": capability may not be added spec.containers[0].securityContext.containers[0].hostPort: Invalid value: 31767: Host ports are not allowed to be used spec.containers[0].securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.hostIPC: Invalid value: true: Host IPC is not allowed to be used]
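
Each validation failure in the event above maps to a field of an OpenShift SecurityContextConstraints object. As a hypothetical sketch (names and the exact field set are illustrative, not the project's official SCC), a dedicated SCC granting only what the daemon asks for would look roughly like:

```yaml
# chaos-daemon-scc.yaml -- hypothetical minimal SCC; each permission
# below corresponds to one validation failure in the event above.
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: chaos-daemon
allowPrivilegedContainer: true   # securityContext.privileged: true
allowHostPID: true               # hostPID: true
allowHostIPC: true               # hostIPC: true
allowHostPorts: true             # hostPort 31767
allowedCapabilities:
  - SYS_PTRACE                   # capabilities.add: SYS_PTRACE
volumes:
  - hostPath                     # socket-path and sys-path volumes
  - secret                       # chaos-daemon-cert volume
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
users:
  - system:serviceaccount:chaos-testing:chaos-daemon
```

Granting the built-in privileged SCC to the service account (as done below) achieves the same effect with less setup, at the cost of broader permissions.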

@faraktingi
Author

Perhaps we need to perform this command:

oc adm policy add-scc-to-user privileged system:serviceaccount:chaos-testing:chaos-daemon

What do you think?

@YangKeao
Member

Perhaps we need to perform this command:

oc adm policy add-scc-to-user privileged system:serviceaccount:chaos-testing:chaos-daemon

What do you think?

Yes, as described in the FAQ

@faraktingi
Author

Now I have:
$ kubectl get pod -n chaos-testing -l app.kubernetes.io/component=chaos-daemon -o wide

NAME                 READY   STATUS    RESTARTS   AGE     IP              NODE            NOMINATED NODE   READINESS GATES
chaos-daemon-2brsz   1/1     Running   0          2m37s   172.30.227.85   10.189.92.10    <none>           <none>
chaos-daemon-sz4dg   1/1     Running   0          2m37s   172.30.220.53   10.183.69.209   <none>           <none>
chaos-daemon-tg675   1/1     Running   0          2m37s   172.30.66.55    10.190.81.48    <none>           <none>

@iguoyr
Member

iguoyr commented Sep 15, 2021

@faraktingi Great! Try running the I/O latency chaos again; are there still problems?

@faraktingi
Author

Yes, I restarted it and now got this error:

Failed to apply chaos: rpc error: code = Unknown desc = toda startup takes too long or an error occurs: source: /opt/mqm/bin/, target: /opt/mqm/__chaosfs__bin__

@faraktingi
Author

This command:
./chaosctl debug iochaos -n chaos-testing
gives:

I0915 12:51:23.346253   74236 request.go:668] Waited for 1.000311534s due to client-side throttling, not priority and fairness, request: GET:https://c100-e.us-east.containers.cloud.ibm.com:32691/apis/autoscaling/v1?timeout=32s
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x25940a5]

goroutine 1 [running]:
github.com/chaos-mesh/chaos-mesh/api/v1alpha1.(*ChaosKind).SpawnList(...)
	/Users/faraktingi/Documents/_travail/github/chaos-mesh/api/v1alpha1/kinds.go:77
github.com/chaos-mesh/chaos-mesh/pkg/chaosctl/common.GetChaosList(0x2c9adf8, 0xc00053ae40, 0xc000a8f179, 0x7, 0x0, 0x0, 0x7ffeefbffae4, 0xd, 0x2cb2bd8, 0xc00042c000, ...)
	/Users/faraktingi/Documents/_travail/github/chaos-mesh/pkg/chaosctl/common/common.go:220 +0xc5
github.com/chaos-mesh/chaos-mesh/pkg/chaosctl/cmd.(*DebugOptions).Run(0xc00038a6c0, 0x2a0ae05, 0x7, 0xc0002ef7c0, 0x0, 0x2, 0xc000d94078, 0x0, 0x0)
	/Users/faraktingi/Documents/_travail/github/chaos-mesh/pkg/chaosctl/cmd/debug.go:178 +0x13b
github.com/chaos-mesh/chaos-mesh/pkg/chaosctl/cmd.NewDebugCommand.func5(0xc0004b5b80, 0xc0002ef7c0, 0x0, 0x2, 0x0, 0x0)
	/Users/faraktingi/Documents/_travail/github/chaos-mesh/pkg/chaosctl/cmd/debug.go:129 +0xa5
github.com/spf13/cobra.(*Command).execute(0xc0004b5b80, 0xc0002ef7a0, 0x2, 0x2, 0xc0004b5b80, 0xc0002ef7a0)
	/Users/faraktingi/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850 +0x472
github.com/spf13/cobra.(*Command).ExecuteC(0x398af60, 0xc000713ec8, 0x1, 0x1)
	/Users/faraktingi/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/faraktingi/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
github.com/chaos-mesh/chaos-mesh/pkg/chaosctl/cmd.Execute()
	/Users/faraktingi/Documents/_travail/github/chaos-mesh/pkg/chaosctl/cmd/root.go:78 +0x675
main.main()
	/Users/faraktingi/Documents/_travail/github/chaos-mesh/cmd/chaosctl/main.go:19 +0x25

@faraktingi
Author

faraktingi commented Sep 15, 2021

Any ideas about the last error I got, please?

@iguoyr
Member

iguoyr commented Sep 15, 2021

Failed to apply chaos: rpc error: code = Unknown desc = toda startup takes too long or an error occurs: source: /opt/mqm/bin/, target: /opt/mqm/__chaosfs__bin__

@YangKeao It seems like an issue with toda; could you help with this?

I0915 12:51:23.346253 74236 request.go:668] Waited for 1.000311534s due to client-side throttling, not priority and fairness, request: GET:https://c100-e.us-east.containers.cloud.ibm.com:32691/apis/autoscaling/v1?timeout=32s
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x25940a5]
goroutine 1 [running]:
github.com/chaos-mesh/chaos-mesh/api/v1alpha1.(*ChaosKind).SpawnList(...)
/Users/faraktingi/Documents/_travail/github/chaos-mesh/api/v1alpha1/kinds.go:77

@faraktingi Sorry, it seems chaosctl has a problem debugging iochaos; I'll create a PR to fix it.

@YangKeao
Member

YangKeao commented Sep 16, 2021

@faraktingi Has this pod opened a lot of files? If the process has opened many files, it may take too long for the IOChaos to inject, and it will throw this error. A temporary solution would be to inject the IOChaos before the process opens so many files (but be careful: the IOChaos recovery procedure can also throw the same error if there are too many open files).

We are actively exploring a better way to inject IOChaos more transparently, but we still need some time to investigate and develop.

@faraktingi
Author

Thanks for your message @YangKeao.

Has this pod opened a lot of files?

Here is the result of the following command:
kubectl exec -it mq-ddd-qm-dev-ibm-mq-0 -- /bin/sh -c 'ulimit -a'

core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 514965
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1048576
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
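
Note that ulimit reports limits, not current usage. To check how many files the process has actually opened (which is what matters for toda startup time), one can count the entries in /proc/<pid>/fd. A sketch, assuming a Linux /proc; inside the target pod this would be run against the main process (PID 1), e.g. via kubectl exec:

```shell
# Count a process's open file descriptors. toda prints a few log lines
# per open file, so a large count means slow injection and a likely
# "toda startup takes too long" error.
count_open_fds() {
  ls "/proc/${1:-self}/fd" | wc -l
}

# Against the target pod's main process this would be:
#   kubectl exec mq-ddd-qm-dev-ibm-mq-0 -- sh -c 'ls /proc/1/fd | wc -l'
# Here we count the current shell's fds for illustration.
count_open_fds
```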

@faraktingi
Author

faraktingi commented Sep 16, 2021

As additional information, I get the same error with several other pods in different namespaces:

Failed to apply chaos: rpc error: code = Unknown desc = toda startup takes too long or an error occurs: source: /tmp, target: /__chaosfs__tmp__

So I'm not quite sure this error is related to the number of open files, BTW...

@YangKeao
Member

Could you provide the log of the chaos-daemon on the related node while injecting an IOChaos? If the process hasn't opened many files, the log won't be long (it prints 2-3 lines of log per file). It would help us identify the problem.

@faraktingi
Author

faraktingi commented Sep 16, 2021

Hi, in my chaos-daemon pod logs I have this:
oc logs chaos-daemon-2brsz

Chaos-daemon Version: version.Info{GitVersion:"v2.0.1", GitCommit:"2989e66ed8c1a9815cba025664a72ed59422c73d", BuildDate:"2021-08-26T10:04:06Z", GoVersion:"go1.15.11", Compiler:"gc", Platform:"linux/amd64"}
2021-09-15T10:05:14.998Z	INFO	chaos-daemon	grant access to /dev/fuse
2021-09-15T10:05:14.999Z	INFO	chaos-daemon-server	Starting grpc endpoint	{"address": "0.0.0.0:31767", "runtime": "crio"}
2021-09-15T10:05:14.999Z	INFO	chaos-daemon-server	Starting http endpoint	{"address": "0.0.0.0:31766"}
2021-09-16T12:22:03.495Z	INFO	chaos-daemon-server	applying io chaos	{"Request": "actions:\"[{\\\"type\\\":\\\"latency\\\",\\\"path\\\":\\\"\\\",\\\"percent\\\":100,\\\"faults\\\":[{\\\"errno\\\":0,\\\"weight\\\":1}],\\\"latency\\\":\\\"100ms\\\",\\\"source\\\":\\\"chaos-testing/iotest2\\\"}]\" volume:\"/tmp\" container_id:\"cri-o://3ac96b23fff4c869fbbf5ba2f5a7268fc21e37f1856114eec333fe1574fbf183\" enterNS:true"}
2021-09-16T12:22:03.495Z	INFO	chaos-daemon-server	the length of actions	{"length": 1}
2021-09-16T12:22:03.501Z	INFO	chaos-daemon-server	executing	{"cmd": "/usr/local/bin/toda --path /tmp --verbose info"}
2021-09-16T12:22:03.501Z	INFO	background-process-manager	build command	{"command": "/usr/local/bin/nsexec -l -p /proc/119186/ns/pid -m /proc/119186/ns/mnt -- /usr/local/bin/toda --path /tmp --verbose info"}
2021-09-16T12:22:03.506Z	INFO	chaos-daemon-server	Waiting for toda to start
Sep 16 12:22:03.521  INFO toda: start with option: Options { path: "/tmp", mount_only: false, verbose: "info" }
Sep 16 12:22:03.523  INFO inject{injector_config=[]}: toda: inject with config []
Sep 16 12:22:03.523  INFO inject{injector_config=[]}: toda: canonicalizing path /tmp
Sep 16 12:22:03.523  INFO inject{injector_config=[]}: toda::replacer::fd_replacer: preparing fd replacer
Sep 16 12:22:03.525  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 1, root: "/proc/1/task/1" }}: toda::ptrace: attach task: 1 successfully
Sep 16 12:22:03.526  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 1, root: "/proc/1/task/1" }}: toda::ptrace: wait status: Stopped(Pid(1), SIGSTOP)
Sep 16 12:22:03.526  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 9, root: "/proc/1/task/9" }}: toda::ptrace: attach task: 9 successfully
Sep 16 12:22:03.526  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 9, root: "/proc/1/task/9" }}: toda::ptrace: wait status: Stopped(Pid(9), SIGSTOP)
Sep 16 12:22:03.526  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 10, root: "/proc/1/task/10" }}: toda::ptrace: attach task: 10 successfully
Sep 16 12:22:03.526  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 10, root: "/proc/1/task/10" }}: toda::ptrace: wait status: Stopped(Pid(10), SIGSTOP)
Sep 16 12:22:03.527  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 11, root: "/proc/1/task/11" }}: toda::ptrace: attach task: 11 successfully
Sep 16 12:22:03.527  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 11, root: "/proc/1/task/11" }}: toda::ptrace: wait status: Stopped(Pid(11), SIGSTOP)
Sep 16 12:22:03.527  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 12, root: "/proc/1/task/12" }}: toda::ptrace: attach task: 12 successfully
Sep 16 12:22:03.527  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 12, root: "/proc/1/task/12" }}: toda::ptrace: wait status: Stopped(Pid(12), SIGSTOP)
Sep 16 12:22:03.527  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 13, root: "/proc/1/task/13" }}: toda::ptrace: attach task: 13 successfully
Sep 16 12:22:03.527  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 13, root: "/proc/1/task/13" }}: toda::ptrace: wait status: Stopped(Pid(13), SIGSTOP)
Sep 16 12:22:03.528  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 14, root: "/proc/1/task/14" }}: toda::ptrace: attach task: 14 successfully
Sep 16 12:22:03.528  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 14, root: "/proc/1/task/14" }}: toda::ptrace: wait status: Stopped(Pid(14), SIGSTOP)
Sep 16 12:22:03.528  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 15, root: "/proc/1/task/15" }}: toda::ptrace: attach task: 15 successfully
Sep 16 12:22:03.529  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 15, root: "/proc/1/task/15" }}: toda::ptrace: wait status: Stopped(Pid(15), SIGSTOP)
Sep 16 12:22:03.530  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 16, root: "/proc/1/task/16" }}: toda::ptrace: attach task: 16 successfully
Sep 16 12:22:03.530  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 16, root: "/proc/1/task/16" }}: toda::ptrace: wait status: Stopped(Pid(16), SIGSTOP)
Sep 16 12:22:03.531  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 17, root: "/proc/1/task/17" }}: toda::ptrace: attach task: 17 successfully
Sep 16 12:22:03.531  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 17, root: "/proc/1/task/17" }}: toda::ptrace: wait status: Stopped(Pid(17), SIGSTOP)
Sep 16 12:22:03.531  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 18, root: "/proc/1/task/18" }}: toda::ptrace: attach task: 18 successfully
Sep 16 12:22:03.531  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 18, root: "/proc/1/task/18" }}: toda::ptrace: wait status: Stopped(Pid(18), SIGSTOP)
Sep 16 12:22:03.532  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 19, root: "/proc/1/task/19" }}: toda::ptrace: attach task: 19 successfully
Sep 16 12:22:03.532  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 19, root: "/proc/1/task/19" }}: toda::ptrace: wait status: Stopped(Pid(19), SIGSTOP)
Sep 16 12:22:03.532  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 20, root: "/proc/1/task/20" }}: toda::ptrace: attach task: 20 successfully
Sep 16 12:22:03.532  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 20, root: "/proc/1/task/20" }}: toda::ptrace: wait status: Stopped(Pid(20), SIGSTOP)
Sep 16 12:22:03.532  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 21, root: "/proc/1/task/21" }}: toda::ptrace: attach task: 21 successfully
Sep 16 12:22:03.533  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 21, root: "/proc/1/task/21" }}: toda::ptrace: wait status: Stopped(Pid(21), SIGSTOP)
Sep 16 12:22:03.533  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 22, root: "/proc/1/task/22" }}: toda::ptrace: attach task: 22 successfully
Sep 16 12:22:03.533  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 22, root: "/proc/1/task/22" }}: toda::ptrace: wait status: Stopped(Pid(22), SIGSTOP)
Sep 16 12:22:03.533  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 23, root: "/proc/1/task/23" }}: toda::ptrace: attach task: 23 successfully
Sep 16 12:22:03.533  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 23, root: "/proc/1/task/23" }}: toda::ptrace: wait status: Stopped(Pid(23), SIGSTOP)
Sep 16 12:22:03.533  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 24, root: "/proc/1/task/24" }}: toda::ptrace: attach task: 24 successfully
Sep 16 12:22:03.534  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 24, root: "/proc/1/task/24" }}: toda::ptrace: wait status: Stopped(Pid(24), SIGSTOP)
Sep 16 12:22:03.534  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 25, root: "/proc/1/task/25" }}: toda::ptrace: attach task: 25 successfully
Sep 16 12:22:03.534  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 25, root: "/proc/1/task/25" }}: toda::ptrace: wait status: Stopped(Pid(25), SIGSTOP)
Sep 16 12:22:03.534  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 26, root: "/proc/1/task/26" }}: toda::ptrace: attach task: 26 successfully
Sep 16 12:22:03.534  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 26, root: "/proc/1/task/26" }}: toda::ptrace: wait status: Stopped(Pid(26), SIGSTOP)
Sep 16 12:22:03.534  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 27, root: "/proc/1/task/27" }}: toda::ptrace: attach task: 27 successfully
Sep 16 12:22:03.535  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 27, root: "/proc/1/task/27" }}: toda::ptrace: wait status: Stopped(Pid(27), SIGSTOP)
Sep 16 12:22:03.535  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 28, root: "/proc/1/task/28" }}: toda::ptrace: attach task: 28 successfully
Sep 16 12:22:03.535  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 28, root: "/proc/1/task/28" }}: toda::ptrace: wait status: Stopped(Pid(28), SIGSTOP)
Sep 16 12:22:03.535  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 29, root: "/proc/1/task/29" }}: toda::ptrace: attach task: 29 successfully
Sep 16 12:22:03.535  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 29, root: "/proc/1/task/29" }}: toda::ptrace: wait status: Stopped(Pid(29), SIGSTOP)
Sep 16 12:22:03.536  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 30, root: "/proc/1/task/30" }}: toda::ptrace: attach task: 30 successfully
Sep 16 12:22:03.536  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 30, root: "/proc/1/task/30" }}: toda::ptrace: wait status: Stopped(Pid(30), SIGSTOP)
Sep 16 12:22:03.536  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 31, root: "/proc/1/task/31" }}: toda::ptrace: attach task: 31 successfully
Sep 16 12:22:03.536  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 31, root: "/proc/1/task/31" }}: toda::ptrace: wait status: Stopped(Pid(31), SIGSTOP)
Sep 16 12:22:03.536  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 32, root: "/proc/1/task/32" }}: toda::ptrace: attach task: 32 successfully
Sep 16 12:22:03.537  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 32, root: "/proc/1/task/32" }}: toda::ptrace: wait status: Stopped(Pid(32), SIGSTOP)
Sep 16 12:22:03.537  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 33, root: "/proc/1/task/33" }}: toda::ptrace: attach task: 33 successfully
Sep 16 12:22:03.537  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 33, root: "/proc/1/task/33" }}: toda::ptrace: wait status: Stopped(Pid(33), SIGSTOP)
Sep 16 12:22:03.538  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 34, root: "/proc/1/task/34" }}: toda::ptrace: attach task: 34 successfully
Sep 16 12:22:03.538  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 34, root: "/proc/1/task/34" }}: toda::ptrace: wait status: Stopped(Pid(34), SIGSTOP)
Sep 16 12:22:03.538  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 35, root: "/proc/1/task/35" }}: toda::ptrace: attach task: 35 successfully
Sep 16 12:22:03.538  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 35, root: "/proc/1/task/35" }}: toda::ptrace: wait status: Stopped(Pid(35), SIGSTOP)
Sep 16 12:22:03.538  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 36, root: "/proc/1/task/36" }}: toda::ptrace: attach task: 36 successfully
Sep 16 12:22:03.538  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 36, root: "/proc/1/task/36" }}: toda::ptrace: wait status: Stopped(Pid(36), SIGSTOP)
Sep 16 12:22:03.538  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 37, root: "/proc/1/task/37" }}: toda::ptrace: attach task: 37 successfully
Sep 16 12:22:03.539  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 37, root: "/proc/1/task/37" }}: toda::ptrace: wait status: Stopped(Pid(37), SIGSTOP)
Sep 16 12:22:03.539  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 38, root: "/proc/1/task/38" }}: toda::ptrace: attach task: 38 successfully
Sep 16 12:22:03.539  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 38, root: "/proc/1/task/38" }}: toda::ptrace: wait status: Stopped(Pid(38), SIGSTOP)
Sep 16 12:22:03.539  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 39, root: "/proc/1/task/39" }}: toda::ptrace: attach task: 39 successfully
Sep 16 12:22:03.539  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 39, root: "/proc/1/task/39" }}: toda::ptrace: wait status: Stopped(Pid(39), SIGSTOP)
Sep 16 12:22:03.539  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 40, root: "/proc/1/task/40" }}: toda::ptrace: attach task: 40 successfully
Sep 16 12:22:03.539  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 40, root: "/proc/1/task/40" }}: toda::ptrace: wait status: Stopped(Pid(40), SIGSTOP)
Sep 16 12:22:03.540  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 41, root: "/proc/1/task/41" }}: toda::ptrace: attach task: 41 successfully
Sep 16 12:22:03.540  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 41, root: "/proc/1/task/41" }}: toda::ptrace: wait status: Stopped(Pid(41), SIGSTOP)
Sep 16 12:22:03.540  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 42, root: "/proc/1/task/42" }}: toda::ptrace: attach task: 42 successfully
Sep 16 12:22:03.542  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 42, root: "/proc/1/task/42" }}: toda::ptrace: wait status: Stopped(Pid(42), SIGSTOP)
Sep 16 12:22:03.543  INFO inject{injector_config=[]}:trace{pid=1}: toda::ptrace: trace process: 1 successfully
Sep 16 12:22:03.543  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: detach process: 1
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 1
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 9
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 10
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 11
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 12
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 13
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 14
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 15
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 16
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 17
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 18
Sep 16 12:22:03.544  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 19
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 20
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 21
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 22
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 23
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 24
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 25
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 26
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 27
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 28
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 29
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 30
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 31
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 32
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 33
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 34
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 35
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 36
Sep 16 12:22:03.545  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 37
Sep 16 12:22:03.546  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 38
Sep 16 12:22:03.546  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 39
Sep 16 12:22:03.546  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 40
Sep 16 12:22:03.546  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 41
Sep 16 12:22:03.546  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 42
Sep 16 12:22:03.546  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: detach process: 1 successfully
Sep 16 12:22:03.546  INFO inject{injector_config=[]}:trace{pid=43}:attach_task{task=Task { pid: 43, tid: 43, root: "/proc/43/task/43" }}: toda::ptrace: attach task: 43 successfully
Sep 16 12:22:03.546  INFO inject{injector_config=[]}:trace{pid=43}:attach_task{task=Task { pid: 43, tid: 43, root: "/proc/43/task/43" }}: toda::ptrace: wait status: Stopped(Pid(43), SIGSTOP)
Sep 16 12:22:03.547  INFO inject{injector_config=[]}:trace{pid=43}: toda::ptrace: trace process: 43 successfully
Sep 16 12:22:03.547  INFO inject{injector_config=[]}:detach{pid=43}: toda::ptrace: detach process: 43
Sep 16 12:22:03.547  INFO inject{injector_config=[]}:detach{pid=43}: toda::ptrace: successfully detached task: 43
Sep 16 12:22:03.547  INFO inject{injector_config=[]}:detach{pid=43}: toda::ptrace: detach process: 43 successfully
Sep 16 12:22:03.547  INFO inject{injector_config=[]}:trace{pid=50}:attach_task{task=Task { pid: 50, tid: 50, root: "/proc/50/task/50" }}: toda::ptrace: attach task: 50 successfully
Sep 16 12:22:03.547  INFO inject{injector_config=[]}:trace{pid=50}:attach_task{task=Task { pid: 50, tid: 50, root: "/proc/50/task/50" }}: toda::ptrace: wait status: Stopped(Pid(50), SIGSTOP)
Sep 16 12:22:03.548  INFO inject{injector_config=[]}:trace{pid=50}: toda::ptrace: trace process: 50 successfully
Sep 16 12:22:03.548  INFO inject{injector_config=[]}:detach{pid=50}: toda::ptrace: detach process: 50
Sep 16 12:22:03.548  INFO inject{injector_config=[]}:detach{pid=50}: toda::ptrace: successfully detached task: 50
Sep 16 12:22:03.548  INFO inject{injector_config=[]}:detach{pid=50}: toda::ptrace: detach process: 50 successfully
Sep 16 12:22:03.548  INFO inject{injector_config=[]}: toda::replacer::cwd_replacer: preparing cmdreplacer
Sep 16 12:22:03.549  INFO inject{injector_config=[]}: toda::replacer::mmap_replacer: preparing mmap replacer
Sep 16 12:22:03.550  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 1, root: "/proc/1/task/1" }}: toda::ptrace: attach task: 1 successfully
Sep 16 12:22:03.550  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 1, root: "/proc/1/task/1" }}: toda::ptrace: wait status: Stopped(Pid(1), SIGSTOP)
Sep 16 12:22:03.550  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 9, root: "/proc/1/task/9" }}: toda::ptrace: attach task: 9 successfully
Sep 16 12:22:03.550  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 9, root: "/proc/1/task/9" }}: toda::ptrace: wait status: Stopped(Pid(9), SIGSTOP)
Sep 16 12:22:03.550  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 10, root: "/proc/1/task/10" }}: toda::ptrace: attach task: 10 successfully
Sep 16 12:22:03.551  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 10, root: "/proc/1/task/10" }}: toda::ptrace: wait status: Stopped(Pid(10), SIGSTOP)
Sep 16 12:22:03.551  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 11, root: "/proc/1/task/11" }}: toda::ptrace: attach task: 11 successfully
Sep 16 12:22:03.551  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 11, root: "/proc/1/task/11" }}: toda::ptrace: wait status: Stopped(Pid(11), SIGSTOP)
Sep 16 12:22:03.551  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 12, root: "/proc/1/task/12" }}: toda::ptrace: attach task: 12 successfully
Sep 16 12:22:03.551  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 12, root: "/proc/1/task/12" }}: toda::ptrace: wait status: Stopped(Pid(12), SIGSTOP)
Sep 16 12:22:03.551  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 13, root: "/proc/1/task/13" }}: toda::ptrace: attach task: 13 successfully
Sep 16 12:22:03.551  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 13, root: "/proc/1/task/13" }}: toda::ptrace: wait status: Stopped(Pid(13), SIGSTOP)
Sep 16 12:22:03.552  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 14, root: "/proc/1/task/14" }}: toda::ptrace: attach task: 14 successfully
Sep 16 12:22:03.552  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 14, root: "/proc/1/task/14" }}: toda::ptrace: wait status: Stopped(Pid(14), SIGSTOP)
Sep 16 12:22:03.552  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 15, root: "/proc/1/task/15" }}: toda::ptrace: attach task: 15 successfully
Sep 16 12:22:03.552  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 15, root: "/proc/1/task/15" }}: toda::ptrace: wait status: Stopped(Pid(15), SIGSTOP)
Sep 16 12:22:03.552  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 16, root: "/proc/1/task/16" }}: toda::ptrace: attach task: 16 successfully
Sep 16 12:22:03.552  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 16, root: "/proc/1/task/16" }}: toda::ptrace: wait status: Stopped(Pid(16), SIGSTOP)
Sep 16 12:22:03.552  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 17, root: "/proc/1/task/17" }}: toda::ptrace: attach task: 17 successfully
Sep 16 12:22:03.552  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 17, root: "/proc/1/task/17" }}: toda::ptrace: wait status: Stopped(Pid(17), SIGSTOP)
Sep 16 12:22:03.553  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 18, root: "/proc/1/task/18" }}: toda::ptrace: attach task: 18 successfully
Sep 16 12:22:03.553  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 18, root: "/proc/1/task/18" }}: toda::ptrace: wait status: Stopped(Pid(18), SIGSTOP)
Sep 16 12:22:03.553  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 19, root: "/proc/1/task/19" }}: toda::ptrace: attach task: 19 successfully
Sep 16 12:22:03.553  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 19, root: "/proc/1/task/19" }}: toda::ptrace: wait status: Stopped(Pid(19), SIGSTOP)
Sep 16 12:22:03.553  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 20, root: "/proc/1/task/20" }}: toda::ptrace: attach task: 20 successfully
Sep 16 12:22:03.553  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 20, root: "/proc/1/task/20" }}: toda::ptrace: wait status: Stopped(Pid(20), SIGSTOP)
Sep 16 12:22:03.553  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 21, root: "/proc/1/task/21" }}: toda::ptrace: attach task: 21 successfully
Sep 16 12:22:03.553  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 21, root: "/proc/1/task/21" }}: toda::ptrace: wait status: Stopped(Pid(21), SIGSTOP)
Sep 16 12:22:03.553  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 22, root: "/proc/1/task/22" }}: toda::ptrace: attach task: 22 successfully
Sep 16 12:22:03.553  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 22, root: "/proc/1/task/22" }}: toda::ptrace: wait status: Stopped(Pid(22), SIGSTOP)
Sep 16 12:22:03.554  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 23, root: "/proc/1/task/23" }}: toda::ptrace: attach task: 23 successfully
Sep 16 12:22:03.554  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 23, root: "/proc/1/task/23" }}: toda::ptrace: wait status: Stopped(Pid(23), SIGSTOP)
Sep 16 12:22:03.554  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 24, root: "/proc/1/task/24" }}: toda::ptrace: attach task: 24 successfully
Sep 16 12:22:03.554  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 24, root: "/proc/1/task/24" }}: toda::ptrace: wait status: Stopped(Pid(24), SIGSTOP)
Sep 16 12:22:03.554  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 25, root: "/proc/1/task/25" }}: toda::ptrace: attach task: 25 successfully
Sep 16 12:22:03.554  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 25, root: "/proc/1/task/25" }}: toda::ptrace: wait status: Stopped(Pid(25), SIGSTOP)
Sep 16 12:22:03.555  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 26, root: "/proc/1/task/26" }}: toda::ptrace: attach task: 26 successfully
Sep 16 12:22:03.555  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 26, root: "/proc/1/task/26" }}: toda::ptrace: wait status: Stopped(Pid(26), SIGSTOP)
Sep 16 12:22:03.555  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 27, root: "/proc/1/task/27" }}: toda::ptrace: attach task: 27 successfully
Sep 16 12:22:03.555  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 27, root: "/proc/1/task/27" }}: toda::ptrace: wait status: Stopped(Pid(27), SIGSTOP)
Sep 16 12:22:03.555  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 28, root: "/proc/1/task/28" }}: toda::ptrace: attach task: 28 successfully
Sep 16 12:22:03.555  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 28, root: "/proc/1/task/28" }}: toda::ptrace: wait status: Stopped(Pid(28), SIGSTOP)
Sep 16 12:22:03.555  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 29, root: "/proc/1/task/29" }}: toda::ptrace: attach task: 29 successfully
Sep 16 12:22:03.555  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 29, root: "/proc/1/task/29" }}: toda::ptrace: wait status: Stopped(Pid(29), SIGSTOP)
Sep 16 12:22:03.556  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 30, root: "/proc/1/task/30" }}: toda::ptrace: attach task: 30 successfully
Sep 16 12:22:03.556  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 30, root: "/proc/1/task/30" }}: toda::ptrace: wait status: Stopped(Pid(30), SIGSTOP)
Sep 16 12:22:03.556  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 31, root: "/proc/1/task/31" }}: toda::ptrace: attach task: 31 successfully
Sep 16 12:22:03.556  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 31, root: "/proc/1/task/31" }}: toda::ptrace: wait status: Stopped(Pid(31), SIGSTOP)
Sep 16 12:22:03.556  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 32, root: "/proc/1/task/32" }}: toda::ptrace: attach task: 32 successfully
Sep 16 12:22:03.556  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 32, root: "/proc/1/task/32" }}: toda::ptrace: wait status: Stopped(Pid(32), SIGSTOP)
Sep 16 12:22:03.556  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 33, root: "/proc/1/task/33" }}: toda::ptrace: attach task: 33 successfully
Sep 16 12:22:03.556  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 33, root: "/proc/1/task/33" }}: toda::ptrace: wait status: Stopped(Pid(33), SIGSTOP)
Sep 16 12:22:03.557  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 34, root: "/proc/1/task/34" }}: toda::ptrace: attach task: 34 successfully
Sep 16 12:22:03.557  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 34, root: "/proc/1/task/34" }}: toda::ptrace: wait status: Stopped(Pid(34), SIGSTOP)
Sep 16 12:22:03.557  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 35, root: "/proc/1/task/35" }}: toda::ptrace: attach task: 35 successfully
Sep 16 12:22:03.557  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 35, root: "/proc/1/task/35" }}: toda::ptrace: wait status: Stopped(Pid(35), SIGSTOP)
Sep 16 12:22:03.557  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 36, root: "/proc/1/task/36" }}: toda::ptrace: attach task: 36 successfully
Sep 16 12:22:03.557  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 36, root: "/proc/1/task/36" }}: toda::ptrace: wait status: Stopped(Pid(36), SIGSTOP)
Sep 16 12:22:03.557  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 37, root: "/proc/1/task/37" }}: toda::ptrace: attach task: 37 successfully
Sep 16 12:22:03.557  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 37, root: "/proc/1/task/37" }}: toda::ptrace: wait status: Stopped(Pid(37), SIGSTOP)
Sep 16 12:22:03.558  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 38, root: "/proc/1/task/38" }}: toda::ptrace: attach task: 38 successfully
Sep 16 12:22:03.558  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 38, root: "/proc/1/task/38" }}: toda::ptrace: wait status: Stopped(Pid(38), SIGSTOP)
Sep 16 12:22:03.558  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 39, root: "/proc/1/task/39" }}: toda::ptrace: attach task: 39 successfully
Sep 16 12:22:03.558  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 39, root: "/proc/1/task/39" }}: toda::ptrace: wait status: Stopped(Pid(39), SIGSTOP)
Sep 16 12:22:03.558  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 40, root: "/proc/1/task/40" }}: toda::ptrace: attach task: 40 successfully
Sep 16 12:22:03.558  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 40, root: "/proc/1/task/40" }}: toda::ptrace: wait status: Stopped(Pid(40), SIGSTOP)
Sep 16 12:22:03.558  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 41, root: "/proc/1/task/41" }}: toda::ptrace: attach task: 41 successfully
Sep 16 12:22:03.559  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 41, root: "/proc/1/task/41" }}: toda::ptrace: wait status: Stopped(Pid(41), SIGSTOP)
Sep 16 12:22:03.559  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 42, root: "/proc/1/task/42" }}: toda::ptrace: attach task: 42 successfully
Sep 16 12:22:03.559  INFO inject{injector_config=[]}:trace{pid=1}:attach_task{task=Task { pid: 1, tid: 42, root: "/proc/1/task/42" }}: toda::ptrace: wait status: Stopped(Pid(42), SIGSTOP)
Sep 16 12:22:03.564  INFO inject{injector_config=[]}:trace{pid=1}: toda::ptrace: trace process: 1 successfully
Sep 16 12:22:03.565  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: detach process: 1
Sep 16 12:22:03.565  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 1
Sep 16 12:22:03.565  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 9
Sep 16 12:22:03.565  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 10
Sep 16 12:22:03.565  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 11
Sep 16 12:22:03.565  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 12
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 13
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 14
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 15
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 16
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 17
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 18
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 19
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 20
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 21
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 22
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 23
Sep 16 12:22:03.566  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 24
Sep 16 12:22:03.567  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 25
Sep 16 12:22:03.567  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 26
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 27
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 28
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 29
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 30
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 31
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 32
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 33
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 34
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 35
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 36
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 37
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 38
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 39
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 40
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 41
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: successfully detached task: 42
Sep 16 12:22:03.568  INFO inject{injector_config=[]}:detach{pid=1}: toda::ptrace: detach process: 1 successfully
Sep 16 12:22:03.569  INFO inject{injector_config=[]}:trace{pid=43}:attach_task{task=Task { pid: 43, tid: 43, root: "/proc/43/task/43" }}: toda::ptrace: attach task: 43 successfully
Sep 16 12:22:03.569  INFO inject{injector_config=[]}:trace{pid=43}:attach_task{task=Task { pid: 43, tid: 43, root: "/proc/43/task/43" }}: toda::ptrace: wait status: Stopped(Pid(43), SIGSTOP)
Sep 16 12:22:03.569  INFO inject{injector_config=[]}:trace{pid=43}: toda::ptrace: trace process: 43 successfully
Sep 16 12:22:03.569  INFO inject{injector_config=[]}:detach{pid=43}: toda::ptrace: detach process: 43
Sep 16 12:22:03.570  INFO inject{injector_config=[]}:detach{pid=43}: toda::ptrace: successfully detached task: 43
Sep 16 12:22:03.570  INFO inject{injector_config=[]}:detach{pid=43}: toda::ptrace: detach process: 43 successfully
Sep 16 12:22:03.570  INFO inject{injector_config=[]}:trace{pid=50}:attach_task{task=Task { pid: 50, tid: 50, root: "/proc/50/task/50" }}: toda::ptrace: attach task: 50 successfully
Sep 16 12:22:03.570  INFO inject{injector_config=[]}:trace{pid=50}:attach_task{task=Task { pid: 50, tid: 50, root: "/proc/50/task/50" }}: toda::ptrace: wait status: Stopped(Pid(50), SIGSTOP)
Sep 16 12:22:03.570  INFO inject{injector_config=[]}:trace{pid=50}: toda::ptrace: trace process: 50 successfully
Sep 16 12:22:03.570  INFO inject{injector_config=[]}:detach{pid=50}: toda::ptrace: detach process: 50
Sep 16 12:22:03.571  INFO inject{injector_config=[]}:detach{pid=50}: toda::ptrace: successfully detached task: 50
Sep 16 12:22:03.571  INFO inject{injector_config=[]}:detach{pid=50}: toda::ptrace: detach process: 50 successfully
Sep 16 12:22:03.573  INFO toda: waiting for signal to exit
Sep 16 12:22:03.582  INFO toda::jsonrpc: Starting jsonrpc server
Sep 16 12:22:03.582  INFO toda::jsonrpc: Creating jsonrpc server
Sep 16 12:22:03.582  INFO toda::jsonrpc: Creating jsonrpc handler
Sep 16 12:22:03.588  INFO toda::jsonrpc: rpc update called
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
Sep 16 12:22:03.589  INFO toda::jsonrpc: rpc get_status called
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
2021-09-16T12:22:03.590Z	INFO	chaos-daemon-server	Starting toda takes too long or encounter an error
2021-09-16T12:22:03.590Z	INFO	chaos-daemon-server	killing toda	{"pid": 22712}
Sep 16 12:22:03.603  INFO toda: start to recover and exit
2021-09-16T12:22:03.610Z	INFO	background-process-manager	process stopped	{"pid": 22712}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":1}
{"jsonrpc":"2.0","result":"source: /tmp, target: /__chaosfs__tmp__","id":2}

@faraktingi
Author

faraktingi commented Sep 16, 2021

Sorry, I made several attempts; the log in my previous comment is the latest one, with a /tmp folder.

But the behavior is similar for every other pod I tried in my namespace.

@faraktingi
Author

Hello @YangKeao,

Let me know if the logs I sent yesterday are the ones you asked for, and whether I need to provide anything else.

I appreciate your help on this problem.

Best,
Fabien

@YangKeao
Member

Hello @YangKeao,

Let me know if the logs I sent yesterday are the ones you asked for, and whether I need to provide anything else.

I appreciate your help on this problem.

Best,
Fabien

Yes! Thanks a lot for the log. I have located the error: https://github.com/chaos-mesh/toda/blob/master/src/mount.rs#L40 . Chaos Mesh is trying to execute mount --move /tmp /__chaosfs__tmp__ but gets an error. Unfortunately, the log doesn't include the errno of the failed syscall 😢 . Could you try running mkdir /tmp/__chaosfs__tmp__ && mount --move /tmp /tmp/__chaosfs__tmp__ manually inside a pod? It should throw the same error as Chaos Mesh.

Here are several possible situations in which `mount --move` fails:

1. The source is not a volume. For example, if the /tmp or /opt/mqm/bin above is not a mount point, there will be an error.
2. The parent mount point of the source is MS_SHARED. In most cases, the / of the container is not MS_SHARED. But in case OpenShift is different and has a shared root, you can check this by running `findmnt -o TARGET,PROPAGATION /` inside the container; the output should be:

# findmnt -o TARGET,PROPAGATION /
TARGET PROPAGATION
/      private,slave

Thanks.
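Both prerequisites above can also be checked without root by reading /proc/self/mounts and /proc/self/mountinfo directly. A minimal sketch (the helper names are ours, not part of Chaos Mesh or toda):

```shell
# is_mountpoint: prerequisite 1 -- `mount --move SRC DST` only works when SRC
# is itself a mount point. Field 2 of /proc/self/mounts is the mount target.
is_mountpoint() {
  awk -v p="$1" '$2 == p { found = 1 } END { exit !found }' /proc/self/mounts
}

# root_is_shared: prerequisite 2 -- in /proc/self/mountinfo the optional
# fields (from column 7 up to the "-" separator) contain "shared:N" when a
# mount is MS_SHARED. Field 5 is the mount point.
root_is_shared() {
  awk '$5 == "/" {
    for (i = 7; i <= NF; i++) {
      if ($i == "-") break
      if ($i ~ /^shared:/) found = 1
    }
  } END { exit !found }' /proc/self/mountinfo
}

for p in /proc /tmp; do
  is_mountpoint "$p" \
    && echo "$p is a mount point" \
    || echo "$p is NOT a mount point (mount --move would fail)"
done
root_is_shared && echo "/ is shared" || echo "/ is private or slave"
```

This avoids the "only root can use --move" problem you hit when trying the reproduction as an unprivileged user.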

@faraktingi
Author

faraktingi commented Sep 17, 2021

Here are the results of the commands above:

sh-4.4$ mkdir /tmp/__chaosfs__tmp__ && mount --move /tmp /tmp/__chaosfs__tmp__

mount: only root can use "--move" option

I do not have the root password.

sh-4.4$ findmnt -o TARGET,PROPAGATION /

TARGET PROPAGATION
/ private

@YangKeao
Member

YangKeao commented Sep 17, 2021

Can you provide the definition of the pod (after hiding the sensitive information, if you have any security concerns)? Or could you try to deploy a simple application (e.g. a sleeping ubuntu image with an emptyDir volume) and inject IOChaos into it?

It would be even better to deploy a privileged sleeping ubuntu, so that you will be able to run the former mount command in it.
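For reference, a pod along those lines might look like this (the name and image tag are just examples, not taken from this thread); with `privileged: true` you can run the `mount --move` reproduction from the earlier comment inside it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iochaos-debug-sleeper   # example name
spec:
  containers:
    - name: sleeper
      image: ubuntu:20.04
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true        # needed to run `mount --move` manually
      volumeMounts:
        - name: demo
          mountPath: /data/demo
  volumes:
    - name: demo
      emptyDir: {}
```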

@faraktingi
Author

faraktingi commented Sep 17, 2021

@YangKeao thanks for your feedback.

Here is the definition of the pod I'm trying to inject I/O chaos:

oc describe pod mq-ddd-qm-dev-ibm-mq-1
Name:         mq-ddd-qm-dev-ibm-mq-1
Namespace:    cp4i
Priority:     0
Node:         10.189.92.10/10.189.92.10
Start Time:   Mon, 13 Sep 2021 16:16:14 +0200
Labels:       app.kubernetes.io/component=integration
              app.kubernetes.io/instance=mq-ddd-qm-dev
              app.kubernetes.io/managed-by=operator
              app.kubernetes.io/name=ibm-mq
              app.kubernetes.io/version=9.2.3.0
              controller-revision-hash=mq-ddd-qm-dev-ibm-mq-74664b8849
              statefulSetName=mq-ddd-qm-dev-ibm-mq
              statefulset.kubernetes.io/pod-name=mq-ddd-qm-dev-ibm-mq-1
Annotations:  cloudpakId: c8b82d189e7545f0892db9ef2731b90d
              cloudpakName: IBM Cloud Pak for Integration
              cloudpakVersion:
              cni.projectcalico.org/podIP: 172.30.227.77/32
              cni.projectcalico.org/podIPs: 172.30.227.77/32
              k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "ips": [
                        "172.30.227.77"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "ips": [
                        "172.30.227.77"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: restricted
              productChargedContainers: qmgr
              productCloudpakRatio: 4:1
              productID: 21dfe9a0f00f444f888756d835334909
              productMetric: VIRTUAL_PROCESSOR_CORE
              productName: IBM MQ Advanced for Non-Production
              productVersion: 9.2.3.0
Status:       Running
IP:           172.30.227.77
IPs:
  IP:           172.30.227.77
Controlled By:  StatefulSet/mq-ddd-qm-dev-ibm-mq
Containers:
  qmgr:
    Container ID:   cri-o://7a985b23351b299a847ca4e8bd448a3ac27a0b668a22aeb056d3c2047b3b2b5a
    Image:          image-registry.openshift-image-registry.svc:5000/cp4i/mq-ddd:latest
    Image ID:       image-registry.openshift-image-registry.svc:5000/cp4i/mq-ddd@sha256:f062af640a66c3a52c070931b8995b212b0d5c47b1f47b76b3db40e4e430cb47
    Ports:          1414/TCP, 9157/TCP, 9443/TCP, 9414/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Mon, 13 Sep 2021 16:16:17 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:      1
      memory:   1Gi
    Liveness:   exec [chkmqhealthy] delay=0s timeout=5s period=10s #success=1 #failure=1
    Readiness:  exec [chkmqready] delay=0s timeout=3s period=5s #success=1 #failure=1
    Startup:    exec [chkmqstarted] delay=0s timeout=5s period=5s #success=1 #failure=24
    Environment:
      MQS_PERMIT_UNKNOWN_ID:                        true
      LICENSE:                                      accept
      MQ_QMGR_NAME:                                 QUICKSTART
      MQ_MULTI_INSTANCE:                            false
      MQ_ENABLE_METRICS:                            true
      MQ_ENABLE_EMBEDDED_WEB_SERVER:                true
      LOG_FORMAT:                                   basic
      DEBUG:                                        false
      MQ_ENABLE_TRACE_STRMQM:                       false
      MQ_EPHEMERAL_PREFIX:                          /run/mqm
      MQ_GRACE_PERIOD:                              29
      MQ_NATIVE_HA:                                 true
      AMQ_CLOUD_PAK:                                true
      MQ_NATIVE_HA_INSTANCE_0_NAME:                 mq-ddd-qm-dev-ibm-mq-0
      MQ_NATIVE_HA_INSTANCE_0_REPLICATION_ADDRESS:  mq-ddd-qm-dev-ibm-mq-replica-0(9414)
      MQ_NATIVE_HA_INSTANCE_1_NAME:                 mq-ddd-qm-dev-ibm-mq-1
      MQ_NATIVE_HA_INSTANCE_1_REPLICATION_ADDRESS:  mq-ddd-qm-dev-ibm-mq-replica-1(9414)
      MQ_NATIVE_HA_INSTANCE_2_NAME:                 mq-ddd-qm-dev-ibm-mq-2
      MQ_NATIVE_HA_INSTANCE_2_REPLICATION_ADDRESS:  mq-ddd-qm-dev-ibm-mq-replica-2(9414)
      MQ_NATIVE_HA_TLS:                             false
      MQ_GENERATE_CERTIFICATE_HOSTNAME:             mq-ddd-qm-dev-ibm-mq-web-cp4i.cp4i-ha-ddd-chaos-058281adec3a8cab47db93f6de1c8681-0000.us-east.containers.appdomain.cloud
      MQ_BETA_ENABLE_SSO:                           true
      MQ_CONSOLE_DEFAULT_CCDT_HOSTNAME:             mq-ddd-qm-dev-ibm-mq-qm-cp4i.cp4i-ha-ddd-chaos-058281adec3a8cab47db93f6de1c8681-0000.us-east.containers.appdomain.cloud
      MQ_CONSOLE_DEFAULT_CCDT_PORT:                 443
      MQ_OIDC_CLIENT_ID:                            <XXXXXXX>      Optional: false
      MQ_OIDC_CLIENT_SECRET:                        <XXXXXX>  Optional: false
      MQ_OIDC_UNIQUE_USER_IDENTIFIER:               sub
      MQ_OIDC_AUTHORIZATION_ENDPOINT:               https://cp-console.cp4i-ha-ddd-chaos-058281adec3a8cab47db93f6de1c8681-0000.us-east.containers.appdomain.cloud:443/idprovider/v1/auth/authorize
      MQ_OIDC_TOKEN_ENDPOINT:                       https://cp-console.cp4i-ha-ddd-chaos-058281adec3a8cab47db93f6de1c8681-0000.us-east.containers.appdomain.cloud:443/idprovider/v1/auth/token
      MQ_OIDC_JWK_ENDPOINT:                         https://cp-console.cp4i-ha-ddd-chaos-058281adec3a8cab47db93f6de1c8681-0000.us-east.containers.appdomain.cloud:443/idprovider/v1/auth/jwk
      MQ_OIDC_ISSUER_IDENTIFIER:                    <set to the key 'OIDC_ISSUER_URL' of config map 'ibm-iam-bindinfo-platform-auth-idp'>  Optional: false
      IAM_URL:                                      https://cp-console.cp4i-ha-ddd-chaos-058281adec3a8cab47db93f6de1c8681-0000.us-east.containers.appdomain.cloud:443
      MQ_NAMESPACE:                                 cp4i
      MQ_CP4I_SERVICES_URL:
      MQ_ENABLE_OPEN_TRACING:                       false
    Mounts:
      /etc/mqm/example.ini from cm-mtlsmqsc (ro,path="example.ini")
      /etc/mqm/pki/keys/default from default (ro)
      /etc/mqm/pki/trust/0 from trust0 (ro)
      /etc/mqm/pki/trust/default from oidc-certificate (rw)
      /mnt/mqm from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from mq-ddd-qm-dev-ibm-mq-token-s7clt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-mq-ddd-qm-dev-ibm-mq-1
    ReadOnly:   false
  default:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  XXXXX
    Optional:    false
  trust0:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  XXXXXX
    Optional:    false
  cm-mtlsmqsc:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      XXXXXXXX
    Optional:  false
  oidc-certificate:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  XXxXXXXXXXXX
    Optional:    false
  mq-ddd-qm-dev-ibm-mq-token-s7clt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  XXXXXXXXXXXX
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>

Or could you try to deploy a simple application (e.g. a sleeping ubuntu image with emptyDir volume) and inject IOChaos into it?
Can you give me some information about how to perform those operations, please?

@faraktingi
Author

Hello,

I have created a new namespace and then created the following Pod:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
    - name: sec-ctx-vol
      emptyDir: {}
  containers:
    - name: sec-ctx-demo
      image: busybox
      command: [ "sh", "-c", "sleep 100d" ]
      volumeMounts:
        - name: sec-ctx-vol
          mountPath: /data/demo
      securityContext:
        allowPrivilegeEscalation: true

Then I create a new I/O experiment on Volume Path: /data/demo

I only got this error:
Failed to update conditions: Operation cannot be fulfilled on iochaos.chaos-mesh.org "iotest4": the object has been modified; please apply your changes to the latest version and try again

And I think that the experiment passed at the end of the duration (1 minute) I got:
Time up according to the duration

Regarding the mount command I tried:
mkdir /data/__chaosfs__tmp__ && mount --move /data /data/__chaosfs__tmp__
and got:

mkdir: can't create directory '/data/__chaosfs__tmp__': Permission denied

My id is:
/data $ id

uid=1000(1000) gid=3000 groups=2000

@faraktingi
Author

Additional information about MQ pod:

kind: Pod
apiVersion: v1
metadata:
  generateName: mq-ddd-qm-test-ibm-mq-
  annotations:
    openshift.io/scc: restricted
    cloudpakId: c8b82d189e7545f0892db9ef2731b90d
    cni.projectcalico.org/podIP: 172.30.227.109/32
    productVersion: 9.2.3.0
    productID: 21dfe9a0f00f444f888756d835334909
    cni.projectcalico.org/podIPs: 172.30.227.109/32
    cloudpakName: IBM Cloud Pak for Integration
    kubectl.kubernetes.io/last-applied-configuration: >
      XXXXXXXX
    cloudpakVersion: ''
    productChargedContainers: qmgr
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "",
          "ips": [
              "172.30.227.109"
          ],
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "",
          "ips": [
              "172.30.227.109"
          ],
          "default": true,
          "dns": {}
      }]
    productCloudpakRatio: '4:1'
    productName: IBM MQ Advanced for Non-Production
    productMetric: VIRTUAL_PROCESSOR_CORE
  selfLink: /api/v1/namespaces/cp4i/pods/mq-ddd-qm-test-ibm-mq-1
  resourceVersion: '4562586'
  name: mq-ddd-qm-test-ibm-mq-1
  uid: XXXXXXXX
  creationTimestamp: '2021-09-13T14:26:01Z'
  managedFields:
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2021-09-13T14:26:01Z'
      fieldsType: FieldsV1
    - manager: calico
      operation: Update
      apiVersion: v1
      time: '2021-09-13T14:26:18Z'
      fieldsType: FieldsV1
    - manager: multus
      operation: Update
      apiVersion: v1
      time: '2021-09-13T14:26:18Z'
    - manager: kubelet
      operation: Update
      apiVersion: v1
      time: '2021-09-15T13:32:34Z'
  namespace: cp4i
  ownerReferences:
    - apiVersion: apps/v1
      kind: StatefulSet
      name: mq-ddd-qm-test-ibm-mq
      uid: b4ddffb6-848a-4ce6-82b6-d48271880279
      controller: true
      blockOwnerDeletion: true
  labels:
    app.kubernetes.io/component: integration
    app.kubernetes.io/instance: mq-ddd-qm-test
    app.kubernetes.io/managed-by: operator
    app.kubernetes.io/name: ibm-mq
    app.kubernetes.io/version: 9.2.3.0
    controller-revision-hash: mq-ddd-qm-test-ibm-mq-757fd4865
    statefulSetName: mq-ddd-qm-test-ibm-mq
    statefulset.kubernetes.io/pod-name: mq-ddd-qm-test-ibm-mq-1
spec:
  restartPolicy: Always
  serviceAccountName: mq-ddd-qm-test-ibm-mq
  imagePullSecrets:
    - name: sa-cp4i
    - name: ibm-entitlement-key
    - name: mq-ddd-qm-test-ibm-mq-dockercfg-gmxf4
  priority: 0
  subdomain: qm
  schedulerName: default-scheduler
  enableServiceLinks: true
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: statefulSetName
                operator: In
                values:
                  - mq-ddd-qm-test-ibm-mq
          topologyKey: kubernetes.io/hostname
  terminationGracePeriodSeconds: 30
  preemptionPolicy: PreemptLowerPriority
  nodeName: 10.189.92.10
  securityContext:
    seLinuxOptions:
      level: 's0:c25,c20'
    fsGroup: 1000640000
  containers:
    - resources:
        limits:
          cpu: '1'
          memory: 1Gi
        requests:
          cpu: '1'
          memory: 1Gi
      readinessProbe:
        exec:
          command:
            - chkmqready
        timeoutSeconds: 3
        periodSeconds: 5
        successThreshold: 1
        failureThreshold: 1
      terminationMessagePath: /run/termination-log
      name: qmgr
      livenessProbe:
        exec:
          command:
            - chkmqhealthy
        timeoutSeconds: 5
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 1
      env:
        XXXXXXXX
      securityContext:
        capabilities:
          drop:
            - ALL
            - KILL
            - MKNOD
            - SETGID
            - SETUID
        privileged: false
        runAsUser: 1000640000
        runAsNonRoot: true
        readOnlyRootFilesystem: false
        allowPrivilegeEscalation: false
      ports:
        - containerPort: 1414
          protocol: TCP
        - containerPort: 9157
          protocol: TCP
        - containerPort: 9443
          protocol: TCP
        - containerPort: 9414
          protocol: TCP
      imagePullPolicy: Always
      startupProbe:
        exec:
          command:
            - chkmqstarted
        timeoutSeconds: 5
        periodSeconds: 5
        successThreshold: 1
        failureThreshold: 24
      volumeMounts:
        - name: data
          mountPath: /mnt/mqm
        - name: default
          readOnly: true
          mountPath: /etc/mqm/pki/keys/default
        - name: trust0
          readOnly: true
          mountPath: /etc/mqm/pki/trust/0
        - name: cm-mtlsmqsc
          readOnly: true
          mountPath: /etc/mqm/example.ini
          subPath: example.ini
        - name: oidc-certificate
          mountPath: /etc/mqm/pki/trust/default
        - name: mq-ddd-qm-test-ibm-mq-token-lvlzr
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePolicy: File
      image: 'image-registry.openshift-image-registry.svc:5000/cp4i/mq-ddd:latest-test'
  hostname: mq-ddd-qm-test-ibm-mq-1
  serviceAccount: mq-ddd-qm-test-ibm-mq
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-mq-ddd-qm-test-ibm-mq-1
    - name: default
      secret:
        secretName: mqcert
        items:
          - key: tls.key
            path: tls.key
          - key: tls.crt
            path: tls.crt
        defaultMode: 288
    - name: trust0
      secret:
        secretName: mqcert
        items:
          - key: app.crt
            path: app.crt
        defaultMode: 288
    - name: cm-mtlsmqsc
      configMap:
        name: mtlsmqsc
        items:
          - key: example.ini
            path: example.ini
        defaultMode: 420
    - name: oidc-certificate
      secret:
        secretName: ibmcloud-cluster-ca-cert
        items:
          - key: ca.crt
            path: OIDC_CERTIFICATE.crt
        defaultMode: 420
    - name: mq-ddd-qm-test-ibm-mq-token-lvlzr
      secret:
        secretName: xxxxx
        defaultMode: 420
  dnsPolicy: ClusterFirst
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/memory-pressure
      operator: Exists
      effect: NoSchedule

Hi @YangKeao, here is some additional definition I did not send you earlier, so you can see the security context of the pod. Just note that I have also performed the test with a chaos-mesh pod (from the chaos-mesh testing project) and got the same error.

@YangKeao
Member

Then I create a new I/O experiment on Volume Path: /data/demo

I only got this error:
Failed to update conditions: Operation cannot be fulfilled on iochaos.chaos-mesh.org "iotest4": the object has been modified; please apply your changes to the latest version and try again

And I think that the experiment passed at the end of the duration (1 minute) I got:
Time up according to the duration

Thanks for your reply! The error "Failed to update conditions" doesn't matter. If there are no other errors, and there are events like "Successfully apply chaos for NAMESPACE/PODNAME", the injection works well. You can check the latency by running ls -lah or other commands under that folder, and see whether they become slow.

During the injection, a file system (called toda) is mounted at the volumePath, which can be verified through /proc/mounts.
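As a concrete, illustrative check, you can scan a mount table for a toda entry; the sample mounts content below is made up for the demo (inside a real pod you would read /proc/mounts directly):

```shell
# has_toda_mount: succeeds if a toda filesystem appears in the given mounts
# table. Inside the target pod you would call: has_toda_mount /proc/mounts
has_toda_mount() {
  grep -q 'toda' "$1"
}

# Fabricated sample of what /proc/mounts could look like during injection;
# the toda line is illustrative, not captured from a real pod.
cat > /tmp/sample_mounts <<'EOF'
overlay / overlay rw,relatime 0 0
toda /data/demo fuse.toda rw,nosuid,nodev 0 0
EOF

has_toda_mount /tmp/sample_mounts \
  && echo "toda mount present: injection is active" \
  || echo "no toda mount: injection did not take effect"
```

During a live experiment you can also compare `time ls -lah <volumePath>` before and after injection to observe the added latency.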

@faraktingi
Author

faraktingi commented Sep 17, 2021

@YangKeao
OK, so the Ubuntu test worked after giving the pod more security privileges. But changing the pod security privileges is not an option for my MQ production pods.

What options do we have in such a case? Is this something that needs deeper investigation on your side?

@YangKeao
Member

@YangKeao
OK, so the Ubuntu test worked after giving the pod more security privileges. But changing the pod security privileges is not an option for my MQ production pods.

What options do we have in such a case? Is this something that needs deeper investigation on your side?

I don't know why the privileges of the target pod would affect the execution of Chaos Mesh, as Chaos Mesh runs all injections under its own user and container (by switching namespaces / cgroups), and the chaos-daemon is "privileged". OpenShift seems to have a lot of security protections (e.g. SCC, SELinux...), and the users of Chaos Mesh on OpenShift have reported many different problems 😿 . But none of them can be easily reproduced on the CodeReady environment or a newly created cluster.

I really want to investigate, write down all the possible situations in the documentation, and enable OpenShift users to use Chaos Mesh out of the box, but sometimes I don't know which direction to investigate. Let me read more about OpenShift and SELinux 🧠 . I need more knowledge to solve this issue.

One more question: do other functions of Chaos Mesh (e.g. NetworkChaos) work well?

@faraktingi
Author

Sure @YangKeao

Yes I was able to perform some NetworkChaos experiments successfully.

Many thanks for your help again.
Fabien

@faraktingi
Author

hello @YangKeao

How are you doing?

Any news regarding this issue please?

Thanks Fabien.

@faraktingi
Author

Hello @YangKeao - Do you think I could have an update on this issue soon please?

Thanks for your help,
Fabien

@YangKeao
Member

Hello @YangKeao - Do you think I could have an update on this issue soon please?

Thanks for your help,
Fabien

No, I don't think I'll have a solution soon 😿 .

@github-actions

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 21 days

@YangKeao YangKeao added the type/bug Report of an issue or malfunction. label Dec 28, 2021
@vishrantgupta

By default, the chaos mesh daemon set does not get scheduled on the control-plane/ master node, and I was getting this error

Failed to apply chaos: cannot find daemonIP on node <ip>

To fix it, add a toleration for the chaos daemon:

  chaosDaemon:
    tolerations:
      - effect: NoSchedule
        operator: Exists

@oldthreefeng

Add tolerations to chaos-daemon and redeploy:

tolerations:
  - effect: NoSchedule
    operator: Exists

@michael-lam

michael-lam commented Oct 1, 2024

[screenshot omitted]

I don't know where I should add the tolerations. I assume that I should add them into the values.yaml file and change:

From:
tolerations: []

To:
tolerations:
  - effect: NoSchedule
    operator: Exists

Thank you in advance

@STRRL
Member

STRRL commented Oct 1, 2024

[screenshot omitted]

I don't know where I should add the tolerations. I assume that I should add them into the values.yaml file and change:

From:
tolerations: []

To:
tolerations:
  - effect: NoSchedule
    operator: Exists

Thank you in advance

Yes, that is the correct place to put the tolerations. And please mind the indentation.
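Putting the pieces of this thread together, the relevant part of the Helm values.yaml would look roughly like this (other keys elided; this is a sketch, not the full chart values):

```yaml
chaosDaemon:
  tolerations:
    - effect: NoSchedule
      operator: Exists
```

Then redeploy, e.g. with `helm upgrade chaos-mesh chaos-mesh/chaos-mesh -n chaos-testing -f values.yaml` (command shape assumed from the install command at the top of this issue).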

@STRRL
Member

STRRL commented Oct 1, 2024

I am going to close this issue because it was created years ago and has not been active recently.

Feel free to create new issues/discussions if you still have problems. Thanks!

@STRRL STRRL closed this as completed Oct 1, 2024
Labels
component/daemon lifecycle/frozen type/bug Report of an issue or malfunction.

9 participants