Describe the bug
With Kubeshark v51.0.0 installed, I'm seeing next to no traffic reported over AMQP. The entries that do appear are very intermittent and usually attributed to the wrong source or destination. As a quick demo, I have two services publishing to rabbitmq and one service subscribed. The subscribed service reports receiving around 10 messages per second, but the UI rarely shows anything. When it does show something, it's typically a basic deliver method, and these typically show up as going between "kubernetes" and "kubernetes". Even more rarely I see a basic publish method that is attributed to the correct source and destination.
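For reference, the demo traffic looks roughly like this (a minimal sketch, assuming a rabbitmq service reachable at the default URL and the rabbitmq/amqp091-go client; the queue name, credentials, and rate are illustrative, not the exact services from the report):

```go
// Minimal sketch of the demo traffic described above. The broker URL,
// queue name, and ~10 msg/s rate are assumptions for illustration.
package main

import (
	"context"
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@rabbitmq:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	q, err := ch.QueueDeclare("demo", false, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Subscriber: each delivery should surface as one "basic deliver" in the UI.
	msgs, err := ch.Consume(q.Name, "", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		for m := range msgs {
			log.Printf("received: %s", m.Body)
		}
	}()

	// Publisher: ~10 messages per second, matching the observed consume rate;
	// each publish should surface as one "basic publish" in the UI.
	for range time.Tick(100 * time.Millisecond) {
		err := ch.PublishWithContext(context.Background(), "", q.Name, false, false,
			amqp.Publishing{ContentType: "text/plain", Body: []byte("ping")})
		if err != nil {
			log.Fatal(err)
		}
	}
}
```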
Provide more information
minikube
To Reproduce
1. Set up helm chart values
2. Install the helm chart (commands sketched after this list)
3. Add the minikube ingress controller
4. Navigate to the front end in a web browser at http://ks.svc.cluster.local/
5. Observe little to no AMQP traffic
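The setup steps above were roughly these commands (a sketch; the chart repo URL is the one Kubeshark documents, while the release name and values file are illustrative):

```sh
# Assumed install steps; release name and values.yaml are illustrative.
helm repo add kubeshark https://helm.kubeshark.co
helm install kubeshark kubeshark/kubeshark -f values.yaml
minikube addons enable ingress
```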
Expected behavior
One basic publish method in the UI per message produced by the publishers, and one basic deliver method in the UI per message received by the subscriber.
Logs
Worker logs typically show this
Hub logs
Screenshots
N/A
Desktop (please complete the following information):
OS: Ubuntu 20.04 kernel 5.15.0-86-generic
Web Browser: Chrome
Additional context
Resource utilization of the worker looked stable. I have tried turning off other applications at the same time to free up system resources, and I have also tried adjusting the tap.regex setting in the chart to reduce the number of pods we're capturing traffic for to just those of interest (sketched below).
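For reference, that scoping change looked roughly like this in the chart values (a sketch; the pod-name pattern is illustrative):

```yaml
# Sketch of limiting capture scope; the pod-name pattern is illustrative.
tap:
  regex: "(rabbitmq|publisher|subscriber).*"
```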
In case it's useful, the worker always starts up with a failure in tracer and then runs successfully after a restart. This is the error it crashes with:
```
2023-10-20T18:20:30Z INF tracer/misc/data.go:20 > Set the data directory to: data-dir=data
2023-10-20T18:20:30Z INF tracer/main.go:41 > Starting tracer...
2023-10-20T18:20:30Z INF tracer/tracer.go:39 > Initializing tracer (chunksSize: 409600) (logSize: 4096)
2023-10-20T18:20:30Z INF tracer/tracer.go:53 > Detected Linux kernel version: 5.15.0-86-generic
2023-10-20T18:20:30Z INF tracer/pkg/pipe/impl.go:50 > Created a named pipe: name=data/pipe.log
2023-10-20T18:20:30Z INF tracer/pkg/pipe/impl.go:57 > Opened the named pipe: name=data/pipe.log
2023-10-20T18:20:30Z ERR tracer/main.go:75 > error="failed to create perf ring for CPU 0: can't mmap: operation not permitted"
panic: failed to create perf ring for CPU 0: can't mmap: operation not permitted
goroutine 1 [running]:
main.run()
	/app/tracer/main.go:49 +0x1b2
main.main()
	/app/tracer/main.go:37 +0x4bf
```
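For what it's worth, that error string matches the one cilium/ebpf's perf reader produces when mmap of a per-CPU ring fails; here is a sketch of the failing step, assuming the tracer uses that package (the function and map are illustrative):

```go
// Sketch of where this class of error originates, assuming cilium/ebpf.
package tracer

import (
	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/perf"
)

// openPerf mirrors the failing step: perf.NewReader mmaps one ring per
// CPU, and "operation not permitted" there typically means the process
// lacks privileges (e.g. a privileged securityContext / CAP_IPC_LOCK)
// or exceeds the locked-memory limit (RLIMIT_MEMLOCK).
func openPerf(events *ebpf.Map) (*perf.Reader, error) {
	// 409600 matches the chunksSize the tracer logs at startup.
	return perf.NewReader(events, 409600)
}
```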
Here's an example of it not attributing source and destination correctly. This is supposed to be a publish from one of my services to the message broker.