Actual Behavior
Trebuchet consumes from both the 'goku' and 'adjust_data' topics. When running in production, however, the rate at which Faust consumes from these topics varies wildly, sometimes even dropping to zero. See the graphs below:
What's notable about these graphs is that they are exact inverses of each other. When one spikes, the other drops, and vice versa, so Faust is consuming at an approximately constant rate overall. Consumption of goku events sometimes even stops entirely for a while.
Expected Behavior
Faust should be consuming from both topics at a reasonable, constant rate.
Steps to reproduce
To reproduce this issue locally, set up trebuchet locally (https://github.com/robinhoodmarkets/trebuchet), then add print("logging event") to line 14 and print("adjust event") to line 24 of trebuchet/agents.py. Also, change the replication factor in trebuchet/config.py to 1. After building and installing, start trebuchet with trebuchet worker -l info. The tests/local directory contains sample adjust_data and goku events. To produce the sample events, run the following two commands in two separate shells at the same time:
kafka-console-producer --broker-list localhost:9092 --topic adjust_data < tests/local/adjust_data
kafka-console-producer --broker-list localhost:9092 --topic goku < tests/local/goku
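For reference, the instrumented agents would look roughly like the sketch below. This is a hypothetical reconstruction, not Trebuchet's actual agents.py: the app name, constructor arguments, topic wiring, and handler bodies are all assumptions; only the two print statements and the topic names come from the steps above.

import faust

# Hypothetical sketch of trebuchet/agents.py with the two print
# statements added; names and structure are illustrative only.
app = faust.App('trebuchet', broker='kafka://localhost:9092')

goku_topic = app.topic('goku')
adjust_topic = app.topic('adjust_data')


@app.agent(goku_topic)
async def process_goku(stream):
    async for event in stream:
        print("logging event")   # the print added at line 14 in the repro steps
        ...  # normal goku-event handling would go here


@app.agent(adjust_topic)
async def process_adjust(stream):
    async for event in stream:
        print("adjust event")    # the print added at line 24 in the repro steps
        ...  # normal adjust-event handling would go here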
This should start producing to Kafka, and you will see a stream of print statements from Trebuchet. If reproduced correctly, you should see periods when only adjust events (and no goku/logging events) are processed, followed by periods when both topics are processed (with a lower rate for adjust events), and then the cycle repeating, similar to what the graphs above show.
Versions
Python version: 3.6.4
Faust version: 0.9.36
Operating system: macOS High Sierra
Kafka version: 0.11.0.1
Issue resolved. Trebuchet was using asyncio.gather() to run all of the pushes to Kafka (pushing app, device, user, and event) concurrently. Removing that caused everything to return to normal.
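For anyone hitting the same symptom, the change amounts to the pattern below. This is a minimal sketch, assuming hypothetical push coroutines: push_app, push_device, push_user, and push_event stand in for Trebuchet's actual Kafka-push calls, which are not shown in this issue.

import asyncio

# Stand-ins for Trebuchet's actual push-to-Kafka coroutines.
async def push_app(event): ...
async def push_device(event): ...
async def push_user(event): ...
async def push_event(event): ...

# Before: all four pushes scheduled concurrently with asyncio.gather();
# this was the pattern that coincided with the oscillating consumption.
async def handle_before(event):
    await asyncio.gather(
        push_app(event),
        push_device(event),
        push_user(event),
        push_event(event),
    )

# After: awaiting each push sequentially restored steady consumption
# from both topics.
async def handle_after(event):
    await push_app(event)
    await push_device(event)
    await push_user(event)
    await push_event(event)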