This repository has been archived by the owner on Apr 26, 2024. It is now read-only.
Three times now I've run into an issue where the appservice worker process exits after the Telegram bridge catches up. This has only started happening since my 0.34 + Python 3 upgrade (both happened at the same time).
The last couple of lines in the log each time are:
appservice_1 - 2018-12-29 00:22:00,414 - synapse.appservice.scheduler - 171 - INFO - as-recoverer-telegram-7- Successfully recovered application service AS ID telegram
appservice_1 - 2018-12-29 00:22:00,415 - synapse.appservice.scheduler - 172 - INFO - as-recoverer-telegram-7- Remaining active recoverers: 0
For perspective: after a restart, the Telegram bridge stacks up hundreds of transactions for roughly 15 minutes. Once it catches up, it starts streaming normally, provided the appservice sender worker is still running.
The worker has not survived a Telegram bridge restart in any of the three times I've tried it.
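For anyone unfamiliar with the catch-up behaviour described above, the pattern is roughly "drain a backlog of queued transactions, retrying each until delivery succeeds, then resume normal streaming". The sketch below is a hypothetical illustration of that pattern only; it is not Synapse's actual scheduler/recoverer code, and `send` stands in for whatever delivery callable the real scheduler uses.

```python
from collections import deque

def drain_backlog(queue, send, max_attempts=5):
    """Drain queued transactions, retrying each until it sends.

    Hypothetical sketch of the catch-up behaviour described above;
    not Synapse's actual appservice scheduler implementation.
    """
    while queue:
        txn = queue[0]
        for _ in range(max_attempts):
            if send(txn):
                # Delivered: drop it from the backlog and move on.
                queue.popleft()
                break
        else:
            # Exhausted retries; a real recoverer would back off
            # and try again later rather than give up outright.
            return False
    # Backlog fully drained -- equivalent to the "Successfully
    # recovered application service" log line above.
    return True
```

The point of the sketch is that the crash reported here happens exactly at the moment the backlog reaches zero ("Remaining active recoverers: 0"), i.e. on the transition back to normal streaming.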