Forward extremities accumulate and lead to poor performance #1760
More logging for resolving state groups was added in #1767 which will hopefully help explain this |
As a workaround for people with seriously fragmented rooms (e.g. @Half-Shot has 209 extremities in #mozilla_#rust:matrix.org atm):
...is a dangerous and risky and not-really-recommended solution which will remove all but the newest extremity from each room with multiple extremities. If it leaves the 'wrong' extremity for a room, however, bad things could happen. It's useful if your server is so hosed that you can't otherwise send dummy messages into the room to heal it. It should be run whilst the server is shut down. So far we haven't seen it make problems worse, only better. |
Taking the maximum event_id won't necessarily give you the latest event. It may sort of work, given that synapse sends out events with an auto-incrementing integer at the front, but that won't be true across different servers. To get the latest you'd need to compare the stream_orderings. |
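For reference, the sort of per-room max(event_id) cleanup being discussed looks roughly like this (a hypothetical sketch of that approach, not a recommendation):

-- Hypothetical sketch: keep only the lexically-largest event_id per room.
-- As noted above, event_id ordering is not reliable across servers, so
-- comparing stream_ordering (as in the query below) is safer.
DELETE FROM event_forward_extremities
WHERE event_id NOT IN (
    SELECT max(event_id)
    FROM event_forward_extremities
    GROUP BY room_id
);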
DELETE FROM event_forward_extremities AS e
USING (
SELECT DISTINCT ON (room_id)
room_id,
last_value(event_id) OVER w AS event_id
FROM event_forward_extremities
NATURAL JOIN events
WINDOW w AS (
PARTITION BY room_id
ORDER BY stream_ordering
range between unbounded preceding and unbounded following
)
ORDER BY room_id, stream_ordering
) AS s
WHERE
s.room_id = e.room_id
AND e.event_id != s.event_id
AND e.room_id = '!jpZMojebDLgJdJzFWn:matrix.org';
...is probably more how you can do it on postgres |
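Before actually deleting anything, a read-only variant of the same window query can be used to preview which extremities would be dropped (a sketch along the same lines; the room id below is just a placeholder):

-- Preview only: list the extremities the DELETE above would remove, i.e.
-- everything except the extremity with the highest stream_ordering per room.
SELECT e.room_id, e.event_id
FROM event_forward_extremities AS e
JOIN (
    SELECT DISTINCT ON (room_id)
        room_id,
        last_value(event_id) OVER w AS event_id
    FROM event_forward_extremities
    NATURAL JOIN events
    WINDOW w AS (
        PARTITION BY room_id
        ORDER BY stream_ordering
        RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    )
    ORDER BY room_id, stream_ordering
) AS s ON s.room_id = e.room_id
WHERE e.event_id != s.event_id
  AND e.room_id = '!some_room_id:example.org';  -- placeholder room id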
I've been looking at this over the last few days, as it appears to be a common cause of poor performance for many people. Conclusions so far follow. There are two principal causes for the accumulation of extremities:

The first is your server being offline, or unreachable by other servers in the federation. This can lead to a gap in the room DAG. Your server will make an attempt to backfill when it receives events after a gap, but will cap this to 10 events, and the backfill attempt may not succeed. To some extent, this situation is to be expected. However, it is particularly nasty because the accumulation of extremities makes your server perform poorly, which makes it slow to respond to federation requests, which makes other servers more likely to consider your server offline and stop trying to send to it - thus exacerbating the problem.

The second cause is a rejected event. If your server receives an event over federation which it believes was forbidden under the auth rules of the room, it will reject it. However, if other servers in the federation accept it, then it will become part of the DAG as they see it; this means that your server will see a gap in the DAG, and the rejected event's predecessor will become a forward_extremity. This problem is also self-perpetuating, because a rejected event also causes the homeserver's view of the room state to be reset (#1935), which can lead to more rejections (and hence more forward extremities) down the line.

This second cause shouldn't really happen, because we don't expect to see rejections unless someone is doing something nefarious: all HSes should agree on which events are allowed in the DAG. It clearly is happening though, so my current investigation is focussed on trying to pin down why. I'd also like to do something about #1935, such that when a rejection does happen (through incompetence or malice), it doesn't completely mess everything up thereafter. |
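To check whether the accumulation lines up with periods where a server was offline (the first cause above), it can help to look at how old the dangling extremities are. A minimal sketch, assuming the usual events table with origin_server_ts in milliseconds:

-- Age of each forward extremity: very old extremities often correspond to
-- gaps in the DAG from when the server was offline or unreachable.
SELECT f.room_id,
       f.event_id,
       to_timestamp(e.origin_server_ts / 1000) AS origin_ts
FROM event_forward_extremities AS f
JOIN events AS e ON e.event_id = f.event_id
ORDER BY f.room_id, e.origin_server_ts;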
The rejections appeared to stem from the fact that the state of the room was out of sync from the very start - it looked like events were received over federation while the join was still in progress, and a race condition meant that the state ended up in an invalid, uh, state. Hopefully this will be fixed by #2016. |
This seems to have gotten worse (at least for me) over the last week or so. Every second day I'm having to clear extremities from t2bot.io just to keep the thing running in a reasonable fashion. No apparent consistency between rooms, just 25+ extremities for 10-15 rooms after a couple days. |
It still seems to be quite a problem (it is for me at least). If that interests anyone, I monitor the evolution of these extremities with this SQL query (it is a little big because it retrieves the canonical alias of offending rooms as well):
SELECT f.count, concat(f.alias, ' (', f.room_id, ')')
FROM (
SELECT t.room_id, t.count, se.event_id, e.content::json->'alias' AS alias
FROM (
SELECT room_id, count(*)
FROM event_forward_extremities
GROUP BY room_id HAVING count(*)>1
) t
LEFT OUTER JOIN current_state_events AS se
ON se.room_id = t.room_id AND se.type = 'm.room.canonical_alias'
LEFT OUTER JOIN events AS e
ON se.event_id = e.event_id
) f; |
From what I see, worst offenders seem to be IRC-bridged rooms with a high join/part turnover. Such as #mozilla_#rust:matrix.org, #mozilla_#rust-offtopic:matrix.org, and #haskell:matrix.org |
I haven't had a serious breakdown or runaway forward extremity accumulation while on #rust for several months, FWIW. It seems that either there was a specific event that triggered it which hasn't recurred in that room, or at least some of the causes have been addressed. |
I had no catastrophic accumulation, but these rooms sat at around 60-80 extremities. I finally got around just leaving these rooms, and I must tell, my HS is much more responsive since I've done that. |
I just had what I assume was this issue. I had multiple rooms with >6 (35 max) extremities. Synapse became completely unresponsive: Edit: Just hit it again. Looks like one of the rooms accumulating extremities is the Matrix HQ room. Is there a way to remove users from rooms via the db? Maybe I can remove them via a cron job? |
I experience this with the #haskell room on freenode. Is there any way to reset the room or delete it? The extremities come back as soon as I delete them. |
We just had to run this on matrix.org after lots of freenode membership churn seemingly fragmented lots of DAGs, causing a feedback loop where subsequent freenode joins got slower and slower, making freenode grind to a halt. For the record, the query used was:
BEGIN;
SET enable_seqscan=off;
DELETE FROM event_forward_extremities AS e
USING (
SELECT DISTINCT ON (room_id)
room_id,
last_value(event_id) OVER w AS event_id
FROM event_forward_extremities
NATURAL JOIN events
WINDOW w AS (
PARTITION BY room_id
ORDER BY stream_ordering
range between unbounded preceding and unbounded following
)
ORDER BY room_id, stream_ordering
) AS s,
(
select room_id from event_forward_extremities group by room_id having count(*)>1
) AS x
WHERE
s.room_id = e.room_id
AND e.event_id != s.event_id
AND e.room_id = x.room_id;
COMMIT;
...which took a few minutes to run. |
On the receiving end of many of those membership events, I've also seen extremities skyrocket. Under normal load, extremities accumulate slowly, however the last day or so has caused fairly major outages on my end :( |
Guys, this is a major problem. I've been running a synapse instance for a year and a half now (some 50 active users, joining the typical huge channels), and the general experience is that everything is mostly fine as long as no forward extremities are accumulated, but as soon as it happens (5+), it comes out of nowhere, grinds everything to a halt and needs manual intervention.

Really, for the first two to three months my impression of admin complexity was "just apt-get upgrade once in a while, you're good. no advanced skills necessary". This has since changed to "better know about these sql statements from that issue on the tracker, or your hs is bound to blow up sooner or later". One admin of a major HS I talked to told me they'd pretty much just regularly schedule downtimes to run the above "dangerous and risky and not-really-recommended" query. For myself I mostly hope that I won't be asleep when this happens, so I can handle things fast enough to minimize downtime for my users.

Still, this is unacceptable reliability for something people want to use as a messenger, not to mention the admin load. I don't know how much this shows on matrix.org since it's kind of in a special position, but for other HSes I cannot overstate how much of an impact this has on maintaining a synapse instance. Really, please allocate more time to this. There are workable suggestions above - maybe send dummy events to channels when there are more than a few extremities; this is pretty much what I end up doing manually every once in a while. |
So I was getting around 24 for a few weeks now for #mozilla_#rust:matrix.org, and today it changed to 74, and things are really slow again. This is using 1.2.1. |
@kroeckx can you turn on the experimental cleanup_extremities_with_dummy_events option? |
I already ran the query. I've now enabled it. I still see 2 rooms with 4 extremities. |
So the rust room grew to 6 again so far. |
The way that option works is to only kick in and suppress the extremities if they grow beyond 10. Are you seeing perf issues currently? |
I guess this is still normal. I'll let you know if I ever see it higher than 10 again. |
So I see the values change over time, but they stay below 10. |
this is as expected. the question is whether you hit perf problems or if 10 is an adequate threshold. |
I can't say I see performance differences. The only time I could clearly see a difference is when it was much higher. |
So it sounds like the new setting is correctly keeping rooms below 10 extremities, so it is no longer causing a performance problem? |
On our not-too-large public ru-matrix.org server, running Synapse v1.3.1 with far too much cpu/load usage, the extremity counts were very large:
and more than 12 rooms with a count >100! After cleaning up with the SQL queries there was one room with a count of 2 and all others with 1; after a restart this grew to 5-7, and cpu/mem/load went down dramatically, thanks! Do we need to repeat these cleanups periodically? |
We no longer consider #5480 to be experimental and have enabled it by default in 1.4.0. So death by extremity buildup should be a thing of the past. Feedback very much welcome. I'll leave this issue open for now in case folks still have problems. |
I think we can consider this closed now. |
Glad this one didn't make it to 2020 |
I seem to be getting these extremities still; I cleared them out about a month ago and since then these new forward extremities have accumulated:
My Synapse version: 1.23.0 |
fwiw, a query to help identify the room names and such of those rooms is:
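Something along these lines does the job (a sketch only, assuming the usual current_state_events and events tables; it pulls in the room name and canonical alias where those state events exist):

-- Per-room extremity counts joined with the room's name and canonical alias,
-- for easier identification of the offending rooms.
SELECT t.room_id,
       t.count,
       n.content::json->>'name'  AS name,
       a.content::json->>'alias' AS alias
FROM (
    SELECT room_id, count(*)
    FROM event_forward_extremities
    GROUP BY room_id
    HAVING count(*) > 1
) AS t
LEFT JOIN current_state_events AS ns
    ON ns.room_id = t.room_id AND ns.type = 'm.room.name' AND ns.state_key = ''
LEFT JOIN events AS n ON n.event_id = ns.event_id
LEFT JOIN current_state_events AS cs
    ON cs.room_id = t.room_id AND cs.type = 'm.room.canonical_alias' AND cs.state_key = ''
LEFT JOIN events AS a ON a.event_id = cs.event_id
ORDER BY t.count DESC;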
It's not pretty, but it does at least give a bit more context if one is trying to determine if a particular room even needs to be on the server. |
There is a related PR #9062 |
The PR is merged, cool! But how can one find rooms that have too many extremities via an admin API call, instead of an old-school raw SQL query? |
TLDR: To determine if you are affected by this problem, run the following query:
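For example, something along these lines (a sketch; adjust the threshold to taste):

-- Rooms with more than one forward extremity, worst offenders first.
SELECT room_id, count(*) AS extremities
FROM event_forward_extremities
GROUP BY room_id
HAVING count(*) > 1
ORDER BY count(*) DESC;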
Any rows showing a count of more than a handful (say 10) are cause for concern. You can probably gain some respite by running the query at #1760 (comment).
Whilst investigating the cause of heap usage spikes in synapse, correlating jumps in RSZ with logs showed that 'resolving state for !curbaf with 49 groups' loglines took ages to execute and would temporarily take loads of heap (resulting in a permanent hike in RSZ, as python is bad at reclaiming heap).
On looking at the groups being resolved, it turned out that these were the extremities of the current room: whenever synapse queries the current room state, it has to merge all of these together, and the implementation of that merge is currently very slow. To clear the extremities, one has to talk in the room (each message 'heals' 10 extremities, as the max prev-events for a message is 10).
Problems here are: