This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

major state reset / corruption bug impacting the main official GrapheneOS room #14481

Open
thestinger opened this issue Nov 18, 2022 · 33 comments
Labels
A-Federation O-Occasional Affects or can be seen by some users regularly or most users rarely S-Major Major functionality / product severely impaired, no satisfactory workaround. T-Defect Bugs, crashes, hangs, security vulnerabilities, or other reported issues. z-WTF Causing the user to exclaim! These issues are high impact and low effort.

Comments

@thestinger

Description

#grapheneos:grapheneos.org (!SayHlEYXdrpSerhLMC:matrix.org), which was created with room version 6, has been impacted by a major Synapse / protocol bug resulting in the loss of around 1500 members followed by users being unable to join the room. It was acting as if it were set to invite-only despite being public. We think we've worked around that issue by setting it to invite-only and back to public. We know little about the Matrix protocol and Synapse, so we're unable to determine what happened. Ideally we can also get help with restoring the room members. Users reported getting a 403 error when trying to rejoin and were confused, thinking they had been banned.

Steps to reproduce

We don't know how to reproduce the problem.

Homeserver

all

Synapse Version

all

Installation Method

No response

Platform

All

Relevant log output

n/a

Anything else that would be useful to know?

No response

@thestinger
Author

Room membership was ~14900 across each server. State calculation somehow got screwed up and showed it as ~13300 across each server. It has somehow partially fixed itself and now shows ~14380 across servers. It seems we hit some state calculation bug and, after more people joined, it resolved itself. I don't know how this works in Synapse or the protocol in detail, so I don't know how this could be happening. The issue has half resolved itself but is still happening; I'm surprised it partially recovered this way.

@thestinger
Author

Many matrix.org users are still unable to join / rejoin without being invited.

@ara4n
Member

ara4n commented Nov 18, 2022

Ugh, sorry about this, and thanks for reporting it. We'll dig into it asap (but may need some server logs to diagnose; please capture them to stop them rotating if you haven't already).

@thestinger
Author

Saved the current set of logs. The drop in the number of users, and the partial increase back towards the previous amount, occurred across all the servers we checked, including matrix.org, so the issue seems at least mostly deterministic and occurred on each server.

@DMRobertson
Contributor

xref #8629

@DMRobertson
Contributor

I took a quick look at this today. It looks like this room has been suffering state resets for a while. Delving into matrix.org's view of things (every row with an occurrence value greater than one is a reset):

matrix=> SELECT
    csds.*,
    -- counts how many times each event_id has become current state;
    -- a rank above 1 means the room was reset back to an older event
    rank() OVER (PARTITION BY event_id ORDER BY stream_id ASC) AS occurrence,
    json::json->'content'->'join_rule' AS "join rule"
FROM current_state_delta_stream csds
    NATURAL JOIN event_json
WHERE room_id = :'room_id'
    AND type = 'm.room.join_rules'
    AND state_key = ''
ORDER BY stream_id ASC;
 stream_id  │            room_id             │       type        │ state_key │                   event_id                   │                prev_event_id                 │   instance_name   │ occurrence │ join rule 
════════════╪════════════════════════════════╪═══════════════════╪═══════════╪══════════════════════════════════════════════╪══════════════════════════════════════════════╪═══════════════════╪════════════╪═══════════
 2027843842 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $fcWc_nauhvLTD8dMRJbujUmPb5USkws-UTuKpfDiLfI │ ¤                                            │ event_persister-1 │          1 │ "public"
 2121461477 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $z9eXQiXcMf71bM9X2tMKxDKUQxuk5DCJ4sCDx_-d134 │ $fcWc_nauhvLTD8dMRJbujUmPb5USkws-UTuKpfDiLfI │ event_persister-2 │          1 │ "invite"
 2121492824 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $zJx5ixkFmz45Zxb5ZtBDpA1EsVY-djBt5zAiLwBxE2Q │ $z9eXQiXcMf71bM9X2tMKxDKUQxuk5DCJ4sCDx_-d134 │ event_persister-2 │          1 │ "public"
 2671580781 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $hE3zfL-7jHEeqDh7pX80tQA67PPkXt2OlOoOllfn9cQ │ $zJx5ixkFmz45Zxb5ZtBDpA1EsVY-djBt5zAiLwBxE2Q │ event_persister-2 │          1 │ "invite"
 2671688085 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $VWpaeXTzgoRNUsV0iws2IQKPOCSLS0mE8j9l94BQ5jg │ $hE3zfL-7jHEeqDh7pX80tQA67PPkXt2OlOoOllfn9cQ │ event_persister-2 │          1 │ "public"
 2674054841 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $iWRT1pdlXF_Lkb-woVnvWh4oPznWeJPBPaeCaSe9I0Q │ $VWpaeXTzgoRNUsV0iws2IQKPOCSLS0mE8j9l94BQ5jg │ event_persister-2 │          1 │ "invite"
 2674054852 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $Hfo2yWg15viSsdszm06I4sprCV_dELc9MRPPXNO945M │ $iWRT1pdlXF_Lkb-woVnvWh4oPznWeJPBPaeCaSe9I0Q │ event_persister-2 │          1 │ "invite"
 2674236322 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $GIH1nMuxiHTekSVmiWhF83KoCJzarNPHgu6wsXrL9gs │ $Hfo2yWg15viSsdszm06I4sprCV_dELc9MRPPXNO945M │ event_persister-2 │          1 │ "public"
 2689269590 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $hE3zfL-7jHEeqDh7pX80tQA67PPkXt2OlOoOllfn9cQ │ $GIH1nMuxiHTekSVmiWhF83KoCJzarNPHgu6wsXrL9gs │ event_persister-2 │          2 │ "invite"
 2691601114 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $EueSb2uGb3dOPoAsW2vL8j1FhIydoTTNRz6oLT_cMvU │ $hE3zfL-7jHEeqDh7pX80tQA67PPkXt2OlOoOllfn9cQ │ event_persister-2 │          1 │ "invite"
 2691601239 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $zYsJ9_moRx4MKOUfJd9YpCQLof3_gICQt92fx6jRqgk │ $EueSb2uGb3dOPoAsW2vL8j1FhIydoTTNRz6oLT_cMvU │ event_persister-2 │          1 │ "public"
 2694889147 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $l_jH895Xk8kiVOKlFMqTYTJVES6Hu4EBolMDQ3pwnbk │ $zYsJ9_moRx4MKOUfJd9YpCQLof3_gICQt92fx6jRqgk │ event_persister-2 │          1 │ "invite"
 2694938440 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ $l_jH895Xk8kiVOKlFMqTYTJVES6Hu4EBolMDQ3pwnbk │ event_persister-2 │          1 │ "public"
 2711031303 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $hE3zfL-7jHEeqDh7pX80tQA67PPkXt2OlOoOllfn9cQ │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ event_persister-2 │          3 │ "invite"
 2711308198 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ $hE3zfL-7jHEeqDh7pX80tQA67PPkXt2OlOoOllfn9cQ │ event_persister-2 │          2 │ "public"
 2825815234 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $j9K4YREoClTXl9FbjnDHLYBuNpL3tLYz9NWM3fNlFzA │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ event_persister-2 │          1 │ "invite"
 2825824697 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $s7njU-2Ll0NygwkK7OhaYPCnJQdxr9GpMsSGTQlSess │ $j9K4YREoClTXl9FbjnDHLYBuNpL3tLYz9NWM3fNlFzA │ event_persister-2 │          1 │ "public"
 2825860549 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ $s7njU-2Ll0NygwkK7OhaYPCnJQdxr9GpMsSGTQlSess │ event_persister-2 │          3 │ "public"
 2832161693 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ event_persister-3 │          4 │ "public"
 3076015917 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $bF-OUxkxF-vkHCFBHsXIV_aWRvAwOIWGD5VzVEetDeM │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ event_persister-3 │          1 │ "invite"
 3076035278 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ $bF-OUxkxF-vkHCFBHsXIV_aWRvAwOIWGD5VzVEetDeM │ event_persister-3 │          5 │ "public"
 3078616063 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $-qyi3oyftLH04PWtSs8dF878M16pCK3AhVrl97blHMQ │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ event_persister-3 │          1 │ "public"
 3082573369 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ $-qyi3oyftLH04PWtSs8dF878M16pCK3AhVrl97blHMQ │ event_persister-3 │          6 │ "public"
 3250852188 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $-qyi3oyftLH04PWtSs8dF878M16pCK3AhVrl97blHMQ │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ event_persister-3 │          2 │ "public"
 3256789857 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ $-qyi3oyftLH04PWtSs8dF878M16pCK3AhVrl97blHMQ │ event_persister-3 │          7 │ "public"
 3260761338 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $-qyi3oyftLH04PWtSs8dF878M16pCK3AhVrl97blHMQ │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ event_persister-3 │          3 │ "public"
 3281626337 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ $-qyi3oyftLH04PWtSs8dF878M16pCK3AhVrl97blHMQ │ event_persister-3 │          8 │ "public"
 3282213708 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $-qyi3oyftLH04PWtSs8dF878M16pCK3AhVrl97blHMQ │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ event_persister-3 │          4 │ "public"
 3350858457 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ $-qyi3oyftLH04PWtSs8dF878M16pCK3AhVrl97blHMQ │ event_persister-3 │          9 │ "public"
 3352978473 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $-qyi3oyftLH04PWtSs8dF878M16pCK3AhVrl97blHMQ │ $woA0F6cjRBYNuTsHAB_rwoZmH8Ql5mxtzUHfwjnh_Lc │ event_persister-3 │          5 │ "public"
 3456233260 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $hE3zfL-7jHEeqDh7pX80tQA67PPkXt2OlOoOllfn9cQ │ $-qyi3oyftLH04PWtSs8dF878M16pCK3AhVrl97blHMQ │ event_persister-3 │          4 │ "invite"
 3457954990 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $vN059rsafuVEVlpEFwBHtHwRpkAiso62uBVAfQgYIYs │ $hE3zfL-7jHEeqDh7pX80tQA67PPkXt2OlOoOllfn9cQ │ event_persister-3 │          1 │ "invite"
 3457955303 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $MFrkQ1oPqUT__UCG-AI1QOLynf62L0xBuUodXQhicV8 │ $vN059rsafuVEVlpEFwBHtHwRpkAiso62uBVAfQgYIYs │ event_persister-3 │          1 │ "public"
 3458962379 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $D2qCAcFRpeALi7z1hAWCybAB8ZFS4cr3eSZyKfhzxFg │ $MFrkQ1oPqUT__UCG-AI1QOLynf62L0xBuUodXQhicV8 │ event_persister-3 │          1 │ "invite"
 3458962585 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $6yKd1FwK1AdLpXRfIa9osN5o6LDpgg2vnwUCE5Eoel8 │ $D2qCAcFRpeALi7z1hAWCybAB8ZFS4cr3eSZyKfhzxFg │ event_persister-3 │          1 │ "public"
 3458963301 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $gYU2PdEaUD4ErZkrMtOvMl1KxOcj2UXiKbVPZL0aJLI │ $6yKd1FwK1AdLpXRfIa9osN5o6LDpgg2vnwUCE5Eoel8 │ event_persister-3 │          1 │ "invite"
 3458963413 │ !SayHlEYXdrpSerhLMC:matrix.org │ m.room.join_rules │           │ $000eDwMNNl9h8dT0vQhO4dnQ5Qc-pVlHLIyvKNtWjPA │ $gYU2PdEaUD4ErZkrMtOvMl1KxOcj2UXiKbVPZL0aJLI │ event_persister-3 │          1 │ "public"
(37 rows)

The first reset that affected join rules looks to be from 11th February 2022 (stream ordering 2689269590), when the join rules reset to $hE3zfL-7jHEeqDh7pX80tQA67PPkXt2OlOoOllfn9cQ. That reoccurred recently (17th November 2022, ordering 3456233260) after a spate of other resets.
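The occurrence logic in the query above can be mirrored in a small standalone script. This is an illustrative sketch with toy data, not real Synapse rows: a delta whose event_id has already been current state once is flagged as a reset.

```python
from collections import Counter

def find_state_resets(deltas):
    """Given (stream_id, event_id) pairs for one (room, type, state_key)
    in stream order, return the pairs where an event_id becomes current
    state more than once, i.e. the room was reset back to an old event."""
    seen = Counter()
    resets = []
    for stream_id, event_id in deltas:
        seen[event_id] += 1
        if seen[event_id] > 1:
            resets.append((stream_id, event_id))
    return resets

# Toy data modelled on the table above: earlier events reappearing.
deltas = [
    (1, "$public1"),
    (2, "$invite1"),
    (3, "$public2"),
    (4, "$invite1"),   # reset back to the earlier invite event
    (5, "$public2"),   # reset back to the earlier public event
]
print(find_state_resets(deltas))  # → [(4, '$invite1'), (5, '$public2')]
```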

I haven't yet been able to confirm it, but I suspect that the cause is a mixture of a recently-fixed Synapse bug and known defects in state resolution v2. Unfortunately, the latter means the former's impact on the room persists (and continues to do so even after the bug is fixed).

As mentioned in #8629 (comment) there's a long-running project trying to characterise and fix these defects in Synapse. We hope to have an update to share soon.

@DMRobertson DMRobertson added A-Federation S-Major Major functionality / product severely impaired, no satisfactory workaround. T-Defect Bugs, crashes, hangs, security vulnerabilities, or other reported issues. O-Occasional Affects or can be seen by some users regularly or most users rarely z-WTF Causing the user to exclaim! These issues are high impact and low effort. labels Nov 18, 2022
@DMRobertson
Contributor

In the meantime: you could try upgrading the GrapheneOS room. This is a blunt workaround rather than a comprehensive fix, but it might be the best way to avoid the resets you've been seeing in the short term.

If you do so, I'd advise that all privileged users in that room (those able to set power levels and adjust join rules) who use Synapse upgrade their instances to 1.64 or higher to pull in the fix I mentioned.

@thestinger
Author

I don't really want to do a room upgrade, but I don't think we have much choice when it keeps rolling back to ~13300 users and blocking people from joining. This is a really bad experience. A room upgrade is going to cost us a bunch of users and the room history, so that's not great.

@ara4n
Member

ara4n commented Nov 19, 2022

Room upgrades do not impact room history, nor do they lose you users. Also, what the hell is this? We have burnt a bunch of time investigating this and trying to help you today, and in return we get slagged off on Twitter?!

Perhaps you think this is acceptable behaviour for interacting with an open source project - but as an open source project yourselves you should know better. We have no interest in supporting folks who scream and jeer at bugs when we try to help them.

@thestinger thestinger closed this as not planned Won't fix, can't repro, duplicate, stale Nov 19, 2022
@SplittyDev

SplittyDev commented Nov 19, 2022

@ara4n I don't think the tweet is to be understood that way. They're merely communicating to users that they're hitting some protocol-layer bugs and that it's not a great situation (which is a neutral statement and doesn't imply blame). I'm sure they didn't mean it disrespectfully, and I didn't see them calling the Matrix team out for being unhelpful. Being an open source maintainer myself, I understand how such misunderstandings can arise, but please try not to read too much into something like this. It just ruins your day for no good reason.

Feel free to mark this as off-topic if you think it contributes nothing to the conversation; I'm just trying to mediate. We've all had bad days and taken something at face value that wasn't meant that way. It's always better to see things in a positive light until proven otherwise.

@SplittyDev

@thestinger I just saw your reaction, the same thing applies to you. There's no reason to make this into a fight when it's most likely just a misunderstanding. Calm down, all of you. We're all trying to help each other out here.

@thestinger
Author

@SplittyDev Are you referring to me closing the issue?

@SplittyDev

@thestinger yes, I believe there was a huge misunderstanding. @ara4n understood the tweet to be an offensive statement, which, as you clarified, it wasn't. You saw his comment as an attack (which I understand, don't get me wrong), but since this "fight" arose from a misunderstanding, I don't think we should come to hasty conclusions. I'm sure everyone here is just as eager to get this resolved. It's a stressful situation for everyone involved.

@thestinger
Author

We made a post on Twitter with an update on the situation since it occurred again (https://twitter.com/GrapheneOS/status/1593759013061234691). I don't understand what we've done wrong or why we're being attacked for it, especially in such a public way. We already have a lot of confused and upset users due to what's happening with the room. As I said, it's not a good situation, and this is not helping.

@SplittyDev

I 100% agree. But you know how it is. Matrix or Synapse bugs affecting big servers aren't fun for anyone here; the Synapse team is trying to figure out how to remedy the situation and felt disrespected by the tweet. I fully understand that the tweet was merely meant as a statement to your users and that there were no ill intentions towards the Synapse team. It's a simple misunderstanding, which I'm sure can be resolved in a calm manner.

@thestinger
Author

thestinger commented Nov 19, 2022

@ara4n I've removed both of our threads about the state reset bug on Twitter/Mastodon. The threads were only intended to inform our community about the situation. They weren't intended to be attacks on Matrix as a protocol / platform. The situation is genuinely very frustrating for us and we're not trying to blame or attack Matrix by expressing it.

I would greatly appreciate it if everything from my comment about not wanting to do a room upgrade (#14481 (comment)) onwards could be deleted here. I want to be able to link to this issue to explain why we upgraded the room, but I feel I can't do that without creating drama here due to this.

The room upgrade preserves the history much better than I remembered, but it's not seamless. Since people have to notice the room upgrade and manually interact to join the new room, it's inherently going to reduce the member count quite a lot. On the positive side, as far as I can tell the room upgrade led to several people leaving, which in turn caused a bunch of the room state to be added back, bringing members back to ~14800.

@no-usernames-left

@thestinger It may reduce numbers, but those wishing to continue to interact will click through; in other words, only inactive lurkers are likely to be lost, and only until they wish to interact again.

@thestinger
Author

@ThePowerofDreamS Many users are confused about what happened. It's not as clear as it could be, and many people miss the notification. It also disrupts the discussion in the room for ages as people gradually move over, confused about what happened. You can look and see that happening. It's far from seamless, and I really didn't want to upgrade the rooms until years from now when the experience is better. It's too late to take it back now, and I'm not sure what the alternative would have been.

@thestinger
Author

thestinger commented Nov 20, 2022

We currently have 10% as many room members as before (1500 vs. 15000). That's slowly growing, but many of the people who were active got no notice about the room upgrade and weren't aware until I invited them. Some don't accept invites by default, etc., so that's far from perfect.

Many are confused about what happened, so it has derailed the discussion in the room. Some people think they did something wrong and were kicked. We would not normally ever do a room upgrade from a room version like 6 because we're very aware of the consequences.

I didn't feel we had much choice, but that doesn't change that this is not a good situation for us. Perhaps choosing room version 10 was a slight mistake, since a few servers used by maybe 200 users in total are many months out of date and unable to join it, but that's insignificant compared to it effectively being a new room.

As I said in that thread on Twitter, I'm also concerned about hitting the same issues again. We've always run close to fully updated Synapse, as have the other servers used by our moderators.

It currently feels a lot like rebuilding the community after the freenode takeover, even though nothing like that happened, just a bug. If room upgrades worked better and servers automatically moved users over, it'd mostly be fine, but that's not the status quo, and we were going to wait until that was fully baked before upgrading our rooms. Server admins can join users to a new room with the API, but our users aren't on our server.

@thestinger
Author

This has happened in 2 more of our rooms now, and the project has already been significantly impacted by the disruption to the main room.

@thestinger
Author

@ara4n @DMRobertson

Rooms bricked so far: #grapheneos:grapheneos.org (room upgraded after), #infra:grapheneos.org (room upgraded after), #dev:grapheneos.org (no room upgrade done yet, but probably needed) and potentially others we aren't aware of yet. If any of the newly upgraded rooms end up getting bricked, that's definitely going to be a dealbreaker for us. The same probably applies to our offtopic and community (space) rooms. This is too much.

We believe this issue is still happening, because nothing was wrong with #dev:grapheneos.org until recently. All of our mods have always run close to the latest Synapse release on their homeservers.

Our rooms get raided frequently, and sometimes there are redundant bans where 2 people ban someone at the same time, etc. Perhaps something involved in defending against the raids caused this, particularly raids with mass joins. Maybe it has something to do with server ACLs; for a while we were using allowlist server ACLs to defend against raids.

Room upgrades are really bad for us. We have lost a substantial portion of the room members, the room history is not available to anyone who isn't in the old room or whose client can't transparently search it, and we haven't gotten help from any homeserver admins to automatically join users in the old room to the new room. We would not normally have done room upgrades until years from now, when the experience is better and users get automatically migrated by their server or client with some kind of anti-DoS limitation, such as a limit on how often it can be done automatically (like once per week).

Many users thought they were banned/kicked due to the state resets, many were confused and didn't accept my invite because they didn't realize it was an invite to a new version of the room, etc. It's still regularly a topic of conversation. Many people thought it was someone impersonating me and inviting them to a fake GrapheneOS room, because that has repeatedly happened as part of the raids on our rooms to attack our project. I would not be surprised if the cause of this was those raids, which occasionally caused our Synapse instance to run out of memory and get killed, or to need restarting after we blocked a bunch of IPs with nftables or nginx.

As an entirely donation-based project, it deeply matters to us to have a very large and active community. This could have been a positive experience where we reported an issue and it was looked into and resolved. Homeserver admins could have helped us move users over, since that's still not automated. Instead, our posts about this, which came from being quite upset about how bad this is for our project, were taken as an attack on Matrix and its developers. That seems to have ended any discussion about it, but the issue is still there, and now this is part of our experience.

Not expecting any response to this, but this situation is quite bad for us and getting worse, and the end result is going to be us figuring out a new approach to our chat platform, followed by a huge push to get people to deal with the change. Maybe we'll use XMPP, maybe we'll go back to focusing on IRC, or maybe we'll use a proprietary chat platform. I don't know, but this really doesn't work for us: the level of abuse we're targeted with, the fact that most of it comes from matrix.org, which we can't defederate from, our rooms being bricked, the overall terrible experience of trying to get something done about the abuse, and now this.

@thestinger
Author

This has happened again. I think it's caused by setting the rooms invite-only to deal with raids. Switching to invite-only and back appears to have a high likelihood of bricking rooms by causing endless state resets. Perhaps it's caused by doing it too quickly, without waiting long between changes for things to sync properly. We need help fixing our rooms or we're going to be unable to use Matrix anymore. We'll have to move to a new platform and publish an article explaining why we're doing that and why Matrix hasn't worked out for us, both due to technical reasons and due to the extremely toxic overall community on the platform involved in attacks on us, plus the lack of help dealing with either of these things.

@thestinger thestinger reopened this Sep 25, 2023
@thestinger
Author

There were rapid join and ban state events during the raids so it's possible that's what caused it too. It was almost certainly caused by these recent events though.

@thestinger
Author

This impacts both our main room (which was recreated after the last state reset cycle) and our offtopic room.

@sempervictus

@SplittyDev - if the historical states of prior buggy versions prevent remedy of current state, can the histories be summarized or a new version of the protocol/state representation applied as a migration of sorts? This looks like a limiting factor for the SoA if there's a data dependency on currently invalid state differentiating "same" classes of objects from each other by the nature of their data (vs function).

@thestinger
Author

Our main room is having state resets again. I think we'll be leaving Matrix.

@thestinger
Author

If this doesn't get resolved, we're just going to be sharing our experiences with the platform in an article and moving to a platform without these problems.

@RokeJulianLockhart

This comment was marked as off-topic.

@thestinger
Author

thestinger commented Dec 4, 2023

@RokeJulianLockhart I didn't report this as a security bug. If people are aware of it being exploitable without malicious moderators / moderator homeservers, I think that should be disclosed rather than covered up.

I do think I could make a good list of guidelines of things to do and not to do in order to avoid this happening.

  1. use a dedicated admin account on your own server, and never add any other admins or change the admin of the room
  2. restrict changing power levels, settings, join rules, etc. to the admin account
  3. minimize the number of homeservers for mods, ideally ONLY mods on own server matching the admin account
  4. never set join rules to invite-only; avoiding it is terrible for defending against raids, but it's too dangerous to use

By following those 4 guidelines, the chance of this happening is minimized. If you've already done any of those things differently then it's probably best to make a new room sooner rather than later, especially if join rules have ever been set to invite-only which seems to be the most damaging issue when resets occur since it invalidates tons of joins and bricks the room for those members.
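A room admin who wants to notice an unexpected join-rule flip early could poll the room's join rules over the standard client-server API. This is a minimal sketch, assuming the v3 state endpoint; the homeserver URL, room ID and access token below are placeholders, not real values.

```python
import json
import urllib.parse
import urllib.request

def join_rules_url(homeserver: str, room_id: str) -> str:
    # GET .../state/m.room.join_rules/ returns e.g. {"join_rule": "public"}
    return (f"{homeserver}/_matrix/client/v3/rooms/"
            f"{urllib.parse.quote(room_id)}/state/m.room.join_rules/")

def fetch_join_rule(homeserver: str, room_id: str, access_token: str) -> str:
    req = urllib.request.Request(
        join_rules_url(homeserver, room_id),
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["join_rule"]

if __name__ == "__main__":
    # Placeholders -- substitute your own homeserver, room and token.
    rule = fetch_join_rule("https://matrix.example.org",
                           "!SayHlEYXdrpSerhLMC:matrix.org", "TOKEN")
    if rule != "public":
        print(f"ALERT: join rule is {rule!r}, possible state reset")
```

Run periodically (e.g. from cron) against an account that is in the room; anything other than "public" on a room that should be public is worth investigating immediately.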

I expect we won't run into comparably awful issues with our new rooms since we're going to follow those rules, although we still have several homeservers for mods; we're likely going to reduce that to only grapheneos.org eventually.

@RokeJulianLockhart

RokeJulianLockhart commented Dec 4, 2023

#14481 (comment)

I didn't report this as a security bug.

Indeed, @thestinger, I believe it was referred to as a security concern in redacted communication by the notable counterpart to this issue in response to the post on Twitter.

If people are aware of it being exploitable without malicious moderators / moderator homeservers, I think that should be disclosed rather than covered up.

I implore you to more generally paraphrase this at the linked issue.

@eslerm

eslerm commented Dec 4, 2023

@RokeJulianLockhart this issue is already public, so even if it is possible for a non-privileged user to exploit this as a vulnerability, a disclosure timeline would not apply.

9 participants