This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

Only try to backfill event if we haven't tried before recently (exponential backoff) #13635

Merged
Changes from 18 commits
Commits
78 commits
e0d7fab
Keep track when we tried to backfill an event
MadLittleMods Aug 23, 2022
b8d55d3
Record in some fail spots
MadLittleMods Aug 25, 2022
f63d823
Merge branch 'develop' into madlittlemods/keep-track-when-we-tried-to…
MadLittleMods Aug 25, 2022
bec26e2
Record and clear attempts
MadLittleMods Aug 25, 2022
fee37c3
Add changelog
MadLittleMods Aug 25, 2022
d1290be
Remove from when spam checker fails
MadLittleMods Aug 25, 2022
f9119d0
Custom upsert to increment
MadLittleMods Aug 25, 2022
f5c6fe7
Fix lints
MadLittleMods Aug 25, 2022
16ebec0
Remove extra whitespace
MadLittleMods Aug 25, 2022
ce07aa1
Move to correct folder
MadLittleMods Aug 25, 2022
5811ba1
Set the version back
MadLittleMods Aug 25, 2022
cf2b093
Fix `TypeError: not all arguments converted during string formatting`
MadLittleMods Aug 25, 2022
cbb4194
Add test to make sure failed backfill attempts are recorded
MadLittleMods Aug 26, 2022
621c5d3
Clean up test
MadLittleMods Aug 26, 2022
75c07bb
Fix comments
MadLittleMods Aug 26, 2022
783cce5
Add test for clearing backfill attempts
MadLittleMods Aug 26, 2022
54ef84b
Maybe a better comment
MadLittleMods Aug 26, 2022
7bf3e7f
WIP: Just working on the query
MadLittleMods Aug 26, 2022
37ff009
Move comment to where it matters
MadLittleMods Aug 26, 2022
a58d191
Silly graph pt 1
MadLittleMods Aug 26, 2022
f127ad1
Silly graph pt 2
MadLittleMods Aug 26, 2022
18abbf4
Tests running (not working)
MadLittleMods Aug 27, 2022
23310f5
Passing test
MadLittleMods Aug 27, 2022
64e01d8
Add test for A and B
MadLittleMods Aug 27, 2022
47bac25
Add tests for backfill attempts
MadLittleMods Aug 27, 2022
2ebed9d
Remove `GROUP BY backward_extrem.event_id` (seems unnecessary)
MadLittleMods Aug 27, 2022
60b3b92
Clarify why that much time
MadLittleMods Aug 27, 2022
e9f603d
Label ? slot
MadLittleMods Aug 27, 2022
a8f1464
Better explanation
MadLittleMods Aug 27, 2022
bbd1c94
Add changelog
MadLittleMods Aug 27, 2022
dd1db25
Fix lints
MadLittleMods Aug 27, 2022
c583eef
Update docstring
MadLittleMods Aug 27, 2022
ea4a3ad
Apply same changes to `get_insertion_event_backward_extremities_in_room`
MadLittleMods Aug 27, 2022
f495752
Use power and capitalize AS
MadLittleMods Aug 27, 2022
f2061b9
Use SQLite compatible power of 2 (left shift)
MadLittleMods Aug 31, 2022
e4192d7
Update table name with "failed" and include room_id in the primary key
MadLittleMods Aug 31, 2022
7a44932
Rename to record_event_failed_backfill_attempt
MadLittleMods Aug 31, 2022
86d98ca
Merge branch 'develop' into madlittlemods/keep-track-when-we-tried-to…
MadLittleMods Aug 31, 2022
29f584e
Merge branch 'madlittlemods/keep-track-when-we-tried-to-backfill-an-e…
MadLittleMods Aug 31, 2022
506a8dd
Changes after merging madlittlemods/keep-track-when-we-tried-to-backf…
MadLittleMods Aug 31, 2022
361ce5c
Use compatible least/min on each db platform
MadLittleMods Aug 31, 2022
b09d8a2
Fix SQLite no such column error when comparing table to null
MadLittleMods Aug 31, 2022
965d142
Add comment about how these are sorted by depth now
MadLittleMods Aug 31, 2022
267777f
Apply same least compatiblity to insertion event extremities
MadLittleMods Aug 31, 2022
d0cd42a
Fix lints
MadLittleMods Aug 31, 2022
3d9f625
Try fix ambiguous column (remove unsued table)
MadLittleMods Sep 1, 2022
33a3c64
Fix ambiguous column
MadLittleMods Sep 1, 2022
6736d10
Add tests for get_insertion_event_backward_extremities_in_room
MadLittleMods Sep 1, 2022
6eba1d4
Fix up test descriptions
MadLittleMods Sep 1, 2022
1464101
Add _unsafe_to_upsert_tables check
MadLittleMods Sep 1, 2022
71c7738
Add foreign key references
MadLittleMods Sep 1, 2022
df8c76d
Merge branch 'develop' into madlittlemods/keep-track-when-we-tried-to…
MadLittleMods Sep 1, 2022
d45b078
Remove reference to event that won't be in the events table
MadLittleMods Sep 1, 2022
c939422
Merge branch 'madlittlemods/keep-track-when-we-tried-to-backfill-an-e…
MadLittleMods Sep 1, 2022
599e212
Fix approximate typo
MadLittleMods Sep 1, 2022
bc8046b
Clarify what depth sort
MadLittleMods Sep 1, 2022
ea08006
Fix typos
MadLittleMods Sep 1, 2022
9a85bb4
Normal is not obvious
MadLittleMods Sep 1, 2022
7204cce
Fix left-shift math
MadLittleMods Sep 1, 2022
8f214b1
Fix foreign key constraint
MadLittleMods Sep 2, 2022
33ad64e
Merge branch 'develop' into madlittlemods/keep-track-when-we-tried-to…
MadLittleMods Sep 9, 2022
63bec99
Remove emulated upsert code (all of our dbs no support it)
MadLittleMods Sep 9, 2022
31d7502
Table rename `event_failed_pull_attempts`
MadLittleMods Sep 9, 2022
0b5f1db
Add `last_cause` column
MadLittleMods Sep 9, 2022
4ce7709
Merge branch 'develop' into madlittlemods/keep-track-when-we-tried-to…
MadLittleMods Sep 12, 2022
d3a1f84
Merge branch 'develop' into madlittlemods/keep-track-when-we-tried-to…
MadLittleMods Sep 13, 2022
1347686
Update schema version summary
MadLittleMods Sep 13, 2022
57182dc
Remove backfilled check since we plan to go general anyway
MadLittleMods Sep 14, 2022
e58bc65
Merge branch 'develop' into madlittlemods/keep-track-when-we-tried-to…
MadLittleMods Sep 14, 2022
70019d2
Move change to latest delta 73
MadLittleMods Sep 14, 2022
46a1a20
Merge branch 'madlittlemods/keep-track-when-we-tried-to-backfill-an-e…
MadLittleMods Sep 14, 2022
91c5be0
Merge branch 'develop' into madlittlemods/13622-do-not-retry-backfill…
MadLittleMods Sep 14, 2022
7ea40b1
Updates after schema changes in the other PR
MadLittleMods Sep 14, 2022
40ec8d8
Remove debug logging
MadLittleMods Sep 14, 2022
47aa375
Merge branch 'develop' into madlittlemods/13622-do-not-retry-backfill…
MadLittleMods Sep 22, 2022
1208540
Remove orthogonal `current_depth` changes
MadLittleMods Sep 22, 2022
a121bc3
Fix lints
MadLittleMods Sep 22, 2022
491aac6
Add context for why we have the is_state condition
MadLittleMods Sep 22, 2022
1 change: 1 addition & 0 deletions changelog.d/13589.feature
Original file line number Diff line number Diff line change
@@ -0,0 +1 @@
Keep track of when we attempt to backfill an event but fail, so we can intelligently back off in the future.
9 changes: 8 additions & 1 deletion synapse/handlers/federation.py
@@ -258,7 +258,14 @@ async def _maybe_backfill_inner(
backwards_extremities = [
_BackfillPoint(event_id, depth, _BackfillPointType.BACKWARDS_EXTREMITY)
for event_id, depth in await self.store.get_oldest_event_ids_with_depth_in_room(
room_id=room_id,
# We don't want events that come after-in-time from our current
# position when we're backfilling looking backwards.
#
# current_depth (ignore events that come after this, ignore 2-4)
# |
# <oldest-in-time> [0]<--[1]▼<--[2]<--[3]<--[4] <newest-in-time>
current_depth=current_depth,
)
]
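The depth filtering described in the comment above can be illustrated standalone (a hypothetical event list, not Synapse's actual data structures):

```python
# Hypothetical events annotated with their depth; lower depth is older
# in the DAG, matching the <oldest-in-time> ... <newest-in-time> diagram.
events = [("$e0", 0), ("$e1", 1), ("$e2", 2), ("$e3", 3), ("$e4", 4)]
current_depth = 1

# When backfilling we look backwards from our current position, so any
# event at a greater depth (newer) than `current_depth` is ignored;
# in the diagram above, events 2 through 4 are dropped.
backfill_candidates = [
    (event_id, depth) for event_id, depth in events if depth <= current_depth
]
# backfill_candidates is [('$e0', 0), ('$e1', 1)]
```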

5 changes: 5 additions & 0 deletions synapse/handlers/federation_event.py
@@ -862,6 +862,8 @@ async def _process_pulled_event(
self._sanity_check_event(event)
except SynapseError as err:
logger.warning("Event %s failed sanity check: %s", event_id, err)
if backfilled:
await self._store.record_event_backfill_attempt(event_id)
return

try:
@@ -897,6 +899,9 @@
backfilled=backfilled,
)
except FederationError as e:
if backfilled:
await self._store.record_event_backfill_attempt(event_id)

if e.code == 403:
logger.warning("Pulled event %s failed history check.", event_id)
else:
141 changes: 129 additions & 12 deletions synapse/storage/databases/main/event_federation.py
@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import itertools
import logging
from queue import Empty, PriorityQueue
@@ -71,6 +72,12 @@

logger = logging.getLogger(__name__)

BACKFILL_EVENT_BACKOFF_UPPER_BOUND_SECONDS: int = int(
datetime.timedelta(days=7).total_seconds()
)
BACKFILL_EVENT_EXPONENTIAL_BACKOFF_STEP_SECONDS: int = int(
datetime.timedelta(hours=1).total_seconds()
)
erikjohnston marked this conversation as resolved.
Comment on lines +76 to +81
MadLittleMods (Contributor, Author) Aug 27, 2022

Any opinions on these values?

I chose 7 days because trying again next week seemed reasonable if someone's server was offline and then they got it working again.

This means it takes 8 attempts to get to the upper bound (2^7 = 128, 2^8 = 256)

2hr, 4hr, 8hr, 16hr, 32hr, 64hr, 128hr, (capped at 168hr from now on)


Although for really dead-end extremities, bumping the cap up to 30 days wouldn't feel awful either.

And even maybe calling some extremity completely dead at a certain point and never retrying it. We can iterate on this in a future PR though.
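For reference, the schedule described above can be sketched in Python, mirroring the SQL's `least(2^num_attempts * step, upper_bound)` with the constants from this PR:

```python
import datetime

# Constants from this PR, converted to milliseconds.
STEP_MS = int(datetime.timedelta(hours=1).total_seconds()) * 1000
UPPER_BOUND_MS = int(datetime.timedelta(days=7).total_seconds()) * 1000


def backoff_ms(num_attempts: int) -> int:
    # Mirrors the SQL: least(2^num_attempts * step, upper_bound)
    return min((2**num_attempts) * STEP_MS, UPPER_BOUND_MS)


hours = [backoff_ms(n) // (60 * 60 * 1000) for n in range(1, 9)]
# hours == [2, 4, 8, 16, 32, 64, 128, 168]; the 8th attempt hits the
# 7-day (168hr) cap, matching the "8 attempts to reach the bound" note.
```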

Contributor

as a data point, for backoff from homeservers, we have

# a cap on the backoff. (Essentially none)
MAX_RETRY_INTERVAL = 2**62

'essentially no limit'.

I think I'm happy to leave it at 7 for now; I note that homeserver backoff timers are reset if the homeserver contacts us first and I would be uncomfortable with doing the same for backfill — unsure how much a room would break, but still.


# All the info we need while iterating the DAG while backfilling
@attr.s(frozen=True, slots=True, auto_attribs=True)
@@ -715,7 +722,9 @@ def _get_auth_chain_difference_txn(
@trace
@tag_args
async def get_oldest_event_ids_with_depth_in_room(
self,
room_id: str,
current_depth: int,
) -> List[Tuple[str, int]]:
"""Gets the oldest events(backwards extremities) in the room along with the
aproximate depth.
@@ -735,34 +744,69 @@ async def get_oldest_event_ids_with_depth_in_room(
def get_oldest_event_ids_with_depth_in_room_txn(
txn: LoggingTransaction, room_id: str
) -> List[Tuple[str, int]]:
# Assemble a tuple lookup of event_id -> depth for the oldest events
# we know of in the room. Backwards extremities are the oldest
# events we know of in the room but we only know of them because
# some other event referenced them by prev_event and aren't
# persisted in our database yet (meaning we don't know their depth
# specifically). So we need to look for the approximate depth from
# the events connected to the current backwards extremities.
sql = """
SELECT backward_extrem.event_id, MAX(event.depth) FROM events as event
/**
* Get the edge connections from the event_edges table
* so we can see whether this event's prev_events points
* to a backward extremity in the next join.
*/
INNER JOIN event_edges as edge
ON edge.event_id = event.event_id
/**
* We find the "oldest" events in the room by looking for
* events connected to backwards extremities (oldest events
* in the room that we know of so far).
*/
INNER JOIN event_backward_extremities as backward_extrem
ON edge.prev_event_id = backward_extrem.event_id
/**
* We use this info to make sure we don't retry a backfill point
* if we've already attempted to backfill from it recently.
*
* A LEFT JOIN is used so that backfill points we have never
* attempted before (and therefore have no attempt row) are
* still included.
*/
LEFT JOIN event_backfill_attempts as backfill_attempt_info
ON backfill_attempt_info.event_id = backward_extrem.event_id
WHERE
backward_extrem.room_id = ?
/* We only care about normal events because TODO: why? */
AND edge.is_state is ? /* False */
/**
* We only want backwards extremities that are older than or at
* the same position as the given `current_depth` (where older
* means less than the given depth).
*/
AND event.depth <= ? /* current_depth */
/**
* Exponential back-off (up to the upper bound) so we don't retry the
* same backfill point over and over. ex. 2hr, 4hr, 8hr, 16hr, etc.
* Backfill points with no recorded attempt are always included.
*/
AND (
backfill_attempt_info.event_id IS NULL
OR ? /* current_time */ >= backfill_attempt_info.last_attempt_ts + least(2^backfill_attempt_info.num_attempts * ?, ? /* upper bound */)
)
MadLittleMods marked this conversation as resolved.
/* Group by each backward extremity so MAX(event.depth) aggregates over all events pointing to it */
GROUP BY backward_extrem.event_id
/**
* Sort from the highest depth (closest to `current_depth`) to the lowest
* because the closest are the most relevant to backfill from first.
*/
ORDER BY MAX(event.depth) DESC
"""

txn.execute(
sql,
(
room_id,
False,
current_depth,
self._clock.time_msec(),
1000 * BACKFILL_EVENT_EXPONENTIAL_BACKOFF_STEP_SECONDS,
1000 * BACKFILL_EVENT_BACKOFF_UPPER_BOUND_SECONDS,
),
)

return cast(List[Tuple[str, int]], txn.fetchall())

@@ -1292,6 +1336,79 @@ def _get_backfill_events(

return event_id_results

@trace
async def record_event_backfill_attempt(self, event_id: str) -> None:
"""
Record a failed attempt to backfill an event, so we can intelligently
back off instead of retrying the same event over and over.

Args:
event_id: The event we failed to backfill.
"""
if self.database_engine.can_native_upsert:
await self.db_pool.runInteraction(
"record_event_backfill_attempt",
self._record_event_backfill_attempt_upsert_native_txn,
event_id,
db_autocommit=True,  # Safe as it's a single upsert
)
else:
await self.db_pool.runInteraction(
"record_event_backfill_attempt",
self._record_event_backfill_attempt_upsert_emulated_txn,
event_id,
)

def _record_event_backfill_attempt_upsert_native_txn(
self,
txn: LoggingTransaction,
event_id: str,
) -> None:
assert self.database_engine.can_native_upsert

sql = """
INSERT INTO event_backfill_attempts (
event_id, num_attempts, last_attempt_ts
)
VALUES (?, ?, ?)
ON CONFLICT (event_id) DO UPDATE SET
event_id=EXCLUDED.event_id,
num_attempts=event_backfill_attempts.num_attempts + 1,
last_attempt_ts=EXCLUDED.last_attempt_ts;
"""

txn.execute(sql, (event_id, 1, self._clock.time_msec()))
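The increment-on-conflict behaviour of the upsert above can be exercised standalone, e.g. with SQLite (3.24+, which supports `ON CONFLICT ... DO UPDATE`) against a minimal copy of the table from this PR:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE event_backfill_attempts("
    " event_id TEXT NOT NULL,"
    " num_attempts INT NOT NULL,"
    " last_attempt_ts BIGINT NOT NULL)"
)
conn.execute(
    "CREATE UNIQUE INDEX event_backfill_attempts_event_id"
    " ON event_backfill_attempts(event_id)"
)

UPSERT = """
    INSERT INTO event_backfill_attempts (event_id, num_attempts, last_attempt_ts)
    VALUES (?, ?, ?)
    ON CONFLICT (event_id) DO UPDATE SET
        num_attempts=event_backfill_attempts.num_attempts + 1,
        last_attempt_ts=excluded.last_attempt_ts
"""

# Three failed backfill attempts for the same (hypothetical) event:
# the first INSERTs a row, the next two take the DO UPDATE branch.
for _ in range(3):
    conn.execute(UPSERT, ("$failed_event", 1, int(time.time() * 1000)))

(num_attempts,) = conn.execute(
    "SELECT num_attempts FROM event_backfill_attempts WHERE event_id = ?",
    ("$failed_event",),
).fetchone()
# num_attempts is 3 after three recorded failures
```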

def _record_event_backfill_attempt_upsert_emulated_txn(
self,
txn: LoggingTransaction,
event_id: str,
) -> None:
self.database_engine.lock_table(txn, "event_backfill_attempts")

prev_row = self.db_pool.simple_select_one_txn(
txn,
table="event_backfill_attempts",
keyvalues={"event_id": event_id},
retcols=("num_attempts",),
allow_none=True,
)

if not prev_row:
self.db_pool.simple_insert_txn(
txn,
table="event_backfill_attempts",
values={
"event_id": event_id,
"num_attempts": 1,
"last_attempt_ts": self._clock.time_msec(),
},
)
else:
self.db_pool.simple_update_one_txn(
txn,
table="event_backfill_attempts",
keyvalues={"event_id": event_id},
updatevalues={
"event_id": event_id,
"num_attempts": prev_row["num_attempts"] + 1,
"last_attempt_ts": self._clock.time_msec(),
},
)

async def get_missing_events(
self,
room_id: str,
35 changes: 26 additions & 9 deletions synapse/storage/databases/main/events.py
@@ -2435,17 +2435,34 @@ def _update_backward_extremeties(
"DELETE FROM event_backward_extremities"
" WHERE event_id = ? AND room_id = ?"
)
backward_extremity_tuples_to_remove = [
(ev.event_id, ev.room_id)
for ev in events
if not ev.internal_metadata.is_outlier()
# If we encountered an event with no prev_events, then we might
# as well remove it now because it won't ever have anything else
# to backfill from.
or len(ev.prev_event_ids()) == 0
]
txn.execute_batch(
query,
backward_extremity_tuples_to_remove,
)

# Since we no longer need these backward extremities, it also means that
# they won't be backfilled from again so we no longer need to store the
# backfill attempts around it.
query = """
DELETE FROM event_backfill_attempts
WHERE event_id = ?
"""
backward_extremity_event_ids_to_remove = [
(extremity_tuple[0],)
for extremity_tuple in backward_extremity_tuples_to_remove
]
txn.execute_batch(
query,
backward_extremity_event_ids_to_remove,
)
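The clean-up above can be sketched standalone (illustrative table and event IDs, using SQLite's `executemany` in place of Synapse's `txn.execute_batch`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE event_backfill_attempts("
    " event_id TEXT PRIMARY KEY, num_attempts INT, last_attempt_ts BIGINT)"
)
conn.executemany(
    "INSERT INTO event_backfill_attempts VALUES (?, ?, ?)",
    [("$resolved", 2, 0), ("$still_missing", 5, 0)],
)

# Once a backward extremity is resolved (backfilled, or it has no
# prev_events left to pull), its attempt bookkeeping is dropped in one
# batch, as in the DELETE above.
resolved = [("$resolved",)]
conn.executemany(
    "DELETE FROM event_backfill_attempts WHERE event_id = ?", resolved
)

remaining = [
    row[0] for row in conn.execute("SELECT event_id FROM event_backfill_attempts")
]
# remaining == ["$still_missing"]
```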


@@ -0,0 +1,27 @@
/* Copyright 2022 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/


-- Add a table that keeps track of when we last tried to backfill an event. This
-- allows us to be more intelligent when we decide to retry (we don't need to
-- fail over and over) and we can process that event in the background so we
-- don't block on it each time.
CREATE TABLE IF NOT EXISTS event_backfill_attempts(
event_id TEXT NOT NULL,
num_attempts INT NOT NULL,
last_attempt_ts BIGINT NOT NULL
);

CREATE UNIQUE INDEX IF NOT EXISTS event_backfill_attempts_event_id ON event_backfill_attempts(event_id);