startup error upgrading 1.69 -> 1.70 or greater: constraint "receipts_graph_uniqueness" does not exist #14377
Comments
The Synapse database is missing the `receipts_graph_uniqueness` constraint. Can you try running the following in `psql`?

```sql
ALTER TABLE ONLY receipts_graph
    ADD CONSTRAINT receipts_graph_uniqueness UNIQUE (room_id, receipt_type, user_id);
```
What do I do if there's a duplicate key?

```
synapse=# ALTER TABLE ONLY receipts_graph ADD CONSTRAINT receipts_graph_uniqueness UNIQUE (room_id, receipt_type, user_id);
ERROR:  could not create unique index "receipts_graph_uniqueness"
DETAIL:  Key (room_id, receipt_type, user_id)=(!LZirCxnkkeBudrQzPj:matrix.org, m.read, @f:whomst.online) is duplicated.
```

Sorry, I'm a database noob.
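The duplicate-key error means the constraint cannot be added until the duplicate receipt rows are removed. As a rough illustration only (not Synapse's actual fix; sqlite3 stands in for Postgres and the table contents are made up), deduplicating before creating the unique index looks like this:

```python
import sqlite3

# Illustrative sketch: a UNIQUE index over (room_id, receipt_type, user_id)
# can only be created once duplicate rows for that key have been removed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE receipts_graph (
    room_id TEXT, receipt_type TEXT, user_id TEXT, event_ids TEXT
);
INSERT INTO receipts_graph VALUES
    ('!room:example.org', 'm.read', '@alice:example.org', '["$e1"]'),
    ('!room:example.org', 'm.read', '@alice:example.org', '["$e2"]'),
    ('!room:example.org', 'm.read', '@bob:example.org',   '["$e3"]');
""")

# Find the duplicated keys, like the DETAIL line in the Postgres error.
dupes = conn.execute("""
    SELECT room_id, receipt_type, user_id, COUNT(*)
    FROM receipts_graph
    GROUP BY room_id, receipt_type, user_id
    HAVING COUNT(*) > 1
""").fetchall()

# Keep one arbitrary row per key. rowid is sqlite-specific; a Postgres
# equivalent would key on ctid instead.
conn.execute("""
    DELETE FROM receipts_graph
    WHERE rowid NOT IN (
        SELECT MIN(rowid) FROM receipts_graph
        GROUP BY room_id, receipt_type, user_id
    )
""")

# With the duplicates gone, the unique index can be created.
conn.execute("""
    CREATE UNIQUE INDEX receipts_graph_uniqueness
    ON receipts_graph (room_id, receipt_type, user_id)
""")
```

On a real homeserver you would also need to decide which duplicate to keep (for example, the most recent receipt), which this sketch glosses over by keeping an arbitrary row.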
In that case let's try something different. We're going to execute the remaining schema deltas by hand. First, check which deltas have already been applied:

```sql
SELECT * FROM applied_schema_deltas;
```

```
 version | file
---------+-------------------------------------------------------------------
 ...
      73 | 73/04pending_device_list_updates.sql
      73 | 73/05old_push_actions.sql.postgres
      73 | 73/06thread_notifications_thread_id_idx.sql
(### rows)
```

Then run:

```sql
-- It's okay if this fails because `receipts_linearized_uniqueness` does not exist.
ALTER TABLE receipts_linearized DROP CONSTRAINT receipts_linearized_uniqueness;

-- It's okay if this fails because `receipts_graph_uniqueness` does not exist.
ALTER TABLE receipts_graph DROP CONSTRAINT receipts_graph_uniqueness;

INSERT INTO applied_schema_deltas(version, file) VALUES (73, '73/08thread_receipts_non_null.sql.postgres');
```
Wow, thank you so much for the help. I'm seeing the last delta is:
We'll have to run:

```sql
-- 73/06thread_notifications_thread_id_idx.sql

-- Allow there to be multiple summaries per user/room.
DROP INDEX IF EXISTS event_push_summary_unique_index;

INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
  (7306, 'event_push_actions_thread_id_null', '{}', 'event_push_backfill_thread_id');
INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
  (7306, 'event_push_summary_thread_id_null', '{}', 'event_push_backfill_thread_id');

INSERT INTO applied_schema_deltas(version, file) VALUES (73, '73/06thread_notifications_thread_id_idx.sql');

-- 73/08thread_receipts_non_null.sql.postgres

-- It's okay if this fails because `receipts_linearized_uniqueness` does not exist.
ALTER TABLE receipts_linearized DROP CONSTRAINT receipts_linearized_uniqueness;

-- It's okay if this fails because `receipts_graph_uniqueness` does not exist.
ALTER TABLE receipts_graph DROP CONSTRAINT receipts_graph_uniqueness;

INSERT INTO applied_schema_deltas(version, file) VALUES (73, '73/08thread_receipts_non_null.sql.postgres');
```
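For background, the `INSERT INTO applied_schema_deltas` lines matter because Synapse records each delta it has applied and re-runs anything missing on startup, so a delta applied by hand must be recorded too. Here is a hypothetical sketch of that mechanism (not Synapse's real code; sqlite3 stands in for Postgres and the delta SQL is dummy):

```python
import sqlite3

# Dummy stand-ins for the real delta files; the SQL here is illustrative.
DELTAS = [
    ("73/06thread_notifications_thread_id_idx.sql",
     "CREATE TABLE IF NOT EXISTS delta_73_06_marker (x INTEGER)"),
    ("73/08thread_receipts_non_null.sql.postgres",
     "CREATE TABLE IF NOT EXISTS delta_73_08_marker (x INTEGER)"),
]

def apply_pending_deltas(conn):
    """Apply, in order, every delta not yet recorded; record each one."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS applied_schema_deltas"
        " (version INTEGER, file TEXT)"
    )
    applied = {f for (f,) in conn.execute(
        "SELECT file FROM applied_schema_deltas")}
    ran = []
    for fname, sql in DELTAS:
        if fname in applied:
            continue  # already recorded, e.g. applied by hand
        conn.execute(sql)
        conn.execute(
            "INSERT INTO applied_schema_deltas (version, file)"
            " VALUES (73, ?)", (fname,))
        ran.append(fname)
    return ran

conn = sqlite3.connect(":memory:")
first = apply_pending_deltas(conn)   # both deltas run and are recorded
second = apply_pending_deltas(conn)  # nothing left to do
```

This is why skipping the `INSERT INTO applied_schema_deltas` step would make the server attempt the same delta again on the next restart.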
Ok, I applied the migrations, upgraded to 1.71.0, and now federation is occurring, but I'm seeing this error:

```
2022-11-08 16:21:42,226 - synapse.metrics.background_process_metrics - 244 - ERROR - background_updates-0 - Background process 'background_updates' threw an exception
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/background_updates.py", line 294, in run_background_updates
    result = await self.do_next_background_update(sleep)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/background_updates.py", line 424, in do_next_background_update
    await self._do_background_update(desired_duration_ms)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/background_updates.py", line 467, in _do_background_update
    items_updated = await update_handler(progress, batch_size)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/background_updates.py", line 624, in updater
    await self.db_pool.runWithConnection(runner)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/database.py", line 976, in runWithConnection
    return await make_deferred_yieldable(
  File "/usr/local/lib/python3.9/site-packages/twisted/python/threadpool.py", line 244, in inContext
    result = inContext.theWork()  # type: ignore[attr-defined]
  File "/usr/local/lib/python3.9/site-packages/twisted/python/threadpool.py", line 260, in <lambda>
    inContext.theWork = lambda: context.call(  # type: ignore[attr-defined]
  File "/usr/local/lib/python3.9/site-packages/twisted/python/context.py", line 117, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/usr/local/lib/python3.9/site-packages/twisted/python/context.py", line 82, in callWithContext
    return func(*args, **kw)
  File "/usr/local/lib/python3.9/site-packages/twisted/enterprise/adbapi.py", line 282, in _runWithConnection
    result = func(conn, *args, **kw)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/database.py", line 969, in inner_func
    return func(db_conn, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/background_updates.py", line 575, in create_index_psql
    c.execute(sql)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/database.py", line 388, in execute
    self._do_execute(self.txn.execute, sql, *args)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/database.py", line 436, in _do_execute
    return func(sql, *args, **kwargs)
psycopg2.errors.UniqueViolation: could not create unique index "receipts_graph_unique_index"
DETAIL:  Key (room_id, receipt_type, user_id)=(!LZirCxnkkeBudrQzPj:matrix.org, m.read, @f:whomst.online) is duplicated.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/synapse/metrics/background_process_metrics.py", line 242, in run
    return await func(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/background_updates.py", line 299, in run_background_updates
    raise RuntimeError(
RuntimeError: 5 back-to-back background update failures; aborting.
```

In the meantime I downgraded to 1.69.0 because I worried that state wouldn't be stored correctly.
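The final `RuntimeError` reflects the updater's give-up threshold: it retries a failing background update several times back to back before aborting. A simplified, hypothetical sketch of that logic (names and structure are illustrative, not Synapse's actual implementation):

```python
MAX_FAILURES = 5  # matches the "5 back-to-back" message in the log

def run_background_updates(do_next_update):
    """Run updates until done; abort after MAX_FAILURES consecutive errors."""
    failures = 0
    while True:
        try:
            done = do_next_update()
        except Exception:
            failures += 1
            if failures >= MAX_FAILURES:
                raise RuntimeError(
                    f"{failures} back-to-back background update failures;"
                    " aborting"
                )
            continue
        failures = 0  # any success resets the counter
        if done:
            return

# A permanently failing update is attempted exactly five times.
attempts = []
def always_fails():
    attempts.append(1)
    raise ValueError('could not create unique index')

try:
    run_background_updates(always_fails)
    error = ""
except RuntimeError as exc:
    error = str(exc)
```

The key point for this thread is that the counter resets on success, so once the underlying duplicate rows are fixed the updater recovers on its own.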
We're working on a fix that will go in 1.72.0. It's being tracked in #14406.
@squahtx upon upgrading to 1.72.0, I get this:

```
2022-11-22 15:11:46,824 - synapse.storage.background_updates - 431 - INFO - background_updates-0 - Starting update batch on background update 'receipts_graph_unique_index'
2022-11-22 15:11:46,865 - synapse.metrics.background_process_metrics - 244 - ERROR - background_updates-0 - Background process 'background_updates' threw an exception
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/background_updates.py", line 294, in run_background_updates
    result = await self.do_next_background_update(sleep)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/background_updates.py", line 424, in do_next_background_update
    await self._do_background_update(desired_duration_ms)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/background_updates.py", line 467, in _do_background_update
    items_updated = await update_handler(progress, batch_size)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/receipts.py", line 1053, in _background_receipts_graph_unique_index
    await self._create_receipts_index(
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/receipts.py", line 958, in _create_receipts_index
    await self.db_pool.runWithConnection(_create_index)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/database.py", line 976, in runWithConnection
    return await make_deferred_yieldable(
  File "/usr/local/lib/python3.9/site-packages/twisted/python/threadpool.py", line 244, in inContext
    result = inContext.theWork()  # type: ignore[attr-defined]
  File "/usr/local/lib/python3.9/site-packages/twisted/python/threadpool.py", line 260, in <lambda>
    inContext.theWork = lambda: context.call(  # type: ignore[attr-defined]
  File "/usr/local/lib/python3.9/site-packages/twisted/python/context.py", line 117, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/usr/local/lib/python3.9/site-packages/twisted/python/context.py", line 82, in callWithContext
    return func(*args, **kw)
  File "/usr/local/lib/python3.9/site-packages/twisted/enterprise/adbapi.py", line 282, in _runWithConnection
    result = func(conn, *args, **kw)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/database.py", line 969, in inner_func
    return func(db_conn, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/receipts.py", line 953, in _create_index
    c.execute(sql)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/database.py", line 388, in execute
    self._do_execute(self.txn.execute, sql, *args)
  File "/usr/local/lib/python3.9/site-packages/synapse/storage/database.py", line 436, in _do_execute
    return func(sql, *args, **kwargs)
psycopg2.errors.DuplicateTable: relation "receipts_graph_unique_index" already exists
```

That's repeated 5 times before giving up. I'm guessing this is because of the manual upgrades I did?
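The `DuplicateTable` error suggests the index already exists (likely from the manual fix), so a plain `CREATE UNIQUE INDEX` fails where an idempotent `CREATE UNIQUE INDEX IF NOT EXISTS` would not. A small demonstration (sqlite3 standing in for Postgres; the schema is made up, only the index name mirrors the one above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE receipts_graph"
             " (room_id TEXT, receipt_type TEXT, user_id TEXT)")
conn.execute("CREATE UNIQUE INDEX receipts_graph_unique_index"
             " ON receipts_graph (room_id, receipt_type, user_id)")

# A plain CREATE INDEX now fails, mirroring the DuplicateTable error
# (sqlite raises OperationalError where psycopg2 raises DuplicateTable).
try:
    conn.execute("CREATE UNIQUE INDEX receipts_graph_unique_index"
                 " ON receipts_graph (room_id, receipt_type, user_id)")
    failed = False
except sqlite3.OperationalError:
    failed = True

# Adding IF NOT EXISTS makes the creation idempotent: it is a no-op
# when the index is already present.
conn.execute("CREATE UNIQUE INDEX IF NOT EXISTS receipts_graph_unique_index"
             " ON receipts_graph (room_id, receipt_type, user_id)")
```

In other words, the error is consistent with the index having been created out of band; it does not by itself indicate data corruption.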
Could you post the output of `\d receipts_graph` and `\d receipts_linearized`?
Thanks. That all looks okay. Could you run `SELECT * FROM background_updates;`?
```
synapse=# SELECT * FROM background_updates;
            update_name            | progress_json |          depends_on           | ordering
-----------------------------------+---------------+-------------------------------+----------
 event_push_actions_thread_id_null | {}            | event_push_backfill_thread_id |     7306
 event_push_summary_thread_id_null | {}            | event_push_backfill_thread_id |     7306
 threads_backfill                  | {}            |                               |     7309
(3 rows)
```
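For context on this table: `ordering` sequences the pending updates, and `depends_on` names another update that must finish first (completed updates are removed from the table). A hypothetical sketch of how such a queue could be drained (illustrative only, not Synapse's actual scheduler; sqlite3 stands in for Postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE background_updates"
    " (update_name TEXT, progress_json TEXT, depends_on TEXT, ordering INTEGER)"
)
# Rows mirror the output above; event_push_backfill_thread_id has already
# finished, so it no longer appears in the table.
conn.executemany(
    "INSERT INTO background_updates VALUES (?, ?, ?, ?)",
    [
        ("event_push_actions_thread_id_null", "{}",
         "event_push_backfill_thread_id", 7306),
        ("event_push_summary_thread_id_null", "{}",
         "event_push_backfill_thread_id", 7306),
        ("threads_backfill", "{}", None, 7309),
    ],
)

def next_runnable(conn):
    # An update is runnable once its depends_on is absent from the queue,
    # i.e. the dependency has completed and its row was deleted.
    pending = {name for (name,) in conn.execute(
        "SELECT update_name FROM background_updates")}
    for name, dep in conn.execute(
        "SELECT update_name, depends_on FROM background_updates"
        " ORDER BY ordering"
    ):
        if dep is None or dep not in pending:
            return name
    return None

runnable = next_runnable(conn)
```

Under this model the three remaining rows are all eligible to run, which is why the output above is consistent with a healthy queue.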
If you're willing to help debug further somewhere other than GitHub (and huge thanks for what you've done so far), we could take this to Matrix.
That looks okay then. Are you still seeing errors in the logs when restarting on 1.72.0?
Hmm, well now I restarted again with 1.72.0 and I'm not seeing any errors. Not sure how it got fixed, since I haven't made any changes since the previous attempt. Thanks so much for your help!
After upgrading the Docker container from v1.68.0 to v1.70.1, I see the error in the title on startup.
I then proceeded to downgrade to 1.70.0, and then 1.69.0, at which point the server worked correctly. Just to be sure, I then tried to upgrade back to 1.70.1, but I saw the same issue. I'm currently running 1.69.0.