We still have a UX issue with the import of slashing protection data from clients that erase the signing roots (e.g. Teku). Discord messages for context:
seamusf: Need some advice please: I moved my validators to a Teku system a few days ago as a test - all went well. Now I'm moving them back to the Lighthouse system - however the slashing-protection.json from Teku gives errors when importing into Lighthouse. It seems to be a Teku issue. So should I start my validators with the old slashing db or start using --init-slashing-protection?
sproul: Ah, this is a known corner case. It's a quirk of how Lighthouse does imports and how Teku erases the message hashes from the slashing protection data. Your node would have exported a record "validator A signed a block at slot X, with block hash 0x0001", and then Teku would have exported "validator A signed a block at slot X, hash of block unknown". When Lighthouse re-imports that record from Teku it just sees a block at slot X and assumes that it's a double vote w.r.t. the block it already has in its database (because the block hashes don't match). I think the correct solution at this point is probably to modify Lighthouse's import behaviour so that it ignores the apparent slashability of the block and just imports the file (displaying a warning). The other solution would be requiring Teku (and other clients) to preserve signing roots, but I pushed for that when we initially drafted EIP-3076 and it wasn't popular.
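The mismatch described above can be sketched as follows. This is not Lighthouse's actual code; the types and the `looks_like_double_vote` helper are hypothetical, illustrating why a record whose signing root has been erased (as EIP-3076 permits) must conservatively be treated as a conflict by a checker that compares roots:

```rust
/// Hypothetical signing-root representation: EIP-3076 allows clients
/// (e.g. Teku) to omit the signing root from exported records.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum SigningRoot {
    Known([u8; 32]),
    Unknown,
}

/// Hypothetical record of a signed block in the interchange file.
struct BlockRecord {
    slot: u64,
    signing_root: SigningRoot,
}

/// Returns true if importing `incoming` alongside `existing` looks like a
/// double vote: same slot, but the signing roots cannot be proven equal.
fn looks_like_double_vote(existing: &BlockRecord, incoming: &BlockRecord) -> bool {
    existing.slot == incoming.slot
        && match (existing.signing_root, incoming.signing_root) {
            (SigningRoot::Known(a), SigningRoot::Known(b)) => a != b,
            // An erased root can't be proven equal to the stored one,
            // so a conservative checker must flag it as a conflict.
            _ => true,
        }
}

fn main() {
    let exported_by_lighthouse = BlockRecord {
        slot: 100,
        signing_root: SigningRoot::Known([1; 32]),
    };
    let round_tripped_via_teku = BlockRecord {
        slot: 100,
        signing_root: SigningRoot::Unknown,
    };
    // The very same block, round-tripped through a client that erases
    // signing roots, now appears slashable on re-import.
    assert!(looks_like_double_vote(
        &exported_by_lighthouse,
        &round_tripped_via_teku
    ));
    println!("apparent double vote on re-import");
}
```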
I need to think more about whether we're safe to ignore slashability of imported files in general, and whether this should be clarified in EIP-3076.
## Issue Addressed
Closes #2419
## Proposed Changes
Address a long-standing issue with the import of slashing protection data, where the import would fail because the data appeared slashable w.r.t. the existing database. Importing is now idempotent, and has no trouble with data that has been handed back and forth between different validator clients or different implementations.
The implementation works by updating the high and low watermarks if they need updating, without attempting to check whether the input is slashable w.r.t. itself or the database. This is a strengthening of the minification that we started doing by default in #2380, and what Teku has been doing since the beginning.
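A minimal sketch of the watermark-based strategy described above, assuming a simplified per-validator `Watermarks` struct (the real database schema is more involved; names here are hypothetical):

```rust
/// Hypothetical per-validator high watermarks. The import only ever
/// raises these; it never inspects individual messages for slashability.
#[derive(Debug, PartialEq)]
struct Watermarks {
    max_block_slot: u64,
    max_source_epoch: u64,
    max_target_epoch: u64,
}

/// Merge an interchange file's watermarks into the database by taking
/// component-wise maxima. Applying the same file twice is a no-op,
/// which is what makes the import idempotent.
fn import(db: &mut Watermarks, file: &Watermarks) {
    db.max_block_slot = db.max_block_slot.max(file.max_block_slot);
    db.max_source_epoch = db.max_source_epoch.max(file.max_source_epoch);
    db.max_target_epoch = db.max_target_epoch.max(file.max_target_epoch);
}

fn main() {
    let mut db = Watermarks { max_block_slot: 100, max_source_epoch: 9, max_target_epoch: 10 };
    let file = Watermarks { max_block_slot: 90, max_source_epoch: 10, max_target_epoch: 12 };
    import(&mut db, &file);
    // Re-importing the same file changes nothing.
    import(&mut db, &file);
    assert_eq!(db, Watermarks { max_block_slot: 100, max_source_epoch: 10, max_target_epoch: 12 });
    println!("{:?}", db);
}
```

Because only maxima are kept, a file that has bounced between clients (with or without signing roots) can never conflict with itself or with the database.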
## Additional Info
The only feature we lose by doing this is the ability to do non-minified imports of clock-drifted messages (cf. Prysm on Medalla). In theory, with the previous implementation we could import all the messages in the case of clock drift and remain aware of the "gap" between the real present time and the messages signed in the far future. _However_, for attestations this is close to useless: the source epoch will advance as soon as justification occurs, which would require us to make slashable attestations with respect to our bogus attestation(s). E.g. if I sign an attestation 100=>200 when the current epoch is 101, then I won't be able to vote in any epoch prior to 200 once 101 becomes justified, because 101=>102, 101=>103, etc. are all surrounded by 100=>200. Seeing as signing attestations gets blocked almost immediately in this case regardless of our import behaviour, there's no point trying to handle it. For blocks the situation is more hopeful due to the lack of surrounds, but losing block proposals from validators who by definition can't attest doesn't seem like an issue (the other block proposers can pick up the slack).
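The surround condition used in the example above can be stated in a few lines. This is just the standard Casper FFG surround rule, not Lighthouse code: attestation `s1 => t1` surrounds `s2 => t2` iff `s1 < s2 && t2 < t1`:

```rust
/// Casper FFG surround check: does the vote (s1 => t1) surround (s2 => t2)?
fn surrounds(s1: u64, t1: u64, s2: u64, t2: u64) -> bool {
    s1 < s2 && t2 < t1
}

fn main() {
    // A bogus far-future attestation 100 => 200 signed due to clock drift...
    let (bogus_src, bogus_tgt) = (100, 200);
    // ...surrounds every honest vote 101 => 102, 101 => 103, etc.,
    // so signing any of them would be slashable.
    assert!(surrounds(bogus_src, bogus_tgt, 101, 102));
    assert!(surrounds(bogus_src, bogus_tgt, 101, 103));
    // Only once the target epoch passes 200 does attesting become safe again.
    assert!(!surrounds(bogus_src, bogus_tgt, 201, 202));
    println!("validator is blocked until the chain advances past epoch 200");
}
```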