[roq-ftx] Validate MarketByPrice #56
A client suggestion is to only validate the checksum on every N-th update. This strategy can be combined with a check for choice or inverted prices. The reason this works is that most updates are likely to be close to the best bid/ask, so lost messages will typically manifest as a price level not being removed when the market trades through it. That is quickly noticed as choice or inverted prices. The checksum calculation will catch the remaining cases a little later, namely when the lost messages only changed the size of an existing price level, or removed a price level further down the book. If N is a flag (>= 0), it's up to the user to choose the balance between correctness and speed.
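The suggested strategy could be sketched roughly as follows. This is an illustration only, not the gateway's actual implementation; the class and method names are hypothetical:

```python
class BookValidator:
    """Sketch: cheap choice/inverted-price check on every update, full
    checksum validation only on every N-th update (N == 0 disables it)."""

    def __init__(self, n: int):
        self.n = n       # validate checksum every n updates; 0 disables
        self.count = 0   # updates seen since last checksum validation

    def crossed(self, best_bid: float, best_ask: float) -> bool:
        # Choice (bid == ask) or inverted (bid > ask) prices suggest that
        # an update removing a stale level was lost.
        return best_bid >= best_ask

    def should_validate_checksum(self) -> bool:
        # Returns True on every N-th call, so the expensive CRC32
        # computation is amortized over N updates.
        if self.n <= 0:
            return False
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True
        return False
```

With N = 1 this degenerates to validating on every update (maximum correctness); larger N trades correctness for speed, exactly the balance described above.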
This has required changes to roq-server and, in turn, to roq-api.
FTX resubscription has been added. Currently it is only triggered by detecting choice/inverted prices.
Some notes

The CRC32 checksum is based on Python's string representation (…)
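For illustration, Python's default `str()` rendering of floats (which, per the note above, is what the checksum string follows) keeps a trailing `.0` for whole numbers and switches to scientific notation for very small values:

```python
# Python's default string representation of floats -- the format the
# checksum string is said to follow. Note the trailing ".0" and the
# switch to scientific notation for small magnitudes.
print(str(27.0))       # 27.0
print(str(4.2))        # 4.2
print(str(0.0000001))  # 1e-07
```

This matters because a gateway written in another language must reproduce these exact strings, byte for byte, for the CRC32 to match.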
More fun stuff
Summary of changes
The gateway currently defaults to not validating the checksum. This can be enabled by a flag. However, optimizations are required, as can be confirmed by running the benchmark: that's 70 microseconds for a 100-level-deep order book...
The online documentation is subtle; it says (about the checksum):
Live testing shows that this could actually mean the CDN has lost packets. The reason this is worth pointing out is that the updates are disseminated over WebSocket, which uses lossless TCP/IP, so any loss must happen upstream of the connection.
The real problem here is that the checksum is very expensive: it is the CRC32 of a string concatenation of the top 100 bid/ask prices/sizes.
One can either cache all the string representations or compute them on the fly. Two evils: either bad for the caches or bad for CPU cycles. Either way, this is a computation that ideally should be done on every update.
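To make the cost concrete, here is a sketch of the kind of computation involved. It assumes the format described in FTX's public docs (bid and ask `price:size` pairs interleaved level by level, colon-separated, then CRC32); the function name is ours, and the exact field layout should be checked against the official documentation:

```python
import itertools
import zlib

def book_checksum(bids, asks):
    """Sketch of an FTX-style order-book checksum: interleave the top 100
    bid/ask levels as 'price:size' fields, join with ':', take CRC32.
    bids/asks are lists of (price, size) tuples, best level first.
    The numbers must be rendered exactly as the exchange renders them
    (Python-like str()) or the checksum will not match."""
    fields = []
    for bid, ask in itertools.zip_longest(bids[:100], asks[:100]):
        if bid is not None:
            fields.append(f"{bid[0]}:{bid[1]}")
        if ask is not None:
            fields.append(f"{ask[0]}:{ask[1]}")
    return zlib.crc32(":".join(fields).encode())
```

Even in optimized C++ this means formatting up to 400 numbers and hashing the resulting string per validation, which is why amortizing it over N updates is attractive.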
Detecting lost updates should be followed by resubscription: #31