
Fix incorrect sponsor displayed for L1 ships after escape #1116

Merged: 1 commit into master, Sep 8, 2023

Conversation

shawntobin
Contributor

There’s a bug in the OS section where an incorrect sponsor is displayed for an L1 ship that has escaped to a new point, if the adoption transaction was performed using Bridge/L2.

For context, Bridge currently seems to execute the adopt transaction only on L2, even if the sponsee is on L1 and the escape request was made through L1. This is valid behaviour per the docs:

"...sponsorship actions may be performed on layer 2 using the ownership or management proxies regardless of the dominion status...".

Because of this, when using getPoint() in azimuth-js, the returned ‘sponsor’, ‘escapeRequested’, and ‘escapeRequestedTo’ fields may not be accurate.

This fix removes the condition that checks whether the point is L1 or L2, and instead only uses the roller to fetch additional details about the point.

Note that there’s still a try-catch fallback that uses azimuth-js if the initial call to the roller fails. I’ve left this in place for now, but it arguably should also be removed; a sketch of the resulting flow follows.

- Removed use of `azimuth.getPoint()`; additional point details are now fetched from roller data regardless of dominion. This is because `azimuth.getPoint()` shows stale sponsor data for any L1 ship that has been adopted via an L2 transaction.
@jalehman jalehman requested a review from pkova September 8, 2023 15:27
@pkova pkova merged commit a142095 into master Sep 8, 2023