Refactor the c8y remote access plugin to use the c8y auth proxy #2597
Conversation
Force-pushed from fea8836 to 78f95c1
Codecov Report — Attention: additional details and impacted files available.
I'm no expert in either the axum or the tungstenite crates, but the connection/conversion logic looks sensible. Just spotted one minor bug in the conversion task's error handler.
    error!("Websocket error proxying messages from Cumulocity to the client: {e}");
}
c8y_to_client.abort();
}
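The pattern in the snippet above, where a failure in one forwarding direction also tears down its sibling, can be sketched in a dependency-free, thread-based analogue. This is not the plugin's code: std threads cannot be aborted the way a tokio JoinHandle can, so a shared stop flag stands in for c8y_to_client.abort(), and the function name forward is hypothetical.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;

// Each direction forwards messages until its source closes, the peer hangs
// up, or the sibling direction sets the shared stop flag. Returns how many
// messages were successfully forwarded.
fn forward(rx: mpsc::Receiver<String>, tx: mpsc::Sender<String>, stop: Arc<AtomicBool>) -> usize {
    let mut forwarded = 0;
    for msg in rx {
        if stop.load(Ordering::SeqCst) {
            break;
        }
        if tx.send(msg).is_err() {
            // Peer hung up: flag the sibling direction to stop as well,
            // the std-thread stand-in for `c8y_to_client.abort()`.
            stop.store(true, Ordering::SeqCst);
            break;
        }
        forwarded += 1;
    }
    forwarded
}

fn main() {
    let stop = Arc::new(AtomicBool::new(false));
    let (client_tx, client_rx) = mpsc::channel();
    let (c8y_tx, c8y_rx) = mpsc::channel();

    // "client -> c8y" direction, running on its own thread.
    let handle = {
        let stop = stop.clone();
        thread::spawn(move || forward(client_rx, c8y_tx, stop))
    };

    client_tx.send("hello".to_string()).unwrap();
    let first = c8y_rx.recv().unwrap();
    drop(c8y_rx); // simulate Cumulocity closing its side of the connection
    client_tx.send("dropped".to_string()).unwrap();
    drop(client_tx);

    let forwarded = handle.join().unwrap();
    assert_eq!(first, "hello");
    assert_eq!(forwarded, 1);
    assert!(stop.load(Ordering::SeqCst));
    println!("forwarded {forwarded} message before the peer closed");
}
```

The key property mirrored here is that a failure in one direction does not leave the other direction forwarding into a dead connection.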
Query: since this whole function doesn't return anything, even when it detects one of the above failures, how would the wrapping server react? I'm assuming it would continue functioning without the proxying capability. Wouldn't it be better to propagate these errors, so that the server can choose to shut down or panic, in the hope of restarting without the issue?
This might be out of scope for this PR specifically, as it relates to our general strategy for handling partial failures of a single component in a multi-component process.
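The trade-off raised in this query, swallowing an error where it occurs versus propagating it so the caller chooses the policy, can be sketched without any of the real plugin types. All names here are illustrative, not from the codebase:

```rust
// Variant 1: swallow-and-log, as the current proxy function does.
// The caller can never observe or react to the failure.
fn proxy_logged(fail: bool) {
    if fail {
        eprintln!("proxy error (swallowed)");
    }
}

// Variant 2: propagate, so the wrapping server can decide whether to
// shut down, restart, or keep serving other connections.
fn proxy_propagated(fail: bool) -> Result<(), String> {
    if fail {
        return Err("proxy error".into());
    }
    Ok(())
}

fn main() {
    proxy_logged(true); // nothing for the caller to inspect afterwards

    match proxy_propagated(true) {
        Ok(()) => {}
        Err(e) => {
            // The policy now lives with the caller: here we only log,
            // but a supervisor could instead trigger a restart or panic.
            eprintln!("caller saw: {e}");
        }
    }
    assert!(proxy_propagated(false).is_ok());
    println!("server still deciding its own failure policy");
}
```

The second variant is what "propagate these errors" would look like in the smallest possible form; the cost is that every caller up the chain must thread the Result through.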
I'm not sure exactly what axum does internally, but my belief is that this is a separate task spawned when we call ws.on_upgrade. In any case, it only affects the current connection, nothing else, and I believe this is the most desirable behaviour. If we were talking over HTTP, this is the point at which I would give up and respond with 500 Internal Server Error to the current request, but I wouldn't want the server to stop running, particularly as the issue at this point is quite likely one with Cumulocity.
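The isolation described in this reply, where a failure only takes down the connection it belongs to, can be illustrated with a thread-per-connection sketch using only the standard library. No axum involved; handle_connection and the failure condition are purely illustrative stand-ins for the task that ws.on_upgrade spawns per connection:

```rust
use std::thread;

// Each connection runs on its own thread, so an error in one handler does
// not stop the others or the accept loop, analogous to one upgraded
// websocket task failing while the axum server keeps serving.
fn handle_connection(id: u32) -> Result<u32, String> {
    if id == 2 {
        // Simulate a failed upstream (Cumulocity) connection: the
        // equivalent of answering 500 for this one request.
        return Err(format!("connection {id}: upstream websocket failed"));
    }
    Ok(id)
}

fn main() {
    let handles: Vec<_> = (1..=3)
        .map(|id| thread::spawn(move || handle_connection(id)))
        .collect();

    let results: Vec<_> = handles.into_iter().map(|h| h.join().unwrap()).collect();

    // Connection 2 failed, but connections 1 and 3 completed normally and
    // the "server" itself never stopped running.
    assert!(results[0].is_ok());
    assert!(results[1].is_err());
    assert!(results[2].is_ok());
    println!(
        "server still running; {} of 3 connections succeeded",
        results.iter().filter(|r| r.is_ok()).count()
    );
}
```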
let (server, port) = axum_server();
tokio::spawn(server.serve(test_app.into_make_service()));
let proxy_port = start_server_port(port, vec!["outdated token", "correct token"]);
let (mut ws, _) = connect_to_websocket_port(proxy_port).await;
Looks like a block of code repeated in the previous test as well; it could be moved to a test setup function that takes the Router definition as an argument.
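The suggested refactor, a shared setup helper parameterised by the per-test app definition, might look roughly like the following dependency-free sketch. Every name here is hypothetical; in the real suite make_app would build an axum Router, and the helper would spawn the server and the proxy with its token list:

```rust
// Holds whatever the individual tests need back from the shared setup.
struct TestContext {
    proxy_port: u16,
}

// The repeated block from both tests collapses into one helper; only the
// app definition and token list vary per test.
fn setup_proxy_test<F, App>(make_app: F, tokens: Vec<&'static str>) -> (App, TestContext)
where
    F: FnOnce() -> App,
{
    let app = make_app();
    // Stand-ins for axum_server(), tokio::spawn(server.serve(...)) and
    // start_server_port(port, tokens): here we just derive a fake port.
    let proxy_port = 9000 + tokens.len() as u16;
    (app, TestContext { proxy_port })
}

fn main() {
    // Each test now only supplies what differs between tests.
    let (app, ctx) = setup_proxy_test(|| "test_app", vec!["outdated token", "correct token"]);
    assert_eq!(app, "test_app");
    assert_eq!(ctx.proxy_port, 9002);
    println!("proxy listening on port {}", ctx.proxy_port);
}
```

Taking the app as a closure (or directly as a Router) keeps the helper agnostic to how each test builds its fake Cumulocity endpoint.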
Signed-off-by: James Rhodes <jarhodes314@gmail.com>
Force-pushed from 5217054 to 845480e
Approved. Working as expected and nicely done.
Proposed changes
This PR adds support for websockets to the auth proxy, and modifies the c8y remote access plugin to connect via the auth proxy, rather than making a direct connection to c8y.
There are still a few TODOs in the code, but the broad shape of the changes is in place and the happy path works for me when testing locally.
Checklist
- cargo fmt as mentioned in CODING_GUIDELINES
- cargo clippy as mentioned in CODING_GUIDELINES