Conversation

@RogerZhongAWS
Contributor

Motivation

  • Localproxy support for parallel TCP connections.

Modifications

Change summary

  • Modifications to existing functions and handlers, plus new handlers for the new control messages CONNECTION_START and CONNECTION_RESET.
  • New mappings for tracking connectionId.
  • Changed the source and destination TCP socket setup logic.

Revision diff summary

If there is more than one revision, please explain what has been changed since the last revision.

Testing

Is your change tested? If not, please justify the reason.
Please list your testing steps and test results.

  • CI test run result:

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@RogerZhongAWS RogerZhongAWS force-pushed the multipleTCP branch 2 times, most recently from 07e9253 to 49b7674 Compare August 15, 2022 22:56
@RogerZhongAWS RogerZhongAWS changed the title [Work In Progress] Support for v3 protocol Support for v3 protocol Aug 31, 2022
@marcoemorais-aws (Contributor) left a comment

Read through the protocol doc and very quickly went through the code. Some comments on both, but mostly looks good. Want to spend a little more time with the code in my next pass.

async_send_message(tac, outgoing_message);
}

void tcp_adapter_proxy::async_send_stream_reset(tcp_adapter_context &tac, std::string const & service_id, uint32_t const & connection_id)
@marcoemorais-aws (Contributor) commented Sep 2, 2022

Minor. What is the meaning of connection_id for StreamReset? It seems we ignore the parameter, and the intended behavior is that receipt of a StreamReset immediately closes all open connections for the service. Recommend dropping the parameter and either (1) not setting it on the outgoing message (although with proto3 this has less meaning, since there is no support for HasField), or (2) since the protocol seems to describe logic where any connection id of 0 is interpreted as id 1, making that explicit and passing 1 instead of 0.

}
tcp_server::pointer server = tac.serviceId_to_tcp_server_map[service_id];

uint32_t new_connection_id = ++server->highest_connection_id;
Contributor

I see that highest_connection_id is monotonically increasing and will wrap around to 0 after about 4 billion (2^32) increments. Since 0 is an invalid connection_id, do we have to implement wraparound to 1 instead of 0?

I know that MAX_ACTIVE_CONNECTIONS limits the number of active connections, but in some rare case it would be possible to start and reset enough connections to hit the maximum. If that happens, then without wraparound to 1 we could have a broken connection mapped to 0 or (worse) duplicate connections mapped to 1.

Contributor Author

I originally made the connection id an unsigned long, but after a bit of deliberation with the team, we agreed to keep it as a 32-bit int. The tunnel lifetime imposes a strict time limit, after which the localproxy shuts down; it would require almost 100,000 connections to be created per second to overflow the counter within 12 hours.

@RogerZhongAWS RogerZhongAWS merged commit 6716623 into main Sep 20, 2022
@RogerZhongAWS RogerZhongAWS deleted the multipleTCP branch September 20, 2022 17:13
RogerZhongAWS added a commit that referenced this pull request Dec 22, 2022
* implementation for simultaneous tcp connections
