tomerb/ship-control

  _________.__    .__         _________                __                .__   
 /   _____/|  |__ |__|_____   \_   ___ \  ____   _____/  |________  ____ |  |  
 \_____  \ |  |  \|  \____ \  /    \  \/ /  _ \ /    \   __\_  __ \/  _ \|  |  
 /        \|   Y  \  |  |_> > \     \___(  <_> )   |  \  |  |  | \(  <_> )  |__
/_______  /|___|  /__|   __/   \______  /\____/|___|  /__|  |__|   \____/|____/
        \/      \/   |__|             \/            \/                         

Control a fleet of ships through a centralized server.

Code structure

The solution is composed of three projects:

  • ShipControlHQ (exe) - a server application that handles client connections and distributes requests to connected clients
  • ShipControlClient (exe) - a simple application to simulate a ship connecting to the HQ
  • ShipControlCommon (lib) - common code used by other projects

General operation flow

The server provides a simple CLI for lifecycle management. Run the server executable and start listening for incoming clients by choosing the start server command.

The server listens by default on http://localhost:8080.
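The start/stop lifecycle described above can be sketched as a tiny command loop. This is an illustrative Python sketch, not the project's actual (.NET) code; the command names and `ServerLifecycle` class are assumptions.

```python
# Sketch of a lifecycle CLI like the HQ server's.
# Command names ("start"/"stop") are illustrative, not from the project.

class ServerLifecycle:
    def __init__(self):
        self.listening = False
        self.address = "http://localhost:8080"  # default address from this README

    def handle(self, command: str) -> str:
        if command == "start":
            self.listening = True   # begin accepting incoming clients
            return f"listening on {self.address}"
        if command == "stop":
            self.listening = False
            return "stopped"
        return f"unknown command: {command}"

hq = ServerLifecycle()
print(hq.handle("start"))  # → listening on http://localhost:8080
```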

Run one or more clients using the client executable. Each client identifies itself with a unique ID and performs a dummy (placeholder) authentication with the server.

The server holds a dictionary mapping client IDs to client service instances. Clients are expected to disconnect at random, so entries are never removed and pending commands are kept until the client handles them. Operators are therefore free to add new commands even while a ship is offline, but the ship must have connected at least once, since its ID is needed to push a command to its ship service.
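The pending-command store described above can be sketched as follows. This is a Python sketch of the idea only (the project itself is a .NET solution); the `ShipRegistry` class and its method names are illustrative.

```python
# Sketch of the pending-command store: an entry is created on a ship's first
# connection and never removed, and commands queue up until the ship drains
# them. Names are illustrative, not from the project.

class ShipRegistry:
    def __init__(self):
        self._pending: dict[str, list[str]] = {}

    def register(self, ship_id: str) -> None:
        # First connection creates the entry; reconnects reuse the same queue.
        self._pending.setdefault(ship_id, [])

    def push_command(self, ship_id: str, command: str) -> None:
        if ship_id not in self._pending:
            # The ship must have connected at least once before commands
            # can be pushed to it.
            raise KeyError(f"unknown ship: {ship_id}")
        self._pending[ship_id].append(command)

    def drain(self, ship_id: str) -> list[str]:
        # Called when the ship (re)connects: hand over everything queued.
        commands, self._pending[ship_id] = self._pending[ship_id], []
        return commands
```

Because entries are never deleted, commands pushed while a ship is offline survive until its next connection.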

Client design

The client is currently very basic, and its entire functionality resides in a single file.

Once the client is "authenticated" with the server, it establishes a dedicated WebSocket connection of the form ws://localhost:9000/{clientId}, where clientId is the unique GUID generated by the client. The server uses this ID to store pending commands pushed by the operator(s).
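The connection URL above can be built from a freshly generated GUID. A minimal Python sketch (the actual client is not Python; only the URL shape comes from this README):

```python
import uuid

# Build the per-client WebSocket URL ws://localhost:9000/{clientId},
# where clientId is a GUID generated by the client.
client_id = str(uuid.uuid4())
url = f"ws://localhost:9000/{client_id}"
```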

The client awaits incoming requests from the server and performs them immediately, replying with a response object that indicates the result of the operation. For simplicity, the current command implementations just mark the command's arrival and return a dummy result.
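That request/response behavior amounts to a handler of roughly this shape. A Python sketch under assumed field names (`name`, `status`, `result`); the real project's message format may differ.

```python
# Sketch of the client's per-command handler: acknowledge the command's
# arrival immediately and return a dummy result. Field names are assumptions.

def handle_command(command: dict) -> dict:
    # "Execute" the command by just recording that it arrived.
    return {
        "command": command.get("name"),
        "status": "ok",
        "result": "dummy",  # real command execution is a listed TODO
    }
```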

Server design

The server has three main components:

  • HqOperator: a basic CLI that simulates a frontend for the operator
  • HqManager: the main entry point for the server
  • ShipService: maintains the connection to a client and handles requests pushed by the operator

The server can in principle handle multiple operators at the same time, but the current implementation pairs one operator with one server instance. The operator is free to manage the lifecycle of the server through the CLI.

Missing features/TODOs

  • Implement commands execution
  • Add unit + functional tests
  • Improve all-around resiliency and error handling
  • Add proper authentication
  • Save activity to log file
  • Add proper architecture to client code
  • Read commands from more sources, other than the operator CLI
  • Add operator endpoints to the server, then make the operator another type of client. Consider reusing the same continuous-connection design, but add authorization capabilities to differentiate between a ship and an operator.
  • Add a frontend for server management. Some operations are already available from the operator CLI, but they are basic or incomplete.
