
API interface for SSH tunnel management #58

Closed
fork04 opened this issue Dec 16, 2019 · 10 comments · Fixed by #64
Comments

@fork04 commented Dec 16, 2019

Suggested features:

  • Kill a tunnel
  • List of active tunnels
  • Number of active tunnels
  • Info about a specified tunnel (like in the log: 2019/12/14 - 19:34:00 | host.com | 200 | 6.792909ms | ip | GET /api/v2/test)
  • Tunnel stats with bytes in/out
  • List of dropped tunnels with timestamp and reason

@antoniomika (Owner) commented Dec 16, 2019

Hi @fork04! Thanks for the issue! Do you have any ideas about how this information should be exposed/what type of API semantics should be used? My immediate thought was to add it as part of the web handler and just have an API key passed on the command line. The endpoint would then just serve raw JSON with the data.

@BenHarris commented Dec 16, 2019

I like the idea, @fork04! We could even build a little web-based dashboard to poll and show the stats along the lines of https://demo.nginx.com/

@antoniomika, web handler sounds good to me. I think we would be best to start with:

  • General stats
  • List active tunnels
  • Tunnel info
  • Kill tunnel

My only slight concern with the others is that we would probably need to implement some data storage, which increases the overall complexity a little. I guess we could just allocate some memory for logs and rotate them, although that raises the question of whether they should persist across restarts.
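The list/info/kill endpoints above need little more than an in-memory registry of live tunnels. A rough stdlib-only sketch, with illustrative names and fields rather than sish's real types:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// Registry stands in for sish's set of live SSH connections.
type Registry struct {
	mu      sync.Mutex
	tunnels map[string]string // id -> remote address
}

// List returns the ids of active tunnels in stable order.
func (r *Registry) List() []string {
	r.mu.Lock()
	defer r.mu.Unlock()
	ids := make([]string, 0, len(r.tunnels))
	for id := range r.tunnels {
		ids = append(ids, id)
	}
	sort.Strings(ids)
	return ids
}

// Kill drops a tunnel by id, reporting whether it existed.
func (r *Registry) Kill(id string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, ok := r.tunnels[id]; !ok {
		return false
	}
	delete(r.tunnels, id) // a real implementation would also close the SSH channel
	return true
}

func main() {
	reg := &Registry{tunnels: map[string]string{"host.com": "1.2.3.4:22", "demo": "5.6.7.8:22"}}
	fmt.Println(reg.List())       // [demo host.com]
	fmt.Println(reg.Kill("demo")) // true
	fmt.Println(reg.List())       // [host.com]
}
```

Each API endpoint would then just be a thin HTTP wrapper over these methods.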

@fork04 (Author) commented Dec 16, 2019

If someone wants to persist sish container logs, they can use Docker's built-in log drivers, for example:
docker run --log-driver syslog --log-opt syslog-address=udp://1.2.3.4:1111 [...]

@BenHarris commented Dec 16, 2019

Yeah, of course. Are you thinking they should simply be output as per the current logging, as opposed to an API approach then?

@fork04 (Author) commented Dec 18, 2019

I think the current logging system is sufficient; my proposal is rather to expand the application's operating parameters stored in memory and make them available via API.
This data could later be consumed both as JSON from the console and to build a simple web-based interface.

@antoniomika (Owner) commented Dec 22, 2019

I think we can actually get away with not needing a local datastore. My thought is we wouldn't store any requests as part of the stack and would only stream them to any connected client. That would definitely make it less complex. The way I think this would work is as follows:

  1. Websocket handler that streams requests in real time to the web client
  2. API methods to list and disconnect connections
  3. API handler at something like /_sish/console for each HTTP service. This would be protected by HTTP basic auth or a token created when the tunnel is made and then sent as output with all of the other tunnel output.
  4. A global /_sish/console that can be used by the sish service operator to see all of the connected services, all of the logs for each service, and the ability to disconnect clients or only selected services for each client. This could be accessed by a token or HTTP basic auth creds supplied at program runtime.
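Point 1 can indeed work without any datastore by fanning each request line out to whoever is connected at that moment and dropping it otherwise. A minimal, illustrative hub in stdlib Go (an assumption sketch: a websocket layer would sit on top, and a concurrent hub would also need locking):

```go
package main

import "fmt"

// Hub fans log lines out to connected subscribers without storing them.
// Not goroutine-safe as written; a real hub would guard subs with a mutex.
type Hub struct {
	subs map[chan string]struct{}
}

func NewHub() *Hub { return &Hub{subs: make(map[chan string]struct{})} }

// Subscribe registers a new client and returns its delivery channel.
func (h *Hub) Subscribe() chan string {
	ch := make(chan string, 16)
	h.subs[ch] = struct{}{}
	return ch
}

// Unsubscribe removes a client and closes its channel.
func (h *Hub) Unsubscribe(ch chan string) {
	delete(h.subs, ch)
	close(ch)
}

// Publish delivers a line to every subscriber that can keep up,
// dropping it for slow consumers instead of buffering forever.
func (h *Hub) Publish(line string) {
	for ch := range h.subs {
		select {
		case ch <- line:
		default:
		}
	}
}

func main() {
	hub := NewHub()
	client := hub.Subscribe()
	hub.Publish("host.com | 200 | GET /api/v2/test")
	fmt.Println(<-client)
}
```

Each websocket connection would map to one subscription, so nothing is retained once the last client disconnects.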

Love to hear your opinions on this. I have some downtime tomorrow that I think I could get a decent amount of this implemented.

@fork04 (Author) commented Dec 22, 2019

@antoniomika - for me, it's a great solution!

antoniomika added a commit that referenced this issue Dec 23, 2019
@antoniomika (Owner) commented Dec 23, 2019

@fork04 I have a PR up that implements the necessary APIs and the routes for the services. The "frontend" is extremely crude and definitely not a production-ready service, but it'll get the job done temporarily. I don't have time to work on a proper frontend, but if someone else is interested it is very easy to do. The Docker image tag to play with the features is 1ceea74541767d659a91f1dd3257502134e25195

@fork04 (Author) commented Dec 23, 2019

Thanks for this PR, @antoniomika! In fact, the interface needs some refinement, but it doesn't look like much work. I will test the solution and propose corrections as far as I can.

@antoniomika (Owner) commented Dec 23, 2019

Thanks @fork04! Let's track things to fix on this issue. One thing I found is that I need to fix gzip decoding when sish runs behind a reverse proxy, as it does on my production service.
