Prepare crate for library use #139
Conversation
This PR has conflicts with the latest commit.
Alright, I'll look into this at the beginning of next week.
BTW. Because
In the context of being a library it's a bit strange, and dangerous, to even have code that touches signals and direct process exits. That's something basically only a binary should ever do. Also, having the binary's main function end with a panic, because it is not supposed to terminate from that execution path, is a bit odd IMO. With the redesign proposed here I feel that the entire Tokio runtime is being taken care of better. Futures are being properly evaluated and the reactor can gracefully shut down before the process exits cleanly by returning from `main`.

EDIT: To decouple the signal/break handling and process exit even more from the library, it would be nice not to have the shorthand
That's because Rust's main function doesn't have a return value. If the server exited with an error, it should be reported to the process manager (supervisord, systemd, initd). Otherwise, the process manager will not report the error and restart the process. I completely agree to remove that signal handler in the library.
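To illustrate the point about reporting errors to a process manager, here is a minimal sketch of a binary that maps the service result to an exit code. `run_service` and `exit_code` are hypothetical stand-ins, not this crate's actual API:

```rust
use std::process::exit;

// Hypothetical stand-in for the real service entry point.
fn run_service() -> Result<(), String> {
    Ok(())
}

// Map the service result to a process exit code: 0 on success, 1 on error.
fn exit_code(result: &Result<(), String>) -> i32 {
    if result.is_ok() { 0 } else { 1 }
}

fn main() {
    let result = run_service();
    if let Err(e) = &result {
        // Log to stderr so the process manager captures the reason.
        eprintln!("server exited with error: {}", e);
    }
    // A non-zero exit code lets supervisord/systemd/init detect the
    // failure and decide whether to restart the process.
    exit(exit_code(&result));
}
```

The key design point is that only the binary's `main` decides the process exit code; the library just returns a `Result`.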
This is a commit for demonstrating my idea.
Thanks. However, that commit doesn't address the case where a different signal monitor is needed. When using the service as a library, we need to pass in a signal monitor that can listen for requests from the owning application to shut down the service.
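As a synchronous analogue of this "owner-provided shutdown signal" idea, here is a hedged sketch using threads and a channel instead of futures. `run_service` is hypothetical and not this crate's API; the real design would use a monitor `Future`:

```rust
use std::sync::mpsc::{channel, Receiver, RecvTimeoutError};
use std::thread;
use std::time::Duration;

// Hypothetical sketch: the service loop runs until the owning
// application sends a shutdown request on the channel.
fn run_service(shutdown: Receiver<()>) -> u32 {
    let mut iterations = 0;
    loop {
        iterations += 1; // pretend to do one unit of work
        match shutdown.recv_timeout(Duration::from_millis(1)) {
            // Stop on an explicit request, or when the owner dropped the sender.
            Ok(()) | Err(RecvTimeoutError::Disconnected) => return iterations,
            Err(RecvTimeoutError::Timeout) => {} // no request yet, keep going
        }
    }
}

fn main() {
    let (shutdown_tx, shutdown_rx) = channel();
    let service = thread::spawn(move || run_service(shutdown_rx));

    // The owning application decides when the service should stop.
    thread::sleep(Duration::from_millis(10));
    shutdown_tx.send(()).expect("service already gone");

    let iterations = service.join().expect("service thread panicked");
    println!("service stopped after {} iterations", iterations);
}
```

The library never installs a signal handler or calls `exit`; the owning application wires CTRL+C (or anything else) to the shutdown channel.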
Yes, you can:

```rust
let config = ...; // Somewhere constructed Config
let option = Options {
    enable_signal_monitor: false, // Disable the default monitor
};
let ss_service = run_opt(option);
let your_monitor = ...; // Your monitor Future
futures::select(your_monitor, ss_service); // This future will return if any of them returns.
```

So you can just let
OK. Perhaps there is some misconception on my part. If you check the code I submitted, you'll see that all subprocesses (plugins) are terminated in a controlled manner. How can the same thing be achieved with the code you posted?
```rust
let mon = create_signal_monitor();
run_with_monitor(config, mon);
```
@mvd-ows I read your commits twice, and I think I got your idea of "all subprocesses (plugins) are terminated in a controlled manner".
I finally got time to review your code seriously. I have to say that you have done a great job. But I still have some questions:
Hello, thanks for your comments; your analysis of my code is correct. Regarding your questions:
The only way you could have
The benefit that
Because

Well, I don't know much about Windows. If it is only for compatibility reasons, we could just add a monitor sub-module, which implements the subprocess monitor platform-dependently (
Implementing select functionality for waiting for multiple subprocesses via
@faern How about
How many plugins do you expect the most advanced setup to have? The extra threads will not do anything, just sit and wait for the process to exit. If we swap to
A thread will cost way less resources than a subprocess. And if the number of threads vs subprocesses always has a 1:1 relationship, the amount of resources "wasted" on threads will be dwarfed by those of the actual subprocess. So no matter how many plugins are started, the threads will not be the issue.
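The thread-per-subprocess approach being discussed can be sketched like this: one cheap monitor thread blocks in `wait()` per child and reports the exit on a shared channel. This is an illustrative sketch, not code from this PR; it uses `sh -c 'exit 0'` as a stand-in plugin and assumes a Unix-like system:

```rust
use std::process::Command;
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    let (tx, rx) = channel();

    // Spawn a few dummy "plugins" (stand-ins for real plugin binaries).
    for id in 0..3 {
        let mut child = Command::new("sh")
            .arg("-c")
            .arg("exit 0")
            .spawn()
            .expect("failed to spawn child");
        let tx = tx.clone();
        // One monitor thread per child: it blocks in wait() and reports
        // the child's id and exit status on the shared channel.
        thread::spawn(move || {
            let status = child.wait().expect("wait failed");
            let _ = tx.send((id, status.success()));
        });
    }
    // Drop the original sender so the channel closes once all monitors finish.
    drop(tx);

    // The main task hears about every plugin exit on one channel, so it
    // can tear the whole service down as soon as a plugin dies unexpectedly.
    while let Ok((id, ok)) = rx.recv() {
        println!("plugin {} exited, success = {}", id, ok);
    }
}
```

Since each thread spends its life blocked in `wait()`, the per-plugin overhead is one mostly idle OS thread, which is small next to the subprocess itself.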
@zonyitoo I could swap out
Force-pushed from 739b38d to 7c0fbde
Sorry, I force-pushed a completely wrong branch. I reset it. Still working on fixing this PR as well, but that comes later.
@faern That branch is for adding a
@zonyitoo Is the plan to merge that branch? Or was it an experiment? Since it touches a lot of the stuff this PR also changes, it would be good to know if it's worth spending time coding against the code base as it looks pre

A different, unrelated question: Why is running the TCP part of the code optional in the server, but not in the client? I'm talking about this code: https://github.com/shadowsocks/shadowsocks-rust/blob/master/src/relay/server.rs#L53
Because
Because it is required for UDP associate in SOCKS5: before associating UDP ports, authentications and commands must be sent via TCP from the client to the server.
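For context, this is how the UDP ASSOCIATE command is carried over the TCP control connection per RFC 1928. The sketch below builds the request bytes; it is illustrative, not code from this crate:

```rust
// Build a SOCKS5 UDP ASSOCIATE request per RFC 1928. The request is
// sent on the TCP control connection; the reply carries the relay
// address to which the client should send its UDP datagrams.
fn udp_associate_request(addr: [u8; 4], port: u16) -> Vec<u8> {
    let mut req = vec![
        0x05, // VER:  SOCKS5
        0x03, // CMD:  UDP ASSOCIATE
        0x00, // RSV:  reserved, must be zero
        0x01, // ATYP: IPv4 address
    ];
    req.extend_from_slice(&addr);                // DST.ADDR
    req.extend_from_slice(&port.to_be_bytes()); // DST.PORT, network byte order
    req
}

fn main() {
    // 0.0.0.0:0 asks the server to pick the relay address and port.
    let req = udp_associate_request([0, 0, 0, 0], 0);
    println!("{:02x?}", req);
}
```

Because the command rides on the TCP connection, a client with no TCP endpoint has no way to negotiate the UDP relay in the first place.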
Currently it is an experiment for adding an optional signal handler, in case the user wants to handle signals on their own.
How can it do that if the server does not run a TCP endpoint? It feels like they should be mandatory on both ends, and not just one? So what I wonder is if it was a bug to force it, and it's now fixed. From my understanding it feels like it should be mandatory, since otherwise no one can do the initial authentication / port negotiation etc.
Yes, this is exactly the feature this PR is trying to introduce :D which is why I wonder. I will continue working towards running with arbitrary stop signal futures in this PR.
No. So, do you want me to merge
I don't know. I have not read that code. All I want is for this project to have good structure and code quality and be usable both as a library and as binaries 🙂

EDIT: I think we are on to something good in this PR right here. But master has changed quite substantially. So I and @mvd-ows need to take what we have learned from this discussion and apply it on top of master again. From that perspective it would be better if
Merged.

EDIT: Ah.... I haven't seen your EDIT yet!!! I have already merged!!!
I guarantee I will not make any breaking changes until this PR is merged, unless someone finds any fatal bugs.
While working on this I found a few other things that might be worth fixing. So I did those in a separate branch and PR: #142.
I have a really nice solution going on here. Will polish and test it a bit more, and maybe wait for #142 to be discussed/merged before I push to this PR. But this solution solves detecting dying plugins on Windows, and it also does not cost any extra threads per plugin.
Force-pushed from 7c0fbde to 57a43f4
Now I pushed a completely different implementation, with the same goal as this PR. Then I swap out the

Also changed some
On top of making this crate usable as a library, this also solves the problem of the Windows version not properly monitoring plugins for exits. This code should now do that, thanks to
Travis is currently having an error on nightly. But it's while building
EDIT: I have accidentally upgraded it from
Just keep it as
But it fails building? Is the Linux distro on Travis maybe too old? Do you plan on merging it despite the Travis failures, or how should we proceed?
It seems that they require these flags for building: https://github.com/miscreant/miscreant.rs/blob/master/.travis.yml#L18
This PR's primary purpose is to decouple the shadowsocks service from the console. Doing so will enable one to use the crate as a library, and start/stop the service independently of the hosting process' lifetime.
The CTRL+C handler is no longer hardwired into the service logic, and no longer invokes a process shutdown when activated, but merely a shutdown of the service.
Additionally, it improves the monitoring of plugin subprocesses and ensures they are cleaned up when the shadowsocks service is stopped.