Automatically disable registration after a certain threshold #6376
That threshold is very hard to determine automatically. If we set a fixed number, we'll just waste resources that the podmin pays for and redirect load towards setups running on more easily exhausted hardware. Even disregarding that, the total user count is a bad measure: there might be many inactive accounts, or a few accounts causing a ton of load.
Yeah, I'm assuming that inactive account cleaning will be a big part of making this work. Also, to be clear, this wouldn't be something hardcoded. It'd be on by default, but podmins would be able to go into the settings and change or disable it.
But the default would become a de facto hardcode. Podmins actively watching their pods can close registrations as they please; it's the people not so actively watching that would "benefit" from it, so the default will hardly ever be changed. And it's not just about completely inactive accounts: if you check diaspora* once a day, you already cause tremendously less load than a power user with 1k contacts checking in every half hour.
Part of me wants to say that an underutilized pod is far better than an overutilized and slow pod. But I also see your point about pushing users away onto less powerful parts of the network. Not sure what the right answer is here.
That's true. Perhaps we could measure this a different way? E.g. the average number of Sidekiq jobs, or the average run time of Sidekiq jobs? I'm not familiar enough with the codebase to know whether either of these would be accurate enough, though. The other option is to wait until the pod is obviously nearing resource exhaustion. That approach basically concedes that automatically setting a threshold to prevent sluggishness is hard, but detecting when things are starting to get sluggish is (presumably) easier. It's not ideal, but it's way better than another joindiaspora situation.
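As a rough illustration of this second approach (none of these names exist in the diaspora* codebase; this is a hypothetical sketch): keep a moving average of recent background-job runtimes and flag the pod as nearing sluggishness once that average crosses a configured threshold. In a real deployment the samples might come from timing Sidekiq jobs; here they are just numbers, so the detector stays independent of any queue library.

```ruby
# Hypothetical sketch: detect when a pod is "getting sluggish" from
# recent background-job runtimes. All names are illustrative only.
class SluggishnessDetector
  def initialize(window_size: 100, threshold_seconds: 5.0)
    @window_size = window_size     # how many recent jobs to average over
    @threshold = threshold_seconds # average runtime considered "sluggish"
    @samples = []
  end

  # Record the runtime (in seconds) of one completed job.
  def record(runtime)
    @samples << runtime
    @samples.shift if @samples.size > @window_size
  end

  def average_runtime
    return 0.0 if @samples.empty?
    @samples.sum / @samples.size
  end

  # Registrations could be closed (or the podmin alerted) when this is true.
  def sluggish?
    average_runtime > @threshold
  end
end
```

With Sidekiq, `record` could be fed from a server middleware that times each job; the thresholding logic itself needs nothing from the queue.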
I've had this same idea on my "log as an issue" worklog :P A conditional allow_registrations would be a good idea. It could be something simple; it doesn't have to be scientifically efficient or super clever.
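As one illustration of the "something simple" idea (every name here is hypothetical, not diaspora*'s actual API): compare the pod's active user count against a podmin-configured cap, with the cap unset meaning today's behaviour.

```ruby
# Hypothetical sketch of a conditional allow_registrations check.
# In a real pod, `active_user_count` would come from the database and
# `max_active_users` from the pod's settings (nil = feature disabled).
def registrations_open?(settings_enabled:, active_user_count:, max_active_users:)
  # Feature off: behave exactly as the plain on/off setting does today.
  return settings_enabled if max_active_users.nil?
  settings_enabled && active_user_count < max_active_users
end
```

The point is that the check is a pure function of two numbers, so it can run on every registration request without a restart.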
I think we can do a simple heuristic, but I wouldn't like it to be enabled by default.
Sure, the default could still be as it is.
When it's not enabled by default (which I also wouldn't do), what's the purpose of such a feature? If the podmin has to do something to limit the number of users anyway, he could also simply disable registrations at some point.
So the podmin can set a maximum number of active users that he/she wants to support, and then leave it. Constantly toggling a setting on and off, and having to restart the app server just to do that, doesn't really support this case. Also, if the podmin has activated it, it stops the pod from being swamped suddenly. A more unlikely case, but I have seen it happen.
I see the use case, but I try to question every feature request with the possible increase in code complexity in mind, especially if it's a feature that will be turned off by default and will only get used by a very small, edge-casy group.
The idea feels weird to me. In particular, I see downsides, like invitation links which are given out without knowing when they will be used. I think the podmin should always know whether it's currently possible to register on his pod, so he should turn registration on or off manually. It's a very important decision and should not be taken automatically, without control. A setting in the admin panel to toggle it without restarting would be nice, though.
If there is some use in having this option, would it make more sense for the feature, if enabled, to send an alert to the podmin when the threshold is reached?
@goobertron in that case, the load of the server is a better indicator than the number of users. A podmin can configure his server to send him an email when the load is too high, but that is not related to diaspora* anymore. And I honestly think we should not automatically close registrations; this is a sysadmin matter which needs a human decision. Let's trust them ;)
That's exactly what I suggested! |
I don't think the
So instead of having a nice automated way to control your pod's size, one would need to constantly jump on emails and restart the app server? :) I get that not everything needs to be implemented, and that configuration overload can happen at some point. But to me, this would be worth implementing, especially if at some point we manage to build some kind of pod-choosing wizard which would push registering users towards smaller pods around the network. As a software project, we should aim to make running a pod easier, not "sysadmin required".
I don't agree with you: I don't want servers reachable over the internet without a "sysadmin required", diaspora* or not! It is a bad idea to let people administrate servers if they don't know what they are doing. You can't simply install a pod and let it run and run and run; you need someone who knows what he does, and knows when he needs to do something (install security updates for the base system and/or diaspora*; there are still pods online with a version < 0.5). This is even more important for pods with open registration. (I agree that a podmin shouldn't need to know Ruby or Rails, but basic sysadmin knowledge is a requirement.) The default is closed registration, and if someone changes this to open, he should know whether his server has enough capacity for more users, and he can close it again later. You don't need to change this every day. You need to restart the pod anyway when you upgrade it, and then you can also check whether you want to open or close registration (and even that only if you are near the limit). So I think there is no need to increase the code complexity here. A podmin with an open pod should know his server well enough to decide whether he wants to open or close registration.
This is true, right now. Doesn't mean that we shouldn't be trying to change that. See e.g. projects like Sandstorm and arkOS.
Yes, but they also shouldn't have to constantly monitor pod size. There should be some way for the software to implement this policy automatically.
I want diaspora* to be easily installable and friendly to podmins, whether they are sysadmins or not. That is the only way forward to actual adoption, rather than just a few hundred pods like now. In the long term, it would be great to have things like online configuration, so that diaspora* bundled with things like arkOS is actually usable. Right now, diaspora* requires too much sysadmin knowledge, which stops a lot of people from running a personal pod.
No, the default is open registrations.
I completely agree with Jason when he says pods should be easy to administrate, though I still don't think deciding automatically is a good thing. For me, "easy" means doable in an interface rather than over SSH, and without having to restart the pod. So a toggle in the admin panel to enable/disable registration, coupled with an indicator on the main page of the admin panel (like the "pod up-to-date" indication) to warn when the load is too heavy, could be the way to deal with this problem.
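The "load too heavy" indicator suggested here could be a simple classification of load average relative to core count. A minimal sketch, assuming hypothetical names and thresholds (on Linux the load average might be read from /proc/loadavg; passing it in keeps the check testable and platform-independent):

```ruby
# Hypothetical sketch of an admin-panel load indicator.
# Thresholds are illustrative, not tuned values from diaspora*.
def load_status(load_avg, cpu_cores)
  ratio = load_avg / cpu_cores.to_f
  if ratio < 0.7
    :ok          # plenty of headroom
  elsif ratio < 1.0
    :warning     # nearing capacity: podmin might consider closing registrations
  else
    :overloaded  # load exceeds cores: registrations probably should be closed
  end
end
```

The admin panel would only render the symbol as a green/yellow/red badge; the decision to actually close registrations stays with the podmin.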
Looks like we need a Loomio vote!
I'm new to diaspora*, so this may not make sense given the current state of the network deployment, but it seems like it might be a good idea to automatically disable registrations, by default, after a certain threshold is reached. This would solve two problems.
First, podmins wouldn't have to watch their pod's resource usage as much; running a pod would become more of a fire-and-forget experience. Second, podmins sometimes intentionally close pod registrations not because they are having performance problems, but simply to encourage the overall decentralization of the network; a sensible default threshold would support that as well.
Thoughts?