Better way of avoiding port conflicts #32

Closed
rysiekpl opened this issue Mar 28, 2022 · 8 comments · Fixed by #237
Labels
enhancement New feature or request
Comments

@rysiekpl

Continuing discussion from #3.

It would be good to have a better way of avoiding port conflicts when running multiple instances of Lemmy. The current approach (randomizing ports on every deploy) has several downsides.

Quick brainstorm about options here:

  1. Include nginx in the docker-compose.yml, allowing us to proxy_pass directly to specific containers; name the relevant containers using {{domain}}: lemmy-{{domain}}, lemmy-ui-{{domain}}, pictrs-{{domain}}, etc. A host-installed nginx would be unnecessary, or if already present would only need to hit a single port: the one exposed by the nginx container.

  2. Have some faith in the admins running instances, and make it possible to explicitly set ports per deployment, such that an admin deploying 3 different Lemmy instances would just explicitly define three sets of ports. A version of this would be to have a "starting port" configurable per instance, with each actual service port offset from it by a well-defined value (say, lemmy_port would be starting_port, lemmy_ui_port would be starting_port + 1, etc.).

Option 1 is the cleanest: it keeps each Lemmy deploy in a "package" within the docker-compose.yml, all managed from a single lemmy-ansible checkout with minimal side effects on the host system. From a sysadmin's perspective this might be the most preferable, and it is also the most in line with the "Docker way of doing things", so to speak.
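A rough sketch of what option 1 could look like, purely for illustration: the service names follow the lemmy-{{domain}} pattern above with an example.org placeholder, the port and image choices are assumptions, and this is not an actual lemmy-ansible template.

```yaml
# Hypothetical per-instance docker-compose.yml for option 1 (a sketch, not the
# real lemmy-ansible template). Only the nginx container publishes a host port;
# everything else stays on the internal compose network.
version: "3.7"
services:
  nginx-example.org:
    image: nginx:1-alpine
    ports:
      - "8536:8536"                      # the single port the host needs to know about
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - lemmy-example.org
      - lemmy-ui-example.org
      - pictrs-example.org
  lemmy-example.org:
    image: dessalines/lemmy:latest       # reachable from nginx as lemmy-example.org
  lemmy-ui-example.org:
    image: dessalines/lemmy-ui:latest
  pictrs-example.org:
    image: asonix/pictrs:latest
```

A host-installed nginx, if present at all, would then only proxy_pass to that one published port.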

@dessalines
Member

I definitely prefer option 2 for performance reasons... why have multiple nginx layers when you only need one?

@rysiekpl
Author

> I definitely prefer option 2 for performance reasons... why have multiple nginx layers when you only need one?

Option 1 could/would also be used with a single nginx layer; the difference is that the nginx would live in the docker-compose.yml. The only reason for an additional nginx on the host system would be if the system administrator wants to use it that way for whatever reason (other services running on the system, etc.).

The important benefit of Option 1 is encapsulation. All services needed to run Lemmy are in that case encapsulated in and managed by the docker-compose.yml, with no side effects on the host system.

The performance hit should be negligible. I've run infrastructures with two nginx layers serving hundreds or thousands of requests per second; nginx was never the bottleneck.

@Nutomic
Member

Nutomic commented Mar 29, 2022

We have an internal repo that we use to deploy lemmy.ml and associated test instances; it also uses the second method you describe. Just specify the starting port for each instance, and the other ports are immediately above it.

I prefer to use native nginx, because that makes it much easier to run other services besides Lemmy (it's so lightweight that there is little reason to dedicate a whole server to it). Also, system packages will be updated more frequently (especially in case of security vulnerabilities), e.g. using unattended-upgrades. If we use a Docker image for nginx, it has to be updated manually in this repo all the time.

@rysiekpl
Author

> We have an internal repo that we use to deploy lemmy.ml and associated test instances; it also uses the second method you describe. Just specify the starting port for each instance, and the other ports are immediately above it.

Makes sense.

> Also, system packages will be updated more frequently (especially in case of security vulnerabilities), e.g. using unattended-upgrades. If we use a Docker image for nginx, it has to be updated manually in this repo all the time.

That's a valid consideration. It also makes it easier to upgrade from the current setup. Option 2 it is, then!

@rysiekpl
Author

> We have an internal repo that we use to deploy lemmy.ml and associated test instances; it also uses the second method you describe. Just specify the starting port for each instance, and the other ports are immediately above it.

So, since I might have some time to work on this soon, a question: how do you specify the ports in that setup? I could come up with a scheme, but maybe there's no need to reinvent the wheel.

@Nutomic
Member

Nutomic commented Apr 21, 2022

It's like this:

# List of instance domains that are deployed and managed.
domains:
  - lemmy.ml
  - slrpnk.net
  - lemmy.perthchat.org
  - community.xmpp.net
  - jeremmy.ml
# Internal ports that are used for each instance. These should be in steps of 10
# because we need ports for different services
ports:
  "lemmy.ml": 8000
  "slrpnk.net": 8020
  "lemmy.perthchat.org": 8030
  "community.xmpp.net": 8040
  "jeremmy.ml": 8050

@ticoombs
Collaborator

ticoombs commented Oct 4, 2023

@rysiekpl Did you find a solution to your problem?

As we now have a vars.yml file per domain, we'll look at setting up a new variable called lemmy_web_port (or something like that); we'll check whether it exists in the vars file, and if not, it will be random. Would that be sufficient?

Edit: The main reason I want to fix this is that it forces nginx to be reloaded every time we deploy, when it doesn't need to be.
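
A minimal sketch of what that fallback could look like, assuming the variable name lemmy_web_port from the comment above; the seeded random filter is just one way to keep the fallback stable across runs, and the actual fix in #237 may do it differently:

```yaml
# Sketch only: prefer an explicitly pinned lemmy_web_port from the per-domain
# vars.yml, otherwise fall back to a random port. Seeding by the domain keeps
# the fallback stable across deploys. The lemmy_web_port name comes from the
# comment above; `domain` and the port range here are hypothetical.
effective_lemmy_web_port: "{{ lemmy_web_port | default(59999 | random(start=10000, seed=domain)) }}"
```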

@ticoombs ticoombs added this to the 1.3.0 milestone Oct 4, 2023
@rysiekpl
Author

rysiekpl commented Oct 4, 2023

I have not; I haven't had the time to dive into it.

@ticoombs ticoombs modified the milestones: 1.3.0, 1.4.0 Oct 9, 2023
@ticoombs ticoombs added the enhancement New feature or request label Dec 16, 2023
@ticoombs ticoombs modified the milestones: 1.4.0, 2.x Feb 1, 2024
@ticoombs ticoombs mentioned this issue Apr 13, 2024
@ticoombs ticoombs modified the milestones: 2.x, 1.5.0 Apr 13, 2024