
Docker compose for recording server #9177

Merged: 3 commits merged into nextcloud:master on Aug 29, 2023

Conversation

mishamosher
Contributor

A simple Docker compose definition for the recording server.

Member

@danxuliu left a comment


Thanks a lot for your contributions and sorry for the delay.

Besides the comments, could you rebase onto the latest master? In Nextcloud Talk we prefer rebasing the branch and force-pushing it (--force-with-lease :-) ) rather than merging the master branch into the pull request branch.

Could you also squash all the commits into a single one? The first four commits each add a single file, but those files are all interrelated, so they should go in the same commit. Similarly, the executable bit should have been there already when the executable file was introduced, and the commit that removes the useragent config simply removes something added in the previous commits, so the original commit does not need it to begin with :-)
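
For reference, a typical way to do both in one go looks something like this (a sketch; the remote and branch names are assumed):

git fetch origin
git rebase --interactive origin/master   # mark the follow-up commits as "fixup" to squash them
git push --force-with-lease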

Thanks!

recording/docker-compose/docker-compose.yml (outdated, resolved)
@@ -0,0 +1,39 @@
FROM phusion/baseimage:focal-1.2.0
Member


I would prefer to use an official Ubuntu image. What benefits does phusion/baseimage provide (in the case of the recording server)? If I am not mistaken the PID 1 zombie reaping problem should be fixed by launching the container with --init, which runs Tini as PID 1.

Contributor Author


The official docs have a comprehensive list of differences: https://github.com/phusion/baseimage-docker#overview

Personally these are the reasons why I didn't go with a bare-bones Ubuntu image:

  • Simplified management of services: auto-restart, running as another user.
  • Built-in logger that plays nice with docker.
  • Possibility to run multiple services at once.
  • A working CRON daemon.

Anyway, those are personal preferences in the end, and I understand if an untouched Ubuntu image is preferred.

Member


From my point of view, while those can be useful features, in the case of a container for the recording server they are not strictly needed:

  • The recording server should always run as its own user, which (if I recall correctly) can be set in the Dockerfile and overridden if needed when creating the container with docker run --user XXX. Auto-restarting can also be set when creating the container with docker run --restart on-failure.
  • As the recording server is directly run and it prints to stdout, the logs should be easily redirected to a proper logging system on the host, like journald.
  • Only a single service, the recording server, is expected to be run inside the container; other services (like a web server to provide TLS termination for the recording server) should be run in other containers or on the host.
  • CRON is not used by the recording server, so it should not be needed in its container.

Of course all this is based also on my personal view on how the container for the recording server should work, but for maintenance reasons I would prefer something as simple and generic as possible :-)
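
For illustration, the flags mentioned above combine into a single command along these lines (a sketch; the image name talk-recording and the user name recording are assumed):

docker run --init --user recording --restart on-failure talk-recording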

Contributor Author

@mishamosher Apr 24, 2023


I'm managing my docker containers with docker compose, not with docker run.

My reason for this is to be able to build everything with docker compose build, and then run a simple docker compose up for starting everything (and docker compose down for the opposite).
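
That is, the whole lifecycle boils down to the commands named above, shown together:

docker compose build
docker compose up
docker compose down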

I get the feeling the suggestions go towards a docker run approach, with a single Dockerfile, and no docker-compose.yml.

The docker compose use-case is kinda specific to my environment, and I'm unsure how useful it would be to others (and whether it is worth including as part of the spreed/Talk source code).

In the end, I think the question can be summed up as a logical OR:

  • Improve start-container.sh? Personally, I see it in a good enough state. Maybe add Chromium?
  • Add a separate docker compose implementation (this PR)?
  • Something else?

No hard feelings if the docker compose approach is undesirable (:

Member


I used docker run as an example, but user and restart can be used in Docker compose too (and probably anything that can be passed to docker run). And even if I start my containers with docker run for development purposes I would like to have a Docker compose file :-) After all, the Docker compose file can be seen as implicit documentation of the parameters that need to be given to docker run ;-)
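
As a rough sketch, those options could look like this in a Compose file (the service name, build context and user name are assumed):

services:
  talk-recording:
    build: .
    user: recording
    restart: on-failure
    init: true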

Actually I always wanted to provide a Dockerfile in addition to start-container.sh; start-container.sh was adapted from a hacky script I was using when the recording server development started, and although it works for development I think that being able to build a proper image would be better for end "users".

# chromium
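# The PPA is used because Chromium in Ubuntu transitioned to a snap
# package, which is not trivial to install inside Docker; this PPA
# provides the latest non-snap Chromium (see the discussion below).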
RUN add-apt-repository -y ppa:phd/chromium-browser
RUN printf "Package: *\nPin: release o=LP-PPA-phd-chromium-browser\nPin-Priority: 1001" > /etc/apt/preferences.d/phd-chromium-browser
RUN apt-get -y install chromium-browser chromium-chromedriver
Member


I guess the PPA is needed because Chromium is not updated to the latest versions in Ubuntu Focal, is that right? In that case, could you add a comment about that in the Dockerfile?

Contributor Author


Lately all the browsers have been transitioning to snap packages in Ubuntu. It is not a trivial task to get those installed in Docker.

The PPA in question offers the latest non-snap Chromium available for Ubuntu 18.04, but built for newer Ubuntu versions.

Will add a comment to the Dockerfile.

Member


Lately all the browsers have been transitioning to snap packages in Ubuntu. It is not a trivial task to get those installed in Docker.

Oh, I very well know... I wast... spent a lot of time building Ubuntu packages for the recording server thanks to that transition to Snap packages :-P

The PPA in question offers the latest non-snap Chromium available for Ubuntu 18.04, but built for newer Ubuntu versions.

Perfect, thanks for the clarification.

Member


Now that I see it, the PPA is not an official PPA but something maintained by individuals, isn't it? I would prefer to use something official (I mean, not necessarily something coming directly from the Chromium team; the official package of a Debian-based distribution not using Snaps would be fine as long as it is updated), but I do not know if there is something like that for Chromium 🤔 (but I guess not, and that is why the phd PPA was used in the first place ;-) ).

If I am not mistaken a package of Chrome (rather than Chromium) is provided by Google, but... in that case I would rather use a community version of the free (as in freedom) Chromium rather than an official version of the closed Chrome.

Contributor Author


There are official Debian DEBs, but they're often outdated if one is not using the Sid rolling release. That PPA is the only one I've found so far that works for Ubuntu+ARM64, and it is a simple and clean rebuild.

Fedora and Enterprise Linux 9 offer an up-to-date non-snap package too, but it would imply moving away from Ubuntu.

As you've mentioned, it is not an official PPA. It is maintained by:

recording/docker-compose/dockerfile-talk-recording (outdated, resolved)
@mishamosher force-pushed the docker-compose-recording branch 6 times, most recently from d94f40f to cde5b90 on April 26, 2023
@mishamosher
Contributor Author

@danxuliu I think I've got most of the changes implemented.

recording/docker-compose/Dockerfile (outdated, resolved)
recording/docker-compose/docker-compose.yml (outdated, resolved)
recording/docker-compose/docker-compose.yml (outdated, resolved)
@mishamosher
Contributor Author

@danxuliu thank you for spotting the remaining issues. Finished another attempt at solving them.


# spreed-recording dependencies
RUN apt-get -y install ffmpeg pulseaudio python3-pip xvfb
RUN pip3 install --upgrade requests
Member


This should not be needed, as the package will be updated if needed when running python3 -m pip install /tmp/recording/.

Contributor Author


The command python3 -m pip install /tmp/recording/ updates the locally installed package, and pip3 install --upgrade requests updates the globally installed package. Something akin to /usr/bin and /usr/local/bin.

Even though it works, on a default Ubuntu 20.04 Docker image there are warnings about version mismatches between the locally and globally installed requests package. Updating the globally installed requests package takes care of the warnings.
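
For context, on a stock Ubuntu 20.04 image the two commands end up targeting different locations (a sketch of the layout; the exact dist-packages paths depend on the Python minor version):

pip3 install --upgrade requests          # installs an up-to-date requests under /usr/local/lib/python3.8/dist-packages, shadowing the apt copy
python3 -m pip install /tmp/recording/   # installs the recording server there too; the apt-installed requests stays in /usr/lib/python3/dist-packages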

Contributor Author


Specifically, this is what I get:

/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.15) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
No configured backends
 * Serving Flask app 'nextcloud.talk.recording.Server'
 * Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8082
 * Running on http://172.25.0.2:8082
INFO:werkzeug:Press CTRL+C to quit

Member


Ah, ok, it seems that I did not get the error because the branch was rooted on 9aed218, so the pull request that added requests-toolbelt as a dependency had not been merged yet. Now I see them as well :-)

I would have expected python3 -m pip install /tmp/recording/ to update the dependencies as needed in /usr/local anyway and use them, overriding the ones from /usr/, but if it does not work that way and an explicit upgrade is needed... so be it 👍

recording/docker-compose/Dockerfile (resolved)
recording/docker-compose/docker-compose.yml (resolved)
recording/docker-compose/docker-compose.yml (outdated, resolved)
@danxuliu mentioned this pull request Apr 27, 2023
@mishamosher force-pushed the docker-compose-recording branch 2 times, most recently from 8cc46a6 to 8d75b7c on April 27, 2023
@szaimen
Contributor

szaimen commented Jun 7, 2023

Hi 👋

I just wanted to mention that we got it running with an Alpine Linux based image in AIO v6.1.0, with this Dockerfile: https://github.com/nextcloud/all-in-one/blob/main/Containers/talk-recording/Dockerfile, this start.sh: https://github.com/nextcloud/all-in-one/blob/main/Containers/talk-recording/start.sh, and this docker-compose: https://github.com/nextcloud/all-in-one/blob/6ef94e742f254f1a546fb4c0a3d1d884e3ec0303/manual-install/latest.yml#L162-L176. Maybe it helps :)

@SystemKeeper
Contributor

@danxuliu Changes regarding ports added, fine from my side - merge?

mishamosher and others added 3 commits August 7, 2023 03:58
Signed-off-by: Elmer Miroslav Mosher Golovin <miroslav@mishamosher.com>
By default Chromium runs in sandboxed mode, which does not seem to
work inside Docker containers (Chromium fails with "Failed to move to
new namespace"); it may work depending on the configuration (for
example, with seccomp profiles), but it does not with the default
configuration.

To work around that Chromium is started with "--no-sandbox" by wrapping
the real executable with a script to add that argument.

"wrap_chromium_binary" was adjusted from "NodeChrome/wrap_chrome_binary"
of repository "https://github.com/SeleniumHQ/docker-selenium" at commit
"c6df1ab8dc6a5aca05c163c429a062ada1d79c51".

Docker images for Selenium are licensed under the Apache license 2.0
(see LICENSE.md at above commit).

Besides that, in Selenium 4.6 the Selenium Manager was introduced, which
is a tool that takes care of downloading the browser and its driver if
needed when running Selenium. Unfortunately, when using the "chrome"
driver the Selenium Manager assumes that the executable will be called
"chrome", and if it does not find it, it downloads it to a cache
directory of the current user. However, as the chrome driver is found it
will use the one in the path, which may not be compatible with the
latest Chromium version just downloaded, and in the end the version
mismatch causes Chromium to not be launched.

Given that Chromium needs to be launched with "--no-sandbox" and that
the Selenium Manager downloads the executable to its own directory and
launches it directly from there it is not possible either to just leave
the Selenium Manager to install both the browser and its driver.

To work around that the Chromium wrapper is also linked as
"/ur/bin/chrome" to ensure that Selenium Manager will find and use it.

Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The phd/chromium PPA is discontinued and no longer updated, so it was
replaced with an equivalent repository, also for Ubuntu, but with
packages from Linux Mint.

Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
@danxuliu
Copy link
Member

danxuliu commented Aug 7, 2023

I have squashed your changes and also made a little change in the Dockerfile to use long option names when available (as they are more descriptive).

I also added the same wrapper script (almost the same, adjusted for Chromium rather than Chrome) used in the Selenium Docker images to disable the Chromium sandbox, as otherwise Chromium does not work in a default Docker setup.

But... it does not work either :-( The problem is that in Selenium 4.6 the Selenium Manager was introduced, which is a tool that takes care of downloading the browser and its driver if needed when running Selenium. Unfortunately, when using the chrome driver the Selenium Manager assumes that the browser executable will be called chrome and, if it does not find it, downloads it to the .cache/selenium directory of the current user. However, as the chrome driver is found it will use the one in the path, which is not compatible with the latest Chromium version that it just downloaded, and in the end the version mismatch causes Chromium to not be launched. It is likely that this went unnoticed when the Dockerfile was initially created because back then the driver in the image and the downloaded browser were compatible.

The path to the browser executable can be overridden, so it would use the given one rather than trying to find it. In the future it might be worth checking how that could be done, but for the time being... I just added a symlink from /usr/bin/chrome to /usr/bin/chromium when the wrapper is created 🤷
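
For reference, stripped down, the wrapper idea amounts to something like this (a sketch; the actual script in the Selenium images does a bit more, and the chromium.real path is assumed):

#!/bin/sh
# The real binary has been moved aside (e.g. to chromium.real); this
# script takes its place and forces --no-sandbox on every invocation.
exec /usr/bin/chromium.real --no-sandbox "$@"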

In this case all that was complex enough to deserve its own commit rather than being squashed into the main one ;-)

Besides all that, it turns out that the PPA that Chromium was downloaded from will no longer be updated. Fortunately there is a recommended alternative repository (which gets the packages from Linux Mint), and it provides both Chromium and its corresponding Chromium driver (all of them in a single chromium package), so I changed the Dockerfile to use that instead. I also made this its own commit for future reference regarding the original PPA.

@SystemKeeper In any case, I cannot properly test whether Chromium is working, because in my Docker setup Chromium works in sandbox mode without disabling it 🤷 Could you try to see if it works for you? Maybe first try without the wrapper and replace RUN /opt/bin/wrap_chromium_binary with RUN ln --symbolic /usr/bin/chromium /usr/bin/chrome in the Dockerfile to verify that it does not work with the sandbox in your case. Thanks! :-)


Member

@danxuliu left a comment


Tested with @nickvergessen and disabling the sandbox works 👍

Note that if you have a self-signed certificate for the Nextcloud server you will see an error about OCA.Talk not being defined, which causes the recording to fail with Chromium; this is unrelated and will be fixed in another pull request.

@danxuliu merged commit 7b8b370 into nextcloud:master on Aug 29, 2023
17 checks passed