
Containerize #1589

Closed
7 of 17 tasks
ZendaiOwl opened this issue Sep 27, 2022 · 36 comments

Labels: design, docker, enhancement, has-updates, roadmap ("Used for tickets that are meant to document the state of milestones/features internally")

Comments

@ZendaiOwl
Collaborator

ZendaiOwl commented Sep 27, 2022

Containerize NCP

A project that started from a brainstorming in the Matrix Wiki chat room.
First post & preliminary information and all that sort of stuff 🙏

Very much a work in progress: confirming and testing whether a design idea could indeed work as theorized.

Project idea

Convert NCP and its tools into something like a "binary" application container (or containers that each do only one thing/task) and services capable of being integrated with others, also making it possible to update/upgrade parts of the whole instead of everything.

ncp-config would act as the master container over the others, and this image could then be used as a service.

End goal

Containerize NCP completely

Starting point & proof-of-concept

Convert ncp-config's various scripts into individual containers, and ncp-config into a container as well, used as the master container to control the others.

Edit: Alternatively, use one container whose entrypoint is a bash control script (tentatively called ncp-tools, or something similar); possibly install it as a plugin, or only nc-encrypt, which needs admin permissions. Put all the NCP script tools into that one container and drive them with a bash controller that uses case checking to dispatch to the different parts inside the container (a rough sketch follows below). Right now that seems to be the better option, but I don't know 🙏
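
As a rough, hedged illustration of that idea (the ncp-tools name, the tool names, and the script paths are placeholders, not the actual NCP layout), such a controller could look roughly like this:

#!/usr/bin/env bash
# ncp-tools: hypothetical single-container dispatcher for the NCP script tools.
# Usage: ncp-tools <tool> [args...]
set -euo pipefail

TOOL="${1:-}"
shift || true
BINDIR="/usr/local/bin/ncp"   # assumed location of the bundled scripts

case "$TOOL" in
  nc-backup)     exec bash "$BINDIR/BACKUPS/nc-backup.sh" "$@" ;;
  nc-info)       exec bash "$BINDIR/SYSTEM/nc-info.sh" "$@" ;;
  nc-update)     exec bash "$BINDIR/UPDATES/nc-update.sh" "$@" ;;
  ""|-h|--help)  echo "Usage: ncp-tools <tool> [args...]" ;;
  *)             echo "Unknown tool: $TOOL" >&2; exit 1 ;;
esac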

Then combine that with nextcloud-aio: Nextcloud, PHP, MariaDB (or another database), and Caddy as the front end or reverse proxy, which is how I've used Caddy the most (reverse proxy). Does anyone have any other ideas?

  • Category re-design/re-structuring (?)

  • New category suggestion

    • BACKUP
    • NETWORK
    • SYSTEM
    • UPDATE

Status

  • Stopped
  • Not started
  • Not continuing
  • Researching
  • Testing
  • Ongoing
  • Paused °
  • Completed

° As I'm finishing my undergraduate degree at the moment, this is currently paused

TODO

    • Added a few relevant help articles for a basic understanding of the subject of the project.
    • Added some more relevant help articles from the Docker documentation; they can be really hard to find otherwise.
    • Add links and script names to the categories for ncp-config until completed
    • Expand explanations (partly done)
    • Begin research
    • Begin testing
    • What else? ..

Related Help articles & Documentation information

Google - Best practice, Building containers
Google - Best practice, Operating containers
Docker - Best practice, Dockerfile
Docker - Best practice, Development
Docker - Best practice, Image-building
Docker - Build enhancements
Docker - Choosing a build driver
Docker - Manage images
Docker - Create a base image

Docker - Multi-container apps
Docker - Update the application
Docker - Packaging your software
Docker - Multi-stage builds
Docker - Compose, Overview
Docker - Reference, run command
Docker - Specify a Dockerfile

Docker - Announcement, Compose V2

Red Hat Dev - Blog Post, Systemd in Containers

Docker docs, Deprecated Features

Notes

Docker Hub, Nextcloudpi

Docker docs, IPv6 Support

A Nextcloud instance's directories needed to restore settings:

  1. Config
  2. Database
  3. Data (User files & App data (?))

Commands to get IP-addresses in the terminal

# INTERNAL IP-ADDRESS
# IPv4 - String manipulation
"$(ip addr | grep 192 | awk '{print $2}' | cut -b 1-14)"

# IPv4 & IPv6 - String manipulation
ip a | grep "scope global" | awk '{print $2}' | head -2 | sed 's|/.*||g'

# IPv4, IPv6 & Link-local - JSON
ip -j address | jq '.[2].addr_info' | jq '.[].local'

# Without quotes - JSON
ip -j address | jq '.[2].addr_info' | jq -r '.[].local'

# IPv4 - JSON
ip -j address | jq '.[2].addr_info' | jq -r '.[0].local'

# IPv6 - JSON
ip -j address | jq '.[2].addr_info' | jq -r '.[1].local'

# Link-local - JSON
ip -j address | jq '.[2].addr_info' | jq -r '.[2].local'
# PUBLIC IP ADDRESS
# IPv4
curl -sL -m4 -4 https://icanhazip.com
# IPv6
curl -sL -m4 -6 https://icanhazip.com

Docker Context

Docker docs, Manage contexts

Docker Buildx

# OWNER, REPO & TAG refer to the Docker Hub owner/repository:tag
docker buildx build . \
  --file /path/Dockerfile \
  --tag "${OWNER}/${REPO}:${TAG}"

Options

  • --platform
    • Architecture(s) for the image
  • --builder
  • --push
  • --build-arg
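
A hedged example combining the options above (the image reference, platforms, and build arg are placeholders for illustration only):

docker buildx build . \
  --file /path/Dockerfile \
  --tag "${OWNER}/${REPO}:${TAG}" \
  --platform linux/amd64,linux/arm64 \
  --builder container \
  --build-arg BRANCH=master \
  --push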

Create builder

docker buildx create --use \
--name container \
--driver docker-container \
--platform linux/arm64,linux/amd64,linux/armhf

Docker Driver

  • docker
  • docker-container Recommended for multiple architecture compatibility
  • kubernetes

Orchestration

  • Docker Swarm Default
  • Kubernetes Deprecated in stack & context @v20.10 Source

Docker Compose

Docker docs, Compose extend services
Docker docs, Compose networking
Docker docs, Compose in production
Docker docs, Compose V2 compatibility
Docker docs, Compose FAQ

Old syntax - V1

  • docker-compose

New syntax - V2

  • docker compose
Ex. docker-compose.yml

version: '3.3'

services:
  nextcloudpi:
    container_name: nextcloudpi
    image: ownyourbits/nextcloudpi:latest
    command: "$(ip addr | grep 192 | awk '{print $2}' | cut -b 1-14)"
    ports:
      - published: 80
        target: 80
      - published: 443
        target: 443
      - published: 4443
        target: 4443
    restart: unless-stopped
    volumes:
      - ncdata:/data:ro
      - /etc/localtime:/etc/localtime:ro

volumes:
  ncdata:
    external: true
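
For reference, bringing the file above up with the V2 syntax (assuming it is saved as docker-compose.yml in the current directory):

docker compose up --detach
# V1 equivalent:
docker-compose up -d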

Docker Run

A working docker run command with the --init flag for PID 1 management and reaping of zombie processes.

docker run --init \
--publish 4443:4443 \
--publish 443:443 \
--publish 80:80 \
--volume ncdata:/data \
--name nextcloudpi \
--detach ownyourbits/nextcloudpi:latest \
"$(ip addr | grep 192 | awk '{print $2}' | cut -b 1-14)"
  • "$(ip addr | grep 192 | awk '{print $2}' | cut -b 1-14)"

Greps an IP address beginning with 192; modify it to fit your system and test it in a terminal first.

See "Commands to get IP-addresses in the terminal" above for other examples.

Nextcloud AIO

Used as example and reference

Docker Run AIO arm64
sudo docker run \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
nextcloud/all-in-one:latest-arm64

Reverse proxy AIO

Dockerfile

Docker docs, Dockerfile reference

Naming scheme

  • Dockerfile.name

Instead of using the ARG example below to fetch each individual script, the alpine/git image could be used to clone the repo, or alternatively it could be cloned beforehand.

ADD can be used in Dockerfile to import scripts

  • ADD ${URL} ${PATH}

URL to fetch scripts in raw text

  • https://raw.githubusercontent.com/
  • Ex. https://raw.githubusercontent.com/${OWNER}/${REPO}/${BRANCH}/${PATH}
Ex. Docker ARG

ARG         DESCRIPTION
OWNER       Repository owner @ GitHub
REPO        Repository @ GitHub
BRANCH      Branch of the repository @ GitHub
PATH        Path to the script directory
CATEGORY    Category in /bin/ncp (PATH)
PATH_BASH   Path to the bash binary
URL         GitHub URL to get scripts in raw text

ARG Example

# Fragment: assumes a preceding FROM, and a base image able to run the bash binary copied from the bash image.
# Notes on the fixes: `ARG NAME ["value"]` is not valid Dockerfile syntax; defaults use `ARG NAME="value"`.
# PATH is renamed to SCRIPT_DIR so it does not shadow the build-time PATH variable, and the script path is
# exported as ENV because build args are not available to ENTRYPOINT at runtime (and the JSON/exec form
# does not expand variables itself; bash expands them at runtime instead).
ARG OWNER="nextcloud"
ARG REPO="nextcloudpi"
ARG BRANCH="master"
ARG SCRIPT_DIR="bin/ncp"
ARG CATEGORY="BACKUPS"
ARG SCRIPT="nc-backup-auto.sh"
ARG URL="https://raw.githubusercontent.com"
ARG PATH_BASH="/usr/local/bin/bash"

ENV NCP_SCRIPT="/${SCRIPT_DIR}/${CATEGORY}/${SCRIPT}"

ADD ${URL}/${OWNER}/${REPO}/${BRANCH}/${SCRIPT_DIR}/${CATEGORY}/${SCRIPT} ${NCP_SCRIPT}
COPY --from=bash /usr/local/bin/bash /usr/local/bin/bash
RUN chmod +x "${NCP_SCRIPT}"
SHELL ["/usr/local/bin/bash", "-c"]
ENTRYPOINT ["/usr/local/bin/bash", "-c", "exec ${NCP_SCRIPT}"]

Existing Containers

Dockerized Bash Scripts - Examples

  1. Transforming Bash Script to Docker Compose
  2. Automatic Docker Container creation w/bash script
  3. Docker w/Shell script or Makefile
  4. Run scripts, Docker arguments
  5. Run a script inside a Docker container using a shell script
  6. Run Script, with dev docker image

Scripts, Dependencies & Packages

IMPORTANT

Script shebang must be #!/usr/bin/env bash and not #!/bin/bash, to be compatible with the bash docker image natively.

Notes
There are a few main things that are important to note regarding this image:

Bash itself is installed at /usr/local/bin/bash, not /bin/bash, so the recommended shebang is #!/usr/bin/env bash, not #!/bin/bash (or explicitly running your script via bash /.../script.sh instead of letting the shebang invoke Bash automatically). The image does not include /bin/bash, but if it is installed via the package manager included in the image, that package will install to /bin/bash and might cause confusion (although /usr/local/bin is ahead of /bin in $PATH, so as long as plain bash or /usr/bin/env are used consistently, the image-provided Bash will be preferred).

Bash is the only thing included, so if your scripts rely on external tools (such as jq, for example), those will need to be added manually (via apk add --no-cache jq, for example).

Nestybox & Sysbox

Sysbox is an open-source container runtime (an alternative to runc); its project (Nestybox) was acquired by Docker, Inc. It helps solve the user-permissions issue (mapping of user IDs) inside Docker containers.

Nestybox website

Quote from Sysbox GitHub page

Sysbox solves problems such as:

Enhancing the isolation of containerized microservices (root in the container maps to an unprivileged user on the host).

Enabling a highly capable root user inside the container without compromising host security.

Securing CI/CD pipelines by enabling Docker-in-Docker (DinD) or Kubernetes-in-Docker (KinD) without insecure privileged containers or host Docker socket mounts.

Enabling the use of containers as "VM-like" environments for development, local testing, learning, etc., with strong isolation and the ability to run systemd, Docker, IDEs, and more inside the container.

Running legacy apps inside containers (instead of less efficient VMs).

Replacing VMs with an easier, faster, more efficient, and more portable container-based alternative, one that can be deployed across cloud environments easily.

Partitioning bare-metal hosts into multiple isolated compute environments with 2X the density of VMs (i.e., deploy twice as many VM-like containers as VMs on the same hardware at the same performance).

Partitioning cloud instances (e.g., EC2, GCP, etc.) into multiple isolated compute environments without resorting to expensive nested virtualization.


Packages in Docker environment/build

Docker Packages

git, bash

Extraction of the different environment variables, the dependencies on other scripts (and their dependencies in turn), and which packages are required, together with their locations.

File & location

File         Repository        Installed                    Dependencies
library.sh   /etc/library.sh   /usr/local/etc/library.sh    $ncc, $ARCH, $NCPCFG, $CFGDIR, $BINDIR, $NCDIR
ncc          /bin/ncc          /usr/local/bin/ncc           occ, $NCDIR
ncp.cfg      /etc/ncp.cfg      /usr/local/etc/ncp.cfg       -
occ          -                 /var/www/nextcloud/          $NCDIR
Environment variables

ENVIRONMENT VARIABLE   VALUE
$ncc                   /usr/local/bin/ncc
$CFGDIR                /usr/local/etc/ncp-config.d/
$BINDIR                /usr/local/bin/ncp/
$NCDIR                 /var/www/nextcloud/
$NCPCFG                "${NCPCFG:-etc/ncp.cfg}"
$ARCH                  "$(dpkg --print-architecture)"
$DESTDIR               (empty)
$INCLUDEDATA           (empty)
$COMPRESS              (empty)

ncp-tools:

$BACKUPLIMIT           (empty)
$BACKUPDAYS            (empty)
$NCLATESTVER           $(jq -r .nextcloud_version < "$NCPCFG")
$PHPVER                $(jq -r .php_version < "$NCPCFG")
$RELEASE               $(jq -r .release < "$NCPCFG")
$NEXTCLOUD_URL         https://localhost (appears with: sudo -E -u www-data "/var/www/nextcloud/apps/notify_push/bin/${ARCH}/notify_push" --allow-self-signed /var/www/nextcloud/config/config.php &>/dev/null &)
Packages

dpkg, bash, jq, apt, dialog, cat, awk, mktemp, sudo

Users

www-data

Permissions

sudo
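
A hedged sketch of how the ncp.cfg-derived values in the table above are read (assuming jq is installed and ncp.cfg sits at the installed location listed in the file table):

NCPCFG="${NCPCFG:-/usr/local/etc/ncp.cfg}"   # installed location per the table above
NCLATESTVER="$(jq -r .nextcloud_version < "$NCPCFG")"
PHPVER="$(jq -r .php_version < "$NCPCFG")"
RELEASE="$(jq -r .release < "$NCPCFG")"
echo "Nextcloud ${NCLATESTVER}, PHP ${PHPVER}, release ${RELEASE}"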

@ZendaiOwl ZendaiOwl added the enhancement, docker, design and roadmap labels on Sep 27, 2022
@ZendaiOwl ZendaiOwl self-assigned this Sep 27, 2022
@sunjam

sunjam commented Sep 27, 2022

Do you want to also make a linked copy of this on the forum for wiki collab and further discussion?

@ZendaiOwl
Collaborator Author

Hmm ... I don't know 🤔

I don't mind if you want to link to it in the forum to start a discussion as such; I just hope it won't be regarded as a promise or anything like that :D It's still very early, as I just started writing this up, and I don't know how far it'll go, how long it'll take, or even whether it'll all fully work out in the end yet. But if that is understood (or those expectations are set, to put it another way), then I don't mind at all; you can go ahead and link it in the forum 🙏

@sunjam

sunjam commented Sep 27, 2022

Great, in that case we'll discuss things on the forum if it makes sense. Just thinking of ways to keep collaborating on this together. Thanks for writing this up!

@ZendaiOwl
Collaborator Author

ZendaiOwl commented Sep 30, 2022

Moving this to a README within the branch instead:

https://github.com/nextcloud/nextcloudpi/tree/containerize/build/docker/containerize

@szaimen
Contributor

szaimen commented Oct 2, 2022

Hello, just had a look and the idea sounds interesting!

A possible approach that I could imagine would be building NextcloudPi around AIO in the future, which would allow splitting up the maintenance work. You could simply run AIO in reverse proxy mode and then implement the further features that NextcloudPi currently offers but AIO can't. Going the same path as AIO and introducing a mastercontainer that manages other docker containers would make NextcloudPi easier to maintain, but it would introduce some limits regarding what NextcloudPi's mastercontainer can do, so it should be carefully discussed.

Also, as a side note: modularization using docker containers is a good approach for maintainability, but creating ~50 docker containers that need to be mounted and run in a specific way does not make things more maintainable.

@theCalcaholic
Collaborator

theCalcaholic commented Oct 2, 2022

@szaimen I did not have time yet to do a write-up on my ideas and criteria regarding this topic, but I agree - my vision encompasses just one container that runs ncp tasks and overwrites the entrypoint or command for the various scripts, plus one or two other containers for the web interface and maybe a cron-like service (we'll have to see what's a good approach here). And then of course we need the kind of services that AIO contains (or maybe just integrate with AIO). :)

Some thoughts I have about this (a complete writeup will follow):

Generally, I want to explore how well we can abstract script environments that way to generalize them for testing.
E.g. a script can always operate on fixed paths when running in docker, and selecting the correct part could be implemented by how the container is run - effectively making scripts more agnostic to the system environment, which could make automated tests a lot easier and more powerful.

A big challenge will be to provide our users with working upgrade paths (seamless upgrades to a multi-container setup from both the current ncp docker container and bare metal installations) - I'm actually not sure yet if it will be feasible at all, but I will at least explore that.

@szaimen
Contributor

szaimen commented Oct 2, 2022

A big challenge will be to provide our users with working upgrade paths (seamless upgrades to a multi-container setup from both the current ncp docker container and bare metal installations) - I'm actually not sure yet if it will be feasible at all, but I will at least explore that.

Just discussed this with @st3iny and came to the conclusion that this should indeed be carefully considered 👍

@szaimen
Contributor

szaimen commented Oct 2, 2022

And then of course we need the kind of services that AIO contains (or maybe just integrate with AIO). :)

Would be really cool if you would integrate AIO into NextcloudPi :)

@theCalcaholic
Collaborator

And then of course we need the kind of services that AIO contains (or maybe just integrate with AIO). :)

Would be really cool if you would integrate AIO into NextcloudPi :)

That would be my preferable option - I just don't know enough about AIO yet to understand all technical implications :)

@szaimen
Contributor

szaimen commented Oct 2, 2022

That would be my preferable option - I just don't know enough about AIO yet to understand all technical implications :)

Feel free to ask me anything about it :)

@ZendaiOwl
Collaborator Author

ZendaiOwl commented Oct 2, 2022

@theCalcaholic @szaimen

Thank you 🙏 I'm glad you think the idea is interesting and for expressing your interest 🥳

I had also come to similar thoughts/conclusions but haven't really written it down yet in the README ^^

To use one container, ncp-config is the entrypoint; possibly install it as a plugin, or only nc-encrypt, which needs admin permissions. In that way, combine the bigger ncp tools into one, or maybe just put them all into one directly, using a bash script with case checking for the different parts inside the container. Right now that seems to be the better option, but I don't know.

Then combine that with nextcloud-aio, Nextcloud, PHP, MariaDB (or another database), and Caddy as the front end or reverse proxy, which is how I've used Caddy the most (reverse proxy).

I'm at work right now, which makes it difficult to write a proper response on my phone 😅 but something like what's described above is about where my thoughts are at the moment.

@szaimen
Contributor

szaimen commented Oct 3, 2022

@theCalcaholic regarding version pinning, this would be easily possible if we add the build time to the mastercontainer and publish that to Docker Hub as a tag, like we already do with all the other containers. However, this will break mastercontainer updates, as watchtower is not able to update to a different docker tag, so you would need to take care of that yourself - e.g. by stopping the mastercontainer, removing it and recreating it with the new docker tag. Afterwards, you can use the AUTOMATIC_UPDATES flag of https://github.com/nextcloud/all-in-one#how-to-stopstartupdate-containers-or-trigger-the-daily-backup-from-a-script-externally to update all the other containers as well. Please tell me if this would be sufficient for you. I could then add this to the build script as well.


@ZendaiOwl AIO already includes the parts that are important for running a full-fledged Nextcloud instance, so NextcloudPi should probably rather add all the additional options that are not already included in AIO.
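
A hedged sketch of the manual mastercontainer update flow described above (NEW_TAG is a placeholder for a pinned tag; the run flags follow the AIO command quoted earlier in this issue):

NEW_TAG="<pinned-tag>"   # placeholder, e.g. a datetime-based tag published to Docker Hub
sudo docker stop nextcloud-aio-mastercontainer
sudo docker rm nextcloud-aio-mastercontainer
sudo docker run \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 80:80 \
  --publish 8080:8080 \
  --publish 8443:8443 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  "nextcloud/all-in-one:${NEW_TAG}"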

@theCalcaholic
Collaborator

@szaimen Sounds good to me. That's basically how I imagined that it could work. 👍

@ZendaiOwl
Collaborator Author

ZendaiOwl commented Oct 4, 2022

I had a little thought: while I've been writing Java apps for Android and reading about Docker, I also read your responses here, which got me thinking...

Since the focus would then mainly be to containerize the features of ncp, perhaps implementing the ncp features behind an API is the easiest way to get it working? I don't know; it seems that is the standard way for containers to communicate with each other within a docker network, and it's also possible to use an alias for that within the containers, which I've tested out.

Anyway... I wondered whether it would be possible to use Java (maybe with Spring) or Go (or something else) to build that API framework, and I tried it out now in a quick test, which was successful 🥳, so I wanted to share and see what you think.

I created a very simple Java program which uses Java's Runtime.getRuntime().exec() to run ping -c 2 google.com; it is similar to bash's exec and is a recommended way to execute a command inside a container.

If it's built using GraalVM, it can basically be made into a single file, which is perfect for a docker container; or Go could be used, which also produces a single binary. Python is also an option.

GraalVM Java "Binaries" - Article

┌─ 🖼🖌️ 🎨♬ 🌸#️⃣ 🔮~/AndroidStudioProjects/Test 
└࿓❯ javac ExecuteCommandTest.java 
┌─ 🖼🖌️ 🎨♬ 🌸#️⃣ 🔮~/AndroidStudioProjects/Test 
└࿓❯ java ExecuteCommandTest 
PING google.com(ams16s32-in-x0e.1e100.net (2a00:1450:400e:80c::200e)) 56 data bytes
64 bytes from ams16s32-in-x0e.1e100.net (2a00:1450:400e:80c::200e): icmp_seq=1 ttl=118 time=10.9 ms
64 bytes from ams16s32-in-x0e.1e100.net (2a00:1450:400e:80c::200e): icmp_seq=2 ttl=118 time=9.91 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 9.905/10.407/10.910/0.512 ms

(I'm working on my studies, so my current working directory was in the Android Studio projects folder; it was a quick test anyway, so I didn't change directory)

ExecuteCommandTest.java

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ExecuteCommandTest {
  
  public static void main (String[] args) {
    ExecuteCommandTest obj = new ExecuteCommandTest();
    String command = "ping -c 2 google.com";
    String output = obj.executeCommand(command);
    System.out.println(output);
  }

  private String executeCommand(String command) {
    StringBuffer output = new StringBuffer();
    Process p;
    try {
      p = Runtime.getRuntime().exec(command);
      p.waitFor();
      BufferedReader reader =  new BufferedReader(new InputStreamReader(p.getInputStream()));
      String line = "";     
      while ((line = reader.readLine())!= null) {
        output.append(line + "\n");
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
    return output.toString();
  }
  
}

This can be used to execute shell commands (sh, bash, ksh, etc.) on Windows, Mac & Linux platforms (Windows uses cmd; the example above is for Linux/Mac), passing the arg, or arg & env variables, along in the method parameters (I think that's what they're called, iirc 😅, my bad if it isn't). Then a named pipe, perhaps a socket, or maybe a shared volume could be used to make changes on the nextcloud container for those settings, and one for the host system, for enabling SSH and those things.
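
A hedged sketch of the named-pipe idea (everything here is hypothetical: the FIFO path, the tool names, and the script locations), where a controller process reads commands from a FIFO on a shared volume and dispatches only whitelisted NCP tools:

#!/usr/bin/env bash
# Hypothetical controller: listens on a FIFO in a shared volume and runs whitelisted tools.
set -euo pipefail

PIPE="/ncp-shared/ncp.fifo"          # assumed shared-volume path
[[ -p "$PIPE" ]] || mkfifo "$PIPE"

while true; do
  if read -r tool args < "$PIPE"; then
    case "$tool" in
      # args is intentionally unquoted so a single line can carry multiple arguments
      nc-backup|nc-info) bash "/usr/local/bin/ncp/${tool}.sh" $args ;;
      *) echo "refused: $tool" >&2 ;;
    esac
  fi
done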

Found some examples & a little conversation around something like this, on this Stackoverflow question

File permissions in Docker - article

Edit: Or perhaps it is easiest to use the volumes_from option in a Compose file?

Mounting a container with all the NCP scripts using this option would allow them to manipulate the files through the volumes_from volume (mount?) option.

Compose Specs - volumes_from

I don't know, what do you think? 🙏

@ZendaiOwl
Collaborator Author

ZendaiOwl commented Oct 5, 2022

Since the web interface already implements an API to execute the scripts, perhaps putting them all together with the web interface and mounting the volumes from MariaDB & Nextcloud will work


Edit

Something like this perhaps

The volumes_from mount option is deprecated in version 3 of the Compose file format (source). Use the new structure within the volumes section of the Compose file with named volumes instead: for the ncp web interface, a Nextcloud app, and the nextcloud container, using the relevant directories for a Nextcloud app. Then use bind mounts from the ncp container to the relevant files the scripts need to access and manipulate on the host.

@szaimen
Contributor

szaimen commented Oct 14, 2022

@szaimen Sounds good to me. That's basically how I imagined that it could work. 👍

Done. Should work now :)

E.g. nextcloud/all-in-one:20221014_110407-latest-arm64 for arm64 and nextcloud/all-in-one:20221014_110407-latest for amd64

@szaimen
Contributor

szaimen commented Oct 14, 2022

BTW, https://github.com/Nitrokey/nextbox does have an interesting architecture as well. Maybe it is worth having a look at that project as well since you are thinking about rewriting stuff :)

@sunjam

sunjam commented Oct 14, 2022 via email

@szaimen
Contributor

szaimen commented Oct 14, 2022

I looked at it in person. It uses a USB connected spinning disk. Same basic layout as ourselves, but already setup for the user.

Thanks for the insights! :)

However I meant the software-architecture ;)

@sunjam

sunjam commented Oct 14, 2022 via email

@theCalcaholic
Collaborator

Our main challenges are the ones where we differ from NitroKey, though (i. e. integrating the admin web UI and NCP apps into the docker architecture).

@theCalcaholic
Collaborator

However, I talked to the NitroKey guys and I'll certainly have a look at their DynDNS integration, which is very neat.

@szaimen
Contributor

szaimen commented Oct 14, 2022

Our main challenges are the ones where we differ from NitroKey, though (i. e. integrating the admin web UI and NCP apps into the docker architecture).

true

@daringer

daringer commented Oct 18, 2022

However I meant the software-architecture ;)

I'll shortly outline the main components and ideas:

  • Nextcloud (official docker image, Apache) is running from within a docker-compose set, which adds MariaDB and Redis
  • The Admin UI is built as a Nextcloud App, which has the neat advantage that it can rely on Nextcloud authentication and security. But this also means that if Nextcloud is not starting or is broken, one also cannot access the NextBox App.
  • This is why we came up with a hardware button, inspired by other "embedded" devices, to factory-reset the NextBox if things go massively wrong
  • Updates are introduced via unattended-updates, and a nextbox package brings in all NextBox components
  • The backend is a so-called nextbox-daemon, which is a systemd service
  • Communication between the NextBox App and the daemon is done via a forwarding mechanism through Nextcloud to the daemon, over a socket and a REST API

The dynamic DNS integration is based on deSEC. They provide an API which we utilize, and since they also provide a certbot hook, there is no need for port forwarding, as authentication is done through a DNS entry during the certbot run.

@szaimen
Contributor

szaimen commented Nov 10, 2022

@theCalcaholic Since we talked about this: starting with v3.0.0 of AIO, it is possible to disable imagick in AIO. See https://github.com/nextcloud/all-in-one#how-to-add-packets-permanently-to-the-nextcloud-container and https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container

Please note: v3.0.0 is currently in Beta and will likely get released to the latest channel in 7 days.

@theCalcaholic
Collaborator

@szaimen Yeah, I saw this and it's great news! I already tried (unsuccessfully) to set it up with my private instance :)

@szaimen
Contributor

szaimen commented Nov 10, 2022

@szaimen Yeah, I saw this and it's great news! I already tried (unsuccessfully) to set it up with my private instance :)

What exactly did you try unsuccessfully? Feel free to open a new thread here. I will help you there :)
https://github.com/nextcloud/all-in-one/discussions/new?category=questions
(I don't want to spam this thread with this)

@szaimen
Contributor

szaimen commented Nov 17, 2022

AIO v3.0.0 stable is released as of today :)

@szaimen
Contributor

szaimen commented Feb 9, 2023

Hi @theCalcaholic, there is now this guide: https://nextcloud.com/blog/how-to-install-the-nextcloud-all-in-one-on-linux/. Maybe it helps you to figure out what went wrong when you wanted to try it out :)

@theCalcaholic
Collaborator

@szaimen Thank you :)

My failed attempt was to integrate the imaginary container with a bare metal installation of nextcloud (so not AIO) though.

However we will soon start to migrate our services to docker one by one and will attempt to integrate AIO in NCP once we're done with that.

So we will probably need these resources within the next months :D

@szaimen
Contributor

szaimen commented Mar 7, 2023

@szaimen Sounds good to me. That's basically how I imagined that it could work. 👍

Done. Should work now :)

E.g. nextcloud/all-in-one:20221014_110407-latest-arm64 for arm64 and nextcloud/all-in-one:20221014_110407-latest for amd64

BTW, I just wanted to mention that the containers are now multiarch since v4.4.0 so e.g. nextcloud/all-in-one:20230302_085724-latest will work on both x64 and arm64 :)

@szaimen
Contributor

szaimen commented Apr 17, 2023

Hey, I just thought I inform you about an idea that I had a while ago as you may be interested in this: nextcloud/all-in-one#1581

@ZendaiOwl
Collaborator Author

ZendaiOwl commented Apr 17, 2023

Thank you for sharing!

It looks really interesting, allowing containers to be added & configured through json will be incredibly useful

I don't know, it's not quite the same but it reminds me of something I've been slowly working on, a Docker dashboard to manage containers and images.

The backend is somewhat there, excluding swarm functionality at the moment link

@theCalcaholic
Collaborator

@szaimen Looks very interesting indeed!

@szaimen
Contributor

szaimen commented Apr 19, 2023

Thank you for sharing!

No problem! :)

It looks really interesting, allowing containers to be added & configured through json will be incredibly useful

Yes, I thought so as well. That was why I came up with this idea in the first place :D

I don't know, it's not quite the same but it reminds me of something I've been slowly working on, a Docker dashboard to manage containers and images.

The backend is somewhat there, excluding swarm functionality at the moment link

Indeed sounds comparable. Actually we are already using such a json for all our containers since the beginning of AIO: https://github.com/nextcloud/all-in-one/blob/main/php/containers.json. (You could even consider it to be the heart of AIO 😉) The idea would simply be exposing this functionality for further containers that could be made and maintained by the community :)

@ZendaiOwl
Collaborator Author

ZendaiOwl commented Apr 7, 2024

@theCalcaholic Since you mentioned you are doing technical exploration and will share the details when you're ready, I'll close this issue :)
