
Deploy to DockerHub #11

Open
VikashKothary opened this issue Oct 10, 2020 · 26 comments

Comments

@VikashKothary
Member

To the AnkiCommunity Org.

@kuklinistvan
Collaborator

I think the main README needs a clean overview of the (new) links somewhere near the top - let me deploy the image after the README gets cleaned up. I'm sorry, I'm a little busy with another project right now, but I'll check back from time to time.

@kuklinistvan
Collaborator

I think I'll continue with this next time

@kuklinistvan
Collaborator

I did some quick inventory. Right now on DockerHub, under the organization we have:

  • djankiserv
  • djankiserv-static

Neither of them seems to have any publicity (i.e. neither is mentioned on the wiki or elsewhere).
Under my account I have docker-anki-sync-server, which supports Anki Desktop version 2.1.19 only.

Before I move the images to the organization, I think it would be a good idea to clarify some details:

I think the cleanest solution would be to move the djankiserv images into https://github.com/ankicommunity/docker-anki-sync-server/ and simply have different tags for them.

What do you think?

@VikashKothary
Member Author

VikashKothary commented Oct 25, 2020

@kuklinistvan I totally agree with all your points and your solution. My current position is that we support both images, but they will later be combined as the different projects are combined.
@AntonOfTheWoods Adding you to the thread.

The actions, as I see them, are:

  • Move the addon to our new addons repo: This is more of a separate initiative, but we've got a lot of addons flying around, so we're moving them all to one place.
  • Rename the current Docker folder to the Docker image name: I think anki-sync-server works best, but it's completely up to you.
  • Move the djankiserv images to this repo: I'll create an issue on that repo that links to this one.
  • Decide on the best place for djankiserv's Kubernetes manifests: My heart says here, to keep things simple. But there is the option of creating a separate repo just for developing Helm charts, if there is demand for it and someone is willing to support it.
  • Rename repo: This is to reflect the fact that this is for the ankicommunity as a whole rather than just for anki-sync-server, which is more of a "brand" name for the original project.
  • Set up a CI/CD pipeline to deploy images: Optional, but nice.
  • Update the README with a list of images: Just something simple to help users pick the right one for them.
  • Update the wiki: Everything else related to the images. But that's completely up to you.

Anything I missed? Once agreed, we can create separate issues for each one and make them dependent on this one as applicable.

@kuklinistvan
Collaborator

Rename repo: This is to reflect the fact that this is for the ankicommunity as a whole rather than just for anki-sync-server, which is more of a "brand" name for the original project.

Rename to what? :)

Got it otherwise - I'll continue with this next time.

@kuklinistvan
Collaborator

Hello,

Yesterday I wrote a longer reply - but it seems that I closed the browser window before hitting send.

Long story short: I started moving things around, but I had trouble launching the server and building the Docker image. With time I will probably figure out how to do it on my own, but if you could write a quick draft here of how you usually build djankiserv and the images, that would help me a lot. Sorry if there is a README that I did not find.

@kovacs-andras

Have you set up automated builds on Docker Hub?

@AntonOfTheWoods

AntonOfTheWoods commented Nov 16, 2020

Wow, I keep missing these messages... sorry @kuklinistvan :-(. https://github.com/ankicommunity/djankiserv/blob/master/scripts/buildimages.sh builds and uploads the images to the repo given by the envvar DJANKISERV_DOCKER_REPO. If the commit is on a git tag, the image gets tagged and pushed with that. If not, you get the last tag + nb_commits_since_tag + short_commit_sha (via git describe --tags).
As long as docker is logged in, it should work fine to push to ankicommunity.
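For anyone reproducing this by hand, the flow described above might be sketched roughly like this. Only the DJANKISERV_DOCKER_REPO envvar and the `git describe --tags` tagging scheme come from the comment; the function names, image name, and Dockerfile location are my assumptions, not the actual contents of buildimages.sh:

```shell
# On a tagged commit, `git describe --tags` prints just the tag (e.g. 1.2.0);
# otherwise it prints <last-tag>-<commits-since-tag>-g<short-sha>.
image_tag() {
    git describe --tags
}

# Build and push one image to the configured repo (sketch; the real script
# handles multiple images and more edge cases).
build_and_push() {
    repo="${DJANKISERV_DOCKER_REPO:?DJANKISERV_DOCKER_REPO must be set}"
    tag="$(image_tag)"
    docker build -t "$repo/djankiserv:$tag" .
    docker push "$repo/djankiserv:$tag"
}
```

With docker already logged in, running `DJANKISERV_DOCKER_REPO=ankicommunity build_and_push` from a checkout would then push to the organization.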

@AntonOfTheWoods

BTW, the reason I haven't done any publicity or intend to do any is that I am really prejudiced against using straight docker. I see straight docker (with or without compose) as something you use if you have to use Windows and don't have the memory to run a proper container orchestrator on your laptop. That's it. If you ever want to put something on a server you use an orchestrator. I provide a helm chart for kubernetes, which really is the only container orchestrator anyone who doesn't work at FB or Google should ever consider using, unless they have the skills and knowledge to not need any sort of guidance around these things. With a helm chart you just do helm install djankiserv ./charts/djankiserv (or similar), so you don't really care where the image is (or even much how it's created or configured).

That said, at some point it might be nice to spin out the helm chart into a separate repo and have it auto-build and deploy to a github pages chart repo (actually it wouldn't strictly speaking need to be spun out, but it would certainly be a little cleaner).

@kuklinistvan
Collaborator

I see straight docker (with or without compose) as something you use if you have to use windows

I currently have Void Linux installed on my laptop, use Docker, and actually prefer it that way. I do not use Windows with this project.

and don't have the memory to run a proper container orchestrator on your laptop

Some people, including me, just want to get going with synchronization without diving deep into the technical details, and they know how to run docker-compose up. I want to target that audience as well.

Also, I'm not against Kubernetes (yet :) - just kidding, hopefully I will discover in the near future what a fantastic solution it is). I simply don't have experience with Kubernetes and Helm yet. As far as I know, Kubernetes can use Docker container images, but I've only heard that somewhere. I'm completely okay with shipping to that platform as well - as long as we have the resources for it.

However, in order to prioritize, I would be happy to have some data on how much of our audience is interested in a Docker-based and in a Kubernetes-based solution.

I provide a helm chart for kubernetes, which really is the only container orchestrator anyone who doesn't work at FB or Google should ever consider using

Yes, why? :) Again, I haven't had time yet to learn about Kubernetes, but that statement sounded a bit extreme to me and could use some justification.

@kuklinistvan
Collaborator

kuklinistvan commented Dec 3, 2020

With a helm chart you just do helm install djankiserv ./charts/djankiserv

And the same is true for docker-compose :) You only need these manual steps if you wish to get involved in the development process.

I hardly think that you can skip these instructions just by using plain Helm:
https://github.com/ankicommunity/docker-anki-sync-server/tree/djankiserv-incorp/Docker%20Compose/djankiserv-postgres-nginx

@AntonOfTheWoods

With the helm solution provided, you should just need to provide a domain in the publicly accessible DNS that points to your machine, and it will not only configure everything, it will also get you a valid SSL cert. There are a few insecure defaults (like hard-coded passwords) that you could/should override, but if someone is able to exploit those, they are already inside.
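For concreteness, a hypothetical install might look like the following. The `--set` keys here (`ingress.host`, `postgresql.password`) are assumptions, not the chart's actual API - the real keys would be in the chart's values.yaml:

```shell
# Hypothetical invocation - check charts/djankiserv/values.yaml for real keys.
helm install djankiserv ./charts/djankiserv \
  --set ingress.host=anki.example.com \
  --set postgresql.password="$(openssl rand -hex 16)"  # override the insecure default
```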

docker-compose is basically helm for bare metal from what I understand. I have no idea but I would guess you can use it with Docker Swarm, which is (was?) Docker's container orchestration product. Why do I dislike docker? It is a desktop hack, not a proper server technology. The company Docker is part of some cool projects, mainly spinoffs from its original tech like containerd, that are used as components in proper platforms. Docker is hurtling towards irrelevance though, as can be seen by the fact that they themselves (like Mesosphere with DC/OS, VMWare, etc.) are providing Kubernetes services and integrations. Everyone who wants to run anything serious is only interested in Kubernetes (or they are big enough not to care, like Borg at Google or Twine at FB), so competitors are helping their customers convert, and providing services to do that so they can keep some revenue.

Think of it like this. Right now you just need a gas-powered scooter to get down to the shops every couple of days. Sometimes it is a bit of a hassle but you can get by. I am used to using a Tesla - sure it is a bigger hassle to get started and purchase (learn, it's all free of course :-)) - but once you know how to drive it, well, it pretty much drives itself. And then when you need to drive to the capital for the day you just get in the car and drive - you don't need to think about getting the train or a long-distance bus.

Most importantly, I don't need docker-compose and don't have the time to learn about something I will never use - I have huge amounts of research to do and need to work on that. I literally don't have the time to test whether my changes will work on pure docker, or whether others' changes will mess with running outside it, so if that starts to get in the way that will be a problem. If it is never a problem and others take care of everything to do with pure docker, that is great! Power to the desktop!

@VikashKothary
Member Author

TL;DR: You're both right - it just depends on what you want to do. I would suggest we create an action plan so we can agree on next steps, plus a list of blockers so we can get them resolved. @kuklinistvan I know you're doing a ton of tasks between this and the wiki, so let us know how we can help.

Adding my 2 cents to the conversation: I think they all have their own use cases. Personally I use all of the above depending on the project, so here's my workflow.

  1. Local: Personally I like the simplicity of running things locally. I find it minimises the issues to those coming from your own code. It also minimises the barrier to entry. This is mainly used for development.

  2. Docker: Once your code is running locally, you should also try to run it in Docker. I do this using a CI system. I also use the same Docker image to run things locally when debugging issues found in the CI. I like creating Docker images for all my projects, both for development and deployment.

  3. Docker Compose: When running Docker images locally, I always run them using Docker Compose. This comes with the simplicity of docker-compose up and docker-compose down. This is related to the above point about debugging issues in Docker. You would only run one instance of everything using Docker Compose.

  3.5) Docker Swarm: I've never used it, but if that's your preferred method of orchestration, then you can deploy directly using your docker-compose file. I personally don't use it, but the way I develop, I'm perfectly happy for users to do that.

  4. Kubernetes Manifests: I like to take smaller steps between Docker Compose and Helm charts, so what I do is use Kompose to generate Kubernetes manifests from the docker-compose file. The benefit of this is that it's a one-line command and I can then deploy to a Kubernetes cluster. You would do this for deployment, when you want the application to run continuously. You also want to deal with all the additional Kubernetes aspects of the deployment which Docker doesn't/shouldn't handle, i.e. secrets, ingress/proxying, blue/green deployments, multiple instances, etc.

  5. Helm Charts: Finally I use Helm charts, when I'm happy with all the previous steps and I'm happy for others to use it on their Kubernetes clusters. You take the manifests from the last step, clean them up, add properties to allow customisation, then publish the chart for others to use.

Now this approach has downsides - it's considerably more overhead. But I find it minimises complexity so you can catch bugs earlier, and it also scales a lot better because each step is properly linted, tested, and versioned.
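The later steps of the workflow above could be sketched as a command-line sequence. The file names and chart path are placeholders, though kompose, kubectl, and helm are the real tools involved:

```shell
docker-compose up -d                            # step 3: run the stack locally via Compose
kompose convert -f docker-compose.yml -o k8s/   # step 4: generate Kubernetes manifests
kubectl apply -f k8s/                           # step 4: deploy them to a cluster
helm install djankiserv ./charts/djankiserv     # step 5: once manifests are cleaned into a chart
```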

@VikashKothary
Member Author

Also, for a name, tbh I don't really know. I've been thinking of a nice naming convention for all the repos, but I wouldn't say it's the highest priority, so let's just drop the "sync" for now. But if you have any better ideas, go for it. Maybe you'll inspire me for the rest.

@VikashKothary
Member Author

Here's what I'm thinking for the Docker CI/CD pipeline:

  1. If master
  • Deploy to DockerHub and GitHub Packages
  • Use CalVer version: YYYYMMDD
  • The same image should also be tagged: latest
  2. If develop
  • Deploy to GitHub Packages
  • Use CalVer version with short commit hash: YYYYMMDD-abcdef
  • (optional) The same image should also be tagged: latest-develop
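The branch-to-tag rule above can be sketched as a small shell helper. The function and argument names are mine; only the YYYYMMDD and YYYYMMDD-<short-hash> conventions come from the plan itself:

```shell
# Map a branch plus build metadata to a CalVer image tag (sketch).
calver_tag() {
    branch="$1" date="$2" sha="$3"
    if [ "$branch" = "master" ]; then
        # master releases: plain CalVer (also tagged `latest` separately)
        printf '%s\n' "$date"
    else
        # develop builds: CalVer plus short commit hash
        printf '%s-%s\n' "$date" "$sha"
    fi
}
```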

@VikashKothary
Member Author

Thanks @automatedempire for your PR for GitHub Actions.
I'll make a release this weekend and hopefully we'll be able to automatically push an image from master to DockerHub.

The next step is to add the djankiserv Docker image to the CI system. I'm not sure if the image works, so that'll need to be checked as well.
@automatedempire Will you be able to work on that next? (No pressure, just want to know so we're aligned on who does what)

Here are some developments around this topic, which will eventually need to become their own tickets.

  1. I think a good release cycle would be once a month. We can cut a release from develop to master. Although it might be too

  2. We should also have a continuous delivery model between develop and GitHub Packages. To follow the conversation here: Agree on git workflow for djankiserv ankicommunity-api-server#12. This means users don't have to wait for a monthly release if they're willing to handle the occasional bug here and there.

  3. Changes in the software repos should trigger builds in this repo. This can be done using the GitHub Actions API. This is needed to resolve the following issue: Suggestion: Docker images that track this repo ankicommunity-sync-server#69.

@automatedempire
Contributor

automatedempire commented Mar 3, 2021

To add a data point to @kuklinistvan's question:

However, in order to prioritize, I would be happy to have some data on how much of our audience is interested in a Docker-based and in a Kubernetes-based solution.

I had used docker-compose to build and run the anki-sync-server image on a Raspberry Pi, which worked for me for a nice long time. I'm currently working on building a K8s Pi cluster home lab (not done yet), and my eventual goal is to have a locally-hosted Anki sync server running as part of that.

  2. We should also have a continuous delivery model between develop and Github Packages. To follow the conversation here: Agree on git workflow for djankiserv ankicommunity-api-server#12. This means users don't have to wait for a monthly release if they're willing to handle the occasional bug here and there.

I am currently working on updating the anki-sync-server workflow to match @VikashKothary's suggestions above for deploying to Docker Hub and GitHub Container Registry based on branch. I almost have it (it only took a small amount of refactoring of my original idea), but I'm having some issues with testing the pushes to GHCR. Once I work through those, I'd be happy to move over to the djankiserv workflow.

  3. Changes in the software repos should trigger builds in this repo. This can be done using the Github Actions API. This is needed to resolve the following issue: ankicommunity/anki-sync-server#69.

I apologize, but I think I misunderstood something while reading across the different issues, which informed how I originally built the triggers for the first workflow. Are the ankicommunity/anki-sync-server and ankicommunity/djankiserv repos staying separate from this one? I thought they were being migrated under this repo in the services directory, making this the one repo to rule them all. If they are remaining separate, why are we building the devops services in this repo rather than beside the respective code? I want to make sure I understand the direction we're heading so I can build the workflow accordingly.

@VikashKothary
Member Author

Amazing! Thanks for taking the initiative on this. Much appreciated!

So here's my vision, feel free to share your opinion.

  • This repo will handle all the docker images/helm charts from both anki-sync-server and ankisyncd.
  • The development of their software would be independent in their respective repos.
  • When a change is made to this repo, a new image is released.
  • When a change is made in the respective software repos, a new image is released.
  • This repo will release Docker Images and Helm charts. The software repos will release applications.

The value here is a clear separation of concerns between development and operation, while allowing them to work together. There was a brief debate about whether this is the best process here: ankicommunity/ankicommunity-sync-server#69, but it was tabled for the sake of this issue.

Does this make sense? What are your thoughts?

@VikashKothary VikashKothary mentioned this issue Mar 8, 2021
@VikashKothary
Member Author

VikashKothary commented Mar 8, 2021

@automatedempire So I made a release, and it pushed to my personal dockerhub account. 😅

But other than that it worked great! Thanks so much for your hard work.

So I think this is because you're using DOCKER_USERNAME both for the login and in the Docker image name.
My first question is: can I create an organisation access token? My understanding is that I cannot.

My understanding is that I have to use my personal access token and personal username for the login. And there should be a separate variable for the docker image (or it can be hardcoded).

How does that sound?

@automatedempire
Contributor

My understanding is that I have to use my personal access token and personal username for the login. And there should be a separate variable for the docker image (or it can be hardcoded).

How does that sound?

Yep, that was a mistake on my part. I thought that variable would be the name of the org, but looking now, I can't find a way to generate an org access token. I tested this against a regular Docker Hub account (didn't have an org to test against, but I'll correct that), so I clearly missed it. I prefer it to be an environment variable to make development/testing easier, so I'll make a patch for this and submit another PR.

The value here is a clear separation of concerns between development and operation, while allowing them to work together. There was a brief debate about whether this is the best process here: ankicommunity/ankicommunity-sync-server#69 but it was tabled for the sake of this issue.

That direction makes sense to me, and I think that issue is where I got confused. I haven't done cross-repo actions before, but I'll look into it and build out that workflow as well under the other issue.

@samyak-jain

Hey everyone! Just a friendly ping regarding ankicommunity/ankicommunity-sync-server#69. Is the cross-repo github action thing being considered/worked on? While doing it this way probably isn't ideal IMO (at least, I have never seen anyone do it this way), it should be possible. From some quick reading on this, it looks like we will need to do a repository_dispatch (https://docs.github.com/en/actions/learn-github-actions/events-that-trigger-workflows#repository_dispatch). Let me know in case nobody is tackling this. I'll be happy to contribute.
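For reference, firing a repository_dispatch at another repo is a single REST API call from the sending workflow. The endpoint and payload shape are from the GitHub REST API, but OWNER/REPO and the event name below are placeholders, and the token needs write access to the target repo:

```shell
# Hypothetical trigger - replace OWNER/REPO and the event type with real values.
curl -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  "https://api.github.com/repos/OWNER/REPO/dispatches" \
  -d '{"event_type":"upstream-release","client_payload":{"ref":"master"}}'
# The receiving workflow listens with:
#   on:
#     repository_dispatch:
#       types: [upstream-release]
```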

@VikashKothary
Member Author

Hi @samyak-jain, as far as I am aware, the last update on this topic was in March as mentioned above.

If you're happy to work on it, then I'm sure it'll be appreciated. I think we've still got the issue above where the GitHub Actions need to be modified to deploy to the AnkiCommunity DockerHub account and not my personal Dockerhub.

Once that is set up, we can work on triggering a new build on a software release.

Note: one new approach I've come up with recently is to create an image in this repository that uses ARGs and ONBUILD to configure and copy the code, and then use it in the software repository's CI build. I feel like this would be closer to your original vision @samyak-jain; however, it should share (almost) all the benefits of a cross-repo approach, since it's developed separately. I just thought I'd mention it in case that's the path you want to take.
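A minimal sketch of the ARG/ONBUILD idea, under heavy assumptions - the base image, paths, and the pip install are all placeholders, not djankiserv's actual layout:

```dockerfile
# Hypothetical base image built and pushed from this (devops) repo.
FROM python:3.9-slim
WORKDIR /app
# ONBUILD instructions are recorded here but executed later, when a
# downstream Dockerfile uses this image as its FROM - at that point the
# software repo's own code gets copied in and installed.
ONBUILD COPY . /app
ONBUILD RUN pip install --no-cache-dir /app
```

A software repo's CI would then only need a near-trivial Dockerfile: `FROM` this base image plus whatever `CMD` it wants, while the production-ready build logic stays maintained here.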

@samyak-jain

Hey @VikashKothary. I think changing to the AnkiCommunity DockerHub account should be fairly straightforward. Are the secrets already configured? Do you know the names of the secrets I need to use?

I am not sure I understand your proposal using ARGS and ONBUILD. Can you elaborate? If I'm understanding correctly, you want to build an initial image in this repo which has ONBUILD instructions, which will then be used in another Dockerfile inside the anki-sync-server repo? Did I understand that correctly? I'm a bit confused about why this helps.

@VikashKothary
Member Author

DOCKERHUB_USERNAME and DOCKERHUB_TOKEN have been set. I can change/update them as you need so just tell me what you need me to do :)
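A workflow fragment using those secrets might look like this (an assumption: it uses the published docker/login-action and docker/build-push-action actions, with the image name hardcoded separately from the login credentials, which is the fix discussed above):

```yaml
- name: Log in to Docker Hub
  uses: docker/login-action@v2
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
  uses: docker/build-push-action@v3
  with:
    push: true
    tags: ankicommunity/anki-sync-server:latest  # org name, not the login username
```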

@VikashKothary
Member Author

And for the docker image, that's exactly right.

It's just another approach that'll allow us to develop production-ready Dockerfiles in this repo while allowing for automatic deployments when a new release is created in the software repo.

Both approaches allow us to cover all the complexities when developing Dockerfiles (see: ankicommunity/ankicommunity-sync-server#69 (comment)) as well as other issues that will appear in the future like supporting multiple architectures (#9) and supporting multiple cloud environments (#11 (comment)).
One thing that's very important to me is that, as part of the devops repo, we've got good documentation about how to maintain and deploy these images (manifests/charts/etc). Hence I'm keen on keeping all the base Dockerfiles here.

That being said, we've been talking about this for a while, and it's clear you know your stuff, so frankly I'm happy for you to complete this ticket however you see fit. Once we've got a working automatic deployment, we can obviously improve the process over time.

Does that sound good to you?

@VikashKothary
Member Author

VikashKothary commented Apr 20, 2022

I've made the change, so the anki-sync-server image should now be available on DockerHub. Thank you to everyone who helped contribute to this feature.

I'm going to keep this issue open until I see a few pulls from Docker Hub to confirm it's working correctly.
