
Allow to specify docker network options, such as mtu #775

Closed
sharonovd opened this issue Oct 29, 2020 · 9 comments
Labels
enhancement New feature or request

Comments

@sharonovd

Describe the enhancement
It would be great to be able to set custom options on the networks created for container jobs.
As of now, it is only possible to pass options to the docker create command via jobs.<job_id>.container.options.

Code Snippet
It could look like this:

container:
  image: centos:7
  network:
    options:
      mtu: 1400

Additional information
According to moby/moby#34981, Docker daemon options such as mtu are not applied to newly created bridge networks. So my CI jobs fail when trying to, e.g., run yum -y update.
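A quick way to confirm the mismatch is to compare the host interface MTU against Docker's default of 1500. A minimal sketch (the eth0 interface name and the sample ip link output are illustrative assumptions, not taken from this issue):

```shell
# extract_mtu pulls the "mtu N" field out of one line of `ip link` output.
extract_mtu() {
  sed -n 's/.* mtu \([0-9][0-9]*\).*/\1/p' <<<"$1"
}

# On a real host you would feed it `ip link show eth0`; the line below is a
# sample of what that prints on a GCP VM (MTU 1460).
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP mode DEFAULT group default qlen 1000'
host_mtu=$(extract_mtu "$sample")
echo "$host_mtu"   # 1460

if [ "$host_mtu" -lt 1500 ]; then
  echo "Docker's default MTU of 1500 exceeds the host MTU; expect fragmentation or drops"
fi
```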

NOTE: if the feature request has been agreed upon then the assignee will create an ADR. See docs/adrs/README.md

@sharonovd sharonovd added the enhancement New feature or request label Oct 29, 2020
@kim0

kim0 commented Sep 23, 2021

Hello @sharonovd .. The inability to set the MTU has bitten me hard over the past few days. Is there any progress on this front? Are there any workarounds available today to pass custom options at the docker network create step? Thanks!

@lasse-aagren

lasse-aagren commented Sep 27, 2021

Hackish workaround: if you are creating your own runners, as we are, you can wrap every call to docker network create ... like this:

#!/usr/bin/env bash

# Move the real docker binary aside:
#   mv /path/to/bin/docker /path/to/bin/docker.bin
# then place this script at /path/to/bin/docker.

MTU=1460

if [[ $1 == "network" ]] && [[ $2 == "create" ]]
then
    shift 2  # pop the first two parameters
    /path/to/bin/docker.bin network create --opt "com.docker.network.driver.mtu=$MTU" "$@"
else
    # Just call docker as normal if this is not "network create".
    /path/to/bin/docker.bin "$@"
fi

Not a pretty hack - but it works :)
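(The wrapper's dispatch logic can be sanity-checked without a real Docker install by pointing it at a stub binary that just echoes its arguments. A sketch, with the temp-dir paths and the 1460 value purely illustrative:)

```shell
# Stub standing in for the renamed real binary (docker.bin): echoes its args.
tmp=$(mktemp -d)
printf '%s\n' '#!/usr/bin/env bash' 'echo "$@"' > "$tmp/docker.bin"
chmod +x "$tmp/docker.bin"

MTU=1460

# Same dispatch as the wrapper above, factored into a function for testing.
docker_wrapped() {
  if [[ $1 == "network" ]] && [[ $2 == "create" ]]; then
    shift 2
    "$tmp/docker.bin" network create --opt "com.docker.network.driver.mtu=$MTU" "$@"
  else
    "$tmp/docker.bin" "$@"
  fi
}

docker_wrapped network create mynet
# network create --opt com.docker.network.driver.mtu=1460 mynet
docker_wrapped ps -a
# ps -a
```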

@kim0

kim0 commented Sep 27, 2021

Thanks for the hack! This was breaking pretty consistently for me, so I ended up raising the VPC MTU to 1500. Glad GCP allows that now. Things are OK for me now.

@NicklasWallgren

Thanks for the workaround! We encountered the same issue using k3s and the default Flannel configuration.

@alexdepalex

alexdepalex commented Feb 18, 2022

Instead of applying all these hacks and using a non-official container image (although thanks for making that available, @tiagoblackcode), can we get this fixed in the runner itself? It seems to come down to reading the MTU from the network bridge and using that when creating the network.

actions/actions-runner-controller#1046 (comment)

@michaelfindlater

michaelfindlater commented Mar 8, 2022

Hi, we've encountered the same issue and are unfortunately having to use the workaround.

It would be excellent to see some more flexibility in the form of allowing options to be configured/passed (or just an MTU, as in #1650) when creating Docker networks for workflow jobs.

@mzwennes

The README describes an option to pass an optional MTU to the RunnerDeployment:

# Optional Docker containers network MTU
# If your network card MTU is smaller than Docker's default 1500, you might encounter Docker networking issues.
# To fix these issues, you should setup Docker MTU smaller than or equal to that on the outgoing network card.
# More information:
# - https://mlohr.com/docker-mtu/
dockerMTU: 1500

Is this not what you need? I ended up here because I am noticing network performance issues when running GitHub Actions in these self-hosted pods (on GKE). Sometimes a job takes 5 minutes, and other times it takes 15 minutes. I am currently verifying whether MTU is indeed the problem (since the VPC has an MTU of 1460 and Docker's default is 1500).
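For completeness, in ARC that field sits on the runner template spec. A minimal sketch (resource and repository names are illustrative; field placement follows the ARC README quoted above):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  template:
    spec:
      repository: example-org/example-repo
      dockerMTU: 1460
```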

@michaelfindlater

michaelfindlater commented Apr 20, 2022

dockerMTU:
Is this not what you need?

@mzwennes dockerMTU for ARC propagates MTU as an environment variable to the runner pod and also adds your setting into the Dockerd config file. While it is an attempt, unfortunately it doesn't solve the issue 😢 It's difficult to solve unless addressed by adding an argument to actions/runner's Docker call (read on to see why).

Problem

The problem is that the Runner creates custom networks. When doing so, it does not honor the MTU settings in the Docker daemon config file. Thus, all networks it creates are at Docker's default MTU setting (1500) regardless of anything else.

This is particularly an issue for those on GKE who use actions/runner, as GCP's MTU setting for VPCs is 1460. Containers and actions using the Docker default of 1500 result in packet fragmentation.

Workarounds (in projects using actions/runner, e.g. ARC)

Possible Solution (to support non 1500 bytes MTU environments)

If actions/runner could provide some option to allow us to specify an MTU on the networks it creates, it would avoid other projects having to implement hacky workarounds. It could be added here:

#if OS_WINDOWS
return await ExecuteDockerCommandAsync(context, "network", $"create --label {DockerInstanceLabel} {network} --driver nat", context.CancellationToken);
#else
return await ExecuteDockerCommandAsync(context, "network", $"create --label {DockerInstanceLabel} {network}", context.CancellationToken);
#endif

It would need to pass this option when creating Docker networks: --opt com.docker.network.driver.mtu=.

Or/also, a default could be derived, as noted above, by:

checking the MTU from the network bridge and using that when creating the network.

There's a PR here to do this! 👉 #1650 ❤️
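To make the shape of the change concrete, here is a sketch of how the runner's network-create arguments (quoted above) could take an optional MTU, written as a small shell function; the label and network names are placeholders, and 1460 is only an example value:

```shell
# Builds the argument string for `docker network create`, appending the MTU
# option only when an MTU is given (mirroring the proposal above).
build_network_create_args() {
  local label=$1 network=$2 mtu=$3
  local args="create --label $label"
  if [ -n "$mtu" ]; then
    args="$args --opt com.docker.network.driver.mtu=$mtu"
  fi
  echo "$args $network"
}

build_network_create_args my-label github_network_example 1460
# create --label my-label --opt com.docker.network.driver.mtu=1460 github_network_example
build_network_create_args my-label github_network_example ""
# create --label my-label github_network_example
```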

@thboop
Collaborator

thboop commented Sep 8, 2022

We really don't want the runner to be a pass-through for default configuration settings for Docker. There are other network options that could be helpful, but we don't want to expose them as runner features in this way.

Customizable default settings for the docker instance would be better set as a feature request to docker daemon itself. In the meantime, you have a few options:

@thboop thboop closed this as not planned Sep 8, 2022