From ceeb6c160c3ea72cd2f3758360a2468fcefe1cce Mon Sep 17 00:00:00 2001
From: Sam Foo
Date: Thu, 19 Oct 2023 02:24:52 -0700
Subject: [PATCH] Add support for service containers (#1949)

* Support services (#42)

Removed createSimpleContainerName and the AutoRemove flag

Co-authored-by: Lunny Xiao
Co-authored-by: Jason Song
Reviewed-on: https://gitea.com/gitea/act/pulls/42
Reviewed-by: Jason Song
Co-authored-by: Zettat123
Co-committed-by: Zettat123

* Support services options (#45)

Reviewed-on: https://gitea.com/gitea/act/pulls/45
Reviewed-by: Lunny Xiao
Co-authored-by: Zettat123
Co-committed-by: Zettat123

* Support interpolation for `env` of `services` (#47)

Reviewed-on: https://gitea.com/gitea/act/pulls/47
Reviewed-by: Lunny Xiao
Co-authored-by: Zettat123
Co-committed-by: Zettat123

* Support services `credentials` (#51)

If a service's image comes from a container registry that requires authentication, `act_runner` needs `credentials` to pull the image; see the [documentation](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idcredentials). Currently, `act_runner` incorrectly uses the `credentials` of `containers` to pull service images, and the `credentials` of the services themselves are never used; see the related code: https://gitea.com/gitea/act/src/commit/0c1f2edb996a87ee17dcf3cfa7259c04be02abd7/pkg/runner/run_context.go#L228-L269

Co-authored-by: Jason Song
Reviewed-on: https://gitea.com/gitea/act/pulls/51
Reviewed-by: Jason Song
Reviewed-by: Lunny Xiao
Co-authored-by: Zettat123
Co-committed-by: Zettat123

* Add ContainerMaxLifetime and ContainerNetworkMode options

from: https://gitea.com/gitea/act/commit/b9c20dcaa43899cb3bb327619d447248303170e0

* Fix container network issue (#56)

Follow: https://gitea.com/gitea/act_runner/pulls/184
Close https://gitea.com/gitea/act_runner/issues/177

- `act` creates new networks only if the value of `NeedCreateNetwork` is true, and removes these networks at the end.
  `NeedCreateNetwork` is passed by `act_runner`. `NeedCreateNetwork` is true only if `container.network` in the configuration file of the `act_runner` is empty.
- In the `docker create` phase, specify the network to which containers will connect, because if no network is specified, the container connects to the `bridge` network that Docker creates automatically.
- If the network is a user-defined network (the value of `container.network` is empty or ``; the network created by `act` is also a user-defined network), also specify an alias via `--network-alias`. The alias of a service is ``, so the service container can be accessed via `:` in the steps of the job.
- No longer try to `docker network connect ` after `docker start`:
  - On the one hand, `docker network connect` applies only to user-defined networks; trying to `docker network connect host ` returns an error.
  - On the other hand, specifying the network in the `docker create` stage achieves the same effect.
- No longer try to remove containers and networks before the `docker start` stage, because the names of these containers and networks will not be repeated.

Co-authored-by: Jason Song
Reviewed-on: https://gitea.com/gitea/act/pulls/56
Reviewed-by: Jason Song
Co-authored-by: sillyguodong
Co-committed-by: sillyguodong

* Check volumes (#60)

This PR adds a `ValidVolumes` config. With it, users can specify the volumes (including bind mounts) that may be mounted into containers.

Options related to volumes:

- [jobs..container.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontainervolumes)
- [jobs..services..volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idvolumes)

In addition, volumes specified by `options` will also be checked.
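The allowlist idea above can be sketched in a few lines of Go. This is not part of the patch: `validateVolumes` and its handling of `src:dst` bind specs are illustrative assumptions, not act's actual implementation (which also inspects volumes passed via `options`).

```go
package main

import (
	"fmt"
	"strings"
)

// validateVolumes keeps only those requested volume specs whose source
// (the part before the first ":", or the whole spec for anonymous
// volumes) appears in the allowlist. A hypothetical stand-in for the
// ValidVolumes check described above.
func validateVolumes(validVolumes, requested []string) []string {
	allowed := make(map[string]bool, len(validVolumes))
	for _, v := range validVolumes {
		allowed[v] = true
	}
	kept := []string{}
	for _, spec := range requested {
		src := spec
		if i := strings.Index(spec, ":"); i != -1 {
			src = spec[:i]
		}
		if allowed[src] {
			kept = append(kept, spec)
		}
	}
	return kept
}

func main() {
	valid := []string{"act-toolcache", "/var/run/docker.sock"}
	requested := []string{
		"act-toolcache:/toolcache",
		"/var/run/docker.sock:/var/run/docker.sock",
		"/etc/passwd:/tmp/passwd", // not allowlisted, so dropped
	}
	fmt.Println(validateVolumes(valid, requested))
	// [act-toolcache:/toolcache /var/run/docker.sock:/var/run/docker.sock]
}
```

In the actual patch, a check like this would sit in front of the bind/mount resolution that `GetServiceBindsAndMounts` performs for service volumes.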
Currently, the following default volumes (see https://gitea.com/gitea/act/src/commit/a72822b3f83d3e68ffc697101b713b7badf57e2f/pkg/runner/run_context.go#L116-L166) will be added to `ValidVolumes`:

- `act-toolcache`
- `` and `-env`
- `/var/run/docker.sock` (we need a new configuration option to control whether the docker daemon socket can be mounted)

Co-authored-by: Jason Song
Reviewed-on: https://gitea.com/gitea/act/pulls/60
Reviewed-by: Jason Song
Co-authored-by: Zettat123
Co-committed-by: Zettat123

* Remove ContainerMaxLifetime; fix lint

* Remove unused ValidVolumes

* Remove ConnectToNetwork

* Add docker stubs

* Close docker clients to prevent file descriptor leaks

* Fix the error when removing network in self-hosted mode (#69)

Fixes https://gitea.com/gitea/act_runner/issues/255

Reviewed-on: https://gitea.com/gitea/act/pulls/69
Co-authored-by: Zettat123
Co-committed-by: Zettat123

* Move service container and network cleanup to rc.cleanUpJobContainer

* Add --network flag; default to host if not using service containers or set explicitly

* Correctly close executor to prevent fd leak

* Revert to tail instead of full path

* fix network duplication

* backport networkingConfig for aliases

* don't hardcode netMode host

* Convert services test to table driven tests

* Add failing tests for services

* Expose service container ports onto the host

* Set container network mode in artifacts server test to host mode

* Log container network mode when creating/starting a container

* fix: Correctly handle ContainerNetworkMode

* fix: missing service container network

* Always remove service containers

Although we usually keep containers running if the workflow errored (unless `--rm` is given) in order to facilitate debugging, and we have a flag (`--reuse`) to always keep containers running in order to speed up repeated `act` invocations, I believe that these should only apply to job containers and not to service containers, because changing the network settings on a service container requires
re-creating it anyway. * Remove networks only if no active endpoints exist * Ensure job containers are stopped before starting a new job * fix: go build -tags WITHOUT_DOCKER --------- Co-authored-by: Zettat123 Co-authored-by: Lunny Xiao Co-authored-by: Jason Song Co-authored-by: sillyguodong Co-authored-by: ChristopherHX Co-authored-by: ZauberNerd --- cmd/input.go | 1 + cmd/root.go | 3 + pkg/container/container_types.go | 38 +-- pkg/container/docker_network.go | 79 ++++++ pkg/container/docker_run.go | 50 ++-- pkg/container/docker_run_test.go | 1 + pkg/container/docker_stub.go | 12 + pkg/runner/job_executor.go | 10 +- pkg/runner/run_context.go | 233 ++++++++++++++++-- pkg/runner/runner.go | 81 +++--- pkg/runner/runner_test.go | 5 + .../testdata/services-host-network/push.yml | 14 ++ .../testdata/services-with-container/push.yml | 16 ++ pkg/runner/testdata/services/push.yaml | 26 ++ 14 files changed, 469 insertions(+), 100 deletions(-) create mode 100644 pkg/container/docker_network.go create mode 100644 pkg/runner/testdata/services-host-network/push.yml create mode 100644 pkg/runner/testdata/services-with-container/push.yml create mode 100644 pkg/runner/testdata/services/push.yaml diff --git a/cmd/input.go b/cmd/input.go index 1ae27930b65..f2f8edcdc28 100644 --- a/cmd/input.go +++ b/cmd/input.go @@ -56,6 +56,7 @@ type Input struct { matrix []string actionCachePath string logPrefixJobID bool + networkName string } func (i *Input) resolve(path string) string { diff --git a/cmd/root.go b/cmd/root.go index c076bf17ded..6b84cd762c4 100644 --- a/cmd/root.go +++ b/cmd/root.go @@ -14,6 +14,7 @@ import ( "github.com/AlecAivazis/survey/v2" "github.com/adrg/xdg" "github.com/andreaskoch/go-fswatch" + docker_container "github.com/docker/docker/api/types/container" "github.com/joho/godotenv" gitignore "github.com/sabhiram/go-gitignore" log "github.com/sirupsen/logrus" @@ -96,6 +97,7 @@ func Execute(ctx context.Context, version string) { 
rootCmd.PersistentFlags().StringVarP(&input.cacheServerAddr, "cache-server-addr", "", common.GetOutboundIP().String(), "Defines the address to which the cache server binds.") rootCmd.PersistentFlags().Uint16VarP(&input.cacheServerPort, "cache-server-port", "", 0, "Defines the port where the artifact server listens. 0 means a randomly available port.") rootCmd.PersistentFlags().StringVarP(&input.actionCachePath, "action-cache-path", "", filepath.Join(CacheHomeDir, "act"), "Defines the path where the actions get cached and host workspaces created.") + rootCmd.PersistentFlags().StringVarP(&input.networkName, "network", "", "host", "Sets a docker network name. Defaults to host.") rootCmd.SetArgs(args()) if err := rootCmd.Execute(); err != nil { @@ -612,6 +614,7 @@ func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []str ReplaceGheActionWithGithubCom: input.replaceGheActionWithGithubCom, ReplaceGheActionTokenWithGithubCom: input.replaceGheActionTokenWithGithubCom, Matrix: matrixes, + ContainerNetworkMode: docker_container.NetworkMode(input.networkName), } r, err := runner.New(config) if err != nil { diff --git a/pkg/container/container_types.go b/pkg/container/container_types.go index 767beb520a3..37d293a3817 100644 --- a/pkg/container/container_types.go +++ b/pkg/container/container_types.go @@ -4,28 +4,32 @@ import ( "context" "io" + "github.com/docker/go-connections/nat" "github.com/nektos/act/pkg/common" ) // NewContainerInput the input for the New function type NewContainerInput struct { - Image string - Username string - Password string - Entrypoint []string - Cmd []string - WorkingDir string - Env []string - Binds []string - Mounts map[string]string - Name string - Stdout io.Writer - Stderr io.Writer - NetworkMode string - Privileged bool - UsernsMode string - Platform string - Options string + Image string + Username string + Password string + Entrypoint []string + Cmd []string + WorkingDir string + Env []string + Binds []string + Mounts 
map[string]string + Name string + Stdout io.Writer + Stderr io.Writer + NetworkMode string + Privileged bool + UsernsMode string + Platform string + Options string + NetworkAliases []string + ExposedPorts nat.PortSet + PortBindings nat.PortMap } // FileEntry is a file to copy to a container diff --git a/pkg/container/docker_network.go b/pkg/container/docker_network.go new file mode 100644 index 00000000000..8a7528ab300 --- /dev/null +++ b/pkg/container/docker_network.go @@ -0,0 +1,79 @@ +//go:build !(WITHOUT_DOCKER || !(linux || darwin || windows)) + +package container + +import ( + "context" + + "github.com/docker/docker/api/types" + "github.com/nektos/act/pkg/common" +) + +func NewDockerNetworkCreateExecutor(name string) common.Executor { + return func(ctx context.Context) error { + cli, err := GetDockerClient(ctx) + if err != nil { + return err + } + defer cli.Close() + + // Only create the network if it doesn't exist + networks, err := cli.NetworkList(ctx, types.NetworkListOptions{}) + if err != nil { + return err + } + common.Logger(ctx).Debugf("%v", networks) + for _, network := range networks { + if network.Name == name { + common.Logger(ctx).Debugf("Network %v exists", name) + return nil + } + } + + _, err = cli.NetworkCreate(ctx, name, types.NetworkCreate{ + Driver: "bridge", + Scope: "local", + }) + if err != nil { + return err + } + + return nil + } +} + +func NewDockerNetworkRemoveExecutor(name string) common.Executor { + return func(ctx context.Context) error { + cli, err := GetDockerClient(ctx) + if err != nil { + return err + } + defer cli.Close() + + // Make sure that all networks of the specified name are removed + // cli.NetworkRemove refuses to remove a network if there are duplicates + networks, err := cli.NetworkList(ctx, types.NetworkListOptions{}) + if err != nil { + return err + } + common.Logger(ctx).Debugf("%v", networks) + for _, network := range networks { + if network.Name == name { + result, err := cli.NetworkInspect(ctx, network.ID,
types.NetworkInspectOptions{}) + if err != nil { + return err + } + + if len(result.Containers) == 0 { + if err = cli.NetworkRemove(ctx, network.ID); err != nil { + common.Logger(ctx).Debugf("%v", err) + } + } else { + common.Logger(ctx).Debugf("Refusing to remove network %v because it still has active endpoints", name) + } + } + } + + return err + } +} diff --git a/pkg/container/docker_run.go b/pkg/container/docker_run.go index cf58aee902d..d24c4ac33f8 100644 --- a/pkg/container/docker_run.go +++ b/pkg/container/docker_run.go @@ -29,6 +29,7 @@ import ( "github.com/docker/docker/api/types" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/mount" + "github.com/docker/docker/api/types/network" "github.com/docker/docker/client" "github.com/docker/docker/pkg/stdcopy" specs "github.com/opencontainers/image-spec/specs-go/v1" @@ -66,7 +67,7 @@ func supportsContainerImagePlatform(ctx context.Context, cli client.APIClient) b func (cr *containerReference) Create(capAdd []string, capDrop []string) common.Executor { return common. - NewInfoExecutor("%sdocker create image=%s platform=%s entrypoint=%+q cmd=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd). + NewInfoExecutor("%sdocker create image=%s platform=%s entrypoint=%+q cmd=%+q network=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd, cr.input.NetworkMode). Then( common.NewPipelineExecutor( cr.connect(), @@ -78,7 +79,7 @@ func (cr *containerReference) Create(capAdd []string, capDrop []string) common.E func (cr *containerReference) Start(attach bool) common.Executor { return common. - NewInfoExecutor("%sdocker run image=%s platform=%s entrypoint=%+q cmd=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd). 
+ NewInfoExecutor("%sdocker run image=%s platform=%s entrypoint=%+q cmd=%+q network=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd, cr.input.NetworkMode). Then( common.NewPipelineExecutor( cr.connect(), @@ -346,8 +347,8 @@ func (cr *containerReference) mergeContainerConfigs(ctx context.Context, config } if len(copts.netMode.Value()) == 0 { - if err = copts.netMode.Set("host"); err != nil { - return nil, nil, fmt.Errorf("Cannot parse networkmode=host. This is an internal error and should not happen: '%w'", err) + if err = copts.netMode.Set(cr.input.NetworkMode); err != nil { + return nil, nil, fmt.Errorf("Cannot parse networkmode=%s. This is an internal error and should not happen: '%w'", cr.input.NetworkMode, err) } } @@ -391,10 +392,11 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E input := cr.input config := &container.Config{ - Image: input.Image, - WorkingDir: input.WorkingDir, - Env: input.Env, - Tty: isTerminal, + Image: input.Image, + WorkingDir: input.WorkingDir, + Env: input.Env, + ExposedPorts: input.ExposedPorts, + Tty: isTerminal, } logger.Debugf("Common container.Config ==> %+v", config) @@ -430,13 +432,14 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E } hostConfig := &container.HostConfig{ - CapAdd: capAdd, - CapDrop: capDrop, - Binds: input.Binds, - Mounts: mounts, - NetworkMode: container.NetworkMode(input.NetworkMode), - Privileged: input.Privileged, - UsernsMode: container.UsernsMode(input.UsernsMode), + CapAdd: capAdd, + CapDrop: capDrop, + Binds: input.Binds, + Mounts: mounts, + NetworkMode: container.NetworkMode(input.NetworkMode), + Privileged: input.Privileged, + UsernsMode: container.UsernsMode(input.UsernsMode), + PortBindings: input.PortBindings, } logger.Debugf("Common container.HostConfig ==> %+v", hostConfig) @@ -445,7 +448,22 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E return err } - 
resp, err := cr.cli.ContainerCreate(ctx, config, hostConfig, nil, platSpecs, input.Name) + var networkingConfig *network.NetworkingConfig + logger.Debugf("input.NetworkAliases ==> %v", input.NetworkAliases) + if hostConfig.NetworkMode.IsUserDefined() && len(input.NetworkAliases) > 0 { + endpointConfig := &network.EndpointSettings{ + Aliases: input.NetworkAliases, + } + networkingConfig = &network.NetworkingConfig{ + EndpointsConfig: map[string]*network.EndpointSettings{ + input.NetworkMode: endpointConfig, + }, + } + } else { + logger.Debugf("not a user defined network or no aliases given; skipping networking config") + } + + resp, err := cr.cli.ContainerCreate(ctx, config, hostConfig, networkingConfig, platSpecs, input.Name) + if err != nil { + return fmt.Errorf("failed to create container: '%w'", err) + } diff --git a/pkg/container/docker_run_test.go b/pkg/container/docker_run_test.go index 8309df6ca80..96bda59214e 100644 --- a/pkg/container/docker_run_test.go +++ b/pkg/container/docker_run_test.go @@ -19,6 +19,7 @@ func TestDocker(t *testing.T) { ctx := context.Background() client, err := GetDockerClient(ctx) assert.NoError(t, err) + defer client.Close() dockerBuild := NewDockerBuildExecutor(NewDockerBuildExecutorInput{ ContextDir: "testdata", diff --git a/pkg/container/docker_stub.go b/pkg/container/docker_stub.go index b28c90de613..36f530eecc5 100644 --- a/pkg/container/docker_stub.go +++ b/pkg/container/docker_stub.go @@ -55,3 +55,15 @@ func NewDockerVolumeRemoveExecutor(volume string, force bool) common.Executor { return nil } } + +func NewDockerNetworkCreateExecutor(name string) common.Executor { + return func(ctx context.Context) error { + return nil + } +} + +func NewDockerNetworkRemoveExecutor(name string) common.Executor { + return func(ctx context.Context) error { + return nil + } +} diff --git a/pkg/runner/job_executor.go b/pkg/runner/job_executor.go index 3f2e41e2b9f..148c9ff5de1 100644 --- a/pkg/runner/job_executor.go +++ b/pkg/runner/job_executor.go @@ -19,6 +19,7 @@ type jobInfo interface {
result(result string) } +//nolint:contextcheck,gocyclo func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executor { steps := make([]common.Executor, 0) preSteps := make([]common.Executor, 0) @@ -87,7 +88,7 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo postExec := useStepLogger(rc, stepModel, stepStagePost, step.post()) if postExecutor != nil { - // run the post exector in reverse order + // run the post executor in reverse order postExecutor = postExec.Finally(postExecutor) } else { postExecutor = postExec @@ -101,7 +102,12 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo // always allow 1 min for stopping and removing the runner, even if we were cancelled ctx, cancel := context.WithTimeout(common.WithLogger(context.Background(), common.Logger(ctx)), time.Minute) defer cancel() - err = info.stopContainer()(ctx) //nolint:contextcheck + + logger := common.Logger(ctx) + logger.Infof("Cleaning up container for job %s", rc.JobName) + if err = info.stopContainer()(ctx); err != nil { + logger.Errorf("Error while stopping job container: %v", err) + } } setJobResult(ctx, info, rc, jobError == nil) setJobOutputs(ctx, rc) diff --git a/pkg/runner/run_context.go b/pkg/runner/run_context.go index 2c55e8a88f6..70d2f8f489d 100644 --- a/pkg/runner/run_context.go +++ b/pkg/runner/run_context.go @@ -17,12 +17,12 @@ import ( "runtime" "strings" - "github.com/opencontainers/selinux/go-selinux" - + "github.com/docker/go-connections/nat" "github.com/nektos/act/pkg/common" "github.com/nektos/act/pkg/container" "github.com/nektos/act/pkg/exprparser" "github.com/nektos/act/pkg/model" + "github.com/opencontainers/selinux/go-selinux" ) // RunContext contains info about current job @@ -40,6 +40,7 @@ type RunContext struct { IntraActionState map[string]map[string]string ExprEval ExpressionEvaluator JobContainer container.ExecutionsEnvironment + ServiceContainers []container.ExecutionsEnvironment
OutputMappings map[MappableOutput]MappableOutput JobName string ActionPath string @@ -87,6 +88,18 @@ func (rc *RunContext) jobContainerName() string { return createContainerName("act", rc.String()) } +// networkName returns the name of the network which will be created by `act` automatically for the job; +// the network is only created when service containers are used +func (rc *RunContext) networkName() (string, bool) { + if len(rc.Run.Job().Services) > 0 { + return fmt.Sprintf("%s-%s-network", rc.jobContainerName(), rc.Run.JobID), true + } + if rc.Config.ContainerNetworkMode == "" { + return "host", false + } + return string(rc.Config.ContainerNetworkMode), false +} + func getDockerDaemonSocketMountPath(daemonPath string) string { if protoIndex := strings.Index(daemonPath, "://"); protoIndex != -1 { scheme := daemonPath[:protoIndex] @@ -226,6 +239,7 @@ func (rc *RunContext) startHostEnvironment() common.Executor { } } +//nolint:gocyclo func (rc *RunContext) startJobContainer() common.Executor { return func(ctx context.Context) error { logger := common.Logger(ctx) @@ -259,41 +273,126 @@ func (rc *RunContext) startJobContainer() common.Executor { ext := container.LinuxContainerEnvironmentExtensions{} binds, mounts := rc.GetBindsAndMounts() + // specify the network to which the container will connect in the `docker create` stage (like running: docker create --network ) + // if using service containers, a new network will be created for them + // and removed at the end.
+ networkName, createAndDeleteNetwork := rc.networkName() + + // add service containers + for serviceID, spec := range rc.Run.Job().Services { + // interpolate env + interpolatedEnvs := make(map[string]string, len(spec.Env)) + for k, v := range spec.Env { + interpolatedEnvs[k] = rc.ExprEval.Interpolate(ctx, v) + } + envs := make([]string, 0, len(interpolatedEnvs)) + for k, v := range interpolatedEnvs { + envs = append(envs, fmt.Sprintf("%s=%s", k, v)) + } + username, password, err = rc.handleServiceCredentials(ctx, spec.Credentials) + if err != nil { + return fmt.Errorf("failed to handle service %s credentials: %w", serviceID, err) + } + serviceBinds, serviceMounts := rc.GetServiceBindsAndMounts(spec.Volumes) + + exposedPorts, portBindings, err := nat.ParsePortSpecs(spec.Ports) + if err != nil { + return fmt.Errorf("failed to parse service %s ports: %w", serviceID, err) + } + + serviceContainerName := createContainerName(rc.jobContainerName(), serviceID) + c := container.NewContainer(&container.NewContainerInput{ + Name: serviceContainerName, + WorkingDir: ext.ToContainerPath(rc.Config.Workdir), + Image: spec.Image, + Username: username, + Password: password, + Env: envs, + Mounts: serviceMounts, + Binds: serviceBinds, + Stdout: logWriter, + Stderr: logWriter, + Privileged: rc.Config.Privileged, + UsernsMode: rc.Config.UsernsMode, + Platform: rc.Config.ContainerArchitecture, + Options: spec.Options, + NetworkMode: networkName, + NetworkAliases: []string{serviceID}, + ExposedPorts: exposedPorts, + PortBindings: portBindings, + }) + rc.ServiceContainers = append(rc.ServiceContainers, c) + } + rc.cleanUpJobContainer = func(ctx context.Context) error { - if rc.JobContainer != nil && !rc.Config.ReuseContainers { - return rc.JobContainer.Remove(). - Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName(), false)). 
- Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName()+"-env", false))(ctx) + reuseJobContainer := func(ctx context.Context) bool { + return rc.Config.ReuseContainers + } + + if rc.JobContainer != nil { + return rc.JobContainer.Remove().IfNot(reuseJobContainer). + Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName(), false)).IfNot(reuseJobContainer). + Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName()+"-env", false)).IfNot(reuseJobContainer). + Then(func(ctx context.Context) error { + if len(rc.ServiceContainers) > 0 { + logger.Infof("Cleaning up services for job %s", rc.JobName) + if err := rc.stopServiceContainers()(ctx); err != nil { + logger.Errorf("Error while cleaning services: %v", err) + } + if createAndDeleteNetwork { + // clean network if it has been created by act + // if using service containers + // it means that the network to which containers are connecting is created by `act_runner`, + // so, we should remove the network at last. 
+ logger.Infof("Cleaning up network for job %s, and network name is: %s", rc.JobName, networkName) + if err := container.NewDockerNetworkRemoveExecutor(networkName)(ctx); err != nil { + logger.Errorf("Error while cleaning network: %v", err) + } + } + } + return nil + })(ctx) } return nil } + jobContainerNetwork := rc.Config.ContainerNetworkMode.NetworkName() + if rc.containerImage(ctx) != "" { + jobContainerNetwork = networkName + } else if jobContainerNetwork == "" { + jobContainerNetwork = "host" + } + rc.JobContainer = container.NewContainer(&container.NewContainerInput{ - Cmd: nil, - Entrypoint: []string{"tail", "-f", "/dev/null"}, - WorkingDir: ext.ToContainerPath(rc.Config.Workdir), - Image: image, - Username: username, - Password: password, - Name: name, - Env: envList, - Mounts: mounts, - NetworkMode: "host", - Binds: binds, - Stdout: logWriter, - Stderr: logWriter, - Privileged: rc.Config.Privileged, - UsernsMode: rc.Config.UsernsMode, - Platform: rc.Config.ContainerArchitecture, - Options: rc.options(ctx), + Cmd: nil, + Entrypoint: []string{"tail", "-f", "/dev/null"}, + WorkingDir: ext.ToContainerPath(rc.Config.Workdir), + Image: image, + Username: username, + Password: password, + Name: name, + Env: envList, + Mounts: mounts, + NetworkMode: jobContainerNetwork, + NetworkAliases: []string{rc.Name}, + Binds: binds, + Stdout: logWriter, + Stderr: logWriter, + Privileged: rc.Config.Privileged, + UsernsMode: rc.Config.UsernsMode, + Platform: rc.Config.ContainerArchitecture, + Options: rc.options(ctx), }) if rc.JobContainer == nil { return errors.New("Failed to create job container") } return common.NewPipelineExecutor( + rc.pullServicesImages(rc.Config.ForcePull), rc.JobContainer.Pull(rc.Config.ForcePull), rc.stopJobContainer(), + container.NewDockerNetworkCreateExecutor(networkName).IfBool(createAndDeleteNetwork), + rc.startServiceContainers(networkName), rc.JobContainer.Create(rc.Config.ContainerCapAdd, rc.Config.ContainerCapDrop), 
rc.JobContainer.Start(false), rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{ @@ -369,16 +468,50 @@ func (rc *RunContext) UpdateExtraPath(ctx context.Context, githubEnvPath string) return nil } -// stopJobContainer removes the job container (if it exists) and its volume (if it exists) if !rc.Config.ReuseContainers +// stopJobContainer removes the job container (if it exists) and its volume (if it exists) func (rc *RunContext) stopJobContainer() common.Executor { return func(ctx context.Context) error { - if rc.cleanUpJobContainer != nil && !rc.Config.ReuseContainers { + if rc.cleanUpJobContainer != nil { return rc.cleanUpJobContainer(ctx) } return nil } } +func (rc *RunContext) pullServicesImages(forcePull bool) common.Executor { + return func(ctx context.Context) error { + execs := []common.Executor{} + for _, c := range rc.ServiceContainers { + execs = append(execs, c.Pull(forcePull)) + } + return common.NewParallelExecutor(len(execs), execs...)(ctx) + } +} + +func (rc *RunContext) startServiceContainers(_ string) common.Executor { + return func(ctx context.Context) error { + execs := []common.Executor{} + for _, c := range rc.ServiceContainers { + execs = append(execs, common.NewPipelineExecutor( + c.Pull(false), + c.Create(rc.Config.ContainerCapAdd, rc.Config.ContainerCapDrop), + c.Start(false), + )) + } + return common.NewParallelExecutor(len(execs), execs...)(ctx) + } +} + +func (rc *RunContext) stopServiceContainers() common.Executor { + return func(ctx context.Context) error { + execs := []common.Executor{} + for _, c := range rc.ServiceContainers { + execs = append(execs, c.Remove().Finally(c.Close())) + } + return common.NewParallelExecutor(len(execs), execs...)(ctx) + } +} + // Prepare the mounts and binds for the worker // ActionCacheDir is for rc @@ -853,3 +986,53 @@ func (rc *RunContext) handleCredentials(ctx context.Context) (string, string, er return username, password, nil } + +func (rc *RunContext) 
handleServiceCredentials(ctx context.Context, creds map[string]string) (username, password string, err error) { + if creds == nil { + return + } + if len(creds) != 2 { + err = fmt.Errorf("invalid property count for key 'credentials:'") + return + } + + ee := rc.NewExpressionEvaluator(ctx) + if username = ee.Interpolate(ctx, creds["username"]); username == "" { + err = fmt.Errorf("failed to interpolate credentials.username") + return + } + + if password = ee.Interpolate(ctx, creds["password"]); password == "" { + err = fmt.Errorf("failed to interpolate credentials.password") + return + } + + return +} + +// GetServiceBindsAndMounts returns the binds and mounts for the service container, resolving paths as appropriate +func (rc *RunContext) GetServiceBindsAndMounts(svcVolumes []string) ([]string, map[string]string) { + if rc.Config.ContainerDaemonSocket == "" { + rc.Config.ContainerDaemonSocket = "/var/run/docker.sock" + } + binds := []string{} + if rc.Config.ContainerDaemonSocket != "-" { + daemonPath := getDockerDaemonSocketMountPath(rc.Config.ContainerDaemonSocket) + binds = append(binds, fmt.Sprintf("%s:%s", daemonPath, "/var/run/docker.sock")) + } + + mounts := map[string]string{} + + for _, v := range svcVolumes { + if !strings.Contains(v, ":") || filepath.IsAbs(v) { + // Bind anonymous volume or host file. + binds = append(binds, v) + } else { + // Mount existing volume.
+			paths := strings.SplitN(v, ":", 2)
+			mounts[paths[0]] = paths[1]
+		}
+	}
+
+	return binds, mounts
+}
diff --git a/pkg/runner/runner.go b/pkg/runner/runner.go
index 09c17319e0d..e1d646eb119 100644
--- a/pkg/runner/runner.go
+++ b/pkg/runner/runner.go
@@ -7,10 +7,10 @@ import (
 	"os"
 	"runtime"
 
-	log "github.com/sirupsen/logrus"
-
+	docker_container "github.com/docker/docker/api/types/container"
 	"github.com/nektos/act/pkg/common"
 	"github.com/nektos/act/pkg/model"
+	log "github.com/sirupsen/logrus"
 )
 
 // Runner provides capabilities to run GitHub actions
@@ -20,44 +20,45 @@ type Runner interface {
 
 // Config contains the config for a new runner
 type Config struct {
-	Actor                              string                     // the user that triggered the event
-	Workdir                            string                     // path to working directory
-	ActionCacheDir                     string                     // path used for caching action contents
-	BindWorkdir                        bool                       // bind the workdir to the job container
-	EventName                          string                     // name of event to run
-	EventPath                          string                     // path to JSON file to use for event.json in containers
-	DefaultBranch                      string                     // name of the main branch for this repository
-	ReuseContainers                    bool                       // reuse containers to maintain state
-	ForcePull                          bool                       // force pulling of the image, even if already present
-	ForceRebuild                       bool                       // force rebuilding local docker image action
-	LogOutput                          bool                       // log the output from docker run
-	JSONLogger                         bool                       // use json or text logger
-	LogPrefixJobID                     bool                       // switches from the full job name to the job id
-	Env                                map[string]string          // env for containers
-	Inputs                             map[string]string          // manually passed action inputs
-	Secrets                            map[string]string          // list of secrets
-	Vars                               map[string]string          // list of vars
-	Token                              string                     // GitHub token
-	InsecureSecrets                    bool                       // switch hiding output when printing to terminal
-	Platforms                          map[string]string          // list of platforms
-	Privileged                         bool                       // use privileged mode
-	UsernsMode                         string                     // user namespace to use
-	ContainerArchitecture              string                     // Desired OS/architecture platform for running containers
-	ContainerDaemonSocket              string                     // Path to Docker daemon socket
-	ContainerOptions                   string                     // Options for the job container
-	UseGitIgnore                       bool                       // controls if paths in .gitignore should not be copied into container, default true
-	GitHubInstance                     string                     // GitHub instance to use, default "github.com"
-	ContainerCapAdd                    []string                   // list of kernel capabilities to add to the containers
-	ContainerCapDrop                   []string                   // list of kernel capabilities to remove from the containers
-	AutoRemove                         bool                       // controls if the container is automatically removed upon workflow completion
-	ArtifactServerPath                 string                     // the path where the artifact server stores uploads
-	ArtifactServerAddr                 string                     // the address the artifact server binds to
-	ArtifactServerPort                 string                     // the port the artifact server binds to
-	NoSkipCheckout                     bool                       // do not skip actions/checkout
-	RemoteName                         string                     // remote name in local git repo config
-	ReplaceGheActionWithGithubCom      []string                   // Use actions from GitHub Enterprise instance to GitHub
-	ReplaceGheActionTokenWithGithubCom string                     // Token of private action repo on GitHub.
-	Matrix                             map[string]map[string]bool // Matrix config to run
+	Actor                              string                       // the user that triggered the event
+	Workdir                            string                       // path to working directory
+	ActionCacheDir                     string                       // path used for caching action contents
+	BindWorkdir                        bool                         // bind the workdir to the job container
+	EventName                          string                       // name of event to run
+	EventPath                          string                       // path to JSON file to use for event.json in containers
+	DefaultBranch                      string                       // name of the main branch for this repository
+	ReuseContainers                    bool                         // reuse containers to maintain state
+	ForcePull                          bool                         // force pulling of the image, even if already present
+	ForceRebuild                       bool                         // force rebuilding local docker image action
+	LogOutput                          bool                         // log the output from docker run
+	JSONLogger                         bool                         // use json or text logger
+	LogPrefixJobID                     bool                         // switches from the full job name to the job id
+	Env                                map[string]string            // env for containers
+	Inputs                             map[string]string            // manually passed action inputs
+	Secrets                            map[string]string            // list of secrets
+	Vars                               map[string]string            // list of vars
+	Token                              string                       // GitHub token
+	InsecureSecrets                    bool                         // switch hiding output when printing to terminal
+	Platforms                          map[string]string            // list of platforms
+	Privileged                         bool                         // use privileged mode
+	UsernsMode                         string                       // user namespace to use
+	ContainerArchitecture              string                       // Desired OS/architecture platform for running containers
+	ContainerDaemonSocket              string                       // Path to Docker daemon socket
+	ContainerOptions                   string                       // Options for the job container
+	UseGitIgnore                       bool                         // controls if paths in .gitignore should not be copied into container, default true
+	GitHubInstance                     string                       // GitHub instance to use, default "github.com"
+	ContainerCapAdd                    []string                     // list of kernel capabilities to add to the containers
+	ContainerCapDrop                   []string                     // list of kernel capabilities to remove from the containers
+	AutoRemove                         bool                         // controls if the container is automatically removed upon workflow completion
+	ArtifactServerPath                 string                       // the path where the artifact server stores uploads
+	ArtifactServerAddr                 string                       // the address the artifact server binds to
+	ArtifactServerPort                 string                       // the port the artifact server binds to
+	NoSkipCheckout                     bool                         // do not skip actions/checkout
+	RemoteName                         string                       // remote name in local git repo config
+	ReplaceGheActionWithGithubCom      []string                     // Use actions from GitHub Enterprise instance to GitHub
+	ReplaceGheActionTokenWithGithubCom string                       // Token of private action repo on GitHub.
+	Matrix                             map[string]map[string]bool   // Matrix config to run
+	ContainerNetworkMode               docker_container.NetworkMode // the network mode of job containers (the value of --network)
 }
 
 type caller struct {
diff --git a/pkg/runner/runner_test.go b/pkg/runner/runner_test.go
index 8d0ee771eaa..96738a883c1 100644
--- a/pkg/runner/runner_test.go
+++ b/pkg/runner/runner_test.go
@@ -302,6 +302,11 @@ func TestRunEvent(t *testing.T) {
 		{workdir, "set-env-step-env-override", "push", "", platforms, secrets},
 		{workdir, "set-env-new-env-file-per-step", "push", "", platforms, secrets},
 		{workdir, "no-panic-on-invalid-composite-action", "push", "jobs failed due to invalid action", platforms, secrets},
+
+		// services
+		{workdir, "services", "push", "", platforms, secrets},
+		{workdir, "services-host-network", "push", "", platforms, secrets},
+		{workdir, "services-with-container", "push", "", platforms, secrets},
 	}
 
 	for _, table := range tables {
diff --git a/pkg/runner/testdata/services-host-network/push.yml b/pkg/runner/testdata/services-host-network/push.yml
new file mode 100644
index 00000000000..8d0eb2947bf
--- /dev/null
+++ b/pkg/runner/testdata/services-host-network/push.yml
@@ -0,0 +1,14 @@
+name: services-host-network
+on: push
+jobs:
+  services-host-network:
+    runs-on: ubuntu-latest
+    services:
+      nginx:
+        image: "nginx:latest"
+        ports:
+          - "8080:80"
+    steps:
+      - run: apt-get -qq update && apt-get -yqq install --no-install-recommends curl net-tools
+      - run: netstat -tlpen
+      - run: curl -v http://localhost:8080
diff --git a/pkg/runner/testdata/services-with-container/push.yml b/pkg/runner/testdata/services-with-container/push.yml
new file mode 100644
index 00000000000..b37e5dcd421
--- /dev/null
+++ b/pkg/runner/testdata/services-with-container/push.yml
@@ -0,0 +1,16 @@
+name: services-with-containers
+on: push
+jobs:
+  services-with-containers:
+    runs-on: ubuntu-latest
+    # https://docs.github.com/en/actions/using-containerized-services/about-service-containers#running-jobs-in-a-container
+    container:
+      image: "ubuntu:latest"
+    services:
+      nginx:
+        image: "nginx:latest"
+        ports:
+          - "8080:80"
+    steps:
+      - run: apt-get -qq update && apt-get -yqq install --no-install-recommends curl
+      - run: curl -v http://nginx:80
diff --git a/pkg/runner/testdata/services/push.yaml b/pkg/runner/testdata/services/push.yaml
new file mode 100644
index 00000000000..f6ca7bc4c50
--- /dev/null
+++ b/pkg/runner/testdata/services/push.yaml
@@ -0,0 +1,26 @@
+name: services
+on: push
+jobs:
+  services:
+    name: Reproduction of failing Services interpolation
+    runs-on: ubuntu-latest
+    services:
+      postgres:
+        image: postgres:12
+        env:
+          POSTGRES_USER: runner
+          POSTGRES_PASSWORD: mysecretdbpass
+          POSTGRES_DB: mydb
+        options: >-
+          --health-cmd pg_isready
+          --health-interval 10s
+          --health-timeout 5s
+          --health-retries 5
+        ports:
+          - 5432:5432
+    steps:
+      - name: Echo the Postgres service ID / Network / Ports
+        run: |
+          echo "id: ${{ job.services.postgres.id }}"
+          echo "network: ${{ job.services.postgres.network }}"
+          echo "ports: ${{ job.services.postgres.ports }}"
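The volumes helper whose tail appears at the top of this patch is only partially visible. As context for reviewers, here is a minimal self-contained sketch of the `strings.SplitN(v, ":", 2)` classification that tail performs; the function name `splitVolumeSpecs` and the bare-name branch are assumptions for illustration, not the patch's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// splitVolumeSpecs is a hypothetical sketch of the visible fragment: a spec
// containing ":" is split once into source and target, so a suffix such as
// ":ro" stays attached to the target. Bare names (assumed branch) are
// collected separately as named volumes without an explicit target.
func splitVolumeSpecs(specs []string) (binds []string, mounts map[string]string) {
	mounts = make(map[string]string)
	for _, v := range specs {
		if strings.Contains(v, ":") {
			paths := strings.SplitN(v, ":", 2) // split on the first ":" only
			mounts[paths[0]] = paths[1]
		} else {
			binds = append(binds, v)
		}
	}
	return binds, mounts
}

func main() {
	binds, mounts := splitVolumeSpecs([]string{"act-toolcache:/toolcache", "my_docker_volume"})
	fmt.Println(binds, mounts)
}
```

This matches the "Check volumes" change conceptually: `ValidVolumes` can then be compared against the source half of each parsed spec.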
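The network fix (#56) hinges on the fact that `docker network connect` only works for user-defined networks, which is why the patch switches to joining the network at `docker create` time. An illustrative re-implementation of that predicate follows; act itself uses `docker_container.NetworkMode` from the Docker API rather than this hand-rolled check.

```go
package main

import (
	"fmt"
	"strings"
)

// isUserDefinedNetwork reports whether a --network value names a user-defined
// network. The predefined modes ("bridge", "host", "none", "default",
// "container:<id>") are the ones `docker network connect` rejects.
func isUserDefinedNetwork(mode string) bool {
	switch {
	case mode == "", mode == "default", mode == "bridge", mode == "host", mode == "none":
		return false
	case strings.HasPrefix(mode, "container:"):
		return false
	default:
		return true
	}
}

func main() {
	for _, m := range []string{"host", "bridge", "my-job-network"} {
		fmt.Printf("%s user-defined: %v\n", m, isUserDefinedNetwork(m))
	}
}
```

Networks created by act for a job are user-defined, so service aliases can be attached to them; `host` cannot carry aliases, which is what the `services-host-network` testdata above exercises via `localhost:8080`.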
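Per the fix description, containers are attached to the job network with `--network` during `docker create`, and service containers additionally get their service name as a `--network-alias`, which is why the `services-with-container` job can `curl http://nginx:80`. A sketch of assembling those flags, with illustrative names (not the patch's actual helpers):

```go
package main

import "fmt"

// createArgs builds the `docker create` argument list described in the patch:
// join the given network at create time and register each alias so other
// containers on the same user-defined network can resolve it by name.
func createArgs(image, network string, aliases []string) []string {
	args := []string{"create", "--network", network}
	for _, a := range aliases {
		args = append(args, "--network-alias", a)
	}
	return append(args, image)
}

func main() {
	fmt.Println(createArgs("nginx:latest", "job-network", []string{"nginx"}))
}
```

Because the network is fixed at create time, the later `docker network connect` step (and the pre-start cleanup of leftover containers and networks) becomes unnecessary, exactly as the commit message argues.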