
[Feature]: Add option to k3s to preload manifests into the container #1915

Closed
pablochacin opened this issue Nov 14, 2023 · 12 comments · Fixed by #1920
Labels
feature New functionality or new behaviors on the existing one

Comments

@pablochacin
Contributor

pablochacin commented Nov 14, 2023

Problem

In many cases, k3s is used for testing a component (application, operator, et cetera) that must be deployed in the cluster.

Presently, this can be done, for example, by copying the manifest to the container and using the kubectl command to apply it, as shown in the snippet below.

	// start a k3s container
	k3sContainer, err := k3s.RunContainer(ctx)
	if err != nil {
		t.Fatal(err)
	}
	
	// copy the manifest into the container
	err = k3sContainer.CopyFileToContainer(
		ctx,
		"manifest.yaml",
		"manifest.yaml",
		0o644,
	)
	if err != nil {
		t.Fatalf("copying manifest to cluster %v", err)
	}

	// apply the manifest with kubectl inside the container
	_, _, err = k3sContainer.Exec(
		ctx,
		[]string{
			"kubectl",
			"apply",
			"-f",
			"manifest.yaml",
		},
	)
	if err != nil {
		t.Fatalf("deploying manifest: %v", err)
	}
	
	// wait for all pods to be ready
	_, _, err = k3sContainer.Exec(
		ctx,
		[]string{
			"kubectl",
			"wait",
			"pods",
			"--all",
			"--for=condition=Ready",
			"--timeout=90s",
		},
	)
	if err != nil {
		t.Fatalf("waiting for pods ready %v", err)
	}

This is repetitive and highly inconvenient if multiple components must be deployed.

Solution

Provide a customization option WithManifest() that loads the manifests into the container and applies them.

It would also be convenient to provide a WaitForCondition() wait strategy that waits for the components deployed in the manifest to be ready:

	// start a k3s container with the manifest preloaded
	k3sContainer, err := k3s.RunContainer(
		ctx,
		WithManifest("manifest.yaml"),
		WithWaitStrategy(wait.ForExec([]string{
			"kubectl",
			"wait",
			"pods",
			"--all",
			"--for=condition=Ready",
		})),
	)
	if err != nil {
		t.Fatal(err)
	}

Benefit

Simplify test code by removing boilerplate.

Alternatives

Keep existing mechanism.

Would you like to help contributing this feature?

Yes

pablochacin added the feature label Nov 14, 2023
@mdelapenya
Collaborator

I like the idea @pablochacin, let's do it.

One thing regarding the WaitForCondition: would this represent a specialisation of the existing WaitFor.Exec wait strategy? I think we can add a new wait strategy for k3s, which embeds the Exec type. This new wait type could define a for condition, or simply not define anything and pass the wait timeout to the command.

Thoughts?
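
For illustration, here is a minimal sketch of what such a k3s-specific strategy could look like, built on the existing wait.ForExec; the ForCondition name and its parameters are hypothetical, not an existing API:

	package k3s

	import (
		"fmt"

		"github.com/testcontainers/testcontainers-go/wait"
	)

	// ForCondition is a hypothetical k3s-specific wait strategy that shells out
	// to the kubectl binary bundled in the k3s image and waits for all pods to
	// reach the given condition within the given timeout.
	func ForCondition(condition, timeout string) *wait.ExecStrategy {
		return wait.ForExec([]string{
			"kubectl", "wait", "pods", "--all",
			fmt.Sprintf("--for=%s", condition),
			fmt.Sprintf("--timeout=%s", timeout),
		})
	}

A test could then pass something like ForCondition("condition=Ready", "90s") wherever a wait.Strategy is accepted.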

@pablochacin
Contributor Author

I like the idea @pablochacin, let's do it.

I started implementing a PoC but found the problem mentioned in this issue.

The main problem is that we cannot just copy the manifests using the Files option in the container request, because the target directory is created by k3s at startup. Therefore we need to use a post-start hook.
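
For illustration, a minimal sketch of such a post-start hook, assuming the k3s auto-deploy directory /var/lib/rancher/k3s/server/manifests and the ContainerLifecycleHooks / ContainerHook types from testcontainers-go; the helper name is hypothetical and the wiring into a WithManifest option is omitted:

	import (
		"context"
		"path/filepath"

		"github.com/testcontainers/testcontainers-go"
	)

	// manifestLifecycleHooks (hypothetical helper) registers a post-start hook
	// that copies a local manifest into the k3s auto-deploy directory, where
	// k3s picks it up and applies it on its own.
	func manifestLifecycleHooks(manifest string) testcontainers.ContainerLifecycleHooks {
		return testcontainers.ContainerLifecycleHooks{
			PostStarts: []testcontainers.ContainerHook{
				func(ctx context.Context, c testcontainers.Container) error {
					target := "/var/lib/rancher/k3s/server/manifests/" + filepath.Base(manifest)
					return c.CopyFileToContainer(ctx, manifest, target, 0o644)
				},
			},
		}
	}

The key point is that the copy runs after the container has started, so the target directory already exists.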

This new wait type could define a for condition, or simply not define anything and pass the wait timeout to the command.

My initial idea was to allow a "copy & paste" of kubectl wait conditions instead of creating a different syntax. In that sense, it was more like a specialization of the WaitFor.Exec that calls kubectl with the given condition. But I'm open to other options. Could you give an example of how you envision this syntax?

@mdelapenya
Collaborator

Then WithManifests should copy the files, not via the Files attribute but as part of the post-startup commands (c.CopyToContainer): the first function in the PostCreates hook will create the dir, the second will copy the manifests, the third will apply them, and the last one will wait for the pods. Wdyt?

Regarding #1919, we should implement it to have an even more consistent lifecycle.

@pablochacin
Contributor Author

pablochacin commented Nov 15, 2023

the first function in the PostCreates hook will create the dir, the second will copy the manifests, the third will apply them, and the last one will wait for the pods. Wdyt?

@mdelapenya we only need a post-start hook to copy the manifest to a certain directory that is created when the container starts. Here is a PoC of this idea.

By the way, as far as I know, a post-create hook cannot create a dir, because it is not possible to execute a command in the container until it is started.

Regarding waiting for the pods, that should be a new wait option.

According to the discussion in this issue, the post-start hook that copies the manifest will be executed first and the wait will be executed last.

@pablochacin
Contributor Author

@mdelapenya I updated the PoC with the changes in the lifecycle hooks introduced in this PR and it works as expected! 🙌

What is pending for me is defining the syntax of the proposed WaitFor option.

@pablochacin
Contributor Author

@mdelapenya regarding the WaitForCondition option: when I proposed it I wasn't aware of the WithWaitStrategy option, as it is not documented.

Therefore the option I proposed is redundant and, as you suggested, it can be implemented as:

testcontainers.WithWaitStrategy(wait.ForExec([]string{"kubectl", "wait", "pod", "nginx", "--for=condition=Ready"})),
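
Putting the two together, the whole setup from the original snippet could then shrink to something like this (WithManifest is still the proposed option from this issue; the rest is the existing API):

	k3sContainer, err := k3s.RunContainer(
		ctx,
		k3s.WithManifest("manifest.yaml"), // proposed option, not yet merged
		testcontainers.WithWaitStrategy(wait.ForExec([]string{
			"kubectl", "wait", "pods", "--all",
			"--for=condition=Ready", "--timeout=90s",
		})),
	)
	if err != nil {
		t.Fatal(err)
	}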

@errordeveloper

I would say it's best to do this with server-side apply (SSA).

You could do something like this with the SSA package from the Flux project:

package apply

import (
	"context"
	"io"
	"time"

	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/klog/v2"
	ctrlClient "sigs.k8s.io/controller-runtime/pkg/client"

	"github.com/fluxcd/cli-utils/pkg/kstatus/polling"
	"github.com/fluxcd/pkg/ssa"
)

type ResourceManager struct {
	*ssa.ResourceManager
	logger klog.Logger
}

func NewResourceManager(config *rest.Config, logger klog.Logger) (*ResourceManager, error) {
	// copy the rest config passed in, rather than mutating the caller's value
	clientConfig := rest.CopyConfig(config)

	options := ctrlClient.Options{
		Scheme: runtime.NewScheme(),
	}

	if err := clientgoscheme.AddToScheme(options.Scheme); err != nil {
		return nil, err
	}

	client, err := ctrlClient.New(clientConfig, options)
	if err != nil {
		return nil, err
	}

	resourceManager := &ResourceManager{
		ResourceManager: ssa.NewResourceManager(client,
			polling.NewStatusPoller(client, client.RESTMapper(), polling.Options{}),
			ssa.Owner{
				Field: "testcontainers",
				Group: "k3s.golang.testcontainers.org",
			},
		),
		logger: logger,
	}
	return resourceManager, nil
}

func (m *ResourceManager) ApplyManifest(ctx context.Context, r io.Reader) error {
	objs, err := ssa.ReadObjects(r)
	if err != nil {
		return err
	}

	if err := ssa.NormalizeUnstructuredListWithScheme(objs, m.Client().Scheme()); err != nil {
		return err
	}

	changeSet, err := m.ApplyAllStaged(ctx, objs, ssa.DefaultApplyOptions())
	if err != nil {
		return err
	}
	for _, change := range changeSet.Entries {
		m.logger.Info(change.String())
	}
	return m.WaitForSet(changeSet.ToObjMetadataSet(),
		ssa.WaitOptions{
			Interval: 2 * time.Second,
			Timeout:  time.Minute,
		})
}

Disclaimer: I think this code should work, but it was a quick copy from something I'm working on, I've not tested this as is. If someone is thinking of making a PR, do ping me ;)

@pablochacin
Contributor Author

@errordeveloper I'm already working on a PR using the copy-to-manifests approach, because I think it is simpler: it uses a k3s feature and does not require the additional dependency of a Kubernetes client.

@errordeveloper

...I think it is simpler: it uses a k3s feature and does not require the additional dependency of a Kubernetes client.

Good point for sure, given the client would be a new dependency and can complicate things. The only other difference is that SSA code above includes readiness polling.

@pablochacin
Contributor Author

The only other difference is that SSA code above includes readiness polling.

But that comes from yet another dependency on flux.

I'm not sure it brings any significant advantage over this implementation, which uses already available methods, to justify all those additional dependencies.

@errordeveloper

errordeveloper commented Jan 24, 2024

The only other difference is that SSA code above includes readiness polling.

But that comes from yet another dependency on flux.

To be clear, the SSA package is a separate module, so it's not like installing Flux or dragging all of its deps. However, it does bring in the controller runtime client.

I'm not sure it brings any significant advantage

I would say that in most cases when I'm creating a cluster and want to apply some manifests to it before running tests, I certainly want to make sure these additional resources become ready before running any tests. So in my mind, it's quite significant.

@pablochacin this is just my two cents, I think your proposed solution is great progress already! It's possible that waiting needs to be solved separately if the k3s module should avoid the inclusion of any client. I just wanted to make sure the issue of readiness is touched on in this discussion, at least for the record :)

@pablochacin
Contributor Author

I just wanted to make sure the issue of readiness is touched on in this discussion

I think it is. Even with its limitations, using a command like the one shown in the example should in general be sufficient to check for resource readiness.

	WithWaitStrategy(wait.ForExec([]string{
		"kubectl",
		"wait",
		"pods",
		"--all",
		"--for=condition=Ready",
	})),
