diff --git a/docs/design/dry-run-mode.md b/docs/design/dry-run-mode.md
new file mode 100644
index 00000000..05ff557e
--- /dev/null
+++ b/docs/design/dry-run-mode.md
@@ -0,0 +1,83 @@
+# Dry Run Mode
+
+As your test suites grow bigger and more complicated over time, it is essential that the tooling used to create tests
+provides an easy way to identify and list the tests that will be processed when the suite is invoked with certain
+arguments. This listing needs to be quick and clean in order to enable a fast turnaround time during test development.
+This requirement brings in the need for a `dry-run` behavior in the `e2e-framework`.
+
+## Unit Of Test
+
+Go treats each function starting with `TestXxx` as the unit of test. However, the same is not entirely true for the
+`e2e-framework`, which programmatically generates dynamic sub-tests at runtime for each assessment of each feature.
+
+From the perspective of the `e2e-framework`, the unit of test is an `Assessment`, which actually performs the assertion of an
+expected behavior or state of the system. These assessments are run as sub-tests of the main test identified by the function
+`TestXxx`. All framework-specific behaviors are built around this fundamental test unit of `Assessment`.
+
+## Why not use `test.list` from `go test`?
+
+The `test.list` flag is a great way to get a dry-run-equivalent behavior. However, it is not easily extendable into the core of the `e2e-framework`, as
+there are framework-specific behaviors such as the `setup` and `teardown` workflows.
+
+In addition, because of how `test.list` works, it cannot extract information such as the `assessments` within a feature. This brings the need for a framework-specific `dry-run` mode that works well with `test.list` while providing all
+the framework-specific benefits when listing the tests to be processed.
+
+## `--dry-run` mode
+
+`e2e-framework` adds a new CLI flag, `--dry-run`, that can be passed when invoking the tests. It works in conjunction with `test.list` to provide the following behavior:
+
+1. When `--dry-run` mode is invoked, no setup/teardown workflows are processed
+2. Assessments are displayed as individual tests, just as they would be processed without `--dry-run` mode
+3. All pre/post actions around the Before/After Features and Before/After Tests are skipped
+
+When tests are invoked with the `-test.list` argument, `--dry-run` mode is automatically enabled to make sure the setup/teardown workflows as well as the pre/post actions are skipped.
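+
+To make this concrete, below is a minimal sketch of the kind of test the example output in the next section comes
+from (illustrative only, not the actual example source; it assumes a `testEnv` of type `env.Environment` created in
+`TestMain`):
+
+```go
+func TestPodBringUp(t *testing.T) {
+	featureOne := features.New("Feature One").
+		Setup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
+			// Skipped entirely when --dry-run is enabled.
+			return ctx
+		}).
+		Assess("Create Nginx Deployment 1", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
+			// Still reported as an individual sub-test under --dry-run,
+			// but the body of the assessment is not executed.
+			return ctx
+		}).Feature()
+
+	testEnv.Test(t, featureOne)
+}
+```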
+
+## Example Output with `--dry-run`
+
+```bash
+❯ go test . -test.v -args --dry-run
+=== RUN   TestPodBringUp
+=== RUN   TestPodBringUp/Feature_One
+=== RUN   TestPodBringUp/Feature_One/Create_Nginx_Deployment_1
+=== RUN   TestPodBringUp/Feature_One/Wait_for_Nginx_Deployment_1_to_be_scaled_up
+=== RUN   TestPodBringUp/Feature_Two
+=== RUN   TestPodBringUp/Feature_Two/Create_Nginx_Deployment_2
+=== RUN   TestPodBringUp/Feature_Two/Wait_for_Nginx_Deployment_2_to_be_scaled_up
+--- PASS: TestPodBringUp (0.00s)
+    --- PASS: TestPodBringUp/Feature_One (0.00s)
+        --- PASS: TestPodBringUp/Feature_One/Create_Nginx_Deployment_1 (0.00s)
+        --- PASS: TestPodBringUp/Feature_One/Wait_for_Nginx_Deployment_1_to_be_scaled_up (0.00s)
+    --- PASS: TestPodBringUp/Feature_Two (0.00s)
+        --- PASS: TestPodBringUp/Feature_Two/Create_Nginx_Deployment_2 (0.00s)
+        --- PASS: TestPodBringUp/Feature_Two/Wait_for_Nginx_Deployment_2_to_be_scaled_up (0.00s)
+PASS
+ok      sigs.k8s.io/e2e-framework/examples/parallel_features 0.353s
+```
+
+```bash
+❯ go test . -test.v -args --dry-run --assess "Deployment 1"
+=== RUN   TestPodBringUp
+=== RUN   TestPodBringUp/Feature_One
+=== RUN   TestPodBringUp/Feature_One/Create_Nginx_Deployment_1
+=== RUN   TestPodBringUp/Feature_One/Wait_for_Nginx_Deployment_1_to_be_scaled_up
+=== RUN   TestPodBringUp/Feature_Two
+=== RUN   TestPodBringUp/Feature_Two/Create_Nginx_Deployment_2
+    env.go:425: Skipping assessment "Create Nginx Deployment 2": name not matched
+=== RUN   TestPodBringUp/Feature_Two/Wait_for_Nginx_Deployment_2_to_be_scaled_up
+    env.go:425: Skipping assessment "Wait for Nginx Deployment 2 to be scaled up": name not matched
+--- PASS: TestPodBringUp (0.00s)
+    --- PASS: TestPodBringUp/Feature_One (0.00s)
+        --- PASS: TestPodBringUp/Feature_One/Create_Nginx_Deployment_1 (0.00s)
+        --- PASS: TestPodBringUp/Feature_One/Wait_for_Nginx_Deployment_1_to_be_scaled_up (0.00s)
+    --- PASS: TestPodBringUp/Feature_Two (0.00s)
+        --- SKIP: TestPodBringUp/Feature_Two/Create_Nginx_Deployment_2 (0.00s)
+        --- SKIP: TestPodBringUp/Feature_Two/Wait_for_Nginx_Deployment_2_to_be_scaled_up (0.00s)
+PASS
+ok      sigs.k8s.io/e2e-framework/examples/parallel_features 0.945s
+```
+
+## Example with `-test.list`
+
+```bash
+❯ go test . -test.v -test.list ".*" -args
+TestPodBringUp
+ok      sigs.k8s.io/e2e-framework/examples/parallel_features 0.645s
+```
+
+As you can see from the two examples above, the output of the two commands is not the same. Using `--dry-run` gives you a more framework-specific view of how the tests are going to be processed than `-test.list` does.
+
diff --git a/examples/dry_run/README.md b/examples/dry_run/README.md
new file mode 100644
index 00000000..bb161832
--- /dev/null
+++ b/examples/dry_run/README.md
@@ -0,0 +1,54 @@
+# Dry Run of Test Features
+
+This directory contains an example of how to run test features in `dry-run` mode using framework-specific flags.
+
+# Run Tests with flags
+
+These test cases can be executed using the normal `go test` command by passing the right arguments:
+
+```bash
+go test -v . -args --dry-run
+```
+
+This generates output like the following:
+
+```bash
+=== RUN   TestDryRunOne
+=== RUN   TestDryRunOne/F1
+=== RUN   TestDryRunOne/F1/Assessment_One
+=== RUN   TestDryRunOne/F2
+=== RUN   TestDryRunOne/F2/Assessment_One
+=== RUN   TestDryRunOne/F2/Assessment_Two
+--- PASS: TestDryRunOne (0.00s)
+    --- PASS: TestDryRunOne/F1 (0.00s)
+        --- PASS: TestDryRunOne/F1/Assessment_One (0.00s)
+    --- PASS: TestDryRunOne/F2 (0.00s)
+        --- PASS: TestDryRunOne/F2/Assessment_One (0.00s)
+        --- PASS: TestDryRunOne/F2/Assessment_Two (0.00s)
+=== RUN   TestDryRunTwo
+=== RUN   TestDryRunTwo/F1
+=== RUN   TestDryRunTwo/F1/Assessment_One
+--- PASS: TestDryRunTwo (0.00s)
+    --- PASS: TestDryRunTwo/F1 (0.00s)
+        --- PASS: TestDryRunTwo/F1/Assessment_One (0.00s)
+PASS
+ok      sigs.k8s.io/e2e-framework/examples/dry_run 0.618s
+```
+
+Without the `--dry-run` mode, you will additionally see the log `Do not run this when in dry-run mode` printed to your terminal, since the `Setup` function of feature `F2` actually runs in that case.
+
+To integrate this with `test.list`, run the following:
+
+```bash
+go test -v -list .
+```
+
+which generates the following output:
+
+```bash
+TestDryRunOne
+TestDryRunTwo
+ok      sigs.k8s.io/e2e-framework/examples/dry_run 0.375s
+```
+
+To understand the difference in output, please refer to the [Design Document](../../docs/design/dry-run-mode.md).
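+
+The `--dry-run` flag can also be combined with the other framework flags, such as `--assess`. For example (an
+illustrative invocation; "Assessment Two" is one of the assessment names defined in `dry_run_test.go` in this
+directory):
+
+```bash
+go test -v . -args --dry-run --assess "Assessment Two"
+```
+
+Assessments whose names do not match the `--assess` filter are reported as skipped, while the matching ones are
+still listed without their bodies being executed.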
diff --git a/examples/dry_run/dry_run_test.go b/examples/dry_run/dry_run_test.go
new file mode 100644
index 00000000..49fc7a2e
--- /dev/null
+++ b/examples/dry_run/dry_run_test.go
@@ -0,0 +1,60 @@
+/*
+Copyright 2021 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package dry_run
+
+import (
+	"context"
+	"testing"
+
+	"k8s.io/klog/v2"
+	"sigs.k8s.io/e2e-framework/pkg/envconf"
+	"sigs.k8s.io/e2e-framework/pkg/features"
+)
+
+func TestDryRunOne(t *testing.T) {
+	f1 := features.New("F1").
+		Assess("Assessment One", func(ctx context.Context, t *testing.T, c *envconf.Config) context.Context {
+			// Perform Some assessment
+			return ctx
+		}).Feature()
+
+	f2 := features.New("F2").
+		Setup(func(ctx context.Context, t *testing.T, c *envconf.Config) context.Context {
+			klog.Info("Do not run this when in dry-run mode")
+			return ctx
+		}).
+		Assess("Assessment One", func(ctx context.Context, t *testing.T, c *envconf.Config) context.Context {
+			// Perform Some assessment
+			return ctx
+		}).
+		Assess("Assessment Two", func(ctx context.Context, t *testing.T, c *envconf.Config) context.Context {
+			// Perform Some assessment
+			return ctx
+		}).Feature()
+
+	testEnv.TestInParallel(t, f1, f2)
+}
+
+func TestDryRunTwo(t *testing.T) {
+	f1 := features.New("F1").
+		Assess("Assessment One", func(ctx context.Context, t *testing.T, c *envconf.Config) context.Context {
+			// Perform Some assessment
+			return ctx
+		}).Feature()
+
+	testEnv.Test(t, f1)
+}
diff --git a/examples/dry_run/main_test.go b/examples/dry_run/main_test.go
new file mode 100644
index 00000000..903fd900
--- /dev/null
+++ b/examples/dry_run/main_test.go
@@ -0,0 +1,36 @@
+/*
+Copyright 2021 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package dry_run
+
+import (
+	"os"
+	"testing"
+
+	"sigs.k8s.io/e2e-framework/pkg/env"
+	"sigs.k8s.io/e2e-framework/pkg/envconf"
+)
+
+var (
+	testEnv env.Environment
+)
+
+func TestMain(m *testing.M) {
+	cfg, _ := envconf.NewFromFlags()
+	testEnv = env.NewWithConfig(cfg)
+
+	os.Exit(testEnv.Run(m))
+}
diff --git a/examples/parallel_features/parallel_features_test.go b/examples/parallel_features/parallel_features_test.go
index 18dbc81f..e3713a3b 100644
--- a/examples/parallel_features/parallel_features_test.go
+++ b/examples/parallel_features/parallel_features_test.go
@@ -24,7 +24,6 @@ import (
 	appsv1 "k8s.io/api/apps/v1"
 	corev1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-	log "k8s.io/klog/v2"
 
 	"sigs.k8s.io/e2e-framework/klient/k8s"
 	"sigs.k8s.io/e2e-framework/klient/wait"
@@ -43,7 +42,6 @@ var (
 
 func TestMain(m *testing.M) {
 	cfg, _ := envconf.NewFromFlags()
-	log.InfoS("Args", "flag", cfg)
 	testEnv = env.NewWithConfig(cfg)
 
 	clusterName = envconf.RandomName("kind-parallel", 16)
diff --git a/pkg/env/action.go b/pkg/env/action.go
index 25f707fa..fa950157 100644
--- a/pkg/env/action.go
+++ b/pkg/env/action.go
@@ -21,6 +21,7 @@ import (
 	"fmt"
 	"testing"
 
+	"k8s.io/klog/v2"
 	"sigs.k8s.io/e2e-framework/pkg/envconf"
 	"sigs.k8s.io/e2e-framework/pkg/internal/types"
 )
@@ -52,6 +53,10 @@ type action struct {
 func (a *action) runWithT(ctx context.Context, cfg *envconf.Config, t *testing.T) (context.Context, error) {
 	switch a.role {
 	case roleBeforeTest, roleAfterTest:
+		if cfg.DryRunMode() {
+			klog.V(2).Info("Skipping execution of roleBeforeTest and roleAfterTest due to framework being in dry-run mode")
+			return ctx, nil
+		}
 		for _, f := range a.testFuncs {
 			if f == nil {
 				continue
@@ -74,6 +79,10 @@ func (a *action) runWithT(ctx context.Context, cfg *envconf.Config, t *testing.T
 func (a *action) runWithFeature(ctx context.Context, cfg *envconf.Config, t *testing.T, fi types.Feature) (context.Context, error) {
 	switch a.role {
 	case roleBeforeFeature, roleAfterFeature:
+		if cfg.DryRunMode() {
+			klog.V(2).Info("Skipping execution of roleBeforeFeature and roleAfterFeature due to framework being in dry-run mode")
+			return ctx, nil
+		}
 		for _, f := range a.featureFuncs {
 			if f == nil {
 				continue
@@ -92,6 +101,10 @@ func (a *action) runWithFeature(ctx context.Context, cfg *envconf.Config, t *tes
 }
 
 func (a *action) run(ctx context.Context, cfg *envconf.Config) (context.Context, error) {
+	if cfg.DryRunMode() {
+		klog.V(2).InfoS("Skipping processing of action due to framework being in dry-run mode")
+		return ctx, nil
+	}
 	for _, f := range a.funcs {
 		if f == nil {
 			continue
diff --git a/pkg/env/action_test.go b/pkg/env/action_test.go
index 634dba07..4203406d 100644
--- a/pkg/env/action_test.go
+++ b/pkg/env/action_test.go
@@ -35,6 +35,7 @@ func TestAction_Run(t *testing.T) {
 	}{
 		{
 			name:  "single-step action",
+			cfg:   &envconf.Config{},
 			ctx:   context.WithValue(context.TODO(), &ctxTestKeyString{}, 1),
 			setup: func(ctx context.Context, cfg *envconf.Config) (val int, err error) {
 				funcs := []types.EnvFunc{
@@ -50,6 +51,7 @@ func TestAction_Run(t *testing.T) {
 		},
 		{
 			name:  "multi-step action",
+			cfg:   &envconf.Config{},
 			ctx:   context.WithValue(context.TODO(), &ctxTestKeyString{}, 1),
 			setup: func(ctx context.Context, cfg *envconf.Config) (val int, err error) {
 				funcs := []types.EnvFunc{
@@ -69,6 +71,7 @@ func TestAction_Run(t *testing.T) {
 		},
 		{
 			name:  "read from context",
+			cfg:   &envconf.Config{},
 			ctx:   context.WithValue(context.TODO(), &ctxTestKeyString{}, 1),
 			setup: func(ctx context.Context, cfg *envconf.Config) (val int, err error) {
 				funcs := []types.EnvFunc{
diff --git a/pkg/env/env.go b/pkg/env/env.go
index f89ff57b..04dda7c4 100644
--- a/pkg/env/env.go
+++ b/pkg/env/env.go
@@ -22,11 +22,12 @@ import (
 	"context"
 	"fmt"
 	"math/rand"
+	"regexp"
 	"sync"
 	"testing"
 	"time"
 
-	log "k8s.io/klog/v2"
+	"k8s.io/klog/v2"
 
 	"sigs.k8s.io/e2e-framework/pkg/envconf"
 	"sigs.k8s.io/e2e-framework/pkg/features"
@@ -225,6 +226,9 @@ func (e *testEnv) processTestFeature(t *testing.T, featureName string, feature t
 // In case if the parallel run of test features are enabled, this function will invoke the processTestFeature
 // as a go-routine to get them to run in parallel
 func (e *testEnv) processTests(t *testing.T, enableParallelRun bool, testFeatures ...types.Feature) {
+	if e.cfg.DryRunMode() {
+		klog.V(2).Info("e2e-framework is being run in dry-run mode. This will skip all the before/after step functions configured around your test assessments and features")
+	}
 	e.panicOnMissingContext()
 	if len(testFeatures) == 0 {
 		t.Log("No test testFeatures provided, skipping test")
@@ -238,7 +242,7 @@ func (e *testEnv) processTests(t *testing.T, enableParallelRun bool, testFeature
 	runInParallel := e.cfg.ParallelTestEnabled() && enableParallelRun
 
 	if runInParallel {
-		log.V(4).Info("Running test features in parallel")
+		klog.V(4).Info("Running test features in parallel")
 	}
 
 	var wg sync.WaitGroup
@@ -330,7 +334,7 @@ func (e *testEnv) Run(m *testing.M) int {
 	for _, setup := range setups {
 		// context passed down to each setup
 		if e.ctx, err = setup.run(e.ctx, e.cfg); err != nil {
-			log.Fatal(err)
+			klog.Fatal(err)
 		}
 	}
 
@@ -342,7 +346,7 @@ func (e *testEnv) Run(m *testing.M) int {
 	for _, fin := range finishes {
 		// context passed down to each finish step
 		if e.ctx, err = fin.run(e.ctx, e.cfg); err != nil {
-			log.V(2).ErrorS(err, "Finish action handlers")
+			klog.V(2).ErrorS(err, "Finish action handlers")
 		}
 	}
 
@@ -388,39 +392,27 @@ func (e *testEnv) getFinishActions() []action {
 	return e.getActionsByRole(roleFinish)
 }
 
+func (e *testEnv) executeSteps(ctx context.Context, t *testing.T, steps []types.Step) context.Context {
+	if e.cfg.DryRunMode() {
+		return ctx
+	}
+	for _, setup := range steps {
+		ctx = setup.Func()(ctx, t, e.cfg)
+	}
+	return ctx
+}
+
 func (e *testEnv) execFeature(ctx context.Context, t *testing.T, featName string, f types.Feature) context.Context {
 	// feature-level subtest
 	t.Run(featName, func(t *testing.T) {
-		// skip feature which matches with --skip-feature
-		if e.cfg.SkipFeatureRegex() != nil && e.cfg.SkipFeatureRegex().MatchString(featName) {
-			t.Skipf(`Skipping feature "%s": name matched`, featName)
-		}
-
-		// skip feature which does not match with --feature
-		if e.cfg.FeatureRegex() != nil && !e.cfg.FeatureRegex().MatchString(featName) {
-			t.Skipf(`Skipping feature "%s": name not matched`, featName)
-		}
-
-		// skip if labels does not match
-		// run tests if --labels values matches the feature labels
-		for k, v := range e.cfg.Labels() {
-			if f.Labels()[k] != v {
-				t.Skipf(`Skipping feature "%s": unmatched label "%s=%s"`, featName, k, f.Labels()[k])
-			}
-		}
-
-		// skip running a feature if labels matches with --skip-labels
-		for k, v := range e.cfg.SkipLabels() {
-			if f.Labels()[k] == v {
-				t.Skipf(`Skipping feature "%s": matched label provided in --skip-lables "%s=%s"`, featName, k, f.Labels()[k])
-			}
+		skipped, message := e.requireFeatureProcessing(f)
+		if skipped {
+			t.Skipf(message)
 		}
 
 		// setups run at feature-level
 		setups := features.GetStepsByLevel(f.Steps(), types.LevelSetup)
-		for _, setup := range setups {
-			ctx = setup.Func()(ctx, t, e.cfg)
-		}
+		ctx = e.executeSteps(ctx, t, setups)
 
 		// assessments run as feature/assessment sub level
 		assessments := features.GetStepsByLevel(f.Steps(), types.LevelAssess)
@@ -431,29 +423,79 @@ func (e *testEnv) execFeature(ctx context.Context, t *testing.T, featName string
 				assessName = fmt.Sprintf("Assessment-%d", i+1)
 			}
 			t.Run(assessName, func(t *testing.T) {
-				// skip assessments which matches with --skip-assessments
-				if e.cfg.SkipAssessmentRegex() != nil && e.cfg.SkipAssessmentRegex().MatchString(assess.Name()) {
-					t.Skipf(`Skipping assessment "%s": name matched`, assess.Name())
-				}
-
-				// skip assessments which does not matches with --assess
-				if e.cfg.AssessmentRegex() != nil && !e.cfg.AssessmentRegex().MatchString(assess.Name()) {
-					t.Skipf(`Skipping assessment "%s": name not matched`, assess.Name())
+				skipped, message := e.requireAssessmentProcessing(assess, i+1)
+				if skipped {
+					t.Skipf(message)
 				}
 
-				ctx = assess.Func()(ctx, t, e.cfg)
+				ctx = e.executeSteps(ctx, t, []types.Step{assess})
 			})
 		}
 
 		// teardowns run at feature-level
 		teardowns := features.GetStepsByLevel(f.Steps(), types.LevelTeardown)
-		for _, teardown := range teardowns {
-			ctx = teardown.Func()(ctx, t, e.cfg)
-		}
+		ctx = e.executeSteps(ctx, t, teardowns)
 	})
 
 	return ctx
 }
 
+// requireFeatureProcessing is a wrapper around the requireProcessing function to handle the feature-level validation
+func (e *testEnv) requireFeatureProcessing(f types.Feature) (skip bool, message string) {
+	requiredRegexp := e.cfg.FeatureRegex()
+	skipRegexp := e.cfg.SkipFeatureRegex()
+	return e.requireProcessing("feature", f.Name(), requiredRegexp, skipRegexp, f.Labels())
+}
+
+// requireAssessmentProcessing is a wrapper around the requireProcessing function to handle the assessment-level validation
+func (e *testEnv) requireAssessmentProcessing(a types.Step, assessmentIndex int) (skip bool, message string) {
+	requiredRegexp := e.cfg.AssessmentRegex()
+	skipRegexp := e.cfg.SkipAssessmentRegex()
+	assessmentName := a.Name()
+	if assessmentName == "" {
+		assessmentName = fmt.Sprintf("Assessment-%d", assessmentIndex)
+	}
+	return e.requireProcessing("assessment", assessmentName, requiredRegexp, skipRegexp, nil)
+}
+
+// requireProcessing is a utility function used to decide whether a specific test assessment or feature needs to be
+// processed.
+// The testName argument indicates the feature name or test name that is matched against the skip/include regex flags
+// to decide whether the entity in question needs processing.
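+// The include regex is evaluated before the skip regex: a name must first match the include filter (when one is
+// set) and must then not match the skip filter for the entity to be processed.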
+// This function also performs a label check against the include/skip labels, so that all the non-required
+// features can be filtered out during the test execution.
+func (e *testEnv) requireProcessing(kind, testName string, requiredRegexp, skipRegexp *regexp.Regexp, labels types.Labels) (skip bool, message string) {
+	if requiredRegexp != nil && !requiredRegexp.MatchString(testName) {
+		skip = true
+		message = fmt.Sprintf(`Skipping %s "%s": name not matched`, kind, testName)
+		return skip, message
+	}
+	if skipRegexp != nil && skipRegexp.MatchString(testName) {
+		skip = true
+		message = fmt.Sprintf(`Skipping %s "%s": name matched`, kind, testName)
+		return skip, message
+	}
+
+	if labels != nil {
+		for k, v := range e.cfg.Labels() {
+			if labels[k] != v {
+				skip = true
+				message = fmt.Sprintf(`Skipping feature "%s": unmatched label "%s=%s"`, testName, k, labels[k])
+				return skip, message
+			}
+		}
+
+		// skip running a feature if its labels match the --skip-labels filter
+		for k, v := range e.cfg.SkipLabels() {
+			if labels[k] == v {
+				skip = true
+				message = fmt.Sprintf(`Skipping feature "%s": matched label provided in --skip-labels "%s=%s"`, testName, k, labels[k])
+				return skip, message
+			}
+		}
+	}
+	return skip, message
+}
+
 // deepCopyFeature just copies the values from the Feature but creates a deep
 // copy to avoid mutation when we just want an informational copy.
 func deepCopyFeature(f types.Feature) types.Feature {
diff --git a/pkg/envconf/config.go b/pkg/envconf/config.go
index 9a69f8b0..0223be50 100644
--- a/pkg/envconf/config.go
+++ b/pkg/envconf/config.go
@@ -41,6 +41,7 @@ type Config struct {
 	skipLabels          map[string]string
 	skipAssessmentRegex *regexp.Regexp
 	parallelTests       bool
+	dryRun              bool
 }
 
 // New creates and initializes an empty environment configuration
@@ -79,6 +80,7 @@ func NewFromFlags() (*Config, error) {
 	}
 	e.skipLabels = envFlags.SkipLabels()
 	e.parallelTests = envFlags.Parallel()
+	e.dryRun = envFlags.DryRun()
 	return e, nil
 }
 
@@ -227,6 +229,15 @@ func (c *Config) ParallelTestEnabled() bool {
 	return c.parallelTests
 }
 
+func (c *Config) WithDryRunMode() *Config {
+	c.dryRun = true
+	return c
+}
+
+func (c *Config) DryRunMode() bool {
+	return c.dryRun
+}
+
 func randNS() string {
 	return RandomName("testns-", 32)
 }
diff --git a/pkg/envconf/config_test.go b/pkg/envconf/config_test.go
index 368a3946..2e49b56b 100644
--- a/pkg/envconf/config_test.go
+++ b/pkg/envconf/config_test.go
@@ -17,6 +17,7 @@ limitations under the License.
 package envconf
 
 import (
+	"flag"
 	"os"
 	"testing"
 )
@@ -39,6 +40,7 @@ func TestConfig_New(t *testing.T) {
 
 func TestConfig_New_WithParallel(t *testing.T) {
 	os.Args = []string{"test-binary", "-parallel"}
+	flag.CommandLine = &flag.FlagSet{}
 	cfg, err := NewFromFlags()
 	if err != nil {
 		t.Error("failed to parse args", err)
@@ -47,3 +49,15 @@ func TestConfig_New_WithParallel(t *testing.T) {
 		t.Error("expected parallel test to be enabled when -parallel argument is provided")
 	}
 }
+
+func TestConfig_New_WithDryRun(t *testing.T) {
+	os.Args = []string{"test-binary", "--dry-run"}
+	flag.CommandLine = &flag.FlagSet{}
+	cfg, err := NewFromFlags()
+	if err != nil {
+		t.Error("failed to parse args", err)
+	}
+	if !cfg.DryRunMode() {
+		t.Errorf("expected dry-run mode to be enabled when invoked with the --dry-run argument")
+	}
+}
diff --git a/pkg/flags/flags.go b/pkg/flags/flags.go
index a83e474a..0becb980 100644
--- a/pkg/flags/flags.go
+++ b/pkg/flags/flags.go
@@ -35,6 +35,7 @@ const (
 	flagSkipFeatureName    = "skip-features"
 	flagSkipAssessmentName = "skip-assessment"
 	flagParallelTestsName  = "parallel"
+	flagDryRunName         = "dry-run"
 )
 
 // Supported flag definitions
@@ -75,6 +76,10 @@ var (
 		Name:  flagParallelTestsName,
 		Usage: "Run test features in parallel",
 	}
+	dryRunFlag = flag.Flag{
+		Name:  flagDryRunName,
+		Usage: "Run the test suite in dry-run mode. This will list the tests to be executed without actually running them",
+	}
 )
 
 // EnvFlags surfaces all resolved flag values for the testing framework
@@ -88,6 +93,7 @@ type EnvFlags struct {
 	skipFeatures    string
 	skipAssessments string
 	parallelTests   bool
+	dryRun          bool
 }
 
 // Feature returns value for `-feature` flag
@@ -131,6 +137,10 @@ func (f *EnvFlags) Parallel() bool {
 	return f.parallelTests
 }
 
+func (f *EnvFlags) DryRun() bool {
+	return f.dryRun
+}
+
 // Parse parses defined CLI args os.Args[1:]
 func Parse() (*EnvFlags, error) {
 	return ParseArgs(os.Args[1:])
@@ -147,6 +157,7 @@ func ParseArgs(args []string) (*EnvFlags, error) {
 		skipFeature    string
 		skipAssessment string
 		parallelTests  bool
+		dryRun         bool
 	)
 
 	labels := make(LabelsMap)
@@ -188,6 +199,10 @@ func ParseArgs(args []string) (*EnvFlags, error) {
 		flag.BoolVar(&parallelTests, parallelTestsFlag.Name, false, parallelTestsFlag.Usage)
 	}
 
+	if flag.Lookup(dryRunFlag.Name) == nil {
+		flag.BoolVar(&dryRun, dryRunFlag.Name, false, dryRunFlag.Usage)
+	}
+
 	// Enable klog/v2 flag integration
 	klog.InitFlags(nil)
 
@@ -195,6 +210,12 @@ func ParseArgs(args []string) (*EnvFlags, error) {
 		return nil, fmt.Errorf("flags parsing: %w", err)
 	}
 
+	// Hook into the default test.list flag of `go test` and integrate it with the `--dry-run` behavior, treating them the same way
+	if !dryRun && flag.Lookup("test.list") != nil && flag.Lookup("test.list").Value.String() == "true" {
+		klog.V(2).Info("Enabling dry-run mode as the tests were invoked in list mode")
+		dryRun = true
+	}
+
 	return &EnvFlags{
 		feature:         feature,
 		assess:          assess,
@@ -205,6 +226,7 @@ func ParseArgs(args []string) (*EnvFlags, error) {
 		skipFeatures:    skipFeature,
 		skipAssessments: skipAssessment,
 		parallelTests:   parallelTests,
+		dryRun:          dryRun,
 	}, nil
 }
diff --git a/pkg/flags/flags_test.go b/pkg/flags/flags_test.go
index cc2ab613..8911fe8d 100644
--- a/pkg/flags/flags_test.go
+++ b/pkg/flags/flags_test.go
@@ -17,6 +17,7 @@ limitations under the License.
 package flags
 
 import (
+	"flag"
 	"testing"
 )
 
@@ -28,13 +29,14 @@ func TestParseFlags(t *testing.T) {
 	}{
 		{
 			name:  "with all",
-			args:  []string{"-assess", "volume test", "--feature", "beta", "--labels", "k0=v0, k1=v1, k2=v2", "--skip-labels", "k0=v0, k1=v1", "-skip-features", "networking", "-skip-assessment", "volume test", "-parallel"},
+			args:  []string{"-assess", "volume test", "--feature", "beta", "--labels", "k0=v0, k1=v1, k2=v2", "--skip-labels", "k0=v0, k1=v1", "-skip-features", "networking", "-skip-assessment", "volume test", "-parallel", "--dry-run"},
 			flags: &EnvFlags{assess: "volume test", feature: "beta", labels: LabelsMap{"k0": "v0", "k1": "v1", "k2": "v2"}, skiplabels: LabelsMap{"k0": "v0", "k1": "v1"}, skipFeatures: "networking", skipAssessments: "volume test"},
 		},
 	}
 
 	for _, test := range tests {
 		t.Run(test.name, func(t *testing.T) {
+			flag.CommandLine = &flag.FlagSet{}
 			testFlags, err := ParseArgs(test.args)
 			if err != nil {
 				t.Fatal(err)
@@ -68,7 +70,11 @@ func TestParseFlags(t *testing.T) {
 			}
 
 			if !testFlags.Parallel() {
-				t.Errorf("unmatched flag parsed. Expected paralle to be true.")
+				t.Errorf("unmatched flag parsed. Expected parallel to be true.")
+			}
+
+			if !testFlags.DryRun() {
+				t.Errorf("unmatched flag parsed. Expected dryRun to be true.")
 			}
 		})
 	}