
Prototype functionality in support of ongoing Module Acceptance Testing research #27873

Merged 3 commits into master from f-module-test-experiment on Feb 22, 2021

Conversation

apparentlymart
Contributor

During subsequent release periods we're hoping to perform more research and development on the problem space of module acceptance testing. So far we've been doing this research using out-of-tree mechanisms such as an external provider installable from the registry, but there are annoying limitations on what sort of user experience we can achieve with that approach.

This set of changes, then, is aimed at making a small additional increment on top of the previous external provider strategy in order to start to experiment with workflow as well as test style. In particular:

  • There's an experimental terraform test command which tries to plan and apply test suite configurations given in subdirectories of a tests subdirectory of the module being tested. It aims to aggregate the results from all suites together so that it's easier to verify all of the functionality covered by the test suites.
  • There's a special built-in provider terraform.io/builtin/test, available only in the special terraform test command, which serves as a placeholder for some possible additional Terraform language features for declaring expected postconditions that must hold even if the apply doesn't outright fail. (A sketch of what a test suite using this provider might look like follows this list.)
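
To make the shape of this concrete, here's a minimal sketch of what a test suite configuration might look like under this model. The file path, the module's api_url output, and the assertion values are hypothetical, and the experiment's documentation page is the authoritative reference; everything shown here is subject to change:

# tests/defaults/test_defaults.tf: a hypothetical test suite

terraform {
  required_providers {
    # The special built-in provider; only "terraform test" collects its results.
    test = {
      source = "terraform.io/builtin/test"
    }
  }
}

# Instantiate the module under test, which lives two directory levels up.
module "main" {
  source = "../.."
}

# Declare postconditions that must hold even if the apply doesn't fail.
resource "test_assertions" "api_url" {
  component = "api_url"

  equal "scheme" {
    description = "default scheme is https"
    got         = split(":", module.main.api_url)[0]
    want        = "https"
  }

  check "port_number" {
    description = "uses the standard HTTPS port"
    condition   = can(regex(":443$", module.main.api_url))
  }
}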

Nothing here is intended as "final" language or CLI user experience, because the goal for the moment is to use this to experiment with writing different sorts of tests for real-world modules and see what sorts of tests can and cannot be expressed within this model. Hopefully this will allow us to iterate towards a good set of language features that allow for writing commonly-useful tests, and then think about how to make those features more "first-class" in the language once we've understood the requirements better.

For this experiment I've taken the unusual step of adding a new documentation page specifically for the experiment, rather than adding a section about the experiment to an existing documentation section. I've done this because the design of this feature is not yet settled enough to know where it'll make sense to put it in the final documentation, and because whatever we finally implement is unlikely to be very similar to the prototype pseudo-language we've implemented here. The warning message emitted by the terraform test command includes a link to the experiment's documentation page, which I intend to update in place as we continue the experiment so that those who are participating can follow along with updates. Once we conclude the experiment, I expect this page will become a short stub describing the outcome of the experiment and linking to whatever final documentation pages are relevant.

The tests for the new command and associated provider are intentionally rather minimal for this first pass, because we expect to deliberately cause regressions as we learn more and iterate. I'm therefore aiming for a compromise: enough tests to see that all of the parts seem to be working together, without exhaustively testing all of the situations that might arise in the details, so that future iteration isn't heavily burdened by rewriting tests.

See the proposed new documentation page in the diff below for more context on what the goals are for these experimental features, and also the already-known limitations that we're accepting for now. Everything here can potentially change even in future patch releases of Terraform.

@apparentlymart apparentlymart requested a review from a team February 22, 2021 18:39
@apparentlymart apparentlymart self-assigned this Feb 22, 2021
@apparentlymart apparentlymart force-pushed the f-module-test-experiment branch 2 times, most recently from 665e556 to 8193e58 on February 22, 2021 19:26
@codecov

codecov bot commented Feb 22, 2021

Codecov Report

Merging #27873 (cea28ae) into master (6bef157) will decrease coverage by 0.22%.
The diff coverage is 46.53%.

Impacted Files Coverage Δ
command/meta_providers.go 29.67% <0.00%> (-0.59%) ⬇️
command/views/hook_count.go 70.45% <ø> (ø)
command/views/operation.go 81.81% <0.00%> (-5.28%) ⬇️
commands.go 0.70% <0.00%> (-0.02%) ⬇️
internal/moduletest/assertion.go 0.00% <0.00%> (ø)
internal/moduletest/status_string.go 0.00% <0.00%> (ø)
internal/providercache/installer.go 67.28% <0.00%> (-1.27%) ⬇️
command/views/test.go 9.86% <9.86%> (ø)
internal/moduletest/provider.go 39.63% <39.63%> (ø)
command/test.go 39.82% <39.82%> (ø)
... and 22 more

@mildwonkey
Contributor

I'm curious about the builtin test provider - you say it's only available from this new command, but I can use it in any regular configuration. Did you mean to block its use from outside terraform test?

[Screenshot: the builtin test provider being used in a regular configuration]

@apparentlymart
Contributor Author

apparentlymart commented Feb 22, 2021

Hmm, yes, sorry: you're right that I was imprecise in my wording there. What I meant to say was that you can use it in normal configurations, but it won't have any useful effect there: it exists only to gather data for the terraform test command to report, so if you use it with other commands then nothing will actually go looking for that data and it'll just be silently ignored.

However, I did include that warning message in case someone does what you tried here, so we'll have the freedom to remove it again later once this experiment runs its course.

While I could've left it totally unavailable in the other commands, I wanted to make it just be silently ignored there so that you can use terraform plan and terraform apply in the tests/foo directories for debugging if you need to, without having to comment out the test assertions first.

@mildwonkey (Contributor) left a comment

This looks reasonable to me! I left a couple of very small comment fixups and one question about the "test" builtin provider itself, but those aren't blockers. This looks like it's in a good state for experimentation. The documentation is very well done.

command/test.go Outdated

plan, diags := c.testSuitePlan(ctx, suiteDirs, providerFactories)
if diags.HasErrors() {
// It should be unusual to get in here, because testSuiteProviders

Suggested change
// It should be unusual to get in here, because testSuiteProviders
// It should be unusual to get in here, because testSuitePlan

}),
})

// The provider code expects to recieve an object that was decoded from

Suggested change
// The provider code expects to recieve an object that was decoded from
// The provider code expects to receive an object that was decoded from

@alisdair (Contributor) left a comment

This is great! Thanks so much for the level of effort put into commentary here, it's incredibly helpful.

// planning a subsequent phase of this R&D effort, to help distinguish
// between things we're doing here because they are valuable vs. things
// we're doing just to make it work without doing any disruptive
// refactoring.

Thank you for this commentary!


// AssertionKey is a unique identifier for an Assertion within a particular
// test run.
type AssertionKey struct {

I can't find any references to this type yet. What's it for?

@apparentlymart (Author) replied:

Oh, good catch! This was from an earlier iteration of the model where I had the component just be a field associated with each assertion, rather than its own distinct level of hierarchy (matching the JUnit XML view of things), but then I refactored it into a three-level hierarchy later and forgot to delete this unused type.

I'll delete it now.

// The result is a direct reference to the internal state of the provider,
// so the caller mustn't modify it nor store it across calls to provider
// operations.
func (p *Provider) TestResults() map[string]*Component {

I don't see this method used outside the package, as the command uses Reset instead. Given the caveats in this comment, could it make sense for this to be unexported?

@apparentlymart (Author) replied:

It is true that I ended up removing the callers that were using this in favor of a single call at the end, but I feel a bit funny about the only accessor for this data being a destructive one. 🤔 I think, given that we're still learning how this is gonna work, I'd like to leave this here for now and then re-evaluate it in a later iteration.

// only been pulling keys from one map, and so any duplicates
// would've been caught during config decoding, but this is here
// just to make these two blocks symmetrical to avoid mishaps in
// future refactoring/reorganization.

This is a great thought 💡

As part of ongoing research into Terraform testing we'd like to use an
experimental feature to validate our current understanding that expressing
tests as part of the Terraform language, as opposed to in some other
language run alongside, is a good and viable way to write practical
module integration tests.

This initial experimental incarnation of that idea is implemented as a
provider, just because that's an easier extension point for research
purposes than a first-class language feature would be. Whether this would
ultimately emerge as a provider similar to this or as custom language
constructs will be a matter for future research, if this first
experiment confirms that tests written in the Terraform language are the
best direction to take.

The previous incarnation of this experiment was an externally-developed
provider apparentlymart/testing, listed on the Terraform Registry. That
helped with showing that there are some useful tests that we can write
in the Terraform language, but integrating such a provider into Terraform
will allow us to make use of it in the also-experimental "terraform test"
command, which will follow in subsequent commits, to see how this might
fit into a development workflow.

This is just a prototype to gather some feedback in our ongoing research
on integration testing of Terraform modules. The hope is that by having a
command integrated into Terraform itself it'll be easier for interested
module authors to give it a try, and also easier for us to iterate quickly
based on feedback without having to coordinate across multiple codebases.

Everything about this is subject to change even in future patch releases.
Since it's a CLI command rather than a configuration language feature it's
not using the language experiments mechanism, but generates a warning
similar to the one language experiments generate in order to be clear that
backward compatibility is not guaranteed.

Since this is still at an early phase and likely to change significantly
in future iterations, rather than attempting to guess at a suitable final
location for documenting the testing feature I've instead taken the
unusual approach of adding a new page that is explicitly about the
experiment. My expectation is that once we conclude the experiment we'll
replace this new page with a stub that just explains that there was once
an experiment and then links to whatever final feature unfolded from the
research.

The URL for this page is hard-coded into the warning message in the
"terraform test" command, so as we continue to evolve this feature in
future releases we'll need to update the callout note on the page about
which Terraform CLI version it's currently talking about, so users of
older versions can clearly see when they'd need to upgrade in order to
participate in a later incarnation of the experiment.
@apparentlymart apparentlymart merged commit 8d37a70 into master Feb 22, 2021
@apparentlymart apparentlymart deleted the f-module-test-experiment branch February 22, 2021 22:21
}
name := item.Name()
suitePath := filepath.Join("tests", name)
tfFiles, err := filepath.Glob(filepath.Join(suitePath, "*.tf"))

Note that I believe this excludes testing modules with child modules.

@lorengordon
Contributor

One thing I've run into is tests of modules that use count/for_each, and issues where the test config generates the resources passed to those expressions in a way where the index/label cannot be determined until apply. My workaround has been to support a "prereq" config for each test that is applied first, and to read its outputs via the terraform_remote_state data source.
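
A minimal sketch of that workaround, assuming a local-backend prereq configuration whose state lives alongside the test (the prereq path, the subnet_ids output, and the module input names are hypothetical):

# tests/foo/main.tf: read outputs from a previously-applied prereq config

data "terraform_remote_state" "prereq" {
  backend = "local"

  config = {
    path = "${path.module}/prereq/terraform.tfstate"
  }
}

module "main" {
  source = "../.."

  # Because the prereq was applied first, these values are already known,
  # so count/for_each keys derived from them can be resolved at plan time.
  subnet_ids = data.terraform_remote_state.prereq.outputs.subnet_ids
}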

Has any similar pattern/issue been discussed here?

And, is there a better place to discuss this and follow progression, other than a merged PR?

@mildwonkey
Contributor

Hi @lorengordon, thanks for the feedback!

Since this is still early work and likely to lead to unstructured discussion, we'd like to gather feedback primarily via new topics in the community forum. That way we can have some more freedom to explore different ideas and approaches without the structural requirements we typically impose on GitHub issues.

@lorengordon
Contributor

@mildwonkey Is there going to be a post in the community forum for collecting feedback on this prototype?

@mildwonkey
Contributor

There will be a general post when the beta is released (about everything!), but we specifically ask that you create a new topic if there isn't one already that addresses your specific feedback, so we can keep the various questions and feature requests in their own threads. It can get too noisy to have multiple discussions in one topic.

@lorengordon
Contributor

Thanks @mildwonkey. I expanded on the question and posted it here: https://discuss.hashicorp.com/t/question-about-the-prototype-test-command-workflows/21375.

@sjbodzo

sjbodzo commented Mar 2, 2021

The general model I follow is to mock out the server via recorded responses in golden files, then execute my HCL directly against the test server through a terraform apply. This looks to be in the same vein, allowing assertions on outputs.

I'm excited to see where this goes, nicely done! Also 👍🏽 for spline reticulator 😄


@ghost ghost locked as resolved and limited conversation to collaborators Mar 25, 2021