Test infrastructure requirements and scenarios #14467

Closed
Priya91 opened this issue Apr 23, 2015 · 28 comments

Comments

@Priya91
Contributor

Priya91 commented Apr 23, 2015

Current updated proposal from @Chrisboh:

Requirements we need to satisfy
• Run a set of tests that all developers should run before checking in
• Specify a set of tests to run from the command line
• Be able to debug all test cases
• Run with code coverage turned on
• Only build what is required

Proposal

The key to this proposal starts at build time: we need to build only what we are going to run. This is important both for overall runtime and for running tests using .NET Native. To satisfy this, we need to move our ability to filter higher up in the process.

Building only What is Required

The following include is where we find all of our test projects and add them to the list of items we would like to build.
File: CoreFX\src\dirs.proj (line 8)
<Project Include="*\tests\**\*.csproj" Exclude="@(ExcludeProjects)" />

We need to add a condition to this to only include items that match a supplied filter. This filter would contain the following items.

Filter List:
• InnerLoop
• Functional
• Partner
• Customer

InnerLoop

This will contain everything we consider a unit test today and, until we complete the work to run our tests based on selectivity, should be the minimum required set of tests that get run before accepting any PR.

Functional (OuterLoop Ring 1)

This category is made up of test cases that will test larger sections of the code and / or take longer to run. The tests in this category should also give us a quick high level check that there are no major issues with regards to basic performance, stress, or security.

Partner (OuterLoop Ring 2)

The goal of this category is to determine if we are ready for team Dogfooding. All performance and stress tests should be in this category as well as any compliance / security tests.

Customer (OuterLoop Ring 3)

The goal of this category is to determine if we are ready to release this for Partner Dogfooding. This will contain all scenario test cases and will have a manual component added as well.

Defining Test Projects

In order for our filtering to work correctly we need to ensure that the test projects are set up correctly. The simplest way to do this is to create a test project for each category in the filter list. This would give us at most four test projects, but it will be more typical to have around two, as I could see unifying scenario, compliance, and security tests in a single location. To make this work we will filter by test project name, following this naming convention:

*.<Category>.tests.dll

Example:
System.ObjectModel.InnerLoop.tests.dll
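
As a rough illustration of how the dirs.proj include above could honor this convention — a sketch only; the TestTypes property comes from the command-line section below, and the assumption that test project files mirror the *.<Category>.tests naming of their output is mine:

<!-- Sketch: include a category's test projects only when that category appears in $(TestTypes). -->
<Project Include="*\tests\**\*.InnerLoop.tests.csproj" Exclude="@(ExcludeProjects)"
         Condition="$(TestTypes.Contains('InnerLoop'))" />
<Project Include="*\tests\**\*.Functional.tests.csproj" Exclude="@(ExcludeProjects)"
         Condition="$(TestTypes.Contains('Functional'))" />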

Xunit Attributes

Now it is time to address what we are going to do about the xunit attributes. First, we will remove all attributes that can be replaced with the project filters, including things like InnerLoop, OuterLoop, Performance, and Stress; these attributes will no longer be used when deciding which binaries to build and run. This doesn't mean we will be removing all attributes entirely. We still need to keep and leverage certain attributes in order to give us the flexibility to further fine-tune our runs.

Here are some of the key attributes we will need to leave in place.

Active Issue Attribute

This attribute will be used to filter out failing test cases when running locally. When running our daily builds we should include these test cases and label them as failing with a known bug.

Platform Specific Attribute

This attribute will be used to filter out test cases that do not apply to the platform you are currently running on. There was some debate on whether we should promote this to the filter list or not. At this time we don’t feel there are enough test cases to warrant promoting this to the filter list. In the future if this becomes an issue we can revisit this.

Command Line

We will need to execute everything from the command line and be able to select exactly what we need. This will be done via two properties. The first defines the test categories that you would like to include:

/p:TestTypes="InnerLoop;Functional"

The second property will be used to supply extra command-line parameters to xunit. This is simply a pass-through from the MSBuild command line to xunit:

/p:AdditionalXunitArgs="-notrait ActiveIssue"

Additionally, if you do not specify the TestTypes property it should run all developer test cases automatically. Not specifying the AdditionalXunitArgs property should by default exclude any test with an active issue, and regardless of whether the property is specified we should include the correct OS parameter.
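
A minimal sketch of how those defaults could be wired into the targets; the property names match the ones above, while the assumption that "developer test cases" means the InnerLoop set (and the reuse of the -notrait ActiveIssue filter from the example above) is mine:

<PropertyGroup>
  <!-- Default to the developer (InnerLoop) set when no TestTypes are supplied. -->
  <TestTypes Condition="'$(TestTypes)' == ''">InnerLoop</TestTypes>
  <!-- By default skip tests with an active issue; the correct OS filter would be appended here as well. -->
  <AdditionalXunitArgs Condition="'$(AdditionalXunitArgs)' == ''">-notrait ActiveIssue</AdditionalXunitArgs>
</PropertyGroup>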

Debugging

This will be the same as what we have today. In the future we should invest in making Xunit.runner.VisualStudio work, which would simplify everything.

@krwq
Member

krwq commented Apr 23, 2015

We need a way of running:

  • all tests except failing
  • inner loop except failing

Currently we have only inner or outer, and no way of running both.

@Priya91
Contributor Author

Priya91 commented Apr 23, 2015

Those types of scenarios will grow once we start adding more categories:

  • all tests except failing and perf
  • inner loop except formatter tests {since inner loop is not properly defined to xunit, conceptually some formatter tests can be innerloop too.}
  • categories A and B except category C
  • or for that matter, all tests.

So I think it's best not to support such specific category filtering through build.cmd; instead, invoke xunit.console.netcore.exe directly for such purposes. build.cmd is a tool to be run during every check-in for build validation {from the point of view of the build machines I don't think we will have a build definition to run all tests except failing; the only scenario will be on the devbox}, and adding xunit-based category filtering to the targets does not seem like a good idea. If this is really needed, then it is best to develop a separate test tool rather than lump it in with the build.

@weshaggard weshaggard changed the title from Design test build properties in accordance with internal test features. to Test infrastructure requirements and scenarios Apr 23, 2015
@weshaggard
Member

@Priya91 thanks for opening this issue. I updated the title to more closely match what I'm hoping to get from this discussion. While we should continue to have the discussion in new comments, can you keep the original post updated with the current list of requirements and scenarios? Let's use the original post as the spec, if you will, that we are trying to craft.

With that said, can you reformat it to have at least a Scenarios section that we can use to enumerate the things people are trying to do with the test system? Then also add a Requirements section that lists what our test system requires in order to accomplish the scenarios we agree upon. Once we have those two fairly well agreed upon, we can talk about potential technical solutions to the problem.

@Priya91
Contributor Author

Priya91 commented Apr 23, 2015

@weshaggard: Updated the main description with the scenarios I have in mind. I'll keep updating this as the discussion progresses.

@mmitche
Member

mmitche commented Apr 24, 2015

@Priya91 I think it's also useful to be able to exclude a specific test by name, or to run just a specific test. I think this functionality exists in at least one version of xunit's console runner. We should make sure this is easy to do from the command line though.

@Priya91
Contributor Author

Priya91 commented Apr 24, 2015

@mmitche: This is already available from the xunit.console.netcore runner, through the -method "name" option. I wouldn't want to support this through the build, as it's not a build feature, and I wouldn't want build.cmd /p:="something", as this will pollute our targets files to support a test feature that is already available in the console runner. I would prefer test.cmd -method "", where test.cmd can be a wrapper over xunit.console.netcore.exe.

@eatdrinksleepcode

@Priya91

I wouldn't want to support this through the build, as it's not a build feature...I would prefer test.cmd -method ""

I disagree strongly with this. Testing is a fundamental part of the build; otherwise we wouldn't bother including test targets in the build. I don't want to run build and test commands separately; any time I am testing, I want to make sure that I have successfully built first. Requiring separate commands leads to cases where I think I have fixed a bug but it is still failing, or I think the code I just wrote is passing but it hasn't actually been built yet.

I wouldn't want build.cmd /p:="something", as this will pollute our targets files to support a test feature

I agree that we shouldn't feel the need to explicitly expose every option that is available in the test runner as a property of our build. Instead we should ensure that the developer has the ability to pass any options to the test runner through the build (we currently have this capability using the XUnitOptions property). However, the ability to filter the tests to run is such a common requirement that most modern build systems have explicit support for it:

gradle test --tests MyProject.Some_Tests
mvn verify -Dtest=MyProject.Some_Tests
rake test TEST=MyProject.Some*Tests.rb

As long as we have the ability to pass arbitrary options to the test runner through the build, I wouldn't consider this a strict requirement, but it is a nice to have.

@Priya91
Contributor Author

Priya91 commented Apr 24, 2015

@eatdrinksleepcode: Yes, I agree. What I meant was that I wouldn't want the build to have separate properties for each of the features supported by the underlying xunit runner, because that would mean learning different semantics for the build and the test runner. Having the build support the runner's existing options by blindly passing them through is fine. So not every runner-based test scenario has to be an explicit build scenario. Adding this to the scenarios.

@weshaggard
Member

Here are some scenarios that I had in mind:

  1. As a developer I want to be able to run a set of tests that run quickly and give a basic quality read of the library or libraries that I'm building from the command line.
  2. As a developer I want to be able to run the complete set of tests for a library or libraries from the command line.
  3. As a developer I want to be able to run a set of tests for a given library within Visual Studio and be able to debug them.
  4. As a developer I want to have control over how I filter the set of tests I run based on different criteria, such as non-failing, platform specific, or test category (e.g. innerloop, outerloop, perf).
  5. As a developer I want to be able to run my tests with code coverage enabled to help determine the amount of code my tests are exercising.

Here are some of the requirements I had in mind:

  1. Can run all passing InnerLoop tests
  2. Can run failing tests
  3. Can run all tests applicable to a particular platform
  4. Can run only the tests that match my filtering criteria
  5. Can enable code coverage

I also think you need another section in this spec-let for definitions. If you want to use terms that aren't commonly known like InnerLoop and OuterLoop you should define them.

Once we get a good understanding of what scenarios folks want and the set of requirements that accomplish those scenarios then we can dig deeper into solutions and see what gaps our current system has and file other individual issues to tackle those.

@roncain
Contributor

roncain commented May 11, 2015

I would like clarification on some terms and requirements for categorizing tests. I believe these are necessary, so I want to explicitly call them out:

  1. A "test" corresponds to an xunit test method
  2. Each test can be associated with zero or more categories
  3. Categories can be specified at the method level, class level, or project level, logically OR'ing all levels (ex: the project declares everything OuterLoop, the class declares custom category "X", and the method declares category "Y" -- that method matches OuterLoop, "X", and "Y"). A sketch of this with xunit traits follows below.
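
A minimal sketch of the class/method part of that OR'ing using plain xunit Trait attributes; the trait name "Category" and the test names are illustrative, and how a project-level category gets applied is exactly what is under discussion here:

using Xunit;

// Class-level trait: applies to every test in the class.
[Trait("Category", "X")]
public class WidgetTests
{
    [Fact]
    [Trait("Category", "Y")] // Method-level trait: OR'd with the class-level one, so this test matches both "X" and "Y".
    public void Widget_Roundtrips()
    {
        Assert.Equal(42, 40 + 2);
    }
}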

I would like clarification on what some of our common categories mean. I propose:

  1. InnerLoop: strictly unit tests that run quickly and locally, do not modify machine or use network.
  2. OuterLoop: integration tests that can take longer and may access network or disk (does not include "Performance" or "Stress"). Will not alter the execution machine.
  3. Performance: integration tests intended to measure throughput, latency, etc. Can access network and disk.
  4. Stress: integration tests intended to be long-running and heavily load machine. Can access network or disk.

(Aside: I find the terms InnerLoop and OuterLoop a little confusing. Why not "unit" and "integration"?)

The WCF team will likely have the need for a custom category for "Machine configuration required", where the test will run only if the local machine has been specially configured (details TBD). This would never run unless the developer or CI machine explicitly asked for it.

Regarding how tests are selected for execution, I think we should state:

  1. Specifying no explicit category runs InnerLoop only
  2. Multiple categories can be both included and excluded when asking tests to run (ex: run all InnerLoops and OuterLoops except custom category "X")
  3. Category filters work with all the other filters (ex: run all OuterLoop failing tests for a specific platform)
  4. Performance, stress, and custom categories are strictly opt-in and do not run when OuterLoop alone is run

@iamjasonp
Member

Is the intention to have a master list of categories/additional requirements under which we should classify tests? I see the following test categories as having already been suggested/implemented:

  • InnerLoop
  • OuterLoop
  • Stress
  • Performance
  • OuterLoopWithAdditionalRequirementsAndMightModifyYourMachine

(the last one's name is a little convoluted, obviously)

It might be worth considering an additional scenario, perhaps:

  • As a developer, I want to be able to specify any special dependencies and requirements for my test to run

where filtering is covered by:

  1. As a developer I want to have control over how I filter the set of tests I run based on different criteria, such as non-failing, platform specific, or test category (e.g. innerloop, outerloop, perf).

As more tests are written, there may be more tests that will have some special requirement, e.g. as @roncain said, the need for a machine where the machine's config can be changed and later cleaned up.

Within the context of how tests are categorized today: perhaps something like [OuterLoop("WcfServerRequired")] or [Performance("CrossMachineOnly")] would be sufficient to specify any additional requirements on the base categories that allow them.

With that said though, it will be important that we don't have too many categories/custom requirements - I trust that the community will also keep an eye out for custom requirement explosion if we know it's something to look out for?

@Chrisboh
Member

Working with @Priya91 and @weshaggard, I think we have come up with a plan to solve this.

Note: Updated based on feedback

Requirements we need to satisfy
• Run a set of tests that all developers should run before checking in
• Specify a set of tests to run from the command line
• Be able to debug all test cases
• Run with code coverage turned on
• Only build what is required

Proposal

The key to this proposal starts at build time: we need to build only what we are going to run. This is important both for overall runtime and for running tests using .NET Native. To satisfy this, we need to move our ability to filter higher up in the process.

Building only What is Required

The following include is where we find all of our test projects and add them to the list of items we would like to build.
File: CoreFX\src\dirs.proj (line 8)
<Project Include="*\tests\**\*.csproj" Exclude="@(ExcludeProjects)" />

We need to add a condition to this to only include items that match a supplied filter. This filter would contain the following items.

Filter List:
• InnerLoop
• Functional
• Partner
• Customer

InnerLoop

This will contain everything we consider a unit test today and, until we complete the work to run our tests based on selectivity, should be the minimum required set of tests that get run before accepting any PR.

Functional (OuterLoop Ring 1)

This category is made up of test cases that will test larger sections of the code and / or take longer to run. The tests in this category should also give us a quick high level check that there are no major issues with regards to basic performance, stress, or security.

Partner (OuterLoop Ring 2)

The goal of this category is to determine if we are ready for team Dogfooding. All performance and stress tests should be in this category as well as any compliance / security tests.

Customer (OuterLoop Ring 3)

The goal of this category is to determine if we are ready to release this for Partner Dogfooding. This will contain all scenario test cases and will have a manual component added as well.

Defining Test Projects

In order for our filtering to work correctly we need to ensure that the test projects are set up correctly. The simplest way to do this is to create a test project for each category in the filter list. This would give us at most four test projects, but it will be more typical to have around two, as I could see unifying scenario, compliance, and security tests in a single location. To make this work we will filter by test project name, following this naming convention:

*.<Category>.tests.dll

Example:
System.ObjectModel.InnerLoop.tests.dll

Xunit Attributes

Now it is time to address what we are going to do about the xunit attributes. First, we will remove all attributes that can be replaced with the project filters, including things like InnerLoop, OuterLoop, Performance, and Stress; these attributes will no longer be used when deciding which binaries to build and run. This doesn't mean we will be removing all attributes entirely. We still need to keep and leverage certain attributes in order to give us the flexibility to further fine-tune our runs.

Here are some of the key attributes we will need to leave in place.

Active Issue Attribute

This attribute will be used to filter out failing test cases when running locally. When running our daily builds we should include these test cases and label them as failing with a known bug.

Platform Specific Attribute

This attribute will be used to filter out test cases that do not apply to the platform you are currently running on. There was some debate on whether we should promote this to the filter list or not. At this time we don’t feel there are enough test cases to warrant promoting this to the filter list. In the future if this becomes an issue we can revisit this.

Command Line

We will need to execute everything from the command line and be able to select exactly what we need. This will be done via two properties. The first defines the test categories that you would like to include:

/p:TestTypes="InnerLoop;Functional"

The second property will be used to supply extra command-line parameters to xunit. This is simply a pass-through from the MSBuild command line to xunit:

/p:AdditionalXunitArgs="-notrait ActiveIssue"

Additionally, if you do not specify the TestTypes property it should run all developer test cases automatically. Not specifying the AdditionalXunitArgs property should by default exclude any test with an active issue, and regardless of whether the property is specified we should include the correct OS parameter.

Debugging

This will be the same as what we have today. In the future we should invest in making Xunit.runner.VisualStudio work, which would simplify everything.

@Priya91
Contributor Author

Priya91 commented May 12, 2015

@Chrisboh: Sounds good. I would prefer renaming TestCaseCategory to TestProjectType or TestType, and WithCategories to /p:TestTypes="", so as not to confuse them with xunit categories, since these types only work through the build process and not directly with xunit.exe.

@Priya91
Contributor Author

Priya91 commented May 12, 2015

And we should also keep the Perf and Stress xunit attributes, to run only perf/stress tests from test projects which have TestType=Team.

@Chrisboh
Member

@Priya91 agree with your suggestions.

@weshaggard
Member

@Chrisboh I think this is a great start in the right direction. One other requirement that will be interesting at some point is the ability to run pre/post commands at the test project level itself. For example, if I have a test project that has to run with admin privileges, or I need to start up some other server machine to test a networking scenario.

@roncain
Contributor

roncain commented May 15, 2015

I think this is a good start. Some responses:

  • Stress and Perf should be kept distinct and runnable separately. They don't necessarily fit in the same bucket. For example, Stress might run for days and Perf for a few minutes.
  • I'm not crazy about identifying categories by project or DLL name. That might make the build tool's life easier but feels too rigid. It violates my design requirement above to have a test marked with multiple categories. For example, we expect to have some OuterLoop tests that are attributed for "special needs" such as machine setup. Forcing them into separate projects seems artificially confining. Other than simplifying the build tool's job, what is the motivation to say every test can belong to only one category and that the project name itself defines that?
  • Regarding the MSBuild property name, I prefer /p:IncludeTests and /p:ExcludeTests. The name should not reflect how the build tools are implemented (e.g. by project "type").
  • Wes's suggestion about pre- and post- steps is a vital requirement for the WCF team. We need a hook to launch a self-hosted service and then to terminate it when done.

@Chrisboh
Member

Hi @roncain see the responses below.

Stress and Perf should be kept distinct and runnable separately. They don't necessarily fit in the same bucket. For example, Stress might run for days and Perf for a few minutes.

With regard to the loop concept, stress and performance do fit in the same loop. That being said, I do agree there is a high likelihood that we would need to schedule the performance run separately from the stress run. To achieve this, I am proposing that we leverage the xunit attributes for this secondary filtering.

I'm not crazy about identifying categories by project or DLL name. That might make the build tool's life easier but feels too rigid. It violates my design requirement above to have a test marked with multiple categories. For example, we expect to have some OuterLoop tests that are attributed for "special needs" such as machine setup. Forcing them into separate projects seems artificially confining. Other than simplifying the build tool's job, what is the motivation to say every test can belong to only one category and that the project name itself defines that?

So there are two requirements that conflict with each other: the first is that you can decorate a test case with multiple attributes, and the second is that we only build what is required to execute. To satisfy the first requirement we would need to build every single test project regardless of whether any test cases in that project meet the criteria for execution. As we grow, this build time will become costly and is something we should try to avoid as much as possible.

Regarding the MSBuild property name, I prefer /p:IncludeTests and /p:ExcludeTests. The name should not reflect how the build tools are implemented (e.g. by project "type").

I am not sure I fully understand how you intend this to be used. Could you elaborate a bit more on this?

Wes's suggestion about pre- and post- steps is a vital requirement for the WCF team. We need a hook to launch a self-hosted service and then to terminate it when done

Agreed, which is why I want to separate the machine setup work from the test selection for hard requirements like this. To run our loops we will need to know which tests we need to run and how they need to run. Selection of tests for these types of scenarios should be consistent: it will select the ring and then the xunit attributes of the tests that fall into that category.

@roncain
Contributor

roncain commented May 18, 2015

Thanks for the discussion. I think we're getting more good functional requirements out on the table now. Build time is one.

Regarding the use of project names to identify the loops -- I still think it is too confining and would like to find another way to achieve the goal of reduced build time. First, requiring InnerLoop in unit test project names is unnecessary noise -- that should be the default. Second, teams organize tests functionally (unit tests typically by type, functional tests by area and scenario). The loop mechanism is a cross-cutting classification system and should not contort a team's test organization. I still think we are letting the build tool's proposed implementation dictate too much of the test structure. I think you can expect pushback here as teams learn they are required to reorganize according to loop abstractions.

My preference would be an MSBuild item collection inside each project that tags it with the ring(s) it belongs to. You don't have to build a project to get this information.

Regarding property names like /p:IncludeTests and /p:ExcludeTests, I'm really talking about better names for the old WithCategories and WithoutCategories. I was trying to make sure the ability to exclude certain categories was not forgotten. And I was trying to avoid "TestProjectType" because a name like that implies test categories must be unique project types, which again is an implementation detail. Better to stick to the abstraction that a loop is a category of test.

Regarding /p:AdditionalXunitArgs, I think it tries to satisfy two different purposes: 1) naming the test trait filters, and 2) passing arbitrary extra parameters to xUnit. I think it would be better to keep those separate. I'd like the traits to be more parallel with the loop filters, such as /p:IncludeTraits and /p:ExcludeTraits.

Regarding pre- and post- steps -- is there a proposal for where this fits in? In the case of WCF, we would need some hook that says "when you need to run a test in category X, please execute this setup script/method/exe, and when you are done running all tests in X, please execute this custom teardown script/method/exe." Right now we are editing build.cmd to insert this, but that doesn't work for the VS debugging experience.

Finally, has anyone proposed a way to get arguments from the command line down to the tests? We really need a way to say something like "build /p:WcfTestServer=http://myServer /p:IncludeTests=OuterLoop". People have talked about using environment variables for this, but not all platforms may support them. It would be better to drive test parameters in from the build command in some way.

@Chrisboh
Member

Hi @roncain,

Thanks for the feedback. See responses below.

Should we enforce separation based on project file name?

I hear your concern here and it is one we have been debating in the halls here as well. No matter what we pick, I feel that at a bare minimum we will need to separate our test cases into project files for each ring. It is technically possible to drop everything into a single project file and conditionally include what is needed, but that brings a ton of unnecessary complexity with regard to building and working within Visual Studio.

With all of that said, the real crux of the question above is: how costly is it to call MSBuild on all of the projects, taking into account that we will be adding a lot more test cases, just to determine whether we want to build their contents? In my initial planning I chose to keep as much as possible from building, but I do agree with you that we should, at the very least, prototype what kind of cost we would incur if we let MSBuild evaluate the project properties and conditionally choose whether to build.

Property Naming

I think you are conflating the two levels of filtering. The old WithCategories property acted upon the xunit attributes, so having the ability to include and exclude can seem more logical. However, with my proposal, what you're suggesting doesn't work at the top level of filtering. Let me give some examples which will hopefully better explain how these properties will be used.

This will run all InnerLoop test cases that have no known issues
msbuild /p:TestCategory=InnerLoop /p:AdditionalXunitArgs="-notrait ActiveIssue"

This will run all InnerLoop and Functional test cases
msbuild /p:TestCategory="InnerLoop;Functional"

This will run only the stress tests that are in the Partner Loop
msbuild /p:TestCategory="Partner" /p:AdditionalXunitArgs="-trait Stress"

Regarding your proposal to rename AdditionalXunitArgs to IncludeTrait and ExcludeTrait, it might make more sense to add those in addition to AdditionalXunitArgs, because I feel there will be times when users would like to use the -method or -class switches for debugging or testing purposes.
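
For example (the test method name here is purely illustrative):

msbuild /p:TestCategory=InnerLoop /p:AdditionalXunitArgs="-method System.ObjectModel.Tests.SomeTest"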

Pre and Post requirements

I will be honest here and say that while I have tried to account for these scenarios as best I can, I need a bit more information to understand all the specifics around this. That said, here is how I currently see us handling these types of test cases.

Test cases that need special setup and teardown but no admin privileges
These test cases should leverage the built-in capabilities of xunit: all setup for these tests should be done in the test class constructor and all teardown in the Dispose method.
Xunit documentation can be found here: http://xunit.github.io/docs/shared-context.html
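
A minimal sketch of that shared-context pattern (constructor for setup, Dispose for teardown, and a class fixture shared by all tests in the class); all names here are illustrative:

using System;
using Xunit;

// Shared, expensive state: created once for the test class, disposed when its tests finish.
public class ServerFixture : IDisposable
{
    public ServerFixture() { /* start an in-process test server; no admin rights needed */ }
    public void Dispose() { /* shut the server down */ }
}

public class ClientTests : IClassFixture<ServerFixture>, IDisposable
{
    private readonly ServerFixture _server;

    // Per-test setup runs in the constructor; xunit injects the shared fixture.
    public ClientTests(ServerFixture server)
    {
        _server = server;
    }

    // Per-test teardown runs in Dispose.
    public void Dispose() { }

    [Fact]
    public void Client_CanConnect()
    {
        Assert.NotNull(_server);
    }
}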

Test cases that need special setup and teardown with admin privileges
For these tests we will need to configure the machine before xunit executes and again after it completes. This work will happen at the machine-level scheduling (currently Jenkins for our open code), and all such tests will need to be decorated with an attribute labeling them as requiring machine setup.

Passing command line parameters to a test

In looking through the documentation there doesn't appear to be a direct way to pass data from the command line to a test case. The best way I can see this working would be to leverage configuration files: http://xunit.github.io/docs/configuration-files.html.

We would have to create an App.config that has the key/value pair you would like to use, with some default value, and then have some bit of code that can be called at build time or at scheduling time to overwrite that value with the one you would like to use after the project has been built.
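
A hypothetical example of such a config entry (the key name and default value are placeholders, borrowing the WcfTestServer example from earlier in the thread); the build or scheduling step would rewrite the value before the tests run:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Default value checked in; overwritten at build/scheduling time with the real endpoint. -->
    <add key="WcfTestServer" value="http://localhost" />
  </appSettings>
</configuration>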

@roncain
Contributor

roncain commented May 19, 2015

Thanks for the responses @Chrisboh.

Test cases that need special setup and teardown but no admin privileges
I don't think the xunit class ctor will work for us. The setup that needs to happen requires the full framework. The CoreCLR environment the test is executing against does not support what we need to do. The second issue is that we need a single setup and teardown for the entire collection of tests selected in /p:TestCategory, not once per test class.

Test cases that need special setup and teardown with admin privileges
Our needs here extend beyond Jenkins. We will want developers to be able to configure their own machines in-house and then run these opt-in special tests. Jenkins will be just one instance of a configured machine.

Passing command line parameters to test
We'll investigate a pre-build step that alters the config or generates code prior to the build.

Should we enforce separation based on project file name?
Yes, please consider all options before requiring teams to name their projects with loop names. My preference is the MSBuild properties or item collections, of course, but a fallback to that might just be requiring the folders to bear loop names. It is hard enough to organize test folders and test projects to accurately describe what they test. Adding a loop name to each project into the mix will make things more difficult. I could handle 20 projects under an OuterLoop folder having meaningful names, but having 20 projects randomly ordered with OuterLoop in their name would not be as nice.

@iamjasonp
Member

👍 on @roncain's comments regarding

Should we enforce separation based on project file name?
Yes, please consider all options before requiring teams to name their projects with loop names. My preference is the MSBuild properties or item collections, of course, but a fallback to that might just be requiring the folders to bear loop names. It is hard enough to organize test folders and test projects to accurately describe what they test. Adding a loop name to each project into the mix will make things more difficult. I could handle 20 projects under an OuterLoop folder having meaningful names, but having 20 projects randomly ordered with OuterLoop in their name would not be as nice.

I personally dislike the idea of enforcing a naming scheme for *.csproj files; it seems like too brittle a mechanism. If need be, categorizing tests by folder names seems fine to me, for example:

tests/feature1/OuterLoop/scenario1
tests/feature1/InnerLoop/scenario1
tests/InnerLoop/feature3/scenario1
tests/InnerLoop/feature3/scenario2

I feel this gives consumers of xunit a bit of a nudge to a sane, standard-ish naming scheme, but doesn't force things to happen at a filename level, which, from my experience, turns out to be finicky.

Filtering
I don't know exactly how this will pan out in terms of implementation, but in my mind, here are a few things I want to do:

  • "I want to run all tests for validating my pull request"
  • "I want to run all outer loop tests"
  • "I want to run all inner loop tests that test feature1, feature2."
  • "I want to run all inner and outer loop tests that test feature1, feature2."
  • "I want to run all (outer loop) stress, performance, and partner tests that test feature3."

In my mind, the above statements really look something like filtering by: importance - inner/outer/partner/functional (loop?), test type - functional/stress/perf (trait?), and feature (also trait?). I look at feature as more being a tagging function, where 1 test can belong to multiple "features", whereas 1 test should belong to only 1 "importance" level and 1 "test type".

Maybe it's supported and it's just a matter of finding the right name 😁.

@CIPop
Member

CIPop commented May 27, 2015

👍 on @roncain's prerequisite requirements.

For System.Net (NCL) testing we'll need to standardize a way to install additional client/server machines and configure the test machine. I propose the following requirements for the test harness handling non-Unit tests:

Support for one-time prerequisite installation, such as execution of PowerShell scripts (Bash on Linux/Mac)

  • These should be able to execute with elevated privileges (Administrator/root) to perform configuration such as installing certificates, creating user accounts and installing/configuring remote clients or servers.
  • The scripts must be able to execute on a machine that doesn't have the git repo.
  • The scripts should allow installation and uninstallation of the prerequisites.
  • Developers shouldn't be required to execute these scripts if the prerequisites didn't change. (E.g., no need to reinstall IIS or Apache on every re-execution of the Functional-test part of the InnerLoop.)

A standardized test configuration module

Allow test-class initialization checks and report "Inconclusive" results for tests not executed due to environment issues
We've found that test results are much simpler to understand and triage when we separate network issues from what is being tested. For example, if the remote server is temporarily unavailable, tests should be switched to "inconclusive" and should clearly report the network issue (ICMP ping failed / TCP port ping failed).

@joshfree
Member

joshfree commented Jul 9, 2015

/cc @piotrpMSFT

@davidsh
Contributor

davidsh commented Jul 27, 2015

/cc: me

@Priya91
Contributor Author

Priya91 commented Jul 31, 2015

We also need infrastructure for multi-machine test cases, for example to test the PerformanceCounterLib class in System.Diagnostics.Process. That class is only used for accessing process information on remote machines.

@joshfree
Member

@Chrisboh can you share an update?

@Chrisboh
Member

In revisiting this, most of the items I have listed have been implemented.

Build only what we need
@sokket just checked in the .builds files for test projects, allowing us to build only the projects we need for a given scenario.

XUnit Filtering
Active Issue and Platform specific attributes already exist and can be used for further filtering.

Command Line
Command line parameters are defined and documented. While they could be improved they are functional. Also @ianhays has added support for running inner and outer loop tests together.

Rings
Lastly, we will need to continue to dive into and define the concepts around what inner loop and outer loop mean, and the various rings of outer loop. I think this concept deserves its own issue, and I will open a new issue for that.

@msftgits msftgits transferred this issue from dotnet/corefx Jan 31, 2020
@msftgits msftgits added this to the 1.0.0-rtm milestone Jan 31, 2020
@ditta95aR ditta95aR mentioned this issue Jan 31, 2020
@ghost ghost locked as resolved and limited conversation to collaborators Jan 6, 2021