
Conversation

@andreasjordan
Collaborator

@andreasjordan commented Dec 28, 2025

Let me test some ideas from #10069

Looks good to me. @niphlod - what do you think?

How can we trigger a complete test run on this branch?

@andreasjordan marked this pull request as ready for review December 28, 2025 20:52
@potatoqualitee
Member

Will wait for niph's approval 👍🏼

@potatoqualitee
Member

potatoqualitee commented Dec 28, 2025

I do not know how to trigger a whole run 😅 let me see what I can do. I think I'll make a branch off of this and then push, because it runs my branches in full.

https://ci.appveyor.com/project/dataplat/dbatools/builds/53297302

@potatoqualitee
Member

Looks like one command keeps failing (Set-DbaStartupParameter or something), unsure if it's related.

@potatoqualitee
Member

Path : Set-DbaStartupParameter/Validate command functionality/Ensure the startup params are not duplicated when more than one server is modified in the same invocation
Name : It Ensure the startup params are not duplicated when more than one server is modified in the same invocation
Result : Failed
Message : ErrorRecord: Expected 1, but got 2.
StackTrace :
RawErrorRecord : Expected 1, but got 2.
Uploading artifact DetailedTestFailures_Pester5.json (1,073 bytes)...100%
1 tests failed.
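Not the actual dbatools test, but a minimal sketch of the Pester assertion pattern behind "Expected 1, but got 2": a count of a startup parameter that should appear exactly once per server after one multi-server invocation. The $server1/$server2 variables and the TraceFlags property are illustrative assumptions.

```powershell
Describe "Set-DbaStartupParameter" -Tag "IntegrationTests" {
    It "does not duplicate startup params across multiple servers" {
        # Set the same trace flag on two instances in a single invocation.
        $result = Set-DbaStartupParameter -SqlInstance $server1, $server2 -TraceFlag 3226 -Confirm:$false
        foreach ($r in $result) {
            # If the flag were appended a second time instead of deduplicated,
            # this count would be 2 and Pester would report "Expected 1, but got 2."
            ($r.TraceFlags | Where-Object { $_ -eq 3226 }).Count | Should -Be 1
        }
    }
}
```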

@andreasjordan
Collaborator Author

Set-DbaStartupParameter uses instance1 and instance2 and was part of the scenario service_restarts. As I removed that scenario, the test is now auto-detected into the scenario default, which should work. But the test needs refactoring anyway and is a good example of instanceMulti, so I will create a new scenario for it and see if it passes.

@andreasjordan
Collaborator Author

Now I see the problem. The test uses instances 1 and 2, but the scenario service_restarts only spins up instances 2 and 3. So the test never worked correctly, but we haven't noticed because we don't look at the details of the test runs.
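A minimal sketch (not part of the repo) of how such a mismatch could be caught automatically: scan a test file for the $TestConfig.instanceN references it uses and compare them with the instances the scenario actually starts. The file path and scenario instance list are assumptions for illustration.

```powershell
$scenarioInstances = 2, 3   # what service_restarts spins up
$testFile = '.\tests\Set-DbaStartupParameter.Tests.ps1'

# Collect every $TestConfig.instanceN reference in the test file.
$usedInstances = Select-String -Path $testFile -Pattern '\$TestConfig\.instance(\d)' -AllMatches |
    ForEach-Object { $_.Matches } |
    ForEach-Object { [int]$_.Groups[1].Value } |
    Sort-Object -Unique

# Warn if the test needs an instance the scenario never starts.
$missing = $usedInstances | Where-Object { $_ -notin $scenarioInstances }
if ($missing) {
    Write-Warning "Test references instance(s) $($missing -join ', ') that the scenario does not start."
}
```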

@andreasjordan
Collaborator Author

Now the test works as expected.

Next steps:

  • Move more tests that use multiple instances to scenario MULTI.
  • Add a scenario SINGLE, move all tests that only use a single instance to that scenario, and run it against the SQL2017 instance to see if they all work there.

Later we could try to use another base image that has newer SQL Server versions. Or build our own image with two instances of version 2022 or 2025. Then we could also enhance the AG tests.
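A rough sketch of what the scenario split could look like on the runner side, assuming scenarios are expressed as Pester tags and selected via an environment variable (both names are assumptions, not the actual CI code):

```powershell
# Pick the scenario for this run, e.g. 'SINGLE', 'MULTI' or 'HADR'.
$scenario = $env:SCENARIO

# Pester 5 configuration: run only the tests tagged with this scenario.
$config = New-PesterConfiguration
$config.Run.Path = '.\tests'
$config.Filter.Tag = $scenario
$config.Output.Verbosity = 'Detailed'
Invoke-Pester -Configuration $config
```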

@andreasjordan
Collaborator Author

But I will pause this now and wait for @niphlod to give feedback.

@potatoqualitee
Member

welllll, it takes the exact same amount of time, so there's that: 1 hour 20 mins

@andreasjordan
Collaborator Author

It will always take that long, as we cannot run any tests in parallel. We could just remove the scenario concept, run all tests in a row, and it would take the same time. But when I look at my lab, the PowerShell process accumulates RAM and might get into trouble when running all the tests. So having different scenarios might help here, since a fresh VM is started for each scenario. And we might change the instance configuration based on the scenario in the future.

If we want to speed up the tests, we need to work on the long-running tests.
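A small sketch (not from the repo) of how that RAM growth could be made visible: record the runner's working set before and after a batch of tests, so scenarios that accumulate memory stand out. The test path is illustrative.

```powershell
# Snapshot the PowerShell process memory before the test batch.
$before = (Get-Process -Id $PID).WorkingSet64 / 1MB

Invoke-Pester -Path '.\tests'

# Snapshot again and report the growth accumulated by the runner process.
$after = (Get-Process -Id $PID).WorkingSet64 / 1MB
Write-Host ("Working set grew from {0:N0} MB to {1:N0} MB" -f $before, $after)
```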

@andreasjordan
Collaborator Author

The new scenarios will also help in other labs. I could set up a lab for the HADR scenario and only run those tests when I work on those commands.

@potatoqualitee
Member

it seems balanced to me, Simone seems away, let me know if you'd like me to merge

@andreasjordan
Collaborator Author

Let's wait for Simone to be online again. I really want to get his feedback first.

@niphlod
Contributor

niphlod commented Jan 2, 2026

I dig the idea of separating tests "by feature needed" rather than "by version needed".
"Balancing" doesn't matter at the moment, since regardless of AppVeyor's matrix we only have one concurrent build allowed.
Auto-assigning tests works fine; if on the next few builds we notice some tests failing often, they can be moved to the proper scenario "manually" (that was always the case even before).
The "service_restarts" scenario was meant for "destructive/possibly disruptive" tests... I'd leave those in their separate run, just because some tests really DID do something destructive to the running instances.

tl;dr: I don't really wanna say "it depends" because it's just a s***y answer, but I don't see why we can't take this road and see if it works better than the current one. Only a good number of runs will reveal whether we can follow this path or need to adjust it slightly as need occurs.

@andreasjordan
Collaborator Author

Hi Simone! Thanks for your feedback. Then let's take this road.

Let me do some tests in my lab and then I'll give the go to merge this.

@andreasjordan
Collaborator Author

Start-DbaMigration was added to the COPY scenario in the last step and seems to block Copy-DbaDatabase.

Analyzing...

@andreasjordan
Collaborator Author

Ok, works. Ready to be merged.

@potatoqualitee
Member

will merge once tests pass 👍🏼

@andreasjordan
Collaborator Author

Now every test passes.

@potatoqualitee merged commit c9c3ca0 into development Jan 6, 2026
14 checks passed
@potatoqualitee deleted the refactor_test_scenarios branch January 6, 2026 16:03
@potatoqualitee
Member

thank you!
