Support for Seed #670

Closed
matkoch opened this issue Jul 5, 2016 · 9 comments

Comments

matkoch commented Jul 5, 2016

It would be nice to be able to repeat tests and have the same dummy values provided by AutoFixture.

ploeh (Member) commented Jul 6, 2016

Could you elaborate?

moodmosaic (Member) commented:

Do you mean something similar to what QuickCheck (and clones) do?

matkoch (Author) commented Jul 6, 2016

@moodmosaic Sorry, I'm not aware of QuickCheck :) but I will check it out later.

@ploeh Actually, we already had this discussion here. I think the arguments still hold.

To summarize: for some projects (though not all), it would be handy to repeat a test run and have AutoFixture generate exactly the same values as in a failed run, for instance a run that happened on a CI server.
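A minimal sketch of the kind of intermittent failure being described (my own illustration, not from the original report; it assumes xUnit, the pre-4.0 Ploeh.AutoFixture namespace, and a hypothetical DiscountCalculator under test):

```csharp
using Ploeh.AutoFixture;   // AutoFixture 3.x namespace
using Xunit;

public class DiscountTests
{
    [Fact]
    public void DiscountNeverExceedsPrice()
    {
        // The anonymous values differ on every run, so a failure observed
        // on the CI server may not reproduce locally unless the exact
        // values are known.
        var fixture = new Fixture();
        var price = fixture.Create<decimal>();
        var percentage = fixture.Create<int>();

        var discount = DiscountCalculator.Apply(price, percentage); // hypothetical SUT

        Assert.True(discount <= price);
    }
}
```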

moodmosaic (Member) commented:

> it would be handy to repeat a test run and have AutoFixture generate exactly the same values as in a failed run

AFAICT, that would require changing some of the built-in generators (for one, strings could no longer be random GUIDs).
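For context, a quick sketch (my own, not part of the original comment) of why the default builders would have to change: strings are backed by Guid.NewGuid() and numbers come from generators that cannot be seeded from the outside, so the output differs on every run.

```csharp
using System;
using Ploeh.AutoFixture;

class Demo
{
    static void Main()
    {
        var fixture = new Fixture();

        // With the default builders, strings are GUID-based, e.g.
        // "7a7f9efb-6f2c-44c8-9b1d-0f0e3a1c2d4e", and therefore different
        // on every run; the same goes for the generated numbers.
        Console.WriteLine(fixture.Create<string>());
        Console.WriteLine(fixture.Create<int>());
    }
}
```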

ploeh (Member) commented Jul 26, 2016

While this would be nice to have, it'd require a major rewrite of the AutoFixture kernel. Although I haven't spent any significant time thinking this through, I'd be surprised if this was possible without introducing breaking changes.

In all the years I've been using AutoFixture, I've never needed this feature. Likewise, I can't remember having received a similar feature request before now.

These days, I mostly write F# code, and for that, I use FsCheck, which does have a replay feature. Even so, I never use that feature with FsCheck.
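For reference (my addition, not part of the original comment), FsCheck's xUnit integration exposes that replay feature through the PropertyAttribute; the exact format of the seed string depends on the FsCheck version, so the value below is only a placeholder:

```csharp
using System.Linq;
using FsCheck.Xunit;

public class ReplayExample
{
    // Re-runs the property with a recorded seed instead of fresh random
    // data; FsCheck prints the seed pair when a property fails.
    [Property(Replay = "1234,5678")]
    public bool ReverseTwiceIsIdentity(int[] xs)
    {
        return xs.Reverse().Reverse().SequenceEqual(xs);
    }
}
```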

For all of those reasons, I consider the utility of this feature limited, and the ratio of effort to benefit too high to be warranted.

It's possible that I'm mistaken. If someone wants to prove me wrong with a series of good pull requests, I shall be happy to reconsider my position.

For now, however, I'm going to close this issue.

ploeh closed this as completed Jul 26, 2016
ploeh (Member) commented Jul 26, 2016

BTW, reproducing an issue that happened on a CI server shouldn't be that difficult. I've been in that situation more than once.

Often, the output from the test run should already enable you to understand what went wrong. Sometimes, I grant, this hasn't been the case for me. When that happens, I always take that as an indication that my assertion messages are insufficiently useful, and then I refactor the test in question to include better failure messages.

Later failures of Erratic Tests will then enable me to zero in on the problem using only the test log.

This, I find, improves the overall quality of my tests.
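A rough sketch of that advice (my own example, assuming xUnit and the hypothetical values from a test like the one shown earlier):

```csharp
// Instead of a bare boolean assertion...
Assert.True(discount <= price);

// ...put the anonymous values into the failure message, so the log from
// a failed CI run contains everything needed to reproduce the case.
Assert.True(
    discount <= price,
    $"Discount {discount} exceeded price {price} (percentage: {percentage}).");
```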

0xorial commented Aug 4, 2017

We had that problem on our project, and it actually was quite hard to find out what had happened, because it occurred very rarely. It also does not feel right to have tests that are not 100% reproducible. It's a real pity AutoFixture does not support a global seed...

Update: I inspected the code briefly, and it seems that if we create an implementation of DefaultRelays that uses a seed and use it instead of DefaultPrimitiveBuilders, that should be sufficient to provide the feature without any breaking changes. Could someone please confirm or refute that idea?
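For what it's worth, here is a rough, untested sketch of a lighter-weight variant of that idea: a seeded specimen builder added to Fixture.Customizations rather than a replacement for the engine parts (SeededPrimitiveBuilder is my own name, not an AutoFixture type):

```csharp
using System;
using Ploeh.AutoFixture.Kernel;

// Rough sketch: serve int and string requests from one seeded Random so
// that two fixtures created with the same seed produce the same sequence
// for those types; every other request falls through to the default
// builders via NoSpecimen.
public class SeededPrimitiveBuilder : ISpecimenBuilder
{
    private readonly Random random;

    public SeededPrimitiveBuilder(int seed)
    {
        this.random = new Random(seed);
    }

    public object Create(object request, ISpecimenContext context)
    {
        var type = request as Type;

        if (type == typeof(int))
            return this.random.Next();

        if (type == typeof(string))
            return "seeded-" + this.random.Next();

        return new NoSpecimen();
    }
}
```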

zvirja (Member) commented Aug 4, 2017

@0xorial In theory, yes, that should be enough. However, each specimen builder would need to be inspected to check whether it uses its own source of randomness under the hood (like RandomRangedNumberGenerator, which is not enabled by default). For instance, the RandomNumericSequenceGenerator builder uses Random, which would have to be initialized with the seed; GuidGenerator works without any seed at all and simply generates new random Guid values; StringGenerator uses Guid.NewGuid() as well.

Concurrency is another story. Some integrations (like NSubstitute) generate values on the fly during method execution. Therefore, if any of your tests involve concurrency, you could get different values depending on the execution order, because AutoFixture has internal state that changes as values are generated.

As Mark already mentioned, this is quite a big task, and a lot of things inside AutoFixture would have to be revisited to support it (e.g. generating predictable Guids). It's also still a question whether AutoFixture needs this feature at all (see this answer), as it will cost a lot to maintain. 😉

0xorial commented Aug 4, 2017

@zvirja, thanks for the response!
To be honest, I totally forgot about Guid.NewGuid() and only searched for new Random(). Luckily, even so, it seems there are not many places where random values are actually created.

As for concurrency: if you have it in your tests, you can't expect consistent results from AutoFixture anyway; there is no way it can guarantee producing the same results for a different sequence of requests (unless every request is unique and created beforehand, but I don't know the library well enough to judge whether that is the case). The biggest problem I see here is that allowing a seed would create false expectations: that it will somehow magically work and produce the same results every time.

Anyhow, since we can inject different DefaultRelays into Fixture, I'm just going to try to fix it inside our project ^^
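A usage sketch building on the hypothetical SeededPrimitiveBuilder above (again my own illustration, not existing AutoFixture API):

```csharp
var seed = 12345; // e.g. a seed logged by the CI run

var first = new Fixture();
first.Customizations.Add(new SeededPrimitiveBuilder(seed));

var second = new Fixture();
second.Customizations.Add(new SeededPrimitiveBuilder(seed));

// Both fixtures now yield the same int/string sequence, which is what
// would make replaying a failed run possible for those types.
Console.WriteLine(first.Create<int>() == second.Create<int>());       // True
Console.WriteLine(first.Create<string>() == second.Create<string>()); // True
```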

P.S. Reflecting on why we have this issue while others say they never faced it, I came to the conclusion that it's because our code base is quite dirty (no argument checks, somewhat undefined behaviors here and there, etc.). Still, test reproducibility sounds like a very sane expectation to me!
