Support for Seed #670
Comments
Could you elaborate?
Do you mean something similar to what QuickCheck (and clones) do?
@moodmosaic Sorry, I'm not aware of QuickCheck :) but I'll check it out later. @ploeh Actually, we had this discussion already here. I think the arguments still hold. To summarize: for some (though not all) projects, it would be handy to repeat a test run and have AutoFixture generate the exact same values as in a failed run; for instance, a run that happened on a CI server.
AFAICT, that would require changing some of the built-in generators (for one, strings should no longer be random GUIDs).
While this would be nice to have, it'd require a major rewrite of the AutoFixture kernel. Although I haven't spent any significant time thinking this through, I'd be surprised if this was possible without introducing breaking changes. In all the years I've been using AutoFixture, I've never needed this feature. Likewise, I can't remember having received a similar feature request before now. These days, I mostly write F# code, and for that, I use FsCheck, which does have a replay feature. Even so, I never use that feature with FsCheck. For all of those reasons, I consider the utility of this feature limited, and the ratio of cost to benefit to be unwarranted. It's possible that I'm mistaken. If someone wants to prove me wrong with a series of good pull requests, I shall be happy to reconsider my position. For now, however, I'm going to close this issue.
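To make the replay idea mentioned above concrete: QuickCheck-style replay works by deriving every generated value from a single recorded seed, so feeding the failing seed back in reproduces the exact case. A minimal, language-agnostic sketch of that mechanism (in Python; `generate_case` is a hypothetical name, not an FsCheck or AutoFixture API):

```python
import random

def generate_case(seed):
    """Deterministically derive a test case from a seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 100) for _ in range(3)]

# First run: pick a fresh seed and report it if the test fails.
seed = 12345  # in practice, drawn randomly and logged on failure
case = generate_case(seed)

# Replay: feeding the recorded seed back in reproduces the exact same case.
assert generate_case(seed) == case
```

The key property is that the seed, not the generated values, is the thing recorded and replayed.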
BTW, reproducing an issue that happened on a CI server shouldn't be that difficult. I've been in that situation more than once. Often, the output from the test run should already enable you to understand what went wrong. Sometimes, I grant, this hasn't been the case for me. When that happens, I always take that as an indication that my assertion messages are insufficiently useful, and then I refactor the test in question to include better failure messages. Later failures of Erratic Tests will then enable me to zero in on the problem using only the test log. This, I find, improves the overall quality of my tests.
We had that problem on our project, and it actually was quite hard to find out what happened, because it happened very rarely. Also, it does not feel right to have tests which are not 100% reproducible. It's a real pity AutoFixture does not support a global seed. UPD: I inspected the code briefly, and it seems that if we create an implementation of DefaultRelays which uses a seed, and use it instead of DefaultPrimitiveBuilders, that should be sufficient to get this feature without any breaking changes. Could someone please support or disprove that idea?
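For illustration, the seeded-builder idea proposed above could look something like the following sketch, transposed to Python (`SeededPrimitiveBuilder` is a hypothetical name that only mimics the shape of AutoFixture's `ISpecimenBuilder.Create`; it is not the actual API):

```python
import random

class SeededPrimitiveBuilder:
    """Sketch: resolve primitive 'requests' (here, plain types) from one seeded RNG."""

    def __init__(self, seed):
        self.rng = random.Random(seed)

    def create(self, request):
        if request is int:
            return self.rng.randint(0, 255)
        if request is str:
            # Deterministic stand-in for AutoFixture's GUID-based strings.
            return f"str-{self.rng.randint(0, 2**32):08x}"
        return NotImplemented  # defer to the next builder in the chain

# Two builders with the same seed produce the same sequence of values.
a = SeededPrimitiveBuilder(seed=42)
b = SeededPrimitiveBuilder(seed=42)
assert [a.create(int), a.create(str)] == [b.create(int), b.create(str)]
```

Note that determinism here depends on the requests arriving in the same order, which is exactly the caveat raised in the following comments.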
@0xorial In theory, yes, that should be enough; however, each specimen builder would have to be inspected to check whether it uses its own seed under the hood. Another story is concurrency: some integrations may issue requests in parallel, so even with a fixed seed the order of generated values is not guaranteed. As Mark already mentioned, this is quite a big task, and a lot of stuff inside AutoFixture would have to be revisited to support it (e.g. to generate predictable Guids). It's still a question whether AutoFixture needs this feature (see this answer), as it will cost a lot to maintain 😉
@zvirja, thanks for the response! Regarding concurrency: if you have it in your tests, you can't expect consistent results from AutoFixture anyway; there is no way it can guarantee producing the same results for a different sequence of requests (unless every request is unique and created beforehand, but I don't know the library well enough to judge whether this is the case). The biggest problem I see here is that allowing a seed will create false expectations: that it will somehow magically work and produce the same results every time. Anyhow, since we can inject different builders, we can experiment on our side. P.S. Reflecting on why we ran into this issue while others say they never have, I came to the conclusion that it's because our code base is quite dirty (no argument checks, some undefined behaviors here and there, etc.). Still, test reproducibility sounds like a very sane expectation to me!
It would be nice to repeat tests and have the same dummy values provided by AutoFixture.