Initial value of out parameters is not ignored when asserting calls #215
Hi, @Dashue. Thanks for bringing this up. I like the idea of somehow not constraining on the initial values of out parameters. Ultimately, we'll need to wrangle back and forth over the exact syntax/API. Could you provide a full example of what you would like to see (not an implementation)? For example, I can't tell where the `ignore` in your example comes from. I also have some additional things to consider.
Actually, I'm recalling a comment that @adamralph made not too long ago, about the possibility of having something along these lines.
I'll go ahead and bump this to see if people still feel the same about it and, if so, to figure out a path forward.
This is something I hit recently as well, and I'd love to see some way to handle this situation. The proposed `IgnoreOutParametersInitialValue()` sounds reasonable to me.
I'm not really in favour of changing the current `Ignored` property. @FakeItEasy/owners, what do you think of the following? I've not tried to implement any of it, and there is definitely room for improvement in the wording and so forth, but I think it might work:

```csharp
A.RefIgnored<string>().Leaves().Arg
A.RefIgnored<int>().AssignsLazily(x => 10).Arg
A.Ref(A<int>.That.IsGreaterThan(3)).Leaves().Arg
A.Ref(3).Leaves().Arg
A.Ref(A<string>.That.StartsWith("foo")).AssignsLazily(x => "bar").Arg
```

```csharp
// One option: factory methods that return the specification directly.
public static RefParameterSpecification<T> RefIgnored<T>()
{
}

public static RefParameterSpecification<T> Ref<T>(T valueOrConstraint)
{
}

// Alternatively, return a builder so that Leaves/AssignsLazily can be chained.
public static IRefParameterSpecificationBuilder<T> Ref<T>(T valueOrConstraint)
{
}

public static IRefParameterSpecificationBuilder<T> RefIgnored<T>()
{
}

public interface IRefParameterSpecificationBuilder<T>
{
    RefParameterSpecification<T> Leaves();
    RefParameterSpecification<T> AssignsLazily(Func<T, T> assignment);
}

public struct RefParameterSpecification<T>
{
    public T Arg;
}
```
Very interesting, @patrik-hagne. As you say, the language could use some tweaking (I'm not sure "Leaves" will mean much to people right off), but generally I like it!
I'm trying to wrap my head around why the input value of an out parameter is considered at all, and why ignoring it is opt-in rather than the default. This comment touches more on why we need a solution than on actually finding one; sorry about that.
I have no further feedback on this; I'll wait for the higher powers to decide and then see if I can chip in some labour. :)
I'd say that the main reason for the input value to be considered is that it's not always out parameters; the same mechanism is used for ref parameters as well.
Ah, I see. I think I remember @blairconrad telling me that ref and out are treated and handled the same. Is this the case? And if so, would it be a good idea to investigate handling ref and out separately? I can understand the initial value being a necessity for ref.
@Dashue, the ref and out parameters (and all the "normal" parameters too) are handled the same way: a constraint is built from the expression in the `A.CallTo` call.

And I agree with you about special-casing this. I'm still of the opinion that using an always-true constraint for out parameters is a good idea.
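To make that concrete, here's a hedged sketch of what "a constraint is built from the expression" means in practice. The `ICache` interface is invented for illustration; the FakeItEasy calls shown (`A.Fake`, `A.CallTo`, `MustHaveHappened`, `A<T>.That`) are the library's public API:

```csharp
using FakeItEasy;

public interface ICache
{
    bool TryGet(string key, out string value);
}

public class Demo
{
    public static void Main()
    {
        var cache = A.Fake<ICache>();
        string value;
        cache.TryGet("answer", out value);

        // Every argument constraint comes from the assertion expression below:
        // the "key" argument must satisfy the matcher, and (before the change
        // discussed in this issue) the recorded out argument also had to equal
        // the current value of `seen`.
        string seen = null;
        A.CallTo(() => cache.TryGet(A<string>.That.StartsWith("ans"), out seen))
            .MustHaveHappened();
    }
}
```

This is why out arguments historically participated in call matching: they flow through the same expression-to-constraint machinery as every other argument.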
Hmm. I may have accidentally implemented value-ignoring out parameters. While looking around to get enough info to make my last comment, I saw that it would be easy to check the outness of an argument.
I'm probably in favor of ignoring the input value of out parameters as well. Keep in mind that it's a breaking change, though.
Bah. Usually I'm the one crying "breaking change". I missed my opportunity. You're right, of course. Now, relying on the constraints that FakeItEasy applies to out parameters is, I would argue, a programming error, but you're right that someone might be adversely affected.
So this raises an interesting challenge. Can we, and do we want to, make this a non-breaking change? Or do we bite the bullet because the current behaviour is "not the right way"?
It would be possible to configure this behaviour using the Bootstrapper, but I'm not wild about the idea, mostly for two reasons.
Of course, we could introduce another mechanism, such as the ones @patrik-hagne outlined, to allow people to start ignoring initial values for out parameters.

As far as the "breaking change" aspect goes, since we're breaking something that was not even so good, I'd tend to worry a little less about it. However, I can appreciate the argument that SemVer says we should go to 2.0.0 or something if we make the change. Which I'd do, but I think I'm more willing to bump major versions than some.
I wonder… it's not really breaking the API. I'm not too familiar with the details of semantic versioning, but isn't SemVer about the public API? For me, this could easily be considered a small bug fix that corrects incorrect behaviour. Just as a thought experiment, can someone think of a scenario, or produce one in code, that would show this as a real issue? I.e. you have a test, and it breaks because the original values of out parameters are no longer considered. I can't think of anything that doesn't involve some majorly incorrect assertions (if it's even possible). I know you are way smarter than me; maybe you can come up with something?
"Way smarter" 😆 Thanks. I needed a good chuckle.

I'm a little fuzzy on the SemVer stuff myself. I've seen others argue that it's only "breaking" if signatures change, or something of that nature, but I think of it as referring to behaviour: even if things keep "linking" and whatnot, if the thing acts differently, it's "breaking". I agree that the proposed change is fixing incorrect behaviour. However, I've been trained (and I think @patrik-hagne leans this way too) to believe that no matter what horrible behaviour your app or library has, it's some user's favourite feature!

Hmm. How could this hurt someone? I think it's less about causing a passing test to fail than about accidentally causing an existing test, one that should fail when production code changes, to keep passing. That was confusing. Stay with me. Someone's been using a variable for… something. I dunno. Maybe they're populating it, in a loop, from a faked method's out parameter, and asserting on the values it held when each call was made.

Of course, I think that's a terrible test, and that the client would be better off checking those values some other way: when they get used as inputs to collaborators, or when the method returns the array of accumulated values, or something. But if that's what's been working for them up 'til now, it would "break" their tests.

I guess it feels a little different from just a bug fix, because the existing behaviour would've forced users to change their argument constraints to match the values that were present when the out-having method was called, and so we've been telling them "this value matters".

I feel a bit like a hypocrite (or maybe Devil's Advocate) here by saying all this, though, because I'd really like to slip in the "ignore the outs" change, and if FakeItEasy were my project, I might just do it. 😊
@FakeItEasy/owners, I'm in favour of a change to have the initial values of out parameters ignored, without any API changes. What do you think?
I agree with changing the behaviour without API changes, as well as with bumping the major version.
I've pushed my implementation to blairconrad/FakeItEasy@5d6f674. I didn't want to initiate a PR until we've all weighed in on jumping to 2.0.0 for/with this.
Repushed to blairconrad/FakeItEasy@2d7144f.
OK, I've read back through the comments and I agree that we should change the behaviour to ignore the initial arguments passed to out parameters. However, I am in favour of not bumping the major version. The initial arguments supplied to out parameters have no value, since they are inaccessible to the method.

This is my opinion, but I won't hold things up if others want to publicise it as a breaking change and bump the major version. It won't do any harm, even if I feel it's redundant. As usual, I may be missing something, yada yada, so please correct me if so.
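The point that initial out arguments are inaccessible is a C# language guarantee, not just a FakeItEasy convention. A minimal standalone illustration (names invented for the example):

```csharp
using System;

public static class OutDemo
{
    // A method must assign its `out` parameter before returning, and cannot
    // read it beforehand (compiler error CS0269), so the caller's initial
    // value can never influence the callee.
    public static bool TryDouble(int input, out int result)
    {
        result = input * 2; // reading `result` before this line would not compile
        return true;
    }

    public static void Main()
    {
        int z = 999;           // this value is unreachable inside TryDouble
        OutDemo.TryDouble(21, out z);
        Console.WriteLine(z);  // prints 42; the 999 was simply discarded
    }
}
```

Since the callee can never observe the 999, a call-matching constraint on that initial value can only ever describe the caller's local state, not anything the faked method saw.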
Oh, and just the usual gentle reminder that any change can break someone, including all those that have already gone in since 1.0 😉.
I'm actually mostly on the same page as you for this one. Well, I'm of two minds. I agree that relying on catching differences in the initial value means there's a bad test, so the writer gets what they deserve, and that releasing this fix without bumping the major version is analogous to what we did with #268. The other part of me says, "yes, but that's cold comfort to the writers who were relying on it and all of a sudden have failing tests". It's not quite the same as never having looked at the values.
I'm continuing this thread on academic grounds only; I don't want to hold up this issue, and we should just press ahead with 2.0 if that's what the team wants to do.

Regarding the 2.0 bump, as I see it, the intention of SemVer here is to warn users not to upgrade unless they are prepared to do some work to make their code work with the new version. What users should be doing is restricting their dependency to less than 2.0, so that automatic upgrades are guaranteed not to break. However, this is not what NuGet does when adding a dependency, and I wager 99% of users do not manually edit their packages.config in this way. If they did, a major version bump would have a clear downside, since it would slow adoption of the new version.

I believe what really happens is that when people see 2.0 they get excited about the version bump, since it is embedded in the psyche that it means 'new and shiny' rather than 'warning: hold off unless you are sure', and they may even be more likely to upgrade sooner, which means a 2.0 bump may actually speed up adoption of the new version. I guess the only risk here is disappointment when they poke around in 2.0 after upgrading and see no or very few new features, or even just a single feature or bug fix which is of minority interest but has been deemed breaking.

Again, this is only of academic interest in the context of this issue (although if it changes minds then so be it), but interesting nonetheless 😉.
I don't think you're holding the issue up. If nothing else, I wouldn't mind throwing a fix for #338 into the next release along with this, and that'll take another day or two anyhow.

On to other points. I agree that in this case we are forcing users who rely on the current behaviour to fix their tests. Still, we are forcing them. I'm not comfortable arguing that only users' tests will fail, not their production code, since we're a mocking framework and are unlikely (I hope) to be used in production code. Or, another way of looking at it: from our perspective, the tests are the production code. The argument about them being flawed to begin with holds a lot of weight, though. Enough that after all this thinking, I'd be content including this issue in a bug fix release.

To continue being academic: I had wondered about SemVer + NuGet. If the majority of users will automatically get the 2.0.0 version when upgrading anyhow, the benefits of SemVer plummet. Then again, I'm not sure how often people upgrade their packages. I rarely do, unless I think there's some new feature or a fix for a bug that's been troubling me. Excellent point about the major bump scaring people off, although I'd've hoped that users paying attention to that would also pay attention to the release notes and say "Oh, that sounds fine." As far as "major == shiny" being embedded in the psyche goes, if we refuse to bump the major version because we don't have shiny new features, we're just reinforcing that perception. If we believe in SemVer, I'd suggest trying to walk the walk, and I'd hate to think we'd let "breaking" changes pile up while we wait for something sufficiently shiny to come along.

The more I type, the more I'm coming 'round. In this case, we're fixing bad FakeItEasy behaviour. I think we can look at this as a bug fix and not require the major version bump. Of course, someone may have discovered the bug and be relying on it. I'd said

> someone might be adversely affected

but that doesn't mean we have to preserve the bug, and a wise man once said

> This change should only affect …
Okay. I'm in the "bump the third component, not the first" camp now. @philippdolder, @patrik-hagne, do you want to talk us out of it?
Rebranding as a bug, per the recent discussion and our improved understanding of the behaviour.
Holding onto this for a bit until we have a quick chat over at #220. I'm hopeful for a speedy resolution of that, and then we may still be able to get this into milestone 1.23.0.
Moved to milestone 1.23.0.
@Dashue, thanks very much for your work on this issue. Look for your name in the release notes. 🏆

This issue has been fixed in release 1.23.0: https://www.nuget.org/packages/FakeItEasy/1.23.0
I tend to use Try APIs a lot, which return a bool indicating success and the result as an out parameter. I have trouble asserting calls to these, and I end up either skipping the assert or using WithAnyArguments. I would like a way (it could be a variant of WithAnyArguments) that ignores only the out parameters.

If I have a method

```csharp
MyMethod(int x, int y, out long z)
```

and

```csharp
A.CallTo(() => MyMethod(1, 2, out ignore)).MustHaveHappened()
```

then if x and y are anything other than 1 and 2 it should fail, but the value of the out parameter should be ignored.

Something like `IgnoreOutParametersInitialValue()`, behaving somewhat like WithAnyArguments but inspecting what kind of parameter each one is.

Thoughts? I wouldn't mind contributing this if it's something that could be of value.

Cheers
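With the change discussed in this thread, the assertion described above behaves as requested. A sketch of how it reads, with an interface invented here (`IMyService`) to host the example method; `A.Fake`, `A.CallTo`, and `MustHaveHappened` are FakeItEasy's public API:

```csharp
using FakeItEasy;

public interface IMyService
{
    void MyMethod(int x, int y, out long z);
}

public class Demo
{
    public static void Main()
    {
        var service = A.Fake<IMyService>();
        long z;
        service.MyMethod(1, 2, out z);

        // With initial out-argument values ignored, this passes regardless of
        // what `ignore` holds, but would still fail if x and y were not 1 and 2.
        long ignore = 0;
        A.CallTo(() => service.MyMethod(1, 2, out ignore)).MustHaveHappened();
    }
}
```

Note that no new method like `IgnoreOutParametersInitialValue()` turned out to be necessary: the fix simply makes out arguments always match.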