Require a finding on a contravariant parameter type? [working decision: no] #49
In Java, IntelliJ IDEA's inspections currently allow nullable parameters to override not-null ones, as users have requested this.
As mentioned before, I think we can just leave out parameter type contravariance for now to unblock prototyping, as from the Java point of view we should be able to safely add it later (please correct me if I'm missing something). I'll just point out that the overload-vs-override concern seems possibly solvable to me, since Kotlin uses different JVM method names in that case. That technically would be consistent with a proposal that allows contravariant nullability specifiers for methods that override each other according to the JVM, though it probably couldn't be allowed in a Kotlin class overriding another Kotlin class. The fact that kotlinc infers platform types when it sees contravariant overrides is certainly problematic but maybe could be avoided as well? Again, I'm happy to defer this, but I do expect users to ask for contravariance eventually (see @donnerpeter comment above), so it may be worth thinking about whether it could somehow be accommodated in kotlinc.
For posterity, here's some (about-to-be-deleted) rationale that was in the proposal for why contravariant parameter types might be interesting:
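To make the rationale concrete, here is a minimal sketch of why a contravariant parameter is sound: every call that is legal through the supertype is still handled by the subtype, which merely accepts more. All names are hypothetical, and a local annotation stands in for `org.jspecify.annotations.Nullable` so the snippet is self-contained.

```java
import java.lang.annotation.*;

public class ContravariantParamDemo {
  // Local stand-in for a nullness annotation (hypothetical; real code
  // would use org.jspecify.annotations.Nullable).
  @Retention(RetentionPolicy.CLASS) @interface Nullable {}

  interface Formatter {
    String format(String s); // parameter intended as non-null
  }

  static class LenientFormatter implements Formatter {
    // Contravariant override: accepts a superset of inputs (null too).
    @Override
    public String format(@Nullable String s) {
      return s == null ? "<null>" : s.trim();
    }
  }

  public static void main(String[] args) {
    Formatter f = new LenientFormatter();
    // Every call legal through the supertype still works:
    System.out.println(f.format(" hi "));                     // hi
    // And direct users of the subtype may pass null safely:
    System.out.println(new LenientFormatter().format(null));  // <null>
  }
}
```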
One distinction that is already made in the doc but I want to call out here is that we still plan to accommodate overriding methods whose parameter types include unknown nullability. That solves part of the problem here. (And it solves that part of the problem better than mere contravariance, since it applies also to conversions between ….)

I still can imagine that we'd want to support this someday, but +1 for putting it off for now. And when we do allow it, I think it's worth considering whether it's worthy of a warning, since it might happen accidentally (or intentionally but in cases in which there's a better solution available) as much as it happens in "good" cases.
I am also more than happy to jettison parameter contravariance for the time being.
If we really wanted, we could annotate …
It sounds like we can probably close this, at least as a way of saying "not yet"? (Or maybe we want some other way of marking issues as "post-1.0"?) (As for my post just above this one, it seems pretty clear that kevinb9n, at least, is in favor of ….)
Formal parameter type contravariance is consistent with Java, so it is natural and strongly desirable. If we were designing KSpecify rather than JSpecify, we might make a different decision. I think we should accommodate Kotlin whenever we can, but we should never degrade the behavior of JSpecify on Java code in order to conform to Kotlin's different semantics.
I've kept coming back to this in my mind, and am just gonna reopen it for now. It may not be a critical priority right now, but I don't think that's what the open/close status is supposed to show. If we don't have parameter contravariance, doesn't that mean that a method and all of its overrides can only have their annotations changed in perfect lockstep? That's against a project requirement, and it just feels bad to prohibit a method from accepting null if it wants to unless it changes the whole world around it.
(Reopening sounds fine.) RE: changing annotations in lockstep: No, that's not required. See #49 (comment). Note also that, even with specified nullness, library authors are likely to have the option to suppress the error on the overriding method. (But I grant that it's unfortunate to have to suppress in the case of a contravariant override (which is safe) just as you do for a covariant override (which is not).) RE: consistency with Java: I can see how contravariance is: …
The one way in which it's arguably inconsistent is that Java doesn't support such contravariance at present. For example, I can't override ….

Now, again, overloading isn't possible in our case (since …). I think we're all in agreement on this, but I wanted to make sure I'm not missing a deeper point that you're making. Furthermore, breaking new ground might be perfectly fine. (I don't think I've formed a strong opinion yet. My main opinion so far is "not yet" :))
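For contrast, here is a sketch (hypothetical names) of what Java does today when an override-like method widens its parameter's base type: it silently becomes an overload, not an override, which is the hazard discussed above.

```java
class Base {
  String m(String s) { return "Base.m(String)"; }
}

class Derived extends Base {
  // Widening the base type does NOT override; this is a new overload.
  // (Adding @Override here would be a compile-time error.)
  String m(Object o) { return "Derived.m(Object)"; }
}

public class OverloadNotOverride {
  public static void main(String[] args) {
    Base b = new Derived();
    // Dynamic dispatch still lands on Base.m, because Derived.m(Object)
    // has a different signature:
    System.out.println(b.m("x"));                       // Base.m(String)
    System.out.println(new Derived().m((Object) "x"));  // Derived.m(Object)
  }
}
```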
re: "lockstep", right, I forgot that we were going to have to special-case unspecified. It would just be nicer to have the right thing happen for unspecified as a simple consequence of our general contravariant-parameters and covariant-returns support.
BTW, in the case of covariant returns I don't think jspecify even needs to do anything; just by explaining how the annotations are meant to modify types, users "should" assume they can do covariant returns because that's just how return types work. This one would need to be called out somehow though, because existing knowledge of types makes the wrong implication. |
Even with contravariant parameter types, we'd still like for unspecified to be considered to be "the same type" as both nullable and non-null. The reason is parameterized types: Suppose a method in an unannotated interface accepts a type like ….

But you have made me see that, just as the "same type" check can permit things that contravariant parameter types cannot, the reverse is also true: Contravariant parameter types can permit some declarations to work without warnings or errors, like if I override a method that accepts ….

Still, I see "the same type" as the essential feature and contravariant parameter types as the possible add-on.
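The parameterized-type point can be illustrated in plain Java, independent of nullness: type arguments are invariant, so there is no "wider" parameterized type available for an override to use inside a type argument (hypothetical names).

```java
import java.util.*;

public class GenericInvariance {
  interface Sink<T> {
    void put(List<T> items);
  }

  static class StringSink implements Sink<String> {
    final List<String> seen = new ArrayList<>();

    @Override public void put(List<String> items) { seen.addAll(items); }

    // An override taking List<Object> would not compile: List<Object> is
    // neither a supertype nor a subtype of List<String>. Type arguments
    // are invariant, so contravariance offers nothing inside them; only
    // a "same type" rule can relate differing nullness there.
  }

  public static void main(String[] args) {
    StringSink sink = new StringSink();
    sink.put(List.of("a", "b"));
    System.out.println(sink.seen); // [a, b]
  }
}
```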
The outcome of #111 may make this a moot point. But if we decide on the strict approach there, we will still need to decide this. My argument that we should still support this either way (regardless of #111) is that there are already cases in the wild where a submethod accepts null even though its supermethod did not force it to. If we "don't support parameter contravariance," we will force that code to be annotated incorrectly. [EDIT: added a comment to #111 trying to explain the connection between this and that]
The default CF behavior is to allow for contravariance. JSpecify may eventually follow CF (jspecify/jspecify#49), and even if it doesn't, any given checker could choose to support it. I wouldn't have bothered to make this change except that I was having problems with the `Ordering.min`/`max` case:

jspecify/jspecify@972fad0#diff-73db3d90f7545c1ed55a628fdf803a47b14f37bd08cb1eb0aedfa45b44bf1ddfR53
https://github.com/google/guava/blob/751d7c0d5f1fe3cbc067987e7d185763affa9ec7/guava/src/com/google/common/collect/ReverseOrdering.java#L54

I have not fully understood what's going wrong in that case. From the error message, I get the impression that CF is perhaps not "adapting the formal parameter types of N to the type parameters of M" (JLS 8.4.2). Or rather, it does, but only in the case of a _top-level_ type variable, meaning _not_ in the array-of-type-variable-usages case?

https://github.com/typetools/checker-framework/blob/fd0abaa8b8fea775aaf71057d93d9415355b665e/framework/src/main/java/org/checkerframework/common/basetype/BaseTypeVisitor.java#L3883

(I forget whether the similar `Lib<E>` case worked, but I suspect so. Maybe that's covered by CF's contravariance and then by the specifics of how it compares type variables to one another?)
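For reference, the JLS 8.4.2 "adaptation" step is what lets an override's parameter types be compared after substituting the subtype's type arguments, including for arrays of type variables. A minimal sketch (hypothetical names):

```java
public class SubstitutionDemo {
  interface Combiner<E> {
    E pick(E[] options);
  }

  // After adapting Combiner<E>'s pick(E[]) to the instantiation
  // Combiner<String>, the parameter type becomes String[] (JLS 8.4.2),
  // so this signature matches and overrides (javac emits a bridge
  // method for the erased pick(Object[])):
  static class FirstCombiner implements Combiner<String> {
    @Override public String pick(String[] options) { return options[0]; }
  }

  public static void main(String[] args) {
    Combiner<String> c = new FirstCombiner();
    System.out.println(c.pick(new String[] {"a", "b"})); // a
  }
}
```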
Can we revisit this issue? I propose that JSpecify should permit formal parameter types to have wider nullability than the corresponding superparameters' types. A specific tool might choose not to support formal parameter type contravariance, but I don't think the JSpecify specification should forbid it for all tools. Arguments in favor: …
Arguments against: …
Are there other arguments for or against, or more compelling ways to phrase my summary? In my view, the simplification is not worth the costs, which are reduced flexibility and expressiveness, and inconsistencies with implementations and the foundational underpinnings.
My views on this have continued to fluctuate. Lately, my thinking has been that it's not necessarily our business to write a rule about this at all. I've come to like the idea that we lay out rules for subtyping[*], and we leave it up to tool authors to decide what to do with that. In general, I'd expect for tool authors to apply rules similar to those in the JLS: "For example, the value of a return statement must be convertible to the method's return type." But this is not any sort of obligation. That gives tool authors leeway in a case like this: The JLS would say "no contravariance." But the JLS is working within technical limitations that don't apply here, so there is an easy-to-understand case for tool authors to not follow the JLS in this case.

[edit: I might back off a little on my claim of "technical limitations that don't apply here": It seems like the JLS probably could permit contravariance as long as the type's erasure continues to match. For example, it seems plausible that a method that requires a ….]

(Previously in that spirit: I make a case against writing a rule specifically about parameterized types.)

Even if we decide not to officially rule one way or the other on this, I appreciate everyone's contributions to understanding the tradeoffs of contravariance. That can help tool authors make their own decisions. To build on a couple things Mike said:

On a downside of contravariance: It's possible that users will find that Kotlin ignores their annotations. (Of course, this can happen, anyway, if they suppress errors or don't run a nullness checker at all!) Or, if the Kotlin team does feel this is worth changing, then their users may see changes in the behavior of their existing code (unless the Kotlin team customizes the behavior only for JSpecify annotations and not for other nullness annotations). In theory, we could even collect data about how often these situations arise.
On an upside of contravariance: Another kind of data that could be helpful is cases in which contravariant parameter types have come up in practice, taken from users of Checker Framework, IntelliJ, etc. (Some of this may have come up in previous docs that I don't have handy. The one example I recall mentioning up-thread is ….)

As always, if anyone else knows of tradeoffs, please do add them to the list.

[*] and substitution, and whether two types are "the same," and "defaulting," and what an unbounded wildcard means, and... -- but basically to restrict ourselves to "how to know what type something has" and "how to perform a test on two types"
The Kotlin discussion seemed to be just about what Kotlin currently does with existing annotations, but I don't see a clear implication that there would be major hurdles to adopting our semantics. I assume those challenges are surmountable (especially given that IntelliJ's inspections already allow this scenario), but corrections would be welcome. (Remember, if we can get agreement on #111 (comment) then that will autoresolve this one too. I think maybe it leaves it optional to tools whether they want to complain about param contravariance or not?)
The root concern for Kotlin as I understand it is that in Kotlin, a class can declare both ….

Now imagine A was declared in Java with ….

Let's quickly think about the inverse, i.e., A declared in Kotlin and B in Java. That seems fine. However, a class C declared in Kotlin that extends B now has to be careful. Without thinking it through too much, it seems it may be safe to ignore A's declaration and just inherit B's (or ignore B's and inherit A's, actually, since it's stricter), but it would certainly be wrong to inherit both.
I propose that we resolve this issue (and also others such as #169) in the following way: …

Unless I have missed some subtleties in the long discussions, everyone agrees that that behavior is desirable for specifying Java code. There may be an incompatibility with current Kotlin behavior, but this project is JSpecify, not KSpecify.
I would want to think more about this before declaring that it's definitely desirable for Java code to support this. My initial reaction is that it would definitely be desirable if all users ran a nullness checker. But in today's world, I could imagine problems: If users happen to use ….

[edit much, much later: I think my point about annotations in the previous paragraph is overstated: After all, the ….]

On the other hand, I would assume that contravariant parameter types would provide value to some JSpecify users, too, just as covariant return types do. I would just feel most comfortable if we had some more examples before I committed too strongly to either position.

As for Kotlin, I think we've heard from a few different organizations that they are interested in Kotlin's interaction with JSpecify. Interoperability between JVM languages is a nice feature of Java, but certainly most users of Java are Java users, so we'll want to weight concerns accordingly. Fortunately, in this case, I think Kotlin users aren't actually practically hurt by Kotlin's current behavior: Whether Kotlin sees a method as accepting ….

[edit: OK, one downside would be if users want to convert (or transpile) their Java code to Kotlin. In that case, any override with contravariant parameters becomes a breakage. But that seems unlikely to be a major issue, perhaps similar in scope to how Kotlin arrays are invariant and so an override's return type can't be ….]

Put that all together, and at a minimum, I continue not to see value in saying that parameter contravariance is illegal/banned/disallowed. Now, I also still want to think more about the line between what we specify and what we don't. As a result, I am nervous about including normative comments about behavioral subtyping: Once we start saying that x and y are permitted, does that mean that z is not permitted? And what exactly is covered by x and y?
I am going to try to open a separate issue about this more general concern in the coming days, though, since this current issue touches on some concerns specific to parameter types. (As a short-term patch to #169, I am still happy to remove the sample. But I understand that you ultimately want to see the underlying issue resolved.)
I have filed #206 about what I see as one of the underlying principles. I'd be interested in whether you think that it's important to make "JSpecify permits standard behavioral subtyping" part of the normative specification. In particular, I'm pondering whether it's sufficient (assuming that it's even desirable!) to call questions like "valid parameters for overrides" explicitly not formally specified, while still explicitly non-normatively encouraging authors of nullness checkers to support behavioral subtyping when desired.
I'm not sure I understand what this means.
I think this proves too much. It seems to equally well imply that "Java would let …."

We certainly "have to" in the sense that we can't prevent that code from existing. But it sounds to me (or maybe I have this wrong?) like you're saying that we should tell checker authors to think differently about parameters and return types. I think we've been discussing some possible justifications for that. But I don't think we can justify it by saying that override checking is "none of our business," any more than we can call any other nullness mismatch "none of our business."
I would want to think more about this, as it's a strong claim. (I feel like you may have said this before, though, so I probably should have called it out then -- if in fact I felt this way back then :)) I can see a scenario like "I can't change the supertype's parameter's annotation, so I'll at least change the subtype's." But that feels like "doing the best we can under the constraints." Calling it "the correct option" or even "a correct option" is what I'd want to think more about.
Thanks for reopening. I absolutely read your points as registering mild asterisks and not as major concerns. You just succeeded too well at putting them diplomatically. :-)
Kevin and I talked some offline yesterday. We didn't get to the bottom of everything, but here are some thoughts: I don't think that anyone is currently pushing for the idea that: …
(It might also be that no one is currently pushing for the idea that a checker that does tell users about contravariant parameters is "lacking" from the perspective of JSpecify. But even if that's the case, I do think we'd advise checker authors to make clear that contravariant parameters do not introduce a soundness problem.) I think that we could title this issue in some way to reflect that and then close it ("If we take a stance on contravariant parameters, will it be against them [no]?"). (If we additionally want to come to a conclusion about what to say about a tool that does tell users about contravariant parameters, then we should float a proposed policy on that, whether on this issue or a new one.) I think that most of the controversy on this thread has been around related claims, like: …
If one of those claims is more closely tied to this issue than I think (e.g., if people are in favor of contravariant parameters only because they think it's necessary for unspecified nullness to work with overrides), then we should get to the bottom of that. But I'm hoping that there's no actual controversy on the "Would we come out against contravariant parameters?" question. In that case, I think we can address the more controversial questions separately and more directly.
For future reference, I think this issue is about what happens at "// A": we expect a finding, we expect no finding, or we don't care.
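The original code example appears to have been lost in extraction. The following is a hypothetical reconstruction of the kind of snippet in question (invented names, with a local stand-in annotation), where "// A" marks the contravariant override:

```java
import java.lang.annotation.*;

public class FindingAtA {
  // Local stand-in for a nullness annotation (hypothetical).
  @Retention(RetentionPolicy.CLASS) @interface Nullable {}

  static class Super {
    String describe(String s) { return "len=" + s.length(); } // s treated as non-null
  }

  static class Sub extends Super {
    @Override
    String describe(@Nullable String s) { // A: finding here, or not?
      return s == null ? "null ok" : "len=" + s.length();
    }
  }

  public static void main(String[] args) {
    // Safe at runtime: the override tolerates everything the supertype did, and more.
    System.out.println(new Sub().describe(null)); // null ok
  }
}
```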
(Kevin and I are going to talk synchronously about this one of these days, I promise!) I think the disagreement is indeed precisely over what I see as a pivot in that framing: What I think most people most want to know is whether nullness checkers are allowed to let …
I think we all agree (but anyone who disagrees, please speak up!) that the answer is "Yes, tools can let that pass [in no small part because they can let anything pass!]." I think the second most interesting question is whether we'd discourage that because of some kind of concern (like the Kotlin platform-type concern or the non-nullness-checked-user concern). I think that there's at least general consensus behind the idea that we wouldn't even discourage it.

[edit much later: TODO(cpovirk): Think more about whether, by allowing contravariant parameters, we might unintentionally encourage people to define methods like ….]

The "finding" framing is a third question for us to talk about. But it seems to be purely a question about what kind of test tool authors would find more convenient. I think that tool authors would be happy with the answer "We should provide tests that declare contravariant parameters and don't identify them as mismatches." But I think we should be very clear in distinguishing these three questions, and I think we should make sure we're answering the first two and not just the third.
TODO: reopen this with a new summary, as this is one of the 3 most pivotal issues we have open for 1.0 currently.
To your three questions: …
I'd be fine with defining the conformance tests' override-conflict finding as not covering the contravariant-parameter scenario. If we end up needing a compromise position, I can also see defining a contravariant-parameter finding, explicitly with the understanding that: …
We had them declared wrong in Guava until google/guava#6251. We got away with it because the current reference checker doesn't issue an error. I would expect an error from declaring _any_ override whose type-parameter declarations don't match its supermethod's. (That's what you get from javac if you attempt similar mischief with base types. Presumably it's failing the return-type-substitutability check in JLS 8.4.5 when it finds that it can't "adapt to the type parameters" of the original method, since it doesn't meet the "same type parameters" requirement laid out in 8.4.4?) I notice that the Checker Framework won't issue an error merely for the type-parameter mismatch. However, it _will_ issue an error if you then declare method parameters (and probably certain kinds of return types), so it probably does catch any problem that could lead to an actual runtime error. And maybe they'd even consider their more lenient approach a feature, a generalization of [contravariant parameters](jspecify#49)? I'd need to think more about that, but I'm hoping not to bother. In any case, it doesn't surprise me too much that our checker would miss this error: We're inheriting the Checker Framework's leniency for the type-parameter declaration, and then we've edited our code for checking overrides (likely including parameters and/or return types), possibly making it more lenient in the process. I wonder if there's anything I would have done differently in that code if I'd realized that the Checker Framework was being lenient about the declarations? Realistically, probably nothing: I was confused throughout the work discussed in jspecify/checker-framework#10.
Current decision: When a parameter type is a supertype of the parameter type it overrides (a "contravariant parameter"), there is no resulting "JSpecify-based" finding. (A tool can report something if it wants to, but that is separate and should not be enabled when running JSpecify tests.)

Argument for changing: Kotlin does not allow such overrides. However, JSpecify focuses on Java declarations.

Timing: This must be decided before version 1.0 of the spec.

Proposal for 1.0: Finalize the current decision. If you agree, please add a thumbs-up emoji (👍) to this comment. If you disagree, please add a thumbs-down emoji (👎) to this comment and briefly explain your disagreement. Please only add a thumbs-down if you feel you can make a strong case why this decision will be materially worse for users or tool providers than an alternative.
I keep feeling that I'm almost on top of this issue, only for things to get slippery again. JSpecify currently says essentially nothing about when its subtyping check and similar checks should be deployed on Java source. This issue might be about starting to say such things, or it might be about what conformance tests we write. Part of my concern has been that we both seem to want to say "We don't tell checkers when to apply or not apply our checks" and also seem to want to say "But by writing a conformance test about this situation, we are implicitly telling checkers that applying our checks here would be in some way 'unjustified.'" The samples tried to dodge this question by saying that they're written under the assumption of JLS-like rules (which of course ban contravariant parameters) but that those rules (a) have no necessary connection to what checkers do and (b) are more strict than "necessary": …
[edit: The spec says something similar.]

I would like more clarity on whether this issue is "We will not write a conformance test for this case" or "We will write a conformance test for this case that contains no cannot-convert comment (or override-conflict comment or whatever)" or "We will recommend that tools not produce errors in such situations" or some combination of those or what. I also wonder a little if we're opening a can of worms by trying to go beyond the JLS rules for when to apply checks. For example, if a supermethod has a type parameter ….

I may be making a big thing out of an inconsequential decision. Part of my concern is that it keeps sounding inconsequential, only for phrasing about what is (e.g.) "correct" to crop back up. If I had managed to collect my thoughts earlier, I would have talked to kevinb9n about this (more) in person.... We should probably talk sometime.
tl;dr I think that "finding" and "JSpecify-based" are undefined terms that are making the question slippery. My proposal would be: …
If it helps, I would also be open to agreeing to expand some non-normative resources to explain why contravariant parameters are worth considering.
After some discussion elsewhere, I think I could get behind the proposal if we change "'JSpecify-based' finding" to "JSpecify conformance finding," which I think makes clearer that this is about what the checker must be able to recognize internally in order to pass our tests. (There are surely other formulations that could work, too, but that one might minimize the edit distance :))
Fun fact that I can't resist posting here: In one narrow way, you totally can declare a contravariant parameter type in Java:

```java
class OverrideNoTypeVariable {
  interface Super<T> {
    void x(T t);
  }

  interface Sub<T> extends Super<T> {
    @Override
    void x(Object t);
  }
}
```

It builds without even a warning. ("the signature of m1 is the same as the erasure (§4.6) of the signature of m2.")
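A quick runtime check confirms that the erased signature really does act as an override, so calls through the supertype dispatch to the wider method. This is a variation of the snippet above with a return value so the dispatch is observable (hypothetical names):

```java
public class ErasureOverrideDemo {
  interface Super<T> {
    String x(T t);
  }

  interface Sub<T> extends Super<T> {
    // The erasure of x(T) is x(Object), so this overrides rather than overloads.
    @Override
    String x(Object t);
  }

  public static void main(String[] args) {
    Sub<String> s = t -> "got " + t;          // Sub's sole abstract method is x(Object)
    Super<String> viaSuper = s;
    System.out.println(viaSuper.x("hi"));     // dispatches to the wider method
    System.out.println(s.x(42));              // the wider signature accepts any Object
  }
}
```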
I just tried the contravariant parameter type with ….

That turns out to not go so great: Calls that pass a …
In the draft proposal, there's a part that allows for overrides to have value parameters with contravariant nullability: …
That might be a problem for the Kotlin compiler since in pure Kotlin value parameters are not contravariant in overrides and we apply the same logic when looking at the annotated Java code.
The reason we don't allow contravariant parameters in overrides is basically the same as in Java: they might be considered as overloads. In Java, one cannot override with a wider-typed parameter (types should be equal). The same applies for Kotlin, but we have nullability as a built-in part of the type system.
That means that when we see contravariant nullability in Java annotations, such code is assumed to be illegal, and thus we ignore the annotation info for such parts completely as inconsistent:
`B::foo` has signature like `fun foo(x: String!)`, i.e. it has unknown nullability from the Kotlin point of view. But at the same time it's fine transforming from unknown nullness to both `NotNull` and `Nullable`:

In the above example `B::foo` is perceived by Kotlin as `fun foo(x: String?, y: String)`.