Require a finding on a contravariant parameter type? [working decision: no] #49

Open
dzharkov opened this issue Jul 23, 2019 · 46 comments
Labels: design (an issue that is resolved by making a decision about whether and how something should work), needs-decision-before-spec-1.0 (issues needing a finalized decision for the 1.0 specification), nullness (for issues specific to nullness analysis)

Comments

@dzharkov
Collaborator

In the draft proposal, there's a part that allows for overrides to have value parameters with contravariant nullability:

The method’s parameter types may have wider nullability than the corresponding superparameters' types, meaning, it suffices for superparameters to be type-convertible to the overriding method’s parameter types (while base types have to follow the normal rules of the compiler so overriding works as expected).

That might be a problem for the Kotlin compiler: in pure Kotlin, value parameters are not contravariant in overrides, and we apply the same logic when reading annotated Java code.

The reason we don't allow contravariant parameters in overrides is basically the same as in Java: they might be considered overloads. In Java, one cannot override with a wider-typed parameter (the parameter types must match exactly, or the new method is an overload). The same applies to Kotlin, but with nullability as a built-in part of the type system.

interface A {
    fun foo(x: String)
}

class B : A {
    override fun foo(x: String) {
        
    }
    
    @JvmName("fooNullable")
    fun foo(x: String?) {} // is a different overload for `foo`
}

fun main() {
    B().foo("") // resolved to the non-nullable overload
    B().foo(null) // resolved to the nullable overload
}

That means that when we see contravariant nullability in Java annotations, such code is assumed to be illegal, so we ignore the annotation info for those parts entirely as inconsistent:

interface A {
    void foo(@NotNull String x);
}

class B implements A {
    @Override
    public void foo(@Nullable String x) {

    }
}

B::foo has a signature like fun foo(x: String!), i.e., it has unknown nullability from Kotlin's point of view.

But at the same time, it's fine to go from unknown nullness to both NotNull and Nullable:

interface A {
    void foo(String x, String y);
}

class B implements A {
    @Override
    public void foo(@Nullable String x, @NotNull String y) {

    }
}

In the above example, B::foo is perceived by Kotlin as fun foo(x: String?, y: String).

@donnerpeter
Collaborator

In Java, IntelliJ IDEA's inspections currently allow nullable parameters overriding notnull ones, as there were such requests from users.

@kevin1e100
Collaborator

As mentioned before, I think we can just leave out parameter type contravariance for now to unblock prototyping, since from the Java point of view we should be able to safely add it later (please correct me if I'm missing something).

I'll just point out that the overload-vs-override concern seems possibly solvable to me, since Kotlin uses different JVM method names in that case. That technically would be consistent with a proposal that allows contravariant nullability specifiers for methods that override each other according to the JVM, though it probably couldn't be allowed in a Kotlin class overriding another Kotlin class.

The fact that kotlinc infers platform types when it sees contravariant overrides is certainly problematic, but maybe that could be avoided as well? Again, I'm happy to defer this, but I do expect users to ask for contravariance eventually (see @donnerpeter's comment above), so it may be worth thinking about whether it could somehow be accommodated in kotlinc.

@kevin1e100
Collaborator

kevin1e100 commented Jul 23, 2019

For posterity, here's some (about-to-be-deleted) rationale that was in the proposal for why contravariant parameter types might be interesting:

  • This rule differs from the situation for base types, which cannot have contravariant parameters for reasons that are much less applicable to us.
  • This also should make things more forgiving when subclassing legacy code (you can mark parameters nullable if you want to accept null, and, with the unchecked conversions mentioned above, you can mark parameters that are “unknown” in the base class as not-null as well). It also means that a method can override corresponding methods of multiple supertypes even when they are annotated differently (but compatibly); see the sketch below.
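A minimal sketch of that last point, using the eventual org.jspecify.annotations spellings (which postdate the draft names in this thread); the interfaces here are hypothetical:

import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

@NullMarked
interface Strict {
    void log(String message); // parameter is non-null under @NullMarked
}

@NullMarked
interface Lenient {
    void log(@Nullable String message);
}

@NullMarked
class ConsoleLogger implements Strict, Lenient {
    // A single contravariant override satisfies both supermethods, because
    // @Nullable String is a supertype of both superparameters' types.
    @Override
    public void log(@Nullable String message) {
        System.out.println(message);
    }
}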

@cpovirk
Collaborator

cpovirk commented Jul 24, 2019

One distinction that is already made in the doc but I want to call out here is that we still plan to accommodate overriding methods whose parameter types include unknown nullability. That solves part of the problem here. (And it solves that part of the problem better than mere contravariance, since it applies also to conversions between List<@NullnessUnknown String> and List<@NotNull String>, which are unrelated types and thus would not be covered by contravariance support.) I'm hoping that it solves a lot of the problem. And I'm hoping that another part of the problem is methods whose parameter types are traditionally annotated as non-null (because people are using annotations/tools that don't support annotating type-parameter bounds) but should instead have parametric nullability. Once parametric nullability is widely supported, users might see less need for contravariant parameter types. And, as noted earlier, contravariance doesn't help with cases like List<@NullnessUnknown String> and List<@NotNull String>, anyway.
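To make the parameterized-type point concrete, here is a rough sketch (hypothetical types, and the eventual org.jspecify.annotations spellings rather than the draft names used above):

import java.util.List;
import org.jspecify.annotations.NullMarked;

interface LegacyApi { // unannotated: all nullness here is unspecified
    void accept(List<String> names);
}

@NullMarked
class AnnotatedImpl implements LegacyApi {
    // Under @NullMarked, List<String> means a non-null list of non-null
    // strings. That is not a supertype of the legacy parameter's type,
    // because parameterized types are invariant in their type arguments,
    // so contravariance alone can't justify this override; it takes the
    // separate rule that unspecified nullness matches anything.
    @Override
    public void accept(List<String> names) {}
}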

I still can imagine that we'd want to support this someday, but +1 for putting it off for now. And when we do allow it, I think it's worth considering whether it's worthy of a warning, since it might happen accidentally (or intentionally but in cases in which there's a better solution available) as much as it happens in "good" cases.

@kevinb9n
Collaborator

I am also more than happy to jettison parameter contravariance for the time being.

@cpovirk
Collaborator

cpovirk commented Sep 23, 2019

If we really wanted, we could annotate Collection.contains and similar methods as requiring [@NotNull] Object, since some collections do throw when called with contains(null). Then we'd want subclasses to be able to override those to accept @Nullable Object (see the sketch after this list). However:

  • Overriding often won't help, since users don't generally maintain a reference of a concrete type such as ArrayList.
  • My guess is that we'd just declare it as contains(@Nullable Object) and live with the occasional NullPointerException. (This appears to be what the Checker Framework does -- though then it defines TreeSet with contains(Object). I'm a little surprised that that is allowed (probably it isn't, except in stubs?). But it is safer for people who have a reference of type TreeSet, albeit only by being even more conservative than ideal.) Or at most we'd use @NullUnspecified, which might permit subclasses to change nullability in overrides in even more flexible ways.
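A sketch of the scenario from the first sentence above (hypothetical types standing in for Collection and a permissive implementation):

import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

@NullMarked
interface StrictBag {
    boolean contains(Object o); // non-null, like a strictly annotated contains
}

@NullMarked
class PermissiveBag implements StrictBag {
    // The contravariant override we'd want to permit: this implementation
    // tolerates null lookups even though the interface doesn't promise it.
    @Override
    public boolean contains(@Nullable Object o) {
        return o != null && o.equals("present");
    }
}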

@cpovirk
Collaborator

cpovirk commented Oct 11, 2019

It sounds like we can probably close this, at least as a way of saying "not yet"? (Or maybe we want some other way of marking issues as "post-1.0"?)

(As for my post just above this one, it seems pretty clear that kevinb9n, at least, is in favor of contains(@Nullable Object), too, based on this and this.)

@mernst
Contributor

mernst commented May 28, 2020

Formal parameter type contravariance is consistent with Java, so it is natural and strongly desirable. If we were designing KSpecify rather than JSpecify, we might make a different decision.

I think we should accommodate Kotlin whenever we can, but we should never degrade the behavior of JSpecify on Java code in order to conform to Kotlin's different semantics.

@kevinb9n
Collaborator

I've kept coming back to this in my mind, and am just gonna reopen it for now. It may not be a critical priority right now, but I don't think that's what the open/close status is supposed to show.

If we don't have parameter contravariance, doesn't that mean that a method and all of its overrides can only have their annotations changed in perfect lockstep? That's against a project requirement, and it just feels bad to prohibit a method from accepting null if it wants to unless it changes the whole world around it.

@kevinb9n kevinb9n reopened this May 28, 2020
@cpovirk
Collaborator

cpovirk commented May 29, 2020

(Reopening sounds fine.)

RE: changing annotations in lockstep: No, that's not required. See #49 (comment). Note also that, even with specified nullness, library authors are likely to have the option to suppress the error on the overriding method. (But I grant that it's unfortunate to have to suppress in the case of a contravariant override (which is safe) just as you do for a covariant override (which is not).)

RE: consistency with Java: I can see how contravariance is:

  • sound
  • compatible with plain Java (since Java won't care about our annotations at all)

The one way in which it's arguably inconsistent is that Java doesn't support such contravariance at present. For example, I can't override foo(String) with foo(Object): I just get an overload.

Now, again, overloading isn't possible in our case (since foo(String) and foo(@Nullable String) have the same erasure), so I'm not saying that users should be able to define two separate methods with those signatures. I just mean that, if users are able to override one with the other, we'd be somewhat breaking new ground.
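For illustration, a sketch of that erasure clash (javac's exact message may vary):

import org.jspecify.annotations.Nullable;

class Overloads {
    void foo(String s) {}

    // javac rejects this if uncommented: type annotations don't change
    // erasure, so both methods erase to foo(String) and clash
    // ("method foo(String) is already defined").
    // void foo(@Nullable String s) {}
}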

I think we're all in agreement on this, but I wanted to make sure I'm not missing a deeper point that you're making.

Furthermore, breaking new ground might be perfectly fine. (I don't think I've formed a strong opinion yet. My main opinion so far is "not yet" :))

@kevinb9n
Collaborator

kevinb9n commented Jun 4, 2020

re: "lockstep", right, I forgot that we were going to have to special-case unspecified. It would just be nicer to have the right thing happen for unspecified as a simple consequence of our general contravariant-parameters and covariant-returns support.

@kevinb9n
Collaborator

kevinb9n commented Jun 4, 2020

BTW, in the case of covariant returns I don't think JSpecify even needs to do anything; just by explaining how the annotations are meant to modify types, users "should" assume they can use covariant returns, because that's just how return types work.

This one would need to be called out somehow, though, because existing intuitions about types point in the wrong direction.
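A sketch illustrating both halves (hypothetical names, eventual annotation spellings):

import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

@NullMarked
interface Parser {
    @Nullable String parse(String input);
}

@NullMarked
class LenientParser implements Parser {
    // Covariant return: non-null String is a subtype of @Nullable String,
    // so this reads like ordinary Java return-type covariance.
    // Contravariant parameter: @Nullable String is a supertype of String;
    // this is the part that needs calling out, since in plain Java a
    // wider parameter type would be a separate overload.
    @Override
    public String parse(@Nullable String input) {
        return input == null ? "" : input.trim();
    }
}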

@cpovirk
Collaborator

cpovirk commented Jun 12, 2020

It would just be nicer to have the right thing happen for unspecified as a simple consequence of our general contravariant-parameters and covariant-returns support.

Even with contravariant parameter types, we'd still like unspecified to be considered "the same type" as both nullable and non-null.

The reason is parameterized types: Suppose a method in an unannotated interface accepts a type like @Unspec List<@Unspec String>. If we want to implement that in an annotated class, we would like to avoid @Unspec. Even if the right type is @Nullable List<@Nullable String>, that's not a supertype of the old method's parameters because parameterized types are invariant. So contravariance isn't enough.

But you have made me see that, just as the "same type" check can permit things that contravariant parameter types cannot, the reverse is also true: Contravariant parameter types can permit some declarations to work without warnings or errors, like if I override a method that accepts @Unspec Object with one that accepts @Nullable Object.

Still, I see "the same type" as the essential feature and contravariant parameter types as the possible add-on.
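A sketch of that converse case, under the same assumed spellings:

import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

interface Unannotated { // parameter nullness is unspecified
    void take(Object o);
}

@NullMarked
class Annotated implements Unannotated {
    // A "same type" check treats the unspecified Object as matching either
    // annotation (possibly with a not-enough-information warning under a
    // strict mode), while contravariant parameter types would accept this
    // outright: @Nullable Object is a supertype of any Object type.
    @Override
    public void take(@Nullable Object o) {}
}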

@kevinb9n
Collaborator

kevinb9n commented Jun 15, 2020

The outcome of #111 may make this a moot point. But if we decide on the strict approach there we will still need to decide this. My argument that we should still support this either way (regardless of #111) is that there are already cases in the wild where a submethod accepts null even though its supermethod did not force it to. If we "don't support parameter contravariance" we will force that code to be annotated incorrectly.

[EDIT: added a comment to #111 trying to explain the connection between this and that]

cpovirk added a commit to jspecify/checker-framework that referenced this issue Nov 20, 2020
The default CF behavior is to allow for contravariance. JSpecify may
eventually follow CF (jspecify/jspecify#49),
and even if it doesn't, any given checker could choose to support it. I
wouldn't have bothered to make this change except that I was having
problems with the `Ordering.min`/`max` case:

jspecify/jspecify@972fad0#diff-73db3d90f7545c1ed55a628fdf803a47b14f37bd08cb1eb0aedfa45b44bf1ddfR53
https://github.com/google/guava/blob/751d7c0d5f1fe3cbc067987e7d185763affa9ec7/guava/src/com/google/common/collect/ReverseOrdering.java#L54

I have not fully understood what's going wrong in that case. From the
error message, I get the impression that CF is perhaps not "adapting the
formal parameter types of N to the type parameters of M" (JLS
8.4.2). Or rather, it does but only in the case of a _top-level_ type
variable, meaning _not_ in the array-of-type-variable-usages case?

https://github.com/typetools/checker-framework/blob/fd0abaa8b8fea775aaf71057d93d9415355b665e/framework/src/main/java/org/checkerframework/common/basetype/BaseTypeVisitor.java#L3883

(I forget whether the similar `Lib<E>` case worked, but I suspect so.
Maybe that's covered by CF's contravariance and then by the specifics of
how it compares type variables to one another?)
@mernst
Contributor

mernst commented Feb 9, 2021

Can we revisit this issue?

I propose that JSpecify should permit formal parameter types to have wider nullability than the corresponding superparameters' types.

A specific tool might choose not to support formal parameter type contravariance, but I don't think the JSpecify specification should forbid it for all tools.

Arguments in favor:

  • This is behavioral subtyping, which is well-understood, sound, and flexible.
  • We have concrete examples of real-world code that benefits from this flexibility.
  • Tools such as IntelliJ IDEA and the Checker Framework support this.
  • There is no effect on overriding or overloading, which depend on Java base types.

Arguments against:

  • Maybe this complicates the Kotlin implementation. (It's not clear to me from the discussion whether this is definitely true or might be true.) I personally do not find this a compelling argument for JSpecify, even if it might be for KSpecify.
  • Forbidding this permits us not to think about it. I don't agree with this, because forbidding it is taking a stand, and to do so we need to think about and understand the implications.
  • The specification is slightly simpler if we forbid it. This is true.

Are there other arguments for or against, or more compelling ways to phrase my summary?

In my view, the simplification is not worth the costs, which are reduced flexibility and expressiveness, and inconsistencies with implementations and the foundational underpinnings.

@cpovirk
Collaborator

cpovirk commented Feb 9, 2021

My views on this have continued to fluctuate.

Lately, my thinking has been that it's not necessarily our business to write a rule about this at all. I've come to like the idea that we lay out rules for subtyping[*], and we leave it up to tool authors to decide what to do with that. In general, I'd expect for tool authors to apply rules similar to those in the JLS: "For example, the value of a return statement must be convertible to the method's return type." But this is not any sort of obligation.

That gives tool authors leeway in a case like this: The JLS would say "no contravariance." But the JLS is working within technical limitations that don't apply here, so there is an easy-to-understand case for tool authors not to follow the JLS in this case.

[edit: I might back off a little on my claim of "technical limitations that don't apply here": It seems like the JLS probably could permit contravariance as long as the type's erasure continues to match. For example, it seems plausible that a method that requires a Collection<Foo> could be overridden by a method that accepts any Collection<? extends Foo>. Still, even that may lead to more complications than we'd be creating by accepting differences only in nullness.]
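For what it's worth, javac rejects that kind of widening today even though the erasures match; a sketch with hypothetical classes:

import java.util.Collection;

class Wide {
    void take(Collection<String> c) {}
}

class Wider extends Wide {
    // javac rejects this if uncommented: both methods erase to
    // take(Collection), yet neither overrides the other, because the
    // signatures aren't override-equivalent under JLS 8.4.2 ("name clash").
    // void take(Collection<? extends String> c) {}
}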

(Previously in that spirit: I make a case against writing a rule specifically about parameterized types.)

Even if we decide not to officially rule one way or the other on this, I appreciate everyone's contributions to understanding the tradeoffs of contravariance. That can help tool authors make their own decisions. To build on a couple things Mike said:

On a downside of contravariance: It's possible that users will find that Kotlin ignores their annotations. (Of course, this can happen, anyway, if they suppress errors or don't run a nullness checker at all!) Or, if the Kotlin team does feel this is worth changing, then their users may see changes in the behavior of their existing code (unless the Kotlin team customizes the behavior only for JSpecify annotations and not for other nullness annotations). In theory, we could even collect data about how often these situations arise.

On an upside of contravariance: Another kind of data that could be helpful is cases in which contravariant parameter types have come up in practice, taken from users of Checker Framework, IntelliJ, etc. (Some of this may have come up in previous docs that I don't have handy. The one example I recall mentioning up-thread is Collection.contains, though that depends on resolving another unsettled discussion.) I'm hoping that early testers can give us additional data on this.

As always, if anyone else knows of tradeoffs, please do add them to the list.

[*] and substitution, and whether two types are "the same," and "defaulting," and what an unbounded wildcard means, and... -- but basically to restrict ourselves to "how to know what type something has" and "how to perform a test on two types"

@kevinb9n
Collaborator

kevinb9n commented Jun 18, 2021

The Kotlin discussion seemed to be just about what Kotlin currently does with existing annotations, but I don't see a clear implication that there would be major hurdles to adopting our semantics. I assume those challenges are surmountable (especially given that IntelliJ's inspections already allow this scenario), but corrections would be welcome.

(Remember, if we can get agreement on #111 (comment) then that will autoresolve this one too. I think maybe it leaves it optional to tools whether they want to complain about param contravariance or not?)

@kevin1e100
Collaborator

kevin1e100 commented Jun 18, 2021

The root concern for Kotlin as I understand it is that in Kotlin, a class can declare both fun foo(x: String) and fun foo(x: String?), as they're considered distinct overloads. Because of that, if a class A declares fun foo(x: String), then a class B that extends A cannot override A's foo as fun foo(x: String?), because that would be a different method. Instead, the override must be declared fun foo(x: String), i.e., identical to A's (modulo return type, which I believe can change covariantly).

Now imagine A was declared in Java with @NullMarked and B in Kotlin. It's hard to see how B could override A's public void foo(String x) with fun foo(x: String?) without confusion, especially if it also wants to declare fun foo(x: String). Now I think it could be done/allowed (since one of the foo functions is required to use @JvmName so it has a unique name in bytecode), but it seems difficult to do correctly unless the same is somehow allowed if both A and B are declared in Kotlin.

Let's quickly think about the inverse, i.e., A declared in Kotlin and B in Java. That seems fine. However, a class C declared in Kotlin that extends B now has to be careful. Without thinking through it too much it seems it may be safe to ignore A's declaration and just inherit B's (or ignore B's and inherit A's actually, since it's stricter), but it would certainly be wrong to inherit both.

@mernst
Contributor

mernst commented Jul 10, 2021

I propose that we resolve this issue (and also others such as #169) in the following way:
JSpecify permits standard behavioral subtyping.

Unless I have missed some subtleties in the long discussions, everyone agrees that that behavior is desirable for specifying Java code.
It's also what is already implemented by widely-used tools, and what users of those tools want.

There may be an incompatibility with current Kotlin behavior, but this project is JSpecify, not KSpecify.

@cpovirk
Collaborator

cpovirk commented Jul 12, 2021

I would want to think more about this before declaring that it's definitely desirable for Java code to support this. My initial reaction is that it would definitely be desirable if all users ran a nullness checker. But in today's world, I could imagine problems: If users happen to use MyFoo.doSomething(@Nullable String) today, they could pass null safely. But if they switched to SomeOtherFoo.doSomething(@NonNull String), their code would start to fail. For example, Guava's CharStreams.asWriter has been returning a Writer whose write(String) method accepts null. This is not how most other Writer implementations behave, so we changed it.

[edit much, much later: I think my point about annotations in the previous paragraph is overstated: After all, the write(String) method could tolerate null inputs no matter how it's annotated.]

On the other hand, I would assume that contravariant parameter types would provide value to some JSpecify users, too, just as covariant return types do. I would just feel most comfortable if we had some more examples before I committed too strongly to either position.

As for Kotlin, I think we've heard from a few different organizations that they are interested in Kotlin's interaction with JSpecify. Interoperability between JVM languages is a nice feature of Java, but certainly most users of Java are Java users, so we'll want to weight concerns accordingly.

Fortunately, in this case, I think Kotlin users aren't actually practically hurt by Kotlin's current behavior: Whether Kotlin sees a method as accepting Foo? or Foo! is almost entirely irrelevant to users: Kotlin will allow users to pass null in either case. Someone please correct me if I'm wrong, but if I have that right, I don't see much harm from Kotlin's current behavior for people who declare methods with contravariant parameter types.

[edit: OK, one downside would be if users want to convert (or transpile) their Java code to Kotlin. In that case, any override with contravariant parameters becomes a breakage. But that seems unlikely to be a major issue -- perhaps similar in scope to how Kotlin arrays aren't covariant and so an override's return type can't be Foo[] if the supermethod's was Object[].]

Put that all together, and at a minimum, I continue not to see value in saying that parameter contravariance is illegal/banned/disallowed. Now, I also still want to think more about the line between what we specify and what we don't. As a result, I am nervous about including normative comments about behavioral subtyping: Once we start saying that x and y are permitted, does that mean that z is not permitted? And what exactly is covered by x and y? I am going to try to open a separate issue about this more general concern in the coming days, though, since this current issue touches on some concerns specific to parameter types.

(As a short-term patch to #169, I am still happy to remove the sample. But I understand that you ultimately want to see the underlying issue resolved.)

@cpovirk
Collaborator

cpovirk commented Jul 12, 2021

I have filed #206 about what I see as one of the underlying principles. I'd be interested in whether you think that it's important to make "JSpecify permits standard behavioral subtyping" part of the normative specification. In particular, I'm pondering whether it's sufficient (assuming that it's even desirable!) to call questions like "valid parameters for overrides" explicitly not formally specified, while still explicitly non-normatively encouraging authors of nullness checkers to support behavioral subtyping when desired.

@cpovirk
Collaborator

cpovirk commented Feb 17, 2022

But a "no" on my proposal would mean that (to be compliant) they still have to prove they can recognize this case as such.

I'm not sure I understand what this means.


it's just about override resolution.

I think this proves too much. It seems to equally well imply that "Java would let @Nullable String foo() override String foo(), so we have to, too."

We certainly "have to" in the sense that we can't prevent that code from existing. But it sounds to me (or maybe I have this wrong?) like you're saying that we should tell checker authors to think differently about parameters and return types. I think we've been discussing some possible justifications for that. But I don't think we can justify it by saying that override checking is "none of our business," any more than we can call any other nullness mismatch "none of our business."


That contravariant parameter might be the only correct thing to do for the code in question.

I would want to think more about this, as it's a strong claim. (I feel like you may have said this before, though, so I probably should have called it out then -- if in fact I felt this way back then :)) I can see a scenario like "I can't change the supertype's parameter's annotation, so I'll at least change the subtype's." But that feels like "doing the best we can under the constraints." Calling it "the correct option" or even "a correct option" is what I'd want to think more about.

@cpovirk cpovirk reopened this Feb 17, 2022
@kevinb9n
Collaborator

kevinb9n commented Feb 17, 2022

Thanks for reopening, I absolutely read your points as registering mild asterisks and not as major concern.

You just succeeded too well at putting them diplomatically. :-)

@cpovirk cpovirk changed the title Require a finding on a contravariant parameter type? [working decision: no] Require a finding on a contravariant parameter type? Feb 17, 2022
@cpovirk
Collaborator

cpovirk commented Feb 18, 2022

Kevin and I talked some offline yesterday. We didn't get to the bottom of everything, but here are some thoughts:

I don't think that anyone is currently pushing for the idea that:

  • "If a nullness checker tells users about 'nullness mismatches' but does not tell them about contravariant parameters, then that checker is somehow 'lacking' from the perspective of JSpecify."

(It might also be that no one is currently pushing for the idea that a checker that does tell users about contravariant parameters is "lacking" from the perspective of JSpecify. But even if that's the case, I do think we'd advise checker authors to make clear that contravariant parameters do not introduce a soundness problem.)

I think that we could title this issue in some way to reflect that and then close it ("If we take a stance on contravariant parameters, will it be against them [no]?"). (If we additionally want to come to a conclusion about what to say about a tool that does tell users about contravariant parameters, then we should float a proposed policy on that, whether on this issue or a new one.)

I think that most of the controversy on this thread has been around related claims, like:

  • [I've replaced my original text for this bullet with something new. I had misinterpreted Kevin's reference to "legacy code," sorry.] I am not sure what "special considerations because of legacy code" we have here. (I think you had elsewhere said that they're different concerns than Kotlin faced. But I'd expect the concerns to be the same.)
  • It would be questionable for a tool to apply JSpecify rules to parameters of overrides. (I think it would be desirable for it to do so.)
    • I think this is part of what's behind...

      But a "no" on my proposal would mean that (to be compliant) they still have to prove they can recognize this case as such.

      I think Kevin is saying that we have a choice between "no samples / conformance tests around parameters" and "some samples / conformance tests around parameters," and he would prefer none because that puts less obligation on tool authors. But I think that we would want to cover cases of parameter checking no matter what (e.g., covariant parameters). Plus, tool authors would want to make a decision on whether to tell users about contravariance or not, so they'd want to test that case, anyway.

  • <other general questions around defining "JSpecify compliance," touching on #232 (which I still need to reply to :( ...) and perhaps #206>

If one of those claims is more closely tied to this issue than I think (e.g., if people are in favor of contravariant parameters only because they think it's necessary for unspecified nullness to work with overrides), then we should get to the bottom of that. But I'm hoping that there's no actual controversy on the "Would we come out against contravariant parameters?" question. In that case, I think we can address the more controversial questions separately and more directly.

@kevinb9n
Collaborator

kevinb9n commented Mar 8, 2022

For future reference, I think this issue is about what happens at "// A": we expect a finding, we expect no finding, or we don't care.

static class SuperFoo {
  void bar(String s) {}
}
static class SubFoo extends SuperFoo {
  @Override void bar(@Nullable String s) {} // A
}
void test(SuperFoo sup, SubFoo sub) {
  sup.bar(null); // B
  sub.bar(null); // C
}

@cpovirk
Collaborator

cpovirk commented Mar 8, 2022

(Kevin and I are going to talk synchronously about this one of these days, I promise!)

I think the disagreement is indeed precisely over what I see as a pivot in that framing: What I think most people most want to know is whether nullness checkers are allowed to let // A pass without some kind of warning or error to the user:

  • That lets users know that they can probably write code like that without having to fight the compiler.
  • That lets tool authors know that they can provide such tools to users who want them.
  • That lets tool authors know that interoperating with code that was annotated that way is something that they'll likely want to think about, even in a far-off, hypothetical, fully annotated world.

I think we all agree (but anyone who disagrees, please speak up!) that the answer is "Yes, tools can let that pass [in no small part because they can let anything pass!]."

I think the second most interesting question is whether we'd discourage that because of some kind of concern (like the Kotlin platform-type concern or the non-nullness-checked-user concern). I think that there's at least general consensus behind the idea that we wouldn't even discourage it.

[edit much later: TODO(cpovirk): Think more about whether, by allowing contravariant parameters, we might unintentionally encourage people to define methods like Collection.contains as having a non-nullable parameter instead of the nullable parameter that we've gone with.]

The "finding" framing is a third question for us to talk about. But it seems to be purely a question about what kind of test tool authors would find more convenient. I think that tool authors would be happy with the answer "We should provide tests that declare contravariant parameters and don't identify them as mismatches." But I think we should be very clear in distinguishing these three questions, and I think we should make sure we're answering the first two and not just the third.

@kevinb9n
Collaborator

kevinb9n commented Mar 8, 2022

TODO: reopen this with new summary, as this is one of the 3 most pivotal issues we have open for 1.0 currently.

@kevinb9n
Collaborator

To your three questions:

  1. Yes, we agree
  2. Yes, we agree
  3. Should the conformance test suite include a contravariant-parameter scenario? My first preference would be "yes, and it should test that NO override-conflict finding is produced." A tool can still give its own warning for its own reasons, as always, but that's its own deal and not a "JSpecify thing". Does that work? If necessary, I would settle for my second preference, "no"... I just want to get this resolved.

@cpovirk
Collaborator

cpovirk commented Jun 22, 2022

I'd be fine with defining the conformance tests' override-conflict finding as not covering the contravariant-parameter scenario.

If we end up needing a compromise position, I can also see defining a contravariant-parameter finding, explicitly with the understanding that:

  • Tools already need to have the ability to identify contravariant parameters (in order to prove that they're not covariant or otherwise bogus), so the new finding carries minimal cost.
  • This is a finding that tools are very likely not to want to report to users in any way. (This is maybe a bit different from how we normally think of our test findings. But arguably it isn't any different than a "finding" that the type of a given parameter is String? or whatever.)

@kevinb9n kevinb9n changed the title Require a finding on a contravariant parameter type? Require a finding on a contravariant parameter type? [working decision: no] Jun 30, 2022
@kevinb9n kevinb9n modified the milestones: 1.0, 0.3 Dec 2, 2022
cpovirk added a commit to cpovirk/jspecify that referenced this issue Dec 2, 2022
We had them declared wrong in Guava until
google/guava#6251. We got away with it because
the current reference checker doesn't issue an error.

I would expect an error from declaring _any_ override whose
type-parameter declarations don't match its supermethod's. (That's what
you get from javac if attempt similar mischief with base types.
Presumably it's failing the return-type-substitutability check in JLS
8.4.5 when it finds that it can't "adapt to the type parameters" of the
original method, since it doesn't meet the "same type parameters" laid
out in 8.4.4?

I notice that the Checker Framework won't issue an error merely for the
type-parameter mismatch. However, it _will_ issue an error if you then
declare method parameters (and probably certain kinds of return types),
so it probably does catch any problem that could lead to an actual
runtime error. And maybe they'd even consider their more lenient
approach a feature, a generalization of [contravariant
parameters](jspecify#49)? I'd need to
think more about that, but I'm hoping not to bother.

In any case, it doesn't surprise me too much that our checker would miss
this error: We're inheriting the Checker Framework's leniency for the
type-parameter declaration, and then we've edited our code for checking
overrides (likely including parameters and/or return types), possibly
making it more lenient in the process. I wonder if there's anything I
would have done differently in that code if I'd realized that the
Checker Framework was being lenient about the declarations?
Realistically, probably nothing: I was confused throughout the work
discussed in jspecify/checker-framework#10.
cpovirk added a commit that referenced this issue Dec 5, 2022 (#356)
@cpovirk cpovirk mentioned this issue Aug 22, 2023
@kevinb9n kevinb9n added the design label Mar 13, 2024
@netdpb
Collaborator

netdpb commented Apr 9, 2024

Current decision: When a parameter type is a supertype of the parameter type it overrides (a "contravariant parameter"), there is no resulting "JSpecify-based" finding. (A tool can report something if it wants to, but that is separate and should not be enabled when running JSpecify tests.)

Argument for changing: Kotlin does not allow such overrides. However, JSpecify focuses on Java declarations.

Timing: This must be decided before version 1.0 of the spec.

Proposal for 1.0: Finalize the current decision. If you agree, please add a thumbs-up emoji (👍) to this comment. If you disagree, please add a thumbs-down emoji (👎) to this comment and briefly explain your disagreement. Please only add a thumbs-down if you feel you can make a strong case why this decision will be materially worse for users or tool providers than an alternative.

@netdpb netdpb added the needs-decision-before-spec-1.0 label and added and then removed the needs-decision-before-jar-1.0 label Apr 9, 2024
@kevinb9n kevinb9n reopened this Apr 10, 2024
@cpovirk
Collaborator

cpovirk commented Apr 11, 2024

I keep feeling that I'm almost on top of this issue, only for things to get slippery again.

JSpecify currently says essentially nothing about when its subtyping check and similar checks should be deployed on Java source. This issue might be about starting to say such things, or it might be about what conformance tests we write.

Part of my concern has been that we both seem to want to say "We don't tell checkers when to apply or not apply our checks" and also seem to want to say "But by writing a conformance test about this situation, we are implicitly telling checkers that applying our checks here would be in some way 'unjustified.'"

The samples tried to dodge this question by saying that they're written under the assumption of JLS-like rules (which of course ban contravariant parameters) but that those rules (a) have no necessary connection to what checkers do and (b) are more strict than "necessary":

You can think of these as extending the normal JLS type rules to cover types that have been augmented with nullness information. (For example, the value of a return statement must be convertible to the method's return type, and the receiver in a method call should not be @Nullable.) ... Nullness checkers are... not obligated to do anything for any particular comment.... Note that a nullness checker can be sound even if it does not issue errors for some cases of jspecify_nullness_mismatch or jspecify_nullness_not_enough_information!

[edit: The spec says something similar.]

I would like more clarity on whether this issue is "We will not write a conformance test for this case" or "We will write a conformance test for this case that contains no cannot-convert comment (or override-conflict comment or whatever)" or "We will recommend that tools not produce errors in such situations" or some combination of those or what.

I also wonder a little if we're opening a can of worms by trying to go beyond the JLS rules for when to apply checks. For example, if a supermethod has a type parameter <T extends Object>, is it OK for a submethod to declare it as <T extends @Nullable Object> if the resulting changes produce only covariant return-type changes and contravariant parameter-type changes? Maybe we don't need to care about that relative edge case. I just think I recall some complexity in the Checker Framework that seemed to arise from trying to handle such overrides. [edit: It came up not long after I wrote this :)] So I'd also like to have some idea of whether we want to say something like "We've thought specifically about contravariant parameter types, and we think they work well in practice" or "We think that going beyond the JLS is safe in general."

I may be making a big thing out of an inconsequential decision. Part of my concern is that it keeps sounding inconsequential, only for phrasing about what is (e.g.) "correct" to crop back up.

If I had managed to collect my thoughts earlier, I would have talked to kevinb9n about this (more) in person.... We should probably talk sometime.

@cpovirk
Collaborator

cpovirk commented Apr 12, 2024

tl;dr I think that "finding" and "JSpecify-based" are undefined terms that are making the question slippery.

My proposal would be:

If it helps, I would also be open to agreeing to expand some non-normative resources to explain why contravariant parameters are worth considering.

@cpovirk
Collaborator

cpovirk commented Apr 29, 2024

After some discussion elsewhere, I think I could get behind the proposal if we change "'JSpecify-based' finding" to "JSpecify conformance finding," which I think makes clearer that this is about what the checker must be able to recognize internally in order to pass our tests. (There are surely other formulations that could work, too, but that one might minimize the edit distance :))

@cpovirk
Collaborator

cpovirk commented May 14, 2024

Fun fact that I can't resist posting here: In one narrow way, you totally can declare a contravariant parameter type in Java:

class OverrideNoTypeVariable {
  interface Super<T> {
    void x(T t);
  }

  interface Sub<T> extends Super<T> {
    @Override
    void x(Object t);
  }
}

It builds without even a warning.

(Per JLS 8.4.2, this is override-equivalence: "the signature of m1 is the same as the erasure (§4.6) of the signature of m2.")

@cpovirk
Collaborator

cpovirk commented May 19, 2024

I just tried the contravariant parameter type with ImmutableMultimap.get, since it could be nice for it to accept Object instead of K (though, if we do that, we should probably update https://errorprone.info/bugpattern/CollectionIncompatibleType accordingly :)).

That turns out to not go so great: Calls that pass a K appear to start resolving to the supermethod Multimap.get, whose return type is Collection instead of ImmutableCollection, leading to build errors in code that expects ImmutableCollection.
