This repository has been archived by the owner on Aug 26, 2021. It is now read-only.

Added codegen to handle supertypes #294

Closed

Conversation

@sgoldfed
Contributor

This eliminates the need to fall back to reflection when an injected type has a (non-platform) supertype. Previously, if a class had a supertype, we would fall back to reflection unless the supertype had an @Inject annotation. In fact, to avoid falling back to reflection, every (non-platform) ancestor had to have an @Inject annotation.

This change is particularly relevant to classes with abstract supertypes that have no members to inject. Such an abstract supertype carries no @Inject annotation, so Dagger as it stands falls back to reflection.

Also note that the generated ParentAdapters only implement MembersInjector. That is, they are not Bindings, since all we need them for is members injection in the parents. Additionally, we only create parent adapters when needed: if a class has some ancestors with members injection and some without, adapters are generated only for those that have it.
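
To make the shape of that concrete, a parent adapter would look roughly like this. This is only an illustrative sketch: the names (Ancestor, Dep, dep) are made up, and the wiring that obtains the Binding from the Linker is elided.

import dagger.MembersInjector;
import dagger.internal.Binding;

// Generated only because Ancestor has @Inject members. Implements
// MembersInjector, not Binding: it exists solely to inject the parent.
public final class Ancestor$$ParentAdapter$$Child
    implements MembersInjector<Ancestor> {
  private final Binding<Dep> dep; // one binding per injected field of Ancestor

  Ancestor$$ParentAdapter$$Child(Binding<Dep> dep) {
    this.dep = dep;
  }

  @Override public void injectMembers(Ancestor ancestor) {
    ancestor.dep = dep.get();
  }
}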

This is a change in behavior, and its only downside is that Dagger will no longer automatically detect if somebody swaps in a parent type from an upstream dependency without recompiling.

…ng ParentAdapters for each ancestor (both abstract and concrete) that has injected members.
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (C) 2012 Square, Inc.
Copyright (C) 2012 Google, Inc.
Collaborator

nit: 2013

@JakeWharton
Collaborator

Too exhausted to do a real review. Tomorrow.

@sgoldfed
Contributor Author

In regards to Dagger's naming scheme for adapters: it confuses Eclipse. In particular, Eclipse gets confused when you have two classes in the same package named like Foo and Foo$SomethingElse, i.e., when one class name is a prefix of another's and the first differing character is a dollar sign.

When I say that Eclipse gets confused: in the class itself it gives no errors (i.e., it doesn't show any errors when you declare Foo$SomethingElse). But when you write out that class name somewhere else, it shows an error. So if another class has a field of type Foo$SomethingElse, Eclipse will report that it cannot resolve the type.

All of Dagger's adapters will create this error in Eclipse. It's just that until now, nowhere in the generated code do we ever write out an adapter name. Instead we use the interface, as in:

Binding<Foo> foo;
foo = (Binding<Foo>) linker.requestBinding(...); 

I encountered this issue because my code declares and initializes ParentAdapters (and at least for the initialization, I need to use the actual adapter name). So in the code I generate, when I declare Ancestor$$ParentAdapter$$Child, Eclipse gives an error.
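
For example (reusing the Foo example above):

Foo$SomethingElse x = new Foo$SomethingElse();
// javac compiles this fine; Eclipse marks it as a type it cannot resolve.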

The good news is that you only see this error when you look inside the class; it does not appear in the package explorer. The better news is that the code works fine, even in Eclipse.

It appears that the cause of the error is Eclipse confusing the $ in the name with inner-class naming. To see this easily, run the codegen and try importing the generated adapters using Eclipse's autocomplete. You can see that it changes the '$' to '.' in its suggestions and thinks the name of the class is just the suffix (e.g., the autocomplete list will show "InjectAdapter" and "ModuleAdapter" even though no such classes exist).

Anyway, as I said, this is not a functionality issue: despite the red line inside the class, everything works fine. The real issue is whether users will look at the generated code and get confused by the red line, or see the errors and blame Dagger for unrelated issues in their code.

If we want to change it, then we can either:

(1) Change the naming scheme. Using a separator other than $ (or $$) seems to do the trick, but this will break classes that have already been generated with the current naming scheme.

(2) Break the naming scheme for ParentAdapters only, since, after all, the only time we declare an adapter by name is with ParentAdapters.

@swankjesse
Contributor

@sgoldfed...

First, thanks for jumping in and fixing something that's broken.

Next, I wanted to explain that this was a known limitation that we intentionally didn't fix. The problem with the fix is that you're generating code for classes outside of the current compile scope. This doesn't end well.

Suppose you have A.java, B.java, and C.java, such that C extends B and B extends A. Also, A has injected fields, B has none, and C has injected fields. When I compile C.java, the annotation processor detects this and helpfully tells C$$InjectAdapter that it can skip injecting B and go right to A. So far so good.

Until we add an @Inject member on B. That won't necessarily cause C to be recompiled, because C doesn't have to be in the same project/module as B. And that means that C now has a stale assumption.
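
A minimal sketch of those three classes (field names invented for illustration):

import javax.inject.Inject;

class A { @Inject String name; }        // A has injected fields
class B extends A { }                   // B has none -- for now
class C extends B { @Inject long id; }  // C has injected fields

// C$$InjectAdapter is generated with the assumption baked in that it can
// skip B and inject A's members directly. Add an @Inject field to B later,
// and nothing forces C$$InjectAdapter to be regenerated.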

We plan on solving this problem, but our fix is going to be more invasive than just improving our annotation processors. Instead, we're building whole-app code gen, where we admit to ourselves that incremental builds are hard and give up on annotation processors, which are inherently incremental. Instead we'll enter the fun and exciting world of bytecode parsing. Hooray.

I would have loved to explain this to you before you started writing code, but you wrote the code first. Sorry about that.

@swankjesse closed this Jul 25, 2013
@tbroyer
Collaborator

tbroyer commented Jul 25, 2013

@swankjesse How about capitalizing on the annotation processor but instead run it as a post-processor? (javac -proc:only -cp target/classes example.MyModule example.MyInjectable example.MyInjected, or the equivalent in code with javax.tools.JavaCompiler, with the list of classes to process being fed by classpath scanning – e.g. with Scannotation or Reflections)
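
In code, that post-processing pass might look something like this (a sketch of the -proc:only idea; the class name PostProcess and the processed class names are placeholders carried over from the command above):

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class PostProcess {
  public static void main(String[] args) {
    JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
    // -proc:only runs annotation processors without compiling anything;
    // in practice the class list would come from classpath scanning.
    int status = javac.run(null, null, null,
        "-proc:only",
        "-cp", "target/classes",
        "example.MyModule", "example.MyInjectable", "example.MyInjected");
    System.exit(status);
  }
}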

@cgruber
Collaborator

cgruber commented Jul 25, 2013

@swankjesse - it's a known limitation we intentionally didn't fix, but "we" need this fixed. I realize the desire is to get to whole-app generation, but in looking at this, we realized we need a fix in the shorter term, not just the longer term.

I think your logic is suspect with respect to swapping in new supertypes with @Inject. You're talking about that event "not causing C to be recompiled" because B may not be in the same project. But that is true of any change to B, and you're proposing that we support the use case where someone alters a supertype and doesn't recompile the children of that supertype.

First of all, in a Maven context that is, while not impossible, extremely unlikely. To bump the upstream dependency version and release, you need to rebuild the dependent project, so C will get recompiled. Most other cases are similar. In the rare case where it isn't, you're already at risk of B introducing a binary-incompatible breaking change anyway, with C assuming the old structure, so the problem lies in Java, not in Dagger. You're asking us to forgo a fully code-genned solution to support an edge case that is already problematic in any Java code.

I think it's entirely reasonable for us to expect anyone using this in a dependency-aware build system (or, you know, any build system) to recompile children of altered parents. Period. With that assumption in place we can get on with things. I'm not going to re-open this, because I want to come to some understanding before I do (I don't want to precipitate a pull-request open/close war ;). But I do strongly feel that this functionality outweighs the risks, given the nature of the risk and the nature of Java development.

@cgruber
Collaborator

cgruber commented Jul 25, 2013

Oh, and as a side note, @swankjesse: Steven wrote this after a discussion with Greg Kick and me in which we decided that the risk you're describing was outweighed by the value of the feature, and that this edge-case misuse of build systems wasn't one we wanted to support. That may ultimately mean this goes in the Google fork, which is fine, but I think it's worth having this discussion here.

@swankjesse
Contributor

Yeah, put it in the Google fork. Put some miles on it and see if the incremental builds are a problem in practice. Are you guys using Maven or something proprietary?

@swankjesse
Contributor

@tbroyer using an annotation processor to do full-app code gen is a good idea.

@cgruber
Collaborator

cgruber commented Jul 27, 2013

We use Blaze internally and build from source (we don't use binary dependencies the way open-source projects do), but we pretty much use Maven for open-source projects. So we won't feel the pain internally, by any means. My suspicion is that there is no build system out there (including Maven) with which you can bump a dependency version and then push without rebuilding, unless you're deliberately circumventing build processes.

And for those few who are manually cobbling together jars, I cheerfully welcome them into the second decade of the 21st century.

@tbroyer
Collaborator

tbroyer commented Jul 27, 2013

@cgruber You're putting too much faith in Maven: https://cwiki.apache.org/confluence/display/MAVEN/Incremental+Builds
maven-compiler-plugin 3.x handles the situation better than 2.5 but we're still not there yet (and I do believe Maven is broken by design in this respect, and is therefore unfixable).

AFAICT, Ant's javac task and many others (I believe Gradle too) are similarly broken.

One issue, to begin with, is that they try to infer staleness at the file level: Foo.java is newer than Foo.class, so it needs to be recompiled. This is fine, but the reverse is not always true, because Foo.java can depend on Bar.java, which has changed (the exact problem we're discussing here). SCons uses heuristics to infer those dependencies from the source file content, but it still has false negatives.
They consequently use the previously compiled classes in the compile classpath (including classes that have since been removed from the sources; and because of annotation processors, class post-processors, and the way inner classes are compiled, there's no one-to-one mapping between output files and source files, so it's hard if not impossible to selectively delete class files). This also stems from the fact that Maven plugins all output to target/classes, overwriting existing files if needed, rather than each having its own output folder whose contents are then merged together.
Looking at the current state of affairs (trunk of maven-compiler-plugin and maven-shared-incremental), it seems it doesn't take the module's dependencies into account (I find that really strange; I'd have thought it'd be one of the first things they'd fix), which is exactly what we're talking about here.

AFAICT, the only Java-oriented build system that gets it right is Buck (and Blaze too, I'd guess), notably because it handles staleness at the JAR level.
The issue for many projects is that Buck doesn't manage external dependencies other than at the file level. Gerrit solved it with gen_rules that download jars from Central before using them as java_binaries; transitive dependencies and version-conflict resolution still have to be handled manually.

See also http://blog.ltgt.net/in-quest-of-the-ultimate-build-tool

@cgruber
Collaborator

cgruber commented Jul 27, 2013

No, I'm putting my faith in people's consistent use of reasonable release processes, like building clean before you push to Maven's central repository. I'm not talking about Maven's compiler plugin getting it right. The frequency of someone getting hit by what you're describing is the frequency with which one actually releases software into production without a clean build.

And I repeat, I'm not convinced any of us want to support that use case. It's madness to rely on not building clean while swapping in upstream dependencies.

I'm not suggesting these things shouldn't be fixed (whether broken by design or in implementation). And since you spotted the error in maven-compiler-plugin, I recommend proposing a fix. :D But seriously... who does this? Really.

@swankjesse
Contributor

You talk a lot about this problem being unlikely to occur in a production build. But non-production builds are important too. During development I would find it quite frustrating if occasional changes required clean builds.

As for proposing a fix, I've already proposed it and implemented it. It's the current behavior.

@cgruber
Collaborator

cgruber commented Jul 28, 2013

LOL. OK, I get you. Though, so as not to get diverted by a cute response: by "fixes" I meant fixing the build infrastructure to not suck, since Thomas actually pointed out where, in Maven, the failure to notice and account for upstream changes lies.

In practice, for me, even in development, I almost never omit clean in Maven either. Given that compilation is <10% of the time taken in my builds, the rest being unit tests (far more if I'm running integration tests), there's rarely a time-savings payoff from incremental builds for me in the build system. In the IDE, perhaps, and that's important. I can relate to the frustration in theory, but only in the IDE, and even then I'm used to all sorts of ways IDEs get out of sync and need a clean build. Is it frustrating? Yes. But I also don't want to design my framework around IDE bugs.

I guess I am, essentially, apologizing for the need for an occasional clean build, but this isn't really a problem in our framework; it's a problem in the state of IDEs and incremental compilation. That said, I see your point: you implemented around this bugginess as a solution to a problem you encountered. I tend to prefer a sound production scenario over a sound development scenario if I'm forced to choose, which we seem to be here. We've clocked some real and not-insignificant slowdowns in production because of the fallback to reflection in our apps, so the current solution is optimized in the wrong direction from our users' perspective.

We'll put it in the fork and see how it plays out. I just wanted to be clear and flesh out the fact that the above isn't inherently the wrong design, and the current design isn't inherently the right one... Dagger currently contains a design element that accounts for a buggy build environment which doesn't permit (safe) incremental builds in the irregular case of supertypes being altered without recompiling their children. I think we disagree over where the focus on correctness should lie (development vs. production), but I am comfortable with that disagreement. This is why we fork. :)

@gk5885
Contributor

gk5885 commented Jul 29, 2013

This slipped through my inbox, but I'm having a bit of difficulty following the line of reasoning here.

First, we have single-artifact incremental builds to deal with. @swankjesse argues that this approach breaks incremental builds because applying an annotation processor to a subclass can result in stale behavior. OK, but OTOH, @tbroyer provided some evidence that altering the superclass in more horrible, obvious ways is broken anyway. So, either we have a build system that does the right thing or one that does the wrong thing, but have we actually found any situation where it works correctly for some types of superclass changes, but not for annotations? Granted, that example would generate Errors at runtime, but overload resolution, for example, could produce behavior-only changes as well.

Second, we have inter-artifact snapshot/development builds. The link from @tbroyer shows that this is again super-broken no matter what we do.

Finally, we have inter-artifact release builds. We all seem to agree that if you're updating a versioned dependency or cutting a release, there's almost certainly going to be a clean build, right? Seems fine there.

I guess I would be curious to know if there's actually a concrete example of this breaking, but only for annotation processors. It seems to me that if you change a superclass without recompiling a subclass "you're gonna have a bad time" regardless of whether or not an annotation processor is involved.

@swankjesse
Contributor

This problem isn't theoretical. I was able to reproduce it in a very simple IntelliJ project.

First, download this branch and mvn install it. Next, download this zip file and open it in IntelliJ. Build & run, and you'll see the following:

injected
not injected
injected

Next, change B.java to include an @Inject annotation on its only field by uncommenting the commented line (and commenting the other one). Re-run main, and you'll see that your @Inject annotation didn't work:

injected
not injected
injected

Perhaps more alarmingly, if you then do a clean build, you can see the reverse problem: a non-injected field is injected.

The reason this particular change is problematic is that @Inject annotations are an implementation detail of a class. I should be able to add them and remove them without rebuilding downstream classes.
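
For reference, the B.java in that zip presumably looks something like this (a guess at the repro project, since the zip isn't reproduced here):

import javax.inject.Inject;

class B extends A {
  // Uncomment the first line (and comment the second) to reproduce:
  // @Inject String field;
  String field;
}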

@gk5885
Contributor

gk5885 commented Jul 30, 2013

OK, so "it doesn't work in IntelliJ" is a valid issue. AFAICT, it's actually the only issue - Maven's compilation issues have nothing to do with annotation processing and your example project works fine in Eclipse. The only documentation that I can find about when IntelliJ chooses to recompile is buried in an FAQ which says "IntelliJ IDEA keeps track of dependencies between source files". It seems like it's probably trying to be more clever than it ought to.

Since IntelliJ probably isn't going to change any time soon it does seem reasonable to forgo this feature for the time being.

That said, the notion that injected members are implementation details is absurd. Just because you wedged DI between the caller and the type doesn't mean you get any fewer errors when you request an instance without providing bindings for the supertype's dependencies. The field may have default visibility, but callers have to know all of the dependencies to get anything meaningful out of the ObjectGraph. Otherwise, each one of the errors is going to (correctly) reference an "implementation detail" of the supertype that isn't anywhere in the calling code.

In fact, this is where IntelliJ's compilation scheme is already breaking down for Dagger. Take Dagger 1.0.1 and toss an @Inject Object foo; on B. No complaints from IntelliJ. You get an IllegalStateException at runtime instead. Eclipse handles this situation like you'd hope: immediate compilation error.
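
Spelled out (using the class names from the repro project above):

import javax.inject.Inject;

class B extends A {
  @Inject Object foo; // no binding for Object is provided anywhere
}

// Per the above: Eclipse surfaces this as an immediate compile error, while
// IntelliJ's incremental build keeps the stale adapter and defers the
// failure to an IllegalStateException at runtime.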
