Conversation
…ng ParentAdapters for each ancestor (both abstract and concrete) that has injected members.
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (C) 2012 Square, Inc.
Copyright (C) 2012 Google, Inc.
nit: 2013
Too exhausted to do a real review. Tomorrow.
In regards to Dagger's naming scheme for adapters: it confuses Eclipse. In particular, Eclipse gets confused when you have two classes in the same package named as in Foo and Foo$SomethingElse, i.e. when one class's name is identical to another's up to a first dollar sign. Eclipse gives no errors in the class itself (it doesn't complain when you declare Foo$SomethingElse), but when you write out that class name somewhere else, it shows an error. So if another class has a field of type Foo$SomethingElse, Eclipse will report that it cannot resolve the type.

All of Dagger's adapters will create this error in Eclipse. It's just that until now, nowhere in the generated code did we ever write out an adapter name. Instead we use the interface as in
I encountered this issue because my generated code declares and initializes ParentAdapters (and at least for the initialization, I need to use the actual adapter name). So when the code I generate declares Ancestor$$ParentAdapter$$Child, Eclipse gives an error. The good news is that you only see this error when you look inside the class; it does not appear in the package explorer. The better news is that the code works fine even in Eclipse. The cause appears to be Eclipse confusing the $ in the name with inner-class notation. To see this easily, run the codegen and try importing the generated adapters using Eclipse's autocomplete: it changes the '$' to '.' in its suggestions, and it thinks the name of the class is just the suffix (e.g. the autocomplete list shows "InjectAdapter" and "ModuleAdapter" even though no such classes exist).

Anyway, as I said, this is not a functionality issue, since despite the red line inside the class, everything works fine. The real issue is whether users will look at the generated code and be confused by the red line, or see the errors and blame Dagger for unrelated issues in their code. If we want to change it, we can either:
(1) change the naming scheme. Using a separator other than $ (or $$) seems to do the trick, but this will break for classes that have already been generated with the current naming scheme; or
(2) break the naming scheme for ParentAdapters only, since after all, the only time that we declare an adapter by name is with ParentAdapters.
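For illustration, a minimal sketch of the kind of name collision being described (class names are hypothetical); javac accepts all of this, while Eclipse flags the reference:

```java
// NameClash.java -- three package-private top-level classes in one file for
// brevity. '$' is a legal Java identifier character, so javac compiles this.
class Foo {}

class Foo$SomethingElse {}

class Client {
  // javac resolves this reference fine; Eclipse marks Foo$SomethingElse as an
  // unresolvable type because it parses the '$' as inner-class notation.
  Foo$SomethingElse adapter = new Foo$SomethingElse();
}
```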
@sgoldfed... First, thanks for jumping in and fixing something that's broken. Next, I wanted to explain that this was a known limitation that we intentionally didn't fix.

The problem with the fix is that you're generating code for classes outside of the current compile scope. This doesn't end well. Suppose you have … Until we add an …

We plan on solving this problem, but our fix is going to be more invasive than just improving our annotation processors. Instead we're building whole-app code gen, where we admit to ourselves that incremental builds are hard and give up on annotation processors, which are inherently incremental. Instead we'll enter the fun and exciting world of bytecode parsing. Hooray.

I would have loved to explain this to you before you started writing code, but you wrote the code first. Sorry about that.
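To make the hazard concrete, here is a hedged sketch of the kind of staleness scenario being described (all names are hypothetical, not taken from the PR):

```java
import javax.inject.Inject;

class Foo {}
class Bar {}

// Upstream library, version 1: the supertype has no injected members.
abstract class Base {
  // Version 2 later adds:  @Inject Bar bar;
  // If the new library jar is swapped in without recompiling Child, the
  // previously generated Child$$InjectAdapter knows nothing about Base.bar
  // and silently skips it.
}

class Child extends Base {
  @Inject Foo foo; // Child's adapter was generated against Base version 1.
}
```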
@swankjesse How about capitalizing on the annotation processor but running it as a post-processor instead? (…)
@swankjesse - it's a known limitation we intentionally didn't fix, but "we" need this fixed. I realize that the desire is to get to whole-app generation, but in looking at this, we realized we need a fix in the shorter term. I think your logic is suspect with respect to the swapping-in of new supertypes with …

First of all, if you're in a maven context, that is, while not impossible, extremely unlikely. To bump the upstream dependency version and release, you need to re-build the dependent project, so C will get recompiled. In most other cases it is similar. In the rare case where it isn't, you're entirely at risk of B having a binary-incompatible breaking change anyway, with C assuming the old structure, so the problem is there in Java, not in ours. You're asking us to forgo a fully code-genned solution to support an edge case that is already problematic in any Java code.

I think it's entirely reasonable for us to expect anyone using this in a dependency-aware build system (or, you know, any build system) to re-compile children of altered parents. Period. With that assumption in place we can get on with things.

I'm not going to re-open this, because I want to come to some understanding before I do (don't want to precipitate a pull-request open/close war. ;) But I do strongly feel that this functionality outweighs the risks, given the nature of the risk and the nature of Java development.
Oh, and as a side note, @swankjesse: Steven wrote this after a discussion with Greg Kick and me in which we decided that the risk you are describing was outweighed by the value of the feature, and that this edge-case misuse of build systems wasn't one we felt we wanted to support. That may ultimately mean this goes in the google fork, which is fine, but I think it's worth having this discussion here.
Yeah, put it in the Google fork. Put some miles on it and see if the incremental builds are a problem in practice. Are you guys using Maven or something proprietary?
@tbroyer using an annotation processor to do full-app code gen is a good idea.
We use Blaze internally and build from source (no binary dependencies, in the way open-source projects have them), but we pretty much use maven for open-source projects. So we won't feel the pain internally, by any means. My suspicion is that there is no build system out there (including maven) in which you can bump a dependency version and then push without rebuilding, unless you're deliberately circumventing build processes. And for those few who are manually cobbling together jars, I cheerfully welcome them into the second decade of the 21st century.
@cgruber You're putting too much faith in Maven: https://cwiki.apache.org/confluence/display/MAVEN/Incremental+Builds

AFAICT, Ant's javac task and many others (I believe Gradle too) are similarly broken. One issue, to begin with, is that they try to infer staleness at the file level (…).

AFAICT, the only Java-oriented build system that gets it right is Buck (and Blaze too, I guess), notably because it handles staleness at the JAR level.

See also http://blog.ltgt.net/in-quest-of-the-ultimate-build-tool
No, I'm putting my faith in people's consistent use of reasonable release processes... like building clean before you push to maven's central repository. I'm not talking about maven's compiler plugin getting it right. The frequency of someone getting hit by what you're describing is the frequency with which one actually releases software into production without a clean build. And I repeat, I'm not convinced any of us want to support that use-case. It's madness to rely on not building clean while swapping in up-stream dependencies.

I'm not suggesting these things shouldn't be fixed (by design, or fixed in implementation). And since you spotted the error in maven-compiler-plugin, I recommend proposing a fix. :D But seriously... who does this? Really.
You talk a lot about this problem being unlikely to occur in a production build, but non-production builds are important too. During development I would find it quite frustrating if occasional changes required clean builds.

As for proposing a fix, I've already proposed it and implemented it. It's the current behavior.
LOL. Ok. I get you. Though, so as not to get diverted by a cute response: by "fixes" I meant fixing the build infrastructure to not suck, since Thomas actually mentioned where, in maven, the failure to notice and account for upstream change lies.

In practice, for me, even in development, I nearly never omit clean in maven either. Given that compilation is <10% of the time taken in my builds, the rest being unit tests (far more if I'm running integration tests), there's rarely a payoff in time saved from incremental builds for me in the build system. In the IDE, perhaps, and that's important. I can relate to the frustration in theory, but only in the IDE, and even then, I guess I'm used to all sorts of ways IDEs get out of sync and need a clean build. Is it frustrating? Yes. But I'm also not wanting to design my framework around IDE bugs. I guess I am, essentially, apologizing for the need for an occasional clean build, but this isn't really a problem in our framework; this is a problem in the state of IDEs and incremental compilation.

That said, I see your point: you implemented around this bugginess as a solution to a problem you encountered. I tend to prefer a sound production scenario over a sound development scenario if I'm forced to choose, which we seem to be here. We've clocked some real and not-insignificant speed reductions in production because of failover to reflection in our apps, so the current solution is optimized in the wrong direction from our users' perspective.

We'll put it in the fork and see how it plays out. I just wanted to be clear and flesh out the fact that the above isn't inherently the wrong design, and the current design isn't inherently the right one. Dagger currently contains a design element that accounts for a buggy build environment which doesn't permit (safe) incremental builds in the irregular case of supertypes being altered without recompiling the children. I think we disagree over where the focus on correctness should lie (development vs. production), but I am comfortable with that disagreement. This is why we fork. :)
This slipped through my inbox, but I'm having a bit of difficulty following the line of reasoning here.

First, we have single-artifact incremental builds to deal with. @swankjesse argues that this approach breaks incremental builds because applying an annotation processor to a subclass can result in stale behavior. OK, but OTOH, @tbroyer provided some evidence that altering the superclass in more horrible, obvious ways is broken anyway. So, either we have a build system that does the right thing or one that does the wrong thing, but have we actually found any situation where it works correctly for some types of superclass changes, but not for annotations? Granted, the example would generate …

Second, we have inter-artifact snapshot/development builds. The link from @tbroyer shows that this is again super-broken no matter what we do.

Finally, we have inter-artifact release builds. We all seem to agree that if you're updating a versioned dependency or cutting a release, there's almost certainly going to be a clean build, right? Seems fine there.

I guess I would be curious to know if there's actually a concrete example of this breaking, but only for annotation processors. It seems to me that if you change a superclass without recompiling a subclass, "you're gonna have a bad time" regardless of whether or not an annotation processor is involved.
This problem isn't theoretical. I was able to reproduce it in a very simple IntelliJ project. First, download this branch and …
Next, change …
Perhaps more alarmingly, if you do a clean build, you can see the reverse problem: a non-injected field is injected. The reason this particular change is problematic is that …
OK, so "it doesn't work in IntelliJ" is a valid issue. AFAICT, it's actually the only issue: Maven's compilation issues have nothing to do with annotation processing, and your example project works fine in Eclipse. The only documentation that I can find about when IntelliJ chooses to recompile is buried in an FAQ which says "IntelliJ IDEA keeps track of dependencies between source files". It seems like it's probably trying to be more clever than it ought to be. Since IntelliJ probably isn't going to change any time soon, it does seem reasonable to forgo this feature for the time being.

That said, the notion that injected members are implementation details is absurd. Just because you wedged DI between the caller and the type doesn't mean that you get any fewer errors when you request an instance without providing bindings for the supertype's dependencies. The field may have default visibility, but callers have to know all of the dependencies to get anything meaningful out of the …

In fact, this is where IntelliJ's compilation scheme is already breaking down for Dagger. Take Dagger 1.0.1 and toss an …
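To illustrate the point that supertype dependencies aren't really hidden, here is a hedged sketch using the Dagger 1.x API (the module and class names are made up for this example):

```java
import dagger.Module;
import dagger.ObjectGraph;
import javax.inject.Inject;

class Bar {}

abstract class Base {
  @Inject Bar bar; // package-private "implementation detail" of the supertype
}

class Child extends Base {
  @Inject Child() {}
}

// No binding for Bar is provided. Even though callers never touch Base.bar
// directly, asking the graph for a Child still fails (at compile time with
// the codegen processor, or at graph creation otherwise), because the
// supertype's dependencies are effectively part of Child's contract.
@Module(injects = Child.class)
class ChildModule {}

class Demo {
  public static void main(String[] args) {
    ObjectGraph graph = ObjectGraph.create(new ChildModule());
    graph.get(Child.class); // error: no injectable binding for Bar
  }
}
```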
This eliminates the need to fall back to reflection when an injected type has a (non-platform) supertype. Previously, if a class had a supertype, we would fall back to reflection unless the supertype had an @Inject annotation. In fact, in order not to fall back to reflection, every (non-platform) ancestor had to have an @Inject annotation.
This change is particularly relevant to classes with abstract supertypes that do not have members injection: such an abstract supertype has no @Inject annotation, so in its current state, Dagger falls back to reflection.
Also note that the generated ParentAdapters only implement MembersInjector; they are not Bindings, since all we need them for is members injection in the parents. Additionally, we only create parent adapters when needed: if a class has some ancestors that have members injection and some that do not, adapters are generated only for those that do. A simplified sketch of such an adapter follows.
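As a rough illustration, a generated parent adapter under this scheme might look something like the following (names and wiring are simplified; the actual codegen output differs, e.g. in how the dependency is resolved through the graph):

```java
import dagger.MembersInjector;
import javax.inject.Inject;

class Dep {}

class Ancestor {
  @Inject Dep dep;
}

// Simplified shape of a generated parent adapter: it implements
// MembersInjector but not Binding, because it exists only to inject the
// ancestor's members, never to provide instances.
final class Ancestor$$ParentAdapter$$Child implements MembersInjector<Ancestor> {
  private final Dep dep;

  Ancestor$$ParentAdapter$$Child(Dep dep) {
    this.dep = dep;
  }

  @Override public void injectMembers(Ancestor ancestor) {
    ancestor.dep = dep;
  }
}
```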
This is a change in behavior, and its only downside is that Dagger will no longer automatically detect it if somebody swaps in a parent type from an upstream dependency without recompiling the subclasses.