
Version conflict with Guava dependency #756

Closed
AlexDBlack opened this issue Sep 29, 2015 · 12 comments
Comments

@AlexDBlack
Contributor

Running a network with a Histogram iteration listener currently results in the exception shown below.

This is likely due to a Guava version issue; Guava was downgraded for Hadoop compatibility in #718.

java.lang.NoClassDefFoundError: com/google/common/collect/FluentIterable
    at com.fasterxml.jackson.datatype.guava.GuavaTypeModifier.modifyType(GuavaTypeModifier.java:45)
    at com.fasterxml.jackson.databind.type.TypeFactory._constructType(TypeFactory.java:413)
    at com.fasterxml.jackson.databind.type.TypeFactory.constructType(TypeFactory.java:358)
    at com.fasterxml.jackson.databind.cfg.MapperConfig.constructType(MapperConfig.java:268)
    at com.fasterxml.jackson.databind.cfg.MapperConfig.introspectClassAnnotations(MapperConfig.java:298)
    at com.fasterxml.jackson.databind.deser.BasicDeserializerFactory.findTypeDeserializer(BasicDeserializerFactory.java:1238)
    at com.fasterxml.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:445)
    at com.fasterxml.jackson.databind.ObjectMapper._findRootDeserializer(ObjectMapper.java:3666)
    at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3529)
    at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:1978)
    at io.dropwizard.configuration.ConfigurationFactory.build(ConfigurationFactory.java:80)
    at io.dropwizard.cli.ConfiguredCommand.parseConfiguration(ConfiguredCommand.java:114)
    at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:63)
    at io.dropwizard.cli.Cli.run(Cli.java:70)
    at io.dropwizard.Application.run(Application.java:73)
    at org.deeplearning4j.ui.UiServer.createServer(UiServer.java:174)
    at org.deeplearning4j.ui.UiServer.getInstance(UiServer.java:72)
    at org.deeplearning4j.ui.weights.HistogramIterationListener.<init>(HistogramIterationListener.java:50)
    at org.deeplearning4j.ui.weights.HistogramIterationListener.<init>(HistogramIterationListener.java:45)
    at org.deeplearning4j.ui.TempTest.tempTest(TempTest.java:48)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78)
    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212)
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: java.lang.ClassNotFoundException: com.google.common.collect.FluentIterable
    at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 46 more
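The root cause can be probed directly: `FluentIterable` first appeared in Guava 12.0, so it is absent from the Guava 11 that Hadoop pins. A minimal diagnostic sketch (only the class name comes from the stack trace above; everything else is illustrative):

```java
// Diagnostic sketch: probe whether the Guava on the classpath is new
// enough for jackson-datatype-guava. FluentIterable was introduced in
// Guava 12.0, so ClassNotFoundException here means a pre-12 Guava
// (such as Hadoop's Guava 11) -- or no Guava at all -- is on the classpath.
public class GuavaCheck {
    static String guavaGeneration() {
        try {
            Class.forName("com.google.common.collect.FluentIterable");
            return "12+";
        } catch (ClassNotFoundException e) {
            return "pre-12 or absent";
        }
    }

    public static void main(String[] args) {
        System.out.println("Guava generation: " + guavaGeneration());
    }
}
```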
@agibsonccc
Contributor

@EronWright Is there any way we can avoid conflicting with Spark and co. over this Guava dependency?

The usual workaround here is to use dependencyManagement and exclusions.

We need to be able to use the distributed version for the UI, so different exclusions per dependency won't work here, unless we set something up for Hadoop jobs vs. normal use?
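For reference, the dependencyManagement-plus-exclusions workaround mentioned above would look roughly like this in a POM. This is a hypothetical sketch: artifact IDs and version numbers are illustrative, not taken from the actual dl4j poms.

```xml
<!-- Sketch: pin Guava centrally, and exclude the transitive copy
     pulled in by hadoop-client so only one version wins. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>18.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.1</version>
    <exclusions>
      <exclusion>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>
```

As the comment notes, this is all-or-nothing per consumer: the same exclusions apply whether the artifact runs in a Hadoop job or standalone.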

@EronWright

The standard solution is to shade the dependency, and I think Guava is amenable to that.

One caveat, though: shaded types must not be part of the public API. Hopefully they aren't.

Assign the issue to me if you'd like me to work on it.
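The shading approach suggested here is usually done with the maven-shade-plugin's relocation feature, which rewrites Guava's packages into a private namespace so they cannot clash with Hadoop's Guava 11. A sketch, with the plugin version and relocation prefix chosen for illustration only:

```xml
<!-- Sketch: relocate Guava classes into a private package at
     package time, bundling the relocated copy into the artifact. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.deeplearning4j.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

This is exactly why the caveat above matters: after relocation, any Guava type on the public API surface would reference the shaded package name, which consumers cannot see.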

@agibsonccc
Contributor

One other thing, then: is there any way to kill two birds with one stone and keep Guava upgraded while deploying the older version for Hadoop/Spark?

@EronWright

My proposal is that dl4j would have a shaded, upgraded Guava, while Hadoop would use whatever Guava it cared to. It's unproven as yet, but that's the idea. Does that kill both birds?

@EronWright

I see a few instances where Guava types are used by the DL4J public API. That's a problem because the shaded classes are supposed to be hidden.

  • [core] o.d.clustering.cluster.info.ClusterSetInfo returns Table
  • [core] o.d.clustering.quadtree.QuadTree takes AtomicDouble
  • [nlp] o.d.models.embeddings.inmemory.InMemoryLookupTable takes AtomicDouble
  • [nlp] o.d.text.invertedindex.InvertedIndex uses Function

I will rework those classes as necessary, if @agibsonccc approves making breaking changes.

@EronWright changed the title from "UI (histograms etc) broken?" to "Version conflict with Guava dependency" on Oct 1, 2015
@agibsonccc
Contributor

Yup, that will be great.

@EronWright

Follow-up: I see that nd4j also uses Guava in its public API, especially Function. Maybe we should revisit the reason for downgrading Guava in the first place. I'll follow up on #713, then update this issue.

@smarthi

smarthi commented Oct 3, 2015

FWIW, you can look at how Flink shades out the Guava jar from Hadoop, avoiding the downgrade to Guava 11.0.


@agibsonccc
Contributor

Oh that's helpful! I think we can do something with this.

@EronWright

Update: if possible I'd like to avoid shading Guava, since its types appear in both the nd4j and dl4j public APIs.

We know that dl4j works with both Guava 11 and Guava 18, and Guava tends to be backwards compatible for types not marked @Beta. One option, then, is to compile against 11 but allow a newer version at runtime. At present the dependencyManagement block pins Guava at 11, hence the stack trace above.

The sticking point is dropwizard, which definitely needs a newish Guava. Maybe we can shade dropwizard and its dependencies so that it is safely embeddable in all situations. Note that dropwizard is used by dl4j-nlp (a core library), so we need dropwizard to be embeddable.

To elaborate on @smarthi's comment: Flink has a dependency on Hadoop and uses a fatjar of Hadoop with a shaded Guava. My plan for dropwizard is essentially the same.

To be continued.
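The "compile with 11, allow newer at runtime" idea can be sketched from the consumer side. In Maven, a project's own dependencyManagement entry takes precedence over versions arriving transitively, so an application depending on dl4j could select a newer Guava itself. Version numbers here are illustrative, not from the actual poms:

```xml
<!-- Illustrative consumer-side override: the application's own
     dependencyManagement wins over the Guava 11 pinned transitively
     by dl4j, so a newer Guava ends up on the runtime classpath. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>18.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

This only works as long as the Guava types dl4j was compiled against remain binary compatible in the newer release, which is the non-@Beta caveat mentioned above.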

@nyghtowl

#58 is closed, and this no longer appears to be an issue.

AlexDBlack pushed a commit that referenced this issue May 21, 2018
* embedding_lookup operation from TF port. Initial commit

* dynamic_partition operation and test. Initial commit

* Added more tests for dynamic_partition op

* Fixed test for dynamic_partition op.

* Added additional test for embedding_lookup

* dynamic_stitch operation and test. Initial commit

* Fixed dynamic_stitch op and test suite for it.
@lock

lock bot commented Jan 21, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Jan 21, 2019