
DL4J: Loss function weights array should be automatically cast to appropriate datatype #8431

Closed · Blacktoviche opened this issue Nov 21, 2019 · 4 comments · Fixed by KonduitAI/deeplearning4j#75

@Blacktoviche commented Nov 21, 2019

Issue Description

ClassNotFoundException: org.nd4j.linalg.api.ops.impl.transforms.floating.Histogram

Version Information

  • Deeplearning4j version: 1.0.0-beta5
  • Platform information (OS, etc.): Ubuntu

If I fall back to 1.0.0-beta4, I get this instead:
java.lang.IllegalArgumentException: Op.X must have same data type as Op.Y: X.datatype=FLOAT, Y.datatype=DOUBLE
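
For context, ND4J requires both operands of an op to have the same data type: Nd4j.create(new double[]{...}) produces a DOUBLE array, while networks run at the default FLOAT dtype, which is exactly the mismatch the message describes. A minimal illustration (a hypothetical repro, not taken from the original report):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class DtypeMismatchRepro {
    public static void main(String[] args) {
        INDArray x = Nd4j.create(new float[]{1f, 2f});    // FLOAT: created from a float[]
        INDArray y = Nd4j.create(new double[]{1.0, 2.0}); // DOUBLE: created from a double[]
        x.addi(y); // expected to fail with "Op.X must have same data type as Op.Y"
    }
}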

@Blacktoviche (Author) commented Nov 21, 2019

It seems to be a library conflict with the DL4J UI: when I removed deeplearning4j-ui from Maven, everything was fine. So I can't use deeplearning4j-ui 1.0.0-beta4 with deeplearning4j-core 1.0.0-beta5. Falling back to deeplearning4j-core 1.0.0-beta4 with deeplearning4j-ui 1.0.0-beta4 fixed the ClassNotFoundException.

I then solved the remaining issue:

java.lang.IllegalArgumentException: Op.X must have same data type as Op.Y: X.datatype=FLOAT, Y.datatype=DOUBLE

by changing weightsArray from
final INDArray weightsArray = Nd4j.create(new double[]{0.57, 0.75});
to
final INDArray weightsArray = Nd4j.create(new float[]{0.57f, 0.75f});

Now I can run my train/test model.
It seems we have to wait until the next deeplearning4j-ui beta5 release so we can use it with deeplearning4j-core beta5.
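
(For reference: an equivalent fix is to keep the double literals and cast the array explicitly; castTo is ND4J's standard cast. This one-liner is a sketch, not from the original comment.)

import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// Create as DOUBLE, then cast to FLOAT to match the network's default dtype.
final INDArray weightsArray = Nd4j.create(new double[]{0.57, 0.75}).castTo(DataType.FLOAT);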

@AlexDBlack (Contributor) commented Nov 22, 2019

So I can't use deeplearning4j-ui 1.0.0-beta4 with deeplearning4j-core 1.0.0-beta5

Right, that's correct. You can't and shouldn't mix versions: everything (DL4J, ND4J, etc.) needs to be the same version.
In fact, you probably got a big warning logged about that when you started your program; we have a version-checking system.

You should always use the latest version, too, for optimizations, bug fixes, and new features; you can see the release notes here: https://deeplearning4j.org/release-notes.html

It seems we have to wait until the next dl4j-ui beta 5

There's nothing to wait for; everything is always released all at once.
We did drop Scala 2.10 support in 1.0.0-beta5 (and added Scala 2.12 support), so you need to use deeplearning4j-ui_2.11 or _2.12 instead. See the release notes I linked previously.

by changing weightsArray from

Can you share the code for this? It's a separate issue from the other one, and I don't have any context here. Do you mean the weights array in a loss function, or something else?

@Blacktoviche (Author) commented Nov 22, 2019

Thanks for your explanation. I should've looked in the Maven repo for the latest dl4j-ui.

Can you share the code for this? It's a separate issue from the other one, and I don't have any context here. Do you mean the weights array in a loss function, or something else?

Yes, it's the weights array in a loss function. Here is the code:

final INDArray weightsArray = Nd4j.create(new float[]{0.57f, 0.75f});
final MultiLayerConfiguration configuration = new NeuralNetConfiguration.Builder()
        .weightInit(WeightInit.RELU_UNIFORM)
        .updater(new Adam(0.015D))
        .list()
        .layer(new DenseLayer.Builder().nIn(11).nOut(6).activation(Activation.RELU).dropOut(0.9).build())
        .layer(new DenseLayer.Builder().nIn(6).nOut(6).activation(Activation.RELU).dropOut(0.9).build())
        .layer(new DenseLayer.Builder().nIn(6).nOut(4).activation(Activation.RELU).dropOut(0.9).build())
        .layer(new OutputLayer.Builder(new LossMCXENT(weightsArray)).nIn(4).nOut(2).activation(Activation.SOFTMAX).build())
        .build();

I only had to change double to float to solve the issue I mentioned before.
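
(As a follow-up for readers: the configuration above is typically wrapped in a MultiLayerNetwork before training. A minimal hedged sketch; the data iterator is left out because it isn't shown in the thread:)

import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

MultiLayerNetwork model = new MultiLayerNetwork(configuration);
model.init();
// model.fit(trainingIterator); // hypothetical DataSetIterator, not part of the original post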

@AlexDBlack (Contributor) commented Nov 22, 2019

I only had to change double to float to solve the issue I mentioned before.

OK, so it is a loss weights array. Yes, creating it as float (or casting to float) is a legitimate solution.
I think we should handle that internally to make it easier for users; i.e., both float and double should just work for the weights array.
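
A rough sketch of what that internal handling could look like (the helper name alignWeights is hypothetical; the actual fix in KonduitAI/deeplearning4j#75 may differ):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class LossWeightsCastSketch {
    // Hypothetical helper: return the weights array in the dtype of the
    // activations, casting only when the types differ.
    static INDArray alignWeights(INDArray weights, INDArray preOutput) {
        return weights.dataType() == preOutput.dataType()
                ? weights
                : weights.castTo(preOutput.dataType());
    }

    public static void main(String[] args) {
        INDArray weights = Nd4j.create(new double[]{0.57, 0.75});  // DOUBLE
        INDArray preOutput = Nd4j.create(new float[]{0.1f, 0.9f}); // FLOAT
        System.out.println(alignWeights(weights, preOutput).dataType()); // prints FLOAT
    }
}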

@AlexDBlack AlexDBlack changed the title Class Not Found Histogram DL4J: Loss function weights array should be automatically cast to appropriate datatype Nov 22, 2019
@AlexDBlack AlexDBlack added this to the 1.0.0-beta6 milestone Nov 22, 2019
@AlexDBlack AlexDBlack self-assigned this Nov 22, 2019
AlexDBlack added a commit to KonduitAI/deeplearning4j that referenced this issue Nov 23, 2019
AlexDBlack added a commit to KonduitAI/deeplearning4j that referenced this issue Nov 23, 2019

* eclipse#8431 Cast loss function weights array automatically
* Add 'regex verbose mode' printing (ExecDebugListener) for TFGraphTestAllSameDiff
* Class import mapping fix
* Reshape fixes
* Don't swallow first exception in NativeOpExecutioner.exec(CustomOp)