Has something changed in the last commits? #326
Comments
TBB support included massive changes related to threading, though the configuration you describe is covered by our validation. Could you please run the MKL-DNN tests and see whether any fail in your configuration?
Hi, I get errors in some convolution unit tests: 1>------ Build started: Project: RUN_TESTS, Configuration: Release x64 ------
@zeno40, could you please dump the cmake output?
cmake output:
CMake Deprecation Warning at CMakeLists.txt:22 (cmake_policy): The cmake-policies(7) manual explains that the OLD behaviors of all
Selecting Windows SDK version 10.0.17134.0 to target Windows 10.0.17763.
Hardware: Intel Haswell (Devil's Canyon)
For CPU: how many cores do you have? I still cannot reproduce the issue on my side...
I found the culprit! I was compiling the mkldnn project with /std:c++latest instead of the default value.
:) great, thx for the update!
For your information: the same happens when compiling mkl-dnn with the /permissive- conformance mode and the default C++ Language Standard.
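Summing up the thread so far, the failure appears to be tied to the MSVC language-mode flags rather than to MKL-DNN itself. A hedged sketch of the configurations reported above (the flag names are the stock MSVC ones; the exact project settings may differ):

```
# Reported to break MKL-DNN unit tests / inference results:
cl /std:c++latest ...   # non-default language standard
cl /permissive- ...     # conformance mode with the default standard

# Reported to work:
cl ...                  # default MSVC language mode
```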
Hi,
I'm using mkl-dnn in my convolution layer for all strided and kernel 1x1 convolutions.
Everything was working three days ago. After committing the latest mkl-dnn changes (with support for Intel TBB) to my repo and rebuilding my code as I always do, all my trained models run as before, except that the trained weights no longer behave as trained: the models act like untrained ones at test time. Nothing in the non-mkl-dnn code has changed that would explain this strange behaviour.
In all the convolutions I use a fixed nchw input/output format and oihw format for the weights.
Is it possible that something changed in the behaviour of mkl-dnn when using convolutions this way?
thanks
Environment
built with OpenMP and linked against libiomp5md.lib;mklml.lib
excluding vcomp.lib