
Building from source #8

Closed

pilipovicr opened this issue Jun 22, 2020 · 6 comments
@pilipovicr

Greetings,

I tried to build FakeApproxConv2D from the source files in Singularity and Docker containers based on the tensorflow/tensorflow:2.1.0-gpu-py3 image. In both cases the build passes, but I get low accuracy when classifying the MNIST dataset with the script examples/examples/fake_approx_eval.py. On the other hand, if I use the prebuilt Singularity container, everything works well. Unfortunately, cmake isn't installed in the published Singularity container.

@FilipVaverka
Collaborator

Hi,

could you provide us with the CUDA version and the GPU you used?
You could also try to build the whole container, assuming you are able to get the project configured on the host machine (CUDA/GPU is not required for this). However, the host machine needs to have Docker and Singularity installed, and you have to be able to use "sudo".

To do this, you would take the same approach as if you were building from source directly on the host machine, up to the "cmake .." step. Then, with the project configured, you should be able to run "make tf-approximate-gpu-container" instead of the usual "make".

This should pull the "tensorflow/tensorflow:latest-gpu-py3" Docker image, install the dependencies in it, and build FakeApproxConv2D. Singularity is then used to pull a clean "tensorflow/tensorflow:latest-gpu-py3" image once more, and only the resulting binaries are added to it for the release (this is exactly the same process we use to build the container).
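
Concretely, the flow looks roughly like this (the repository URL and build-directory name here are illustrative assumptions, not taken from this thread):

```bash
# Rough sketch of the container build flow; the repository URL and
# directory names are illustrative assumptions.
git clone https://github.com/ehw-fit/tf-approximate.git
cd tf-approximate
mkdir build && cd build
cmake ..                            # configure on the host (no CUDA/GPU needed)
make tf-approximate-gpu-container   # may require sudo for Docker/Singularity
```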

@pilipovicr
Author

pilipovicr commented Jun 23, 2020

Hi,

I built my Docker container from the tensorflow/tensorflow:2.1.0-gpu-py3 image. After that, I installed CMake and Pillow, copied the folders python/ and test/ into /opt/tf-approximate-gpu/, and set the environment variables LD_LIBRARY_PATH and PYTHONPATH as in the Singularity def file. Then I built the library libApproxGPUOpsTF.so and copied it to the /opt/tf-approximate-gpu/ folder. The build process gave me a lot of warnings; you can find them in the attached file output.txt. Finally, I tried to execute the Python scripts in the example folder. The training goes well (train_out.txt), while the evaluation gives me low classification accuracy (eval_out.txt).
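
In short, the environment setup was along these lines (a sketch following the /opt/tf-approximate-gpu layout described above; the exact values in the Singularity def file may differ):

```bash
# Sketch of the environment setup described above; exact values in the
# Singularity def file may differ.
export LD_LIBRARY_PATH=/opt/tf-approximate-gpu:$LD_LIBRARY_PATH
export PYTHONPATH=/opt/tf-approximate-gpu/python:$PYTHONPATH
```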

CUDA version: Cuda compilation tools, release 10.1, V10.1.243
GPU: GeForce GTX 1080 Ti (compute capability 6.1)

I tried to build the container using cmake, but I failed. Although I have TF installed (the CPU version), cmake does not find it on the system.
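
To check whether the TensorFlow installation is at least visible from Python, one can query tf.sysconfig (these are standard TF 2.x calls; whether the project's CMake uses exactly these paths is an assumption):

```bash
# Standard tf.sysconfig queries; whether the project's CMake looks at
# exactly these paths is an assumption.
python -c "import tensorflow as tf; print(tf.sysconfig.get_include())"
python -c "import tensorflow as tf; print(tf.sysconfig.get_lib())"
```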

Thanks for the help
Ratko

cuda.txt
eval_out.txt
train_out.txt
output.txt

@FilipVaverka
Collaborator

I tried to replicate your workflow and it seems to be working fine for me (albeit only with a GTX 950M). I would suggest running "test_table_approx_conv_2d.py" from "test" with "--device cpu:0" and with "--device gpu:0" (this also requires "libApproxGPUOpsTF.so" to be in "LD_LIBRARY_PATH"). This is perhaps the simplest test of the convolutional layer, so we eliminate as many variables as we can.
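
That is, something along these lines (assuming libApproxGPUOpsTF.so was built into the current directory; adjust the paths to your layout):

```bash
# Assumes libApproxGPUOpsTF.so sits in the current directory;
# adjust paths to your layout.
export LD_LIBRARY_PATH=$PWD:$LD_LIBRARY_PATH
python test/test_table_approx_conv_2d.py --device cpu:0
python test/test_table_approx_conv_2d.py --device gpu:0
```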

Beyond that, I will probably have to get hold of a GTX 1080 Ti and try to isolate the issue.

@pilipovicr
Author

pilipovicr commented Jun 23, 2020

I rebuilt the container, performed everything once more, and ran the test_table_approx_conv_2d.py script. I got this output:

gpu:0: Linf Error: 0.9866220355033875

With the CPU option I get around:

cpu:0: Linf Error: 2.411454147477343e-07

Regards,
Ratko

test_table_out.txt

@FilipVaverka
Collaborator

I believe I found the cause of the issue. It seems that one gets such high error values when the CUDA kernels are not compiled for the CUDA Compute Capability of the given GPU. I hadn't thought of that before, as I would have expected a hard crash (we will have to look into this).

Either way, I think you should be able to fix the issue by compiling the kernels for CUDA Compute Capability 6.1 (GTX 1080 Ti). With our build setup, you can do that by passing -DTFAPPROX_CUDA_ARCHS="61" (the "." is omitted on purpose) to cmake, or by modifying the default value of the variable directly in "src/cuda/CMakeLists.txt". When compiling for multiple GPUs, the values in TFAPPROX_CUDA_ARCHS should be separated by semicolons.
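
For example (assuming an out-of-source build directory as in the usual flow):

```bash
# Configure for compute capability 6.1 (GTX 1080 Ti); assumes an
# out-of-source build directory.
cmake .. -DTFAPPROX_CUDA_ARCHS="61"
# For multiple GPU architectures, separate values with semicolons:
# cmake .. -DTFAPPROX_CUDA_ARCHS="61;70"
make
```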

@pilipovicr
Author

It worked.

Thanks for the help.
