CPU kernel acceleration #15
We found that it is due to the two parameters act_integer_bits and act_fraction_bits, and we use half-precision training. When act_integer_bits and act_fraction_bits are set to 8 bits the result is right, but setting them to 16 bits is wrong. What is the reason? Thank you!
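A plausible cause (an assumption on my part, not confirmed in this thread): half-precision floats have only a 10-bit mantissa, so a fixed-point grid with 16 fractional bits cannot survive storage in fp16, while an 8-bit grid mostly can. The sketch below is a generic fixed-point quantizer written for illustration; `quantize_fixed_point` is a hypothetical name, not DeepShift's actual function.

```python
import numpy as np

def quantize_fixed_point(x, integer_bits, fraction_bits, dtype=np.float16):
    # Generic fixed-point quantization sketch (not the DeepShift code):
    # round to the nearest multiple of 2**-fraction_bits, clamp to the
    # representable range, then store in the given float dtype.
    scale = 2.0 ** fraction_bits
    max_val = 2.0 ** integer_bits - 1.0 / scale
    q = np.round(np.asarray(x, dtype=np.float64) * scale) / scale
    q = np.clip(q, -max_val, max_val)
    return q.astype(dtype)

x = 0.123456789
# With 8 fractional bits the quantized value (here 32/256 = 0.125) is
# exactly representable in fp16; with 16 fractional bits (8091/65536),
# the fp16 cast rounds it again and the extra precision is lost.
print(quantize_fixed_point(x, 8, 8))
print(quantize_fixed_point(x, 8, 16))
```

If this is the cause, keeping the quantized activations in fp32 (or reducing fraction_bits) would avoid the second rounding.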
Sorry for the delay. Let me look into the code.
When we run the `sh install_kernels.sh` command, an error occurs:
(deepshift) ubuntu@ubuntu-NF5280M5:~/zj/DeepShift/pytorch$ sh install_kernels.sh Installed /home/ubuntu/anaconda3/envs/deepshift/lib/python3.6/site-packages/deepshift_cpu-0.0.0-py3.6-linux-x86_64.egg
Our path is /usr/local/cuda-11.1/bin/nccc. What should we do?
Try to run:
and see if it works.
I would like to clarify that:
We ran the command, but it gives the same error as before. Can the kernel of this project only support CUDA 10.0? What can we do? Thank you very much.
We solved it by running:
This is great. Please don't hesitate to ask if you have further questions.
We encountered a new error when running the command `sh install_kernels.sh`: Installed /home/ubuntu/anaconda3/envs/yolov3_new/lib/python3.6/site-packages/deepshift_cpu-0.0.0-py3.6-linux-x86_64.egg
Does your code only support specific CUDA versions? The CUDA version we use is 11.1. What can we do to make it work?
I think the error is due to a mismatch between your PyTorch version and CUDA version (rather than my code). I checked the PyTorch website ( https://pytorch.org/get-started/previous-versions/ ) and found an installation command for CUDA 11.1. So I suggest starting a new conda environment and installing PyTorch using that command:
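The exact command referenced above is not preserved in this thread; for reference, a CUDA 11.1 pip command of the kind listed on that page looks like the following (the specific versions are my assumption, so check the page for the build matching your setup):

```shell
# Example only: PyTorch 1.8.0 built against CUDA 11.1, from the
# previous-versions page. Verify the versions before installing.
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 \
    -f https://download.pytorch.org/whl/torch_stable.html
```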
Thank you very much for your solution. The kernel is installed correctly.
Because a single shift leads to some accuracy loss, we want to shift twice to reduce it. For example: 10 = 8 + 2 (a shift by 3 bits plus a shift by 1 bit). Therefore, we modified `get_shift_and_sign(x, rounding='deterministic')` and `round_power_of_2(x, rounding='deterministic')` so that they return two shifts: `shift1, shift2, sign = get_shift_and_sign(x, rounding)`. However, the input to `Conv2dShiftQ(_ConvNdShiftQ)` then becomes NaN, which we believe is caused by data overflow. Can you give some suggestions to solve it? Thank you very much.
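The two-shift idea can be sketched as follows. This is a minimal scalar sketch with hypothetical names, not the actual modified DeepShift code (which operates on tensors and supports rounding modes):

```python
import math

def get_two_shifts_and_sign(x):
    # Hypothetical helper: approximate |x| by a sum of two powers of two,
    # e.g. 10 = 2**3 + 2**1, returning both shift amounts and the sign.
    sign = -1 if x < 0 else 1
    a = abs(x)
    if a == 0:
        return 0, None, sign            # no meaningful shift for zero
    s1 = math.floor(math.log2(a))       # largest power of two <= |x|
    r = a - 2.0 ** s1                   # residual after the first term
    s2 = math.floor(math.log2(r)) if r > 0 else None
    return s1, s2, sign

def round_two_powers(x):
    # Reconstruct the approximation sign * (2**s1 + 2**s2).
    if x == 0:
        return 0.0
    s1, s2, sign = get_two_shifts_and_sign(x)
    approx = 2.0 ** s1 + (2.0 ** s2 if s2 is not None else 0.0)
    return sign * approx

print(round_two_powers(10))  # 2**3 + 2**1
```

One thing to check for the NaN: for residuals close to zero, `log2(r)` produces very large negative shifts, and with your second decomposition those extremes can overflow or produce inf/NaN in half precision, so clamping both shifts to a bounded range is worth trying (this is a suggestion, not a confirmed fix).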
Thanks @mengjingyouling. I will close this issue and have started a new issue #16 to discuss the other question.
@mengjingyouling Hi, I recently tried to install the shift kernels with torch 1.10.0 and CUDA 11.1 but failed to compile. I wonder whether you ever successfully compiled the shift kernel under the same torch and CUDA versions? I would appreciate any guidance, thanks.
A CPU kernel was implemented in the project. We want to know which CPUs can support it, and what the acceleration efficiency is.
Thank you very much.
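For context on what the CPU kernel exploits (a minimal sketch of the general idea, not the project's actual C++ kernel code): any CPU with integer shift instructions, i.e. essentially all of them, can replace multiplication by a power-of-two weight with a bit shift. The actual speedup depends on the hardware and implementation and is not stated in this thread.

```python
def shift_mul(x_q, shift, sign):
    # Multiply the integer x_q by sign * 2**shift using only a bit shift
    # and a negation -- the core trick behind shift-based kernels.
    y = x_q << shift if shift >= 0 else x_q >> (-shift)
    return -y if sign < 0 else y

print(shift_mul(5, 3, 1))    # 5 * 2**3 = 40
print(shift_mul(5, -1, -1))  # -(5 >> 1) = -2
```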