encountered an error while running the demo #9
Really sorry about all the issues you are experiencing. The code has changed a lot in the past few months, and our changes kept breaking other things. We think these issues should now be resolved. Please pull the newest version of the code and retry. Note that the syntax of the demos has changed and we deleted all the example data; now you just run the script and it will simulate the input data for you.
Firstly, dear developer, please accept my sincere gratitude. After downloading the latest version of LEAP and installing it with `pip install -v .`, I encountered the following error. I don't understand how to solve it, especially "CMake Error in src/CMakeLists.txt: Unknown CUDA architecture specifier 'major'." Can you tell me how to solve this?

```
Using pip 23.3.2 from /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/pip (python 3.9)
-- Generating done
!!
```
OK, try this: open src\CMakeLists.txt and comment out line 129 by adding a # at the start of the line.
Thank you for your reply; your suggestion seems to have worked. However, I have a small issue. When I run the test_project_and_FBP.py demo, it seems the latest version no longer includes a sample_data folder, so when saving data there is a small error message indicating that the path does not exist. This can be resolved by simply creating a sample_data folder.
I appreciate the feedback. I added that directory back in. Let me know if you run into any other issues.
Hello, I have some questions about the usage logic of LEAP. Are proj.leapct.set_default_volume() and proj.allocate_batch_data() necessary, or are they just for pre-allocating space? I noticed that vol_data and proj_data seem to be attributes of the Projector, so do I need to load the image into proj before using it? Do the data f_th and g_th in proj(f_th) and proj.fbp(g_th) need to be put into vol_data and proj_data every time before calling? I think these questions may be related to your INRTO.md document, but that document has not been updated recently; is its content still valid? I look forward to your answer.
You asked a lot of questions in this post. Some of them I don't quite understand, so I'll answer a few now and maybe we will understand each other better as we move through this.

First, if you want to use the NN solvers, you must allocate the space for the batch data. You don't necessarily have to call allocate_batch_data(), but if you don't, you'll have to set these member variables yourself; if you try running without allocating this space, it will return an error. Basically, any time you want to do tomography, you have to allocate space for the projection data and the reconstruction (volume) data. Pre-allocating these arrays means you don't have to allocate space on each iteration.

Next, let's talk about the set_default_volume() function. This function is just for convenience. It does not allocate any memory; all it does is tell LEAP how to define the reconstruction volume. It defines the "default volume", which is the volume that fills the field of view of your data and uses the native voxel sizes. You can also do this yourself with the set_volume(...) command, where you tell LEAP how many voxels are in each dimension, the voxel sizes (in mm), and whether you want the volume shifted from the origin. We do recommend the set_default_volume() command, as it is usually what most people want and the code runs most efficiently, but you are free to define the volume how you want. You do need to define the volume parameters before you allocate memory for it.

I'll answer the remainder of your questions after you reply to this post. I'd also like to mention that all of this is for doing reconstruction with neural networks. If you just want to do tomographic reconstruction, you should go directly through the tomographicModels class, which is in leapctype.py.
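The setup order described above can be sketched as follows. The sizes here are made-up illustrations, and the LEAP calls (set_default_volume, allocate_batch_data) appear only in comments, since their exact signatures aren't confirmed in this thread; the NumPy allocations just show the pattern.

```python
import numpy as np

# Illustrative sketch of the setup order described above (sizes are assumptions).
numAngles, numRows, numCols = 180, 1, 256

# 1) Define the CT geometry and volume first (e.g. via set_default_volume()),
#    because the array dimensions below follow from those parameters.
numX, numY, numZ = numCols, numCols, numRows

# 2) Then pre-allocate the projection and volume arrays once, up front
#    (this is what allocate_batch_data() does for the NN solvers), so that
#    no re-allocation happens inside the iteration loop.
g = np.zeros((numAngles, numRows, numCols), dtype=np.float32)  # projection data
f = np.zeros((numZ, numY, numX), dtype=np.float32)             # volume data
print(g.shape, f.shape)
```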
First I'll address the issue with the brain CT reconstruction. You are getting a bogus answer because you specified your image size as 384 x 384, but you provided an image of size 256 x 256. Note that set_default_volume() sets the size of your reconstruction, so since you wanted 256 x 256, you should have done something like this instead:
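As a hedged illustration of the size mismatch being described (the array names here are hypothetical, and the LEAP call is left as a comment because its exact signature is an assumption):

```python
import numpy as np

# A volume defined as 384 x 384 cannot hold a 256 x 256 image directly:
vol_shape = (384, 384)                          # what the 384-column default implied
image = np.zeros((256, 256), dtype=np.float32)  # the image actually provided
print(image.shape == vol_shape)                 # False: the sizes must agree
# The fix is to define the volume as 256 x 256 to match the image,
# e.g. via something like set_volume(256, 256, ...) rather than the default.
```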
The Projector class should be viewed as a torch.nn.Module that performs forward and back projections. You shouldn't have to worry about proj_data and vol_data; they are just internal data arrays used for the calculations. You do not need to fill them with any values, and they should be viewed as private member variables. You provide the inputs and the Projector class will generate the outputs. Yes, the values of the data (either projections or volume data) are constantly changing; what is static is the set of parameters that specify the CT geometry and CT volume, including the data dimension sizes. Did you see the demo scripts? In particular, one of them provides a nice usage example.
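The design described above (caller supplies inputs, gets outputs, and never touches the internal buffers) can be illustrated with a minimal stand-in class. This is an analogy only, not LEAP's actual implementation:

```python
class ProjectorLike:
    """Minimal stand-in for the pattern described above; NOT LEAP's actual code.
    The internal buffer plays the role of proj_data / vol_data: private scratch
    space that the caller never fills or reads directly."""
    def __init__(self, n):
        self._buf = [0.0] * n          # internal, pre-allocated once
    def forward(self, x):
        for i, v in enumerate(x):      # stand-in for the forward projection
            self._buf[i] = 2.0 * v
        return list(self._buf)         # caller only sees inputs and outputs

p = ProjectorLike(3)
print(p.forward([1.0, 2.0, 3.0]))      # [2.0, 4.0, 6.0]
```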
Even if I change numCols to 256, or change proj.leapct.set_default_volume() to proj.leapct.set_volume(256, 256, 1, pixelSize, pixelSize), the resulting image is incorrect (the sino_slice3 obtained after proj(f_th) is also wrong; its values are all inf or nan). When I set numCols to 256, sino_slice3.max() after proj(f_th) is inf, and when I set numCols to 384, sino_slice3.min() after proj(f_th) is nan. I am puzzled as to why this error occurs.
I don't have your input image, so I made one that is just a square. The following code works for me:

```python
import numpy as np
import torch
from leaptorch import Projector

proj = Projector(forward_project=True, use_static=True, use_gpu=True,
                 gpu_device=torch.device('cuda:0'), batch_size=1)
image = np.zeros((256, 256), dtype=np.float32)
f_th = torch.from_numpy(image).unsqueeze(0).unsqueeze(0).to(torch.device('cuda'))
```
I found out what the problem was. There was an error when my input image was 64-bit; changing it to 32-bit solved the problem. Thank you so much!!!
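The dtype issue mentioned above can be checked and fixed in one line with NumPy; a minimal sketch:

```python
import numpy as np

# NumPy creates float64 arrays by default, which is the 64-bit input
# that produced the inf/nan results described above.
image = np.random.rand(256, 256)        # dtype is float64 by default
assert image.dtype == np.float64
image32 = image.astype(np.float32)      # convert before handing it to the projector
print(image32.dtype)                    # float32
```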
Sure, no problem. One more thing: I'm not sure you actually want to specify the voxel size as 1 mm. Doing this would mean that 128 columns of your projection data would never see anything. Do you want to make your voxels bigger so that they fill the field of view? If so, you could do this:
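The reasoning above can be made concrete with a little arithmetic. The numbers are taken from the thread (384 detector columns, 256 voxels across, 1 mm pixels); the scale computation is an illustration of the geometry, not a specific LEAP API:

```python
# If the detector has 384 columns of width pixelSize, but the volume spans only
# 256 voxels at that same size, 128 columns never intersect the volume.
pixelSize = 1.0                       # mm, as in the thread
numCols, numX = 384, 256

fov_width = numCols * pixelSize       # width the projections actually cover
vol_width = numX * pixelSize          # width of the reconstruction volume
print(fov_width - vol_width)          # 128.0 mm of field of view goes unused

# Enlarging the voxels so that 256 of them span the full field of view:
voxelSize = fov_width / numX
print(voxelSize)                      # 1.5 mm
```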
You are right, thank you for the reminder; this improvement has helped me a lot!
Merry Christmas!
After installation, I encountered an error while running the code /LEAP/demo_leaptorch/test_fproject_and_FBP.py. How should I handle this?
```
Traceback (most recent call last):
  File "/data4/liqiaoxin/.pycharm_helpers/pydev/pydevconsole.py", line 364, in runcode
    coro = func()
  File "", line 1, in <module>
  File "/data4/liqiaoxin/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 198, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/data4/liqiaoxin/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/data4/liqiaoxin/code/LEAP/demo_leaptorch/test_fproject_and_FBP.py", line 25, in <module>
    from leaptorch import Projector
  File "/home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/leaptorch.py", line 13, in <module>
    lct = tomographicModels()
  File "/home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/leapctype.py", line 78, in __init__
    self.libprojectors = cdll.LoadLibrary(os.path.join(current_dir, "../build/lib/libleap.so"))
  File "/home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/ctypes/__init__.py", line 460, in LoadLibrary
    return self._dlltype(name)
  File "/home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/ctypes/__init__.py", line 382, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages/../build/lib/libleap.so: cannot open shared object file: No such file or directory
```
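The OSError above can be understood by resolving the path that leapctype.py tries to load; a small diagnostic sketch using only the paths shown in the traceback:

```python
import os

# Resolve the library path exactly as the traceback shows it being built:
current_dir = "/home/liqiaoxin/anaconda3/envs/pytorch/lib/python3.9/site-packages"
lib_path = os.path.normpath(os.path.join(current_dir, "../build/lib/libleap.so"))
print(lib_path)
# The "../build/lib" component makes the loader look *outside* site-packages,
# so unless a build tree exists at the resolved location, libleap.so is missing.
# Rebuilding/reinstalling LEAP so the .so ends up where the loader looks
# should resolve the "cannot open shared object file" error.
```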