Out of Memory Error for 4000x4000x2096 VOLUME #77
Hello. Thanks for posting this issue. The answers to your questions are listed below.
Good luck and let me know if you have any further questions!
Hello Kyle, here is the code:

```python
# Set scanner geometry
numAngles = 361
# Set reconstruction volume
#leapct.set_default_volume()
leapct.print_parameters()
# Read data
#dataPath = os.path.abspath(os.path.dirname('D:\\XRAY\\6DIMLEAP\\'))
# Convert to attenuation
ROI = np.array([5, 55, 5, 55])
# Estimate detector tilt
# Perform detector tilt
leapct.convert_to_modularbeam()
# Reconstruct with FBP
#f = leapct.allocate_volume()
g = leapct.cropProjections([1000, 1150], None, g)
# "Simulate" projection data
startTime = time.time()
# Reset the volume array to zero, otherwise iterative reconstruction
# algorithms will start their iterations with the true result, which is cheating
x, y, z = leapct.voxelSamples(True)
f = leapct.FBP(g)
leapct.display(f)
```
I tried to use this for denoising: https://github.com/LLNL/LEAP/blob/main/demo_leapctype/d33_reducingConeBeamArtifacts.py
Oh, sorry, you were using the wrong part of that script. Please read the comments in that demo script file. See the section under the statement `elif whichDemo == 2`? That is the part you need, because it steps through the volume one chunk at a time. You will have to insert code that saves each chunk to a file so you can view it at the end. For the denoising stuff, do not use the d33 script; that is for something else. Look at the script d29_filter_sequence.py for a demonstration of doing iterative reconstruction for denoising purposes. Of course, you probably don't have enough memory for this anyway. If I were you, I'd do this: first reconstruct your whole volume as described above and save it to disk; then load your volume one chunk at a time and perform denoising on it. I recommend Total Variation or the Guided Filter. See the filterSequence part of the docs for an explanation. Your GPU card is not an issue; LEAP will automatically split up large data to fit into your GPU. Your real issue is that you don't have enough CPU memory for these operations. Iterative reconstruction requires a HUGE amount of memory.
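[Editor's sketch, not from the thread] A minimal illustration of that load-a-chunk/denoise/save recipe, assuming the FBP result was saved one TIFF per slice and that LEAP's `diffuse` routine (its TV denoiser) accepts a bare numpy volume; the file pattern, chunk size, and delta value are placeholders to be adapted:

```python
import glob
import numpy as np
import imageio.v2 as imageio
from leapctype import tomographicModels

leapct = tomographicModels()

files = sorted(glob.glob('recon/slice_*.tif'))  # hypothetical location of the FBP slices
chunkSize = 64                                  # pick so one chunk fits comfortably in RAM
for i in range(0, len(files), chunkSize):
    chunkFiles = files[i:i + chunkSize]
    # load this chunk of slices into a (numZ, numY, numX) float32 volume
    f_chunk = np.stack([imageio.imread(fn) for fn in chunkFiles]).astype(np.float32)
    # TV denoising; delta controls edge preservation and must be tuned to the data
    leapct.diffuse(f_chunk, 0.02, 10)
    # overwrite the slice files with their denoised versions
    for fn, s in zip(chunkFiles, f_chunk):
        imageio.imwrite(fn, s)
```

One practical wrinkle with this approach: a 3D filter applied to non-overlapping chunks can leave visible seams at chunk boundaries, so it may be worth overlapping chunks by a few slices and discarding the overlap after filtering.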
OK, got your point. I read a few of the LEAP docs and am working on it...
One copy of your volume is about 128 GB, so I don't think 128 GB of RAM will be sufficient. I'd recommend at least 256 GB of RAM. I'm not sure about the whole virtual memory thing; I never use it because it is likely horribly inefficient for tomography.
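For reference, the arithmetic behind that figure: 4000 × 4000 × 2096 voxels × 4 bytes per voxel (float32) ≈ 134 GB, which is the 125 GiB that numpy reports in the allocation error from the original post. Iterative algorithms typically hold several such arrays at once (the volume, gradients, and the projection data), hence the much larger RAM recommendation.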
Kyle, RWLS code:

```python
filters = filterSequence(1.0e0)
leapct.RWLS(g, f, 10, filters, None, 'SQS')
```

ASDPOCS:

```python
filters = filterSequence()
filters.append(TV(leapct, delta=0.01/100.0, p=1.0))
leapct.ASDPOCS(g, f, 3, 10, 10, filters)
```

Now I have a very big problem... I modified that chunk thing. Second iteration: ...... Final error: ... Here is the code:

```python
# Set scanner geometry
numAngles = 361
# Set reconstruction volume
#leapct.set_default_volume()
leapct.print_parameters()
# Read data
dataPath = os.path.abspath(os.path.dirname('D:\\XRAY\\6DIMLEAP\\'))
# Convert to attenuation
ROI = np.array([5, 55, 5, 55])
# Perform detector tilt
leapct.convert_to_modularbeam()
z = leapct.z_samples()
leapct_chunk = tomographicModels()
for n in range(numChunks):
```
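[Editor's note] The paste above cuts off at the loop header. Purely as an illustration of the d33-style pattern (not the user's actual code), the body of such a loop might look like the following; the geometry hand-off to `leapct_chunk`, the row range, and the output path are placeholders:

```python
numChunks = 8                              # illustrative
chunkSize = leapct.get_numZ() // numChunks
for n in range(numChunks):
    sliceStart = n * chunkSize
    # ... copy the full-volume geometry into leapct_chunk here and
    #     restrict it in z to this chunk (elided in the original paste) ...
    leapct_chunk.set_numZ(chunkSize)
    # a real implementation crops to just the detector rows that these
    # slices project onto; all rows are kept here for simplicity
    rowRange = [0, leapct.get_numRows() - 1]
    g_chunk = leapct_chunk.cropProjections(rowRange, None, g)
    f_chunk = leapct_chunk.FBP(g_chunk)
    leapct_chunk.save_volume('chunk_output.tif', f_chunk, sliceStart)  # placeholder path
```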
One strange thing.. that sample works well with the simulated phantom. I tried to increase numCol from 512 to 1024 and memory usage is the same after every iteration. Somehow it's not working with TIFF files, or at least with my TIFF files.
RWLS and ASDPOCS have regularization parameters that need to be tuned for your specific data. Thus, one cannot say that one algorithm works and another does not until these parameters are tuned. FBP is great because it gives predictable results and is an excellent algorithm for a non-expert, but iterative reconstruction requires one to understand how the algorithms work in order to get the best performance. In addition, one cannot simply reconstruct a subset of cone-beam slices without correcting for the out-of-bounds slices. I am building automated routines for this, but they are not ready yet. Until these are ready or you figure out how to do this yourself, I would not do iterative reconstruction if I were you. Instead, I would try post-process denoising. Some good options are the ones recommended earlier: Total Variation and the Guided Filter.
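[Editor's sketch] To make "tuned" concrete, the sweep below runs LEAP's TV diffusion at several delta values on a small synthetic volume and scores each against the clean reference; the delta values are illustrative only, and on real data one would instead eyeball a representative sub-volume:

```python
import numpy as np
from leapctype import tomographicModels

leapct = tomographicModels()

# small synthetic test volume: a bright cube plus noise, just to exercise the filter
f_clean = np.zeros((64, 128, 128), dtype=np.float32)
f_clean[16:48, 32:96, 32:96] = 1.0
f_noisy = f_clean + np.random.normal(0.0, 0.1, f_clean.shape).astype(np.float32)

for delta in [1.0e-4, 1.0e-3, 1.0e-2, 1.0e-1]:
    f_test = f_noisy.copy()
    leapct.diffuse(f_test, delta, 10)  # 10 TV diffusion iterations
    rmse = float(np.sqrt(np.mean((f_test - f_clean) ** 2)))
    print('delta = %8.1e   RMSE = %.4f' % (delta, rmse))
```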
Hello Kyle... that problem is solved; I mean with FBP there are no double edges. This is the result of FBP only... my main problem of double edges is solved... Thank you very much!
Everything sorted out. Total Variation worked. I have one weird problem, as usual: intensity changes between slices. Here is the screenshot; you can see a few slices are dark and a few are bright, which creates problems in segmentation, and it happens randomly. Here is the code; am I doing anything wrong?

```python
import sys

# Set scanner geometry
numAngles = 361
#leapct.set_default_volume()
leapct.print_parameters()
# Read data
dataPath = os.path.abspath(os.path.dirname('D:\\XData\\6DIMLEAP\\'))
# Convert to attenuation
ROI = np.array([5, 55, 5, 55])
makeAttenuationRadiographs(leapct, g, None, None, ROI)
# Perform detector tilt
leapct.convert_to_modularbeam()
z = leapct.z_samples()
leapct_chunk = tomographicModels()
n = 0
output_dir = "I:\\TEMP\\"
sliceStart = n*chunkSize
leapct_chunk.set_numZ(numZ)
g_chunk = leapct_chunk.cropProjections(rowRange, None, g)
leapct_chunk.print_parameters()
f_chunk = leapct_chunk.FBP(g_chunk)
cv2.imwrite("C:\\asdfasdf.tif", normalized_image)
print("BEFORE FILTER")
print("SAVING VOLUME")
leapct_chunk.save_volume(output_file, f_chunk, sliceStart)
```
Are you sure the intensity is changing? This may just be an artifact of how Windows shows these images in their file explorer. Often these images are displayed from their min to max value and this can certainly change, slice-by-slice, but the average intensity should be very continuous from slice-to-slice. Can you bring these slice images into a 3D viewer to verify?
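[Editor's sketch] One lightweight way to verify this claim (the file pattern is a placeholder): plot each saved slice's mean value; if the curve is smooth, the apparent flicker is just the viewer's per-image min/max scaling.

```python
import glob
import numpy as np
import imageio.v2 as imageio
import matplotlib.pyplot as plt

files = sorted(glob.glob('I:/TEMP/slice_*.tif'))  # placeholder pattern for the saved slices
means = [float(np.mean(imageio.imread(fn))) for fn in files]

plt.plot(means)
plt.xlabel('slice index')
plt.ylabel('mean slice intensity')
plt.title('mean should vary smoothly from slice to slice')
plt.show()
```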
That's really weird; I have no idea why that would happen. I am traveling and have limited computer access, so I will try this when I get back. I did try these filters with the d13 demo script and it worked fine. A few comments. It looks like you're using the default value for "delta" in the TV denoising (diffuse). Did you optimize this value for your data?
Thanks, |
There is a good description of the effect of the delta parameter in the documentation of the TV filter sequence.
@elliotaldarson1717 so sorry for taking so long to get back to you. I looked at your script and it looks like your data has been cropped or something because the data dimensions are different. You could be getting variable brightness because you are cropping off some of the data that contains attenuation values. CT requires untruncated projections. Anyway, I took the code you have above and made some modifications and it seems to work now. I do feel that you over-smoothed your data, but that's for you to decide. Anyway, the code below should work.
Thanks Kyle.. somehow that out-of-memory issue is solved.. and the images look much better in 3D view...
Yes, I was just referring to the parameters used in the Median and TV filters.
For large projection images in tomography, I think the 2x2 binning option is best for saving time and memory, based on my experience.
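[Editor's sketch] For reference, 2x2 binning needs nothing more than numpy; LEAP may provide its own helper, and the plain-numpy version below is just illustrative. Note that after binning, numRows/numCols are halved and the detector pixel pitch doubles, so the scanner geometry must be updated to match.

```python
import numpy as np

def bin2x2(g):
    # average 2x2 pixel blocks of each projection:
    # (numAngles, numRows, numCols) -> (numAngles, numRows//2, numCols//2)
    return 0.25 * (g[:, 0::2, 0::2] + g[:, 1::2, 0::2] +
                   g[:, 0::2, 1::2] + g[:, 1::2, 1::2])

g = np.random.rand(4, 64, 64).astype(np.float32)  # toy projection stack
g_binned = bin2x2(g)                              # shape (4, 32, 32), 4x less memory
```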
@hws203 , @kylechampley ... Any suggestions on improving the projection images for the kind of noise below (e.g. low dose vs. high dose, more frame averaging; this data was taken on an old 12-bit detector)? It prevents me from doing automatic segmentation.
It's not good to flood that discussion thread with problem solutions, so I'm asking here:

```python
def save_data(self, fileName, x, T=1.0, offset_0=0.0, offset_1=0.0, offset_2=0.0, sequence_offset=0):
```
Hello Kyle,
Thanks for such a wonderful toolkit. I have 3 questions:
1. leapct.set_default_volume(4.0) is a kind of 4x4 binning. I made it 1.0 and got the following error:
```
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 125. GiB for an array with shape (2096, 4000, 4000) and data type float32
```
My setup is a Core i9 (11th gen), 32 GB RAM, and an RTX 4090 GPU. Do I need to install 128 GB of RAM, or is there another workaround?
2. Is it possible to preview a selected slice?