Gamma Index multiprocessing #72
Hi @jasqs, I certainly could do just that for you if you would like. You mention a CLI; what do you envision the output of the CLI looking like? Would it be just a pass rate, or would you be after a full 3D grid saved in an accessible file format? Also, to help me: if you were to design it yourself, what inputs would you use? I'm not asking you to design it, but if you describe how you might envisage using it, that would help me make sure whatever is made meets your need. Cheers, Simon |
Actually sorry, it's been a long day on my end. I completely misread your question. Yes, I could implement multiprocessing if you would like. As a question, do you happen to have an NVIDIA card? I could also potentially implement CUDA acceleration, which would drastically speed things up. Also, the gamma function in its current form can already be made relatively fast just through parameter choice. @Centrus007 you've done quite a bit of work using this; would you be able to give a little guidance on parameters that can be used for a fast calculation? Cheers, Simon |
Hi, thank you for the quick response. Actually I have two NVIDIA Titan X GPUs available in my server. We normally use them for Monte Carlo calculations. The implementation of the gamma index on GPU would certainly solve our problem with time performance (as it did with Monte Carlo). Would it be too much effort to prepare such a version for us? A version with multiprocessing would also be helpful, as our server has 40 cores. Jan Gajewski |
I'm actually on holidays at the moment, so I likely won't get a chance to
look at it for at least two weeks. But after that I'll see if I can
implement both multiprocessing and GPU acceleration.
I did see you had a few papers on a GPU-accelerated TPS. Figured a GPU
accelerated gamma might be helpful for you :).
I had a quick squiz at some of your work. It looks quite neat. Might you be
interested in potentially contributing something to pymedphys? We'd love to
have you and/or someone from your team feeding something in that you are
allowed to share under open source licenses. No pressure though :).
I'll keep you posted regarding the GPU acceleration and multiprocessing.
Cheers,
Simon
|
For any code you submit that also has a published paper, we can make sure
the paper reference is placed at the top of the documentation for that
module. Not sure whether that helps justify releasing code publicly to
your higher-ups or not.
|
Hi @jasqs, I visited Cracow in June last year! It is a beautiful city. I didn't know you had a proton therapy centre there! I'd have visited it if I'd known. We're getting one soon here in Adelaide, Australia. :) Until multiprocessing is implemented, here are some things you can try to improve gamma calculation times:
|
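The kinds of speed levers discussed here can be illustrated with a toy brute-force 1D gamma. Note this is a sketch, not the pymedphys implementation: the `max_gamma` and `subset` parameters below are assumptions modelled loosely on options of that style (capping the gamma search and evaluating only a random subset of reference points).

```python
import numpy as np

def toy_gamma(ref_x, ref_dose, eval_x, eval_dose,
              dose_tol=0.03, dist_tol=3.0, max_gamma=None, subset=None,
              rng=None):
    """Brute-force 1D gamma: for each reference point, minimise the
    combined dose-difference/distance metric over all evaluation points."""
    idx = np.arange(ref_x.size)
    if subset is not None:
        # random-subset speed-up: only evaluate a sample of reference points
        rng = rng or np.random.default_rng(0)
        idx = rng.choice(idx, size=subset, replace=False)
    gamma = np.full(ref_x.size, np.nan)
    norm = ref_dose.max()  # global normalisation
    for i in idx:
        dd = (eval_dose - ref_dose[i]) / (dose_tol * norm)
        dx = (eval_x - ref_x[i]) / dist_tol
        g = np.sqrt(dd ** 2 + dx ** 2).min()
        # capping mimics a max_gamma option: values above the cap carry no
        # extra information, so a real search can stop early there
        gamma[i] = min(g, max_gamma) if max_gamma else g
    return gamma
```

With identical reference and evaluation distributions every point scores gamma of zero; the cap and the subset each shrink the work done without changing the pass/fail character of the result.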
Could have that just been fluctuations due to the use of |
Thank you @Centrus007 and @SimonBiggs for the comments. @Centrus007, we have had a proton centre in operation since 2016 (more than 100 patients treated -> CCB). If you visit Krakow someday, just let me know and you are invited. What solution (company) are you going to have in Adelaide? In Krakow we have an IBA cyclotron and two dedicated gantries with scanning, as well as a self-made eye treatment room. I am working in a research group on two projects. One of them is dedicated to using fast MC simulations (FRED, a GPU-based MC tool -> paper) in clinical routine. I will test the random voxel subset technique and let you know the results. @SimonBiggs, I would be very interested in contributing to pymedphys. Right now my code is used mostly by me and my colleagues. It is not commented properly and probably many things in there could be written much better, as I am not an expert in python (yet). For the last few years I have been writing mostly in Matlab. Now I have moved all my work to python, as it is more convenient to share with our external partners. In my repository you can find mostly functions for dose evaluation, scanning spot analysis, Bragg peak analysis (including Bortfeld fit), DVH analysis, and reading/writing files specific to our FRED MC tool and dicom. If you think any of this could be of interest to you, I can share it and modify/comment it as needed. |
@jasqs, most of what you've described there would be most welcome within pymedphys if you would be okay with it being in there. Don't stress too much about the coding style; if you just bring one function in, we can bounce back and forth, iterating together until we're all happy with what is being merged in. I find I learn most effectively in processes like that, having people read my code while providing feedback and guidance. I would recommend starting with one small function from one of those sets (maybe the Bragg peak analysis tools), submitting a pull request, and trying to follow the style of the functions you already see within pymedphys. Ask questions, and we can code a bit with you on your first steps to help out. @Centrus007 can you add @jasqs to the repo and give him membership rights? And maybe add a module for @jasqs to begin writing in? @jasqs do you have a preference for the name of the module that you will begin writing in? |
@jasqs, I haven't personally had a lot of involvement with the project; but a good friend of mine, Scott Penfold (https://researchers.adelaide.edu.au/profile/scott.penfold), is front-running the project from a physics perspective. I actually spoke with him a little today and he said he'd seen some presentations on FRED; it's a small world! My understanding is that PROTOM won the contract in the end, but I'm not sure on any more specifics (e.g. number of beam lines). If you do decide that you'd like to contribute (please know that we'd really appreciate it if you do!), I've added you to the PyMedPhys repo. You should be able to make contributions now! Some things to note if you do contribute:
|
@jasqs could you give installation of the following two packages a shot:
Let me know how you go. If you are able to install those, then within the gamma code I just need to run an import check for tfinterp, and then verify that it and TensorFlow are built and installed with CUDA support; if both of those conditions are true, I would dynamically swap out the CPU interpolation. That way GPU support is optional, but available for those advanced users who want it. The interpolation search would then be done on the GPU instead of the CPU. |
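The import-check-and-swap idea can be sketched as a runtime backend probe. This is only an illustration of the pattern: `cupy` is used here purely as a stand-in for whichever CUDA-capable library (tfinterp/TensorFlow in this discussion) the real check would probe, and the CUDA-support verification step is omitted.

```python
import importlib.util

def pick_gamma_backend():
    """Probe for an optional GPU array library at runtime; fall back to
    NumPy if it is absent, so GPU support stays optional."""
    if importlib.util.find_spec("cupy") is not None:
        import cupy
        return "gpu", cupy
    import numpy
    return "cpu", numpy

backend, xp = pick_gamma_backend()
# downstream gamma code can call xp.sqrt, xp.interp, ... on either backend
```

Because both NumPy and CuPy expose near-identical array APIs, the rest of the gamma code can stay backend-agnostic by always going through `xp`.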
@SimonBiggs Unfortunately I am not able to compile tfinterp. I have properly installed CUDA 9.0, but I get an error in the compilation of tfinterp. I will try to work on this, but I am not a C++ expert, so I do not really know where the problem is yet. |
That's okay. I had feared that might be the case. I'm still not able to get
to my computer (still on our beach holiday).
I'll give it a try when I get back, it might be that tfinterp is just out
of date and not maintained. There are other options open to us though.
Given the value in having a CUDA accelerated gamma function I'll find some
way to make it work in a robust manner.
|
Notes regarding setup. I will make multiple edits to this comment until I am complete. I have not finished this yet, but I am recording here what I have been doing.

Downloaded the following:
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.1.105-1_amd64.deb

Based on the instructions at https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=debnetwork I ran:

sudo dpkg -i cuda-repo-ubuntu1804_10.1.105-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda-10-0

Then:

pip install cupy-cuda100 |
@SimonBiggs Is there any progress in the implementation of gamma index calculation with multiprocessing/GPU? |
There was a little, I tried a few different angles. But I think for now I won't dive into the GPU side of things. I shall hopefully prioritise the multiprocessing option soon however. |
@SimonBiggs did you try to implement GI with multiprocessing? I am trying to calculate 3D GI for an image with ~8.5E6 pixels (head and neck CT resolution) and it takes a very long time. Trying to calculate only for a random subset of the data also takes very long, and I cannot validate the results, as I am not able to calculate the full image in a reasonable time. I think that if not on GPU, calculating on multiple CPUs would solve the problem in my case. |
Does the calculation complete? How long does a full calc take?
|
Also, if you need to, you can implement this yourself: pass through the full
evaluation grid, but only a subset of the reference grid.
For however many cores you have, split the reference grid into that many
pieces, then calculate gamma on each core for its subset of the reference,
and then join the resulting gamma values back up at the end.
...that's all I'll be doing inside the code if I implement
multiprocessing...
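The splitting scheme described above can be sketched as follows. The 1D gamma kernel here is a toy stand-in, not the pymedphys implementation, and the "fork" start method is assumed so that worker processes inherit the module-level evaluation grid without pickling.

```python
import multiprocessing as mp
import numpy as np

# Module-level arrays: with the "fork" start method, workers inherit these.
# Toy 1D dose distributions for illustration only.
EVAL_X = np.linspace(0.0, 10.0, 201)
EVAL_DOSE = np.exp(-((EVAL_X - 5.0) ** 2))

def gamma_chunk(args):
    """Gamma for one chunk of reference points, searching the FULL
    evaluation grid, so chunk edges need no special handling."""
    ref_x, ref_dose = args
    dose_tol = 0.03 * EVAL_DOSE.max()  # 3% global
    dist_tol = 3.0                     # 3 mm
    out = np.empty(ref_x.size)
    for i in range(ref_x.size):
        dd = (EVAL_DOSE - ref_dose[i]) / dose_tol
        dx = (EVAL_X - ref_x[i]) / dist_tol
        out[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return out

def gamma_multiprocess(ref_x, ref_dose, n_procs=4):
    """Split only the REFERENCE grid into n_procs pieces, compute each
    piece in its own process, then join the gamma values back up."""
    chunks = list(zip(np.array_split(ref_x, n_procs),
                      np.array_split(ref_dose, n_procs)))
    with mp.get_context("fork").Pool(n_procs) as pool:
        parts = pool.map(gamma_chunk, chunks)
    return np.concatenate(parts)
```

Because every worker sees the whole evaluation grid, each reference point's search is independent and no overlap bookkeeping is required at chunk boundaries, which is exactly the point made in the comment above.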
|
Is it that simple? How do you handle the edges of each subgrid? Have the grids overlap by an amount at least as large as the distance_to_agreement, and then be sure to score each edge voxel in only one of the subgrids? |
You pass the whole evaluation grid; each reference point that searches the
evaluation grid is calculated independently of every other reference point.
|
Ah yes, that makes sense. |
:) |
This is exactly how I was thinking it should work. I did not finish the calculation on the full grid. It had been calculating for 2 days and I had to stop it :) |
That's my worry. If there is that much of a time cost, potentially
multiprocessing won't be the solution.
This particular algorithm can take a long time (indefinitely at worst) if
the dose distributions do not agree. I have seen issues like you describe
when the dose distributions are not aligned as one expects.
To troubleshoot, could you pass a small number such as 10 to the random
subset, just to see if the calculation completes at all...
|
Using just 10 random points, it calculated in about 3 min. |
Can you investigate the dose grids and see how different the evaluation and the reference are at those positions? |
I managed to split calculations for 40 cores but each instance loads a copy of the whole evaluation grid. This takes a lot of RAM. I suppose the multiprocessing should be done internally in gamma_shell so the evaluation grid would be loaded only once. @SimonBiggs I will check the differences and get back. |
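The RAM problem described here (each of the 40 instances holding its own copy of the evaluation grid) can be avoided with `multiprocessing.shared_memory` (Python 3.8+). This is a sketch of the mechanism only; the array names and the `attach` helper are illustrative, not pymedphys API.

```python
import numpy as np
from multiprocessing import shared_memory

# Put the evaluation grid into shared memory ONCE; workers then attach by
# name instead of each receiving (and holding) a private copy.
eval_dose = np.random.default_rng(0).random((50, 50, 50))

shm = shared_memory.SharedMemory(create=True, size=eval_dose.nbytes)
shared = np.ndarray(eval_dose.shape, dtype=eval_dose.dtype, buffer=shm.buf)
shared[:] = eval_dose  # the single copy into the shared segment

def attach(name, shape, dtype):
    """What each worker process would run: map the existing segment
    zero-copy and wrap it in an ndarray view."""
    seg = shared_memory.SharedMemory(name=name)
    return seg, np.ndarray(shape, dtype=dtype, buffer=seg.buf)

seg, view = attach(shm.name, eval_dose.shape, eval_dose.dtype)
value_seen_by_worker = float(view[3, 4, 5])

# release the ndarray views before closing, or close() can raise BufferError
del view, shared
seg.close()
shm.close()
shm.unlink()
```

Only `shm.name` (a short string) needs to be passed to each worker, so the 3D grid exists in physical memory once no matter how many processes search it.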
This algorithm itself is quick if the doses are close to each other, but it
then takes longer as the doses get further apart. It's optimal when you
expect gamma pass rates on the order of 90-100%...
|
Another question, for those 10 that you calculated in 3 minutes, what gamma
values did they have?
|
Something else that might help you get an idea of how the algorithm is
affected by distance: create an artificial dose grid that is 0 everywhere
and has a 2x2x2 cube of dose value 1 in the centre. Use exactly the same
dose values for evaluation and reference, but slowly adjust the
coordinates, i.e. provide different coordinates to the reference and
evaluation.
When the reference and evaluation are close you'll notice the calc is
quick, but as soon as the two cubes get more than 2x the distance
threshold away from each other, the calculation times begin to explode...
|
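The experiment described above can be scripted directly. The brute-force gamma below is a naive stand-in for the real search (3%/3 mm tolerances assumed, computed only at nonzero reference voxels); it demonstrates the effect of shifting the coordinates while keeping the dose values identical.

```python
import numpy as np

def make_grid(shift=0.0, n=11):
    """0-dose grid with a 2x2x2 cube of dose 1 in the centre; `shift`
    moves the coordinate axes (mm) without changing the dose values."""
    axis = np.arange(n, dtype=float) + shift
    dose = np.zeros((n, n, n))
    dose[5:7, 5:7, 5:7] = 1.0
    return axis, dose

def brute_gamma(ref_axis, ref_dose, ev_axis, ev_dose,
                dose_tol=0.03, dist_tol=3.0):
    """Gamma at each nonzero reference voxel, minimised over every
    evaluation voxel (cubic grids, shared axis per dimension)."""
    ex, ey, ez = np.meshgrid(ev_axis, ev_axis, ev_axis, indexing="ij")
    pts = np.column_stack([ex.ravel(), ey.ravel(), ez.ravel()])
    ed = ev_dose.ravel()
    out = []
    for i, j, k in zip(*np.nonzero(ref_dose)):
        rp = np.array([ref_axis[i], ref_axis[j], ref_axis[k]])
        dd = (ed - ref_dose[i, j, k]) / dose_tol
        dx = np.linalg.norm(pts - rp, axis=1) / dist_tol
        out.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.array(out)
```

With no shift the cubes coincide and every gamma value is 0; shifting the evaluation axes well past the distance threshold leaves every cube voxel failing, which is the regime where the real search-based algorithm has to expand its search radius and slows down.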
I think the solution here may turn out to be better documentation of the gamma tool's speed profile: make it clear that it will be very slow for disagreeing datasets. |
Hi,
I am using gamma index analysis in my project and need to perform the analysis for many 3D dose distributions, which is time-consuming. Do you plan to implement multiprocessing in the gamma_shell script?
best
Jan Gajewski