Fixes #901 : Added documentation for "step" in dti #972
Conversation
indentation for line
Do you want to sort out those PEP8 issues, while we're here?
@@ -1132,6 +1132,9 @@ def predict(self, gtab, S0=1, step=None):
        The mean non-diffusion weighted signal in each voxel. Default: 1 in
        all voxels.

    step : int
        The chunk size as a number of voxels.
We might want to add a bit more explanation here (after the first sentence). Something like: "To increase the speed of processing, tensor fitting is done simultaneously over many voxels. This parameter sets the number of voxels that will be fit at once in each iteration. Note that a larger number here should speed things up, but will also take up more memory, so keep an eye on your computer's memory consumption as you increase this number."
Also, we should mention that this is an optional parameter, and that it has a default value (10,000).
We should also add all this to the documentation of the TensorModel
object.
Ok. I will add this and also update the documentation for the TensorModel object. For the PEP8 issues, I can do it in this commit itself. But, if making a new branch just for PEP8 isn't a big deal, I guess it's better to create a separate issue and fix PEP8 there.
Should the explanation come directly under the definition of step, or should I include it under Notes? Also, about the line - a larger step value "should" speed things up but will also take up more memory - is this correct regarding the memory?
"""
The chunk size as a number of voxels. Optional parameter with default value 10,000.
In order to increase speed of processing, tensor fitting is done simultaneously
over many voxels. The step parameter sets the number of voxels that will be fit at
once in each iteration. A larger step value should speed things up, but will
also take up more memory. It is advisable to keep an eye on memory consumption
as this value is increased.
"""
@arokem Is this fine?
I was hoping to also document this at the level of the TensorModel object. Maybe in the
So all the functions here?
Rather, somewhere here: https://github.com/nipy/dipy/blob/master/dipy/reconst/dti.py#L696
Done 😄
take up more memory. It is advisable to keep an eye on memory consumption as
this value is increased.

Example: In iter_fit_tensor() we have a default step value of 1e4
Please change
iter_fit_tensor() => :func:iter_fit_tensor
Comment edited to deal with markdown vs. rst.
Cool. One more small change, so that this renders fine in sphinx (and adds a link in the html to the right function). Thanks for picking up those PEP8 bits as well!
Done.
One more small thing - in the sphinx markup, the name of the function needs to be enclosed in backticks.
Updated the backticks |
This now looks fine to me. Anyone else want to give this a look?