Added default uncertainty function in average_combine of combiner.py #258
Conversation
Overall looks pretty good @stottsco. Thanks for the contribution! Can you add this parameter to the docstring? Just a line describing what it does. One thing that may be helpful would be adding an example, and I think this would solve your question above as well. If someone wanted to use the median_absolute_deviation function as the uncertainty, they could do it as a lambda function. So something like:
Although it would be good to double-check this works.
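A sketch of that lambda idea (the `median_combine(uncertainty_func=...)` hook is taken from this thread; the standalone computation below just shows what such a callable would compute on a stacked array, using plain NumPy rather than astropy's `median_absolute_deviation`):

```python
import numpy as np

# A MAD-based uncertainty written as a lambda, the way a caller could pass
# it to median_combine(uncertainty_func=...). The 1.4826 factor scales the
# median absolute deviation to estimate a Gaussian standard deviation.
mad_unc = lambda arr, axis=None: 1.4826 * np.median(
    np.abs(arr - np.median(arr, axis=axis, keepdims=True)), axis=axis)

# Three "images" stacked along axis 0, the way a Combiner holds its data.
stack = np.array([[1.0, 2.0], [3.0, 6.0], [5.0, 10.0]])
print(mad_unc(stack, axis=0))
```

Passing `axis=0` gives one uncertainty value per pixel of the combined image, computed down the stack.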
@mwcraig there is a failure in the test suite related to the image collection documentation with Python 3.5. I haven't looked in detail, but it is unrelated to this PR; it looks like some kind of ordering being checked and coming back in a different order.
For average_combine I added a line to the docstring as requested. In median_combine I added an optional argument uncertainty_func=median_absolute_deviation to the parameter list and changed the line uncertainty = 1.4826*median_absolute_deviation(self.data_arr, axis=0). The code for the test is:

    def test_median_combine_unc_lambda(ccd_data):
        ccd_list = [ccd_data, ccd_data, ccd_data]
        c = Combiner(ccd_list)
        ccd = c.median_combine(uncertainty_func=lambda x: 1.4826*np.sum(x))
        assert np.all(ccd.uncertainty.array == 1.4826*np.sum(c.data_arr, 0))

From the error it looks like the lambda was expecting only one argument but was passed two, so I tried lambda x, y: 1.4826*np.sum(x, y), hoping it would accept two, but I received the same error.
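If the combiner calls `uncertainty_func(self.data_arr, axis=0)` internally (which the later change adding an `axis` argument to `sigma_func` suggests, though I have not verified the exact call site), the fix is for the lambda to accept and forward an `axis` keyword rather than a second positional argument:

```python
import numpy as np

# A one-argument lambda raises a TypeError if the caller also passes
# axis=0. Accepting axis as a keyword and forwarding it resolves that:
unc = lambda x, axis=None: 1.4826 * np.sum(x, axis=axis)

# Two "images" stacked along axis 0.
stack = np.array([[1.0, 2.0], [3.0, 4.0]])
print(unc(stack, axis=0))  # sums each pixel down the stack, then scales
```

Note that `lambda x, y: np.sum(x, y)` fails for a different reason as well: the second positional argument of `np.sum` is `axis`, but the combiner passes it by keyword, so a positional `y` is never filled.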
@stottsco -- now that you mention it, I think the Right Thing To Do is:
@stottsco, are you up for doing the astropy pull request?
@stottsco -- I would like to get this wrapped up this week. It turns out the function we need is already defined (almost) at https://github.com/astropy/ccdproc/blob/master/ccdproc/core.py#L574 so we just need to add an ``axis`` argument.
@stottsco -- the changes you made look good; I'd like to get the git history simplified a bit. We'll tackle that at the Thursday morning coding meeting.
…mbine in combiner.py. Also added an optional axis argument to sigma_func in core.py because it is used as the default uncertainty function in median_combine and needs to accept two arguments.
Force-pushed from 624d565 to febd61d
@stottsco -- looks one
    @@ -580,13 +580,16 @@ def sigma_func(arr):
        ----------
        arr : `~ccdproc.CCDData` or `~numpy.ndarray`
            Array whose deviation is to be calculated.

        axis : None or int or tuple of ints, optional
            Axis or axes along which a sum is performed. The default
            (axis = None) is perform a sum over all the dimensions of the
            input array. axis may be negative, in which case it counts
            from the last to the first axis.
Could you split that line into multiple ones? It's recommended that lines stay not much over 80 characters per row. Also I find it a bit misleading: it's not a sum that is performed, it's a "sigma function", or am I wrong? Also, the word "axis" should be capitalized when it's at the beginning of a sentence.
I would suggest:

    Axis or axes along which the function is performed.
    If ``None`` (the default) it is performed over all the dimensions
    of the input array.
    The axis argument can also be negative, in this case it counts from
    the last to the first axis.
I hope I'm not too late. I just left some inline comments. Most of them are not very important. Only the thing about the default value for
Nice work 👍
Hi, if anything else needs to be changed, let me know.
    @@ -574,6 +574,7 @@ def test_header(ccd_data):
    def test_wcs(ccd_data):
        ccd_data.wcs = 5
        assert ccd_data.wcs == 5
    <<<<<<< HEAD
I guess that's a leftover from your rebase @mwcraig mentioned. You need to delete this line.
Okay, I edited the docstrings for those two functions. As for the <<<<<<< HEAD thing, I don't see this line in the file on my local machine, nor if I look at the file in my repo on GitHub. All I have is:

    def test_wcs(ccd_data):
        ccd_data.wcs = 5
        assert ccd_data.wcs == 5

and then it's EOF, so I'm not sure how to fix this?
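One way to hunt for stray conflict markers across the whole checkout, wherever a rebase may have left them (the `ccdproc/` path here is just this repository's source directory; any path works), is a regex search:

```shell
# Find leftover merge-conflict markers: prints file:line for any line
# starting with <<<<<<<, =======, or >>>>>>>.
# Exits non-zero when nothing is found, which here means the tree is clean.
grep -rnE '^(<{7}|={7}|>{7})' ccdproc/
```

If the marker shows up on GitHub but not locally, the conflicted version was likely committed on the branch that was pushed, so checking out that branch before searching matters.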
@stottsco - I'm not too familiar with git myself; I think/hope @mwcraig or @crawfordsm know what to do in this case.
Looks like
Oh gotcha, I didn't catch that. Thanks, I took that out; hopefully it should be good to go now.
@stottsco -- just one last thing, I think. In the rebase frenzy I think you ended up with duplicates of the WCS property and setter, and duplicates of the test. Could you please delete the duplicates (not by another rebase ;), just delete and commit?
@MSeifert04 -- thanks for reviewing this!
Wow, good catch. Got that fixed too now, thanks.
Added default uncertainty function in average_combine of combiner.py
Thanks, @stottsco
Added a default uncertainty function in average_combine of combiner.py so that an uncertainty function can be passed optionally. This closes part of issue #224. As for the other part: would it be preferable to let the default uncertainty function in median_combine be median_absolute_deviation and then use a conditional to multiply by the factor of 1.4826 when the uncertainty function is the default, using something like:

    def median_combine(self, median_func=ma.median, scale_to=None,
                       uncertainty_func=median_absolute_deviation):
        ...
        uncertainty = uncertainty_func(self.data_arr, axis=0)
        if uncertainty_func is median_absolute_deviation:
            uncertainty *= 1.4826

or would it be better to define a function for median_absolute_deviation that does this multiplication, so that the uncertainty can simply be assigned from the default or an optionally passed function without having to worry about whether the factor of 1.4826 is applicable?
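A sketch of that second option, a wrapper that bakes the factor in (the name sigma_func matches the helper in core.py that this thread later extends with an axis argument, but this body is an assumption, written with plain NumPy rather than astropy's median_absolute_deviation):

```python
import numpy as np

def sigma_func(arr, axis=None):
    """Robust standard-deviation estimate via the scaled MAD.

    The 1.4826 factor makes the median absolute deviation a consistent
    estimator of the standard deviation for normally distributed data,
    so callers never need to apply the factor themselves.
    """
    # keepdims=True keeps the median broadcastable against arr when an
    # axis is given, so the deviations are taken per-slice.
    med = np.median(arr, axis=axis, keepdims=True)
    return 1.4826 * np.median(np.abs(arr - med), axis=axis)
```

With a wrapper like this as the default, median_combine can assign `uncertainty = uncertainty_func(self.data_arr, axis=0)` unconditionally, with no special-casing of the 1.4826 factor.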