Push commits to enable creation of response files from P&Z lists #1962

Open: wants to merge 16 commits into base: master
Conversation

@amkearns-usgs

amkearns-usgs commented Oct 23, 2017

What does this PR do?

Enables constructing response objects from pole/zero lists, along with a series of optional parameters with appropriate defaults.

Why was it initiated? Any relevant Issues?

Yes. See issue #1949.

PR Checklist

  • This PR is not directly related to an existing issue (which has no PR yet).
  • All tests still pass.
  • Any new features or fixed regressions are covered via new tests.
  • Any new or changed features are fully documented.
  • Significant changes have been added to CHANGELOG.txt.
  • First-time contributors have added their names to CONTRIBUTORS.txt.
@krischer

Member

krischer commented Oct 23, 2017

Something is still off. Would you mind if I force push to your branch to fix it?

@amkearns-usgs

This comment has been minimized.

amkearns-usgs commented Oct 23, 2017

@krischer

Member

krischer commented Oct 23, 2017

Fixed now.

@amkearns-usgs

This comment has been minimized.

amkearns-usgs commented Oct 23, 2017

@krischer

Thanks for this. A first round of quick feedback.

I mainly think that this function currently does a bit too much; it should aim to be as simple as possible and basically take PAZ + gain/A0 and return a response for that.

It also needs to be actually tested, best against some known good response source so we can be sure everything has been set correctly.

decimation_factor, decimation_offset, decimation_delay,
decimation_correction)
resp = Response(resource_id=response_id,
instrument_sensitivity=response_sensitivity,

@krischer

krischer Oct 23, 2017

Member

This must be an InstrumentSensitivity object which can be constructed given a sensitivity value and a frequency.
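For illustration, constructing such an object might look roughly like this (the numbers below are placeholders, not values from this PR):

from obspy.core.inventory.response import InstrumentSensitivity

# Placeholder values for illustration only: the overall sensitivity of the
# instrument, the frequency at which it is specified, and the stage's
# input and output units.
sensitivity = InstrumentSensitivity(value=2.5e9, frequency=1.0,
                                    input_units="M/S",
                                    output_units="COUNTS")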

decimation_correction)
resp = Response(resource_id=response_id,
instrument_sensitivity=response_sensitivity,
instrument_polynomial=response_polynomial,

@krischer

krischer Oct 23, 2017

Member

Same here, but this simple function should IMHO just not allow setting the polynomial, as that is a lot more special-purpose.

def construct_response_from_paz_values(self):
poles = [1+2j, 1-2j, 2+2j, 2-2j]
zeros = [0, 0, 5]
resp = Response.from_paz(zeros, poles)

@krischer

krischer Oct 23, 2017

Member

Please add a bit more to this test, e.g. make some assertions about the response object and maybe also calculate the response and compare with a response from some known-good source.
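For instance (a hypothetical sketch of such assertions; the exact expected values depend on what from_paz sets):

self.assertEqual(len(resp.response_stages), 1)
np.testing.assert_equal(resp.response_stages[0].zeros, zeros)
np.testing.assert_equal(resp.response_stages[0].poles, poles)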

gain = 3.08*(10**5)
sensitivity = 1201.*(2**26/40.)
resp = Response.from_paz(zeros, poles, stage_gain=gain,
response_sensitivity=sensitivity)

@krischer

krischer Oct 23, 2017

Member

Same comment as above.

@@ -1663,6 +1663,56 @@ def get_sacpz(self):
paz = self.get_paz()
return paz_to_sacpz_string(paz, self.instrument_sensitivity)
@staticmethod
def from_paz(zeros, poles, stage_sequence_number=1, stage_gain=1,

@krischer

krischer Oct 23, 2017

Member

The gain should really be a required parameter.

@krischer

krischer Oct 23, 2017

Member

And the stage sequence number should IMHO not be a parameter, as this will return a complete response object and you already know that it will be stage 1.

normalization_factor=1.0, resource_id=None, resource_id2=None,
name=None, input_units_description=None,
output_units_description=None, description=None,
decimation_input_sample_rate=None, decimation_factor=None,

@krischer

krischer Oct 23, 2017

Member

In my opinion there is no need to set these - this is really just a helper function to construct the simplest possible PAZ response.

@megies megies added this to the 1.2.0 milestone Oct 24, 2017

@amkearns-usgs

amkearns-usgs commented Oct 24, 2017

I believe I have made the relevant changes. Please let me know if there are any more things you need.

@krischer

Member

krischer commented Oct 24, 2017

There is still something wrong with your git. Does it maybe somehow autoreplace symlinks or something? In any case - not important right now - I can fix it before merging.

This still lacks a test in terms of the actual response being calculated, e.g. something like

resp = Response.from_paz(zeros, poles)
r = resp.get_evalresp_response_for_frequencies(frequencies, output="VEL")
np.testing.assert_allclose(r, known_good_r_from_some_other_source)

This would then also test that all the normalization constants and what not are set correctly.

@megies

Member

megies commented Oct 25, 2017

@amkearns-usgs you committed some files by accident, we can handle it later, but you might want to give $ git add -p a try, add new files manually with $ git add <filename>, and avoid using $ git commit --all in the future.

@amkearns-usgs

This comment has been minimized.

amkearns-usgs commented Oct 25, 2017

@krischer

I reviewed this in a bit more detail. Sorry for being a bit nitpicky at times but I feel like this might be used a lot so we have to make sure to get it right.

I'm also fairly sure these tests could not have passed on your machine. Note that you can locally run the tests either by directly executing the test file with python, by using obspy-runtests core (might take a while though), or using pytest which gives you a lot of control. Most IDEs also allow executing the tests.

Same with the code formatting - just install flake8 and pep8-naming (both with pip to make sure to get the most recent version - conda packages tend to lag a bit behind for these two) and run flake8 FILENAME.

normalization_factor=1.0):
"""
Takes in lists of complex poles and zeros and returns a Response with
those values defining its only stage.

@krischer

krischer Oct 26, 2017

Member

If possible, please rephrase the docstring into

"""
Single line description
   <- Empty Line
Longer description.

...
"""

Then it will render nicer in the sphinx documentation.

:param zeros: All zeros of the response to be defined.
:type poles: list of complex
:param poles: All poles of the response to be defined.
:type stage_gain: double

@krischer

krischer Oct 26, 2017

Member

All floating point numbers are double precision in Python - just call it float.

parameter, the value assigned to stage_gain_frequency will be used.
:returns: new Response instance with given P-Z values
"""
if normalization_frequency is None:

@krischer

krischer Oct 26, 2017

Member

This is very unlikely to happen since it defaults to 1.0, so people would have to explicitly set it to None. In that case I guess they are more likely to just directly set it to whatever value they want. I would remove this "magic" behavior and just assume people are very explicit about what they want.

sequence, stage_gain,
stage_gain_frequency, input_units, output_units,
pz_transfer_function_type, normalization_frequency, zeros,
poles, normalization_factor)

@krischer

krischer Oct 26, 2017

Member

Just a style preference: could you please change it to PolesZerosResponseStage(stage_sequence_number=sequence, stage_gain=stage_gain, ...)? The main reason for this is that it is very easy to make a mistake for functions/objects with a lot of parameters, especially if at some point we change the signature of the object's __init__() method.
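A sketch of the keyword-argument form, reusing the variable names from the positional call in the diff above (the exact parameter list is assumed from that context):

pzstage = PolesZerosResponseStage(
    stage_sequence_number=sequence,
    stage_gain=stage_gain,
    stage_gain_frequency=stage_gain_frequency,
    input_units=input_units,
    output_units=output_units,
    pz_transfer_function_type=pz_transfer_function_type,
    normalization_frequency=normalization_frequency,
    zeros=zeros,
    poles=poles,
    normalization_factor=normalization_factor)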

pz_transfer_function_type, normalization_frequency, zeros,
poles, normalization_factor)
sensitivity = InstrumentSensitivity(stage_gain, stage_gain_frequency,
input_units, output_units)

@krischer

krischer Oct 26, 2017

Member

Same comment here.

r_zeros = resp.response_stages[0].zeros
np.testing.assert_equal(zeros, r_zeros)
def test_paz_pole_values(self):

@krischer

krischer Oct 26, 2017

Member

This can be combined with the preceding test case.

@@ -298,6 +298,42 @@ def test_response_with_no_units_in_stage_1(self):
out, [0 + 9869.2911771081963j, 0 + 19738.582354216393j,
0 + 39477.164708432785j])
def test_paz_zero_values(self):

@krischer

krischer Oct 26, 2017

Member

Can you please rename your test cases to something like test_resp_from_paz_settings_zeros() and similar more descriptive names?

zeros = [0, 0, 5]
resp = Response.from_paz(zeros, poles, 1.0)
r_zeros = resp.response_stages[0].zeros
np.testing.assert_equal(zeros, r_zeros)

@krischer

krischer Oct 26, 2017

Member

Please also test the other set parameters.

# + +-------------------------------------------+ +
# + | Response (Coefficients), NS306 ch SHZ | +
# + +-------------------------------------------+ +
#

@krischer

krischer Oct 26, 2017

Member

You will probably have to manually remove all non-PAZ stages from this file to get the test to work.

sensitivity = InstrumentSensitivity(stage_gain, stage_gain_frequency,
input_units, output_units)
resp = Response(instrument_sensitivity=sensitivity,
response_stages=[pzstage])

@krischer

krischer Oct 26, 2017

Member

Please add a resp.recalculate_overall_sensitivity() at the end to make sure the overall sensitivity is actually correct at the end. If stage_gain_frequency and normalization_frequency differ this is not necessarily the case.
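A sketch of how the end of the helper might then look, based on the construction shown in the diff above:

sensitivity = InstrumentSensitivity(stage_gain, stage_gain_frequency,
                                    input_units, output_units)
resp = Response(instrument_sensitivity=sensitivity,
                response_stages=[pzstage])
# Recompute the overall sensitivity from the stages so it is consistent even
# when stage_gain_frequency and normalization_frequency differ.
resp.recalculate_overall_sensitivity()
return resp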

@amkearns-usgs

amkearns-usgs Oct 27, 2017

I've made this change (added the recalculate_overall_sensitivity() call).

I would like to make sure that my test captures the desired behavior for this -- for some reason, even if stage_gain_frequency and normalization_frequency are set to the same value, I'm getting a different result for the response's sensitivity compared to the gain for the poles-zeros. This seems like this shouldn't be the case, since the PZ stage is the only one in the response.

To be clear, when I set the stage gain to 2014943641.6, that IS the stage gain for the PZ stage. However, the overall gain looks something like 432115652.548. The frequencies for stage gain and normalization are both kept at 1.0, as is the normalization factor. Is this an issue with the units, or is there a step that I'm missing somewhere?

:returns: new Response instance with given P-Z values
"""
pzstage = PolesZerosResponseStage(

@amkearns-usgs

amkearns-usgs Oct 27, 2017

I made the requested changes for these lines of code and just wanted to make sure the comment I left has been seen, since I replied to the request for the change rather than posting a main comment in the thread.

I would like to make sure that my test captures the desired behavior for the recalculate_overall_sensitivity() line -- for some reason, even if stage_gain_frequency and normalization_frequency are set to the same value, I'm getting a different result for the response's sensitivity compared to the gain for the poles-zeros. This seems like this shouldn't be the case, since the PZ stage is the only one in the response.

To be clear, when I set the stage gain to 2014943641.6, that IS the stage gain for the PZ stage. However, the overall gain looks something like 432115652.548. The frequencies for stage gain and normalization are both kept at 1.0, as is the normalization factor. Is this an issue with the units, or is there a step that I'm missing somewhere?

@krischer

krischer Oct 27, 2017

Member

I'll have to check this on Monday. The recalculate_overall_sensitivity() function just takes the result of evalresp at a given/derived frequency and uses that as the total sensitivity. I think this is correct. But I'll have to look at it in a bit more detail next week to really answer your question.

@amkearns-usgs

amkearns-usgs Oct 27, 2017

OK, I think we've figured out the issue with this toy test case with the help of someone else here at the lab. The numbers weren't derived from anything - I made them up out of thin air - so the gain of the instrument isn't just the one sensitivity parameter; it also includes the contribution of the poles and zeros themselves. I could probably counteract that by choosing the A0 value for the PZ stage so that it sets the response curve to 1 at that point; right now A0 defaults to 1, and thus the overall sensitivity is affected by the numbers I apply to the poles and zeros.
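As an illustration of that idea, A0 can be chosen so that the pole/zero transfer function has unit magnitude at the normalization frequency (a rough sketch assuming a LAPLACE (RADIANS/SECOND) transfer function type; the helper name is made up):

import numpy as np

def a0_for_unit_gain(poles, zeros, normalization_frequency):
    # Evaluate H(s) = prod(s - z) / prod(s - p) at s = 2j*pi*f and return the
    # factor that scales |H| to 1.0 at the normalization frequency.
    s = 2j * np.pi * normalization_frequency
    h = np.prod([s - z for z in zeros]) / np.prod([s - p for p in poles])
    return 1.0 / abs(h)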

@aringler-usgs

aringler-usgs Oct 30, 2017

Contributor

It looks like the response file is using blockette 58 field 4 to state the gain of stage 1. I think we want to put that as the normalization so that blockette 53 field 7 has the normalization value (that isn't unity) and then put a sensitivity value for a GS-13 (blockette 58 field 4).

@amkearns-usgs

amkearns-usgs commented Oct 30, 2017

In addition to previous changes and getting the test cases in the previous form to pass, we have now changed the evalresp vs. from-paz resp test to use an A0 normalization factor different from 1.0. Locally, the test cases are passing.

@amkearns-usgs

amkearns-usgs commented Nov 1, 2017

All requested changes have now been made, including removing the stage-2 data from the response file used in testing (since the values in question were unity, they shouldn't have affected the data, but now we can say so with certainty).

@krischer

Member

krischer commented Nov 1, 2017

Thanks a bunch - I'll try to find some time this week to properly review this!
