Making the errors in DDS to Controls conversion more floating-point-friendly #69
Conversation
# check that the offsets are correctly sorted in time
time_differences = np.diff(offsets)
if not np.all(np.logical_or(np.greater(time_differences, 0.),
In this case I'm not convinced that we should allow "close" offsets. Typically we'd use a "close" check when equality is allowed, but here I think we want the offsets to be strictly increasing rather than non-decreasing.
                            np.isclose(time_differences, 0.))):
    raise ArgumentsValueError("Pulse timing could not be properly deduced from "
                              "the sequence offsets. Make sure all offset are "
                              "correctly ordered in time.",
Instead of saying "correctly ordered in time", how about we just say "increasing order"? That would be a clearer indication of how the user can fix the issue.
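Putting the two suggestions above together, a minimal standalone sketch of the check might look like this. The function name and the plain `ValueError` are placeholders (the library raises its own `ArgumentsValueError`); the point is that strictly increasing offsets need no `isclose` tolerance, and the message names "increasing order" directly:

```python
import numpy as np

def check_offsets(offsets):
    """Sketch of the reviewer's suggestion, not the library's actual code."""
    time_differences = np.diff(np.asarray(offsets, dtype=float))
    # Strictly increasing: equal or decreasing offsets are rejected, so no
    # floating-point "close to zero" tolerance is needed here.
    if not np.all(time_differences > 0.0):
        raise ValueError(
            "Pulse timing could not be deduced from the sequence offsets. "
            "Make sure all offsets are in increasing order.")

check_offsets([0.0, 1.0, 2.5])  # strictly increasing, passes silently
```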
tests/test_dynamical_decoupling.py
Outdated
a sequence tightly packed with pulses, where there is no time for a gap between
the pi/2-pulses and the adjacent pi-pulses.
"""
# create a sequence containing 30 pi-pulses and 2 pi/2-pulses at the extremities
Similar to the comment on the last PR, but I think we'd get the same amount of coverage with a sequence of maybe 4 offsets (pi/2, pi, pi, pi/2). In the context of the bug Luigi hit it's nice to verify that situation exactly, but for the test that we actually merge I'd suggest favouring simplicity.
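The tightly-packed 4-pulse scenario the reviewer describes can be sketched with plain NumPy (the durations below are arbitrary illustrative numbers, not values from the test): when pulses sit back to back, every inter-pulse gap is exactly zero, which is precisely the case a floating-point-friendly check must accept.

```python
import numpy as np

# Hypothetical durations for a pi/2, pi, pi, pi/2 sequence (arbitrary units),
# packed back to back with no room between adjacent pulses.
pulse_durations = np.array([0.5, 1.0, 1.0, 0.5])
pulse_ends = np.cumsum(pulse_durations)
pulse_centres = pulse_ends - pulse_durations / 2.0  # the "offsets"

# Gap between consecutive pulses: distance between centres minus the two
# half-widths. For a tightly packed sequence this is exactly zero.
gaps = (np.diff(pulse_centres)
        - (pulse_durations[1:] + pulse_durations[:-1]) / 2.0)
```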
if not np.all(np.logical_or(np.greater(gap_durations, 0.),
                            np.isclose(gap_durations, 0.))):
    raise ArgumentsValueError("There is overlap between pulses in the sequence. "
                              "Try increasing the maximum rabi rate or maximum detuning rate.",
Is this necessarily the issue? I think it could also be minimum_segment_duration, if that's the limiting factor for the pulse durations.
If that's right, an alternative would be to drop this check, and update the error message below to also suggest increasing the rabi/detuning rate.
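A sketch of the gap check with the reviewer's broadened error message might look like the following. The function name and plain `ValueError` are stand-ins for illustration; the `isclose` branch is what makes exactly-touching pulses survive floating-point rounding:

```python
import numpy as np

def check_gaps(gap_durations):
    """Sketch only: accept gaps that are positive or merely close to zero."""
    gaps = np.asarray(gap_durations, dtype=float)
    if not np.all(np.logical_or(gaps > 0.0, np.isclose(gaps, 0.0))):
        raise ValueError(
            "There is overlap between pulses in the sequence. Try increasing "
            "the maximum rabi rate or maximum detuning rate, or decreasing "
            "minimum_segment_duration.")
```

A gap of exactly `0.0`, or a tiny negative value produced by rounding, passes; a genuinely negative gap (real overlap) raises.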
minimum_segment_duration : float, optional
    If set, further restricts the duration of every segment of the Driven Controls.
    Defaults to 0, in which case it does not affect the duration of the pulses.
    Must be greater than or equal to 0, if set.
Good point! I think this is a reasonable requirement.
Yeah, I was getting away without testing it explicitly before, but if I eliminate the gap >= 0 test then I have to make sure this isn't negative.
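The explicit non-negativity validation the thread converges on could be as simple as the sketch below (hypothetical function name and plain `ValueError`; the library would use its own error type):

```python
def validate_minimum_segment_duration(minimum_segment_duration=0.0):
    """Sketch: reject negative values, since a negative minimum segment
    duration is meaningless and the removed gap >= 0 check no longer
    catches it indirectly."""
    if minimum_segment_duration < 0.0:
        raise ValueError(
            "minimum_segment_duration must be greater than or equal to 0.")
    return minimum_segment_duration
```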
@charmasaur Thanks for the feedback and for the fast approval! I guess you were convinced that this was correct faster than I was haha
Final part of the fix for the bug found by Luigi. First part was PR #68
I split the checks of the timing of the pulses into two parts: one that verifies the offsets are ordered in time, and one that verifies there is no overlap between adjacent pulses. These two checks should now be tolerant of floating-point imprecision.
I added a unit test that replicates exactly the situation that Luigi reported. In the master branch, this test fails with the exception: