ENH/API: Logical operations on StatusObject #241
Conversation
Cool and useful idea. Any thoughts about exception handling? Every time we return to StatusObjects, it sure makes me wish we were using coroutines/Futures/asyncio. This would be so much nicer:

```python
def series():
    yield from motor1.move(1, wait=False)
    yield from motor2.move(2, wait=False)
    yield from motor3.move(3, wait=False)
```

or

```python
def parallel():
    tasks = [motor1.move(1), motor2.move(2), motor3.move(3)]
    loop.run_until_complete(asyncio.wait(tasks, timeout=2.0))
```
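A self-contained toy version of the serial/parallel contrast above, with motors replaced by plain coroutines (all names here are illustrative, not ophyd API), shows the timing benefit of running moves concurrently:

```python
# Hypothetical sketch: "motors" simulated as coroutines so the serial vs.
# concurrent distinction can be demonstrated without hardware.
import asyncio
import time

async def fake_move(motor_name, position, duration=0.05):
    # Stand-in for a motor move that completes after `duration` seconds.
    await asyncio.sleep(duration)
    return motor_name, position

async def series():
    # Moves run one after another: total time ~ 3 * duration.
    results = []
    for name, pos in [("motor1", 1), ("motor2", 2), ("motor3", 3)]:
        results.append(await fake_move(name, pos))
    return results

async def parallel():
    # Moves run concurrently: total time ~ 1 * duration.
    coros = [fake_move(n, p) for n, p in
             [("motor1", 1), ("motor2", 2), ("motor3", 3)]]
    return await asyncio.gather(*coros)

start = time.monotonic()
serial_results = asyncio.run(series())
serial_elapsed = time.monotonic() - start

start = time.monotonic()
parallel_results = asyncio.run(parallel())
parallel_elapsed = time.monotonic() - start
```

Both orderings return the same results; only the elapsed time differs.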
I guess that is available in bluesky (kind of):

```python
def series(motors):
    for motor in motors:
        yield from [Msg('set', motor), Msg('wait', motor)]

def parallel(motors):
    for motor in motors:
        yield Msg('set', motor, block_group='A')
    yield Msg('wait', 'A')
```

But I agree that making this work at a lower level as well would be pretty. At some point, @tacaswell and I concluded that to really make pyepics work on an event loop -- with all the tools that asyncio provides to work with network sockets and take advantage of network latency -- you'd be rewriting not only pyepics but libca itself.

What error handling do you think would be appropriate here? If one of the Status objects raises, the scan is killed. Should that be different when they are composed?
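For anyone skimming, the message streams these two plans emit can be checked standalone; the sketch below stands in for bluesky's `Msg` with a namedtuple (no RunEngine involved), so the field names are an assumption for illustration:

```python
# Simplified stand-in for bluesky's Msg, just to show the emitted streams.
from collections import namedtuple

Msg = namedtuple('Msg', ['command', 'obj', 'block_group'], defaults=(None,))

def series(motors):
    # set/wait pairs: each move completes before the next starts
    for motor in motors:
        yield Msg('set', motor)
        yield Msg('wait', motor)

def parallel(motors):
    # all sets issued up front, then one wait on the whole block group
    for motor in motors:
        yield Msg('set', motor, block_group='A')
    yield Msg('wait', 'A')

serial_msgs = list(series(['m1', 'm2']))
parallel_msgs = list(parallel(['m1', 'm2']))
```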
Adding hooks to let the run engine ask a Device for a plan to run on itself might not be a bad idea.

```python
class DeviceWithAPlan(Device):
    ...
    def move_plan(self, position, parallel=True):
        if parallel:
            yield from parallel_motors(self.motors)
        else:
            yield from serial_motors(self.motors)

ms = Msg('planned_move', dwap, pos, parallel=False)

class RunEngine:
    @asyncio.coroutine
    def _planned_move(self, obj, *args, **kwargs):
        new_plan = iter(obj.move_plan(*args, **kwargs))
        self._gen_stack.appendleft(new_plan)
        return None
```

This also means that if you are part of the way through a move and it gets interrupted, it should be replayed correctly, which is kinda cool.
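The `_gen_stack.appendleft` idea above can be distilled into a toy trampoline (hypothetical and simplified: messages are plain tuples, no asyncio) showing how a pushed sub-plan's messages are consumed before the parent plan resumes:

```python
# Toy illustration of a generator stack, not the real RunEngine.
from collections import deque

def device_plan():
    yield ('set', 'motor1')
    yield ('wait', 'motor1')

def parent_plan():
    yield ('open_run',)
    yield ('sub_plan', device_plan())  # ask the engine to run a device's plan
    yield ('close_run',)

def run(plan):
    processed = []
    stack = deque([iter(plan)])
    while stack:
        try:
            msg = next(stack[0])
        except StopIteration:
            stack.popleft()
            continue
        if msg[0] == 'sub_plan':
            # Push the sub-plan so its messages interleave ahead of the parent.
            stack.appendleft(iter(msg[1]))
        else:
            processed.append(msg)
    return processed

msgs = run(parent_plan())
```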
I disagree with the conclusion you came to regarding the need to rewrite libca, but this isn't the thread for that sort of discussion. As for writing Devices with the concept of plans, we don't want dependencies from bluesky to ophyd or vice versa. Anyway, back on topic: regarding my exception comment, I guess there's not much you can do to make it easier to figure out what was going on during the callbacks, so never mind.
On further thought, you could add a

```python
def my_plan(obj):
    g = obj.move_plan()
    yield Message('sub_plan', g)
```

but you could also just do

```python
def my_plan(obj):
    yield from obj.move_plan()
```

which is much simpler and does not involve touching the generator stack.
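The reason `yield from` avoids the generator stack entirely is that delegation flattens the sub-plan's messages into the caller's own stream. A minimal standalone check (messages simplified to tuples, names illustrative):

```python
# PEP 380 delegation: sub-plan messages appear inline in the outer stream.
def move_plan():
    yield ('set', 'motor')
    yield ('wait', 'motor')

def my_plan():
    yield ('open_run',)
    yield from move_plan()   # no trampoline needed; Python does the plumbing
    yield ('close_run',)

flattened = list(my_plan())
```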
@danielballan We should start a 'plan_snippet' module for things like the parallel and serial moves you wrote above.

In 3.5 you can yield from inside of a try-except:

```python
plan = list('abced')

def broken_plan():
    yield from plan
    raise RuntimeError
    yield from plan

def run_context(plan):
    yield 'start'
    try:
        yield from plan
    finally:
        yield 'stop'
```

Which makes me think we can provide a

```python
@event_decorator
def collect_single_point(det_list):
    bg_uid = str(uuid4())
    for d in det_list:
        yield Msg('trigger', d, block_group=bg_uid)
    yield Msg('wait_for', bg_uid)
    rets = []
    for d in det_list:
        ret = yield Msg('read', d)
        rets.append(ret)
    return rets

def read_diode(diode):
    yield Msg('trigger', diode)
    ret = yield Msg('read', diode)
    return ret

@run_decorator
def do_scan(some_args, motors, dets):
    for p in f(some_args):
        yield from move_motors(motors, p)
        yield from collect_single_point(motors + dets)

def master_plan(sample_id, diode, some_args, motors, dets):
    for s in sample_id:
        try:
            pre = yield from read_diode(diode)
            yield from load_sample(s)
            post = yield from read_diode(diode)
            if pre['data']['diode'] - post['data']['diode'] > thresh:
                yield from do_scan(some_args, motors, dets)
        finally:
            yield from unload_sample()
```
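The try/finally guarantee that makes this pattern safe can be verified in isolation: even when the wrapped plan raises mid-stream, the context's `'stop'` message is still emitted before the exception propagates. A minimal runnable distillation (messages as plain strings):

```python
# Demonstrates that a finally clause in a generator still yields its cleanup
# message when the delegated-to plan raises partway through.
def broken_plan():
    yield 'a'
    raise RuntimeError("plan failed")

def run_context(plan):
    yield 'start'
    try:
        yield from plan
    finally:
        yield 'stop'   # emitted even on failure; exception resumes afterward

seen = []
try:
    for msg in run_context(broken_plan()):
        seen.append(msg)
except RuntimeError:
    pass
```

After the loop, `seen` contains the start message, the partial plan output, and the stop message, in that order.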
Well, I led the comments in this PR on a wild goose chase. This needs a companion PR in bluesky?
I think this also needs a test of at least 2 generations of the merged status objects. As we learned with slicerator, sometimes issues do not show up until you go that deep.
Noted -- will add tests later today. In the meantime, this is the companion PR in bluesky: https://github.com/NSLS-II/bluesky/pull/291/files
This is ready for a last round of review.
@danielballan, can you point out an example of using this sort of thing in the wild?
```python
self._cb = cb
self._callbacks.add(cb)

def __and__(self, other):
```
Docstrings sure would be nice for the logical operations.
Can you also nuke
@klauer helpfully provided an example here. This is ready for another round of review. All changes since last review are in the latest commit. See message for details.
Rebased (basically rewritten) in the wake of the positioner refactor. I had to make some off-the-cuff decisions that might not be the right ones, and I'm not attached. Let's have a healthy discussion here. Major points:
```python
else:
    return self.finish_ts - self.start_ts

def __repr__(self):
```
👍 to repr on base class
Travis is locking up, reproducibly, on the first move-related test. I seem to have broken this in my refactor. Can someone with a working ophyd docker take a quick look and give me a hint?
@danielballan I'm not sure where this might be broken, so I went back to basics:

```python
In [3]: from ophyd import EpicsMotor

In [4]: mtr = EpicsMotor('XF:31IDA-OP{Tbl-Ax:X1}Mtr', name='mtr')

In [5]: mtr.connected
Out[5]: True

In [6]: mtr.position
Out[6]: 0.0

In [7]: mtr.move(2.0)
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/dchabot/Dropbox/workspace/ophyd/ophyd/status.py", line 56, in _timeout_thread
    poll_rate=max(1.0, self.timeout / 10.0))
  File "/home/dchabot/Dropbox/workspace/ophyd/ophyd/status.py", line 252, in wait
    '(elapsed {} sec)'.format(timeout, elapsed))
TimeoutError: Operation failed to complete within 30.0 seconds (elapsed 30.029077768325806 sec)
```

The motor did end up at the commanded position (2.0). This all works fine on
The issue is that
- Introduce composite status objects, moving the logic of setting them up out of the base class.
- Give products of & and | nice recursive reprs.
- Move some useful general concepts (elapsed time, success) up into StatusBase.
- Give StatusBase a nice repr similar to MoveStatus.
- Remove flat "ancestors" list in favor of more semantic "left" and "right," which along with the object type (AndStatus, OrStatus) fully define the dependency chain.
- A pre-emptive note: there is some code duplication between AndStatus and OrStatus, but on the whole I think code would be made *less* simple if the duplication were factored out.
Sorry again that the internals keep changing, requiring re-re-refactoring of this PR. We had to prioritize getting the things with immediate need in first.
Here's a start on an idea I had this morning.

Considering the `MergedStatusObject` at SRX -- which fires when both of its "children" are finished -- I thought it might be useful to be able to perform logical AND and OR on status objects, returning a `StatusBase`. With these, we can construct arbitrary combinations without creating a new class for every special case.

Currently, you can only register one callback function on a status object. For this to work, status objects must support multiple callbacks. If this isn't obvious, here's why:

Making this change is straightforward, but unfortunately I think it is not possible to do this with a backward-compatible API. Fortunately, the surface area of this API is not large. No non-test ophyd code (outside of `StatusBase` itself) is affected. About six lines of test code have to be changed, plus three lines downstream in bluesky and one place in the SRX beamline config.

This is still WIP -- just posting it for discussion.