
ENH/API: Logical operations on StatusObject #241

Closed
danielballan wants to merge 4 commits

Conversation

danielballan
Member

Here's a start on an idea I had this morning.

Considering the MergedStatusObject at SRX -- which fires when both of its "children" are finished -- I thought it might be useful to be able to perform logical AND and OR on status objects, returning a StatusBase. With these, we can construct arbitrary combinations without creating a new class for every special case.

Currently, you can only register one callback function on a status object. For this to work, status objects must support multiple callbacks. If this isn't obvious, here's why:

s3 = s1 | s2  # registers a callback on s1 that will fire s3
s4 = s1 | s3  # registers another callback on s1 to fire s4
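To make the requirement concrete, here is a minimal toy sketch of a status object with a callback *list* instead of a single `_cb` slot, plus an `__or__` that builds a merged status (hypothetical names and logic -- a simplified illustration, not the actual PR code; the real class would also need thread safety, timeouts, etc.):

```python
class StatusBase:
    """Toy status object supporting multiple callbacks (sketch only)."""

    def __init__(self):
        self.done = False
        self.success = None
        self._callbacks = []  # a list, not a single ``_cb`` slot

    def add_callback(self, cb):
        if self.done:
            cb()  # fire immediately if we already finished
        else:
            self._callbacks.append(cb)

    def _finished(self, success=True):
        self.done, self.success = True, success
        for cb in self._callbacks:
            cb()

    def __or__(self, other):
        # s1 | s2 finishes as soon as either parent succeeds.
        merged = StatusBase()

        def check():
            if not merged.done and (self.success or other.success):
                merged._finished(success=True)

        self.add_callback(check)
        other.add_callback(check)
        return merged
```

With a callback list, `s4 = s1 | s3` can register a second callback on `s1` without clobbering the one registered by `s3 = s1 | s2`.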

Making this change is straightforward, but unfortunately I think it is not possible to do this with a backward-compatible API. Fortunately, the surface area of this API is not large. No non-test ophyd code (outside of StatusBase itself) is affected. About six lines of test code have to be changed, plus three lines downstream in bluesky and one place in the SRX beamline config.

This is still WIP -- just posting it for discussion.

@danielballan danielballan force-pushed the status-logic branch 5 times, most recently from 7223bf7 to f22d8aa Compare January 23, 2016 01:50
@klauer
Member

klauer commented Jan 23, 2016

Cool and useful idea. Any thoughts about exception handling?

Every time we return to StatusObjects, it sure makes me wish we were using coroutines/Futures/asyncio. This would be so much nicer:

def series():
    yield from motor1.move(1, wait=False)
    yield from motor2.move(2, wait=False)
    yield from motor3.move(3, wait=False)

or

def parallel():
    tasks = [motor1.move(1), motor2.move(2), motor3.move(3)]
    loop.run_until_complete(asyncio.wait(tasks, timeout=2.0))

@danielballan
Member Author

I guess that is available in bluesky (kind of):

def series(motors):
    for motor in motors:
        yield from [Msg('set', motor), Msg('wait', motor)]

def parallel(motors):
    for motor in motors:
        yield Msg('set', motor, block_group='A')
    yield Msg('wait', 'A')

But I agree that making this work at a lower level as well would be pretty. At some point, @tacaswell and I concluded that to really make pyepics work on an Event Loop -- with all the tools that asyncio provides to work with network sockets and take advantage of network latency -- you'd be rewriting not only pyepics but libca itself.

What error handling do you think would be appropriate here? If one of the status objects raises, the scan is killed. Should that be different when they are composed?

@tacaswell
Contributor

Adding hooks to let the run engine ask a Device for a plan to run on itself might not be a bad idea.

class DeviceWithAPlan(Device):
    ...

    def move_plan(self, position, parallel=True):
        if parallel:
            yield from parallel_motors(self.motors)
        else:
            yield from serial_motors(self.motors)


ms = Msg('planned_move', dwap, pos, parallel=False)

class RunEngine:
    @asyncio.coroutine
    def _planned_move(self, obj, *args, **kwargs):
        new_plan = iter(obj.move_plan(*args, **kwargs))
        self._gen_stack.appendleft(new_plan)
        return None

This also means that if you are part of the way through a move and it gets interrupted it should be replayed correctly which is kinda cool.

@klauer
Member

klauer commented Jan 23, 2016

I disagree with the conclusion you came to regarding the need to rewrite libca, but this isn't the thread for that sort of discussion. As to writing Devices with the concept of plans, we don't want dependencies from bluesky to ophyd or vice versa.

Anyway, back on topic: regarding my exception comment, I guess there's not much you can do to make it easier to figure out what was going on during the callbacks, so never mind.

@tacaswell
Contributor

On further thought adding a run_sub_plan message to the default set of things the RunEngine knows how to do might be cute. On the other hand, if it does not know to look for said plans, then your user plans would be

def my_plan(obj):
    g = obj.move_plan()
    yield Msg('sub_plan', g)

where you could also just do

def my_plan(obj):
    yield from obj.move_plan()

which is much simpler and does not involve touching the generator stack.

@tacaswell
Contributor

@danielballan We should start a 'plan_snippet' module for things like the parallel and serial moves you wrote above.

In 3.5 you can yield from inside of a try-except

plan = list('abcde')


def broken_plan():
    yield from plan
    raise RuntimeError
    yield from plan

def run_context(plan):
    yield 'start'
    try:
        yield from plan
    finally:
        yield 'stop'

Which makes me think we can provide a run and event decorators so one could write

@event_decorator
def collect_single_point(det_list):
    bg_uid = str(uuid4())
    for d in det_list:
        yield Msg('trigger', d, block_group=bg_uid)
    yield Msg('wait_for', bg_uid)
    rets = []
    for d in det_list:
        ret = yield Msg('read', d)
        rets.append(ret)
    return rets

def read_diode(diode):
    yield Msg('trigger', diode)
    ret = yield Msg('read', diode)
    return ret

@run_decorator
def do_scan(some_args, motors, dets):
    for p in f(some_args):
        yield from move_motors(motors, p)
        yield from collect_single_point(motors + dets)

def master_plan(sample_id, diode, some_args, motors, dets):
    for s in sample_id:
        try:
            pre = yield from read_diode(diode)
            yield from load_sample(s)
            post = yield from read_diode(diode)
            if pre['data']['diode'] - post['data']['diode'] > thresh:
                yield from do_scan(some_args, motors, dets)
        finally:
            yield from unload_sample()

@tacaswell
Contributor

Well, I led the comments in this PR on a wild goose chase.

This needs a companion PR in bluesky?

@tacaswell
Contributor

I think this also needs a test of at least 2 generations of the merged status objects. As we learned with slicerator, sometimes issues do not show up until you go that deep.
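A sketch of what such a two-generation test could look like, using a toy stand-in for the status class (hypothetical names; the real test would exercise ophyd's own `StatusBase` and `&` implementation):

```python
class FakeStatus:
    """Minimal stand-in for a status object (sketch only)."""

    def __init__(self):
        self.done = False
        self.success = None
        self._cbs = []

    def add_callback(self, cb):
        if self.done:
            cb()
        else:
            self._cbs.append(cb)

    def _finished(self, success=True):
        self.done, self.success = True, success
        for cb in self._cbs:
            cb()

    def __and__(self, other):
        merged = FakeStatus()

        def check():
            if self.done and other.done and not merged.done:
                merged._finished(all([self.success, other.success]))

        self.add_callback(check)
        other.add_callback(check)
        return merged


def test_two_generations():
    s1, s2, s3 = FakeStatus(), FakeStatus(), FakeStatus()
    first = s1 & s2       # first generation
    second = first & s3   # second generation, built on the first
    s1._finished()
    s2._finished()
    assert first.done and not second.done
    s3._finished()
    assert second.done and second.success
```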

@danielballan
Member Author

Noted -- will add tests later today. In the meantime, this is the companion PR in bluesky: https://github.com/NSLS-II/bluesky/pull/291/files

@danielballan
Member Author

This is ready for a last round of review.

@dchabot
Contributor

dchabot commented Jan 28, 2016

@danielballan, can you point out an example of using this sort of thing in the wild?

self._cb = cb
self._callbacks.add(cb)

def __and__(self, other):
Member

Docstrings sure would be nice for the logical operations.

@klauer
Member

klauer commented Jan 28, 2016

Can you also nuke DetectorStatus while you're at it?

@danielballan
Member Author

@danielballan, can you point out an example of using this sort of thing in the wild?

@klauer helpfully provided an example here

This is ready for another round of review. All changes since last review are in the latest commit. See message for details.

@danielballan
Member Author

Rebased (basically rewritten) in the wake of the positioner refactor. I had to make some off-the-cuff decisions that might not be the right ones, and I'm not attached. Let's have a healthy discussion here. Major points:

  • AndStatus succeeds if both its parents succeed.
  • If the first parent to finish fails, the AndStatus finishes early with failure. (Is this desired behavior?)
  • OrStatus is gone. It's unclear how to handle merging success in that case, and it's hard to think of a reasonable use case. If one comes up, we can revisit this.
  • success is initialized as None (not False) until the status object finishes. This is not a philosophical change but a practical one -- I use it to make AndStatus work.
  • As before, the timestamp and elapsed logic move into the base class.
  • Minor point: _finished takes a kwarg called 'timestamp' to set the finish_ts property. Now __init__ works the same way to set the start_ts property.
  • More arguments are made keyword-only arguments.

else:
return self.finish_ts - self.start_ts

def __repr__(self):
Contributor

👍 to repr on base class

@danielballan
Member Author

Travis is locking up, reproducibly, on the first move-related test. I seem to have broken this in my refactor. Can someone with a working ophyd docker take a quick look and give me a hint?

@dchabot
Contributor

dchabot commented Feb 27, 2016

@danielballan I'm not sure where this might be broken, so I went back to basics:

In [3]: from ophyd import EpicsMotor

In [4]: mtr = EpicsMotor('XF:31IDA-OP{Tbl-Ax:X1}Mtr', name='mtr')

In [5]: mtr.connected
Out[5]: True

In [6]: mtr.position
Out[6]: 0.0

In [7]: mtr.move(2.0)
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/dchabot/Dropbox/workspace/ophyd/ophyd/status.py", line 56, in _timeout_thread
    poll_rate=max(1.0, self.timeout / 10.0))
  File "/home/dchabot/Dropbox/workspace/ophyd/ophyd/status.py", line 252, in wait
    '(elapsed {} sec)'.format(timeout, elapsed))
TimeoutError: Operation failed to complete within 30.0 seconds(elapsed 30.029077768325806 sec)

The motor did end up at the commanded position (2.0).
The MotorStatus.done fails to evaluate to True (checked via setting wait=False), and eventually trips on the default timeout of 30 seconds...

This all works fine on master, of course.

@klauer
Member

klauer commented Mar 1, 2016

The issue is that finished_cb was nuked and thus the API broke -- all calls to that setter now just set an unused attribute on the status object instead of adding the callable argument to the callback list.
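One way to avoid this kind of silent breakage is a backward-compatible property shim that routes the old single-callback setter through the new multi-callback path. A minimal sketch (hypothetical; not the fix actually applied in this PR):

```python
class StatusBase:
    """Sketch of a back-compat shim for the old ``finished_cb`` API."""

    def __init__(self):
        self.done = False
        self._callbacks = []

    def add_callback(self, cb):
        if self.done:
            cb()
        else:
            self._callbacks.append(cb)

    def _finished(self):
        self.done = True
        for cb in self._callbacks:
            cb()

    @property
    def finished_cb(self):
        # The old API exposed a single callback; report the first, if any.
        return self._callbacks[0] if self._callbacks else None

    @finished_cb.setter
    def finished_cb(self, cb):
        # Route old-style assignment through the new multi-callback path,
        # so existing callers register a real callback instead of silently
        # setting an unused attribute.
        self.add_callback(cb)
```

Old callers doing `status.finished_cb = func` keep working, and new code can call `add_callback` any number of times.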

- Introduce composite status objects, moving the logic of
  setting them up out of the base class.
- Give products of & and | nice recursive reprs.
- Move some useful general concepts (elapsed time, success) up
  into StatusBase.
- Give StatusBase a nice repr similar to MoveStatus.
- Remove flat "ancestors" list in favor of more semantic "left"
  and "right," which along with the object type (AndStatus,
  OrStatus) fully define the dependency chain.
- A pre-emptive note: there is some code duplication between
  AndStatus and OrStatus, but on the whole I think code would be
  made *less* simple if the duplication were factored out.
@klauer
Member

klauer commented Mar 4, 2016

Sorry again that the internals keep changing, requiring re-re-refactoring of this PR. We had to prioritize getting the things with immediate need in first.
