feat(api,app): finish up new stacker API #17620
base: edge
Conversation
Codecov Report

Attention: Patch coverage is …
Additional details and impacted files:

@@            Coverage Diff             @@
##             edge   #17620      +/-   ##
==========================================
- Coverage   25.69%   25.68%   -0.01%
==========================================
  Files        2850     2851       +1
  Lines      219383   219457      +74
  Branches    17970    17971       +1
==========================================
  Hits        56363    56363
- Misses     163005   163079      +74
  Partials       15       15
Flags with carried forward coverage won't be shown.
The Python side of things (labware core) makes sense to me. Just got a nit.
@@ -701,20 +701,6 @@ class FlexStackerCore(ModuleCore, AbstractFlexStackerCore):

    _sync_module_hardware: SynchronousAdapter[hw_modules.FlexStacker]

    def set_static_mode(self, static: bool) -> None:
yay
            location=adapter_location,
            loadName=adapter_lw.definition.parameters.loadName,
            definitionUri=adapter_uri,
            offsetId=None,
        )
        # Always load the primary labware
        if stacker_state.pool_primary_definition is None:
Can we make sure the pool primary definition exists before constructing the adapter definition?
We could, but keep in mind that there will be no state changes until this command completes and sends its state updates. The only thing we're doing here is gathering information and building objects; we haven't submitted them to state yet.
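(For context, a self-contained sketch of that execution model, with made-up stand-ins for the engine's types; building objects has no effect until the resulting state update is applied:)

from dataclasses import dataclass, field
from typing import List

# Illustrative stand-ins for engine types; not the real API.
@dataclass(frozen=True)
class LoadedLabware:
    id: str
    location: str
    load_name: str

@dataclass
class StateUpdate:
    labware_added: List[LoadedLabware] = field(default_factory=list)

@dataclass
class EngineState:
    labware: List[LoadedLabware] = field(default_factory=list)

    def apply(self, update: StateUpdate) -> None:
        self.labware.extend(update.labware_added)

def execute_retrieve() -> StateUpdate:
    # Building the adapter object does not touch state...
    adapter = LoadedLabware("adapter-1", "B4", "opentrons_flex_96_tiprack_adapter")
    return StateUpdate(labware_added=[adapter])

state = EngineState()
update = execute_retrieve()  # nothing in state yet
state.apply(update)          # ...state changes only here, on success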
Looking good so far; some notes below on an issue.
-        if stacker_state.pool_count == 5:
+        if stacker_state.pool_count == 6:
This and other hardcoded references will need to be replaced with Alise's solution from the internal height calculation PR; whichever merges first, we'll need to coordinate as discussed in person.
        core = LabwareCore(labware_id, self._engine_client)
        self._labware_cores_by_id[labware_id] = core
        return core
This is good for the API core, but we aren't actually updating the LoadedCoreMap at the protocol_api end: the _contexts_by_core dict never gets updated. We'd need to do something like self._core_map.add(...) a layer above or parallel to this somewhere. What will happen right now is, if we do the following:
from opentrons.protocol_api import OFF_DECK, ProtocolContext

def run(protocol: ProtocolContext):
    stacker_b = protocol.load_module("flexStackerModuleV1", "B4")
    stacker_b.set_stored_labware(
        load_name="opentrons_flex_96_tiprack_1000ul",
        adapter="opentrons_flex_96_tiprack_adapter",
        count=6,
    )
    labware = stacker_b.retrieve()
    protocol.move_labware(labware, "D1", use_gripper=True)
    adapter = stacker_b.labware
    protocol.move_labware(adapter, OFF_DECK, use_gripper=False)
We'd get an error like this when trying to do module_context.labware on the adapter: KeyError [line 10]: <opentrons.protocol_api.core.engine.labware.LabwareCore object at 0x1517c2b30>
We could add all three to the core map in retrieve and that would do it, right?
Uhhh, I think that should cover us, yeah! However, it's probably important to note that any future batch-loading commands we make would need to follow that same pattern.
Actually, I think only the primary labware and the adapter need to be added to the core map; lids don't go in the core map, as per load_labware's implementation.
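A minimal sketch of that suggestion (the Labware constructor arguments follow protocol_api conventions, but this is illustrative, not the PR's actual diff):

def retrieve(self) -> Labware:
    # The module core performs the retrieve and hands back the new core.
    labware_core = self._core.retrieve()
    labware = Labware(
        core=labware_core,
        api_version=self._api_version,
        protocol_core=self._protocol_core,
        core_map=self._core_map,
    )
    # Register the new context (and, likewise, the adapter's) so that
    # later lookups like ModuleContext.labware can resolve it; without
    # this, _contexts_by_core raises the KeyError seen above.
    self._core_map.add(labware_core, labware)
    return labware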
Force-pushed 3cffa38 to 2e5001c
…17638) Add the app-side changes from #17620 that handle:
- Adding the retrieve command to some of the command parse utilities (but almost certainly not all of them)
- Fixing an LPC whitescreen when there are any unknown definitions, which really makes the problems with forgetting the locations of some of the command parsing stuff much worse
- Adding retrieve types

## Review requests
- Cohesive enough when separated from the other PR?

## Testing
- [x] We should no longer crash when looking at a protocol that loads labware in a way we don't know about; instead, that labware just isn't shown.
Moves the new stacker API to the Python Protocol API. This doesn't require that many changes; in fact, we're mostly removing things. We no longer have set_static_mode (the concept no longer exists), and we no longer have load_stacker_hopper() (replaced with fill/empty). Closes EXEC-1215
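For illustration, the usage change looks roughly like this inside a protocol's run() function (method names per this PR; fill/empty signatures not shown, and the labware names are just examples):

# Old API (removed in this PR):
#     stacker.set_static_mode(True)
#     stacker.load_stacker_hopper(...)

# New API:
stacker = protocol.load_module("flexStackerModuleV1", "B4")
stacker.set_stored_labware(
    load_name="opentrons_flex_96_tiprack_1000ul",
    adapter="opentrons_flex_96_tiprack_adapter",
    count=6,
)
tiprack = stacker.retrieve()  # take the next labware off the stack
# stacker.fill(...) / stacker.empty(...) replace load_stacker_hopper()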
…because that's what protocols use. Also, do a bunch of integration.
Two big things:
- The core actually needs to make new labware cores sometimes, which is pretty easy to accomplish. We're getting the labware by asking the engine "hey, what's on this module" after the command completes (rather than by getting the results of the command), because if the command failed and went through error recovery, its results may not be reliable.
- More seriously, retrieve() needs to simultaneously load multiple labware. To do this, it uses internal engine methods that inspect the position of the loaded labware and walk the parent list down to the deck. So if you load labware A on location L and labware B on labware A simultaneously, how can you do this? The answer is probably "have a mechanism for doing state updates ahead of time as a temporary measure and operate on the projected state", but that's a lot, so the answer for now is "the affected methods get a projection of a slice of state".
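A rough, self-contained illustration of that projected-state idea (all names are hypothetical, not the engine's real models):

from dataclasses import dataclass
from typing import Mapping

@dataclass(frozen=True)
class ProjectedLabware:
    labware_id: str
    parent: str  # a deck slot name or another labware's id
    height: float

def stack_height(labware_id: str, projection: Mapping[str, ProjectedLabware]) -> float:
    """Walk the parent chain down to the deck using a projected view that
    includes labware loaded by this same, not-yet-committed command."""
    total = 0.0
    current = projection[labware_id]
    while True:
        total += current.height
        if current.parent not in projection:
            return total  # reached a deck slot
        current = projection[current.parent]

Here the projection would merge committed engine state with the labware being loaded by the in-flight retrieve command, so the parent walk works even though nothing has been written to state yet.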
We do deck configuration conflict checking by accumulating all used addressable areas; figuring out every set of cutout fixtures that could provide those addressable areas; and then figuring out if any currently loaded cutout fixtures are not in any of those sets of cutout fixtures.

We accumulate addressable areas differently in the engine and the app. In the engine, as a command decides what to do and which addressable areas it will touch, it marks those addressable areas as used. It can do that because, well, it's the thing touching them. And it can and will mark an addressable area as used if it's used, _even if it's not apparent to anything outside the engine that this addressable area is in fact used_. But in the app, we can't see inside the engine. We can only see the command parameters and the command results. So we emulate this, but we can't emulate it fully, because there are some commands that touch addressable areas that don't show up in their parameters or results - for instance, the absorbance reader open and close lid commands.

So the app, specifically for modules (because that's where the problem is), was essentially doing deck configuration twice for modules: when it saw a module loaded, it would figure out the addressable area for that module (which isn't in the command) by loading the cutout fixture for that module into the cutout that the module's location specified, and then marking all the addressable areas that the module fixture provides when loaded into that cutout as used. Meantime, the engine is still just marking the addressable areas that are actually used as used.

This worked until we added these combo fixtures that fuse the stacker with other fixtures, like the magblock or the waste chute. Now, when the app proactively loads the two addressable areas provided by the default stacker fixture, there is no resolvable set of cutout fixtures that can also provide the addressable areas used by the combo fixtures: nothing can provide D3, _and_ flexStackerModuleV1D4, _and_ gripperWasteChute, because D3 can't be there. This was causing the app to break.

One fix is to only load the "primary" (here, first) addressable area provided by a module fixture, and rely on some combination of the projection back through deck config to make the other addressable areas load, and on the use of those addressable areas in subsequent commands if they matter. A better fix is to just have the engine list the possible cutout fixture configurations that will work for the protocol. But this is good for now.
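A minimal, runnable sketch of that conflict check (data shapes are made up for illustration, and cutout locations are ignored; the engine's real representation differs):

from itertools import product
from typing import AbstractSet, FrozenSet, Mapping, Sequence, Set

def conflicting_fixtures(
    used_areas: AbstractSet[str],
    fixtures_providing: Mapping[str, Sequence[str]],  # area -> fixtures that can provide it
    loaded_fixtures: AbstractSet[str],
) -> Set[str]:
    """Return loaded cutout fixtures that belong to no set of fixtures
    capable of providing every used addressable area."""
    # Picking one providing fixture per used area enumerates every
    # candidate set of cutout fixtures for the protocol.
    candidate_sets: Set[FrozenSet[str]] = {
        frozenset(choice)
        for choice in product(*(fixtures_providing[a] for a in used_areas))
    }
    # A loaded fixture conflicts if it appears in none of the candidate sets.
    return {
        fixture
        for fixture in loaded_fixtures
        if not any(fixture in s for s in candidate_sets)
    }

With this shape, marking both D3 and flexStackerModuleV1D4 as used (as the app was doing) leaves no candidate set that can also provide gripperWasteChute, which reproduces the conflict described above.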
Force-pushed d55ad3a to e8ac4e9
This PR removes static mode and updates the Protocol API to use the new way of using the stacker.
The rest of the PR fixes things we missed in previous PRs, such as:
- Adding LabwareCore entries for the labware returned by retrieve in the protocol core
- In retrieve, we need to be able to tell the rest of the engine about the rest of the labware we're loading, since if we're loading labware on an adapter all at once, the internals need to know the identity and details of the adapter to figure out what to do with the labware
- In loadModule, the app was marking every addressable area a module's cutout fixture provides as used, which means there would be a fundamental conflict if you first put a stacker in D3 (providing flexStackerModuleV1D4 and D3 from the flexStackerModuleV1 cutout fixture, even if you weren't using D3) and then used a waste chute, which would conflict with D3.
…to come out of dev

## Review requirements

## Testing
I ran this protocol:
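(A representative sketch of such a protocol, based on the example earlier in the thread; not necessarily the exact protocol that was run.)

from opentrons.protocol_api import OFF_DECK, ProtocolContext

requirements = {"robotType": "Flex", "apiLevel": "2.23"}  # apiLevel is illustrative

def run(protocol: ProtocolContext):
    stacker = protocol.load_module("flexStackerModuleV1", "B4")
    stacker.set_stored_labware(
        load_name="opentrons_flex_96_tiprack_1000ul",
        adapter="opentrons_flex_96_tiprack_adapter",
        count=6,
    )
    tiprack = stacker.retrieve()
    protocol.move_labware(tiprack, "D1", use_gripper=True)
    adapter = stacker.labware
    protocol.move_labware(adapter, OFF_DECK, use_gripper=False)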
and it worked and the app didn't crash!