
test(step-generation): test to compare PD JSON protocols vs Python export #18236


Open · wants to merge 6 commits into edge from dc-pd-compare

Conversation


@ddcc4 (Contributor) commented May 2, 2025

Overview

This test compares the JSON commands that PD generates to the engine commands produced by PD Python export.

When we launch Python export, we want to be able to guarantee that running the Python file instead of the JSON file does exactly the same thing.

This test takes pairs of JSON and Python protocols from the __pd_protocols__/ directory. It executes the Python protocol to see what engine commands it calls. We then compare those engine commands to the commands in the JSON protocol. If PD Python export is working correctly, the commands should match exactly (after some cleanup that this test performs).
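
To make the mechanics concrete, here is a minimal sketch of the comparison as a pytest. The helpers run_python_protocol() and clean_commands() are hypothetical stand-ins (for executing the .py file while recording engine commands, and for the cleanup mentioned above, e.g. stripping ids and timestamps), not the actual test code:

import json
from pathlib import Path

import pytest

PROTOCOLS_DIR = Path(__file__).parent / "__pd_protocols__"

# Each test case is a foo.json / foo.py pair sharing a file stem.
PAIRS = sorted(p.stem for p in PROTOCOLS_DIR.glob("*.json"))

@pytest.mark.parametrize("name", PAIRS)
def test_python_export_matches_json(name: str) -> None:
    json_commands = json.loads(
        (PROTOCOLS_DIR / f"{name}.json").read_text()
    )["commands"]
    # Hypothetical helper: run the Python protocol and record the
    # engine commands it issues.
    python_commands = run_python_protocol(PROTOCOLS_DIR / f"{name}.py")
    # Hypothetical helper: normalize both lists (drop ids, timestamps,
    # etc.) so the comparison is exact on the meaningful fields.
    assert clean_commands(json_commands) == clean_commands(python_commands)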

This PR includes 2 test protocols for now:

  • simple_transfer.json and .py: A simple protocol with a single Transfer step. This confirms that the Python code we generate produces exactly the same engine commands as JSON step generation. (I selected the mix option for this transfer, so this test also exercises the new Python mix() options we just implemented this week; see the sketch after this list.)
  • demo_day.json and .py: This was the protocol we showed at Demo Day. This test exercises the commands for controlling the various modules.
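
For a flavor of what the export produces for that first protocol, a transfer with mixing looks roughly like this in the Opentrons Python API (a hedged sketch: the volume, wells, and mix settings below are invented for illustration, not copied from simple_transfer.py):

# Hypothetical excerpt in the style of a PD Python export.
pipette_left.transfer(
    50,                    # volume in uL (invented for illustration)
    well_plate_1["A1"],    # source well
    well_plate_1["B1"],    # destination well
    mix_before=(3, 25),    # mix 3 times with 25 uL before each aspirate
    mix_after=(3, 25),     # mix 3 times with 25 uL after each dispense
    new_tip="always",
)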

Future test plans

I'd like to build a comprehensive test suite that exercises every option in PD (or at least the ones we care about). The plan would be: make a protocol in PD, export it as JSON, also export it as Python, save both files into __pd_protocols__/, then run this test to confirm that Python and JSON do the same thing. Maybe we could get help from QA for this? (@alexjoel42)

For now we'd have to do this manually, and update the files every time step generation changes. In the future, we could maybe rig up something to regenerate the JSON and Python protocols whenever we change step generation.

We can also use this test to see if the upcoming PD liquid class implementation is doing the right thing. Hypothetically, if you create a liquid class transfer in PD (liquid class ethanol, volume 99 uL, tip pickup per source) and don't change any of the options, PD should generate JSON commands exactly equivalent to the engine commands from:

liquid_class = protocol.define_liquid_class("ethanol_80")
pipette_left.transfer_with_liquid_class(
    liquid_class=liquid_class,
    volume=99,
    source=well_plate_1["A1"],
    dest=well_plate_1["H1"],
    new_tip="per source",
)

which is the Python protocol in hypothetical_lc_transfer.py.

PD's step generation doesn't do that right now, so we have our work cut out for us. But we can use this test to see what we're missing in PD. (@ncdiehl)

Risk assessment

Low, test only.

@ddcc4 requested a review from jerader May 2, 2025 02:47
@ddcc4 requested a review from a team as a code owner May 2, 2025 02:47
@ddcc4 force-pushed the dc-pd-compare branch from 2157b0e to d4d28c5 on May 2, 2025 03:02

@jerader (Collaborator) left a comment


this is really cool! i like that the tests are in python instead of like cypress tests or something. the code is easy to follow with the comments 😄 probably wait for Jeremy &/or Sanniti to review before merging in though


@ddcc4 (Author) commented May 2, 2025

> this is really cool!

Yeah. I used the new mix() generator that you checked in yesterday. It seems to work! The Python call we generate produces the same sequence of engine commands as the JSON protocol.


codecov bot commented May 12, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 24.84%. Comparing base (dbf0a39) to head (134fdf8).
Report is 3 commits behind head on edge.


@@           Coverage Diff           @@
##             edge   #18236   +/-   ##
=======================================
  Coverage   24.84%   24.84%           
=======================================
  Files        3225     3225           
  Lines      273249   273249           
  Branches    25995    25995           
=======================================
  Hits        67887    67887           
  Misses     205342   205342           
  Partials       20       20           
Flag                 Coverage Δ
protocol-designer    18.97% <ø> (ø)

Flags with carried forward coverage won't be shown.

