Conversation
Greptile Summary: This PR consolidates the previously separate CI workflows into a single ci.yml.

Confidence Score: 5/5 — Safe to merge. The consolidation is clean and the new ci.yml correctly handles both PR and push event types. No P0 or P1 issues found. The single remaining observation (no workflow_dispatch trigger) is a minor P2 convenience concern that doesn't affect correctness. No files require special attention.
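The one observation above concerns a missing manual-run trigger. A minimal sketch of what adding it could look like — the branch names and paths-ignore pattern are taken from the flowchart below; everything else is an assumption, not the PR's actual ci.yml:

```yaml
# Hypothetical sketch: workflow_dispatch alongside the existing triggers.
on:
  push:
    branches: [main, dev]
    paths-ignore: ["**.md"]
  pull_request:
    paths-ignore: ["**.md"]
  workflow_dispatch: {}  # would allow manual runs from the Actions tab
```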
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Push to main/dev\nor Pull Request] --> B{paths-ignore\n**.md?}
    B -- Only .md files --> Z[Skip CI]
    B -- Other files --> C{Draft PR?}
    C -- Yes --> Z
    C -- No --> D[checks job]
    D --> E[actions/checkout]
    E --> F[Fix permissions]
    F --> G[uv sync --all-extras --no-extra dds --frozen]
    G --> H[Remove pydrake stubs]
    H --> I{Event type?}
    I -- pull_request --> J[pytest\nnot tool or mujoco]
    I -- push --> K[coverage run\npytest\nnot tool or mujoco]
    K --> L[coverage combine\n& report]
    J --> M[mypy dimos\nif not cancelled]
    L --> M
    M --> N{push event?}
    N -- Yes --> O[Upload coverage\nartifact]
    N -- No --> P[Done]
    O --> P
    M --> Q{Failure?}
    Q -- Yes --> R[df -h disk check]
```
Reviews (1): Last reviewed commit: "paul(ci): single step"
This reverts commit f9227c0.
Honestly, I have no idea how this works, but as is tradition, no one but the latest person who worked on CI knows how CI works.

So I'll just ask user-level questions:
- Where is the local cache now?
- Can I add a Docker image that depends on some other Docker image we build (say, g1 built from ros-python)? Will it intelligently build it only when it has to, rebuild it when its parent is rebuilt, and can I run tests on it? This is where most of the CI's complexity came from.

If you say no to the second question, I can still approve; we will need this eventually, but not yet, and we can handle it on a case-by-case basis until someone writes a custom dimos CI action.
It's stored locally. This checks out new branches without cleaning up previous ones. I know what you're thinking: "but that messes up different runs". True, but I'm also running a cleanup command (sketched below) that removes every untracked file that is not in .venv. So the end effect is that only the .venv persists between runs.
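The cleanup command itself isn't shown in the comment above; here is a minimal sketch of the described pattern, assuming actions/checkout's `clean: false` option and a `git clean` exclude for `.venv` — both are assumptions, not the PR's actual configuration:

```yaml
# Hypothetical sketch: keep .venv between runs, remove all other untracked files.
- uses: actions/checkout@v4
  with:
    clean: false  # don't wipe the previous checkout before fetching
- name: Remove untracked files, keeping the uv virtualenv
  run: git clean -ffdx -e .venv  # -x also removes ignored files; -e excludes .venv
```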
Is there an issue with parallelization then? I assumed we have parallel checkouts when running tests on multiple branches.
I don't know if there are multiple checkouts per worker, but that shouldn't be an issue either way; there would just be as many "caches" as there are checkouts. I think the data needs to be cleared because the pre-commit script doesn't allow files that are not committed, and other branches could pollute that. Essentially, what is preserved is only what is in .venv and the git files, because git and uv are capable of bringing the checkout to a clean state. If I tried to preserve more data, I would have to add logic to handle the different cases.
Sorry, forgot to answer this. I've kept docker-build.yml the same. The only difference is that the images get built on merges to dev or main, or every Monday morning. If you add a fourth Docker image just for g1, it will be built under those same conditions.
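For illustration, the trigger conditions described above might look like this in docker-build.yml — a sketch only; the cron time is an assumption, since the comment states only "Monday morning":

```yaml
# Hypothetical sketch of docker-build.yml's triggers.
on:
  push:
    branches: [main, dev]
  schedule:
    - cron: "0 6 * * 1"  # Mondays, 06:00 UTC (assumed time)
```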
But Docker images introduce system changes that might be required for running actual tests on that build (imagine DDS, for example), so won't this cause those tests to fail? We can shelve this; it's solving an issue we don't have yet by sacrificing actual real-world performance. In a month, someone might want us to CI-build a custom Docker image and run pytest on it with tests that pass only within that system, so we should keep that in mind.
The solution is to have multiple smaller commits. If you need to add a new system dependency, add it as a separate PR. That will build a new image.
Yeah, that's perfect! We simplify the 99.9% of use cases and make the 0.1% slightly slower.
Problem
We have a docker image build step as part of the build (even if it is cached). It would be better to just run uv sync, pytest, and mypy.

Closes DIM-783
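A minimal sketch of what such a job could look like, reconstructed from the flowchart above — the step order comes from the flowchart, while the action version, runner label, and exact flags are assumptions, not the PR's actual ci.yml:

```yaml
jobs:
  checks:
    runs-on: ubuntu-latest  # assumed; the real runner may be self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: uv sync --all-extras --no-extra dds --frozen
      - run: pytest -m "not tool or mujoco"  # marker expression as shown in the flowchart
      - run: mypy dimos
```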
Solution
Breaking Changes
None.
How to Test
None.
Contributor License Agreement