Build match worker image #2313
Conversation
After testing, the p0.9.17-9170-w0.0.1-100 release was published and the worker v0.0.1-100 tag was also pushed (see gha). However, it appears that this worker is still compatible with the parachain `latest` tag. Testing has shown:
Did I miss anything?

> Did our session together solve your problem?

Definitely helpful; it's just that there were some deviations during testing.
Force-pushed from a627359 to 9080e64
Is this PR ready to be reviewed? I see there are unfinished TODOs, but at the same time
Force-pushed from 2d26ebb to 8924412
Overall, it looks good. Thank you for resolving the connection issue.
```yaml
    default: 'latest'
inputs:
  release-tag:
    description: "Client-api release tag (e.g. p1.2.0-9701-w0.0.1-101)"
```
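As a side note, the combined release tag format shown in that description can be split into its parachain and worker halves with plain shell parameter expansion. This is a hypothetical sketch for illustration only; the variable names and the assumption that `-w` always separates the two parts are mine, not taken from the workflow:

```shell
# Hypothetical helper: split a combined release tag such as
# p1.2.0-9701-w0.0.1-101 into its parachain and worker halves.
release_tag="p1.2.0-9701-w0.0.1-101"

parachain_part="${release_tag%%-w*}"   # strip everything from "-w" onward
worker_part="w${release_tag#*-w}"      # keep everything after "-w"

echo "$parachain_part"   # p1.2.0-9701
echo "$worker_part"      # w0.0.1-101
```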
We can't use the latest version if we change it here, right?
It won't affect the use of the latest version; we are still able to set the release tag to `latest` in order to generate types from the latest docker images.
```diff
@@ -13,11 +13,13 @@ services:
       litentry-node:
         condition: service_healthy
       litentry-worker-1:
-        condition: service_healthy
+        # using +service_started+ over +service_healthy+ since worker runs successfully but can not connect to parachain
```
3
What does "3" mean?
Sorry, I'm a bit late to this - I just found it was merged already.
No worries, we can always make it up with subsequent changes.
```diff
@@ -114,8 +115,8 @@ jobs:
           ${{ matrix.chain }}-parachain-srtool-digest.json
           ${{ matrix.chain }}-parachain-runtime.compact.compressed.wasm

-  ## build docker image of parachain binary ##
-  build-docker:
+  # build docker image of parachain binary ##
```
Any reason to change it to a single `#`? It used the `## ... ##` style.
```diff
@@ -127,7 +128,7 @@ jobs:

       - name: Set env
         run: |
-          DOCKER_TAG=$(echo ${{ env.RELEASE_TAG }} | cut -d'-' -f1 | sed 's/p/v/')
+          DOCKER_TAG=$(echo ${{ env.RELEASE_TAG }} | sed 's/p/v/;s/\(.*\)-w.*/\1/')
```
Any specific reason for this, or just a casual change?
If we change the job name from `build-docker` to `build-parachain-docker`, we should change the var to `PARACHAIN_DOCKER_TAG` too; for workers it should be `WORKER_DOCKER_TAG`. Please let's keep the naming consistent.
> Any specific reason for this, or just a casual change?

Yes, this change allows us to handle a variety of version formats:
- `latest`
- `p0.9.18-w0.0.2`
- `p0.9.18-9181-w0.0.2-101`
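To see the difference on those formats, here is a small sketch comparing the old `cut`-based derivation with the new `sed` expression from the diff. The helper function names are hypothetical; the two pipelines themselves are taken from the workflow:

```shell
# Compare the old and new DOCKER_TAG derivations on sample tags.
# old_tag/new_tag are made-up names for this demo.
old_tag() { echo "$1" | cut -d'-' -f1 | sed 's/p/v/'; }
new_tag() { echo "$1" | sed 's/p/v/;s/\(.*\)-w.*/\1/'; }

old_tag "p0.9.18-9181-w0.0.2-101"   # v0.9.18 (loses the 9181 spec version)
new_tag "p0.9.18-9181-w0.0.2-101"   # v0.9.18-9181
new_tag "p0.9.18-w0.0.2"            # v0.9.18
new_tag "latest"                    # latest (passes through unchanged)
```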
```yaml
- name: Free up disk space
  if: startsWith(runner.name, 'GitHub Actions')
  uses: jlumbroso/free-disk-space@main
  with:
    tool-cache: true
    swap-storage: false
    large-packages: false

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
  with:
    # use the docker driver to access the local image
    # we don't need external caches or multi platforms here
    # see https://docs.docker.com/build/drivers/
    driver: docker

- name: Cache worker-cache
  uses: actions/cache@v3
```
For releases, I suggest not using any cache at all.
```diff
     networks:
       - litentry-test-network
-    entrypoint:
-      "/usr/local/worker-cli/lit_ts_api_package_build.sh -p 9912 -u ws://litentry-node
+    entrypoint: "/usr/local/worker-cli/lit_ts_api_package_build.sh -p 9912 -u ws://litentry-node
```
Shall we keep the old format?
```yaml
        # using +service_started+ over +service_healthy+ since worker runs successfully but can not connect to parachain
        # as requires additional pre-setup for parachain image which built in production mode
        # for generating types there is no need for fully workable interaction between worker and parachain
        condition: service_started
```
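For readers unfamiliar with the two conditions, here is a minimal, hypothetical compose sketch (image names and healthcheck are assumptions, not from this repo) showing where each fits: `service_healthy` blocks until the healthcheck passes, while `service_started` only waits for the container to be running.

```yaml
# Hypothetical docker-compose sketch contrasting the two depends_on conditions.
services:
  litentry-node:
    image: example/parachain:latest        # assumed image name
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9944/health"]  # assumed check
      interval: 10s
  litentry-worker-1:
    image: example/worker:latest           # assumed image name
    depends_on:
      litentry-node:
        condition: service_healthy         # wait for the healthcheck to pass
  type-generator:
    image: node:18
    depends_on:
      litentry-worker-1:
        condition: service_started         # container started is enough here
```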
Does it mean we can get the data even though the enclave is not registered on the parachain? (I assume so.)
Yes, that's correct. I have one concern though: what if an unregistered enclave generates types that differ from the ones it would generate if it were registered? 🤔
Opening a follow-up PR for that.
Context
resolves p-330
Test steps: