patch addrman bucketing, remove build #261
Conversation
Also worth considering: I cannot claim to fully understand the ramifications of this addrman patch. I've observed different buckets and positions being used rather than a single bucket, so I know that much works. However, it's unclear to me yet whether there are any currently-unforeseen consequences of bucketing by /32. It occurred to me that we may have unwittingly altered the total number of buckets, but there may be other side effects too.

Agree, and I think we should clearly document this (at least for now). I still think this is the best route forward, but we should run a live warnet scenario for a while and compare the raw addrman against some real nodes to get a sense of how closely the change mirrors mainnet. If this ends up introducing side effects, we'll need to go back to the drawing board, but for now this seems to unblock us.
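For intuition, here is a toy sketch of why the grouping prefix matters for bucketing. This is not Bitcoin Core's actual addrman code (the real implementation uses SipHash over the netgroup with a per-node secret key); the function name, key, and addresses are illustrative, though the 1024-entry "new" table size matches Bitcoin Core:

```python
import hashlib

NEW_BUCKET_COUNT = 1024  # size of addrman's "new" table in Bitcoin Core

def bucket_for(ip: str, prefix_len: int, key: bytes = b"warnet") -> int:
    """Toy bucket selector: hash the address's group, i.e. its first
    prefix_len bits (whole octets only, for simplicity)."""
    octets = ip.split(".")
    group = ".".join(octets[: prefix_len // 8])
    digest = hashlib.sha256(key + group.encode()).digest()
    return int.from_bytes(digest[:8], "little") % NEW_BUCKET_COUNT

# Two warnet tanks in the same /16: with /16 grouping they always land in
# the same bucket; with /32 grouping each full address is hashed, so the
# tanks spread (roughly uniformly) across the whole table.
a, b = "100.64.1.10", "100.64.2.10"
assert bucket_for(a, 16) == bucket_for(b, 16)
print(bucket_for(a, 32), bucket_for(b, 32))
```

This also hints at the side-effect question above: changing the grouping prefix changes which inputs are hashed, not the table size itself, but any code that assumes one-group-per-subnet would see different behavior.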
FWIW, some image size comparisons:
- Release image, current, debian-slim: 159MB
- Unified, proposed, alpine: 35.3MB

I do agree this should be the normal mode of operation, but I don't think we should disable the build script altogether, in that I think it's still useful for CI. Needing to push images to a registry before the CI can pass seems error-prone and a PITA. AFAIK, if we are using minikube in the CI, we can build the images and use them directly in minikube, so maybe we keep the building and CI stuff where it doesn't rely on hitting the registry? Or both: one job that builds and tests the images, and another that runs with the prebuilt images, more as a regression test? There may also be users who want to use warnet locally with docker or minikube who would benefit from the build script for local work.
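A hedged sketch of what "build directly into minikube" could look like in CI; the image name and Dockerfile path here are illustrative, not what this PR ships:

```shell
# Option 1: build straight into minikube's image cache (no registry involved).
minikube start
minikube image build -t bitcoindevproject/bitcoin-core:ci \
    -f src/templates/Dockerfile .

# Option 2: point the host docker client at minikube's internal daemon,
# then build as usual; the image is immediately visible to the cluster.
eval "$(minikube docker-env)"
docker build -t bitcoindevproject/bitcoin-core:ci \
    -f src/templates/Dockerfile .
```

Either way, pods can then reference the tag with `imagePullPolicy: Never`/`IfNotPresent` so kubelet never tries to hit a remote registry.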
Force-pushed from 57d04ad to 4fdc8ba
Hmmm, I somewhat agree. Let me see if the script can be modified to use
I think this would be addressed by the above too
Sounds excellent! |
Force-pushed from 4fdc8ba to b7e280f
Force-pushed from ab84b51 to aa72a8f
Force-pushed from 777fcb7 to 3eb92dc
Sorry for the many rebases; I somehow broke the scenarios test during the work...
Force-pushed from 10591c5 to 816baff
Force-pushed from 816baff to 1485f49
It also just occurred to me that with a unified image, we could have a workflow to (re)build the docker images when we tag in this repo (or whatever)
```python
build_args = (
    '"--disable-tests --with-incompatible-bdb --without-gui --disable-bench '
    "--disable-fuzz-binary --enable-suppress-external-warnings "
    '--without-miniupnpc --without-natpmp"'
```
Don't need these, as we don't install the deps for them in the image, so they don't get built automatically.
Force-pushed from 342c3e8 to 0c733bd
This is not working currently, but can be re-enabled in the near future (TM)
But disable it for now in CI
josibake
left a comment
looks good, left a few comments
```diff
 chown -R debian-tor:debian-tor /home/debian-tor
 # Start tor in the background
-gosu debian-tor tor &
+su-exec debian-tor:debian-tor tor &
```
do we still need su-exec, now that we are always running as the root user?
leaving for Tor (for now)
```
    libzmq \
    shadow \
    sqlite-dev \
    su-exec
```
can remove, yeah?
leaving for Tor (for now)
src/utils/image_build.py
| f" --tag {image_full_name}" | ||
| f" --file Dockerfile_k8 ." | ||
| f" --file src/templates/Dockerfile ." | ||
| f" --push" |
Would be nicer if you added the {action} arg in this commit and then updated the justfile in the next, but not a big deal, I was just very confused by the next commit haha
yeah. But rebasing is tRiCkY
```diff
-Build bitcoind and bitcoin-cli from <repo>/<branch> and deploy to <registry> as <tag>
-This requires docker and buildkit to be enabled.
+Build bitcoind and bitcoin-cli from <repo>/<branch> as <registry>:<tag>.
+Optionally deploy to remote registry using --action=push, otherwise image is loaded to local registry.
```
Note for a follow-up: it would be nice if we gave examples in our docs of how to use this script, just to make it clearer.
```python
from warnet.status import RunningStatus
from warnet.tank import Tank
from warnet.utils import default_bitcoin_conf_args, parse_raw_messages
```
tbh I'm not sure this will work on k8s. Generally in k8s, the pattern for building images is that you have a custom image with the correct permissions (Kaniko is the most popular one), so you pass off a build context to a container running in the cluster, and that container returns the image. This has some additional nice properties, like allowing you to build multiple images in parallel (I think?). Either way, I think this will take some more finessing to get working.
I think this will just have the RPC container build the image no?
But I can't see what exactly you're commenting on here (thank GH)
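For reference, one possible shape for the in-cluster Kaniko pattern mentioned above; this is a hypothetical sketch, not what this PR implements, and the repo URL, Dockerfile path, and destination tag are illustrative:

```shell
# Build the image inside the cluster with Kaniko, so neither the CI host
# nor the RPC container needs a docker daemon. Kaniko clones the git
# context, builds the Dockerfile, and pushes the result itself.
kubectl run kaniko-build --restart=Never \
  --image=gcr.io/kaniko-project/executor:latest -- \
  --context=git://github.com/bitcoin-dev-project/warnet.git \
  --dockerfile=src/templates/Dockerfile \
  --destination=bitcoindevproject/bitcoin-core:dev

kubectl logs -f kaniko-build   # follow the build output
```

Pushing to a registry would normally also require mounting docker credentials into the pod, which is the "correct permissions" part of the comment above.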
Force-pushed from 0c733bd to d3cc379
OK, final force push done. Now if this gets approved, I'll update images before removing the top commit and merging.
ACK d3cc379 looks RFM, will let @m3dwards and @pinheadmz take a look first tho
This image is based on alpine linux, and will work on k8s and compose backends, as well as forming the base of custom builds. One image to rule them all, and in the darkness bind them. Or something.
Force-pushed from d3cc379 to 66444ff
ACK 66444ff I think this is great.
reACK 66444ff diff looks good!
```yaml
        backend: [compose]
    steps:
      - uses: actions/checkout@v3
      - if: matrix.backend == 'compose'
```
Did you have to remove this "if"? If we can just keep everything the same and remove "k8s" from the matrix on tests we wanna skip, that'll be cleaner and easier to undo
When we add K8s back here, we'll likely need docker installed to build images. So we'd need to remove the if then. (Unless we find a weird way of building)
post post post post post post post post merge nit: #376
Closes: #212
Yay, another PR which should be 3!
This turned out longer than I wanted: after patching addrman bucketing, I needed to update the images. These are currently based on a plethora of dockerfiles created using various scripts, with different versions for k8s and compose. By using the same image for both backends, we can in the future drop our ipv4 generator code from compose and have everything routable and bucketable on both. We also only need to rebuild half the number of images...
In undertaking this, I thought we could take the opportunity to switch to alpine-based images, as they're (much) smaller. This necessitated some minor changes to `exec` methods in the backends, as there is no `gosu` in alpine (but there is in the LND pods, which are debian-based?).

Finally, the knock-on effect of this was breakage of the `build` test. In my opinion this should be removed altogether and we should require users to simply run:

...and then supply image info to the graph. This is a bit more manual than auto-building, but it's not clear currently how that would work with k8s in any case. This therefore ends up removing only compose-specific code, and makes a few code sections much simpler. We could simplify even further by removing `version` and only permitting `image`, using our registry as the default. But I've left that for now.

In order to proceed with this change, if folks want things past the patch, we should first update https://hub.docker.com/r/bitcoindevproject/bitcoin-core with new unified (and patched) images, then remove the NO MERGE commit(s): both if we want to keep image building on compose, otherwise only the penultimate.
We can then remove the following registries too, for extra cleanliness:
- https://hub.docker.com/r/bitcoindevproject/k8s-bitcoin-core
- https://hub.docker.com/r/bitcoindevproject/warnet-bitcoin
- https://hub.docker.com/r/bitcoindevproject/warnet-bitcoin-core