Conversation
Signed-off-by: Vincent Touchard <touchardv@gmail.com>
AustinAbro321 left a comment:
Good start. Added some clarifying questions and requests for additional content.
> How will security be reviewed, and by whom?
> How will UX be reviewed, and by whom?
Need info in this section:
Does the new DaemonSet introduce any additional attack vectors? Could someone shell into that container?
Added some info here.
Not exactly sure what "include the minimal amount of binaries to prevent shell access" would mean in practice. Are we planning to remake the socat image with fewer binaries?
It's not a deal breaker if someone with the proper k8s access can shell into the container, but it is worth documenting here as a risk, especially since this pod will need `hostNetwork: true` or some other way to access the host network namespace.
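For documenting the mitigation side of this risk: even when the pod needs `hostNetwork: true`, the container itself can still be locked down with a restrictive security context. A hypothetical sketch, not Zarf's actual manifest (the name, image, and ports here are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-proxy          # illustrative name
spec:
  selector:
    matchLabels:
      app: registry-proxy
  template:
    metadata:
      labels:
        app: registry-proxy
    spec:
      hostNetwork: true         # required to reach the host network namespace
      containers:
        - name: proxy
          image: alpine/socat   # or a smaller image with fewer binaries
          # Illustrative: forward the NodePort-facing port to the registry.
          args: ["TCP-LISTEN:31999,fork", "TCP:127.0.0.1:5000"]
          securityContext:
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
```

Dropping all capabilities and making the root filesystem read-only limits what an attacker can do even if they manage to exec into the pod.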
I think we have many options there. In the initial PR we used a basic socat image to keep things simple; any other more secure, simple container image would do fine. Another track that could be considered: implementing the proxy in Rust/Go (a few lines of code) and using the same mechanism as the Rust injector...
Gotcha, that makes sense. Could you put your intended path under the design details section and add the alternative plan in the alternatives section? I'd lean towards saying socat is adequate for the first release, but I'd like to get thoughts from @brandtkeller, who is more security minded.
Signed-off-by: Vincent Touchard <touchardv@gmail.com>
> In the event of a change of networking configuration in the Kubernetes cluster, the administrator should be able to simply re-run `zarf init` with or without the `--ipv6` command line flag.
We should be more clear here. Documenting these cases helps us test these paths in the implementation:
- If a user with an existing dual-stack cluster wants their cluster to use the IPv6 setup, they can run `zarf init --ipv6`.
- If a user wants to stop using the IPv6 setup, they run `zarf init` without the `--ipv6` flag and their cluster will go back to the IPv4 setup.
Signed-off-by: Vincent Touchard <touchardv@gmail.com>
Hey @touchardv, providing an update. I've been evaluating the security risks of this solution and testing it out with different setups and clusters; there is still more testing to do. The proxy solution would be valuable in a broader context than just IPv6. For example, it will allow nftables to work, and could potentially allow distros and CNIs that block the `route_localnet` NodePort path to work. We have several issues around this problem (including but not limited to zarf-dev/zarf#2383, zarf-dev/zarf#2146, zarf-dev/zarf#3684). One idea I'm working on right now is replacing the flag … Really appreciate the work you've put in so far, and I'm excited to see where this goes. Expect some more updates next week.
Hello @touchardv, I've been using this as a starting point for #37. The idea is to take the solution here and apply it more broadly and more securely. For example, instead of … Really appreciate the work you've done so far. I'm going to merge this, then create another PR that will move the status to replaced, and continue the work in #37; this way we keep the historical context.
AustinAbro321 left a comment:
This is a great start. The registry proxy will likely be a broader and longer-term solution than just IPv6. I'm merging this so we have the historical record here. I'll make some updates to replace this and point it to #37 in a future PR. Work towards bringing IPv6 compatibility to Zarf will continue in that PR / ZEP.