Improve dev tooling #1305
Conversation
I still have to work on Docker support to pass the -u and -g flags, but I think this is ready for review.
fix podman
add dev:logs task
simplify config mount point
@johrstrom This is awesome, I pulled down this PR with GitPod and really had no issues booting the container.
Once support is added for passing -u and -g to Docker, I will push a PR adding GitPod support!
I noticed that when using the Mock Connector it uses the default username kilgore (Dex's default mock user), and I had to rake dev:exec into the container and run:
useradd kilgore
/opt/ood/nginx_stage/sbin/nginx_stage pun --user kilgore
Yea, it's like a much cooler version of our user. PRs welcome on the Docker support.
lib/tasks/packaging.rb
Outdated
extra = ["--build-arg", "USER=#{user.name}"]
extra.concat ["--build-arg", "UID=#{user.uid}"]
extra.concat ["--build-arg", "GID=#{user.gid}"]
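For context, the image side would consume these build args roughly like this (a sketch, not the PR's actual Dockerfile; the ARG names mirror the rake task above, the RUN line is an assumption):

```dockerfile
ARG USER=ood
ARG UID=1000
ARG GID=1000
# Create a user inside the image matching the host user, so files in
# bind-mounted directories keep the same ownership on both sides.
RUN groupadd -g "$GID" "$USER" && \
    useradd -u "$UID" -g "$GID" "$USER"
```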
@msquee do we need something special here for gitpod?
plain_password = $stdin.noecho(&:gets).chomp
bcrypted = BCrypt::Password.create(plain_password)

content = <<~CONTENT
I wonder if we should make it possible to define a Dex user via ood-portal-generator, so you don't have to write out a second file for the Dex local user. I believe right now we hardcode a default local user; I think we just didn't anticipate that people, or even developers, might want a different user than the default we provide.
I think we need a way to give development containers such a feature. Enabling it in ood-portal-generator, I'm not sure about. I've been told someone kept that ood user around from the basic-auth days and it went sideways for them: they accidentally kept it and someone else started using it.
I think there was a Discourse topic where someone wanted to do something similar for a proof-of-concept deployment. So maybe we should provide a way to generate a single user with a new password, for proof-of-concept deployments and this container.
I think the biggest argument for generating the user with ood-portal-generator is that we already manage the Dex YAML config with it. Having ood-portal-generator make the config and then modifying it later outside of ood-portal-generator seems messy, and it forces subsequent runs of ood-portal-generator to require the --force flag. Taking the bcrypt-password work here and supporting something similar in ood-portal-generator shouldn't be too big a change, and it should be pretty easy to keep backwards compatible.
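For reference, the Dex side of a generated local user looks roughly like this (staticPasswords is Dex's own config key; the values are illustrative placeholders, not what ood-portal-generator currently emits):

```yaml
# Fragment of a Dex config defining one static local user.
staticPasswords:
  - username: "kilgore"
    email: "kilgore@localhost"
    # bcrypt hash, e.g. produced by BCrypt::Password.create as in the diff above
    hash: "$2a$12$..."
    userID: "..."
```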
cd $OOD_DEV_DIR
ln -s $APP_DEV_DIR gateway

/opt/ood/ood-portal-generator/sbin/update_ood_portal
/opt/ood/ood-portal-generator/sbin/update_ood_portal --force

if [ -n "$OOD_STATIC_USER" ] && [ -f "$OOD_STATIC_USER" ]; then
If we could define a custom static user with ood-portal-generator, this extra logic could be removed. It's not great, in my opinion, to have this file managed by ood-portal-generator but then, for lack of flexibility, completely overwrite it.
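A minimal sketch of what such an override branch might do (the OOD_STATIC_USER handling and the Dex config path are assumptions, not the PR's actual entrypoint):

```shell
#!/bin/sh
# Sketch: splice a user-provided Dex static-user file over the config
# generated by ood-portal-generator.

apply_static_user() {
  # $1: user-provided static-user file, $2: generated Dex config
  if [ -n "$1" ] && [ -f "$1" ]; then
    # Replace the generated default user with the user-provided one
    cp "$1" "$2"
  fi
}

# In an entrypoint this would be invoked along the lines of:
#   apply_static_user "$OOD_STATIC_USER" /etc/ood/dex/config.yaml
```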
I think I added this --force here because we mount in /opt/etc/ood/config, so the sha file doesn't exist the very first time you run. --force may actually suit this use case though. We persist the ood_portal.yml on the host file system, mounted into the container, and keep the ood-portal.conf ephemeral, regenerated every time the container starts, so we actually don't care what its previous state was.
Maybe --rpm is the right flag here? But I'm happy to force in this edge case and keep update_ood_portal the way it is, if nothing else to keep the attack surface small.
Just added docker runtime flags to boot the container as
…er-cache option for e2e tests because it doesn't need all the app gems.
@johrstrom From a bare install on my Mac I run into this error:
Full trace:
Thanks for the heads up. Yea, there was a bug there in the
Building off master gives me reasonable images:
I can only speculate as to what your issues would be @treydock, especially since you can build the container on master. There was an update to the entrypoint, but I'm at a loss to see how that would spill over into a 99 GB file. I'm going to 🙏 it's just a blip and you can bounce your Docker daemon to get rid of it.
To add some confidence to this, my image builds have been consistently under 2 GB on multiple platforms.
I had to force restart my Docker daemon because I was unable to CTRL+C a build that was just stuck exporting the image layers. So a restart of Docker daemon doesn't solve my issue.
Thinking maybe some stale images were the cause, I did
Updated Docker Desktop on my Mac; the Docker version is still the same, and as expected the behavior didn't change.
The main
I went through the always painful process of rebooting my laptop, and the issue persists. The ondemand directory I'm in is only ~87 MB on disk, so it's not like I have some huge file stuck in the build root.
So this works:
If I remove the
If I change
Something about my really large UID and/or GID is breaking things very badly. So far, removing just the GID arg while keeping the high-numbered UID doesn't work, but this does work:
So it looks like my UID is the problem. These are the UID/GID I get on my OHTECH-managed device. When I build the ondemand RPM build box, which actually runs as the UID of the person running Docker, I don't build the container with my UID/GID inside it. Rather, all commands are run through a wrapper script that changes the UID/GID of the running command to match runtime arguments, not build-time arguments. It's far from ideal, but I've never had problems like this and don't have to build a container with my UID/GID; the container is instead portable, can be pushed to Docker Hub, and the runtime behavior of who executes commands changes based on the person running Docker commands.
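One plausible mechanism for a huge UID blowing up an image (an assumption about the root cause here, not something established in the thread): useradd seeks to an offset proportional to the UID in /var/log/lastlog, creating a sparse file, and the image export then materializes all of those zeros into the layer. A quick demonstration of apparent vs. allocated size for a sparse file:

```shell
# Create a sparse file: 1 GiB apparent size, almost nothing on disk.
f=$(mktemp)
truncate -s 1G "$f"
apparent=$(stat -c %s "$f")                 # apparent size in bytes
allocated=$(( $(stat -c %b "$f") * 512 ))   # actually allocated bytes
echo "apparent=$apparent allocated=$allocated"
rm -f "$f"
```

Any tool that copies this file without preserving sparseness writes out the full apparent size, which matches the "tiny build root, 99 GB image" symptom.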
Forgot to add, noticed
Thanks for the updates. Version is used in
To your actual issue, we'll have to think of something. You may be able to use
@treydock your issue should be fixed. It was as simple as adding
To your point about portability: that's kind of the point here, it's not portable. It's your container. You want the files in your ~/ondemand/dev directory mounted in so you can read and write them with the same IDs.
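For what it's worth, a well-known fix for exactly this symptom is useradd's -l/--no-log-init flag, which skips the lastlog/faillog update; whether that is the elided change in this PR is an assumption:

```dockerfile
ARG USER=ood
ARG UID=1000
ARG GID=1000
# -l / --no-log-init: do not seek into /var/log/lastlog, which would
# otherwise create a file whose apparent size grows with the UID and
# gets materialized when the layer is exported.
RUN groupadd -g "$GID" "$USER" && \
    useradd -l -u "$UID" -g "$GID" "$USER"
```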
Are we ready for this? I think we can defer the removal of the mock container.
@johrstrom I'm happy with everything in this PR! I'm finishing up a few things on PR #1311, but we are good to merge this in.
The recent changes worked for me; I was able to build the container images, launch the dev environment, and log in.
I went to make my development image on my personal machine over the weekend and found that what I ran at work was far too much of a snowflake to replicate. So I put work into the dev namespace to build/start/stop development containers. Here's the progress made. Note that development.md should explain things.
ood-dev:latest already (but can rebuild through a parameter)