GH Action - Run Provisioners #2233
Conversation
Thanks for opening this pull request! Make sure
This tries to simulate running from the host, and mounted folders, but I don't think we need to do that. We should be simulating the guest. So symlink `/srv/www` to the `www` folder, or use `/srv/www` as-is outright, with no mapping of any kind. That's the Vagrantfile/Dockerfile's job when setting up the environment, which GH Actions has already done for us. We're not testing the creation of a VM or container here, we're testing provisioners.

I'm also not bothered about permissions. We can set the initial permissions of the folders, then just make sure things run as the user they need to be.
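As a sketch of the symlink approach (the paths and the `PREFIX` indirection are assumptions for illustration, not the action's actual step), the guest layout could be faked with a single symlink:

```shell
#!/bin/sh
# Hypothetical setup step: make /srv/www point at the checkout's www folder,
# so provisioners see the same path they'd get inside the guest.
# PREFIX stands in for / so this sketch can run unprivileged anywhere.
set -e
PREFIX="${PREFIX:-$(mktemp -d)}"
WORKSPACE="${GITHUB_WORKSPACE:-$PWD}"
mkdir -p "$PREFIX/srv" "$WORKSPACE/www"
ln -sfn "$WORKSPACE/www" "$PREFIX/srv/www"
```

In the real workflow `PREFIX` would be empty and the step would run under `sudo`, but the shape is the same: one link, no bindfs, no permission juggling.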
I also think we can do some prep work for this, removing reliance on the `vagrant` user, which is a bit weird in a Docker context, or at least making it pluggable or unnecessary, e.g. relying on a `vvv` user group.
There may also be an alternative lighter weight image of Ubuntu 18 that doesn't contain PHP out of the box
I'd also like us to deprecate
What I am doing is trying to run on the VM provided by GH (this is not a Docker image), effectively simulating the environment the provisioners get on our VM at the time of the first run. The main drawback is the built-in software on the environment provided by GH; I'm specifically removing PHP versions here to test the install process for those.

I haven't found any official image that has proper support for systemd (which is what services rely on), and I also couldn't find any way to get access to the Docker container's `/etc/hosts` file (it's locked by default by Docker unless it's specifically set as a mount point, as far as I saw).

I've just now found an issue with the mounted folders, which I have almost solved and am testing right now. I'm totally on board with running this on Docker entirely, but we'd need to overcome the issues above (also mentioned in the PR description).

Again, as described above: using bindfs is the best way I found to simulate the current initial guest state and avoid permission issues that come up because of having a different scenario, and that are non-issues in real-life cases.
It's just a container, you can choose a different image for your GH Action to run inside. For systemd: https://stackoverflow.com/questions/53383431/how-to-enable-systemd-on-dockerfile-with-ubuntu18-04
At some point, our provisioners will run in an environment where the ownership and permissions aren't enforced via VirtualBox, exposing any bugs in our provisioners. We should test for that now. Keep in mind, the VirtualBox behaviour you're trying to reproduce doesn't exist in Hyper-V or VMware.

Set the initial ownership and make sure the provisioners run as the correct users. If there's a permission problem as a result, then that's a bug in VVV, not the action; it's the provisioner that should be fixed. For
I agree with this, yet I didn't think that the PR's resulting action failing with the current state of the project was good. That's the reason why I'm using bindfs at this point. I can roll back to my initial symlink implementation if you want, though. We'd need to agree on the proper user to run the provisioners as, though. By default Vagrant runs everything as root, yet that may not be ideal.

I agree, subsequent cleanup could take place in separate PRs.
We can fix those bugs upstream or as part of this PR, either way a bug's a bug. Just because it doesn't happen on VirtualBox doesn't mean it doesn't happen.

I've no issue with provisioners running as
Ok, rolled back to symlinks, without any permission forcing before the provisioners are run. Here are some issues that arise:
I still need to get a clean Docker image with the systemd replacement you recommended. I have seen that particular project during my research, but since it was something "simulated" and not "official" from Ubuntu images, I skipped using it. I'll look into this.

Do we have a VVV Docker Hub account or something? In case we want to push something like "vvv/ubuntu:18.04" to use as a base for these tests, and potentially in the future our Docker version?
Since this is testing from the guest's point of view, we should drop the symlinks for the log folders, and use
Any details on these?
See the Stack Overflow link I mentioned further up; systemd isn't in Docker out of the box intentionally. I don't believe it's a matter of switching images.
No, but I don't think it's necessary here
Check here for details about the error I'm referring to. It's not the provisioner logs; it's the log folder inside each site.

Check here for PHPCS, and here for the site provisioners not failing appropriately (which may be desired, it just needs discussion).
Yes, that's what I'll be looking into. According to that project, we need to do something like this.

This is the reason I asked for a Docker Hub account we can push images to, as it's going to be required, since we cannot run on an image generated from a raw Dockerfile. We need to have it pushed, and I wanted to avoid having things belonging to the project under my name or something.
```yaml
# Make Symlinks
- name: Create Vagrant Like Environment
  run: |
    # uninstall pre installed packages (to test if utilities work)
```
To move on with this PR, do we need to test utilities? Can we do another issue for that task?

Maybe in the utilities repo?
I suspect the folder has incorrect ownership and permissions.

Also, for systemd, I do not want extra images. I would rather we added wrapper functions, e.g. something like this:

```
vvv_reload_nginx() {
  if systemd is available
    service nginx reload
  else
    /usr/sbin/nginx -s reload
}
```
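A runnable version of that sketch might look like the following (the function and helper names are assumed, and the `/run/systemd/system` check is one common way to detect a running systemd, not necessarily what VVV would ship):

```shell
#!/bin/sh
# Hypothetical wrapper functions, not VVV's actual code.
# /run/systemd/system exists only when systemd is running as the
# service manager, so its presence is a cheap availability check.
vvv_has_systemd() {
  [ -d /run/systemd/system ]
}

vvv_reload_nginx() {
  if vvv_has_systemd; then
    # systemd is managing services: go through the service layer
    service nginx reload
  else
    # no init system (e.g. a plain container): signal nginx directly
    /usr/sbin/nginx -s reload
  fi
}
```

Provisioners would then call `vvv_reload_nginx` instead of `service nginx reload`, and the same scripts would work in a VM and in a container.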
Could we try supervisor to manage services? (As stated here in the official Docker docs.) We could use the approach mentioned above by Tom to keep compatibility with VMs that are not reprovisioned on the new version, and therefore keep this backwards compatible.

```
vvv_reload_nginx() {
  if supervisor is available
    supervisorctl restart nginx
  else
    service nginx restart
}
```

According to my research, supervisor is popularly used as a process manager in Docker environments (which, again, is not the intended usage for Docker). We'd need to write small configuration files for each service, but according to what I saw in the repos, there aren't many services to be restarted (at least in core):
On vvv-utilities:

Here's some example configuration for supervisor for MySQL.

I'm thinking we could use the "group" feature of supervisor to restart all registered PHP versions without hardcoding them. Let me know what you think of this?
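A minimal supervisor configuration of the kind described might look like this (program names, paths, and versions are assumptions for illustration, not the configuration from the comment):

```ini
; Hypothetical /etc/supervisor/conf.d/mysql.conf
[program:mysql]
command=/usr/sbin/mysqld
user=mysql
autostart=true
autorestart=true

; Hypothetical group: `supervisorctl restart php:*` would then restart
; every registered PHP-FPM version without hardcoding them elsewhere.
[group:php]
programs=php72-fpm,php74-fpm
```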
@msaggiorato right now the goal here should be an MVP for GH Actions, not so much Docker support. With that in mind, I have no qualms with:

That means no utility testing or installation. In the future we can change things around for actual Docker support, e.g. not running the MariaDB provisioner and instead using a MariaDB container, or the Mailhog container.
I agree, let's start with a minimal version so we can see what other changes we have to make. Just my two cents about supervisor: maybe we can create an alias to restart services that will check availability, like in the snippet above.
We could technically move this to be a function so that it always gets the correct value, but for now this works.
This is a squash of several tests I've run throughout the day (23ad59d to d720ea1).
Ok, here are some issues I see here:

All of these show the error, but don't exit the respective provisioner. This may be due to a lack of

Also, it's true that these errors don't appear under the current stack of VVV, due to the way that permissions work on synced folders (which is similar to what bindfs does). With the decision to disregard the permission-specific tricks that happen in Vagrant, and make things work in a Docker-like environment, the question here is:
Let's decide as a team what's best here.

PS: I've run some tests using `exit 1` in the provisioners and it's not failing as expected, so I'm fixing that, since that should be shipped with this PR for sure.
Well, I think the way forward is to approve this and do a new PR to tackle those two problems.
`set -e` doesn't work, as some commands return a non-zero status, which doesn't necessarily mean an error.

Now the proper error is displayed, and the Action actually fails when the site provisioner cannot run. I'd say we're good to merge, and to create issues for the subsequent changes/fixes that are necessary.
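A generic illustration (not VVV's provisioner code) of why a bare `set -e` is too blunt: a command like `grep` exits non-zero on "no match", which is often an expected outcome, so the status has to be guarded explicitly or the whole script dies:

```shell
#!/bin/sh
# With `set -e`, any unguarded non-zero status aborts the script.
set -e
result="found"
# grep exits 1 when nothing matches; the `|| ...` guards that status so
# the script keeps running and records the expected "not found" outcome.
printf 'alpha\nbeta\n' | grep -q gamma || result="not found"
echo "$result"   # prints "not found" rather than aborting
```

Without the `|| result="not found"` guard, the script would exit at the `grep` line even though nothing actually went wrong, which is exactly the failure mode described above.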
@msaggiorato let's get the action passing first :)
Ok, we're back in business. I also found this issue, Varying-Vagrant-Vagrants/custom-site-template#45, and fixed it.
This is something I've been working on over the weekend.
It attempts to simulate the same environment the provisioners get inside the VM, so that it can run all the provisioners.
So far, this only runs the provisioners (using part of the refactor we made a couple of months ago) and makes the full logs available for download as artifacts.
Ideally, we could add extra steps here with some sort of test suite as a separate PR.
A couple of things I tried that didn't work: