Move away from the auto-update model #713
Comments
Agreed. It looks like the bug I posted (#711) is related to a commit made to the puppet-drush module last night, which has broken my ability to create a new environment. The timing is terrible as well: I'm planning on demoing this to the company tomorrow.
A few more thoughts on this:
Without this kind of release structure, as consumers we're in a situation where things work today but may break tomorrow. A release structure may actually reduce the number of incoming bugs, so the contributors don't need to constantly put out fires. Side note: for the project I mentioned above, I tried to use a version of the puphpet config files I had generated the day I started the project back in March. It turned out they didn't work at all (something was busted, I don't remember what). I had to go back into my downloads folder, pull an old version from January that I knew worked, and use that instead.
Simply adding a consistent tag hierarchy to the repositories involved, and allowing users to select "Bleeding Edge" or "Version xxxx" and use this to populate

Right now, just to get functionality that worked fine on Saturday with the same config, I've had to fork a repository, revert a commit, and use that rather than the core.
"Right now, just to get functionality that worked fine on Saturday with the same config, I've had to fork a repository, revert a commit and use that rather than the core." Right, so we're going from no longer saying "well, it worked on my machine" to "well, it worked on my machine on Saturday, but it doesn't work today" - replacing one problem for another. I will say that even if a version model is adopted, the dependencies need to use specific versions as well, otherwise you risk it breaking. Outside of this project where I'm using Ubuntu, my office is primarily a Windows shop. We have scripts to install all the dependencies, SDKs, and whatever else is needed to build a dev environment, but it's all the same for everyone. If we rev any of the dependencies, everyone updates their environment together. I think the same principle should apply here. |
"Right now, just to get functionality that worked fine on Saturday with the same config, I've had to fork a repository, revert a commit and use that rather than the core." That would be the Drush repository I presume, I had to to do the same. Agree with all the above, I started with a fork of Drupal VM project that was based on a Puphpet setup, hoping to tweak it to my needs but found that every couple of weeks provisioning a new VM fails and requires debugging and adjustment, also fed back a couple of patches to the Drupal project. If the Puppet library configuration pointed to versioned repositories in My quick fix for internal use this afternoon is going to be either point to specific commits for all the repositories in Puppetfile or fork them all on github and point to that instead. |
One (not great) solution to this is to do less at the provisioning level, and more at the box-creation level. Instead of relying on the vanilla VM boxes puphpet provides, you can go one level farther down the rabbit hole and define a Packer template file that can build you a new box with specific software versions from scratch. This box then becomes Vagrant's starting point, with minimal Puppet provisioning happening during your first `vagrant up`.

I admit I've run into these same problems (such as the release of Puppet v3.5 and the associated problems it caused: #621, #622, #623, #625), but I'm not sure it's puphpet's responsibility to solve this. It's always felt like a tool that solved only the "get a VM up and running for yourself quickly, today" problem. If you want something less automatic because you need more control, then almost by definition you need to do more of the work yourself. Isn't that kind of always the trade-off?
Completely revamped VM creation process. For local deployments, I created https://github.com/puphpet/packer-templates. These boxes come with many things pre-installed, speeding up the VM creation process. I am also now including the required Puppet modules in the zip download and have removed the r10k functionality.
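A Vagrantfile can then start from a pre-built box like the ones produced by packer-templates, leaving only minimal provisioning for first boot. A minimal sketch, assuming a locally built box; the box name and file path are hypothetical:

```ruby
# Vagrantfile -- start from a pre-built Packer box instead of provisioning
# everything from scratch on a vanilla base image.
Vagrant.configure('2') do |config|
  config.vm.box     = 'team/base-box'                 # hypothetical box name
  config.vm.box_url = 'file://./boxes/team-base.box'  # local Packer build output
  # Only lightweight, project-specific Puppet provisioning remains to run
  # on the first `vagrant up`; the heavy installs are baked into the box.
end
```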
I'm a fan of the work you guys are doing here & believe in your mission to prevent the "well, it works on my machine" conversation. However, you guys lost me along the way. I don't know if I'm just "doing it wrong" or my expectations are misaligned.
When I started using the puphpet experience, my expectation was that:
My expectation wasn't fully met on these points.
The problem:
I crafted a configuration that worked for my team and created a private repo where I checked in the puphpet config files. We all cloned this repo and did a `vagrant up`. This worked like a charm for the first few weeks. Then... things started to break. A new developer would clone the repo and hit issues like #575 and #492.
Ultimately we hit issues where a newer package would get installed when a new developer came on board, and the configuration no longer worked. Now, instead of a developer being productive in 20 minutes, one or more of us would spend hours debugging why our dev environment didn't work, taking away precious development time.
My proposal:
I believe that getting the latest libraries for all the dependencies isn't the right approach and ultimately doesn't deliver a consistent developer environment.
To meet the 3 bullet points above, puphpet configurations should use version-specific dependencies rather than pointing to the latest bits upon provisioning and setup.
- `vagrant up` will always work, regardless of when the VM was provisioned

If you look at the two issues I mentioned above, both fixes were to link to a specific version for a set of dependencies. This patched my environment without my having to create a brand-new puphpet configuration.
The implication of this proposal is that the configuration files generated on puphpet would link to specific versions of their dependencies. Puphpet.com can still update the dependencies at any time; however, if I generated a configuration file at some point, that config stays "frozen" to whatever dependency versions were linked at that time.
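The same "frozen" idea applies at the box level via Vagrant's built-in box versioning. A minimal sketch; the box name and version number below are hypothetical, not values this project publishes:

```ruby
# Vagrantfile -- freeze the base box so every `vagrant up`, whenever it
# happens, starts from the exact same image instead of whatever is latest.
Vagrant.configure('2') do |config|
  config.vm.box         = 'puphpet/ubuntu1404-x64'  # hypothetical box name
  config.vm.box_version = '= 1.2.0'                 # exact version constraint
end
```

With an exact `box_version` constraint, a developer joining months later downloads the same base image the rest of the team provisioned from, which is the core of the consistency argument above.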
There's probably a better approach to this (do I need to create my own box?), but I figured I'd share my experience & throw out a possible solution to this problem. I'm also curious if other folks have hit similar issues and I'd love to hear what you've done to prevent this from happening on your team.