
Create Strategy for Pre/Post Hook Functionality #183

Closed · 6 tasks done
rickmanelius opened this issue Apr 25, 2017 · 14 comments

@rickmanelius
Contributor

rickmanelius commented Apr 25, 2017

What happened (or feature request):

Feature Request

Acceptance Criteria:

  • Decision on jurisdiction and how/where to extend default ddev functionality.
  • Decision on storage locations for metadata and business logic.
  • Decision on metadata structure.
  • Decision on workflow for end-user creating and updating metadata.
  • Mocked out user journeys and example configuration files.
  • Documentation on how to extend pre/post hook functionality through plugins.

Related Conversations

This has been touched on within other issues:

@rickmanelius
Contributor Author

@rfay @beeradb @tannerjfco Given the larger implications, I wanted to at least draw your attention to this ticket because it will likely require a lot of input from one or more of you before it can get to an actionable state. Nothing to do at this moment, but something we'll likely discuss at tomorrow's planning meeting.

@rickmanelius
Contributor Author

This is an extension of the work previously done by @beeradb here https://gist.github.com/beeradb/e67995bed62f8d9b823c2bc89caac6ea#pull-in-data-from-an-external-source.

IMHO, a proper solution will require addressing the following areas:

  1. Jurisdiction: What portion of ddev OR the ddev plugins are responsible for managing the metadata?
  2. Storage: Where is the metadata and business logic stored?
  3. Structure: What pattern should we adopt for the metadata and business logic to make this repeatable and extendable?
  4. Workflow: How does an end-user install, update, and otherwise interact with this information?

And beyond a philosophical discussion, we should provide mock examples to help illustrate the points listed above.

If we can agree on the 4 items above and the necessity of examples, I would like to put forth a proposal to help push this conversation along.

Jurisdiction

Given that it’s been years since I’ve developed in an OO language like C++/Java and given that I’m not familiar with how Golang provides extensibility, I may be making an incorrect assumption or two here. In an ideal scenario, each plugin would be responsible for extending the base ddev functionality as well as managing its own metadata required to power these additions. For example, if we had a plugin for drud hosting and wanted to extend the default ddev import-db command, the plugin might provide default keys in the YML file to target the remote server information.
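To make that concrete, here is a purely hypothetical sketch (the plugin name and every key below are invented for illustration and are not existing ddev configuration) of how a hosting plugin might contribute its own metadata alongside the base config:

# Hypothetical keys a "drud-hosting" plugin could contribute to .ddev/config.yml
drud-hosting:
  remote:
    host: dashboard.example.com    # placeholder remote server
    environment: production        # environment the extended import-db would pull from
  import-db:
    strategy: remote               # plugin-provided default for the extended command

The point is only that the plugin, not core ddev, would own and document these keys.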

Storage

  • metadata: The question here is whether we place ALL the configuration into a single configuration file (currently .ddev/config.yml) or adopt an approach like the one we’re pursuing with docker-compose.foo.yml (see "As a developer, I want the ability to use custom containers", drud/ddev#121).
  • business logic: I don’t have a preference here, but I’m assuming that if we rely on each plugin to control the metadata that it makes sense for it to also control the business logic. In essence, if we had a “drud hosting” plugin, we would require this plugin to extend the default ddev commands and/or provide its own commands.

Structure

This is where things might get a little more controversial. Namespacing is going to be important, and this might add what appears to be unnecessary levels of hierarchy for simple scenarios. My vote might be something akin to the following:

$site-type:
  $provider-plugin:
    $ddev-command:
      $trigger:
        $insert-data

An example:

drupal:
  pantheon:
    _general:
      site-id: mysite
    import-db:
      post:
        prod:
          drush:
            - "cc all"
            - "updatedb"

Workflow

Things get tricky here, and we need to work through the user journeys. Once a user has gone through the default ddev config step, we have a key piece of information (the application type). At that point we can either prolong the ddev config process by asking questions from each plugin related to that site type (for example, a pantheon plugin could have questions specific to a WordPress site versus a Drupal site), or we could leave ddev config alone, export a series of sane defaults, and prompt the user to edit the config file. I prefer the questions through a wizard, but that could be a heavy lift.

@rickmanelius
Contributor Author

Updated summary. I now consider this actionable from the perspective of driving this to a completion point.

@tannerjfco
Contributor

Some ideas/thoughts here. Hopefully it all makes sense, happy to clarify if something's fuzzy.

  1. Jurisdiction: What portion of ddev OR the ddev plugins are responsible for managing the metadata?

I think the config.yaml that 'ddev config' generates should provide the minimal post-deploy steps for drupal or wordpress for the start/import commands, and link off to docs for users to determine what else they can do.

We could have plugins provide expanded post-deploy steps specific to their requirements, depending on how we end up handling plugins. e.g. If we provided the option to "ddev config pantheon", the generated config would have any pantheon-specific deploy steps.

  2. Storage: Where is the metadata and business logic stored?

metadata: I think a new section in config.yaml is fine, and I think we should avoid introducing more configuration files if possible.

  3. Structure: What pattern should we adopt for the metadata and business logic to make this repeatable and extendable?

I like the idea of specifying the deploy steps related to a ddev command. I don't really see why we would need to specify site type or plugin, though. As for the steps themselves, I think we should simply provide ways to define commands to run. It would be great if you could also specify ddev commands to run after other ddev commands, e.g. allow import to run after start.

extend_commands:
  start:
    post:
      - import-db:
        - src: ~/Downloads/mybackup.zip
      - exec: drush cc all
      - exec: drush updatedb
  4. Workflow: How does an end-user install, update, and otherwise interact with this information?

The user would be introduced to the information upon their first viewing of the config.yaml file generated by ddev config. They could then customize the configuration to meet their specific needs, referencing documentation we provide.
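As a sketch of what that first viewing could look like (the comment wording is an assumption, and extend_commands is still only the proposed key name), the generated section could carry its own guidance:

# Pre/post command hooks. These defaults were generated by "ddev config";
# edit them to fit your project. See the ddev docs on extending commands
# for the full list of options.
extend_commands:
  import-db:
    post:
      - exec: drush cc all    # minimal Drupal default; remove or replace as needed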

@rickmanelius
Contributor Author

After an initial/quick review of @tannerjfco's last comment (#183 (comment)), I'm mostly in agreement. The one area I'm not quite sure about is the counter-proposal on extend_commands. However, given that we only have a few provider plugins out of the gate and we've discussed versioning our config files, I'm fine with starting with this structure for now and extending/evolving it as needed.

@rickmanelius
Contributor Author

One other item that we probably need to resolve before this is truly actionable: we'll need to determine whether or not re-running ddev config wipes out any user-defined configuration (i.e. does it persist after the initial ddev config run). Another moment worth considering is upgrades. Each version release carries the possibility of breaking previous config structures, and that might be the moment to prompt a user to save a copy of their pre-existing config before it's replaced.

The trick at this stage is to not overthink this too much so we can start somewhere. Still, I think we're close to getting to specific acceptance criteria for an initial release of this functionality.

@rickmanelius
Contributor Author

Slept on this and have some feedback after review (w/coffee) this morning. I wanted to drop this quick note in case anyone caught my comments last night and is actively responding. TL;DR, I think we have a path forward.

@rickmanelius
Contributor Author

Feedback

  • I notice in @tannerjfco's latest proposal that all the pre/post commands are essentially other ddev commands (with exec being a catch-all). I hadn't considered this approach, and it does reduce the business logic a bit because we can rely on pre-existing commands to add new functionality. Unfortunately, since we're heavily relying on exec, some of this may not be testable and that is something to consider. For example, the WordPress search/replace using WP-CLI is going to be a big deal and adding that as an exec command to config.yml might pose a challenge to track that functionality. Correct?
  • The ability to stuff other ddev commands as pre/post could lead to recursion/infinite loops. Just a fair warning :)
  • To help a user understand when/where these hooks are being triggered, maybe we report on the command line that they are being used? Running import-files post from .ddev/config line 8: ddev exec drush cc
  • Persistence: Similar to the conversation we're having about settings.php files and who controls them (see "Respect settings.php where it exists (if ddev didn't create it)" #130), we probably need a dividing line in the config.yml file below which user-stored configs persist. Alternatively, we could have a config-user.yml file that overrides and persists. The reason I say this is that whatever generates these initially will eventually wipe out a user's carefully created set of hooks.
  • Versioning: Given that this structure may evolve, for debugging purposes we should probably provide more information in the config file regarding versioning. Right now we specify API version 4, but perhaps we also specify which version of ddev generated it? That way, if something major changes, it's easier to know why "extend_functions" no longer works because it was replaced by a different pattern (see the sketch after this list).
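A minimal sketch of that extra metadata, assuming we keep the existing APIVersion key and add a hypothetical field recording the generating release:

APIVersion: "4"            # existing key; the config API version
generated_by: ddev v0.x    # hypothetical key recording which ddev release wrote the file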

@rickmanelius
Contributor Author

Proposed Next Actions:

  • Getting a gut check from another member of the dev team on the approach listed above.
  • Strategy on what persists versus what doesn't.
  • A bit more detail on workflow, particularly how this is generated OR are we just going to leave a comment to our docs that provides examples? Do questions get added during ddev config when it's a WordPress site and we want to create the replace URL for import-db?
  • Summarizing this more clearly/succinctly as acceptance criteria.

I know there is still some work here, but given that we want to wait for end-user feedback to guide the future direction of how we handle plugins, the same will be true for this type of config, and I'd rather ship something than hold this in purgatory. I don't think anything I've requested above will be controversial, so my hope is that there is alignment; we just need to pick an option for persistence and workflow and get this to actionable!

@tannerjfco
Contributor

I notice in @tannerjfco's latest proposal that all the pre/post commands are essentially other ddev commands (with exec being a catch-all). I hadn't considered this approach, and it does reduce the business logic a bit because we can rely on pre-existing commands to add new functionality. Unfortunately, since we're heavily relying on exec, some of this may not be testable and that is something to consider. For example, the WordPress search/replace using WP-CLI is going to be a big deal and adding that as an exec command to config.yml might pose a challenge to track that functionality. Correct?

I think that's more a question as to what we test. I would expect we have the underlying "hook" functionality itself to test. It could also be that for drupal/wordpress we provide a minimal "hook" configuration to address the basics, such as search-replace or cache clear, at which point those minimal configurations get tested.

The ability to stuff other ddev commands as pre/post could lead to recursion/infinite loops. Just a fair warning :)

This feature could lead to users doing all sorts of interesting things. We'll have to decide how much we should / can handle bad configurations.

To help a user understand when/where these hooks are being triggered, maybe we report on the command line that they are being used? Running import-files post from .ddev/config line 8: ddev exec drush cc

There definitely needs to be output to the user to indicate that additional tasks are occurring.

Persistence: Similar to the convo that we're having with settings.php files and who controls it (see #130), we probably need to have a dividing line in the config.yml file where user stored configs persist. Alternatively, we could have a config-user.yml file that overrides and persists. The reason I say this is because whatever is generating these initially will eventually wipe out a user's carefully created set of hooks.

My perspective regarding config.yml is that we help create it initially via "ddev config", but from that point forward it is the user's file to control. We need to decide if that's true, though, and relatedly decide whether or not we control the main docker-compose.yml. We currently allow "ddev config" to "re-configure" a site, so we would have to address how to handle reconfiguring. One possibility is that we still allow reconfiguring, with the understanding that it would replace the existing configuration.

Versioning: Given that this structure may evolve, for debugging purposes we should probably provide more information in the config file regarding versioning. Right now we specify API version 4, but perhaps we also specify which version of ddev generated it? That way if something major changes, it's easier to know why "extend_functions" no longer works because it was replaced by a different pattern.

I would expect we leverage APIVersion in the config to support this concern. If we introduce a breaking change to configuration, we should increment the APIVersion. How we handle breaking changes (i.e. providing backwards compatibility or not) is another question to consider.

A bit more detail on workflow, particularly how this is generated OR are we just going to leave a comment to our docs that provides examples? Do questions get added during ddev config when it's a WordPress site and we want to create the replace URL for import-db?

I touched on this as a possibility in the first question, but one option is to provide a minimal set of extend_commands usage relevant to the platform (drupal/wordpress). We could have ddev config prompt the same way it does now, and add messaging at completion instructing the user to review the configuration file. The configuration file itself should then have comments explaining what things do, and link off to documentation for more details. It makes sense to me to treat ddev config as how you start your configuration, and to then provide documentation for how to expand the configuration to meet specific needs.

@rickmanelius
Contributor Author

Well, that is frustrating... I had a comment in progress that didn't get submitted. Going to attempt a shorter summary.

@rickmanelius
Contributor Author

It sounds like we're fine to start pursuing this as follows:

  • .ddev/config.yml contains top level entries to extend each command with pre/post functionality.
  • All pre/post commands will essentially be other ddev commands.
  • Our test strategy will be to prove that pre/post commands can fire.
    • Given each ddev function already has its own coverage, we don't need to proceed further.
  • We will surface to the end-user that hooks are being fired and draw their attention to where this happens.
  • We will add some level of inline comments and links to docs to learn more.

The only section still up in the air is whether config persists or not. My vote is to have some flag in the middle of the yml file: ddev config would control everything above the line and leave everything below it alone, so user-defined extends would carry over if someone re-ran ddev config. Just my vote.
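A minimal sketch of that divider idea, assuming a plain comment marker is enough for ddev config to know where to stop rewriting (the marker text and the keys shown are illustrative, not the actual generated config):

APIVersion: "4"    # managed by "ddev config"; may be regenerated on re-run
type: drupal       # illustrative; actual generated keys may differ

# --- User customizations below this line persist across "ddev config" re-runs ---
extend_commands:
  import-db:
    post:
      - exec: drush cc all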

@tannerjfco if you're in agreement (and might want a sanity check from @beeradb), let's detail out a little more and convert to actionable.

@rickmanelius
Contributor Author

Follow-up based on a convo w/@tannerjfco on hangouts:

  • We'll file a separate ticket to run system commands, but not consider it in scope for this one.
  • Limit hooks to these six commands: start, stop, exec, remove, import-files, import-db.
  • Test coverage in two buckets: that pre/post fire and that there is validation and proper execution of the commands themselves.
  • We'll rely on ddev config for the first-time generation. After that, an end-user who modifies this file directly probably understands what they are doing and is likely to keep hand-modifying it going forward. There is also version control that a user can rely on to revert changes. (A sketch of the resulting config shape follows.)
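Pulling those decisions together, a sketch of the overall shape this could take for a couple of the six supported commands (still using the proposed extend_commands naming, which is not final; the specific steps are illustrative):

extend_commands:
  start:
    post:
      - import-db:
        - src: ~/Downloads/mybackup.zip   # chain another ddev command after start
  import-db:
    pre:
      - exec: drush sql-drop -y           # illustrative pre-step
    post:
      - exec: drush cc all
      - exec: drush updatedb
  # stop, exec, remove, and import-files could be extended the same way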

@rickmanelius
Contributor Author

Functionality is merged. Great job, @tannerjfco!
