
Digital Ocean integration #40

Closed
1 task done
swalkinshaw opened this issue Aug 28, 2014 · 39 comments

Comments

@swalkinshaw
Member

swalkinshaw commented Aug 28, 2014

Using Ansible's digital_ocean module we should be able to create a droplet and use dynamic inventory to run the playbook on it.

Would be a nice feature to also create DNS entries for a domain as an option using http://docs.ansible.com/digital_ocean_domain_module.html
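A minimal sketch of what the DNS piece could look like, assuming a `do` result registered from a prior `digital_ocean` droplet task (the domain name is a placeholder):

```yaml
# Hedged sketch: `do` is assumed to be registered from a digital_ocean task
- name: Create DNS record pointing at the new droplet
  digital_ocean_domain:
    state: present
    name: example.com
    ip: "{{ do.droplet.ip_address }}"
```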

@nathanielks
Contributor

Is there a branch I can follow for this?

@Foxaii

Foxaii commented Sep 12, 2014

Not yet.

The Ansible module requires dopy but that hasn't yet been updated for the v2 Digital Ocean API (issue here).

I have been using the Digital Ocean Vagrant plugin instead, but haven't had the time to create a fully formed Vagrantfile.

@nathanielks
Contributor

@Foxaii brilliant. Looking forward to seeing what you have!

@swalkinshaw
Member Author

I should note that we don't want to use Vagrant to manage this. It should all be done through Ansible.

@austinpray
Contributor

If it is not possible to do with Ansible due to what @Foxaii mentioned, it may be a great time to try http://www.terraform.io/

@nathanielks
Contributor

@austinpray Have you played around with it at all? I haven't had time to!

@austinpray
Contributor

I have only used it to stand up multiple Digital Ocean servers, kind of like AWS CloudFormation, not much else. The real value would be being able to run "terraform apply" and then have your Ansible hosts file updated with IP addresses or something. Or have Ansible pull them dynamically from a manifest file. I haven't gotten that far, as usually these WordPress sites are just one server.

@swalkinshaw
Member Author

I'm okay with using something like Terraform if we can't use Ansible, since that's its intended purpose. The problem with Vagrant is that it's specifically meant for dev environments.

@swalkinshaw
Member Author

See https://github.com/bandwidthcom/terraform-inventory

edit: this isn't that great

@nathanielks
Contributor

Until the DO ansible module is up to speed, using something like Terraform would be really helpful for my fork as well.

@austinpray
Contributor

> edit: this isn't that great

Yeah looks like it's not working for DO either

@luandro

luandro commented Nov 9, 2014

I've been checking the issues the Ansible module has with Digital Ocean's new API. What is the suggested way to deploy to DO? Capistrano?

> I have been using the Digital Ocean Vagrant plugin instead

How would I go about using the Digital Ocean Vagrant plugin to deploy what I've created with Ansible-Bedrock since DO doesn't have an image quite like the bedrock setup?

I'm new to automated deployments; the best way I can figure is copying my local WordPress folder to a previously created LEMP droplet on DO through SFTP. Any suggestions?

@austinpray
Contributor

@luandro I would recommend reading up a bit on how Ansible and Capistrano work. This stack already works out of the box with Digital Ocean.

Currently you have to manually create a Digital Ocean droplet, then run Ansible, then deploy with Capistrano. The scope of this issue is to automatically create the Digital Ocean droplet and then automatically provision it.

@luandro

luandro commented Nov 9, 2014

@austinpray Thanks for the heads up. I'll look into it.

@swalkinshaw
Member Author

Just a note that dopy was updated to API v2 but Ansible still needs to be updated as well I believe. See ansible/ansible-modules-core#209

@swalkinshaw
Member Author

Ansible still hasn't updated their DO module :(

See ansible/ansible-modules-core#998 for a new attempt

@mAAdhaTTah
Contributor

Can we use this module now anyway? Might help them get it merged if we tested it.

@telemakhos

It has been merged... =)

@retlehs
Sponsor Member

retlehs commented Apr 25, 2015

Going to leave this here: https://github.com/seven1m/do-install-button

@rossedman

I think launching infrastructure and provisioning it are two different things. Terraform would be a better choice: pass a local-exec command from Terraform to provision with Ansible. This would allow swapping out the whole server cluster and configuration files if you wanted master->slave databases and whatever... My two cents.
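A rough sketch of that local-exec approach, assuming the Terraform DigitalOcean provider; the droplet name and playbook path are hypothetical:

```hcl
# Sketch only: create a droplet, then hand its IP to ansible-playbook
# via the local-exec provisioner once the resource is up.
resource "digitalocean_droplet" "web" {
  image  = "ubuntu-16-04-x64"
  name   = "trellis-web"
  region = "nyc1"
  size   = "1gb"

  provisioner "local-exec" {
    # trailing comma makes the IP a valid one-host inline inventory
    command = "ansible-playbook -i '${self.ipv4_address},' server.yml"
  }
}
```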

@swalkinshaw
Member Author

@rossedman I agree that would probably be better but I'd rather keep this simple and within Ansible for now. We'll still keep it separate with an infrastructure.yml playbook which will only create a DO instance.

@mAAdhaTTah
Contributor

Is this in the works? Happy to test!

@swalkinshaw
Member Author

@mAAdhaTTah the DO module in Ansible is broken since DO retired v1 of their API. The module was finally upgraded to v2, but we've been waiting on the next Ansible release to include it.

The actual work won't be difficult though. Just a simple playbook to create a droplet.

@rossedman

@swalkinshaw I understand. I just deployed some test servers last night with Terraform and could see it working beautifully with Ansible because it keeps the concerns separated: defining infrastructure vs. provisioning. Hopefully the DO module will update soon enough so you can do it all through Ansible.

@mAAdhaTTah
Contributor

Has there been any movement on this/are we still waiting on a new Ansible release for this to work?

@swalkinshaw
Member Author

Still waiting 😭 They haven't bumped the DO module version in forever so whenever 2.0 is released it will be included.

@swalkinshaw
Member Author

We can finally start development on this now that Ansible 2.0 is released!

@mAAdhaTTah
Contributor

The wait is over!

@landerss0n

Glad to hear

@nbyloff
Contributor

nbyloff commented Jun 26, 2016

Not exactly polished, but I thought I would take a swing at it. Here's a simple, working instance of creating a droplet and printing the IP address. 84221e0

With the add_host command from the inventory module, the created droplet can immediately launch into the typical Trellis server provisioning. The one thing was that I had to change your_server_hostname to localhost ansible_connection=local in the hosts file I was using. Until I did that, it would spit out an error. Don't know why.

Eventually I'd like to get it to where multiple DB, web, load balancer, etc. servers will all be configured. But for now, a single droplet will do. I'd like to hear your expectations on how this should work and be configured (i.e. how do you want this simple demo to change in order to make it into master?). This would be a great feature to have in the core.

Please note, dopy 0.3.7 is broken right now. Install 0.3.5 instead:

sudo pip install 'dopy==0.3.5'

@swalkinshaw
Member Author

Thanks for sharing @nbyloff. I've done something pretty similar to this as well. My sticking point was trying to get dynamic inventories working but thinking about it again, I'm not sure it's needed.

The typical/default setup is probably a single server so a dynamic inventory isn't really needed. It's just nice to have to skip manually editing your hosts file.

@nbyloff
Contributor

nbyloff commented Jun 29, 2016

@swalkinshaw what about using a template to accomplish writing to the hosts file? The most difficult part would be to make sure you read the hosts file correctly to capture any pre-configured servers. But you could:

  • Read the single environment host file and collect groups & servers
  • Check DO via the API to see whether a server name in inventory matches a droplet name
  • Create any droplet that doesn't exist
  • Write a new hosts file with a template, using the new IP addresses in memory as well as the ones loaded in the first step

That might require the default hosts file to look something like this:

production_host ansible_host=XXX.XXX.XXX.XXX

[production]
localhost ansible_connection=local
production_host 

[web]
localhost ansible_connection=local
production_host 

But that will also leave the door open for updates in the future for simple configuration changes to create clusters like:

web1 ansible_host=XXX.XXX.XXX.XXX
web2 ansible_host=XXX.XXX.XXX.XXX
db1 ansible_host=XXX.XXX.XXX.XXX
db2 ansible_host=XXX.XXX.XXX.XXX
db3 ansible_host=XXX.XXX.XXX.XXX
lb1 ansible_host=XXX.XXX.XXX.XXX

[production]
localhost ansible_connection=local
web1
web2
db1
db2
db3
lb1

[web]
localhost ansible_connection=local
web1
web2

[database]
db1
db2
db3

[load_balancer]
lb1

@swalkinshaw
Member Author

Having a hard time picturing it, but it seems pretty brittle. Parsing an existing file and checking if names match via the DO API does not sound like fun.

@nbyloff
Contributor

nbyloff commented Jun 30, 2016

Yes, while I have an idea of how to keep the hosts files updated programmatically, it's probably not necessary. Time could be better spent creating a playbook that builds a cluster of servers for a large website.

The only other update I would make is to notify a handler or something to provision the new droplet once created. The user could then take the new IP address and paste it into their hosts file manually.

@davisonio

Any update on this? Would be a cool feature. At the moment I'm doing this with Terraform in the repo.

@nbyloff
Contributor

nbyloff commented Apr 28, 2018

I am using Terraform as well, with the dynamic inventory in Ansible; I think Terraform would be great for the project. However, it's possible to create the server and begin provisioning immediately strictly with Ansible. This could work with a large number of cloud providers, not just Digital Ocean. Here's the full list of dynamic inventory options that comes with Ansible.

Then for a pure ansible solution, you could do something like:

---
- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    - name: Create Digital Ocean Server
      digital_ocean:
        state: present
        command: droplet
        name: trellis-test
        image_id: ubuntu-16-04-x64
        region_id: nyc1
      register: do

    - name: Add host to inventory
      add_host:
        name: "{{ do.droplet.ipv4_address }}"
        groups: web
      when: do.droplet is defined

- hosts: web:&{{ env }}
  remote_user: "{{ admin_user }}"
  gather_facts: False

  tasks:
    # wait until SSH is available before the server.yml playbook can run
    - name: Wait for ssh
      local_action: "wait_for port=22 host={{ inventory_hostname }}"

Then you can keep the single server setup or allow YAML configuration options that would build a private cloud with a LB, dedicated DB server, or however the user wants it. It also allows for things like tagging your droplets so we can grab servers by identifiers so you know what should be installed on each machine.

@swalkinshaw
Member Author

I've had similar tasks in a local branch for a while, so thanks for sharing that. I guess the reason why this issue isn't done yet is the "integration" part.

Meaning, is there a way to provide an optional DO playbook/inventory script which just works with the existing server.yml so it's opt-in?

I guess these tasks are idempotent, could we just create a new playbook which runs those then imports server.yml?
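One rough sketch of that opt-in wrapper; the filename, droplet parameters, and variable names are assumptions, not an existing Trellis playbook:

```yaml
# hypothetical droplet.yml: create the droplet, then hand off to server.yml
---
- name: Create Digital Ocean droplet
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create droplet
      digital_ocean:
        state: present
        command: droplet
        name: "{{ droplet_name }}"
        image_id: ubuntu-16-04-x64
        region_id: nyc1
        unique_name: yes  # makes re-runs idempotent by droplet name
      register: do

    - name: Add new droplet to the web group
      add_host:
        name: "{{ do.droplet.ip_address }}"
        groups: web
      when: do.droplet is defined

# import_playbook requires Ansible 2.4+; older releases would use include
- import_playbook: server.yml
```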

@nbyloff
Contributor

nbyloff commented Apr 30, 2018

Currently in my projects I modify the wordpress_sites.yml keys for backups like this. What if there was an optional server attribute? (Non-working pseudo code below)

ssl:
  enabled: true
  provider: letsencrypt
cache:
  enabled: false
backup:
  enabled: true
  cron:
    hour: 6
    minute: 0
server:
  enabled: true
  provider: "{{ cloud_servers.provider }}"
  name: "{{ cloud_servers.web_server }}" # this is the name of the server; should be unique
  single: true  # single server that has DB, nginx, etc. all on one server like the current default setup

Then in something like group_vars/all/something.yml you could have configuration options for the droplet:

cloud_servers:
  web_server: www-1  #unique name
  provider: digital_ocean
  image_id: ubuntu-16-04-x64
  region_id: nyc1
  ...

I added the single: true, but it might be beyond the scope of this project. I mention it b/c I modified an environment and have 11 WordPress sites on 4 servers (1 LB, 2 web servers, and 1 DB server). If single were false, then options could be expanded to allow a much more dynamic environment, and your users could begin building their own private clouds, especially if the community started adding playbook examples in the wiki.

A setup similar to this would allow for optional droplet creation and linking a server dynamically by a unique name vs. manually pasting the IP address into the hosts file.

Also, in the OP of this issue, @swalkinshaw mentioned updating DNS entries, which would be huge. I mentioned Let's Encrypt and DNS validation via API on the forums a few weeks ago. You could go end to end without any manual intervention. Exciting options!

@swalkinshaw
Member Author

Closing the oldest issue 🎉

trellis-cli has DO integration in one easy command:

$ trellis droplet create
