Allow setting single instance with multistage and possible bug fix? #22

Closed · wants to merge 2 commits · 3 participants

Conversation
Contributor

artificialrobot commented Apr 8, 2013

Some small changes to make things work for our project, which uses capistrano multistage, plus what I think is a bug fix for the single instance task not setting all roles.

  • ec2_roles will now accept role names as well as hashes, so you can set up default roles using ec2_roles
  • Allow setting the load balancer as a cap variable rather than having to pass it to register_instance and deregister_instance
  • Set the instance name via the instance task rather than by parsing arguments, so that capify_ec2 will work with multistage builds, since the position of the server task will not be predictable
  • Always add all roles for a given instance when calling the single instance task
Contributor

artificialrobot commented Apr 8, 2013

Just realized I should illustrate the problems I am trying to solve here. Given the server:

00: WWW-i-4d157b20-IMAGE i-4d157b20 m1.large 10.200.129.12 us-east-1c web,app,db primary

and a task nginx:restart which runs on servers with the web role.

Finally, in my deploy.rb I had the following:
ec2_role :name=>:web, :options=>{ :default => true }
ec2_role :name=>:app, :options=>{ :default => true }
ec2_role :name=>:db, :options=>{ :default => true }

When I ran cap WWW-i-4d157b20-IMAGE nginx:restart, I got an error that there were no servers with the web role. If I ran cap nginx:restart, though, it would properly find the server above and execute the task. This is what I believe was the bug.

I am using capistrano multistage, so my tasks actually look like cap WWW-i-4d157b20-IMAGE nginx:restart, but some of the tasks were relying on the server name being the first (or second) argument. Rather than relying on ordering, I just have the instance task set a variable called ec2_instance_name.
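The idea above can be sketched in plain Ruby (a hypothetical simplification, not the gem's actual source; the task registry and variable store here just simulate Capistrano):

```ruby
# Sketch: instead of reading the server name from a fixed ARGV position
# (which breaks once a stage task like `review` comes first), the
# instance-named task records its own name in a variable when invoked.

VARIABLES = {}
TASKS = {}

def task(name, &block)
  TASKS[name] = block
end

def invoke(name)
  TASKS[name].call
end

instance_name = 'WWW-i-4d157b20-IMAGE'

task(instance_name.to_sym) do
  # The task knows its own name, so argument order no longer matters.
  VARIABLES[:ec2_instance_name] = instance_name
end

# Simulating the instance task running anywhere in the task list:
invoke(instance_name.to_sym)
VARIABLES[:ec2_instance_name] # => "WWW-i-4d157b20-IMAGE"
```

Later tasks can then read ec2_instance_name instead of inspecting the command line.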

The load balancer variable just allows a different load balancer to be set per environment, configured within deploy.rb or the stage-specific .rb file for multistage.
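The fallback behaviour described here might look roughly like this (a hypothetical sketch: set/fetch are simulated with a hash, and the balancer name 'review-elb' is made up for illustration):

```ruby
# Sketch of resolving the load balancer from an explicit argument,
# falling back to a per-environment capistrano variable.

VARIABLES = {}

def set(name, value)
  VARIABLES[name] = value
end

def fetch(name, default = nil)
  VARIABLES.fetch(name, default)
end

# Uses the explicitly passed load balancer if given, otherwise the
# :load_balancer variable configured per environment.
def resolve_load_balancer(explicit = nil)
  explicit || fetch(:load_balancer)
end

set :load_balancer, 'review-elb'   # hypothetical name, set in a stage file
resolve_load_balancer              # => "review-elb"
resolve_load_balancer('other-elb') # => "other-elb"
```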

I ended up changing my deploy.rb to just have a single call to:

ec2_roles {:name=>:web, :options=>{ :default => true }}, {:name=>:app, :options=>{ :default => true }}, {:name=>:db, :options=>{ :default => true }}

Since ec2_roles is now just a direct pass-through to ec2_role, this works out well, as you can pass in multiple roles as hashes.
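The pass-through can be sketched like this (a hypothetical simplification, not the gem's actual source; the ROLES hash just stands in for the gem's internal role registry):

```ruby
# Sketch: ec2_roles normalises each argument (bare name or hash)
# and delegates every role to ec2_role.

ROLES = {}

# Accepts a hash describing one role, e.g. { name: :web, options: { default: true } }.
def ec2_role(role)
  ROLES[role[:name]] = role[:options] || {}
end

# Accepts any mix of bare role names and option hashes.
def ec2_roles(*roles)
  roles.each do |role|
    role = { name: role.to_sym } unless role.is_a?(Hash)
    ec2_role(role)
  end
end

# Both call styles end up in the same place:
ec2_roles :web, { name: :db, options: { default: true } }
```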

Any feedback is welcome on this.

Member

Rylon commented Apr 9, 2013

Hey there,

I've tried to reproduce the original problem you described by essentially recreating the same server setup you have, and then creating a task which simply executes the 'date' command on the remote server.

I was able to successfully run the task as 'cap server_name test_task' and it applied only to that single server.

The only difference is that I'm using the 'rolling_deploy' branch, which has some changes that may have fixed this issue; you may want to give it a try? https://rubygems.org/gems/capify-ec2/versions/1.4.3.pre5

The other thing I'd suggest is to define all of your roles using 'ec2_roles' rather than 'ec2_role'. You can call it multiple times like you were doing with 'ec2_role', for example:

ec2_roles :name=>:web, :options=>{ :default => true }
ec2_roles :name=>:app, :options=>{ :default => true }
ec2_roles :name=>:db, :options=>{ :default => true }

Although when testing this, I was able to use either on the 'rolling_deploy' branch.

Let me know how you get on?

Contributor

artificialrobot commented Apr 9, 2013

I will give that branch a try, thanks!

Contributor

artificialrobot commented Apr 9, 2013

Unfortunately, the other branch didn't fix the problem for me:

From my deploy.rb:

ec2_roles({name: :web, :options=>{ :default => true }}, {name: :app, :options=>{ :default => true }},
          {name: :db, :options=>{ :default => true }}, {name: :cron, :options=>{ :default => true }})

Output from executing

$ cap review ec2:status
    triggering load callbacks
  * 2013-04-09 16:52:35 executing `review'
    triggering start callbacks for `ec2:status'
  * 2013-04-09 16:52:38 executing `multistage:ensure'
  * 2013-04-09 16:52:38 executing `ec2:status'
Num   Name                    ID           Type       DNS             Zone         Roles          Options
00:   WWW-i-4d157b20-IMAGE    i-4d157b20   m1.large   10.200.129.12   us-east-1c   web,app,db     primary

$ cap review WWW-i-4d157b20-IMAGE nginx:setup
    triggering load callbacks
  * 2013-04-09 16:49:32 executing `review'
    triggering start callbacks for `WWW-i-4d157b20-IMAGE'
  * 2013-04-09 16:49:35 executing `multistage:ensure'
  * 2013-04-09 16:49:35 executing `WWW-i-4d157b20-IMAGE'
    triggering start callbacks for `nginx:setup'
  * 2013-04-09 16:49:35 executing `multistage:ensure'
  * 2013-04-09 16:49:35 executing `nginx:setup'
`nginx:setup' is only run for servers matching {:roles=>:web}, but no servers matched

From what I can tell, the problem is that the ec2_roles method creates the WWW-i-4d157b20-IMAGE task in define_instance_roles (called from ec2_role for each role passed to ec2_roles), which only adds a single role to the instance and won't add each of the roles assigned to that server.

Since I am passing a stage as the first task, the code starting at capistrano.rb:190, where a task named WWW-i-4d157b20-IMAGE that adds all the roles would be defined, is never executed. Even so, it seems to me that this task would just be redefined by define_instance_roles anyway, since ec2_role is called for each role after that task is defined, and defining the same task again in define_instance_roles overwrites the one created on line 190.

My change extracts the task definition from ec2_roles into the define_instance_roles method. This way the instance-named task will remove all default roles and add in all of the roles in the Roles tag for the given instance.
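The fix described above can be sketched in plain Ruby (a hypothetical simplification of the approach, not the actual patch; the task registry and role table here just simulate Capistrano):

```ruby
# Sketch: the instance-named task is defined inside define_instance_roles
# and assigns *every* role from the instance's Roles tag, rather than only
# the single role being processed. Re-defining the task overwrites it,
# which is why only the last (complete) definition matters.

TASKS = {}
CURRENT_ROLES = { web: [], app: [], db: [] }

def task(name, &block)
  TASKS[name] = block # later definitions overwrite earlier ones
end

Instance = Struct.new(:name, :tags)

def define_instance_roles(instance, roles_tag = 'Roles')
  task(instance.name.to_sym) do
    CURRENT_ROLES.each_value(&:clear)               # remove default roles
    [instance.tags[roles_tag]].flatten.flat_map { |r| r.split(',') }.each do |role|
      CURRENT_ROLES[role.to_sym] << instance.name   # add every tagged role
    end
  end
end

server = Instance.new('WWW-i-4d157b20-IMAGE', { 'Roles' => 'web,app,db' })
define_instance_roles(server)
TASKS[:'WWW-i-4d157b20-IMAGE'].call
CURRENT_ROLES[:web] # => ["WWW-i-4d157b20-IMAGE"]
```

The roles_tag parameter stands in for the configurable @ec2_config[:aws_roles_tag] mentioned in the review below.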

Member

Rylon commented Apr 9, 2013

Ah I see, OK I'll run through some tests and look at merging this in then! I'd like to release the rolling_deployment branch first though, to keep things simpler :)

Thanks!

Contributor

artificialrobot commented Apr 9, 2013

Sounds great, I'm looking forward to the rolling deployment stuff as well. I also have a small additional feature that I will submit separately: a couple of methods to query RDS instances by name (similar to the elb methods), which is useful for updating configurations dynamically, for me at least :-)

Member

Rylon commented Apr 9, 2013

Have you been testing the rolling deployment feature as it is? Just wondering if you've seen any issues?

Contributor

artificialrobot commented Apr 9, 2013

I haven't had a chance to try it out yet, since I was just trying to work out the basics, but I will be soon. I will let you know how it goes.

@skisulli skisulli and 1 other commented on an outdated diff May 18, 2013

lib/capify-ec2/capistrano.rb
  instances.each do |instance|
    task instance.name.to_sym do
      remove_default_roles
-     define_role(role, instance)
-   end
+
+     if instance.respond_to?(:roles)
+       roles = instance.roles
+     else
+       roles = [instance.tags['Roles']].flatten
@skisulli

skisulli May 18, 2013

Contributor

This should use the config for roles, @ec2_config[:aws_roles_tag]

@artificialrobot

artificialrobot May 18, 2013

Contributor

good call, I will update that.

@artificialrobot

artificialrobot Jun 7, 2013

Contributor

Finally got around to fixing this issue.

Contributor

skisulli commented May 18, 2013

How are you handling the multistage stuff with this? I am looking into updating our recipe to use multistage, but given the way our ops person has configured the instances, I am not sure how best to go about it. The tags we have are:

app - equivalent to Project
role - equivalent to Role
env - for determining staging or production

I am not sure what the best approach is for using this gem for multistage under this setup.

Contributor

artificialrobot commented May 18, 2013

@skisulli I have my build set up so that the project name differentiates environments. I have a task defined in my deploy.rb that sets up roles by calling ec2_roles, and that task is hooked in after multistage:ensure. This allows me to merge the project_tags key into the ec2_config hash with an environment-specific project name: -qa or -production or whatever you may have.

This way I define all of the common elements of ec2_config in the main deploy.rb and only the project name in the stage-specific .rb file.

from .rb

set :ec2_config, fetch(:ec2_config).merge!(project_tags: ['project-<env>'])

from deploy.rb

set :ec2_config, {
  aws_params: { regions: ['us-east-1'] },
  load_balanced: true,
}

namespace :aws do
  task :init_roles do
    ec2_roles({name: :web, :options=>{ :default => true }}, {name: :app, :options=>{ :default => true }},
            {name: :db, :options=>{ :default => true }}, {name: :cron, :options=>{ :default => true }})
  end

  after "multistage:ensure", "aws:init_roles"
end
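To illustrate how the merge plays out, here is a runnable sketch (set/fetch are simulated with a plain hash, and the project tag 'myproject-qa' is a made-up example):

```ruby
# Sketch of the per-stage ec2_config merge described above.

VARIABLES = {}

def set(name, value)
  VARIABLES[name] = value
end

def fetch(name)
  VARIABLES.fetch(name)
end

# Common settings from the main deploy.rb:
set :ec2_config, {
  aws_params: { regions: ['us-east-1'] },
  load_balanced: true,
}

# The stage file then merges in the environment-specific project tag:
set :ec2_config, fetch(:ec2_config).merge!(project_tags: ['myproject-qa'])

fetch(:ec2_config)[:project_tags]  # => ["myproject-qa"]
fetch(:ec2_config)[:load_balanced] # => true
```

The common keys survive the merge, so each stage file only needs the one line that differs.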

artificialrobot added some commits Apr 5, 2013

Fixing bug with single instance task and simplifying role setup code

  • ec2_roles will now accept role names as well as hashes so you can setup default roles using ec2_roles
  • allow for setting load balancer as a cap variable rather than having to pass it for register_instance and deregister_instance
  • set instance name via the instance task rather than by parsing arguments so that capify_ec2 will work with multistage builds since position of server task will not be predictable
  • always add all roles for a given instance when calling the single instance task
Member

Rylon commented Jul 18, 2013

Hey there,

I'm currently reviewing #30, which also adds multistage support, so once that is done I'll deploy a pre-release version of the gem for you to try, to see if it solves your problem?

I think it would still be useful if you submitted the fix for the single instance task roles as a separate pull request, as it would be a shame to lose that fix in the process.

Thanks!

Contributor

artificialrobot commented Jul 18, 2013

Absolutely, just mention me here or in the other PR when you get it merged in and I will take a look and try to split these out, though it may be a week or two as we have some big projects about to hit here at work :-)

Member

Rylon commented Jul 18, 2013

No problem! Considering it's taken me three months to actually get around to this, I won't hold it against you 😉

@Rylon Rylon closed this in 5ace347 Jul 19, 2013
