
Adds the delete_on_termination patch

commit 479202d9513fd37a7206782c80adf888e5b86246 1 parent f4751e1
Philip Gladstone authored
22 README.rst
@@ -15,40 +15,59 @@ Boto is a Python package that provides interfaces to Amazon Web Services.
At the moment, boto supports:
* Compute
+
* Amazon Elastic Compute Cloud (EC2)
* Amazon Elastic Map Reduce (EMR)
* AutoScaling
* Elastic Load Balancing (ELB)
+
* Content Delivery
+
* Amazon CloudFront
+
* Database
+
* Amazon Relational Database Service (RDS)
* Amazon DynamoDB
* Amazon SimpleDB
+
* Deployment and Management
+
* AWS Identity and Access Management (IAM)
* Amazon CloudWatch
* AWS Elastic Beanstalk
* AWS CloudFormation
+
* Application Services
+
* Amazon CloudSearch
* Amazon Simple Workflow Service (SWF)
* Amazon Simple Queue Service (SQS)
* Amazon Simple Notification Service (SNS)
* Amazon Simple Email Service (SES)
+
* Networking
+
* Amazon Route53
* Amazon Virtual Private Cloud (VPC)
+
* Payments and Billing
+
* Amazon Flexible Payment Service (FPS)
+
* Storage
+
* Amazon Simple Storage Service (S3)
* Amazon Glacier
* Amazon Elastic Block Store (EBS)
* Google Cloud Storage
+
* Workforce
+
* Amazon Mechanical Turk
+
* Other
+
* Marketplace Web Services
The goal of boto is to support the full breadth and depth of Amazon
@@ -113,6 +132,8 @@ Boto releases can be found on the `Python Cheese Shop`_.
Join our IRC channel `#boto` on FreeNode.
Webchat IRC channel: http://webchat.freenode.net/?channels=boto
+Join the `boto-users Google Group`_.
+
*************************
Getting Started with Boto
*************************
@@ -141,3 +162,4 @@ All rights reserved.
.. _this: http://code.google.com/p/boto/wiki/BotoConfig
.. _gitflow: http://nvie.com/posts/a-successful-git-branching-model/
.. _neo: https://github.com/boto/boto/tree/neo
+.. _boto-users Google Group: https://groups.google.com/forum/?fromgroups#!forum/boto-users
7 boto/ec2/connection.py
@@ -867,6 +867,7 @@ def modify_instance_attribute(self, instance_id, attribute, value):
* userData - Base64 encoded String (None)
* disableApiTermination - Boolean (true)
* instanceInitiatedShutdownBehavior - stop|terminate
+ * blockDeviceMapping - List of strings - e.g. ['/dev/sda=false']
* sourceDestCheck - Boolean (true)
* groupSet - Set of Security Groups or IDs
* ebsOptimized - Boolean (false)
@@ -896,6 +897,12 @@ def modify_instance_attribute(self, instance_id, attribute, value):
if isinstance(sg, SecurityGroup):
sg = sg.id
params['GroupId.%s' % (idx + 1)] = sg
+ elif attribute.lower() == 'blockdevicemapping':
+ for idx, kv in enumerate(value):
+ dev_name, _, flag = kv.partition('=')
+ pre = 'BlockDeviceMapping.%d' % (idx + 1)
+ params['%s.DeviceName' % pre] = dev_name
+ params['%s.Ebs.DeleteOnTermination' % pre] = flag or 'true'
else:
# for backwards compatibility handle lowercase first letter
attribute = attribute[0].upper() + attribute[1:]
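The new `blockdevicemapping` branch above turns each `'device=flag'` string into indexed query parameters. A standalone sketch of that encoding, with made-up device names and no AWS call:

```python
# Mirrors the parameter encoding the patch adds to
# modify_instance_attribute: 'flag' defaults to 'true' when the
# mapping string contains no '='.
def encode_block_device_mapping(value):
    params = {}
    for idx, kv in enumerate(value):
        dev_name, _, flag = kv.partition('=')
        pre = 'BlockDeviceMapping.%d' % (idx + 1)
        params['%s.DeviceName' % pre] = dev_name
        params['%s.Ebs.DeleteOnTermination' % pre] = flag or 'true'
    return params

params = encode_block_device_mapping(['/dev/sda1=false', '/dev/sdb'])
# '/dev/sda1' keeps its volume after termination; '/dev/sdb' falls
# back to the 'true' default.
```

Through the patched method this would be invoked as `conn.modify_instance_attribute(instance_id, 'blockDeviceMapping', ['/dev/sda1=false'])`.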
5 boto/ec2/elb/__init__.py
@@ -76,7 +76,7 @@ def connect_to_region(region_name, **kw_params):
class ELBConnection(AWSQueryConnection):
- APIVersion = boto.config.get('Boto', 'elb_version', '2011-11-15')
+ APIVersion = boto.config.get('Boto', 'elb_version', '2012-06-01')
DefaultRegionName = boto.config.get('Boto', 'elb_region_name', 'us-east-1')
DefaultRegionEndpoint = boto.config.get('Boto', 'elb_region_endpoint',
'elasticloadbalancing.us-east-1.amazonaws.com')
@@ -180,7 +180,8 @@ def create_load_balancer(self, name, zones, listeners, subnets=None,
:rtype: :class:`boto.ec2.elb.loadbalancer.LoadBalancer`
:return: The newly created :class:`boto.ec2.elb.loadbalancer.LoadBalancer`
"""
- params = {'LoadBalancerName': name}
+ params = {'LoadBalancerName': name,
+ 'Scheme': scheme}
for index, listener in enumerate(listeners):
i = index + 1
protocol = listener[2].upper()
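The second hunk threads a `Scheme` value into the request parameters, which under the bumped 2012-06-01 API allows VPC-internal load balancers. A minimal sketch of the changed setup, assuming `scheme` arrives as a new keyword argument on `create_load_balancer`:

```python
# Sketch of the patched parameter setup. 'internet-facing' is the
# ELB default; 'internal' keeps the load balancer reachable only
# from inside the VPC.
def initial_elb_params(name, scheme='internet-facing'):
    return {'LoadBalancerName': name, 'Scheme': scheme}

params = initial_elb_params('my-lb', scheme='internal')
```

From calling code this would look like `conn.create_load_balancer('my-lb', zones, listeners, scheme='internal')`; the listener-encoding loop that follows is unchanged by the patch.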
2  boto/swf/layer1.py
@@ -1152,8 +1152,8 @@ def count_open_workflow_executions(self, domain, latest_date, oldest_date,
return self.make_request('CountOpenWorkflowExecutions', json_input)
def list_open_workflow_executions(self, domain,
+ oldest_date,
latest_date=None,
- oldest_date=None,
tag=None,
workflow_id=None,
workflow_name=None,
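With this reordering, `oldest_date` becomes a required positional argument of `list_open_workflow_executions` (SWF rejects the request without an oldest-start-time filter), while `latest_date` stays optional. A hedged usage sketch; the domain and workflow names are illustrative:

```python
import time

# SWF start-time filters are epoch-second timestamps; look back
# 24 hours from now.
now = time.time()
oldest = now - 24 * 3600

# With a real Layer1 connection (not constructed here), the patched
# call puts the required filter first:
# conn.list_open_workflow_executions('my-domain', oldest,
#                                    workflow_name='my-workflow')
```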
49 docs/source/autoscale_tut.rst
@@ -42,7 +42,8 @@ Like EC2 the Autoscale service has a different endpoint for each region. By
default the US endpoint is used. To choose a specific region, instantiate the
AutoScaleConnection object with that region's endpoint.
->>> ec2 = boto.connect_autoscale(host='autoscaling.eu-west-1.amazonaws.com')
+>>> import boto.ec2.autoscale
+>>> ec2 = boto.ec2.autoscale.connect_to_region('eu-west-1')
Alternatively, edit your boto.cfg with the default Autoscale endpoint to use::
@@ -94,7 +95,8 @@ ready to associate it with our new autoscale group.
>>> ag = AutoScalingGroup(group_name='my_group', load_balancers=['my-lb'],
availability_zones=['us-east-1a', 'us-east-1b'],
- launch_config=lc, min_size=4, max_size=8)
+ launch_config=lc, min_size=4, max_size=8,
+ connection=conn)
>>> conn.create_auto_scaling_group(ag)
We now have a new autoscaling group defined! At this point instances should be
@@ -116,14 +118,14 @@ its associated load balancer.
Scaling a Group Up or Down
^^^^^^^^^^^^^^^^^^^^^^^^^^
-It can also be useful to scale a group up or down depending on certain criteria.
+It can also be useful to scale a group up or down depending on certain criteria.
For example, if the average CPU utilization of the group goes above 70%, you may
want to scale up the number of instances to deal with demand. Likewise, you
-might want to scale down if usage drops again.
-These rules for **how** to scale are defined by *Scaling Polices*, and the rules for
+might want to scale down if usage drops again.
+These rules for **how** to scale are defined by *Scaling Policies*, and the rules for
**when** to scale are defined by CloudWatch *Metric Alarms*.
-For example, let's configure scaling for the above group based on CPU utilization.
+For example, let's configure scaling for the above group based on CPU utilization.
We'll say it should scale up if the average CPU usage goes above 70% and scale
down if it goes below 40%.
@@ -132,6 +134,7 @@ the group (but not when to do it, we'll specify that later).
We need one policy for scaling up and one for scaling down.
+>>> from boto.ec2.autoscale import ScalingPolicy
>>> scale_up_policy = ScalingPolicy(
name='scale_up', adjustment_type='ChangeInCapacity',
as_name='my_group', scaling_adjustment=1, cooldown=180)
@@ -147,11 +150,11 @@ Let's submit them to AWS.
Now that the policies have been digested by AWS, they have extra properties
that we aren't aware of locally. We need to refresh them by requesting them
-back again.
+back again.
->>> scale_up_policy = autoscale.get_all_policies(
+>>> scale_up_policy = conn.get_all_policies(
as_group='my_group', policy_names=['scale_up'])[0]
->>> scale_down_policy = autoscale.get_all_policies(
+>>> scale_down_policy = conn.get_all_policies(
as_group='my_group', policy_names=['scale_down'])[0]
Specifically, we'll need the Amazon Resource Name (ARN) of each policy, which
@@ -170,6 +173,7 @@ Group, rather than individual instances. We express that as CloudWatch
Create an alarm for when to scale up, and one for when to scale down.
+>>> from boto.ec2.cloudwatch import MetricAlarm
>>> scale_up_alarm = MetricAlarm(
name='scale_up_on_cpu', namespace='AWS/EC2',
metric='CPUUtilization', statistic='Average',
@@ -188,4 +192,29 @@ Create an alarm for when to scale up, and one for when to scale down.
dimensions=alarm_dimensions)
>>> cloudwatch.create_alarm(scale_down_alarm)
-Auto Scaling will now create a new instance if the existing cluster averages more than 70% CPU for two minutes. Similarly, it will terminate an instance when CPU usage sits below 40%. Auto Scaling will not add or remove instances beyond the limits of the Scaling Group's 'max_size' and 'min_size' properties.
+Auto Scaling will now create a new instance if the existing cluster averages
+more than 70% CPU for two minutes. Similarly, it will terminate an instance
+when CPU usage sits below 40%. Auto Scaling will not add or remove instances
+beyond the limits of the Scaling Group's 'max_size' and 'min_size' properties.
+
+To retrieve the instances in your autoscale group:
+
+>>> ec2 = boto.connect_ec2()
+>>> group = conn.get_all_groups(names=['my_group'])[0]
+>>> instance_ids = [i.instance_id for i in group.instances]
+>>> reservations = ec2.get_all_instances(instance_ids)
+>>> instances = [i for r in reservations for i in r.instances]
+
+To delete your autoscale group, we first need to shutdown all the
+instances:
+
+>>> ag.shutdown_instances()
+
+Once the instances have been shutdown, you can delete the autoscale
+group:
+
+>>> ag.delete()
+
+You can also delete your launch configuration:
+
+>>> lc.delete()
1  docs/source/index.rst
@@ -31,6 +31,7 @@ Currently Supported Services
* **Deployment and Management**
* CloudFormation -- (:doc:`API Reference <ref/cloudformation>`)
+ * Elastic Beanstalk -- (:doc:`API Reference <ref/beanstalk>`)
* **Identity & Access**
26 docs/source/ref/beanstalk.rst
@@ -0,0 +1,26 @@
+.. ref-beanstalk
+
+=================
+Elastic Beanstalk
+=================
+
+boto.beanstalk
+--------------
+
+.. automodule:: boto.beanstalk
+ :members:
+ :undoc-members:
+
+boto.beanstalk.layer1
+---------------------
+
+.. automodule:: boto.beanstalk.layer1
+ :members:
+ :undoc-members:
+
+boto.beanstalk.response
+-----------------------
+
+.. automodule:: boto.beanstalk.response
+ :members:
+ :undoc-members:
7 docs/source/ref/glacier.rst
@@ -46,6 +46,13 @@ boto.glacier.writer
:members:
:undoc-members:
+boto.glacier.concurrent
+-----------------------
+
+.. automodule:: boto.glacier.concurrent
+ :members:
+ :undoc-members:
+
boto.glacier.exceptions
-----------------------
1  docs/source/ref/index.rst
@@ -8,6 +8,7 @@ API Reference
:maxdepth: 4
boto
+ beanstalk
cloudformation
cloudfront
cloudsearch