Commit

Merge branch 'colored-errors' into wescheme2012-master
Danny Yoo committed Aug 9, 2012
2 parents e4e2c39 + 74f1a29 commit f2dc2e9
Showing 238 changed files with 1,625 additions and 248 deletions.
3 changes: 1 addition & 2 deletions .settings/com.google.appengine.eclipse.core.prefs
@@ -1,6 +1,5 @@
#Mon Jun 18 14:30:52 EDT 2012
eclipse.preferences.version=1
-filesCopiedToWebInfLib=jsr107cache-1.1.jar|appengine-jsr107cache-1.6.6.jar|appengine-api-labs-1.6.6.jar|appengine-api-1.0-sdk-1.6.6.jar|geronimo-jta_1.1_spec-1.1.1.jar|datanucleus-jpa-1.1.5.jar|datanucleus-appengine-1.0.10.final.jar|datanucleus-core-1.1.5.jar|geronimo-jpa_3.0_spec-1.1.1.jar|jdo2-api-2.3-eb.jar
+filesCopiedToWebInfLib=appengine-api-labs.jar|appengine-endpoints.jar|jsr107cache-1.1.jar|appengine-jsr107cache-1.7.0.jar|appengine-api-1.0-sdk-1.7.0.jar|geronimo-jta_1.1_spec-1.1.1.jar|datanucleus-jpa-1.1.5.jar|datanucleus-appengine-1.0.10.final.jar|datanucleus-core-1.1.5.jar|geronimo-jpa_3.0_spec-1.1.1.jar|jdo2-api-2.3-eb.jar
gaeDeployDialogSettings=
gaeIsEclipseDefaultInstPath=true
googleCloudSqlEnabled=false
32 changes: 32 additions & 0 deletions cost-analysis.txt
@@ -0,0 +1,32 @@
We make estimates based on previous usage, in particular the blips where
our usage went past the free limits. I'm paying about $2.10 a week at the
moment.



According to AppEngine's logs, we do exceed these limits fairly
frequently, especially on:

2012-06-20
2012-06-21
2012-06-22
2012-06-14

During the week of 6/18-6/22, I paid $2.65.



On those days, the most significant costs were:

frontend instance hours (around 40 hours)
datastore reads (about half a million read requests)

A high number of frontend instance hours means that several AppEngine
servers had to be spun up to support the load at the time, so it
corresponds roughly to server load. It contributes the largest share
(70%) of our costs. Datastore reads make up the next largest chunk
(28%).


On our worst day (2012-06-22), we used $1.19 out of our reserved
$2.00.
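
As a rough sanity check, applying those shares to the $2.65 week
(assuming the week's bill splits roughly the same way as the worst
days): frontend instance hours come to about 0.70 x $2.65 = $1.86,
datastore reads to about 0.28 x $2.65 = $0.74, leaving roughly $0.05
for everything else.
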
61 changes: 61 additions & 0 deletions doc/ec2-auto-scaling-notes.txt
@@ -0,0 +1,61 @@
EC2 Auto Scaling notes


Creating the launch configurations:

$ as-create-launch-config LaunchConfig-East-20120808 --image-id ami-9fde74f6 --instance-type m1.medium --group quicklaunch-1 --region us-east-1
$ as-create-launch-config LaunchConfig-West-20120808 --image-id ami-a8038c98 --instance-type m1.medium --group quicklaunch-1 --region us-west-2




Creating the auto-scaling groups, each attached to a particular load balancer:

$ as-create-auto-scaling-group WeSchemeCompilerGroup-West --launch-configuration LaunchConfig-West-20120808 --availability-zones us-west-2a --min-size 1 --max-size 10 --load-balancers balanced-wescheme-compilers

$ as-create-auto-scaling-group WeSchemeCompilerGroup-East --launch-configuration LaunchConfig-East-20120808 --availability-zones us-east-1c --min-size 1 --max-size 10 --load-balancers LoadBalancerEast --region us-east-1




Creating the policies for adding a new instance to these groups:

$ as-put-scaling-policy MyScaleUpPolicy --auto-scaling-group WeSchemeCompilerGroup-West --adjustment=1 --type ChangeInCapacity --cooldown 300 --region us-west-2

$ as-put-scaling-policy MyScaleUpPolicy --auto-scaling-group WeSchemeCompilerGroup-East --adjustment=1 --type ChangeInCapacity --cooldown 300 --region us-east-1



Adding CloudWatch alarms that trigger the scale-up policies on high load (5 minutes at 80% average CPU):

$ mon-put-metric-alarm MyHighCPUAlarm --comparison-operator GreaterThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 300 --statistic Average --threshold 80 --alarm-actions arn:aws:autoscaling:us-west-2:093537034380:scalingPolicy:0c0d1a8a-8828-4395-99ae-8fb35b098e85:autoScalingGroupName/WeSchemeCompilerGroup:policyName/MyScaleUpPolicy --dimensions "AutoScalingGroupName=WeSchemeCompilerGroup" --region us-west-2

$ mon-put-metric-alarm MyHighCPUAlarm --comparison-operator GreaterThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 300 --statistic Average --threshold 80 --alarm-actions arn:aws:autoscaling:us-east-1:093537034380:scalingPolicy:62cdf7ae-1d9e-4436-bc77-603c00f63e5f:autoScalingGroupName/WeSchemeCompilerGroup-East:policyName/MyScaleUpPolicy --dimensions "AutoScalingGroupName=WeSchemeCompilerGroup-East" --region us-east-1




Creating the scale-down policies for dropping instances:

$ as-put-scaling-policy MyScaleDownPolicy --auto-scaling-group WeSchemeCompilerGroup-West --adjustment=-1 --type ChangeInCapacity --cooldown 300 --region us-west-2

$ as-put-scaling-policy MyScaleDownPolicy --auto-scaling-group WeSchemeCompilerGroup-East --adjustment=-1 --type ChangeInCapacity --cooldown 300 --region us-east-1

and adding the corresponding CloudWatch alarms on idleness:


$ mon-put-metric-alarm MyLowCPUAlarm --comparison-operator LessThanThreshold --evaluation-periods=1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 300 --statistic Average --threshold 40 --alarm-actions arn:aws:autoscaling:us-west-2:093537034380:scalingPolicy:3a6b57eb-c9c2-4360-8de8-5056b414c66b:autoScalingGroupName/WeSchemeCompilerGroup:policyName/MyScaleDownPolicy --dimensions "AutoScalingGroupName=WeSchemeCompilerGroup" --region us-west-2

$ mon-put-metric-alarm MyLowCPUAlarm --comparison-operator LessThanThreshold --evaluation-periods=1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 300 --statistic Average --threshold 40 --alarm-actions arn:aws:autoscaling:us-east-1:093537034380:scalingPolicy:b09438e3-8e77-4e20-9e03-40484c295922:autoScalingGroupName/WeSchemeCompilerGroup-East:policyName/MyScaleDownPolicy --dimensions "AutoScalingGroupName=WeSchemeCompilerGroup-East" --region us-east-1
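
To double-check that the policies and alarms registered, the same CLI
tool sets provide describe commands, e.g. (a sketch; worth verifying
the flags against the tools' built-in help):

$ as-describe-policies --region us-west-2
$ mon-describe-alarms --region us-west-2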


----------------------------------------------------------------------

Changing the launch configuration:

1. Make sure to add an instance by hand to each load balancer, to
account for the temporary disappearance of the auto-scaling groups'
instances.

2. Shut down the existing instances by updating the max and min sizes
of the groups down to zero (see the example commands below).

20 changes: 20 additions & 0 deletions src/org/wescheme/data/DAO.java
@@ -0,0 +1,20 @@
package org.wescheme.data;

import com.googlecode.objectify.ObjectifyService;
import com.googlecode.objectify.util.DAOBase;

/** Data Access Object for all the data we're managing with Objectify.
*
* @author dyoo
*
*/

public class DAO extends DAOBase {
    static {
        ObjectifyService.register(Feedback.class);
    }

    public void saveFeedback(Feedback feedback) {
        ofy().put(feedback);
    }
}
53 changes: 53 additions & 0 deletions src/org/wescheme/data/Feedback.java
@@ -0,0 +1,53 @@
package org.wescheme.data;

import java.util.Date;

import javax.persistence.Id;

import org.json.simple.JSONObject;

import com.googlecode.objectify.annotation.Unindexed;


/** Represents feedback we get back from WeScheme users.
* The content is unstructured for the most part; we may want
* to enforce some structure later on to help with data mining.
* @author dyoo
*
*/

public class Feedback {
    @Id Long id;
    @Unindexed String author;
    String type;
    @Unindexed String feedbackText;
    Date date;

    // No-arg constructor required by Objectify.
    Feedback() {}

    public Feedback(String author, String type, String feedbackText) {
        this.author = author;
        this.type = type;
        this.feedbackText = feedbackText;
        this.date = new Date();
    }


    public String getAuthor() { return this.author; }
    public String getType() { return this.type; }
    public String getFeedbackText() { return this.feedbackText; }


    public JSONObject toJSONObject() {
        JSONObject obj = new JSONObject();
        obj.put("id", this.id);
        obj.put("author", this.author);
        obj.put("type", this.type);
        obj.put("feedbackText", this.feedbackText);
        // The date is represented as the number of milliseconds since the epoch.
        obj.put("date", this.date.getTime());
        return obj;
    }
}
1 change: 0 additions & 1 deletion src/org/wescheme/keys/Schedule.java
@@ -23,7 +23,6 @@ public class Schedule implements Serializable {
*
*/
private static final long serialVersionUID = -1498658908860600498L;
@SuppressWarnings("unused")
@PrimaryKey
@Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
private Key key;
1 change: 0 additions & 1 deletion src/org/wescheme/project/AndroidPackage.java
@@ -21,7 +21,6 @@ public class AndroidPackage implements Serializable {
*/
private static final long serialVersionUID = 1636299813888198554L;

@SuppressWarnings("unused")
@PrimaryKey
@Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
private Key key;
1 change: 0 additions & 1 deletion src/org/wescheme/project/AndroidPackageJob.java
@@ -29,7 +29,6 @@ public class AndroidPackageJob implements Serializable {
*/
private static final long serialVersionUID = 2257895943724927957L;

@SuppressWarnings("unused")
@PrimaryKey
@Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
private Long id;
1 change: 0 additions & 1 deletion src/org/wescheme/project/SourceCode.java
@@ -21,7 +21,6 @@ public class SourceCode implements Serializable{
*/
private static final long serialVersionUID = -6657496787529704087L;

@SuppressWarnings("unused")
@PrimaryKey
@Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
private Key key;
29 changes: 29 additions & 0 deletions src/org/wescheme/servlet/AddFeedback.java
@@ -0,0 +1,29 @@
package org.wescheme.servlet;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.wescheme.data.DAO;
import org.wescheme.data.Feedback;

public class AddFeedback extends HttpServlet {
    /**
     *
     */
    private static final long serialVersionUID = -7686196925524722519L;

    public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
        String author = request.getParameter("author");
        String type = request.getParameter("type");
        String feedbackText = request.getParameter("feedbackText");
        Feedback feedback = new Feedback(author, type, feedbackText);
        new DAO().saveFeedback(feedback);

        response.setContentType("text/plain");
        response.getWriter().write("ok");
    }
}
87 changes: 87 additions & 0 deletions src/org/wescheme/servlet/DumpFeedback.java
@@ -0,0 +1,87 @@
package org.wescheme.servlet;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
import org.wescheme.data.DAO;
import org.wescheme.data.Feedback;
import org.wescheme.user.Session;
import org.wescheme.user.SessionManager;

import com.google.appengine.api.datastore.Cursor;
import com.google.appengine.api.datastore.QueryResultIterator;
import com.googlecode.objectify.Objectify;
import com.googlecode.objectify.ObjectifyService;
import com.googlecode.objectify.Query;

/**
* Retrieve all the feedback we get back from users.
* Output is represented as a JSON string:
* { feedbacks: [{ id: number, author: string, type: string, feedbackText: string, date: number } ...],
* cursor: string }
*
* The optional "cursor" argument allows us to stream the table in
* pieces, just in case it gets large enough to run into the
* request-time ceiling enforced by AppEngine.
*
* Only admins should be allowed to get at this information.
*
* This code is adapted from http://code.google.com/p/objectify-appengine/wiki/IntroductionToObjectify
* @author dyoo
*
*/
public class DumpFeedback extends HttpServlet {

    public static final long LIMIT_MILLIS = 1000 * 25; // 25 seconds: leave a little leeway under AppEngine's request deadline


    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
        // First, check that the person is an admin.
        SessionManager sm = new SessionManager();
        Session userSession = sm.authenticate(request, response);
        if (!userSession.isAdmin()) {
            response.sendError(401);
            return;
        }

        // Side effect: constructing the DAO runs its static initializer,
        // which registers the Feedback class with Objectify.
        DAO dao = new DAO();


        // Next, start dumping content until we hit the time limit.
        long startTime = System.currentTimeMillis();
        Objectify ofy = ObjectifyService.begin();
        Query<Feedback> query = ofy.query(Feedback.class);
        String cursorStr = request.getParameter("cursor");
        if (cursorStr != null) {
            query.startCursor(Cursor.fromWebSafeString(cursorStr));
        }

        JSONArray listOfFeedbacks = new JSONArray();
        String nextCursorString = null;

        QueryResultIterator<Feedback> iterator = query.iterator();
        while (iterator.hasNext()) {
            Feedback feedback = iterator.next();
            listOfFeedbacks.add(feedback.toJSONObject());
            if (System.currentTimeMillis() - startTime > LIMIT_MILLIS) {
                nextCursorString = iterator.getCursor().toWebSafeString();
                break;
            }
        }

        // Finally, dump the content back to the user.
        JSONObject result = new JSONObject();
        result.put("feedbacks", listOfFeedbacks);
        result.put("cursor", nextCursorString);
        response.setContentType("text/plain");
        response.getWriter().write(result.toString());
    }

}
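
A minimal client-side sketch of paging through DumpFeedback's output, assuming the servlet is mapped at /dumpFeedback (the mapping is not shown in this diff) and that the caller is already signed in as an admin:

// Fetch every page of feedback, following the returned cursor until it comes back null.
var fetchAllFeedback = function(collected, cursor, onDone) {
    jQuery.ajax({
        url: "/dumpFeedback",
        data: cursor ? { cursor: cursor } : {},
        dataType: "json",   // the servlet writes JSON, even though its content type is text/plain
        success: function(result) {
            collected = collected.concat(result.feedbacks);
            if (result.cursor) {
                fetchAllFeedback(collected, result.cursor, onDone);
            } else {
                onDone(collected);
            }
        }
    });
};

fetchAllFeedback([], null, function(allFeedback) {
    console.log(allFeedback.length + " feedback entries");
});
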
1 change: 0 additions & 1 deletion src/org/wescheme/user/WeSchemeUser.java
@@ -31,7 +31,6 @@ public class WeSchemeUser{
private byte[] _digest; // the hash of the salted password
@Persistent
private boolean _active; // is the account active?
@SuppressWarnings("unused")
@Persistent
private String _email;

21 changes: 21 additions & 0 deletions war-src/js/ajaxactions.js
@@ -173,4 +173,25 @@ goog.require('plt.wescheme.Program');
xhr: function(settings) { return new XMLHttpRequest(settings); }
});
};


plt.wescheme.AjaxActions.prototype.sendFeedback = function(author, type, feedbackText, onSuccess) {
    jQuery.ajax({cache : false,
                 data : {author : author,
                         type : type,
                         feedbackText : feedbackText },
                 dataType: "text",
                 type: "POST",
                 url: "/addFeedback",
                 success: function(data) {
                     onSuccess();
                 },
                 error: function(xhr) {
                     onSuccess();
                 },
                 xhr: function(settings) { return new XMLHttpRequest(settings); }
                });

};

})();
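
A minimal usage sketch for sendFeedback (the form field ids and the no-argument AjaxActions constructor are assumptions; only sendFeedback itself comes from the code above):

// Hypothetical call site: wire a feedback form's submit button to sendFeedback.
var actions = new plt.wescheme.AjaxActions();
jQuery("#feedback-submit").click(function() {
    actions.sendFeedback(
        jQuery("#feedback-author").val(),
        jQuery("#feedback-type").val(),    // e.g. "bug" or "suggestion"
        jQuery("#feedback-text").val(),
        function() { alert("Thanks for the feedback!"); });
});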
