
Merge branch 'ostruct' into error-status-codes

Conflicts:
	changelog.md
	lib/batch_api/version.rb
2 parents 91d3cbb + b570b0e commit 80cefc561cb1b9988f7595acdbedf8f464b5f41c @arsduo committed Oct 4, 2012
@@ -2,6 +2,10 @@ v0.1.4
* Refactor errors into ErrorWrapper/BatchError
* Allow specification of custom status codes raised for errors
+v0.1.3
+* Refactor config to use a struct
+* Update readme to cover HTTP pipelining
+
v0.1.2
* Rewrite the readme
* Add travis icon
@@ -1,27 +1,27 @@
module BatchApi
- # Batch API Configuration
- class Configuration
- # Public: configuration options.
- # Currently, you can set:
- # - endpoint: (URL) through which the Batch API will be exposed (default
- #     "/batch")
- # - verb: through which it's accessed (default "POST")
- # - limit: how many requests can be processed in a single request
- # (default 50)
- # decode_json_responses - automatically decode JSON response bodies,
- # so they don't get double-decoded (e.g. when you decode the batch
- # response, the bodies are already objects).
- attr_accessor :verb, :endpoint, :limit
- attr_accessor :decode_json_responses
- attr_accessor :add_timestamp
+ # Public: configuration options.
+ # Currently, you can set:
+ # - endpoint: (URL) through which the Batch API will be exposed (default
+ #     "/batch")
+ # - verb: through which it's accessed (default "POST")
+ # - limit: how many requests can be processed in a single request
+ # (default 50)
+ # - decode_json_responses: automatically decode JSON response bodies,
+ # so they don't get double-decoded (e.g. when you decode the batch
+ # response, the bodies are already objects).
+ CONFIGURATION_OPTIONS = {
+ verb: :post,
+ endpoint: "/batch",
+ limit: 50,
+ decode_json_responses: true
+ }
- # Default values for configuration variables
+ # Batch API Configuration
+ class Configuration < Struct.new(*CONFIGURATION_OPTIONS.keys)
+ # Public: initialize a new configuration object and apply the defaults.
def initialize
- @verb = :post
- @endpoint = "/batch"
- @limit = 50
- @decode_json_responses = true
- @add_timestamp = true
+ super
+ CONFIGURATION_OPTIONS.each {|k, v| self[k] = v}
end
end
end
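Extracted from the diff above, the Struct-based configuration can be sketched as a standalone snippet; this is a re-creation for illustration, not the gem's exact file:

```ruby
# Defaults for each configuration member (mirrors the diff above).
CONFIGURATION_OPTIONS = {
  verb: :post,
  endpoint: "/batch",
  limit: 50,
  decode_json_responses: true
}

# Struct.new(*keys) generates an accessor for each option; the custom
# initializer then applies the defaults, which callers may override.
class Configuration < Struct.new(*CONFIGURATION_OPTIONS.keys)
  def initialize
    super
    CONFIGURATION_OPTIONS.each { |k, v| self[k] = v }
  end
end

config = Configuration.new
config.endpoint # => "/batch"
config.limit = 100 # override a default
```

One advantage of the Struct approach is that the option names live in a single hash, so adding an option no longer requires touching both an attr_accessor list and the initializer.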
@@ -50,8 +50,7 @@ def execute!
# Returns a hash ready to go to the user
def format_response(operation_results)
{
- "results" => operation_results,
- "timestamp" => @start_time.to_s
+ "results" => operation_results
}
end
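After this change the processor's response is just the results array wrapped in a hash; a minimal sketch of the behavior (the sample operation results are hypothetical):

```ruby
# Sketch of the simplified format_response: no timestamp, just results.
def format_response(operation_results)
  { "results" => operation_results }
end

sample = [
  { "status" => 200, "body" => "{}" },
  { "status" => 404, "body" => "" }
]
response = format_response(sample)
response.keys # => ["results"]
```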
@@ -87,14 +87,42 @@ same status code and body they would return as individual requests.
If the Batch API itself returns a non-200 status code, that indicates a global
problem.
-## Why Batch?
+## Why a Batch API?
Batch APIs, though unRESTful, are useful for reducing HTTP overhead
by combining requests; this is particularly valuable for mobile clients,
which may queue up groups of actions while offline and which want to
reduce battery consumption while connected by making fewer, better-compressed
requests.
+### Why not HTTP Pipelining?
+
+HTTP pipelining is a promising technology that would provide a simple,
+effortless way to process many requests in parallel; however, pipelining
+raised several issues for us, one of which was a blocker:
+
+* [Lack of browser
+support](http://en.wikipedia.org/wiki/HTTP_pipelining#Implementation_in_web_browsers):
+a number of key browsers do not yet support HTTP pipelining (or ship with it
+disabled by default). This will of course change in time, but for now it takes
+pipelining out of consideration. (There's a similar but more minor issue with
+[many web
+proxies](http://en.wikipedia.org/wiki/HTTP_pipelining#Implementation_in_web_proxies).)
+* The HTTP pipelining specification states that non-idempotent requests (e.g.
+[POST](http://en.wikipedia.org/wiki/HTTP_pipelining) and
+[in some
+descriptions](http://www-archive.mozilla.org/projects/netlib/http/pipelining-faq.html) PUT)
+shouldn't be made via pipelining. Though some server implementations
+reportedly do support pipelined POST requests (putting all subsequent
+requests on hold until the POST completes), this raised concerns as well for
+applications that submit a lot of POSTs.
+
+Given this state of affairs -- and my desire to hack up a Batch API gem :P --
+we decided to implement an API-based solution.
+
+### Why this Approach?
+
There are two main approaches to writing batch APIs:
* A limited, specialized batch endpoint (or endpoints), which usually handles
@@ -103,8 +131,6 @@ There are two main approaches to writing batch APIs:
* A general-purpose RESTful API that can handle anything in your application,
a la the Facebook Batch API.
-### Why this Approach?
-
The second approach, IMO, minimizes code duplication and complexity. Rather
than have two systems that manage resources (or a more complicated one that
can handle both batch and individual requests), we simply route requests as we
@@ -156,6 +182,13 @@ If it is a batch request, we:
Errors are caught and recorded appropriately.
* Send you back the results.
+At both the batch level (processing all requests) and the individual operation
+level, there is an internal, customizable middleware stack into which you can
+insert additional behavior, such as handling authentication or decoding JSON
+bodies for individual requests (the latter comes pre-included). Check out
+lib/batch_api/internal_middleware.rb for more information.
+
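A middleware in that internal stack follows the usual Rack convention of wrapping a #call-able app; the class and log message below are hypothetical, illustrative names rather than the gem's actual API:

```ruby
# Hypothetical middleware: times each wrapped call and passes the result through.
class OperationTimer
  def initialize(app)
    @app = app # the next middleware (or the endpoint) in the stack
  end

  def call(env)
    started = Time.now
    result = @app.call(env)
    warn "operation took #{((Time.now - started) * 1000).round(1)}ms"
    result # pass the downstream result through unchanged
  end
end

inner = ->(env) { { "status" => 200 } }
OperationTimer.new(inner).call({}) # returns the inner result untouched
```

See lib/batch_api/internal_middleware.rb in the gem for how the real stack is assembled.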
## To Do
The core of the Batch API is complete and solid, and so ready to go that it's
@@ -169,8 +202,6 @@ Here are some immediate tasks:
suppress output for individual requests, etc.
* Add RDoc to the spec task and ensure all methods are documented.
* Research and implement parallelization and dependency management.
-* Implement a middleware stack to allow better customization of
- request/response handling.
## Thanks
@@ -121,10 +121,6 @@ def headerize(hash)
JSON.parse(response.body)["results"].should be_a(Array)
end
- it "includes the timestamp" do
- JSON.parse(response.body)["timestamp"].to_i.should be_within(100).of(@t.to_i)
- end
-
context "for a get request" do
describe "the response" do
before :each do
@@ -8,8 +8,7 @@
verb: :post,
endpoint: "/batch",
limit: 50,
- decode_json_responses: true,
- add_timestamp: true
+ decode_json_responses: true
}.each_pair do |option, default|
opt, defa = option, default
describe "##{opt}" do
@@ -103,13 +103,6 @@
processor.strategy.stub(:execute!).and_return(stubby)
processor.execute!["results"].should == stubby
end
-
- it "adds the start time" do
- t = Time.now - 1.hour
- Timecop.freeze(t) do
- processor.execute!["timestamp"].should == t.to_i.to_s
- end
- end
end
end
