
typo and a little clean up

1 parent 0c19602 commit 3fa6e53c39446c5267369fb656ccd5aedcf334fa @psalaets psalaets committed Feb 16, 2013
Showing with 3 additions and 3 deletions.
  1. +1 −1 cache/index.html
  2. +2 −2 worker/reference/environment/index.md
@@ -44,7 +44,7 @@ <h2 id="configure">1. Configure Your Client</h2>
<h2 id="set">2. Set the Item in the Cache</h2>
-<p>Addding an item to a cache and updating an item in a cache are the same thing; if the item exists, it will be overwritten. Otherwise, it will be created.</p>
+<p>Adding an item to a cache and updating an item in a cache are the same thing; if the item exists, it will be overwritten. Otherwise, it will be created.</p>
<p>You can set an item using any of the <a href="/cache/reference/libraries">client libraries</a> or even using our <a href="/cache/reference/api">REST/HTTP API</a> directly. We're going to use the Ruby library for these examples.</p>
@@ -65,11 +65,11 @@ There is a system-wide limit for the maximum length a task may run. Tasks that e
<b>Max Worker Run Time:</b> 3600 seconds (60 minutes)
</div>
-Tip: You should design your tasks to be moderate in terms of the length of time they take to run. If operations are small in nature (seconds or milli-seconds) then you'll want to group them together so as to amortize the worker setup costs. Likewise, if they are long-running operations, you should break them up into a number of workers. Note that you can chain together workers as well as use IronMQ, scheduled jobs, and datastores to orchestrate a complex series or sequence of tasks.
+Tip: You should design your tasks to be moderate in terms of the length of time they take to run. If operations are small in nature (seconds or milliseconds) then you'll want to group them together so as to amortize the worker setup costs. Likewise, if they are long-running operations, you should break them up into a number of workers. Note that you can chain together workers as well as use IronMQ, scheduled jobs, and datastores to orchestrate a complex series or sequence of tasks.
## Priority Queue Management
-Each priority (p0, p1, p2) has a targeted maximum time limit for tasks sitting in the queue. Average queue times will typically be less than those listed on the pricing page. High numbers of tasks, however, could raise those average queue times for all users. To keep keep the processing time for high priority jobs down, per user capacities are in place for high priority queues. Limits are on per-queue basis and are reset hourly. High priority tasks that exceed the limit, are queued at the next highest priority. Only under high overall system load should queue times for tasks exceeding the capacity extend beyond the initial targeted time limits. Usage rates will be based on the actual priority tasks run on, not the priority initially queued.
+Each priority (p0, p1, p2) has a targeted maximum time limit for tasks sitting in the queue. Average queue times will typically be less than those listed on the pricing page. High numbers of tasks, however, could raise those average queue times for all users. To keep the processing time for high priority jobs down, per-user capacities are in place for high priority queues. Limits are on a per-queue basis and are reset hourly. High priority tasks that exceed the limit are queued at the next-highest priority. Only under high overall system load should queue times for tasks exceeding the capacity extend beyond the initial targeted time limits. Usage rates will be based on the actual priority tasks run on, not the priority initially queued.
<table style="text-align: center;">
<thead>
