add some more docs, and make the guide refer to them.

commit 70ef6d095984ba647c2013ded4cacfc0c8c2a5ad 1 parent 61204b4
Robey Pointer authored
46 README.md
@@ -32,7 +32,7 @@ Kestrel is:
It runs on the JVM so it can take advantage of the hard work people have
put into java performance.
-
+
- small
Currently about 2K lines of scala (including comments), because it relies
@@ -67,16 +67,15 @@ Kestrel is not:
- transactional
This is not a database. Item ownership is transferred with acknowledgement,
- but kestrel does not support multiple outstanding operations, and treats
- each enqueued item as an atomic unit.
+ but kestrel does not support grouping multiple operations into an atomic
+ unit.
Building it
-----------
-Kestrel requires java 6 (for JMX support) and ant 1.7. If you see an error
-about missing JMX classes, it usually means you're building with java 5. On a
-mac, you may have to hard-code an annoying `JAVA_HOME` to use java 6:
+Kestrel requires java 6 and sbt 0.7.4. On OS X 10.5, you may have to hard-code
+an annoying `JAVA_HOME` to use java 6:
$ export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
@@ -122,42 +121,15 @@ Configuration
-------------
Queue configuration is described in detail in `docs/guide.md` (an operational
-guide). There are a few global config options that should be self-explanatory:
-
-- `host`
-
- Host to accept connections on.
-
-- `port`
-
- Port to listen on. 22133 is the standard.
-
-- `timeout`
-
- Seconds after which an idle client is disconnected, or 0 to have no idle
- timeout.
-
-- `queue_path`
-
- The folder to store queue journal files in. Each queue (and each client of
- a fanout queue) gets its own file here.
-
-- `log`
-
- Logfile configuration, as described in configgy.
-
-- `expiration_timer_frequency_seconds`
-
- Frequency (in seconds) that a timer thread should scan active queues for
- expired items. By default, this is off (0) and no automatic expiration
- scanning happens; instead, items expire when a client does a `SET` or `GET`
- on a queue. When this is set, a background thread will periodically flush
- expired items from the head of every queue.
+guide). Scala docs for the config variables are here:
+http://robey.github.com/kestrel/doc/main/api/net/lag/kestrel/config/KestrelConfig.html
Performance
-----------
+((------FIXME------))
+
All of the below timings are on my 2GHz 2006-model macbook pro.
Since starling uses eventmachine in a single-thread single-process form, it
83 docs/guide.md
@@ -37,89 +37,38 @@ deleted with the "delete" command.
Configuration
-------------
-All of the per-queue configuration can be set in the global scope of
-`production.conf` as a default for all queues, or in the per-queue
-configuration to override the defaults for a specific queue. You can see an
-example of this in the default config file.
+The config files for kestrel are scala expressions loaded at runtime, usually
+from `production.scala`, although you can use `development.scala` by passing
+`-Dstage=development` to the java command line.
+
+The config file evaluates to a `KestrelConfig` object that's used to configure
+the server as a whole, a default queue, and any overrides for specific named
+queues. The fields on `KestrelConfig` are documented here with their default
+values:
+http://robey.github.com/kestrel/doc/main/api/net/lag/kestrel/config/KestrelConfig.html
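As a hedged illustration of that format, here is a minimal sketch of what a `production.scala` might contain, built only from the fields shown in this commit's `KestrelConfig` and `QueueBuilder`; the path and queue name are made up:

    // Sketch only: the file is a Scala expression that evaluates to a KestrelConfig.
    // Field names come from KestrelConfig.scala in this commit; values are illustrative.
    import com.twitter.conversions.storage._
    import com.twitter.conversions.time._
    import net.lag.kestrel.config._

    new KestrelConfig {
      listenAddress = "0.0.0.0"
      memcacheListenPort = Some(22133)
      queuePath = "/var/spool/kestrel"

      // override the defaults for one named queue
      queues = new QueueBuilder() {
        name = "jobs"
        maxMemorySize = 16.megabytes
        maxAge = Some(1.hour)
      } :: Nil
    }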
To confirm the current configuration of each queue, send "dump_config" to
a server (which can be done over telnet).
To reload the config file on a running server, send "reload" the same way.
-You should immediately see the changes in "dump_config", to confirm.
-
-- `max_items` (infinite)
-
- Set a hard limit on the number of items this queue can hold. When the queue
- is full, `discard_old_when_full` dictates the behavior when a client
- attempts to add another item.
-
-- `max_size` (infinite)
-
- Set a hard limit on the number of bytes (of data in queued items) this
- queue can hold. When the queue is full, `discard_old_when_full` dictates
- the behavior when a client attempts to add another item.
-
-- `max_item_size` (infinite)
-
- Set a hard limit on the number of bytes a single queued item can contain.
- An add request for an item larger than this will be rejected.
-
-- `discard_old_when_full` (false)
-
- If this is false, when a queue is full, clients attempting to add another
- item will get an error. No new items will be accepted. If this is true, old
- items will be discarded to make room for the new one. This settting has no
- effect unless at least one of `max_items` or `max_size` is set.
-
-- `journal` (true)
-
- If false, don't keep a journal file for this queue. When kestrel exits, any
- remaining contents in the queue will be lost.
-
-- `sync_journal` (false)
-
- If true, sync the journal file on disk after each write. This is usually
- not necessary but is available for the paranoid. It will probably reduce
- the maximum throughput of the server.
+You should immediately see the changes in "dump_config", to confirm. Reloading
+will only affect queue configuration, not global server configuration. To
+change the server configuration, restart the server.
-- `max_journal_size` (16MB)
+Logging is configured according to `util-logging`. The logging configuration
+syntax is described here:
+https://github.com/twitter/util/blob/master/util-logging/README.markdown
- When a journal reaches this size, it will be rolled over to a new file as
- soon as the queue is empty. The value must be given in bytes.
+Per-queue configuration is documented here:
-- `max_journal_overflow` (10)
- If a journal file grows to this many times its desired maximum size, and
- the total queue contents (in bytes) are smaller than the desired maximum
- size, the journal file will be rewritten from scratch, to avoid using up
- all disk space. For example, using the default `max_journal_size` of 16MB
- and `max_journal_overflow` of 10, if the journal file ever grows beyond
- 160MB (and the queue's contents are less than 16MB), the journal file will
- be re-written.
-- `max_memory_size` (128MB)
- If a queue's contents grow past this size, only this part will be kept in
- memory. Newly added items will be written directly to the journal file and
- read back into memory as the queue is drained. This setting is a release
- valve to keep a backed-up queue from consuming all memory. The value must
- be given in bytes.
-- `max_age` (0 = off)
- Expiration time (in milliseconds) for items on this queue. Any item that
- has been sitting on the queue longer than this amount will be discarded.
- Clients may also attach an expiration time when adding items to a queue,
- but if the expiration time is longer than `max_age`, `max_age` will be
- used instead.
-- `move_expired_to` (none)
- Name of a queue to add expired items to. If set, expired items are added to
- the requested queue as if by a `SET` command. This can be used to implement
- special processing for expired items, or to implement a simple "delayed
- processing" queue.
The journal file
2  project/plugins/Plugins.scala
@@ -15,7 +15,7 @@ class Plugins(info: ProjectInfo) extends PluginDefinition(info) {
}
override def ivyRepositories = Seq(Resolver.defaultLocal(None)) ++ repositories
- val standardProject = "com.twitter" % "standard-project" % "0.11.7"
+ val standardProject = "com.twitter" % "standard-project" % "0.11.11"
val lr = "less repo" at "http://repo.lessis.me"
val gh = "me.lessis" % "sbt-gh-issues" % "0.0.1"
4 src/main/scala/net/lag/kestrel/PeriodicSyncFile.scala
@@ -6,6 +6,10 @@ import com.twitter.conversions.time._
import com.twitter.util._
import java.io.{IOException, FileOutputStream, File}
+/**
+ * Open a file for writing, and fsync it on a schedule. The period may be 0 to force an fsync
+ * after every write, or `Duration.MaxValue` to never fsync.
+ */
class PeriodicSyncFile(file: File, timer: Timer, period: Duration) {
val writer = new FileOutputStream(file, true).getChannel
val promises = new ConcurrentLinkedQueue[Promise[Unit]]()
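A hedged construction sketch, using only the constructor shown above (the file path and period are illustrative; per the new class comment, a period of 0 forces an fsync after every write and `Duration.MaxValue` never fsyncs):

    import com.twitter.conversions.time._
    import com.twitter.util.JavaTimer
    import java.io.File

    val timer = new JavaTimer()
    // fsync the journal at most once every 50 milliseconds
    val syncFile = new PeriodicSyncFile(new File("/tmp/test.journal"), timer, 50.milliseconds)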
63 src/main/scala/net/lag/kestrel/config/KestrelConfig.scala
@@ -51,6 +51,9 @@ case class QueueConfig(
}
class QueueBuilder extends Config[QueueConfig] {
+ /**
+ * Name of the queue being configured.
+ */
var name: String = null
/**
@@ -66,7 +69,18 @@ class QueueBuilder extends Config[QueueConfig] {
*/
var maxSize: StorageUnit = Long.MaxValue.bytes
+ /**
+ * Set a hard limit on the number of bytes a single queued item can contain.
+ * An add request for an item larger than this will be rejected.
+ */
var maxItemSize: StorageUnit = Long.MaxValue.bytes
+
+ /**
+ * Expiration time for items on this queue. Any item that has been sitting on the queue longer
+ * than this duration will be discarded. Clients may also attach an expiration time when adding
+ * items to a queue, but if the expiration time is longer than `maxAge`, `maxAge` will be
+ * used instead.
+ */
var maxAge: Option[Duration] = None
/**
@@ -76,7 +90,8 @@ class QueueBuilder extends Config[QueueConfig] {
/**
* Keep only this much of the queue in memory. The journal will be used to store backlogged
- * items.
+ * items, and they'll be read back into memory as the queue is drained. This setting is a release
+ * valve to keep a backed-up queue from consuming all memory.
*/
var maxMemorySize: StorageUnit = 128.megabytes
@@ -94,10 +109,35 @@ class QueueBuilder extends Config[QueueConfig] {
*/
var discardOldWhenFull: Boolean = false
+ /**
+ * If false, don't keep a journal file for this queue. When kestrel exits, any remaining contents
+ * in the queue will be lost.
+ */
var keepJournal: Boolean = true
+
+ /**
+ * How often to sync the journal file. To sync after every write, set this to `0.milliseconds`.
+ * To never sync, set it to `Duration.MaxValue`. Syncing the journal will reduce the maximum
+ * throughput of the server in exchange for a lower chance of losing data.
+ */
var syncJournal: Duration = Duration.MaxValue
+
+ /**
+ * Name of a queue to add expired items to. If set, expired items are added to the requested
+ * queue as if by a `SET` command. This can be used to implement special processing for expired
+ * items, or to implement a simple "delayed processing" queue.
+ */
var expireToQueue: Option[String] = None
+
+ /**
+ * Maximum number of expired items to move into the `expireToQueue` at once.
+ */
var maxExpireSweep: Int = Int.MaxValue
+
+ /**
+ * If true, don't actually store any items in this queue. Only deliver them to fanout client
+ * queues.
+ */
var fanoutOnly: Boolean = false
def apply() = {
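Taken together, the fields documented in this hunk can be exercised per queue roughly like this (a sketch only; the queue name and limits are invented, and the unit conversions come from `com.twitter.conversions`):

    import com.twitter.conversions.storage._
    import com.twitter.conversions.time._

    new QueueBuilder() {
      name = "work_items"
      maxItemSize = 1.megabyte
      maxAge = Some(12.hours)
      syncJournal = 20.milliseconds      // fsync at most every 20 ms
      expireToQueue = Some("work_items_expired")
      maxExpireSweep = 100
    }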
@@ -113,7 +153,7 @@ object Protocol {
case object Binary extends Protocol
}
-trait KestrelConfig extends Config[RuntimeEnvironment => Kestrel] {
+trait KestrelConfig extends ServerConfig[Kestrel] {
/**
* Settings for a queue that isn't explicitly listed in `queues`.
*/
@@ -124,10 +164,13 @@ trait KestrelConfig extends Config[RuntimeEnvironment => Kestrel] {
*/
var queues: List[QueueBuilder] = Nil
+ /**
+ * Address to listen for client connections. By default, accept from any interface.
+ */
var listenAddress: String = "0.0.0.0"
/**
- * Port for accepting memcache protocol connections.
+ * Port for accepting memcache protocol connections. 22133 is the standard port.
*/
var memcacheListenPort: Option[Int] = Some(22133)
@@ -137,7 +180,7 @@ trait KestrelConfig extends Config[RuntimeEnvironment => Kestrel] {
var textListenPort: Option[Int] = Some(2222)
/**
- * Where queue journals should be stored.
+ * Where queue journals should be stored. Each queue will have its own files in this folder.
*/
var queuePath: String = "/tmp"
@@ -164,17 +207,7 @@ trait KestrelConfig extends Config[RuntimeEnvironment => Kestrel] {
*/
var maxOpenTransactions: Int = 1
- /**
- * Admin service configuration (optional).
- */
- val admin = new AdminServiceConfig()
-
- /**
- * Logging config (optional).
- */
- var loggers: List[LoggerConfig] = Nil
-
- def apply() = { (runtime: RuntimeEnvironment) =>
+ def apply(runtime: RuntimeEnvironment) = {
Logger.configure(loggers)
admin()(runtime)
val kestrel = new Kestrel(default(), queues, listenAddress, memcacheListenPort, textListenPort,