
Partitions in a pub-sub messaging system map naturally to individual infinite
AsyncStreams. Furthermore, representing the partitions as a single stream and
applying a form of back pressure (using mapConcurrent) on consumption of that
stream and its partitions makes for a clear and simple programming model. Since
these partitions are infinite and no existing AsyncStream function matches this
usage, it became necessary to invent one.


A function for merging arbitrary, and possibly infinite, AsyncStreams:

Create a function merge that, given a sequence of AsyncStreams, returns the
interleaving of events from its inputs as they become available.
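The intended interleaving can be sketched with ordinary lazy streams. Note this toy round-robin is only an analogy: the real merge emits elements in the order their futures become available, not in a fixed rotation, and operates on AsyncStreams rather than LazyLists.

```scala
// Toy round-robin interleave of possibly-unequal lazy streams, as an
// analogy for merge's interleaving of AsyncStreams. (Hypothetical
// sketch; the real AsyncStream.merge orders elements by availability.)
def interleave[A](streams: Seq[LazyList[A]]): LazyList[A] = {
  val nonEmpty = streams.filter(_.nonEmpty)
  if (nonEmpty.isEmpty) LazyList.empty
  else LazyList.from(nonEmpty.map(_.head)) #::: interleave(nonEmpty.map(_.tail))
}

interleave(Seq(LazyList(1, 4), LazyList(2, 5, 6), LazyList(3))).toList
// => List(1, 2, 3, 4, 5, 6)
```

Because the tail is built lazily (`#:::` defers its right operand), the same definition works when the input streams are infinite.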


Twitter Util


A bunch of idiomatic, small, general purpose tools.

See the Scaladoc here.


This project is used in production at Twitter (and many other organizations) and is being actively developed and maintained. Note, however, that some sub-projects (including util-eval), classes, and methods may be deprecated.

Using in your project

An example SBT dependency string for the util-collection tools would look like this:

val collUtils = "com.twitter" %% "util-collection" % "6.34.0"



Conversions
import com.twitter.conversions.time._

val duration1 = 1.second
val duration2 = 2.minutes
duration1.inMillis // => 1000L


import com.twitter.conversions.storage._

val amount = 8.megabytes
amount.inBytes // => 8388608L
amount.inKilobytes // => 8192L


Futures

A non-actor re-implementation of Scala Futures.

import com.twitter.conversions.time._
import com.twitter.util.{Await, Future, Promise}

val f = new Promise[Int]
val g = f map { result => result + 1 }
f.setValue(1)
Await.result(g, 1.second) // => this blocks for the future's result (and eventually returns 2)

// Another option:
g.onSuccess { result =>
  println(result) // => prints "2"
}

// Using for expressions:
val xFuture = Future(1)
val yFuture = Future(2)

for {
  x <- xFuture
  y <- yFuture
} {
  println(x + y) // => prints "3"
}



The LruMap is an LRU map with a maximum size passed in. If the map is full it expires the least recently used item; reading a value moves that item to the top of the stack.

import com.twitter.util.LruMap

val map = new LruMap[String, String](15) // this is of type mutable.Map[String, String]
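The same semantics can be illustrated with the JDK's LinkedHashMap in access-order mode. This is an analogy for the behavior described above, not LruMap's actual code:

```scala
import java.util.{LinkedHashMap => JLinkedHashMap}
import java.util.Map.{Entry => JEntry}

// Sketch of LRU semantics via the JDK: accessOrder = true makes reads
// move entries to the front, and removeEldestEntry caps the size.
def lruSketch[K, V](maxSize: Int): JLinkedHashMap[K, V] =
  new JLinkedHashMap[K, V](16, 0.75f, /* accessOrder = */ true) {
    override def removeEldestEntry(eldest: JEntry[K, V]): Boolean =
      size() > maxSize
  }

val m = lruSketch[String, Int](2)
m.put("a", 1); m.put("b", 2)
m.get("a")         // reading "a" marks it most-recently-used
m.put("c", 3)      // evicts "b", the least-recently-used entry
m.containsKey("b") // => false
m.containsKey("a") // => true
```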

Object Pool

The pool order is FIFO.

A pool of constants

import scala.collection.mutable
import com.twitter.util.{Await, SimplePool}

val queue = new mutable.Queue[Int] ++ List(1, 2, 3)
val pool = new SimplePool(queue)

// Note that the pool returns Futures, it doesn't block on exhaustion.
assert(Await.result(pool.reserve()) == 1)
pool.reserve().onSuccess { item =>
  println(item) // prints "2"
}
A pool of dynamically created objects

Here is a pool of even-number generators. It stores 4 numbers at a time:

import com.twitter.util.{Future, FactoryPool}

val pool = new FactoryPool[Int](4) {
  var count = 0
  def makeItem() = { count += 1; Future(count) }
  def isHealthy(i: Int) = i % 2 == 0
}

It checks the health when you successfully reserve an object (i.e., when the Future yields).


util-hashing is a collection of hash functions and hashing distributors (e.g., ketama).

To use one of the available hash functions:

import com.twitter.hashing.KeyHasher

KeyHasher.FNV1_32.hashKey("string".getBytes)

Available hash functions include FNV1_32, FNV1_64, FNV1A_32, FNV1A_64, KETAMA, CRC32_ITU, and HSIEH, defined on the KeyHasher object.


To use KetamaDistributor:

import com.twitter.hashing.{KetamaDistributor, KetamaNode, KeyHasher}

val nodes = List(KetamaNode("host:port", 1 /* weight */, "foo" /* handle */))
val distributor = new KetamaDistributor(nodes, 1 /* num reps */)
distributor.nodeForHash("abc".##) // => "foo" (the handle of the chosen node)


util-logging is a small wrapper around Java's built-in logging to make it more Scala-friendly.


To access logging, you can usually just use:

import com.twitter.logging.Logger
private val log = Logger.get(getClass)

This creates a Logger object that uses the current class or object's package name as the logging node, so a class in package "com.example.foo" will log to node "com.example.foo" (generally showing "foo" as the name in the logfile). You can also get a logger explicitly by name:

private val log = Logger.get("com.example.foo")

Logger objects wrap everything useful from java.util.logging.Logger, as well as adding some convenience methods:

// log a string with sprintf conversion:"Starting compaction on zone %d...", zoneId)

try {
  // ... some I/O ...
} catch {
  // log an exception backtrace with the message:
  case e: IOException =>
    log.error(e, "I/O exception: %s", e.getMessage)
}

Each of the log levels (from "fatal" to "trace") has these two convenience methods. You may also use log directly:

import com.twitter.logging.Level
log(Level.DEBUG, "Logging %s at debug level.", name)

An advantage to using sprintf ("%s", etc) conversion, as opposed to:

log(Level.DEBUG, s"Logging $name at debug level.")

is that Java and Scala perform string concatenation at runtime, even if nothing will be logged because the logger isn't writing debug messages right now. With sprintf parameters, the arguments are just bundled up and passed along with the format string; if no log message would be written to any file or device, no formatting is done and the arguments are thrown away. That makes it very inexpensive to include verbose debug logging that can be turned off without recompiling and re-deploying.

If you prefer, there are also variants that take lazily evaluated parameters, and only evaluate them if logging is active at that level:

log.ifDebug(s"Login from $name at $date.")
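Both mechanisms can be sketched with a toy logger in plain Scala (hypothetical names, not the util-logging API): the sprintf variant carries its format arguments unformatted, and the lazy variant takes the message as a by-name parameter.

```scala
// Toy logger illustrating why disabled debug logging is cheap.
// (Hypothetical sketch; not the com.twitter.logging API.)
class ToyLogger(debugEnabled: Boolean) {
  var formatted = 0 // counts how many messages were actually formatted

  // sprintf-style: args are bundled and only combined when enabled
  def debug(fmt: String, args: Any*): Unit =
    if (debugEnabled) { formatted += 1; println(fmt.format(args: _*)) }

  // lazy variant: msg is by-name, so it is never even built when disabled
  def ifDebug(msg: => String): Unit =
    if (debugEnabled) { formatted += 1; println(msg) }
}

val toy = new ToyLogger(debugEnabled = false)
toy.debug("Logging %s at debug level.", "name") // no formatting happens
toy.ifDebug("Login from nobody.")               // message never constructed
// toy.formatted is still 0
```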

The logging classes are done as an extension to the java.util.logging API, and so you can use the Java interface directly, if you want to. Each of the Java classes (Logger, Handler, Formatter) is just wrapped by a Scala class.


In the Java style, log nodes are in a tree, with the root node being "" (the empty string). If a node has a filter level set, only log messages of that priority or higher are passed up to the parent. Handlers are attached to nodes for sending log messages to files or logging services, and may have formatters attached to them.

Logging levels are, in priority order of highest to lowest:

  • FATAL - the server is about to exit
  • CRITICAL - an event occurred that is bad enough to warrant paging someone
  • ERROR - a user-visible error occurred (though it may be limited in scope)
  • WARNING - a coder may want to be notified, but the error was probably not user-visible
  • INFO - normal informational messages
  • DEBUG - coder-level debugging information
  • TRACE - intensive debugging information

Each node may also optionally choose to not pass messages up to the parent node.

The LoggerFactory builder is used to configure individual log nodes, by filling in fields and calling the apply method. For example, to configure the root logger to filter at INFO level and write to a file:

import com.twitter.logging._

val factory = LoggerFactory(
  node = "",
  level = Some(Level.INFO),
  handlers = List(
    FileHandler(
      filename = "/var/log/example/example.log",
      rollPolicy = Policy.SigHup
    )
  )
)

val logger = factory()

As many LoggerFactorys can be configured as you want, so you can attach to several nodes if you like. To remove all previous configurations, use:

Logger.reset()


Handlers
  • QueueingHandler

    Queues log records and publishes them in another thread thereby enabling "async logging".

  • ConsoleHandler

    Logs to the console.

  • FileHandler

    Logs to a file, with an optional file rotation policy. The policies are:

    • Policy.Never - always use the same logfile (default)
    • Policy.Hourly - roll to a new logfile at the top of every hour
    • Policy.Daily - roll to a new logfile at midnight every night
    • Policy.Weekly(n) - roll to a new logfile at midnight on day N (0 = Sunday)
    • Policy.SigHup - reopen the logfile on SIGHUP (for logrotate and similar services)

    When a logfile is rolled, the current logfile is renamed to have the date (and hour, if rolling hourly) attached, and a new one is started. So, for example, test.log may become test-20080425.log, and test.log will be reopened as a new file.

  • SyslogHandler

    Log to a syslog server, by host and port.

  • ScribeHandler

    Log to a scribe server, by host, port, and category. Buffering and backoff can also be configured: You can specify how long to collect log lines before sending them in a single burst, the maximum burst size, and how long to backoff if the server seems to be offline.

  • ThrottledHandler

    Wraps another handler, tracking (and squelching) duplicate messages. If you use a format string like "Error %d at %s", the log messages will be de-duped based on the format string, even if they have different parameters.
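The date-stamping scheme FileHandler uses when rolling a logfile can be sketched with a hypothetical helper (the real naming logic may differ in detail):

```scala
import java.text.SimpleDateFormat
import java.util.Date

// Hypothetical sketch of rolled-logfile naming: test.log becomes
// test-20080425.log (or test-20080425-18.log when rolling hourly).
def rolledName(filename: String, date: Date, hourly: Boolean): String = {
  val stamp =
    new SimpleDateFormat(if (hourly) "yyyyMMdd-HH" else "yyyyMMdd").format(date)
  val dot = filename.lastIndexOf('.')
  val (base, ext) = if (dot >= 0) filename.splitAt(dot) else (filename, "")
  s"$base-$stamp$ext"
}

val apr25 = new SimpleDateFormat("yyyyMMdd").parse("20080425")
rolledName("test.log", apr25, hourly = false) // => "test-20080425.log"
```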


Formatters
Handlers usually have a formatter attached to them, and these formatters generally just add a prefix containing the date, log level, and logger name.

  • Formatter

    A standard log prefix like "ERR [20080315-18:39:05.033] jobs: ", which can be configured to truncate log lines to a certain length, limit the lines of an exception stack trace, and use a special time zone.

    You can override the format string used to generate the prefix, also.

  • BareFormatter

    No prefix at all. May be useful for logging info destined for scripts.

  • SyslogFormatter

    A formatter required by the syslog protocol, with configurable syslog priority and date format.

Version 6.x

Major version 6 introduced some breaking changes:

  • Futures are no longer Cancellable; cancellation is replaced with a simpler interrupt mechanism.
  • Time and duration implement true sentinels (similar to infinities in doubles). Time.now uses system time instead of nanotime + offset.
  • The (dangerous) implicit conversion from a Duration to a Long was removed.
  • Trys and Futures no longer handle fatal exceptions: these are propagated to the dispatching thread.

Future interrupts

Method raise on Future (def raise(cause: Throwable)) raises the interrupt described by cause to the producer of this Future. Interrupt handlers are installed on a Promise using setInterruptHandler, which takes a partial function:

val p = new Promise[T]
p.setInterruptHandler {
  case exc: MyException =>
    // deal with the interrupt...
}

Interrupts differ in semantics from cancellation in important ways: there can only be one interrupt handler per promise, and interrupts are only delivered if the promise is not yet complete.

Time and Duration

Like arithmetic on doubles, Time and Duration arithmetic is now free of overflows. Instead, they overflow to Top and Bottom values, which are analogous to positive and negative infinity.
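The doubles analogy is literal: IEEE doubles saturate at the infinities rather than wrapping around, and Duration.Top/Duration.Bottom play the same role for durations. In plain Scala:

```scala
// Plain-Scala illustration of the saturating arithmetic that
// Duration.Top / Duration.Bottom mirror for durations.
val top = Double.PositiveInfinity
assert(Double.MaxValue * 2 == top) // overflow saturates, never wraps
assert(top + 1.0 == top)           // "infinity" absorbs finite additions
assert(-top == Double.NegativeInfinity)
```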

Since the resolution of Time.now has been reduced (and it is also more expensive due to its use of system time), a new Stopwatch API has been introduced in order to measure elapsed durations.

It's used simply:

import com.twitter.util.{Duration, Stopwatch}
val elapsed: () => Duration = Stopwatch.start()

which is read by applying elapsed:

val duration: Duration = elapsed()
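Conceptually, Stopwatch.start captures a reference point and hands back a function that measures time since that point. A toy version over System.nanoTime and scala.concurrent.duration (a sketch, not the actual implementation) looks like:

```scala
import scala.concurrent.duration._

// Toy stopwatch in the spirit of com.twitter.util.Stopwatch:
// start() captures a reference point and returns a () => Duration.
object ToyStopwatch {
  def start(): () => FiniteDuration = {
    val t0 = System.nanoTime()
    () => (System.nanoTime() - t0).nanos
  }
}

val elapsed = ToyStopwatch.start()
// ... do some work ...
val d: FiniteDuration = elapsed()
```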


The master branch of this repository contains the latest stable release of Util, and weekly snapshots are published to the develop branch. In general, pull requests should be submitted against develop. See the contributing guidelines for more details about how to contribute.
