Commits on Sep 21, 2015
@kevinoliver kevinoliver finagle-core: Export metrics from Netty's HashedWheelTimer
Motivation

There is currently no visibility into `DefaultTimer` which is a
service's primary `Timer`. This can help identify services that
may be blocking their timer or doing operations that may be better
offloaded to a separate thread pool.

Solution

  * Expose a stat that tracks the deviation from the time at which
    a task was expected to run.

  * Expose a stat that tracks the number of pending tasks to be run.

Result

The fog is cleared. The metrics are exported at:

  "finagle/timer/deviation_ms"
  "finagle/timer/pending_tasks"

RB_ID=743233
548a988
@kevinoliver kevinoliver finagle-core: Fix lifecycle of meanweight gauge
Problem

The meanweight gauge that is associated with a
`TrafficDistributor.Distributor` is not explicitly closed when weight
classes change. This can manifest as a memory leak, as the
gauges will not be garbage collected as quickly as they should be.

Solution

Move the gauge to `TrafficDistributor`, explicitly update it when
the size changes, and remove it on close.

RB_ID=743315
bb072c0
@dschobel dschobel finagle-core: use orNull
use the more idiomatic orNull form instead of getOrElse(null)
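For reference, a quick sketch of the two forms (plain Scala, nothing Finagle-specific):

```scala
// Both expressions yield the contained value, or null when the Option is empty.
val some: Option[String] = Some("x")
val none: Option[String] = None

val a: String = some.getOrElse(null) // verbose form
val b: String = some.orNull          // idiomatic form
assert(a == b && b == "x")
assert(none.orNull == null)
```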

RB_ID=729406
a95d1c3
@suls suls finagle-thrift: Add test for configuring TLS via stack param
Problem

PR #412 fixed the netty-pipeline for Thrift but didn't add a test.

Solution

Added end-to-end test-case for Thrift/TLS for stack based client and
server.

Result

Green ;)

Signed-off-by: Kevin Oliver <koliver@twitter.com>

RB_ID=743938
e769040
@atollena atollena [finagle-core] Make Name.all private
Problem
--

There is no obvious way to combine a set of names into an address, because we
have no way to merge metadata. Name.all should not be used directly.

Solution
--

Make Name.all private to finagle. Remove some unused functions while I was at it.

RB_ID=743261
4aa5d4d
@kevinoliver kevinoliver finagle-httpx: Add Response.decodeBytes
Motivation

This API is useful in some cases when you have binary data.

Solution

Add decodeBytes which decodeString delegates to.

RB_ID=744318
861c003
@olix0r olix0r finagle-core: Make ServiceFactoryCache publicly accessible.
Problem

The logic encapsulated by `ServiceFactoryCache` is generally useful to
applications that build clients in dynamic ways.

Solution

Change the access modifier on the `ServiceFactoryCache` definition to
`private[finagle]`.

Signed-off-by: Kevin Oliver <koliver@twitter.com>

RB_ID=744208
96e91a9
@wmorgan wmorgan Add Buoyant to ADOPTERS.md
Blatant self-promotion / selflessly doing my part to demonstrate adoption.

Signed-off-by: Travis Brown <tbrown@twitter.com>

RB_ID=741476
0475bd8
@roanta roanta finagle-httpx: Add java forwarders for Status
RB_ID=743445
8713db9
@kevinoliver kevinoliver util-core: Remove deprecated methods from Var
Motivation

Both `c.t.util.Var` and `c.t.util.Event` were in need of some polish.

Solution

Cleanup code, fix warnings, and remove deprecated methods from `Var`.

 * To migrate `observe` and `foreach`, given `aVar.observe { t => somethingWith(t) }`
   you would write `aVar.changes.register(Witness({ t => somethingWith(t) }))`.

 * To migrate `observeUntil`, given `aVar.observeUntil(_ == something)`,
   you would write `aVar.changes.filter(_ == something).toFuture()`.

 * To migrate `observeTo`, given `aVar.observeTo(anAtomicReference)`,
   you would write `aVar.changes.register(Witness(anAtomicReference))`.
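The register/Witness pattern behind these migrations can be illustrated with a toy stand-in (this is not twitter-util's API; `ToyVar` is invented here purely for illustration):

```scala
// A minimal Var-like cell: registering a witness delivers the current value
// and every subsequent update, mirroring aVar.changes.register(Witness(f)).
final class ToyVar[T](init: T) {
  private[this] var value: T = init
  private[this] var witnesses: List[T => Unit] = Nil
  def register(w: T => Unit): Unit = { witnesses ::= w; w(value) }
  def update(t: T): Unit = { value = t; witnesses.foreach(_.apply(t)) }
}

val v = new ToyVar(1)
var seen = List.empty[Int]
v.register(t => seen ::= t)   // like aVar.observe { t => somethingWith(t) }
v.update(2)
v.update(3)
assert(seen.reverse == List(1, 2, 3))
```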

RB_ID=744282
3402dce
@blackicewei blackicewei finagle-mux: make dark mode failure detector as default
Problem

We want to turn on failure detector as default for
all mux services.

Solution

To do this with extra caution, introduce a dark-mode config as an
intermediate step, which sends pings and increases a busy counter,
but does not yet expose busy status to the load balancer.

RB_ID=744609
30d8a27
Commits on Sep 25, 2015
@vkostyukov vkostyukov finagle-httpx: Introduce Multipart for reading file uploads
Problem
The current version of the RequestDecoder (under httpx.util) doesn't
support reading file uploads from multipart/form-data HTTP POST requests.

Solution
Introduce c.t.f.httpx.Multipart, a set of classes and methods for decoding
both FileUploads and Attributes from the multipart/form-data requests.

RB_ID=730102
11e383a
@mangozz mangozz added flags to use local Memcache
RB_ID=742338
272770a
@atollena atollena [finagle-core] Log the Dtab on negative resolution
Problem
--

Services that receive a lot of traffic with overrides have
their logs filled with "service_name: Name resolution is
negative". Those are often innocuous since they are the result of an
override's serverset becoming empty.

Solution
--

Log the local Dtab. It makes it easy to distinguish the innocuous case
(with local dtab) from the dangerous one (no local dtab).

RB_ID=738736
fe4722d
@luciferous luciferous Convert finagle-native to finagle-httpx
Problem

finagle-http is going to be deprecated in favor of finagle-httpx.

Solution

Convert dependents to finagle-httpx.

RB_ID=746186
487a674
@nshkrob nshkrob finagle: Readability improvements for FailFastFactory
Problem

Improve readability of FailFastFactory. This came out of some other work that got shelved.

Solution

* Removed an unused Retrying.done Promise.
* Reordered case handling to match the order of events.
* Renamed self -> underlying.

RB_ID=744793
152c4ef
@nshkrob nshkrob finagle: Add resolver for libthrift to fix the sbt build
Problem

Finagle fails to resolve the libthrift jar when resolving plugins (when compiling project/Build.scala). libthrift is a dependency of the scrooge-sbt-plugin.

Solution

Add the https://maven.twttr.com resolver for libthrift.

RB_ID=746529
ae16cd8
@jcrossley jcrossley finagle/finagle-httpx: Add @varargs annotation to HeaderMap.apply for…
… Java compat

Problem
Calling HeaderMap.apply() with no arguments from Java results in an error.

Solution
Adds @varargs annotation to HeaderMap.apply

RB_ID=746504
fca5f0b
@kevinoliver kevinoliver finagle-thrift: Add more patience to EndToEndTest
Motivation

This new test is sometimes flaky in continuous integration.

Solution

Give it longer to succeed, which helps it tolerate short slowdowns.

RB_ID=746528
d396830
@gdickinson gdickinson Modify finagle-memcached to expose the KetamaClient's `numReps` param…
…eter

RB_ID=745441
0a17626
@tw-ngreen tw-ngreen finagle-serversets: Don't cache all ZK node date indefinitely
Problem

We currently Memoize the result of calls to getData. Since Memoize does not
offer an evict/remove feature, we cache all retrieved data indefinitely as
Entries/Vectors. This results in a memory leak that grows with serverset
churn.

Solution

Do not use Memoize to cache getData calls. Instead use a concurrent.TrieMap
and remove nodes from cache when they are no longer present in ZK.
Note: the stabilizer will still maintain these in its own cache until it
removes them.
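The shape of the fix can be sketched with a `TrieMap` (names here are illustrative, not the actual ServiceDiscoverer code):

```scala
import scala.collection.concurrent.TrieMap

// Cache fetched node data, and evict entries for nodes no longer present in
// ZK, unlike Memoize, which retains results forever.
final class NodeDataCache(fetch: String => String) {
  private val cache = TrieMap.empty[String, String]
  def get(path: String): String = cache.getOrElseUpdate(path, fetch(path))
  def retainOnly(live: Set[String]): Unit =
    cache.keySet.foreach { k => if (!live.contains(k)) cache.remove(k) }
  def size: Int = cache.size
}

val c = new NodeDataCache(p => s"data:$p")
assert(c.get("/a") == "data:/a")
c.get("/b")
c.retainOnly(Set("/a"))  // node /b left the serverset; drop its entry
assert(c.size == 1)
```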

Result

We leak a lot less memory. However, because Memoize was caching Futures
(including failures), and this patch caches results, we may call getData
multiple times for the same node. (See comment in ServiceDiscovererTest.scala).

RB_ID=745830
fc15851
Todd Segal [finagle-benchmark] Add benchmark test for zookeeper serverset resolu…
…tion

[Problem]

There is no reliable, deterministic measurement of allocation rate, nor detection of memory leaks, in serverset resolution. As a large and subtle leak was recently discovered, we need a benchmark to run before and after changes.

[Solution]

Add a standalone test service which reliably 'thrashes' serversets on a localhost-ed zookeeper. Add a client benchmark class that resolves the test serverset and receives all of the updates.

RB_ID=746925
4ad521d
@tw-ngreen tw-ngreen finagle-serversets: Avoid Future.collect when calling getData
Problem
If any getData call fails, we will fail to fetch the entire serverset

Solution
Use collectToTry, not collect.

Result
If some getData calls fail, we still fetch the rest of the serverset
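The semantic difference can be sketched with `Try` standing in for `Future` (illustrative only; the real `collect`/`collectToTry` live on `com.twitter.util.Future`):

```scala
import scala.util.{Failure, Success, Try}

// collect-style: any single failure fails the whole result.
def collectAll[A](xs: Seq[Try[A]]): Try[Seq[A]] =
  xs.foldLeft(Try(Vector.empty[A])) { (acc, t) =>
    for (v <- acc; a <- t) yield v :+ a
  }

// collectToTry-style: keep per-element results so the successes survive.
def partialResults[A](xs: Seq[Try[A]]): Seq[A] =
  xs.collect { case Success(a) => a }

val results = Seq(Success(1), Failure(new Exception("zk")), Success(3))
assert(collectAll(results).isFailure)        // whole fetch fails
assert(partialResults(results) == Seq(1, 3)) // rest of the serverset survives
```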

RB_ID=747536
e2aee53
@roanta roanta talon: move to httpx
finagle-http will soon be deprecated in favor of finagle-httpx. In
preparation for this change, we're assisting each team in converting
their targets that depend on finagle-http to finagle-httpx.

This RB contains changes for: talon/http

This spreadsheet tracks what targets have already been converted:
https://docs.google.com/spreadsheets/d/1ABj_r642UUjN-zNN9PM8AS_2cC06lSmks9SyGA20OGY

RB_ID=746574
TBR=true
150bbf1
Commits on Sep 28, 2015
@luciferous luciferous Remove finagle-http
Problem

finagle-http is deprecated in favor of finagle-httpx and should be
removed.

See our blog post on upgrading to Netty 4 for more information[1]. Drop
by finaglers[2] if you have any questions.

Solution

Remove finagle-http and convert its dependents to finagle-httpx.

[1] https://finagle.github.io/blog/2014/10/20/upgrading-finagle-to-netty-4/
[2] https://groups.google.com/d/forum/finaglers

RB_ID=746510
47bf5da
@nshkrob nshkrob Release CSL libraries (Sept 2015).
- Release finagle, ostrich, scrooge, twitter-server, util (Sept 2015).

RB_ID=747942
5f4f8b8
@nshkrob nshkrob finagle: Fix travis build
Problem

finagle CI build fails because it depends on scrooge-core.

Solution

Clone and build scrooge-core before building finagle.

RB_ID=748353
2cdd62c
Commits on Oct 05, 2015
@blackicewei blackicewei finagle: add bing to some sub-proj owner
I am familiar with the code base and confident reviewing it.

RB_ID=747151
a281c32
@drichelson drichelson finagle-memcached: Change test fixture to behave more like memcached
RB_ID=747375
95b9b31
Matt Olsen Mid September pants release
RB_ID=746706
97eeb4f
@baroquebobcat baroquebobcat Update junit security manager, mark failing tests as flaky, update se…
…curity manager publish instructions

After landing https://reviewboard.twitter.biz/r/747778/, I published the 0.0.6 jar to artifactory. This change bumps the BUILD.tools version to use the new artifact.

With this change, System.exit in tests will trigger a SecurityException and produce output on stderr. The change also ensures that tests that attempt to access the network on threads other than the main thread will fail with a security exception. The previous version would just output a stacktrace without raising an exception. That allowed tests to pass when running with multiple test run threads, though they should have failed.

I found test failures due to unsupported operations being used. I marked these tests as flaky and filed tickets for them, which I've attached as bugs on this review.

This also updates the junit-sm publish instructions as they were based on pre-monobuild layout.

RB_ID=749074
538d12e
@kevinoliver kevinoliver finagle-core: Add a deny list to LoadService
Motivation

`c.t.f.util.LoadService` puts users at the mercy of their transitive
dependencies as to what implementation gets loaded. This is simple but
can be difficult for users to manage and control.

Solution

Introduce a global flag that is a deny list for implementations.
For a given LoadService interface, if the implementation is found in the
deny list, it will not be loaded.

  -com.twitter.finagle.util.loadServiceDenied=com.twitter.finagle.stats.OstrichStatsReceiver,com.twitter.finagle.stats.CommonsStatsReceiver

Result

Users have more control over the loaded implementation when needed.
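The filtering step amounts to dropping any implementation whose class name appears in the flag's value; a hedged sketch, not the actual LoadService code (the class names besides OstrichStatsReceiver/CommonsStatsReceiver are hypothetical):

```scala
// Given candidate implementation class names and the comma-separated flag
// value, keep only the ones not on the deny list.
def filterDenied(impls: Seq[String], flagValue: String): Seq[String] = {
  val denied = flagValue.split(',').map(_.trim).filter(_.nonEmpty).toSet
  impls.filterNot(denied.contains)
}

val loaded = filterDenied(
  Seq("com.twitter.finagle.stats.OstrichStatsReceiver",
      "com.example.MyStatsReceiver"),
  "com.twitter.finagle.stats.OstrichStatsReceiver," +
    "com.twitter.finagle.stats.CommonsStatsReceiver")
assert(loaded == Seq("com.example.MyStatsReceiver"))
```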

RB_ID=748750
TBR=true
8ac8b95
@roanta roanta finagle-redis: remove concurrent lb
Problem

Redis is a stateful protocol, so we can't introduce
concurrency without managing the state across connections.

Solution

Drop the concurrent lb and return to a single pipelined
connection per client.

RB_ID=749874
e8cb4e8
@dschobel dschobel finagle-core: add netty3 owners file
RB_ID=750160
eaa64c7
@luciferous luciferous Privatize Netty types in DefaultTimer and TimerFromNettyTimer
Problem

DefaultTimer and TimerFromNettyTimer leak org.jboss.netty.util.Timer.

Solution

Rename TimerFromNettyTimer to HashedWheelTimer, make them private to
finagle and provide compatible generic constructors. To migrate, users
should be able to pass the same parameters used to construct the Netty
HashedWheelTimer directly into Finagle HashedWheelTimer.

Previously:

    val timer = new TimerFromNettyTimer(
      new netty.HashedWheelTimer(tickDuration, TimeUnit.MILLISECONDS))

Now:

    val timer = finagle.HashedWheelTimer(tickDuration.milliseconds)

RB_ID=748514
TBR=true
553835d
@tw-ngreen tw-ngreen finagle-serversets: Use FutureCache for pending getData calls
Problem:

We were not caching futures for calls to getData, thus we
sometimes called getData redundantly for the same node, wasting
effort and allocations.

Solution:

Properly cache futures for getData calls. Use util-cache's
LoadingFutureCache. Also simplified the code to make it
easier to reason about.

Result:

We don't call getData redundantly, and we allocate less memory
when fetching serversets.

RB_ID=749042
ca0c813
@roanta roanta util-core: decommission scheduler clock stats
The Scheduler clock stats were decommissioned as they only make
sense relative to `wallTime`, and the tracking error we have experienced
in production between `wallTime` and `*Time` makes it impossible to use
them reliably. It is not worth the performance and code complexity
to support them.

RB_ID=750239
7678e3b
@atollena atollena Revert "finagle-serversets: Use FutureCache for pending getData calls"
This seems to cause problems resolving the eventbus provisioning serverset
(serverset!eventbus/prod/provisioning). I could reproduce by creating a tunnel to nest in atla.

RB_ID=750974
TBR=true
615ad20
@vkostyukov vkostyukov finagle-mux: Optimize WindowedMax
Problem
`WindowedMax`, which is used by our `ThresholdFailureDetector`, creates small
structs, `AgedLong`s, which have the unfortunate property of medium lifetimes,
something the JVM's garbage collector wants to avoid.

Solution
Allocate once. Use everywhere.

Result
Zero allocations for `add`. Slightly better performance.

RB_ID=751193
cb429dc
Commits on Oct 12, 2015
@dschobel dschobel finagle-core: ignore consistently failing tests
RB_ID=751473
b6f9179
Chris Chen Make Zookeeper -client and -server usages explicit across source repo.
Problem:
The current ZooKeeper jar is a monolithic artifact that contains client and
server classes. In order to make client and server usages clearer and prep
for an upcoming split of the artifact, we need to do some classification.

Solution:
Create placeholder zookeeper-client and zookeeper-server targets. Make those
usages explicit in build targets.

Result:
Dependencies are more clear, and it will be easier to introduce split artifacts.

RB_ID=751175
2769426
@adriancole adriancole Removes zipkin annotation.duration
The OpenZipkin project dropped support for Annotation.duration. This
field was rarely used, and when used, often incorrectly. Moreover, the
zipkin query and web interfaces didn't support it. Dropping it
implicitly fixed bugs and confusion around the topic. This happened in
zipkin 1.9, and finagle-zipkin is the last trace instrumentation library
left to adjust.

See openzipkin/zipkin#717

Signed-off-by: Daniel Schobel <dschobel@twitter.com>

RB_ID=751986
0214ef2
Todd Segal [finagle-core, traffic] Allow libraries to configure defaults for Lat…
…encyCompensator. Add Twitter-Server-Internal override

[Problem]

Libraries cannot configure default settings for the LatencyCompensator module in the finagle stack client.

[Solution]

Add a `DefaultOverride` for the LatencyCompensator that can be set once per-process. If multiple callers attempt to set the override, the second caller will receive 'false' and it is up to the caller to determine how to handle failure.
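The set-once semantics described above can be sketched with an `AtomicReference` (illustrative, not the actual LatencyCompensator implementation):

```scala
import java.util.concurrent.atomic.AtomicReference

// First caller to set wins; subsequent callers get false and must decide
// how to handle the failure.
final class SetOnce[T] {
  private val ref = new AtomicReference[Option[T]](None)
  def set(value: T): Boolean = ref.compareAndSet(None, Some(value))
  def get: Option[T] = ref.get
}

val override0 = new SetOnce[Int]
assert(override0.set(50))         // first caller succeeds
assert(!override0.set(100))       // second caller receives false
assert(override0.get == Some(50)) // the first value stands
```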

RB_ID=750228
91b8808
@nshkrob nshkrob finagle-benchmark: Move the thrift out to work around the nested obje…
…ct bug

Problem

Due to https://issues.scala-lang.org/browse/SI-2034, JMH doesn't work on scrooge-generated scala files after the service-per-endpoint patch.

The problem is the triple-nested objects introduced in the service-per-endpoint Scrooge patch. E.g.
```
object Hello {
  object Hi {
    object Args {}
  }
}

val args: Hello.Hi.Args = Hello.Hi.Args()
println(args.getClass.getSimpleName) // throws InternalError: Malformed class name
```

Solution

Work around the bug by moving thrift out of finagle-benchmark into a new sbt project finagle-benchmark-thrift.

Result

JMH only introspects classes in the current project, not the dependencies, so finagle-benchmark works.

Alternative approach

I've tried changing the outer 'object Hello' to a namespace. That doesn't work because it encloses implicit objects that must be in an enclosing object scope.

RB_ID=752209
968e426
@nshkrob nshkrob finagle: [docs, tiny] Fix newServiceIface usage in scaladoc
Problem

newServiceIface takes a ServiceIface type param.

Solution

Correct the usage in the comment.

RB_ID=752200
a5d6bb3
@mosesn mosesn finagle-core: Stop recording transit latency and deadline budget for …
…clients

Problem

Finagle records transit latency for clients, but only servers
care about it.

Solution

Move the transit latency stat out of StatsFilter and into
ServerStatsFilter. Handletime is also a server-specific stat, so we
moved that into ServerStatsFilter too and deleted HandletimeFilter.
This has the added advantage of recording transit latency at the same
time we record handletime, which is one of the earliest points.

This review also handles some other miscellaneous cleanup,
making no-allocation, testable, elapsed duration easier to use,
adding tests for handletime, transit latency and deadline budget.

Result

Finagle services no longer export transit_latency_ms or
deadline_budget_ms for clients. These stats aren't useful for clients,
so they're safe to remove.

RB_ID=751268
86d9b05
@vkostyukov vkostyukov finagle: Remove finagle-stress
Problem
`finagle-stress` is pretty dead and should be removed.

Solution
Remove `finagle-stress` and its only dependent (which is also pretty
dead), `caccie/cassie-stress`.

Result
1500 LOCs have been removed.

RB_ID=752201
0327caa
@nshkrob nshkrob finagle-thrift: Fix thrift server construction for extended services
Problem

Given a thrift service ExtendedEcho that extends service Echo, constructing a service with
```
Thrift.serveIface(addr, new ExtendedEcho.FutureIface { ... } )
```
produces an Echo server, i.e. a server that's not aware of ExtendedEcho's methods. The symptom is that the server throws 'TApplicationException: Invalid method name' when called with the methods from ExtendedEcho. The cause is a bug in reflection code (the constructor takes ExtendedEcho[Future], not ExtendedEcho$FutureIface).

Solution

Use the correct class when searching for the constructor.

RB_ID=752469
7d46e51
@dschobel dschobel finagle-mux: fix flaky request draining test
Problem

Mux's ClientTest asserts properties against a global log which
picks up messages beyond the test events we want to inspect. This
makes the test flaky.

Solution

Rewrite based on metrics and a local SR instance.

RB_ID=752850
91bd4c4
@dschobel dschobel make 2.11.7 the default scala version for csl OSS
Make 2.11.7 the default scala version for csl OSS and bump patch
level of 2.10.

RB_ID=752868
47a941e
@jjmmcc jjmmcc finagle.mux: make FailureDetector controllable via flags again
Problem:

When FailureDetector.DarkModeConfig was made the default, it was done in a way that broke the ability to override the default behavior via the `com.twitter.finagle.mux.sessionFailureDetector` flag (the flag simply doesn't do anything now).

Solution:

Added "dark" as a potential flag value, set that as the default, and switched the default `Param` value back to `GlobalFlagConfig`.  This keeps `DarkModeConfig` as the default, but allows the default to be changed.

RB_ID=751701
4bfaf97
@roanta roanta finagle-core: Remove weights from c.t.f.util.Ring
Problem

We removed weights from the load balancer in a previous patch,
but there are some residual references to weights that needed
to be cleaned up.

Solution

Remove weights from c.t.f.util.Ring which is used by aperture.
This simplifies the construction of the aperture ring.

RB_ID=752149
d62ffa1
@luciferous luciferous Rename finagle-httpx to finagle-http
Problem / Solution

Rename finagle-httpx to finagle-http.

RB_ID=751876
TBR=true
USER_HOOK_ARGS=--i-am-evil -x .hooks/PRESUBMIT/validate-config-ini macaw-roi/.hooks/PRESUBMIT macaw-swift/.hooks/PRESUBMIT/check-swoop
84c9907
@vkostyukov vkostyukov util-core: Remove deprecated methods on Time and Duration
RB_ID=751771
TBR=true
80e9287
@amartinsn amartinsn Adding Globo.com to the list of adopters.
Signed-off-by: Travis Brown <tbrown@twitter.com>

RB_ID=753204
490f6f8
@vkostyukov vkostyukov finagle: Remove finagle-swift
RB_ID=752826
TBR=true
a7e51dd
@kevinoliver kevinoliver finagle-core: RetryPolicy.tries now uses jittered backoffs
Motivation

The simplest way of specifying a `RetryPolicy` via `ClientBuilder` is
`ClientBuilder.retries` and this retry policy does not wait at all in
between retries. This can have negative impacts on the downstream
service should it already be struggling.

Solution

Use jittered backoffs: start at 5 milliseconds, then draw each
subsequent backoff at random between 5 milliseconds and 3x the
previous backoff. These backoffs are capped at 200 milliseconds.
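That schedule can be sketched as follows (names invented for illustration; Finagle's actual implementation lives in `Backoff`):

```scala
import scala.util.Random

// Decorrelated-jitter style schedule: start at 5 ms; each next backoff is
// drawn uniformly between 5 ms and 3x the previous one, capped at 200 ms.
object JitteredBackoff {
  val StartMs = 5L
  val CapMs   = 200L
  def next(prevMs: Long, rng: Random): Long = {
    val hi = math.min(prevMs * 3, CapMs)
    StartMs + (rng.nextDouble() * (hi - StartMs)).toLong
  }
}

val rng = new Random(42)
val backoffs = Iterator
  .iterate(JitteredBackoff.StartMs)(JitteredBackoff.next(_, rng))
  .take(10)
  .toList
assert(backoffs.head == 5L)
assert(backoffs.forall(b => b >= 5L && b <= 200L))
```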

This is used by callers to `RetryPolicy.tries` which impacts
`ClientBuilder.retries` and `RetryingService.tries`.

`Backoff` was moved into its own file and gained some additional
capabilities, such as being able to specify a maximum for `linear`
and `exponential`, as well as two new jittered backoffs,
`exponentialJittered` and `decorrelatedJittered`.

Result

Better behaving clients should lead to more systemic resilience.

RB_ID=752629
eb99828
@jcrossley jcrossley finagle/finagle-core:: Revived FailureAccrualFactory must satisfy a r…
…equest before accepting more

Problem
A service revived after failing can immediately accept many requests.
It is more likely to fail these requests than a healthy service; it
should be conservative after revival.

Solution
After being revived, a FailureAccrualFactory enters a
'probing' state wherein it must successfully satisfy a request before
accepting more. If the request fails, it waits for the next 'markDeadFor'
period.
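The revival state machine described above could be sketched as (hypothetical names, not Finagle's internals):

```scala
// States: Alive -> Dead (failure accrual trips) -> Probing (after the
// markDeadFor period) -> Alive on a successful probe, or back to Dead
// on a failed one.
sealed trait State
case object Alive   extends State
case object Dead    extends State
case object Probing extends State

def onRevived(s: State): State = if (s == Dead) Probing else s
def onProbeResult(s: State, success: Boolean): State = s match {
  case Probing => if (success) Alive else Dead
  case other   => other
}

assert(onRevived(Dead) == Probing)
assert(onProbeResult(Probing, success = true) == Alive)
assert(onProbeResult(Probing, success = false) == Dead)
```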

RB_ID=747541
faf0479
@dschobel dschobel finagle-core: disable highres timer
Replace the default highres timer instance with the default hashed wheel timer so that context is preserved.

RB_ID=753966
b010e5b
@vkostyukov vkostyukov finagle-mux: Avoid synchronization on WindowedMax.get
RB_ID=753980
bf4b0ea
Commits on Oct 15, 2015
@dschobel dschobel finagle-http: improve handling of oversized request payloads
Problem

Rather than returning a usable error status code on oversized request
payloads our http codec implementation will throw an exception and
forcibly close the offending client connection. This turns an
application level error into a transport level failure which leaves
clients guessing at what exactly went wrong.

Related to this, finagle http servers will respond in the affirmative
(100 CONTINUE) to clients seeking confirmation that a large payload
is in fact allowed, moments before closing the client connection.

Solution

Insert a pipeline handler before the chunk aggregator and before
the ExpectContinue handler which fails fast on oversized requests
and returns the correct error code.

Result

An actionable client side error code for oversized requests and no
spurious 100 Continue responses to doomed client requests.

RB_ID=753664
7a8575c
@roanta roanta finagle-core: eagerly remove `Balancers` gauges on close
Problem

Balancers can be re-constructed across the lifetime of a
client via wily. Thus, it's important to clean up exposed gauges
when their context is closed, in order to avoid exposing noisy
stats and occupying memory for longer than necessary.

Solution

Cleanup gauges exposed by the `Balancers` factory methods.

RB_ID=754652
0dc723b
Todd Segal [finagle-core, finagle-memcached] Add FixedInetResolver for clients o…
…r other resolvers that do not want to poll for DNS updates

Problem:

The InetResolver does not cache DNS lookups and polls every 5 seconds for updates, per recent RB #735006. For certain scenarios this is not desirable, as it can add hundreds of spurious lookups.

-- The zk2 resolver deals with large serversets and does not want to re-look up all entries on every membership change. In this case, we do not expect changes to be communicated via changing IP addresses, but via host name changes.
-- The twemcached client uses a large fixed set of hosts serving different sections of the cache. These IP addresses should not change to indicate new servers in the set; instead, the host names should change.

Solution:

Allow clients to opt into the "old" behavior of caching DNS lookups indefinitely. The zk2 resolver was already doing this. Promote the zk2 resolver's code to a new FixedInetResolver and have the twemclient builder specify its schema in its names.

RB_ID=753712
b7c3a55
@roanta roanta finagle: add Hotel Urbano as an adopter
RB_ID=755152
ed082f5
Peter Schuller Remove ex-employees from OWNERS files.
RB_ID=755695
TBR=true
NO-QUEUE=true
NO_USER_HOOK=1
d87bb14
@kevinoliver kevinoliver util-lint: Add lint package to flag questionable practices
Motivation

There are a variety of best practices we can identify at runtime and
use to help our users improve their service.

Solution

Introduce a new util-lint module at com.twitter.util.lint which allows
us to register `Rules` that, when run, can return 0 or more `Issues`.

This commit adds a few rules:

 * Multiple `StatsReceivers` registered.

 * Large cumulative gauges.

 * Large number of requests to `StatsReceiver.{stat,counter,addGauge}`.

 * Duplicate names used for clients or servers.

Result

Users can visit /admin/lint with TwitterServer and see if there is
anything they can improve with their service.

RB_ID=754348
5dfe12d
@jcrossley jcrossley finagle/finage-core: Make FailureAccrual markDeadFor use exponential …
…backoff by default

Problem
The default value for markDeadFor in FailureAccrualFactory is a constant,
so frequently failing nodes are regularly reinstated after a timeout.

Solution
FailureAccrualFactory uses jittered backoffs (starting at 5s, up to 300s)
as the duration to mark dead for, if markDeadFor is not configured.

RB_ID=746930
0aec0a7
@roanta roanta csl: bump lib versions
RB_ID=756082
TBR=true
745578b
@vkostyukov vkostyukov util-stats: Improve docs for InMemoryStatsReceiver
RB_ID=755684
3cb59be
Commits on Oct 19, 2015
@vkostyukov vkostyukov finagle: Log exceptions that occur in the server stack
Problem
Unhandled exceptions from the Finagle server stack are silently dropped.

Solution
Replaced `DefaultMonitor` with `RootMonitor`, which logs all the unhandled
exceptions.

RB_ID=728585
89bb1eb
@vkostyukov vkostyukov finagle|util: Fix broken changelogs
- Duplicated section "Runtime Behavior Changes" under the 6.30.0 release in finagle.
- Wrong indentation in util's changelog.
- Missing changelog for TwitterServer.

RB_ID=756329
485a4ae
@arnarthor arnarthor finagle: Adding QuizUp to adopters list
Signed-off-by: Vladimir Kostyukov <vkostyukov@twitter.com>

RB_ID=756337
169a919
@vkostyukov vkostyukov finagle-http: Check for potential integer overflow of the HTTP messag…
…e size

RB_ID=756352
e917f27
Commits on Oct 26, 2015
@kevinoliver kevinoliver finagle, scrooge: Remove caliper
Motivation

We have moved to JMH from Caliper for microbenchmarks. A few
references and benchmarks remained.

Solution

Port the one interesting benchmark to JMH and remove the others.

RB_ID=756942
eb3b0f8
@dturner-tw dturner-tw Revert "finagle-http: Check for potential integer overflow of the HTT…
…P message size"

This is breaking the build.

RB_ID=757293
TBR=true
NO-QUEUE=true
2b83fa5
@kevinoliver kevinoliver finagle: Flaky tests
Cleanup for some flaky tests

RB_ID=756955
c5049b1
@kevinoliver kevinoliver util-core: Timers should propagate Locals
Motivation

`com.twitter.util.Timer` implementations typically run code in a
different execution context from where it was scheduled. If the
corresponding `com.twitter.util.Locals` are not propagated, this can
lead to unexpected behavior. Finagle's most commonly used `Timer`
implementation, `com.twitter.finagle.util.HashedWheelTimer`, already
propagates them. However, two other implementations, `JavaTimer` and
`ScheduledThreadPoolTimer`, did not.

Solution

`Timer` now has final implementations for `schedule` which delegate to
new protected `scheduleOnce` and `schedulePeriodically` methods. This
is done to ensure that `Locals` are captured when the task is
scheduled and then used when the task is run. Existing `Timer`
implementations should rename their existing `schedule` methods to
work with the new interface.
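The capture-at-schedule, restore-at-run pattern can be sketched with a plain `ThreadLocal` standing in for `com.twitter.util.Local` (illustrative only):

```scala
// Wrap a task so the value present at scheduling time is visible when the
// task later runs, even if the running thread's context has changed.
val local = new ThreadLocal[String]

def capturing(task: () => Unit): () => Unit = {
  val saved = local.get() // captured when the task is scheduled
  () => {
    val old = local.get()
    local.set(saved)      // restored when the task runs
    try task() finally local.set(old)
  }
}

local.set("request-123")
var seen: String = null
val wrapped = capturing(() => seen = local.get())
local.set("other")       // simulate the timer thread's different context
wrapped()
assert(seen == "request-123") // the scheduled-time value was propagated
assert(local.get() == "other") // and the runner's context was restored
```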

Result

`Locals` are available when the task is run, as would be expected.

RB_ID=755387
TBR=true
47925f5
@vkostyukov vkostyukov finagle-http: Check for potential integer overflow of the HTTP messag…
…e size

Problem
If an HTTP server or client is configured with a request/response size bigger
than 2 GB, it will fail at runtime (when the `ListeningServer` or `Transporter`
is constructed) and will drop the connection.

Solution
Fail fast instead of instantiating a wrongly configured server/client.
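The fail-fast check amounts to validating that a configured size fits in a signed 32-bit length before constructing the server/client; a sketch (names illustrative):

```scala
// Length fields downstream are Ints, so any configured max message size
// above Int.MaxValue bytes (~2 GB) cannot be represented.
def validateMaxSize(bytes: Long): Int = {
  require(bytes > 0 && bytes <= Int.MaxValue,
    s"maximum message size must be in (0, ${Int.MaxValue}] bytes, got $bytes")
  bytes.toInt
}

assert(validateMaxSize(16L * 1024 * 1024) == 16777216)               // 16 MB ok
assert(scala.util.Try(validateMaxSize(3L * 1024 * 1024 * 1024)).isFailure) // 3 GB fails fast
```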

RB_ID=757256
d998a16
@dschobel dschobel finagle-core: fix protocol library name configuration in builders
Problem

The global registry can't access the protocol name for CodecFactory#Client
and CodecFactory#Service instances, which leads to unspecified
protocols even for CodecFactory instances with defined protocol names.

Solution

Configure the protocol name at stack eval time.

Result

Fewer unregistered protocols.

RB_ID=757260
ab5a3b1
Alex Yarmula loadServiceDenied: fix the usage comment to properly use global flag
There are two ways to use loadServiceDenied:

(runtime flag) java MyClass -com.twitter.finagle.util.loadServiceDenied=com.twitter.finagle.stats.OstrichStatsReceiver
(global flag) java -Dcom.twitter.finagle.util.loadServiceDenied=com.twitter.finagle.stats.OstrichStatsReceiver MyClass

Both ways start the process, but due to initialization order, the first way doesn't achieve the desired filtering.

This RB fixes the comment that implicitly advises the syntax of the first way. It also adds OstrichExporter to the examples, as it is loaded too and is commonly added to the deny list.

RB_ID=758031
b4a26f8
@adleong adleong Replace macaw group with api group. Cleanup OWNERS and GROUPS files a…
…ccordingly.

RB_ID=758183
75bf985
@blackicewei blackicewei finagle-core: ClientRegistry cleans up duplicates when service closes
Problem

ClientRegistry does not remove entries from a duplicate
list it maintains. This can cause a memory leak if services
create lots of clients.

Solution

Clean up the duplicate buffer when the client closes.

RB_ID=757874
2627735
@blackicewei blackicewei finagle: revert two recent failureAccrual changes
Problem

We introduced a bug for Memcache when hostEjection is on. Once a host
is ejected from the cache ring, there is no way for the failureAccrual
probe to work, so the node is never added back into the ring.

Solution

For now revert two related commits:

Revert "finagle/finagle-core:: Revived FailureAccrualFactory must satisfy a request before accepting more"
Revert "finagle/finage-core: Make FailureAccrual markDeadFor use exponential backoff by default"

RB_ID=758383
NO-QUEUE=true
1f144b7
@weibohe weibohe finagle-exp: Increment timeouts counter when the original request exc…
…eeds cutoff and causes a backup request to be issued in BackupRequestFilter

Problem

When the original request exceeds cutoff and causes a backup request to be issued, BackupRequestFilter forgets to increment the 'timeouts' counter.

Solution

Increment the counter right before issuing the backup request.

RB_ID=758716
462bbcf
@vkostyukov vkostyukov finagle-http: Add vkostyukov to the finagle-http owners
RB_ID=758804
8adde80
@kevinoliver kevinoliver finagle-core: DefaultMonitor respects Failure.logLevel
Motivation

Exceptions that propagate to the default `Monitor` are logged at
Warning level. This led to excessive logging for `BackupRequestLost`.

Solution

`DefaultMonitor` is now the default `Monitor` and it respects
`Failure.logLevel`. This also necessitated a change so that
`BackupRequestLost` is no longer a singleton Exception. Instead, use
`BackupRequestLost.Exception`.

RB_ID=758056
c334cc3
@vkostyukov vkostyukov util-stats: Remove deprecated methods on StatsReceiver
RB_ID=757414
TBR=true
f10f2d7
@adleong adleong finagle-core: Add a serveAndAnnounce variant that accepts a SocketAdd…
…ress.

Problem
Server#serveAndAnnounce does not have a variant that accepts a SocketAddress.

Solution
Add a Server#serveAndAnnounce variant that accepts a SocketAddress.

RB_ID=758862
5558d64
@vkostyukov vkostyukov finagle-http: Convert unhandled exceptions into 500s
Problem
Any unhandled exception from the HTTP stack results in
a closed channel (dropping the connection) instead of sending out
a very basic HTTP 500 response.

Solution
Turn any unhandled exceptions into 500s.

Result
HTTP services built with Finagle behave healthier in the
default configuration.

RB_ID=755846
571bbe4
@kevinoliver kevinoliver util-core: Promote AsyncStream out of experimental
Motivation

`AsyncStream` has been in `com.twitter.concurrent.exp` to allow it to
sort out any issues. It has been a success so far and it is time to
promote it to `com.twitter.concurrent`.

Solution

The code has moved but there are deprecated forwarding types in the
`com.twitter.concurrent.exp` package objects that should allow Scala
devs to migrate seamlessly. Java devs will need to update their code.

Thank you to Neuman Vong for getting AsyncStream to this point.

RB_ID=758061
TBR=true
7ebdb29
Todd Segal WilyNs Service Discovery End-to-End CI test base + offline scenario t…
…est.

Service Discovery has complex dependencies and resultant edge cases and scenarios which need both thorough deterministic testing as well as regression detection.

This is the base CI-friendly test suite (that will be expanded in future RBs) which verifies scenarios working end to end with:
 - In-Proc zookeeper
 - In-proc zookeeper command controller (can force expire sessions, cause failures)
 - In-proc wilyns server with timing features disabled (stabilization)
 - wilyns-client installed as the default Namer

Also, included a verification for TRFC-508 (clients should receive a cached response from wilyns when zk is offline)

[CSL: Only tiny changes to make some items protected to allow overriding in unit tests]

RB_ID=757599
7b3cbfc
Commits on Oct 27, 2015
@vkostyukov vkostyukov Add Brigade to adopters list
RB_ID=759630
697b638
@vkostyukov vkostyukov csl: Add maven-central badges to finagle, util, twitter-server and sc…
…rooge

RB_ID=759680
ed81c42
@blackicewei blackicewei finagle: add bing to owner of finagle-core, finagle-thriftmux
I am confident reviewing these changes.

RB_ID=759885
d7d823e
@vkostyukov vkostyukov csl: Fix TravisCI builds
Problem
TwitterServer depends on `finagle-zipkin`, which depends on `finagle-thrift`,
which depends on `finagle-core`, which depends on `scrooge-core`.

Solution
Publish local all the required dependencies for twitter-server. Cleanup the rest
of the travisci scripts.

RB_ID=759759
745ef2b
@blackicewei blackicewei finagle-mux: remove close() from failureDetector, make session recove…
…rable from TPing failures

Problem

Session lifetime management should be managed by the layer
above `ThresholdFailureDetector`.

When a ping returns an exception, the session is marked busy
forever with no way to recover from failures.

Solution

Remove close() from `ThresholdFailureDetector`. It's enough
to mark its Status as Closed. SingletonPool does the right
thing: it closes the transport and reconnects.

When TPing fails, mark the Status as Closed.

RB_ID=756833
2dd3ed0
@luciferous luciferous Optimize Filter.andThen
Problem

The definition of `andThen` which accepts a `Filter` parameter is the
most common usage for composing filters and a service, e.g.,

    filterA andThen filterB andThen service

The resulting Service allocates an additional Service per filter for
every request. Merely associating `andThen` to the right avoids the
allocations.

    filterA andThen (filterB andThen service)

This is possible because the right-associated pattern invokes the
definition of `andThen` which accepts a Service parameter. However, it
is not possible to do this if the user wants to build a filter chain
without a service on hand.

Solution

Introduce a hidden type (`AndThen`) that wraps a continuation; when we
are ready to build the Service, the continuation re-associates the
`andThen`s into the most efficient pattern: to the right.

The continuation represents the prefix filter chain applied to a suffix
Service received as the argument, and therefore has the same type as the
`andThen` which takes a Service parameter: `Service[ReqOut, RepIn] =>
Service[ReqIn, RepOut]`.

Benchmark

Old
                                   (numAndThens)  Mode  Cnt     Score     Error   Units
andThenFilter                                  1  avgt   10    10.162 ±   0.077   ns/op
andThenFilter:·gc.alloc.rate.norm              1  avgt   10    ≈ 10⁻⁵              B/op
andThenFilter                                 10  avgt   10    95.834 ±   1.872   ns/op
andThenFilter:·gc.alloc.rate.norm             10  avgt   10   240.000 ±   0.001    B/op
andThenFilter                                 20  avgt   10   212.385 ±   3.959   ns/op
andThenFilter:·gc.alloc.rate.norm             20  avgt   10   480.000 ±   0.001    B/op

New
                                   (numAndThens)  Mode  Cnt   Score    Error   Units
andThenFilter                                  1  avgt   10  11.199 ±  0.181   ns/op
andThenFilter:·gc.alloc.rate.norm              1  avgt   10  ≈ 10⁻⁵             B/op
andThenFilter                                 10  avgt   10  47.235 ±  0.507   ns/op
andThenFilter:·gc.alloc.rate.norm             10  avgt   10  ≈ 10⁻⁴             B/op
andThenFilter                                 20  avgt   10  85.747 ±  1.062   ns/op
andThenFilter:·gc.alloc.rate.norm             20  avgt   10  ≈ 10⁻⁴             B/op

Signed-off-by: Vladimir Kostyukov <vkostyukov@twitter.com>

RB_ID=759635
47b2743
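The re-association described above can be sketched with a toy version (hypothetical simplified types and names, not Finagle's actual `AndThen`): the filter chain is stored as a continuation from the eventual Service to the wrapped Service, so supplying a concrete service at the end right-associates the whole composition instead of allocating a Service per filter per request.

```scala
object AndThenSketch {
  type Service[Req, Rep] = Req => Rep
  // A filter receives the request plus the next service in the chain.
  type Filter[Req, Rep] = (Req, Service[Req, Rep]) => Rep

  // The chain is a continuation: given the final Service, build the
  // fully wrapped Service. Composing chains composes continuations,
  // so the `andThen`s are effectively re-associated to the right.
  final case class AndThen[Req, Rep](build: Service[Req, Rep] => Service[Req, Rep]) {
    def andThen(f: Filter[Req, Rep]): AndThen[Req, Rep] =
      AndThen(svc => build(req => f(req, svc)))
    def andThenService(svc: Service[Req, Rep]): Service[Req, Rep] =
      build(svc)
  }

  def filter[Req, Rep](f: Filter[Req, Rep]): AndThen[Req, Rep] =
    AndThen(svc => req => f(req, svc))
}
```

For example, `filter(addOne).andThen(double).andThenService(svc)` builds the final Service once, up front, rather than wrapping an intermediate Service for each filter on every request.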
Commits on Oct 28, 2015
@adriancole adriancole Adds documentation to annotations as they are often misunderstood
Instrumentation libraries almost always get some aspect of annotations
wrong. This is an attempt to document them, so that folks have a better
chance at creating portable implementations.

Signed-off-by: Vladimir Kostyukov <vkostyukov@twitter.com>

RB_ID=759633
3a98a83
@jcrossley jcrossley finagle-core, finagle-memcached: FailureAccrualFactory probing works …
…with memcache

Problem
Probing did not work correctly with memcache because ejected hosts were never returned to the ring.

Solution
Ejected hosts are now returned to the ring when probing starts.

RB_ID=759194
782e46e
@blackicewei blackicewei finagle-mux: change closeThreshold to be duration in failureDetector
Problem

With long-tail latency, closeThreshold is hard to
configure as a multiplier.

Solution

Change it to a time duration, which is much easier to reason
about.

RB_ID=759406
065ff90
@vkostyukov vkostyukov csl: Fix OSS builds
Problem
The OSS build is still broken for the following reasons:

1. My recent patch of `bin/travisci` had a typo under Scrooge (`.sbt` instead of `./sbt`). Sorry.
2. Sbt runs the tests in an order that differs from pants. And somehow, the order matters
   for `finagle-http` due to some tests that use the "addr=label" address format. The reason for
   this is a global mutable state per `ServerRegistry` that is shared across Finagle instances
   and _overrides_ the server label.
3. Finagle uses an old version of scrooge-sbt-plugin that had [a bug dealing with hierarchical
   thrift structures][1].

Solution

1. Fix typo in `scrooge/bin/travisci`.
2. Do not use `addr=label` format as address in tests to avoid using `ServerRegistry`.
3. Update Scrooge to the [most recent version][2].

RB_ID=760532
b6ef451
Commits on Oct 30, 2015
@adleong adleong finagle: Avoid overflow when bitshifting in Backoff calculations.
Problem:
The maximum backoff value for equalJittered and exponentialJittered can be negative if a bit shift overflows into the sign bit.  This causes java.lang.IllegalArgumentExceptions.

Solution:
Implement a maximum amount to shift to protect the sign bit.

RB_ID=761435
203c4ec
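The sign-bit protection can be sketched as follows (hypothetical helper name, not the actual patch): cap the shift amount so `1L << shift` can never overflow into the sign bit, and saturate instead of wrapping to a negative duration.

```scala
object BackoffSketch {
  // Multiply a base delay by 2^attempt, capping the shift at 62 so
  // the multiplier stays positive, and saturating on overflow.
  def exponentialMillis(baseMillis: Long, attempt: Int): Long = {
    val shift = math.min(attempt, 62) // shifting by 63 would hit the sign bit
    val multiplier = 1L << shift
    if (baseMillis > Long.MaxValue / multiplier) Long.MaxValue
    else baseMillis * multiplier
  }
}
```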
@blackicewei blackicewei finagle-mux: turn on session based failure detection by default.
RB_ID=756213
1c7aed4
@vkostyukov vkostyukov csl: Add Gitter badges and fix HipChat template and skip flaky tests
There are (still) plenty of problems with our Github repos:

a) Finagle build is broken;
b) READMEs don't have a link to Gitter room;

Solution

a) Add a `SKIP_FLAKY` env variable to Travis' config to skip all the
   flaky tests;
b) Add Gitter badge for all the projects (except for Ostrich) that
   redirects to Finagle's room;

RB_ID=761629
a424ec3
Commits on Nov 02, 2015
Todd Segal [finagle-serversets] Zk2Resolver reports Addr.Pending when its ZkClie…
…nt is unhealthy

Problem:

The Zk2Resolver relies on a healthy zookeeper client to accurately report updates to its clients, however, this resolver doesn't report any changes when its client dies or cannot connect.

Solution:

When the underlying zk client is unhealthy (not connected) for > 1 removal window as defined in the existing stabilization config, report that the Var[Addr] is no longer definitively Neg or Bound but rather Addr.Pending.

The finagle loadbalancer and traffic distributor already use cached values when receiving an Addr.Pending update from the resolver (and have a test in place to verify such). Thus this is a safe change to make for existing distribution scenarios.

Implementation:

 - ServiceDiscoverer exposes a health var.
 - HealthStabilizer buffers up unhealthy notices until a probation interval has passed
 - Zk2Resolver surfaces Addr.Pending if it receives unhealthy from its stabilized health var.

RB_ID=760771
b1b3f47
Commits on Nov 09, 2015
@roanta roanta finagle-mux: Push message enc/dec into transports
Problem

We don't have a good way to build mux components that need to be
shared between clients and servers. For example, both senders and
receivers need to implement pings, handshake logic, and payload
splitting.

Solution

Map the mux `Transport[ChannelBuffer, ChannelBuffer]` to `Transport[Message, Message]`.
This allows us to start implementing this logic at the finagle transport layers.

RB_ID=759707
e3c79bf
Jeremie Castagna finagle-thrift: ThriftRichClient newServiceIface method signature cha…
…nge to set service labels


Problem

Using ServiceIFace didn't allow for passing a label, which made stats unscoped

Solution

Implement a newServiceIface method signature that requires passing a label and deprecate old method signatures

Result

ServiceIface stats will be labeled. Code using the method without label support will get deprecation warnings.

RB_ID=760157
818285a
Brian Rutkin Problem:
  The new lint rules can help identify whether StatsReceiver metric creation is being requested repeatedly at runtime, but they do not help identify which specific metrics are being requested.

 Solution:
  Add trace-level logging to MetricsStatsReceiver to make debugging easier.  It can be enabled at runtime via the admin interface when using Twitter-Server.

RB_ID=763368
7830952
@kevinoliver kevinoliver finagle-core, util-logging: Introduce HasLogLevel trait
Motivation

More control over the log level used for exceptions is necessary.
While `Failure` provides for this, it is marked as `final` and it can
be difficult to shoehorn other existing exception classes into it.

Solution

Introduce a small interface, `HasLogLevel` with a single method
`logLevel` that can be mixed into other exceptions as necessary.

Result

`Failure` mixes it in as does
`c.t.f.mux.ClientDiscardedRequestException`.

RB_ID=762874
5837bbd
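The shape of the trait can be sketched like this (a minimal illustration using `java.util.logging.Level` and a hypothetical exception name; the real trait lives in util-logging):

```scala
import java.util.logging.Level

// A small interface exposing the level an exception should be logged at.
trait HasLogLevel { def logLevel: Level }

// An exception that opts into quiet logging by mixing in the trait.
class DiscardedRequestException extends Exception("discarded") with HasLogLevel {
  def logLevel: Level = Level.FINE
}

// A monitor-style helper picks the exception's level, defaulting to WARNING.
def levelFor(t: Throwable): Level = t match {
  case h: HasLogLevel => h.logLevel
  case _              => Level.WARNING
}
```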
@ryanoneill ryanoneill finagle-core: Update StatsReceiver Link on Clients Page of User Guide
Problem:

The StatsReceiver link on the Clients page of the Finagle User Guide
points to StatsReceiver.scala within finagle-core. This file is now
located in util/util-stats.

Solution:

Change the StatsReceiver link in the documentation to point to the
util/util-stats version of the file on GitHub.

Result:

A working StatsReceiver link on the Clients page of the Finagle User
Guide.

RB_ID=764458
1d750f0
@kevinoliver kevinoliver finagle: Introduce a budget for retrying failed requests
Motivation

Currently, clients can specify a `RetryPolicy` which determines which
types of failed requests are retried and how many retries to attempt.
This is flexible and gives users a good amount of control. However,
when downstream services are under duress, so called "retry storms"
can amplify the effects.

As an example, given a common setup of a front end talking to a middle
tier which talks to a backend. If both the front end and the middle
tier allow for 4 retries, and things go south on the backend, the
resulting amplification is 25x onto the backend service.

Solution

Introduce a dynamic budget that controls when it is okay to retry a
failed request such that whether or not a request can be retried is
now checked against both the policy as well as the budget.

The default budget `RetryBudget.apply()` allows for 20% of the total
requests to be retried on top of a minimum of 10 retries per second
to accommodate clients that have just started issuing requests or
clients that issue a low rate of requests per second.

Clients configured via `ClientBuilder` can customize the budget via
`ClientBuilder.retryBudget(Retries.Budget)` and clients configured via
`Stack` can use `configured(Retries.Budget)`.

RB_ID=760213
39e1163
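The deposit/withdraw idea behind such a budget can be sketched as follows (a hypothetical simplification, not Finagle's implementation, and omitting the minimum 10-retries-per-second reserve): each issued request deposits a small amount into the budget and each retry withdraws a full unit, so with a 20-per-100 deposit ratio roughly 20% of requests may be retried.

```scala
// Integer token bucket: depositPerRequest = 20 and costPerRetry = 100
// approximates "20% of requests may be retried".
final class RetryBudgetSketch(depositPerRequest: Int, costPerRetry: Int) {
  private[this] var balance: Long = 0L

  // Called on every request issued.
  def deposit(): Unit = synchronized { balance += depositPerRequest }

  // Called before retrying; returns false when the budget is exhausted.
  def tryWithdraw(): Boolean = synchronized {
    if (balance >= costPerRetry) { balance -= costPerRetry; true } else false
  }
}
```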
@vkostyukov vkostyukov finagle-core: Less allocations for Backoffs and Duration
Problem
Default backoff for Failure Accrual has recently been changed from a const function `() => Duration`
to `Backoff.equalJittered`, which allocates around 728 bytes per call to `tail`/`head` (due to the
high allocation rate of `Duration.*(Long)`). Given that this operation happens on pretty much every
failed request, it makes sense to reduce allocations and improve the performance of jittered backoffs.

Solution
Optimize `Duration.*(Long)` (scale by long number).

RB_ID=764212
f60a996
Commits on Nov 16, 2015
@tw-ngreen tw-ngreen finagle-serversets: Use FutureCache for pending getData calls
Problem:

We were not caching futures for calls to getData, thus we
sometimes called getData redundantly for the same node, wasting
effort and allocations.

Solution:

Properly cache futures for getData calls. Use util-cache's
LoadingFutureCache. Also simplified the code to make it
easier to reason about.

Result:

We don't call getData redundantly, and we allocate less memory
when fetching serversets.

RB_ID=753634
f1e956c
Nik Johnson finagle-stats: Add a flag for loading stats filters from a file
Problem

The only way to load stats filters is via a CSV list passed on the command line.

Solution

Add a flag which accepts a path to a file and load the file as the list of stats filters. Combine both lists for a complete list.

Only a new flag is added; current behavior does not change.

RB_ID=764914
e896f0d
@mosesn mosesn util-core: Provides an AsyncMeter, which asynchronously rate limits t…
…raffic

Problem

We have very fast sources, and want to space out production in a
safe way.

Solution

AsyncMeter is an explicit schedule for waiters that admits them
asynchronously, allowing one waiter through every `interval` duration.

RB_ID=756333
eb5b1aa
@vkostyukov vkostyukov finagle-core: Fix race condition in FailureAccrual's revive timer task
Problem

FailureAccrual's `reviveTimerTask` (the one we run periodically to check if
the dead host is back alive) has a race condition over its reference.

  1. [thread 1] Host goes down and we mark it dead for `x` seconds
  2. [thread 1] We run the timer task (`revivalTask`) to check if it's back alive in `x` seconds
  3. [thread 2] A `ServiceFactory` that wraps Failure Accrual calls `close` on it, so we cancel the
     timer task (which is still `None`, so nothing is canceled) via `cancelReviveTimerTasks`
  4. [thread 1] Stores the reference to the timer task into `reviveTimerTask`

Since `cancelReviveTimerTasks` is not synchronized, there is a race condition between steps 3 and 4, so there
is a good chance we won't cancel the task, seeing `None` instead of `Some(task)` (memory leak).

Solution

Mark `cancelReviveTimerTasks` synchronized to make sure it sees the most recent value of
`reviveTimerTask`.

RB_ID=766119
2bfe368
@vkostyukov vkostyukov finagle-core: Cancel timer tasks in FailFastFactory
Problem

`FailFastFactory` is not consistent in terms of canceling the
scheduled timer tasks: only transition `Timeout -> Success` cancels
the previous task, while transition `Timeout -> TimeoutFail` doesn't.

It's unlikely that this inconsistency can cause memory leaks, since
Netty's `HashedWheelTimer` (which is used by default) guarantees
canceling/removing tasks that were executed, but it doesn't say
when the cleanup will happen. Moreover, we shouldn't rely on a
particular timer implementation, so that we can painlessly switch
it in the future without ending up debugging memory leaks.

Solution

Cancel tasks we're no longer interested in.

RB_ID=767928
2334a04
@kevinoliver kevinoliver finagle-core: Default to a finite budget for RetryFilter & RetryExcep…
…tionsFilter

Motivation

`RetryFilter` and `RetryExceptionsFilter` are the common mechanisms
for retrying failed requests. Previously, there was an infinite budget
for retries to use, which can lead to amplification of problems when a
downstream service's problems are not transient.

Solution

Give a default `RetryBudget` to `RetryFilter` and
`RetryExceptionsFilter` in order to mitigate the amplification
effects.

RB_ID=766302
385dba9
Commits on Nov 23, 2015
@mosesn mosesn csl: Replace === with == in scalatest 2.x tests
Problem

We've been nagging each other about == vs ===.

Solution

ag -l ' === ' \
  ./util \
  ./ostrich \
  ./finagle \
  ./twitter-server \
  ./scrooge \
  ./twitter-server-internal | \
  grep '\.scala' | \
  xargs sed -i '' 's/ === / == /'

ag -l ' ===' \
  ./util \
  ./ostrich \
  ./finagle \
  ./twitter-server \
  ./scrooge \
  ./twitter-server-internal | \
  grep '\.scala' | \
  xargs sed -i '' 's/ ===$/ ==/'

Result

No more === nags in RB

RB_ID=766849
ea272c5
@blackicewei blackicewei finagle-mux: remove failureDetector darkmode config
FailureDetector is fully on by default; we can remove the darkmode config now.

RB_ID=766642
9bede15
@adleong adleong finagle: Avoid using String.format in tracing
Problem:
String.format is inefficient and does a lot of allocation.

Solution:
Use string interpolation instead.

RB_ID=767487
56fa4e8
Daniel Furse finagle-http: warn about direct manipulation of Dtab headers in Requests
RB_ID=765506
f256672
Todd Segal [finagle-core] Requeues should support backoffs.
Problem:

Requeues are all tried immediately. Clients wanting to prevent retry storms should be able to specify a (jittered) backoff between retries.

Solution:

Allow users to optionally specify a stream of backoffs for use with a retry budget (consumed by the RequeueFilter). If the budget has room for another retry, the filter takes the next element from the backoff stream as the delay before the next request. If the stream runs out of elements, the budget is deemed exhausted.

The default backoff stream is an infinite stream of Duration.Zero to maintain parity with the existing behavior.

RB_ID=768883
fba46c2
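Consuming such a stream can be sketched like this (a hypothetical helper, not the actual RequeueFilter code): each retry takes the head of the stream as its delay, and an exhausted stream means no further retries.

```scala
// Take the next delay (in ms) from the stream along with the remaining
// stream, or None when the stream is exhausted.
def nextDelay(backoffs: Stream[Long]): Option[(Long, Stream[Long])] =
  backoffs match {
    case d #:: rest => Some((d, rest))
    case _          => None
  }

// Default: an infinite stream of zeros, matching the old behavior of
// requeueing immediately.
val defaultBackoffs: Stream[Long] = Stream.continually(0L)
```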
@roanta roanta finagle-mux: Refactor ClientDispatcher
Problem

The mux ClientDispatcher was becoming unwieldy. It handles translating
Requests/Responses to mux messages, managing control messages/the
state of a session, and tag management. By teasing these apart we can
make it easier to understand and more extensible.

Solution

The dispatcher is now primarily responsible for outstanding messages and
tag management, and thus, is a Service[Int => Message, Message]. This allows
us to move Request/Response translation to a filter above it. Finally, session
management is moved into a separate class.

RB_ID=760326
bba8f20
@zfy0701 zfy0701 finagle-thrift: Update serverFromIface to try new style constructor.
Problem:
In twitter/scrooge#211, a new constructor with a
service name parameter was added.

Solution:
Try to find the new style constructor but fallback to old one if not
found.

Signed-off-by: Moses Nakamura <mnakamura@twitter.com>

RB_ID=767949
TBR=true
d245331
@mariusae mariusae Activity: add constructor from Event[State[T]]
This mirrors the one available in Var.

RB_ID=770338
31851cc
@dschobel dschobel finagle-netty4: introduce netty4 listener
Introduce a Listener for netty4.

RB_ID=718688
8e415c0
@dschobel dschobel finagle-core: introduce framing
Problem

We want netty-agnostic protocol implementations; this requires
new framing primitives that are netty-free.

Solution

Introduce framing types + an implementation of fixed-length framing.

N.B.

Our decoding type implies a message handler which observes a stream
of byte buffers of arbitrary size and is tasked with emitting a
variable number of typed frames in response to each subsequent
buffer.

RB_ID=767970
8d4d8bf
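The decoding shape described in the N.B. can be sketched with a toy fixed-length framer (hypothetical class, using plain byte arrays instead of Finagle's buffer types): each incoming buffer of arbitrary size yields zero or more complete typed frames, with any remainder carried over.

```scala
// Accumulates incoming chunks and emits complete fixed-size frames.
final class FixedLengthFramer(frameSize: Int) {
  private[this] var buffer: Array[Byte] = Array.emptyByteArray

  def apply(chunk: Array[Byte]): Seq[Array[Byte]] = {
    buffer = buffer ++ chunk
    val n = buffer.length / frameSize // complete frames available
    val frames = (0 until n).map(i => buffer.slice(i * frameSize, (i + 1) * frameSize))
    buffer = buffer.drop(n * frameSize) // keep the partial remainder
    frames
  }
}
```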
@jcrossley jcrossley finagle/finagle-core: FailureAccrualFactory uses a policy to determin…
…e when to mark endpoint dead

Problem
FailureAccrualFactory uses only 'numFailures' to determine when to mark an endpoint dead.

Solution
FailureAccrualFactory now uses a FailureAccrualPolicy to determine when to mark an endpoint dead.
The default, FailureAccrualPolicy.consecutiveFailures(), mimics existing functionality, and
FailureAccrualPolicy.successRate() operates on the exponential moving average success
rate over a window of requests.

RB_ID=756921
624da21
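The pluggable-policy idea can be sketched as follows (hypothetical simplified interface; the real FailureAccrualPolicy also covers the success-rate variant): the factory consults the policy on each response, and the policy decides when the endpoint should be marked dead.

```scala
// A policy deciding when to mark an endpoint dead.
trait FailureAccrualPolicySketch {
  def recordSuccess(): Unit
  def onFailure(): Boolean // true => mark the endpoint dead
}

// Mimics the old behavior: dead after `limit` consecutive failures.
def consecutiveFailures(limit: Int): FailureAccrualPolicySketch =
  new FailureAccrualPolicySketch {
    private[this] var failures = 0
    def recordSuccess(): Unit = failures = 0
    def onFailure(): Boolean = { failures += 1; failures >= limit }
  }
```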
@vkostyukov vkostyukov finagle-core: Refactor Multipart to support in-memory/on-disk file up…
…loads

Problem

1. Finagle's `Multipart` API doesn't distinguish between in-memory and on-disk file uploads, which
   forces users to materialize _all_ incoming files into memory, even if they don't
   read the file contents.
2. Given that `Multipart.decodeNonChunked` mutates the underlying request, it totally
   makes sense to wire a _lazy_ instance of `Multipart` to each request. That said,
   use the same pattern we use for `ParamMap`.

Solution

1. Redesign the `Multipart` ADT to support both in-memory and on-disk file uploads.
2. Add a lazy field `Request.multipart` that captures the multipart data for each
   request.

RB_ID=769889
TBR=true
ed5b0e0
Commits on Nov 30, 2015
@ryanoneill ryanoneill finagle: Fix quoting in CHANGES file
Problem

One of the entries in the changes file is incorrectly quoting the
RB_ID.

Solution

Properly enclose the RB_ID in double backticks.

RB_ID=771599
b5c05f0
Todd Segal [ZkSession] Limit number of concurrent requests to a zookeeper cluster
Problem:

If ZooKeeper is under load, aggressively retrying (even with backoff) can still place strain on the server. Additionally, 'flapping' updates on a particular znode can cause strain on the client in the form of constant notify-request/response-notify cycles (CPU) as well as corresponding GC.

Solution:

Use an async semaphore with a limit of 100 permits in ZkSession to restrict concurrent requests from a single client. If zookeeper isn't keeping up or we are seeing the same node firing watches all the time, we will begin to queue up the changes on the client side.

The serverset changes for adds are already batched and removes are stabilized so being a few seconds out of date in normal circumstances is not a problem.

RB_ID=771399
8377401
Todd Segal ZkSession: Use decorrelated jittered strategy for all zk retries
Problem:

Highly leveraged zookeeper clusters can see retry storms from their clients for either watch or connect failures. Make sure that finagle clients that retry do so with proper jitter strategies.

Solution:

Use decorrelated jitter as provided by finagle-core for all retries. It performs best over time. See http://www.awsarchitectureblog.com/2015/03/backoff.html

RB_ID=771098
e45fd2a
yic maven layout goes away
RB_ID=771595
NO_USER_HOOK=1
bcf33f5
@vkostyukov vkostyukov finagle-core: Netty 4 socket channel transport
Problem / Solution

Finagle's missing a Netty 4 `ChannelTransport` implementation.

RB_ID=763435
f8f02e6
@atollena atollena [finagle-core] Initialize endpoint stacks lazily
Problem
--

For large server sets, initializing the entire set of endpoint stacks can take seconds (typically
around 1 ms per endpoint stack, and server sets can have thousands of entries). This was done
synchronously on server set change, leaving clients with a large underlying server set slow to
satisfy their first request, causing timeouts.

Solution
--

Instead of waiting for all endpoint stacks to be ready before satisfying the first requests,
initialize each endpoint stack on the first request to a given endpoint. The first request takes the
hit for creating a single endpoint stack. The overhead of creating a single endpoint stack is
typically low.

Result
--

Clients are ready sooner.

RB_ID=769492
d100314
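The lazy-initialization idea can be sketched with a hypothetical wrapper (not the actual patch): the expensive stack construction is deferred until the endpoint's first request instead of running eagerly for every entry on serverset change.

```scala
// Wraps an expensive endpoint-stack factory; the stack is only built
// on the first request to this endpoint.
final class LazyEndpoint[Req, Rep](mkStack: () => Req => Rep) {
  // `lazy val` defers (and synchronizes) construction until first use.
  private[this] lazy val underlying: Req => Rep = mkStack()
  def apply(req: Req): Rep = underlying(req)
}
```

With thousands of endpoints, only the ones actually dispatched to pay the construction cost, so the client is ready to serve its first request sooner.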
@tw-ngreen tw-ngreen finagle-serversets: notify Stabilizer timers in separate threads
Problem:
The timers in Stabilizer call notify() from the timer thread; the
code run as a result of notify() takes a bit of time, causing timer
deviation since it runs in the timer thread.

Solution:
Call notify() in a separate thread.

Result:
Timer deviation mitigated

RB_ID=750527
5b63860
@kevinoliver kevinoliver util-core: Avoid various allocations
Motivation

There are a handful of places in util where we create
`java.util.concurrent.atomic` objects. Some of these are created often
and have very little concurrent access. There is overhead in creating
the extra object reference.

Solution

For these candidates we can instead use `synchronized` blocks in order to
avoid the object allocations.

* `Future.monitored`
* `Promise.attached`
* `Timer.doAt`

Result

Fewer allocations and about the same or better performance.

RB_ID=768656
e18de6d
Commits on Dec 03, 2015
@dschobel dschobel finagle-netty4: wire channeltransport into listener
now that we have a netty4 channel transport, let's use it.

bonus: make tcp_nodelay configurable.

RB_ID=772498
f1a6877
@dschobel dschobel finagle-netty4: private listener
listener shouldn't be public yet. make it private.

RB_ID=773072
5eecf03
@olix0r olix0r finagle-core: make RegistryEntryLifecycle.role static
Problem

RegistryEntryLifecycle's role may not be accessed without instantiating a new
module.  This is cumbersome when referencing this module via e.g.
Stack.insertAfter.

Solution

Make RegistryEntryLifecycle.role static.

Signed-off-by: Daniel Schobel <dschobel@twitter.com>

RB_ID=773211
7ce8661
@atollena atollena [finagle-serversets] Add traffic owners
RB_ID=773289
4af6acc
@kevinoliver kevinoliver finagle-core: Add Java tests for StatsReceivers
Some singleton StatsReceivers didn't have Java compilation tests.

Also, some of these tests were only compiling and not running due to
not being public.

RB_ID=771672
be8f201
@ryanoneill ryanoneill finagle: Documentation Generation Cleanup
Problem

The CHANGES file has an incorrect layout for displaying release 6.29.0
and 6.30.0 within the documentation. Also, the pushsite script is
still looking for documentation generated based on Scala 2.10.

Solution

Modify the CHANGES file to use the correct format for release 6.29.0
and 6.30.0. Also modify the pushsite script to look for documentation
in the 2.11 folder, and add a note to the README regarding the
version used within the generated documentation.

RB_ID=771755
1f7a391
@atollena atollena [finagle-serversets] Per-ZK serverset stats
Problem
--

Some users of Zk2Resolver resolve via multiple zookeeper clusters. ZooKeeper and resolve stats
are accumulated across zookeeper clients, making troubleshooting difficult.

Solution
--

Prefix ZooKeeper stats with the hostname used to connect to it.

RB_ID=772084
616868b
Todd Segal [finagle-core] TrafficDistributor: Don't empty valid serversets when …
…observing a failure in the update activity.

Problem:

When the TrafficDistributor receives an update that the Activity updating its serverset has failed unexpectedly, it immediately fails and cannot process any more requests.

Solution:

Update the distributor to continue using stale data when the activity fails unexpectedly.

RB_ID=773792
d4db1fd
@tonyd3 tonyd3 finagle-core, finagle-http: Add ability to configure HttpClients in J…
…ava.

Problem
It's not possible to configure custom params in an Http Client.

Solution
Override the Java friendly configure function within Http Client/Server
and StackClient/StackServer so that the correct types are returned in Java.

RB_ID=773504
ef6ada9
@vkostyukov vkostyukov finagle: TCP_NODELAY and SO_REUSEADDR are now stack params
Problem

There is no convenient, finagle-idiomatic way of configuring socket options
for both listeners and transporters.

Solution

Make `TCP_NODELAY` and `SO_REUSEADDR` stack params so users could easily override
them. Keep the defaults unchanged (`TCP_NODELAY=true`, `SO_REUSEADDR=true`).

RB_ID=773824
200f8b5
@vkostyukov vkostyukov csl: Release OSS libraries
Finagle 6.31
Util 6.30
Scrooge 4.2
TwitterServer 1.16
Ostrich 9.14

RB_ID=774633
TBR=true
50d3bb0
Commits on Dec 07, 2015
@j3h j3h finagle: Fix Filter.TypeAgnostic.andThen().toFilter
Problem
-------

Filter.TypeAgnostic.andThen's `toFilter` method accidentally
refers to its own `toFilter` method rather than the enclosing
class' `toFilter`, leading to infinite recursion and a stack
overflow when calling toFilter on a composed Filter.TypeAgnostic.

Separately, there is no TypeAgnostic analog to Filter.identity.

Solution
--------

- Explicitly refer to the outer class' toFilter method in the
  inner class created in Filter.TypeAgnostic.andThen.
- Add a Filter.TypeAgnostic.Identity.

RB_ID=774406
d29f985
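The self-reference bug can be reproduced with a toy version (hypothetical simplified types): inside the anonymous inner class, an unqualified `toFilter` would resolve to the inner definition and recurse forever; qualifying it with the outer self-alias fixes the reference.

```scala
trait TypeAgnosticSketch { self =>
  def toFilter: String

  def andThen(next: TypeAgnosticSketch): TypeAgnosticSketch =
    new TypeAgnosticSketch {
      // Buggy version: `def toFilter = toFilter + next.toFilter`
      // would call the inner toFilter and overflow the stack.
      def toFilter: String = self.toFilter + next.toFilter
    }
}
```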
@tw-ngreen tw-ngreen finagle-serversets: Fix stabilizers treatment of Neg as a failure
Problem:
The Stabilizer is supposed to prevent ZK errors from propagating through
the Activity of Addr returned by bind. However, if a serverset is empty,
then ZK errors are immediately propagated, as empty serversets are not
stabilized because Addr.Negs are treated like Addr.Errors in the Stabilizer.

Solution:
Treat Negs as successful but empty entries, but still don't overwrite
Addr.Bounds with Addr.Negs, so we will still stabilize a serverset
against both Failed and Neg.

Result:
We don't surface errors during ZK issues when watching empty serversets.

RB_ID=773798
efc1d6a
@dschobel dschobel finagle-http: fix non-http 1.x match error
Problem

Since netty 3's HttpVersion can express arbitrary protocols and
major and minor versions while finagle-http cannot, our conversion
function throws a MatchError on messages that can be decoded as
http 1.x but declare a non-1.x protocol.

Solution

Introduce a base case.

RB_ID=774626
f825d14
@dschobel dschobel finagle-mux: status should consider transport status
Revert the change to the status calculation introduced by
c509709ca964828bcf21ed8d8e29e7f101f17aab which excludes transport
status.

RB_ID=775263
038ceb6
@tonyd3 tonyd3 finagle-thrift: Add ability to configure Thrift Client/Server in Java.
Problem
It's not possible to configure custom params using Java in a Thrift Client/Server.

Solution
Override the Java friendly configure function within Thrift Client/Server
so that the correct types are returned in Java.

RB_ID=774505
36ef5ea
Commits on Dec 14, 2015
@slyphon slyphon finagle-memcache Client.cas should differentiate between NotFound and…
… Exists

The ConnectedClient does not differentiate between [Exists and NotFound responses](https://github.com/twitter/finagle/blob/develop/finagle-memcached/src/main/scala/com/twitter/finagle/memcached/Client.scala#L453-L455).
One could argue that these are both failures, however this presents a
problem when trying to expose the client as a service, as it's a lossy
representation to the caller.

I've added a cas2 method that returns a CASResult for each of the three
states: Stored, NotFound, and Exists. I've re-written the cas method in
terms of cas2, and fixed all of the subclasses of BaseClient where
people were redefining cas.

There will be a cas2 method that will expose the difference between a
NotFound and Exists response. All existing API consumers will be
unaffected.
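A sketch of the tri-state idea with stand-in types (not the actual finagle-memcached signatures): cas2 surfaces all three server responses, and a legacy boolean-style cas collapses NotFound and Exists into failure.

```scala
// Hypothetical stand-ins for the three CAS outcomes described above.
sealed trait CASResult
case object Stored extends CASResult
case object NotFound extends CASResult
case object Exists extends CASResult

// The legacy cas can be expressed in terms of the richer result:
// only Stored counts as success; the other two are lossy "failure".
def legacyCas(result: CASResult): Boolean = result == Stored
```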

RB_ID=771779
23e99a7
@tonyd3 tonyd3 [finagle-memcached,mysql,redis] Override configure function for Java …
…compatibility.

Problem
It's not possible to configure custom params using Java in these clients.

Solution
Similar to what was done for http and thrift, we override the Java friendly
configure function so that the correct types are returned for Java.

RB_ID=775934
414ea09
@kevinoliver kevinoliver finagle-core: Add gauges in dispatchers
Motivation

There isn't visibility into the queues on `GenSerialClientDispatcher`
or the pending requests for `PipeliningDispatcher`. This information
can be valuable for investigations.

Solution

Pass in a `StatsReceiver` and add gauges.

Result

New gauges "serial/queue_size" and "pipelining/pending" exported for
clients.

RB_ID=774296
NO-QUEUE=true
baca2a5
@jcrossley jcrossley finagle/finagle-http: propagate contexts over HTTP
Problem
Contexts are not sent over HTTP, which means deadlines are not propagated.

Solution
Deadlines are sent in Finagle-Ctx request headers.

RB_ID=773800
NO-QUEUE=true
73be0f0
@kevinoliver kevinoliver finagle-core: Introduce ResponseClassifier
Motivation

Finagle lacks application level knowledge of success and failure
which limits its ability to do failure accrual and present success
rate metrics.

Solution

Introduce `c.t.f.service.ResponseClassifier` which allows developers
to give Finagle the additional application specific knowledge
necessary in order to properly classify them. This is now used by
`StatsFilter` and `FailureAccrualFactory` so that more than just
transport level failures be used for both success metrics and failure
accrual.

Result

Developers can use `ClientBuilder.responseClassifier` or
`configured(param.ResponseClassifier)` to attach a custom classifier.
This will improve the efficacy of failure accrual and the success rate
stats reported for clients.
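The concept can be sketched with self-contained stand-in types (not `c.t.f.service.ResponseClassifier`'s real signatures): a classifier is essentially a partial function from a request/response pair to a response class.

```scala
import scala.util.{Try, Success => TrySuccess}

// Stand-in types for illustration only.
sealed trait ResponseClass
case object Successful extends ResponseClass
case object Failed extends ResponseClass

final case class ReqRep(request: Any, response: Try[Any])

// Application-specific knowledge: a response body starting with "ERROR"
// is a failure even though the transport-level call succeeded.
val classifier: PartialFunction[ReqRep, ResponseClass] = {
  case ReqRep(_, TrySuccess(body: String)) if body.startsWith("ERROR") => Failed
  case ReqRep(_, TrySuccess(_)) => Successful
}
```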

RB_ID=772906
NO-QUEUE=true
9d30212
@sameerparekh sameerparekh Revert "finagle/finagle-http: propagate contexts over HTTP"
- Revert "finagle/finagle-http: propagate contexts over HTTP"

This caused a dev0, was originally submitted --no-queue due to DPT-5518

RB_ID=776891
TBR=true
NO-QUEUE=true
77139a0
@kevinoliver kevinoliver finagle-http: ResponseClassifiers for HTTP
A followup to RB_ID 772906, this adds
`c.t.f.http.service.HttpResponseClassifier` and
`c.t.f.Http.withResponseClassifier` to simplify writing and wiring up
classifiers for HTTP.

RB_ID=772917
NO-QUEUE=true
7868b5a
@robsonpeixoto robsonpeixoto Avoid netty types
Signed-off-by: Alex Leong <aleong@twitter.com>

RB_ID=776983
e561d6a
@ldematte ldematte Update ADOPTERS.md
Added Südtirol Pass (South Tyrol Transportation System)

Signed-off-by: Alex Leong <aleong@twitter.com>

RB_ID=777020
d93ed3b
@cyphactor cyphactor Add homepage and source reference to gemspec
Problem

There is no easy way to find the source code for the project from the
rubygems or the installed gem source.

Solution

Add the homepage and source_code meta data references to the gemspec.

Result

Both the homepage and the source_code urls are visible on rubygems.org
when looking at the finagle-thrift gem and accessible from the installed
gem source.

Signed-off-by: Alex Leong <aleong@twitter.com>

RB_ID=777043
500548e
Jason Zhang finagle-memcached: set service name in FailureAccrualException
  Problem

  FailureAccrualException in memcached is a SourcedException
  with the default serviceName "unspecified". When there is
  more than one cache cluster, we cannot distinguish
  where exceptions are from.

  Solution

  Set serviceName when throwing FailureAccrualException

  After your change, what will change

  No behavior or performance change; it ensures the
  exception has an appropriate serviceName for tracking
  and debugging purposes.

RB_ID=776736
f362386
@jcrossley jcrossley finagle-core: Introduce a DeadlineFilter for tardy requests
Problem
Requests past their deadline should be discarded

Solution
DeadlineFilter logs requests that should be rejected to a 'rejected' stat.
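A minimal sketch of the idea (illustrative stand-ins, not Finagle's actual DeadlineFilter, which consults the broadcast Deadline context and a StatsReceiver): compare the request's deadline to the current time and bump a counter when the deadline has passed.

```scala
// Illustrative deadline check only.
final case class Deadline(expiryMs: Long)

var rejected = 0 // stand-in for the 'rejected' stat counter

def recordTardiness(deadline: Deadline, nowMs: Long): Unit =
  if (nowMs > deadline.expiryMs) rejected += 1
```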

RB_ID=769460
2fa6d0b
Tommy Chong Add scope to ketama client stats from Memcached.newTwemcacheClient
RB_ID=771691
cab46af
@amartinsn amartinsn finagle-core: implementing RequestMeterFilter based on new AsyncMeter
Problem

We wanted to be able to control access to rate-limited resources by slowing down requests to fit the request rate accepted by these resources.

Solution

We ended up implementing a filter named RequestMeterFilter using Twitter util's AsyncMeter to achieve that. Feedback on naming is welcome.

Result

As a result, I've added this Filter so other users with the same needs can use this solution.

Signed-off-by: Alex Leong <aleong@twitter.com>

RB_ID=777050
65b232c
@kevinoliver kevinoliver finagle-core: Minor load balancing and retry changes
Motivation

We identified a few small gaps in metrics around load balancing and requeues.

Solution

1. Improve balancer logic for P2C to rebuild more aggressively when it finds a
   down node.

2. Add stats:

"loadbalancer/max_effort_exhausted"

    A counter of the number of times a balancer failed to find a node that was
    `Status.Open` within `com.twitter.finagle.loadbalancer.Balancer.maxEffort`
    attempts. When this occurs, a non-open node may be selected for that
    request.

"retries/request_limit"

    A counter of the number of times the limit of retry attempts for a logical
    request has been reached.

RB_ID=776235
6e75832
@jcrossley jcrossley finagle/finagle-http: propagate contexts over HTTP
Problem
Contexts are not sent over HTTP, which means deadlines are not propagated.

Solution
Deadlines are sent in Finagle-Ctx request headers.

RB_ID=778123
TBR=true
24fe827
@adleong adleong Send dark traffic on the outbound path.
RB_ID=777642
ae107bb
Commits on Dec 21, 2015
@vkostyukov vkostyukov finagle-stats: Introduce CommonsStats formatter
Problem

There is no Commons Stats formatter available for TwitterServer,
which might block users from migrating off of AbstractApplication.

Solution

Introduce a Commons Stats `StatsFormatter` so TwitterServer can
export stats in an AbstractApplication-like format.

RB_ID=775312
5213a17
@mosesn mosesn finagle-core: Actually run JvmFilterTest
RB_ID=777316
4f59e8a
@kevinoliver kevinoliver finagle-thrift,thriftmux: Initial steps towards Thrift ResponseClassi…
…fiers

A followup to RB_ID 772906, this is the beginning of the necessary
wiring for `ResponseClassifiers` for ThriftMux (and later Thrift as
well). This includes the Scrooge changes but it awaits a Scrooge
version bump before it can be fully enabled. This will enable
developers to classify Thrift exceptions or status codes in the
response as failures.

See `c.t.f.thriftmux.service.ThriftMuxResponseClassifier` and
`c.t.f.ThriftMux.withResponseClassifier` to simplify writing and
wiring up classifiers for ThriftMux.

RB_ID=772931
4c5f4fc
@kevinoliver kevinoliver finagle: Docs for ResponseClassification
`ResponseClassification` is a new user facing feature and needs docs
in order for users to understand it and put it to use.

Along the way I cleaned up a handful of small broken windows in the
docs.

RB_ID=778379
555185c
@dschobel dschobel finagle-integration: introduce package + client session tests for mux…
…, http, mysql + memcache

Problem

Missing test coverage around session status allowed a regression to get merged.

Solution

Add tests.

RB_ID=778565
1daa520
@kevinoliver kevinoliver finagle-mux: Wireshark dissector for Mux
Motivation

Sometimes a tcpdump is necessary for debugging network protocols. Mux
is unknown to Wireshark and as such it just appears as an opaque
frame of TCP bytes. Being able to introspect basic functionality would
aid in debugging and understanding of the protocol.

Solution

Add `mux_dissector.lua` to enable decoding of Mux frames, giving
you the size, message type, and tag number for all messages and detailed
decoding of `Tdispatch` messages.

Result

Sample (edited) output from `tshark -O mux` on the included pcap file:

```
Frame 5: 64 bytes on wire (512 bits), 64 bytes captured (512 bits)
Internet Protocol Version 4, Src: 127.0.0.1 (127.0.0.1), Dst: 127.0.0.1 (127.0.0.1)
Transmission Control Protocol <...>
Mux Protocol
    Size: 4
    Message type: Tping (65)
    Tag: 1
    Payload Length: 0

Frame 7: 64 bytes on wire (512 bits), 64 bytes captured (512 bits)
Internet Protocol Version 4, Src: 127.0.0.1 (127.0.0.1), Dst: 127.0.0.1 (127.0.0.1)
Transmission Control Protocol <...>
Mux Protocol
    Size: 4
    Message type: Rping (191)
    Tag: 1
    Payload Length: 0

Frame 295: 279 bytes on wire (2232 bits), 279 bytes captured (2232 bits)
Internet Protocol Version 4, Src: 127.0.0.1 (127.0.0.1), Dst: 127.0.0.1 (127.0.0.1)
Transmission Control Protocol <...>
Mux Protocol
    Size: 219
    Message type: Tdispatch (2)
    Tag: 2
    Contexts: 3
        com.twitter.finagle.thrift.ClientIdContext
            Name: test.service
        com.twitter.finagle.tracing.TraceContext
            Span id: 14852471808386315780
            Parent id: 14852471808386315780
            Trace id: 14852471808386315780
            Flags: 0x0000000000000000
        com.twitter.finagle.Deadline
            Timestamp (micros after epoch): 1450308651197000
            Deadline (micros after epoch): 9223372036854775
    Payload Length: 27

Frame 297: 104 bytes on wire (832 bits), 104 bytes captured (832 bits)
Internet Protocol Version 4, Src: 127.0.0.1 (127.0.0.1), Dst: 127.0.0.1 (127.0.0.1)
Transmission Control Protocol <...>
Mux Protocol
    Size: 44
    Message type: Rdispatch (254)
    Tag: 2
    Payload Length: 40

```

RB_ID=779482
db1899f
@kevinoliver kevinoliver finagle-thriftmux: Fix how ResponseClassifiers are hooked up
Motivation

A minor change during code review of how the ThriftMux
ResponseClassifier gets wired up led to it being used incorrectly. I
failed to rerun the tests with the necessary scrooge changes wired up,
which let this go unnoticed.

Solution

Only wire up the ResponseClassifier when specified. Leave comments
explaining why.

RB_ID=779754
8aa8686
@atollena atollena [finagle-core] Disable logging of ChannelExceptions
Problem
--

`ChannelException`s are mostly benign and we have good stats for them, but they show up as SEVERE in
the log when handled by the root monitor.

Solution
--

Mix in HasLogLevel and set the level to debug for all ChannelExceptions.

RB_ID=779857
b3fc368
@kevinoliver kevinoliver finagle-docs: Fixes for metrics section of the user guide
A wide ranging cleanup, mostly in terms of formatting, links, and
consistency.

RB_ID=779711
4acff04
@tonyd3 tonyd3 finagle-mysql: Add Parameters object for easier Java access.
Problem
The Parameter trait hides its companion object. To access the
companion object you'd need to use Parameter$.Module$.

Solution
Add an object that can be accessed easily in Java.

RB_ID=780262
27186aa
Commits on Dec 28, 2015
@mosesn mosesn finagle: Wire up ResponseClassifier in FailureAccrual
Problem

FailureAccrualFactory wasn't wired up properly.

Solution

Wire it up.

RB_ID=780373
966b92a
@blackicewei blackicewei finagle: provide stats in HttpNackFilter and statsReceiver in codec.p…
…repareConnFactory

Problem

There are no stats in `c.t.f.http.filter.HttpNackFilter`,
so it's hard to see the number of retryable 503 responses returned
from the HTTP server.

Solution

Add a counter in `HttpNackFilter`.

RB_ID=779085
310502b
@vkostyukov vkostyukov finagle: Improve docs for clients
Problem / Solution

Finagle docs for clients (`Clients.rst`) are outdated and inconsistent.

RB_ID=779822
b9502be
@jcrossley jcrossley finagle/finagle-core: Fix illegal state transitions in FailureAccrual…
…Factory

Problem

FailureAccrualFactory can transition between states in an unexpected way due to race
conditions.

Solution

Prevent illegal state transitions. FailureAccrualFactory now transitions to
Alive only when in the ProbeClosed state, so the transitions Dead => Alive and
ProbeOpen => Alive are no longer possible. It also transitions to Dead before
starting the timer, so the transition Alive => ProbeOpen is no longer possible.

RB_ID=779641
24c2371
Todd Segal [finagle-serversets] Add gauges for last-good-update of a watched zoo…
…keeper path

Problem:

We have no 'liveness' metrics to determine if a particular client is receiving updates for a zk child path.

Solution:

Add a gauge mapping client path -> timestamp (seconds since epoch) of the last successful update.

RB_ID=780377
73f51d1
@kevinoliver kevinoliver finagle-thriftmux: Deserialize even when a ResponseClassifier isn't p…
…rovided

Motivation

While the changes in RB 779754 fixed the underlying issue, they made
things inconsistent in terms of when deserialization would happen. If
a `ResponseClassifier` was provided, it happened eagerly; if not, it
happened late.

Solution

When a `ResponseClassifier` is not provided, use one that deserializes
using `DeserializationCtx` but ignores the result. This retains the behavior
while making the timing of deserialization consistent.

RB_ID=779843
20e2452
@kevinoliver kevinoliver finagle: Java APIs for response classification
Motivation

Response classification should be usable by Java users.

Solution

Provide `ResponseClasses` and Java compilation tests.

RB_ID=780422
bb91e96
@taylorleese taylorleese finagle: cleanup build warnings
Problem:

There were multiple problems related to the sbt build:
a) the publishM2 task is provided by the latest version of sbt so it's unnecessary
b) build warnings for the Build.scala file weren't enabled
c) deprecated sbt methods were in use

There were additional build warnings unrelated to sbt that needed fixing as well.

Solution:

1) fix all the sbt related warnings
2) fix as many of the build warnings as possible

Result:

1) sbt warnings are resolved
2) 17/35 src targets with fatal warnings enabled
3) 23/36 test targets with fatal warnings enabled

RB_ID=780248
763084b
@tw-ngreen tw-ngreen [finagle-serversets] Zk2Resolver should never give up when attempting…
… to set watches.

Problem:

Zk2Resolver is fatally unhealthy if it cannot set watches. Current logic stops setting watches in perfectly valid failure scenarios.

Solution:

Never stop trying to set watches - with the caveat that all zk operations should backoff on failure and be self-throttled/limited to preserve Zk health. The *only* valid scenario to not set watches is when a session moves to Expired status (since the session watcher will start a new session at that point) or Disconnected (since the session itself will attempt to reconnect).

RB_ID=780566
8746b14
@siggy siggy finagle: modified travisci script to default to scala 2.11.7
Signed-off-by: Kevin Oliver <koliver@twitter.com>

RB_ID=780691
80f4769
@kevinoliver kevinoliver finagle-core: Stat on per request requeue distribution
Motivation

Without this stat it can be hard to tell how many times individual
requests get requeued.

Result

With this new stat, requeues_per_request, it will give better insight
into production issues.

RB_ID=780914
TBR=true
ad42b4e
@vkostyukov vkostyukov finagle: Improve docs for servers
Problem / Solution

Finagle docs for servers (i.e., `Servers.rst`) are outdated and inconsistent.

RB_ID=780427
92b19e0
Commits on Jan 04, 2016
Todd Segal [finagle-serversets] Fix duplicated gauges and double scoped stats re…
…ceivers

[Problem]

zk2/ stats are double-scoped with the zk host name. Additionally, gauges for last_watch_update are added together rather than scoped properly to a single instance of zkSession.

RB_ID=781107
50d2532
Todd Segal [finagle-serversets] Don't cache exceptions from expired zk sessions
Problem:

There is a single future cache for resolving member data from serversets. Thus there is a race in resolving paths between old expired sessions and new connected sessions when there is a zk connection/ensemble problem.

Solution:
Don't mask Exceptions as empty lists in the shared future cache. Surface the exceptions to ensure the new session will retry correctly.

RB_ID=781671
a3b0e54
Todd Segal remove spurious log statement
RB_ID=781704
75d1dca
@blackicewei blackicewei finagle: fix a bug of null check for DeserializeCtxOnly
RB_ID=781805
a568b2c
@roanta roanta finagle-mysql: Fix tracing test
Problem

The mysql tracing test didn't pass in the correct credentials,
which caused it to fail.

Solution

Wire in the env specific integration test credentials.

RB_ID=782938
dd56dc1
Commits on Jan 11, 2016
@olix0r olix0r finagle-thrift: Control Thrift transport framing with a Stack.Param
Problem:

It is cumbersome to control whether Thrift clients and servers are framed
because there is no Stack.Param that influences this behavior.

Solution:

Introduce a Stack.Param, Thrift.param.Framed, and remove the Client/Server
boolean attribute.
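Configuration with the new param might look like the following sketch (whether the param wraps a plain Boolean like this is an assumption):

```scala
import com.twitter.finagle.Thrift

// Assumption: Thrift.param.Framed carries a Boolean; a client using
// a buffered (unframed) transport would then be configured like so.
val bufferedClient = Thrift.client
  .configured(Thrift.param.Framed(false))
```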

Signed-off-by: Kevin Oliver <koliver@twitter.com>

RB_ID=780738
62297a9
@tw-ngreen tw-ngreen finagle-serversets: Don't use per-member stats
Problem
We increment a read_fail stat per zk member, which creates far too many stats
for large serversets.

Solution
Make read_fail per cluster.

Result
There are fewer stats.

RB_ID=783518
8abab31
@roanta roanta csl: disable fatal warnings in java targets
RB_ID=783517
TBR=true
2d22f7d
@stevegury stevegury finagle-core: Increase the Penalty in p2cEwma to a higher value
Problem

The PeakEWMA load balancer applies a penalty to the load of a server when it
doesn't have any historical latency. That penalty is initialized at
Double.MaxValue/2, and we add the number of outstanding messages to compute
the load of a server. The idea is that the load balancer behaves
like LeastLoaded at start-up, before it has any historical data.

The problem is that the penalty is a very large double, and adding a small
integer will not change its value because doubles lack the precision to
represent the difference in that range.

Solution

Choosing a bigger Penalty (e.g. Long.MaxValue >> 16 ~= 140737 seconds or 39 hours)
fixes the problem and also works well with potentially extremely latent systems.
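The arithmetic behind the fix can be checked directly: near Double.MaxValue/2 the gap between adjacent doubles dwarfs any realistic count of outstanding messages, while a penalty near Long.MaxValue >> 16 still has integer-level precision.

```scala
// Old penalty: adding a small outstanding-message count is lost to rounding.
val oldPenalty = Double.MaxValue / 2
// New penalty: ~1.4e14 is well within the 53-bit mantissa's exact range.
val newPenalty = (Long.MaxValue >> 16).toDouble
```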

Signed-off-by: Ruben Oanta <roanta@twitter.com>

RB_ID=780176
NO-QUEUE=true
9a5d4af
@taylorleese taylorleese Bump Scala from 2.10.5 to 2.10.6 for Travis CI
Problem:

The Travis CI builds use Scala 2.10.5, but our build.sbt and Build.scala files use 2.10.6. We should bump the Travis builds to use 2.10.6.

Solution:

Bump the .travis.yml files to 2.10.6.

Result:

Travis CI uses 2.10.6 rather than 2.10.5.

RB_ID=781291
9334bdd
@vkostyukov vkostyukov finagle: Add missing imports in docs
Problem

Finagle docs are outdated and inconsistent.

Solution

1. Add imports to _all_ the examples in the docs
2. Be consistent in terms of style for code literals (monospace)
3. Add a couple of new Finagle companion projects

RB_ID=780793
12a8f1c
@blackicewei blackicewei finagle-core: remove onClose promise in Netty3 pipeline handlers
Problem

The `onClose` promise callbacks save per-request contexts
in locals until the connection is closed. This causes memory
leaks.

Solution

Remove `onClose` in the pipeline handlers. Callbacks are
implemented through Netty event handlers.

RB_ID=783682
91f874b
@atollena atollena [finagle-serversets] Remove newZk flag
Problem
--

zk2 has been enabled by default for more than a year. The old zk! scheme should not be used.

Solution
--

Remove the option to use the zk! scheme for wily paths and the serverset! scheme. Unfortunately
we need to keep the scheme around for the many places where zk! is used directly.

RB_ID=780169
f1f40b6
@kevinoliver kevinoliver finagle-mux: Fix synchronization in TagSet
Motivation

In the `iterator` of the TagSet created by `TagSet.apply`, the
synchronization on `TagSet.this` was using the outer `TagSet` companion
object instead of the anonymous class. This can be observed by examining
the bytecode output.

Solution

Use an explicit `self` reference for synchronization.

Result

Correct synchronization.

Snippet of `next()` bytecode before:
```
  6: getstatic     #450                // Field com/twitter/finagle/mux/util/TagSet$.MODULE$:Lcom/twitter/finagle/mux/util/TagSet$;
  9: dup
 10: astore_2
 11: monitorenter
```

Snippet of `next()` bytecode after:
```
  7: getfield      #446                // Field $outer:Lcom/twitter/finagle/mux/util/TagSet$$anon$1;
 10: dup
 11: astore_2
 12: monitorenter
```
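The shape of the fix, in a self-contained sketch (not TagSet's real code): an explicit alias pins synchronization to the enclosing instance, so inner anonymous classes cannot accidentally lock a lookalike object.

```scala
// `self` aliases the enclosing Tags instance; inside the anonymous
// Iterator, `self.synchronized` unambiguously locks that instance.
class Tags { self =>
  private var n = 0
  def iterator: Iterator[Int] = new Iterator[Int] {
    def hasNext: Boolean = self.synchronized { n < 3 }
    def next(): Int = self.synchronized { n += 1; n }
  }
}
```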

RB_ID=785034
47bd460
Todd Segal [finagle-serversets] Retry member data lookup on partial/full failures
Problem:

After successfully setting a watch, we do an immediate lookup of serverset member data. If any read fails, we do not retry unless the watch is fired or our session expires. This is not robust enough.

Solution:

If there is a failure of reading 100% of the members, schedule a retry and notify witnesses of failure.
If there is a partial failure, schedule a retry and notify witnesses of success with the partial data.

When scheduling a retry, use jitter and also make sure our session has not expired.

RB_ID=783894
e03eea7
@vkostyukov vkostyukov finagle: Remove unused/deprecated exceptions
RB_ID=774658
TBR=true
25aab59
@blackicewei blackicewei finagle-core: introduce Admission controller in the server stack
Problem

The Finagle server does not have a way to dynamically
reject requests when it's overloaded. It can go into
failure spiral without a way to recover until it's
restarted.

Solution

Introduce the `c.t.f.filter.ServerAdmissionControl` module
in the server Stack, which is enabled through param
`c.t.f.param.EnableServerAdmissionControl`. There can be
multiple implementations of admission control filters,
which are registered through the `ServerAdmissionControl.register`
method. It's up to each AC filter to define its own
way of detecting server over-capacity and its configuration.

Server admission control is on by default in the server
Stack, except for the TwitterServer admin server.

Result

Provide users a way to define their own admission control
logic, and hook it up in the server Stack.

RB_ID=776385
0f0e228
@kevinoliver kevinoliver util, finagle: Port allocation benchmarks to JMH
Motivation

There are a handful of allocation benchmarks that we had written on an
internal test tool before JMH supported allocation profiling. These
are more valuable as JMH tests.

Solution

Port them to util-benchmark and finagle-benchmark

RB_ID=784722
50d373a
Commits on Jan 13, 2016
@mosesn mosesn finagle-mux: Add mnakamura to OWNERS
RB_ID=785192
69fd925
@dschobel dschobel finagle-netty4: twitter buf / netty bytebuf wrappers
Problem

we need byte buffer wrappers for use at the boundary between netty
and finagle.

Solution

introduce BufAsByteBuf and ByteBufAsBuf.

RB_ID=785260
1491bc0
Commits on Jan 18, 2016
@dschobel dschobel finagle-netty4: framing handlers
Problem

Netty4 channels need handlers to make use of
com.twitter.finagle.codec.Frame[En|De]Coders.

Solution

Introduce framing handlers to install FrameEncoder and FrameDecoder
instances into the netty message pipeline.

RB_ID=768026
92bfc23
@kevinoliver kevinoliver finagle-mux: Clear Contexts while creating a Dispatcher
Motivation

Finagle can hold onto request-scoped Contexts for too long when
session-scoped objects are created that are not tied to the request
lifecycle but have request-scoped objects attached via a
c.t.f.context.Context.

This was happening in mux.ClientDispatcher and
mux.ThresholdFailureDetector, which led to large deserialized Thrift
responses being held in memory due to RB 772931, which stored
deserialized Thrift responses in a Context.local.

Solution

Clear Contexts.local and Contexts.broadcast before creating a new
dispatcher.

Result

Context lifetime becomes more correct and as a nice side-effect,
shorter.

RB_ID=785435
3192738
@vkostyukov vkostyukov finagle: Introduce discoverable stack params
Problem

Stack API params used to configure Finagle clients and servers are not
discoverable by IDEs, so it's hard to configure Finagle unless you
know exactly what you're doing.

Example: How to configure concurrency limit of the Finagle server?

  server.configured(RequestSemaphoreFilter.Param(Some(new AsyncSemaphore(
    initialPermits = 10, maxWaiters = 10
  ))))

The code above says _nothing_ about the feature it's enabling, only about
implementation details that a user should be aware of.

Solution

Introduce discoverable Stack API params (a modern replacement for
Client-/Server-Builders) available on every Finagle client or server [1]
via `with*` methods (i.e., `Http.client.withLabel("foo")`).

Result

== before ==

  import com.twitter.concurrent.AsyncSemaphore
  import com.twitter.finagle.Http
  import com.twitter.finagle.param
  import com.twitter.finagle.Transport
  import com.twitter.finagle.filter.RequestSemaphoreFilter
  import com.twitter.util.Duration

  val server = Http.server
    .configured(RequestSemaphoreFilter.Param(Some(new AsyncSemaphore(
      initialPermits = 10, maxWaiters = 10
     ))))
    .configured(param.Label("foo"))
    .configured(Transport.Verbose(enabled = true))
    .configured(Transport.BufSizes(Some(1024), Some(2048)))
    .serve(":8080", service)

== after ==

  import com.twitter.finagle.Http
  import com.twitter.conversions.storage._

  val server = Http.server
    .withAdmissionControl.concurrencyLimit(maxConcurrentRequests = 10, maxWaiters = 10)
    .withLabel("foo")
    .withTransport.verbose
    .withTransport.sendBufferSize(1.kilobyte)
    .withTransport.receiveBufferSize(2.kilobytes)
    .serve(":8080", service)

RB_ID=781833
TBR=true
042078b
@kevinoliver kevinoliver finagle-thrift: Add response classification
Motivation

finagle-thrift should have the ability to do response classification
on the deserialized response objects just as finagle-thriftmux does.

Solution

Introduce `ThriftResponseClassifier` which is similar to what is in
finagle-thriftmux that enables classification for developers.

While in here, added `ResponseClassifier.named` to give classifiers
human readable `toString` output.

RB_ID=780728
246986a
Joseph Boyd ThriftServiceIface.scala: Fix missing s-interpolator on String
RB_ID=784959
NO-QUEUE=true
b6aca70
@vkostyukov vkostyukov finagle: Update docs to use new Stack API
Problem / Solution

There is a new human- and IDE-friendly stack API, so let's use it in the docs.

RB_ID=787601
e0c0bdb
@kevinoliver kevinoliver finagle docs: Add response classification to user guide
Response classification is an important tool for client configuration
and should be covered in the user guide.

RB_ID=788240
60ab48e
@dschobel dschobel finagle-netty4: transporter
Introduce a transporter for netty4.

The transporter is responsible for configuring client socket options
and bridging the netty connection future to the finagle transport
promise.

The channel initializer is responsible for configuring handlers in the
netty pipeline.

RB_ID=768039
a74abdc
@vkostyukov vkostyukov finagle: Enable with-prefixed API for all protocols
Problem

There is a new, user-friendly API with `with` methods, which is only
available for HTTP, not the rest of the Finagle protocols.

Solution

Enable the `with` API for all the Finagle protocols.

The challenge here is to be able to exclude/include some groups of params
depending on the protocol. It turns out there are two groups of params in
Finagle which are not included in every protocol: session pooling
(default pool vs. singleton pool) and load balancing (default balancer vs.
concurrent balancer).

The following table illustrates what params should be enabled/disabled for
a particular protocol:

+-----------+-------------+------------+
| Protocol  | Pool        | Balancer   |
+-----------+-------------+------------+
| Http      | default     | default    |
| Mux       | singleton   | default    |
| MySql     | default     | default    |
| Thrift    | default     | default    |
| ThriftMux | singleton   | default    |
| Redis     | singleton   | default    |
| Memcached | singleton   | concurrent |
+-----------+-------------+------------+

To solve this problem in a generic way, this RB introduces `WithX` traits
that contain a single `withX` method pointing to a proper configuration
(e.g. `ConcurrentLoadBalancingParams` vs. `DefaultLoadBalancingParams`).

In addition to the ability to mix a required configuration into every client,
the `WithX` traits are also quite useful for inheriting Scaladoc rather than
copying the same text all over the place.
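The mechanism can be sketched with stand-in traits (illustrative, not Finagle's actual `WithX` traits): each protocol object mixes in the trait matching the configuration group it supports, so only the valid `with`-style members are present.

```scala
// Stand-in traits for the two session-pooling groups from the table above.
trait WithDefaultPool { def poolMode: String = "default" }
trait WithSingletonPool { def poolMode: String = "singleton" }

// Hypothetical protocol objects picking their group.
object HttpLike extends WithDefaultPool
object MuxLike extends WithSingletonPool
```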

RB_ID=789092
7d0b339
Commits on Jan 25, 2016
@roanta roanta finagle-mux: fragment payloads for data messages.
Problem

Mux shares a session with multiple tag streams but offers no
mechanism to ensure that an individual stream does not dominate a
session. This is particularly problematic in the presence of
large payloads, which can cause head-of-line blocking for an entire
session. What's more, because we share an I/O thread for reads
and writes per session, large payloads can cause writes to
disproportionately occupy a thread's time, which can add latency to
important control messages (e.g. pings and cancellations). Ideally,
we would share a session uniformly across streams to increase
goodput.

Solution

In order for a tag stream to yield, we need to defer flushing its writes
to the OS buffer. This implies that we add some latency to
writes. In netty3, a call to write will immediately flush the bytes
to the OS buffer if there is space (which is the common case).  For
example, the following will sit in a tight loop and flush all the
`fragments` to the OS buffer:

def write(fragments: Iterator[ChannelBuffer]): Future[Unit] =
 if (!fragments.hasNext) Future.Done
 else trans.write(fragments.next()).before(write(fragments))

In other words, `trans.write` is synchronous in the common case.
To fix this, we need to queue and flush writes periodically (note,
netty4 introduces distinct operations for writing (enqueueing) and
flushing). The current implementation flushes writes on read and
uses a scheduled timer to ensure that we flush regularly if throughput
is low. As mentioned earlier, this does entail a latency penalty.
On our test cluster, I see a ~60% increase in latency for large
payloads (more in results section). Because we are now buffering
payloads in mux, we get nice cancellation semantics. This does
require that we introduce a new mux message type (Rdispatch) to
assure the sender of a Tdispatch they can safely clear their buffers.

Any payload larger than 64KiB is fragmented into 64KiB sized chunks.
The window size is fixed in the current implementation and future work may
include a flow control algorithm which dynamically sizes it.
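
As a rough sketch of the fixed 64KiB window (hypothetical names; the
real fragmentation operates on mux frames, not raw byte arrays):

```scala
// Fixed window size: payloads at or under it pass through whole,
// larger ones are split into window-sized chunks (the last chunk
// may be smaller).
val WindowBytes = 64 * 1024

def fragment(payload: Array[Byte]): Seq[Array[Byte]] =
  if (payload.length <= WindowBytes) Seq(payload)
  else payload.grouped(WindowBytes).toSeq
```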

Results

The results are extracted from a simple client/server setup where
the server responds with 500KB payloads for each request (i.e. all
of our responses are fragmented). The request rate is such that we
are not saturating our NIC allocation (< 37.5 MB/s).

Ping p99: With fragmentation, we see less spiky ping latencies and
up to a 50% decrease in peaks.

CPU: With fragmentation, we see ~20-30% less system CPU usage on
both clients and servers (likely due to less syscalls since we batch
writes).

Request p99: Request latency does take a big hit and increases by
~80%.

Allocations/request: no statistical difference between the two.

(links to graphs attached below.)

RB_ID=774742
0f25777
@bdd bdd Revert "finagle-mux: fragment payloads for data messages."
Probable IM-2118 (SEV0) root cause:

RB_ID=790358
TBR=true
NO-QUEUE=true
9f1317e
@cacoco cacoco finagle-core - Fix issue in ServerBuilder around protocol library
Problem

When using the ServerBuilder there is a bug where the protocol
library is not correctly picked up from the Stack Params. This
means that the server is incorrectly registered in the server
registry available via the admin interface.

Resolution

Fix the anonymous function that converts from Stack params to
a Server to correctly use the fully configured params when
determining if the protocol library param should be added to
the list of server params.

Result

The protocol library is correctly configured for services
using the ServerBuilder.

RB_ID=791538
9015273
@roanta roanta finagle-test: remove package
Problem

The package was only used by one test in finagle-thrift.

Solution

Remove it and copy the code inside the test.

RB_ID=789553
6b244e0
@kevinoliver kevinoliver finagle-(thrift)mux: Disable failure detection for end-to-end tests
Motivation

There is some amount of flakiness in the Mux/ThriftMux end-to-end
tests due to the failure detector. We want these tests running and
passing 100% of the time.

Solution

Disable the failure detector for the tests.

RB_ID=791889
9873aa3
@kevinoliver kevinoliver finagle: Add links to our Code of Conduct
Motivation

Finagle has a code of conduct via Twitter OSS but it is not obvious
from looking at our README or project page.

Solution

Add links to it.

RB_ID=792258
06ab71a
@jcrossley jcrossley finagle/finagle-core: Logic improvements in DeadlineFilter
Problem
Requests past their deadline should be discarded.

Solution
DeadlineFilter records stats when requests are past their deadline.

RB_ID=792018
e3e349c
Commits on Feb 01, 2016
@kevinoliver kevinoliver finagle-core: Clear Contexts for StdStackClient.newDispatcher
Motivation

The change in RB_ID 785435 only touched Mux's dispatcher, but the same
issue could exist on other stack clients, such as
`c.t.f.dispatch.PipeliningDispatcher`.

Solution

Move the `Contexts.letClear` call up to where `newDispatcher` is
called in `StdStackClient.endpointer`.

RB_ID=792054
a8b1e89
@roanta roanta finagle-mux: Interrupt server writes on Tdiscarded
Problem

When writing a response in mux, we lose track of the writer
promise (i.e. write and forget). This is unfortunate because
we want to attempt to interrupt the write if we receive a
cancellation from the client. For example, this becomes
increasingly important with large responses that are more likely
to timeout.

Solution

Compose writes as part of our pending state on the mux server.

RB_ID=792186
17f1ab3
@amartinsn amartinsn finagle: Adding Link to Configuring Fault Tolerant Finagle Clients
Signed-off-by: Ryan O'Neill <ryano@twitter.com>

RB_ID=793064
5dbcbd1
@kevinoliver kevinoliver finagle-thrift(mux): Wire up ResponseClassifier to per-method metrics
Motivation

The Scrooge changes to enable a custom `ResponseClassifier` for
per-method metrics (from RB 791465) needs to be wired up.

Solution

Use the `ResponseClassifier` when possible.

RB_ID=791470
e5dbeeb
@donghual donghual finagle-stats: Let flag statsFilterFile take multiple files
Problem

The flag statsFilterFile takes one filename. An app may need to load filters from multiple files.

Solution

Define flag statsFilterFile as GlobalFlag[Set[File]] to take comma-separated files.

Result

Compatible with current flags and is able to handle multiple files from flag statsFilterFile.
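
A minimal sketch of the comma-separated parsing, assuming the flag's
raw value arrives as a single string (parseStatsFilterFiles is a
hypothetical helper, not the actual flag code):

```scala
import java.io.File

// Hypothetical parser for a GlobalFlag[Set[File]]-style value: a
// comma-separated string becomes a set of files, and a plain single
// filename (the old form) still works unchanged.
def parseStatsFilterFiles(flagValue: String): Set[File] =
  flagValue
    .split(',')
    .iterator
    .map(_.trim)
    .filter(_.nonEmpty)
    .map(new File(_))
    .toSet
```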

RB_ID=793397
4aef140
@tonyd3 tonyd3 finagle-core: use strict dependencies + fixup commons-codec versioning
Problem
Having a lot of dependencies makes using these libraries within
different repos a lot harder to do. A lot of dependency
exclusions are needed to make it work.

Solution
Start clarifying all dependencies by making them explicit and
removing unused dependencies.

RB_ID=793372
5995eb5
@blackicewei blackicewei finagle-mux: ping failure detector to close session by default
Problem

After a session is marked busy in mux, it is stuck in the
busy state until the session is closed. This is problematic
when there is an alternative good networking path, but the client
never has the chance to reestablish the session on the good
path.

Solution

Close the session by default when ping RTT exceeds 4 seconds,
to allow a new session to be established.
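
The policy can be sketched as a pure decision function (illustrative
only; the actual failure detector in finagle-mux is more involved):

```scala
// Hypothetical sketch: instead of leaving a session marked busy
// indefinitely, close it once the observed ping RTT crosses a
// threshold (4 seconds by default here) so a fresh session can be
// established on a better path.
sealed trait SessionAction
case object KeepOpen extends SessionAction
case object CloseSession extends SessionAction

def onPingRtt(rttMs: Long, thresholdMs: Long = 4000L): SessionAction =
  if (rttMs > thresholdMs) CloseSession else KeepOpen
```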

RB_ID=773649
d638aa5
@vkostyukov vkostyukov finagle: Java-friendly API for discoverable stack params
Problem

Discoverable stack params are built with Scala's type system feature called
"F-bounded polymorphism", which is one of the unfriendliest things for Java ->
Scala interop. As a result, Java users are not able to properly use the new API.

Solution

A well-known workaround is to override polymorphic methods to use concrete
types. The solution is quite verbose, but still makes sense given that params
do not change that often.

Result

Discoverable stack params are usable from Java.
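
A minimal, self-contained sketch of the workaround (StackBased and
MyClient are hypothetical stand-ins, not Finagle's actual types):

```scala
// F-bounded trait: the fluent method returns the self type T, which
// Java callers see as an awkward wildcard-bounded type.
trait StackBased[T <: StackBased[T]] { self: T =>
  def configured(param: String): T
}

// The workaround: the concrete class declares the method with its own
// concrete return type, so Java callers see MyClient directly.
case class MyClient(params: List[String] = Nil) extends StackBased[MyClient] {
  override def configured(param: String): MyClient =
    copy(params = param :: params)
}
```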

RB_ID=794290
fd69c22
@bmdhacks bmdhacks you don't need specs
Specs has been deprecated for many years now.  This removes all the
inclusions of it in projects that don't use it.  It also adds it in some
places where people use it but don't depend on it.

RB_ID=763547
a589a25
Todd Segal [finagle-core] Make FixedInetResolver and 1 API public
[problem]

Though the InetResolver is public, the FixedInetResolver (which caches all successful lookups) is private[finagle]

[solution]

Make the FixedInetResolver have the same access level as the InetResolver. Also, make toAddr (the API which does the one-time lookup outside of polling) a separate public API for others to leverage as needed. Nothing new is being made public API-wise.

RB_ID=794785
03457a2
@jcrossley jcrossley twitter-server, finagle/finagle-core: Expose set of resolved endpoint…
…s for each client

Problem
There is no visibility into the set of resolved endpoints for each client

Solution
Added a new 'Endpoints' section on client pages, listing the weights, paths, and resolved endpoints for each dtab.

RB_ID=779001
8850c3f
Evan Jones finagle/finagle-memcached: MockClient: Add .contents for testing cont…
…ents

This permits checking properties of the cache that are not possible through
the memcache interface. For example: How many keys does it contain?
Does it contain exactly key "x" and key "y"?

RB_ID=794854
cc1f635
Commits on Feb 02, 2016
@taylorleese taylorleese Disable openjdk7 in Travis CI builds
RB_ID=786579
92dc6ec
@cacoco cacoco finagle-core: Expose a server's actual bound port in its ServerRegist…
…ry entry

Problem

We currently register a server being served in the ServerRegistry with the
addr requested and not the actual bound addr. This makes the registry somewhat
less than useful when using ephemeral ports as we end up reporting ":0" as
opposed to the bound address. The registry should instead accurately report
a running server's bound port.

Resolution

Update ServerBuilder#serve to register the server with the
underlying.boundAddress which is the actual bound port of the server.

Result

Better information reported in the ServerRegistry that can be used to query
for a running server's port.

RB_ID=795441
21397f4
@cacoco cacoco Prepare for open-source release of CSL libraries
RB_ID=796003
dab8002
Commits on Feb 04, 2016
@dschobel dschobel finagle-core: limit pending requests per client dispatcher
Problem

We want to bound the number of pending requests on a single connection
to stop clients from overwhelming their counter-parties.

Solution

Introduce PendingRequestFilter module and ClientAdmissionController param.
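
The gating idea can be sketched with a simple counter (illustrative
only; the real PendingRequestFilter is Future-based and fails rejected
requests rather than returning a Boolean):

```scala
// Hypothetical sketch of bounding pending requests on a connection:
// a counter-based gate that rejects new work once `limit` requests
// are in flight, releasing a slot when each request completes.
final class PendingGate(limit: Int) {
  private var pending = 0

  def tryAcquire(): Boolean = synchronized {
    if (pending < limit) { pending += 1; true } else false
  }

  def release(): Unit = synchronized { pending -= 1 }
}
```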

RB_ID=795491
b2734f6
@taylorleese taylorleese Fix scrooge plugin in sbt build
RB_ID=780746
45fac2a
@daviddenton daviddenton finagle-site: add link to fintrospect
Signed-off-by: Daniel Schobel <dschobel@twitter.com>

RB_ID=796498
4032597
@Saisi Saisi finagle-*: fix typos
Signed-off-by: Daniel Schobel <dschobel@twitter.com>

RB_ID=796502
29f951a
@ryanoneill ryanoneill util, finagle, scrooge: Add ryano to More Owners Files
RB_ID=796566
199ee1a
@cacoco cacoco release: Prepare libraries for OSS release
RB_ID=796660
21d0ee8