Releases: openzipkin/zipkin

Zipkin 2.14

15 May 15:44

Zipkin 2.14 adds storage throttling and Elasticsearch 7 support. We've also improved efficiency around span collection and enhanced the UI. As mentioned last time, this release drops support for Elasticsearch v2.x and Kafka v0.8.x. Here's a run-down of what's new.

Storage Throttling (Experimental)

How to manage surge problems in collector architecture is non-trivial. While we've collected resources about this for years, only recently did we have a champion to take on some of the mechanics in practical ways. @Logic-32 fleshed out concerns in collector surge handling and did an excellent job evaluating options for those running pure http sites.

Towards that end, @Logic-32 created an experimental storage throttling feature (bundled for your convenience). When STORAGE_THROTTLE_ENABLED=true, calls to store spans pay attention to storage errors and adjust the backlog accordingly. Under the hood, this uses Netflix concurrency limits.
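
If you want to try it, a minimal sketch looks like the following. STORAGE_THROTTLE_ENABLED is the flag described above; the storage settings are placeholders for whatever your site already uses:

STORAGE_THROTTLE_ENABLED=true STORAGE_TYPE=elasticsearch ES_HOSTS=http://localhost:9200 java -jar zipkin.jar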

Craig tested this at his Elasticsearch site, and it resulted in far fewer dropped spans than before. If you are interested in helping test this feature, please see the configuration notes and join gitter to let us know how it works for you.

Elasticsearch 7.x

Our server now formally supports Elasticsearch 6.x-7.x (and 5.x as best effort). Most notably, you'll no longer see colons in your index patterns if using Elasticsearch 7.x. Thanks to @making and @chefky for the early testing of this feature, as quite a lot changed under the hood!

Lens UI improvements

@tacigar continues to improve Lens so that it can become the default user interface. He's helped tune the trace detail screen, notably displaying the minimap more intuitively based on how many spans are in the trace. You'll also notice the minimap has a slider now, which can help stabilize the area of the trace you are investigating.

Significant efficiency improvements

Our Armeria collectors (http and grpc) now work natively with pooled buffers instead of byte arrays, using renovated protobuf parsers. The sum of this is more efficient trace collection when using protobuf encoding. Thanks very much to @anuraaga for leading and closely reviewing the most important parts of this work.

No more support for Elasticsearch 2.x and Kafka 0.8.x

We no longer support Elasticsearch 2.x or Kafka 0.8.x. Please see advice mentioned in our last release if you are still on these products.

Scribe is now bundled (again)

We used to bundle Scribe (a Thrift RPC span collector), but eventually moved it to a separate module because it is archived technology with library conflicts. Our server is now powered by Armeria, which natively supports Thrift. Thanks to help from @anuraaga, the server has built-in Scribe support for those running legacy applications. Set SCRIBE_ENABLED=true to use this.
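
A sketch of enabling it (the variable is the one named above; all other settings are whatever you normally run with):

SCRIBE_ENABLED=true java -jar zipkin.jar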

Other notable updates

  • Elasticsearch span documents are written with ID ${traceID}-${MD5(json)} to allow for server-side deduplication
  • Zipkin Server is now using the latest Spring Boot 2.1.5 and Armeria 0.85.0

Zipkin 2.13.0

01 May 11:07

Zipkin 2.13 includes several new features, notably a gRPC collection endpoint and remote service name indexing. Lens, our new UI, is fast approaching feature parity, which means it will soon be the default. End users should note this is the last release to support Elasticsearch v2.x and Kafka v0.8.x. Finally, this is our first Apache Incubating release, and we are thankful for the community's support and patience.

Lens UI Improvements

Led by @tacigar, Lens has been improving fast. Let's look at a couple of recent improvements: given a trace ID or JSON, the page will load what you want.

Open trace json

Go to trace ID

Right now, you can opt in to Lens by clicking a button. The next step is making Lens the default, and finally deleting the classic UI. Follow the appropriate projects for status on this.

gRPC Collection endpoint

Due to popular demand, we now publish a gRPC endpoint /zipkin.proto3.SpanService/Report which accepts the same protocol buffers ListOfSpans message as our POST /api/v2/spans endpoint. This listens on the same port as normal http traffic when COLLECTOR_GRPC_ENABLED=true. We will enable this by default after the feature gains more experience.
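
To experiment with it, enabling the flag mentioned above is all that's needed (a sketch; the rest of your configuration is unchanged):

COLLECTOR_GRPC_ENABLED=true java -jar zipkin.jar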

We chose to publish a unary gRPC endpoint first, as that is most portable with limited clients such as grpc-web. Our interop tests use the popular Android and Java client Square Wire. Special thanks to @ewhauser for leading this effort and @anuraaga who championed much of the work in Armeria.

Remote Service Name indexing

One rather important change in v2.13 is remote service name indexing. This means that the UI no longer confuses local and remote service name in the same drop-down. The impact is that some sites will have much shorter and more relevant drop-downs, and more efficient indexing. Here are some screen shots from Lens and Classic UIs:

Schema impact

We have tests to ensure the server can be upgraded ahead of the schema change. Also, most storage types have the ability to automatically upgrade the schema. Here is the relevant info if you are manually upgrading:

STORAGE_TYPE=cassandra

If you set CASSANDRA_ENSURE_SCHEMA=false, you are opting out of automatic schema management. This means you need to execute these CQL commands manually to update your keyspace.

STORAGE_TYPE=cassandra3

If you set CASSANDRA_ENSURE_SCHEMA=false, you are opting out of automatic schema management. This means you need to execute these CQL commands manually to update your keyspace.

STORAGE_TYPE=elasticsearch

No index changes were needed

STORAGE_TYPE=mysql

Logs include the following message until instructions are followed:

zipkin_spans.remote_service_name doesn't exist, so queries for remote service names will return empty.
Execute: ALTER TABLE zipkin_spans ADD `remote_service_name` VARCHAR(255);
ALTER TABLE zipkin_spans ADD INDEX(`remote_service_name`);

Dependency group ID change

For those using Maven to download, note that the group ID for libraries changed from "io.zipkin.zipkin2" to "org.apache.zipkin.zipkin2". Our server components group ID changed from "io.zipkin.java" to "org.apache.zipkin".

This is the last version to support Elasticsearch 2.x

Elastic's current support policy is the latest major version (currently 7) and the last minor (currently 6.7). This limits our ability to support you. For example, Elasticsearch's hadoop library is currently broken for versions 2.x and 5.x, making our dependencies job unable to work on that range while also working on version 7.x.

Our next release will support Elasticsearch 7.x, but we have to drop Elasticsearch 2.x support. Elasticsearch 5.x will be best efforts. We advise users to be current with Elastic's supported version policy, to avoid being unable to upgrade Zipkin.

This is the last version to support Kafka 0.8.x

This is the last release of Zipkin to support connecting to a Kafka 0.8 broker (whose last release was almost 4 years ago). Notably, this means those using KAFKA_ZOOKEEPER to configure their broker need to switch to KAFKA_BOOTSTRAP_SERVERS instead.
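
A hypothetical before/after of that switch (broker and ZooKeeper addresses are placeholders):

# before: Kafka 0.8 via ZooKeeper
KAFKA_ZOOKEEPER=zookeeper:2181 java -jar zipkin.jar
# after: Kafka 0.10+ bootstrap servers
KAFKA_BOOTSTRAP_SERVERS=broker1:9092,broker2:9092 java -jar zipkin.jar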

Other notable updates

  • Zipkin Server is now using the latest Spring Boot 2.1.4

Zipkin 2.12.6

10 Mar 19:00

Zipkin 2.12.6 migrates to the Armeria http engine. We also move to Distroless to use JRE 11 in our docker images.

Interest in Armeria originated 3 years ago at LINE, a long-time supporter of Zipkin, around its competency in http/2 and asynchronous i/o. Back then, we were shifting towards a more modular server so that they could create their own. Over time, interest and our use cases for Armeria have grown. Notably, @ewhauser has led an interest in a gRPC endpoint for Zipkin. Typically, people present different listen ports for gRPC, but Armeria allows the same engine to be used for both usual web requests and gRPC. Moreover, now more than ever, LINE, the team behind Armeria, is deeply involved in our community, as are former LINE engineers like @anuraaga. So, we had a match of both supply and demand for the technology.

End users will see no difference in Zipkin now that we've replaced the http engine. Observant administrators will notice some of the console lines being a bit different. The whole experience has been drop-in thanks to Spring Boot integration efforts led by @anuraaga, @trustin and @hyangtack. For those interested in the technology for their own apps, please check out Armeria's example repository.

There's more to making things like http/2 work well than the server framework code. For example, there is nuance around OpenSSL which isn't solved well until newer runtimes. For a long time, we used an Alpine JRE 1.8 image because some users were, justifiably or not, very concerned about the size of our docker image. As years passed, we kept hitting fragile setup concerns around OpenSSL. This would play out as bad releases of stackdriver integration, as that used gRPC. As we owe more service to users than to perceptions around dist sizes, we decided it appropriate to move to a larger, but still slim, Distroless JRE 11 image.

The MVP of all of this is @anuraaga, the same person who made an Armeria zipkin server at LINE 3 years ago, who today backfilled functionality where needed and addressed the Docker side of things. Now, you can use this more advanced technology without thinking. Thank you, Rag!

2.12.4 DO NOT USE

07 Mar 19:52
Pre-release

This version returned corrupt data under /prometheus (/actuator/prometheus). Do not use it.

Zipkin 2.12.3

02 Mar 12:33

Zipkin 2.12.3 provides an easy way to preview Lens, our new user interface.

Introduction to Zipkin Lens

Zipkin was open sourced by Twitter in 2012. Zipkin was the first OSS distributed tracing system shipped complete with instrumentation and a UI. This "classic" UI persisted in various forms for over six years before Lens was developed. We owe a lot of thanks to the people who maintained it, as it is deployed in countless sites. We also appreciate alternate Zipkin UI attempts, and the work that went into them. Here are milestones in the classic UI, leading to Lens.

  • Mid 2012-2015 - Twitter designed and maintained Zipkin UI
  • Early 2016 - Eirik and Zoltan team up to change UI code and packaging from scala to pure javascript
  • Late 2016 - Roger leads experimental Angular UI
  • Late 2017 to Early 2018 - Mayank leads experimental React UI
  • Mid to Late 2018 - Raja renovates classic UI while we investigate options for a standard React UI
  • December 7, 2018 - LINE contributes their React UI as Zipkin Lens
  • Early 2019 - Igarashi and Raja complete Zipkin Lens with lots of usage feedback from Daniele

Lens took inspiration from other UIs, such as Haystack, and cited that influence. You'll notice using it that it has a feel of its own. Many thanks to the design lead Igarashi, whose attention to detail makes Lens a joy to use. Some design goals were to make more usable space, as well as the ability to assign site-specific details, such as tags. Lens is not complete in its vision. However, it has feature parity to the point where broad testing should occur.


Trying out Lens

We spend a lot of time and effort in attempts to de-risk trying new things. Notably, the Zipkin server ships with both the classic and the Lens UI, until the latter is complete. With design help from @kaiyzen and @bsideup, starting with Zipkin 2.12.3, end users can select the UI they prefer at runtime. All that's needed is to press the button "Try Lens UI", which reloads into the new codebase. There's then a button to revert: "Go back to classic Zipkin".


Specifically, the revert rigor was thanks to Tommy and Daniele, who insisted on your behalf that giving a way out is as important as the way in. We hope you feel comfortable letting users try Lens now. If you want to prevent that, you can set the variable ZIPKIN_UI_SUGGEST_LENS=false.

Auto-complete Keys

One design goal of Lens was to have it better reflect the priority of sites. The first work towards that is custom search criteria via auto-completion keys defined by your site. This was inspired by an alternative to Lens, Haystack UI, which has a similar "universal search" feature. Many thanks to Raja who put in ground work on the autocomplete api and storage integration needed by this feature. Thanks to Igarashi for developing the user interface in Lens for it.

To search by site-specific tags, such as environment names, you first need to tell Zipkin which keys you want. Start your zipkin servers with an environment variable like below that includes keys whitelisted from Span.tags. Note that keys you list should have a fixed set of values. In other words, do not use keys that have thousands of values.

AUTOCOMPLETE_KEYS=client_asg_name,client_cluster_name,service_asg_name,service_cluster_name java -jar zipkin.jar

Here's a screen shot of custom auto-completion, using an example trace from Netflix.


Feedback

Maintaining two UIs is a lot of burden on Zipkin volunteers. We want to transition to Lens as soon as it is ready. Please upgrade and give feedback on Gitter as you use the tool. With luck, in a month or two we will be able to complete the migration, and divert the extra energy towards new features you desire, maybe with your help implementing them! Thanks for the attention, and as always star our repo, if you think we are doing a good job for open source tracing!

Zipkin 2.11.1

05 Aug 11:51

Zipkin 2.11.1 fixes a bug which prevented Cassandra 3.11.3+ storage from initializing properly. Thanks @nollbit and @drolando

Zipkin 2.11

03 Aug 13:25

Zipkin 2.11 dramatically improves Cassandra 3 indexing and fixes some UI glitches

Cassandra 3 indexing

We've had the cassandra3 storage type for a while, which uses SASI indexing. One thing @Mobel123 noticed was particularly high disk usage for indexing tags. This resulted in upstream work in Cassandra to introduce a new indexing option, which results in 20x performance for the type of indexing we use.

See #1948 for details, but here are the notes:

  1. You must upgrade to Cassandra 3.11.3 or higher first
  2. Choose a path for dealing with the old indexes
  • easiest is to re-create your keyspace (which will drop trace data)
  • advanced users can run zipkin2-schema-indexes.cql, which will leave the data alone but recreate the index
  3. Update your zipkin servers to the latest patch (2.11.1+)

Any questions, find us on gitter!

Thanks very much @michaelsembwever for championing this, and @llinder for review and testing this before release. Thanks also to the Apache Cassandra project for accepting this feature as it is of dramatic help!

UI fixes

@zeagord fixed bugs relating to custom time queries. @drolando helped make messages a little less scary when search is disabled. Zipkin's UI is a bit smaller as we've updated some javascript infra which minimizes better. This should reduce initial load times. Thanks tons for all the volunteering here!

Zipkin 2.10 completes our v2 migration

07 Jul 06:49

Zipkin 2.10 drops v1 library dependency and http read endpoints. Those using the io.zipkin.java:zipkin (v1) java library should transition to io.zipkin.zipkin2:zipkin as the next release of Zipkin will stop publishing updates to the former. Don't worry: Zipkin server will continue accepting all formats, even v1 thrift, for the foreseeable future.

Below is a story of our year-long transition to a v2 data format, ending with what we've done in version 2.10 of our server (UI in nature). This is mostly a story of how you address a big upgrade in a big ecosystem when almost everyone involved is a volunteer.

Until a year ago, the OpenZipkin team endured (and asked ourselves) many confused questions about our thrift data format. Why do service endpoints repeat all the time? What are binary annotations? What do we do if we have multiple similar events or binary annotations? Let's dig into the "binary annotation", as probably many reading this still have no idea!

Binary annotations were a sophisticated tag, for example an http status. While the name is confusing, most problems were in being too flexible, and this led to bugs. Specifically, it was a list of elements with more type diversity than proved useful. While a noble aim that made sense at the time, a binary annotation could be a string, binary, or various bit lengths of integer or floating point numbers. Even things that seem obvious could be thwarted. For example, some would accidentally choose the type binary for a string, effectively disabling search. Things seemingly as simple as numbers were bug factories. For example, folks would add random numbers as an i64, not thinking that you can't fit one in a json number without quoting or losing precision. Things that seemed low-hanging fruit were not. Let's take http status for example. Clearly, this is a number, but which? Is it 16-bit (technically correct) or 32-bit (to avoid signed misinterpretation)? Could you search on it the way you want to (<200 || >299 && !404)? Tricky, right? Let's say someone sent it as a different type by accident... would it mess up your indexing if sent as a string (definitely some will!)? Even if all of this was solved, Zipkin is an open ecosystem including private sites with their private code. How much time does it cost volunteers to help others troubleshoot code that can't be shared? How can we reduce support burden while remaining open to 3rd-party instrumentation?

This is a long-winded story of how our version 2 data format came along. We cleaned up our data model, simplifying it in an attempt to optimize for reliability and support over precision. For example, we scrapped "binary annotation" for "tags". We don't let them repeat or use numeric types. There are disadvantages to these choices, but explaining them is cheap and the consequences are well understood. Last July, we started accepting a version 2 json format. Later, we added a protobuf representation.

Now, why are we talking about a data format supported a year ago? Because we just finished! It takes a lot of effort to carefully roll something out into an ecosystem as large as Zipkin's and being respectful of the time impact to our volunteers and site owners.

At first, we ingested our simplified format on the server side. This would "unlock" libraries, regardless of how they are written and who wrote them, into simpler data: data that much resembles tracing operations themselves. We next focused on libraries to facilitate sending and receiving data, notably brownfield changes (options) so as to neither disrupt folks nor scare them off. We wanted the pipes that send data to become "v2 ready" so owners could simultaneously use new and old formats, rather than expect an unrealistic synchronous switch of data format. After this, we started migrating our storage and collector code, so that internal functionality resembles v2 constructs even while reading or writing old data in old schemas. Finally, in version 2.10, we changed the UI to consume only v2 data.

So, what did the UI change include? What's interesting about that? Isn't the UI old? Let's start with the last question. While it's true the UI has only had facelifts and smaller visible features, there certainly has been work involved in keeping it going. For example, backporting of tests, restructuring its internal routing, adding configuration hooks or integration patterns. When you don't have UI staff, keeping things running is what you end up spending most time on! More to the point, before 2.10, all the interesting data conversion and processing logic happened in Java, on the api server. For example, merging of data, correcting clock shifts, etc. This set up a hard job for those emulating zipkin, at least those who emulated the read side. Custom read api servers or proxies can be useful in practice. Maybe you need to stitch in authorization or data filtering logic; maybe your data is segmented. In short, while most read scenarios are supported out-of-box, some advanced proxies exist for good reason.

Here's a real life example: Yelp saves money by not sending trace data across paid links. For example, in Amazon's cloud (and most others), if you send data from one availability zone to another, you will pay for that. To reduce this type of cost, Yelp uses an island + aggregator pattern to save trace data locally, but materialize traces across zones when needed. In their site, this works particularly well as search doesn't use Zipkin anyway: they use a log-based tool to find trace IDs. Once they find a trace ID, they use Zipkin to view it... but still, doing so requires data from all zones. To solve this, they made an aggregating read proxy. Before 2.10, it was more than simple json re-bundling. They found that our server did things like merging rules and clock skew correction. This code is complex and also high maintenance, but was needed for the UI to work correctly. Since v2.10 moves this to UI javascript, Yelp's read proxy becomes much simpler and easier to maintain. In summary, having more logic in the UI means less work for those with DIY api servers.

Another advantage of having processing logic in the UI is better answering "what's wrong with this trace?" For example, we know data can be missing or incorrect. When processing is done server-side, there is friction in deciding how to present errors. Do you decorate the trace with synthetic data, or use headers, or some enveloping? If instead that code is in the UI, such decisions are more flexible and don't impact the compatibility of others. While we've not done anything here yet, you can imagine it is easier to show, with color or otherwise, that you are viewing "a bad trace". Things like this are extremely exciting, given our primary goals are usually to reduce the cost of support!

In conclusion, we hope that by sharing our story, you have better insight into the OpenZipkin way of doing things, how we prioritize tasks, and how seriously we take support. If you are a happy user of Zipkin, find a volunteer who's helped you and thank them, star our repository, or get involved if you can. You can always find us on Gitter.

Zipkin 2.9

16 Jun 01:39

Zipkin 2.9 reorganizes the project in efforts to reduce future maintenance. This work is important to the long-term health and related to our "v2" project started last year.

If all goes well, the next server release will remove all dependencies on and stop publishing our "v1" library io.zipkin.java:zipkin.

Many thanks for the continual support and testing by folks, notably @rangwea, @llinder and @shakuzen, as this ground work was a bit bumpy. On that note, please use the latest patch (at the time of writing 2.9.4)!

Kafka consumers are now v0.10+, not 0.8, by default

We kept Kafka 0.8 support as the default for longer than comfortable, based on demand from older Zipkin sites. However, this started to cause problems, notably as folks would use the old KAFKA_ZOOKEEPER approach to connecting just because it was the default. This also pins versions in a difficult place, notably when the server is extended. The KAFKA_BOOTSTRAP_SERVERS (v0.10+) approach, which was available as an option before, is now the default mechanism in Zipkin.

Those using KAFKA_ZOOKEEPER because they still run old 0.8 brokers can still do so. If you are using Docker, there is no change at all. If you are self-packaging a distribution of zipkin, please use these instructions to integrate v0.8 support.

AWS Elasticsearch Service is now in the zipkin-aws image

Before, our Amazon integration for Elasticsearch was baked into the default image. This was a historical thing, because we didn't have a large repository for Amazon components yet. It caused dual-repository maintenance, particularly Amazon SDK version ping-pong, and also caused expertise to be spread somewhat arbitrarily across two repositories. zipkin-aws is now the "one stop shop" for Amazon Web Services integrations with Zipkin (or other libraries like Brave for that matter).

Those using ES_AWS_DOMAIN or Amazon endpoints in ES_HOSTS need to use a "zipkin-aws" distribution. If you are using Docker, you just switch your image from openzipkin/zipkin to openzipkin/zipkin-aws. If you are self-packaging a distribution of zipkin, please use these instructions to integrate Amazon's Elasticsearch Service.
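
A minimal sketch with Docker, where my-domain is a placeholder for your Amazon Elasticsearch Service domain:

docker run -d -p 9411:9411 -e STORAGE_TYPE=elasticsearch -e ES_AWS_DOMAIN=my-domain openzipkin/zipkin-aws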

"Legacy reads" are now removed from our Elasticsearch storage implementation

Last year, we had to switch our storage strategy in Elasticsearch as multiple type indexes were dropped in future versions of Elasticsearch. We added a temporary ES_LEGACY_READS_ENABLED flag to allow folks to transition easier. This is now removed.

By removing this code, we have more "maintenance budget" to discuss other transitions in Elasticsearch. For example, it is hinted that within a certain version range, re-introducing natural tag indexing could be supported. This would imply yet another transition, which is a bitter pill if we also have to support an older transition.

V1 thrift codec is "re-supported"

You can now read and write old zipkin thrifts using the io.zipkin.zipkin2:zipkin library. This feature is very undesirable from a code maintenance point of view. However, some projects simply weren't upgrading, as they are still running or supporting old zipkin backends that only accept thrift. To allow a longer transition period, we introduced the ability to use thrift (and scribe) again on the client side. The first consumer is Apache Camel. Under the covers, SpanBytesEncoder.THRIFT does the job.

Note: If you are still writing v1 thrifts, or using Scribe, please consider alternatives! This is not only to receive better support here, but also from the myriad of Zipkin clones. Most clones only accept json, so your products will be more supportable as soon as you can transition off thrift.

Also note: the core jar is still dependency free as we coded the thrift codec directly. A lot of care was taken to pay for this change, by removing other code or sources of bloat from the jar. In fact, our jar is slightly smaller than before we re-added thrift, now a hair below 200KiB.

Storage and Collector extensions are now "v2 only"

Before, we had to support two libraries for integrations such as zipkin-gcp: this implied one path for v1 structs and another for v2 structs. Now that our core library can read both formats, we could dramatically simplify these integrations. End users won't see any change as a part of this process.

Zipkin 2.8

03 May 08:09

Zipkin 2.8 migrates to Spring Boot v2 and adds a binary data format (proto3). Do not use a version lower than 2.8.3, as we found some issues post-release and resolved them.

This release is mostly infrastructure. All the help from our community is super appreciated, as such work is often hard and thankless. Let's stop that here. Thank you specifically @zeagord and @shakuzen for working on these upgrades, testing in production and knocking out dents along the way.

Spring Boot and Micrometer update

Zipkin has internally been updated to Spring Boot v2, which implies Micrometer for metrics. While features of our service didn't change, this is an important upgrade and has an impact on Prometheus configuration.

Prometheus

We now internally use Micrometer for metrics on Zipkin server. Some of the Prometheus metrics have changed to adhere to standards there. Our Grafana setup is adjusted on your behalf, but here are some updates if you are rolling your own:

  • Counter metrics now properly have _total for sums
    • Ex zipkin_collector_bytes_total, not zipkin_collector_bytes
  • Collector metrics no longer have embedded fields for transport type
    • Ex zipkin_collector_bytes has a tag transport instead of a naming convention
  • Http metrics now have normalized names
    • Ex http_server_requests_seconds_count, not http_requests_total
  • Http metrics have route-based uri tags
    • Ex instead of path which has variables like /api/v2/trace/abcd, uri with a template like /api/v2/trace/{traceId}

Note: the metrics impact is exactly the same as in Zipkin 2.7.5, prior to the Spring Boot v2 update. You can update to Zipkin 2.7.5 first to test the metrics changes independently of the other updates.

Endpoint changes

If you were using Spring's "actuator" endpoints, they are now under the path /actuator as described in Spring Boot documentation. However, we've reverse mapped a /metrics and /health endpoint compatible with the previous setup.
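
For example, the following should both answer on a default server (a sketch, assuming the standard port 9411):

curl -s http://localhost:9411/actuator/health
curl -s http://localhost:9411/health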

Binary Format (protobuf3)

We've had many requests for an alternative to our old thrift format for binary encoding of trace data. Some had interest in the smaller size (typical span data is half the size of uncompressed json). Others had interest in compatibility guarantees. Last year when we created Zipkin v2, we anticipated demand for this and settled on Protocol Buffers v3 as the format of choice. Due to a surge of demand, we've added this to Zipkin 2.8.

Impact to configuration

If using a library that supports this, it is as easy as an encoding choice. For example, switching to Encoding.PROTO3 in your configuration.

NOTE Servers must be upgraded first!

Impact to collectors

Applications will send spans in messages and so our collectors now detect a byte signature of the ListOfSpans type and act accordingly. In http, this is assumed when the content-type application/x-protobuf is used on the /api/v2/spans endpoint. There is no expected impact beyond this except for efficiency gains.
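
For instance, a hand-rolled upload could look like the following, where spans.bin is a hypothetical file containing a serialized ListOfSpans message:

curl -X POST http://localhost:9411/api/v2/spans -H 'Content-Type: application/x-protobuf' --data-binary @spans.bin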

Impact to hand-coders

For those of you coding your own, you can choose to use normal protoc, or our standard zipkin2 library. Our bundled SpanBytesEncoder.PROTO3 has no external dependencies. While this added some size, our jar is still less than 200K.