
Add protobuf-based analysis binary store #351

Merged
merged 56 commits into sbt:1.0 from scalacenter:protobuf on Jul 14, 2017

Conversation

@jvican
Member

commented Jul 10, 2017

This pull request replaces the text-based analysis format with a binary format
powered by Protobuf. The new format provides three main advantages:

  1. Backwards and forwards binary compatibility at the analysis format level.
  2. Faster serialization/deserialization of the analysis file.
  3. A much better way to make the analysis file machine-independent.

The purpose of this pull request is to add the benefits of a
backwards/forwards binary compatible analysis file without hurting the speed
of the text-based analysis format. In my local experiments, this goal has been
met. The new format is on par performance-wise with the text-based analysis
format for small analysis files, and it's much faster (~1.5-2x) when the
Scalacheck generators (maxSources and maxRelatives) are increased from 10 to
40, and even faster as those numbers grow.

With regards to the public API, the PR defines the public API of zinc persist,
which was previously all internal. This API is required by every piece of
software that integrates with Zinc and needs to customise the read/write steps
in a sane way.

The API adds the following interfaces:

  1. ReadMappers. To allow customisation before reading the analysis files.
  2. WriteMappers. To allow customisation before writing the analysis files.
  3. ReadWriteMappers. To wrap the read and write mappers.

These APIs all provide ways to get the default implementations from Java.
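As a rough Java sketch of the shape of this API (the names ReadMapper, WriteMapper, and ReadWriteMappers come from this PR, but the exact signatures and the getDefaults factory below are illustrative assumptions, not Zinc's actual code):

```java
import java.io.File;

// Hypothetical sketch of the mapper API described above; names of members
// and the factory method are illustrative and may not match Zinc exactly.
public class MapperSketch {
    // A read mapper customises values as an analysis file is read.
    interface ReadMapper {
        File mapSourceFile(File sourceFile);
    }

    // A write mapper customises values before an analysis file is written.
    interface WriteMapper {
        File mapSourceFile(File sourceFile);
    }

    // ReadWriteMappers simply wraps one mapper of each kind.
    static final class ReadWriteMappers {
        final ReadMapper reader;
        final WriteMapper writer;

        ReadWriteMappers(ReadMapper r, WriteMapper w) {
            reader = r;
            writer = w;
        }

        // The default implementations are identity mappers.
        static ReadWriteMappers getDefaults() {
            return new ReadWriteMappers(f -> f, f -> f);
        }
    }

    public static void main(String[] args) {
        ReadWriteMappers mappers = ReadWriteMappers.getDefaults();
        File src = new File("/tmp/Foo.scala");
        System.out.println(mappers.writer.mapSourceFile(src).getPath());
    }
}
```

Keeping read and write mappers as separate types (rather than one interface) mirrors the design choice discussed further down in this thread: it prevents users from passing a mapper intended for one direction to the other.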

The most relevant implementations to discuss are the ones for ReadMappers and
WriteMappers. These implementations expose a way to relativize (make
machine-independent) an analysis file. Why would someone want to do that? To
ship analysis files to clusters and associate them with commits, avoiding
full compilations when checking out a concrete commit.

These implementations, however, are not meant to cover all the possible use
cases of all build tools; they only provide a simple implementation for those
users that can ensure that all paths in the analysis file start with a concrete
prefix. This is usually not the case (read the docs in the interfaces), and
therefore build tools will need to implement their own, taking into account all
the paths that can be present: the ivy cache, the coursier cache, binary dirs,
product dirs, et cetera. This approach was taken because it's impossible to
produce an elegant implementation that satisfies all build tools' requirements.
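A minimal sketch of the prefix-based approach (not Zinc's actual implementation; the method names and root paths below are hypothetical): paths under a known root are stored relative to it at write time, and resolved against the local root at read time.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative prefix-based relativization. It only works when every path
// in the analysis file starts with the known root, which is exactly the
// limitation described above.
public class Relativize {
    // Strip the root prefix so the stored path is machine-independent.
    static String makeRelative(Path root, Path absolute) {
        if (!absolute.startsWith(root))
            throw new IllegalArgumentException("Path outside root: " + absolute);
        return root.relativize(absolute).toString();
    }

    // Reconstruct an absolute path on another machine with a different root.
    static Path makeAbsolute(Path root, String relative) {
        return root.resolve(relative);
    }

    public static void main(String[] args) {
        Path buildRoot = Paths.get("/home/alice/project");           // hypothetical
        Path src = Paths.get("/home/alice/project/src/A.scala");
        String stored = makeRelative(buildRoot, src);
        Path otherRoot = Paths.get("/cache/project");                // hypothetical
        System.out.println(stored + " -> " + makeAbsolute(otherRoot, stored));
    }
}
```

Real build tools would need one such root per path family (ivy cache, coursier cache, product dirs, ...) rather than a single prefix.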

I've shipped a release today that @stuhood is using to integrate with these
changes and with the ones I made to the loggers interface. It is accessible
here.

The implementation has been done iteratively so that every commit compiles,
making the final switch in the last commits.

I recommend that the people with the most experience in this part of the
codebase (@eed3si9n and @Duhemm) review this PR. Also a ping for @stuhood.
@dwijnand, please feel free to have a look at it too, if you'd like.

The way I recommend reviewing this PR is to look at the resulting diff.
It may look huge, but it's actually conceptually really simple:

  • Changes that affect zincPersist basically add:
    • Converters from the Zinc API => protobuf and vice versa.
    • Mappers implementations.
    • FileAnalysisStore implementation.
  • Changes that don't affect the zincPersist module adjust the use sites of
    the newly introduced APIs. AnalysisStore and AnalysisFile are defined in
    the Java sources of the zinc module.
  • Remove all the tools used in the previous implementation.
  • Remove sbinary dependency and add protobuf dependency.

Therefore, I suggest you pay attention to the following parts:

  • Public binary API.
  • Protobuf converters.

Performance-wise, note that the implementation can be made much faster in the
future by streaming instead of first converting to the protobuf IR (or the
other way around) and then reading/writing. However, in my opinion this is not
worth it for now, for two reasons:

  1. We don't want complexity in this part of the codebase. The dumber, the better.
  2. The time spent reading/writing analysis files accounts for 1-3% of the
    whole compile time. This may, of course, be significant for some big
    projects, but we should not introduce complexity until that's proven with
    data. No one has complained about the text-based format, and since the
    binary format is already faster than it, I don't consider streaming worth
    it.

When it comes to readability, the new format can be inspected via ScalaPB's
CLI tool and also javapb. From within the code, calling toString on any class
of the Protobuf API prints the JSON rendering of the protos. I will add
instructions on how to do this to the CONTRIBUTING guide in a bit.

I'm adding two commits very shortly, but they won't affect the contents of this
PR. The pull request is up for review. Let me know if there's something I haven't covered yet.

@eed3si9n

Member

commented Jul 10, 2017

I would add @xuwei-k as a reviewer as well since he's an expert in protobuf.

@jvican jvican force-pushed the scalacenter:protobuf branch from 6f58ea6 to f825fc7 Jul 10, 2017

@Duhemm
Contributor

left a comment

Amazing work!

I have really minor comments, mostly naming and docs...

More importantly, it looks like ReadMapper and WriteMapper have the same interface. Why don't we have just one AnalysisMapper?

@@ -37,6 +37,25 @@ public Modifiers(boolean isAbstract, boolean isOverride, boolean isFinal, boolea
);
}

/**
* Allow to set the modifiers from a flags byte where:

@Duhemm

Duhemm Jul 10, 2017

Contributor

Formatting is off

@jvican

jvican Jul 10, 2017

Author Member

TOFIX.

@jvican

jvican Jul 11, 2017

Author Member

Apparently this is not off... It's how GitHub shows it.

* 5. The first bit tells if has an implicit modifier.
* 6. The first bit tells if has an lazy modifier.
* 7. The first bit tells if has an macro modifier.
* 8. The first bit tells if has an super accessor modifier.

@Duhemm

Duhemm Jul 10, 2017

Contributor

first / second / etc.

@jvican

jvican Jul 10, 2017

Author Member

True. TOFIX.

package xsbti.compile;

/**
* Defines an analysis file that contains information about every incremental compile.

@Duhemm

Duhemm Jul 10, 2017

Contributor

about every incremental compile

Is that not "about the last incremental compilation"?

@jvican

jvican Jul 10, 2017

Author Member

Actually, it's about one incremental compile. The original docs are less than precise. TOFIX.

*/
public interface AnalysisContents {
/**
* @return An instance of {@link CompileAnalysis}.

@Duhemm

Duhemm Jul 10, 2017

Contributor

How about

@return The `CompileAnalysis` held by this analysis file

?

@jvican

jvican Jul 10, 2017

Author Member

TOFIX.

CompileAnalysis getAnalysis();

/**
* @return An instance of {@link MiniSetup}.

@Duhemm

Duhemm Jul 10, 2017

Contributor

How about

@return The configuration of the compiler, a `MiniSetup`, that produced this analysis

?

@jvican

jvican Jul 10, 2017

Author Member

TOFIX.

optResult match {
case Some((analysis, setup)) => Optional.of(ConcreteAnalysisContents(analysis, setup))
case None => Empty
}

@Duhemm

Duhemm Jul 10, 2017

Contributor

I don't understand why we need this. It uses a CacheProvider to find which file to write to and read from based on the value of projectLocation, and it takes care of rebasing the Analysis. (Is that correct?) How is that different from FileBasedStore + mappers?

@jvican

jvican Jul 10, 2017

Author Member

These are just some internal data structures that @gkossakowski added but that don't really belong to this project (though they are good for testing). What this does is enable more "custom" behaviour, as a build tool would implement it; have a look at ProjectRebase.

These utils have been kept private on purpose, and they have not been removed in this PR so as to keep the tests that use the mappers under the hood.

The way to go is the public API, that is, FileAnalysisStore + mappers.

* instances of other classes.
*
* Even though this proxy is public, do not depend on it: it has no binary
* compatibility guarantee and can be broken in any minor release.

@Duhemm

Duhemm Jul 10, 2017

Contributor

Can we move that to the internal namespace? Also, the path doesn't match the package name currently.

@jvican

jvican Jul 10, 2017

Author Member

No, it cannot. The thing is that the constructor that accepts a byte is protected, so only the things in the same namespace can use it. That's why it's defined in that namespace.

case schema.Stamps.StampType.Type.Empty => EmptyStamp
case schema.Stamps.StampType.Type.Hash(h) => new Hash(h.hash)
case schema.Stamps.StampType.Type.LastModified(lm) => new LastModified(lm.millis)
// ^ TODO: Double check that we recompute millis when reading this in certain conditions

@Duhemm

Duhemm Jul 10, 2017

Contributor

Can you elaborate?

@jvican

jvican Jul 10, 2017

Author Member

This can be removed. Good catch.

import xsbti.api._

final class ProtobufReaders(mapper: ReadMapper) {
def fromPathString(path: String): File = {

@Duhemm

Duhemm Jul 10, 2017

Contributor

The name doesn't really match what the function does

@jvican

jvican Jul 10, 2017

Author Member

Why? It converts a path (represented by a string) to a file. The name only describes the from, not the to. It's the convention all over the place.

@jvican

jvican Jul 11, 2017

Author Member

@Duhemm ping.

@Duhemm

Duhemm Jul 11, 2017

Contributor

Oh I read fromPathToString, sorry. It's perfect the way it is then 👍

@@ -86,6 +111,7 @@ abstract class CommonCachedCompilation(name: String)

val prefix = tempDir.toPath.toString

// TODO(jvican): Add files that are missing...

@Duhemm

Duhemm Jul 10, 2017

Contributor

Does this need to be addressed?

@jvican

jvican Jul 10, 2017

Author Member

This is just a comment reminding me to check not only the stamps but also the files. Will fix in the next commit. Good catch, I forgot about this.

@jvican

Member Author

commented Jul 10, 2017

Thanks for the prompt review, you're awesome @Duhemm.

More importantly, it looks like ReadMapper and WriteMapper have the same interface. Why don't we have just one AnalysisMapper?

We cannot have one AnalysisMapper because we want to distinguish what's meant to be used for reading from what's meant to be used for writing. Why is this distinction useful? Because mappers for reading and writing do different things, and we don't want our users to pass in invalid mappers.

@eed3si9n
Member

left a comment

Thanks for the contribution.

Although I am excited about the outlook for performance improvements, I am genuinely concerned about the size and scale of this PR.

  1. Could we split the AnalysisStore-related change into another PR? That can be contained on its own, correct?
  2. What do you think about providing both mechanisms in Zinc 1 so downstream sbt users can fall back to the old and trusty implementation, while early adopters can enjoy the speedup?
build.sbt Outdated
lazy val zincPersist = (project in internalPath / "zinc-persist")
.dependsOn(zincCore, zincCore % "test->test")
.configure(addBaseSettingsAndTestDeps)
.settings(
name := "zinc Persist",
libraryDependencies += sbinary
compileOrder := sbt.CompileOrder.Mixed,

@eed3si9n

eed3si9n Jul 11, 2017

Member

Why was this necessary?

@jvican

jvican Jul 11, 2017

Author Member

Because Java sources depend on Scala sources, and vice versa. It's pretty common in Zinc modules.

@eed3si9n

eed3si9n Jul 11, 2017

Member

We usually try to isolate the Java interfaces into subprojects like xsbti. If the API layer is depending on Scala, that's a smell.

@jvican

jvican Jul 11, 2017

Author Member

Java interfaces depend on Scala code to provide the default implementations of certain entities. This is used extensively in all the Zinc modules and it's a legitimate use case. It's the only way to provide a Java-friendly API.

build.sbt Outdated
lazy val zincPersist = (project in internalPath / "zinc-persist")
.dependsOn(zincCore, zincCore % "test->test")
.configure(addBaseSettingsAndTestDeps)
.settings(
name := "zinc Persist",
libraryDependencies += sbinary
compileOrder := sbt.CompileOrder.Mixed,
PB.targets in Compile := List(scalapb.gen() -> (sourceManaged in Compile).value)

@eed3si9n

eed3si9n Jul 11, 2017

Member

Where is the protobuf binding code? Could you put it in src like I do for Contraband?

@jvican

jvican Jul 11, 2017

Author Member

This is the way to configure it.

@jvican

jvican Jul 11, 2017

Author Member

The protobuf binding code is in sbt-protoc.

* @param sourceFile The source file to be compiled.
* @return A valid, modified source file.
*/
public File mapSourceFile(File sourceFile);

@eed3si9n

eed3si9n Jul 11, 2017

Member

Would it make sense to switch to using NIO Path wherever Files are being used?

@jvican

jvican Jul 11, 2017

Author Member

I didn't want to make this change in this PR.

* The store is a backend-independent interface that allows implementors to decide how
* the analysis stores are read and written before or after every incremental compile.
*/
public interface AnalysisStore {

@eed3si9n

eed3si9n Jul 11, 2017

Member

As far as I can tell, the impact on sbt/sbt is limited to the relatively small changes around the storage.
Do you think you can split those up into another PR along with the ripple PRs?

@jvican

jvican Jul 11, 2017

Author Member

I can split it. The reason this is in the same PR is that the protobuf changes touch all these utilities. Breaking it out into another PR is going to force me to depend on this one, rebase until it's done, and then create an extra PR in sbt/sbt to synchronize with this. Is that worth it?

@jvican

jvican Jul 11, 2017

Author Member

Note that this change is just one commit.

@jvican

jvican Jul 11, 2017

Author Member

I have just removed this commit from the PR.

@jvican

Member Author

commented Jul 11, 2017

What do you think about providing both mechanisms in Zinc 1 so downstream sbt users can fall back to the old and trusty implementation, while early adopters can enjoy the speedup?

I think this is a bad idea that goes against the whole purpose of this PR: the creation of a backwards and forwards compatible binary format that Zinc 1.0 can evolve over time. I see no benefit in doing this. We want everyone to use the binary format so that we don't depend on the constraints of the text-based format when we evolve the API.

@jvican

Member Author

commented Jul 11, 2017

Also, on another note:

Although I am excited about the outlook for performance improvements

This PR is not only about performance improvements 😄. It's about allowing organizations to ship analysis files with the guarantee that any Zinc 1.x version will be able to read/write them (backwards and forwards).

@eed3si9n

Member

commented Jul 11, 2017

This PR is not about performance improvements 😄. It's about allowing organizations to ship analysis files with the guarantee that any Zinc 1.x version will be able to read/write (backwards and forwards).

If so we can do that using JSON.

@jvican

Member Author

commented Jul 11, 2017

If so we can do that using JSON.

Not without a performance penalty. This PR improves both performance and binary compatibility.

Also, JSON does not allow forwards binary compatibility, and it has problems because it serialises the names of the members. Protobuf does not. Protobuf supports both forwards and backwards binary compatibility via field indices.

@eed3si9n

Member

commented Jul 11, 2017

Also, JSON does not allow forwards binary compatibility, and has problems since it serialises the names of the members. Protobuf does not. Protobuf supports both forwards and backwards binary compatibility.

JSON uses key names, and you can make any fields optional.
Protobuf uses key numbers, and it also allows you to define optional fields. There's no magical difference between the two.

@jvican

Member Author

commented Jul 11, 2017

JSON uses key names, and you can make any fields optional. Protobuf uses key numbers, and it also allows you to define optional fields. There's no magical difference between the two.

There is one difference: Protobuf allows you to modify the names of the members, JSON does not. This is great for improving APIs, allowing you to rename reserved names without polluting the API.

That, plus the performance benefits that Protobuf brings, makes it a good improvement over sjson, IMO. This is why this PR targets Protobuf over the JSON alternative.
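To make the renaming point concrete, here is a hypothetical .proto fragment (not one of Zinc's actual messages): the wire format records only the field number, so renaming a field is a compatible change, whereas in JSON the key name itself is the identifier.

```proto
syntax = "proto3";

// Version 1 of a hypothetical message.
message SourceInfo {
  string path = 1;
}

// Version 2: the field was renamed. Old and new binaries still interoperate,
// because the wire format only records the field number (1), never "path".
message SourceInfoV2 {
  string source_path = 1;
}
```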

@xuwei-k

Member

commented Jul 11, 2017

There's no magical difference between the two.

There is one difference: Protobuf allows you to modify the names of the members, JSON does not.

I think there are some more important differences.

Protobuf version 3 has only 4 wire types (excluding the deprecated ones), which means it is easier to keep compatibility with protobuf than with JSON.

@jvican jvican force-pushed the scalacenter:protobuf branch 2 times, most recently from 160ba16 to 35ac7ec Jul 11, 2017

@jvican

Member Author

commented Jul 11, 2017

CI is finally passing 🎉. I've published sbt-protoc myself until thesamet/sbt-protoc#27 gets merged and released.

@jvican

Member Author

commented Jul 11, 2017

This PR addresses the following tickets, which are tightly coupled:

  1. #218
  2. #320
  3. #268
@tayama0324


commented Jul 11, 2017

I discussed this with @xuwei-k offline. Let me explain protobuf and JSON compatibility in detail.

There is one difference: Protobuf allows you to modify the names of the members, JSON does not.

Another difference is the compatibility of array types. A protobuf field can be replaced with a repeated field of the same type (and vice versa) without breaking anything, while in JSON it can not; that is, [ "foo" ] and "foo" are incompatible.

The magic here is the data structure of the protobuf binary format. The fields of a message are represented as a sequence of (tag number, value) pairs, and deserializers are required to keep only the last value read for a non-repeated field. That is why protobuf fields can be made repeated in a backward- and forward-compatible manner.

Some more differences you may be interested in are as follows:

  • In protobuf, ints and bools are compatible; in JSON they are not.
  • In protobuf, bytes and strings are compatible; in JSON they are not.
  • In JSON, integers and floating point numbers are compatible; in protobuf they are not.

You may think the compatibility rules of the protobuf binary format are tricky. However, they turn out to be very simple and flexible once you understand the underlying binary format and the (de)serialization mechanism. IMO, protobuf binary is the preferred choice over JSON if you need compatibility.
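The (tag number, value) encoding described above can be seen concretely: each field on the wire is preceded by a varint key equal to (field_number << 3) | wire_type. A tiny illustrative decoder (a sketch for single-byte keys only, not a full varint parser):

```java
// Minimal sketch of how a protobuf field key is decoded. Each field is
// preceded by a varint key combining the field number and the wire type:
//   key = (field_number << 3) | wire_type
public class ProtoKey {
    static int fieldNumber(int key) { return key >>> 3; }
    static int wireType(int key)    { return key & 0x7; }

    public static void main(String[] args) {
        // 0x0A = 0b0000_1010: field number 1, wire type 2 (length-delimited),
        // the typical first byte of a message whose field 1 is a string.
        int key = 0x0A;
        System.out.println(fieldNumber(key) + " " + wireType(key));
    }
}
```

Because the key never carries the field's name, a reader that sees an unknown field number can simply skip the value, which is what makes forward compatibility work.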

@jvican jvican force-pushed the scalacenter:protobuf branch 3 times, most recently from d7e357b to 4defcd2 Jul 11, 2017

Move mappers to `xsbti.compile.analysis`
The right place for mappers is in the Java universe, not `sbt.inc`. The
whole Zinc API is migrating away from placing public APIs in `sbt.inc`.

@jvican jvican force-pushed the scalacenter:protobuf branch from 6efd16c to 021c366 Jul 13, 2017

@Duhemm
Contributor

left a comment

Really minor comments, and then LGTM! Awesome work 🦄

}

/** Defines a marker that tells the utils that the relativized path is empty. */
private final val RELATIVE_MARKER = "\u2603\u2603\u2603"

@Duhemm

Duhemm Jul 14, 2017

Contributor

I don't think that's super important, but maybe we should use something inside Unicode's PUA here?

@jvican

jvican Jul 14, 2017

Author Member

This is the unicode symbol sbt uses internally for synthesized macro names. It's safe to use because the text format uses UTF-8 (we force it when we write and read) and protobuf also uses Unicode (https://developers.google.com/protocol-buffers/docs/proto, look for "unicode" and "UTF").

Given that this symbol is rarely used, I think it's perfect for this scenario. Some SO questions point to it as a good way to disambiguate names or to mark string boundaries.

/**
* Makes a file relative to a path.
*
* This function writes all file paths to a relative format that has a [[RELATIVE_MARKER]]

@Duhemm

Duhemm Jul 14, 2017

Contributor

It converts rather than write, no?

@jvican

jvican Jul 14, 2017

Author Member

Thanks!

@jvican

jvican Jul 14, 2017

Author Member

Fixed.

jvican added some commits Jul 13, 2017

Remove old mappers and rework relative path utils
This commit removes the old mappers and makes the text format use the
new ones, therefore providing a unique interface to modify the behaviour
in all the formats.

To make it work for both formats, the current relative read and write
path utils (`makeRelative` and `reconstructRelative`) have proven to be
insufficient.

There are two reasons for the rework of the utils:

1. When the resulting, relative path is empty, it causes problems in the
   text format.
2. When reading/writing from a different mapper that does not use the
   utility, it's better to fail fast at runtime instead of assuming that
   some paths are correct (with `isAbsolute`, for example). The fact
   that we protect ourselves by validating data that could be wrong is
   really important.
Use `mapSourceFile` for sources in source infos
1. Make `ProtobufWriters` and `ProtobufReaders` use the mappers hook.
2. Remove invalid documentation.
3. Remove unnecessary `identity` in mappers.
Add generic read and write mappers
This class has been added as per Stu Hood's request. It's a prototype
that encapsulates a fine-grained abstraction over the different kinds of
root paths that Zinc handles. It allows users to provide concrete
implementations for these roots, and it will be used by Zinc in the
appropriate places.

See #320.

@jvican jvican force-pushed the scalacenter:protobuf branch from 021c366 to 690b7a4 Jul 14, 2017

@Duhemm

Duhemm approved these changes Jul 14, 2017

@@ -58,10 +59,10 @@ trait WithPattern { protected def Pattern: Regex }
import java.lang.{ Long => BoxedLong }

/** Define the hash of the file contents. It's a typical stamp for compilation sources. */
final class Hash(val value: Array[Byte]) extends StampBase {
val hexHash = IOHash.toHex(value)
final class Hash private[inc] (val hexHash: String) extends StampBase {

@dwijnand

dwijnand Jul 14, 2017

Member

private[inc] is no guarantee - as you can see, Stamp.fromString invokes it without doing any validation:

scala> sbt.internal.inc.Stamp.fromString("hash(gobbledygook)")
res0: xsbti.compile.analysis.Stamp = hash(gobbledygook)

@jvican

jvican Jul 14, 2017

Author Member

Yeah, I know. But it's the best we can do. There's no way to define it private and instantiate it in the same file.

@dwijnand

dwijnand Jul 14, 2017

Member

Sure there is: define it private[this] and construct it from Hash's companion object, after validating the input.

@jvican

jvican Jul 14, 2017

Author Member

No, I don't think that's possible. Hash is instantiated from two different objects: Stamp and Stamper, and wrapping them into Hash's companion object is not a valid solution.

@dwijnand

dwijnand Jul 14, 2017

Member

Wrapping them? You define a safe method in Hash's companion object that does the validation and then call that method from Stamp and Stamper.

@dwijnand

dwijnand Jul 14, 2017

Member

Sure you can prevent the creation. You can return an Option[Hash], or an Either[String, Hash], or you can stick to what zinc was doing previously, which is throwing an exception. Either way you prevent the creation of an illegal hash object.
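A hedged sketch of the pattern under discussion (in Java for illustration; the real code is Scala, and the `HexHash` name is hypothetical): a private constructor plus a validating static factory makes illegal instances unrepresentable.

```java
// Hypothetical sketch of the validate-in-factory pattern: the constructor
// is private, and the only way to obtain an instance is through a factory
// that rejects non-hexadecimal input by throwing an exception.
public final class HexHash {
    private final String hexHash;

    private HexHash(String hexHash) { this.hexHash = hexHash; }

    // Factory method usable from any call site (the analogue of calling a
    // safe method on a companion object from Stamp and Stamper).
    public static HexHash fromHex(String s) {
        if (!s.matches("[0-9a-fA-F]+"))
            throw new IllegalArgumentException("Not a hex string: " + s);
        return new HexHash(s);
    }

    public String hex() { return hexHash; }
}
```

The throwing variant shown here is the zero-overhead-on-success option debated above; returning an Option/Either-style result would move the cost to every call site instead.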

@jvican

jvican Jul 14, 2017

Author Member

Both of those alternatives have a runtime performance penalty (returning Option or Either requires validation), which goes against the goal of the commit that introduced this change.

IMO, the best way to solve this issue is to put all those stamp utils in their own package, and then use private[stamps].

@eed3si9n

eed3si9n Jul 14, 2017

Member

fromString is already doing regex pattern matching. If you want to tighten it, you should change final val Pattern = """hash\((\w+)\)""".r in object Hash to accept hex only.

@dwijnand

dwijnand Jul 14, 2017

Member

Good idea, validation is being done there. I'll fix it with that.

@jvican

jvican Jul 14, 2017

Author Member

Yeah, good idea. I'll replace it with """hash\(([0-9a-fA-F]+)\)""".r.
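The effect of the tightened pattern can be sketched as follows (in Java for illustration; the real pattern lives in Scala's object Hash, and `parse` is a hypothetical helper): only hexadecimal content inside hash(...) is accepted, so the "gobbledygook" example above no longer parses.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the hex-only hash pattern discussed above.
public class HashPattern {
    static final Pattern HEX_HASH = Pattern.compile("hash\\(([0-9a-fA-F]+)\\)");

    // Returns the captured hex digits, or null when the input is invalid.
    static String parse(String s) {
        Matcher m = HEX_HASH.matcher(s);
        return m.matches() ? m.group(1) : null;
    }
}
```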

@typesafe-tools


commented Jul 14, 2017

dbuild has checked the following projects against Scala 2.12:

Project Reference Commit
sbt 1.0.x sbt/sbt@716d664
zinc pull/351/head 690b7a4
io 1.0 sbt/io@f55849d
librarymanagement pull/122/head sbt/librarymanagement@e3891c8
util 1.0 sbt/util@fe77438
website 1.0.x sbt/website@7e644bc

The result is: FAILED

@jvican

Member Author

commented Jul 14, 2017

@eed3si9n This is ready for review. The validator fails with the expected errors and this PR doesn't require any sbt/sbt PR because it doesn't affect the public API.

@eed3si9n
Member

left a comment

LGTM
Thanks for keeping the text format around

@jvican

Member Author

commented Jul 14, 2017

:shipit:

@jvican

Member Author

commented Jul 14, 2017

Note for future archaeologists: the plan is to remove the text-based analysis format in Zinc 1.1.

@jvican

Member Author

commented Jul 14, 2017

@eed3si9n Are we waiting for something to merge this? I need to rebase in my other PR.

@eed3si9n eed3si9n merged commit 3026ea8 into sbt:1.0 Jul 14, 2017

1 check passed

continuous-integration/drone/pr the build was successful

jvican added a commit to scalacenter/zinc that referenced this pull request Jul 14, 2017

@stuhood stuhood referenced this pull request Jul 14, 2017

Closed

Design a Java-friendly, safe Zinc API for build tools #300

2 of 6 tasks complete

jvican added a commit to scalacenter/sbt that referenced this pull request Jul 15, 2017

jvican added a commit to scalacenter/sbt that referenced this pull request Jul 15, 2017

@jvican jvican referenced this pull request Jan 12, 2018

Open

Mention protobuf #1
