
How do you use `set-pipeline`? #1200

Closed
vito opened this issue May 23, 2017 · 41 comments


@vito
Member

commented May 23, 2017

Currently set-pipeline is manually invoked by a user to submit a pipeline config from their local machine. This is great for rapid iteration, but also goes against a Concourse principle of having versioned, persistent, reproducible config.

Are you still using set-pipeline manually? Have you written scripts to automate it? Are you continuously configuring it using something like the Concourse Pipelines resource? How are you handling credentials?
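
For reference, the manual flow in question looks something like this (the target name and file paths are illustrative):

```sh
# push a pipeline config plus its parameters from a local checkout
fly -t ci set-pipeline \
  -p my-pipeline \
  -c ci/pipeline.yml \
  -l credentials.yml
```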

@marco-m

Contributor

commented May 23, 2017

I have been investigating Concourse for a week, with the idea of replacing a Jenkins deployment. The question you are asking is one of the main questions I had. I plan to use scripts and SaltStack, precisely for reproducibility. I read about the pipeline resource and want to investigate it more. Another question I had is why we need to specify the pipeline name on the command line rather than in the YAML file; the notion of naming things in the YAML files is already there for every other concept. The --pipeline option could become required only if the pipeline name is not present in the YAML file...

@vito

Member Author

commented May 23, 2017

@marco-m The name is not in the .yml so that you can use a single pipeline template to configure differently-parameterized and thus differently-named pipelines.
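
A minimal illustration of that point (the pipeline names and the var are made up): one template, two differently named, differently parameterized pipelines.

```sh
fly -t ci set-pipeline -p product-staging    -c pipeline.yml -v environment=staging
fly -t ci set-pipeline -p product-production -c pipeline.yml -v environment=production
```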

@Chumper


commented May 24, 2017

What about integrating pipeline seeding from a repository directly into Concourse?

I would love to be able to specify one or more repositories that Concourse would then watch (or receive hooks from) and seed pipelines from accordingly.
It doesn't need to have fancy diff mechanics; just the seeding would work fine.

That would make it easier to start with the correct "version everything" mindset.

@marco-m

Contributor

commented May 27, 2017

@vito Thanks for the explanation regarding the lack of pipeline name in the configuration file. Makes sense.

@freelock

Contributor

commented Jun 1, 2017

We use a bot that calls set-pipeline. Not sure how typical our setup is, but it's working well for us managing 60+ pipelines currently.

We have two pipeline YAML files that differ based on the platform being managed (we manage a lot of Drupal sites and a few WordPress sites -- we have a different pipeline per platform). All of the scripts and supporting tasks are in the same repo as our pipelines. We keep a separate credentials.yml file and a wrapper script that reads the secrets that need to be passed in as separate parameters when a pipeline is set.
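
The general shape of such a wrapper, as a sketch (the pipeline naming, file layout, and var names here are illustrative, not the actual script):

```sh
#!/usr/bin/env bash
# usage: ./set-pipeline.sh <site> <platform>   (platform: drupal | wordpress)
set -euo pipefail
site="$1" platform="$2"

# secrets live in a credentials.yml kept outside the pipelines repo;
# per-site values are layered on top as individual vars
fly -t ci set-pipeline -n \
  -p "${site}" \
  -c "pipelines/${platform}.yml" \
  -l credentials.yml \
  -v "site_alias=${site}"
```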

We have one Matrix chatroom filled with bots -- our control bot, Concourse notifications, gitolite notifications, and notifications from a couple of different PM tools we use.

We then have one chatroom per project, and our bot stores some project state there (the project alias, the platform, etc). Our developers and PMs, and sometimes customers, are in the project chatroom. The bot relays and prettifies results from Concourse (from the bot room), and users can trigger the bot to do stuff.

When our bot is called to run a Concourse job (usually either on user request or as a result of a gitolite post-commit hook message), it:

1 - Checks for a recent Concourse auth token, and if it doesn't have one, calls fly login.
2 - Calls the wrapper for set-pipeline if the pipeline was disabled.
3 - Enables the pipeline.
4 - Calls fly trigger-job on the corresponding job.

Due to performance issues we were having with more than 3-4 active pipelines in previous versions, the bot queues up jobs and makes sure that only one is running at a time. (We did make this concurrency a value we can easily change, but haven't bumped it up yet.) So it inserts the triggered job into its queue, along with an expected notification that signals that the pipeline for the triggered job is complete.

5 - As Concourse jobs send notifications (via the matrix-notification-resource), the bot drops a message in the room and/or updates the state (metadata about job success, setting flags that allow/disallow actions like triggering deployments only if tests have passed).
6 - When the pipeline has reached the last job in a given sequence, or reports a failure, the bot pauses the pipeline and starts the next job in the queue.

So we use a bot to provide some level of control over authorizing who may trigger which jobs, and managing most of the secrets.
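
In fly terms, steps 1-4 above correspond roughly to the following (the target, URL, and names are illustrative):

```sh
# 1. reuse a recent auth token, or log in again if it has expired
fly -t ci status || fly -t ci login -c https://concourse.example.com

# 2. (re)set the pipeline via the wrapper if it was disabled
./set-pipeline.sh "${site}" "${platform}"

# 3. enable the pipeline, then 4. trigger the requested job
fly -t ci unpause-pipeline -p "${site}"
fly -t ci trigger-job -j "${site}/run-tests"
```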

Our pipelines use quite a few resources, including:

  • pipelines Git repo: contains all our Concourse configuration, pipelines, and supporting scripts for both variations
  • aliases repo: provides connection addresses for each site and environment
  • project repos: the actual code
  • pool repos: locks containing addresses for external test environments, such as a Selenium Grid
  • Docker images for deployment and various tests

This is all working really well for us right now!

I would say the areas where we'd like to see improvements are largely around security. I'm definitely in favor of some sort of improved secrets management -- right now we just protect the bot's execution environment and greatly limit the attack surface, which seems OK, but I'd love some auditing/pen-testing of the ATC and workers to know whether any stale task containers might be exploited to reveal the keys to the kingdom.

The other big concern is the team login structure -- this doesn't work for us; we're just using the "main" team. We really want read-only and read-execute-only logins that do not have the ability to set a pipeline. As it is, we do not want any of our pipelines to be publicly viewable -- we want a per-pipeline login we can give to a customer so they can view each job and task, including the details, but not be able to do anything more than that.

And then I would like a login for each pipeline that would allow triggering jobs and pausing particular resource versions and tasks, while being unable to pause/unpause the entire pipeline or set the pipeline in any way -- as it stands, this feels like a security risk, because we basically have a single login that lots of people share, which might allow an attacker to upload a modified pipeline that somehow gains access to production systems. (Right now I'm not sure how they would do this without the secrets kept by the bot, but it feels like there's a way to exploit it in there -- I would rather put a really strong, well-guarded password on the main team and have some weaker ones that our devs can use to mostly manage pipeline jobs if something breaks that our bot can't handle...)

Oh wait, of course, there's a trivial way to exploit this -- fly get-pipeline shows all your secrets to anyone who can run it. That's the big vulnerability that might keep me up at night -- an easy way to reveal an SSH private key... This is the stuff that needs to get nailed down!

@marco-m

Contributor

commented Jun 1, 2017

"We really want read-only and read-execute-only logins that do not have the ability to set a pipeline"

+1

@DanielJonesEB


commented Jul 6, 2017

@vito We hack on pipelines a lot, inlining Bash scripts in tasks in the main pipeline YAML, and using set-pipeline to find out if they work. Most of our work is plumbing together IaaSes and CF.

We've been toying with a tool to automatically inline/extract task scripts, so it's trivial to go from hacking to not completely awful.

We've got a pair currently looking at a pattern for promoting pipelines. We want to parameterise all gets to have version specifiers, and on 'dev' pipelines these will be latest. The dev pipeline runs, and when the whole thing has passed, we write the versions of all the resources to a vars file and put that.

Downstream pipelines (think 'production') pick up this vars file of pinned resource versions and then call set-pipeline on themselves, effectively locking them to a known-good set of resources.
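
A sketch of the shape being described, using current ((var)) syntax (the resource, job, and var names are illustrative): the dev pipeline's vars file leaves the version at latest, while the promoted pipeline's vars file pins it.

```yaml
resources:
  - name: app-src
    type: git
    source: { uri: https://example.com/app.git }

jobs:
  - name: deploy
    plan:
      # dev vars file:       app_src_version: latest
      # promoted vars file:  app_src_version: { ref: abc1234 }
      - get: app-src
        version: ((app_src_version))
      - task: deploy
        file: app-src/ci/deploy.yml
```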

@DanielJonesEB


commented Jul 14, 2017

Should all pipelines forcibly start with a resource that holds the pipeline itself and sets itself? Maybe if there were an equivalent of how execute uploads a resource version from the local system, there could be an extra command that sets a pipeline from a one-off version of the pipeline resource, thus keeping fast iterators happy.
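
For reference, a minimal sketch of that self-setting idea as it can be written in later Concourse versions with the set_pipeline step (repo and file names are illustrative); at the time of this thread the closest equivalent was the concourse-pipeline resource.

```yaml
resources:
  - name: ci
    type: git
    source: { uri: https://example.com/ci.git, branch: master }

jobs:
  - name: reconfigure
    plan:
      - get: ci
        trigger: true
      # "self" re-sets the pipeline this job is running in
      - set_pipeline: self
        file: ci/pipeline.yml
```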

@cjcjameson

Contributor

commented Aug 16, 2017

@jmcatamney @khuddlefish @chrishajas @Chibin @pivotal-mike @divyabhargov @larham @professor and many others could comment on our workflow, but I'll start (feel free to edit my post):

We script around fly set-pipeline, especially for "dev" pipelines -- ones "forked" from the master pipeline but running a particular branch. We have reasons for and benefits from this forking of dev pipelines, but you're asking about the main reasons for and benefits of the scripting around fly set-pipeline:

  • Some neat scripting around fly login
  • Consistent naming of dev pipelines, so we know which team created them
  • A way to use the majority of the same credentials that the master pipeline does, but update Git branch names and S3 buckets. I've told people that you can just pass multiple -l flags and the last one takes precedence, but a custom configuration file format took hold for overriding these properties. The script outputs them as -v flags, I think (see the sketch after this list).
  • Defaulting to individual jobs paused but the pipeline overall unpaused. This is honestly the killer feature. When you make a dev pipeline, you only care about a few jobs out of the 40, and for this case the default of pipeline paused + jobs unpaused is the exact opposite of what we want.
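
A sketch of the layering being described (the target, file names, and vars are illustrative): for vars defined in more than one -l file, the last -l wins, and the script's computed overrides are passed as -v flags.

```sh
fly -t ci set-pipeline -n \
  -p "dev-${USER}-my-branch" \
  -c ci/pipeline.yml \
  -l ci/credentials.yml \
  -l ci/dev-overrides.yml \
  -v "git_branch=my-branch" \
  -v "s3_bucket=dev-${USER}-artifacts"
```
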
@freelock

Contributor

commented Aug 16, 2017

Ooh, love the idea of an option to set all jobs in a pipeline to paused on creation...

We do use set-pipeline every time we start a new job, so I would not want this to change the paused state of jobs that are unchanged -- but if we could set a default in the pipeline itself that is used when it's first loaded or when new jobs are added to the pipeline, that would be really slick...

@pn-santos


commented Aug 16, 2017

We use a script that has some pre-set options and custom logic (like using the name of the Git checkout as the pipeline name, a default ci/pipeline.yml location in the repo, a pre-set credentials repo location from which it loads the vars, etc.) so anyone can update the pipeline and then run a single command to validate-pipeline + set-pipeline + up + ep (we like to make everything public since we run everything inside our VPN).
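
A sketch of that single command's shape (the target, paths, and credentials location are illustrative):

```sh
#!/usr/bin/env bash
set -euo pipefail
# pipeline name defaults to the name of the git checkout
pipeline="$(basename "$(git rev-parse --show-toplevel)")"

fly -t ci validate-pipeline -c ci/pipeline.yml
fly -t ci set-pipeline -n -p "${pipeline}" -c ci/pipeline.yml -l ../credentials/vars.yml
fly -t ci unpause-pipeline -p "${pipeline}"   # up
fly -t ci expose-pipeline -p "${pipeline}"    # ep
```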

I currently have a TODO to explore the pipeline resource and see if we can get self-updating pipelines (but we would still use the script to do the first push).

@cjcjameson

Contributor

commented Aug 16, 2017

@freelock indeed, we also have the issue where we run our script a second or third time to update something, and it resets the paused-ness back to whatever its initial opinion was. Seems like a hard nut to crack for the core Concourse product. But your suggestion of the pipeline.yml declaring "default paused-ness" is, I think, possibly good...!

@jmcatamney


commented Aug 16, 2017

Some more details on the workflow @cjcjameson mentioned:

Regarding credential handling: since fly login requires either clicking on a link or entering credentials, but we want our script to set up a pipeline with no manual intervention even if you haven't logged in recently, we script around it with the Links browser. Developers log into Git once with Links to save those credentials; then our pipeline script tests whether you're currently logged in, and if you aren't, it runs fly login, grabs the authentication link, accesses it with the stored Git credentials, and continues with setting the pipeline.

It mostly works fine, but it's hackish and breaks whenever the login process changes. I'd personally appreciate an option to tell fly "This is my dev box, leave me logged in forever unless something about my credentials changes" to avoid the issue.


Regarding the custom configuration file, we do use the "last -l" method; the script reads in a configuration file, transforms some values (e.g. replacing placeholders with GitHub user names and bucket names), writes it out in .yml format, and passes the resulting file with -l after the other .yml files so it takes precedence. We do it that way because if someone wants to do something unusual (e.g. use someone else's S3 bucket or pull test code from a different repo), it's easier to edit their config file than to edit the part of the script that constructs the set-pipeline command.


Regarding pausing jobs, the script currently re-pauses everything except a default set of jobs because we don't have a good way to distinguish between "This is the first time I'm flying the pipeline, pause everything except a minimal set of jobs" and "I've run this script already for this pipeline, leave the paused/unpaused job states alone". That can probably be solved with the default paused-ness suggestion, so +1 to that.


An as-yet-unmentioned issue we have with setting pipelines is dealing with timed triggers. We have some jobs that take a fairly long time to run or rely on limited shared resources, so we use a time resource to run them only once each weeknight, once a week, or the like. This works fine most of the time, but when you're iterating on code that needs to run those tests, the jobs won't trigger when you push changes because the time constraint isn't satisfied -- unless you manually edit the pipeline file before running set-pipeline to change the resource to something that can currently be satisfied.

It would be nice to have a flag you could set when setting a pipeline to tell it to ignore any time resources, so you could set up normal time constraints for your main pipeline, set a pipeline for development that ignores those constraints while you're pushing changes, and then reset the pipeline to obey time constraints once you're done.
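
Absent such a flag, one workaround sketch is to parameterize the time resource's interval so a dev pipeline can tighten it without editing the file (the resource and var names are illustrative):

```yaml
resources:
  - name: nightly
    type: time
    source:
      # the main pipeline passes -v trigger_interval=24h;
      # a dev pipeline passes something short, e.g. -v trigger_interval=10m
      interval: ((trigger_interval))
```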

@Chumper


commented Aug 19, 2017

@jmcatamney You can now log in with a token as of version 3.3.4; see my PR here: concourse/fly#185

@elgohr


commented Sep 24, 2017

We use Spiff++ for YAML templating (this way we've got, let's say, "distributed version management") and we automate set-pipeline. Nevertheless, it would be awesome to generate the pipeline from a file located inside the Git repo itself.
It could be some sort of template.

@trobert2


commented Mar 7, 2018

If someone were to use the Concourse Pipelines resource along with some discovery resource for the pipelines, how would one unpause a pipeline created that way?

@gaui


commented Apr 2, 2018

This is a crucial feature for our team, and its absence is a dealbreaker for switching to Concourse from our current CI system, GitLab -- even though in every other respect Concourse is a MUCH better designed CI system.

CI systems should be stateless, meaning they don't contain any state/configuration beforehand (except a Git repository to watch); instead they read everything from a file in the Git repository. This makes it possible to introduce changes to the CI process/pipeline with a single commit, without having to change configuration in the CI system itself. GitLab does this with its .gitlab-ci.yml file in the root of the repository, Jenkins with a Jenkinsfile, Travis with .travis.yml, etc.

If there is one thing I would improve in Concourse, this would be it.

@elgohr


commented Apr 9, 2018

I'm totally with @gaui. Nevertheless, I'm missing something in the .whatever files: it's only one file.
At the moment I'm starting to collect templates (https://github.com/elgohr/concourse-templates) for various features (auto-update-dependencies, auto-linting, ...) which might be added to a product like sidecars. I wouldn't write them all into one pipeline or file.
I was also thinking about something that lists my repositories and creates a pipeline per repository. This repository pipeline would look into a .whatever file for references to templates, which could look like
{ auto-update: { paramX: .... }, lint: { paramY: ... } }.
According to this "feature" file, it would construct one pipeline per repository per feature.
So it would be a pipe of pipes of pipes, and that's what makes me think there could probably be a simpler solution.

@JohannesRudolph

This comment has been minimized.

Copy link
Contributor

commented Aug 21, 2018

We generate our pipelines using ejs, after YAML anchors just didn't cut it anymore for our use case (a monorepo with ~12 micro-services built across feature/develop/master branches). We store the generated pipeline .yml in our repo, but that's mostly for cosmetics. Developers still manually call fly set-pipeline.
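
For illustration only, a minimal sketch of what such an ejs-templated pipeline fragment might look like (the services variable, task paths and job names are invented here, not taken from our actual setup):

jobs:
<% services.forEach(function (svc) { %>
- name: build-<%= svc %>
  plan:
  - get: monorepo
    trigger: true
  - task: build
    file: monorepo/ci/tasks/build-<%= svc %>.yml  # one task file per service
<% }); %>

The rendered YAML is what actually gets passed to fly set-pipeline.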

This workflow is currently not optimal. For example, when adding a new microservice or just a new build step we go through the following hoops:

  • add the new job/step to the develop branch, potentially also adding "dummy" task.yml files to our repo
  • update the pipeline
  • now we have to ensure we merge back develop into all feature branches or else they will fail to build
  • the feature branch that actually introduces use of the new job/step updates the dummy task.yml and we get proper builds for it.

In an ideal world we'd like to have:

  • a resource (of a custom resource type) that we can use to provide a pipeline. I'd call this something like a "pipeline root resource". This is all that we pass to fly set-pipeline
  • concourse gains a concept of "pipeline versions", i.e. every build remembers which pipeline version it came from (so we can go back in history)
  • when there's a new pipeline version, all changed jobs/step definitions get rebuilt
  • I'm not sure if there should be some sort of "sync" going on for resource checking, i.e. how do we deal with the pipeline changing mid-way through a build? Maybe the "pipeline root resource" should flow implicitly through every step in the pipeline, like a passed: [pipeline-root] constraint?
@marco-m

This comment has been minimized.

Copy link
Contributor

commented Aug 21, 2018

@JohannesRudolph I particularly like the

concourse gains a concept of "pipeline versions", i.e. every build remembers which pipeline version it came from (so we can go back in history)

This reminded me that GoCD has this feature, and also shows the diff. This page shows the UI: https://docs.gocd.org/current/faq/stage_old_config.html#see-what-changed-in-the-configuration-between-two-stage-runs.

Would love to see Concourse support this, since it would make it possible to answer the question: "This pipeline failed. Did it fail due to a build error or did it fail due to a pipeline configuration change?"

@marco-m

This comment has been minimized.

Copy link
Contributor

commented Nov 13, 2018

@vito any plans to add to Concourse the capability to load the pipeline configuration file from the repo itself, like travis/circleci/jenkins2 do? In which Epic in the Concourse roadmap could this feature go?

@vito

This comment has been minimized.

Copy link
Member Author

commented Mar 12, 2019

@marco-m No plans at the moment, but I've been thinking about it a lot lately.

Obviously the fanciest UX would be something magical like this:

fly set-pipeline \
  --url https://github.com/concourse/concourse \
  --file ci/pipelines/concourse.yml # this could have a default, e.g. pipeline.yml

First challenge: What is that URL? How does Concourse fetch it? Well, the natural answer is resources, but Concourse doesn't special-case any, so we'd have to be more generic:

fly set-pipeline \
  --resource-type git \
  --resource-source uri:https://github.com/concourse/concourse \
  --file ci/pipelines/concourse.yml

Second challenge: we're planning on removing all core resource types except registry-image (context in #3003), so soon Concourse won't even know about git unless you tell it what it is. Putting that in the CLI would be pretty ugly:

fly set-pipeline \
  --resource-type-name git \
  --resource-type-type registry-image \
  --resource-type-source repository:concourse/git-resource \
  --pipeline-resource-type git \
  --pipeline-resource-source uri:https://github.com/concourse/concourse \
  --file ci/pipelines/concourse.yml

Hideous! Might as well just put it in .yml at that point:

resource_types:
- name: git
  type: registry-image
  source: {repository: concourse/git-resource}

pipeline_resource:
  type: git
  source:
    uri: https://github.com/concourse/concourse

That's not too bad. A bit verbose, but no magic.

But now I wonder:

  • How can I tell when the pipeline has been configured?
  • Where do I look for validation errors?
  • How does this show up in the UI?

At this point, you might as well just set a pipeline that sets pipelines. You're already maintaining a YAML file, so you might as well write one you already know how to write. Then the above questions have pretty clear answers:

  • How can I tell when the pipeline has been configured?
    • Look at the builds for the meta-job.
  • Where do I look for validation errors?
    • Look at the builds for the meta-job.
  • How does this show up in the UI...at all?
    • It's a pipeline, silly!

Here's how it might look:

resource_types:
- name: git
  type: registry-image
  source: {repository: concourse/git-resource}

resources:
- name: concourse
  type: git
  source:
    uri: https://github.com/concourse/concourse

jobs:
- name: set-pipeline
  plan:
  - get: concourse
    trigger: true
  - set_pipeline: concourse
    file: concourse/ci/pipelines/concourse.yml

All we would need for this is a set_pipeline step.

There are a ton of advantages to this:

  • It builds on existing primitives and leverages existing architecture rather than introducing a new config file.
  • You could use whatever templating system you need, by just having a task prior to set_pipeline that generates the YAML.
  • It may lead naturally to #532 - pipelines set from a pipeline could be nested under that pipeline.
  • It provides a clear place to see the meta-status of pipeline management - the 'set-pipeline' job will fail if the pipeline is invalid.
  • It's still just pipelines, so you have all kinds of control to decide how to set pipelines.
    • For example, you could use 'spaces' to set entire pipelines per-space, rather than having one pipeline span all the spaces.
    • You could configure filters to only set pipelines when their files change.
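
For instance, the "filters" idea in that last bullet could lean on the git resource's existing paths filter together with the proposed set_pipeline step (the repo layout and names below are just placeholders for the sketch):

resources:
- name: pipelines
  type: git
  source:
    uri: https://github.com/concourse/concourse
    paths: [ci/pipelines/*.yml]  # only produce new versions when pipeline configs change

jobs:
- name: set-pipelines
  plan:
  - get: pipelines
    trigger: true
  - set_pipeline: concourse  # the proposed step
    file: pipelines/ci/pipelines/concourse.yml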

...to be honest, this is already the pattern with the concourse-pipeline resource. The only difference would be that you wouldn't have to configure that resource, and you don't have to figure out how that resource authenticates with Concourse itself.

How does all that sound? I realize it's a bit verbose, but it seems pretty powerful and leverages all of today's existing primitives, which could lead to super interesting workflows.

If we still want fly set-pipeline --git-repo, maybe we could just make that be client-side syntax sugar for setting a pipeline that sets a config from that URL?

Super interested in hearing feedback on this, since it would be pretty easy to implement (just need a set_pipeline step!)

@vito vito added the high impact label Mar 12, 2019

@marco-m

This comment has been minimized.

Copy link
Contributor

commented Mar 12, 2019

thanks for the detailed reasoning @vito!

Here are my comments, from the point of view of an end-user (as opposed to the point of view of a Concourse expert).

Consider the UX when adding CircleCI/Travis builds to a given repo. The user has to do the following steps:

  1. Create a configuration file in a fixed place. For CircleCI, it is $REPO/.circleci/config.yml.
  2. In the CircleCI UI, add $REPO to the list of repos to be built.

That's it. This is enough for CircleCI to monitor (optimized via webhooks, no polling) all commits to all branches of $REPO and trigger a build on each commit.
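
For reference, that fixed-place file can be tiny. A minimal sketch (the image and build script here are placeholders, not from a real project):

version: 2.1
jobs:
  build:
    docker:
      - image: cimg/base:stable  # placeholder build image
    steps:
      - checkout
      - run: ./ci/build.sh  # placeholder build script
workflows:
  build-on-push:
    jobs:
      - build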

Pros:
A. Protection from forgetting to call fly set-pipeline on a pipeline change ("How come? I committed and pushed my change. Why on earth is Concourse not picking it up?"). Even expert Concourse users like myself keep falling into this gotcha.
B. Protection from race conditions when changing the pipeline configuration file. The race condition means that a pipeline, under manual fly set-pipeline or the pipeline-resource, can be changed in-flight, while a job is building, and sometimes it breaks a build in a way that is puzzling for non-experts.
C. Related to A and B: it is always possible to answer the question "what changed in the pipeline between build N and build M?" The answer is a simple git diff. By contrast, with fly set-pipeline, and also with the Concourse pipeline resource, one cannot answer this question (because the pipeline resource runs asynchronously with respect to changes in the repo containing the pipeline file).
D. Per-branch build out-of-the-box.
E. Nice integration with github build status.
F. Simplicity, which brings with itself satisfaction ("Hey, I have my repo under CI!").

Cons:
a. Lack of flexibility and control, the whole point of Concourse :-)
b. Fewer platforms / Operating System supported.
c. Less pipeline-esque, (although CircleCI "workflows" are close).
d. Works only with github (I think).
e. Not a great story for the Deployment part.
f. Can only have one pipeline per repo (with potentially temporary changes, meant to be merged, in a branch).
g. No support for fly execute.

I am not sure whether I am able to propose something concrete actually. If I were to elaborate, I think I would propose something really basic and inflexible (only one pipeline, in a fixed place / fixed name), and maybe consider whether it is possible to give up Concourse "purity" and, I don't know, make Concourse know about git, so that it is able to "bootstrap".

But, now that I wrote this and re-read your notes, maybe what is most important is not even emulating CircleCI or this ticket at all. What is most important, from my point of view, is:

  1. Solve the problem around the race condition when calling set-pipeline
  2. Solve the problem around forgetting to call fly set-pipeline

I rank the race condition above forgetting to call set-pipeline because I make the assumption that one is using the pipeline-resource.

@DanielJonesEB

This comment has been minimized.

Copy link

commented Mar 13, 2019

Thanks for the update @vito.

I concur with everything you've said about building on primitives, and the UX aspects of it too.

I think the key thing is that, for the approach to work, folks need to be able to lock down permissions to set-pipeline, so that it can be guaranteed that pipelines are only ever set by the pipeline-setter-pipeline, and not by some random developer. I'm not up to date with the RBAC work, so I don't know if this is already being considered.

Using pipeline-setting-pipelines isn't the most intuitive idea for new users. However, I wonder whether this is another example where Concourse becomes the Kubernetes of CI (a platform that you build CI solutions out of) rather than an end-user product. Imagine a Concourse 'distro' that shipped with some opinionated CLI or something that allows users to not-fly register-pipeline https://some/repo.git and then creates a pipeline-setting-pipeline for the user's repo. Maybe this kind of stuff is where third parties can add value beyond Concourse by codifying opinionated workflows.

@marco-m

This comment has been minimized.

Copy link
Contributor

commented Mar 13, 2019

@DanielJonesEB I am personally not interested in having Concourse become the "Kubernetes of CI". There is a reason why OpenShift exists: to tame Kubernetes. I don't want to have to tame Concourse any more than I currently have to.

@DanielJonesEB

This comment has been minimized.

Copy link

commented Mar 13, 2019

@marco-m Thanks for sharing your point of view. At the Concourse London User Group we'd heard of a few use cases where people were building more opinionated, higher-level CI/CD solutions using Concourse as a 'CI platform' of primitives. Perhaps it's fine for people to do so, so that Concourse itself doesn't become pigeon-holed with the opinions of a subset of its users. Having third parties develop and offer higher-level solutions built atop Concourse also helps ensure its success, as then you've got folks with a commercial interest in the underlying OSS that they're basing their products on.

@vito

This comment has been minimized.

Copy link
Member Author

commented Mar 13, 2019

B. Protection from race conditions when changing the pipeline configuration file. The race condition means that a pipeline, under manual fly set-pipeline or pipeline-resource, can be changed in-flight, while a job is building, and sometimes it breaks a build in a puzzling way for people not expert.

How do other products with similar challenges handle this? I'm struggling to make any sense of it in my head. 🤔

On Concourse's side, I can't imagine a scenario where running set-pipeline would affect a build that's already running; its plan is already figured out by that point, so it's not even looking at the pipeline config anymore and should just finish whatever it was doing. I'm not sure what else it would do, to be honest.

I'm in full agreement that setting a pipeline that sets pipelines isn't the best UX, but I think that's something we can make easier by having fly set-pipeline just support syntactic sugar for it:

fly set-pipeline -p concourse --git-repo https://github.com/concourse/concourse

This could just result in setting the meta-pipeline. The challenge then would be surfacing this in the UI in a way that's not confusing, which is indeed a challenge, since they'll have a pipeline on the dashboard, but not the one they necessarily expect. But they'd at least have visibility into it and know what's up when their pipeline config is invalid. Maybe we could make it clear that it's the 'meta' pipeline either by naming it clearly or having some way to distinguish pipelines or annotate them in the UI (which is already an ask - see #1982).

I do see the draw of making Concourse just as easy as other successful products, but I'm concerned that it could result in fragmenting the product itself and end up making everything more confusing. ("My project has outgrown the easy UX and now I need to learn a whole new workflow.") ("Concourse knows about git, but only sometimes, and not always in the same way.") This is why I'm trying to lean as hard as possible on the same core set of principles and ideas - because at least once you learn them it's knowledge you can leverage all the time.

@vito

This comment has been minimized.

Copy link
Member Author

commented Mar 13, 2019

As a side-tangent, I think there may be undertones here around 'what is a pipeline', and I think that may be causing mental friction when discussing other CI solutions, as their 'pipelines' tend to be more like Concourse 'jobs'. The difference being that Concourse pipelines are really just a namespace of jobs and resources.

This is a key difference, because there is no such thing as a "run of a pipeline", for example. Another key difference is that pipelines define all their jobs all at once, and the change takes effect immediately, even for downstream jobs that haven't run yet.

@marco-m This is kind of a shot in the dark, but I wonder if the kinds of things you're proposing would be more appropriate to think about at the job level. I'm not really sure how to word this now, as it's kind of just a suspicion. As a vague example, could jobs be configured in such a way that they learn their build plan YAML at runtime based on their own inputs?

...I'm leaving this kind of vague because I'm not too sure myself and I'm hoping you have an "a-ha!" reaction and can meet me in the middle. 😛

@marco-m

This comment has been minimized.

Copy link
Contributor

commented Mar 13, 2019

@vito

As a side-tangent, I think there may be undertones here around 'what is a pipeline', and I think that may be causing mental friction when discussing other CI solutions, as their 'pipelines' tend to be more like Concourse 'jobs'. The difference being that Concourse pipelines are really just a namespace of jobs and resources.

Yes, I think you nailed it.

("Concourse knows about git, but only sometimes, and not always in the same way."). This is why I'm trying to lean as hard as possible on the same core set of principles and ideas

I agree. This is why I put that suggestion under "I am not sure whether I am able to propose something concrete actually."

On the other hand, the race condition I am talking about is real and, in my recollection, easy to reproduce. I thought it was a known problem, but your comment makes me think it is not the case. I will open a ticket with a repro.

Mmh, re-reading what you wrote:

This is a key difference, because there is no such thing as a "run of a pipeline", for example. Another key difference is that pipelines define all their jobs all at once, and the change takes effect immediately, even for downstream jobs that haven't run yet.

Exactly. This is the race condition, in the sense that downstream jobs can fail. Again, I will repro it.

@DanielJonesEB

This comment has been minimized.

Copy link

commented Mar 13, 2019

On Concourse's side, I can't imagine a scenario where running set-pipeline would affect a build that's already running; its plan is already figured out by that point, so it's not even looking at the pipeline config anymore and should just finish whatever it was doing. I'm not sure what else it would do, to be honest.

This is a key difference, because there is no such thing as a "run of a pipeline", for example.

It wouldn't affect a build that was already running, but it could certainly affect a 'run of a pipeline', and that's something we've had to guard against (and seen customers do) by using a pool resource as a lock, so there's only one 'run' possible at a time, with a set-pipeline as one of the first steps.
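
A rough sketch of that pool-as-lock pattern (the locks repo URI, pool name and job names are assumptions, auth is omitted, and the standard pool resource is assumed):

resources:
- name: run-lock
  type: pool
  source:
    uri: git@github.com:example/ci-locks.git  # assumed locks repo
    branch: master
    pool: pipeline-run

jobs:
- name: start-run
  plan:
  - put: run-lock
    params: {acquire: true}  # blocks until the lock is free, so only one 'run' is in flight
  # ... set-pipeline and the first real steps of the 'run' go here

- name: finish-run
  plan:
  - get: run-lock
    passed: [start-run]
  - put: run-lock
    params: {release: run-lock}  # hand the lock back at the end of the 'run'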

The challenge then would be surfacing this in the UI in a way that's not confusing, which is indeed a challenge, since they'll have a pipeline on the dashboard, but not the one they necessarily expect.

Yep, would be a pretty drastic change. One team has many pipeline-sources (generators? meta-pipelines?), one pipeline has one pipeline-source.

@DanielJonesEB

This comment has been minimized.

Copy link

commented Mar 13, 2019

As an aside, at the Concourse London User Group some Pivots explicitly called out, as a beginner's error, folks mistakenly thinking that serial groups could protect against concurrent/out-of-sequence runs.

@vito

This comment has been minimized.

Copy link
Member Author

commented Mar 13, 2019

Exactly. This is the race condition, in the sense that downstream jobs can fail. Again, I will repro it.

Ah, now that I think of it I have seen one example. It's usually when tasks use file: and expect certain inputs, and then the pipeline removes those get steps. I can't really think of any other examples off the top of my head.
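
A sketch of that failure mode, with made-up names: a task file in the repo still declares an input that the freshly-set pipeline no longer provides, so newly scheduled builds of the job error with a missing input.

# ci/tasks/unit.yml (hypothetical)
platform: linux
image_resource:
  type: registry-image
  source: {repository: busybox}
inputs:
- name: source
- name: tooling  # the updated pipeline config dropped the 'get: tooling' step
run:
  path: source/ci/unit.sh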

It wouldn't affect a build that was already running, but it could certainly affect a 'run of a pipeline', and that's something we've had to guard against (and seen customers do) by using a pool resource as a lock, so there's only one 'run' possible at a time, with a set-pipeline as one of the first steps.

As an aside, at Concourse London User Group some Pivots explicitly called out as a beginner's error folks mistakenly thinking that serial groups could protect against concurrent/out-of-sequence runs.

Yeah, this is another bit of awkwardness with trying to push 'runs of a pipeline' into the Concourse mental model. It's something I've thought about making first-class and replacing serial_groups. I've never really focused on it though. (There's too much to do!)

@DanielJonesEB

This comment has been minimized.

Copy link

commented Mar 13, 2019

Yeah, this is another bit of awkwardness with trying to push 'runs of a pipeline' into the Concourse mental model. It's something I've thought about making first-class and replacing serial_groups.

Maybe something like having 'modal resources'. Some resources that have gets in the pipeline can be flagged as modal, and thus only one permutation of the set of all modal resources is allowed to run at a time, and subsequent versions are queued? That's my crap idea having thought about it for all of 15 seconds whilst staring out of a window, so is undoubtedly completely unworkable.

I've never really focused on it though. (There's too much to do!)

Ha, I bet :)

@chenbh

This comment has been minimized.

Copy link
Member

commented Mar 13, 2019

I'm in full agreement that setting a pipeline that sets pipelines isn't the best UX, but I think that's something we can make easier by having fly set-pipeline just support syntactic sugar for it:

I wonder if we can give fly the ability to submit a pipeline configuration from a URL, and somebody (e.g. concourse-ci.org) can host a set of "meta-pipelines" that would be used to read and set the CI pipelines.

$ curl https://concourse-ci.org/pipelines/load-ci-from-git.yml

resources:
- name: repo
  type: git
  source: 
    uri: ((github-repo))
    paths: [pipeline.yml]  # or make it a var that can be passed in

- name: ci-pipeline
  type: concourse-pipeline
  source:
    target: ((target))
    insecure: "false"
    teams:
    - name: ((team))
      username: ((user))
      password: ((pass))


jobs:
- name: set-pipeline
  plan:
  - get: repo
    trigger: true
  - put: ci-pipeline
    params:
      pipelines_file: repo/pipeline.yml

And to use it

fly set-pipeline -p meta-pipeline \
  --config-from-url https://concourse-ci.org/pipelines/load-ci-from-git.yml \
  --var "github-repo=https://github.com/concourse/concourse.git" \
  --var target, team, user, pass....

Pros:

  • Easy to copy/paste and get your CI up and running
  • Concourse doesn't have to know about git (other than having the resource)
  • Fairly easy to extend to whatever other source you want your pipeline.yml to come from (hg, s3 bucket?????)

Cons:

  • Still has a meta pipeline whose only job is to update the CI pipeline
  • Must explicitly specify fly settings
  • Meta-pipelines might become pretty complex and require lots of vars to be passed in (default values in pipeline configurations?)
@DanielJonesEB

This comment has been minimized.

Copy link

commented Mar 14, 2019

I wonder if we can give fly the ability to submit a pipeline configuration from a URL and somebody (e.g. concourse-ci.org) can host a set of "meta-pipelines" that would be used to read and set the CI pipelines

Interesting idea! In my mind, this starts to touch on higher-level value offerings to be built atop Concourse - like 'buildpacks' for CI pipelines. Something that you point at your code, which then generates a meta-pipeline and/or a pipeline for you.

@eedwards-sk

This comment has been minimized.

Copy link

commented Mar 15, 2019

Vito mentions how ugly the fly CLI invocation could get when setting a pipeline where you also have to specify a git resource.

...but IMO the fly CLI is already pretty ugly (heavily parameterized). I rarely use it directly, often wrapping it to provide all the vars (-l, -c, -p, etc.).

I don't see why adding more vars is a big deal if it gives us the results we need?

@jshearer

This comment has been minimized.

Copy link

commented Mar 17, 2019

As someone who doesn't know that much about how concourse works internally, I'd like to add to what @marco-m said.

I'd like to switch from GitLab or CircleCI to concourse since I love the task definition and planning, individual docker containers per task, scheduling, cost, etc. The problem is that even with @vito's suggestion of a meta-pipeline to create pipelines, and the concourse pipeline resource, it's still significantly harder to get the workflow I want up and running.

In gitlab and circle, I can just create a pipeline file in my repo with a well-known name, teach the CI system about my repo, and everything just works -- branch builds, updates to the pipeline config get picked up atomically, pipeline changes per branch (i.e. I can push a change to the pipeline in a branch and that pipeline will run only for that branch) and no worrying about "oh shit, I changed my pipeline but forgot to run fly update-the-pipeline". If the meta-pipeline suggestion solves this (i.e., it takes the place of "teaching the CI system about my repo", and I only have to do it once), then I think that's great.

In addition, not only do I not have to worry about forgetting to manually update the pipeline, I don't have to worry about builds potentially running against an old version of the pipeline when I push an update to the repo that changes the pipeline yaml before I or the meta resource have time to update it.

In thinking about this more, it seems like really only the last issue I mentioned is a problem:

  1. Push change to project repo containing pipeline change, and code change
  2. The meta-pipeline picks up this change, and applies it to the main pipeline
  3. The main pipeline detects a change in the repo and runs

The problem is, are 2 and 3 guaranteed to happen in that order? What if 3 just happens to run before 2? Now I'm running an old version of the pipeline against the new commit, and who knows what might happen.

I actually like the idea of storing all of the meta-pipelines in version control and just having to manually configure concourse once, at the very beginning, to pull meta-pipelines from a single repo; that way you can also keep, in version control, which repos concourse should watch for pipelines.

@vito

This comment has been minimized.

Copy link
Member Author

commented Mar 18, 2019

@jshearer Yeah, I think we're all on the same page. The meta-pipeline proposal would solve the first part of what you said (teach concourse about my repo and everything's automatic from there). But it won't solve that race condition. Cheers for adding your perspective, it's good to know a lot of people are aligned on this.

@ari-becker

This comment has been minimized.

Copy link

commented Mar 19, 2019

@vito We approach this with a meta-pipeline that sets our other pipelines through the Concourse Pipeline resource. I'm not too keen on any kind of proposal which launches Concourse with knowledge of a meta-pipeline since our meta-pipeline is itself generated. Currently we run our meta-pipeline generation script by hand, but I could see us putting the script in a cronjob.

We too suffer from the race condition mentioned by @jshearer. My proposal would be to add an optional parent_pipeline field to pipelines which, if set, waits to propagate new versions to the pipeline until those versions have been processed by the parent_pipeline for any resources shared by both pipelines. This would ensure that pipelines would be reset by the meta-pipeline before they begin processing their resources' new versions. (edit: naturally fits under the hierarchical pipelines proposal #532)
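
Purely as a sketch of the proposed (non-existent) field, with invented names:

resources:
- name: project-repo
  type: git
  source: {uri: https://github.com/example/project}  # invented repo

parent_pipeline: meta  # proposed field: hold new versions of shared resources until the 'meta' pipeline has processed them

jobs:
- name: build
  plan:
  - get: project-repo
    trigger: true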

When I first started using Concourse, I was coming from Jenkins and Jenkinsfiles and wondering why Concourse didn't have them. But any organization at scale ends up using Jenkins shared libraries to keep a consistent configuration across repositories, and the Jenkinsfile just ends up being a wrapper around that shared library. We find that the meta-pipeline pattern scales much better, and IMO some kind of .concourse-ci.yaml file would be an anti-pattern, not least because such a per-repository file could not be generated, as there would be no way to instruct Concourse how to generate the .concourse-ci.yaml file per-repository.

IMO the two uses for set-pipeline are for setting the meta-pipeline and for hacking on development pipelines.

@ukabu

This comment has been minimized.

Copy link

commented Jun 4, 2019

What about a fly -t target set-pipeline -p pipeline-name --resource git --source "uri=...,other_source_params=...." --check_every 5m ...

This way, Concourse doesn't have to "know" about git. Could use any resource as the source of the pipeline.

This would also define the source resource in the pipeline without having to redefine it.

For external resource types, we could pass a --resource_type ....

@vito

This comment has been minimized.

Copy link
Member Author

commented Jul 29, 2019

Hi all, thanks a lot for helping me map this out. I think I've got a plan that addresses this, and I've outlined it in the v10 Roadmap blog post. 🙂

Specifically, this has led to concourse/rfcs#32 which proposes a workflow that maps pretty closely to what we talked about here. A set_pipeline step is involved, but only as a small piece of the puzzle.

I'm going to close this issue and direct the conversation there. Please take a look and leave feedback, and thanks again!

@vito vito closed this Jul 29, 2019
