
TEP 0002: Custom Tasks #128

Merged
merged 1 commit into from Jul 6, 2020

Conversation

imjasonh
Member

@imjasonh imjasonh commented Jun 18, 2020

Based on the Google Doc, now with 1000% more Markdown! 😄

Rendered here

/assign @bobcatfish
/assign @vdemeester
/assign @afrittoli

/cc @jlpettersson @mattmoor @n3wscott

@tekton-robot tekton-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jun 18, 2020
@imjasonh
Member Author

Prototype code going on here: https://github.com/tektoncd/pipeline/compare/master...ImJasonH:run?expand=1

This will be split up and proposed as separate PRs after this TEP is accepted.


Supporting cancellation is optional but recommended.

### Timeout
Member

I feel like another thing we need to plumb through in general is identity e.g. serviceAccountName

Member Author

That's one we'd need to plumb through for any Custom Task that intends to end up running Pod-based workflows (e.g., MatrixTask, GroupTask), but not in the general case.

In the meantime a MatrixTask config can specify and use serviceAccountName and it just won't be overridable from a Pipeline. Perhaps it should just be a param? (e.g., `serviceAccountName: $(params.serviceAccountName)`)

Contributor

I think this is a usability issue. Users now have to maintain the service account name setting in multiple places (the existing PipelineRun `serviceAccountName` keys and now separate task parameters which could be custom-task specific). People will question why the PipelineRun's configured service account name can't just be passed down.

Member

Should we have a specific serviceAccount header and a related subject then? 🙃


### CLI and Dashboard Support

At the very least, the `tkn` CLI and Tekton Dashboard should have some way to display basic information about Custom Tasks, even if it's just a dump of the YAML. Solving a complete holistic plugin model for Go binaries and web front-ends expands the scope of this work too broadly, but at least providing Custom Task authors some basic support in Tekton's native tooling is better than nothing.
Member

What about a run subcommand for listing runs?

Member

BTW the bit about surfacing the child runs was thinking ahead to `tkn run logs` for Pipeline/Task/Matrix/Custom/...

Member Author

Blegh, I kind of just don't want to think about CLI support at all, let alone plumbing through `tkn logs` support, which would imply some way to tell the CLI that you do end up running Pods. Blegh.

Member

This is why I tagged @n3wscott elsewhere, since he's trying to motivate custom subresources in K8s and /logs is one that I believe has come up.

@imjasonh imjasonh force-pushed the custom-tasks branch 2 times, most recently from b8ebf5b to e157b10 Compare June 19, 2020 00:16
Member

@afrittoli afrittoli left a comment

Thanks for this, it's looking good!
I've not finished the review yet, but I wanted to share feedback so far.


## Requirements

* Add a new CRD type, `Run`, which will be instantiated when a `Pipeline` is run that contains `taskRef`s which have an `apiVersion` that is not `tekton.dev/*` -- `taskRefs` that reference `Task`s and `ClusterTask`s (the only valid values today) will be unaffected.
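As a sketch of that dispatch rule (the `example.dev` type here is hypothetical), a `Pipeline` could mix both kinds of `taskRef`:

```yaml
tasks:
- name: ordinary
  taskRef:
    name: my-task              # tekton.dev Task: runs as a TaskRun, unaffected
- name: custom
  taskRef:
    apiVersion: example.dev/v0 # non-tekton.dev apiVersion: instantiates a Run
    kind: Example
    name: my-example
```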
Member

It doesn't seem to me that how custom tasks are identified in a pipeline is a requirement per se, more a design decision or implementation detail - or is there a special reason you wanted to call that out specifically here?

Member Author

Yeah I think I just didn't know what to put as a "requirement" -- there isn't really any other work that needs to be done that blocks this work, so maybe there are no "requirements"?

If "requirements" means "things that must be done before this TEP is considered 'complete'", then I think how it integrates with Pipelines is in-scope, that's all I was trying to say.

Contributor

at a high level, we're proposing something to solve a problem. i think we're using the goals section to capture what problem we're solving in general. requirements are constraints on how the problem needs to be solved, e.g. something like "requests must not take > 500 ms to respond" or "it must be possible to do this by running one command"

for something like this it might be things like (mostly just making things up, don't feel like you have to use these)

  • adding support for new custom tasks must not require changes to the tekton pipelines controller
  • it must be possible to release custom tasks completely separately from tekton pipelines
  • it must be possible to use something other than a kubernetes controller to realize custom tasks

i think some of the things you listed here work, like providing docs, samples, helper functions + template repo, you could reword that if you wanted into something like "someone who wants to implement their own custom task controller has the xyz that they need to get started"

i agree that the CRD type part is more of an implementation detail

```yaml
message: Oh no bad times
```

The `Run` type's `.status` will also allow controllers to report other fields, such as `startTime`, `completionTime`, `results` (see below), and arbitrary context-dependent fields the Custom Task author wants to report. A fully-specified `Run` status might look like:
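Based on the fields listed above, a hedged sketch of such a status (all values here are illustrative, not from the TEP):

```yaml
status:
  conditions:
  - type: Succeeded
    status: "True"
    reason: ExampleFinished        # reason string is up to the author
  startTime: "2020-06-18T16:00:00Z"
  completionTime: "2020-06-18T16:05:00Z"
  results:
  - name: commit
    value: abc123
  # arbitrary context-dependent fields the Custom Task author reports:
  approvedBy: alice
```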
Member

Same for the dashboard. It would be really useful for custom tasks to be able to expose their logs with no extra work on CLI/dashboard.


### Cancellation

To support cancellation of `Run`s, when a `PipelineRun` is cancelled, the `PipelineRun` controller will attempt to update any ongoing `Run`s' `.spec.status` to `"Cancelled"`, and update `.status.conditions` to signal unsuccessful execution.
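A sketch of the resulting `Run` under this proposal (the condition shown is one plausible way to signal unsuccessful execution; the reason string is illustrative):

```yaml
spec:
  status: "Cancelled"          # patched in by the PipelineRun controller
status:
  conditions:
  - type: Succeeded
    status: "False"
    reason: RunCancelled       # illustrative reason string
```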
Member

Does this mean that the PipelineRun controller will set a condition on the Run resource?
I think setting Cancelled in the spec should be enough, and it would be the responsibility of the custom task controller to set the condition accordingly.

What will be the behaviour in case of cancellation not supported or delayed or failed for the custom task?

The implementation for TaskRun is quite optimistic: we apply the patch to the spec of all TaskRuns. If that goes well we assume everything is done and set the completion time.
I wonder if we should wait for the TaskRun to actually report that they stopped running - and similarly for custom tasks.

Member

Same question as @afrittoli: updating the .status.conditions is the "custom task" controller's responsibility, isn't it?
I think it comes from the fact that this cancellation support would be optional. How can a Custom Task give that information ("I support cancellation" or "I do not support cancellation") to the controller?

Member Author

This is a bit tricky. If a custom task controller doesn't handle cancellation (i.e., doesn't respond in any way when .spec.status is set to Cancelled), then the PipelineRun should Do The Right Thing, which I think is to set all sub-task .statuses to conditions[@type=Succeeded].status=False and reason:RunCancelled.

Maybe we could specify that PipelineRun controller is responsible for all of this, and custom task authors can optionally detect that case and propagate cancellation however that makes sense for them, or we can say that the PipelineRun controller will set .spec.status=Cancelled and then wait to see if anything responds, and then force-cancel the Run? That's sort of how initial update timeout would work, but that means more complex logic in the PipelineRun controller and more nodes on the state diagram -- what happens if the PipelineRun controller restarts while it's waiting to see if a cancellation has been acted upon? Gross.

Ideas welcome. I don't think this specific question should block overall approval of the proposal though, since it only affects a fairly narrow interaction between Runs and PipelineRuns.

Member

Also, we can get this TEP in with some elements still to be discussed 😉


Today, users can specify a timeout for a component `Task` of a `Pipeline` (see [`PipelineTask.Timeout`](https://godoc.org/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1#PipelineTask)). The `Run` type will specify a `Timeout` field to hold this value when created as part of a `PipelineRun` (or when `Run`s are created directly). Custom Task authors can read and propagate this field if desired, but a Tekton-owned controller will be responsible for tracking this timeout and updating timed out `Run`s to signal unsuccessful execution.
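A sketch of a `Run` carrying that value (the `timeout` field name mirrors the `TaskRun` convention and is assumed here, as is the `tekton.dev/v1alpha1` apiVersion for `Run`):

```yaml
apiVersion: tekton.dev/v1alpha1   # assumed Run apiVersion
kind: Run
spec:
  ref:
    apiVersion: example.dev/v0
    kind: Example
    name: my-example
  timeout: 10m                    # propagated from PipelineTask.Timeout
```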

A Custom Task author can watch for this status update and take any corresponding actions (e.g., cancel a cloud build, stop the waiting timer, tear down the approval listener).
Member

Similar to the cancel case, I think that the custom controller should be the sole responsible to write the "Succeeded" condition of the custom task. The pipeline run controller should have a mechanism to tell the custom task controller to stop the execution of a custom task.

Member Author

That's a reasonable stance. How would you like the PipelineRun controller to signal that to custom task controllers? Just treat timeout as a type of cancellation? Set .spec.status=TimedOut and handle that differently?

Is there any situation where a custom task controller would care to know that it was cancelled due to timeout specifically? 🤔


In order to test correct handling of Custom Tasks in the PipelineRun controller, e2e tests could install a simple Wait type and controller (**only used for testing**), and assert that a Pipeline that references that Wait type runs component Tasks with some appropriate period of time between them.

Other future experimental types and controllers (e.g., in `tektoncd/experimental`) should be accompanied by unit tests and e2e tests along the same lines.
Contributor

Will images be published for the controllers in tektoncd/experimental? I'm not sure how that process would work, i.e. would it have releases?

Member Author

That's up to the custom task author. If they're in tektoncd/experimental there's probably a higher bar of support than if it's just a random personal repo on GitHub, especially because that repo already has Prow set up.

For now I've been envisioning myself writing most custom task controllers in my own repos, just to avoid having to find someone else to review them as required by tektoncd/experimental 😈


1. Provide any Custom Task implementations as "official" or "first-party"
integrations. Some may be added in a future change, but for now the goal
is just to support _any_ third-party integrations, and let the user install them themselves, or let distributors provide them if they prefer to.
Contributor

Are you saying that you don't want "official" custom tasks built on top of this support in the near future by anybody, or just in the scope of this TEP? I ask because I've been looking at implementing task looping using custom tasks and @jerop is looking at implementing conditions using custom tasks. I would expect those items be assessed by the community on their own as to whether they should be official or experimental -- unless you are stating a position that it's premature to implement any custom tasks officially?

Member

@GregDritschler only as part of this TEP. This TEP focuses on adding support for custom tasks. Doesn't mean, we won't ship built-in custom task, … but it would need another TEP (aka the idea is to create focus TEP).
So it's not premature to implement any custom task officially, but it's not in the scope of this particular TEP.

Member Author

@GregDritschler what @vdemeester said. This TEP should only cover being able to write a custom task, and not expand to include any official custom tasks.

I can definitely see a future where the tektoncd/pipeline repo includes some custom task controllers for commonly requested features, maintained and supported by Tekton contributors. But, in the future a Tekton platform/distribution (e.g., Openshift Pipelines) might decide not to install some that they don't need, or install their own versions, or install new ones by default, etc. -- that's up to them.

@shinji62

shinji62 commented Jun 24, 2020

Any way that we can vote on that?
In our case we use external parties which do async stuff, so having a task which is not running but is waiting for a result could be nice.

And we accept the drawback, as we already have some specific CRDs.

Member

@vdemeester vdemeester left a comment

@imjasonh nice TEP 😍
A few comments and questions.

I also think the TEP would benefit from a few more examples (maybe in User Stories) based on what is in the initial design doc 👼 🙏


This references an `Example` CRD type defined by the custom task author, an instance of which is named `my-example`.

When a `Run` object is created, Tekton will validate that the `ref` is specified, and that the specified CRD type is defined, using webhook validation.
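Putting the two paragraphs above together, a minimal `Run` referencing `my-example` might look like this (the `tekton.dev/v1alpha1` apiVersion for `Run` is an assumption):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Run
metadata:
  name: my-example-run
spec:
  ref:                         # webhook validation checks this ref is specified
    apiVersion: example.dev/v0
    kind: Example
    name: my-example
```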
Member

nit/impl. detail: this means the validation webhook will need to talk to the k8s API, right? (which it does not as of today)

Member Author

Hmm, true. I don't actually feel strongly that the ref is validated synchronously. The custom task controller will reconcile it to failure soon after.



A Custom Task author can watch for this status update and take any corresponding actions (e.g., cancel a cloud build, stop the waiting timer, tear down the approval listener).

Supporting timeouts is optional but recommended.
Member

I don't think we are covering cases where the Custom Task should inhibit the default / set timeout on a PipelineRun. The approval case is one such example: we may want to inhibit the timeout of the execution (which is 1h by default), to give approvers time to do their thing.

Member Author

That's a good point. I'm not sure whether a person specifying a Pipeline-wide timeout of 24h would expect time spent waiting for approval to count toward that time or not. On the one hand, I might want my Pipeline to always be guaranteed to finish today. On the other hand, it means I can grant approval .1s before the Pipeline's timeout elapses, which isn't terribly helpful. I guess that could already happen either way 🤔


Under this proposal, the user could fork a template GitHub repo, define a type describing their intended behavior, and implement a controller that performs that behavior. In this example, the controller would react to creations of `Run<Approval>` objects by setting up a service that listens for approval events, and updates the `Run`'s `.status` to signal that the pipeline should proceed.

Other users who want to use this approval mechanism in their own pipelines could install the task author's type and controller to get the same behavior. The task author could release and distribute their controller using the Tekton [catalog](https://github.com/tektoncd/catalog) and/or (someday) Tekton Hub, or their own GitHub repo.
Member

I didn't think of that, very interesting thought
/cc @sthaha

@tekton-robot tekton-robot requested a review from sthaha June 24, 2020 09:28
Contributor

@bobcatfish bobcatfish left a comment

I'm excited! I have an alternative that I'd like to include in the alternatives section, but I approve of this overall and want to go ahead.

I was initially skeptical about Runs, but after talking with @imjasonh I'm convinced that Runs (vs instantiating arbitrary custom CRDs directly) have the advantage of nicely reducing and simplifying the responsibilities of the tekton controller itself. You can still do whatever crazy stuff you want to do, but you have to do it in your own controller.

* enable looping execution of sub-Tasks -- e.g., express that a Task should be repeatedly run until some state is reached (signalled at run-time)
* ...and in general, support a model where integrators can implement their own execution types without modifying Tekton Pipelines code directly

This mechanism can also be used by Tekton core contributors to prototype and experiment with new execution modes, even other forms of Pod-based Tasks, before possibly integrating that functionality into Tekton Pipelines directly.
Contributor

🎉



### Parameter Passing

Custom Task authors should support parameter passing, by supporting a `.spec.params` field (of type [`[]Param`](https://godoc.org/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1#Param)), and by resolving any `$(params.foo)` placeholders in the CRD type when a `Run` of that type is first reconciled -- this functionality should be implemented by a Go package provided by Tekton, which should be the same one that Tekton itself uses when resolving placeholders in `TaskRun`s and `PipelineRun`s.
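A sketch of what that could look like end to end (the `Example` field `greeting` is hypothetical):

```yaml
# On the Run:
spec:
  ref:
    apiVersion: example.dev/v0
    kind: Example
    name: my-example
  params:
  - name: foo
    value: bar
---
# On the referenced Example object, a placeholder the custom
# controller resolves on first reconcile using the shared helper package:
spec:
  greeting: "hello, $(params.foo)"   # becomes "hello, bar"
```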
Contributor

sounds like it could make sense to have a repo that is dedicated to code that makes it easier to write these custom task controllers - instead of calling it something generic like pkg, maybe we could call it controller or even customtask

Contributor

@bobcatfish bobcatfish left a comment

@imjasonh i added some suggested edits to try to capture what we discussed!

1. Provide first-party support for things like long waits, approvals, Pipelines-in-Pipelines, in an ad-hoc tightly-coupled manner. This requires these integrations to be implemented "in-tree", by Tekton contributors, which could harm team velocity and focus. By exposing a plug-in mechanism, the community is more fully enabled to experiment and contribute to the ecosystem.

1. As in previous iterations of this design (documented more fully in the Google Doc), require implementors to define and support two CRD types, instead of one. See the doc for full explanation of the trade-offs.

Contributor

Suggested change
1. Allow users to instantiate arbitrary kubernetes objects by providing their entire content inline. In this approach, users would create instances of their own custom CRDs, e.g. `CELRun`, by providing the entire body inline, much like [triggertemplates](https://github.com/tektoncd/triggers/blob/master/docs/triggertemplates.md).
* pros: ultimate flexibility
* cons: requires the tekton pipelines controller to be able to create and monitor arbitrary objects. it would need additional permissions to create these types, and it wouldn't be able to tell until after it instantiated the types whether the type actually complied with the required interface (i.e. [status reporting](#status-reporting)). Out of the box this would mean a user could try to instantiate a pod in a pipeline (the controller would have permissions to do this). Keeping these responsibilities in a separate controller reduces the existing controller's responsibilities. Arbitrary types will still be supported, but they must be created by the custom controller.


When a `Run` object is created, Tekton will validate that the `ref` is specified, and that the specified CRD type is defined, using webhook validation.

After that, Tekton Pipelines expects a custom task author to implement a controller for `Run` objects that reference their type (annotated throughout this proposal with the shorthand `Run<Example>`) to take some action, and eventually update its `.status` to signal completion, either successfully or unsuccessfully, using the `conditions` model used by Tekton PipelineRuns and TaskRuns.
Contributor

Suggested change
After that, Tekton Pipelines expects a custom task author to implement a controller for `Run` objects that reference their type (annotated throughout this proposal with the shorthand `Run<Example>`) to take some action, and eventually update its `.status` to signal completion, either successfully or unsuccessfully, using the `conditions` model used by Tekton PipelineRuns and TaskRuns.
After that, Tekton Pipelines expects a custom task author to implement a controller for `Run` objects that reference their type (annotated throughout this proposal with the shorthand `Run<Example>`) to take some action, and eventually update its `.status` to signal completion, either successfully or unsuccessfully, using the `conditions` model used by Tekton PipelineRuns and TaskRuns.
Adding a new Tekton supported type (`Run`) and requiring the author to create a custom controller provides a useful division of responsibilities:
* The existing Tekton controller will only need to know how to instantiate and monitor `Run` objects. It will need no additional privileges or client libraries.
* In the custom controller, the author has the flexibility to do whatever they need to do - any privileges or dependencies required to do this are restricted to the custom controller only
This gives custom task authors complete flexibility without significantly increasing the scope of the existing Tekton controller's responsibilities and permissions.

```yaml
taskRef:
  apiVersion: example.dev/v0
  kind: Example
  name: my-example
```
Contributor

@imjasonh and i were talking about a situation where there is no type to refer to, for example say i want to make a custom controller that evaluates CEL expressions, but I don't need a CEL CRD type to refer to - all I really need is for a run to be created with a param with the expression (not saying this is the final design, just a possibility for this feature or any other). in that case i need to indicate in the Run what type of thing we're running so my controller can filter for it, but there is no instance to refer to

@imjasonh suggested we could do this by making "name" an optional field
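A sketch of that possibility (the `CEL` kind, the `cel.example.dev` group, and the `expression` param are all made up for illustration, as is the Run apiVersion):

```yaml
apiVersion: tekton.dev/v1alpha1   # assumed Run apiVersion
kind: Run
spec:
  ref:
    apiVersion: cel.example.dev/v0
    kind: CEL                     # no `name`: there is no instance to refer to
  params:
  - name: expression
    value: "x > 2 && y < 10"
```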

Contributor

@imjasonh i tried to think of syntax tweaks and i have nothing super compelling but here are my thoughts:

1. it's interesting that in the definition of an instance of a type, apiVersion and kind are at a different level than name, which is a field of metadata; maybe we could separate them also somehow:

   ```yaml
   apiVersion: tekton.dev/v1beta1
   kind: ClusterTask
   metadata:
     name: clustertask-v1beta1
   ```

2. instead of overloading taskRef, adding a new section called `run` which is used when a Run is supposed to be created makes me less confused. but it pretty much has the same fields as taskRef and defeats the whole purpose of including `kind` in there in the first place

3. instead of a `ref` section in `Run` maybe these values could be top level fields

anyway none of these ideas are great, feel free to ignore them

Member Author

Our TaskRef type is based on corev1.ObjectReference (and possibly should just reuse that type directly?). I think adding/indenting metadata is just more verbose and makes it seem like you're about to define a whole object, when it's just a reference.

Re (3), it's currently stuffed into ref to allow the possibility to add a sibling field that's the full custom task spec in the future. I agree it looks sort of unnecessary now, hopefully it won't be lonely for long 😄

Making name not required sounds fine by me, I think I need to think/experiment more with a CELTask specifically to see how that looks. 🤔

@imjasonh
Member Author

Re: helper package: I had been envisioning them going in tektoncd/pipeline directly, for a few reasons:

  • param placeholder replacement code will hopefully be shared with tektoncd/pipeline, so it'll be useful to have it in the same repo where changes can be made atomically.

  • aside from param support I don't anticipate much real new code, probably just a filter func for controllers to be able to listen for changes to only their types of Run<T>s.

  • new repos require new Prow config, separate releases, non-trivial infra, and so far I think I might be the only person with anything to commit to it, so I'd need someone to commit to reviewing it. 😅

I do think we should add a new GitHub template repo to hold a sample Custom Task controller, like knative/sample-controller, and using knative/pkg infra. That would serve as a useful getting-started guide for prospective authors, but would by no means be the only way to write a controller.

I just really don't want to end up supporting something like knative/pkg, or even complex Tekton-centric extensions to knative/pkg, especially since that already exists and is pretty easily used directly.
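
The filter func mentioned above could be sketched roughly like this. The `Run` and `RunRef` types here are simplified stand-ins for illustration (the real definitions would live in tektoncd/pipeline), but the shape of the predicate is the point: a custom task controller only reconciles Runs whose `ref` matches its own apiVersion and kind.

```go
package main

import "fmt"

// RunRef and Run are simplified stand-ins for the proposed API types;
// the real definitions would live in tektoncd/pipeline.
type RunRef struct {
	APIVersion string
	Kind       string
	Name       string
}

type RunSpec struct {
	Ref *RunRef
}

type Run struct {
	Spec RunSpec
}

// filterRunRef returns an informer-style filter func that reports whether
// an object is a Run whose ref matches the given apiVersion and kind, so a
// custom task controller can listen for changes to only its own Runs.
func filterRunRef(apiVersion, kind string) func(interface{}) bool {
	return func(obj interface{}) bool {
		r, ok := obj.(*Run)
		if !ok || r.Spec.Ref == nil {
			return false
		}
		return r.Spec.Ref.APIVersion == apiVersion && r.Spec.Ref.Kind == kind
	}
}

func main() {
	f := filterRunRef("example.dev/v1alpha1", "CELTask")

	mine := &Run{Spec: RunSpec{Ref: &RunRef{APIVersion: "example.dev/v1alpha1", Kind: "CELTask", Name: "my-task"}}}
	other := &Run{Spec: RunSpec{Ref: &RunRef{APIVersion: "tekton.dev/v1beta1", Kind: "Task"}}}

	fmt.Println(f(mine))  // true
	fmt.Println(f(other)) // false
}
```

In practice a func like this would be passed to an informer's event-handler filter so that non-matching Runs never reach the reconciler at all.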

@bobcatfish

I do think we should add a new GitHub template repo to hold a sample Custom Task controller, like knative/sample-controller, and using knative/pkg infra. That would serve as a useful getting-started guide for prospective authors, but would by no means be the only way to write a controller.

Sounds great to me!

I just really don't want to end up supporting something like knative/pkg, or even complex Tekton-centric extensions to kntaive/pkg, especially since that already exists and is pretty easily used directly.

💯

i like the idea of having shared code that is used among projects, but what I really don't like about knative/pkg is that everything is thrown together into a grab bag. the fact is that by separating the code out into its own repo, you're making it so that changes to code in that repo have implications for all the projects that depend on them. given that, i think it's important that when we do this, we start treating the shared code the way we do code that is relied on by external projects: single-responsibility, well-documented, versioned projects with clear release processes and compatibility guarantees

@vdemeester left a comment

I think we can merge it as is, aka proposed, and take the nitty-gritty details in a follow-up PR to make it implementable (and be able to merge things in projects)

/cc @afrittoli

@tekton-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: vdemeester


@tekton-robot tekton-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 2, 2020
@bobcatfish

Looks like we can merge this with proposed status! And it feels like we're on the cusp of going from proposed to approved?

/lgtm

@tekton-robot tekton-robot added the lgtm Indicates that a PR is ready to be merged. label Jul 6, 2020
@tekton-robot tekton-robot merged commit 53e8267 into tektoncd:master Jul 6, 2020
dlorenc pushed a commit to dlorenc/community that referenced this pull request Oct 27, 2022
…g in the repo (tektoncd#128)

Signed-off-by: cpanato <ctadeu@gmail.com>

Signed-off-by: cpanato <ctadeu@gmail.com>
dlorenc pushed a commit to dlorenc/community that referenced this pull request Oct 27, 2022
Relates to: tektoncd#128

Signed-off-by: Luke Hinds <lhinds@redhat.com>