
[Proposition] Extend the componentGroupKind to define a Component's kind #68

Closed · cmoulliard opened this issue Oct 9, 2018 · 17 comments
Labels: lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed)

@cmoulliard

The ApplicationSpec type includes the field spec.componentKinds to group, under a name, related Kubernetes resources such as Service, StatefulSet, ConfigMap, Secret, etc., describing globally what the application is composed of.

// ApplicationSpec defines the specification for an Application.
type ApplicationSpec struct {
	// ComponentGroupKinds is a list of Kinds for the Application's components (e.g. Deployments, Pods, Services, CRDs). It
	// can be used in conjunction with the Application's Selector to list or watch the Application's components.
	ComponentGroupKinds []metav1.GroupKind `json:"componentKinds,omitempty"`
	// ... (remaining fields elided)
}
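
For context, a minimal Application manifest using this field might look as follows (a sketch assuming the app.k8s.io/v1beta1 Application CRD; the selector and component list are illustrative):

apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: payment-app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: payment-app
  componentKinds:
    - group: apps
      kind: Deployment
    - group: ""   # core API group
      kind: Service
    - group: ""   # core API group
      kind: ConfigMap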

If we use this Application custom resource to install/configure the environment on Kubernetes and to deploy the needed resources using a controller or operator, then it is also important to have a specialised type able to describe:

  • What the component is and the framework, runtime, ... it encapsulates
  • The packaging mode: jar, war, ...
  • The type: application, job, ...
  • The mode used to install it
  • The env variables to be configured or the parameters to be passed to configure the runtime/JVM

Example: as a user, I would like to install a Spring Boot application using version 1.5.15 of the framework, and I would like to access it externally using a route. The default port of the service is 8080. To convert this requirement into a component's type, the following object could be created:

apiVersion: component.k8s.io/v1alpha1
kind: Component
metadata:
  name: my-spring-boot
spec:
  deployment: innerloop
  packaging: jar
  type: application
  runtime: spring-boot
  version: 1.5.15
  exposeService: true

The advantage of having such a Component custom resource is that a UI or CLI could display the information in a more readable way:

kubectl application describe

NAME                CATEGORY    TYPE               VERSION    SOURCE    VISIBLE EXTERNALLY
payment-frontend    runtime     nodejs             0.8        local     yes
payment-backend     runtime     spring-boot        1.5.15     binary    no
payment-database    service     db-postgresql-apb  dev                  no

Proposed Component type:

type ComponentSpec struct {
	// Name is a human-readable string describing, from a business perspective, what this component relates to
	// Example : payment-frontend, retail-backend
	Name string `json:"name,omitempty"`
	// PackagingMode refers to the type of the archive file used to package the code
	// Example : jar, war, ...
	PackagingMode string `json:"packagingMode,omitempty"`
	// Type indicates how the component is installed: as a pod, job, statefulset, ...
	Type string `json:"type,omitempty"`
	// DeploymentMode indicates the strategy adopted to install the resources into a namespace
	// and next to create a pod. 2 strategies are currently supported: inner and outer loop,
	// where the outer loop refers to building the code and packaging the application into a container image,
	// while the inner loop installs a pod running a supervisord daemon used to trigger actions such as assemble, run, ...
	DeploymentMode string `json:"deployment,omitempty"`
	// Runtime is the framework used to start the application within the container
	// It corresponds to one of the following values: spring-boot, vertx, thorntail, nodejs
	Runtime string `json:"runtime,omitempty"`
	// ExposeService indicates whether to expose the service outside of the cluster as a route
	ExposeService bool `json:"exposeService,omitempty"`
	// Cpu is the cpu to be assigned to the pod running the application
	Cpu string `json:"cpu,omitempty"`
	// Memory is the memory to be assigned to the pod running the application
	Memory string `json:"memory,omitempty"`
	// Port is the HTTP/TCP port number used within the pod by the runtime
	Port int32 `json:"port,omitempty"`
	// Storage specifies the capacity and access mode (ReadWrite) of the volume to be mounted for the pod
	Storage Storage `json:"storage,omitempty"`
	// Images is the list of the images created according to the DeploymentMode to install the loop
	Images []Image `json:"image,omitempty"`
	// Envs is an array of env variables containing extra/additional info used to configure the runtime
	Envs []Env `json:"env,omitempty"`
	// Services is the list of services consumed by the runtime and created as service instances from a Service Catalog
	Services []Service `json:"services,omitempty"`
	// Features are capabilities the component requires in order to operate and that should be installed alongside it:
	// for example a Prometheus backend system to collect metrics, an OpenTracing datastore
	// to centralize the traces/logs of the runtime, a service mesh, ...
	Features []Feature `json:"features,omitempty"`
}
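
The spec above references several helper types (Storage, Image, Env, Service, Feature) that the issue does not define. Below is a minimal sketch of plausible shapes, inferred from the field comments; these definitions are hypothetical and not part of the original proposal:

// Hypothetical helper types, inferred from the ComponentSpec comments above.

// Storage describes the volume to be mounted for the pod.
type Storage struct {
	Capacity string `json:"capacity,omitempty"` // e.g. "1Gi"
	Mode     string `json:"mode,omitempty"`     // e.g. "ReadWriteOnce"
}

// Image references a container image created according to the DeploymentMode.
type Image struct {
	Name string `json:"name,omitempty"`
	Tag  string `json:"tag,omitempty"`
}

// Env is a simple name/value pair used to configure the runtime.
type Env struct {
	Name  string `json:"name,omitempty"`
	Value string `json:"value,omitempty"`
}

// Service identifies a service instance to be provisioned from the Service Catalog.
type Service struct {
	Class string `json:"class,omitempty"` // e.g. "db-postgresql-apb"
	Plan  string `json:"plan,omitempty"`  // e.g. "dev"
}

// Feature is a capability the component requires, e.g. a Prometheus backend or an OpenTracing datastore.
type Feature struct {
	Name string `json:"name,omitempty"`
}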
@ant31 (Contributor) commented Oct 11, 2018

we use this Application custom resource to install/configure

Installation configuration is part of the container build.
At runtime, the initialization and execution of the container are already declared in the workload object (Deployment, DaemonSet, Job, ...).
The scope of the Application is to aggregate the Kubernetes resources associated with one application; what's going on inside the container and how it's built is not part of the proposal.

@cmoulliard (Author) commented Oct 12, 2018

The scope of the Application is to aggregate kubernetes

This is also the goal of my proposition, except that I suggest having a dedicated custom resource describing, in a more human-readable way, the components composing my application.

When you design/develop a microservices application as an architect, you describe the different systems that are part of your application and that ultimately have to be installed/deployed on Kubernetes/OpenShift.

By adopting this Component custom resource, we can decouple the technical k8s resources (pod, service, serviceaccount, configmap, secret, replicaset, ...) to be created from the definition of the application itself, and delegate to a controller/operator the responsibility to translate the info provided (runtime, cpu, memory, port, ...) into the final k8s resources to be created.

Example :

High Level definition

Application
  component1 : spring boot, port 9090, env : SPRING_PROFILES_ACTIVE=my-cool-db
  component2 : nodejs, port 8080
  service1 : postgresqldb (from service catalog)

Converted by the controller/operator into

Application AND associated resources (for garbage collection):
  component1 
    deployment, replicaset, pod where port = 9090, service where port = HTTP and 9090, pod including env var with value SPRING_PROFILES_ACTIVE=my-cool-db ...
  component2 
    deployment, replicaset, pod where port = 8080, service where port = HTTP and 8080, ...
  service1 
    serviceInstance, serviceBinding, secret, updating the deployment to add EnvFrom
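
As an illustration, the operator could expand component1 into resources along these lines (a sketch; the image name and labels are invented for the example):

# Generated Deployment for component1 (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: component1
spec:
  selector:
    matchLabels:
      app: component1
  template:
    metadata:
      labels:
        app: component1
    spec:
      containers:
        - name: component1
          image: registry.example.com/component1:latest  # hypothetical image
          ports:
            - containerPort: 9090
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: my-cool-db
---
# Generated Service for component1 (sketch)
apiVersion: v1
kind: Service
metadata:
  name: component1
spec:
  selector:
    app: component1
  ports:
    - name: http
      port: 9090
      targetPort: 9090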

@cmoulliard (Author)

@mattfarina @kow3ns @prydonius WDYT about my proposition?

@cmoulliard (Author)

To support the Component CRD approach, I have created this demo project, where we install 2 microservices (components) and consume a service:

https://github.com/snowdrop/component-operator-demo#introduction

WDYT @mattfarina @kow3ns @prydonius

@mattfarina (Contributor)

I like to keep a separation of concerns.

Information about how an image was built would be better placed on the image itself, maybe as an annotation. No matter where that image is run, this information would be available.

If something should be exposed, this will show up in a Service. The space needed will be in an existing object. Why would we add a component to record it a second time?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on May 26, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 25, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@cmoulliard (Author)

/reopen

@k8s-ci-robot (Contributor)

@cmoulliard: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot reopened this on Feb 20, 2020
@cmoulliard (Author)

/remove-lifecycle rotten

k8s-ci-robot removed the lifecycle/rotten label on Feb 20, 2020
@cmoulliard (Author)

FYI: the Component API spec has been moved to this project: https://github.com/halkyonio/api/blob/master/component/v1beta1/types.go#L50-L74
and is currently supported by this operator: https://github.com/halkyonio/operator

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on May 20, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 19, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
