Conversation
Force-pushed from a27dd0e to 675729a
So as of right now: this successfully converts to DeploymentConfig. Had to work around adding PodSpec (converting between the client-go and kubernetes/api structs). Tests have yet to be added, but otherwise this is the preliminary work done!
Would it be possible to just replace
It won't build :-(
@kadel Tried again but unfortunately not 👎 Since we use PodSpec extensively from client-go, we would end up having to change all of our libraries to use k8s.io/kubernetes instead. Same as the commented-out section for (eventually) adding ImageStream triggers. But since we use PodSpec, it has to be client-go. I tried to use a type assertion but that also did not work (the structs are slightly different from each other). ConvertPodSpec simply (manually) converts the k8s.io/kubernetes PodSpec struct to the client-go equivalent so we don't need any other hacking around. Just plug it in and go.

@kadel I've gone ahead and fixed the compiling. Looks like I had forgotten to do my last
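The manual-conversion idea discussed above can be sketched like this. Note these are simplified stand-in structs, not the real k8s.io/kubernetes or client-go PodSpec types (which have dozens of fields), and `convertPodSpec` is illustrative, not Kedge's actual `ConvertPodSpec`:

```go
package main

import "fmt"

// Stand-ins for the two nearly identical PodSpec types: one from
// k8s.io/kubernetes, one from client-go. The fields match, but Go
// treats them as distinct types, so a type assertion or cast fails.
type internalContainer struct {
	Name  string
	Image string
}
type internalPodSpec struct {
	Containers []internalContainer
}

type clientGoContainer struct {
	Name  string
	Image string
}
type clientGoPodSpec struct {
	Containers []clientGoContainer
}

// convertPodSpec copies each field across by hand, the way the PR's
// ConvertPodSpec work-around does for the real (much larger) structs.
func convertPodSpec(in internalPodSpec) clientGoPodSpec {
	var out clientGoPodSpec
	for _, c := range in.Containers {
		out.Containers = append(out.Containers, clientGoContainer{
			Name:  c.Name,
			Image: c.Image,
		})
	}
	return out
}

func main() {
	in := internalPodSpec{Containers: []internalContainer{{Name: "web", Image: "nginx"}}}
	fmt.Println(convertPodSpec(in))
}
```

The risk @kadel raises below is exactly this: every field must be copied by hand, so any field missed in the converter is silently dropped.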
Force-pushed from 41e4eae to b2bb82a
It still can't be built :-(
I'm not sure about this approach; it looks like a lot of manual work with big potential to cause problems. How can you be sure that you covered everything in the PodSpec? I would use this only as a last resort if there is really no other way around it.
but OpenShift is also using apimachinery for ObjectMeta and TypeMeta, so this shouldn't be a problem
@cdrage I'm not sure if I'm explaining it correctly, so I tried to do it and test it. Why can't we do it like this?: https://github.com/kadel/kedge/tree/test-pull315 I haven't tested it much, just on two small files, but it looks like it builds and works.
```
$ go install
# github.com/kedgeproject/kedge/pkg/spec
pkg/spec/deploymentconfig.go:142: replicas declared and not used
```

@cdrage getting this while trying to build :(
Force-pushed from f975988 to 7bb4488
Tests pass and I implemented @kadel's suggestion! Compiles and converts correctly. Needs a double-check, however... Does this seem right?
@cdrage yeah, same, it does not build for me as well:

```
$ make install
go install -ldflags="-w -X github.com/kedgeproject/kedge/cmd.GITCOMMIT=675729a"
# github.com/kedgeproject/kedge/pkg/spec
pkg/spec/deploymentconfig.go:160:11: undefined: util
pkg/spec/util.go:110:28: undefined: deployment
Makefile:31: recipe for target 'install' failed
make: *** [install] Error 2
```
@cdrage my bad, something went wrong while pulling the code earlier, so I tried again now. Also re-triggered the Travis build! Reviewing it now.
```go
// (scope and select) objects. May match selectors of replication controllers
// and services. More info: http://kubernetes.io/docs/user-guide/labels
// +optional
Labels map[string]string `json:"labels,omitempty,conflicting"`
```
@containscafeine do we need this, now that we have integration of ObjectMeta?
I don't think we need this since ObjectMeta is already embedded in ControllerFields. More context in #267
@containscafeine Not having this = tests fail.
Do you want me to remove it?
@cdrage could you point out which ones are failing? The following worked for me without any test failures -
```diff
diff --git a/pkg/spec/types.go b/pkg/spec/types.go
index fcd48bc..5129639 100644
--- a/pkg/spec/types.go
+++ b/pkg/spec/types.go
@@ -138,11 +138,6 @@ type SecretMod struct {
 // ControllerFields are the common fields in every controller Kedge supports
 type ControllerFields struct {
 	Controller string `json:"controller,omitempty"`
-	// Map of string keys and values that can be used to organize and categorize
-	// (scope and select) objects. May match selectors of replication controllers
-	// and services. More info: http://kubernetes.io/docs/user-guide/labels
-	// +optional
-	Labels map[string]string `json:"labels,omitempty,conflicting"`
 	// List of volume that should be mounted on the pod.
 	// ref: io.kedge.VolumeClaim
 	// +optional
```
pkg/spec/deploymentconfig.go
Outdated
```go
ObjectMeta: metav1.ObjectMeta{
	Name:   deployment.Name,
	Labels: deployment.Labels,
},
```
You don't need to copy fields from deployment, just copy the entire ObjectMeta like it is done in deployment.go for the kubernetes deployment.
done.
pkg/spec/types.go
Outdated
```go
Controller string `json:"controller,omitempty"`
}

// Orchestrator: Kubernetes, OpenShift
```
What does this line signify?
I see this line on DeploymentSpecMod and DeploymentConfigSpecMod, but not on JobSpecMod?
Updated this in the PR ^^
pkg/spec/deploymentconfig.go
Outdated
```go
}

return &os_deploy_v1.DeploymentConfig{
	TypeMeta: metav1.TypeMeta{
```
this is created on the fly, you don't need to hard code the type meta!
Actually, we do. SetGVK and such, as implemented by @containscafeine, only supports Kubernetes, thus we have to manually specify the TypeMeta.
Whoops, we need to fix this in another PR then; need to check how OpenShift does this!
Tracked in #349
pkg/spec/deploymentconfig.go
Outdated
```go
	Name:   deployment.Name,
	Labels: deployment.Labels,
},
Spec: os_deploy_v1.DeploymentConfigSpec{
```
this should also be directly assigned from the object deployment of type DeploymentConfigSpecMod!
Bit confused what you mean here. Can you elaborate? @surajssd DeploymentConfigSpecMod contains ControllerFields (unneeded).
Great work @cdrage.
This works for me wrt functionality, huge bump to our OpenShift story 🎉
The changes to the code that I've suggested are mostly refactoring and making the code uniform with the rest of the code.
Also it'd be great if we could -
- add a simple example file in docs/examples and a more exhaustive one in examples/
- Since a lot of checks are the same for both Kubernetes Deployment and OpenShift's DeploymentConfig, I think we can abstract code and reuse it in deployment.go and deploymentconfig.go
- It would be great if we could add tests to some major and big functions in this PR
```go
	return &DeploymentSpecMod{}, nil
case "job":
	return &JobSpecMod{}, nil
case "deploymentconfig":
```
Should we check for -
- deploymentConfig
- deploymentconfig
- DeploymentConfig

Or should we convert the controller field to lowercase and then check the value? Because I'd expect all 3 of these variants to work.
WDYT?
done.
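The case-insensitive lookup amounts to lowercasing the controller field once before the switch, along these lines (a sketch of the idea; `normalizeController` is a hypothetical name, not the actual Kedge function):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeController lowercases the user-supplied controller name so
// "deploymentConfig", "DeploymentConfig" and "deploymentconfig" all
// select the same branch.
func normalizeController(controller string) (string, error) {
	switch strings.ToLower(controller) {
	case "", "deployment":
		return "deployment", nil
	case "job":
		return "job", nil
	case "deploymentconfig":
		return "deploymentconfig", nil
	default:
		return "", fmt.Errorf("unknown controller: %s", controller)
	}
}

func main() {
	for _, c := range []string{"deploymentConfig", "DeploymentConfig", "deploymentconfig"} {
		got, err := normalizeController(c)
		fmt.Println(got, err)
	}
}
```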
pkg/spec/types.go
Outdated
```go
// a complete kedge app based on OpenShift
type DeploymentConfigSpecMod struct {
	ControllerFields                  `json:",inline"`
	os_deploy_v1.DeploymentConfigSpec `json:",inline"`
```
Add the relevant comments above this, something along the lines of `k8s: io.k8s.kubernetes.pkg.apis.batch.v1.JobSpec`?
@surajssd can help with how to derive those, I think.
Added 👍, at least for kedgeSpec
pkg/spec/deploymentconfig.go
Outdated
```go
)

// Unmarshal the Kedge YAML file
func (deployment *DeploymentConfigSpecMod) Unmarshal(data []byte) error {
```
can we change the receiver names here to deploymentConfig or dc or something? This is a bit confusing since it says deployment.
done.
pkg/spec/deploymentconfig.go
Outdated
```go
// Fix all services / volume claims / configmaps that are applied
// TODO: abstract out this code when more controllers are added
func (deployment *DeploymentConfigSpecMod) Fix() error {
```
wrt the TODO and what is happening in this method, we can already abstract this like it happens for other controllers - e.g. the following code from the Fix() method for a Deployment:

```go
func (deployment *DeploymentSpecMod) Fix() error {
	if err := deployment.ControllerFields.fixControllerFields(); err != nil {
		return errors.Wrap(err, "unable to fix ControllerFields")
	}
	deployment.ControllerFields.ObjectMeta.Labels = addKeyValueToMap(appLabelKey, deployment.ControllerFields.Name, deployment.ControllerFields.ObjectMeta.Labels)
	return nil
}
```
We cannot use that method now because it has some checks for ingress which cannot be used with OpenShift!
@containscafeine also, this TODO was added by you 👍
Tracked in #350
pkg/spec/deploymentconfig.go
Outdated
```go
}

// Created the OpenShift DeploymentConfig controller
func (deployment *DeploymentConfigSpecMod) createDeploymentConfigController() (*os_deploy_v1.DeploymentConfig, error) {
```
How about renaming this to either -
- CreateOpenShiftController()
- CreateController()

and renaming all the CreateK8sController() methods to CreateController() as well?
I think the second one is a better option
This is a private method (not public) and has to be lower-case.
pkg/spec/deploymentconfig.go
Outdated
```go
// Set replicas to 1 if not set (0)
replicas := deployment.Replicas
if replicas == 0 {
```
Why do we need to set replicas manually if not set?
Because if we don't, replicas is set to 0 by default.
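That default is a consequence of Go zero values: an unset numeric field unmarshals to 0, so without the explicit check the DeploymentConfig would be created with zero replicas. A minimal self-contained illustration (simplified types, not the actual Kedge code):

```go
package main

import "fmt"

type spec struct {
	Replicas int32
}

// defaultReplicas returns 1 when the user never set the field, since
// an omitted YAML/JSON value leaves the Go zero value, 0, behind.
func defaultReplicas(s spec) int32 {
	if s.Replicas == 0 {
		return 1
	}
	return s.Replicas
}

func main() {
	fmt.Println(defaultReplicas(spec{}))            // unset → prints 1
	fmt.Println(defaultReplicas(spec{Replicas: 3})) // explicit → prints 3
}
```

One caveat of this pattern: it cannot distinguish an explicit `replicas: 0` from an unset field, which is the usual argument for pointer fields (`*int32`) in Kubernetes-style APIs.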
pkg/spec/deploymentconfig.go
Outdated
```go
if replicas == 0 {
	replicas = 1
}
```
This method is missing all the checks which are in CreateK8sController() for a Kubernetes Deployment, e.g.:

```go
// We need to error out if both, deployment.PodSpec and deployment.DeploymentSpec are empty
if deployment.isDeploymentSpecPodSpecEmpty() {
	log.Debug("Both, deployment.PodSpec and deployment.DeploymentSpec are empty, not enough data to create a deployment.")
	return nil, nil
}

// We are merging whole DeploymentSpec with PodSpec.
// This means that someone could specify containers in template.spec and also in top level PodSpec.
// This stupid check is supposed to make sure that only one of them set.
// TODO: merge DeploymentSpec.Template.Spec and top level PodSpec
if deployment.isMultiplePodSpecSpecified() {
	return nil, fmt.Errorf("Pod can't be specfied in two places. Use top level PodSpec or template.spec (DeploymentSpec.Template.Spec) not both")
}

deploymentSpec := deployment.DeploymentSpec

// top level PodSpec is not empty, use it for deployment template
// we already know that if deployment.PodSpec is not empty deployment.DeploymentSpec.Template.Spec is empty
if !reflect.DeepEqual(deployment.PodSpec, api_v1.PodSpec{}) {
	deploymentSpec.Template.Spec = deployment.PodSpec
}

// TODO: check if this wasn't set by user, in that case we shouldn't overwrite it
deploymentSpec.Template.ObjectMeta.Name = deployment.Name
// TODO: merge with already existing labels and avoid duplication
deploymentSpec.Template.ObjectMeta.Labels = deployment.Labels
```
I think it should be in the scope of this PR to abstract that code and reuse it in both the places.
@containscafeine IMO we should add this after, since OpenShift does things a bit differently for pod checking in this PR.
Could we abstract this and refactor in a new PR (since we copy and paste most of this between deployment.go, deploymentconfig.go and job.go)?
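The duplicated validation could eventually be pulled into one helper shared by deployment.go, deploymentconfig.go and job.go. A rough self-contained sketch of the idea (simplified types and a hypothetical helper name, not Kedge's actual code):

```go
package main

import (
	"errors"
	"fmt"
	"reflect"
)

// podSpec is a tiny stand-in for the real PodSpec type.
type podSpec struct {
	Containers []string
}

// checkPodSpecs captures the check every controller repeats: the pod
// may be given at the top level or in the template spec, but not in
// both places at once. It returns whichever spec was provided.
func checkPodSpecs(topLevel, template podSpec) (podSpec, error) {
	empty := podSpec{}
	topSet := !reflect.DeepEqual(topLevel, empty)
	tmplSet := !reflect.DeepEqual(template, empty)
	switch {
	case topSet && tmplSet:
		return empty, errors.New("pod can't be specified in two places: use top level PodSpec or template.spec, not both")
	case topSet:
		return topLevel, nil
	default:
		return template, nil
	}
}

func main() {
	got, err := checkPodSpecs(podSpec{Containers: []string{"nginx"}}, podSpec{})
	fmt.Println(got, err)
}
```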
pkg/spec/deploymentconfig.go
Outdated
```go
	replicas = 1
}

return &os_deploy_v1.DeploymentConfig{
```
I think we don't need to assign most of this; the way it's done in Deployment should suffice - something like:

```go
	return &ext_v1beta1.Deployment{
		ObjectMeta: deployment.ObjectMeta,
		Spec:       deploymentSpec,
	}, nil
}
```
@containscafeine DeploymentConfig is different; you have to specify the template, replicas and selector manually. If you can find a way to do it similarly above, let me know!
What is there is not right, as it doesn't copy everything, see #315 (comment).
What might work is if we first copy the whole spec, `Spec: deploymentConfig.DeploymentConfigSpec`, and then fill the selector.
what @surajssd did in #315 (comment) looks like a good solution for this
```diff
@@ -17,10 +17,11 @@ limitations under the License.
 package spec
```
+1 on the rename to types.go :)
Hey @containscafeine, implementing the changes ended up with a nil pointer error, specifically related to adding:

```go
// We need to error out if both, deployment.PodSpec and deployment.DeploymentSpec are empty
if deployment.isDeploymentSpecPodSpecEmpty() {
	log.Debug("Both, deployment.PodSpec and deployment.DeploymentSpec are empty, not enough data to create a deployment.")
	return nil, nil
}

// We are merging whole DeploymentSpec with PodSpec.
// This means that someone could specify containers in template.spec and also in top level PodSpec.
// This stupid check is supposed to make sure that only one of them set.
// TODO: merge DeploymentSpec.Template.Spec and top level PodSpec
if deployment.isMultiplePodSpecSpecified() {
	return nil, fmt.Errorf("Pod can't be specfied in two places. Use top level PodSpec or template.spec (DeploymentSpec.Template.Spec) not both")
}

deploymentSpec := deployment.DeploymentSpec

// top level PodSpec is not empty, use it for deployment template
// we already know that if deployment.PodSpec is not empty deployment.DeploymentSpec.Template.Spec is empty
if !reflect.DeepEqual(deployment.PodSpec, api_v1.PodSpec{}) {
	deploymentSpec.Template.Spec = deployment.PodSpec
}

// TODO: check if this wasn't set by user, in that case we shouldn't overwrite it
deploymentSpec.Template.ObjectMeta.Name = deployment.Name
// TODO: merge with already existing labels and avoid duplication
deploymentSpec.Template.ObjectMeta.Labels = deployment.Labels
```

I'd debug, but I think we should start getting this OpenShift code in and refactor how we fix / check pods. I believe it's a missing pointer value somewhere within PodSpec that's causing the issue.
@kadel I forget what it was, but any idea in regards to the failing test?

```
go test -i -race -cover ./cmd/... ./pkg/... ./tests/... .
can't load package: package k8s.io/kubernetes/kubernetes: cannot find package "k8s.io/kubernetes/kubernetes" in any of:
	/home/travis/.gimme/versions/go1.7.linux.amd64/src/k8s.io/kubernetes/kubernetes (from $GOROOT)
	/home/travis/gopath/src/k8s.io/kubernetes/kubernetes (from $GOPATH)
can't load package: package k8s.io/kubernetes/tools/clientcmd: cannot find package "k8s.io/kubernetes/tools/clientcmd" in any of:
	/home/travis/.gimme/versions/go1.7.linux.amd64/src/k8s.io/kubernetes/tools/clientcmd (from $GOROOT)
	/home/travis/gopath/src/k8s.io/kubernetes/tools/clientcmd (from $GOPATH)
make: *** [test-unit-cover] Error 1
```
Hey @containscafeine, for your three points:
@cdrage I'll look at tests. I have another issue.

kedge:

```yaml
name: nginx
controller: deploymentconfig
containers:
- image: nginx
replicas: 2
triggers:
- type: "ConfigChange"
strategy:
  type: Rolling
```

output:

```yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
  creationTimestamp: null
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  strategy:
    resources: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
  test: false
  triggers: null
```

strategy and triggers are empty :-(
@cdrage

```
$ git diff
diff --git a/pkg/spec/deploymentconfig.go b/pkg/spec/deploymentconfig.go
index c601fb5..71fd721 100644
--- a/pkg/spec/deploymentconfig.go
+++ b/pkg/spec/deploymentconfig.go
@@ -136,10 +136,9 @@ func (deploymentConfig *DeploymentConfigSpecMod) Transform() ([]runtime.Object,
 // Created the OpenShift DeploymentConfig controller
 func (deploymentConfig *DeploymentConfigSpecMod) createOpenShiftController() (*os_deploy_v1.DeploymentConfig, error) {
-	// Set replicas to 1 if not set (0)
-	replicas := deploymentConfig.Replicas
-	if replicas == 0 {
-		replicas = 1
+	dcSpec := deploymentConfig.DeploymentConfigSpec
+	dcSpec.Template = &kapi.PodTemplateSpec{
+		Spec: deploymentConfig.PodSpec,
 	}
 	return &os_deploy_v1.DeploymentConfig{
@@ -148,15 +147,6 @@ func (deploymentConfig *DeploymentConfigSpecMod) createOpenShiftController() (*o
 			APIVersion: "v1",
 		},
 		ObjectMeta: deploymentConfig.ObjectMeta,
-		Spec: os_deploy_v1.DeploymentConfigSpec{
-			Replicas: replicas,
-			Selector: deploymentConfig.Labels,
-			Template: &kapi.PodTemplateSpec{
-				ObjectMeta: metav1.ObjectMeta{
-					Labels: deploymentConfig.Labels,
-				},
-				Spec: deploymentConfig.PodSpec,
-			},
-		},
+		Spec: dcSpec,
 	}, nil
 }
```

this will resolve the issues that @kadel is pointing at as well!
pkg/spec/deploymentconfig.go
Outdated
```go
ObjectMeta: deploymentConfig.ObjectMeta,
Spec: os_deploy_v1.DeploymentConfigSpec{
	Replicas: replicas,
	Selector: deploymentConfig.Labels,
```
Shouldn't we use something a bit more specific as a selector? Maybe we could add a new label to the deployment, `deploymentconfig: <appname>`, and use it together with `app: <appname>` as the selector.
I'm afraid that someone will create a pod with just the `app` label and it will be wrongly assigned to a DeploymentConfig.
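The more specific selector suggested here could be built by layering a controller-specific label on top of the generic app label, roughly like this (a sketch of the idea only; `buildSelector` and the label keys are illustrative, not Kedge code):

```go
package main

import "fmt"

// buildSelector combines the generic app label with a
// controller-specific one, so a stray pod that only carries
// app: <appname> won't match the DeploymentConfig's selector.
func buildSelector(appName string) map[string]string {
	return map[string]string{
		"app":              appName,
		"deploymentconfig": appName,
	}
}

func main() {
	fmt.Println(buildSelector("nginx"))
}
```

A label selector matches only pods carrying every key/value pair in the map, so adding the second label tightens the match.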
root cause of this is e2e_test.go. At least not for structs, using --EDIT--
```diff
@@ -86,7 +86,7 @@ func (job *JobSpecMod) Transform() ([]runtime.Object, []string, error) {
 	return runtimeObjects, includeResources, nil
 }

-func (job *JobSpecMod) CreateK8sController() (*batch_v1.Job, error) {
+func (job *JobSpecMod) createKubernetesController() (*batch_v1.Job, error) {
```
Don't forget to rename this in the tests!
done!
Force-pushed from 8e7fbd2 to 18fbdfe
Adds DeploymentConfig support to Kedge by being able to specify a controller like so:

```yaml
controller: deploymentconfig
```
Added test(s), up for another review / testing it out.

This looks like a good first cut to me!

What I feel is that we should get this thing in, and file issues for whatever is missing in this PR!
```go
// kedgeSpec: io.kedge.DeploymentConfigSpecMod
type DeploymentConfigSpecMod struct {
	ControllerFields                  `json:",inline"`
	os_deploy_v1.DeploymentConfigSpec `json:",inline"`
```
ping @surajssd, can you help with the comment for this field, or create an issue for it?
Super work, @cdrage 🎉
This was merged too quickly :-( see: Has anyone actually tested this before merging?