This repository has been archived by the owner on Nov 15, 2022. It is now read-only.

Add DeploymentConfig support #315

Merged 1 commit on Oct 13, 2017

Conversation

@cdrage (Collaborator) commented Oct 5, 2017

Adds DeploymentConfig support to Kedge by being able to specify a
controller like so:

controller: deploymentconfig

@cdrage force-pushed the add-openshift branch 2 times, most recently from a27dd0e to 675729a on October 5, 2017 21:48
@cdrage (Collaborator, Author) commented Oct 5, 2017

So as of right now:

  • This successfully converts to DeploymentConfig.
  • I had to add a work-around for PodSpec (converting between the client-go and kubernetes/api structs).
  • Tests have yet to be added.

But otherwise, this is the preliminary work done!

@kadel (Member) commented Oct 6, 2017

Would it be possible to just replace the import k8s.io/client-go/pkg/api/v1 with k8s.io/kubernetes/pkg/api/v1 and avoid the whole PodConversion thingy?

@kadel (Member) commented Oct 6, 2017

It won't build :-(

go build -ldflags="-w -X github.com/kedgeproject/kedge/cmd.GITCOMMIT=675729a8" -o kedge main.go
# github.com/kedgeproject/kedge/pkg/spec
pkg/spec/deploymentconfig.go:160:11: undefined: util
pkg/spec/util.go:110:28: undefined: deployment
make: *** [bin] Error 2

@cdrage (Collaborator, Author) commented Oct 6, 2017

@kadel Tried again, but unfortunately no 👎. Since we use PodSpec extensively from client-go, we would end up having to change all of our libraries to use k8s.io/kubernetes instead.

Since we use metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" for ObjectMeta and TypeMeta, it's fine.

The same goes for the commented-out section for (eventually) adding ImageStream triggers.

But since we use PodSpec (magically, I must say), it has to be client-go.

I tried to use a type assertion, but that did not work either (the structs are slightly different from each other).

ConvertPodSpec simply (manually) converts the k8s.io/kubernetes PodSpec struct to the client-go equivalent so we don't need any hacking around elsewhere. Just plug it in and go.
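
A minimal sketch of the shape this manual conversion takes (the function body and field selection below are illustrative, not the PR's actual ConvertPodSpec):

```go
package spec

import (
	clientgo_v1 "k8s.io/client-go/pkg/api/v1"
	kube_v1 "k8s.io/kubernetes/pkg/api/v1"
)

// convertPodSpec copies a k8s.io/kubernetes PodSpec into its client-go
// twin, field by field. Only a few representative fields are shown here;
// a real conversion has to repeat this for every field of PodSpec.
func convertPodSpec(in kube_v1.PodSpec) clientgo_v1.PodSpec {
	out := clientgo_v1.PodSpec{
		RestartPolicy:      clientgo_v1.RestartPolicy(in.RestartPolicy),
		NodeSelector:       in.NodeSelector,
		ServiceAccountName: in.ServiceAccountName,
	}
	for _, c := range in.Containers {
		out.Containers = append(out.Containers, clientgo_v1.Container{
			Name:    c.Name,
			Image:   c.Image,
			Command: c.Command,
			Args:    c.Args,
			// ...and so on for the remaining Container fields.
		})
	}
	// ...Volumes, InitContainers, SecurityContext, etc. follow the same pattern.
	return out
}
```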

@kadel I've gone ahead and fixed the compilation. Looks like I had forgotten to push my last change with git push -f.

@cdrage force-pushed the add-openshift branch 2 times, most recently from 41e4eae to b2bb82a on October 6, 2017 19:54
@kadel (Member) commented Oct 9, 2017

It still can't be built :-(

@kadel (Member) commented Oct 9, 2017

ConvertPodSpec simply (manually) converts the k8s.io/kubernetes PodSpec struct to the client-go equivalent so we don't need any hacking around elsewhere. Just plug it in and go.

I'm not sure about this approach; it looks like a lot of manual work with a big potential to cause problems. How can you be sure that you covered everything in the PodSpec? I would use this only as a last resort if there is really no other way around it.
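
A hypothetical guard for exactly this worry (not part of the PR): a reflection-based test that fails whenever the field sets of the two PodSpec structs diverge, so a silently dropped field would at least show up in CI:

```go
package spec

import (
	"reflect"
	"testing"

	clientgo_v1 "k8s.io/client-go/pkg/api/v1"
	kube_v1 "k8s.io/kubernetes/pkg/api/v1"
)

// fieldNames collects the field names of a struct type.
func fieldNames(t reflect.Type) map[string]bool {
	names := map[string]bool{}
	for i := 0; i < t.NumField(); i++ {
		names[t.Field(i).Name] = true
	}
	return names
}

// TestPodSpecFieldsMatch only compares field names, not nested types,
// but it catches the most likely failure mode: one struct growing a
// field that the manual conversion never copies.
func TestPodSpecFieldsMatch(t *testing.T) {
	k := fieldNames(reflect.TypeOf(kube_v1.PodSpec{}))
	c := fieldNames(reflect.TypeOf(clientgo_v1.PodSpec{}))
	if !reflect.DeepEqual(k, c) {
		t.Errorf("PodSpec fields diverged:\nkubernetes: %v\nclient-go: %v", k, c)
	}
}
```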

@kadel (Member) commented Oct 9, 2017

Since we use metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" for ObjectMeta and TypeMeta, it's fine.

but OpenShift is also using apimachinery for ObjectMeta and TypeMeta, so this shouldn't be a problem

@kadel (Member) commented Oct 9, 2017

@cdrage I'm not sure if I'm explaining it correctly, so I went ahead, tried it, and tested it.

Why can't we do it like this?: https://github.com/kadel/kedge/tree/test-pull315
(I've just commented out your conversion function and ran s/client-go/kubernetes/g everywhere except the vendor directory - https://github.com/cdrage/kedge/compare/add-openshift...kadel:test-pull315?expand=1 )

Haven't tested it much, just on two small files, but it looks like it builds and works.
Tests are failing on some missing tags in types.go, so it needs a bit of investigation (there might be some fields that are in conflict).

@concaf (Collaborator) left a comment:

$ go install
# github.com/kedgeproject/kedge/pkg/spec
pkg/spec/deploymentconfig.go:142: replicas declared and not used

@cdrage getting this while trying to build :(

@cdrage force-pushed the add-openshift branch 2 times, most recently from f975988 to 7bb4488 on October 10, 2017 13:16
@cdrage (Collaborator, Author) commented Oct 10, 2017

Tests pass and I implemented @kadel's suggestion! Compiles and converts correctly.

Needs a double-check, however...

Does this seem right?

name: httpd
controller: deploymentconfig
containers:
- image: centos/httpd
services:
- name: httpd
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: httpd
  name: httpd
spec:
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: httpd
  type: NodePort
status:
  loadBalancer: {}
---
apiVersion: v1
kind: DeploymentConfig
metadata:
  creationTimestamp: null
  labels:
    app: httpd
  name: httpd
spec:
  replicas: 1
  selector:
    app: httpd
  strategy:
    resources: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: httpd
    spec:
      containers:
      - image: centos/httpd
        name: httpd
        resources: {}
  test: false
  triggers: null
status:
  availableReplicas: 0
  latestVersion: 0
  observedGeneration: 0
  replicas: 0
  unavailableReplicas: 0
  updatedReplicas: 0

@surajssd (Member):

@cdrage yeah, same, it does not build for me either

$ make install
go install -ldflags="-w -X github.com/kedgeproject/kedge/cmd.GITCOMMIT=675729a"
# github.com/kedgeproject/kedge/pkg/spec
pkg/spec/deploymentconfig.go:160:11: undefined: util
pkg/spec/util.go:110:28: undefined: deployment
Makefile:31: recipe for target 'install' failed
make: *** [install] Error 2

@surajssd (Member) commented Oct 11, 2017

@cdrage my bad, something went wrong while pulling the code earlier; tried again now and it works. Also re-triggered the Travis build!

reviewing it now

// (scope and select) objects. May match selectors of replication controllers
// and services. More info: http://kubernetes.io/docs/user-guide/labels
// +optional
Labels map[string]string `json:"labels,omitempty,conflicting"`
@surajssd (Member), Oct 11, 2017:

@containscafeine do we need this, now that we have integration of ObjectMeta ?

Collaborator:

I don't think we need this since ObjectMeta is already embedded in ControllerFields. More context in #267

@cdrage (Author):

@containscafeine Not having this = tests fail.

Do you want me to remove it?

Collaborator:

@cdrage could you point out which ones are failing? The following worked for me without any test failures -

diff --git a/pkg/spec/types.go b/pkg/spec/types.go
index fcd48bc..5129639 100644
--- a/pkg/spec/types.go
+++ b/pkg/spec/types.go
@@ -138,11 +138,6 @@ type SecretMod struct {
 // ControllerFields are the common fields in every controller Kedge supports
 type ControllerFields struct {
        Controller string `json:"controller,omitempty"`
-       // Map of string keys and values that can be used to organize and categorize
-       // (scope and select) objects. May match selectors of replication controllers
-       // and services. More info: http://kubernetes.io/docs/user-guide/labels
-       // +optional
-       Labels map[string]string `json:"labels,omitempty,conflicting"`
        // List of volume that should be mounted on the pod.
        // ref: io.kedge.VolumeClaim
        // +optional

ObjectMeta: metav1.ObjectMeta{
Name: deployment.Name,
Labels: deployment.Labels,
},
Member:

You don't need to copy fields from deployment; just copy the entire ObjectMeta like it is done in deployment.go for the Kubernetes Deployment.

@cdrage (Author):

done.

Controller string `json:"controller,omitempty"`
}

// Orchestrator: Kubernetes, OpenShift
Member:

What does this line signify?

@surajssd (Member), Oct 11, 2017:

I see this line on DeploymentSpecMod and DeploymentConfigSpecMod but not on JobSpecMod?

@cdrage (Author):

Updated this in the PR ^^

}

return &os_deploy_v1.DeploymentConfig{
TypeMeta: metav1.TypeMeta{
Member:

this is created on the fly, you don't need to hard-code the type meta!

@cdrage (Author), Oct 11, 2017:

Actually, we do. SetGVK and such, as implemented by @containscafeine, only supports Kubernetes, thus we have to manually specify the TypeMeta.
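
For reference, the hard-coded TypeMeta in question is a literal along these lines (the values match the kind: DeploymentConfig / apiVersion: v1 output shown earlier in the thread):

```go
// Hard-coded because SetGVK only fills in the GroupVersionKind for
// Kubernetes objects at this point; doing the same for OpenShift is a
// follow-up (see #349 below).
TypeMeta: metav1.TypeMeta{
	Kind:       "DeploymentConfig",
	APIVersion: "v1",
},
```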

Collaborator:

Whoops, we need to fix this in another PR then; need to check how OpenShift does this!

Collaborator:

Tracked in #349

Name: deployment.Name,
Labels: deployment.Labels,
},
Spec: os_deploy_v1.DeploymentConfigSpec{
Member:

this should also be directly assigned from the deployment object of type DeploymentConfigSpecMod!

@cdrage (Author):

Bit confused about what you mean here. Can you elaborate, @surajssd? DeploymentConfigSpecMod contains ControllerFields (unneeded).

@concaf (Collaborator) left a comment:

Great work @cdrage.
This works for me wrt functionality, huge bump to our OpenShift story 🎉

The changes to the code that I've suggested are mostly refactoring and making the code uniform with the rest of the codebase.

Also, it'd be great if we could:

  • add a simple example file in docs/examples and a more exhaustive one in examples/
  • since a lot of checks are the same for both the Kubernetes Deployment and OpenShift's DeploymentConfig, abstract that code and reuse it in deployment.go and deploymentconfig.go
  • add tests for some of the major and big functions in this PR

return &DeploymentSpecMod{}, nil
case "job":
return &JobSpecMod{}, nil
case "deploymentconfig":
Collaborator:

Should we check for:

  • deploymentConfig
  • deploymentconfig
  • DeploymentConfig

Or should we convert the controller field to lowercase and then check the value? I'd expect all 3 of these variants to work.

WDYT?
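
A minimal sketch of the lowercase-then-switch variant (getController is a hypothetical name, and the concrete return type is elided; the cases mirror the switch quoted above):

```go
package spec

import (
	"fmt"
	"strings"
)

// getController normalizes the user-supplied controller name once, so
// deploymentConfig, DeploymentConfig, and deploymentconfig all hit the
// same case.
func getController(controller string) (interface{}, error) {
	switch strings.ToLower(controller) {
	case "deployment":
		return &DeploymentSpecMod{}, nil
	case "job":
		return &JobSpecMod{}, nil
	case "deploymentconfig":
		return &DeploymentConfigSpecMod{}, nil
	default:
		return nil, fmt.Errorf("unknown controller: %q", controller)
	}
}
```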

@cdrage (Author):

done.

// a complete kedge app based on OpenShift
type DeploymentConfigSpecMod struct {
ControllerFields `json:",inline"`
os_deploy_v1.DeploymentConfigSpec `json:",inline"`
Collaborator:

Add the relevant comments above this, something along the lines of k8s: io.k8s.kubernetes.pkg.apis.batch.v1.JobSpec?
@surajssd can help with how to derive those, I think.

@cdrage (Author):

Added 👍 at least for kedgeSpec

)

// Unmarshal the Kedge YAML file
func (deployment *DeploymentConfigSpecMod) Unmarshal(data []byte) error {
Collaborator:

can we change the receiver names here to deploymentConfig or dc or something? This is a bit confusing since it says deployment.

@cdrage (Author):

done.


// Fix all services / volume claims / configmaps that are applied
// TODO: abstract out this code when more controllers are added
func (deployment *DeploymentConfigSpecMod) Fix() error {
Collaborator:

wrt the TODO and what is happening in this method, we can already abstract this like it happens for the other controllers,
e.g. the following code from the Fix() method for a Deployment:

func (deployment *DeploymentSpecMod) Fix() error {
	if err := deployment.ControllerFields.fixControllerFields(); err != nil {
		return errors.Wrap(err, "unable to fix ControllerFields")
	}

	deployment.ControllerFields.ObjectMeta.Labels = addKeyValueToMap(appLabelKey, deployment.ControllerFields.Name, deployment.ControllerFields.ObjectMeta.Labels)

	return nil
}

Member:

We cannot use that method now because it has some ingress checks which cannot be used with OpenShift!

@cdrage (Author):

@containscafeine also, this TODO was added by you 👍

Collaborator:

Tracked in #350

}

// Created the OpenShift DeploymentConfig controller
func (deployment *DeploymentConfigSpecMod) createDeploymentConfigController() (*os_deploy_v1.DeploymentConfig, error) {
Collaborator:

How about renaming this to either:

  • CreateOpenShiftController()
  • CreateController(), and renaming all the CreateK8sController() methods to CreateController() as well

Collaborator:

I think the second one is a better option

@cdrage (Author):

This is a private method (not public) and has to be lower-case.


// Set replicas to 1 if not set (0)
replicas := deployment.Replicas
if replicas == 0 {
Collaborator:

Why do we need to set replicas manually if not set?

@cdrage (Author):

Because if we don't, replicas is set to 0 by default (the Go zero value), and a DeploymentConfig with 0 replicas would run no pods.

if replicas == 0 {
replicas = 1
}

Collaborator:

This method is missing all the checks which are in CreateK8sController() for Kubernetes Deployment.
e.g. -

	// We need to error out if both, deployment.PodSpec and deployment.DeploymentSpec are empty
	if deployment.isDeploymentSpecPodSpecEmpty() {
		log.Debug("Both, deployment.PodSpec and deployment.DeploymentSpec are empty, not enough data to create a deployment.")
		return nil, nil
	}

	// We are merging whole DeploymentSpec with PodSpec.
	// This means that someone could specify containers in template.spec and also in top level PodSpec.
	// This stupid check is supposed to make sure that only one of them set.
	// TODO: merge DeploymentSpec.Template.Spec and top level PodSpec
	if deployment.isMultiplePodSpecSpecified() {
		return nil, fmt.Errorf("Pod can't be specfied in two places. Use top level PodSpec or template.spec (DeploymentSpec.Template.Spec) not both")
	}

	deploymentSpec := deployment.DeploymentSpec

	// top level PodSpec is not empty, use it for deployment template
	// we already know that if deployment.PodSpec is not empty deployment.DeploymentSpec.Template.Spec is empty
	if !reflect.DeepEqual(deployment.PodSpec, api_v1.PodSpec{}) {
		deploymentSpec.Template.Spec = deployment.PodSpec
	}

	// TODO: check if this wasn't set by user, in that case we shouldn't overwrite it
	deploymentSpec.Template.ObjectMeta.Name = deployment.Name

	// TODO: merge with already existing labels and avoid duplication
	deploymentSpec.Template.ObjectMeta.Labels = deployment.Labels

I think it should be in the scope of this PR to abstract that code and reuse it in both places.

@cdrage (Author):

@containscafeine IMO we should add this afterwards, since OpenShift does pod checking a bit differently in this PR.

Could we abstract this and refactor it in a new PR (since we copy and paste most of this between deployment.go, deploymentconfig.go and job.go)?

replicas = 1
}

return &os_deploy_v1.DeploymentConfig{
Collaborator:

I think we don't need to assign most of this; the way it's done in Deployment should suffice, something like:

	return &ext_v1beta1.Deployment{
		ObjectMeta: deployment.ObjectMeta,
		Spec:       deploymentSpec,
	}, nil
}

@cdrage (Author):

@containscafeine DeploymentConfig is different: you have to specify the template, replicas, and selector manually. If you can find a way to do it similarly to the above, let me know!

@kadel (Member), Oct 12, 2017:

what is there now is not right, as it doesn't copy everything, see
#315 (comment)

What might work is if we first copy the whole spec (Spec: deploymentConfig.DeploymentConfigSpec) and then fill in the selector

@kadel (Member), Oct 12, 2017:

what @surajssd did in #315 (comment) looks like a good solution for this

@@ -17,10 +17,11 @@ limitations under the License.
package spec
Collaborator:

+1 on the rename to types.go :)


@cdrage (Collaborator, Author) commented Oct 11, 2017

Hey @containscafeine, implementing the changes ended up with a nil pointer error. Specifically, it's related to adding:

  // We need to error out if both, deployment.PodSpec and deployment.DeploymentSpec are empty
  if deployment.isDeploymentSpecPodSpecEmpty() {
    log.Debug("Both, deployment.PodSpec and deployment.DeploymentSpec are empty, not enough data to create a deployment.")
    return nil, nil
  }

  // We are merging whole DeploymentSpec with PodSpec.
  // This means that someone could specify containers in template.spec and also in top level PodSpec.
  // This stupid check is supposed to make sure that only one of them set.
  // TODO: merge DeploymentSpec.Template.Spec and top level PodSpec
  if deployment.isMultiplePodSpecSpecified() {
    return nil, fmt.Errorf("Pod can't be specified in two places. Use top level PodSpec or template.spec (DeploymentSpec.Template.Spec) not both")
  }

  deploymentSpec := deployment.DeploymentSpec

  // top level PodSpec is not empty, use it for deployment template
  // we already know that if deployment.PodSpec is not empty deployment.DeploymentSpec.Template.Spec is empty
  if !reflect.DeepEqual(deployment.PodSpec, api_v1.PodSpec{}) {
    deploymentSpec.Template.Spec = deployment.PodSpec
  }

  // TODO: check if this wasn't set by user, in that case we shouldn't overwrite it
  deploymentSpec.Template.ObjectMeta.Name = deployment.Name

  // TODO: merge with already existing labels and avoid duplication
  deploymentSpec.Template.ObjectMeta.Labels = deployment.Labels

I'd debug, but I think we should start getting this OpenShift code in and then refactor how we fix / check pods. I believe it's a missing pointer value somewhere within PodSpec that's causing the issue.

@cdrage (Collaborator, Author) commented Oct 11, 2017

@kadel I forget what it was, but any ideas regarding the failing test?

go test -i -race -cover ./cmd/... ./pkg/... ./tests/... .
can't load package: package k8s.io/kubernetes/kubernetes: cannot find package "k8s.io/kubernetes/kubernetes" in any of:
	/home/travis/.gimme/versions/go1.7.linux.amd64/src/k8s.io/kubernetes/kubernetes (from $GOROOT)
	/home/travis/gopath/src/k8s.io/kubernetes/kubernetes (from $GOPATH)
can't load package: package k8s.io/kubernetes/tools/clientcmd: cannot find package "k8s.io/kubernetes/tools/clientcmd" in any of:
	/home/travis/.gimme/versions/go1.7.linux.amd64/src/k8s.io/kubernetes/tools/clientcmd (from $GOROOT)
	/home/travis/gopath/src/k8s.io/kubernetes/tools/clientcmd (from $GOPATH)
make: *** [test-unit-cover] Error 1

@cdrage (Collaborator, Author) commented Oct 11, 2017

Hey @containscafeine

For your three points:

  • I've added examples / docs
  • I agree, we need to refactor it between all three (job, deployment, deploymentconfig)
  • Yup. Tests will be added.

@kadel (Member) commented Oct 12, 2017

@cdrage I'll look at the tests.

I have another issue: it looks like I can't set any of the DeploymentConfig-specific options :-(

kedge:

name: nginx
controller: deploymentconfig

containers:
- image: nginx

replicas: 2
triggers:
  - type: "ConfigChange"
strategy:
  type: Rolling

output:

apiVersion: v1
kind: DeploymentConfig
metadata:
  creationTimestamp: null
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  strategy:
    resources: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
  test: false
  triggers: null

strategy and triggers are empty :-(

@surajssd (Member):

@cdrage without implementing all the checks that @containscafeine asked for, I made some changes in the code and it worked for me without the nil pointer exception, so now you don't need to manually copy the fields from DeploymentConfigSpec:

$ git diff
diff --git a/pkg/spec/deploymentconfig.go b/pkg/spec/deploymentconfig.go
index c601fb5..71fd721 100644
--- a/pkg/spec/deploymentconfig.go
+++ b/pkg/spec/deploymentconfig.go
@@ -136,10 +136,9 @@ func (deploymentConfig *DeploymentConfigSpecMod) Transform() ([]runtime.Object,
 // Created the OpenShift DeploymentConfig controller
 func (deploymentConfig *DeploymentConfigSpecMod) createOpenShiftController() (*os_deploy_v1.DeploymentConfig, error) {
 
-       // Set replicas to 1 if not set (0)
-       replicas := deploymentConfig.Replicas
-       if replicas == 0 {
-               replicas = 1
+       dcSpec := deploymentConfig.DeploymentConfigSpec
+       dcSpec.Template = &kapi.PodTemplateSpec{
+               Spec: deploymentConfig.PodSpec,
        }
 
        return &os_deploy_v1.DeploymentConfig{
@@ -148,15 +147,6 @@ func (deploymentConfig *DeploymentConfigSpecMod) createOpenShiftController() (*o
                        APIVersion: "v1",
                },
                ObjectMeta: deploymentConfig.ObjectMeta,
-               Spec: os_deploy_v1.DeploymentConfigSpec{
-                       Replicas: replicas,
-                       Selector: deploymentConfig.Labels,
-                       Template: &kapi.PodTemplateSpec{
-                               ObjectMeta: metav1.ObjectMeta{
-                                       Labels: deploymentConfig.Labels,
-                               },
-                               Spec: deploymentConfig.PodSpec,
-                       },
-               },
+               Spec:       dcSpec,
        }, nil
 }

this will resolve the issues that @kadel is pointing at as well!

ObjectMeta: deploymentConfig.ObjectMeta,
Spec: os_deploy_v1.DeploymentConfigSpec{
Replicas: replicas,
Selector: deploymentConfig.Labels,
@kadel (Member), Oct 12, 2017:

shouldn't we use something a bit more specific as a selector?
Maybe we could add a new label to the deployment, deploymentconfig: <appname>, and use it together with app: <appname> as the selector.

I'm afraid that someone will create a pod with the app label and it will be wrongly assigned to this DeploymentConfig
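
A sketch of what that more specific selector could look like (dcSelector is a hypothetical helper, not code from this PR):

```go
package spec

// dcSelector pairs the generic app label with a controller-specific
// deploymentconfig label, so a stray pod that only carries app: <name>
// is not wrongly picked up by the DeploymentConfig.
func dcSelector(name string) map[string]string {
	return map[string]string{
		"app":              name,
		"deploymentconfig": name,
	}
}
```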

@kadel (Member) commented Oct 12, 2017

@kadel I forget what it was, but any ideas regarding the failing test?

go test -i -race -cover ./cmd/... ./pkg/... ./tests/... .
can't load package: package k8s.io/kubernetes/kubernetes: cannot find package "k8s.io/kubernetes/kubernetes" in any of:
/home/travis/.gimme/versions/go1.7.linux.amd64/src/k8s.io/kubernetes/kubernetes (from $GOROOT)
/home/travis/gopath/src/k8s.io/kubernetes/kubernetes (from $GOPATH)
can't load package: package k8s.io/kubernetes/tools/clientcmd: cannot find package "k8s.io/kubernetes/tools/clientcmd" in any of:
/home/travis/.gimme/versions/go1.7.linux.amd64/src/k8s.io/kubernetes/tools/clientcmd (from $GOROOT)
/home/travis/gopath/src/k8s.io/kubernetes/tools/clientcmd (from $GOPATH)
make: *** [test-unit-cover] Error 1

The root cause of this is in e2e_test.go: "k8s.io/kubernetes/kubernetes" is an invalid import, created by replacing k8s.io/client-go/kubernetes.
I don't have a solution yet :-(
@ashetty1 do you have any idea? The problem is that we can't use client-go anymore :-(

At least not for structs; using "k8s.io/client-go/tools/clientcmd" MIGHT be ok

--EDIT--
Just found a solution :-)
Just put client-go back into e2e_tests.go; we can still use it there and it should work ;-)
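
A minimal sketch of why that works (buildClient is a hypothetical helper): the e2e tests only need client-go to talk to the cluster, not the API structs that the spec package now takes from k8s.io/kubernetes:

```go
package tests

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// buildClient creates a clientset from a kubeconfig path; this is the
// only kind of client-go usage the e2e tests still need.
func buildClient(kubeconfig string) (*kubernetes.Clientset, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(config)
}
```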

@@ -86,7 +86,7 @@ func (job *JobSpecMod) Transform() ([]runtime.Object, []string, error) {
 	return runtimeObjects, includeResources, nil
 }
 
-func (job *JobSpecMod) CreateK8sController() (*batch_v1.Job, error) {
+func (job *JobSpecMod) createKubernetesController() (*batch_v1.Job, error) {
Member:

Don't forget to rename this in the tests!

@cdrage (Author):

done!

@cdrage force-pushed the add-openshift branch 5 times, most recently from 8e7fbd2 to 18fbdfe on October 12, 2017 17:46
Adds DeploymentConfig support to Kedge by being able to specify a
controller like so:

```yaml
controller: deploymentconfig
```
@cdrage (Collaborator, Author) commented Oct 12, 2017

Added test(s); up for another round of reviews / testing it out.

@surajssd (Member):

This looks like a good first cut to me!

@surajssd (Member):

What I feel is that we should get this thing in, and file issues for whatever is missing in this PR!

// kedgeSpec: io.kedge.DeploymentConfigSpecMod
type DeploymentConfigSpecMod struct {
ControllerFields `json:",inline"`
os_deploy_v1.DeploymentConfigSpec `json:",inline"`
Collaborator:

ping @surajssd, can you help with the comment for this field, or create an issue for it?

@concaf merged commit a61cf06 into kedgeproject:master on Oct 13, 2017
@concaf (Collaborator) commented Oct 13, 2017

Super work, @cdrage 🎉

@kadel (Member) commented Oct 13, 2017

This was merged too quickly :-(

see: #352

Has anyone actually tested this before merging?
