Support native deployment #51
Conversation
@@ -1,133 +0,0 @@
/*
Why did you delete the test case? I think it can help us verify the spec and check whether the result of reconciling is correct.
"github.com/megaease/easemesh/mesh-operator/pkg/syncer" | ||
"github.com/pkg/errors" |
Need to follow the convention of the import section?
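For reference, the usual Go convention is to group imports into standard library, third-party, and project-local blocks. Applied to the imports visible around this diff it could look like the sketch below; the "context" and k8s.io imports are assumptions inferred from the surrounding code, not copied from the PR.

```go
import (
	// standard library
	"context"

	// third-party dependencies
	"github.com/pkg/errors"
	v1 "k8s.io/api/apps/v1"

	// project-local packages
	"github.com/megaease/easemesh/mesh-operator/pkg/syncer"
)
```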
r.Log.WithValues("DeploymentID", req.NamespacedName)

deploy := &v1.Deployment{}
err := r.Client.Get(context.TODO(), req.NamespacedName, deploy)
The ctx is passed as an argument; prefer using it.
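A minimal sketch of the suggestion, assuming the controller-runtime Reconcile signature that receives ctx; the reconciler type name here is made up, not the PR's actual type.

```go
package controllers

import (
	"context"

	"github.com/go-logr/logr"
	v1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ExampleReconciler stands in for the reconciler in this PR.
type ExampleReconciler struct {
	client.Client
	Log logr.Logger
}

func (r *ExampleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := r.Log.WithValues("DeploymentID", req.NamespacedName)

	deploy := &v1.Deployment{}
	// Thread the ctx from the Reconcile argument instead of context.TODO().
	if err := r.Client.Get(ctx, req.NamespacedName, deploy); err != nil {
		// A missing object usually isn't worth retrying.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	log.Info("fetched deployment")
	return ctrl.Result{}, nil
}
```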
buff, err := yaml.Marshal(deploy)
if err != nil {
	t.Fatalf("mashal failed: %v", err)
}
Should verify the sidecar was injected correctly
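A sketch of the kind of assertion the reviewer asks for; the helper and the expected sidecar container name are assumptions for illustration, not what the operator necessarily injects.

```go
package controllers

import (
	"testing"

	v1 "k8s.io/api/apps/v1"
)

// assertSidecarInjected fails the test unless the mutated deployment
// contains a container with the expected sidecar name.
func assertSidecarInjected(t *testing.T, deploy *v1.Deployment, sidecarName string) {
	t.Helper()
	for _, c := range deploy.Spec.Template.Spec.Containers {
		if c.Name == sidecarName {
			return
		}
	}
	t.Fatalf("sidecar container %q not injected; containers: %+v",
		sidecarName, deploy.Spec.Template.Spec.Containers)
}
```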
// https://github.com/kubernetes/klog/issues/253
// https://github.com/kubernetes/klog/pull/242
// https://github.com/kubernetes-sigs/controller-runtime/issues/1538
deploy.Spec.Template.ObjectMeta.Labels = sourceDeploySpec.Selector.MatchLabels
It would be good to refactor this to reduce the number of init containers.
Updated as commented, along with ImagePullPolicy assigned IfNotPresent by default.
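For illustration, defaulting of the kind described here can be as small as the helper below; the function name is made up, not the PR's actual code.

```go
package controllers

import corev1 "k8s.io/api/core/v1"

// defaultPullPolicy fills in IfNotPresent when an injected container
// does not set an explicit image pull policy.
func defaultPullPolicy(c *corev1.Container) {
	if c.ImagePullPolicy == "" {
		c.ImagePullPolicy = corev1.PullIfNotPresent
	}
}
```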
There are some issues:

Scenario 0:
- I had a deployment without any mesh annotations, and it was applied in K8s.
- I modified the deployment to add some mesh annotations and applied it again.

Result:
The original deployment and the new deployment (injected with the sidecar) co-exist. (Is that expected?)

Scenario 1:
- No other deployments exist in the current namespace.
- Deploy a deployment with a mesh annotation.

Result:
An extra pod **vets-service-5956bc7b4c-hlpkz** was created and immediately deleted, which is unexpected. We should avoid the original (uninjected) deployment being created. (Could we?)
zhaokun-tm1801: emctl git:(ly-operator) ▷ k get po -n spring-petclinic -w
NAME READY STATUS RESTARTS AGE
vets-service-5956bc7b4c-hlpkz 0/1 Pending 0 0s
vets-service-5956bc7b4c-hlpkz 0/1 Pending 0 0s
vets-service-8c64999f9-jk7fs 0/2 Pending 0 0s
vets-service-5956bc7b4c-hlpkz 0/1 ContainerCreating 0 0s
vets-service-8c64999f9-jk7fs 0/2 Pending 0 0s
vets-service-8c64999f9-jk7fs 0/2 Init:0/1 0 0s
vets-service-5956bc7b4c-hlpkz 0/1 ContainerCreating 0 1s
vets-service-8c64999f9-jk7fs 0/2 Init:0/1 0 1s
vets-service-5956bc7b4c-hlpkz 1/1 Running 0 2s
vets-service-8c64999f9-jk7fs 0/2 PodInitializing 0 2s
vets-service-8c64999f9-jk7fs 2/2 Running 0 3s
vets-service-5956bc7b4c-hlpkz 1/1 Terminating 0 3s
Scenario 2:
Repeatedly applying a deployment with mesh annotations, I got the following error output:
The Deployment "vets-service" is invalid:
* spec.template.spec.containers[0].volumeMounts[0].name: Not found: "agent-volume"
* spec.template.spec.containers[1].volumeMounts[0].name: Not found: "sidecar-volume"
* spec.template.spec.initContainers[0].volumeMounts[0].name: Not found: "agent-volume"
* spec.template.spec.initContainers[0].volumeMounts[1].name: Not found: "sidecar-volume"
deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: spring-petclinic
  name: vets-service
  annotations:
    mesh.megaease.com/enable: "true"
    mesh.megaease.com/service-name: "vets-service"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vets-service
  template:
    metadata:
      labels:
        app: vets-service
    spec:
      containers:
        - image: megaease/spring-petclinic-vets-service:latest
          name: vets-service
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 10"]
          command: ["/bin/sh"]
          args: ["-c", "java -server -Xmx1024m -Xms1024m -Dspring.profiles.active=sit -Djava.security.egd=file:/dev/./urandom org.springframework.boot.loader.JarLauncher"]
          resources:
            limits:
              cpu: 2000m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 256Mi
          ports:
            - containerPort: 8080
      volumes:
      restartPolicy: Always
service := &meshService{
	Name:   deploy.Name,
	Labels: deploy.Labels,
All of the deployment's labels are registered in the registry, which could interfere with traffic splitting: traffic splitting only cares about the labels dedicated to selecting the next service instance, so too many unrelated labels may lead to unexpected behavior.
Could we organize the canary labels in the mesh annotations?
You don't seem to delete the meshdeployment with the same name.
From a quick bit of research, it seems we don't have an interface to block the system-level controller from deploying.
Sure, the example could be:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: spring-petclinic
  name: vets-service
  annotations:
    mesh.megaease.com/enable: "true"
    mesh.megaease.com/service-name: "vets-service"
    mesh.megaease.com/service-labels: "version=beta,env=production"

It seems harder to write and read than structured labels, but it's acceptable to me.
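If the annotation form above were adopted, parsing it back into a label map is straightforward; the helper below is a hypothetical sketch, not part of the PR.

```go
package controllers

import "strings"

// parseServiceLabels turns "version=beta,env=production" into
// map[string]string{"version": "beta", "env": "production"}.
func parseServiceLabels(v string) map[string]string {
	labels := map[string]string{}
	for _, pair := range strings.Split(v, ",") {
		kv := strings.SplitN(strings.TrimSpace(pair), "=", 2)
		if len(kv) == 2 && kv[0] != "" {
			labels[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
		}
	}
	return labels
}
```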
FYI: Other mesh products leverage Dynamic Admission Control to modify objects sent to the API server to enforce custom defaults.
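For context, a minimal mutating admission handler with controller-runtime looks roughly like the sketch below; the type name, webhook path, and the elided injection step are illustrative rather than the PR's actual code, and details such as the decoder type vary by controller-runtime version.

```go
package webhooks

import (
	"context"
	"encoding/json"
	"net/http"

	v1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/webhook"
	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

// deploymentMutator rewrites Deployments on their way to the API server.
type deploymentMutator struct {
	decoder *admission.Decoder
}

func (m *deploymentMutator) Handle(ctx context.Context, req admission.Request) admission.Response {
	deploy := &v1.Deployment{}
	if err := m.decoder.Decode(req, deploy); err != nil {
		return admission.Errored(http.StatusBadRequest, err)
	}

	// ... inject the sidecar / init container into deploy.Spec.Template here ...

	patched, err := json.Marshal(deploy)
	if err != nil {
		return admission.Errored(http.StatusInternalServerError, err)
	}
	// Return a JSON patch computed against the original object.
	return admission.PatchResponseFromRaw(req.Object.Raw, patched)
}

// register wires the handler into the manager's webhook server.
func register(mgr ctrl.Manager, decoder *admission.Decoder) {
	mgr.GetWebhookServer().Register("/mutate-deployment",
		&webhook.Admission{Handler: &deploymentMutator{decoder: decoder}})
}
```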
Updated with Dynamic Admission Control.
return ignoreResp(req)
}

if req.Kind.Kind != "Deployment" {
Overall, the implementation looks good to me. It would be better if we mutated Pods instead of Deployments, which means we could support Deployment, StatefulSet, and DaemonSet at the same time in one implementation.
There are some conflicts that should be resolved.
Conflicts fixed. I think it's cool to support more objects, but this PR is big enough; I'd like to support them in my next contribution.
Detail
Actually, this is also a refactoring alongside implementing the new feature:
There is still some work left to complete, parts of which are marked with comments, and:
After those, I found more trivial or significant things that need to be improved, which will be opened as issues in the next contribution.
Reference
This lists the old and new specs of the deployment after injecting the sidecar; the identical/unimportant parts have been omitted. We can see that the two init containers have been merged into one, and the volumes and volumeMounts have become more consistent. The configuration of K8s objects is complex enough that it's very important to manage that complexity carefully.
old-vets-service.yaml:
new-vets-service.yaml:
The diff of them: