
Add http -> https redirect to load balancer #5

Draft · shreve wants to merge 9 commits into main

Conversation

@shreve (Member) commented Aug 23, 2021

Addresses https://github.com/umsi-mads/main/issues/267

Still needs to be tested.

@damianavila (Collaborator)

> Still needs to be tested.

👍, see my comment in the other PR about testing this one before merging both: https://github.com/umsi-mads/mads-software/pull/134#pullrequestreview-737175196

Add a cert and a DNS record to the stack to enable https. This mechanism
requires the addition of a domain name and ownership of the base domain
in Route 53.
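
For context, a hypothetical manual equivalent of that mechanism using the AWS CLI (not the actual CloudFormation changes in this PR; the subdomain is a placeholder):

# Request an ACM certificate for the hub's domain, validated by a DNS record
# in the Route 53 hosted zone for the base domain.
aws acm request-certificate \
  --domain-name jupyter-example.michiganmads.org \
  --validation-method DNS

# Print the CNAME record ACM wants created in the hosted zone; once that record
# exists, the certificate is issued and can be attached to the load balancer.
aws acm describe-certificate --certificate-arn <arn-from-previous-output>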
@shreve (Member, Author) commented Sep 1, 2021

Testing of this is currently blocked by an issue with either helm or the jupyterhub image puller. I can't get the hub started.

However, I do know the https redirect works.

$ curl -I http://jupyter-hub-shreve-test.michiganmads.org/
HTTP/1.1 301 Moved Permanently
Server: awselb/2.0
Date: Wed, 01 Sep 2021 15:26:35 GMT
Content-Type: text/html
Content-Length: 134
Connection: keep-alive
Location: https://jupyter-hub-shreve-test.michiganmads.org:443/
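
A complementary check that wasn't run here (hypothetical): inspect the certificate served on port 443, which should succeed even with the hub down, assuming TLS terminates at the load balancer.

$ openssl s_client -connect jupyter-hub-shreve-test.michiganmads.org:443 \
    -servername jupyter-hub-shreve-test.michiganmads.org </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates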

@damianavila (Collaborator)

> Testing of this is currently blocked by an issue with either helm or the jupyterhub image puller. I can't get the hub started.

Do you have more details/logs about this... how are you trying to debug it?

> However, I do know the https redirect works.

Nice!

@shreve (Member, Author) commented Sep 2, 2021

This is all the logging we've got using the --debug flag. I'm certain this isn't related to my changes because this also happens on the current master.

I tried changing the version and that didn't help. Other than that, there's not really a good jumping-off point. I'm trying to understand what the hook-image-awaiter job entails.

$ helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version 0.8.2 --values helm_config.yaml --debug
history.go:56: [debug] getting history for release jhub
upgrade.go:123: [debug] preparing upgrade for jhub
upgrade.go:131: [debug] performing update for jhub
upgrade.go:303: [debug] creating upgraded release for jhub
client.go:290: [debug] Starting delete for "hook-image-puller" DaemonSet
client.go:128: [debug] creating 1 resource(s)
client.go:290: [debug] Starting delete for "hook-image-awaiter" ServiceAccount
client.go:128: [debug] creating 1 resource(s)
client.go:290: [debug] Starting delete for "hook-image-awaiter" Role
client.go:128: [debug] creating 1 resource(s)
client.go:290: [debug] Starting delete for "hook-image-awaiter" RoleBinding
client.go:128: [debug] creating 1 resource(s)
client.go:290: [debug] Starting delete for "hook-image-awaiter" Job
client.go:128: [debug] creating 1 resource(s)
client.go:519: [debug] Watching for changes to Job hook-image-awaiter with timeout of 5m0s
client.go:547: [debug] Add/Modify event for hook-image-awaiter: ADDED
client.go:586: [debug] hook-image-awaiter: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:547: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:586: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
Error: failed pre-install: timed out waiting for the condition
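
One hypothetical way to dig further (assuming the jhub namespace from the helm command above) is to skip helm and inspect the hook resources directly:

$ kubectl --namespace jhub get pods
$ kubectl --namespace jhub describe job hook-image-awaiter
$ kubectl --namespace jhub logs job/hook-image-awaiter
$ kubectl --namespace jhub describe daemonset hook-image-puller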

@shreve (Member, Author) commented Sep 2, 2021

The hook-image-awaiter pod won't even start because the only node in the cluster isn't ready:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
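
Hypothetical commands to confirm that from the cluster side (node name is a placeholder); the node's Ready condition should repeat the network-plugin message, and the aws-node (CNI) logs may show why the plugin isn't coming up:

$ kubectl get nodes
$ kubectl describe node <node-name>
$ kubectl --namespace kube-system get daemonset aws-node
$ kubectl --namespace kube-system logs -l k8s-app=aws-node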

@shreve (Member, Author) commented Sep 2, 2021

According to a popular answer in this thread, you need to use the eksctl tool to update some internal node configuration: aws/amazon-vpc-cni-k8s#284
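
A hypothetical way to check which versions the cluster components are actually running before updating anything (the eksctl update commands themselves appear further down):

$ kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni
$ kubectl describe daemonset kube-proxy --namespace kube-system | grep Image
$ kubectl describe deployment coredns --namespace kube-system | grep Image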

@damianavila (Collaborator)

So it seems there is a mismatch between some Kube components...

> I noticed that the aws-node pod was still using the 1.6.xx CNI version, when another cluster I had that was also running kubernetes v1.16 was running CNI v1.7.xx.
> So after further troubleshooting I had to manually upgrade the CNI addon on my cluster.

According to some docs, you might enforce the CNI addon version in the CloudFormation template, maybe?

# Upgrade some internal components of the cluster
eksctl utils update-kube-proxy --cluster "$EKS_NAME" --approve
eksctl utils update-aws-node --cluster "$EKS_NAME" --approve
eksctl utils update-coredns --cluster "$EKS_NAME" --approve
@damianavila (Collaborator) commented on the lines above:

Do you really need to update these components before installing the addon?
Since you are installing the latest Kubernetes (1.19, right?), I would expect everything to be up to date...
