Default k8s services domains are hardcoded in trigger, can't specify it in config #22
Comments
Hi @mgolovatiy-atconsulting-ru, thanks for reporting the issue. You mention that the cluster domain has been changed in your case. What would be the correct URL for your function? (Rather than …)

`svc.cluster.local` should be `k8s.test`, the domain that is specified as … Or maybe the domain should be queried via the k8s API; I don't know if that is possible. If I knew Go, I would have made a PR as soon as I noticed this.

In theory the … I see that parameter available in the kubelet ConfigMap present in the … Marking this up for grabs since I don't have the bandwidth to work on it right now. Happy to help anyone who wants to give this a try.
@andresmgot I suggest a solution without introducing an environment variable:
Seems to be pretty simple.
I don't have a Unix machine, so I can't test it well; I can't even run the Makefile. It will need code review given my zero Go experience.
Comments are in the PR. I think it's easier to leave this code in the Kafka trigger for the moment (no need to move it to kubeless core).
I couldn't find which repo the new image was pushed to; I'd like to use it in my test environment. Can I use some snapshot, or do I have to wait for a stable version?
I have a working environment, so I have built the image for you; you can use it as …
Thank you
My solution doesn't work; I get the same error and don't understand why. This question seems to have been raised long before: vmware-archive/kubeless#832
Sorry to hear that.
Well, it's not related to a specific function or trigger but to the cluster. I think this should be a flag for the trigger controller, just as was started in vmware-archive/kubeless#857. I think you are close with your implementation; you just need to use a command flag instead of trying to auto-resolve the domain.
Closed by mistake.
Maybe PR vmware-archive/kubeless#857 is good enough?
I'd like to test a kubeless image with the PR #857 changes in my environment.
#857 is not syntactically correct (it doesn't compile), so it's not usable as it is.
Is this a BUG REPORT or FEATURE REQUEST?:
bug
What happened:
After a Kafka message is put into the queue, the kafka-trigger-controller pod shows an error in its logs.
What you expected to happen:
The trigger sends the message to the function pod
nodejs-kafka-consumer-sample
, and the logged event can be seen in its logs. Maybe this domain should be read from a ConfigMap (kubeless-config?).
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
kubelet runs with the
--cluster-domain=k8s.test
option. The sources have a hardcoded literal containing the domain:
event_sender.go:57
Seems to be a bug.
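The shape of the problem can be sketched like this. The function names below are illustrative, not the actual code at event_sender.go:57, and the exact URL format kubeless uses may differ; the point is moving the domain from a string literal to an injected parameter.

```go
package main

import "fmt"

// Before (sketch): the service FQDN suffix is a string literal, so clusters
// started with kubelet --cluster-domain=k8s.test resolve the wrong name.
func functionHostHardcoded(name, ns string) string {
	return fmt.Sprintf("%s.%s.svc.cluster.local", name, ns)
}

// After (sketch): the domain is injected (via a flag, env var, or ConfigMap),
// defaulting to the standard "cluster.local".
func functionHost(name, ns, clusterDomain string) string {
	return fmt.Sprintf("%s.%s.svc.%s", name, ns, clusterDomain)
}

func main() {
	fmt.Println(functionHostHardcoded("nodejs-kafka-consumer-sample", "default"))
	fmt.Println(functionHost("nodejs-kafka-consumer-sample", "default", "k8s.test"))
}
```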
Environment: