
failed to find trigger in the lighthouse configuration #7589

Open
manuelwallrapp opened this issue Nov 4, 2020 · 20 comments

Comments

@manuelwallrapp

Summary

How do I create the trigger in the lighthouse configuration?

When I create the quickstart project, everything works fine until it waits to find the trigger for my Git repository:

created pull request https://github.com/manuelwallrapp/jx3-cluster-repo/pull/3 on the development git repository https://github.com/myrepoproject/jx3-cluster-repo.git
waiting up to 20m0s for a trigger to be added to the lighthouse configuration in ConfigMap config in namespace jx for repository: myrepoproject/thethird-try
error: failed to wait for repository to be setup in lighthouse: failed to find trigger in the lighthouse configuration in ConfigMap config in namespace jx for repository: myrepoproject/thethird-try within 20m0s
error: failed to wait for the pipeline to be setup myrepoproject/thethird-try: failed to run 'jx pipeline wait --owner meowner --repo thethird-try' command in directory '', output: ''
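What `jx` is waiting for here is a trigger entry for the repository to appear in the lighthouse `config` ConfigMap in the `jx` namespace. A minimal sketch of that check, using an illustrative config snippet (the real entry is written by the boot job; the field names and values below are assumptions, not the exact output of any given Jenkins X version):

```shell
# Illustrative sketch of what a trigger entry in the ConfigMap's config.yaml
# might look like once the boot job has processed the repository:
cat > /tmp/lighthouse-config-sample.yaml <<'EOF'
presubmits:
  myrepoproject/thethird-try:
  - name: pullrequest
    source_path: .lighthouse/jenkins-x/triggers.yaml
postsubmits:
  myrepoproject/thethird-try:
  - name: release
    source_path: .lighthouse/jenkins-x/triggers.yaml
EOF
# The same grep can be run against the real ConfigMap dump
# (kubectl get configmap config -n jx -o yaml):
grep -c 'myrepoproject/thethird-try' /tmp/lighthouse-config-sample.yaml
```

If the grep returns 0 against the real ConfigMap after the 20-minute wait, the boot job never processed the repository, and `jx admin log` is the place to look next.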

Steps to reproduce the behavior

I created a Spring Boot application and merged the pull request; then it waits for the trigger to be configured, and I don't know what to do.

Expected behavior

The trigger should be created automatically.

Actual behavior

After 20 minutes, the execution of "jx create spring" aborts because of the missing trigger.

Jx version

version: 3.0.680

Kubernetes cluster

I use OpenShift CRC version 4.5.14.


Kubectl version

The output of kubectl version --client is:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.3+5302882", GitCommit:"5302882", GitTreeState:"clean", BuildDate:"2020-09-27T22:44:09Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

Operating system / Environment

macOS 10.15.7

@rminchev1

Hey guys, any update/thoughts on this one?

@puzzled-lazlo

Same problem here. Any help?

@dlylei

dlylei commented Dec 11, 2020

Hi there,
the same problem here after I upgraded to 3.x and imported a new project.
I checked the hooks in GitHub and it says "We couldn’t deliver this payload: Service Timeout".

@hlechuga

hlechuga commented Dec 23, 2020

Having the same issue. I'm using Minikube.

you can watch the boot job to update the configuration via: jx admin log
for more information on how this works see: https://jenkins-x.io/docs/v3/about/how-it-works/#importing--creating-quickstarts


WARNING: It looks like the boot job failed to setup this project.
You can view the log via: jx admin log
error: failed to wait for repository to be setup in lighthouse: failed to find trigger in the lighthouse configuration in ConfigMap config in namespace jx for repository: hlechuga/jx-py-test3 within 20m0s
error: failed to wait for the pipeline to be setup hlechuga/jx-py-test3: failed to run 'jx pipeline wait --owner hlechuga --repo jx-py-test3' command in directory '', output: ''
jx get activities                                                                  
STEP                        STARTED AGO DURATION STATUS
hlechuga/jx3-test/master #1      42m23s    7m52s Failed
  from build pack                34m55s      24s Failed
    Git Clone                    34m55s       3s Succeeded
    Admin Log                    34m51s      20s Failed
hlechuga/jx3-test/PR-3 #1        42m30s    7m21s Succeeded
  from build pack                40m29s    5m20s Succeeded
    Git Clone                    40m29s       3s Succeeded
    Git Merge                    40m26s       5s Succeeded
    Make Pr                      40m20s    5m11s Succeeded
oc logs release-6wwtb-from-build-pack-9rjpc-pod-sqt6x -c step-admin-log           
viewing the git operator boot job log for commit sha: 87bff456e87c3dcbcc34e8133ff5018497ecb72f
Installing plugin jx-admin version 0.0.143 for command jx admin from https://github.com/jenkins-x/jx-admin/releases/download/v0.0.143/jx-admin-linux-amd64.tar.gz into /home/.jx3/plugins/bin
error: no boot Jobs to view. Try add --active to wait for the next boot job

@clklachu

Hello,

Exactly the same issue as others when using Minikube, after upgrading to 3.x.

@clklachu

clklachu commented Jan 7, 2021

Hello, could you please provide some directions? I have the same issue on AWS EKS as well.

@jaimemasson

I had the same issue after following the install guide for Minikube. I finally realized the git commit command didn't actually commit all the files in the repo. After I committed the .lighthouse files that were missed originally, it finally worked. It still gives the waiting message if you rerun the command that errored, but it does build the app and deploy it.

@pbriet

pbriet commented Mar 1, 2021

Same issue here with Gitlab and OKD

@ginosubscriptions

Same here... with minikube...

@thecooldrop

Same here with Minikube. It seems like the guide is incomplete.

@jaydattd

jaydattd commented Apr 24, 2021

I had the same issue, and it got resolved after I did the steps below:

  • Create namespace "kuberhealthy" in the desired Kubernetes cluster/context:
    kubectl create namespace kuberhealthy

  • Set your current namespace to "kuberhealthy":
    kubectl config set-context --current --namespace=kuberhealthy

  • Add the kuberhealthy repo to Helm:
    helm repo add kuberhealthy https://comcast.github.io/kuberhealthy/helm-repos

  • Install kuberhealthy:
    helm install kuberhealthy kuberhealthy/kuberhealthy

If you check jx admin log, you will notice that it expects the kuberhealthy namespace to be there.

@ginosubscriptions

ginosubscriptions commented Apr 25, 2021

I got through with Minikube now. No more errors anywhere, and jx create spring runs to a success message.
The solution was that I had to "cheat" with my dynamic DNS: I modified svc/hook to work as a NodePort on a specific port and then NAT-mapped that port to port 80 of my host/physical machine. This both fixed the webhook values and allowed the webhook to be used.
So there seems to be no need for kuberhealthy to run through the entire Spring Boot project creation. However, although it all completes correctly (no errors), nothing actually gets created in the Minikube cluster. The GitHub repository for the Spring Boot project gets created, but nothing is returned by jx get applications. With jx get build logs I can only see a boot job, but no build jobs... There is still something off with Minikube. One difference from the GKE/GCP setup is that there is no Nexus or Helm repository (and no ingresses either). Not sure if that is actually normal.

Please do let me know if I should move this to another discussion, as this ultimately solves my "not being able to run a project creation", but still doesn't solve creating a project using JX with Minikube.
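A sketch of the NodePort workaround described above, assuming the lighthouse hook service is named `hook` in namespace `jx`; the node port 30080 is an arbitrary illustrative choice, not a recommendation:

```shell
# Change the hook service from ClusterIP to NodePort so it is reachable from
# outside the cluster (strategic merge patch; the ports list merges on "port"):
kubectl patch svc hook -n jx -p \
  '{"spec":{"type":"NodePort","ports":[{"port":80,"nodePort":30080}]}}'
# Then NAT/forward the chosen node port to port 80 of the host machine so the
# webhook URL registered in the git provider resolves to it.
```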

@alisson-gomesc

For Minikube, don't forget to keep this running:

$ kubectl port-forward svc/hook 8080:80

This works for me.

@danielmoeller

Hello all,

Any update on this?
I am facing the same issue with an on-premise setup (git-operator-based).

Webhooks seem to run fine, same for kuberhealthy, and jx admin does not show obvious issues either. Any advice on how to debug this issue is highly appreciated.

The only thing that seems suspicious to me is the entry

target_url: http://lighthouse-jx.<DOMAIN>/merge/status in ConfigMap config. There is no matching ingress definition for this URL.
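One way to cross-check that suspicion, assuming the standard jx namespace (resource names may differ per setup):

```shell
# List ingresses in the jx namespace and compare their hosts against the
# target_url stored in the lighthouse ConfigMap:
kubectl get ingress -n jx
kubectl get configmap config -n jx -o yaml | grep target_url
# If the lighthouse host in target_url has no matching ingress, keeper's
# status links will be dead; whether that also blocks trigger creation is a
# separate question.
```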

Thank you in advance

@GaeGreco

Same problem. Any help?

@ankitm123
Member

ankitm123 commented Oct 26, 2021

A few things to check:

  • Did it create a PR in the cluster git repo? (The repo that was created from jx3-eks-vault or jx3-gke-vault etc ...)
  • Did the PR create a job (check by running jx admin log)? If not, then few more things to check:
    • Does webhook exist for the cluster git repo?
    • What is the output of lighthouse (keeper and webhook) pods in the jx namespace? (the pods have name of the form lighthouse-keeper-XXXX and lighthouse-webhooks-XXXX)
  • Did it create an entry in the configmap config in jx namespace?
  • Did it create a webhook in the imported repo?

Also, did the name of the repository have an underscore (_) in it? That issue was fixed recently, but not released yet: https://github.com/jenkins-x-plugins/jx-pipeline/releases/tag/v0.0.159
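The checks in the list above roughly correspond to these commands (deployment names match the pod names quoted above; `<owner>/<repo>` is a placeholder for the imported repository):

```shell
# Boot job log - did the PR in the cluster repo trigger a job?
jx admin log

# Lighthouse pods in the jx namespace - webhook delivery and keeper status:
kubectl logs -n jx deploy/lighthouse-webhooks --tail=100
kubectl logs -n jx deploy/lighthouse-keeper --tail=100

# Trigger entry for the repository in the configmap:
kubectl get configmap config -n jx -o yaml | grep '<owner>/<repo>'
```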

@GaeGreco

Yes.
I followed the steps here https://jenkins-x.io/v3/admin/platforms/minikube/ but it doesn't work.

@GaeGreco

OK, now it's all working, but I have a new error.

WARNING: no $GIT_SECRET_MOUNT_PATH environment variable set
releasing chart nodo
error: failed to create chart release in dir charts/nodo: failed to add remote repo: failed to run 'helm repo add --username admin --password ***** release-repo http://bucketrepo.jx.svc.cluster.local/bucketrepo/charts/' command in directory 'charts/nodo', output: 'Error: looks like "http://bucketrepo.jx.svc.cluster.local/bucketrepo/charts/" is not a valid chart repository or cannot be reached: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type repo.IndexFile'

Pipeline failed on stage 'from-build-pack' : container 'step-promote-helm-release'. The execution of the pipeline has stopped.
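The unmarshal error above generally means the URL returned something other than a valid chart-repo index.yaml (for example an error page served as a plain string). A sketch of a check from inside the cluster (the pod name and curl image are illustrative choices):

```shell
# Fetch the index that helm repo add is trying to parse; a healthy chart repo
# returns YAML beginning with "apiVersion: v1" and an "entries:" map:
kubectl run curl-check -n jx --rm -i --restart=Never --image=curlimages/curl -- \
  curl -s http://bucketrepo.jx.svc.cluster.local/bucketrepo/charts/index.yaml
```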

@FranLucchini

Hi,

I have the same issue in an EKS cluster. I did a quickstart project, and everything worked correctly. But when I tried to import a repository, the PR was created in the cluster repo and finished without problems; however, I got the same error. I checked the imported repo on GitHub, and it did not create a webhook, unlike the quickstart project.

I should mention that I ran jx project import while not on the main branch but on a secondary branch. If that is the issue, what can I do to revert it? How should I trigger the creation of the webhook?

I was hoping to get some guidance. Also, I went through the checks offered by @ankitm123, and everything was OK except the final webhook of the imported repository.

@spaily

spaily commented Oct 26, 2022

The release pipeline fails with the same error:

step-promote-helm-release
WARNING: no $GIT_SECRET_MOUNT_PATH environment variable set
step-promote-helm-release
releasing chart jx3-nodejs
step-promote-helm-release
error: failed to create chart release in dir charts/jx3-nodejs: failed to add remote repo: failed to run 'helm repo add --username admin --password ***** release-repo http://bucketrepo.jx.svc.cluster.local/bucketrepo/charts' command in directory 'charts/jx3-nodejs', output: 'Error: looks like "http://bucketrepo.jx.svc.cluster.local/bucketrepo/charts" is not a valid chart repository or cannot be reached: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type repo.IndexFile'

[Screenshot, 2022-10-26: pipeline run failing at the step-promote-helm-release step]

Is this resolved?
