After migrating to new cluster with the same config, auto triggers stopped working #6505
Comments
So this error is typically what we would see when the SSL handshake fails because clouddriver wasn't able to validate your certificate with the certificate authority. What's strange is that you have
The part of the code that you're hitting in 1.26.x is
What version was your previously working cluster? If I remember correctly,
I added it to the registry settings. Still getting the same error.
The previous cluster was 1.11; it's not working on 1.19.
This issue hasn't been updated in 45 days, so we are tagging it as 'stale'. If you want to remove this label, comment:
This issue is tagged as 'stale' and hasn't been updated in 45 days, so we are tagging it as 'to-be-closed'. It will be closed in 45 days unless updates are made. If you want to remove this label, comment:
This issue is tagged as 'to-be-closed' and hasn't been updated in 45 days, so we are closing it. You can always reopen this issue if needed.
Issue Summary:
After migrating our Spinnaker deployment to a new cluster, auto triggers stopped working. The new environment is Pivotal Kubernetes Service; the old one was vanilla Kubernetes. The same config worked on the old deployment, but in the new environment we somehow get the error below:
2021-08-10 07:58:21.461 ERROR 1 --- [lTaskScheduler2] c.n.s.c.d.r.a.v.c.DockerRegistryClient : Error authenticating with registry https://our_private_registry_address_with_signed_certificate, for request 'v2 version check': Authentication failed: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
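The PKIX error means the JVM running clouddriver has no trust path to the CA that signed the registry's certificate. The following self-contained sketch demonstrates the underlying mechanism; all names in it (demo-ca, ca.pem) are made up for the demo:

```shell
# Generate a self-signed CA certificate, standing in for the registry's CA:
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -days 1 -subj "/CN=demo-ca"

# Without the CA in the trust store, verification fails -- this is the
# shell-level analogue of clouddriver's PKIX "unable to find valid
# certification path" error:
openssl verify ca.pem || echo "untrusted, as expected"

# Once the CA is supplied as trusted, verification succeeds:
openssl verify -CAfile ca.pem ca.pem
```

To see which chain your registry actually presents, `openssl s_client -connect <registry-host>:443 -showcerts` shows the CA that clouddriver's Java truststore would need to contain for validation to succeed.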
Cloud Provider(s):
Kubernetes
Environment:
Spinnaker version: 1.26.6
Installed via halyard
One of my registry configs:
requiredGroupMembership: []
permissions: {}
address: https://some_fqwd_to_hostip
username: admin
password: test123.
email: fake.email@spinnaker.io
cacheIntervalSeconds: 30
clientTimeoutMillis: 60000
cacheThreads: 1
paginateSize: 100
sortTagsByDate: false
trackDigests: false
insecureRegistry: true
repositories:
- gg/repo1
- gg/repo2
- gg/repo3
- gg/repo4
- gg/repo5
- gg/repo6
- gg/repo7
- gg/repo8
- gg/repo9
- gg/repo10
- gg/repo11
- gg/repo12
- gg/repo13
- gg/repo14
- gg/repo15
- gg/repo16
- gg/repo17
- gg/repo18
- gg/repo19
- gg/repo20
- gg/repo21
- gg/repo22
- gg/repo23
- gg/repo24
- gg/repo25
- gg/repo26
- gg/repo27
- gg/repo28
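For reference, a registry account like the one above can be added through halyard. The account name and repo list below are placeholders, and the exact flags may differ slightly between hal versions:

```shell
# Hypothetical values mirroring the config above; adjust for your setup.
hal config provider docker-registry account add my-test-registry \
  --address https://some_fqwd_to_hostip \
  --username admin \
  --password \
  --repositories gg/repo1 gg/repo2

# Redeploy so clouddriver picks up the new account:
hal deploy apply
```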
Description:
To understand the problem, I created another registry account via a hal command with a single repository, and created a pipeline with an auto trigger. When I pushed a new image to the test repo, the trigger worked. The settings are indeed the same as the other pipelines.
Then I added 28 repos, and it worked again. Then I added 2 new registry accounts, foo-new and foo-b-new, with the same config and the same repositories, changed the auto-triggered pipelines to use the new ones, and the auto trigger didn't work.
The expected behavior is that they should also work.
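Since the trigger depends on clouddriver being able to poll the registry, the same checks it performs can be run by hand. The address and username below are the placeholders from the config above, so this is a diagnostic sketch to run against your own registry rather than something runnable as-is:

```shell
# Hypothetical placeholders; substitute your registry address and user.
REGISTRY=https://some_fqwd_to_hostip

# The same "v2 version check" the error message refers to; if the TLS
# handshake fails here too, the problem is the certificate chain, not
# Spinnaker's trigger logic:
curl -v -u admin "$REGISTRY/v2/" -o /dev/null

# Confirm the newly added repositories are actually listed by the registry:
curl -s -u admin "$REGISTRY/v2/_catalog"
```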
Steps to Reproduce:
We installed Spinnaker on another cluster with the same external Redis and MinIO configuration, moved the halconfig and other custom configurations (we are using pipeline permissions), and auto triggers are not working.