Can not deploy Origin 3.10 with packages from CBS repository #8550
Comments
And I apologize if I opened the issue under the wrong repository; I'm just not sure who is responsible for the CBS repository.
@cynepco3hahue I and a few other guys look after creating the Origin rpms, which are then pushed into the CentOS repos. That said, the rpm creation for 3.10 (master branch) doesn't currently work; I'll look at it next week once I'm back from holiday. Sorry for the delay/inconvenience.
@DanyC97 No worries, I will wait with patience 😄
@DanyC97 Hi Doni, do you have any updates?
@cynepco3hahue The same error on a different oc command: openshift/origin#19590 Maybe it's connected.
@cynepco3hahue Not sure if you are subscribed to the openshift dev mailing list or centos-devel (where the PaaS meeting logs are shared), but this week I announced that we've built a 3.10 origin rpm from the Origin master branch (note there is no RC candidate for 3.10 origin yet), so we can play with it a bit and get ourselves ready from the PaaS point of view for when a release is cut. The rpms are located at https://cbs.centos.org/koji/taskinfo?taskID=449606; they are not pushed to any other repos (like the centos mirror or release repos). Let me know how successful you are in getting 3.10 up, and thank you for your patience.
@DanyC97 Thanks for the update, I will try to deploy 3.10 with the packages from the link.
@DanyC97 I tried to deploy Origin, but the origin-node service fails to start:

```
Jun 17 07:41:07 node01 origin-node[4145]: /usr/local/bin/openshift-node: line 17: /usr/bin/openshift-node-config: No such
Jun 17 07:41:07 node01 systemd[1]: origin-node.service: main process exited, code=exited, status=1/FAILURE
```

Should this file be part of some package, or is it created dynamically during the ansible run?
@cynepco3hahue that is a very good question and I'm afraid I don't know the answer. @vrutkovs @sdodson do you guys have any insight into this?
@DanyC97 this seems to be provided by
hmm, that is interesting then. @vrutkovs I'll need to dig in to see what is going on
openshift-node-config is a new binary, building 3.10-rc.0 should make this problem go away. |
@cynepco3hahue Can you post the dependency issue? Like, is it failing on cri-tools or on a dependency of cri-tools?
@sdodson Sure
@sdodson I think we need to tag the cri-o build into the CentOS release repo too; I'll try to sort it out soon. @cynepco3hahue if the question is how long you need to wait for the CentOS origin rpms once a new Origin release is cut, the answer is: we're doing all we can to improve the turnaround and iron out a few things as we go, in parallel with our day jobs.
@DanyC97 😄 I just want to be aware of the exact dates, because our testing is strongly bound to OpenShift, so I want to be sure our code works correctly on both k8s and OpenShift.
It seems our ability to test new releases of openshift is hampered by the lack of documentation on how to test an upgrade. I have a 13-machine cluster which I have deemed to be my poc environment. It would be nice if, once a release candidate is announced, there were a way of testing an upgrade of this cluster using ansible.

The same thing happened in 3.9, only there 3.9 went golden with no rpms in sight for a number of weeks. So while we can test a single-node system using the oc command, there is no way of testing a multi-machine cluster. If I am wrong, can someone please point me at the documentation on what I would have to do to test a release candidate using ansible?

Is the ansible method of installation an afterthought? It doesn't seem to be on the same release cycle as the main product, i.e. the lack of rpms makes ansible unusable from a testing perspective.

Ted
@debianmaster It missed for
@cynepco3hahue can you share your inventory file if possible, removing sensitive info? thanks for your help
@debianmaster Sure,
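For anyone else trying this, a minimal openshift-ansible inventory for a 3.10 origin deploy from the CBS candidate repo typically looks something like the following. This is a hypothetical sketch, not the poster's actual file: the hostnames, repo id, and node group choices are placeholders.

```ini
# Hypothetical minimal inventory for an openshift-ansible 3.10 origin deploy.
# Hostnames and the repo id are made-up placeholders.
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_user=root
openshift_deployment_type=origin
openshift_release=v3.10
# Point the installer at the CBS candidate repo (candidate builds are unsigned,
# hence gpgcheck is disabled).
openshift_additional_repos=[{'id': 'origin310-candidate', 'name': 'origin310-candidate', 'baseurl': 'https://cbs.centos.org/repos/paas7-openshift-origin310-candidate/x86_64/os/', 'enabled': 1, 'gpgcheck': 0}]

[masters]
master01.example.com

[etcd]
master01.example.com

[nodes]
master01.example.com openshift_node_group_name='node-config-master-infra'
node01.example.com openshift_node_group_name='node-config-compute'
```

Note that 3.10 replaced per-host node labels with `openshift_node_group_name`, so every entry under `[nodes]` needs a group assignment.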
thanks, i will give it a try
not much luck here. guess i will wait for the release instead of wasting time.
@cynepco3hahue can we close this issue in favour of #8399? too many duplicate issues ...
@cynepco3hahue until you close this issue as mentioned in my previous comment, have a look here at my last update; looking forward to getting some feedback
Dupe of #8399 |
@DanyC97 Thanks, I will monitor the issue that you specified.
any update here guys? I see the issue persists for origin 3.11 as well, failing at:

```
TASK [openshift_node : Install node, clients, and conntrack packages] ******************************************************************************************
```
Description
I am trying to deploy OpenShift Origin with packages from the CBS repository (https://cbs.centos.org/repos/paas7-openshift-origin310-candidate/x86_64/os/), but the deployment fails on
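For reference, pointing yum at that repository can be done with a repo file like the one below. This is a sketch: the repo id, name, and file path are my own choices, and `gpgcheck=0` is assumed because candidate builds are unsigned.

```ini
# /etc/yum.repos.d/origin310-candidate.repo -- illustrative only; the repo id,
# name, and gpgcheck=0 are assumptions, not values from the CBS documentation.
[paas7-openshift-origin310-candidate]
name=CentOS PaaS SIG - OpenShift Origin 3.10 candidate
baseurl=https://cbs.centos.org/repos/paas7-openshift-origin310-candidate/x86_64/os/
enabled=1
gpgcheck=0
```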
I checked the cluster and I can see that the `apiserver-5t2n4` pod fails to run:

```
# oc get pods --all-namespaces
NAMESPACE               NAME                          READY   STATUS             RESTARTS   AGE
default                 docker-registry-1-lg6x6       1/1     Running            0          4m
default                 registry-console-1-m2xms      1/1     Running            0          3m
default                 router-1-5hmdq                1/1     Running            0          4m
kube-service-catalog    apiserver-5t2n4               0/1     CrashLoopBackOff   4          2m
kube-service-catalog    controller-manager-psjmp      0/1     CrashLoopBackOff   4          2m
openshift-web-console   webconsole-5b4d568df4-dd4dd   1/1     Running            0          3m
```

In the `apiserver-5t2n4` log I can see:

```
# oc -n kube-service-catalog logs apiserver-5t2n4
Error: unknown flag: --admission-control
```
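As an aside, when a listing like the one above is long, the failing pods can be pulled out with a one-line awk filter. This is a sketch run over a saved sample of the output; the file path is arbitrary.

```shell
# Save a sample of the `oc get pods --all-namespaces` listing, then print
# namespace/name for every pod whose STATUS column is not "Running".
# The path /tmp/pods.txt is an arbitrary choice for this example.
cat > /tmp/pods.txt <<'EOF'
NAMESPACE             NAME                        READY  STATUS            RESTARTS  AGE
default               docker-registry-1-lg6x6     1/1    Running           0         4m
kube-service-catalog  apiserver-5t2n4             0/1    CrashLoopBackOff  4         2m
kube-service-catalog  controller-manager-psjmp    0/1    CrashLoopBackOff  4         2m
EOF
awk 'NR > 1 && $4 != "Running" {print $1 "/" $2}' /tmp/pods.txt
# prints:
#   kube-service-catalog/apiserver-5t2n4
#   kube-service-catalog/controller-manager-psjmp
```

The same filter works on live output via `oc get pods --all-namespaces | awk ...`.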
Version
Steps To Reproduce
Expected Results
Deployment succeeds without any errors
Observed Results
Deployment fails
Additional Information
I checked, and the problem I am hitting is the one already solved by commit 2fba651#diff-f5c4b4675369f72d180a86be3772fe87, so putting updated packages into the repository should be enough to solve the issue.
I think a good approach would be to run nightly builds and publish the generated packages to the relevant repository.