Load Kubeflow Pipelines in RHODS Pipelines #156
Comments
Now we need to figure out how to correctly configure the TLS params for the routes. Following this tutorial (https://www.redhat.com/sysadmin/cert-manager-operator-openshift), we make sure that cert-manager is running, and I was able to create an … Then we try to follow this tutorial to manually generate a certificate and add the secret to my route. There were two issues here. One, I cannot create a certificate using the … Try to create …
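For reference, the certificate step from that tutorial looks roughly like the sketch below. This is a hedged example only: the issuer, secret name, and DNS name are placeholders, not values from our cluster.

```yaml
# Sketch only: issuer name, secret name, and DNS name are placeholders.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ds-pipeline-route-cert
spec:
  secretName: ds-pipeline-route-tls        # cert-manager writes the key pair here
  dnsNames:
    - ds-pipeline.apps.example.com         # placeholder route host
  issuerRef:
    name: selfsigned-issuer                # placeholder ClusterIssuer
    kind: ClusterIssuer
```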
It turns out that without cert-manager-openshift-routes we can still manually set up a certificate for routes and connect to the KFP endpoint. While I think having this be automatically done in the test cluster would be nice, I have the steps to do so manually below (sketched after this paragraph).
Steps
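The original step-by-step commands did not survive in this export, so the following is only a sketch of the manual flow. It assumes the pipelines service is ds-pipeline-pipelines-definition and that a TLS secret like the one above already exists; the project name and file paths are placeholders.

```shell
# Sketch only: service name, project, and certificate files are assumptions.
# 1. Extract the certificate and key stored in the TLS secret.
oc extract secret/ds-pipeline-route-tls -n <project> --keys=tls.crt,tls.key --to=.

# 2. Create a reencrypt route for the oauth port of the pipelines service,
#    attaching the extracted certificate to the route's front end.
oc create route reencrypt ds-pipeline-pipelines-definition \
  --service=ds-pipeline-pipelines-definition --port=oauth \
  --cert=tls.crt --key=tls.key -n <project>

# 3. Check that the KFP API answers over the new route.
curl --cacert tls.crt "https://$(oc get route ds-pipeline-pipelines-definition \
  -n <project> -o jsonpath='{.spec.host}')/apis/v1beta1/healthz"
```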
In the meantime we should document this as a solution. I also agree it would be nice to have an automated way of doing this. Nice work @Zongshun96!
Problem Description
I am facing a new error when deploying a kfp pipeline with intermediate data. The PVC is mounted to a volume, but the container cannot mount that volume.
Reproducibility 1
Deploying the pipeline below.
Error Logs
Reproducibility 2
Deploying a single busybox container pod with a PVC also shows the same error.
interm-pvc.yaml
fake-deployment.yaml
Error Logs
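The attached manifests were not preserved in this export; the following is an illustrative reconstruction of what a minimal interm-pvc.yaml plus fake-deployment.yaml pair could look like. The names, size, and image are assumptions, and no storage class is pinned.

```yaml
# Hypothetical reconstruction, not the original attachments.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: interm-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi          # placeholder size
---
apiVersion: v1
kind: Pod
metadata:
  name: fake-deployment
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: interm
          mountPath: /data
  volumes:
    - name: interm
      persistentVolumeClaim:
        claimName: interm-pvc
```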
The storage class issue was fixed by enforcing node affinity to avoid those newly introduced GPU nodes. There seem to be some permissions that haven't been set up (#170). For now my fix is to apply the node affinity to my components. The following is an example of applying it to a component (sketched below).
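A minimal sketch of such a helper, assuming the kfp v1 SDK and the kubernetes Python client; the node label key and value are placeholders for whatever label distinguishes the GPU nodes in our cluster.

```python
# Sketch only: the label key/values below are placeholders, not actual cluster labels.
from kfp import dsl
from kubernetes.client import (
    V1Affinity, V1NodeAffinity, V1NodeSelector,
    V1NodeSelectorTerm, V1NodeSelectorRequirement,
)

def add_gpu_node_avoidance(op: dsl.ContainerOp) -> dsl.ContainerOp:
    """Pin a component to non-GPU nodes via required node affinity."""
    affinity = V1Affinity(
        node_affinity=V1NodeAffinity(
            required_during_scheduling_ignored_during_execution=V1NodeSelector(
                node_selector_terms=[
                    V1NodeSelectorTerm(
                        match_expressions=[
                            V1NodeSelectorRequirement(
                                key="nvidia.com/gpu.present",  # placeholder label
                                operator="NotIn",
                                values=["true"],
                            )
                        ]
                    )
                ]
            )
        )
    )
    return op.add_affinity(affinity)
```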
A working pipeline is shown here (an illustrative sketch of the wiring is below). Some useful pointers to recall: …
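This is not the linked pipeline, only an illustrative sketch of how the PVC and the affinity helper could be wired together with kfp / kfp_tekton; it assumes add_gpu_node_avoidance from the previous sketch is defined in the same module, and the names and sizes are placeholders.

```python
# Illustrative only: a tiny kfp v1 / kfp_tekton pipeline that mounts a PVC for
# intermediate data and applies the node-affinity helper sketched above.
from kfp import dsl
from kfp_tekton.compiler import TektonCompiler

@dsl.pipeline(name="interm-data-demo")
def interm_data_demo():
    # Create a PVC for intermediate data; size is a placeholder.
    vop = dsl.VolumeOp(
        name="interm-volume",
        resource_name="interm-pvc",
        size="1Gi",
        modes=dsl.VOLUME_MODE_RWO,
    )
    producer = dsl.ContainerOp(
        name="producer",
        image="busybox",
        command=["sh", "-c", "echo hello > /data/msg.txt"],
        pvolumes={"/data": vop.volume},
    )
    add_gpu_node_avoidance(producer)  # helper from the previous sketch

if __name__ == "__main__":
    TektonCompiler().compile(interm_data_demo, "interm_data_demo.yaml")
```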
Thank you!
It seems the AWS access key in …
The solution is to manually update the …
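One way to do the manual update, sketched under the assumption that the credentials live in a generic Secret in the project namespace; the secret name aws-connection below is a placeholder.

```shell
# Sketch only: secret name, namespace, and key names are assumptions.
oc create secret generic aws-connection -n <project> \
  --from-literal=AWS_ACCESS_KEY_ID=<new-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<new-secret> \
  --dry-run=client -o yaml | oc apply -f -

# Restart the pods that consume the secret so they pick up the new values.
oc delete pod -n <project> -l <consumer-label>
```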
Description
It seems the Kubeflow Pipelines SDK cannot be used with RHODS Pipelines at the moment. The Kubeflow Pipelines endpoint is not exposed.
Forwarding the ds-pipeline-pipelines-definition service in my OpenShift project (namespace) didn't solve the problem, as my code (kfp_tekton with my bearer token, adapted from here) complains certificate verify failed. Also, it is not safe to simply forward the service. Trevor Royer commented that the problem could be that "the container likely has a cert built into it that is self signed so your cert verification fails."
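For context, the client call being discussed looks roughly like this sketch. The route host, token, and CA path are placeholders; existing_token and ssl_ca_cert are standard kfp client arguments, and whether they match the exact code adapted "from here" is an assumption.

```python
# A minimal sketch, assuming kfp_tekton's TektonClient (which wraps kfp.Client);
# host and token are placeholders, not values from this issue.
from kfp_tekton import TektonClient

ROUTE_URL = "https://<ds-pipeline-route-host>"  # placeholder route to the pipelines API
BEARER_TOKEN = "<token>"                        # e.g. from `oc whoami --show-token`

client = TektonClient(
    host=ROUTE_URL,
    existing_token=BEARER_TOKEN,
    # ssl_ca_cert="/path/to/route-ca.crt",  # supply the route CA if verification fails
)
print(client.list_experiments())
```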
Proposed Solution
Trevor suggested adding a route for the oauth port of the service, e.g., oc create route reencrypt --service=dsp-def-service --port=oauth. He mentioned "you can setup a route and the cluster will create a new cert that is already trusted for you." He also mentioned some workarounds. While I think we need a permanent fix to this problem, I am listing them here for the record.
"kfp_tekton may provide an option to allow you to connect without authenticating the cert"
Reproducibility
Notes
- … (oc describe <pod> and AWS Cloudtrail)
- "… <route>/pipelines and in dsp you will need to use just the route. No /pipelines"
- "… data-science-pipelines-defenition and the route for the API endpoint will be the one pointing towards that"
- "… kfp_tekton. Any normal pipeline definition pieces use kfp."