inferenceService can pull image directly #7560
This should be moved to https://github.com/kserve/kserve/issues instead. However, since I don't think that KServe does anything magic about image pulling, it will always fail unless the pod is authorized to pull the image. So either it is a public registry (at least for that cluster), or the image pull secrets are passed to the pods in some other way.

P.S. For the future, please format your code using something like:
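As the comment above says, pull credentials for a private registry have to reach the predictor pods somehow. A minimal sketch of one route, assuming KServe's predictor spec accepts the pod-level `imagePullSecrets` field it embeds from the Kubernetes PodSpec (the secret name `regcred` is hypothetical):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: torchserve
  namespace: kubeflow-user-example-com
spec:
  predictor:
    # `regcred` is a hypothetical kubernetes.io/dockerconfigjson Secret
    # that must already exist in the same namespace, e.g. created with:
    #   kubectl create secret docker-registry regcred \
    #     --docker-server=harbor.local.sinopec.com:59443 \
    #     --docker-username=<user> --docker-password=<password>
    imagePullSecrets:
      - name: regcred
    pytorch:
      storageUri: pvc:inferpvc
```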
/close
@juliusvonkohout: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/kind bug
```
Status:
  Components:
    Predictor:
      Latest Created Revision:  torchserve-predictor-00001
  Conditions:
    Last Transition Time:  2024-04-21T08:10:39Z
    Reason:                PredictorConfigurationReady not ready
    Severity:              Info
    Status:                False
    Type:                  LatestDeploymentReady
    Last Transition Time:  2024-04-21T08:11:00Z
    Message:               Revision "torchserve-predictor-00001" failed with message: Unable to fetch image "pytorch/torchserve-kfs:0.8.2": failed to resolve image to digest: Get "https://index.docker.io/v2/": context deadline exceeded.
    Reason:                RevisionFailed
    Severity:              Info
    Status:                False
    Type:                  PredictorConfigurationReady
    Last Transition Time:  2024-04-21T08:10:39Z
    Message:               Configuration "torchserve-predictor" does not have any ready Revision.
    Reason:                RevisionMissing
    Status:                False
    Type:                  PredictorReady
    Last Transition Time:  2024-04-21T08:10:39Z
    Message:               Configuration "torchserve-predictor" does not have any ready Revision.
```
What steps did you take and what happened:
I created an InferenceService with the following YAML:
```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: "torchserve"
  namespace: kubeflow-user-example-com
spec:
  predictor:
    pytorch:
      storageUri: pvc:inferpvc
      # image: harbor.local.sinopec.com:59443/kubeflow-1-8-0/pytorch/torchserve-kfs:0.8.2
      # image: m.daocloud.io/docker.io/pytorch/torchserve-kfs:0.8.2
```
What did you expect to happen:
The InferenceService does not become ready; it reports: Unable to fetch image "pytorch/torchserve-kfs:0.8.2": failed to resolve image to digest: Get "https://index.docker.io/v2/": context deadline exceeded.
Anything else you would like to add:
I also tried specifying the image explicitly via the `image:` fields, which I have commented out above, but that also failed.
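One of the "other ways" of passing pull secrets that the maintainer's comment alludes to is attaching the secret to the service account the predictor pods run under, so every pod in the namespace inherits it. A minimal sketch, assuming the `default` service account is used and a hypothetical secret named `regcred`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: kubeflow-user-example-com
imagePullSecrets:
  # `regcred` is a hypothetical kubernetes.io/dockerconfigjson Secret
  # that must already exist in this namespace.
  - name: regcred
```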
Environment:
- kfctl version: n/a, just kubectl
- Kubernetes platform (e.g. minikube): kubespray or standard k8s
- Kubernetes version (`kubectl version`): 1.28.3
- OS (`/etc/os-release`): CentOS 7.3