
Issues with the current Go notebook controller #2456

Closed
vkoukis opened this issue Feb 12, 2019 · 19 comments

@vkoukis
Member

vkoukis commented Feb 12, 2019

Hello @lluunn @jlewi

@kimwnasptd , @ioandr and myself are playing with the new Jupyter web app to make it work end-to-end with the new Go-based notebook controller, so we can then proceed with a final review and merge of #2357.

I am writing to summarize a few things we bumped into. It's a single GitHub issue to keep the discussion focused, but if you feel it's best to open distinct issues for them, please let us know.

  • The controller seems to have a bug in the way it attempts to override the container spec (line 114), which leads to the set values never having any effect.
  • The controller seems to hardcode the fact that the Pod always listens on port 8888 (line 154). The deeper issue here is, "what can the controller expect of the application running in the pod?", and hardcoding the value to 8888 seems contrary to the goal of allowing arbitrary Images to work -- Make arbitrary Jupyter images work with Kubeflow #2208. I will come back to this later, but the easiest way to solve this would be: The controller uses a default value of 8888 for containerPort; the user is not required to provide it in the submitted Notebook CR, but they can always override it. If I understand correctly, the controller can access this value as Notebook.spec.template.spec.containers[0].containerPort. The controller should accept an explicit port choice by the user, because they know better what their presumably custom Image does, but should not require it.
  • The controller creates a new Service but does not create annotations for Ambassador, thus there is no way for the user to reach their new Notebook from outside the cluster.
  • This may be off-topic, but it was rather counter-intuitive to us: when Ambassador processes a new Mapping, it seems to not actually query the Service for its Endpoints, but instead expects the Mapping to specify the endpoint port explicitly. I understand the controller already knows the (potentially custom) containerPort for the Notebook (see above), so it can just include it in the Ambassador-specific annotation that it sets on the new Service instance that it creates.
  • When setting the Ambassador Mapping, the controller must make sure to enable use_websocket: true
    because the Jupyter UI requires it and breaks without it.
  • The user cannot specify whether to enable JupyterLab or not for the new Notebook. The controller hardcodes setting the JUPYTER_ENABLE_LAB env var to TRUE (line 116). This is a more philosophical question, better suited to notebook controller Add CRD validation #2417 , but it would make sense for the user to specify some Jupyter-specific settings inside the Notebook CR directly. Yes, the user could just specify environment variables as part of the Pod spec, but it makes sense to simplify setting some Jupyter-specific things, because the Notebook CRD is a Jupyter-specific CRD.
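The containerPort defaulting rule proposed above can be sketched as follows. This is a hedged illustration, not the controller's actual code: the type names are simplified stand-ins for the corev1 types the real controller would use, and `notebookPort` is a hypothetical helper name.

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 types the real controller uses.
type containerPort struct{ ContainerPort int32 }
type container struct{ Ports []containerPort }

// defaultNotebookPort is the proposed fallback when the Notebook CR
// does not declare a containerPort.
const defaultNotebookPort int32 = 8888

// notebookPort returns the port declared on the first container of the
// submitted pod spec, or the default when none is declared.
func notebookPort(containers []container) int32 {
	if len(containers) > 0 && len(containers[0].Ports) > 0 {
		return containers[0].Ports[0].ContainerPort
	}
	return defaultNotebookPort
}

func main() {
	// No port in the CR: the default applies.
	fmt.Println(notebookPort(nil)) // 8888
	// Explicit user choice: the controller respects it.
	fmt.Println(notebookPort([]container{{Ports: []containerPort{{ContainerPort: 9999}}}})) // 9999
}
```

The point is that the user never has to state 8888, but an explicit port in the CR always wins.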

And finally, a bigger issue:

It seems the webapp must specify the Image, command, and arguments to the command explicitly when creating a new Notebook CR. I understand the stated goal of allowing the user to work with arbitrary images #2208, but making these arguments mandatory makes the common case rather difficult:
Why would a simple user have to specify the full notebook Image, and the command to run, along with its arguments, in every single YAML that they deploy?

Given how elaborate the argument list currently is, if I understand correctly, something similar to:

args = [
    "start.sh",
    "jupyter",
    "lab",
    "--LabApp.token=''",
    "--LabApp.allow_remote_access=True",
    "--LabApp.allow_root=True",
    "--LabApp.ip='*'",
    "--LabApp.base_url=/" + request.parent.metadata.namespace + "/" + request.parent.metadata.name + "/",
    "--port=8888",
    "--no-browser",
]

This is taken from @kkasravi 's metacontroller, but it seems to be missing from the current Go implementation.

We cannot expect the user to specify this magic string inside the Notebook YAML that they submit.
But they can always override this, if they really know what they're doing. For example, if they actually override the --port=8888 argument, they'd better also modify the containerPort field of the Pod spec, so the controller knows how to create the Ambassador mapping, see above.

The Notebook controller already knows it's handling Jupyter notebooks. So it makes sense to take certain things for granted when deploying a Pod based on an Image containing the Jupyter software, e.g., that the Jupyter software is installed at a certain default location, and takes certain arguments.

So, an alternative would be: Would it make sense to have the Notebook controller use sane defaults for the Image, the command, its arguments, and the containerPort to connect to? This way, the user just submits a super-minimal Notebook YAML, overriding maybe just the resources they need.
Actually, since all these are part of the Pod template, would it make sense for the administrator, who deploys the Controller, to give it a default Pod template containing these values, as part of a ConfigMap?
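The "default pod template in a ConfigMap" idea could look roughly like the following sketch. The field names and merge semantics here are illustrative assumptions (a real controller would merge a full corev1.PodSpec, likely via a strategic merge patch, rather than checking fields one by one):

```go
package main

import "fmt"

// podDefaults is an illustrative stand-in for the handful of pod-spec
// fields under discussion; the real base would be a full corev1.PodSpec.
type podDefaults struct {
	Image   string
	Command []string
	Args    []string
}

// merge applies the user's Notebook CR on top of the admin-provided
// base: any field the user sets wins, anything omitted is inherited.
func merge(base, user podDefaults) podDefaults {
	out := base
	if user.Image != "" {
		out.Image = user.Image
	}
	if len(user.Command) > 0 {
		out.Command = user.Command
	}
	if len(user.Args) > 0 {
		out.Args = user.Args
	}
	return out
}

func main() {
	// Hypothetical admin-provided defaults, e.g. from a ConfigMap.
	base := podDefaults{
		Image:   "kubeflow/jupyter:default",
		Command: []string{"start.sh", "jupyter", "lab"},
		Args:    []string{"--port=8888", "--no-browser"},
	}
	// A super-minimal Notebook CR that only overrides the image.
	user := podDefaults{Image: "my-custom-jupyter:latest"}
	fmt.Printf("%+v\n", merge(base, user))
}
```

Under this scheme a user who only cares about the image (or only about resources) submits exactly that one field, and everything else comes from the administrator's base template.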

@lluunn
Contributor

lluunn commented Feb 12, 2019

And finally, a bigger issue:

It seems the webapp must specify the Image, command, and arguments to the command explicitly when creating a new Notebook CR. I understand the stated goal of allowing the user to work with arbitrary images #2208, but making these arguments mandatory makes the common case rather difficult:
Why would a simple user have to specify the full notebook Image, and the command to run, along with its arguments, in every single YAML that they deploy?

Given how elaborate the argument list currently is, if I understand correctly, something similar to:

args = [
    "start.sh",
    "jupyter",
    "lab",
    "--LabApp.token=''",
    "--LabApp.allow_remote_access=True",
    "--LabApp.allow_root=True",
    "--LabApp.ip='*'",
    "--LabApp.base_url=/" + request.parent.metadata.namespace + "/" + request.parent.metadata.name + "/",
    "--port=8888",
    "--no-browser",
]

This is taken from @kkasravi 's metacontroller, but it seems to be missing from the current Go implementation.

We cannot expect the user to specify this magic string inside the Notebook YAML that they submit.
But they can always override this, if they really know what they're doing. For example, if they actually override the --port=8888 argument, they'd better also modify the containerPort field of the Pod spec, so the controller knows how to create the Ambassador mapping, see above.

The Notebook controller already knows it's handling Jupyter notebooks. So it makes sense to take certain things for granted when deploying a Pod based on an Image containing the Jupyter software, e.g., that the Jupyter software is installed at a certain default location, and takes certain arguments.

So, an alternative would be: Would it make sense to have the Notebook controller use sane defaults for the Image, the command, its arguments, and the containerPort to connect to? This way, the user just submits a super-minimal Notebook YAML, overriding maybe just the resources they need.
Actually, since all these are part of the Pod template, would it make sense for the administrator, who deploys the Controller, to give it a default Pod template containing these values, as part of a ConfigMap?

Since container command/args are coupled with the image, we think it doesn't make sense to provide default values for command/args.
I know the args are complicated, but I would expect that users will use this controller via the webapp, and the webapp will provide a set of image/cmd pairs for the user to choose from (they will see only the image).

@lluunn
Contributor

lluunn commented Feb 12, 2019

  • The controller seems to have a bug in the way it attempts to override the container spec (line 114), which leads to the set values never having any effect.

Thanks, I didn't notice this. Do you mean our default values aren't set, or the user override values are not set? I will test it later...

  • The controller seems to hardcode the fact that the Pod always listens on port 8888 (line 154). The deeper issue here is, "what can the controller expect of the application running in the pod?", and hardcoding the value to 8888 seems contrary to the goal of allowing arbitrary Images to work -- Make arbitrary Jupyter images work with Kubeflow #2208. I will come back to this later, but the easiest way to solve this would be: The controller uses a default value of 8888 for containerPort; the user is not required to provide it in the submitted Notebook CR, but they can always override it. If I understand correctly, the controller can access this value as Notebook.spec.template.spec.containers[0].containerPort. The controller should accept an explicit port choice by the user, because they know better what their presumably custom Image does, but should not require it.
  • The user cannot specify whether to enable JupyterLab or not for the new Notebook. The controller hardcodes setting the JUPYTER_ENABLE_LAB env var to TRUE (line 116). This is a more philosophical question, better suited to notebook controller Add CRD validation #2417 , but it would make sense for the user to specify some Jupyter-specific settings inside the Notebook CR directly. Yes, the user could just specify environment variables as part of the Pod spec, but it makes sense to simplify setting some Jupyter-specific things, because the Notebook CRD is a Jupyter-specific CRD.

Makes sense, we should allow the port and JupyterLab to be customized.

  • The controller creates a new Service but does not create annotations for Ambassador, thus there is no way for the user to reach their new Notebook from outside the cluster.
  • This may be off-topic, but it was rather counter-intuitive to us: when Ambassador processes a new Mapping, it seems to not actually query the Service for its Endpoints, but instead expects the Mapping to specify the endpoint port explicitly. I understand the controller already knows the (potentially custom) containerPort for the Notebook (see above), so it can just include it in the Ambassador-specific annotation that it sets on the new Service instance that it creates.
  • When setting the Ambassador Mapping, the controller must make sure to enable use_websocket: true
    because the Jupyter UI requires it and breaks without it.

I didn't put ambassador routes because we are migrating to istio. #2261
But I guess we can add it for now and change to istio later.
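For reference, roughly what such an annotation body could look like, as a hedged sketch: the Mapping fields follow Ambassador's v0 annotation config (the value the controller would set under the getambassador.io/config annotation on its Service), but the helper name and the mapping-name scheme are hypothetical, not the controller's actual choices.

```go
package main

import "fmt"

// ambassadorAnnotation builds a Mapping body the controller could set
// under the getambassador.io/config annotation on the Service it
// creates. The target port is named explicitly (Ambassador does not
// consult the Service's Endpoints, see above) and use_websocket is
// enabled, since the Jupyter UI breaks without it.
func ambassadorAnnotation(namespace, name string, port int32) string {
	return fmt.Sprintf(`---
apiVersion: ambassador/v0
kind: Mapping
name: notebook_%s_%s_mapping
prefix: /%s/%s/
service: %s.%s:%d
use_websocket: true
`, namespace, name, namespace, name, name, namespace, port)
}

func main() {
	fmt.Print(ambassadorAnnotation("kubeflow", "my-notebook", 8888))
}
```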

@jlewi
Contributor

jlewi commented Feb 12, 2019

The Notebook controller already knows it's handling Jupyter notebooks. So it makes sense to take certain things for granted when deploying a Pod based on an Image containing the Jupyter software, e.g., that the Jupyter software is installed at a certain default location, and takes certain arguments.

Can the defaults be encoded in the docker image? e.g. any command line arguments, environment variables that should be set by default should be set in the docker image.

My expectation is that most jupyter images are configured to start Jupyter by default just via docker run. e.g.

docker run tensorflow/tensorflow
  • (Or at least I think it used to start jupyter by default; looks like that might no longer be the case.)

How could the controller pick sensible defaults for things like the command line and arguments? This will depend on the Jupyter image.

So perhaps the problem is not with the controller but with the fact that our existing Jupyter images don't set a default entrypoint and args in the container image?

@jlewi jlewi added this to New in 0.5.0 via automation Feb 12, 2019
@vkoukis
Member Author

vkoukis commented Feb 12, 2019

Since container command/args are coupled with the image, we think it doesn't make sense to provide default values for command/args.
I know the args are complicated, but I would expect that users will use this controller via the webapp, and the webapp will provide a set of image/cmd pairs for the user to choose from (they will see only the image).

I understand that users will use the webapp most of the time, but I argue against the controller depending on it. The arguments do not really depend on the image; they depend on the fact that we are starting jupyter, and the arguments are arguments to jupyter, if I understand correctly. And it makes sense for a controller for Jupyter Notebook CRs to assume that jupyter is installed.

How could the controller pick sensible defaults for things like the command line and arguments? This will depend on the Jupyter image.

True, but they are all going to be Jupyter images. So, expecting a specific command (jupyter) to exist at a certain location, and accept certain Jupyter-specific arguments is a valid assumption for a controller that manages containers running Jupyter notebooks. This goes hand-in-hand with this argument:

  • The user cannot specify whether to enable JupyterLab or not for the new Notebook.

Yes, the user can always set the environment variables by hand by overriding the pod spec in their submission, but I think it makes sense for the Jupyter controller to set environment variables that it knows the Jupyter software respects. And it makes sense for the Notebook CRD to have Jupyter-specific fields outside the Pod spec, e.g., useLab: True.

So perhaps the problem is not with the controller but with the fact that our existing Jupyter images don't set a default entrypoint and args in the container image?

This is a good argument! And I agree that the container images should be self-standing. If I start the container image, just by running docker run, everything should work.

But until all images are like this, I think it makes sense that the controller has a default pod template associated with it, and in this template the administrator specifies a default image, and a default command, and defaults args in this template, so submissions of very simple Notebooks CRs work.
And yes, if the user specifies a custom Image in the Notebook CR, by overriding the relevant field in the Pod spec, they'd better also override the arguments field, if their image does not work with whatever generic arguments the controller passes by default.

@kkasravi
Contributor

I had a similar issue with the tensorboard component, where the port was a parameter arg but was fixed in other places, including in the Dockerfile. If you provide command and args in the CRD, you can avoid this by never using the docker ENTRYPOINT. But we should try to be consistent and compatible with kustomize. My 2¢

@jlewi
Contributor

jlewi commented Feb 12, 2019

This is a good argument! And I agree that the container images should be self-standing. If I start the container image, just by running docker run, everything should work.

But until all images are like this, I think it makes sense that the controller has a default pod template associated with it, and in this template the administrator specifies a default image, and a default command,

i can think of two options

  1. If we can find one or more jupyter images that are free-standing, is that enough to unblock things while we convert over the other images?
  2. Alternatively, should we add a flag to the go controller to enable a legacy mode so that it works with our existing images?

But we should try and be consistent and compatible with kustomize

Where does kustomize come into play? Doesn't kustomize provide the same flexibility to override command or args?

@kkasravi
Contributor

Where does kustomize come into play? Doesn't kustomize provide the same flexibility to override command or args?

yes - it was more of a comment about leveraging the same things that kustomize uses like overlays on the command and args so moving to kustomize at a later point is easy.

@vkoukis
Member Author

vkoukis commented Feb 12, 2019

@jlewi thanks for the feedback. Here's what I don't get:

i can think of two options

  1. If we can find one or more jupyter images that are free-standing, is that enough to unblock things while we convert over the other images?

  2. Alternatively, should we add a flag to the go controller to enable a legacy mode so that it works with our existing images?

Do we expect the user to always pass a full pod spec as part of the Notebook they submit, or do we expect them to pass only the parts they want to override? To put it differently, I think it makes sense for the Controller to always have a base pod spec, from which it starts. This could be hardcoded in the controller, or even better, could be part of a ConfigMap, set by the administrator, mounted by the Controller.

If this happens, then I think the Controller shouldn't have to worry about Image, command, and args.
I assume that the default pod spec will specify a default image, a default command, default args, maybe even default requests for resources -- CPU, RAM.

In this case, whatever the user has to submit becomes minimal, which I think is always a good thing. So, there's two different scenarios:

  1. The base pod spec doesn't specify Image, or command, or args. In this case, the user must specify Image explicitly. If they do not specify commands, or args, we hope the Docker image itself has a proper entrypoint defined, as you say, and I agree completely.

  2. If the image doesn't have a proper entrypoint, and the user didn't override command and args, things will break. But in this case, the base pod spec can specify a sane default for command and args, which is how we started every image so far anyway. I think this is the legacy mode you are referring to. But it will be easier if you don't have to hard-code this behavior in the Go source.

Having a base pod spec that can boot no matter what would make sense in any case. Because then we could start with an almost empty Notebook CR, and still have a working notebook.

What do you think?

@vkoukis
Member Author

vkoukis commented Feb 12, 2019

I think this is also aligned with what @kkasravi describes with his reference to kustomize: The controller is a mechanism that starts with a base pod, which is self-standing, ideally, and applies the customizations that come from the Notebook CR that the user submitted. The user can omit fields they may not care about [command, args], and they will be inherited from the base pod spec, so the end result is always valid to submit to the K8s API.

@lluunn
Contributor

lluunn commented Feb 12, 2019

Do we expect the user to always pass a full pod spec as part of the Notebook they submit, or do we expect them to pass only the parts they want to override?

I know I am repeating myself, but aren't we expecting most users to use the webapp? So this doesn't make the user experience harder.
And for those not using webapp, I would expect they know the arg/cmd for their own images.

I understand that users will use the webapp most of the time, but I argue against the controller depending on it. The arguments do not really depend on the image; they depend on the fact that we are starting jupyter, and the arguments are arguments to jupyter, if I understand correctly. And it makes sense for a controller for Jupyter Notebook CRs to assume that jupyter is installed.

But see our default cmd:

args = [
    "start.sh",
    "jupyter",
    "lab",
    "--LabApp.token=''",
    "--LabApp.allow_remote_access=True",
    "--LabApp.allow_root=True",
    "--LabApp.ip='*'",
    "--LabApp.base_url=/" + request.parent.metadata.namespace + "/" + request.parent.metadata.name + "/",
    "--port=8888",
    "--no-browser",
]

It's assuming start.sh, not just jupyter.

So I think this makes more sense:

  • controller provides default values for all fields other than image, command, and args

Users will still have an easy way to launch a notebook with the webapp, by just selecting the image.

@vkoukis
Member Author

vkoukis commented Feb 12, 2019

@lluunn

I know I am repeating myself, but aren't we expecting most users to use the webapp? So this doesn't make the user experience harder.

Hello Lun-Kai,

You are right, but I am making the argument that the abstraction of submitting Notebook CRs based on a Notebook CRD and having them managed by a Notebook Controller, is a good abstraction on its own. Yes, most people will use the webapp, but it makes sense to keep the abstraction as clean as possible at the level of submitting a Notebook CR.

I mean, what do we gain, by having the webapp submit magic arguments in the Notebook CR?
Or conversely, is it not a good feature to have the controller start from a base definition of a pod, and know that it is passing jupyter arguments to a jupyter pod?

It's assuming start.sh, not just jupyter.

start.sh is a Kubeflow-specific wrapper that chown()s files around, but start.sh jupyter is jupyter, it accepts only jupyter-specific args.

A similar argument can be made for --LabApp.allow_remote_access=True and --LabApp.allow_root=True: if the Controller actually depends on this functionality, e.g., so that the Service it then creates works as expected, then it makes sense to pass these arguments by default.

So, it's not obvious to me why this wouldn't work:
The controller comes with a base spec for a pod. It patches the user's choices into this spec.
This spec happens to also define Image, command, and args, in a way that works for this specific Image, and probably for most other current Images.

Or, to put it differently: What is it that makes the Notebook Controller actually be a Notebook controller, and not a generic mechanism to submit Pod manifests? I think it is the fact that the Controller knows that it is orchestrating pods that run Images that contain jupyter, and passes arguments to the jupyter command based on fields of the Notebook CRD.

@lluunn
Contributor

lluunn commented Feb 13, 2019

the abstraction of submitting Notebook CRs based on a Notebook CRD and having them managed by a Notebook Controller, is a good abstraction on its own. Yes, most people will use the webapp, but it makes sense to keep the abstraction as clean as possible at the level of submitting a Notebook CR.

I actually think not setting the default args is cleaner and the better abstraction.
And the reason is the arg/cmd are coupled with the image.
For example, if a user is using a self-standing jupyter image (with a proper entrypoint set), how can they tell the controller not to set the args? The user would expect that with no args set in the spec, the image should run properly, but it will actually fail with the additional args.

I mean, what do we gain, by having the webapp submit magic arguments in the Notebook CR?
Or conversely, is it not a good feature to have the controller start from a base definition of a pod, and know that it is passing jupyter arguments to a jupyter pod?

I think we agree(?) that whether we set default args or not doesn't affect the UX of 99% of our users. So the opposite question can be asked: "what do we gain, by having the controller set the default args?"

start.sh is a Kubeflow-specific wrapper that chown()s files around,

Doesn't this mean the default args only work for kubeflow-built images?

A similar argument can be made for --LabApp.allow_remote_access=True and --LabApp.allow_root=True: if the Controller actually depends on this functionality, e.g., so that the Service it then creates works as expected, then it makes sense to pass these arguments by default.

I think the controller won't depend on this; it will depend on UseLab?

Or, to put it differently: What is it that makes the Notebook Controller actually be a Notebook controller, and not a generic mechanism to submit Pod manifests? I think it is the fact that the Controller knows that it is orchestrating pods that run Images that contain jupyter, and passes arguments to the jupyter command based on fields of the Notebook CRD.

I don't think it's only about setting args for the pod. The controller will also secure the access, and handle TTL for the notebook.

@vkoukis
Member Author

vkoukis commented Feb 13, 2019

@lluunn We had a very nice discussion with @jlewi [thanks!], it is all logged here:
https://kubeflow.slack.com/archives/CF94DURGF/p1550017046026600?thread_ts=1549996774.007600&cid=CF94DURGF

I'll try to summarize in this comment, @jlewi please correct whatever I may have missed:

We agree that if all container images have a proper entrypoint [command/args] specified, then we're good, everything will work, and we will avoid the ugly magic argument list in the web app, so we'll go forward taking this for granted.

There is also the issue of how the controller will know what port the image is set up to listen at. We agreed that the Controller should have a default value for containerPort, 8888; this can be hardcoded in the Go code for the Controller, or even better be a command-line argument to it. But the user should definitely be able to override it via the pod spec they submit with their Notebook CR, and the Controller should apply their choice when setting up services. That is, if the user knows that the Image they are specifying listens at port 9999, they must specify containerPort as 9999 in the submitted Notebook CR, and the Controller should respect their choice when setting up Ambassador annotations / Istio routes.

This seems like a way for us to move forward.

My only reservation would be: [it doesn't apply at this point, but we can keep it and use it as initial context if/when we revisit this]:

I agree that having the default value for containerPort as a command-line argument to the controller will work. But if we see that we start gathering more command-line arguments for default values for things that are specified as part of a pod spec, then it may make sense to have a "base pod spec" containing all of these defaults, instead of different command-line arguments, and provide this base pod spec as part of a ConfigMap to the Controller.

@ioandr ioandr moved this from New to Mid Release Demo in 0.5.0 Feb 14, 2019
@vkoukis
Member Author

vkoukis commented Feb 14, 2019

@lluunn @jlewi
I hope I'm not referring to something that's solved already, I wanted to make sure that these are known, because they seem to be blocking the demo now:

  1. We need at least one public notebook Docker image with the proper modifications for the entry point, as specified by Jeremy, so we can use it from the web app and unblock the e2e demo. [there is Change jupyter images to work with the new jupyter CR #2458 for this]
  2. The controller must get the containerPort argument from the Notebook CR and specify this as the target port for Ambassador annotations, or use 8888 as the default, if the user has not explicitly specified a containerPort.

For (1), I see #2458, which waits for this issue, #2456, but what is missing for this issue to close?
I see #2463, which seems to be enough for #2456 to close, except for point (2) above. Is there something else we can do / help with?

For point (2), it seems that #2463 does not pass the target port in the Ambassador mapping, so this still blocks #2456.

Is there anything else we should keep in mind, to move forward with #2456 and #2458?
@lluunn Let us know!

@lluunn
Contributor

lluunn commented Feb 14, 2019

The Service will map port 80 to 8888 (or the container port specified), and Ambassador maps the route to the Service, so IIUC the Ambassador annotation should not need the containerPort?

@vkoukis
Member Author

vkoukis commented Feb 14, 2019

@lluunn @kimwnasptd

The Service will map port 80 to 8888 (or the container port specified), and Ambassador maps the route to the Service, so IIUC the Ambassador annotation should not need the containerPort?

Hey Lun-Kai, I think you are referring to the fact that if you kubectl get the Service, you see the containerPort as part of the Endpoints in the service, right?
Yes, this is true. But Ambassador, for some reason that is not clear to me, completely disregards the Endpoints field in the actual Service, and only looks at the port that is specified explicitly, as part of the Mapping... This is what I was trying to explain in my original comment here:

  • This may be off-topic, but it was rather counter-intuitive to us: when Ambassador processes a new Mapping, it seems to not actually query the Service for its Endpoints, but instead expects the Mapping to specify the endpoint port explicitly. I understand the controller already knows the (potentially custom) containerPort for the Notebook (see above), so it can just include it in the Ambassador-specific annotation that it sets on the new Service instance that it creates.

I know it doesn't make sense, but from what we have seen, you have to specify the target port in the mapping explicitly. And if I understand correctly, it doesn't really matter what Service you annotate with the mapping. It doesn't make much sense, maybe @jlewi has some more insight on this, but this is what we've seen, please correct us if we're wrong!

@lluunn
Contributor

lluunn commented Feb 14, 2019

I see, didn't know that. Will also fix in #2463

@lluunn lluunn added the area/jupyter Issues related to Jupyter label Feb 14, 2019
@lluunn
Contributor

lluunn commented Feb 16, 2019

The issues here are fixed. Other improvements are in #2417.

@lluunn lluunn closed this as completed Feb 16, 2019
0.5.0 automation moved this from Mid Release Demo to Done Feb 16, 2019
@shalberd

shalberd commented Feb 26, 2019

I am currently developing according to OpenShift and Nvidia GPU Daemon Plugin specs from here:

https://github.com/zvonkok/origin-ci-gpu/
https://github.com/redhat-performance/openshift-psap/tree/master/blog/gpu/device-plugin

The Pod template should also allow custom environment variables for GPU acceleration, such as:

env:
  - name: NVIDIA_VISIBLE_DEVICES
    value: all
  - name: NVIDIA_DRIVER_CAPABILITIES
    value: "compute,utility"
  - name: NVIDIA_REQUIRE_CUDA
    value: "cuda>=5.0"

and custom security contexts, if wished:

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seLinuxOptions:
    type: nvidia_container_t

In my case, a spawned GPU container showed the following when checking the logs

oc logs jupyter-mlk8user | grep -b1 -a10 CUDA

12336:2019-02-25 10:09:48.256799: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
12498:2019-02-25 10:09:48.256986: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] retrieving CUDA diagnostic information for host: jupyter-mlk8user
12651-2019-02-25 10:09:48.257005: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:170] hostname: jupyter-mlk8user
12765-2019-02-25 10:09:48.257070: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:194] libcuda reported version is: 410.79.0
12890-2019-02-25 10:09:48.257106: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:198] kernel reported version is: Permission denied: could not open driver version path for reading: /proc/driver/nvidia/version
