Switching GPUs before training when tensorflow is used #1602
Comments
I'm curious about this as well, since Theano seems to need the device configured at start via flags or .theanorc, and this isn't covered here: http://keras.io/faq/#how-can-i-run-keras-on-gpu. I tried to get it to work (prior to fitting the Keras model),
but it didn't work. |
This article on running Keras on multiple GPUs may give you some direction, though it's untested. |
The referenced article is only valid for Theano, and setting a device in .theanorc or running Python with |
I think this link explains using multiple GPUs and device placement in Tensorflow. |
Following the link @parag2489 posted, I did the following: created my model inside a context created using
(note: you should set …) This seems to work properly, but I did not test with all operations to make sure. @fchollet Any comments on this? Should we add a function to configure the Tensorflow backend to specify a device? I think we could maybe hide device selection easily using the |
Seems like there could be a function for every backend that allows the user to set gpu/cpu stuff through keras and not through theano/tensorflow/etc |
solved by: |
@grahamannett If we design such an API, we have to keep in mind that you can allocate different parts of a model to different devices (while at the same time not making it extremely complicated). It is straightforward to do in Tensorflow, and there is a currently experimental way to do it in Theano as well. |
Any PR to add such functionality to the backend would be really welcome! |
@fchollet Do you think we should specify devices on a per-layer basis? The API would get a little bloated, though. What I would really like to do is something similar to what is done in Tensorflow, where you can specify a "device context" and anything declared within that scope is allocated to that device. Any ideas? |
Why not just do what TF is doing? It seems like a good API. By the way, note that the variables used by a layer are instantiated when |
I think we could do something similar to what Tensorflow is doing and adapt the Theano backend to do the same thing. My idea is to add an implementation for the Theano backend that replaces the |
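To make the design discussion above concrete, here is a minimal pure-Python sketch of the kind of device-scope API being proposed, mimicking tf.device's context-manager semantics with a simple stack. The names `device_scope` and `current_device` are hypothetical, not part of the Keras API:

```python
# Hypothetical sketch of a backend-agnostic device scope. Anything a backend
# allocates while the scope is active would be placed on the innermost device.
import contextlib

_device_stack = []

@contextlib.contextmanager
def device_scope(name):
    """Push a device name for the duration of the with-block."""
    _device_stack.append(name)
    try:
        yield
    finally:
        _device_stack.pop()

def current_device(default='/cpu:0'):
    """Device a backend should allocate new variables on."""
    return _device_stack[-1] if _device_stack else default
```

Scopes nest the same way tf.device contexts do: an inner `device_scope('/gpu:1')` temporarily overrides an outer `device_scope('/gpu:0')`, and the default is restored on exit.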
@snurkabill If possible, will you please post a recipe showing how to use this newly added function? Thanks. |
@leocnj Just use the Keras syntax in a with("/gpu:42") statement. It's the same as TF has, but propagated via Keras. |
Use the CUDA_VISIBLE_DEVICES environment variable to choose which GPUs can be seen, and thereby indirectly decide which GPU is used. |
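For example, CUDA_VISIBLE_DEVICES must be set before the framework initializes CUDA, so it goes at the very top of the script (a sketch; the GPU index 1 is arbitrary):

```python
# Restrict this process to one physical GPU by setting CUDA_VISIBLE_DEVICES
# before TensorFlow/Theano is imported; the chosen GPU then appears to the
# framework as device 0.
import os

os.environ['CUDA_VISIBLE_DEVICES'] = '1'   # only physical GPU 1 is visible

# import tensorflow / keras only after the variable is set
```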
@jfsantos I have tried to follow:
however I get the following error:
Keras 2.0.2 / Tensorflow 0.12.1 |
@ktamiola Keras 2.0 was rewritten, so that API has probably changed. If you want to use only one GPU, use the |
I am trying to run two models on two GPUs at the same time inside the same script. I've tried:

with K.tf.device('/gpu:{}'.format(gpu)):
    K.set_session(K.tf.Session(config=K.tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)))
    model = Sequential(name='ann')
    model.add(Dense(50, input_shape=input_shape, activation='elu', kernel_initializer='he_normal'))
    model.add(BatchNormalization())
    model.add(Dense(n_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=[keras.metrics.categorical_accuracy])

But I get a lot of errors. Shouldn't soft placement prevent those warnings? Can I just ignore them? The model seems to run fine. |
@Vishruit Could you please paste the code you tried, because K.set_session is not working for me. |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed. |
I tried |
Hi,
I would like to ask how one can change the GPU used when Keras is in use. I have two GPUs on my machine and I would like to run two separate scripts on them, parametrized by GPU id.
Is it possible?
The use case in TF is to run the session with device("/gpu:x"), where "x" is the GPU's id.
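One way to run the two scripts parametrized by GPU id, without touching tf.device inside the model code, is to pass the id on the command line and export it as CUDA_VISIBLE_DEVICES before importing the framework. A sketch, with illustrative argument and function names:

```python
# Sketch: choose the GPU per invocation, e.g. `python train.py --gpu 1`.
# CUDA_VISIBLE_DEVICES must be set before TensorFlow is imported, so the
# framework only ever sees the selected GPU (as device 0).
import argparse
import os

def select_gpu(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument('--gpu', type=int, default=0, help='physical GPU id')
    args = parser.parse_args(argv)
    os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu)
    return args.gpu

if __name__ == '__main__':
    import sys
    select_gpu(sys.argv[1:])
    # import keras / tensorflow here, after CUDA_VISIBLE_DEVICES is set
```

Launching the same script twice with `--gpu 0` and `--gpu 1` then pins each process to its own card.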