
Error exception when load keras model deep learning by celery #4172

Closed
trangtv57 opened this issue Jul 28, 2017 · 4 comments

@trangtv57

Checklist

  • I have included the output of celery -A proj report in the issue.
    (if you are not able to do this, then at least specify the Celery
    version affected).
  • I have verified that the issue exists against the master branch of Celery.

Steps to reproduce

I want to load a model that I previously trained with Keras from Celery. TensorFlow (the Keras backend) only lets a model run in the session/graph it was loaded into, which effectively means the same thread. So the problem is: how can I separate the threads in Celery, i.e. give each thread its own Keras model object, so the model can run?

Expected behavior

Being able to serve the deep learning (Keras) API through Celery.

Actual behavior

@niteeshm

niteeshm commented Aug 7, 2017

@trangtv57 Can you post some logs or explain what the error is, if any? Load the model in your main Celery app and use it everywhere else it's required. Each worker will have its own separate object.

@trangtv57
Author

trangtv57 commented Aug 8, 2017

Sorry if my question was confusing. I have a graph object from tensorflow.get_default_graph(), and this object needs to be shared across all the threads. When I run Celery in debug mode, no error is shown; it just gets stuck. (Note: I hit the same problem, no error, just stuck while running, when I load my model with multiprocessing, which is why I think Celery also hangs without showing anything.)
In this sample, where I load the model once and share it across multiple threads, the model runs fine:
```python
import tensorflow as tf

def main_worker_dga(queue_domain, n_threads=5):
    l_threads = []
    model = load_model(path_file_model, path_file_weight)
    graph = tf.get_default_graph()
    print(id(graph))
    for i in range(n_threads):
        o_worker_abc = WorkerABC(queue_domain, model, graph)
        o_worker_abc.start()
        l_threads.append(o_worker_abc)

    for e_thread in l_threads:
        e_thread.join()
```

And the worker thread file:

```python
import threading

class WorkerABC(threading.Thread):
    abc_o = None
    queue_dm = None
    graph = None

    def __init__(self, queue_dm, model_dga, graph):
        threading.Thread.__init__(self)
        self.queue_dm = queue_dm
        self.model = model_dga
        self.graph = graph

    def run(self):
        self.abc_o = ABC()
        self.abc_o.set_model(self.model)

        # Run every prediction inside the graph captured in the main thread.
        with self.graph.as_default():
            while True:
                if not self.queue_dm.empty():
                    dm_next = self.queue_dm.get()
                    self.abc_o.set_omain(dm_next)
                    print(self.abc_o.predict_probability_domain())
                else:
                    break
```

@godelstheory

The issue is get_session from the TensorFlow library: this will hang across processes. Using multiprocessing with TensorFlow is not recommended (see the last comment of tensorflow/tensorflow#8220).

A workaround is to ensure that all imports of TensorFlow (including Keras) occur only in the child, spawned processes. This has allowed me to effectively load and use serialized TensorFlow models across child processes with Keras; see tensorflow/tensorflow#5448.

@auvipy
Member

auvipy commented Dec 19, 2017

Closing as not a bug. If any documentation or code fixes are suggested, please send a PR.

@auvipy auvipy closed this as completed Dec 19, 2017