TypeError: can't pickle _thread.lock objects #11157

Closed
mattfeel opened this Issue Jun 30, 2017 · 50 comments

mattfeel commented Jun 30, 2017

Please go to Stack Overflow for help and support:

http://stackoverflow.com/questions/tagged/tensorflow

If you open a GitHub issue, here is our policy:

  1. It must be a bug or a feature request.
  2. The form below must be filled out.
  3. It shouldn't be a TensorBoard issue. Those go here.

Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.


System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No, using stock examples
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): pip
  • TensorFlow version (use command below): 1.2.1
  • Python version: 3.6.1 (Anaconda 4.4.0 64-bit)
  • Bazel version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:
  • Exact command to reproduce: I'm running the seq2seq example in models/tutorials/rnn/translate, verbatim.

You can collect some of this information using our environment capture script:

https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh

Collecting system information...
2017-06-29 18:35:16.672194: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-29 18:35:16.672242: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-29 18:35:16.672250: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Wrote environment to tf_env.txt. You can review the contents of that file.
and use it to populate the fields in the github issue template.

cat tf_env.txt

== cat /etc/issue ===============================================
Linux GCRGDL171 4.8.0-58-generic #63~16.04.1-Ubuntu SMP Mon Jun 26 18:08:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
VERSION="16.04.2 LTS (Xenial Xerus)"
VERSION_ID="16.04"
VERSION_CODENAME=xenial

== are we in docker =============================================
No

== compiler =====================================================
c++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

== uname -a =====================================================
Linux GCRGDL171 4.8.0-58-generic #63~16.04.1-Ubuntu SMP Mon Jun 26 18:08:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

== check pips ===================================================
numpy (1.12.1)
numpydoc (0.6.0)
protobuf (3.3.0)
tensorflow (1.2.1)

== check for virtualenv =========================================
False

== tensorflow import ============================================
tf.VERSION = 1.2.1
tf.GIT_VERSION = v1.2.0-5-g435cdfc
tf.COMPILER_VERSION = v1.2.0-5-g435cdfc
Sanity check: array([1], dtype=int32)

== env ==========================================================
LD_LIBRARY_PATH /usr/local/cuda:/usr/local/cuda/lib64:
DYLD_LIBRARY_PATH is unset

== nvidia-smi ===================================================
Thu Jun 29 18:35:19 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66 Driver Version: 375.66 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K40m Off | 0000:27:00.0 Off | 0 |
| N/A 27C P8 21W / 235W | 0MiB / 11439MiB | 0% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+

== cuda libs ===================================================
/usr/local/cuda-8.0/lib64/libcudart.so.8.0.61
/usr/local/cuda-8.0/lib64/libcudart_static.a
/usr/local/cuda-8.0/doc/man/man7/libcudart.7
/usr/local/cuda-8.0/doc/man/man7/libcudart.so.7

You can obtain the TensorFlow version with

python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"

Describe the problem

Describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request.

I get the exception TypeError: can't pickle _thread.lock objects. It happens on different machines with the same Python version, just running your example code verbatim.

Source code / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.

Traceback (most recent call last):
File "translate.py", line 322, in
tf.app.run()
File "/home/t-mabruc/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "translate.py", line 319, in main
train()
File "translate.py", line 178, in train
model = create_model(sess, False)
File "translate.py", line 136, in create_model
dtype=dtype)
File "/home/t-mabruc/models/tutorials/rnn/translate/seq2seq_model.py", line 179, in init
softmax_loss_function=softmax_loss_function)
File "/home/t-mabruc/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 1206, in model_with_buckets
decoder_inputs[:bucket[1]])
File "/home/t-mabruc/models/tutorials/rnn/translate/seq2seq_model.py", line 178, in
lambda x, y: seq2seq_f(x, y, False),
File "/home/t-mabruc/models/tutorials/rnn/translate/seq2seq_model.py", line 142, in seq2seq_f
dtype=dtype)
File "/home/t-mabruc/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 848, in embedding_attention_seq2seq
encoder_cell = copy.deepcopy(cell)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 161, in deepcopy
y = copier(memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 476, in deepcopy
setattr(result, k, copy.deepcopy(v, memo))
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 215, in _deepcopy_list
append(deepcopy(a, memo))
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 169, in deepcopy
rv = reductor(4)
TypeError: can't pickle _thread.lock objects
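
For what it's worth, the failure can be reproduced outside TensorFlow: copy.deepcopy falls back on the pickle machinery (the reductor(4) call in the frames above) for objects it has no dedicated copier for, and _thread.lock objects refuse to be pickled. A minimal standalone sketch with a hypothetical class, not taken from the tutorial code:

    import copy
    import threading

    class CellLike(object):
        """Stands in for an RNNCell whose attributes reach graph-level state."""
        def __init__(self):
            self._lock = threading.Lock()  # deepcopy descends into this and tries to pickle it

    copy.deepcopy(CellLike())  # raises TypeError: can't pickle _thread.lock objects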

printdhruv commented Jun 30, 2017

Hi,
As the website states: "NOTE: The conda package is community supported, not officially supported. That is, the TensorFlow team neither tests nor maintains this conda package. Use that package at your own risk."
I think you should try it with a native Python environment. I tested it and it seems to run successfully.

mattfeel commented Jun 30, 2017

I also tried it with native Python, and the error persists. The error disappears only when downgrading to TensorFlow 1.0.0.

printdhruv commented Jun 30, 2017

Can you list the exact steps and source code you are executing?

mattfeel commented Jun 30, 2017

I'm running the code found here: https://github.com/tensorflow/models/tree/master/tutorials/rnn/translate

I run it with:
python translate.py --data_dir <path-to-data-dir> --train_dir <path-to-train-dir> --from_train_data <path-to-from-train-data> --to_train_data <path-to-to-train-data>

I'm running this code as-is. Python is native (no Anaconda, no virtualenv, etc.).

Also, the same error happens on both Linux and Windows, and on both systems it is fixed only by using TensorFlow 1.0.0.

skye commented Jul 5, 2017

nealwu commented Jul 6, 2017

It's very odd that the error disappears with TensorFlow 1.0. @lukaszkaiser, do you know what might be happening here?

lukaszkaiser commented Jul 6, 2017

There have been a lot of changes to RNNCell since 1.0; we'll have to ask @ebrevdo to take a look.

ebrevdo commented Jul 6, 2017

Do you get this error in earlier versions of Python (e.g., Python 3.4)? I'm trying to replicate locally with Python 2.7.

ebrevdo commented Jul 6, 2017

Scratch that; I'll test it with Python 3.4.

loveJasmine commented Jul 11, 2017

I reproduced this error with both Python 3.6.0 and Python 2.7.10 on Mac, but the errors are different.

For Python 3.6.0:
seq2seq.py", line 910, in embedding_attention_seq2seq
encoder_cell = copy.deepcopy(cell)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 161, in deepcopy
y = copier(memo)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 476, in deepcopy
setattr(result, k, copy.deepcopy(v, memo))
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 215, in _deepcopy_list
append(deepcopy(a, memo))
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/copy.py", line 169, in deepcopy
rv = reductor(4)
TypeError: can't pickle _thread.lock objects

For Python 2.7.10:
encoder_cell = copy.deepcopy(cell)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 174, in deepcopy
y = copier(memo)
File "/Library/Python/2.7/site-packages/tensorflow/python/layers/base.py", line 476, in deepcopy
setattr(result, k, copy.deepcopy(v, memo))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 230, in _deepcopy_list
y.append(deepcopy(a, memo))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 230, in _deepcopy_list
y.append(deepcopy(a, memo))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 230, in _deepcopy_list
y.append(deepcopy(a, memo))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 237, in _deepcopy_tuple
y.append(deepcopy(a, memo))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 329, in _reconstruct
y = callable(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy_reg.py", line 93, in newobj
return cls.new(cls, *args)
TypeError: object.new(NotImplementedType) is not safe, use NotImplementedType.new()

printdhruv commented Jul 17, 2017

@mattfeel Can you share the log from when you execute python translate.py? It will download the WMT data from the internet, so make sure you have at least 20 GB of disk space.

fabriciorsf commented Jul 29, 2017

I get this same error with Keras+TensorFlow on fit_generator.
The same code with Keras+Theano works fine.

The following call produces the error:

model.fit_generator(self.train_inputs, steps_per_epoch=self.train_inputs.steps_per_epoch(),
                    validation_data=test_input_sequence, validation_steps=steps_test,
                    max_queue_size=self.train_inputs.workers, epochs=i+1, initial_epoch=i,
                    workers=self.train_inputs.workers, use_multiprocessing=True,
                    callbacks = callbacks)

The error:

Epoch 1/1
Traceback (most recent call last):
  File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/site-packages/keras/utils/data_utils.py", line 497, in get
    inputs = self.queue.get(block=True).get()
  File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/multiprocessing/pool.py", line 608, in get
    raise self._value
  File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/multiprocessing/pool.py", line 385, in _handle_tasks
    put(task)
  File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./myfolder/mycode.py", line 473, in <module>
    main()
  File "./myfolder/mycode.py", line 459, in main
    autonem.train_autonem(args.embedding_file, args.tune_embedding)
  File "./myfolder/mycode.py", line 182, in train_autonem
    callbacks = callbacks)
  File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/site-packages/keras/engine/training.py", line 1809, in fit_generator
    generator_output = next(output_generator)
  File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/site-packages/keras/utils/data_utils.py", line 502, in get
    raise StopIteration(e)
StopIteration: can't pickle _thread.lock objects
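
The Keras path hits the same underlying limitation: with use_multiprocessing=True, fit_generator hands work to a process pool, and _ForkingPickler.dumps (visible in the first traceback) has to pickle everything reachable from the submitted task; anything that holds a thread lock cannot be serialized. A minimal standalone sketch with a hypothetical class, not taken from the code above:

    import pickle
    import threading

    class Batch(object):
        """Stands in for a task object that (indirectly) holds unpicklable state."""
        def __init__(self):
            self.lock = threading.Lock()

    pickle.dumps(Batch())  # raises TypeError: can't pickle _thread.lock objects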

System information:

Have I written custom code: Yes
OS Platform and Distribution: Linux GnomeUbuntu 16.04, but with a newer kernel
TensorFlow installed from: pip
TensorFlow version: 1.2.1
Python version: 3.6.1 (Miniconda3 4.3.11, 64-bit)
Bazel version (if compiling from source): I don't know.
CUDA/cuDNN version: not used; my graphics card is an AMD Radeon
GPU model and memory: AMD Radeon R7 M260/M265
CPU model: Intel® Core™ i7-4510U CPU @ 2.00GHz × 4
RAM: 16 GiB (2x8 GiB, dual-channel)
Exact command to reproduce:

history = CumulativeHistory()
callbacks = [history]
from keras import backend as K
if K.backend() == 'tensorflow':
  board = keras.callbacks.TensorBoard(log_dir=f"{self.prefix_folder_logs}{time()}",
                                    histogram_freq=1, write_graph=True, write_images=True)
  callbacks.append(board)
metric_to_compare = 'val_euclidean_distance'
print("Begin of training model...")
for i in range(MAX_NUM_EPOCHS):
  model.fit_generator(self.train_inputs, steps_per_epoch=self.train_inputs.steps_per_epoch(),
                      validation_data=test_input_sequence, validation_steps=steps_test,
                      max_queue_size=self.train_inputs.workers, epochs=i+1, initial_epoch=i,
                      workers=self.train_inputs.workers, use_multiprocessing=True,
                      callbacks = callbacks)
  try:
    metrics_diff = history.history[metric_to_compare][i] - min(history.history[metric_to_compare][:i])
  except:
    metrics_diff = -1
  if metrics_diff < 0:
    self._save_models(i)
    self.data_processor = None  # Empty memory
    best_epoch = i
    num_worse_epochs = 0
  elif metrics_diff > 0:
    num_worse_epochs += 1
    if num_worse_epochs >= PATIENCE:
      print("Ran out of patience. Stopping training.")
      break
print("End of training model.")

Collected information:

(myenv) myuser@mymachine:~$ ./tf_env_collect.sh 
Collecting system information...
2017-07-28 21:05:00.140602: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-28 21:05:00.140632: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-28 21:05:00.140645: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-07-28 21:05:00.140650: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-28 21:05:00.140656: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Wrote environment to tf_env.txt. You can review the contents of that file.
and use it to populate the fields in the github issue template.

cat tf_env.txt

(myenv) myuser@mymachine:~$ cat tf_env.txt

== cat /etc/issue ===============================================
Linux mymachine 4.4.0-87-generic #110~14.04.1-Ubuntu SMP Tue Jul 18 14:51:32 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
VERSION="14.04.5 LTS, Trusty Tahr"
VERSION_ID="14.04"

== are we in docker =============================================
No

== compiler =====================================================
c++ (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


== uname -a =====================================================
Linux mymachine 4.4.0-87-generic #110~14.04.1-Ubuntu SMP Tue Jul 18 14:51:32 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

== check pips ===================================================
numpy (1.13.1)
protobuf (3.3.0)
tensorflow (1.2.1)

== check for virtualenv =========================================
False

== tensorflow import ============================================
tf.VERSION = 1.2.1
tf.GIT_VERSION = v1.2.0-5-g435cdfc
tf.COMPILER_VERSION = v1.2.0-5-g435cdfc
Sanity check: array([1], dtype=int32)

== env ==========================================================
LD_LIBRARY_PATH /opt/programs/miniconda3/envs/myenv/lib:/opt/intel/compilers_and_libraries_2017.4.196/linux/tbb/lib/intel64_lin/gcc4.7:/opt/intel/compilers_and_libraries_2017.4.196/linux/compiler/lib/intel64_lin:/opt/intel/compilers_and_libraries_2017.4.196/linux/mkl/lib/intel64_lin::/opt/programs/acml/gfortran64/lib
DYLD_LIBRARY_PATH is unset

== nvidia-smi ===================================================
./tf_env_collect.sh: line 105: nvidia-smi: command not found

== cuda libs  ===================================================

ebrevdo commented Jul 29, 2017

loveJasmine commented Jul 30, 2017

@ebigelow
it doesn't work!

loveJasmine commented Aug 1, 2017

It is a real disaster!

any workaround?

pwfff commented Aug 1, 2017

@loveJasmine I was able to work around it by replacing the copy.deepcopy call with copy.copy. This probably breaks other things in subtle ways, but for my purposes it allowed me to get on with training.

The line I changed was

packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 848, in embedding_attention_seq2seq
encoder_cell = copy.deepcopy(cell)

@ebrevdo this was after trying the RC and compiling master myself to no avail.
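
For anyone who wants to try the same thing, the workaround amounts to a one-line local edit of the installed seq2seq.py. A sketch of that patch (the line number is approximate for TF 1.2.x, this is not an official fix, and the shallow copy means the encoder and decoder wrappers share the same underlying cell object):

    # Local edit inside embedding_attention_seq2seq() in
    # tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py
    # ('copy' and 'cell' are already defined there):
    # encoder_cell = copy.deepcopy(cell)   # original line (~848), fails on _thread.lock
    encoder_cell = copy.copy(cell)         # workaround: shallow copy instead of deep copy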

ebrevdo commented Aug 1, 2017

loveJasmine commented Aug 2, 2017

@pwfff
in my project, changing deepcopy to copy lets it move on, but it doesn't make sense logically

Chesao commented Aug 4, 2017

@mattfeel,

I ran into exactly the same problem.
I got past the error by modifying two lines of the following seq2seq.py file from TensorFlow:

file: Anaconda3\Lib\site-packages\tensorflow\contrib\legacy_seq2seq\python\ops\seq2seq.py
848 #encoder_cell = copy.deepcopy(cell)
849 encoder_cell = core_rnn_cell.EmbeddingWrapper(
850 cell, #encoder_cell,

Now, my seq2seq model is being trained.
Good luck!
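
Spelled out, the change drops the deep copy and passes the original cell straight into the embedding wrapper. A sketch of the patched lines, assuming TF 1.2.x (the keyword arguments shown are how the surrounding TF 1.2 code calls EmbeddingWrapper and may differ in other versions; this is a local patch, not an official fix):

    # Local edit around lines 848-850 of
    # tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py:
    # encoder_cell = copy.deepcopy(cell)            # removed: raises "can't pickle _thread.lock objects"
    encoder_cell = core_rnn_cell.EmbeddingWrapper(
        cell,                                       # was: encoder_cell
        embedding_classes=num_encoder_symbols,
        embedding_size=embedding_size)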

loveJasmine commented Aug 4, 2017

@Chesao

What is core_rnn_cell?

Chesao commented Aug 4, 2017

@loveJasmine

it is tf.nn.rnn_cell.GRUCell()
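
For context, core_rnn_cell in the patch above refers to the contrib module that seq2seq.py already imports (EmbeddingWrapper lives there), while the cell being wrapped is the tf.nn.rnn_cell.GRUCell mentioned here. A rough sketch of the relevant names, assuming TF 1.2's contrib layout (the sizes are purely illustrative):

    # Assumed names for TF 1.2.x; inside legacy_seq2seq's seq2seq.py the module is
    # already imported, so only the patched lines above need to change.
    import tensorflow as tf
    from tensorflow.contrib.rnn.python.ops import core_rnn_cell

    cell = tf.nn.rnn_cell.GRUCell(256)  # the kind of cell being passed in
    encoder_cell = core_rnn_cell.EmbeddingWrapper(
        cell, embedding_classes=40000, embedding_size=256)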

@dshahrokhian

dshahrokhian commented Aug 16, 2017

Same issue here when using Keras + TensorFlow and creating an sklearn wrapper. Still couldn't find a workaround.

@Chesao

Chesao commented Aug 16, 2017

@dshahrokhian

Have you tried my workaround? Why don't you post your full error log here?

@loveJasmine

loveJasmine commented Aug 16, 2017

@dshahrokhian
your workaround works in my case, thanks

@dshahrokhian

dshahrokhian commented Aug 16, 2017

@Chesao AFAIK I cannot use your method in my case. Here's my error log, but I already decided to put some extra code so I don't need to use the sklearn wrapper that was giving me the error.

Traceback (most recent call last):
  File "experiments/openface_ck+.py", line 77, in <module>
    main()
  File "experiments/openface_ck+.py", line 74, in main
    io_utils.kfold_report_metrics(get_temporal_model, optimal['max_params'], features, labels)
  File "/home/dani/Git/EmotionRecognition/experiments/io_utils.py", line 140, in kfold_report_metrics
    print("Test loss and Confidence Interval: %.2f (+/- %.2f)" % (np.mean(losses), np.std(losses)))
  File "/home/dani/Git/EmotionRecognition/experiments/io_utils.py", line 225, in plot_learning_curve
    
  File "/home/dani/Software/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_validation.py", line 772, in learning_curve
    for train, test in cv_iter
  File "/home/dani/Software/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 758, in __call__
    while self.dispatch_one_batch(iterator):
  File "/home/dani/Software/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 603, in dispatch_one_batch
    tasks = BatchedCalls(itertools.islice(iterator, batch_size))
  File "/home/dani/Software/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 127, in __init__
    self.items = list(iterator_slice)
  File "/home/dani/Software/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_validation.py", line 773, in <genexpr>
    for n_train_samples in train_sizes_abs)
  File "/home/dani/Software/anaconda3/lib/python3.6/site-packages/sklearn/base.py", line 69, in clone
    new_object_params[name] = clone(param, safe=False)
  File "/home/dani/Software/anaconda3/lib/python3.6/site-packages/sklearn/base.py", line 60, in clone
    return copy.deepcopy(estimator)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 215, in _deepcopy_list
    append(deepcopy(a, memo))
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/home/dani/Software/anaconda3/lib/python3.6/copy.py", line 169, in deepcopy
    rv = reductor(4)
TypeError: can't pickle _thread.lock objects

Thanks,
Dani

@ebrevdo

ebrevdo commented Aug 16, 2017

Contributor

Can someone provide a minimal reproducible code snippet? Ideally ~ 10 lines with tf.constant() inputs?

@JankiMehta

JankiMehta commented Aug 28, 2017

@ebrevdo

import pickle
from keras.layers import Dense
from keras.layers import LSTM
from keras.models import Sequential
from keras.metrics import categorical_accuracy

model = Sequential()
model.add(LSTM(20, return_sequences=True, stateful=False, batch_input_shape=(10, 20, 4)))
model.add(Dense(3, activation='softmax'))

model.compile(loss="categorical_crossentropy",
              optimizer='adam',
              metrics=[categorical_accuracy],
              sample_weight_mode='temporal')

data_path = '/home/ubuntu/invoice/data/'     #any path to store pickle dump
output_file_path = data_path + 'model.dat'
with open(output_file_path, 'wb') as f:
    pickle.dump(model, f)

Error Msg:

Traceback (most recent call last):
  File "<input>", line 19, in <module>
TypeError: can't pickle _thread.lock objects
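
A commonly suggested alternative is to skip pickling the model object and use Keras' own serialization instead (a minimal sketch; 'model.h5' is just an example path):

model.save('model.h5')                      # writes architecture, weights and optimizer state to HDF5

from keras.models import load_model
restored = load_model('model.h5')           # rebuilds an equivalent model without pickle
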
@zzks

zzks commented Sep 5, 2017

Same problem... Keras 1.2.0, TF 0.12.1

@maxim5

maxim5 commented Dec 23, 2017

Here's what you can do in your own code to get past this (without modifying TensorFlow's seq2seq.py):

setattr(tf.contrib.rnn.GRUCell, '__deepcopy__', lambda self, _: self)
setattr(tf.contrib.rnn.BasicLSTMCell, '__deepcopy__', lambda self, _: self)
setattr(tf.contrib.rnn.MultiRNNCell, '__deepcopy__', lambda self, _: self)

If you're using core RNN API, you can change to tf.nn.rnn_cell.GRUCell (both variants work, actually).

This is a workaround. The proper fix is either to make cells deep-copyable or to get rid of the copy.deepcopy call. See also this StackOverflow discussion.
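
For example, applying the patch before building the model and checking that it took effect (a quick sanity check; assumes TF 1.x with the contrib API available):

import copy
import tensorflow as tf

setattr(tf.contrib.rnn.GRUCell, '__deepcopy__', lambda self, _: self)

cell = tf.contrib.rnn.GRUCell(64)
assert copy.deepcopy(cell) is cell   # deepcopy now returns the same cell object
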

@tensorflowbutler

tensorflowbutler commented Jan 7, 2018

Member

Nagging Awaiting TensorFlower: It has been 14 days with no activity and the awaiting tensorflower label was assigned. Please update the label and/or status accordingly.

@tensorflowbutler

tensorflowbutler commented Jan 23, 2018

Member

A member of the TensorFlow organization has replied after the stat:awaiting tensorflower label was applied.

@tatatodd

tatatodd commented Jan 25, 2018

Member

@ebrevdo any updates on reproducing or fixing this issue?

@AmnaKhan1

AmnaKhan1 commented Feb 1, 2018

I am facing a similar error. Keras.save didn't help.

File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 155, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 218, in _deepcopy_list
y.append(deepcopy(a, memo))
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 155, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 223, in _deepcopy_tuple
y = [deepcopy(a, memo) for a in x]
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 223, in
y = [deepcopy(a, memo) for a in x]
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 155, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 243, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 182, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 297, in _reconstruct
state = deepcopy(state, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 155, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 243, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 182, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 297, in _reconstruct
state = deepcopy(state, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 155, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 243, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 182, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 297, in _reconstruct
state = deepcopy(state, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 155, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 243, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 182, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "C:\ProgramData\Anaconda2\envs\ENV_tensorflow\lib\copy.py", line 306, in _reconstruct
y.__dict__.update(state)
AttributeError: 'NoneType' object has no attribute 'update'

@maxim5

maxim5 commented Feb 2, 2018

@AmnaKhan1 the cause of this stack trace is the same; my previous #11157 (comment) solves it too.

@tensorflowbutler

tensorflowbutler commented Feb 17, 2018

Member

Nagging Awaiting TensorFlower: It has been 14 days with no activity and the awaiting tensorflower label was assigned. Please update the label and/or status accordingly.

@ebrevdo

ebrevdo commented Feb 17, 2018

Contributor

A couple of solutions have been proposed. E.g. using keras.save. Closing.

@josh-marsh

josh-marsh commented Apr 12, 2018

Does anyone know of any other solutions? I am having the same problem, and none of the above solutions seem to work.

@grayfall

grayfall commented May 1, 2018

@ebrevdo unfortunately, keras.save doesn't seem to be a universal solution, because that's what brought me here in the first place. I've checked all major TensorFlow releases from 1.4 to 1.7; the issue is persistent.

@thormacy

thormacy commented Jun 21, 2018

It seems to be an issue caused by calling embedding_attention_seq2seq several times while training with buckets.

@thormacy

thormacy commented Jun 21, 2018

One alternative might be to fit only one bucket at a time.

@thormacy

thormacy commented Jun 22, 2018

@maxim5
your solution does not work for me.
@Chesao's solution works.
But I am not sure whether it has any other side effects.

@suffic

suffic commented Jun 22, 2018

Seeing same issue with tf 1.5.0:
pk.dumps(obj)
TypeError: can't pickle _thread.lock objects

@zdx3578

zdx3578 commented Jul 3, 2018

Saved final_models/thor_model.pkl
Traceback (most recent call last):
File "train_expert/train_thor.py", line 46, in
main()
File "train_expert/train_thor.py", line 42, in main
act.save("final_models/thor_model.pkl")
File "/home/sdc/github/baselines-rudder/baselines/deepq/simple.py", line 62, in save
cloudpickle.dump((model_data, self._act_params), f)
File "/home/sdc/github/baselines-rudder/baseline/lib/python3.5/site-packages/cloudpickle/cloudpickle.py", line 879, in dump
CloudPickler(file, protocol=protocol).dump(obj)
File "/home/sdc/github/baselines-rudder/baseline/lib/python3.5/site-packages/cloudpickle/cloudpickle.py", line 268, in dump
return Pickler.dump(self, obj)
File "/usr/lib/python3.5/pickle.py", line 408, in dump
self.save(obj)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.5/pickle.py", line 725, in save_tuple
save(element)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.5/pickle.py", line 810, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.5/pickle.py", line 836, in _batch_setitems
save(v)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sdc/github/baselines-rudder/baseline/lib/python3.5/site-packages/cloudpickle/cloudpickle.py", line 413, in save_function
self.save_function_tuple(obj)
File "/home/sdc/github/baselines-rudder/baseline/lib/python3.5/site-packages/cloudpickle/cloudpickle.py", line 559, in save_function_tuple
save(state)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.5/pickle.py", line 810, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.5/pickle.py", line 836, in _batch_setitems
save(v)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.5/pickle.py", line 770, in save_list
self._batch_appends(obj)
File "/usr/lib/python3.5/pickle.py", line 797, in _batch_appends
save(tmp[0])
File "/usr/lib/python3.5/pickle.py", line 520, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python3.5/pickle.py", line 623, in save_reduce
save(state)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.5/pickle.py", line 810, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.5/pickle.py", line 836, in _batch_setitems
save(v)
File "/usr/lib/python3.5/pickle.py", line 520, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python3.5/pickle.py", line 623, in save_reduce
save(state)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.5/pickle.py", line 810, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.5/pickle.py", line 836, in _batch_setitems
save(v)
File "/usr/lib/python3.5/pickle.py", line 520, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python3.5/pickle.py", line 623, in save_reduce
save(state)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.5/pickle.py", line 810, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.5/pickle.py", line 836, in _batch_setitems
save(v)
File "/usr/lib/python3.5/pickle.py", line 520, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python3.5/pickle.py", line 623, in save_reduce
save(state)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.5/pickle.py", line 810, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.5/pickle.py", line 836, in _batch_setitems
save(v)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sdc/github/baselines-rudder/baseline/lib/python3.5/site-packages/cloudpickle/cloudpickle.py", line 630, in save_builtin_function
return self.save_function(obj)
File "/home/sdc/github/baselines-rudder/baseline/lib/python3.5/site-packages/cloudpickle/cloudpickle.py", line 400, in save_function
return self.save_reduce(obj=obj, *rv)
File "/usr/lib/python3.5/pickle.py", line 599, in save_reduce
save(args)
File "/usr/lib/python3.5/pickle.py", line 475, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.5/pickle.py", line 725, in save_tuple
save(element)
File "/usr/lib/python3.5/pickle.py", line 495, in save
rv = reduce(self.proto)
TypeError: can't pickle _thread.lock objects

tf 1.8

@safdark

safdark commented Jul 25, 2018

I'm getting a similar error with Keras 2.0.9 and TensorFlow 1.3.0. Note, however, that this happens only when I wrap a model with keras.utils.training_utils.multi_gpu_model, as below:

model = multi_gpu_model(model, gpus=8) # I was trying to utilize 8 GPUs
..
..
checkpointer = ModelCheckpoint(filepath='results/model.hd5', verbose=0)
..
hist = model.fit_generator(generator=audio_gen.next_train(), steps_per_epoch=steps_per_epoch,
        epochs=epochs, validation_data=audio_gen.next_valid(), validation_steps=validation_steps,
        callbacks=[checkpointer], verbose=verbose)  <<<<<<<<<<< # ON THIS LINE #

In the absence of multi_gpu_model, this code above runs without a problem.

===================== OUTPUT ======================
Epoch 1/20
24/106 [================>.] - ETA: 35 - loss: 194.7247

TypeError                                 Traceback (most recent call last)
     74     hist = model.fit_generator(generator=audio_gen.next_train(), steps_per_epoch=steps_per_epoch,
     75         epochs=epochs, validation_data=audio_gen.next_valid(), validation_steps=validation_steps,
---> 76         callbacks=[checkpointer], verbose=verbose)
     77 
     78     # save model loss

/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     85                 warnings.warn('Update your `' + object_name +
     86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 87             return func(*args, **kwargs)
     88         wrapper._original_function = func
     89         return wrapper

/usr/local/lib/python3.5/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   2115                         break
   2116 
-> 2117                 callbacks.on_epoch_end(epoch, epoch_logs)
   2118                 epoch += 1
   2119                 if callback_model.stop_training:

/usr/local/lib/python3.5/dist-packages/keras/callbacks.py in on_epoch_end(self, epoch, logs)
     71         logs = logs or {}
     72         for callback in self.callbacks:
---> 73             callback.on_epoch_end(epoch, logs)
     74 
     75     def on_batch_begin(self, batch, logs=None):

/usr/local/lib/python3.5/dist-packages/keras/callbacks.py in on_epoch_end(self, epoch, logs)
    423                     self.model.save_weights(filepath, overwrite=True)
    424                 else:
--> 425                     self.model.save(filepath, overwrite=True)
    426 
    427 

/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py in save(self, filepath, overwrite, include_optimizer)
   2554         """
   2555         from ..models import save_model
-> 2556         save_model(self, filepath, overwrite, include_optimizer)
   2557 
   2558     def save_weights(self, filepath, overwrite=True):

/usr/local/lib/python3.5/dist-packages/keras/models.py in save_model(model, filepath, overwrite, include_optimizer)
    105         f.attrs['model_config'] = json.dumps({
    106             'class_name': model.__class__.__name__,
--> 107             'config': model.get_config()
    108         }, default=get_json_type).encode('utf8')
    109 

/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py in get_config(self)
   2395             model_outputs.append([layer.name, new_node_index, tensor_index])
   2396         config['output_layers'] = model_outputs
-> 2397         return copy.deepcopy(config)
   2398 
   2399     @classmethod

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_list(x, memo)
    216     memo[id(x)] = y
    217     for a in x:
--> 218         y.append(deepcopy(a, memo))
    219     return y
    220 d[list] = _deepcopy_list

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_tuple(x, memo)
    221 
    222 def _deepcopy_tuple(x, memo):
--> 223     y = [deepcopy(a, memo) for a in x]
    224     # We're not going to put the tuple in the memo, but it's still important we
    225     # check for it, in case the tuple contains recursive mutable structures.

/usr/lib/python3.5/copy.py in <listcomp>(.0)
    221 
    222 def _deepcopy_tuple(x, memo):
--> 223     y = [deepcopy(a, memo) for a in x]
    224     # We're not going to put the tuple in the memo, but it's still important we
    225     # check for it, in case the tuple contains recursive mutable structures.

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_tuple(x, memo)
    221 
    222 def _deepcopy_tuple(x, memo):
--> 223     y = [deepcopy(a, memo) for a in x]
    224     # We're not going to put the tuple in the memo, but it's still important we
    225     # check for it, in case the tuple contains recursive mutable structures.

/usr/lib/python3.5/copy.py in <listcomp>(.0)
    221 
    222 def _deepcopy_tuple(x, memo):
--> 223     y = [deepcopy(a, memo) for a in x]
    224     # We're not going to put the tuple in the memo, but it's still important we
    225     # check for it, in case the tuple contains recursive mutable structures.

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    180                             raise Error(
    181                                 "un(deep)copyable object of type %s" % cls)
--> 182                 y = _reconstruct(x, rv, 1, memo)
    183 
    184     # If is its own copy, don't memoize.

/usr/lib/python3.5/copy.py in _reconstruct(x, info, deep, memo)
    295     if state is not None:
    296         if deep:
--> 297             state = deepcopy(state, memo)
    298         if hasattr(y, '__setstate__'):
    299             y.__setstate__(state)

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    180                             raise Error(
    181                                 "un(deep)copyable object of type %s" % cls)
--> 182                 y = _reconstruct(x, rv, 1, memo)
    183 
    184     # If is its own copy, don't memoize.

/usr/lib/python3.5/copy.py in _reconstruct(x, info, deep, memo)
    295     if state is not None:
    296         if deep:
--> 297             state = deepcopy(state, memo)
    298         if hasattr(y, '__setstate__'):
    299             y.__setstate__(state)

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_method(x, memo)
    248 
    249 def _deepcopy_method(x, memo): # Copy instance methods
--> 250     return type(x)(x.__func__, deepcopy(x.__self__, memo))
    251 _deepcopy_dispatch[types.MethodType] = _deepcopy_method
    252 

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    180                             raise Error(
    181                                 "un(deep)copyable object of type %s" % cls)
--> 182                 y = _reconstruct(x, rv, 1, memo)
    183 
    184     # If is its own copy, don't memoize.

/usr/lib/python3.5/copy.py in _reconstruct(x, info, deep, memo)
    295     if state is not None:
    296         if deep:
--> 297             state = deepcopy(state, memo)
    298         if hasattr(y, '__setstate__'):
    299             y.__setstate__(state)

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    180                             raise Error(
    181                                 "un(deep)copyable object of type %s" % cls)
--> 182                 y = _reconstruct(x, rv, 1, memo)
    183 
    184     # If is its own copy, don't memoize.

/usr/lib/python3.5/copy.py in _reconstruct(x, info, deep, memo)
    295     if state is not None:
    296         if deep:
--> 297             state = deepcopy(state, memo)
    298         if hasattr(y, '__setstate__'):
    299             y.__setstate__(state)

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    180                             raise Error(
    181                                 "un(deep)copyable object of type %s" % cls)
--> 182                 y = _reconstruct(x, rv, 1, memo)
    183 
    184     # If is its own copy, don't memoize.

/usr/lib/python3.5/copy.py in _reconstruct(x, info, deep, memo)
    295     if state is not None:
    296         if deep:
--> 297             state = deepcopy(state, memo)
    298         if hasattr(y, '__setstate__'):
    299             y.__setstate__(state)

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    180                             raise Error(
    181                                 "un(deep)copyable object of type %s" % cls)
--> 182                 y = _reconstruct(x, rv, 1, memo)
    183 
    184     # If is its own copy, don't memoize.

/usr/lib/python3.5/copy.py in _reconstruct(x, info, deep, memo)
    295     if state is not None:
    296         if deep:
--> 297             state = deepcopy(state, memo)
    298         if hasattr(y, '__setstate__'):
    299             y.__setstate__(state)

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    180                             raise Error(
    181                                 "un(deep)copyable object of type %s" % cls)
--> 182                 y = _reconstruct(x, rv, 1, memo)
    183 
    184     # If is its own copy, don't memoize.

/usr/lib/python3.5/copy.py in _reconstruct(x, info, deep, memo)
    295     if state is not None:
    296         if deep:
--> 297             state = deepcopy(state, memo)
    298         if hasattr(y, '__setstate__'):
    299             y.__setstate__(state)

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    153     copier = _deepcopy_dispatch.get(cls)
    154     if copier:
--> 155         y = copier(x, memo)
    156     else:
    157         try:

/usr/lib/python3.5/copy.py in _deepcopy_dict(x, memo)
    241     memo[id(x)] = y
    242     for key, value in x.items():
--> 243         y[deepcopy(key, memo)] = deepcopy(value, memo)
    244     return y
    245 d[dict] = _deepcopy_dict

/usr/lib/python3.5/copy.py in deepcopy(x, memo, _nil)
    172                     reductor = getattr(x, "__reduce_ex__", None)
    173                     if reductor:
--> 174                         rv = reductor(4)
    175                     else:
    176                         reductor = getattr(x, "__reduce__", None)

TypeError: can't pickle _thread.lock objects
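
For context, the failure at the very bottom of the traceback is generic rather than specific to the tutorial: copy.deepcopy has no dispatch entry for _thread.lock, so at copy.py line 174 it falls back to __reduce_ex__ (i.e. pickling), and lock objects cannot be pickled. A minimal sketch that reproduces the same error on Python 3.5/3.6, assuming nothing about the tutorial code itself (Holder is a hypothetical stand-in for any object whose state reaches a lock):

# minimal_repro.py -- not the tutorial code, just the failure mechanism
import copy
import threading

class Holder:
    def __init__(self):
        # Any attribute that is (or contains) a _thread.lock makes the
        # instance un-deepcopy-able, because deepcopy falls back to pickling.
        self.lock = threading.Lock()

copy.deepcopy(Holder())
# -> TypeError: can't pickle _thread.lock objects

The same applies to any nested dict/list/tuple graph that eventually reaches a lock, which is why the traceback above recurses through _deepcopy_dict, _deepcopy_list and _deepcopy_tuple before failing.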
ebrevdo commented Jul 25, 2018
