TypeError: can't pickle _thread.lock objects #11157
Comments
Hi, |
I also tried it with native Python, and the error persists. The error disappears only when downgrading to TensorFlow version 1.0.0. |
Can you list the exact steps and source code you are executing? |
I'm running the code found here: https://github.com/tensorflow/models/tree/master/tutorials/rnn/translate I run it with: I'm running this code as-is. Python is native (no Anaconda, no virtualenv, etc.). Also, the same error happens on both Linux and Windows, and on both systems it is fixed only by using TensorFlow 1.0.0. |
That's very odd that the error disappears with TensorFlow 1.0. @lukaszkaiser do you know what might be happening here? |
There were a lot of changes to RNNCell since 1.0, must ask @ebrevdo to take a look. |
Do you get this error in earlier versions of Python (e.g., Python 3.4)? I'm trying to replicate locally with Python 2.7. |
Scratch that; I'll test it with Python 3.4. |
I reproduced this error on Python 3.6.0 and Python 2.7.10 on Mac, but the errors are different: for Python 3.6.0: for Python 2.7.10: |
@mattfeel Can you share the log from when you execute |
I get this same error with Keras+TensorFlow on fit_generator. The same code with Keras+Theano works fine. The following command gets the error:
The error:
System information: Have I written custom code: Yes
Collected information:
|
Try using the TF 1.3 RC.
…On Jul 28, 2017 2:35 PM, "Fabrício Raphael Silva Pereira" < ***@***.***> wrote:
I get this same error with Keras+TensorFlow on fit_generator.
And the same code with Keras+Theano works fine.
The following command gets the error:
model.fit_generator(self.train_inputs, steps_per_epoch=self.train_inputs.steps_per_epoch(),
                    validation_data=test_input_sequence, validation_steps=steps_test,
                    max_queue_size=self.train_inputs.workers, epochs=i+1, initial_epoch=i,
                    workers=self.train_inputs.workers, use_multiprocessing=True,
                    callbacks=callbacks)
The error:
Epoch 1/1
Traceback (most recent call last):
File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/site-packages/keras/utils/data_utils.py", line 497, in get
inputs = self.queue.get(block=True).get()
File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/multiprocessing/pool.py", line 608, in get
raise self._value
File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/multiprocessing/pool.py", line 385, in _handle_tasks
put(task)
File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./myfolder/mycode.py", line 473, in <module>
main()
File "./myfolder/mycode.py", line 459, in main
autonem.train_autonem(args.embedding_file, args.tune_embedding)
File "./myfolder/mycode.py", line 182, in train_autonem
callbacks = callbacks)
File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/site-packages/keras/engine/training.py", line 1809, in fit_generator
generator_output = next(output_generator)
File "/opt/programs/miniconda3/envs/myenv/lib/python3.6/site-packages/keras/utils/data_utils.py", line 502, in get
raise StopIteration(e)
StopIteration: can't pickle _thread.lock objects
System information:
Have I written custom code: Yes
OS Platform and Distribution: Linux GnomeUbuntu 16.04, but with a newer kernel
TensorFlow installed from: pip
TensorFlow version: 1.2.1
Python version: 3.6.1 (Miniconda3 4.3.11-64bit)
Bazel version (if compiling from source): I don't know.
CUDA/cuDNN version: not used; my graphics card is an AMD Radeon
GPU model and memory: AMD Radeon R7 M260/M265
CPU model: Intel® Core™ i7-4510U CPU @ 2.00GHz × 4
RAM: 16 GiB (2×8 GiB dual-channel)
Exact command to reproduce:
history = CumulativeHistory()
callbacks = [history]
from keras import backend as K
if K.backend() == 'tensorflow':
    board = keras.callbacks.TensorBoard(log_dir=f"{self.prefix_folder_logs}{time()}",
                                        histogram_freq=1, write_graph=True, write_images=True)
    callbacks.append(board)
metric_to_compare = 'val_euclidean_distance'
print("Begin of training model...")
for i in range(MAX_NUM_EPOCHS):
    model.fit_generator(self.train_inputs, steps_per_epoch=self.train_inputs.steps_per_epoch(),
                        validation_data=test_input_sequence, validation_steps=steps_test,
                        max_queue_size=self.train_inputs.workers, epochs=i+1, initial_epoch=i,
                        workers=self.train_inputs.workers, use_multiprocessing=True,
                        callbacks=callbacks)
    try:
        metrics_diff = history.history[metric_to_compare][i] - min(history.history[metric_to_compare][:i])
    except:
        metrics_diff = -1
    if metrics_diff < 0:
        self._save_models(i)
        self.data_processor = None  # Empty memory
        best_epoch = i
        num_worse_epochs = 0
    elif metrics_diff > 0:
        num_worse_epochs += 1
        if num_worse_epochs >= PATIENCE:
            print("Ran out of patience. Stopping training.")
            break
print("End of training model.")
Collected information:
(myenv) ***@***.***:~$ ./tf_env_collect.sh
Collecting system information...
2017-07-28 21:05:00.140602: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-28 21:05:00.140632: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-28 21:05:00.140645: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-07-28 21:05:00.140650: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-28 21:05:00.140656: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Wrote environment to tf_env.txt. You can review the contents of that file.
and use it to populate the fields in the github issue template.
cat tf_env.txt
(myenv) ***@***.***:~$ cat tf_env.txt
== cat /etc/issue ===============================================
Linux mymachine 4.4.0-87-generic #110~14.04.1-Ubuntu SMP Tue Jul 18 14:51:32 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
VERSION="14.04.5 LTS, Trusty Tahr"
VERSION_ID="14.04"
== are we in docker =============================================
No
== compiler =====================================================
c++ (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
== uname -a =====================================================
Linux mymachine 4.4.0-87-generic #110~14.04.1-Ubuntu SMP Tue Jul 18 14:51:32 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
== check pips ===================================================
numpy (1.13.1)
protobuf (3.3.0)
tensorflow (1.2.1)
== check for virtualenv =========================================
False
== tensorflow import ============================================
tf.VERSION = 1.2.1
tf.GIT_VERSION = v1.2.0-5-g435cdfc
tf.COMPILER_VERSION = v1.2.0-5-g435cdfc
Sanity check: array([1], dtype=int32)
== env ==========================================================
LD_LIBRARY_PATH /opt/programs/miniconda3/envs/myenv/lib:/opt/intel/compilers_and_libraries_2017.4.196/linux/tbb/lib/intel64_lin/gcc4.7:/opt/intel/compilers_and_libraries_2017.4.196/linux/compiler/lib/intel64_lin:/opt/intel/compilers_and_libraries_2017.4.196/linux/mkl/lib/intel64_lin::/opt/programs/acml/gfortran64/lib
DYLD_LIBRARY_PATH is unset
== nvidia-smi ===================================================
./tf_env_collect.sh: line 105: nvidia-smi: command not found
== cuda libs ===================================================
|
@ebigelow |
It is a real disaster! Any workaround? |
@lovejasmine I was able to work around it by replacing the .deepcopy call with .copy. This probably breaks other things in subtle ways, but for my purposes it allowed me to get on with training. The line I changed was
packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 848, in embedding_attention_seq2seq
encoder_cell = copy.deepcopy(cell)
@ebrevdo this was after trying the RC and compiling master myself to no avail. |
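The failure mode behind this workaround can be reproduced without TensorFlow at all. The sketch below uses a hypothetical Cell class (not TF's RNNCell) that simply holds a threading.Lock, as the layer objects in the traceback do: copy.copy succeeds because it shares attributes, while copy.deepcopy must clone the lock and raises the same TypeError.

```python
import copy
import threading

class Cell:
    """Hypothetical stand-in for an RNNCell whose state includes a lock."""
    def __init__(self):
        self.weights = [0.5, 0.5]
        self.lock = threading.Lock()  # unpicklable, so also un-deepcopy-able

cell = Cell()

shallow = copy.copy(cell)          # works: the new object shares the same lock
assert shallow.lock is cell.lock

try:
    copy.deepcopy(cell)            # fails: deepcopy tries to clone the lock
except TypeError as err:
    print(f"deepcopy failed: {err}")
```

As the comment above notes, swapping deepcopy for copy means the encoder and decoder end up sharing state, which is why it may break other things in subtle ways.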
Interesting, I thought we had fixed this in master. I'll dig some more after vacation.
…On Aug 1, 2017 10:46 AM, "pwfff" ***@***.***> wrote:
@lovejasmine <https://github.com/lovejasmine> I was able to work around
it by replacing the .deepcopy call with .copy. This probably breaks other
things in subtle ways, but for my purpose it allowed me to get on with
training.
The line I changed was
packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 848, in embedding_attention_seq2seq
encoder_cell = copy.deepcopy(cell)
@ebrevdo <https://github.com/ebrevdo> this was after trying the RC and
compiling master myself to no avail.
|
@pwfff |
@mattfeel, I suffered from exactly the same problem. File: Anaconda3\Lib\site-packages\tensorflow\contrib\legacy_seq2seq\python\ops\seq2seq.py. Now my seq2seq model is training. |
What is core_rnn_cell?
It is tf.nn.rnn_cell.GRUCell(). |
Same issue here when using Keras + TensorFlow and creating an sklearn wrapper. Still couldn't find a workaround. |
Have you tried my workaround? Why don't you post your full error log here? |
@dshahrokhian |
@Chesao AFAIK I cannot use your method in my case. Here's my error log, but I've already decided to add some extra code so I don't need the sklearn wrapper that was giving me the error.
Thanks, |
Can someone provide a minimal reproducible code snippet? Ideally ~ 10 lines with tf.constant() inputs? |
Error Msg:
|
Same problem... Keras 1.2.0, TF 0.12.1. |
Does anyone know of any other solutions? I am having the same problem, and none of the above solutions seem to work. |
@ebrevdo unfortunately, |
It seems to be an issue caused when using embedding_attention_seq2seq several times while training with buckets. |
One alternative might be to fit only one bucket at a time. |
Seeing same issue with tf 1.5.0: |
Saved final_models/thor_model.pkl tf 1.8 |
I'm getting a similar error with keras 2.0.9 and tensorflow 1.3.0. Note, however, that this happens only when I wrap a model using the keras.utils.training_utils.multi_gpu_model class, as below:
Without multi_gpu_model, the code above runs without a problem. ===================== OUTPUT ======================
|
Safdar, that's an unrelated error. Can you open a new issue for it?
…On Wed, Jul 25, 2018, 7:14 AM Skye Wanderman-Milne ***@***.***> wrote:
Assigned #11157 <#11157>
to @ebrevdo <https://github.com/ebrevdo>.
|
It did not work for me. Maybe I did not type it correctly; can you please write it out properly? |
When I use fit_generator() I get the same error with TF 1.8.0 and Keras 2.2.4. |
I think there is no connection with the Python environment, because I tried it in Colaboratory and got the same error. |
But I need this lambda layer!! Do you know another solution? |
@minda163 I've resorted to saving weights alone and using a constructor function to build the model itself. It seems like there is no other way around this, because |
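The save-weights-plus-constructor pattern can be sketched in plain Python; Model and build_model below are hypothetical stand-ins for a Keras model and its builder function. Only the picklable weights are serialized, and everything unpicklable is recreated from code:

```python
import pickle
import threading

class Model:
    """Hypothetical stand-in for a compiled model: weights plus unpicklable state."""
    def __init__(self, weights=None):
        self.weights = weights if weights is not None else [0.0, 0.0, 0.0]
        self.lock = threading.Lock()  # recreated on construction, never serialized

def build_model(weights):
    """Constructor function: rebuilds the full model around saved weights."""
    return Model(weights=weights)

model = Model(weights=[0.1, 0.2, 0.3])

blob = pickle.dumps(model.weights)          # persist only the picklable part
restored = build_model(pickle.loads(blob))  # rebuild the rest from code
assert restored.weights == [0.1, 0.2, 0.3]
```

With Keras this roughly corresponds to model.save_weights() plus re-running the model-building code before load_weights(); later comments in this thread point to model.save/tf.keras.models.save_model as the preferred route.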
Did you fix it? I have the same error. |
Although the issues discussed in this thread are somewhat diverse in origin, this answer on Stack Overflow might help some of those looking for a solution to their problem. |
Did you solve this issue? If you did, please suggest the modifications. |
One way is to use the model saving functionality in TensorFlow: tf.keras.models.save_model <https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model>, or calling model.save <https://www.tensorflow.org/guide/keras/save_and_serialize>. Don't use pickle.
…On Mon, Oct 7, 2019 at 7:19 AM SumitNikam ***@***.***> wrote:
@ebrevdo <https://github.com/ebrevdo>
import pickle
from keras.layers import Dense
from keras.layers import LSTM
from keras.models import Sequential
from keras.metrics import categorical_accuracy

model = Sequential()
model.add(LSTM(20, return_sequences=True, stateful=False, batch_input_shape=(10, 20, 4)))
model.add(Dense(3, activation='softmax'))
model.compile(loss="categorical_crossentropy",
              optimizer='adam',
              metrics=[categorical_accuracy],
              sample_weight_mode='temporal')

data_path = '/home/ubuntu/invoice/data/'  # any path to store pickle dump
output_file_path = data_path + 'model.dat'
with open(output_file_path, 'wb') as f:
    pickle.dump(model, f)
Error Msg:
Traceback (most recent call last):
File "<input>", line 19, in <module>
TypeError: can't pickle _thread.lock objects
Do you solve this issue ? If you solve please suggest me to modifications
please
|
@ebrevdo Well, a huge number of non-TF packages I want to use with my TF model still depend on pickle. I encountered the issue 3 times: |
What happened to this issue?
|
@GGDRriedel this error can happen for many reasons; so your instance may differ a bit from the previous instances in this thread. I suggest opening a new issue on github and answering some of the questions in the template (version of TF, etc) and someone from the keras team can likely help you. |
It's still the inability to pickle dynamic model structures from
so it's basically the same problem. |
It depends on what your Lambda layer does. If it doesn't call into TensorFlow, then it's possible your thread/lock object is completely unrelated to the one in this bug, in which case you would want to file a new bug or debug why you have locks in your code. If your code is accessing variables: don't. Lambda is not meant for code that accesses tf.Variables. If you're in TF1 legacy mode, then the issues are very different and you should file a separate bug (though I guess the suggestion will be to move to TF2 and eager mode/tf.functions completely). Without knowing more we can't help you; for example, we can't see your stack trace, and for that you'd file a new bug anyway.
…On Thu, Feb 25, 2021 at 4:39 AM GGDRriedel ***@***.***> wrote:
It's still the inability to pickle dynamic model structures from
my problem was a lambda layer in my training loop. Please check if you
have any and try to substitute it.
But I need this lambda layer!! Do you know another solution?
so it's basically the same problem.
Anyway I resorted to saving just the weights as a workaround but obviously
this won't work in deployment/production and so on
|
Please go to Stack Overflow for help and support:
http://stackoverflow.com/questions/tagged/tensorflow
If you open a GitHub issue, here is our policy:
Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.
System information
You can collect some of this information using our environment capture script:
https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh
Collecting system information...
2017-06-29 18:35:16.672194: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-29 18:35:16.672242: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-29 18:35:16.672250: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Wrote environment to tf_env.txt. You can review the contents of that file.
and use it to populate the fields in the github issue template.
cat tf_env.txt
== cat /etc/issue ===============================================
Linux GCRGDL171 4.8.0-58-generic #63~16.04.1-Ubuntu SMP Mon Jun 26 18:08:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
VERSION="16.04.2 LTS (Xenial Xerus)"
VERSION_ID="16.04"
VERSION_CODENAME=xenial
== are we in docker =============================================
No
== compiler =====================================================
c++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
== uname -a =====================================================
Linux GCRGDL171 4.8.0-58-generic #63~16.04.1-Ubuntu SMP Mon Jun 26 18:08:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
== check pips ===================================================
numpy (1.12.1)
numpydoc (0.6.0)
protobuf (3.3.0)
tensorflow (1.2.1)
== check for virtualenv =========================================
False
== tensorflow import ============================================
tf.VERSION = 1.2.1
tf.GIT_VERSION = v1.2.0-5-g435cdfc
tf.COMPILER_VERSION = v1.2.0-5-g435cdfc
Sanity check: array([1], dtype=int32)
== env ==========================================================
LD_LIBRARY_PATH /usr/local/cuda:/usr/local/cuda/lib64:
DYLD_LIBRARY_PATH is unset
== nvidia-smi ===================================================
Thu Jun 29 18:35:19 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66 Driver Version: 375.66 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K40m Off | 0000:27:00.0 Off | 0 |
| N/A 27C P8 21W / 235W | 0MiB / 11439MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
== cuda libs ===================================================
/usr/local/cuda-8.0/lib64/libcudart.so.8.0.61
/usr/local/cuda-8.0/lib64/libcudart_static.a
/usr/local/cuda-8.0/doc/man/man7/libcudart.7
/usr/local/cuda-8.0/doc/man/man7/libcudart.so.7
You can obtain the TensorFlow version with
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
Describe the problem
Describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request.
I get the exception TypeError: can't pickle _thread.lock objects. It happens on different machines with the same Python version, just running your example code verbatim.
Source code / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
Traceback (most recent call last):
File "translate.py", line 322, in <module>
tf.app.run()
File "/home/t-mabruc/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "translate.py", line 319, in main
train()
File "translate.py", line 178, in train
model = create_model(sess, False)
File "translate.py", line 136, in create_model
dtype=dtype)
File "/home/t-mabruc/models/tutorials/rnn/translate/seq2seq_model.py", line 179, in __init__
softmax_loss_function=softmax_loss_function)
File "/home/t-mabruc/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 1206, in model_with_buckets
decoder_inputs[:bucket[1]])
File "/home/t-mabruc/models/tutorials/rnn/translate/seq2seq_model.py", line 178, in <lambda>
lambda x, y: seq2seq_f(x, y, False),
File "/home/t-mabruc/models/tutorials/rnn/translate/seq2seq_model.py", line 142, in seq2seq_f
dtype=dtype)
File "/home/t-mabruc/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 848, in embedding_attention_seq2seq
encoder_cell = copy.deepcopy(cell)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 161, in deepcopy
y = copier(memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 476, in __deepcopy__
setattr(result, k, copy.deepcopy(v, memo))
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 215, in _deepcopy_list
append(deepcopy(a, memo))
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/t-mabruc/anaconda3/lib/python3.6/copy.py", line 169, in deepcopy
rv = reductor(4)
TypeError: can't pickle _thread.lock objects