Error during Inference #19

Closed · betegon opened this issue Jan 30, 2020 · 7 comments

betegon commented Jan 30, 2020

Hi,

Thank you for your effort in DeepXi @anicolson .

I have been able to run inference with the files you provide, but when I use my own file to denoise, I get the following error:

The test_x list has a total of 2 entries.
Loading sample statistics from pickle file...
Preparing graph...
Inference...
  0%|                                                           | 0/2 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
    return fn(*args)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[89] = 89 is not in [0, 89)
	 [[{{node boolean_mask/GatherV2}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "deepxi.py", line 44, in <module>
    if args.infer: infer(sess, net, args)
  File "lib/dev/infer.py", line 36, in infer
    net.nframes_ph: input_feat[1], net.training_ph: False}) # output of network.
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
    run_metadata)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[89] = 89 is not in [0, 89)
	 [[node boolean_mask/GatherV2 (defined at lib/dev/ResNet.py:88) ]]

Original stack trace for 'boolean_mask/GatherV2':
  File "deepxi.py", line 40, in <module>
    net = deepxi_net.deepxi_net(args)
  File "lib/dev/deepxi_net.py", line 32, in __init__
    d_model=args.d_model, d_f=args.d_f, k_size=args.k_size, max_d_rate=args.max_d_rate)
  File "lib/dev/ResNet.py", line 88, in ResNet
    if boolean_mask: blocks[-1] = tf.boolean_mask(blocks[-1], tf.sequence_mask(seq_len))
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py", line 1386, in boolean_mask
    return _apply_mask_1d(tensor, mask, axis)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py", line 1355, in _apply_mask_1d
    return gather(reshaped_tensor, indices, axis=axis)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py", line 3475, in gather
    return gen_array_ops.gather_v2(params, indices, axis, name=name)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 4097, in gather_v2
    batch_dims=batch_dims, name=name)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
    op_def=op_def)
  File "/home/betegon/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
    self._traceback = tf_stack.extract_stack()

Do you know why this is happening?

This is what I did:

  1. Removed the file set/test_noisy_speech/FB_FB10_07_voice-babble_5dB.wav and added my own .wav file, renaming it to the old name: FB_FB10_07_voice-babble_5dB.wav.
  2. Ran inference as before:
python3 deepxi.py --infer 1 --out_type y --gain mmse-lsa --ver '3f' --epoch 175 --gpu 0

Also, why does it look like there are two files to denoise?

The test_x list has a total of 2 entries.
Loading sample statistics from pickle file...
Preparing graph...
Inference...
  0%|                                                           | 0/2 [00:01<?, ?it/s]
anicolson (Owner) commented:

Hi, can you delete the 'test_x_list.p' from the data directory and try again?

Also, which version of TensorFlow are you using?

anicolson (Owner) commented Jan 30, 2020

Hi betegon,

You should not rename your file to the original file name. The code stores the file path and the length of each .wav file in test_x_list.p. If you delete test_x_list.p, the code will recreate it correctly.

Just add the files you want to set/test_noisy_speech (without renaming them). If you get an error, try deleting data/test_x_list.p to recreate the test list.
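In case it is useful, here is a minimal sketch (not part of DeepXi itself; the pickle's exact structure is an assumption) for inspecting and removing the cached list so it is rebuilt on the next run:

```python
import pickle
from pathlib import Path

cached_list = Path('data/test_x_list.p')  # cached file paths and lengths, as described above

if cached_list.exists():
    with cached_list.open('rb') as f:
        print(pickle.load(f))  # peek at the stale entries (exact structure is an assumption)
    cached_list.unlink()       # DeepXi will rebuild the list from set/test_noisy_speech on the next run
```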

Hope this helps

betegon (Author) commented Jan 31, 2020

That was it.

  1. Removed the test_x_list_turbo.p file from data/.
  2. Put my audio files in set/test_noisy_speech.

Also, I had to resample my audio files to the following format to make it work:
Sample rate: 16000 Hz
Bit rate: 256 kbps
I used the following command on Linux:

sox input.wav -r 16000 -b 16 output.wav
# check that output.wav is 256 kbps; if it is 512 kbps, change the -b 16 option to -b 8

To install SoX: sudo apt-get install sox
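As a quick sanity check of the converted file, here is a small sketch using only the Python standard library (the mono assumption is mine; 16000 Hz × 16 bit × 1 channel gives the 256 kbps figure above):

```python
import wave

# Check that output.wav matches the format that worked here:
# 16 kHz sample rate, 16-bit samples, single channel.
with wave.open('output.wav', 'rb') as w:
    print('sample rate :', w.getframerate())             # expect 16000
    print('sample width:', w.getsampwidth() * 8, 'bit')  # expect 16
    print('channels    :', w.getnchannels())             # 1 channel -> 256 kbps at 16 kHz / 16 bit
```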

Furthermore, the results I got weren't as expected, and I ran into errors when trying to run inference on a large file (180 MB). I will open separate issues later to address these problems.

Thank you for your time and effort.


liziru commented Dec 29, 2020

@anicolson I found a problem when I added a 35 s test audio file to 'set/test_noisy_speech'. I got the error below when running './run.sh VER="mhanet-1.1c" INFER=1 GAIN="mmse-lsa"', although it works well for short audio such as 3 s:
```
100%|██████████| 1/1 [00:00<00:00, 3.01it/s]
(1, 2225, 257)
(1, 2225, 257)
(1,)
Performing inference...
2020-12-29 17:33:16.576687: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-12-29 17:33:18.438092: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
Traceback (most recent call last):
  File "main.py", line 92, in <module>
    saved_data_path=args.saved_data_path,
  File "/home/dell/users/lpp/ns/DeepXi/deepxi/model.py", line 277, in infer
    tgt_hat_batch = self.model.predict(inp_batch, verbose=1)
  File "/home/dell/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 130, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/home/dell/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1599, in predict
    tmp_batch_outputs = predict_function(iterator)
  File "/home/dell/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "/home/dell/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 846, in _call
    return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds) # pylint: disable=protected-access
  File "/home/dell/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call
    cancellation_manager=cancellation_manager)
  File "/home/dell/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/home/dell/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 550, in call
    ctx=ctx)
  File "/home/dell/.local/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: indices[0,2209] = 2209 is not in [0, 2048)
	 [[node functional_1/embedding/embedding_lookup (defined at /home/dell/users/lpp/ns/DeepXi/deepxi/model.py:277) ]]
  (1) Invalid argument: indices[0,2209] = 2209 is not in [0, 2048)
	 [[node functional_1/embedding/embedding_lookup (defined at /home/dell/users/lpp/ns/DeepXi/deepxi/model.py:277) ]]
	 [[functional_1/embedding/embedding_lookup/_8]]
0 successful operations.
0 derived errors ignored. [Op:__inference_predict_function_2107]

Errors may have originated from an input operation.
Input Source operations connected to node functional_1/embedding/embedding_lookup:
 functional_1/embedding/embedding_lookup/1639 (defined at /home/dell/.conda/envs/lpp_tf2.0/lib/python3.7/contextlib.py:112)

Input Source operations connected to node functional_1/embedding/embedding_lookup:
 functional_1/embedding/embedding_lookup/1639 (defined at /home/dell/.conda/envs/lpp_tf2.0/lib/python3.7/contextlib.py:112)

Function call stack:
predict_function -> predict_function
```

I would appreciate any help.


liziru commented Dec 29, 2020

Name: tensorflow-gpu
Version: 2.3.0


liziru commented Dec 29, 2020

@anicolson After applying the change below, everything is OK. Is there any way to remove this limit?

```python
print("Performing inference...")
inp_batch = inp_batch[:, :2048, :]
supplementary_batch = supplementary_batch[:, :2048, :]
```

anicolson (Owner) commented Jan 3, 2021 via email
