
Failing worker when starting master and server #4

Closed
usr000 opened this issue Apr 12, 2016 · 7 comments

usr000 commented Apr 12, 2016

Hi,
Thank you for the image!
I have trouble going through the testing steps, namely starting the server inside the container.
When I do

/opt/start.sh -y /opt/models/sample_english_nnet2.yaml

I get the following in worker.log:

root@83fb42059eaf:/opt# cat worker.log
libdc1394 error: Failed to initialize libdc1394
   DEBUG 2016-04-12 10:17:39,582 Starting up worker
2016-04-12 10:17:39 -    INFO:   decoder2: Creating decoder using conf: {'post-processor': "perl -npe 'BEGIN {use IO::Handle; STDOUT->autoflush(1);} s/(.*)/\\1./;'", 'logging': {'version': 1, 'root': {'level': 'DEBUG', 'handlers': ['console']}, 'formatters': {'simpleFormater': {'datefmt': '%Y-%m-%d %H:%M:%S', 'format': '%(asctime)s - %(levelname)7s: %(name)10s: %(message)s'}}, 'disable_existing_loggers': False, 'handlers': {'console': {'formatter': 'simpleFormater', 'class': 'logging.StreamHandler', 'level': 'DEBUG'}}}, 'use-vad': False, 'decoder': {'ivector-extraction-config': '/opt/models/english/tedlium_nnet_ms_sp_online/conf/ivector_extractor.conf', 'num-nbest': 10, 'lattice-beam': 6.0, 'acoustic-scale': 0.083, 'do-endpointing': True, 'beam': 10.0, 'max-active': 10000, 'fst': '/opt/models/english/tedlium_nnet_ms_sp_online/HCLG.fst', 'mfcc-config': '/opt/models/english/tedlium_nnet_ms_sp_online/conf/mfcc.conf', 'use-threaded-decoder': True, 'traceback-period-in-secs': 0.25, 'model': '/opt/models/english/tedlium_nnet_ms_sp_online/final.mdl', 'word-syms': '/opt/models/english/tedlium_nnet_ms_sp_online/words.txt', 'endpoint-silence-phones': '1:2:3:4:5:6:7:8:9:10', 'chunk-length-in-secs': 0.25}, 'silence-timeout': 10, 'out-dir': 'tmp', 'use-nnet2': True}
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'delta'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'max-mem'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'phone-determinize'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'word-determinize'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'minimize'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'beam'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'max-active'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'min-active'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'lattice-beam'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'prune-interval'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'determinize-lattice'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'beam-delta'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'hash-ratio'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
/opt/kaldi-gstreamer-server/kaldigstserver/decoder2.py:48: Warning: When installing property: type 'Gstkaldinnet2onlinedecoder' already has a property named 'acoustic-scale'
  self.asr = Gst.ElementFactory.make("kaldinnet2onlinedecoder", "asr")
2016-04-12 10:17:39 -    INFO:   decoder2: Setting decoder property: ivector-extraction-config = /opt/models/english/tedlium_nnet_ms_sp_online/conf/ivector_extractor.conf
2016-04-12 10:17:39 -    INFO:   decoder2: Setting decoder property: num-nbest = 10
2016-04-12 10:17:39 -    INFO:   decoder2: Setting decoder property: lattice-beam = 6.0
2016-04-12 10:17:39 -    INFO:   decoder2: Setting decoder property: acoustic-scale = 0.083
2016-04-12 10:17:39 -    INFO:   decoder2: Setting decoder property: do-endpointing = True
2016-04-12 10:17:39 -    INFO:   decoder2: Setting decoder property: beam = 10.0
2016-04-12 10:17:39 -    INFO:   decoder2: Setting decoder property: max-active = 10000
2016-04-12 10:17:39 -    INFO:   decoder2: Setting decoder property: fst = /opt/models/english/tedlium_nnet_ms_sp_online/HCLG.fst
terminate called after throwing an instance of 'std::length_error'
  what():  vector::reserve

I have tried increasing the memory of the virtual machine, and also building the image directly from Git, but got the same result.

Could you please look into this?

thanks

jcsilva (owner) commented Apr 12, 2016

Hi,

This message (terminate called after throwing an instance of 'std::length_error' what(): vector::reserve) indicates a problem with memory allocation: std::vector::reserve throws std::length_error when it is asked for more capacity than the vector can possibly hold, a limit that is much lower on 32-bit systems. Some questions you could investigate (see the commands after the list):

  1. Do you have at least 4 GB of RAM free?
  2. Is your system 64-bit? You may have problems trying to allocate more than 4 GB on a 32-bit system.
  3. Would it be possible to test it on a physical machine (without any virtual machine)?
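
For reference, you can check the first two points from inside the container or the VM with standard Linux commands (your output will of course differ):

free -m    # free RAM in MB; look at the "free" column
uname -m   # prints x86_64 on a 64-bit system, i686 on 32-bit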

I don't think you need to build the image from Git. People have already used the image that is on Docker Hub, and it worked.

Please try to answer these questions and tell me if it helps...

usr000 commented Apr 13, 2016

Hi,
Thank you for the prompt response!

To answer 1 & 2:
I allocated 8 GB to the virtual machine.
Here is the output of the free command after starting the server (values are in MB):

root@4a588ff82a7e:/# free -m
             total       used       free     shared    buffers     cached
Mem:          7976        357       7619        123          8        237
-/+ buffers/cache:        111       7865
Swap:         2901          0       2901

I also assigned 2 processors to the virtual machine; it doesn't actually help.
3) I don't have a physical Linux machine at my disposal at the moment.
4) I tried both building the images myself and pulling them directly from Docker Hub.

Thanks!

jcsilva commented Apr 13, 2016

Could you please try running Docker with the memory allocation flag? I'm not sure, but I think it's the "-m" flag (please check the manual).

If possible, try setting it to 4 GB.

usr000 commented Apr 13, 2016

Here is what I tried, with no luck:

docker run -it -p 8080:80 -m 4g --memory-reservation="4g" -v ...

Just in case it matters: I'm using Docker 1.9.1 on a Windows machine.
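
If it's relevant: as far as I understand, on Docker 1.9.1 for Windows the containers actually run inside a VirtualBox VM managed by docker-machine, so the VM's own memory cap applies on top of the -m flag. Recreating the VM with more memory should look roughly like this (the VM name "default" is an assumption):

docker-machine rm default
docker-machine create -d virtualbox --virtualbox-memory 8192 default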

jcsilva commented Apr 13, 2016

I built a very small model that may be useful just for testing your setup. Please download the attached file and untar it. You will find a really small model and a YAML file. Change the necessary lines in that YAML file according to your needs (see the sketch below) and try running Docker with it. If it works, you will be able to test it with the WAV file I included in the tar.gz.

One important point: I'm not worried about the accuracy of this model. I'm just trying to understand whether your setup is OK...
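
The lines you will most likely need to change are the model paths. As a rough sketch, using the property names visible in your worker.log (the exact keys in the yes_no YAML may differ, and the paths below are hypothetical):

decoder:
  model: /opt/models/yes_no/final.mdl
  fst: /opt/models/yes_no/HCLG.fst
  word-syms: /opt/models/yes_no/words.txt
  mfcc-config: /opt/models/yes_no/conf/mfcc.conf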

yes_no.tar.gz

usr000 commented Apr 14, 2016

Thank you so much!
It looks like the model you posted made the difference.
I was able to get the following output:

NO. 
NO. 
NO. 
NO. 
NO. 
NO. 
NO. 
NO. 
NO. 
NO. 
NO. 
NO. 
NO. 
Audio sent, now sending EOS
NO. NO. NO. NO. NO. NO. NO. NO. NO. NO. NO. NO. NO.
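
For context, I sent the audio with the client script bundled with kaldi-gstreamer-server, along these lines (the URI, byte rate, and file name reflect my setup and are approximate):

python kaldigstserver/client.py -u ws://localhost:8080/client/ws/speech -r 32000 yes_no.wav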

What should be my next steps to get the larger model working?

Thanks again!

jcsilva commented Apr 15, 2016

Well, the difference between this model and the other one you tried is its size. You should investigate problems related to memory allocation in your Docker setup... maybe you could try opening a big file inside your Docker container and see if that helps you find out what is happening.

You can also try a smaller Kaldi model, e.g. https://github.com/alumae/kaldi-gstreamer-server/tree/master/test/models/english/voxforge/tri2b_mmi_b0.05 (with this YAML file: https://github.com/alumae/kaldi-gstreamer-server/blob/master/sample_worker.yaml).
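
Starting the worker with that model, inside the container, should look much like the command you ran before; the path below assumes the repository location visible in your worker.log, and the model paths inside the YAML would still need adjusting:

/opt/start.sh -y /opt/kaldi-gstreamer-server/sample_worker.yaml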
