CONFIG FILES DOCUMENTATION

Anthony edited this page Dec 21, 2017 · 48 revisions

All personal parameters are stored in config files.

Personal parameters include, for example, the servo map, Arduino ports, language, and so on.
On first start, config files ship with a .default extension and are renamed to .config, but only if a .config file does not already exist.
This makes it easy to update the whole script at any time without losing your personal parameters.

If you want to understand how the services work, or to code your own program from scratch, you can get the raw scripts from this reference: http://myrobotlab.org/matrix.php?branch=develop

MAIN CONFIGURATION

Inside the main folder (myrobotlab\inmoov\config\_inmoov.config)

[MAIN] > Hardware mode (set up your COM ports inside service_6_Arduino.config)
ScriptType=Virtual
; RightSide : also called FINGERSTARTER; connect one Arduino (called right) to use FingerStarter + the InMoov right side
; LeftSide : connect one Arduino (called left) to use the head / InMoov left side
; NoArduino : vocal only, useful to test the chatbot
; Full : both side Arduinos connected
; Virtual : virtual Arduino and InMoov!

Language of the ear, mouth, and chatbot folder
Language=en
; en, fr, es, de, nl, ru, hi

IsMute (don't talk about the robot startup routine)
IsMute=False
; True / False ; talk about starting actions

[GENERAL]
LoadingPicture > show the myrobotlab logo at startup
True / False
StartupSound > play a custom sound at startup
True / False
IuseLinux > some things don't work on Mac and Linux, such as the automatic MaryTTS voice download
True / False
LaunchSwingGui > choose whether the SwingGui opens up; you can disable it to speed up the system
True / False
BetaVersion > develop branch updates
True / False
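Putting the options above together, a typical [GENERAL] section might look like this (the values are illustrative, not defaults):

```ini
[GENERAL]
; show the myrobotlab logo at startup
LoadingPicture=True
; play a custom sound at startup
StartupSound=True
; set to True on Mac/Linux to skip Windows-only behavior
IuseLinux=False
; disable the SwingGui to speed up the system
LaunchSwingGui=False
; follow develop branch updates
BetaVersion=False
```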


MOUTH CONFIGURATION

inside service_5_Mouth.config

[TTS]

Speechengine=MarySpeech

; you can use :
; MarySpeech : open source TTS : http://myrobotlab.org/service/MarySpeech
; LocalSpeech : local operating system TTS : http://myrobotlab.org/service/LocalSpeech
; Polly : [NEEDS API KEY AccessKey (apiKey1) and SecretKey (apiKey2)] http://myrobotlab.org/service/Polly
; VoiceRss : [NEEDS API KEY (apiKey1)] free cloud service : http://myrobotlab.org/service/VoiceRss
; IndianTts : [NEEDS API KEY (apiKey1) and userid (apiKey2)] Hindi support : http://myrobotlab.org/service/IndianTts

VoiceName=cmu-bdl-hsmm

; MarySpeech voices - take the HSMM ones - http://myrobotlab.org/content/marytts-multi-language-support
; LocalSpeech : use local Windows/macOS voices (0, 1, 2, etc.); print mouth.getVoices() to list them
; Amazon Polly : http://docs.aws.amazon.com/polly/latest/dg/voicelist.html

[API_KEY]
apiKey1=
apiKey2=
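For a cloud engine such as Polly, the same file carries the credentials. A sketch with placeholder keys and an example voice name (use your own AWS keys, and pick a voice from the Polly voice list):

```ini
[TTS]
Speechengine=Polly
; voice name from http://docs.aws.amazon.com/polly/latest/dg/voicelist.html
VoiceName=Brian

[API_KEY]
; AWS AccessKey and SecretKey — placeholders, replace with your own
apiKey1=YOUR_ACCESS_KEY
apiKey2=YOUR_SECRET_KEY
```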

ARDUINO CONFIGURATION

inside service_6_Arduino.config

[ARDUINO]
MyRightPort=COMX
MyLeftPort=COMX
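For instance, with the right Arduino on COM7 and the left on COM8 (illustrative Windows port names; on Linux, ports look like /dev/ttyUSB0 instead):

```ini
[ARDUINO]
MyRightPort=COM7
MyLeftPort=COM8
```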


SKELETON CONFIGURATION

You can set up each skeleton part from the corresponding skeleton .config file.
Example: skeleton_head.config.default

[MAIN]
isHeadActivated=False : activate this skeleton part or not

[SERVO_MINIMUM_MAP_OUTPUT] > your servos' minimum limits
[SERVO_MAXIMUM_MAP_OUTPUT] > your servos' maximum limits
[SERVO_REST_POSITION] > the position you want the servo to move to when you call servo.rest()
[MOUTHCONTROL] > activate a software jaw movement while the robot is speaking
[AUDIOSIGNALPROCESSING] > real-time jaw movement based on the audio signal
[SERVO_INVERTED] > you can invert a servo here
[MINIMUM_MAP_INPUT] > used to tweak the map calculation
[MAX_VELOCITY] > the maximum speed your servo can move without breaking anything
[SERVO_PIN] > force a non-standard InMoov pin
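As a sketch, the map and rest sections pair a servo name with a value. The servo name and limits below are assumptions for illustration only; check your own servo limits before applying anything:

```ini
; illustrative values only — adjust to your build
[SERVO_MINIMUM_MAP_OUTPUT]
neck=20
[SERVO_MAXIMUM_MAP_OUTPUT]
neck=160
[SERVO_REST_POSITION]
neck=90
```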

;the robot moves the jaw while speaking
[MOUTHCONTROL]
MouthControlActivated=True
;how much the jaw moves (after map)
MouthControlJawMin=0
MouthControlJawMax=100


SERVICES CONFIGURATION

Ear.config :

[MAIN]
EarEngine=WebkitSpeechRecognition
; WebkitSpeechRecognition : http://myrobotlab.org/service/WebkitSpeechRecognition
; Sphinx : http://myrobotlab.org/service/Sphinx
; AndroidSpeechRecognition : http://myrobotlab.org/service/AndroidSpeechRecognition
setContinuous=False : False means immediate processing
setAutoListen=1 : automatically rearm the microphone
ForceMicroOnIfSleeping=1
MagicCommandToWakeUp=wake up

NeoPixel.config :

http://myrobotlab.org/service/NeoPixel

isNeopixelActivated > functionality activation
True / False
NeopixelMaster > which Arduino's RX/TX is the NeoPixel connected to?
left / right
NeopixelMasterPort > which port is the NeoPixel connected on? COMx for USB / Serialx for RX-TX (Serial is case sensitive)
pin > which pin is the NeoPixel connected to?
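For example, a NeoPixel driven from the left Arduino over a hardware serial port (illustrative values, adjust to your wiring):

```ini
isNeopixelActivated=True
NeopixelMaster=left
; Serialx for RX-TX, COMx for USB (Serial is case sensitive)
NeopixelMasterPort=Serial1
pin=2
```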

Chatbot.config :

http://myrobotlab.org/service/ProgramAB

isChatbotActivated=True > Activated by default

ultraSonicSensor.config :

http://myrobotlab.org/service/UltrasonicSensor

[MAIN]
ultraSonicSensorActivated=False
ultraSonicSensorArduino=left : left or right
trigPin=29
echoPin=33

NervoBoardRelay.config :

http://myrobotlab.org/service/Relay
Useful to control the power of your NervoBoard with a relay

PIR.config :

https://github.com/MyRobotLab/inmoov/wiki/HOWTO-PIR-SENSOR
Human presence detector
[MAIN]
isPirActivated=True

;which arduino controls the pir :
PirControllerArduino=right
PirPin=23

[TWEAK]
;timeout: 5 minutes (300000 ms) after presence is detected
HumanPresenceTimeout=300000

OpenCv.config :

http://myrobotlab.org/service/OpenCV

[MAIN]
isOpenCvActivated=False
faceRecognizerActivated=True > also activate the faceRecognizer filter when tracking is launched

CameraIndex=0 > your camera index
DisplayRender=SarxosFrameGrabber > use SarxosFrameGrabber / default, etc.
streamerEnabled=False > disable the stream; can save some CPU

[TRACKING]
Set up your PID values here:

eyeXPidKp=20.0
eyeXPidKi=5.0
eyeXPidKd=0.1
eyeYPidKp=20.0
eyeYPidKi=5.0
eyeYPidKd=0.1
rotheadPidKp=12.0
rotheadPidKi=5.0
rotheadPidKd=0.1
neckPidKp=12.0
neckPidKi=5.0
neckPidKd=0.1


;----------------------------- INMOOV LIFE and AUTOMATION ----------------------------------------
Inside the InmoovLife folder --> wip (work in progress)

[HEALTHCHECK] > wip robot health check
Activated=True
TimerValue=60000

[MOVEHEADRANDOM] > randomly move the head while speaking
RobotCanMoveHeadWhileSpeaking=True