System Setup
This guide has been tested on a clean laptop running Ubuntu 18.04.
If you are using a ROS machine/JMU machine, ROS Kinetic/Melodic may already be set up. If that isn't the case, you will need to install it before moving forward:
Make sure to replace $(lsb_release -sc) with bionic here:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
Autoware is an autonomous vehicle framework that provides autonomous driving functionality. It is used on this project purely for localization and for building point cloud maps from bag files. Setup should be followed here:
Notes:
- Follow instructions for Autoware Version 1.12.0, Ubuntu 18.04 / Melodic
- In step 4, replace the second line with the following:
rosdep install -y --from-paths src --ignore-src --rosdistro melodic --os=ubuntu:bionic
There are several Python dependencies that need to be installed. A list of the dependencies, the functionality they provide, and the commands to install them follows.
- bitstruct - Allows us to pack control variables (speed, angle, etc.) into a packed binary structure that can be sent to an Arduino.
- opencv2 - Python bindings for OpenCV, which provides various computer vision tools.
- numpy - Scientific computing package used in various math heavy nodes throughout the system.
- scikit-learn - Machine Learning toolkit.
- socketio - For network communication between systems.
- pyaudio - Audio Input/Output library.
- wave - Library for dealing with WAV sound format files.
- gTTS - Google Text To Speech API.
- SpeechRecognition - Python library for speech recognition.
- networkx - Graph utility library used for self-driving maps.
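To give a sense of how networkx is used for self-driving maps, here is a minimal sketch of a weighted waypoint graph; the node names and edge distances below are made up for illustration and are not the cart's actual map data.

```python
import networkx as nx

# Hypothetical waypoint graph: nodes are map locations, edge weights
# are distances in meters (illustrative values only).
G = nx.Graph()
G.add_edge("lot", "library", weight=40.0)
G.add_edge("library", "union", weight=25.0)
G.add_edge("lot", "union", weight=80.0)

# Shortest weighted route from the lot to the union:
# going via the library (40 + 25 = 65 m) beats the direct edge (80 m).
path = nx.shortest_path(G, "lot", "union", weight="weight")
print(path)
```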
Command for installing all of these dependencies:
sudo apt install ros-melodic-video-stream-opencv python-tk
sudo apt install portaudio19-dev
sudo apt install python-pip
pip install --user --upgrade pip
pip install --user wheel
pip install bitstruct opencv-contrib-python numpy scikit-learn python-socketio==4.3.0 pyaudio gTTS SpeechRecognition networkx
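After installing, you can sanity-check that each dependency is importable. The snippet below is just a convenience check (not part of the project); note that several pip package names differ from the module names you actually import.

```python
import importlib

# Map pip package name -> importable module name.
# (wave is omitted because it ships with the Python standard library.)
packages = {
    "bitstruct": "bitstruct",
    "opencv-contrib-python": "cv2",
    "numpy": "numpy",
    "scikit-learn": "sklearn",
    "python-socketio": "socketio",
    "pyaudio": "pyaudio",
    "gTTS": "gtts",
    "SpeechRecognition": "speech_recognition",
    "networkx": "networkx",
}

for pip_name, module in packages.items():
    try:
        importlib.import_module(module)
        print(f"{pip_name}: OK")
    except ImportError:
        print(f"{pip_name}: MISSING")
```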
Assuming you have a catkin workspace set up, follow these instructions:
- Open a terminal
- Change directory to
catkin_ws/src
- Obtain the repository:
git clone https://github.com/JACart/ai-navigation.git
Once the dependencies are installed, you should be able to build the ai-navigation
packages:
cd ~/catkin_ws/
catkin_make
Execute the following to add two lines to your .bashrc file:
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
echo "source ~/autoware.ai/install/setup.bash" >> ~/.bashrc
Then
source ~/.bashrc
Then add this line to the sudoers file by running sudo visudo:
%sudo ALL=NOPASSWD: /home/jacart/catkin_ws/src/ai-navigation/run
Install screen
sudo apt-get install screen
Run this command to install nvm:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
Once this is complete, restart the terminal session. Then run
nvm install 14.16.0
and
nvm use 14.16.0
Obtain the repositories:
git clone https://github.com/JACart/local-admin.git
git clone https://github.com/JACart/local-server.git
git clone https://github.com/JACart/cart-ui.git
Go into each directory and run the command npm i
Once complete, go into the local-admin directory and run npm run linux-start. Press the "Config" button at the top. The default paths should be local-server: ../local-server; ui-server: ../cart-ui; run.sh: ../ai-navigation. The default port for the local server is 8021, and the UI server is 3001.
You should then be good to go.
The cart uses speech recognition to allow the passenger to interface with the vehicle through voice. Setup should be followed here: Speech Recognition
The cart keeps track of its passenger to determine whether the passenger is in the cart or not and, if so, whether they are seated safely. Setup should be followed here: Pose Tracking