Move example, socket_publisher and pangolin_viewer
ymd-stella committed Jul 9, 2023
1 parent a5dfcab commit 2ac5fd3
Showing 4 changed files with 56 additions and 65 deletions.
11 changes: 2 additions & 9 deletions docs/docker.rst
@@ -56,13 +56,6 @@ In order to enable X11 forwarding, supplemental options (``-e DISPLAY=$DISPLAY``
After launching the container, the shell interface will be launched in the docker container.

.. code-block:: bash
root@ddad048b5fff:/stella_vslam/build# ls
lib run_image_slam run_video_slam
run_euroc_slam run_kitti_slam run_tum_slam
See :ref:`Tutorial <chapter-simple-tutorial>` to run SLAM examples in the container.
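For reference, a container launch with X11 forwarding enabled might look like the following (a sketch: the image name ``stella_vslam-desktop`` and the exact option set are assumptions; use whatever image name you built):

.. code-block:: bash

   # allow local containers to connect to the host X server (assumes a local X11 session)
   xhost +local:
   # -e DISPLAY and the X11 socket mount are the supplemental options mentioned above
   docker run --rm -it --net=host \
       -e DISPLAY=$DISPLAY \
       -v /tmp/.X11-unix:/tmp/.X11-unix:ro \
       stella_vslam-desktop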

.. NOTE ::
@@ -135,7 +128,7 @@ The shell interface will be launched in the docker container.
.. code-block:: bash
$ docker run --rm -it --name stella_vslam-socket --net=host stella_vslam-socket
root@hostname:/stella_vslam/build#
root@hostname:/stella_vslam_examples/build#
See :ref:`Tutorial <chapter-simple-tutorial>` to run SLAM examples in the container.
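If the SocketViewer server is not running yet, it can be started along these lines (a sketch; the repository, the ``npm``/``node`` commands, and the port are assumptions based on the viewer setup described elsewhere in the docs):

.. code-block:: bash

   # assumption: the browser-based viewer server lives in a separate repository
   git clone https://github.com/stella-cv/socket_viewer.git
   cd socket_viewer
   npm install
   node app.js
   # then open http://localhost:3001/ in a browser (port is an assumption)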

@@ -182,7 +175,7 @@ The shell interface will be launched in the docker container.
.. code-block:: bash
$ docker run --rm -it --name stella_vslam-socket stella_vslam-socket
root@hostname:/stella_vslam/build#
root@hostname:/stella_vslam_examples/build#
| See :ref:`Tutorial <chapter-simple-tutorial>` to run SLAM examples in the container.
| Please don't forget to append ``SocketPublisher.server_uri`` entry to the ``config.yaml`` if you use the downloaded datasets in the tutorial.
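For the downloaded datasets, the entry could be appended along these lines (a sketch; the URI is a placeholder for wherever your SocketViewer server is reachable):

.. code-block:: bash

   # append a (hypothetical) SocketPublisher entry to the config used in the tutorial
   cat >> config.yaml << 'EOF'
   SocketPublisher:
     server_uri: "http://127.0.0.1:3000"
   EOF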
36 changes: 18 additions & 18 deletions docs/example.rst
@@ -16,7 +16,7 @@ SLAM with Video Files
=====================

We provide an example snippet for using video files (e.g. ``.mp4``) for visual SLAM.
The source code is placed at ``./example/run_video_slam.cc``.
The source code is placed at ``stella_vslam_examples/src/run_video_slam.cc``.

| The camera that captures the video file must be calibrated. Create a config file (``.yaml``) according to the camera parameters.
| We provided a vocabulary file for FBoW at `here <https://github.com/stella-cv/FBoW_orb_vocab/raw/main/orb_vocab.fbow>`__.
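As a rough illustration, a config for a calibrated monocular perspective camera might look like this (a hypothetical sketch; the field names and values are assumptions, so consult the files shipped under ``example/`` for the authoritative schema):

.. code-block:: bash

   # write a minimal (hypothetical) camera config; replace the intrinsics with your calibration
   cat > my_camera.yaml << 'EOF'
   Camera:
     name: "My Camera"
     setup: "monocular"
     model: "perspective"
     fx: 615.0
     fy: 615.0
     cx: 320.0
     cy: 240.0
     k1: 0.0
     k2: 0.0
     p1: 0.0
     p2: 0.0
     k3: 0.0
     fps: 30.0
     cols: 640
     rows: 480
     color_order: "RGB"
   EOF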
@@ -29,7 +29,7 @@ SLAM with Image Sequences
=========================

We provided an example snippet for using image sequences for visual SLAM.
The source code is placed at ``./example/run_image_slam.cc``.
The source code is placed at ``stella_vslam_examples/src/run_image_slam.cc``.

| The camera that captures the image sequence must be calibrated. Create a config file (``.yaml``) according to the camera parameters.
| We provided a vocabulary file for FBoW at `here <https://github.com/stella-cv/FBoW_orb_vocab/raw/main/orb_vocab.fbow>`__.
@@ -48,7 +48,7 @@ KITTI Odometry dataset

`KITTI Odometry dataset <http://www.cvlibs.net/datasets/kitti/>`_ is a benchmarking dataset for monocular and stereo visual odometry and lidar odometry that is captured from car-mounted devices.
We provided an example source code for running monocular and stereo visual SLAM with this dataset.
The source code is placed at ``./example/run_kitti_slam.cc``.
The source code is placed at ``stella_vslam_examples/src/run_kitti_slam.cc``.

Start by downloading the dataset from `here <http://www.cvlibs.net/datasets/kitti/eval_odometry.php>`__.
Download the grayscale set (``data_odometry_gray.zip``).
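Unpacking it might look like the following (a sketch; the sequence layout in the comment is the standard KITTI odometry layout and is stated here as an assumption):

.. code-block:: bash

   # unpack the grayscale odometry set downloaded after registration
   unzip data_odometry_gray.zip
   ls dataset/sequences/00/
   # expected (assumed) contents: image_0/  image_1/  calib.txt  times.txt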
@@ -73,12 +73,12 @@ If you built examples with Pangolin Viewer support, a map viewer and frame viewe
$ ./run_kitti_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/KITTI/Odometry/sequences/00/ \
-c ../example/kitti/KITTI_mono_00-02.yaml
-c ~/lib/stella_vslam/example/kitti/KITTI_mono_00-02.yaml
# stereo SLAM with sequence 05
$ ./run_kitti_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/KITTI/Odometry/sequences/05/ \
-c ../example/kitti/KITTI_stereo_04-12.yaml
-c ~/lib/stella_vslam/example/kitti/KITTI_stereo_04-12.yaml
.. _subsection-example-euroc:

@@ -87,7 +87,7 @@ EuRoC MAV dataset

`EuRoC MAV dataset <https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets>`_ is a benchmarking dataset for monocular and stereo visual odometry that is captured from drone-mounted devices.
We provide an example source code for running monocular and stereo visual SLAM with this dataset.
The source code is placed at ``./example/run_euroc_slam.cc``.
The source code is placed at ``stella_vslam_examples/src/run_euroc_slam.cc``.

Start by downloading the dataset from `here <http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/>`__.
Download the ``.zip`` file of a dataset you plan on using.
@@ -101,7 +101,7 @@ After downloading and uncompressing it, you will find several directories under
In addition, download a vocabulary file for FBoW from `here <https://github.com/stella-cv/FBoW_orb_vocab/raw/main/orb_vocab.fbow>`__.
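For instance, one sequence could be fetched and unpacked roughly as follows (a sketch; the URL pattern and the ``mav0`` layout are assumptions based on the download page above):

.. code-block:: bash

   # example: the Machine Hall 01 sequence (URL pattern assumed)
   wget http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_01_easy/MH_01_easy.zip
   unzip MH_01_easy.zip -d MH_01
   ls MH_01/mav0/
   # expected (assumed) contents: cam0/  cam1/  imu0/  leica0/  state_groundtruth_estimate0/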

We provided the two config files for EuRoC, ``./example/euroc/EuRoC_mono.yaml`` for monocular and ``./example/euroc/EuRoC_stereo.yaml`` for stereo.
We provided the two config files for EuRoC, ``~/lib/stella_vslam/example/euroc/EuRoC_mono.yaml`` for monocular and ``~/lib/stella_vslam/example/euroc/EuRoC_stereo.yaml`` for stereo.

If you have built examples with Pangolin Viewer support, a map viewer and frame viewer will be launched right after executing the following command.

@@ -112,20 +112,20 @@ If you have built examples with Pangolin Viewer support, a map viewer and frame
$ ./run_euroc_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/EuRoC/MAV/mav0/ \
-c ../example/euroc/EuRoC_mono.yaml
-c ~/lib/stella_vslam/example/euroc/EuRoC_mono.yaml
# stereo SLAM with any EuRoC sequence
$ ./run_euroc_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/EuRoC/MAV/mav0/ \
-c ../example/euroc/EuRoC_stereo.yaml
-c ~/lib/stella_vslam/example/euroc/EuRoC_stereo.yaml
.. _subsection-example-tum-rgbd:

TUM RGBD dataset
^^^^^^^^^^^^^^^^

`TUM RGBD dataset <https://vision.in.tum.de/data/datasets/rgbd-dataset>`_ is a benchmarking dataset containing RGB-D data and ground-truth data, intended to establish a novel benchmark for the evaluation of visual odometry and visual SLAM systems.
The source code is placed at ``./example/run_tum_rgbd_slam.cc``.
The source code is placed at ``stella_vslam_examples/src/run_tum_rgbd_slam.cc``.

Start by downloading one of the datasets from `here <https://vision.in.tum.de/data/datasets/rgbd-dataset/download>`__.
An example dataset can be found `here <https://vision.in.tum.de/rgbd/dataset/freiburg3/rgbd_dataset_freiburg3_calibration_rgb_depth.tgz>`__.
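For the specific sequence used below, downloading and unpacking might look like this (a sketch; the URL is the one linked above, while the listed directory contents are the usual TUM RGBD layout and are stated as an assumption):

.. code-block:: bash

   wget https://vision.in.tum.de/rgbd/dataset/freiburg3/rgbd_dataset_freiburg3_calibration_rgb_depth.tgz
   tar xzf rgbd_dataset_freiburg3_calibration_rgb_depth.tgz
   ls rgbd_dataset_freiburg3_calibration_rgb_depth/
   # expected (assumed) contents: rgb/  depth/  rgb.txt  depth.txt  groundtruth.txt  accelerometer.txt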
@@ -144,7 +144,7 @@ In addition, download a vocabulary file for FBoW from `here <https://github.com/

We provided the config files for the RGBD dataset at ``./example/tum_rgbd``.

For above specific example we shall use two config files, ``./example/tum_rgbd/TUM_RGBD_mono_3.yaml`` for monocular and ``./example/tum_rgbd/TUM_RGBD_rgbd_3.yaml`` for RGBD.
For above specific example we shall use two config files, ``~/lib/stella_vslam/example/tum_rgbd/TUM_RGBD_mono_3.yaml`` for monocular and ``~/lib/stella_vslam/example/tum_rgbd/TUM_RGBD_rgbd_3.yaml`` for RGBD.

Tracking and Mapping
^^^^^^^^^^^^^^^^^^^^
@@ -156,7 +156,7 @@ Tracking and Mapping
$ ./run_tum_rgbd_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/rgbd_dataset_freiburg3_calibration_rgb_depth/ \
-c ../example/tum_rgbd/TUM_RGBD_mono_3.yaml \
-c ~/lib/stella_vslam/example/tum_rgbd/TUM_RGBD_mono_3.yaml \
--no-sleep \
--auto-term \
--map-db-out fr3_slam_mono.msg
@@ -165,7 +165,7 @@ Tracking and Mapping
$ ./run_tum_rgbd_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/rgbd_dataset_freiburg3_calibration_rgb_depth/ \
-c ../example/tum_rgbd/TUM_RGBD_rgbd_3.yaml \
-c ~/lib/stella_vslam/example/tum_rgbd/TUM_RGBD_rgbd_3.yaml \
--no-sleep \
--auto-term \
--map-db-out fr3_slam_rgbd.msg
@@ -180,7 +180,7 @@ Localization
$ ./run_tum_rgbd_slam --disable-mapping \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/rgbd_dataset_freiburg3_calibration_rgb_depth/ \
-c ../example/tum_rgbd/TUM_RGBD_mono_3.yaml \
-c ~/lib/stella_vslam/example/tum_rgbd/TUM_RGBD_mono_3.yaml \
--no-sleep \
--auto-term \
--map-db-in fr3_slam_mono.msg
@@ -189,7 +189,7 @@ Localization
$ ./run_tum_rgbd_slam --disable-mapping \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/rgbd_dataset_freiburg3_calibration_rgb_depth/ \
-c ../example/tum_rgbd/TUM_RGBD_rgbd_3.yaml \
-c ~/lib/stella_vslam/example/tum_rgbd/TUM_RGBD_rgbd_3.yaml \
--no-sleep \
--auto-term \
--map-db-in fr3_slam_rgbd.msg
@@ -206,7 +206,7 @@ This feature can be used to add keyframes to stabilize localization results.
$ ./run_tum_rgbd_slam --temporal-mapping \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/rgbd_dataset_freiburg3_calibration_rgb_depth/ \
-c ../example/tum_rgbd/TUM_RGBD_mono_3.yaml \
-c ~/lib/stella_vslam/example/tum_rgbd/TUM_RGBD_mono_3.yaml \
--no-sleep \
--auto-term \
--map-db-in fr3_slam_mono.msg
@@ -215,7 +215,7 @@ This feature can be used to add keyframes to stabilize localization results.
$ ./run_tum_rgbd_slam --temporal-mapping \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/rgbd_dataset_freiburg3_calibration_rgb_depth/ \
-c ../example/tum_rgbd/TUM_RGBD_rgbd_3.yaml \
-c ~/lib/stella_vslam/example/tum_rgbd/TUM_RGBD_rgbd_3.yaml \
--no-sleep \
--auto-term \
--map-db-in fr3_slam_rgbd.msg
@@ -234,7 +234,7 @@ Tracking and Mapping
^^^^^^^^^^^^^^^^^^^^

We provided an example snippet for using a UVC camera, which is often called a webcam, for visual SLAM.
The source code is placed at ``./example/run_camera_slam.cc``.
The source code is placed at ``stella_vslam_examples/src/run_camera_slam.cc``.

| Please specify the camera number you want to use with the ``-n`` option.
| The camera must be calibrated. Create a config file (``.yaml``) according to the camera parameters.
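A typical invocation might look like the following (a sketch; apart from ``-n``, the flags simply mirror the other examples and should be checked against ``./run_camera_slam -h``):

.. code-block:: bash

   # monocular SLAM on camera device 0 (paths are placeholders)
   ./run_camera_slam \
       -v /path/to/orb_vocab/orb_vocab.fbow \
       -c /path/to/your_camera_config.yaml \
       -n 0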
60 changes: 29 additions & 31 deletions docs/installation.rst
@@ -370,39 +370,43 @@ Otherwise, please download, build and install Protobuf from source.
Build Instructions
==================

When building with support for PangolinViewer, please specify the following cmake options: ``-DUSE_PANGOLIN_VIEWER=ON`` and ``-DUSE_SOCKET_PUBLISHER=OFF``.

.. code-block:: bash
cd /path/to/stella_vslam
mkdir -p ~/lib
cd ~/lib
git clone --recursive https://github.com/stella-cv/stella_vslam.git
mkdir build && cd build
cmake \
-DUSE_STACK_TRACE_LOGGER=ON \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DUSE_PANGOLIN_VIEWER=ON \
-DINSTALL_PANGOLIN_VIEWER=ON \
-DUSE_SOCKET_PUBLISHER=OFF \
-DBUILD_TESTS=OFF \
-DBUILD_EXAMPLES=ON \
..
make -j4 && sudo make install
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make -j4
sudo make install
When building with support for SocketViewer, please specify the following cmake options: ``-DUSE_PANGOLIN_VIEWER=OFF`` and ``-DUSE_SOCKET_PUBLISHER=ON``.
# When building with support for PangolinViewer
cd ~/lib
git clone -b 0.0.1 --recursive https://github.com/stella-cv/pangolin_viewer.git
mkdir -p pangolin_viewer/build
cd pangolin_viewer/build
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make -j
sudo make install
.. code-block:: bash
# When building with support for SocketViewer
cd ~/lib
git clone -b 0.0.1 --recursive https://github.com/stella-cv/socket_publisher.git
mkdir -p socket_publisher/build
cd socket_publisher/build
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make -j
sudo make install
cd /path/to/stella_vslam
mkdir build && cd build
cd ~/lib
git clone -b 0.0.1 --recursive https://github.com/stella-cv/stella_vslam_examples.git
mkdir -p stella_vslam_examples/build
cd stella_vslam_examples/build
cmake \
-DUSE_STACK_TRACE_LOGGER=ON \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DUSE_PANGOLIN_VIEWER=OFF \
-DUSE_SOCKET_PUBLISHER=ON \
-DINSTALL_SOCKET_PUBLISHER=ON \
-DBUILD_TESTS=OFF \
-DBUILD_EXAMPLES=ON \
-DUSE_STACK_TRACE_LOGGER=ON \
..
make -j4 && sudo make install
make -j
After building, check to see if it was successfully built by executing ``./run_kitti_slam -h``.

@@ -411,13 +415,7 @@ After building, check to see if it was successfully built by executing ``./run_k
$ ./run_kitti_slam -h
Allowed options:
-h, --help produce help message
-v, --vocab arg vocabulary file path
-d, --data-dir arg directory path which contains dataset
-c, --config arg config file path
--frame-skip arg (=1) interval of frame skip
--no-sleep not wait for next frame in real time
--auto-term automatically terminate the viewer
--log-level arg (=info) log level
...
.. _section-viewer-setup:
14 changes: 7 additions & 7 deletions docs/simple_tutorial.rst
@@ -18,9 +18,9 @@ The later parts of this chapter explains what each of the commands do in more de
# at the build directory of stella_vslam ...
$ pwd
/path/to/stella_vslam/build/
~/lib/stella_vslam_examples/build/
$ ls
run_video_slam lib/ ...
run_video_slam ...
# download an ORB vocabulary from GitHub
curl -sL "https://github.com/stella-cv/FBoW_orb_vocab/raw/main/orb_vocab.fbow" -o orb_vocab.fbow
Expand All @@ -40,14 +40,14 @@ The later parts of this chapter explains what each of the commands do in more de
unzip aist_living_lab_2.zip
# run tracking and mapping
./run_video_slam -v ./orb_vocab.fbow -m ./aist_living_lab_1/video.mp4 -c ../example/aist/equirectangular.yaml --frame-skip 3 --no-sleep --map-db-out map.msg
./run_video_slam -v ./orb_vocab.fbow -m ./aist_living_lab_1/video.mp4 -c ~/lib/stella_vslam/example/aist/equirectangular.yaml --frame-skip 3 --no-sleep --map-db-out map.msg
# click the [Terminate] button to close the viewer
# you can find map.msg in the current directory
# run localization
./run_video_slam --disable-mapping -v ./orb_vocab.fbow -m ./aist_living_lab_2/video.mp4 -c ../example/aist/equirectangular.yaml --frame-skip 3 --no-sleep --map-db-in map.msg
./run_video_slam --disable-mapping -v ./orb_vocab.fbow -m ./aist_living_lab_2/video.mp4 -c ~/lib/stella_vslam/example/aist/equirectangular.yaml --frame-skip 3 --no-sleep --map-db-in map.msg
# run localization with temporal mapping based odometry. loaded keyframes are prioritized for localization/localBA.
./run_video_slam --temporal-mapping -v ./orb_vocab.fbow -m ./aist_living_lab_2/video.mp4 -c ../example/aist/equirectangular.yaml --frame-skip 3 --no-sleep --map-db-in map.msg
./run_video_slam --temporal-mapping -v ./orb_vocab.fbow -m ./aist_living_lab_2/video.mp4 -c ~/lib/stella_vslam/example/aist/equirectangular.yaml --frame-skip 3 --no-sleep --map-db-in map.msg
Sample Datasets
@@ -266,7 +266,7 @@ The paths should be changed accordingly.
$ ./run_video_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-c /path/to/stella_vslam/example/aist/equirectangular.yaml \
-c ~/lib/stella_vslam/example/aist/equirectangular.yaml \
-m /path/to/aist_living_lab_1/video.mp4 \
--frame-skip 3 \
--map-db-out aist_living_lab_1_map.msg
@@ -384,7 +384,7 @@ The paths should be changed accordingly.
$ ./run_video_slam --disable-mapping \
-v /path/to/orb_vocab/orb_vocab.fbow \
-c /path/to/stella_vslam/example/aist/equirectangular.yaml \
-c ~/lib/stella_vslam/example/aist/equirectangular.yaml \
-m /path/to/aist_living_lab_2/video.mp4 \
--frame-skip 3 \
--map-db-in aist_living_lab_1_map.msg
