Home
To begin recording data on any Linux-based computer, follow the steps below:
- Install Docker: First, make sure Docker is installed on your system. Docker is a platform that packages and distributes applications in lightweight, isolated containers. Download it from the official Docker website and follow the installation instructions for your operating system.
- Download the Repository: Next, download the repository containing the files needed for recording. You can do this by cloning the repository with Git, or by downloading it as a ZIP file and extracting it to a local directory on your computer.
- Initialize Git Submodules: The downloaded repository may contain Git submodules (separate Git repositories embedded within a parent repository) that need to be initialized. Navigate to the repository's root directory in a terminal and run:

  ```
  git submodule init
  git submodule update
  ```
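If you want to see what these two commands actually do, the following throwaway-directory sketch (all repository names here are made up for the demonstration) shows that a freshly cloned repository leaves its submodule directory empty until both commands have run:

```shell
set -e
tmp=$(mktemp -d)
# Provide a Git identity so the demo commits work in a clean environment
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# A repository that will serve as the submodule
git init -q "$tmp/sub"
git -C "$tmp/sub" commit -q --allow-empty -m "init"

# A parent repository embedding it as a submodule
git init -q "$tmp/parent"
git -C "$tmp/parent" -c protocol.file.allow=always submodule --quiet add "$tmp/sub" sub
git -C "$tmp/parent" commit -q -m "add submodule"

# A fresh clone starts with an uninitialized (empty) submodule
git clone -q "$tmp/parent" "$tmp/clone"
cd "$tmp/clone"
git submodule init
git -c protocol.file.allow=always submodule update
ls sub/.git >/dev/null && echo "submodule checked out"
```

As a shortcut, `git submodule update --init --recursive` performs both steps (and handles nested submodules) in one command.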
- Configure Topic Selection: Create a `.env` file in the `docker/` directory of the repository. This file specifies the topic names you want to record. Open the `.env` file in a text editor and define the topics using the following format:

  ```
  USER=username
  TOPICS="
  /imu/data
  /imu/mag
  /imu/temperature
  /livox/lidar
  /mynteye/left_rect/image_rect/compressed
  /mynteye/left_rect/camera_info
  /mynteye/right_rect/image_rect/compressed
  /mynteye/right_rect/camera_info
  /mynteye/depth/image_raw
  /realsense/color/image_raw/compressed
  /realsense/color/camera_info
  /realsense/aligned_depth_to_color/image_raw/compressedDepth
  /realsense/aligned_depth_to_color/camera_info
  /realsense/accel/sample
  /realsense/gyro/sample
  /mimix3/gps/assisted
  "
  ```
In the above example, we have provided a list of example topics that you may want to record. You can customize this list based on your specific requirements.
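As a quick sanity check, the `.env` format above is plain shell syntax, so you can source it and iterate over the configured topics. The sketch below creates an illustrative sample file (the contents are placeholders, not the full topic list) and prints each entry:

```shell
# Create a sample docker/.env matching the format described above
# (contents are illustrative placeholders)
mkdir -p docker
cat > docker/.env <<'EOF'
USER=username
TOPICS="
/imu/data
/livox/lidar
"
EOF

# Source the file; each whitespace-separated entry in TOPICS is one topic
. ./docker/.env
for topic in $TOPICS; do
  echo "will record: $topic"
done
```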
- Connect the Sensors: Connect the required sensors to your computer, making sure they are properly powered. This may involve connecting devices such as the Mynt Eye s1030 camera, Xsens IMU, Intel Realsense D435i camera, Livox LiDAR, and any other sensors you wish to use.
- Launch the Recording System: Open a terminal and navigate to the `fruc_dataset_apparatus/docker` directory within the downloaded repository. Run the following command to start the recording system using Docker Compose:

  ```
  docker-compose up
  ```

  Docker Compose reads the configuration from the `docker-compose.yml` file and launches the required containers with the specified settings. The output and logs from each container appear in the terminal window.

The recording system is now up and running, capturing data from the selected topics and storing it in a rosbag file under `fruc_dataset_apparatus/docker`. You can customize the behavior and settings of the recording system by modifying the `docker-compose.yml` file or other relevant configuration files within the repository.
To start using the recording system, simply run `docker compose up`. This command launches the system with default settings and configurations. However, you may want to customize certain parameters based on your specific requirements. This section explains how to customize various aspects of the system.
To customize the parameters of the sensors used in the system (frequency, topic names, topic availability, compression rates, and more), edit the corresponding launch files in the `fruc_dataset_apparatus/catkin_ws/src/sensor_tools/launch/` directory. Each sensor has its own launch file containing that sensor's parameters.
You can modify these launch files with a text editor to adjust the desired parameters; save the changes after editing. Note, however, that the Xsens parameters are configured separately in the `docker-compose.yml` file. For more detailed information on customizing Xsens parameters, refer to the in-depth overview of the melodic Docker image.
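For orientation, ROS launch files set parameters and topic names with `<param>` and `<remap>` tags. The fragment below is an illustrative sketch only: the package, node, and parameter names are hypothetical and will differ in the actual `sensor_tools` launch files.

```xml
<!-- Illustrative sketch: pkg/type/param names are hypothetical,
     not taken from the actual sensor_tools launch files. -->
<launch>
  <node pkg="some_camera_driver" type="driver_node" name="camera">
    <!-- Example: lower the frame rate to reduce bag size -->
    <param name="fps" value="15" />
    <!-- Example: rename the published topic -->
    <remap from="image_raw" to="/camera/image_raw" />
  </node>
</launch>
```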
The recording service in the system captures data into rosbag files. You can customize various configurations related to rosbag recording by editing the `docker-compose.yml` file.
For example, you can modify parameters such as the name of the rosbag file, the maximum size of each bag file, the maximum number of bags to keep, and more. Locate the recording service section in the `docker-compose.yml` file, adjust the parameters according to your preferences, and save the changes.
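As an illustrative sketch of how such options typically appear (the service name, image, and values below are hypothetical; consult the actual `docker-compose.yml` for the real entry), a recording service might pass these settings to `rosbag record`:

```yaml
services:
  record:
    # Hypothetical service entry -- names and values are illustrative
    image: ros:melodic
    env_file: .env
    # -O sets the bag name prefix; --split with --size (MB) rolls over
    # to a new bag; --max-splits caps how many bags are kept
    command: rosbag record -O recording --split --size=2048 --max-splits=10 $TOPICS
```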
The recording system incorporates various sensors to capture data from the environment. These sensors enable perception, localization, and environmental awareness within the ROS ecosystem. The following sensors are utilized in the system:
- Mynt Eye s1030 Camera: The Mynt Eye s1030 is a stereo camera capable of capturing high-resolution images and depth information. It provides visual perception capabilities and is used for tasks such as visual odometry, object detection, and scene understanding.
- Xsens IMU: The Xsens Inertial Measurement Unit (IMU) is a sensor that measures acceleration, angular velocity, and magnetic fields. It provides essential motion sensing data for tasks such as orientation estimation, motion tracking, and sensor fusion.
- Intel Realsense D435i Camera: The Intel Realsense D435i is a depth camera that combines stereo vision with an infrared projector and a motion tracking module. It captures depth information and RGB images and provides camera pose estimation, enabling tasks such as 3D reconstruction and object tracking.
- Livox LiDAR: Livox LiDAR is a Light Detection and Ranging sensor that uses laser beams to measure distances and create 3D point cloud representations of the surroundings. It provides accurate and dense spatial information, facilitating tasks such as mapping, localization, and obstacle detection.
These sensors are integrated into the recording system using ROS nodes and corresponding driver software. The system communicates with each sensor to capture relevant data, which can then be recorded, processed, or visualized within the ROS framework.