Rework development tooling #256
Conversation
@@ -1,21 +0,0 @@
#
This script was a hack from back when I was working with the MBARI controller in the loop. It made it easy for me to get set up with the controller.
Do you think it's still useful in that context? I plan to support running publicly available missions with the MBARI controller.
Yep, it definitely is, although a user would have to know a bit about how to navigate tmux. The top pane loads the MBARI CLI and the bottom pane the simulator. The second window was for plotting vehicle behavior as seen internally by the MBARI controller (we were debugging issues like sign flips in the integration, etc.). I think the second window can be removed.
Circling back here. On a second look, these scripts are super specific to MBARI's codebase. I think we should move them to the other side and make sure they are present in the public image, along with the same capabilities to iterate on them that the tooling I'm adding here provides.
See the modified installation page below.

## Installation

### Use Docker image

Build the latest LRAUV simulation image:

```bash
docker build --target lrauv -t osrf/lrauv:latest -f tools/setup/Dockerfile https://github.com/osrf/lrauv.git#main
```

You may then run a sample simulation:

```bash
docker run --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:ro --gpus all osrf/lrauv:latest
```

or attach to the container and run it from there.

Note: at the moment, these Docker images require Nvidia graphics. Make sure you have a recent version of Docker and nvidia-docker installed, if so required by your Docker version.

### Build from source

There are two ways to build the LRAUV simulation from source.

#### Using a local workspace

##### Install prerequisites

###### Gazebo Garden

Ideally, you want to be running Gazebo Garden. Gazebo is the toolbox of development libraries for simulation that is used to simulate LRAUVs, and Gazebo Garden is the version the LRAUV simulation uses, so it needs to be installed first. Instructions on how to compile and install Gazebo Garden from source can be found here.

NOTE: The latest source in this repository needs specific changes in some of the Gazebo libraries. Instead of the Gazebo Garden collection file, it is recommended to use the following repos file when getting the sources:
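The repos file itself is not reproduced in this excerpt. As a sketch of how such a file would typically be consumed with vcstool (the `lrauv_garden.repos` filename and the workspace path are assumptions, not the project's actual names):

```bash
# Create a Gazebo source workspace and import the repositories
# listed in the custom repos file (hypothetical filename).
mkdir -p ~/gazebo_ws/src
cd ~/gazebo_ws
vcs import src < lrauv_garden.repos
```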
Alternatively, unstable binary packages for Gazebo Garden are available:
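A sketch of how nightly Gazebo binaries are usually enabled on Ubuntu, assuming the standard packages.osrfoundation.org nightly repository (verify against the official Gazebo documentation):

```bash
# Add the OSRF nightly apt repository and its signing key.
sudo wget https://packages.osrfoundation.org/gazebo.gpg \
  -O /usr/share/keyrings/pkgs-osrf-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/pkgs-osrf-archive-keyring.gpg] http://packages.osrfoundation.org/gazebo/ubuntu-nightly $(lsb_release -cs) main" \
  | sudo tee /etc/apt/sources.list.d/gazebo-nightly.list > /dev/null
```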
After this, just update and install the Gazebo Garden packages:
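A minimal sketch, assuming the upstream `gz-garden` metapackage name:

```bash
sudo apt update
sudo apt install gz-garden
```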
##### Colcon

Colcon is the command line tool used to help easily compile and test all the packages in the LRAUV repository. First, the ROS 2 repositories that contain the colcon packages need to be added:
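A sketch of the usual ROS 2 apt repository setup on Ubuntu (check the ROS 2 installation documentation for the current keys and URLs):

```bash
# Add the ROS 2 apt repository and its signing key.
sudo apt install curl gnupg lsb-release
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key \
  -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" \
  | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null
```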
Then just update and install the colcon packages:
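A minimal sketch, assuming the standard `python3-colcon-common-extensions` package:

```bash
sudo apt update
sudo apt install python3-colcon-common-extensions
```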
##### Extra dependencies
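The original list of extra dependencies is not shown in this excerpt. One common way to resolve the remaining system dependencies of a colcon workspace is rosdep; this is a sketch of that approach, not necessarily what the project prescribes:

```bash
# Resolve remaining system dependencies with rosdep,
# run from the workspace root after fetching the sources.
sudo apt install python3-rosdep
sudo rosdep init   # skip if rosdep was already initialized
rosdep update
rosdep install --from-paths src --ignore-src -r -y
```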
##### Get the sources and build
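A sketch, assuming a `~/lrauv_ws` workspace path (the path is an assumption; the repository URL is the one used in the Docker instructions above):

```bash
# Create a workspace and clone the LRAUV simulation sources.
mkdir -p ~/lrauv_ws/src
cd ~/lrauv_ws/src
git clone https://github.com/osrf/lrauv.git
# Build all packages with colcon.
cd ~/lrauv_ws
colcon build
```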
Developers may also want to build tests. Note that this will take longer:
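A sketch, assuming tests are toggled through the usual CMake flag (the exact flag this project uses is not shown in the excerpt):

```bash
colcon build --cmake-args -DBUILD_TESTING=ON
colcon test
```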
#### Using a containerized workspace

This is the recommended mechanism for development, as it ensures a consistent environment. Make sure you have a recent version of Docker and nvidia-docker installed, if so required by your Docker version. Then, simply run the following command:
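The actual entry point this PR adds is not shown in the excerpt; the script path below is a hypothetical placeholder:

```bash
# Hypothetical name for the containerized-workspace helper script.
./tools/setup/dev_workspace.bash
```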
It'll prompt for a workspace directory, or default to the current working directory if none is provided, and then proceed to build the Docker images. Note this may take a while. Once the build is done, run the container. Now, you can build the workspace:
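Inside the container, a standard colcon build is the likely step (a sketch; the exact command is not shown in the excerpt):

```bash
colcon build
```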
To join in a separate terminal, simply attach to the running container.
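For example, with plain Docker (the container name is hypothetical):

```bash
docker exec -it lrauv_dev bash
```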
When running this command I get the following error:
@caguero that's because we haven't merged yet. Change the branch name at the end of the URL!
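For instance, pointing the build at the PR branch instead of `main` (branch name hypothetical):

```bash
docker build --target lrauv -t osrf/lrauv:latest \
  -f tools/setup/Dockerfile https://github.com/osrf/lrauv.git#<your-branch>
```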
Alright, in the interest of time, I will move forward with this PR.
Precisely what the title says. This patch replaces our multiple Dockerfiles and scripts with a multi-stage build and a single script to help manage a containerized workspace.
Connected to #257. Documentation upcoming!