Sanchitgarg/audio sensor #1646
Conversation
Thanks for creating this PR, @sanchitgarg. Just a quick pass with some easy clean-up. Will return with a more thorough review.
data/test_assets/dataset_tests/dataset_0/objects/dataset_test_object1.object_config.json
Did a proof pass over the README. Thanks for the documentation, this is great!
docs/AUDIO.md
The C++ Audio sensor class uses the [RLRAudioPropagation](https://github.com/facebookresearch/rlr-audio-propagation) library. This is a bi-directional ray tracer based audio simulator. Given a source location, listener location, mesh, audio materials, and some parameters, it will simulate how audio waves travel from the source to the listener. The output of the library is an impulse response for the given setup.

The C++ implementation is exposed for python user via py-bindings. This document explains the various python apis, structs and enums.
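To make the "output is an impulse response" point concrete, here is a small, self-contained sketch (plain NumPy, independent of Habitat-sim and RLRAudioPropagation) of how an impulse response is typically used: convolve a dry source signal with the IR to obtain what the listener hears. The signals below are toy values, not simulator output.

```python
import numpy as np

def apply_impulse_response(source: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Convolve a dry (anechoic) source signal with an impulse response.

    A simulator like RLRAudioPropagation returns the IR for a given
    source/listener/scene setup; the audio at the listener is source * ir
    (linear convolution).
    """
    return np.convolve(source, ir)

# Toy example: a unit click through an IR with a direct path and one echo.
click = np.array([1.0, 0.0, 0.0, 0.0])
ir = np.array([0.8, 0.0, 0.3])  # direct arrival at t=0, echo at t=2
heard = apply_impulse_response(click, ir)
print(heard)  # direct sound, then the attenuated echo two samples later
```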
Note: once this is live on aihabitat.org, we can point directly to the AudioSensor class docs, tutorial Colab, or other resources too.
Suggested change:
- The C++ implementation is exposed for python user via py-bindings. This document explains the various python apis, structs and enums.
+ The C++ implementation is exposed for python users via pybind11. This document explains the various python APIs, structs, and enums. Also see the relevant [Habitat-sim python API](https://aihabitat.org/docs/habitat-sim/classes.html) doc pages.
Looks good overall.
I left a few documentation comments, and I didn't test the viewer changes (CI won't either), so please double-check the viewer with and without the audio build.
Once those are resolved and the CI is green I think we can merge. 🎉
Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
When tests pass, good to merge!
Motivation and Context
With this change, we give our awesome virtual agents the ability to hear. This enables new research avenues like Audio-Visual navigation.
We are adding a new git submodule: rlr-audio-propagation. This repo hosts the headers and binaries for Meta RL Research's audio propagation engine. A new AudioSensor C++ class is added that calls into this new library; it is exposed to Python via the existing pybindings.
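For readers unfamiliar with how Habitat-sim sensors surface in Python, the general pattern is: build a sensor spec, register it with the simulator, then read observations by uuid. The sketch below mimics that flow purely in Python with hypothetical stand-in names; the real AudioSensor spec and its fields are C++ types exposed through the bindings, so consult docs/AUDIO.md for the actual API.

```python
from dataclasses import dataclass

# Hypothetical stand-ins illustrating the spec -> sensor -> observation flow.
# These are NOT the real Habitat-sim classes, which live in the C++ bindings.
@dataclass
class AudioSensorSpecSketch:
    uuid: str = "audio_sensor"
    channel_count: int = 2  # e.g. binaural output

class SimulatorSketch:
    def __init__(self):
        self._sensors = {}

    def add_sensor(self, spec):
        # Register the sensor under its uuid, mirroring the Habitat-sim pattern.
        self._sensors[spec.uuid] = spec

    def get_sensor_observations(self):
        # Return a dummy impulse-response-shaped observation per sensor:
        # one list of samples per channel (4 placeholder samples here).
        return {uuid: [[0.0] * 4 for _ in range(spec.channel_count)]
                for uuid, spec in self._sensors.items()}

sim = SimulatorSketch()
sim.add_sensor(AudioSensorSpecSketch())
obs = sim.get_sensor_observations()["audio_sensor"]
print(len(obs))  # 2 channels
```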
Further, we are adding a new build flag: --audio. This is an optional flag; any user who wants to use the audio sensor will have to build with it.
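As a sketch of what a from-source build with audio enabled might look like (the exact entry points and flag plumbing may differ between releases, so treat these invocations as illustrative rather than authoritative):

```shell
# Sketch only: check the build documentation for the exact invocation.
# Python package build with the audio sensor enabled:
python setup.py install --audio

# Or, for the C++ build (e.g. to get the viewer with audio support):
./build.sh --audio
```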
Please see the examples/tutorials/audio_agent.py script for a tutorial on how to use it.
Also, please refer to docs/AUDIO.md, which goes into more detail about the feature.
How Has This Been Tested
The code was tested locally using the C++ viewer. A new audio_agent.py script has been added as well to test the change. Further, we tested the integration with the SoundSpaces team: they were able to generate impulse responses in their navigation system.
Types of changes
Checklist