Releases: facebookresearch/projectaria_tools
1.5.0
🎉 We’re excited to announce Project Aria Tools v1.5.0. 🎉
[Tools - Python]
- New ASR code sample showing how to use Faster Whisper to run Whisper speech recognition on an Aria audio stream. The ASR outputs are time-aligned with Aria Device Time. See the Automated Speech Recognition README for how to get started.
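To illustrate the time alignment, here is a minimal self-contained sketch (not the shipped sample, whose code lives in the repository). Whisper segment times are reported in seconds relative to the start of the audio clip, so one plausible mapping to Aria Device Time is to offset them by the device timestamp of the first audio sample; the helper name and the nanosecond scale factor are assumptions for illustration.

```python
# Hypothetical helper: map Whisper/Faster-Whisper segment times, which are
# in seconds relative to the start of the audio clip, onto Aria Device Time.
def to_device_time_ns(segment_start_s, segment_end_s, first_sample_device_ns):
    """Offset clip-relative segment times by the device timestamp (ns)
    of the first audio sample."""
    start_ns = first_sample_device_ns + int(segment_start_s * 1e9)
    end_ns = first_sample_device_ns + int(segment_end_s * 1e9)
    return start_ns, end_ns

# A segment spanning 1.5 s to 2.0 s of a clip whose first audio sample
# carries device timestamp 100_000_000_000 ns:
print(to_device_time_ns(1.5, 2.0, 100_000_000_000))
# -> (101500000000, 102000000000)
```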
[MPS CLI]
- Support for a new MPS service: hand tracking
- Updated Eye Tracking to support depth estimation
[Documentation]
- Various improvements, including:
  - New MPS Troubleshooting page
  - New Collaborative Tools page - did you know you can use Aria data with Nerfstudio?
  - New Hand Tracking Data Formats page
  - Updated MPS Google Colab tutorials
1.4.0
🎉 We are excited to announce the release of version 1.4.0! 🎉
- MPS CLI (ARK | Project Aria Research partners)
  - New (and recommended) workflow for requesting Machine Perception Services.
  - MPS Multi-SLAM can only be requested via the MPS CLI.
- MPS Multi-SLAM (ARK | Project Aria Research partners)
  - Computes SLAM MPS outputs in a shared coordinate frame for multiple VRS files.
Here is the complete changelog:
[API]
- DataProvider - {Feature} - Make DeliverQueue start and end timestamp logic more robust
- Calibration - {Feature} - Expose IMU rectification matrix and bias vector
- Viewer MPS - {Visualizer} Python - viewer_mps - Add mps_folder option
[MPS CLI]
- aria_mps - Addition of a Python CLI for Project Aria Research partners to manage their Machine Perception Services requests (upload, run monitoring, and results retrieval). Improvements over the Desktop app include:
  - Health checks run automatically prior to upload
  - Recordings are not uploaded if they are invalid for any of the requested MPS services
  - Resumable uploads
  - Concurrent processing
  - Outputs are downloaded automatically once processing is complete
  - Recordings are processed only once:
    - Uploaded data is stored for 24 hours
    - Additional MPS can be requested without needing to upload again
    - Data can be reprocessed without needing to upload again
  - The CLI can be integrated into automated workflows
- Request Multi-Recording outputs
  - Compute SLAM MPS outputs in a shared coordinate frame for multiple VRS files
- projectaria_tools must be installed using pip to access the CLI
[Documentation]
- aria_mps CLI documentation
- MPS Data Formats documentation refactored to align with MPS CLI output structure
- Minor improvements to other documentation
1.3.3
🎉 We are excited to announce the release of version 1.3.3 🎉
This release includes several new features and improvements, including:
- Features
  - MpsDataPathsProvider & MPSDataProvider API
- Dataset support
  - Aria Everyday Activities (AEA) dataset support
- NEW - Code Sample
  - Demo to run EfficientSAM with an eye gaze prompt
- CI/CD/Build
  - Improved our PyPI Python wheel generation workflow
Here is the complete changelog:
[Visualization]
- viewer_mps updated to load multi-sequence datasets (AEA, ADT)
- viewer_projects_aea
[API]
- MPS
  - Add MpsDataPathsProvider & MPSDataProvider APIs, making it easy to retrieve MPS file assets and query their data by timestamp
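The idea behind these APIs can be sketched with a hypothetical, self-contained helper. The folder layout below is modeled on the MPS CLI output structure, and the helper name is illustrative, not the actual projectaria_tools API:

```python
import tempfile
from pathlib import Path

# Hypothetical mapping of asset names to relative paths, modeled on the
# MPS CLI output layout; the real provider APIs live in projectaria_tools.
KNOWN_ASSETS = {
    "closed_loop_trajectory": "slam/closed_loop_trajectory.csv",
    "semidense_points": "slam/semidense_points.csv.gz",
    "general_eye_gaze": "eye_gaze/general_eye_gaze.csv",
}

def find_mps_assets(mps_root):
    """Return paths of the known MPS output files that exist under mps_root."""
    root = Path(mps_root)
    return {name: root / rel for name, rel in KNOWN_ASSETS.items()
            if (root / rel).exists()}

# Demo on a throwaway folder containing only a trajectory file
root = Path(tempfile.mkdtemp())
(root / "slam").mkdir()
(root / "slam" / "closed_loop_trajectory.csv").touch()
print(sorted(find_mps_assets(root)))  # -> ['closed_loop_trajectory']
```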
[Dataset]
- AEA: Aria Everyday Activities dataset
- ADT: Aria Digital Twin dataset updated with MPS data
[Build/CI/CD]
- {Build} Improve FMT compatibility (#54)
- {Build} GitHub CI - Python Wheels generation
- {Build} Fix python wheel build workflow for PyPI
[Documentation]
- AEA documentation
- Minor improvements to other documentation
[Thank you to our new contributors]
@selcuk-meta
@eric-fb
1.3.0
🎉 We are excited to announce the release of version 1.3.0 🎉
This release includes several major new features and improvements, including:
- New python visualization samples
- A VRS_to_MP4 tool to help quickly review recordings (RGB + sound) from data collection
- New C++ and Python tutorials on MPS point cloud colorization & how to use ADT depth map to generate point cloud
…
Here is the complete changelog:
[Visualization]
- [Python] New rerun visualization samples to help debug and visualize Aria and MPS temporal data & states
[API]
- [Python] Introduce a projectaria_tools.mps.utils module to help query and filter loaded MPS data

```python
import projectaria_tools.mps.utils

# Retrieve Pose/Eye Gaze data by timestamp
get_nearest_eye_gaze, get_nearest_pose
# Reproject eye gaze vector in image
get_gaze_vector_reprojection
# Filter Point Cloud data
filter_points_from_confidence
filter_points_from_count
```
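To illustrate the kind of query that get_nearest_eye_gaze and get_nearest_pose perform, here is a self-contained nearest-timestamp lookup sketch. The real helpers operate on MPS record objects, not strings; only the lookup logic is shown, and the function name here is hypothetical:

```python
from bisect import bisect_left

def get_nearest(timestamps, records, query_ns):
    """Return the record whose timestamp is closest to query_ns.

    timestamps must be sorted ascending; records is parallel to it.
    """
    i = bisect_left(timestamps, query_ns)
    if i == 0:
        return records[0]
    if i == len(timestamps):
        return records[-1]
    before, after = timestamps[i - 1], timestamps[i]
    return records[i] if after - query_ns < query_ns - before else records[i - 1]

# Toy stand-ins for MPS records keyed by device timestamps
ts = [100, 200, 300]
poses = ["pose@100", "pose@200", "pose@300"]
print(get_nearest(ts, poses, 260))  # -> "pose@300"
```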
- [Python - C++] Image undistortion update
  - distort_by_calibration API updated to perform bilinear or nearest-neighbor multithreaded interpolation, to better select the right interpolation for depth maps (bilinear) or segmentation masks (nearest) -> see distort_by_calibration, distort_depth_by_calibration & distort_label_by_calibration
- [Python - C++] Calibration rotation
- Eases access to upright pinhole calibration data for RGB/SLAM images via rotate_camera_calib_cw90deg
[Tools]
- [Python] Tool to create an MP4 file from VRS RGB and audio data (code, documentation)
- [C++] PointCloud Colorization Sample
- [C++] Aria Viewer - Enable plot buffering in AriaViewer to avoid slowdown/speedup when audio is enabled/disabled
- [C++/Python] VRS health check
  - Add health checks for VRS streams (audio, barometer, bluetooth, camera, gps, imu, wifi) -> reads all records from all streams and checks the health of each record in each stream
- [Python] Projects/Aria Digital Twin - Add notebook tutorial to create and merge point clouds from depth map data
[Continuous integration - GitHub]
- Various cleanup in GitHub actions
- Improved CI code coverage by adding tests for the Python notebooks
[BugFix]
- [Core] Fix support for multiple GPS streams (coming from Aria and a cell phone)
[Known Issues]
- Machine Perception Services (MPS) outputs have been renamed, so that they more clearly communicate what is in the outputs:
  - SLAM/Trajectory
    - global_points.csv.gz -> semidense_points.csv.gz
  - Eye Gaze
    - generalized_eye_gaze.csv -> general_eye_gaze.csv
    - calibrated_eye_gaze.csv -> personalized_eye_gaze.csv
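Downstream code that still expects the old file names can bridge the rename with a small compatibility shim. The helper below is a hypothetical sketch, not part of projectaria_tools; only the old/new name pairs come from the notes above:

```python
import tempfile
from pathlib import Path

# Old MPS output names -> new names, per the renames listed above.
RENAMES = {
    "global_points.csv.gz": "semidense_points.csv.gz",
    "generalized_eye_gaze.csv": "general_eye_gaze.csv",
    "calibrated_eye_gaze.csv": "personalized_eye_gaze.csv",
}
OLD_NAME = {new: old for old, new in RENAMES.items()}

def resolve_mps_file(folder, new_name):
    """Return the path for new_name, falling back to its pre-rename name."""
    folder = Path(folder)
    if (folder / new_name).exists():
        return folder / new_name
    old = OLD_NAME.get(new_name)
    if old is not None and (folder / old).exists():
        return folder / old
    raise FileNotFoundError(new_name)

# Demo: a folder that still contains pre-rename outputs
folder = Path(tempfile.mkdtemp())
(folder / "global_points.csv.gz").touch()
print(resolve_mps_file(folder, "semidense_points.csv.gz").name)
# -> global_points.csv.gz
```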
[Documentation]
- Aria Digital Twin, new landing and challenge page
- Aria Synthetic Environments, new landing and challenge page
- Data Formats - how Project Aria uses VRS
- Data Utilities - refactored our visualization guide to have a Python and a C++ page
- Refactored downloading MPS sample data into a single download page
[Thank you to our new contributors]
@baderouaich
1.2.0
[Features]
[Core - Python]
- Sophus Python bindings
  - Add SO3, SE3 interfaces in Python based on the Sophus library. Example code is provided in the sophus_quickstart_tutorial notebook
- Python type hinting / stubs
  - Python type hints/stubs are automatically generated as part of the PyPI package when installing projectaria_tools with pip install. Users can also generate them on their own using the generate_stubs.py script.
- Google Colab runnable notebooks
  - Python notebooks can now be run in Google Colab -> Dataprovider Quickstart Tutorial | Machine Perception Services Tutorial
  - No installation on a local machine is required to test and play with projectaria_tools
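Conceptually, an SE3 element from the new Sophus bindings is a rigid 4x4 transform that can be composed and inverted. The sketch below illustrates this with plain NumPy; it is not the Sophus binding itself, whose interface is shown in the sophus_quickstart_tutorial notebook:

```python
import numpy as np

def se3_matrix(R, t):
    """Build a 4x4 rigid transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# A 90-degree rotation about z plus a translation, and a pure translation
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
T_a = se3_matrix(Rz, [1., 0., 0.])
T_b = se3_matrix(np.eye(3), [0., 2., 0.])

T_ab = T_a @ T_b            # composition, like SE3 * SE3
T_inv = np.linalg.inv(T_a)  # inverse transform
p = T_ab @ np.array([0., 0., 0., 1.])  # transform the origin as a point
```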
[Core]
- Add cameraId to ImageDataRecord
  - Allows an ImageDataRecord to report which camera the data came from
- Continuous integration
  - GitHub Actions runs the Python unit tests
- Dependencies
  - Update to use VRS v1.1.0
  - Remove the cereal dependency and use rapidjson directly
[MPS]
- Calibrated and generalized EyeGaze
  - Support for calibrated eye gaze via in-session calibration
  - Support for multiple wearers in a single Aria capture. The eye gaze output will contain a session_uid field that helps distinguish between different wearers.
- Python type format
  - print(X) will now display object content
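For example, rows from a multi-wearer eye gaze output can be split per wearer by grouping on session_uid. The rows below are hypothetical stand-ins and far simpler than the real CSV:

```python
from collections import defaultdict

# Hypothetical eye gaze rows; the real CSV carries many more columns.
rows = [
    {"session_uid": "a", "tracking_timestamp_us": 1},
    {"session_uid": "b", "tracking_timestamp_us": 2},
    {"session_uid": "a", "tracking_timestamp_us": 3},
]

# Group rows per wearer using the session_uid field
by_wearer = defaultdict(list)
for row in rows:
    by_wearer[row["session_uid"]].append(row)

print(sorted(by_wearer))  # -> ['a', 'b']
```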
[Tools]
- MPS Replay Viewer {C++}
  - Renders the static scene and dynamic elements: 2D/3D observation rays + eye gaze data
[BugFix]
- [Core]
- {bug fix} Update crop and rescale in SensorCalibration
- Update the API so that calibration data matches between the sensor and device access points: get_sensor_calibration(stream_id).camera_calibration() and provider.get_device_calibration().get_camera_calib(name) now return matching data.
[Known Issues]
- [Core]
- The Sophus API has been updated; if you encounter issues, please update to v1.2 of Project Aria Tools
- Here is how to update existing code following the API change for SO3/SE3:
  - .matrix() -> .to_matrix()
  - .quaternion() -> .rotation().to_quat()[0] or to_quat_and_translation()[0]
[Documentation]
- [Core]
- VRS to MP4 Tutorial showing how to export VRS RGB images to an MP4 video.
- Additional information added to 3D Coordinate Frame Conventions
- [MPS]
- Eye Gaze Data Formats updated to include calibrated_eye_gaze.csv and summary.json
- Eye Gaze Calibration
[Thank you to our new contributors]
@brentyi
Seanwarren-meta
Selcuk Karakas
Przemyslaw Szczepanski
Guru Somasundaram
Full Changelog: 1.1.0...1.2.0
1.1.0
[BugFix]
- [Core]
  - AriaViewer (reset line plots when a new timestamp is requested)
- [ADT]
  - Released ADT datasets v1.1:
    - The ADT library has been updated to support dataset versioning.
    - Data schema update:
      - Fix quaternion order in 'aria_trajectory.csv': corrected to qw, qx, qy, qz from qx, qy, qz, qw
      - Fix gravity field names: now called gravity_x/y/z_world to align with the MPS layout
      - Change SkeletonMetaData.json to skeleton_aria_association.json to better reflect the file content
      - Change gt-metadata.json to metadata.json
  - Users are STRONGLY ADVISED to pull from the release branch and follow the ADT download instructions to update their ADT datasets to v1.1.
- [ASE]
  - Released a more accurate set of FishEye camera model calibration parameters
[Documentation]
- Minor updates
1.0.0
Initial release (https://ariatutorial2023.github.io/)
[Core]
- Provide C++/Python VRS data provider (sensor data and configuration) and utilities (camera poses and intrinsics manipulation)
[Tools]
- Aria VRS and MPS visualizers
[Projects]
- ADT - Aria Digital Twin
- A real-world dataset, with hyper-accurate digital counterpart & comprehensive ground-truth annotation
- ASE - Aria Synthetic Environments
- A procedurally generated synthetic Aria dataset for large-scale ML research.
[Documentation]
- Project Aria Documentation (Aria Research Kit, Open Dataset and Project Aria Tools)