
HD-maps and Motion Prediction #272

Closed
RobeSafe-UAH opened this issue Sep 28, 2021 · 9 comments
@RobeSafe-UAH

Hey folks,

We are trying to incorporate the HD-map information (lane info, driveable area, etc.) into our stochastic model for the motion-forecasting task. In the API, the plots look quite interesting, since they incorporate useful information for paying particular attention to the relevant vehicles. Nevertheless, after studying the maps provided in hd-maps.tar.gz, the .npy structures for Miami (and similarly for Pittsburgh), namely "driveable_area", "halluc_bbox", "npyimage_to_city_se2", and "ground_height", have the following dimensions (after applying astype(np.uint8) to avoid the lossy conversion from float64 to uint8 expected by imsave):

Shape: (12574, 4)
Image: MIA_10316_halluc_bbox_table

Shape: (3674, 1482)
Image: MIA_10316_driveable_area_mat_2019_05_28

Shape: (3674, 1482)
Image: MIA_10316_ground_height_mat_2019_05_28

Shape: (3, 3)
Image: MIA_10316_npyimage_to_city_se2_2019_05_28

As a result, the rendered images are completely black, with the exception of ground_height (several points are white).

(Attached renders: MIA_10316_driveable_area_mat_2019_05_28, MIA_10316_ground_height_mat_2019_05_28, MIA_10316_halluc_bbox_table, MIA_10316_npyimage_to_city_se2_2019_05_28)
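For anyone hitting the same rendering issue: casting the float64 arrays straight to uint8 truncates most values to near zero, which is why the renders come out black, and the halluc_bbox file (an N×4 table of bounding boxes) and npyimage_to_city_se2 (a 3×3 transform) are not rasters at all. A minimal visualization sketch for the raster layers, assuming matplotlib and an illustrative file path:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative path; adjust to wherever hd-maps.tar.gz was extracted.
arr = np.load("map_files/MIA_10316_driveable_area_mat_2019_05_28.npy")

# astype(np.uint8) leaves small values (0/1 masks, heights in meters) near
# black; rescale to [0, 1] instead so imsave spreads the full gray range.
finite = np.nan_to_num(arr.astype(np.float64))
lo, hi = finite.min(), finite.max()
normalized = (finite - lo) / (hi - lo + 1e-9)
plt.imsave("driveable_area_preview.png", normalized, cmap="gray")
```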

My questions are:

Are these numpy arrays only two-dimensional?
Do these maps cover the entire sequence of trajectories for each city? That is, is there a single BEV HD-map per city, with obstacles appearing and disappearing inside that region, rather than a sequence of HD maps along the ego-vehicle's motion?

Thanks in advance,

@johnwlambert
Contributor

johnwlambert commented Sep 28, 2021

Hi @RobeSafe-UAH, have you tried out our map tutorial here?
https://github.com/argoai/argoverse-api/blob/master/demo_usage/argoverse_map_tutorial.ipynb

We recommend calling the API functions listed here, instead of working with the raw map files:
https://github.com/argoai/argoverse-api/blob/master/argoverse/map_representation/map_api.py

We provide a single map for each city (covering the portion of the city where we release sensor data + tracked trajectories).
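To make that concrete, here is a minimal sketch of querying the map through the ArgoverseMap class instead of the raw .npy files (the query coordinates are illustrative, and the method names below follow map_api.py as I recall it, so they may differ slightly between API versions):

```python
from argoverse.map_representation.map_api import ArgoverseMap

# ArgoverseMap wraps the per-city map files (lane graph, driveable area,
# ground height) behind query functions, so the raw .npy layout never has
# to be touched directly.
avm = ArgoverseMap()

# Illustrative query point in Miami city coordinates.
query_x, query_y, city_name = 419.35, 1125.93, "MIA"

# Lane segments within a 30 m Manhattan search range of the query point.
lane_ids = avm.get_lane_ids_in_xy_bbox(query_x, query_y, city_name, 30.0)
centerlines = [avm.get_lane_segment_centerline(lane_id, city_name) for lane_id in lane_ids]

print(f"Found {len(lane_ids)} lane segments near ({query_x:.1f}, {query_y:.1f}) in {city_name}")
```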

@RobeSafe-UAH
Author

Thanks @johnwlambert for the clear explanation. I have another question. If we analyze the .csv files provided in the motion-forecasting sets, the OBJECT_TYPE column can be AV, OTHERS, or AGENT. What is the difference between AV and AGENT (I assume OTHERS covers the remaining vehicles, pedestrians, bicycles, etc.)? Is the AGENT the ego-vehicle with which the sensor data is recorded, or is that hypothesis wrong?

@RobeSafe-UAH
Author

We mean this:

TIMESTAMP,TRACK_ID,OBJECT_TYPE,X,Y,CITY_NAME
315968203.70296454,00000000-0000-0000-0000-000000000000,AV,419.3545778179974,1125.9280648873869,MIA
315968203.70296454,00000000-0000-0000-0000-000000023470,OTHERS,404.7292168396011,1253.0065911729125,MIA
315968203.70296454,00000000-0000-0000-0000-000000023463,OTHERS,491.96770374503166,1147.2865808928393,MIA
315968203.70296454,00000000-0000-0000-0000-000000023476,OTHERS,473.8274823137583,1146.6724731073973,MIA
315968203.70296454,00000000-0000-0000-0000-000000023478,OTHERS,419.64133675822717,1252.0345383663025,MIA
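For reference, a quick pandas sketch for inspecting how these object types break down in one sequence (the file path is illustrative; in the Argoverse 1 forecasting data each sequence carries a single AV track, a single AGENT track, and a variable number of OTHERS):

```python
import pandas as pd

# Illustrative path to one motion-forecasting sequence CSV.
df = pd.read_csv("forecasting_sample/data/2645.csv")

# One row per (TIMESTAMP, TRACK_ID) pair; OBJECT_TYPE is AV, AGENT, or OTHERS.
print(df["OBJECT_TYPE"].value_counts())

# Pull out the two special tracks for a closer look.
agent_track = df[df["OBJECT_TYPE"] == "AGENT"].sort_values("TIMESTAMP")
av_track = df[df["OBJECT_TYPE"] == "AV"].sort_values("TIMESTAMP")
print(agent_track[["TIMESTAMP", "X", "Y"]].head())
```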

@johnwlambert
Contributor

Hi @RobeSafe-UAH, sorry for the confusion here. These issues clarify the object types: #79 and #58.

Please let me know if you have additional questions.

@RobeSafe-UAH
Author

Hi @johnwlambert, my apologies, but I still don't fully understand. The answer to issue #79 is as follows:

"""
AV - Autonomous Vehicle
AGENT - The object with most interesting trajectory or track
OTHERS - Include all the other objects in the scene for which tracks are recorded

The 15 object classes you mention are provided in the 3D tracking dataset. To my knowledge, they haven't made any such statement regarding forecasting dataset.
"""

Nevertheless, in other motion-prediction datasets (such as INTERACTION or nuScenes) you have the recorded vehicle/pedestrian/etc. track files and the ego-vehicle position with the corresponding sensors. My questions are: Is AV in the Argoverse nomenclature the ego-vehicle from which the sensors were recorded? And why is there a "most interesting" trajectory or track (AGENT) if we actually have to predict the future positions of the agents n seconds ahead?

Thanks in advance :).

@RobeSafe-UAH
Author

@johnwlambert Another naive question: is it required to predict the future positions of all agents n seconds ahead, or just those of the AGENT (the most interesting trajectory or track), as a single-agent motion prediction conditioned on the traffic situation and the other agents?

@James-Hays
Contributor

For Argoverse 1, the task is only to predict the future for the identified AGENT.

For Argoverse 2, multi-agent forecasting will be supported.
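Concretely, that means extracting only the AGENT track from each Argoverse 1 sequence and splitting it into the observed part and the part to be predicted. A minimal sketch, assuming the standard 5 s / 10 Hz layout (2 s observed, 3 s future) and an illustrative file path:

```python
import pandas as pd

# Illustrative sequence; Argoverse 1 forecasting sequences are 5 s at 10 Hz.
df = pd.read_csv("forecasting_sample/data/2645.csv")

# Only the AGENT track is evaluated: the first 2 s (20 frames) are observed
# and the remaining 3 s (30 frames) are the ground-truth future to predict.
agent_xy = (
    df[df["OBJECT_TYPE"] == "AGENT"]
    .sort_values("TIMESTAMP")[["X", "Y"]]
    .to_numpy()
)
observed, future_gt = agent_xy[:20], agent_xy[20:50]
print(observed.shape, future_gt.shape)  # expect (20, 2) and (30, 2)
```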

@RobeSafe-UAH
Author

@James-Hays Totally understood now. I read the preliminary documentation of Argoverse 2.0 (https://openreview.net/forum?id=vKQGe36av4k) and its comparison with Argoverse 1.1:

"""
Comparison to Argoverse 1.1: This dataset was the first motion-forecasting specific dataset in the self-driving domain and was pivotal in influencing increased research activity in this domain. However, as evident in Figure 2 of supplementary material, the performance of forecasting methods has saturated, with no significant improvement in minFDE over the last several months. Furthermore, the lack of object categories and multi-agent evaluation, as well as shorter forecast horizon, and smaller quantity of challenging scenarios, has limited its use. Argoverse 2.0 overcomes all these shortcomings and provides a much more “complete” dataset to work with.
"""

Thanks a lot for your suggestions.
