
Relation Prediction produces an error after processing 23%, with low rate of label 2 #12

Open
TianzhuQin opened this issue Aug 17, 2022 · 4 comments

Comments

@TianzhuQin

[Screenshot: relation prediction error output, Aug 17, 2022, 4:41 PM]

As shown in the screenshot, the Relation Prediction part produces an error after processing 23% of the data, and the rate of Label 2 at that point is fairly low. On the one hand, I can't see why the error is thrown; on the other hand, the low Label 2 rate seems strange since it is only a prediction. Has this ever happened to you?

@larksq
Contributor

larksq commented Aug 23, 2022

  1. The training might not use the whole dataset after interaction-type filtering, hence the early stopping.
  2. Low accuracy on label 2, which means no interaction, is typical: it reflects a cautious relation prediction in many ambiguous scenarios, which benefits the safety of downstream modules.

@EnnaSachdeva

EnnaSachdeva commented Aug 25, 2022

  1. The training might not use the whole dataset after interaction-type filtering, hence the early stopping.

  2. Low accuracy on label 2, which means no interaction, is typical: it reflects a cautious relation prediction in many ambiguous scenarios, which benefits the safety of downstream modules.
I'm also getting similar performance. Early stopping is usually used during the training process; however, here we are using a pre-trained model for relation prediction (after the model has been trained), so why is there early stopping? Could you please elaborate on point 1?

@larksq
Contributor

larksq commented Nov 22, 2022

We did not implement any early stopping during training. If you see that some scenarios are not being used for training, they are probably being filtered out by logic in the data loader. You can search for 'return None' in the function get_instance() in dataset_waymo.py to check each condition.

For example, one of those filters is the agent_type filter: if you pass 'vehicle' as the 'agent_type' in the training command, every scenario that has no vehicle marked to predict will be skipped. This gets more complicated if you are training the conditional trajectory predictor, because the loaded relation pickle contains only v2v relations, which requires both agents to predict to be vehicles. If these conditions are not met, the scenario is skipped.
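For reference, here is a minimal sketch of the filtering pattern described above. Only get_instance() and dataset_waymo.py are named in this thread; the field names (tracks_to_predict, object_type, scenario id) and the VEHICLE constant are illustrative assumptions, not the repository's actual schema.

```python
VEHICLE = 1  # assumed numeric code for the vehicle object type

def get_instance(scenario, agent_type, relation_pickle=None):
    """Hypothetical sketch: return None whenever a filter rejects the scenario."""
    predict_ids = scenario["tracks_to_predict"]

    # Filter 1: agent_type filter. If 'vehicle' was passed on the training
    # command line, skip scenarios with no vehicle marked to predict.
    if agent_type == "vehicle":
        if not any(scenario["object_type"][i] == VEHICLE for i in predict_ids):
            return None  # scenario skipped

    # Filter 2 (conditional trajectory predictor only): the loaded relation
    # pickle holds v2v relations, so both agents in the pair must be vehicles.
    if relation_pickle is not None:
        influencer, reactor = relation_pickle[scenario["id"]][:2]
        if (scenario["object_type"][influencer] != VEHICLE
                or scenario["object_type"][reactor] != VEHICLE):
            return None  # scenario skipped

    # ... otherwise build and return the training instance ...
    return scenario
```

Searching for the 'return None' statements in the real get_instance() should reveal each of these conditions and any others applied by the data loader.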

@EnnaSachdeva

Thanks for the clarification.
