How to get the ego vehicle's future planning trajectory without ground truth #10

Closed
Alexwxwu opened this issue Mar 21, 2022 · 1 comment

Alexwxwu commented Mar 21, 2022

Nice work! I have a question about the ego vehicle's future planning trajectory. In this work it is used as an input, but at test and evaluation time, how can we obtain the ego vehicle's future planning trajectory when we do not have the ground truth? It seems that, in the code, the planned trajectory is generated from the ground truth.
Line 325 in data.py:

```python
def getPlanFuture(self, dsId, planId, refVehId, t):
    # Trajectory of the reference vehicle
    refColIndex = np.where(self.Tracks[dsId - 1][refVehId - 1][0, :] == t)[0][0]
    refPos = self.Tracks[dsId - 1][refVehId - 1][1:3, refColIndex].transpose()
    # Trajectory of the planned vehicle
    planColIndex = np.where(self.Tracks[dsId - 1][planId - 1][0, :] == t)[0][0]
    stpt = planColIndex
    enpt = planColIndex + self.t_f + 1
    planGroundTrue = self.Tracks[dsId - 1][planId - 1][1:3, stpt:enpt:self.d_s].transpose()
    planFut = planGroundTrue.copy()
    # Fit the downsampled waypoints as the planned trajectory in testing and evaluation.
    if self.fit_plan_traj:
        wayPoint = np.arange(0, self.t_f + self.d_s, self.d_s)
        wayPoint_to_fit = np.arange(0, self.t_f + 1, self.d_s * self.further_ds_plan)
        planFut_to_fit = planFut[::self.further_ds_plan, ]
        laterParam = fitting_traj_by_qs(wayPoint_to_fit, planFut_to_fit[:, 0])
        longiParam = fitting_traj_by_qs(wayPoint_to_fit, planFut_to_fit[:, 1])
        planFut[:, 0] = quintic_spline(wayPoint, *laterParam)
        planFut[:, 1] = quintic_spline(wayPoint, *longiParam)
    # Express the plan relative to the reference vehicle and reverse the time order.
    revPlanFut = np.flip(planFut[1:, ] - refPos, axis=0).copy()
    return revPlanFut
```
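For context, `fitting_traj_by_qs` and `quintic_spline` are helpers defined elsewhere in the repo. The bodies below are a minimal sketch of how such a quintic fit could look (a degree-5 polynomial fitted by least squares and then re-evaluated on the full waypoint grid); they match the names used in the excerpt above but are not necessarily the repo's exact implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def quintic_spline(x, z, a, b, c, d, e):
    # Degree-5 polynomial; six coefficients give enough freedom to match
    # position, velocity, and acceleration at both endpoints.
    return z + a * x + b * x**2 + c * x**3 + d * x**4 + e * x**5

def fitting_traj_by_qs(x, y):
    # Least-squares fit of the waypoints (x, y) by the quintic polynomial.
    param, _ = curve_fit(quintic_spline, x, y)
    return param

# Illustration: smooth 26 waypoints by fitting only every 2nd point,
# then re-evaluating the quintic on the full grid (mirrors further_ds_plan).
t = np.arange(0, 26)             # downsampled time indices (assumed)
y_gt = 0.5 * t + 0.02 * t**2     # fake longitudinal positions (assumed)
coeffs = fitting_traj_by_qs(t[::2], y_gt[::2])
y_plan = quintic_spline(t, *coeffs)
```

The effect is that the plan fed to the model at test time is a smoothed, lower-information version of the ground-truth future rather than the raw future track itself.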
@Haoran-SONG (Owner) commented:

As stated in the paper, in the quantitative experiments the ego plan is obtained from the ego's ground truth but degraded (smoothed by the quintic fitting above), since we cannot know the exact future states of the surrounding agents once the ego's plan is changed.

In our qualitative experiments (active planning and the user study), the sets of diverse plans are produced by a trajectory generator. The choice is not limited to [4]: different sampling-based trajectory generators, or vehicle motion controllers, could be employed to provide diverse ego plans and to investigate how the multi-agent prediction result varies with the provided ego plan; a sketch of one such generator follows below.
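As an illustration of what such a sampling-based generator could look like, here is a minimal sketch in the lattice/Frenet-planner style: it enumerates target lateral offsets and end speeds, and connects the current state to each target with a quintic polynomial (a boundary-value problem in position, velocity, and acceleration). All function names, the 10 m/s initial speed, and the numeric targets are illustrative assumptions, not code from this repo:

```python
import numpy as np

def quintic_coeffs(x0, v0, a0, xT, vT, aT, T):
    # Solve for the quintic connecting start state (x0, v0, a0) to end
    # state (xT, vT, aT) over horizon T. The first three coefficients are
    # fixed by the start state; the last three come from a 3x3 linear system.
    A = np.array([
        [T**3,     T**4,      T**5],
        [3 * T**2, 4 * T**3,  5 * T**4],
        [6 * T,    12 * T**2, 20 * T**3],
    ])
    b = np.array([
        xT - (x0 + v0 * T + 0.5 * a0 * T**2),
        vT - (v0 + a0 * T),
        aT - a0,
    ])
    c3, c4, c5 = np.linalg.solve(A, b)
    return np.array([x0, v0, 0.5 * a0, c3, c4, c5])

def eval_poly(c, t):
    # Evaluate the polynomial with coefficients c at times t.
    return sum(ci * t**i for i, ci in enumerate(c))

def sample_ego_plans(lat_offsets, end_speeds, T=5.0, dt=0.2, v0=10.0):
    # Enumerate (lateral offset, end speed) targets and return one
    # candidate (lateral, longitudinal) trajectory per combination.
    t = np.arange(dt, T + dt, dt)
    plans = []
    for d in lat_offsets:        # target lateral displacement [m]
        for v in end_speeds:     # target longitudinal end speed [m/s]
            lat = quintic_coeffs(0.0, 0.0, 0.0, d, 0.0, 0.0, T)
            # End position from the average-speed heuristic (assumed).
            lon = quintic_coeffs(0.0, v0, 0.0, 0.5 * (v0 + v) * T, v, 0.0, T)
            plans.append(np.stack([eval_poly(lat, t), eval_poly(lon, t)], axis=1))
    return plans

# e.g. keep-lane vs. lane-change candidates at several end speeds:
candidates = sample_ego_plans(lat_offsets=[-3.7, 0.0, 3.7], end_speeds=[8.0, 10.0, 12.0])
```

Each candidate plan can then be fed to the predictor in place of the ground-truth-derived plan, which is how one can probe how the multi-agent prediction responds to different ego intentions.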
