Can you explain how you get the descriptors? #5
Comments

In your paper, the points in the descriptor are "extracted in the original image and arranged in squared images". Does this mean that you need 128x128 points in the original image to arrange them into a square descriptor?

Yes, but the points on the original image do not need to be a set of unique points. In other words, detected lanes with fewer than 128x128 points will have repeated points in the descriptor. The descriptor extraction works as a pixel-level regularization strategy, where relevant information is extracted from the RGB image regardless of the lane's position, shape, and number of pixels. The detailed strategy for the descriptor is explained in the paper.

Please tell me if something is still unclear :)
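To make the repeated-point idea concrete, here is a minimal NumPy sketch of one way to build such a descriptor. The function name `build_descriptor`, the tiling-based repetition, and the row-major arrangement are assumptions for illustration, not the repository's actual implementation.

```python
import numpy as np

def build_descriptor(image, lane_points, side=128):
    """Sketch: gather RGB values at detected lane points and arrange them
    into a square (side x side x 3) descriptor image. Lanes with fewer than
    side*side points end up with repeated points, as described above."""
    pts = np.asarray(lane_points)            # (N, 2) array of (row, col) pixel coords
    needed = side * side
    # Repeat the point list until there are at least side*side entries,
    # then truncate; short lanes therefore contribute duplicated points.
    reps = int(np.ceil(needed / len(pts)))
    pts = np.tile(pts, (reps, 1))[:needed]
    rgb = image[pts[:, 0], pts[:, 1]]        # (side*side, 3) RGB values from the original image
    return rgb.reshape(side, side, 3)        # square descriptor image
```

The arrangement order (here simply the order of the detected points, repeated) is a design choice; the point of the descriptor is that it has a fixed square shape regardless of the lane's position, shape, or pixel count.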