Every lane detection dataset will have a certain bias toward the position of the horizon line. Since the model was trained on TuSimple, and you're testing on a different dataset, that bias will have a large effect on the quality of the predictions. As you can see in your image, the horizon line has a large offset from the real horizon line.
In order to fix this, you'll have to fine-tune the model on your dataset.
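The fine-tuning loop itself is ordinary PyTorch: load the pretrained checkpoint, then continue training on the new dataset, typically with a lower learning rate. Here is a minimal sketch; the `"model"` checkpoint key and the exact loader/loss signatures are assumptions, so adjust them to match how your checkpoint was saved.

```python
import torch

def fine_tune(model, checkpoint_path, train_loader, loss_fn, epochs=10, lr=3e-5):
    """Resume from a pretrained checkpoint and fine-tune on a new dataset.

    Assumes the checkpoint is a dict storing the weights under a "model" key;
    adjust if your checkpoint layout differs.
    """
    state = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(state["model"])
    # Use a lower learning rate than training from scratch, so the
    # pretrained weights are adapted rather than overwritten.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, targets in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            optimizer.step()
    return model
```

Even a few epochs on a small labeled subset of the target dataset is usually enough to correct a dataset-level bias like the horizon-line offset.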
Hi, I used the pretrained model_2695.pt to test on my own dataset, but the predictions look like this:
![pred_screenshot_02 06 2020](https://user-images.githubusercontent.com/42080579/83508782-01acbb80-a4fd-11ea-89bf-f92c04485347.png)
Below is my config.yaml:
```yaml
# Training settings
seed: 0
exps_dir: 'experiments'
iter_log_interval: 1
iter_time_window: 100
model_save_interval: 1
backup:
model:
  name: PolyRegression
  parameters:
    num_outputs: 35 # (5 lanes) * (1 conf + 2 (upper & lower) + 4 poly coeffs)
    pretrained: true
    backbone: 'resnet50'
    pred_category: false
    curriculum_steps: [0, 0, 0, 0]
loss_parameters:
  conf_weight: 1
  lower_weight: 1
  upper_weight: 1
  cls_weight: 0
  poly_weight: 300
batch_size: 16
epochs: 2695
optimizer:
  name: Adam
  parameters:
    lr: 3.0e-4
lr_scheduler:
  name: CosineAnnealingLR
  parameters:
    T_max: 385

# Testing settings
test_parameters:
  conf_threshold: 0.5

# Dataset settings
datasets:
  train:
    type: LaneDataset
    parameters:
      dataset: nolabel_dataset
      split: train
      img_size: [540, 960]
      normalize: true
      aug_chance: 0.9090909090909091 # 10/11
      augmentations:
        - name: Affine
          parameters:
            rotate: !!python/tuple [-10, 10]
        - name: HorizontalFlip
          parameters:
            p: 0.5
        - name: CropToFixedSize
          parameters:
            width: 540
            height: 960
      root: "/home/share/make/PolyLaneNet/test_image"

  test: &test
    type: LaneDataset
    parameters:
      dataset: nolabel_dataset
      normalize: true # Whether to normalize the input data. Use the same value used in the pretrained model (all pretrained models that I provided used normalization, so you should leave it as it is)
      augmentations: [] # List of augmentations. You probably want to leave this empty for testing
      img_h: 540 # The height of your test images (they should all have the same size)
      img_w: 960 # The width of your test images
      img_size: [540, 960] # Yeah, this parameter is duplicated for some reason, will fix this when I get time (feel free to open a pull request :))
      max_lanes: 5 # Same number used in the pretrained model. If you use a model pretrained on TuSimple (most likely case), you'll use 5 here
      root: "/home/share/make/PolyLaneNet/test_image" # Path to the directory containing your test images. The loader will look recursively for image files in this directory
      img_ext: ".jpg" # Test images extension (e.g., .png, .jpg)

  # val = test
  val:
    <<: *test
```
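As an aside, `test: &test` defines a YAML anchor and `<<: *test` is the merge key, so `val` reuses the entire `test` mapping. A quick sanity check with PyYAML (assuming `pyyaml` is installed) on a stripped-down version of the config:

```python
import yaml

doc = """
datasets:
  test: &test
    type: LaneDataset
    parameters:
      max_lanes: 5
  val:
    <<: *test
"""

cfg = yaml.safe_load(doc)
# The merge key copies every field of `test` into `val`.
assert cfg["datasets"]["val"] == cfg["datasets"]["test"]
print(cfg["datasets"]["val"]["parameters"]["max_lanes"])  # 5
```

This is why any fix you make to the `test` entry automatically applies to `val` as well.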
Could you give me some advice? Thanks!