
i have an issue #6

Closed
VYRION-Ai opened this issue Oct 10, 2021 · 8 comments · Fixed by #10

Comments

@VYRION-Ai

Can't get attribute 'Model' on <module 'models.yolo' from 'E:\Python\vehicle-counting-master\vehicle-counting-master\models\yolo\__init__.py'>

I ran

python run.py --input_path='moos.mp4' --output_path="results" --weight="best.pt"

@kaylode
Owner

kaylode commented Oct 11, 2021

I think you are using the wrong checkpoint file format. Can you send me that .pt file?

@VYRION-Ai
Author

I use the YOLOv5m network; it works well with other code.

@kaylode
Owner

kaylode commented Oct 11, 2021

If you are using a model trained with ultralytics, you will have to extract the model's state dict and save that as the .pt file (i.e. do not use the author's save function).

To do this, I suggest looking into train.py in the author's code: search for the line where the model loads the state dict (something like model.load_state_dict(state_dict)), then save the state dict with torch.save(state_dict, 'model.pt').

Then model.pt will work with this repo. Since many people have faced this problem, I intend to add a script to convert the .pt format soon.
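A minimal sketch of that conversion, assuming an ultralytics-style checkpoint whose 'model' entry is the pickled model object (file names here are placeholders; the comments further down refine the save format):

import torch

# Run this from inside the ultralytics yolov5 repo so that models.yolo.Model
# can be unpickled from the checkpoint.
ckpt = torch.load('best.pt', map_location='cpu')   # checkpoint saved by ultralytics train.py
state_dict = ckpt['model'].float().state_dict()    # raw weights as FP32
torch.save(state_dict, 'model.pt')                  # plain state dict, not the author's save format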

@konishon

konishon commented Dec 5, 2021

Hi @kaylode

Conversion script

I bodged together the following script; it has parts of train.py from ultralytics, and it exported a file 'model.pt':

from utils.general import (LOGGER, check_dataset, check_file, check_git_status, check_img_size, check_requirements,
                           check_suffix, check_yaml, colorstr, get_latest_run, increment_path, init_seeds,
                           intersect_dicts, labels_to_class_weights, labels_to_image_weights, methods, one_cycle,
                           print_args, print_mutation, strip_optimizer)

import torch
from models.yolo import Model

nc = 15
hyp = {}
hyp['anchors'] = 3
weights = "best.pt"

ckpt = torch.load(weights, map_location='cpu')  # load checkpoint      
model = Model(ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to('cpu')  # create
csd = ckpt['model'].float().state_dict()  # checkpoint state_dict as FP32
csd = intersect_dicts(csd, model.state_dict(), exclude=[])  # intersect
       
state_dict = model.load_state_dict(csd, strict=False)  # load
torch.save(state_dict, 'model.pt')

Running vehicle counting

I ran python3 run.py --input_path=demo/sample/cam_04.mp4 --output_path=cam_5 --weight=.models/model.pt and it showed the following stack trace:

Traceback (most recent call last):
  File "run.py", line 73, in <module>
    main(args, config)
  File "run.py", line 35, in main
    pipeline = CountingPipeline(args, config, cam_config)
  File "/home/nishont/Projects/vehicle-counting/modules/__init__.py", line 9, in __init__
    self.detector = ImageDetect(args, config)
  File "/home/nishont/Projects/vehicle-counting/modules/detect.py", line 52, in __init__
    load_checkpoint(self.model, args.weight)
  File "/home/nishont/Projects/vehicle-counting/trainer/checkpoint.py", line 67, in load_checkpoint
    model.model.load_state_dict(state["model"])
KeyError: 'model'

@kaylode
Owner

kaylode commented Dec 5, 2021

You are correct @nishontan, but with a little modification that I forgot to mention:

From

torch.save(state_dict, 'model.pt')

To

torch.save({
    'model': model.state_dict(),
    'class_names': [<list of your class names, in the same order as in your training>]
}, 'model.pt')

Unfortunately, as I have tested recently, the original YoloV5 repo has updated their model to a newer version, so if you train your model with that new version, it may not load correctly in this repo. I'm trying to figure out what they've updated and will update mine accordingly.
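As a quick sanity check on the converted file (a sketch, assuming the save call above), you can reload it and confirm both keys are present. Note that model.load_state_dict() returns the missing/unexpected keys, not the weights, which is why saving its return value in the earlier script produced a file without a 'model' entry:

import torch

ckpt = torch.load('model.pt', map_location='cpu')
assert 'model' in ckpt and 'class_names' in ckpt
print(len(ckpt['class_names']), 'classes,', len(ckpt['model']), 'weight tensors')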

@konishon

konishon commented Dec 6, 2021

Thanks!

It exported and ran in your pipeline, but it did not detect anything.
I am looking forward to your version.

Here's the script

from utils.general import (LOGGER, check_dataset, check_file, check_git_status, check_img_size, check_requirements,
                           check_suffix, check_yaml, colorstr, get_latest_run, increment_path, init_seeds,
                           intersect_dicts, labels_to_class_weights, labels_to_image_weights, methods, one_cycle,
                           print_args, print_mutation, strip_optimizer)

import torch
from models.yolo import Model

nc = 15
hyp = {}
hyp['anchors'] = 3
weights = "best.pt"

ckpt = torch.load(weights, map_location='cpu')  # load checkpoint      
model = Model(ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to('cpu')  # create
csd = ckpt['model'].float().state_dict()  # checkpoint state_dict as FP32
csd = intersect_dicts(csd, model.state_dict(), exclude=[])  # intersect
       
state_dict = model.load_state_dict(csd, strict=False)  # load
torch.save({
    'model': model.state_dict(),
    'class_names': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
}, 'model.pt')

@kaylode
Owner

kaylode commented Dec 6, 2021

Hey @nishontan ,
I've just created a new branch yolov5_ver6.0 which uses the new version of Yolov5. Can you test your checkpoints with that branch using the same conversion script you described above? Let me know the results. Thanks!

@konishon

konishon commented Dec 7, 2021

It did not work for me.
Here's the stack trace it showed

Traceback (most recent call last):
  File "run.py", line 73, in <module>
    main(args, config)
  File "run.py", line 35, in main
    pipeline = CountingPipeline(args, config, cam_config)
  File "/home/nishont/Projects/vehicle-counting/modules/__init__.py", line 9, in __init__
    self.detector = ImageDetect(args, config)
  File "/home/nishont/Projects/vehicle-counting/modules/detect.py", line 52, in __init__
    load_checkpoint(self.model, args.weight)
  File "/home/nishont/Projects/vehicle-counting/trainer/checkpoint.py", line 59, in load_checkpoint
    model.model.model.module.load_state_dict(state["model"])
  File "/home/nishont/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Yolov5:
        Missing key(s) in state_dict: "model.0.conv.weight", "model.0.bn.weight", "model.0.bn.bias", "model.0.bn.running_mean", "model.0.bn.running_var", "model.2.cv2.conv.weight", "model.2.cv2.bn.weight", "model.2.cv2.bn.bias", "model.2.cv2.bn.running_mean", "model.2.cv2.bn.running_var", "model.2.cv3.conv.weight", "model.2.cv3.bn.weight", "model.2.cv3.bn.bias", "model.2.cv3.bn.running_mean", "model.2.cv3.bn.running_var", "model.4.cv2.conv.weight", "model.4.cv2.bn.weight", "model.4.cv2.bn.bias", "model.4.cv2.bn.running_mean", "model.4.cv2.bn.running_var", "model.4.cv3.conv.weight", "model.4.cv3.bn.weight", "model.4.cv3.bn.bias", "model.4.cv3.bn.running_mean", "model.4.cv3.bn.running_var", "model.6.cv2.conv.weight", "model.6.cv2.bn.weight", "model.6.cv2.bn.bias", "model.6.cv2.bn.running_mean", "model.6.cv2.bn.running_var", "model.6.cv3.conv.weight", "model.6.cv3.bn.weight", "model.6.cv3.bn.bias", "model.6.cv3.bn.running_mean", "model.6.cv3.bn.running_var", "model.8.cv3.conv.weight", "model.8.cv3.bn.weight", "model.8.cv3.bn.bias", "model.8.cv3.bn.running_mean", "model.8.cv3.bn.running_var", "model.8.m.0.cv1.conv.weight", "model.8.m.0.cv1.bn.weight", "model.8.m.0.cv1.bn.bias", "model.8.m.0.cv1.bn.running_mean", "model.8.m.0.cv1.bn.running_var", "model.8.m.0.cv2.conv.weight", "model.8.m.0.cv2.bn.weight", "model.8.m.0.cv2.bn.bias", "model.8.m.0.cv2.bn.running_mean", "model.8.m.0.cv2.bn.running_var", "model.9.cv2.conv.weight", "model.9.cv2.bn.weight", "model.9.cv2.bn.bias", "model.9.cv2.bn.running_mean", "model.9.cv2.bn.running_var", "model.13.cv2.conv.weight", "model.13.cv2.bn.weight", "model.13.cv2.bn.bias", "model.13.cv2.bn.running_mean", "model.13.cv2.bn.running_var", "model.13.cv3.conv.weight", "model.13.cv3.bn.weight", "model.13.cv3.bn.bias", "model.13.cv3.bn.running_mean", "model.13.cv3.bn.running_var", "model.17.cv2.conv.weight", "model.17.cv2.bn.weight", "model.17.cv2.bn.bias", "model.17.cv2.bn.running_mean", "model.17.cv2.bn.running_var", "model.17.cv3.conv.weight", "model.17.cv3.bn.weight", "model.17.cv3.bn.bias", "model.17.cv3.bn.running_mean", "model.17.cv3.bn.running_var", "model.20.cv2.conv.weight", "model.20.cv2.bn.weight", "model.20.cv2.bn.bias", "model.20.cv2.bn.running_mean", "model.20.cv2.bn.running_var", "model.20.cv3.conv.weight", "model.20.cv3.bn.weight", "model.20.cv3.bn.bias", "model.20.cv3.bn.running_mean", "model.20.cv3.bn.running_var", "model.23.cv2.conv.weight", "model.23.cv2.bn.weight", "model.23.cv2.bn.bias", "model.23.cv2.bn.running_mean", "model.23.cv2.bn.running_var", "model.23.cv3.conv.weight", "model.23.cv3.bn.weight", "model.23.cv3.bn.bias", "model.23.cv3.bn.running_mean", "model.23.cv3.bn.running_var". 
        Unexpected key(s) in state_dict: "model.0.conv.conv.weight", "model.0.conv.bn.weight", "model.0.conv.bn.bias", "model.0.conv.bn.running_mean", "model.0.conv.bn.running_var", "model.0.conv.bn.num_batches_tracked", "model.2.cv4.conv.weight", "model.2.cv4.bn.weight", "model.2.cv4.bn.bias", "model.2.cv4.bn.running_mean", "model.2.cv4.bn.running_var", "model.2.cv4.bn.num_batches_tracked", "model.2.bn.weight", "model.2.bn.bias", "model.2.bn.running_mean", "model.2.bn.running_var", "model.2.bn.num_batches_tracked", "model.2.cv2.weight", "model.2.cv3.weight", "model.4.cv4.conv.weight", "model.4.cv4.bn.weight", "model.4.cv4.bn.bias", "model.4.cv4.bn.running_mean", "model.4.cv4.bn.running_var", "model.4.cv4.bn.num_batches_tracked", "model.4.bn.weight", "model.4.bn.bias", "model.4.bn.running_mean", "model.4.bn.running_var", "model.4.bn.num_batches_tracked", "model.4.cv2.weight", "model.4.cv3.weight", "model.4.m.2.cv1.conv.weight", "model.4.m.2.cv1.bn.weight", "model.4.m.2.cv1.bn.bias", "model.4.m.2.cv1.bn.running_mean", "model.4.m.2.cv1.bn.running_var", "model.4.m.2.cv1.bn.num_batches_tracked", "model.4.m.2.cv2.conv.weight", "model.4.m.2.cv2.bn.weight", "model.4.m.2.cv2.bn.bias", "model.4.m.2.cv2.bn.running_mean", "model.4.m.2.cv2.bn.running_var", "model.4.m.2.cv2.bn.num_batches_tracked", "model.6.cv4.conv.weight", "model.6.cv4.bn.weight", "model.6.cv4.bn.bias", "model.6.cv4.bn.running_mean", "model.6.cv4.bn.running_var", "model.6.cv4.bn.num_batches_tracked", "model.6.bn.weight", "model.6.bn.bias", "model.6.bn.running_mean", "model.6.bn.running_var", "model.6.bn.num_batches_tracked", "model.6.cv2.weight", "model.6.cv3.weight", "model.9.cv3.weight", "model.9.cv4.conv.weight", "model.9.cv4.bn.weight", "model.9.cv4.bn.bias", "model.9.cv4.bn.running_mean", "model.9.cv4.bn.running_var", "model.9.cv4.bn.num_batches_tracked", "model.9.bn.weight", "model.9.bn.bias", "model.9.bn.running_mean", "model.9.bn.running_var", "model.9.bn.num_batches_tracked", "model.9.cv2.weight", "model.9.m.0.cv1.conv.weight", "model.9.m.0.cv1.bn.weight", "model.9.m.0.cv1.bn.bias", "model.9.m.0.cv1.bn.running_mean", "model.9.m.0.cv1.bn.running_var", "model.9.m.0.cv1.bn.num_batches_tracked", "model.9.m.0.cv2.conv.weight", "model.9.m.0.cv2.bn.weight", "model.9.m.0.cv2.bn.bias", "model.9.m.0.cv2.bn.running_mean", "model.9.m.0.cv2.bn.running_var", "model.9.m.0.cv2.bn.num_batches_tracked", "model.13.cv4.conv.weight", "model.13.cv4.bn.weight", "model.13.cv4.bn.bias", "model.13.cv4.bn.running_mean", "model.13.cv4.bn.running_var", "model.13.cv4.bn.num_batches_tracked", "model.13.bn.weight", "model.13.bn.bias", "model.13.bn.running_mean", "model.13.bn.running_var", "model.13.bn.num_batches_tracked", "model.13.cv2.weight", "model.13.cv3.weight", "model.17.cv4.conv.weight", "model.17.cv4.bn.weight", "model.17.cv4.bn.bias", "model.17.cv4.bn.running_mean", "model.17.cv4.bn.running_var", "model.17.cv4.bn.num_batches_tracked", "model.17.bn.weight", "model.17.bn.bias", "model.17.bn.running_mean", "model.17.bn.running_var", "model.17.bn.num_batches_tracked", "model.17.cv2.weight", "model.17.cv3.weight", "model.20.cv4.conv.weight", "model.20.cv4.bn.weight", "model.20.cv4.bn.bias", "model.20.cv4.bn.running_mean", "model.20.cv4.bn.running_var", "model.20.cv4.bn.num_batches_tracked", "model.20.bn.weight", "model.20.bn.bias", "model.20.bn.running_mean", "model.20.bn.running_var", "model.20.bn.num_batches_tracked", "model.20.cv2.weight", "model.20.cv3.weight", "model.23.cv4.conv.weight", "model.23.cv4.bn.weight", "model.23.cv4.bn.bias", 
"model.23.cv4.bn.running_mean", "model.23.cv4.bn.running_var", "model.23.cv4.bn.num_batches_tracked", "model.23.bn.weight", "model.23.bn.bias", "model.23.bn.running_mean", "model.23.bn.running_var", "model.23.bn.num_batches_tracked", "model.23.cv2.weight", "model.23.cv3.weight". 
        size mismatch for model.8.cv2.conv.weight: copying a param with shape torch.Size([512, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 512, 1, 1]).
        size mismatch for model.8.cv2.bn.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for model.8.cv2.bn.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for model.8.cv2.bn.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for model.8.cv2.bn.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for model.24.m.0.weight: copying a param with shape torch.Size([60, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([63, 128, 1, 1]).
        size mismatch for model.24.m.0.bias: copying a param with shape torch.Size([60]) from checkpoint, the shape in current model is torch.Size([63]).
        size mismatch for model.24.m.1.weight: copying a param with shape torch.Size([60, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([63, 256, 1, 1]).
        size mismatch for model.24.m.1.bias: copying a param with shape torch.Size([60]) from checkpoint, the shape in current model is torch.Size([63]).
        size mismatch for model.24.m.2.weight: copying a param with shape torch.Size([60, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([63, 512, 1, 1]).
        size mismatch for model.24.m.2.bias: copying a param with shape torch.Size([60]) from checkpoint, the shape in current model is torch.Size([63]).

The weights I used were trained a month back.
I might train again this month and will post an update if needed.
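For reference, the size mismatches on model.24 look like a class-count mismatch rather than a conversion problem: in the standard YOLOv5 detection head, each of the 3 anchors predicts nc + 5 values, so 60 output channels corresponds to nc = 15 (the converted checkpoint) while 63 corresponds to nc = 16, which may come from the 16-entry class_names list saved in the script above. A small sketch of that arithmetic:

def head_channels(nc, anchors_per_layer=3):
    # Standard YOLOv5 detect head: (nc + 5) outputs per anchor (box + objectness + classes)
    return (nc + 5) * anchors_per_layer

print(head_channels(15))  # 60 -> matches the checkpoint in the traceback
print(head_channels(16))  # 63 -> matches the model the pipeline constructed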
