Merge pull request #129 from pliablepixels/alpr
alpr initial integration
pliablepixels committed Jul 4, 2019
2 parents 36cd1ea + 6fbdf3c commit f828f39
Showing 14 changed files with 308 additions and 43 deletions.
9 changes: 7 additions & 2 deletions README.md
@@ -1,7 +1,12 @@
What
----
The Event Notification Server runs alongside ZoneMinder and offers real-time notifications, support for push notifications, as well as machine learning powered recognition.
As of today, it supports detection of 80 types of objects (persons, cars, etc.) and face recognition. I will add more algorithms over time.
As of today, it supports:
* detection of 80 types of objects (persons, cars, etc.)
* face recognition
* deep license plate recognition

I will add more algorithms over time.

Documentation
-------------
@@ -15,4 +20,4 @@ Screenshots
Click each image for a larger version. Some of these images are from other users who have granted permission for use.
###### (permissions received from: Rockedge/ZM Slack channel/Mar 15, 2019)

<img src="https://github.com/pliablepixels/zmeventnotification/blob/master/screenshots/person_face.jpg" width="300px" /> <img src="https://github.com/pliablepixels/zmeventnotification/blob/master/screenshots/delivery.jpg" width="300px" /> <img src="https://github.com/pliablepixels/zmeventnotification/blob/master/screenshots/car.jpg" width="300px" /> <img src="https://github.com/pliablepixels/zmeventnotification/blob/master/screenshots/cat.jpg" width="300px" />
<img src="https://github.com/pliablepixels/zmeventnotification/blob/master/screenshots/person_face.jpg" width="300px" /> <img src="https://github.com/pliablepixels/zmeventnotification/blob/master/screenshots/delivery.jpg" width="300px" /> <img src="https://github.com/pliablepixels/zmeventnotification/blob/master/screenshots/car.jpg" width="300px" /> <img src="https://github.com/pliablepixels/zmeventnotification/blob/master/screenshots/alpr.jpg" width="300px" />
4 changes: 4 additions & 0 deletions docs/guides/breaking.rst
@@ -1,6 +1,10 @@
Breaking Changes
----------------

Version 3.9 onwards
~~~~~~~~~~~~~~~~~~~~
- Hooks now include ALPR, so you need to run ``sudo -H pip install -r requirements.txt`` again
- See the modified ``objectconfig.ini`` if you want to add ALPR. It currently works with platerecognizer.com, so you will need an API key. See the hooks docs for more info

Version 3.7 onwards
~~~~~~~~~~~~~~~~~~~
27 changes: 27 additions & 0 deletions docs/guides/hooks.rst
@@ -272,6 +272,33 @@ If you select yolo, you can add a ``model_type=tiny`` to use tiny YOLO
instead of full yolo weights. Again, please read the comments in
``objectconfig.ini``

How to use license plate recognition
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I use `Plate Recognizer <https://platerecognizer.com>`__ for license plate recognition. It uses a deep learning model that, in my tests, does a far better job than OpenALPR. The class is abstracted, so in the future I may add local models. For now, you will have to get a license key from them (they have a `free tier <https://platerecognizer.com/pricing/>`__ that allows 2,500 lookups per month).
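
For reference, a call to Plate Recognizer's cloud API looks roughly like the sketch below. It uses their documented ``v1/plate-reader`` endpoint; the image path is a placeholder, and this is not the exact code the hook runs:

::

import requests

# illustrative only: check Plate Recognizer's API docs for the current payload format
with open('/path/to/vehicle.jpg', 'rb') as fp:
    response = requests.post(
        'https://api.platerecognizer.com/v1/plate-reader/',
        files=dict(upload=fp),
        headers={'Authorization': 'Token <the key>'})
print(response.json())  # plates with bounding boxes and confidence scores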

To enable ALPR, simply add ``alpr`` to ``models``. You will also have to add your license key to the ``[alpr]`` section of ``objectconfig.ini``

Note that since this is a remote service hosted by a 3rd party, you can't use ALPR without an internet connection.

::

models = yolo,alpr

[alpr]
alpr_service=plate_recognizer
alpr_key=<the key>
alpr_use_after_detection_only=yes

Leave ``alpr_service`` and ``alpr_use_after_detection_only`` at their default values.

How license plate recognition will work
''''''''''''''''''''''''''''''''''''''''

- To save on Plate Recognizer API calls, the code only invokes the remote API if a vehicle is detected (see the sketch below)
- This also means you MUST specify ``yolo`` along with ``alpr`` in ``models``
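
Conceptually, the gating works like the sketch below (a simplified, self-contained rendering of the check in ``detect.py``; ``should_run_alpr`` is an illustrative helper, not a function that exists in the hook):

::

vehicle_labels = ['car', 'motorbike', 'bus', 'truck', 'boat']

def should_run_alpr(detected_labels):
    # only spend a remote API call when a vehicle is in the frame
    return not set(detected_labels).isdisjoint(vehicle_labels)

print(should_run_alpr(['person', 'car']))  # True
print(should_run_alpr(['person', 'dog']))  # False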


How to use face recognition
^^^^^^^^^^^^^^^^^^^^^^^^^^^

170 changes: 147 additions & 23 deletions hook/detect.py
@@ -14,6 +14,7 @@
import imutils
import ssl
import pickle
import json
#import hashlib

import zmes_hook_helpers.log as log
Expand All @@ -23,6 +24,7 @@

import zmes_hook_helpers.yolo as yolo
import zmes_hook_helpers.hog as hog
import zmes_hook_helpers.alpr as alpr
from zmes_hook_helpers.__init__ import __version__


@@ -77,12 +79,13 @@ def append_suffix(filename, token):
g.logger.debug('TESTING ONLY: reading image from {}'.format(args['file']))
filename1 = args['file']
filename1_bbox = append_suffix(filename1, '-bbox')
filename2 = ''
filename2_bbox = ''
filename2 = None
filename2_bbox = None


start = datetime.datetime.now()

obj_json = []
# Read images to analyze
image2 = None
image1 = cv2.imread(filename1)
@@ -118,6 +121,10 @@ def append_suffix(filename, token):
conf = []
classes = []

use_alpr = 'alpr' in g.config['models']
g.logger.debug ('Use ALPR if vehicle found: {}'.format(use_alpr))
# labels that could have license plates. See https://github.com/pjreddie/darknet/blob/master/data/coco.names

for model in g.config['models']:
# instantiate the right model
# after instantiation run all files with it,
@@ -139,6 +146,14 @@ def append_suffix(filename, token):
m = face.Face(upsample_times=g.config['face_upsample_times'],
num_jitters=g.config['face_num_jitters'],
model=g.config['face_model'])
elif model == 'alpr':
if g.config['alpr_use_after_detection_only'] == 'yes':
g.logger.debug ('Skipping ALPR as it is configured to only be used after object detection')
continue # we would have handled it after YOLO
else:
g.logger.info ('Standalone ALPR will come later. Please use after yolo')
continue

else:
g.logger.error('Invalid model {}'.format(model))
raise ValueError('Invalid model {}'.format(model))
@@ -148,6 +163,14 @@ def append_suffix(filename, token):
# read the detection pattern we need to apply as a filter
r = re.compile(g.config['detect_pattern'])


try_next_image = False # take the best of both images, currently used only by alpr
# temporary holders, in case alpr is used but no plates are found
saved_bbox = []
saved_labels = []
saved_conf = []
saved_classes = []
saved_image = None
# Apply the model to all files
for filename in [filename1, filename2]:
if filename is None:
@@ -178,21 +201,107 @@ def append_suffix(filename, token):
# now filter these with polygon areas
#g.logger.debug ("INTERIM BOX = {} {}".format(b,l))
b, l, c = img.processIntersection(b, l, c, match)


if use_alpr:
vehicle_labels = ['car', 'motorbike', 'bus', 'truck', 'boat']
if not set(l).isdisjoint(vehicle_labels):
# if this is true, it means l contains vehicle labels
# this happens after match, so no need to add license plates to filter
g.logger.debug ('Invoking ALPR as detected object is a vehicle')
alpr_obj = alpr.ALPRPlateRecognizer(apikey=g.config['alpr_key'])
# don't pass resized image - may be too small
alpr_b, alpr_l, alpr_c = alpr_obj.detect(filename)
if len (alpr_l):
g.logger.debug ('ALPR returned: {}, {}, {}'.format(alpr_b, alpr_l, alpr_c))
try_next_image = False
# First get non plate objects
for idx, t_l in enumerate(l):
obj_json.append( {
'type': 'object',
'label': t_l,
'box': b[idx],
'confidence': c[idx]
})
# Now add plate objects
for i, al in enumerate(alpr_l):
g.logger.debug ('ALPR Found {} at {} with score:{}'.format(al, alpr_b[i], alpr_c[i]))
b.append(alpr_b[i])
l.append(al)
c.append(alpr_c[i])
obj_json.append( {
'type': 'licenseplate',
'label': al,
'box': alpr_b[i],
'confidence': alpr_c[i]
})
elif filename == filename1 and filename2: # no plates, but another image to try
g.logger.debug ('We did not find license plates in vehicles, but there is another image to try')
saved_bbox = b
saved_labels = l
saved_conf = c
saved_classes = m.get_classes()
saved_image = image.copy()
try_next_image = True
else: # no plates, no more to try
g.logger.debug ('We did not find license plates, and there are no more images to try')
if saved_bbox:
g.logger.debug ('Going back to matches in first image')
b = saved_bbox
l = saved_labels
c = saved_conf
image = saved_image
# store non plate objects
for idx, t_l in enumerate(l):
obj_json.append( {
'type': 'object',
'label': t_l,
'box': b[idx],
'confidence': c[idx]
})
try_next_image = False
else: # objects, no vehicles
if filename == filename1 and filename2:
g.logger.debug ('There was no vehicle detected here, but we have another image to try')
try_next_image = True
saved_bbox = b
saved_labels = l
saved_conf = c
saved_classes = m.get_classes()
saved_image = image.copy()
else:
g.logger.debug ('No vehicle detected, and no more images to try')
try_next_image = False
for idx, t_l in enumerate(l):
obj_json.append({
'type': 'object',
'label': t_l,
'box': b[idx],
'confidence': c[idx]
})
else: # use_alpr is False
g.logger.debug ('ALPR not in use, no need for look aheads in processing')
# store objects
for idx, t_l in enumerate(l):
obj_json.append( {
'type': 'object',
'label': t_l,
'box': b[idx],
'confidence': c[idx]
})
if b:
#g.logger.debug ('ADDING {} and {}'.format(b,l))
bbox.extend(b)
label.extend(l)
conf.extend(c)
classes.append(m.get_classes())
g.logger.debug('labels found: {}'.format(l))
g.logger.debug ('match found in {}, breaking file loop...'.format(filename))
matched_file = filename
break # if we found a match, no need to process the next file
# g.logger.debug ('ADDING {} and {}'.format(b,l))
if not try_next_image:
bbox.extend(b)
label.extend(l)
conf.extend(c)
classes.append(m.get_classes())
g.logger.debug('labels found: {}'.format(l))
g.logger.debug ('match found in {}, breaking file loop...'.format(filename))
matched_file = filename
break # if we found a match, no need to process the next file
else:
g.logger.debug ('Going to try next image before we decide the best one to use')
else:
g.logger.debug('No match found in {} using model:{}'.format(filename,model))
found_match = False
# file loop
# model loop
if matched_file and g.config['detection_mode'] == 'first':
@@ -220,14 +329,36 @@ def append_suffix(filename, token):
out = img.draw_bbox(image, bbox, label, classes, conf, None, False)
image = out

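# prefix tags the prediction text with the frame that matched:
# [a] alarm frame, [s] snapshot frame, [x] a fixed frame id from config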
if g.config['frame_id'] == 'bestmatch':
if matched_file == filename1:
prefix = '[a] ' # we will first analyze alarm
frame_type = 'alarm'
else:
prefix = '[s] '
frame_type = 'snapshot'
else:
prefix = '[x] '
frame_type = g.config['frame_id']


if g.config['write_debug_image'] == 'yes':
g.logger.debug('Writing out debug bounding box image to {}...'.format(bbox_f))
cv2.imwrite(bbox_f, image)

if g.config['write_image_to_zm'] == 'yes':
if (args['eventpath']):
g.logger.debug('Writing detected image to {}'.format(args['eventpath']))
g.logger.debug('Writing detected image to {}/objdetect.jpg'.format(args['eventpath']))
cv2.imwrite(args['eventpath'] + '/objdetect.jpg', image)
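# objects.json gives downstream consumers a machine-readable copy of the detections
# illustrative payload (example values, not real output):
# {"frame": "alarm", "detections": [
#   {"type": "object", "label": "car", "box": [...], "confidence": 0.91},
#   {"type": "licenseplate", "label": "abc123", "box": [...], "confidence": 0.85}]}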
jf = args['eventpath'] + '/objects.json'
final_json = {
'frame': frame_type,
'detections': obj_json
}
g.logger.debug ('Writing JSON output to {}'.format(jf))
with open (jf, 'w') as jo:
json.dump(final_json,jo)


else:
g.logger.error('Could not write image to ZoneMinder as eventpath not present')
# Now create prediction string
@@ -247,14 +378,7 @@ def append_suffix(filename, token):
label = label_t
conf = conf_t

if g.config['frame_id'] == 'bestmatch':
if matched_file == filename1:
prefix = '[a] ' # we will first analyze alarm
else:
prefix = '[s] '
else:
prefix = '[x] '


pred = ''
seen = {}
#g.logger.debug ('CONFIDENCE ARRAY:{}'.format(conf))
