Commit
Merge branch 'main' into m1support
songsparrow authored and committed Sep 8, 2022
2 parents c7e4266 + fd18047 commit 8f4f251
Showing 21 changed files with 463 additions and 254 deletions.
6 changes: 6 additions & 0 deletions .gitignore
@@ -53,3 +53,9 @@ api_config*.py
*.pth
*.o
debug.log
+*.swp
+
+# Things created when building the sync API
+yolov5
+api/synchronous/api_core/animal_detection_api/detection

13 changes: 7 additions & 6 deletions README.md
@@ -35,7 +35,7 @@ If you're already familiar with MegaDetector and you're ready to run it on your

We work with ecologists all over the world to help them spend less time annotating images and more time thinking about conservation. You can read a little more about how this works on our [getting started with MegaDetector](collaborations.md) page.

-Here are a few of the organizations that have used MegaDetector... we're only listing organizations who (a) we know about and (b) have kindly given us permission to refer to them here, so if you're using MegaDetector or other tools from this repo and would like to be added to this list, <a href="mailto:cameratraps@lila.science">email us</a>!
+Here are a few of the organizations that have used MegaDetector... we're only listing organizations who (a) we know about and (b) have kindly given us permission to refer to them here (or have posted publicly about their use of MegaDetector), so if you're using MegaDetector or other tools from this repo and would like to be added to this list, <a href="mailto:cameratraps@lila.science">email us</a>!

* Idaho Department of Fish and Game
* San Diego Zoo Global
@@ -76,7 +76,6 @@ Here are a few of the organizations that have used MegaDetector... we're only li
* Conservation X Labs
* The Nature Conservancy in Wyoming
* Seattle Urban Carnivore Project
-* Road Ecology Center, University of California, Davis
* Blackbird Environmental
* UNSW Sydney
* Taronga Conservation Society
@@ -85,13 +84,15 @@ Here are a few of the organizations that have used MegaDetector... we're only li
* Capitol Reef National Park and Utah Valley University
* University of Victoria Applied Conservation Macro Ecology (ACME) Lab
* Université du Québec en Outaouais Institut des Sciences de la Forêt Tempérée (ISFORT)
-* University of British Columbia Wildlife Coexistence Lab ([OSS tool](https://github.com/WildCoLab/WildCo_Face_Blur))
-* Alberta Biodiversity Monitoring Institute (ABMI) ([blog post](https://wildcams.ca/blog/the-abmi-visits-the-zoo/))
-* Felidae Conservation Fund ([platform](https://wildepod.org/)) ([blog post](https://abhaykashyap.com/blog/ai-powered-camera-trap-image-annotation-system/))
+* University of British Columbia Wildlife Coexistence Lab ([WildCo-FaceBlur tool](https://github.com/WildCoLab/WildCo_Face_Blur))
+* Road Ecology Center, University of California, Davis ([Wildlife Observer Network platform](https://wildlifeobserver.net/))
+* The Nature Conservancy in California ([Animl platform](https://github.com/tnc-ca-geo/animl-frontend))
+* Felidae Conservation Fund ([WildePod platform](https://wildepod.org/)) ([blog post](https://abhaykashyap.com/blog/ai-powered-camera-trap-image-annotation-system/))
+* Alberta Biodiversity Monitoring Institute (ABMI) ([WildTrax platform](https://www.wildtrax.ca/)) ([blog post](https://wildcams.ca/blog/the-abmi-visits-the-zoo/))
* Shan Shui Conservation Center ([blog post](https://mp.weixin.qq.com/s/iOIQF3ckj0-rEG4yJgerYw?fbclid=IwAR0alwiWbe3udIcFvqqwm7y5qgr9hZpjr871FZIa-ErGUukZ7yJ3ZhgCevs)) ([translated blog post](https://mp-weixin-qq-com.translate.goog/s/iOIQF3ckj0-rEG4yJgerYw?fbclid=IwAR0alwiWbe3udIcFvqqwm7y5qgr9hZpjr871FZIa-ErGUukZ7yJ3ZhgCevs&_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=wapp))
* Irvine Ranch Conservancy ([story](https://www.ocregister.com/2022/03/30/ai-software-is-helping-researchers-focus-on-learning-about-ocs-wild-animals/))
* Wildlife Protection Solutions ([story](https://customers.microsoft.com/en-us/story/1384184517929343083-wildlife-protection-solutions-nonprofit-ai-for-earth))
* [TrapTagger](https://wildeyeconservation.org/trap-tagger-about/)
* [WildTrax](https://www.wildtrax.ca/)
* [Camelot](https://camelotproject.org/)

# Data
64 changes: 55 additions & 9 deletions api/batch_processing/data_preparation/manage_local_batch.py
@@ -46,6 +46,9 @@

quiet_mode = True

+# Specify a target image size when running MD... strongly recommended to leave this at "None"
+image_size = None

# Only relevant when running on CPU
ncores = 1

@@ -94,7 +97,7 @@

filename_base = os.path.join(base_output_folder_name, base_task_name)
combined_api_output_folder = os.path.join(filename_base, 'combined_api_outputs')
-postprocessing_output_folder = os.path.join(filename_base, 'postprocessing')
+postprocessing_output_folder = os.path.join(filename_base, 'preview')

os.makedirs(filename_base, exist_ok=True)
os.makedirs(combined_api_output_folder, exist_ok=True)
@@ -181,9 +184,13 @@ def split_list(L, n):
if quiet_mode:
quiet_string = '--quiet'

+image_size_string = ''
+if image_size is not None:
+    image_size_string = '--image_size {}'.format(image_size)

# Generate the script to run MD

-cmd = f'{cuda_string} python run_detector_batch.py "{model_file}" "{chunk_file}" "{output_fn}" {checkpoint_frequency_string} {checkpoint_path_string} {use_image_queue_string} {ncores_string} {quiet_string}'
+cmd = f'{cuda_string} python run_detector_batch.py "{model_file}" "{chunk_file}" "{output_fn}" {checkpoint_frequency_string} {checkpoint_path_string} {use_image_queue_string} {ncores_string} {quiet_string} {image_size_string}'

cmd_file = os.path.join(filename_base,'run_chunk_{}_gpu_{}.sh'.format(str(i_task).zfill(2),
str(gpu_number).zfill(2)))
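The hunk above follows a common pattern for optional CLI flags: build an empty string, fill it only when the corresponding setting is set, then interpolate everything into one command line. A minimal standalone sketch of that pattern (`build_md_command` is a hypothetical helper for illustration; the real script inlines this logic and passes more flags):

```python
def build_md_command(model_file, chunk_file, output_fn, image_size=None, quiet=True):
    """Assemble a run_detector_batch.py command, including optional flags
    only when their corresponding settings are not None/False."""
    image_size_string = '' if image_size is None else '--image_size {}'.format(image_size)
    quiet_string = '--quiet' if quiet else ''
    cmd = 'python run_detector_batch.py "{}" "{}" "{}" {} {}'.format(
        model_file, chunk_file, output_fn, quiet_string, image_size_string)
    # Collapse the double spaces left behind by empty flag strings
    return ' '.join(cmd.split())
```

Because unset options expand to empty strings, the same template works whether or not `image_size` is specified.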
@@ -255,7 +262,8 @@ def split_list(L, n):
results=None,
n_cores=ncores,
use_image_queue=use_image_queue,
-quiet=quiet_mode)
+quiet=quiet_mode,
+image_size=image_size)
elapsed = time.time() - start_time

print('Task {}: finished inference for {} images in {}'.format(
@@ -338,7 +346,9 @@ def split_list(L, n):
assert task_results['detection_categories'] == combined_results['detection_categories']
combined_results['images'].extend(copy.deepcopy(task_results['images']))

-assert len(combined_results['images']) == len(all_images)
+assert len(combined_results['images']) == len(all_images), \
+    'Expected {} images in combined results, found {}'.format(
+        len(all_images),len(combined_results['images']))

result_filenames = [im['file'] for im in combined_results['images']]
assert len(combined_results['images']) == len(set(result_filenames))
@@ -1218,7 +1228,41 @@ def remove_overflow_folders(relativePath):
d = categorize_detections_by_size.categorize_detections_by_size(input_file,size_separated_file,options)


-#%% Subsetting
+#%% .json splitting

+data = None
+
+from api.batch_processing.postprocessing.subset_json_detector_output import (
+    subset_json_detector_output, SubsetJsonDetectorOutputOptions)
+
+input_filename = filtered_output_filename
+output_base = os.path.join(filename_base,'json_subsets')
+
+if False:
+    if data is None:
+        with open(input_filename) as f:
+            data = json.load(f)
+        print('Data set contains {} images'.format(len(data['images'])))
+
+    print('Processing file {} to {}'.format(input_filename,output_base))
+
+    options = SubsetJsonDetectorOutputOptions()
+    # options.query = None
+    # options.replacement = None
+
+    options.split_folders = True
+    options.make_folder_relative = True
+
+    # Reminder: 'n_from_bottom' with a parameter of zero is the same as 'bottom'
+    options.split_folder_mode = 'bottom' # 'top', 'n_from_top', 'n_from_bottom'
+    options.split_folder_param = 0
+    options.overwrite_json_files = False
+    options.confidence_threshold = 0.01
+
+    subset_data = subset_json_detector_output(input_filename, output_base, options, data)
+
+
+#%% Custom splitting/subsetting
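Conceptually, `split_folder_mode = 'bottom'` groups results by each image's lowest-level folder and emits one .json per folder. A minimal standalone sketch of that grouping step, under the assumption of an MD-style results dict with an `'images'` list of `{'file': path}` entries (`split_results_by_bottom_folder` is illustrative, not the repo's implementation, which also handles thresholds, relative paths, and file output):

```python
import os
from collections import defaultdict

def split_results_by_bottom_folder(results):
    """Group MD-style results by each image's immediate parent folder.

    Returns {folder: results-like dict}, copying all non-'images' keys
    (e.g. 'info', 'detection_categories') into each per-folder dict.
    """
    by_folder = defaultdict(list)
    for im in results['images']:
        folder = os.path.dirname(im['file'])
        by_folder[folder].append(im)
    return {folder: {**{k: v for k, v in results.items() if k != 'images'},
                     'images': ims}
            for folder, ims in by_folder.items()}
```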

data = None

@@ -1246,9 +1290,11 @@ def remove_overflow_folders(relativePath):
options = SubsetJsonDetectorOutputOptions()
options.confidence_threshold = 0.01
options.overwrite_json_files = True
-options.make_folder_relative = True
-options.query = folder_name + '\\'
+options.query = folder_name + '/'
+
+# This doesn't do anything in this case, since we're not splitting folders
+# options.make_folder_relative = True

subset_data = subset_json_detector_output(input_filename, output_filename, options, data)


@@ -1268,13 +1314,13 @@ def remove_overflow_folders(relativePath):
subset_json_detector_output(input_filename,output_filename,options)


-#%% Folder splitting
+#%% Splitting images into folders

from api.batch_processing.postprocessing.separate_detections_into_folders import (
separate_detections_into_folders, SeparateDetectionsIntoFoldersOptions)

default_threshold = 0.2
-base_output_folder = r'e:\{}-{}-separated'.format(base_task_name,default_threshold)
+base_output_folder = os.path.expanduser('~/data/{}-{}-separated'.format(base_task_name,default_threshold))

options = SeparateDetectionsIntoFoldersOptions(default_threshold)

@@ -416,6 +416,10 @@ def separate_detections_into_folders(options):
if 'detector_metadata' in results['info'] and \
   'typical_detection_threshold' in results['info']['detector_metadata']:
    default_threshold = results['info']['detector_metadata']['typical_detection_threshold']
+elif ('detector' not in results['info']) or (results['info']['detector'] is None):
+    print('Warning: detector version not available in results file, using MDv5 defaults')
+    detector_metadata = get_detector_metadata_from_version_string('v5a.0.0')
+    default_threshold = detector_metadata['typical_detection_threshold']
else:
    print('Warning: detector metadata not available in results file, inferring from MD version')
    detector_filename = results['info']['detector']
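The fallback order above is: use the threshold recorded in the results file's metadata, else fall back to MDv5 defaults when no detector version is recorded, else infer from the detector version string. A sketch of that order as a standalone function; `get_detector_metadata_from_version_string` and the metadata keys come from the diff, but the stand-in metadata table (and the 0.2 value in it) is hypothetical:

```python
# Hypothetical metadata table, standing in for the real helper's lookup
_METADATA_BY_VERSION = {
    'v5a.0.0': {'typical_detection_threshold': 0.2},
}

def get_detector_metadata_from_version_string(version):
    # Stand-in for the real helper referenced in the diff
    return _METADATA_BY_VERSION[version]

def choose_default_threshold(info):
    """Pick a default detection threshold from a results file's 'info' dict."""
    metadata = info.get('detector_metadata', {})
    if 'typical_detection_threshold' in metadata:
        # Preferred: the results file records its own typical threshold
        return metadata['typical_detection_threshold']
    if info.get('detector') is None:
        # No detector version recorded at all: fall back to MDv5 defaults
        return get_detector_metadata_from_version_string(
            'v5a.0.0')['typical_detection_threshold']
    # Otherwise infer from the detector filename/version (elided in this sketch)
    raise NotImplementedError('version inference elided in this sketch')
```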
@@ -634,6 +638,24 @@ def main():

args_to_object(args, options)

+def validate_threshold(v,name):
+    print('{} {}'.format(v,name))
+    if v is not None:
+        assert v >= 0.0 and v <= 1.0, \
+            'Illegal {} threshold {}'.format(name,v)
+
+validate_threshold(args.threshold,'default')
+validate_threshold(args.animal_threshold,'animal')
+validate_threshold(args.vehicle_threshold,'vehicle')
+validate_threshold(args.human_threshold,'human')
+
+if args.threshold is not None:
+    if args.animal_threshold is not None \
+        and args.human_threshold is not None \
+        and args.vehicle_threshold is not None:
+        raise ValueError('Default threshold specified, but all category thresholds also specified... not exactly wrong, but it\'s likely that you meant something else.')


options.category_name_to_threshold['animal'] = args.animal_threshold
options.category_name_to_threshold['person'] = args.human_threshold
options.category_name_to_threshold['vehicle'] = args.vehicle_threshold
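The validation added above can be exercised end to end with a self-contained argparse setup. Flag names follow the diff; the parser construction and the sample argument list are illustrative (the real script defines more arguments), and this sketch raises `ValueError` rather than asserting:

```python
import argparse

def validate_threshold(v, name):
    """Raise if a threshold is set but outside [0, 1]; None means 'unset'."""
    if v is not None and not (0.0 <= v <= 1.0):
        raise ValueError('Illegal {} threshold {}'.format(name, v))

parser = argparse.ArgumentParser()
parser.add_argument('--threshold', type=float, default=None)
parser.add_argument('--animal_threshold', type=float, default=None)
parser.add_argument('--human_threshold', type=float, default=None)
parser.add_argument('--vehicle_threshold', type=float, default=None)

# Sample invocation: a default threshold plus one category override
args = parser.parse_args(['--threshold', '0.2', '--animal_threshold', '0.6'])

for v, name in [(args.threshold, 'default'),
                (args.animal_threshold, 'animal'),
                (args.vehicle_threshold, 'vehicle'),
                (args.human_threshold, 'human')]:
    validate_threshold(v, name)

# Specifying a default threshold *and* all three category thresholds is
# almost certainly a mistake, since the default would never be used
if args.threshold is not None and None not in (
        args.animal_threshold, args.human_threshold, args.vehicle_threshold):
    raise ValueError('Default threshold specified, but all category '
                     'thresholds also specified')
```

Leaving a threshold as `None` means "unset", which is why the range check only fires for explicitly provided values.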