CURE-OR: Challenging Unreal and Real Environments for Object Recognition

The goal of this project is to analyze the robustness of off-the-shelf recognition applications under multifarious challenging conditions, investigate the relationship between recognition performance and image quality, and estimate performance from hand-crafted as well as data-driven features. To achieve this goal, we introduced CURE-OR, a large-scale, controlled, and multi-platform object recognition dataset that includes 1 million images of 100 objects captured with different backgrounds, devices, and perspectives, as well as simulated challenging conditions. This repository includes the code to reproduce the analysis results in our papers. For more information about CURE-OR, please refer to our papers and website linked below.

Dataset

Objects of CURE-OR: 100 objects in 6 categories

5 Backgrounds: White, 2D Living room, 2D Kitchen, 3D Living room, 3D Office

5 Devices: iPhone 6s, HTC One X, LG Leon, Logitech C920 HD Pro Webcam, Nikon D80

5 Object orientations: Front, Left, Back, Right, Top
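As a back-of-the-envelope check, the controlled acquisition grid above multiplies out as follows (a minimal sketch; the list literals simply mirror the bullet points above and are not part of the released code):

```python
# Sketch: enumerate the capture configurations described above.
# The dataset-specific file naming is NOT modeled here; this only
# illustrates how the acquisition grid multiplies out.
from itertools import product

backgrounds = ["White", "2D Living room", "2D Kitchen", "3D Living room", "3D Office"]
devices = ["iPhone 6s", "HTC One X", "LG Leon", "Logitech C920 HD Pro Webcam", "Nikon D80"]
orientations = ["Front", "Left", "Back", "Right", "Top"]
num_objects = 100

configs = list(product(backgrounds, devices, orientations))
print(len(configs))                 # 125 configurations per object
print(len(configs) * num_objects)   # 12,500 original captures before challenge simulation
```

The simulated challenging conditions are then applied on top of these original captures, which is how the dataset reaches 1 million images.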

To receive the download link, please fill out this form to submit your information and agree to the conditions of use. This information will be kept confidential and will not be released to anyone outside the OLIVES administration team.

Usage

Download the analysis data from here and unzip it in the same directory as the code. The folder structure is as follows:

```
├── AWS/                          # Recognition results from the AWS Rekognition API
│    ├── 01_no_challenge/         # Organized in folders by CURE-OR challenge type
│    └── ...
├── Azure/                        # Recognition results from the Microsoft Azure Computer Vision API
│    ├── 01_no_challenge/         # Organized in folders by CURE-OR challenge type
│    └── ...
├── IQA/
│    ├── IQA_codes/               # MATLAB code for image quality assessment
│    └── Result/                  # Image quality results organized in folders by object
└── CBIR/                         # Content-based image retrieval
     ├── Features/                # Extracted features
     ├── Performance/             # Recognition performance preprocessed for analysis
     └── Distance/                # Distance between features of the "best" images and the rest, averaged across objects
```
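Before running the analysis, it can help to confirm the unzipped data matches this layout (a minimal sketch; the folder names come from the tree above, while the helper itself is hypothetical and not part of the released code):

```python
# Sketch: verify the expected top-level folders of the analysis data
# exist under the working directory. Folder names are taken from the
# directory tree documented above.
import os

EXPECTED = ["AWS", "Azure", "IQA", "CBIR"]

def check_layout(root="."):
    """Return the expected top-level folders missing under `root`."""
    return [name for name in EXPECTED if not os.path.isdir(os.path.join(root, name))]

missing = check_layout(".")
if missing:
    print("Missing folders:", ", ".join(missing))
```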

The CBIR code was adapted from this repo.

To see the analysis results, simply run:

```
python analysis.py
```

The results will be stored under Results/.

Citations

If you use the CURE-OR dataset and/or this code, please consider citing our papers:

```
@inproceedings{Temel2018_ICMLA,
  author    = {D. Temel and J. Lee and G. AlRegib},
  booktitle = {2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)},
  title     = {CURE-OR: Challenging unreal and real environments for object recognition},
  year      = {2018},
}

@inproceedings{Temel2019_ICIP,
  author    = {D. Temel and J. Lee and G. AlRegib},
  booktitle = {IEEE International Conference on Image Processing (ICIP)},
  title     = {Object Recognition Under Multifarious Conditions: A Reliability Analysis and A Feature Similarity-Based Performance Estimation},
  year      = {2019},
}
```