
File:  README
Date:  August 5, 2015
F4DE Version: 3.2.5


This directory contains the Framework for Detection Evaluation (F4DE)
software package: a set of evaluation tools for detection evaluations
in general and for the specific NIST-coordinated evaluations listed
below.

  *  2007 CLEAR Evaluation. 
      - Domains: Broadcast News, Meeting Room, Surveillance and UAV
      - Measures: Area and Point
    - Detection and Tracking (DT) tools:
      - CLEARDTScorer - The main DT evaluation script.
      - CLEARDTViperValidator - A syntactic and semantic validator for 
        both system output ViPER files and reference annotation files.
    - Text Recognition (TR) tools:
      - CLEARTRScorer - The main TR evaluation script.
      - CLEARTRViperValidator - A syntactic and semantic validator for 
        both system output ViPER files and reference annotation files.
	
  *  2008 TRECVID Surveillance Event Detection Evaluation. 
    - TV08Scorer - The main evaluation script. 
    - TV08ViperValidator - A syntactic and semantic validator for 
      both system output ViPER files and reference annotation files.
    - TV08MergeHelper - A TRECVID '08 ViPER-formatted file merging program.  
    - TV08_BigXML_ValidatorHelper - A helper program (that relies 
      on TV08ViperValidator and TV08MergeHelper) to perform syntactic
      and semantic validation on ViPER-formatted files containing 
      a large number of event observations.
    - TV08ED-SubmissionChecker - A tool designed to help confirm submission
      archives before transmitting them to NIST. 

  *  2009 AVSS Evaluation.
    - AVSS09Scorer - The main evaluation script. 
    - AVSS09ViperValidator - A syntactic and semantic validator for 
      both system output ViPER files and reference annotation files.
    - AVSS09-SubmissionChecker - A tool designed to help confirm submission
      archives before transmitting them to NIST. 

  *  2009 TRECVID Surveillance Event Detection Evaluation. 
    - Same tools as the 2008 TRECVID Surveillance Event Detection Evaluation
      (TV08Scorer, TV08ViperValidator, TV08MergeHelper, 
      TV08_BigXML_ValidatorHelper)
    - TV09ED-SubmissionChecker - A tool designed to help confirm submission
      archives before transmitting them to NIST. 

  *  2010 AVSS Evaluation.
    - Same tools as the 2009 AVSS Evaluation (AVSS09Scorer, 
      AVSS09ViperValidator, AVSS09-SubmissionChecker)

  *  2010 TRECVID Multimedia Event Detection Evaluation. 
    - DEVA_cli - The main evaluation script. 

  *  2010 TRECVID Surveillance Event Detection Evaluation. 
    - Same tools as the 2009 TRECVID Surveillance Event Detection Evaluation
      (TV08Scorer, TV08ViperValidator, TV08MergeHelper,
      TV08_BigXML_ValidatorHelper, TV09ED-SubmissionChecker)
    - TV10SED-SubmissionChecker - A tool designed to help confirm submission
      archives before transmitting them to NIST.

  *  2011 TRECVID Multimedia Event Detection Evaluation. 
    - DEVA_cli - The main evaluation script. 
    - TV11MED-SubmissionChecker - A tool designed to help confirm MED11 submission 
      archives before transmitting them to NIST.
    - Scoring Primer: DEVA/doc/TRECVid-MED11-ScoringPrimer.html

  *  2011 TRECVID Surveillance Event Detection Evaluation. 
    - Same tools as the 2008 TRECVID Surveillance Event Detection Evaluation
      (TV08Scorer, TV08ViperValidator, TV08MergeHelper, 
      TV08_BigXML_ValidatorHelper)
    - TV11SED-SubmissionChecker - A tool designed to help confirm SED11 submission
      archives before transmitting them to NIST. 

  *  KWS Evaluation.
    - KWSEval - The main KeyWord Search evaluation program derived from STDEval.
    - UTF-8 code set support.

  *  2012 TRECVID Multimedia Event Detection Evaluation. 
    - DEVA_cli - The main evaluation script. 
    - TV12MED-SubmissionChecker - A tool designed to help confirm MED12 submission 
      archives before transmitting them to NIST.
    - Scoring Primer: DEVA/doc/TRECVid-MED12-ScoringPrimer.html

  *  2012 TRECVID Surveillance Event Detection Evaluation. 
    - Same tools as the 2008 TRECVID Surveillance Event Detection Evaluation
      (TV08Scorer, TV08ViperValidator, TV08MergeHelper, 
      TV08_BigXML_ValidatorHelper)
    - TV12SED-SubmissionChecker - A tool designed to help confirm SED12 submission
      archives before transmitting them to NIST. 

  *  2013 TRECVID Multimedia Event Detection Evaluation. 
    - DEVA_cli - The main evaluation script. 
    - TV13MED-SubmissionChecker - A tool designed to help confirm MED13 submission 
      archives before transmitting them to NIST.
    - Scoring Primer: DEVA/doc/TRECVid-MED13-ScoringPrimer.html

  *  2013 TRECVID Multimedia Event Recounting Evaluation. 
    - TV13MED-SubmissionChecker - A tool designed to help confirm
      MED13 and MER13 submission archives before transmitting them to
      NIST.

  *  2013 TRECVID Surveillance Event Detection Evaluation. 
    - Same tools as the 2008 TRECVID Surveillance Event Detection Evaluation
      (TV08Scorer, TV08ViperValidator, TV08MergeHelper, 
      TV08_BigXML_ValidatorHelper)
    - TV13SED-SubmissionChecker - A tool designed to help confirm SED13 submission
      archives before transmitting them to NIST. 

  *  2014 TRECVID Surveillance Event Detection Evaluation. 
    - Same tools as the 2008 TRECVID Surveillance Event Detection Evaluation
      (TV08Scorer, TV08ViperValidator, TV08MergeHelper, 
      TV08_BigXML_ValidatorHelper)
    - TV14SED-SubmissionChecker - A tool designed to help confirm SED14 submission
      archives before transmitting them to NIST. 

  *  2015 Open Keyword Search Evaluation.
    - The participant's side of the BABEL Scorer.

  *  2015 TRECVID Surveillance Event Detection Evaluation. 
    - Same tools as the 2008 TRECVID Surveillance Event Detection Evaluation
      (TV08Scorer, TV08ViperValidator, TV08MergeHelper, 
      TV08_BigXML_ValidatorHelper)
    - TV15SED-SubmissionChecker - A tool designed to help confirm SED15 submission
      archives before transmitting them to NIST. 

  *  2015 TRECVID Multimedia Event Detection Evaluation. 
    - DEVA_cli - The main evaluation script. 
    - TV15MED-SubmissionChecker - A tool designed to help confirm MED15 submission 
      archives before transmitting them to NIST.
    - Scoring Primer: DEVA/doc/TRECVid-MED15-ScoringPrimer.html

In addition to evaluation tools, the package contains the following related tools:

  * VidAT (common/tools/VidAT)
    - A suite of tools designed to overlay boxes, polygons, etc. onto
      video on a frame-by-frame basis, using the output logs generated
      by CLEARDTScorer.  Consult the README within the directory for
      special installation details and usage.  VidAT's tools require
      FFmpeg, Ghostscript, and ImageMagick.

  * SQLite_tools (common/tools/SQLite_tools)
    - A suite of tools designed to help interface CSV files with
      SQLite databases.  The tool suite requires SQLite.


INSTALLATION
------------

----- Prerequisites:

F4DE consists of a set of Perl scripts that can be run from a shell
terminal.  It has been confirmed to work under Linux, OS X, and Cygwin.
NOTE: the tools are known not to work with Perl 5.18 or later; please
use Perl 5.16 or earlier.

The only prerequisites of the tools are:

- a recent version of gnuplot (at least 4.4, with PNG support)
  to generate plots (for DETCurves, among others)

- a recent version of xmllint (at least 2.6.30; part of libxml2)
  to validate XML files against their corresponding schema files.

- a recent version of SQLite (at least 3.6.12) to use all the
  SQLite-based tools (including DEVA)

- the 'rsync' tool available in your PATH (used in the installation process)

- some Perl modules, available from CPAN (http://www.cpan.org/) via
  manual or automatic installation (see CPAN's "Installing Perl
  Modules" documentation).  Availability of those modules will be
  tested during step 1 ('make check') of the installation process.
  The list of required modules is as follows:

    Text::CSV
    Text::CSV_XS
    Math::Random::OO::Uniform
    Math::Random::OO::Normal
    Statistics::Descriptive
    Statistics::Descriptive::Discrete
    Statistics::Distributions
    DBI
    DBD::SQLite
    File::Monitor
    File::Monitor::Object
    Digest::SHA
    YAML
    Data::Dump

From the main directory, after having installed the side tools listed
above and confirmed that Perl's 'cpanp' is installed and properly
configured, you can run 'make perl_install' to attempt an automatic
installation of the Perl modules that will be tested in the 'make
check' step.

If you prefer a manual approach, those modules can be installed
individually using the 'cpan' command.
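
For example, a minimal sketch of such a session (module names are
taken from the list above; repeat with the remaining modules until
all of them are installed):

   %  cpan Text::CSV Text::CSV_XS Statistics::Descriptive DBI
   %  cpan DBD::SQLite File::Monitor Digest::SHA YAML Data::Dump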

----- Actual Installation:

All of the F4DE tools are designed so that they can be run from the
directory into which the package was uncompressed; installation is
therefore optional, but useful if you want to launch the tools from
anywhere.

Installation is a three-step process:

  1. Run 'make' to get a list of the available "check" options and, at
     minimum, run 'make mincheck' followed by the check appropriate to
     your tool set (e.g., if you intend to use DEVA, run 'make
     DEVAcheck') to make sure all required libraries and executables
     are available on the system.
      Please note that each tool's individual test can take from a few
     seconds to a few minutes to complete.
      We recommend running 'make check' to perform a full check and
     confirm that all software checks pass [*].
  2. Run 'make install' to create symbolic links to the executables in
     the 'bin' and 'man' directories of the F4DE uncompression
     directory.
  3. Add the F4DE uncompression directory's 'bin' directory to your
     PATH environment variable and its 'man' directory to your MANPATH
     environment variable (see the example session after this list).
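
For example, assuming the package was uncompressed into /path/to/F4DE
(a placeholder path) and a Bourne-style shell such as bash, a typical
session might look like:

   %  cd /path/to/F4DE
   %  make mincheck
   %  make DEVAcheck    # only if you intend to use DEVA
   %  make install
   %  export PATH=$PATH:/path/to/F4DE/bin
   %  export MANPATH=$MANPATH:/path/to/F4DE/man

(csh-style shells use 'setenv' instead of 'export')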

[*] If one of the tool tests fails, please follow the bug report
submission instructions detailed in the "TEST CASE BUG REPORT" section
below.
 For reference, the command line that was run by each test can be found
in the corresponding test number ("res*.txt") file, as the first line
of its COMMANDLINE section.


CYGWIN PRE-INSTALLATION NOTES
-----------------------------

The tools have been confirmed to work under Windows when running Cygwin
(tested under Cygwin 1.7.5-1 and 1.7.9-1).
After downloading the latest "setup.exe" from http://www.cygwin.com/,
make sure to select the following packages in the "Select Packages" step:
 - in "Archive", select "unzip"
 - in "Database", select "sqlite3"
 - in "Devel", select "gcc", "gcc4" and "make"
 - in "Libs", select "libxml2"
 - in "Math", select "gnuplot" 
 - in "Net", select "rsync" 
 - in "Perl", select "perl", "perl-ExtUtils-Depends" 
   and "perl-ExtUtils-PkgConfig"

After installation, from a shell, run 'cpan', from which you will
want to "install" first the "ExtUtils::CBuilder" module and then
the modules listed in the 'Prerequisites' section of the main
'Installation' instructions, as sketched below.
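
A minimal sketch of such a cpan session (the 'cpan>' prompt may look
different on your system; repeat 'install' for each remaining module):

   %  cpan
   cpan> install ExtUtils::CBuilder
   cpan> install Text::CSV
   cpan> quit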

After this, you should be able to successfully complete "make check" 
and use the tools.

USAGE
-----

Each command prints its manual page when executed with the '--man'
option.  For example:

   %  TV08Scorer --man

The manual pages contain command line examples for each tool.

To try some command lines with data files, go to the test directory
for the evaluation tool you want to try ('CLEAR/test/<TOOLNAME>',
'TrecVid08/test/<TOOLNAME>' or 'AVSS09/test/<TOOLNAME>') and run the
command lines printed on the first line of the res*.txt test case
files, as shown below.
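
For example (res_test1.txt is a hypothetical file name; use one of the
res*.txt files actually present in the chosen directory):

   %  cd TrecVid08/test/TV08Scorer
   %  head -1 res_test1.txt

The printed command line can then be run from that directory.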



CONTACT
-------

       Please send bug reports to <nist_f4de@nist.gov>

For the bug report to be useful, please include the command line, the
input files, and the text output (including the error message) in your
email.



TEST CASE BUG REPORT
--------------------

If the error occurred while doing a 'make check', go into the directory
associated with the tool that failed (for example:
'CLEAR/test/<TOOLNAME>') and type 'make makecompcheckfiles'. This
process will create a file corresponding to each test number, named
"res_test*.txt-comp". These files are (like their .txt equivalents) text
files that can be compared to the original "res_test*.txt" files.
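
For example (the directory and test number are illustrative; substitute
the tool and test that actually failed):

   %  cd CLEAR/test/CLEARDTScorer
   %  make makecompcheckfiles
   %  diff res_test1.txt res_test1.txt-comp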

 When a test fails, please send us the "res_test*.txt-comp" file of
the failed test(s) so that we can try to understand what happened, as
well as any information about your system (OS, architecture, ...) that
you think might help us.  Thank you for helping us improve F4DE.



AUTHORS
-------

       Martial Michel <martial.michel@nist.gov>

       David Joy <david.joy@nist.gov>

       Jonathan Fiscus <jonathan.fiscus@nist.gov>

       Vladimir Dreyvitser

       Vasant Manohar

       Jerome Ajot

       Bradford N. Barr



COPYRIGHT 
---------

Full details can be found at: http://nist.gov/data/license.cfm

This software was developed at the National Institute of Standards and
Technology by employees of the Federal Government in the course of their
official duties.  Pursuant to Title 17 Section 105 of the United States Code
this software is not subject to copyright protection within the United States
and is in the public domain. F4DE is an experimental system.  NIST assumes
no responsibility whatsoever for its use by any party, and makes no guarantees,
expressed or implied, about its quality, reliability, or any other characteristic.

We would appreciate acknowledgement if the software is used.  This software can be
redistributed and/or modified freely provided that any derivative works bear some notice
that they are derived from it, and any modified versions bear some notice that they
have been modified.

THIS SOFTWARE IS PROVIDED "AS IS."  With regard to this software, NIST MAKES
NO EXPRESS OR IMPLIED WARRANTY AS TO ANY MATTER WHATSOEVER, INCLUDING
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
