Zero Speech Challenge 2017 Surprise Dataset and Evaluation

General procedure for official evaluation

Welcome to the zerospeech2017 surprise evaluation phase! If you are here, it means that you have developed a system and thoroughly tested and tuned it on the three development languages (see www.zerospeech.com/2017).

The procedure is as follows: download the surprise dataset using the script download_surprise_data.sh and run your tuned system on it. Once done, prepare a .tar.gz archive containing the command lines, the source code or binaries for your system, and the output files for the 3 dev and 2 surprise languages. This archive is then submitted to zenodo in order to create a unique digital object identifier (DOI) for it. Once we receive the DOI, we will automatically compute the official evaluation results of your system on our servers. The DOI will appear on our public leaderboard page, together with the associated results on all 5 languages.

Downloading the surprise dataset

The dataset can be downloaded via the script download_surprise_data.sh. Only the raw audio files and their VAD (voice activity detection) annotations are provided; no other metadata, transcriptions or evaluation software will be given.
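
For reference, a typical invocation from a clone of this repository looks like the following (any options the script accepts are documented in the script itself):

        $ cd zerospeech2017_surprise
        $ ./download_surprise_data.sh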

Running your system on the dataset

Attention: it is important that you extract the output files for all 5 languages using the exact same system (with the same hyperparameters)! This means, of course, that the results on the 3 dev languages may not be optimal for each of them taken separately; but that is not the aim of the challenge. The aim is to build a system that scores well on the three dev languages and generalizes best to the two new surprise languages.

Please prepare a script that will run your system with the optimal hyperparameters on the 5 languages in succession. This script will be part of the distributed code and will ensure replicability; a minimal sketch is shown below.
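
Here is one possible shape for such a script, reusing the placeholder program name and hyperparameters from the README example further below; substitute your own binaries and flags:

        #!/bin/bash
        # Run the same tuned system on all 5 languages in succession.
        # "my_features" and "my_spoken_term_disc" are hypothetical placeholders.
        set -e
        for L in english french mandarin LANG1 LANG2
        do
            # track 1: frame-level features computed on the test files
            system/bin/my_features -N 532 -A 100 ~/data/${L}/test track1/${L}/
            # track 2: spoken term discovery run on the training files
            system/bin/my_spoken_term_disc -a 1 -b 2 -c 3 ~/data/${L}/training > track2/${L}.txt
        done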

Results directory format

Your system code and result files must be organized in a directory that you will submit as a .tar.gz archive. It must have the following structure:

    system/
        README
        distribution of your program, see below
    track1/
        README
        {english, french, mandarin, LANG1, LANG2}/
            {1s, 10s, 120s}/
                *.fea
    track2/
        README
        english.txt
        french.txt
        mandarin.txt
        LANG1.txt
        LANG2.txt

Valid submissions may contain only one of the two tracks, and may have missing data for one of the hypertrain or hypertest languages (in case of a last-minute bug, we will not prevent you from submitting if one language is missing).

We request a README file in each subdirectory. They can have any (or no) extension, e.g. README.md or README.txt.
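
As an illustration, a skeleton with this layout can be created along the following lines (the submission directory name and README file name are hypothetical):

        $ mkdir -p submission/{system,track1,track2}
        $ for L in english french mandarin LANG1 LANG2
        > do
        >     mkdir -p submission/track1/${L}/{1s,10s,120s}
        > done
        $ cp my_system_readme.txt submission/system/README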

system directory

The system directory contains the program(s) you are using for the challenge. It can be distributed as a DOI, sources or binary.

  • DOI: If you have already published your code elsewhere and have an attached DOI, put that information in the README, along with a download URL. In that case, the system folder contains only the README (learn how to attach a zenodo DOI to a github repository here).

  • Sources: If you do not have a DOI you can distribute your program as sources. In that case you are free to organize the system folder as you want. The README must describe the program, any software dependencies and the build process.

  • Binary: We strongly encourage distribution as sources or via a DOI but, if you cannot release your sources, we also accept binary files. In that case the README must specify how to run the executable (system, dependencies, etc.). We will run it on the challenge's data to check that it generates the same files as the ones you provided, as sketched below.
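
That equality check can be as simple as the following (binary name and paths hypothetical):

        $ system/bin/my_system_binary ~/data/english/test regenerated/english/
        $ diff -r track1/english regenerated/english && echo "outputs match"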

system README

The system README must contain the following four types of information:

  • hardware configuration (e.g. sudo lshw -short or cat /proc/cpuinfo).
  • system configuration (e.g. uname -a).
  • libraries and dependencies.
  • list and value of the hyperparameters that have been tuned with the development languages for each of the tracks.

Here is a README example:

* hardware
processor	: 0-7
vendor_id	: GenuineIntel
cpu family	: 6
model		: 44
model name	: Intel(R) Xeon(R) CPU           E5620  @ 2.40GHz
cpu MHz		: 1596.000
cache size	: 12288 KB

* system
Linux habilis 4.9.0-0.bpo.1-amd64 #1 SMP Debian 4.9.2-2~bpo8+1 (2017-01-26) x86_64 GNU/Linux

* libraries and dependencies
Python 2.7.5 :: Anaconda 1.6.1 (64-bit)
numpy 1.7.1
theano 0.5.0
gcc (Debian 4.9.2-10) 4.9.2

* hyperparameters
- Track 1
 number of gaussians (-N): 532
 concentration parameters (-A): 100
 speech features: MFCC-13 with delta and delta2, from HTK 

track1 and track2 directories

The track1 and track2 directories contain the results you obtained on the 3 hypertrain datasets (english, french and mandarin) and the 2 hypertest datasets (LANG1 and LANG2) using your system.

  • If you are submitting only for one of the two tracks or if you have missing languages, delete the unused folders.

  • The README files must contain the command and parameters used to generate the entire set of results. Importantly, the hyperparameters of your system should be exactly the same for the 5 languages.

Here is a README example for track 2:

        for L in english french mandarin LANG1 LANG2
        do
            system/bin/my_spoken_term_disc -a 1 -b 2 -c 3 ~/data/${L}/training > track2/${L}.txt
        done
  • The track1 .fea files are the features computed on the test files, in the format specified here.

  • The track2 .txt files are the clusters computed on the train files, in the format specified here (hypothetical excerpts of both formats are shown below).
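
For orientation only (the format specifications linked above are authoritative): a track1 .fea file is plain text with one frame per line, the frame time in seconds followed by the feature values, while a track2 .txt file lists discovered clusters, each a Class header followed by one file/onset/offset line per fragment. Hypothetical excerpts:

        # track1/english/1s/aghsu09.fea
        0.0125 12.3 428.8 -92.3 0.021
        0.0225 19.0 392.9 -43.1 0.010

        # track2/english.txt
        Class 1
        dsgea01 1.238 1.763
        dsgea19 3.380 3.821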

Directory validation and archive creation

The evaluation will fail if the archive is not in the correct format. Make sure your results directory is ready for evaluation and convert it to a .tar.gz using the make_submission_archive.sh script from the challenge repository. Here is an example usage with the provided exemple archive:

        $ cd zerospeech2017_surprise
        $ tar xf exemple.tar.gz
        $ make_submission_archive.sh exemple
        checking exemple/system... found
        checking exemple/system README... found exemple/system/README.txt
        checking exemple/{track1, track2} directories... found track1 track2
        checking exemple/track1 README... found exemple/track1/README.md
        [... more checks ...]
        checking exemple/track2/LANG1.txt... found
        checking exemple/track2/LANG2.txt... found
        success: exemple directory is valid and ready for evaluation
        creating exemple.tar.gz... done
        size of exemple:	68K
        size of exemple.tar.gz:	4,0K

You are limited to 50GB, which should be largely enough for the challenge; if you have a bigger archive, please contact us.

Archive submission and evaluation

  1. Sign up to zenodo at https://zenodo.org/signup/

  2. Upload your .tar.gz archive to zenodo to get a DOI (digital object identifier).

    • Start a new upload in the zenodo zerospeech2017 community here and pre-reserve a DOI.

    • Your pre-reserved DOI will be a string of the form 10.SERV/DATACITE, where SERV is the numeric identifier of the DOI provider and DATACITE is your alphanumeric data identifier (e.g. 10.5281/zenodo.803972). To avoid system incompatibilities, archive your data in a compressed file named 10_SERV_DATACITE.tar.gz; with the previous example this gives 10_5281_zenodo_803972.tar.gz (a shell sketch of this renaming appears after this list).

    • The upload type must be dataset to allow the 50GB limit.

    • Please be careful before publishing: the archive becomes public and can only be deleted by emailing zenodo.
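
As referenced above, a minimal shell sketch of the archive renaming, using the pre-reserved DOI from the example:

        $ DOI="10.5281/zenodo.803972"
        $ mv exemple.tar.gz "$(echo ${DOI} | tr './' '__').tar.gz"   # map '.' and '/' to '_'
        $ ls *.tar.gz
        10_5281_zenodo_803972.tar.gz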

  3. Once the archive is published, the challenge organizers will receive a notification and an automated job will download and evaluate your features.

  4. The evaluation results will be sent to you by email and published within 24h on the challenge results page.

  5. To submit another archive, go back to step 2. The maximum is 5 submissions.