EPIC-KITCHENS Action Recognition baselines

Train/Val/Test splits and annotations are available in the Annotations Repo.

To participate and submit to this challenge, register at the Action Recognition Codalab Challenge.

Released Models

Result data formats

We support two formats for model results.

  • List format:
    [
        {
            'verb_output': Iterable of float, shape [97],
            'noun_output': Iterable of float, shape [300],
            'narration_id': str, e.g. 'P01_101_1'
        }, ... # repeated for all segments in the val/test set.
    ]
    
  • Dict format:
    {
        'verb_output': np.ndarray of float32, shape [N, 97],
        'noun_output': np.ndarray of float32, shape [N, 300],
        'narration_id': np.ndarray of str, shape [N,]
    }
    

Either of these formats can be saved via torch.save with a .pt suffix or with pickle.dump with a .pkl suffix.

We provide two example files (from a TSN trained on RGB) following both these formats for reference.

Note that either layout can be stored in a .pkl or .pt file; the dict format doesn't necessarily have to go in a .pkl. A minimal saving sketch follows.
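
As a concrete illustration, the sketch below builds placeholder scores in the dict format and saves them both ways. The file names, the number of segments N, and the random scores are hypothetical; only the key names and shapes follow the format described above.

import pickle

import numpy as np
import torch

# Placeholder scores standing in for real model outputs; only the keys and
# shapes matter: 97 verb classes, 300 noun classes, N segments.
N = 3
results = {
    "verb_output": np.random.rand(N, 97).astype(np.float32),
    "noun_output": np.random.rand(N, 300).astype(np.float32),
    "narration_id": np.array(["P01_101_0", "P01_101_1", "P01_101_2"]),
}

# Either serialisation is accepted; match the suffix to the library you use.
torch.save(results, "results.pt")
with open("results.pkl", "wb") as f:
    pickle.dump(results, f)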

Evaluating model results

We provide an evaluation script to compute the metrics we report in the paper on the validation set. You will also need to clone the annotations repo.

$ python evaluate.py \
    results.pt \
    /path/to/annotations/EPIC_100_validation.pkl \
    --tail-verb-classes-csv /path/to/annotations/EPIC_100_tail_verbs.csv \
    --tail-noun-classes-csv /path/to/annotations/EPIC_100_tail_nouns.csv \
    --unseen-participant-ids /path/to/annotations/EPIC_100_unseen_participant_ids_test.csv

all_action_accuracy_at_1: ...
all_action_accuracy_at_5: ...
all_noun_accuracy_at_1: ...
all_noun_accuracy_at_5: ...
all_verb_accuracy_at_1: ...
all_verb_accuracy_at_5: ...
tail_action_accuracy_at_1: ...
tail_noun_accuracy_at_1: ...
tail_verb_accuracy_at_1: ...
unseen_action_accuracy_at_1: ...
unseen_noun_accuracy_at_1: ...
unseen_verb_accuracy_at_1: ...
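
For a rough sense of how these numbers are derived from the saved scores, the sketch below computes top-1/top-5 verb accuracy directly from a dict-format results file. It is not the official evaluate.py; the assumption that EPIC_100_validation.pkl loads as a pandas DataFrame indexed by narration_id with an integer verb_class column is mine.

import numpy as np
import pandas as pd
import torch

# Load dict-format results and the validation annotations (layout assumed:
# a DataFrame indexed by narration_id with integer verb_class labels).
results = torch.load("results.pt")
annotations = pd.read_pickle("/path/to/annotations/EPIC_100_validation.pkl")
labels = annotations.loc[results["narration_id"], "verb_class"].to_numpy()

scores = results["verb_output"]                # shape [N, 97]
top5 = np.argsort(-scores, axis=1)[:, :5]      # best-scoring classes first
print("verb top-1:", (top5[:, 0] == labels).mean())
print("verb top-5:", (top5 == labels[:, None]).any(axis=1).mean())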

Creating competition submission files

Once you have run your model on the test set and saved its results in one of the formats above, run this script to generate a zip file containing the JSON representation of your model's scores, ready for submission to the challenge leaderboard.

$ python create_submission.py test_results.pt test_results.zip

An example of the output this script produces can be downloaded from https://www.dropbox.com/s/sxxgyscziryeno1/tsn_fused_split%3Dtest.zip?dl=1 This is the submission file for the modality-fused results of the TSN model.
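
As an optional sanity check before uploading, the sketch below lists the archive contents and confirms the JSON payload parses. It uses only the Python standard library; the exact member names inside the zip are whatever create_submission.py writes, so the .json filter is an assumption.

import json
import zipfile

# List the archive contents and confirm each JSON member parses cleanly.
with zipfile.ZipFile("test_results.zip") as zf:
    for name in zf.namelist():
        print(name)
        if name.endswith(".json"):
            json.loads(zf.read(name))  # raises if the payload is malformed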

License

Copyright University of Bristol. The repository is published under the Creative Commons Attribution-NonCommercial 4.0 International License. This means that you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not use the material for commercial purposes.
