@hearbenchmark

HEAR Benchmark

The Holistic Evaluation of Audio Representations (HEAR). A benchmark of diverse audio tasks for audio ML researchers.


The Holistic Evaluation of Audio Representations Benchmark

What audio embedding approach generalizes best to a wide range of downstream tasks across a variety of everyday domains without fine-tuning?

The aim of the HEAR benchmark is to develop a general-purpose audio representation that provides a strong basis for learning in a wide variety of tasks and scenarios. HEAR evaluates audio representations using a benchmark suite across a variety of domains, including speech, environmental sound, and music.

For more information on HEAR, please visit https://hearbenchmark.com or read our paper: https://arxiv.org/abs/2203.03022

To submit to the HEAR benchmark leaderboard, follow the instructions on our website and then open a pull request against the hearbenchmark.com repository.
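
A submission is a Python module that produces embeddings for downstream evaluation by the hear-eval-kit. Below is a minimal sketch of such a module, assuming the common API (load_model, get_timestamp_embeddings, get_scene_embeddings) described on hearbenchmark.com; the website and the hear-validator repository are the authoritative specification, and the toy framing model here is purely illustrative.

```python
# Minimal sketch of a HEAR-style embedding module (illustrative only).
# Assumption: the common API documented on hearbenchmark.com; check the site
# and hear-validator for the exact required signatures and attributes.
from typing import Tuple

import torch


class ToyEmbedder(torch.nn.Module):
    """Toy model: frames the waveform and projects each frame linearly."""

    sample_rate = 16000             # input sample rate expected by the model
    scene_embedding_size = 128      # one embedding per audio clip
    timestamp_embedding_size = 128  # one embedding per timestamp

    frame_size = 400  # samples per frame (25 ms at 16 kHz)
    hop_size = 160    # samples between frames (10 ms at 16 kHz)

    def __init__(self) -> None:
        super().__init__()
        self.projection = torch.nn.Linear(self.frame_size, self.timestamp_embedding_size)

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # audio: (n_sounds, n_samples) -> frames: (n_sounds, n_frames, frame_size)
        frames = audio.unfold(dimension=1, size=self.frame_size, step=self.hop_size)
        return self.projection(frames)  # (n_sounds, n_frames, embedding_size)


def load_model(model_file_path: str = "") -> ToyEmbedder:
    """Return a model instance; a real submission would load weights here."""
    return ToyEmbedder()


def get_timestamp_embeddings(
    audio: torch.Tensor, model: ToyEmbedder
) -> Tuple[torch.Tensor, torch.Tensor]:
    """Embed each frame and report its center timestamp in milliseconds."""
    embeddings = model(audio)
    n_frames = embeddings.shape[1]
    centers_ms = (
        (torch.arange(n_frames) * model.hop_size + model.frame_size / 2)
        / model.sample_rate
        * 1000.0
    )
    timestamps = centers_ms.unsqueeze(0).expand(audio.shape[0], -1)
    return embeddings, timestamps


def get_scene_embeddings(audio: torch.Tensor, model: ToyEmbedder) -> torch.Tensor:
    """Pool timestamp embeddings into a single embedding per clip."""
    embeddings, _ = get_timestamp_embeddings(audio, model)
    return embeddings.mean(dim=1)
```

The hear-validator repository can be used to check that a module of this shape conforms to the API before it is evaluated with the hear-eval-kit.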

Pinned repositories

  1. hearbenchmark.com: HEAR Benchmark website and leaderboard submissions
  2. hear-eval-kit: Evaluation kit for the HEAR Benchmark (Jupyter Notebook)
  3. hear-baseline: Simple baseline model for the HEAR benchmark (Python)
  4. hear-validator: Submission validator for the HEAR Benchmark (Python)
  5. hear2021-submitted-models: Open-source audio embedding models, submitted to the HEAR 2021 challenge (Python)
  6. hear-preprocess: Dataset preprocessing code for the HEAR 2021 NeurIPS competition (Python)
