# ML Contests


A sortable list of public machine learning/data science/AI contests, viewable on mlcontests.com.

Please submit a pull request for any changes.

Additions or changes to the competitions list can be made by editing https://github.com/mlcontests/mlcontests.github.io/blob/master/competitions.json. Please check the submission criteria first to ensure your competition qualifies.

## Schema

### Mandatory fields

- `"name"`: A description of the competition.
- `"url"`: Link to the competition. Feel free to insert tracking codes so you can track the source.
- `"type"`: The type of ML that most closely matches the competition. See other competitions for examples. E.g. `"✅ Supervised Learning"`.
- `"deadline"`: Final day for submissions. Format is `"D MMM YYYY"`.
- `"prize"`: Monetary prizes only, converted to USD; leave blank if there is no monetary prize.
- `"platform"`: Which platform is running the competition? E.g. `"Kaggle"`/`"DrivenData"`.
- `"sponsor"`: Who's providing sponsorship? E.g. `"Google"`.

### Optional fields

- `"conference"`: Any conference affiliation, e.g. `"NeurIPS"`.
- `"conference-year"`: Which year of the conference is this competition affiliated with? E.g. `2022`.
- `"launched"`: Day the competition starts. Format is `"D MMM YYYY"`.
- `"registration-deadline"`: Final day new competitors are able to register. Format is `"D MMM YYYY"`.
- `"additional_urls"`: Any additional relevant links, for example to the competition homepage if the actual competition is run on CodaLab. E.g. `["https://example1.com", "https://example2.com"]`.
- `"tags"`: Any tags relevant to the type of challenge. E.g. `["supervised", "vision", "nlp"]`.

The required date format in all cases is `D MMM YYYY`, e.g. `5 Jan 2023`.
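Putting the fields together, a single entry in `competitions.json` might look like the following. (All values here are illustrative, not a real competition; the field names follow the schema above.)

```json
{
    "name": "Example Image Classification Challenge",
    "url": "https://example.com/competition",
    "type": "✅ Supervised Learning",
    "deadline": "5 Jan 2023",
    "prize": "$10,000",
    "platform": "Kaggle",
    "sponsor": "Google",
    "conference": "NeurIPS",
    "conference-year": 2022,
    "launched": "1 Oct 2022",
    "registration-deadline": "20 Dec 2022",
    "additional_urls": ["https://example.com/homepage"],
    "tags": ["supervised", "vision", "classification"]
}
```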

## Valid tags

We are currently transitioning away from assigning a competition a single type (e.g. "supervised learning" / "computer vision") and towards assigning multiple tags (e.g. `["supervised", "vision", "timeseries"]`).

Currently valid tags are listed below. Please check this list and tag your competition with all relevant tags. If you feel like any important tags are missing from this list, feel free to make suggestions in a pull request.

Until the transition is complete, please also assign both a type and tags.

| Tag | Description |
| --- | --- |
| `"supervised"` | Supervised learning (labels are given) |
| `"unsupervised"` | Unsupervised learning (no labels given) |
| `"rl"` | Reinforcement learning (actions to maximise reward) |
| `"control"` | Control problems (controlling a dynamical system) |
| `"classification"` | Classification (class labels) |
| `"regression"` | Regression (numerical labels) |
| `"ranking"` | Ranking (ranking sets of items) |
| `"segmentation"` | Segmentation (dividing something into parts with labels) |
| `"vision"` | Computer vision (images/video) |
| `"audio"` | Audio processing (sound) |
| `"nlp"` | Natural Language Processing (language, or sequences of tokens) |
| `"tabular"` | Tabular data (structured, in rows and columns) |
| `"multimodal"` | Multi-modal data (e.g. audio + text) |
| `"timeseries"` | Time series analysis (anything with time series data) |
| `"forecasting"` | Forecasting (making predictions about the future) |
| `"causal"` | Causal inference (cause and effect) |
| `"automl"` | AutoML (competitions restricted to AutoML solutions) |
| `"graph"` | Learning on graphs |
| `"optimisation"` | Optimisation (formal optimisation problems) |
| `"search"` | Search problems |
| `"safety"` | AI safety (alignment, robustness, monitoring, etc.) |
| `"security"` | Information security (virus detection, passwords, encryption, etc.) |
| `"privacy"` | Privacy (privacy-enhancing ML, federated learning, etc.) |
| `"meta"` | Meta-learning (learning to learn) |
| `"writing"` | Writing (essays, articles, blog posts) |
| `"reasoning"` | Logical reasoning or abstraction-based challenges |
| `"analysis"` | Analysis/visualisation (notebooks, presentations, recommendations, interpretation) |
| `"measurable"` | Any competition with an objectively measurable goal/benchmark |
| `"subjective"` | Any competition with a subjective determination of winners, such as through a judging panel |
| `"science"` | Any challenge analysing scientific data (physics/biology/chemistry/...) |
| `"sport"` | Any challenge analysing sports data (horse racing, NFL, NBA, soccer, ...) |
| `"business"` | Any challenge analysing business data (customer behaviour, credit card defaults, ...) |
| `"finance"` | Any challenge analysing financial markets data (crypto price prediction, ...) |
| `"education"` | Any challenge analysing education-related data (analysing students' essays, etc.) |
| `"geo"` | Any challenge analysing geographical data (localisation, mapping, etc.) |
| `"data"` | Any challenge where the core component is preparing or cleaning data, or creating new benchmark data sets |
| `"open"` | Any data can be used, not just data that was given |
| `"pvp"` | 'Player-vs-player': evaluation is done by having competitors battle |
| `"robotics"` | Any challenge involving teaching robots skills |
| `"driving"` | Any challenge involving self-driving cars |
| `"multiple"` | A competition composed of multiple mini-challenges |
| `"mlops"` | A competition focused on MLOps (the operational aspects of ML in production) rather than modelling |
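Before opening a pull request, you can sanity-check an entry against the schema with a small script like this. (A sketch, not part of this repository; `VALID_TAGS` holds a subset of the tag list above, and the field names and date format follow the schema described earlier.)

```python
from datetime import datetime

# Subset of the valid tags listed above; extend with the full table as needed.
VALID_TAGS = {
    "supervised", "unsupervised", "rl", "classification", "regression",
    "vision", "audio", "nlp", "tabular", "timeseries", "forecasting",
}

MANDATORY_FIELDS = ["name", "url", "type", "deadline", "prize", "platform", "sponsor"]


def validate_entry(entry: dict) -> list:
    """Return a list of problems found in a competitions.json entry."""
    problems = []
    # Every mandatory field must be present (values like "prize" may be blank).
    for field in MANDATORY_FIELDS:
        if field not in entry:
            problems.append(f"missing mandatory field: {field}")
    # Dates must use the "D MMM YYYY" format, e.g. "5 Jan 2023".
    for date_field in ("deadline", "launched", "registration-deadline"):
        if date_field in entry:
            try:
                datetime.strptime(entry[date_field], "%d %b %Y")
            except ValueError:
                problems.append(f"bad date in {date_field!r}: {entry[date_field]!r}")
    # Tags must come from the list of valid tags.
    for tag in entry.get("tags", []):
        if tag not in VALID_TAGS:
            problems.append(f"unknown tag: {tag!r}")
    return problems
```

An entry that passes returns an empty list; otherwise each problem is reported as a human-readable string.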
