{
"copyright_text": "Creative Commons Attribution license (reuse allowed)",
"description": "Bargava Subramanian - Machine Learning: Power of Ensembles\n[EuroPython 2016]\n[22 July 2016]\n[Bilbao, Euskadi, Spain]\n(https://ep2016.europython.eu//conference/talks/machine-learning-power-of-ensembles)\n\nIn machine learning, combining many models has proven to provide better results than any single model.\n\nThe primary goal of the talk is to answer the following questions:\n\n1) Why and how do ensembles produce better output?\n2) When data scales, what is the impact? What are the trade-offs to consider?\n3) Can ensemble models eliminate expert domain knowledge?\n\n-----\n\nIt is relatively easy to build a first-cut machine learning model. But what does it take to build a reasonably good model, or even a state-of-the-art one?\n\nEnsemble models. They are our best friends. They help us exploit the power of computing. Ensemble methods aren't new: they form the basis of some extremely powerful machine learning algorithms, such as random forests and gradient boosting machines. The key point about ensembles is that consensus from diverse models is more reliable than a single source. This talk will cover how we can combine the outputs of various base models (logistic regression, support vector machines, decision trees, neural networks, etc.) to create a stronger/better model.\n\nThis talk will cover various strategies for creating ensemble models.\n\nUsing third-party Python libraries along with scikit-learn, this talk will demonstrate the following ensemble methodologies:\n\n1) Bagging\n2) Boosting\n3) Stacking\n\nReal-life examples from the enterprise world will be showcased where ensemble models consistently produced better results than the single best-performing model.\n\nThere will also be emphasis on the following: feature engineering, model selection, and the importance of bias-variance and generalization.\n\nCreating better models is a critical component of building a good data science product.\n\nA preliminary version of the slides is available\n`here <https://speakerdeck.com/bargava/power-of-ensembles>`_",
"duration": 1008,
"language": "eng",
"recorded": "2016-08-05",
"related_urls": [
"https://speakerdeck.com/bargava/power-of-ensembles",
"https://ep2016.europython.eu//conference/talks/machine-learning-power-of-ensembles"
],
"speakers": [
"Bargava Subramanian"
],
"tags": [],
"thumbnail_url": "https://i.ytimg.com/vi/4EPun0eAgLc/maxresdefault.jpg",
"title": "Machine Learning: Power of Ensembles",
"videos": [
{
"type": "youtube",
"url": "https://www.youtube.com/watch?v=4EPun0eAgLc"
}
]
}