"description": "When working on a new ML solution to solve a given problem, do you\nthink that you are simply using objective reality to infer a set of\nunbiased rules that will allow you to predict the future? Do you\nthink that worrying about the morality of your work is something\nother people should do? If so, this talk is for you.\n\nIn this brief time, I will try to convince you that you hold great\npower over what the future world will look like, and that you should\nincorporate thinking about morality into the set of ML tools you use\nevery day. We will take a short journey through several problems\nthat have surfaced over the last few years as ML, and AI in general,\nbecame more widely used. We will look at bias present in training\ndata, at some real-world consequences of not accounting for it\n(including one or two hair-raising stories), and at cutting-edge\nresearch on how to counteract it.\n\nThe outline of the talk is:\n\n- Intro to the problem: ML algorithms can be biased!\n- Two concrete examples.\n- What's been done so far (i.e. techniques from recently published papers).\n- What to do next: unanswered questions.",