[TALK] AI talk at Open Science & Societal Impact conference #3598
Comments
A few more things I'm thinking about:
Thanks for starting this issue @penyuan! Here is the Medium post with the write-up: https://jending.medium.com/c57ccdbce896 A few thoughts on how to weave this talk preparation and scoping into TTW:
I've fleshed out the content of the talk a bit more into three sections, with contents in bullet point form:
1. "Open AI" is often neither open, artificial, nor intelligent
2. Moving towards an outcomes-based approach to AI
3. What outcomes do we want for open science?
So, in summary:
@penyuan some suggested text for slides 13/14 for the interlude of section 2, "Moving towards an outcomes-based approach to AI"!
That sounds great @dingaaling, thank you! Continuing the thread in section 2 about outcomes, your post inspired me to think more about specific examples to demonstrate this approach that (1) could be connected to scientific research; and (2) demonstrate the point of section 3, i.e. that dealing with "AI" (let alone "open" AI) often misses the point/deeper problems. Let me know what you think of this:

The colloquially ambiguous use of the term "AI" can misdirect our attention, as it is often neither artificial nor intelligent. Not only does it perpetuate AI as pixie dust that you sprinkle onto things to give them a magical sheen, it entrenches deep systemic problems we've had long before this popular term came along. For example, ChatGPT comes across as an autonomous, independent entity that you can have a human(-like) conversation with and that can do tasks for you. In fact, the development of its underlying statistical models and "intelligent" facade is built on traumatised sweatshop labourers - often in African countries - who manually provide training data (such as reported here, here, here, here, here, or here). Calling this "artificial intelligence" further distances us from the inequitable labour practices and colonialism that have long been deeply problematic. An outcomes-based approach to AI means that in addition to defining key terms, we consider the outcomes we want to see for these underlying issues and think about what tools we need to achieve them.

What does this have to do with scientific research? In the past few years, I've peer reviewed several scientific papers where academic researchers crowdsource the labelling of their big datasets to an army of online volunteers, who provide training data to machine learning algorithms. Some researchers like to call this "citizen science" (I disagree), and stress in their papers how crowdsourcing (menial) work to volunteers saves money and is efficient for achieving their scientific aims. Much of the conversation is about how to ensure that these low-skilled volunteers provide scientifically rigorous results. In contrast, relatively little ink is spilled on what the activity means for this labour, or on how this labour is not counted among the "costs" saved by these academic scientists. In my view, those of us in the scientific community must engage with broader discourse - including non-academic circles - on the outcomes we'd like to see in a world with AI. [lead into section 3]

Apparently my presentation has to be 17-18 minutes, so fitting everything in is a challenge, but I'd appreciate input from @dingaaling or @everyone on whether this fits with the outline above!
I've published a "release candidate" iteration of the slides to Zenodo: https://doi.org/10.5281/zenodo.11051128 I'm still tweaking a few things, and the final version used for the talk this Thursday will use the same DOI.
With many thanks to @dingaaling, I "shipped" the final talk yesterday.
Slides and video recording: https://doi.org/10.5281/zenodo.11051128
Recording on Internet Archive: https://archive.org/details/AI-is-not-the-problem-2024-04-25
Transcript: https://write.as/naclscrg/talk-ai-is-not-the-problem
There are like 100 tabs in my browser with further reading from the development process, some of which was kindly suggested by @dingaaling. I'll try to somehow dump that somewhere, maybe in one of the documents above... Thanks everyone!
Quick update from the 1 May 2024 Turing Way Collaboration Cafe. Notes are in the cafe pad, including things we're doing after today.
Date of talk
2024-04-25
Details of the talk
I've been invited to give a talk at the Open Science & Societal Impact online conference. It's scheduled for 16:30 UTC on 25 April 2024, and @dingaaling has kindly agreed to be a co-author.
The meeting is about various facets of open science and policy, and the session I've been asked to speak in is on open science and AI (noting that "AI" can be a problematic term!).
After a couple of stimulating meetings with @dingaaling, here's the general structure of the talk (which is still in development):
OK, that's the gist of it. More coming!