Panel: Maximizing benefits and minimizing harms with language technologies
- Hal Daumé III, Steven Bird, Su Lin Blodgett, Margaret Mitchell, Hanna Wallach | Microsoft Research NYC, Charles Darwin University, Microsoft Research Montréal, Ethical AI LLC, Microsoft Research NYC
- Microsoft Research Summit 2021 | Responsible AI
Language is one of the main ways in which people understand and construct the social world. Current language technologies can contribute to this process positively, by challenging existing power dynamics, or negatively, by reproducing or exacerbating existing social inequities. In this panel, we will discuss existing concerns and opportunities related to the fairness, accountability, transparency, and ethics (FATE) of language technologies and the data they ingest or generate. It's important to address these matters because language technologies might surface, replicate, exacerbate, or even cause a range of computational harms: from exposing offensive speech or reinforcing stereotypes, to subtler issues like nudging users toward undesirable patterns of behavior or triggering memories of traumatic events. In this session, we'll cover such critical questions as: How can we reliably measure fairness-related and other computational harms? Whose data is included in training a model, and who is excluded as a result? How do we better foresee potential computational harms from language technologies?
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
-
Hal Daumé III
Principal Researcher
-
Margaret Mitchell
Researcher
-
Su Lin Blodgett
Senior Researcher
-
Hanna Wallach
Partner Research Manager
-
Responsible AI
-
Opening remarks: Responsible AI
- Hanna Wallach
-
Demo: RAI Toolbox: An open-source framework for building responsible AI
- Besmira Nushi,
- Mehrnoosh Sameki,
- Amit Sharma
-
Tutorial: Best practices for prioritizing fairness in AI systems
- Amit Deshpande,
- Amit Sharma
-
Panel discussion: Content moderation beyond the ban: Reducing borderline, toxic, misleading, and low-quality content
- Tarleton Gillespie,
- Zoe Darmé,
- Ryan Calo
-
Lightning talks: Advances in fairness in AI: From research to practice
- Amit Sharma,
- Michael Amoako,
- Kristen Laird
-
Lightning talks: Advances in fairness in AI: New directions
- Amit Sharma,
- Kinjal Basu,
- Michael Madaio
-
Panel: Maximizing benefits and minimizing harms with language technologies
- Hal Daumé III,
- Steven Bird,
- Su Lin Blodgett
-
Tutorial: Create human-centered AI with the Human-AI eXperience (HAX) Toolkit
- Saleema Amershi,
- Mihaela Vorvoreanu
-
Panel: The future of human-AI collaboration
- Aaron Halfaker,
- Charles Isbell,
- Jaime Teevan
-
Closing remarks: Responsible AI
- Ece Kamar