Panel: Large-scale neural platform models: Opportunities, concerns, and directions
- Eric Horvitz, Miles Brundage, Yejin Choi, Percy Liang | Microsoft, OpenAI, University of Washington / AI2, Stanford University
- Microsoft Research Summit 2021 | Deep Learning & Large-Scale AI
Large-scale, pretrained neural models are driving significant research and development across multiple AI areas, underpinning leaps forward in capabilities in natural language processing, computer vision, and multimodal reasoning. Over the last five years, large-scale neural models have evolved into platforms, where fixed large-scale “platform models” are adapted via fine-tuning to develop capabilities on specific tasks. Research continues, and we have much to learn. While there is excitement about demonstrated capabilities, the “models as platforms” paradigm is concurrently raising questions and framing discussions about a constellation of concerns. These include challenges with safety and responsibility regarding the understandability of emergent behaviors, the potential for systems to generate offensive output, and malevolent uses of new capabilities. Other discussion focuses on the cost of building platform models and the rise of haves and have-nots, where only a few industry organizations can construct platform models. Microsoft Chief Scientific Officer Eric Horvitz will lead an expert panel on neural platform models, discussing research directions, responsible practices, and paths forward on key concerns.
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
Eric Horvitz
Chief Scientific Officer
Percy Liang
Researcher
Deep Learning & Large-Scale AI
Opening remarks: Deep Learning and Large-Scale AI
- Ahmed Awadallah
Roundtable discussion: Efficient and adaptable large-scale AI
- Ahmed Awadallah
- Jianfeng Gao
- Danqi Chen
Panel: Large-scale neural platform models: Opportunities, concerns, and directions
- Eric Horvitz
- Miles Brundage
- Yejin Choi
Research talk: WebQA: Multihop and multimodal
- Yonatan Bisk
Roundtable discussion: Beyond language models: Knowledge, multiple modalities, and more
- Yonatan Bisk
- Daniel McDuff
- Dragomir Radev
Closing remarks: Deep Learning and Large-Scale AI
- Jianfeng Gao