Research talk: NUWA: Neural visual world creation with multimodal pretraining
- Lei Ji, Chenfei Wu | Microsoft Research Asia
- Microsoft Research Summit 2021 | Deep Learning & Large-Scale AI
Over the past few years, large-scale pretrained models with billions of parameters have improved the state of the art in nearly every natural language processing (NLP) task. These models are fundamentally changing the research and development of NLP and AI in general. Recently, researchers have been expanding such models beyond natural language text to include more modalities, such as structured knowledge bases, images, and videos. Against this background, the talks in this session introduce the latest advances in pretrained models and discuss the future of this research frontier. Hear from Lei Ji and Chenfei Wu of Microsoft Research Asia in the second of three talks on recent advances and applications of language model pretraining.
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
Speakers:
- Lei Ji, Senior Researcher
- Chenfei Wu, Senior Researcher
Deep Learning & Large-Scale AI

Opening remarks: Deep Learning and Large-Scale AI
- Ahmed Awadallah

Roundtable discussion: Efficient and adaptable large-scale AI
- Ahmed Awadallah, Jianfeng Gao, Danqi Chen

Panel: Large-scale neural platform models: Opportunities, concerns, and directions
- Eric Horvitz, Miles Brundage, Yejin Choi

Research talk: WebQA: Multihop and multimodal
- Yonatan Bisk

Roundtable discussion: Beyond language models: Knowledge, multiple modalities, and more
- Yonatan Bisk, Daniel McDuff, Dragomir Radev

Closing remarks: Deep Learning and Large-Scale AI
- Jianfeng Gao