Research talk: Towards data-efficient machine learning with meta-learning
- Guoqing Zheng | Microsoft Research Redmond
- Microsoft Research Summit 2021 | Deep Learning & Large-Scale AI
At Microsoft Research, we are approaching large-scale AI from many perspectives, which include not only creating new, bigger models, but also developing new ways of optimizing AI models from training to deployment. One of the main challenges posed by larger AI models is that they are difficult to deploy affordably and sustainably, and it also remains hard for them to learn new concepts and tasks effectively. Join Microsoft researcher Guoqing Zheng for the third of three lightning talks in this series on efficient and adaptable large-scale AI. See the talks from Microsoft researchers Subho Mukherjee and Yu Cheng to learn more about the work Microsoft is doing to improve the efficiency of computation and data in large-scale AI models.
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
Deep Learning & Large-Scale AI

Opening remarks: Deep Learning and Large-Scale AI
- Ahmed Awadallah

Roundtable discussion: Efficient and adaptable large-scale AI
- Ahmed Awadallah
- Jianfeng Gao
- Danqi Chen

Panel: Large-scale neural platform models: Opportunities, concerns, and directions
- Eric Horvitz
- Miles Brundage
- Yejin Choi

Research talk: WebQA: Multihop and multimodal
- Yonatan Bisk

Roundtable discussion: Beyond language models: Knowledge, multiple modalities, and more
- Yonatan Bisk
- Daniel McDuff
- Dragomir Radev

Closing remarks: Deep Learning and Large-Scale AI
- Jianfeng Gao