"Academic research plays such an important role in advancing science, technology, culture, and society. This grant program helps ensure this community has access to the latest and leading AI models."

Brad Smith, Vice Chair and President

AFMR Goal: Align AI with shared human goals, values, and preferences via research on models that enhances safety, robustness, sustainability, responsibility, and transparency, while ensuring rapid progress can be measured via new evaluation methods.

A common theme across these research projects is improving LLMs' alignment with human goals: addressing challenges such as hallucinations, unfaithful information generation, and lack of control, while improving robustness, interpretability, and generalizability. Several proposals emphasize enhancing specific reasoning capabilities, including logical, commonsense, syntactic, inductive, abductive, and multi-document reasoning. Others pursue advances such as enabling LLMs to reason about time-series data, collaborate with one another, simulate public responses to projected AI actions, and interact with external environments. In terms of techniques, reinforcement learning, human feedback, retrieval-based methods, fine-tuning, model compression, task-oriented dialogue, and sequential decision-making are being explored to improve LLMs' performance and utility.
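Since retrieval-based methods are mentioned above as one lever against hallucinations, the following is a minimal, self-contained sketch of the retrieval-augmented prompting idea: fetch the passages most similar to the query and prepend them as context so the model can ground its answer. It uses a toy bag-of-words similarity in place of learned dense embeddings, and every name here (embed, retrieve, build_prompt) and the sample corpus are hypothetical illustrations, not any specific project's code.

```python
# Minimal sketch of retrieval-augmented prompting (illustrative only).
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank corpus passages by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Prepending retrieved passages gives the model verifiable context,
    # which is the basic mechanism for reducing hallucinations.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Retrieval grounds model outputs in source documents.",
    "Model compression reduces inference cost.",
    "Task-oriented dialogue systems complete user goals step by step.",
]
print(build_prompt("How does retrieval reduce hallucinations?", corpus))
```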