NSF Announces $140 Million Investment in Seven Artificial Intelligence Research Institutes

The U.S. National Science Foundation (NSF), along with several other federal agencies and higher education institutions, has announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes (AI Institutes).

The initiative represents a major effort by the federal government to develop an AI workforce and to advance fundamental understanding of the technology’s uses and risks. Each institute is a collaboration among several universities and will receive up to $20 million over a five-year period.

According to the announcement, the new AI Institutes will conduct research in several areas, including promoting ethical and trustworthy AI systems and technologies, developing novel approaches to cybersecurity, addressing climate change, expanding our understanding of the brain, and enhancing education and public health.

“The National AI Research Institutes are a critical component of our Nation’s AI innovation, infrastructure, technology, education, and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan, in the announcement. “These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

In addition to the National Science Foundation, the AI Institutes will be supported by funding from the U.S. Department of Commerce’s National Institute of Standards and Technology; the U.S. Department of Homeland Security’s Science and Technology Directorate; the U.S. Department of Agriculture’s National Institute of Food and Agriculture; the U.S. Department of Education’s Institute of Education Sciences; the U.S. Department of Defense’s Office of the Under Secretary of Defense for Research and Engineering; and the IBM Corporation.

Led by the University of Maryland, the NSF Institute for Trustworthy AI in Law & Society (TRAILS) aims to transform the practice of AI from one driven primarily by technological innovation to one driven by attention to ethics, human rights, and support for voices that have been marginalized in mainstream AI. It will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness.

Read the rest of the article in Forbes.
