TRAILS Announces Second Round of Seed Funding

The Institute for Trustworthy AI in Law & Society (TRAILS) has announced its second round of seed funding, jumpstarting a series of interdisciplinary projects that align with the institute’s vision of advancing artificial intelligence (AI) systems that benefit all of society.

The five grants announced today—totaling $685,000—will support efforts to improve AI-generated health information, enhance safety and trust in autonomous vehicles, address education disparities driven by race and location, examine AI-driven social media platforms used during a pandemic or natural disaster, and build new frameworks for large language models (LLMs) employed in academia.

The projects involve faculty and students from all four of TRAILS’ primary academic institutions: the University of Maryland, George Washington University, Morgan State University and Cornell University.

“This latest round of seed funding supports research and innovation that can have a direct impact on how people stay healthy, learn and travel—areas of our lives that will benefit immensely from AI systems that are more ethical, inclusive, trustworthy and efficient,” said Hal Daumé III, a professor of computer science at the University of Maryland who is the director of TRAILS.

Like the inaugural round of TRAILS funding unveiled in January, this latest cohort of projects was selected based on its alignment with the core values driving the institute’s work: developing trustworthy AI algorithms, empowering users to make sense of AI systems, training the next generation of AI leaders, and promoting inclusive AI governance strategies.

The new grantees will interact with previously funded seed grant teams, Daumé added, learning from and supporting each other while collectively contributing to TRAILS’ shared body of knowledge.

TRAILS was launched in May 2023 with a $20 million award from the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST). Since then, faculty, students and postdocs affiliated with TRAILS have been active, coordinating AI workshops and seminars on Capitol Hill, hosting a summer academy to empower future AI innovators, partnering with an immersive language museum to explore the use and efficacy of machine translation software, and much more.

“We continue to push forward in our second year, making new connections and working with diverse stakeholders whose voices previously went unheard as AI systems were designed, developed and deployed,” said Darren Cambridge, the managing director of TRAILS. “We’re listening and greatly value multiple viewpoints as we work toward building the next generation of AI tools and technologies.”

Ranging from $115,000 to $150,000 apiece, the five projects selected for the second round of TRAILS seed funding are:

Valerie Reyna from Cornell and David Broniatowski from GW are investigating how people interpret health-related misinformation and disinformation produced by AI systems, as well as the varying degrees of mistrust they bring to that information. At first glance, generative AI platforms like ChatGPT offer a compelling way to provide health advice at scale, in a form that is interactive and tailored to individuals. But little is known about the psychological mechanisms people apply to information drawn from these AI systems, especially when that information is false or misleading. Using behavioral and computational methods that shed light on human decision-making, the researchers will gauge people’s trust in health-related information generated by AI compared with the same information provided by humans.

Peng Wei from GW and Furong Huang from UMD will collaborate with a Federal Highway Administration lab to develop deep reinforcement learning algorithms, a form of AI that learns to make decisions through repeated trial and error, to improve the safety of autonomous vehicles. They plan to design robust reinforcement learning algorithms that adapt to multiple scenarios, from hands-off human “driver” behavior to varying traffic conditions. Ultimately, they expect their multimodal AI framework, which combines language and visualization, to increase trust among human drivers and passengers.
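To make the “trial and error” idea concrete, the following minimal Python sketch shows tabular Q-learning on an invented lane-keeping problem. The states, actions, rewards and dynamics are hypothetical placeholders for illustration only; they are not the driving framework the GW and UMD team is building.

# Illustrative only: a toy Q-learning loop showing the trial-and-error
# idea behind reinforcement learning. States, actions and rewards here
# are invented placeholders, not the team's actual driving framework.
import random

states = ["centered", "drifting_left", "drifting_right"]
actions = ["steer_left", "steer_right", "hold"]
Q = {(s, a): 0.0 for s in states for a in actions}

def step(state, action):
    """Hypothetical lane-keeping dynamics: reward staying centered."""
    if state == "centered":
        next_state = "centered" if action == "hold" else random.choice(states)
    elif state == "drifting_left":
        next_state = "centered" if action == "steer_right" else "drifting_left"
    else:  # drifting_right
        next_state = "centered" if action == "steer_left" else "drifting_right"
    reward = 1.0 if next_state == "centered" else -1.0
    return next_state, reward

alpha, gamma, epsilon = 0.1, 0.9, 0.2
state = "centered"
for _ in range(10_000):  # repeated trials
    # Explore occasionally; otherwise exploit the best-known action
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Update the value estimate from the observed outcome (the "error" part)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# After training, the learned policy recovers from a leftward drift
print(max(actions, key=lambda a: Q[("drifting_left", a)]))  # expect "steer_right"

Deep reinforcement learning replaces the lookup table above with a neural network so the same learn-from-outcomes loop can handle the high-dimensional sensor inputs and traffic conditions an autonomous vehicle actually faces.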

Martha James, Victoria Van Tassell, Valerie Riggs and Naja Mack from Morgan State and Jing Liu and Wei Ai from UMD are addressing disparities in PK–12 education that can be predicted by race and ZIP code. The researchers seek to expand best practices in teaching, which currently center on instruction in reading and math as critical to academic success. But mastery of reading and math does not constitute the whole of a well-rounded education, the researchers say, and they plan to adapt AI tools originally developed to support excellence in mathematics instruction so that they also cover the “encore” content areas of vocal music, visual arts and physical education.

Giovanni Luca Ciampaglia from UMD and Erica Gralla and David Broniatowski from GW are investigating the trustworthiness of AI-driven social media platforms used during crisis situations like a natural disaster or pandemic. They will examine the interplay between two key elements: the AI-based algorithms that dictate content visibility and the architectural frameworks that govern user interactions. The core of their work, the researchers say, is to develop a simulation model that evaluates how, in a crisis context, different classes of social media platforms (as defined by their algorithms and architectures) handle the spread of vital information while preventing the propagation of harmful content.
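The following short Python sketch illustrates, in the simplest possible terms, what such a simulation can compare: how two hypothetical ranking algorithms affect the share of attention captured by harmful posts. Every rule and parameter below is an invented placeholder, not the researchers’ model.

# Illustrative sketch only: a minimal agent-based model of how a platform's
# ranking algorithm can shape the spread of accurate vs. harmful posts
# during a crisis. All rules and parameters are hypothetical placeholders.
import random

random.seed(0)
N_USERS = 200
POSTS = [{"id": i, "harmful": random.random() < 0.3, "shares": 1} for i in range(50)]

def engagement_ranking(posts):
    """Rank purely by share count (an engagement-driven algorithm)."""
    return sorted(posts, key=lambda p: p["shares"], reverse=True)

def quality_ranking(posts):
    """Demote posts flagged as harmful (a moderation-aware algorithm)."""
    return sorted(posts, key=lambda p: (p["harmful"], -p["shares"]))

def simulate(rank, rounds=20, feed_size=5):
    posts = [dict(p) for p in POSTS]
    for _ in range(rounds):
        for _ in range(N_USERS):
            feed = rank(posts)[:feed_size]
            seen = random.choice(feed)
            # Hypothetical behavior: harmful posts get shared slightly more often
            share_prob = 0.35 if seen["harmful"] else 0.25
            if random.random() < share_prob:
                seen["shares"] += 1
    harmful = sum(p["shares"] for p in posts if p["harmful"])
    return harmful / sum(p["shares"] for p in posts)

print("harmful share, engagement ranking:", round(simulate(engagement_ranking), 3))
print("harmful share, quality ranking:   ", round(simulate(quality_ranking), 3))

Comparing the two printed fractions shows how a change in the ranking algorithm alone, with identical users and posts, can shift how much harmful content circulates; the researchers’ model will extend this kind of comparison to realistic platform architectures and crisis scenarios.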

Ryan Watkins, David Lippert and Zoe Szajnfarber, all from GW, are developing a pragmatic planning guide and toolkit for student/faculty project teams that use large language models like ChatGPT in a higher education setting. They will conduct a practical examination of the development processes used by teams working on LLM-based projects—a custom chatbot for a history course, for example—with a specific focus on teams without a strong computer science background. The researchers will rely on established protocols in trustworthy AI, such as those recently published by NIST, to help build a comprehensive toolkit aimed at enhancing the trustworthiness, security and openness of AI applications in academic settings.
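As a rough illustration of what one element of such a toolkit could look like, the Python sketch below encodes a planning checklist organized around the core functions of the NIST AI Risk Management Framework (Govern, Map, Measure, Manage). The checklist questions and class names are hypothetical examples, not the toolkit the GW team is building.

# Illustrative sketch only: a hypothetical planning checklist for a
# course chatbot project, grouped by NIST AI RMF core functions.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    function: str      # NIST AI RMF core function: Govern, Map, Measure, Manage
    question: str
    complete: bool = False

@dataclass
class ProjectPlan:
    name: str
    items: list = field(default_factory=list)

    def outstanding(self):
        """Return checklist items the team has not yet addressed."""
        return [i for i in self.items if not i.complete]

plan = ProjectPlan(
    name="History course chatbot",
    items=[
        ChecklistItem("Govern", "Who is accountable if the chatbot gives wrong answers?"),
        ChecklistItem("Map", "What student data does the chatbot collect or store?"),
        ChecklistItem("Measure", "How will answer accuracy be evaluated before release?"),
        ChecklistItem("Manage", "How are harmful outputs reported and fixed after launch?"),
    ],
)

for item in plan.outstanding():
    print(f"[{item.function}] {item.question}")

A structured artifact like this is one plausible way a non-computer-science team could document its trustworthiness decisions before and during development.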

