The Institute for Trustworthy AI in Law & Society (TRAILS) is the first organization to integrate participation, technology and governance across the design, development, deployment and oversight of AI systems. We investigate what trust in AI looks like, how to create technical AI solutions that build trust, and which policy models are effective in sustaining trust.

TRAILS is a partnership between the University of Maryland, George Washington University, Morgan State University and Cornell University.

Funded by a $20 million award from the National Science Foundation and the National Institute of Standards and Technology, the institute focuses on transforming the practice of AI from one driven primarily by technological innovation to one driven by ethics, human rights, and the input and feedback of communities whose voices have previously been marginalized.

In addition to UMD, GW, Cornell and Morgan State, TRAILS includes participation from private-sector organizations such as the DataedX Group, Planet Word, Arthur AI, Checkstep, FinRegLab and Techstars.

Our Mission

In the U.S. and internationally, many organizations aim to encourage trustworthy artificial intelligence systems—iterations of AI that users, developers, and deployers see as accountable, responsible, and unbiased. However, the researchers at TRAILS believe that there can be no trust or accountability in AI systems without the participation of diverse stakeholders.

Our Strategy

TRAILS researchers will work to ensure that future AI systems enhance human capacity, respect human dignity, and protect human rights by:

  • Developing new methods that promote AI trustworthiness.

  • Empowering users to make sense of AI systems.

  • Analyzing and promoting inclusive governance strategies to build trust and accountability in AI systems.

  • Training a multidisciplinary next generation of talent.

  • Centering voices that have been marginalized in mainstream AI.