TRAILSCon 2025 Examined Trustworthy AI at Work
TRAILS Co-PI David Broniatowski speaks at the conference’s closing discussion, as fellow panelist and Co-PI Susan Ariel Aaronson looks on. (William Atkins/GW Today)
Whether using machine learning to automate data entry or turning to ChatGPT to write an email, workers across a range of fields are integrating AI into their jobs. But how, exactly, are these AI systems being implemented? What guidelines are, or should be, in place? And how can we navigate the attendant challenges and opportunities to ensure that AI adoption promotes innovation without causing harm?
Experts in industry, policy and academia explored these questions, and more, at a two-day conference hosted last month at the George Washington University by the Institute for Trustworthy AI in Law & Society (TRAILS).
“AI at Work: Building and Evaluating Trust,” also known as TRAILSCon 2025, featured lively discussions between TRAILS researchers and experts from around the world, examining topics ranging from designing AI evaluation measures that users can understand and trust, to AI’s impact on the workforce, to expanding participation in the design, development and evaluation of AI systems.
The event was TRAILS’ first major onsite conference and a natural outgrowth of the institute’s mission of investigating what trust in AI looks like, creating technical AI solutions that build trust and determining which policy models effectively sustain trust.
“This is a moment of great change, but also great potential,” said GW President Ellen M. Granberg in opening remarks to the sold-out crowd of 350-plus participants. “Now more than ever, we need the experts that are gathered here to advance the important and collaborative conversations that will help us capitalize on the promise of AI and build systems that improve our processes and productivity and that are also safe and trustworthy.”
Launched in 2023 with a $20 million grant from the National Institute of Standards and Technology (NIST) and the National Science Foundation (NSF), TRAILS is a partnership among GW, the University of Maryland, Morgan State University and Cornell University.
Over the course of the two-day summit, attendees listened to conversations between thought leaders, held round table discussions and joined group workshops, with a focus on cross-disciplinary communication. Panels and workshops included “AI as Work,” led by Katie Shilton, a professor in the College of Information at the University of Maryland and a TRAILS Co-PI, and “Changing Education Practice to Support Learning About and With AI,” led by Virginia Byrne, an associate professor of higher education and student affairs at Morgan State University and a TRAILS Co-PI.
In one keynote panel moderated by Professor of Engineering Management and Systems Engineering David Broniatowski, a TRAILS co-principal investigator and the institute’s GW site lead, experts from Cornell, NIST, Microsoft and the Carnegie Endowment for International Peace (CEIP) discussed the importance of designing trustworthy, usable evaluation measures for AI systems.
Panelists noted that establishing consistent measurement benchmarks would not only enable researchers to usefully evaluate AI’s impact on a given information system, but also give the public a clearer understanding of what tasks these tools perform and how well they perform them.
In “What’s Next for AI at Work?,” the panel discussion that closed the two-day event, TRAILS Co-PI Susan Ariel Aaronson, a research professor in the Elliott School of International Affairs and director of the Digital Trade and Data Governance Hub, said she sees reason for hope in the frequent, high-level national and international conversations around AI-driven transformation of the workplace.
The ongoing effort “to try to think about what kind of world will we have for people who must earn a living” is unresolved, Aaronson said, but remains open in interesting ways, with new voices joining every day. “I’m usually the person who thinks about this in not the [most] optimistic terms, but in terms of the world of work, it does give me hope because we’re having this debate early on,” she added.
Broniatowski, also on the closing panel, offered his own take. “One of the big concerns around trust in AI is that if people don’t trust it … they’re just not going to adopt it,” he said. “They’re going to be so afraid of what it might do, or they might be afraid that others might impose it upon them.”
On the other hand, Broniatowski noted, if people do have trust, they can work with the developers of those tools to develop tools that are really fit to purpose. “And once you have that ability of tools to fit to purpose,” he said, “that’s when you really unlock all the potential innovations and achievements that AI can possibly bring us.”
—This news brief was adapted from an article in GW Today authored by Ruth Steinhardt