Trustworthy AI Resources

Expand your AI knowledge with diverse collections of books, free online courses, frameworks, and reports curated by TRAILS experts. Available anytime and anywhere, these handpicked resources let you learn from leading thinkers and integrate trustworthy AI practices into your own work. Whether you're a student, researcher, educator, lifelong learner, or simply someone curious about AI and its applications, our online library has something for you. Start exploring today! To read academic papers published by TRAILS researchers, see our Zotero library.

AI Bookshelf

Free Online Courses

  • Learn about the impact and ethical challenges of conversational/voice AI, while exploring the principles and frameworks needed to help avoid potential harm as humans interact with the technology.

    Learn More

  • Join instructor and responsible AI leader Elizabeth Adams as she explores the foundational principles of responsible AI and the crucial role of leadership in fostering trust, transparency, ethical decision-making, and more. Discover the positive impact of integrating responsible AI practices across an entire organization, promoting cohesion and inclusion among diverse sets of teams including product builders, data scientists, and marketing professionals. Along the way, find out how a responsible AI culture can cultivate a sense of unity and eliminate perceived silos. By the end of this course, you’ll be prepared to play your part as an AI leader who’s committed to both organizational success and societal well-being.

    Learn More

  • In this hands-on course, learn the fundamental concepts underlying database system design, covering not only the design of applications that use databases but also the implementation techniques used in database systems. Instructor Brandeis Marshall takes you through practical database design, implementation, and data querying, helping you determine when SQL querying is the appropriate tool. Learn how to develop effective SQL queries and leverage SQL as part of your responsible/ethical data practices. Brandeis walks you through the full arc of a database project and finishes the course with a project that lets you apply all you've learned.

    Learn More

  • Are you looking to broaden your SQL abilities while gaining practical experience? Want to master advanced SQL querying techniques beyond the basics? In this course, data scientist and data career coach Kedeisha Bryan teaches advanced SQL concepts through walkthrough exercises and real-world applications. Kedeisha shows you how to think in SQL rather than just memorize syntax. Learn advanced topics such as window functions, CTEs, subqueries, date/time manipulation, and more (a brief illustrative sketch of two of these constructs follows this list).

    Learn More
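
    The SQL topics above can feel abstract without an example, so here is a minimal illustrative sketch, not material from either course: it uses Python's built-in sqlite3 module and a hypothetical enrollments table to show a CTE combined with a window function, two of the constructs named above. Window functions require SQLite 3.25 or later, which ships with current Python releases.

        # Illustrative sketch only: hypothetical "enrollments" data, not course material.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE enrollments (student TEXT, course TEXT, score REAL);
            INSERT INTO enrollments VALUES
                ('Ada',  'Databases', 91.0),
                ('Ada',  'Ethics',    84.0),
                ('Bola', 'Databases', 78.0),
                ('Bola', 'Ethics',    88.0);
        """)

        # The CTE computes each course's average score; the window function then
        # ranks students within each course without collapsing rows the way a
        # plain GROUP BY would.
        query = """
        WITH course_avg AS (
            SELECT course, AVG(score) AS avg_score
            FROM enrollments
            GROUP BY course
        )
        SELECT e.student,
               e.course,
               e.score,
               ROUND(c.avg_score, 1) AS course_avg,
               RANK() OVER (PARTITION BY e.course ORDER BY e.score DESC) AS rank_in_course
        FROM enrollments AS e
        JOIN course_avg AS c USING (course)
        ORDER BY e.course, rank_in_course;
        """

        for row in conn.execute(query):
            print(row)

    Each printed row pairs a student's score with the course average and the student's rank within that course.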

Teaching/Education and AI

  • Guiding teachers on AI use and misuse in education: AI processes vast amounts of information, generates new content, and supports decision-making through predictive analyses. In education, AI has transformed the traditional teacher–student relationship into a teacher–AI–student dynamic.

    This shift requires a re-examination of teachers’ roles and the competencies they need in the AI era. Yet, few countries have defined these competencies or developed national programmes to train teachers in AI, leaving many educators without proper guidance.

    The AI competency framework for teachers addresses this gap by defining the knowledge, skills, and values teachers must master in the age of AI. Developed with principles of protecting teachers’ rights, enhancing human agency, and promoting sustainability, the publication outlines 15 competencies across five dimensions: Human-centred mindset, Ethics of AI, AI foundations and applications, AI pedagogy, and AI for professional learning. These competencies are categorized into three progression levels: Acquire, Deepen, and Create.

    As a global reference, the framework guides the development of national AI competency frameworks, informs teacher training programmes, and helps design assessment parameters. It also provides strategies for teachers to build AI knowledge, apply ethical principles, and support their professional growth. As of 2022, only seven countries had developed AI frameworks or programmes for teachers.

    Learn More

  • Artificial intelligence (AI) is increasingly integral to our lives, so education systems must proactively prepare students to be responsible users and co-creators of AI. Integrating AI learning objectives into official school curricula is crucial if students worldwide are to engage safely and meaningfully with AI.

    The UNESCO AI competency framework for students aims to help educators in this integration, outlining 12 competencies across four dimensions: Human-centered mindset, Ethics of AI, AI techniques and applications, and AI system design. These competencies span three progression levels: Understand, Apply, and Create. The framework details curricular goals and domain-specific pedagogical methodologies.

    Grounded in a vision of students as AI co-creators and responsible citizens, the framework emphasizes critical judgement of AI solutions, awareness of citizenship responsibilities in the era of AI, foundational AI knowledge for lifelong learning, and inclusive, sustainable AI design.

    Learn More

  • Educational outreach activities play a crucial role in maximizing the impact of AI teaching and trustworthy AI. These activities aim to extend the reach of these programs beyond their immediate participants and engage a wider audience. The Meharry School of Applied Computational Sciences' capacity-building initiative for TAIMS (Trustworthy AI in Medical Systems) is committed to broadening participation and engaging women, K-12 educators and students, and underrepresented communities in the field of AI and machine learning. To this end, Vibhuti Gupta, Ph.D., and his student Destiny Pounds developed a nine-lesson set of trustworthy AI teaching modules that introduces educators to different aspects of trustworthy AI through detailed explanations, examples, and hands-on exercises.

    Learn More

  • Compiled by the National Education Association (NEA), this report covers ideas that are essential to the question of AI in education, namely:

    1. Students and educators must remain at the center of education

    2. Evidence-based AI technology must enhance the educational experience

    3. Ethical development/use of AI technology and strong data protection practices

    4. Equitable access to and use of AI tools is ensured

    5. Ongoing education with and about AI: AI literacy and agency

    At the heart of all recommendations is the principle that humans must always be at the center of the teaching and learning experience and must play a significant role in every consequential education and employment decision.

    Learn More

Committees and Reports

  • The National AI Advisory Committee (NAIAC) consists of experts with a broad, interdisciplinary range of AI-relevant experience from across the private sector, academia, non-profits, and civil society. The NAIAC is tasked with advising the President and the National AI Initiative Office on topics related to AI, and it releases public reports on a semi-regular basis.

    Learn More

  • "Towards Effective Governance of Foundation Models and Generative AI" is a report sharing highlights and key recommendations from the fifth edition of The Athens Roundtable on AI, held on November 30 and December 1, 2023, in Washington, D.C. The event brought together over 1,150 attendees for a two-day dialogue on governance mechanisms for foundation models and generative AI. Participants were encouraged to generate innovative "institutional solutions" (binding regulations, inclusive policy, standards-development processes, and robust enforcement mechanisms) to align the development and deployment of AI systems with the rule of law.

    Read the Report

  • Generative AI models are capable of performing a wide range of tasks that traditionally require creativity and human understanding. They learn patterns from existing data during training and can then generate new content, such as text, images, and music, that follows these patterns. Their versatility and generally high-quality results make them an opportunity for digitalization, but they also introduce novel IT security risks that must be considered in any comprehensive analysis of the threat landscape. In response to this risk potential, companies and authorities should conduct their own risk analysis before integrating generative AI into their workflows. The same applies to developers and operators, since many risks must be taken into account at development time or can only be influenced by the operating company. On this basis, existing security measures can be adjusted and additional measures adopted.

    Read the Report

  • Assessing Risks and Impacts of AI (ARIA), the latest in a portfolio of evaluations managed by the NIST Information Technology Laboratory, will assess models and systems submitted by technology developers from around the world. ARIA is a sector- and task-agnostic evaluation environment that will support three evaluation levels: model testing, red-teaming, and field testing. It is unique in that it moves beyond an emphasis on system performance and accuracy to produce measurements of technical and contextual robustness. The program will result in guidelines, tools, methodologies, and metrics that organizations can use to evaluate their systems and inform decision-making about the positive or negative impacts of AI deployment. ARIA will inform the work of the U.S. AI Safety Institute at NIST.

    Learn More

  • To get better insight into how companies are doing with Responsible AI, PwC surveyed 1,001 US business and technology executives whose organizations use or intend to use AI. Most respondents (73%) say they use or plan to use both traditional forms of AI and GenAI. Of those, slightly more are focused on using the technologies solely for operational systems used by employees (AI: 40%; GenAI: 43%). A slightly smaller share of companies are targeting both employee and customer systems in their AI efforts (AI: 38%; GenAI: 35%).

    Learn More