What Will the EU AI Act Mean for Ed Tech in the U.S.?

Earlier this month, the European Union approved new regulations on artificial intelligence technology in a framework dubbed the EU AI Act, which seeks to control how the technology is developed and deployed amid growing concern about its risks and its expanding use across sectors like government, health care and education.

According to the EU’s website about the act, the new rules categorize AI systems based on their “risk” and prohibit AI practices and systems that pose “unacceptable risks.” These include biometric categorization systems that infer sensitive attributes like race, political opinions, union membership, religious or philosophical beliefs or sexual orientation, except in the case of “labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorizes biometric data.” The regulations also prohibit emotion-recognition technology in workplaces and educational institutions, except for medical or safety reasons, and note that “high-risk” AI systems, such as tools used in education for evaluation and admissions, must be designed to allow deployers to implement human oversight.

For American ed-tech companies, experts say, these new regulations could affect which tools they can market to EU clients, what they’re allowed to provide for foreign exchange students and, by extension, what they spend their resources developing. What’s more, the EU AI Act could eventually inspire domestic policy changes in the U.S.

Shaila Rana, an IT professor at Purdue University, said the act places particular emphasis on regulating AI tools used in education, and noted that companies and universities based in the U.S. that do business with clients in the EU will have to be mindful of the new regulations.

Similar to how U.S. tech companies and organizations have to comply with EU data privacy regulations when working with European clients, she said, entities will have to “think twice before they develop and deploy” AI systems in the EU marketplace. She added that she’s hopeful the new regulations will serve as a model for how to approach regulating the AI industry in the United States.

“It’s going to be similar to how the GDPR [General Data Protection Regulation, the EU’s data privacy law] is [for companies based in the United States]. Even if you don’t have any EU citizens’ data, eventually, if you want to expand to the EU, you’re going to have to be GDPR compliant,” she said. “In terms of ed tech, they’re going to need to follow these baselines in case that legal precedent comes to the U.S. or if they have any foreign exchange students from the EU. Even when it comes to AI research, now educational organizations are going to need to really think twice and consider regulatory requirements and obligations that are outlined in the act.”

Bernard Marr, a business and technology writer who has written extensively about AI for publications like The Guardian and The Wall Street Journal, noted in an email to Government Technology that the regulations include an outright ban on technologies that threaten individual safety and rights, including AI that manipulates the behavior of vulnerable groups like children, such as voice-activated toys that encourage harmful actions. He added that the EU’s regulations in general have a “strong focus on protecting vulnerable populations,” which could have implications for organizations and companies looking to deploy ed-tech tools in the EU market, particularly for K-12 students.

“Given the global operations of many U.S. firms, aligning with these international standards necessitates a fundamental shift towards embedding ethical AI practices and privacy-preserving technologies into their development processes,” he wrote. “This requirement to adapt underscores a broader trend where companies, regardless of their base of operations, must prioritize user safety and data protection, thereby reshaping the global tech development landscape to meet these comprehensive regulatory expectations.”

Marr said the EU AI Act’s influence will likely extend far beyond European borders, shaping how AI tools are designed and deployed worldwide in the years to come.

“There’s a strong possibility that the EU’s AI regulations could inspire similar policies in the U.S., especially as concerns around privacy, bias, and ethical use of AI continue to grow,” he wrote in an email. “The comprehensive and principled approach taken by the EU could serve as a model for U.S. policymakers, advocating for a balanced path that encourages innovation in the ed-tech space while addressing key ethical challenges.”

As for how the act could influence U.S. policymakers, Nazanin Andalibi, an assistant professor of information at the University of Michigan, said she hopes the EU regulations will inspire similar bans on emotion-recognition technology in American workplaces and schools. Based on her research into the technology’s possible adverse effects, she called the EU’s ban a smart move.

“I would love to see the U.S. moving in this direction,” she said. “The harms of emotion-recognition technologies to workers would include harms to qualities like privacy and well-being, as well as harm to workers’ performance and employment status [and concerns about] bias, discrimination and stigma in the workplace.”

Susan Ariel Aaronson, a research professor of international affairs at George Washington University, said she believes the regulations’ focus on high-risk AI tools is also a step in the right direction, but added that she would recommend requiring more transparency from tech developers about how AI tools work and what data they use.

“I think if you say how the model was built, with what data and how you got that data, that’s very important to understanding [AI] hallucinations and other problems with various LLM models,” she said of policy considerations moving forward.

This article was published by Government Technology.

