Brauneis Leads Development of Database Tracking AI Litigation
Perhaps no area of law is growing as quickly as the one surrounding artificial intelligence (AI). Keeping up with developments in the field can be a challenge, but Robert Brauneis, the Michael J. McKeon Professor of Intellectual Property Law, is making it easier with a database dedicated to AI litigation.
Spearheaded by Brauneis, the online, searchable AI Litigation Database was created to help lawyers, scholars, journalists and others stay informed. It may also be useful to potential plaintiffs and defendants who want to research a specific question. Brauneis and the students in his course “Law in the Algorithmic Society” update the database as they learn of relevant cases.
“A couple of years ago, I was looking for some kind of resource that would track litigation involving artificial intelligence from the filing of the complaint onward,” Brauneis said. “Litigation is moving so fast that if you wait for a published decision to come out, you might be a year or two behind. That resource didn’t exist, so I decided to create it with the help of a couple of colleagues and many law students.”
Legal scholars and others familiar with databases such as those maintained by LexisNexis and Westlaw know that those services report opinions from cases that have already been decided. The AI Litigation Database, by contrast, tracks cases from the time they are filed.
Cases are searchable by keyword, by the jurisdiction in which they were filed and by area of application, among other criteria. Application areas include employment, intellectual property, facial recognition and many more.
AI is being used to perform an increasing number of tasks, from screening job seekers’ résumés to recommending whether criminal defendants should be granted bail. Because AI makes predictions about the future based on information gathered about the past, problems can arise when past decisions were discriminatory.
“Courts are using AI tools to score criminal defendants for how likely it is that they will show up for trial if they’re released, and how likely it is that they may commit another crime while they’re out on release,” Brauneis said. “There are advocates of using these tools, but there are people who are skeptical as well.”
When federal, state or local governments use AI, Brauneis added, it can be difficult for citizens to learn how decisions affecting their future are being made.
“Most governments don’t have the in-house talent to develop their AI tools, and so they rely on contractors,” Brauneis said. “Many contractors want to keep much of their AI as a trade secret so that they can sell it to many different customers. If you are a defendant in a criminal case, you may be deprived of your liberty before you’ve even been tried. And if a government is making that decision partly on the basis of an AI tool, but the tool has been developed by a private company that uses trade secrecy to justify not disclosing much about how they developed this AI, then you’ve got a real complaint that something’s being used against you that you can’t investigate.”
AI raises a variety of legal issues. Black individuals might be misidentified as persons wanted in connection with a crime because the AI tool used to identify them was trained mostly on white faces, Brauneis said. Intellectual property concerns arise when tools like ChatGPT or DALL-E are trained on huge numbers of works still under copyright. Autonomous vehicles guided by AI can crash and cause injuries.
Alumna Kamaram Munira, J.D. ’23, said she enjoyed the “Law in the Algorithmic Society” course, for which she wrote a paper on autonomous vehicle accidents.
“Traditionally, when we deal with a car crash, we presume that the driver was negligent in some way,” Munira said. “I was dealing with vehicles that were fully automated, so an AI machine was doing all the thinking. A machine is not a legal entity, and you can’t hold that machine responsible for a crash. I looked at various statutes dealing with fully automated cars. The laws in different states were all over the place—we don’t have a good legal standard yet for litigation like this. It will be interesting to see where the legal field goes in the future.”
Brauneis is grateful to his students, who help him enter data on AI-related cases. The site also includes a link allowing users to suggest cases for inclusion.
“This is an exciting, cutting-edge area where we know new law is being made and is going to continue to be made over the next decades,” Brauneis said. “And it’s important to have a tool that allows you to see that litigation happening in real time from complaint forward. And that’s what we’re trying to provide.”
The database grew out of the Ethical Tech Initiative (ETI), a collaboration among GW experts in law, engineering, computer science, media and public affairs to address the impacts of digital technology. ETI is co-directed by Brauneis and Dawn Nunziato, the William Wallace Kirkpatrick Research Professor at GW Law.
This article was published by GW Today.