Panel to discuss role of litigation and regulation in managing justice system use of AI

The Law Commission of Ontario is in the midst of a multi-year law reform review of AI and automated decision-making

Ryan Fritsch, legal counsel with the Law Commission of Ontario

An upcoming Law Commission of Ontario event will look at the legal profession's role as society grapples with the rise of artificial intelligence in the justice system.

On Thursday, Dec. 9, the LCO will host a 90-minute Zoom discussion on how litigation and regulation can help protect rights affected by AI and automated decision-making (ADM). Superior Court Justice Jill Presser will moderate a panel on litigation, followed by a panel on regulation moderated by Nye Thomas, the LCO's executive director.

The LCO is engaged in a multi-year law reform review of AI's impact on the justice system and has published four papers on the topic in the last year. The project examines civil, administrative and criminal law, looking at how AI and ADM are being used in the employment and labour contexts, in government decision-making about access to social benefits, and in bail, sentencing and evidence generation, says Ryan Fritsch, legal counsel with the LCO.

With its panel, “AI Decision-Making: Protecting Rights Through Litigation and Regulation in Canada and the United States,” the LCO, says Fritsch, is inviting the profession to consider three issues: what is coming from AI and what its impact will be on civil, administrative and criminal law; how lawyers can help begin to regulate some of its anticipated impacts; and what Canada can learn from countries that are a little further ahead in the process.

“This panel is kind of unique,” he says. “Because we're actually bringing in some experts from the United States to sit with experts in Canada and discuss the U.S. experience, which is probably several years ahead of Canada in terms of AI and its impact on justice.”

Among the panellists is Martha Owen, a public- and private-sector labour litigator at the Texas firm Deats Durst & Owen, who has experience helping to rein in AI decision-making. Owen was lead counsel for the Houston teachers’ federation in its challenge of the school district’s AI-powered teacher-performance-assessment tool.

Because the school district refused to release the source code, there was no way to explain how the AI was making its decisions, says Fritsch. That meant no one could evaluate whether it was performing properly or fairly, he says. The litigation ultimately forced the school district to abandon the AI tool.

“It was making decisions about people's lives and careers, without really being able to explain and justify how it was doing it,” says Fritsch. “So they threw it out. It's one early example we had from a few years ago of the kinds of issues that arise with AI in terms of litigation.”

Also a panellist Thursday is Gerald Chan, a partner at Stockwoods LLP Barristers and, along with Justice Presser, co-editor of Litigating Artificial Intelligence. Chan says that, while litigation plays a valuable role in educating the public on AI’s pros and cons, government must pass regulations to deal with these systems.

“You can only accomplish so much on an ad hoc basis through litigation, especially bearing in mind the limitations on access to justice,” he says. “Litigating the fairness and reliability of AI systems is a complicated enterprise. You need access to experts, disclosure of the source code, disclosure of validation studies, etc. A lot of cases could fall through the cracks if we’re relying solely on litigation to regulate the use of AI systems.”

Artificial intelligence is already playing a role in Canadian civil and administrative law; one example is Immigration, Refugees and Citizenship Canada’s use of an automated routine to approve visitor visas, says Fritsch.

“That's an example of AI for good. They're using AI to expedite a positive decision… It's AI providing a more efficient service to approve more applicants and address the backlog.”

“Some other examples that we're aware of are a little more questionable,” he says.

The Canada Revenue Agency is using AI to analyze the employment status of income tax filers. The technology helps recommend whether someone is, for example, a contractor or an employee, a classification that determines certain taxes and write-offs, says Fritsch.

“A little bit more murky. Because that raises questions. What if you disagree with what the AI is recommending? How do you challenge that? How do you review the AI's decision? You can't really get an explanation out of the AI for how it made the decision.”

To ameliorate the lack of transparency, organizations are looking at methods such as keeping a “human in the loop,” says Fritsch. The idea is that having people take the AI’s advice, interpret it, and then make the ultimate decision would be an effective check and balance. But he adds that the human in the loop raises its own questions, including how that person is trained and whether, out of expediency, they will simply defer to the AI.

This is where litigation, regulation and the role of lawyers come into play, he says.

“What we're suggesting is that lawyers and the participation of the legal profession is also a really important human in the loop.”

The profession is needed not only to be a “person in the process” but to be well-trained and alert to flawed AI’s potential for bias and discrimination, to raise due process concerns, to explain how a decision is reached, and to ensure that decisions are based on evidence and that people have a right to challenge them, says Fritsch.
