AI needs debate about potential bias

Artificial intelligence — or, at a high level, computer systems that are self-learning and self-executing — is introducing profound efficiencies to the legal industry by automating an increasing number of tasks traditionally performed by legal professionals.

In addition to being applied in the daily practice of law — due diligence, case law research and document review, to name a few examples — AI is also being used to assist with judicial decision-making.

A good example of this is in criminal law. In certain jurisdictions across the U.S., judges are employing AI to conduct risk assessments of defendants in sentencing and parole decisions.

These AI-based assessments analyze a comprehensive set of data (such as the defendant’s age, prior criminal record, postal code and employment history) to predict recidivism rates. The idea, in part, is that AI systems can promote fairness by reducing the influence of human bias.
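
To make this concrete, the sketch below shows how such a risk-assessment model might be built as a supervised classifier over historical records. It is a minimal illustration only: the file name, feature columns and choice of scikit-learn's logistic regression are assumptions for exposition, not any vendor's actual design.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical records: one row per past defendant, with a
# label recording whether that person reoffended within two years.
df = pd.read_csv("historical_defendants.csv")
features = df[["age", "prior_convictions", "employed", "postal_code_risk"]]
labels = df["reoffended_within_two_years"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

# A plain classifier stands in for the (typically proprietary) scoring
# model. It learns whatever correlations exist in the historical data,
# including any that encode past enforcement bias.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The "risk score" for a new defendant is the predicted probability of
# reoffending, often bucketed into low, medium and high bands.
risk_scores = model.predict_proba(X_test)[:, 1]
```

The sketch also makes the bias concern tangible: if the arrests and convictions in the training data were skewed against a group, the fitted weights reproduce that skew in every future score.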

However, as Kathryn Hume, vice president of product and strategy for integrate.ai, recently stated, “[A]s algorithms play an increasingly widespread role in society, automating — or at least influencing — decisions that impact whether someone gets a job or how someone perceives her identity, some researchers and product developers are raising alarms that data-powered products are not nearly as neutral as scientific rhetoric leads us to believe.”

Hume is reflecting upon two issues associated with the mainstream use of AI.

First is the issue of bias. The output of an AI system depends on the data set available to and analyzed by that system. Gaps or skews in that data set will be reflected in the system’s decisions.

Second is the issue of transparency. Whether algorithms should be transparent, meaning their analytical process can be explained and scrutinized, is a contested question. Many of these systems are proprietary and protected as trade secrets.

In addition, some of these systems are so complex that even open ones can be next to impossible to understand, in some cases requiring another AI to interpret the analytical process.

The well-known Wisconsin case of State v. Loomis illustrates the practical effects of both issues. Eric Loomis, 35, was arrested for his involvement in a drive-by shooting.

At the sentencing hearing, presiding Judge Scott Horne rejected a plea deal and sentenced Loomis to six years’ imprisonment, explaining that “[t]he risk assessment tools that have been utilized suggest that you’re extremely high risk to reoffend.”

The tool referred to by Horne is a system called COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions. It is used in several U.S. jurisdictions to predict pre-trial recidivism, general recidivism and violent recidivism using individual and group data.

Loomis challenged the court’s reliance on COMPAS on the basis that, among other reasons, the proprietary software violated his right to due process.

On appeal, the Wisconsin Supreme Court upheld the sentencing court’s reliance on COMPAS, reasoning that the system is simply one tool available to the judge in determining an appropriate sentence and that human judgment is still applied to the final decision.

However, an in-depth study of the COMPAS system conducted by ProPublica, a non-profit investigative newsroom, raised concerns about potential bias in the system’s outputs. ProPublica reported that many of the questions asked by COMPAS can act as statistical proxies for membership in a particular group.

ProPublica also ran its own detailed statistical test to isolate the effect of race from criminal history and recidivism. Controlling for prior crimes, future recidivism, age and gender, it found that African-American defendants were 77 per cent more likely than white defendants to be assigned higher risk scores.
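
ProPublica has described its test as a logistic regression over these covariates, with the reported percentages derived from odds ratios. A minimal sketch of that kind of analysis, assuming a hypothetical file and column layout rather than any published schema, might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of scored defendants; the column names here are
# illustrative assumptions.
df = pd.read_csv("compas_scores.csv")

# Binary outcome: 1 if the tool assigned a medium or high risk score.
df["high_score"] = (df["decile_score"] >= 5).astype(int)

# Logistic regression controlling for prior crimes, future recidivism,
# age and gender, so the race term isolates its marginal association
# with receiving a higher score.
model = smf.logit(
    "high_score ~ C(race) + priors_count + two_year_recid + age + C(sex)",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios: a value of roughly 1.77
# on the race term is what "77 per cent more likely" corresponds to.
print(np.exp(model.params))
```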

The study also found that female defendants were 19.4 per cent more likely than men to receive a higher score.

The Loomis decision also highlights the importance of transparency. ProPublica’s methodology had to be extensive precisely because the COMPAS system itself is protected. Algorithmic transparency in the judicial context should be given greater weight for at least two reasons.

First, it permits meaningful review of a system’s decisions. A judge’s reasons must permit meaningful appellate review and must explain how and why a judge came to a particular decision. Where the output of an AI system is a key feature in a judge’s decision, the algorithmic scoring process should be open to court challenge.

Second, it enables public trust in the administration of justice.

AI systems can have a positive impact on society. Staying with the risk-assessment example, AI-based systems are being credited with slowing the growth of Virginia’s prison population to five per cent from 31 per cent over a decade.

A better understanding of the outputs of AI systems in judicial decision-making can reduce skepticism and promote a perception of fairness.

AI is expected to assist lawyers and judges to focus on high-value work instead of more routine tasks and matters. But we need a broader discussion about the standards that should be associated with the transparency of these systems, particularly when used in the context of judicial decision-making.

Developers, policy-makers, lawyers, academics and industry each approach these issues from a different perspective.

Canada, as a leader in AI research and innovation, should also lead in tackling some of the thorny ethical and legal issues associated with incorporating AI into the law.

Carole Piovesan is a litigator at McCarthy Tétrault LLP. She is the firm lead on AI for the cybersecurity, privacy and data management group and co-author of the firm’s paper: “From Chatbots to Self-Driving Cars: The Legal Risks of Adopting Artificial Intelligence in Your Business.”
