The Ontario Human Rights Commission has been working with community groups to assess the impacts of artificial intelligence in the criminal justice system, according to a new report.
The report, “Together as one: 2018 community engagement report,” was released on June 18, along with the organization’s annual report.
The research into artificial intelligence is part of the OHRC’s goal of developing “detailed policy guidance” to prevent racial profiling. According to the report, the OHRC specifically asked community groups to weigh in on two issues in relation to racial profiling: “under-policing as a type of racial discrimination experienced by Indigenous people and racialized people living in certain neighbourhoods, and the use of artificial intelligence to augment or replace human judgment in policing.”
Unfortunately, the report said, there is little research on the use of artificial intelligence in policing in Canada, compared to the U.S., where author Simone Browne has raised the profile of the issue through books such as “Dark Matters: On the Surveillance of Blackness.”
The OHRC is not the only body in Canada trying to unpack the legal implications of AI. The OHRC’s report noted that organizations such as the John Howard Society may also be examining the issue.
In March, an event at the Law Society of Ontario touched on a U.S. case, State v. Loomis, in which the defendant claimed that the use of a risk-assessment tool called COMPAS violated his right to due process. The panel also discussed Ewert v. Canada, 2018 SCC 30, where the appellant claimed that a series of tools used by Correctional Service Canada to assess the risk of recidivism were “developed and tested on predominantly non-Indigenous populations and that there was no research confirming that they were valid when applied to Indigenous persons.”
“When you check the data and you analyze on a certain population that is not the entire population, certainly, the bias is there,” said artificial intelligence expert To Anh Tran at the event.
In May, Patrick McEvenue, director of digital policy at Immigration, Refugees and Citizenship Canada, said the IRCC has been testing artificial intelligence to approve the more straightforward eligibility applications of travellers from China and India.
It’s an issue that’s likely to become more relevant in law over time, the Law Commission of Ontario suggested earlier this year.
“There has been significant growth in the use of automated decision-making in the US criminal justice system. Early experiences in Canada seem to be following similar trend lines... ADM systems – which may include the use of algorithms, machine learning, and artificial intelligence systems – are being used or proposed for use in areas as diverse as immigration and refugee proceedings, police profiling, and to determine sentencing, bail and parole conditions,” the LCO said in a document. “What’s notable about these examples is that they are the areas of greatest concern to access to justice advocates: ‘poverty law’, human rights law, child welfare law, criminal law, and refugee/immigration law. Importantly, this is an early list of potential applications. Critically, there is no legal framework in Canada to guide the use of these technologies or their intersection with foundational rights related to due process, administrative fairness, human rights, and justice system transparency.”
The new report from the OHRC said that community groups suggested working on the issue of racial profiling through under-policing with legal clinics, law and criminology professors and students, and ethnocultural lawyers’ associations.
“[T]he Human Rights Legal Support Centre and specialty legal clinics might be able to share some information about accounts of under-policing without breaching confidentiality,” the report said.