Concerns surfacing over use of personal info in AI

As artificial intelligence develops and new approaches such as blockchain emerge to deal with huge amounts of data, lawyers say concerns are surfacing over the use and protection of individuals’ personal information. Imran Ahmad, who leads Miller Thomson LLP’s cybersecurity team, sees the potential for risk as he looks at the emergence and use of artificial intelligence.

“For AI to be effective or even for deep learning to be effective, you need a lot of data,” he says. 

“Those issues around the data sets — how they’re collected, what is collected, what scope of information we are collecting from an individual — all of that is unclear at this stage. That’s where some of the risk lies.”

The question is whether that information can be kept anonymous so no one is identified or whether it produces analysis with clearly identifiable information, says Molly Reynolds, a cybersecurity and privacy lawyer with Torys LLP in Toronto.

A scenario might involve feeding information into a machine that conducts high-level analysis of data on 30 people living in the same postal code, she says. If the data set is narrowed to one gender, any additional characteristics that are input might lead to the identification of an individual.
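To make the arithmetic behind that scenario concrete, here is a minimal sketch in Python (the records, attribute names and values are all invented for illustration and are not drawn from Reynolds’ example beyond the 30-person, single-postal-code cohort). Each attribute added to the analysis splits the cohort into smaller groups, and once a combination of attributes maps to a single person, the output effectively identifies them.

    from collections import Counter
    import random

    random.seed(0)

    # Invented cohort: 30 people sharing one postal code.
    records = [
        {
            "postal": "M5V",
            "gender": random.choice(["F", "M"]),
            "age_band": random.choice(["20-29", "30-39", "40-49"]),
            "occupation": random.choice(["lawyer", "teacher", "nurse", "engineer"]),
        }
        for _ in range(30)
    ]

    def smallest_group(rows, keys):
        """Size of the smallest set of people sharing the same values for `keys`."""
        groups = Counter(tuple(r[k] for k in keys) for r in rows)
        return min(groups.values())

    # Each added attribute shrinks the groups; a group of 1 is an identified person.
    for keys in (["postal"],
                 ["postal", "gender"],
                 ["postal", "gender", "age_band"],
                 ["postal", "gender", "age_band", "occupation"]):
        print(keys, "-> smallest group:", smallest_group(records, keys))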

“There’s huge privacy considerations when you’re designing these models,” she says, such as ensuring there is the right consent or legal basis for the machine-learning tool to use the information.

Reynolds says ensuring that information remains anonymous is key to protecting personal privacy, but she adds that anonymity isn’t guaranteed simply by combining information from several sources.

“I also think it’s going to be important for lawyers when we’re looking at contracts because we are representing an organization that wants to buy or license an AI tool,” she says.

“They need to know what the scope of their rights are and the contracts need to really specify whether personal information is going to be used, whether information is going to be anonymized, whether information is going to be aggregated, and all the parties really need to understand what that means in that particular implementation.”

The value for companies lies in that aggregate data, not the individual identifiers, adds Kirsten Thompson, who leads McCarthy Tétrault LLP’s national cybersecurity, privacy and data management group.

She says the challenge, however, is to ensure that the aggregated data set is not personal information. 
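One common safeguard, shown here as a hypothetical sketch (the article does not prescribe any particular method, and the minimum cell size below is an assumed policy), is small-cell suppression: aggregate counts are published only when enough people fall into a cell, so that no published number points back to an identifiable person.

    from collections import Counter

    # Invented records: (postal area, outcome) pairs.
    raw = ([("M5V", "approved")] * 8
           + [("M5V", "denied")] * 4
           + [("K1A", "denied")] * 1)  # a lone record: a count of 1 exposes a person

    MIN_CELL = 3  # assumed policy: counts below this are withheld

    counts = Counter(raw)
    published = {cell: (n if n >= MIN_CELL else "suppressed")
                 for cell, n in counts.items()}
    print(published)
    # {('M5V', 'approved'): 8, ('M5V', 'denied'): 4, ('K1A', 'denied'): 'suppressed'}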

Thompson says there are no standards around that yet in the business world like there are in the health-care field, where sensitive information is anonymized for uses such as research studies.

“Often in the private sector, people will strip out a name and think that the information is anonymized and that’s not actually the case,” she says.
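A minimal sketch of why stripping names falls short (every record below is invented, and the “public registry” stands in for any auxiliary source, such as a voters list): the quasi-identifiers that remain after the name is removed can still be joined against another data set that does carry names.

    # “De-identified” record: the name was stripped, but postal code,
    # date of birth and gender remain.
    deidentified = [
        {"postal": "M5V 2T6", "dob": "1980-04-12", "gender": "F", "diagnosis": "asthma"},
    ]

    # Auxiliary source pairing those same attributes with names.
    public_registry = [
        {"name": "Jane Doe", "postal": "M5V 2T6", "dob": "1980-04-12", "gender": "F"},
        {"name": "John Roe", "postal": "M4C 1A1", "dob": "1975-08-02", "gender": "M"},
    ]

    QUASI = ("postal", "dob", "gender")
    for rec in deidentified:
        matches = [p for p in public_registry
                   if all(p[k] == rec[k] for k in QUASI)]
        if len(matches) == 1:  # a unique join re-identifies the record
            print(matches[0]["name"], "->", rec["diagnosis"])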

Canada’s privacy legislation requires that companies destroy, erase or render anonymous personal information once it is no longer needed for the purposes for which it was collected.

The lack of clear standards laying out what is sufficient to make material anonymous is not only a risk to the individual and their information but also to the organization handling that data, she adds. The companies risk losing a great deal of data if they don’t adequately de-identify individuals.

The data can be kept for as long as the original business reasons for which it was collected exist, but organizations want to preserve what they’ve collected to create larger data sets, says Thompson. That leaves individuals with some uncertainty about what happens to their information over the long term, she says.

“What the companies who are engaging in this need to be very careful of from their own perspective is that they have the appropriate consent to do this before they start using this information . . . and they have to be certain that if they’re acquiring data that it’s been appropriately de-identified and adequate consents have been obtained and all the good things that go with the processing of that,” she says.

“There’s a risk if you don’t have proper indemnification, limitations of liability.”

Canada, Thompson says, lacks the European Union’s requirement for transparency around decisions made by machines. In Europe, an individual denied a loan or insurance following a decision made through AI has a right to know how that decision was made.

Although our privacy regime is different from that in Europe, Thompson says that approach is worth exploring, particularly since it’s clear these technologies are being increasingly adopted.

In addition to privacy concerns, Reynolds also sees the need to be wary of potential human rights and ethical issues. 

In collecting and analyzing data, developers need to be aware of how that information is being used and what trends are expected to emerge from it, to ensure no particular groups are discriminated against.

A risk factor could involve the use of an individual’s ethnicity to determine or predict behaviour.

“That discussion has to happen before we can get to contracting because, just like with privacy compliance, the organizations entering the contract need to know what representation they’re expecting or they’re giving on how they designed the program and what the outputs will look like and how the program has been designed to avoid making decisions based on bias or based on prohibited grounds,” says Reynolds.

She says she doesn’t believe specific legislation is necessary to address the concerns that AI, machine learning and the use of big data present. 

But Reynolds says current laws should be reviewed to determine whether they are technology neutral or whether there are aspects that need to be updated to accommodate these new uses.

“I don’t think it’s an area where the federal or provincial government should be legislating on the technology because that will change so quickly it won’t be able to keep up. I think the focus should really be on making sure all the other guiding laws are technology neutral,” she says.

Lyndsay Wasser, co-chairperson of McMillan LLP’s privacy and data protection group and its cybersecurity group, sees a dilemma for lawmakers in this area.

While there is a need for protection, she says, a big consideration has to be the impact that may have on business.

“On the one hand, we don’t want to dampen innovation, particularly when there are other countries that are less concerned about consumer protection, are charging forward and are going to end up the leaders,” she says.

“But on the other hand, there are the fears of some people at least about the ethical implications and the potential consequences of not regulating.”

And until a balance that addresses both those issues is struck, lawyers are interpreting how the general principles of privacy law can apply to some new technologies, she says.

Ahmad, too, says there is room for clarity.

“AI is here to stay, blockchain is here to stay. And it would be helpful for the regulators to provide specific guidance. The legislation is sufficiently flexible to address these issues today,” he says.