FOCUS ON PRIVACY - The deadline to weigh in is March 13
The Office of the Privacy Commissioner of Canada says that the Personal Information Protection and Electronic Documents Act is no match for artificial intelligence — and now the OPC needs help to reform the law.
A consultation ending March 13 puts forth a suite of legal changes to bolster privacy protection when it comes to AI. For example, the OPC proposes defining AI separately from general data processing. A global engineering group has said “autonomous and intelligent systems” is more precise than “AI,” while Europe’s GDPR uses “automatic decision-making and profiling.” The consultation also focuses on people’s rights, particularly the right to object to decisions that are solely automated.
The OPC’s consultation contains 11 proposals in total, including: giving the privacy commissioner the power to make binding orders and issue financial penalties; record-keeping requirements for organizations backed by third-party audits; flexibility to use de-identified data in AI systems; alternatives for when it’s not practical to get users’ meaningful consent to use their data; minimizing data collection; and other transparency-related changes.
“It is clear that AI provides for many beneficial uses. For example, AI has great potential in improving public and private services, and has helped spur new advances in the medical and energy sectors among others,” the OPC wrote when announcing the consultation. “However, the impacts to privacy, data protection and, by extension, human rights will be immense if clear rules are not enshrined in legislation.”
The OPC identified several reasons that AI poses a challenge to Canada’s existing privacy laws. For one, AI systems rely on large amounts of data to train and test algorithms, raising questions about whether limiting data collection will make the algorithms worse.
Another challenge for governing artificial intelligence is simply that people may not be able to foresee what AI systems will discover, the OPC’s announcement said. While, in principle, the law requires organizations to say why they are collecting data, the consultation asks whether this is practical.
Bennett Jones LLP lawyers reacted to the consultation, saying the proposals could be applied more broadly than AI alone and “may significantly impact how organizations conduct business, particularly with respect to the collection and use of personal information.” Miller Thomson LLP noted that the consultation implies there will be real consequences and enforcement for noncompliance with a reformed PIPEDA, and that the OPC “wants the philosophies of data protection and human rights by design to permeate Canadian privacy law.”
“The intersection between privacy rights and human rights, which are both quasi-constitutional rights in Canada, is becoming more noticeable with increasing technological developments - including the development and deployment of AI,” says J. Andrew Sprague, a lawyer at Miller Thomson LLP. “While the OPC does not have jurisdiction over human rights law in Canada, the OPC is the right voice to advocate for stronger privacy protections that will, if implemented by the government of Canada, not only provide Canadians with necessary and appropriate additional privacy protections but also equally important enhanced human rights protections.”
But, says Sprague, the issue of enforcement powers is not unique to artificial intelligence; the consultation is simply the latest in a series of calls by the OPC for stronger powers.
“The OPC may want to consider engaging the public and experts in a separate consultation that focuses solely on a discussion about enforcement powers,” he says. “[B]y making the reform of OPC enforcement powers the last of 11 proposals, there is a risk that the input the OPC may receive may not be as robust as the input the OPC might receive through a stand-alone enforcement powers consultation not tied to its AI consultation.”
With the public pushing for more regulation and some businesses resisting new rules, the OPC seems to be shifting its approach, says Laila Paszti of Norton Rose Fulbright Canada LLP. Traditionally, PIPEDA has been considered “principles based” and “technology neutral” — to allow for flexibility — but the proposed rules might prove more rigid, says Paszti, who was previously a machine learning engineer.
Paszti gives an example of how laws that aren’t technology neutral have struggled to keep up in the past. Europe’s GDPR, which distinguishes between data controllers and data processors, has been difficult to apply to blockchain, where control over data is decentralized, she says.
“Here, there were discussions about blockchain and discussions about autonomous vehicles, and there was this implied notion that there were no specific regulations required because of the technology-neutral stance that PIPEDA had,” she says.
“This consultation paper seems to move away from that technology-neutral stance and is actually thinking about regulation that's directly targeted to machine learning and deep learning systems. And the concern is that regulation is always not the best predictor of where technology will go. So, you're regulating at a certain point in time and as the technology moves, then you may have to evolve. Regulation may not catch up — or regulation may actually stymie innovation.”
Another issue raised by the consultation paper: enforcing a right for people to have AI systems explained could prove too vague, while requiring too much disclosure could impinge on intellectual property rights, she says.
“For example, in healthcare, the deep learning models can be very, very complex, but they're very good at, for instance, determining whether a tumor is cancerous or benign. But it may not be easy to understand why the machine learning system has made that determination,” she says. “Do we penalize that system because it's not explainable, even though it gives us very good predictions, and that's beneficial to society as a whole?”
In some sectors, explanations might be more important, while in others, precision might have more value, Paszti says; that distinction could be drawn through sector-specific legislation instead. However, the consultation paper doesn’t raise that option: it focuses on how explanations should be required, not whether they should be required at all, she notes.
On the positive side, Paszti says the consultation raises important points about when data becomes “anonymous” that could bring clarity to the machine learning industry.
“I think it's important for anyone who's a data custodian or who has data that may be used by these machine learning models . . . to have their voice heard,” she says.