Many lawyers who might not consider their work “high tech” — in administrative, mergers and acquisitions, trusts, contracts, criminal and health law — may also need to step in to address human rights law considerations surrounding digital privacy and artificial intelligence, according to an expert panel held May 15 in Toronto.
Because artificial intelligence learns from feedback, it’s important for lawyers to get involved in the conversations around building AI as soon as possible, said Carole Piovesan, a Toronto-based lawyer who recently helped start a new tech-focused law firm.
Right now, these systems are “arguably unaccountable under the law,” she said at the event.
“It’s changing faster than I could have anticipated,” Piovesan said, adding later: “As lawyers, this is some of the most exciting work we could be working on.”
The three-hour seminar — a “primer on artificial intelligence in Ontario’s legal system” — was held at Osgoode Hall Law School at York University by the Law Commission of Ontario and Canadian technology company Element AI.
Richard Zuroff, Element AI’s director of AI advisory and enablement, said that while AI is frequently treated as interchangeable with automation, it actually allows for human augmentation. For instance, he said, a human can easily distinguish these two sentences: “The trophy won’t fit in the suitcase because it is too big” and “The trophy won’t fit in the suitcase because it is too small.” In the first sentence, “it” refers to the trophy; in the second, to the suitcase. Humans resolve that instantly, but the pattern is hard for a machine to figure out on its own.
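A small sketch helps show why such sentence pairs (known as Winograd schemas) trip up machines. The following Python illustration is our own, not anything presented at the panel: because the two sentences differ by only a single word, any resolver that relies purely on surface patterns must give the same answer to both, even though the correct referents differ.

```python
# A deliberately naive pronoun resolver: it links "it" to the nearest
# preceding noun, so it gives the same answer for both sentences even
# though the correct referents differ. Resolving the pair correctly
# requires world knowledge (what fits inside what), not pattern matching.
import difflib

sentences = [
    "The trophy won't fit in the suitcase because it is too big.",    # "it" = trophy
    "The trophy won't fit in the suitcase because it is too small.",  # "it" = suitcase
]

# The two sentences are lexically near-identical: only one word differs.
diff = [tok for tok in difflib.ndiff(sentences[0].split(), sentences[1].split())
        if tok.startswith(("+", "-"))]
print(diff)  # ['- big.', '+ small.']

def naive_resolver(sentence: str) -> str:
    """Link 'it' to the nearest preceding candidate noun."""
    nouns = [w.strip(".,") for w in sentence.lower().split()
             if w.strip(".,") in ("trophy", "suitcase")]
    return nouns[-1]

for s in sentences:
    print(s, "->", naive_resolver(s))  # both resolve to 'suitcase'
```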
Because new applications of personal data and artificial intelligence are evolving so quickly, it can be hard for companies to adequately disclose how people’s data is being used, said Philip Dawson, Element AI’s lead on public policy.
“It’s impractical for companies to come back to consumers time and again to get fresh consent” to use their data, Dawson said. Right now, the philosophy around businesses collecting data is “if you’re not satisfied with the terms of service, or privacy policy, you can shop somewhere else,” he said.
But with monolithic technology companies like Facebook and Google, consumers may not have many options, he said. Thus far, competition law and merger reviews haven’t taken privacy or consumer welfare into account when assessing product quality, or weighed the potential abuses of power that could come from a company amassing more data, Dawson said.
Then there’s the issue of which definition of human rights should be used by algorithms, said Nye Thomas, the executive director of the Law Commission of Ontario.
New privacy laws, like Europe’s General Data Protection Regulation, give consumers rights such as withdrawing consent for data usage. But even recent regulations are becoming dated compared to the pace of innovation in AI, Dawson said.
Dawson said the idea of data trusts — where a third party hosts exchanged data — is one possible solution, but it has received blowback recently in Canada. Google sister company Sidewalk Labs has considered data trusts for a Toronto-based “smart city” neighbourhood, but the project is the subject of a lawsuit by the Canadian Civil Liberties Association.
Lawyer Amy ter Haar said another way technology could protect data is through blockchain, which can support a “triple blind” exchange that reduces the risk of exposing information protected by solicitor-client privilege. This could work in areas like health law and smart contracts, she said.
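As a rough illustration of one building block such systems rely on (our sketch, not the specific “triple blind” design ter Haar described), a hash commitment lets parties anchor a privileged document to a shared ledger and later prove its integrity without the document itself ever being published:

```python
# A hash commitment: publish only a salted digest of a privileged document
# (e.g., to a shared ledger); later, anyone holding the document and salt
# can verify its integrity without the document ever being exposed.
import hashlib
import os

def commit(document: bytes) -> tuple[bytes, str]:
    """Return a random salt and the salted SHA-256 digest to publish."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + document).hexdigest()

def verify(document: bytes, salt: bytes, digest: str) -> bool:
    """Check a document and salt against the published digest."""
    return hashlib.sha256(salt + document).hexdigest() == digest

privileged = b"Client memo: settlement strategy ..."     # hypothetical content
salt, published_digest = commit(privileged)              # only the digest is shared
print(verify(privileged, salt, published_digest))        # True
print(verify(b"tampered memo", salt, published_digest))  # False
```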
Although changing the law is traditionally viewed as a “glacial” proposition, Piovesan said other governments are already looking to the way Canada is governing artificial intelligence. In February, the Canadian government updated a “Directive on Automated Decision-Making” to make sure artificial intelligence used by the government “is compatible with core administrative law principles such as transparency, accountability, legality, and procedural fairness.”
Immigration, Refugees and Citizenship Canada has been testing artificial intelligence to approve the more straightforward eligibility applications of travellers from China and India, said Patrick McEvenue, IRCC’s director of digital policy.
While artificial intelligence operates with only “remote” human action, it eventually uses its analysis of data to learn by itself and act in unconventional ways, Piovesan said, which can make it hard to know whom to hold accountable in a legal context.
She cited Samathur Li Kin-kan, a Hong Kong-based businessman who made headlines earlier this month for suing a salesman who encouraged him to invest using an algorithm that could learn on its own.
It raises the question, she said, “To what extent can you extend liability to the individual creator or trainer?”
Toronto criminal lawyer Jill Presser said that as more people are affected by artificial intelligence, “we need to know what decision was made against us, and how and/or why, in order to challenge it.”
When lawyers get that disclosure, they’ll need to know when to bring in an expert witness, she said. Knowing what will be admissible in court will be trickier if recent cuts to Legal Aid Ontario reduce funding for test cases, she said.
“I really think the better way to go is to regulate and legislate so we don’t need to litigate…. Although I am a litigator, I really see it as a final resort,” she said. “I think over time the law society is going to require that we acquire greater and greater technological literacy.”