Making one person accountable when implementing AI in the workplace is vital: employment lawyer

Uses include application assessments and performance reviews


When implementing artificial intelligence systems in the workplace, organizations should assign a point-person for its integration, says employment lawyer Matt Chapman.

Workplaces use AI for hiring, performance reviews, workplace surveillance, and to enhance health and safety, among other functions. Chapman is a partner at Greenwood Law and represents employees and employers. He spoke with Canadian Lawyer about the legal implications of AI use in the workplace.

“More importantly than anything else, they need to decide who's going to be accountable within the organization. If it's not expressly clear who's in charge of that, no one is.”

Chapman recommends that an organization implementing AI clearly designate someone with the authority to investigate, review, and present findings on the process. Unless one person holds the whole file, it is easy to focus on the technology's benefits while no one pays attention to the risks, the privacy implications, or the transparency of data collection.

Efficiency in hiring has been among AI's most immediate benefits, says Chapman. Rather than sifting through mountains of resumes for potential candidates, managers can use new tools to identify the best ones. A recent IBM survey of 8,500 IT professionals around the world found that 42 percent of companies use AI screening in hiring. While the technology has existed in some form for a while, he says it is getting better.

Employers also use AI tools for performance reviews and to manage struggling employees. The technology can identify the aspects of an employee's work that need improvement, giving them specific areas to focus on.

One area where Chapman sees much potential for AI is occupational health and safety. AI can assess health risks, monitor workplace conditions for safety hazards, and calculate the risk of accidents or diseases spreading in the workplace. Long-haul semi-trucks are equipped with tools that monitor the driver’s operation of the vehicle to determine whether they are too tired to be on the road. AI can assess workplace video surveillance to spot erratic behaviour and intoxication or recognize health and safety risks associated with the organization of the facility.

While the tools can make the workplace safer, the other side of that coin is the employee providing the employer with a massive amount of sensitive data, says Chapman. This could include details about their physical health that they would not have necessarily wanted disclosed to their employer. Performance analysis may also reveal a health issue.

He says there is also the risk of discrimination, especially in the hiring process. If the tool surfaces competent candidates and saves significant time and money, management may overlook how else the software is analyzing the data. AI may learn to prefer certain characteristics over others and, in doing so, discriminate against entire classes of people, creating a significant legal risk of which the company may be unaware for years.

“There's also the potential for intentional discrimination. There are plenty of organizations out there that do not want to see unionization within their workplaces. One real area of worry is probably going to be within some of the unions if an employer uses their AI to actively try and discriminate against people who may be pro-union.”

As part of Ontario’s Working for Workers Four Act, introduced late last year, the province proposed that employers should be required to disclose in public job postings if they use AI in the hiring process.

The most significant aspect of AI's legal implications, says Chapman, is the question of how to regulate something that is constantly changing and that most people do not understand.

“We're still kind of in the wild west stage of AI regulation in Canada.”

Canada has taken a stab at it. In November 2021, the federal Liberals proposed the Artificial Intelligence and Data Act (AIDA), part of Bill C-27, the Digital Charter Implementation Act, 2022. The legislation has passed first and second reading and is now before the Standing Committee on Industry and Technology in the House of Commons.

However, AIDA reached Parliament shortly before the launch of ChatGPT and other large language models, and Chapman says the federal government has had to revisit it.

The Ontario government has assembled the Ontario Trustworthy Artificial Intelligence (AI) Framework to design ground rules for responsible and safe AI use in the public sector. While disclosure helps promote transparency, says Chapman, it is not likely to have an impact on people who need a job, nor does it put any obligations on the employer to protect the data.

“Attempting to regulate in a vacuum when this is constantly developing technology… that's going to be, legally, really, really, really difficult to do.”

“AI can intersect with so many different parts of our lives. Most of those are going to be innocuous. It's going to be beneficial in a lot of ways… But there are lots of areas for potential abuse, risk, and unexpected consequences, and trying to regulate that is going to be very difficult.”

Employment law legislation tends to be “reactionary” and “remedial” because it is designed to prevent things that have already been identified as problems, he says. “So, we're probably not going to get a lot of meaningful AI legislation until we see things go wrong, unfortunately.”