AI is transforming business and the workplace, and HR has a critical role to play as an advisor to the business when it comes to implementing AI or technologies that include AI functionality. Beyond HR, AI is finding a place in many different job functions across organizations. HR therefore plays a dual role when it comes to AI: (i) understanding and implementing AI tools to assist with HR functions, and (ii) helping the business understand and navigate the issues that come from broader use of AI across the organization.
This post focuses on the legal issues related to AI. To learn more about the general risks and benefits of AI, read our earlier post.
Understanding the Legal Issues
The use of AI in the workplace triggers a number of legal considerations. The key issues are:
- Privacy
- Misstatements or errors
- Bias
- Complying with AI regulation
- Understanding how AI impacts other legal obligations
Privacy
Artificial intelligence technologies must be trained, and they are trained through use. The terms of use for many free or open source AI products state that the provider may use the data that you input to improve their models. For example, OpenAI’s terms of use state that “[w]hen you use our non-API consumer services ChatGPT or DALL-E, we may use the data you provide us to improve our models.” There are steps you can take to switch off training, but this may limit the use of the tool.
This means that you may be handing over your data to a service provider. If you have included any confidential or sensitive information in a query, that information may no longer be private. It is important to carefully read the terms of use for any AI tool that you employ, and to ensure that any information that you share remains your property, remains confidential and is appropriately protected.
When using any AI platform for work purposes, the terms of use should be reviewed by your internal IT security team and/or your lawyer. You should also ensure that you have policies in place so that your employees clearly understand when they are allowed to use AI tools and the scope of permitted use (as well as any other requirements that must be met for use).
Misstatements or Errors
There is also a risk of inaccuracies or complete fabrication with AI, which creates potential legal liability. A recent case involving a major airline highlights this issue. The airline used an AI support chatbot on its website. The chatbot allowed customers to ask questions and receive answers. In this case, the customer needed to purchase a ticket to attend a family member’s funeral and asked the chatbot how to obtain the bereavement rate for their ticket.
The chatbot advised “If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.” Based on this information, the customer purchased a full price ticket and submitted the refund application after their return from the funeral.
The airline denied the request, stating that the bereavement rate could not be obtained after travel was completed. The customer sued the airline for the difference between the full price they paid for the ticket and the bereavement rate, relying on the information provided by the chatbot.
The airline argued that it should not be liable for the misleading words of its chatbot and that the correct information was posted elsewhere on its website. The adjudicator rejected this argument, finding that the airline did not take reasonable care to ensure its chatbot was accurate and also stating that there was no reason for the customer to know that one section of the airline’s website was accurate and another section was not.
This is an important case to keep in mind. While the law will likely continue to develop, if you implement tools that employ AI, you should expect to be required to take reasonable care to ensure their accuracy. Many HR tools are incorporating chatbots. While chatbots are efficient tools that allow employees to ask questions and quickly find information (such as policies or benefits information), you need to take reasonable steps to ensure that the information provided by the chatbot or other AI is correct.
Bias
Bias is a recognized issue in AI applications. An article by IBM notes that “[e]xamples of AI bias in the real world show us that when discriminatory data and algorithms are baked into AI models, the models deploy biases at scale and amplify the resulting negative effects.” Biases can be incorporated into AI models in a number of ways, including through training data, algorithm design, and the cognitive biases of the people who build and use them.
It is important to recognize that AI tools may have inherent biases. The IBM article includes a number of real examples of bias in AI processing, one of which is directly applicable to HR. The article notes that “issues with natural language processing algorithms can produce biased results within applicant tracking systems. For example, Amazon stopped using a hiring algorithm after finding it favored applicants based on words like ‘executed’ or ‘captured,’ which were more commonly found on men’s resumes.”
This kind of bias may amount to a human rights violation. In the Amazon example above, the employer was effectively discriminating against women, which is a violation of human rights legislation in every Canadian jurisdiction.
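To make the mechanics concrete, here is a minimal Python sketch of how a keyword-weighted resume screener trained on skewed historical hiring data can reproduce that bias. The keywords, weights and resumes are entirely hypothetical and are not drawn from any vendor’s actual system; the point is simply that a model that learns from biased past decisions can end up penalizing proxies for a protected group.

```python
# Toy illustration only: a naive keyword-weighted resume screener.
# All keywords, weights and resumes below are hypothetical.

# Imagine these weights were "learned" from past hiring outcomes in
# which successful candidates disproportionately used certain words.
LEARNED_WEIGHTS = {
    "executed": 2.0,   # more common on past hires' (mostly men's) resumes
    "captured": 1.5,
    "women's": -2.0,   # a proxy term the model learned to penalize
}

def score_resume(text: str) -> float:
    """Sum the learned weight of each keyword found in the resume text."""
    words = text.lower().split()
    return sum(w for kw, w in LEARNED_WEIGHTS.items() if kw in words)

resume_a = "Executed product launches and captured new market segments"
resume_b = "Led the women's engineering society and launched new products"

print(score_resume(resume_a))  # 3.5  -> ranked higher
print(score_resume(resume_b))  # -2.0 -> ranked lower due to a proxy term
```

Note that the fix is rarely as simple as deleting obviously gendered words: models can latch onto subtler proxies, which is one reason the bias-assessment and monitoring obligations discussed below matter.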
Complying with AI Regulation
Governments are recognizing the risks of AI and they are beginning to introduce legislation to address some of those risks.
In Canada, the federal government’s Bill C-27 would create a new Act called the Consumer Privacy Protection Act (the “CPPA”) (see our previous post for more on Bill C-27). If passed, the CPPA would include provisions dealing with “automated decision systems”.
Automated decision systems are defined as “any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based program, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique”.
The requirements regarding automated decision systems would apply broadly, but there are specific considerations in the employment context. For example, if a federally regulated employer uses automated decision systems in its hiring and HR practices (such as resume screening or ranking programs, aptitude tests or personality tests that meet the above definition), it would need to comply with the following requirements in the CPPA:
- Plain Language: The organization must publish in plain language a general account of its use of any automated decision system to make predictions, recommendations or decisions about individuals that could have a significant impact on them; and
- Response to Request for Explanation: If an individual makes a request, the organization must provide the individual with an explanation of the prediction, recommendation or decision and the explanation must include the type of personal information that was used, the source of the information and the reasons or principal factors that led to the prediction, recommendation or decision.
Bill C-27 would also introduce a new Act, the Artificial Intelligence and Data Act (the “AIDA”). The AIDA would apply to all businesses, not just those that are federally regulated, and it would impose even greater obligations in respect of high-impact artificial intelligence systems. Much of the detail related to this legislation is still to be determined by regulation (assuming the AIDA comes into force), including what is meant by a “high-impact system”. Based on what we see in other jurisdictions, however, it is likely that systems used in employment decisions would be considered high-impact.
At a high level, the AIDA would require:
- Assessments to determine whether a system is a high-impact system
- Establishing measures to assess and mitigate the risks of harm or biased output
- Monitoring those mitigation efforts
- Keeping records related to these obligations
The Ontario government also introduced Bill 149, which includes a requirement that employers inform job seekers when AI is used to inform decisions in the hiring process. Employers must include a statement disclosing any use of artificial intelligence to screen, assess or select applicants for a position in publicly advertised job postings, subject to exceptions to be determined. Bill 149 has been passed, but the requirement to disclose the use of AI systems in job postings is not yet in force (and no in-force date has been announced).
Finally, Quebec is currently the only Canadian jurisdiction where legislation related to AI use is in force. Section 12.1 of Quebec’s Act Respecting the Protection of Personal Information in the Private Sector provides that if a decision about a person is made based exclusively on automated processing, you must inform the person of this. If the person requests, you must also tell them:
- the personal information used to render the decision;
- the reasons and principal factors that led to the decision; and
- that they have a right to have the personal information corrected.
AI’s Impact on Other Obligations
AI may also impact your existing legal obligations. While there may be others, three examples highlight the issue:
- Workplace Investigations
- Unionized Workplaces
- Human Rights
First, under health and safety legislation, employers have an obligation to investigate harassment or violence complaints, and there may be other triggers for investigations as well. The rise of AI and its accessibility may mean that you can no longer believe what you see: AI is now widely used to create convincing fake images, videos and voices. As an HR professional, you need to be mindful of this technology when conducting investigations. Be aware of AI and how it may be used, and remain appropriately skeptical of the evidence you receive.
Second, the labour relations legislation in BC, Manitoba, New Brunswick, Quebec, Saskatchewan and the federal jurisdiction includes provisions requiring an employer to provide the union with notice of a technological change that will impact a significant number of employees. The exact requirements vary by jurisdiction, but it is important to be aware of this obligation and consider its application when introducing AI as a tool in the workplace.
Finally, employers in every Canadian jurisdiction are required to comply with human rights legislation, which includes ensuring that they do not discriminate against employees or prospective employees based on a prohibited ground. The biases that may be inherent in some AI tools could result in a breach of an employer’s human rights obligations and could lead to a human rights complaint.
How Compliance Works Helps HR Professionals
AI is advancing rapidly, and governments are still determining how best to regulate it. We can expect to see new legislation across all Canadian jurisdictions as governments address the issues related to the use of AI, particularly in areas where it has the potential to significantly impact individuals, with employment being a key area of concern. Compliance Works helps HR professionals stay on top of their changing obligations and ensures that they are aware of upcoming legal changes, usually within 24 hours of the changes being introduced.

Contact us to Request a Demo or email us at info@complianceworks.ca to learn how a subscription to Compliance Works can help your HR team succeed.