
AI and HR – Risks and Legal Obligations


AI is probably the hottest topic of the moment. Here’s something to think about, though: recent research shows that AI chatbots make up information as often as 27% of the time. What does that mean for HR? As you think about AI and its place in HR, you need to be aware of its risks and shortcomings.

There is an understandable rush to think about how AI can be used across all facets of business and industry, and HR is no exception. But the process must be thoughtful, and the technology must be embraced with eyes wide open, understanding both the benefits and the risks.

What is AI?

Artificial Intelligence was described in a recent Forbes article as “…a branch of computer science which performs tasks with the help of smart machines and various applications, with or without human cognitive functions or supervisions — such as interpreting speech, playing games and identifying patterns”. It can be incredibly complex.

We are already used to many forms of AI; common examples include Siri, Alexa, chatbots and even Google Maps. In the HR context, artificial intelligence was already being used in resume screening tools and in some personality or aptitude assessments, and its use is growing. It now appears in technology covering the full range of HR functions: recruiting, onboarding, talent management, task management, scheduling, and employee performance and engagement.

Risks

AI is not without its risks, and it is important to understand them before deciding whether, when and how to use these tools. There are two key risks to be aware of: privacy and inaccuracy.

Privacy

Artificial intelligence technologies must be trained, and they are trained through use. The terms of use for many free or open source AI products state that the provider may use the data that you input to improve their models. For example, OpenAI’s terms of use state that “[w]hen you use our non-API consumer services ChatGPT or DALL-E, we may use the data you provide us to improve our models.” There are steps you can take to switch off training, but this may limit the use of the tool.

This means that you may be handing over your data to a service provider. If you have included any confidential or sensitive information in a query, that information may no longer be private. It is important to carefully read the terms of use for any AI tool that you employ, and to ensure that any information that you share remains your property, remains confidential and is appropriately protected.
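If you do experiment with general-purpose chatbots, one practical safeguard is to strip obvious identifiers from a query before it leaves your systems. The Python sketch below is our own minimal illustration, not a complete scrubber: the patterns cover only a few common identifier formats, and a real deployment would need a far more thorough redaction step.

```python
import re

# Illustrative patterns only; a real PII scrubber needs far more coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}")
SIN = re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b")  # Canadian SIN-style numbers

def redact(query: str) -> str:
    """Replace common personal identifiers with placeholders."""
    query = EMAIL.sub("[EMAIL]", query)
    query = PHONE.sub("[PHONE]", query)
    query = SIN.sub("[ID-NUMBER]", query)
    return query

prompt = "Draft a warning letter for Jane Doe, jane.doe@acme.com, 416-555-0123."
print(redact(prompt))
# Draft a warning letter for Jane Doe, [EMAIL], [PHONE].
```

Note that the name is untouched: reliably detecting names requires more sophisticated tools, which is why redaction is a supplement to, not a substitute for, reading the provider’s terms of use.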

Inaccuracy/Hallucinations

There is also a risk of inaccuracy, or outright fabrication, with AI. By now we have all heard stories of incorrect information generated by AI (remember the lawyer in New York who filed a brief, generated by ChatGPT, that cited six fictitious cases). We know that there are limits to AI based on the data set it is applied to: the tool doesn’t know whether information is true or not; it only generates responses based on the information it has access to.

This can lead to incorrect answers, either because the tool repeats information that is wrong or because it invents an answer when it cannot find the data it needs. Answers generated by AI may also be incorrect because they are based on out-of-date information.

But you may be surprised to know that researchers have determined that even when asked to perform a very straightforward task with a complete and accurate data set, AI will still make things up.

Researchers wanted to test how often AI chatbots fabricate content (what they call “hallucinating”). They gave a number of different chatbots a simple task: summarize a short passage from a news article. The chatbots invented information at least 3% of the time, and as much as 27% of the time. The article provided the following example.

The researchers asked Google’s large language model, Palm chat, to summarize this short passage from a news article:

“The plants were found during the search of a warehouse near Ashbourne on Saturday morning. Police said they were in “an elaborate grow house”. A man in his late 40s was arrested at the scene.”

The chatbot gave this summary, completely inventing a value for the plants the man was growing and assuming — perhaps incorrectly — that they were cannabis plants:

“Police have arrested a man in his late 40s after cannabis plants worth an estimated £100,000 were found in a warehouse near Ashbourne.”

If you were to read only the summary, you would have no idea that it included made-up facts; nothing suggests or indicates that the summary isn’t accurate. In any given case, there is no way to know whether the response you receive falls into the 3-27% of cases in which the chatbot includes fictitious information. Put simply, you cannot know whether an answer is right or wrong.
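One partial safeguard is to mechanically cross-check a generated summary against its source. The Python sketch below is our own illustration, not a method from the study: it flags monetary amounts and other figures in a summary that never appear in the source text, which would catch the invented £100,000 above. No simple check like this catches every fabrication, but it can flag the most glaring ones.

```python
import re

# Matches plain numbers and currency amounts, e.g. "40", "£100,000", "3.5".
FIGURE = re.compile(r"[£$€]?\d[\d,]*(?:\.\d+)?")

def unsupported_figures(source: str, summary: str) -> list[str]:
    """Return figures that appear in the summary but nowhere in the source."""
    source_figures = set(FIGURE.findall(source))
    return [f for f in FIGURE.findall(summary) if f not in source_figures]

source = ("The plants were found during the search of a warehouse near "
          "Ashbourne on Saturday morning. Police said they were in an "
          "elaborate grow house. A man in his late 40s was arrested at the scene.")
summary = ("Police have arrested a man in his late 40s after cannabis plants "
           "worth an estimated £100,000 were found in a warehouse near Ashbourne.")

print(unsupported_figures(source, summary))  # ['£100,000']
```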

How Legislation Is Impacting HR’s Use of AI

Governments are recognizing the risks of AI and they are beginning to introduce legislation to address some of those risks.

In Canada, the federal government’s Bill C-27 would create a new Act called the Consumer Privacy Protection Act (see our previous post for more on Bill C-27). If passed, that Act would include provisions dealing with “automated decision systems”.

Automated decision systems are defined as “any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based program, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique”.

The requirements regarding automated decision systems would apply broadly, but there are specific considerations in the employment context. For example, if a federally regulated employer uses automated decision systems in its hiring and HR practices (such as resume screening or ranking programs, aptitude tests, personality tests, etc., that meet the above definition), it would need to comply with the following requirements in the CPPA:

  • Plain Language: The organization must publish in plain language a general account of its use of any automated decision system to make predictions, recommendations or decisions about individuals that could have a significant impact on them; and
  • Response to Request for Explanation: If an individual makes a request, the organization must provide the individual with an explanation of the prediction, recommendation or decision and the explanation must include the type of personal information that was used, the source of the information and the reasons or principal factors that led to the prediction, recommendation or decision.
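Although Bill C-27 has not yet passed, employers can prepare for obligations like these by keeping a structured record of each automated decision. The Python sketch below is a minimal illustration of such a record; the field names are our own, since the CPPA would prescribe what an explanation must contain (the personal information used, its source, and the principal factors), not a particular schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    """One record per automated prediction, recommendation or decision.

    Captures the elements a CPPA-style explanation request would call for.
    Field names are illustrative, not prescribed by the Act.
    """
    candidate_id: str
    decision: str                   # e.g. "screened out", "advanced to interview"
    personal_info_used: list[str]   # types of personal information relied on
    info_sources: list[str]         # where that information came from
    principal_factors: list[str]    # main reasons behind the outcome
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AutomatedDecisionRecord(
    candidate_id="C-1042",
    decision="screened out",
    personal_info_used=["resume text", "years of experience"],
    info_sources=["application form submitted by the candidate"],
    principal_factors=["required certification not listed on resume"],
)
```

With records like this on file, responding to an individual’s request for an explanation becomes a retrieval exercise rather than a reconstruction exercise.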

The Ontario government also recently announced that it will be introducing legislation related to the use of AI by employers. The government said its legislation would require employers to inform job seekers when AI is used to inform decisions in the hiring process. We expect to see other jurisdictions follow suit.

How Compliance Works Helps HR Professionals

Employment law obligations are spread across a number of different Acts and regulations, which change all the time. It can be hard to know where to look, or to feel confident that you have looked at everything and that the information you found is up to date.

Compliance Works makes it easy to identify all of your requirements by pulling together the related requirements from Acts and regulations into an easy-to-read, plain-language summary that is always up to date, giving you confidence that you have it all covered.


Compliance Works provides easy-to-read summaries and the latest changes on Accessibility, Employment Standards, Health & Safety, Human Rights, Labour Relations, Official Languages, Pay Equity and Privacy. Contact us to request a demo, subscribe to Compliance Works publications, or email us at info@complianceworks.ca to learn how a paid subscription to Compliance Works can help your HR team succeed.

About the author

Gayle Wadden, CLO, Compliance Works
Gayle Wadden is a senior lawyer with deep experience in employment and corporate law. She is responsible for overseeing Compliance Works’ legal content.
