Employment

[Podcast] AI in the Workplace – Training Data Issues

In the second part of our artificial-intelligence series, Guy Brenner, head of Proskauer's Washington, D.C. Labor & Employment practice and co-head of the firm's Counseling, Training & Pay Equity Group, and Jonathan Slowik, senior counsel in the Labor & Employment practice in the firm's Los Angeles office, examine the crucial issue of training data in AI tools used to make employment decisions. They discuss how training data issues can create risk under employment discrimination laws even when an AI system has not been explicitly programmed to take protected characteristics into account, as well as the possibility that AI models will be inaccurate because of inadequate or unrepresentative data. Listen to the podcast.

Guy Brenner: Welcome back to The Proskauer Brief – Hot Topics in Labor and Employment Law. I'm Guy Brenner, a partner in Proskauer's Employment Litigation and Counseling Group in Washington, D.C., and I'm joined by Jonathan Slowik, my colleague in the Labor and Employment Practice Group in Los Angeles. Today we are talking about the use of AI in employment decisions such as hiring and promotions. Jonathan, thank you for joining me again.

Jonathan Slowik: I'm excited to be here, Guy.

Guy Brenner: If you haven't heard part one of this series, we encourage you to go back and listen to it, because we go through what we hope is some useful background about what AI is and the kinds of solutions it offers employers and HR departments. As we promised at the end of that episode, AI can also present challenges and pitfalls for employers, and this episode is the first in a series that will go through those potential pitfalls. Today, we will discuss training data and the issues that can arise from it. So, Jonathan, can you start by giving us a quick primer on what we mean by training data?

Jonathan Slowik: Certainly. To understand what training data is, it's important to first understand that most AI systems use some form of machine learning. Machine learning is a process whereby the AI analyzes a large amount of data to find patterns and connections, and then uses the lessons it learns to make predictions or draw conclusions in a different context.

Guy Brenner: That's a great summary, Jonathan. What should employers be aware of when it comes to machine learning?

Jonathan Slowik: It may seem like a sensible approach, one that leads to better results than a human could achieve, but there are a few potential problems. One is that an AI system may be trained on data that is unrepresentative, that incorporates historical biases, or that correlates with protected characteristics. Let's make this more concrete. Imagine an AI model used to help with hiring decisions that has been trained on data about your company's employees over a period of years. The AI analyzes that data, draws lessons about who has been successful at your company in the past, and uses those lessons to predict who is most likely to succeed in the future. That seems like a reasonable way to go about it, and it's clearly job-related. Guy, do you see any problems with that approach?

Guy Brenner: I think the example itself illustrates what makes AI so appealing. If AI could predict, based on a company's own data and track record, who is most likely to be successful there, it would help employers narrow their recruiting efforts and save time and money. The appeal is clear. The issue is whether the data being used reflects biases. What if, historically, your workforce was predominantly male, or white, or made up of people without primary caregiving responsibilities? You can see how the data could perpetuate biases that were latent in the organization.
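
To make this scenario concrete for readers, here is a minimal, hypothetical sketch of the kind of model the hosts describe: a classifier trained on historical employee records to predict "success." The column names, the "was_promoted" success label, and every value are invented for illustration; no real HR data or vendor tool is depicted.

```python
# Hypothetical sketch: a model trained on historical employee records to predict
# who is likely to succeed. All columns and values are illustrative only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Historical employee data (made up). In practice this might come from an HRIS export.
history = pd.DataFrame({
    "years_experience": [2, 7, 4, 10, 1, 6],
    "referral":         [0, 1, 1, 1, 0, 1],
    "gpa":              [3.1, 3.8, 3.4, 3.6, 2.9, 3.7],
    "was_promoted":     [0, 1, 1, 1, 0, 1],   # the "success" label the model learns from
})

model = RandomForestClassifier(random_state=0)
model.fit(history.drop(columns="was_promoted"), history["was_promoted"])

# Score a new applicant pool the same way. Whatever patterns (or biases) sit in the
# historical records are carried straight through into these predictions.
applicants = pd.DataFrame({
    "years_experience": [3, 8],
    "referral":         [0, 1],
    "gpa":              [3.5, 3.2],
})
print(model.predict_proba(applicants)[:, 1])  # predicted probability of "success"
```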

Jonathan Slowik: Right. In any of those scenarios, the AI would be making hiring recommendations based on a biased data set. That might reflect a historical failure of the organization to promote a diverse range of candidates, or it might simply reflect changing trends in society.

Guy Brenner: So, Jonathan, someone out there might be thinking, "Well, that's easy to fix: just make sure the training data doesn't include demographic information." In other words, scrub it of any protected-characteristic data, such as race, gender, or age. What's wrong with that approach?

Jonathan Slowik: That's certainly a step in the right direction, but it might not solve the problem completely. Remember that there may be characteristics of your organization's current and former employees that correlate with protected characteristics. Even if you have removed protected characteristics from the training data, the AI model could still recommend a pool of candidates who look like your current and former employees.
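
A small, illustrative sketch of the proxy problem just described: the protected characteristic is dropped from the training data, but a correlated feature stands in for it. The "zip_code" column, the outcomes, and the correlation itself are all fabricated for this toy example.

```python
# Toy demonstration: even with the protected column removed, a correlated feature
# lets the model reconstruct much of the same pattern. All data is made up.
import pandas as pd
from sklearn.linear_model import LogisticRegression

data = pd.DataFrame({
    "gender":   ["M", "M", "M", "F", "F", "F", "M", "F"],
    "zip_code": [1, 1, 1, 2, 2, 2, 1, 2],   # correlates with gender in this toy set
    "hired":    [1, 1, 1, 0, 0, 1, 1, 0],   # biased historical outcomes
})

# "Cleaned" training set: the protected characteristic is dropped.
X = data[["zip_code"]]
y = data["hired"]

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# Average predicted hire probability still differs by gender, because zip_code
# carries the same signal the removed column did.
print(pd.Series(scores).groupby(data["gender"]).mean())
```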

Guy Brenner: I have heard a lot about algorithmic discrimination. New York City, Colorado (whose law is not yet in effect), and other jurisdictions have laws requiring regular bias audits to prevent algorithmic discrimination. Is what you're describing an example of that?

Jonathan Slowik: Yes, exactly.

Guy Brenner: Okay. But I'll point out that even if our listeners are not in a jurisdiction with laws specifically addressing algorithmic discrimination, that doesn't mean they don't have to worry about this. Discrimination laws prohibit decisions based on protected characteristics whether those decisions are made by humans or by machines, and you can't protect yourself by saying, "The AI made me do it." Jonathan, what are some of the other ways unrepresentative training data can make an AI model go awry?
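
As a rough illustration of what a bias audit might look at, the sketch below computes selection rates by group and an impact ratio (each group's rate relative to the highest group's rate). This is a simplified, hypothetical calculation, not the methodology prescribed by any particular law, and all of the outcome data is invented.

```python
# Simplified sketch of a disparate-impact style check on an AI screening tool's output.
import pandas as pd

# Hypothetical outcomes (1 = candidate advanced to interview).
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = results.groupby("group")["selected"].mean()   # selection rate per group
impact_ratios = rates / rates.max()                   # relative to the most-selected group

print(rates)
print(impact_ratios)  # ratios well below 1.0 flag a disparity worth investigating
```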

Jonathan Slowik: So, I think an underrated potential problem is that AI models can struggle with certain tasks if they don't have a sufficient volume of training data. I'll use a simple, humorous example from the New York Times podcast Hard Fork to illustrate the point. The same prompt was given to two different versions of the engine that runs ChatGPT: GPT-4, which at the time was the most advanced version available to ChatGPT subscribers, and GPT-2, an early prototype from just a few years earlier. GPT-4 is more advanced than GPT-2 in a number of ways; it can generate images, for example. But the biggest difference is that GPT-4 was trained on far more data, by some estimates roughly 1,000 times more than GPT-2. Guy, would you read the first output, the one from GPT-4?

Guy Brenner: Sure. "Roses are red, violets are blue. Our love is eternal, our bond is true." Not too bad.

Jonathan Slowik: Not bad at all. It's sweet; you could see it on a greeting card, perhaps. It's a little cheesy, but that may be more a function of the prompt. That's GPT-4. Guy, would you like to read the GPT-2 version now? Remember, this is an engine trained on a much smaller data set.

Guy Brenner: Here you go. "Roses are red, violets are blue. My girlfriend is dead." Not quite as good.

Jonathan Slowik: Not quite. Maybe it's so bad that it's good: less Hallmark material and more novelty-shop material. But this is a silly example. If an AI tool in an HR context gave a similarly absurd output, no one would take it seriously; if you received something that ridiculous, you'd simply ignore it. The harder problem is when a lack of training data causes an AI to misfire in a more nuanced way. For example, a good deal of academic research has shown that facial recognition software is less accurate for women and people of color. One possible reason is that these platforms were trained on data sets that disproportionately consisted of the faces and voices of white men. So if the same is true of, say, a video interviewing platform that analyzes candidates' facial expressions, tone of voice, and speech patterns, that tool might also be less accurate for women or people of color, and that bias might be much harder to detect, perhaps only becoming apparent over time and in hindsight.
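
One way this subtler kind of misfire can be surfaced is by breaking a tool's accuracy out by group rather than looking only at the overall number. The sketch below is hypothetical; the groups, labels, and predictions are fabricated solely to show the calculation.

```python
# Toy sketch: a tool can look accurate overall while being noticeably less accurate
# for an underrepresented group. All data here is fabricated.
import pandas as pd

evaluation = pd.DataFrame({
    "group":     ["A"]*8 + ["B"]*4,            # group B is underrepresented
    "actual":    [1,1,0,0,1,0,1,0, 1,0,1,0],
    "predicted": [1,1,0,0,1,0,1,0, 0,0,1,1],   # more mistakes on group B
})

correct = evaluation["actual"] == evaluation["predicted"]
print(f"Overall accuracy: {correct.mean():.2f}")
print(correct.groupby(evaluation["group"]).mean())  # per-group accuracy reveals the gap
```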

Guy Brenner: It's a lot like when someone comes to you with an answer and you don't probe how they arrived at it; you assume it's correct, and only later learn it was flawed in some way. We're used to assuming that machines get things right, but AI is not a calculator.

Jonathan Slowik: Exactly.

Guy Brenner: There's some "thinking," so to speak, involved. So now that we've covered training data issues, what are we going to discuss in the next episode?

Jonathan Slowik: In the next episode, we'll discuss model opacity. Many of these advanced AI platforms can be described as "black boxes," meaning that their internal workings are difficult to understand. We'll talk about the implications of that for employers who use such tools for hiring, promotions, or other employment decisions. Thank you to everyone listening for joining us today on The Proskauer Brief. We'll record new podcasts as developments warrant to keep you up to date on this fascinating and constantly changing area of law and technology. Please follow us on Apple Podcasts and YouTube Music to stay up to date on the latest topics in labor and employment law.

