[Podcast] AI at work: Black Box Issues
In the third part of our series on the potential pitfalls of using artificial intelligence (AI) to make employment decisions, Guy Brenner and Jonathan Slowik explore the concept of “black box” systems: AI tools whose internal decision-making processes are not transparent. Even the developers of these systems may not fully understand their internal workings. Tune in for a closer look at the complexities of this conundrum and what it means for employers.
Listen to the podcast.
Guy Brenner: Welcome again to The Proskauer Brief: Hot Topics in Labor and Employment Law. I’m Guy Brenner, a partner in Proskauer’s Employment Litigation & Counseling team, and I’m joined today by Jonathan Slowik, special employment law counsel in the practice group, based out of Los Angeles. This is the third episode in our series on the potential pitfalls of using artificial intelligence (AI) in employment decisions such as hiring and promotion. Jonathan, thank you for joining me today.
Jonathan Slowik: It’s great to be here, Guy.
Guy Brenner: If you haven’t heard the earlier installments of this series, we encourage you to go back and listen to them. In part one, you’ll learn about the AI solutions available to employers and HR departments, including resume scanners, chatbots, interviewing platforms, social networking tools, job fit testing, and performance review tools. In part two, you’ll learn how problems with training data can result in biased or otherwise problematic outputs. In today’s episode, we discuss what we call “black box” issues. Jonathan, what do we mean when we refer to an AI as a “black box”?
Jonathan Slowik: A “black box” system draws conclusions without providing any explanation of how those conclusions were reached. This is sometimes called “model opacity,” because we can’t see the inner workings of the AI. Even the developer who built the system may not understand how it works.
Guy Brenner: It’s pretty sobering: even the developers aren’t sure how it works. Why should an employer be concerned if an AI system is a “black box”?

Jonathan Slowik: An interesting study published this spring illustrates the risk. It examined what the researchers called overt and hidden biases in large language models (or LLMs), like Claude or the other chatbots that many of us have come to rely on. LLMs are trained on our own speech and writing, and the most advanced versions use a large portion of the Internet. That vast corpus of writing and data naturally contains some ugly stereotypes, so it’s not surprising that early versions of the technology displayed this bias in their responses. This is a big problem: no one wants to put out a racist chatbot, and these are public-facing products. Developers solve for this problem primarily through what’s called human feedback training. The process is a bit like sites such as Reddit, where users upvote or downvote what other people post. Human reviewers look at a number of outputs generated by the model, upvote the good outputs, and downvote the poor ones. That feedback trains the AI to give more accurate responses rather than racist ones. Over time, this human feedback training taught the LLMs to mimic real people, and thankfully, most of us do not say racist things. However, the researchers found that even advanced LLMs displayed implicit bias against racial minorities.
Guy Brenner: Jonathan, as I listen to you, the implications for employers are obvious and profound. If an employer can’t be sure that an AI tool is operating in an unbiased manner, biases may become apparent only after the fact. In these contexts, bias audits could be critical to mitigating the risk of algorithmic discrimination.
Jonathan Slowik: That’s exactly right. And that’s one reason why lawmakers and regulators, as they grapple with the issues created by these new AI tools, have been especially focused on bias audits as they begin to craft and implement AI-specific regulations.
Guy Brenner: So what are we going to discuss in the next episode, Jonathan?
Jonathan Slowik: In the next episode, we’ll explore mismatches between a platform’s design and its end use. As we’ll discuss, even a system that is supposedly unbiased can produce biased results when it’s used in an unintended way. Thanks to everyone listening and joining us today on The Proskauer Brief. We’ll record new podcasts as developments warrant to keep you up to date on this fascinating and constantly changing area of law and technology. Please follow us on Apple Podcasts, Google Podcasts, and Spotify to stay up to date on the latest hot topics in labor and employment law.