
[Podcast] AI at Work – Design Mismatches

In this final installment of the AI at Work series, Guy Brenner and Jonathan Slowik address a critical issue: mismatches between how artificial intelligence (AI) tools are designed and how they are used in reality. Tune in as we explore real-world examples of these risks and what employers can do to ensure they are leveraging AI responsibly.

Listen to the podcast.

Guy Brenner: Welcome to The Proskauer Brief: Hot Topics in Labor and Employment Law. Jonathan Slowik, special employment law counsel based out of Los Angeles, joins Guy Brenner, a partner in Proskauer’s Employment Litigation & Counseling group based out of Washington, D.C. This is the last part of our multi-part series on what employers should know about artificial intelligence (AI) when making employment decisions such as hiring and promotion. Jonathan, thank you for joining me today.

Jonathan Slowik: It’s great to be here, Guy.

Guy Brenner: So if our listeners haven’t heard the earlier installments of the series, we encourage you to go back and listen to them. In part one, we provide what we hope is a useful background on what AI is and what it can offer employers. In part two, you will learn how AI tools can be distorted or otherwise produce problematic results. In part three, we discussed the so-called “black box” issues, which arise because it may be difficult to fully understand the inner workings of many advanced AI systems. Today’s episode is all about mismatches between how an AI system is designed and how it is used. Jonathan, for background, AI developers generally put a lot of effort into eliminating bias from their products, isn’t that right?

Jonathan Slowik: Yes, that’s right. This is a major selling point for many of these developers, because employers have a vested interest in deploying tools that don’t unintentionally create bias. You can find on the websites of most developers statements, or even entire pages, detailing their efforts to ensure that their products are free of bias. Developers are clearly competing on this, and that should give employers some comfort. But even if a product was designed to be unbiased, it can still produce biased results when it is deployed in a manner the developer did not intend. Let’s look at a few examples. First, let’s say an employer instructs its resume scanner to exclude applicants who live more than a specified distance from the workplace, perhaps on the theory that they are less likely than others to be serious candidates. As you may recall from part one of this series, hiring managers are currently overwhelmed with applications, since it is possible to submit resumes in bulk on platforms such as LinkedIn or Indeed.

Guy Brenner: Well Jonathan, I can certainly see the appeal of this screening criterion. AI could make something like that possible, something hiring managers might have thought about in the past but could never do, because AI can perform the task in a matter of seconds. It sounds objective and unbiased, and it is a rational way to try to sort through the many resumes employers receive when trying to fill a job. But the fact is, many of the places where we live are segregated by race and ethnicity. So depending on where the workplace is located, this kind of approach might disproportionately screen out legitimate candidates of certain races, even though that may not be the intent.

Jonathan Slowik: Right. And you could do this manually; a hiring manager could simply decide to throw out all resumes from certain zip codes. But using technology to do it increases the risk. A hiring manager doing this manually might notice a pattern and realize at some point that the screening criterion was creating an unrepresentative pool. Software does it quickly and at scale, and it only shows you the results. An employer that uses technology to screen in this way might never even know that it is screening out racial minority candidates. All right, next hypothetical. What if an employer uses a tool to cross-reference social media in order to verify a candidate’s background, and then gives a boost to candidates whose backgrounds can be verified this way?

Guy Brenner: Well, the concern that comes to my mind, and I don’t believe this is a controversial statement, is that, in general, younger applicants are more active on social media than older applicants. And I think that’s exacerbated depending on which platform we’re talking about.

Jonathan Slowik: So we actually have data on that. It’s not just a stereotype.

Guy Brenner: Right. It’s easy to imagine a plaintiff’s attorney arguing that this screening tool could have a disparate impact on older applicants. I would also be concerned if the scoring takes into account other information on social media pages that could be used as a proxy for discriminatory decisions.

Jonathan Slowik: Okay, one more hypothetical. Imagine an employer trying to fill a position in a call center. It screens applicants with a simulation designed to predict how well they will handle distractions under normal working conditions, think of the variety of background noise in a busy call center. This is a screening tool that tests something related to the job, and it is being used to gauge how a person will perform under the conditions they are likely to face once they start. Is this type of test problematic?

Guy Brenner: Well, first, as with any test, you want to know whether it has a disparate impact on a particular group, so you would want to validate it. I’m also curious whether the company has considered that certain applicants might be entitled to reasonable accommodations. For example, you can imagine someone who’s neurodiverse performing poorly on this type of simulation, but doing just fine if provided with noise-canceling headphones.

Jonathan Slowik: For sure. The EEOC has published guidance on this topic. These types of job simulations are often designed to test a candidate’s ability to complete tasks under typical working conditions, which is what the employer did here. But the EEOC has pointed out that many employees with disabilities do not work under typical conditions, precisely because they have reasonable accommodations. For that reason, overreliance on the test, without considering its impact on people with disabilities and whether the test should allow for accommodations, is potentially problematic.
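For readers who want a concrete sense of the disparate impact validation Guy mentions, here is a minimal sketch (not from the podcast) of the EEOC’s “four-fifths” rule of thumb, which compares each group’s selection rate to that of the most-selected group. The group names and numbers below are hypothetical.

```python
# Hypothetical illustration of the EEOC "four-fifths" rule of thumb:
# a group's selection rate below 80% of the highest group's rate is a
# common first-pass indicator of potential disparate impact.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / applicants

# Hypothetical screening results from an AI resume screen or job simulation.
results = {
    "group_a": {"applicants": 200, "selected": 120},  # rate = 60%
    "group_b": {"applicants": 150, "selected": 60},   # rate = 40%
}

rates = {g: selection_rate(r["selected"], r["applicants"]) for g, r in results.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # impact ratio relative to the most-selected group
    flag = "potential disparate impact" if ratio < 0.8 else "within four-fifths"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

Note that the four-fifths rule is only a rough first-pass screen; formally validating a selection procedure typically also involves statistical significance testing and an analysis of job-relatedness.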

Guy Brenner: Well, thanks, Jonathan, and to those listening, thank you for joining us on The Proskauer Brief today. We hope you found the series informative. Please note that we will continue to record new podcasts to keep you up to date on this ever-changing and fascinating area of law and technology. Please follow us on Apple Podcasts and YouTube Music to stay up to date on the latest topics in employment and labor law.
