Artificial Intelligence in Employment Selection Procedures: Updated Guidance from the EEOC

Artificial Intelligence (AI) has taken the world by storm in the past year. Employees, including managers and human resources professionals, have begun integrating AI into their daily tasks to work more quickly and efficiently. Some have used AI to assist with selection processes, such as screening resumes, drafting interview questions, and evaluating performance. Employers, however, should proceed with caution, as AI poses significant challenges and risks for federal and state anti-discrimination compliance.

What is adverse impact?

Title VII (federal law) and the Arizona Civil Rights Act (ACRA) address discrimination by adverse impact. Adverse impact occurs when an employer’s practice or policy disproportionately affects a group of people based on their race, color, religion, sex, or national origin. Employers may be found liable for discrimination by adverse impact even if they did not intend any discriminatory effect.

For example, a Tucson employer may have a dress code that prohibits all employees from wearing head coverings. That dress code may adversely affect Muslim women who wear hijabs or Sikh men who wear turbans. Another Southern Arizona employer’s grooming policy that requires employees to keep a certain hairstyle or hair length may adversely affect Black employees who wear natural hair or dreadlocks. A policy that requires employees to wear uniforms or clothing conforming to gender stereotypes may adversely affect transgender or nonbinary employees. Regardless of the employers’ intent in creating those policies, they may be found liable under federal and Arizona law for discrimination by adverse impact.

How can AI cause adverse impact?

AI systems are often designed to learn from data and make predictions or recommendations based on patterns or correlations found in that data. The data, however, may be incomplete, inaccurate, biased, or outdated. For example, the data may reflect historical or societal discrimination or stereotypes against certain groups of people. The data may also exclude or underrepresent certain groups or fail to capture relevant factors or variables that affect their outcomes.

If an employer uses an AI system that relies on such data to make employment decisions, the system may produce results that are unfair, inaccurate, or discriminatory. For example, an AI system may screen out qualified applicants based on their race, gender, or other protected characteristics that are irrelevant to their job performance. An AI system may also favor applicants whose backgrounds or characteristics resemble those of the existing employees or managers who provided the data.

Has Arizona enacted any laws about the use of AI and possible adverse impact?

According to the National Conference of State Legislatures, AI has the potential to transform and spur innovation across industry and government. Many states, such as California, Illinois, Maryland, and New York, have introduced or enacted bills that address the use of AI in employment decisions. The Arizona legislature has not introduced or enacted similar bills. (The Arizona legislature has introduced two AI-related bills, but neither applies to employment decisions.) Still, state legislative trends can spread quickly across the United States, and Arizona employers may want to consider enacting policies and procedures addressing the use of AI in employment decisions.

How can employers prevent and address adverse impact caused by AI?

On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) provided new guidance on this issue. Steps employers may take to prevent and address adverse impact include:

·        Conducting a job analysis to identify the essential functions and qualifications for each position and ensuring that any selection criteria are job-related and consistent with business necessity.

·        Monitoring and auditing the AI system regularly for any signs of adverse impact and making adjustments as needed to correct any problems (see the illustrative sketch after this list).

·        Keeping records of any selection procedures and decisions and documenting any analyses or validations performed on the AI system.

·        Providing training and guidance to managers and employees who use or interact with the AI system and ensuring that they understand its purpose, limitations, and potential risks.
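
As a rough illustration of the monitoring and auditing step above, the sketch below shows one way an employer or its vendor might screen an AI tool’s selection results for adverse impact using the four-fifths (80%) rule discussed in the EEOC’s guidance. The applicant records, group labels, and threshold handling are hypothetical and for illustration only; a result below the 80% threshold is a flag for further review, not a legal conclusion.

```python
# Hypothetical illustration: screening AI selection results for adverse impact
# using the four-fifths (80%) rule referenced in the EEOC's guidance.
# The applicant records and group labels below are invented for illustration only.

from collections import defaultdict

# Each record: (demographic group, whether the AI tool selected the applicant)
applicants = [
    ("Group A", True), ("Group A", True), ("Group A", False), ("Group A", True),
    ("Group B", True), ("Group B", False), ("Group B", False), ("Group B", False),
]

def selection_rates(records):
    """Return the share of applicants selected within each group."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / total[group] for group in total}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below the threshold (default 80%)
    of the highest group's rate -- a rough screen, not a legal conclusion."""
    highest = max(rates.values())
    return {group: rate / highest < threshold for group, rate in rates.items()}

rates = selection_rates(applicants)
flags = four_fifths_flags(rates)
for group in rates:
    note = "review for possible adverse impact" if flags[group] else "no flag"
    print(f"{group}: selection rate {rates[group]:.0%} -> {note}")
```

In this hypothetical, Group B’s selection rate (25%) is one-third of Group A’s (75%), well below four-fifths, which would prompt a closer look at the tool and possibly a validation study.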

As employees begin identifying ways to utilize AI in their day-to-day tasks, employers and managers should be aware of the legal implications. If you have any questions regarding your use of AI related to employment selection processes, please contact Briana Ortega Law PLLC (bortegaloya@outlook.com).

------

Briana Ortega Law PLLC is a Southern Arizona law office with locations in Nogales and Tucson. Briana Ortega represents businesses, their owners and managers, and other individuals in employment and business matters including civil litigation. For more information about Briana Ortega Law PLLC’s practices, click here.
