Audit Your AI: How To Get Rid of Talent Biases
Discover how to prevent AI bias in hiring and run an effective AI audit.
As AI grows in importance in HR, and specifically in hiring, laws regulating its use are expected to take effect soon. Against this backdrop, Sultan Saidov, president and co-founder of Beamery, explains how to prevent AI bias in hiring and how to run an effective AI audit to catch it.
With the Great Resignation being succeeded by trends like the Great Reshuffle, Quiet Quitting, and Productivity Paranoia, today’s talent landscape is both challenging and dynamic. Now, amid shifting workforce expectations, labor shortages, layoffs, and a possible recession, companies and recruiters will soon face a new challenge: HR tech AI regulation.
AI In Hiring
With AI and machine learning developing rapidly, regulation has fallen behind. However, some states are beginning to enact laws to change that. In 2023, New York will become the first state to enact compliance laws for using artificial intelligence in HR technology with its automated employment decision tools (AEDT) law. The new law regulates the use of AI in hiring to curb bias that can appear during the recruiting process. With 94% of business leaders agreeing that AI is critical to their organization’s success over the next five years, according to Deloitte, the use of AI in recruiting is likely to expand. This law can affect countless employees and set a precedent for transparency in the AI regulation laws that follow.
As a result of the new law, organizations will soon be required to notify each candidate at least ten business days before subjecting them to AI tools, list the job qualifications and characteristics used by the tool, make the sources and types of data used by the tool publicly available, and submit AI tools to an annual, independent “bias audit” with a publicly available summary. A bias audit can clarify the kinds of candidates an AI model favors or discriminates against. Artificial intelligence and machine learning are powerful business tools, but moving forward, companies will need to practice much greater transparency regarding the AI tools they employ during recruitment.
Third-party AI Audit
Ahead of this new law, we completed a two-part audit of our recruiting software. This involved rigorous testing of our AI capabilities with a third-party auditor. As part of the audit, the auditors examined our historical data and simulated models to uncover any inadvertent biases.
Using two tests — one with gender, ethnicity, and other data included in the models and one without — the auditors were able to report any possible hidden biases. In addition to the audit, we shared our AI process and principles with customers, candidates, and employees.
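The logic behind the two-test design is simple: if a model’s selections for a demographic group change when demographic fields are withheld, those fields may be leaking bias. Here is a minimal sketch of that comparison; the field names, data, and pass/fail threshold are invented for illustration and are not Beamery’s actual audit methodology.

```python
# Compare selection rates by group under two models:
# "full" (demographic data included) vs. "blind" (demographic data withheld).
# All records and field names below are hypothetical.
from statistics import mean

candidates = [
    {"gender": "F", "selected_full": True,  "selected_blind": True},
    {"gender": "M", "selected_full": True,  "selected_blind": True},
    {"gender": "F", "selected_full": False, "selected_blind": True},
    {"gender": "M", "selected_full": True,  "selected_blind": False},
]

def selection_rate(rows, group, outcome_field):
    """Fraction of candidates in `group` selected under the given model."""
    group_rows = [r for r in rows if r["gender"] == group]
    return mean(1.0 if r[outcome_field] else 0.0 for r in group_rows)

# A large gap between the two rates for the same group suggests the
# demographic fields are influencing the model's decisions.
for group in ("F", "M"):
    full = selection_rate(candidates, group, "selected_full")
    blind = selection_rate(candidates, group, "selected_blind")
    print(f"{group}: full={full:.2f} blind={blind:.2f} gap={abs(full - blind):.2f}")
```

In a real audit the two outcome columns would come from scoring the same candidate pool with the two model variants, at a much larger sample size.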
According to research by Dataconomy, 35% of hiring managers believe AI is the #1 trend influencing how employers hire. With AI influencing which candidates are chosen, it is important to identify and mitigate sources of bias wherever possible. To reduce bias, we can leverage AI to enable skill-based hiring, as inferring skills through AI allows us to look beyond credentials.
Unlike traditional recruiting, a focus on the candidate’s most important skills can dramatically reduce bias by removing unnecessary details like where someone went to school or years of experience. In the average recruiting process, recruiters may allow their preconceived notions and judgments to misdirect them from the best candidate for the role. AI tools can avoid these biases with algorithms that properly weigh skills, seniority, proficiency, and other experience. However, to verify that an organization’s HR AI tech is compliant, and to minimize the risk of replicating human biases, it must be tested by third-party auditors to produce a report that is as accurate and objective as possible.
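To make the idea of weighing skills rather than credentials concrete, here is a toy scorer that ranks candidates purely on weighted skill matches. The role, weights, and skill names are assumptions made up for this example, not any production scoring model.

```python
# Toy skills-based scorer: candidates are compared only on role-relevant
# skills, so fields like school or tenure never enter the score.
# Weights and skill names are hypothetical.
ROLE_WEIGHTS = {"python": 0.5, "sql": 0.3, "communication": 0.2}

def skill_score(candidate_skills):
    """candidate_skills maps skill name -> proficiency in [0, 1]."""
    return sum(weight * candidate_skills.get(skill, 0.0)
               for skill, weight in ROLE_WEIGHTS.items())

alice = {"python": 0.9, "sql": 0.8, "communication": 0.6}
bob = {"python": 0.4, "communication": 0.9}  # note: no credential fields exist

print(skill_score(alice))  # 0.5*0.9 + 0.3*0.8 + 0.2*0.6 = 0.81
print(skill_score(bob))    # 0.5*0.4 + 0.3*0.0 + 0.2*0.9 = 0.38
```

Real skills-inference systems are far more sophisticated, but the design point is the same: the inputs to the ranking are skills, not proxies like alma mater that can encode historical bias.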
Conducting an Effective AI Audit
To conduct an effective AI audit, companies should prepare by arranging interviews between a third-party auditor and the engineers responsible for their AI model and its algorithms. These interviews can serve as the basis of the audit, giving auditors more perspective into the humans creating the system and the human-machine interaction at the model’s center. In turn, interviews can flag qualitative information about the system and potential sources of risk before auditors begin looking into the data the AI has collected.
After interviews, the third-party organization must analyze the AI model itself, from the initial data collection practices through how the data is implemented. AI auditors may choose to run different scenarios through the AI model to isolate different factors, such as gender, race, disability, and age, to measure their effect on the simulation. This process can vary in length and includes (but is not limited to) examining how the AI was made, the features of the model, and standardization practices to determine the model’s effectiveness in several different scenarios.
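One common way auditors quantify the effect of a factor like gender or age is an impact ratio: each group’s selection rate divided by the highest group’s rate, with ratios well below 1.0 (commonly flagged below 0.8 under the “four-fifths rule”) indicating possible adverse impact. The sketch below shows the arithmetic only; the group labels and outcomes are placeholder data, not results from any real audit.

```python
# Impact ratios from (group, was_selected) outcomes of a simulated model run.
# Groups "A" and "B" are placeholders for, e.g., demographic categories.
from collections import defaultdict

def impact_ratios(outcomes):
    """outcomes: iterable of (group_label, was_selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, chosen in outcomes:
        total[group] += 1
        selected[group] += int(chosen)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    # Each group's selection rate relative to the most-selected group.
    return {g: rate / best for g, rate in rates.items()}

outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
ratios = impact_ratios(outcomes)
print(ratios)  # group B selected half as often as group A
```

Running this per factor (gender, race, disability, age) across simulated scenarios is one way the “measure their effect” step above can be made quantitative.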
When the AI audit is complete, the third-party auditor will share a report of the findings, including possible risks and biases in the AI model. Once the findings are returned, the company should review them, work toward mitigating any biases found in the model, and make changes to the software where necessary. Though changing the model may not be an easy fix, revising the algorithm is the best next step toward a more effective recruitment process. The most important part of any AI audit is ensuring that, moving forward, the company’s hiring processes are as fair and skills-based as possible, locating the candidates who will perform best in each role while minimizing bias during recruitment.
Leading with Smarter Recruitment Technology
In today’s tumultuous talent landscape, recruiters can best prepare for success by ensuring their hiring processes zero in on how well candidates match the skills needed for their role. AI audits allow companies insight into how well their recruitment technology analyzes the critical data provided, bringing them closer to minimizing unintended algorithmic biases.
As more laws expand the regulations on HR recruiting technology, companies can set themselves up for success by being aware of the possible risks in their recruitment process and ensuring their tools do not discriminate against potential candidates. Fair recruitment practices are more important than ever, and AI models, when unbiased, can help to connect the very best candidates with roles where they can explore their potential.