How to remove AI bias in recruitment


The race is on to find the best way to make AI useful to recruiters – and that means figuring out how to effectively ‘unteach’ our biases.

You would think that if anyone could get artificial intelligence (AI) right, Amazon could. But the e-commerce giant and machine-learning pioneer had to ditch its experimental recruiting tool late last year, admitting it had a strong bias against female candidates.

Not all providers of AI recruiting tools are as conscientious, says Business Disability International CEO Susan Scott-Parker.

“Unless the unintended consequences of AI-powered recruitment are urgently addressed, hundreds of millions of people already disadvantaged by assumptions triggered by their disability face lifetimes of needless unemployment and social exclusion,” she warns, noting the same systems could discriminate against any group.

So is it possible to de-bias AI? Well, many would like to think so.

Prevalence

In Australia, the uptake of AI in recruitment has been cautious, says Michelle Hancic, APAC lead I/O psychologist for US-based hiring technology startup pymetrics.

“People are thinking, ‘Let’s see what others are doing and how much success they’re having before we jump in with both feet,’” she says.

That said, the professional services sector seems to be taking a lead, with ANZ, EY and Alexander Mann Solutions all making large investments in AI recruitment solutions.

Globally, recruiters say AI is helping them save time (67 per cent), remove human bias (43 per cent) and deliver the best candidate matches (31 per cent), according to LinkedIn’s Global Recruiting Trends 2018 report.

Of course, LinkedIn has its own AI-powered tool, Recruiter, which scores and prioritises candidates based on their similarity to “top performers” you nominate.

Other AI-powered recruitment tools perform resume screening (including platforms such as Ideal and Amazon’s now-defunct tool), testing and initial interviews (Triplebyte), video interview analysis (HireVue and Paññã), talent management (Textkernel and Talentswot) and chatbots for candidates (Rai, Mya and Olivia).

Bad teachers

AI systems are built upon data you provide. Not being careful with that data can mean you ‘teach’ the AI your implicit (or explicit) racial, gender or ideological biases.

This became painfully obvious in 2016 when Tay.ai, a conversational chatbot created by Microsoft, used live Twitter interactions to become “smarter in real time”. Within a day it had become a racist misogynist, a reflection of its interactions with trolls.

More recently came the news of the shuttering of Amazon’s experimental AI. As Reuters reports, it was trained on 10 years of resume submissions. Since most of those came from men (a reflection of the tech industry’s gender imbalance), the AI downgraded new resumes containing the word “women’s” (as in “women’s chess club”) and those that named either of two all-women’s colleges. Even after those flaws were patched, the AI’s designers worried it might still be weighting other discriminatory data points.

De-biasing AI

There are plenty of disruptors looking to make AI-powered recruitment work, including pymetrics, whose platform delivers 12 gamified assessments grounded in the academic neuroscience literature, says the company’s global head of diversity analytics, Dr Kelly Trindel.

“First we work with the client to identify successful incumbents and build a profile of their cognitive, social and emotional traits,” says the former chief analyst at the United States Equal Employment Opportunity Commission.

Pymetrics records how incumbents perform in the games (which measure more than 50 personality traits), then it builds a custom algorithm representing success for the specific vacant job function and organisation.

“Pymetrics then goes through an active de-biasing process before we go live with any candidate. We find out if the model that we’ve built is likely to cause bias and we go through a process of removing that bias,” says Trindel.

The company uses a reference set of tens of thousands of people to check for any potential biases, and a bias-checking tool – available open-source on GitHub – that applies a range of algorithm auditing techniques.

An algorithm is basically a step-by-step guide, or ‘recipe’, for solving a particular problem. Algorithm auditing can pick up on bias within that recipe, including in the data points it relies on.

For example, an algorithm audit may discover that absenteeism data has been used as a predictor of success, which could disadvantage female candidates, or that the inclusion of postcode data favours candidates from certain cultural or socioeconomic backgrounds.
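The open-source bias-checking tool applies audits along these lines. As a purely illustrative sketch (invented data and function names, not the actual tool’s API), here is a minimal Python version of one standard audit, the ‘four-fifths rule’ for adverse impact, which flags a model whose selection rate for any group falls below 80 per cent of the rate for the most-favoured group:

```python
# Illustrative algorithm audit: the "four-fifths rule" for adverse impact.
# All names and data here are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, passed_col: str) -> pd.Series:
    """Proportion of each group the model recommends."""
    return df.groupby(group_col)[passed_col].mean()

def four_fifths_check(df: pd.DataFrame, group_col: str, passed_col: str) -> bool:
    """Pass only if every group's selection rate is at least 80 per cent
    of the most-selected group's rate."""
    rates = selection_rates(df, group_col, passed_col)
    return rates.min() / rates.max() >= 0.8

# Invented model output: who the screening model recommended
candidates = pd.DataFrame({
    "gender":      ["F", "F", "F", "M", "M", "M", "M", "M"],
    "recommended": [0,   1,   0,   1,   1,   1,   0,   1],
})
print(selection_rates(candidates, "gender", "recommended"))
print("Passes four-fifths rule:", four_fifths_check(candidates, "gender", "recommended"))
```

In the toy data above, female candidates are recommended at roughly 42 per cent of the male rate, so the check fails and the auditor knows to go hunting for the variable responsible.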

To address bias in the algorithm, many AI providers eliminate the offending data or lower its value. “Once we know if the overall effect is biased, then we can look at what is in the algorithm that is causing the bias and reduce the power of that variable, or remove it entirely,” says Trindel.

“The reason that AI is more powerful in doing this is because we have so many data points – we have the power to remove a data point without affecting the overall predictiveness of the model.”
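As a hedged illustration of that claim (invented data and feature names, not pymetrics’ actual model), the sketch below drops a suspect proxy variable and confirms that a simple success model’s cross-validated accuracy barely moves:

```python
# Sketch: remove a suspect variable and re-check overall predictiveness.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Two genuine predictors plus a proxy variable ('postcode') that an
# audit has flagged as a potential source of bias
X = pd.DataFrame({
    "test_score":           rng.normal(size=n),
    "structured_interview": rng.normal(size=n),
    "postcode":             rng.integers(0, 10, size=n),
})
y = (X["test_score"] + X["structured_interview"] + rng.normal(size=n) > 0).astype(int)

def predictiveness(features: pd.DataFrame) -> float:
    """Mean cross-validated accuracy of a simple 'success' model."""
    return cross_val_score(LogisticRegression(max_iter=1000), features, y, cv=5).mean()

print("all features:    ", round(predictiveness(X), 3))
print("postcode removed:", round(predictiveness(X.drop(columns=["postcode"])), 3))
```

Because the postcode carries no real signal here, dropping it costs the model nothing; in practice that trade-off is measured rather than assumed.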

London startup Applied also uses algorithms and data science to remove bias from hiring processes. Founder & CEO Kate Glazebrook explains: “Before applications are reviewed, Applied removes irrelevant information such as name, address, hobbies and education (both years and institute) which may introduce bias and detract from the detail that really matters.

“Additionally, behaviourally designed algorithms reshape how people see information to eradicate bias that can creep into assessment. For example, the platform randomises the sequence in which candidate Q&A responses are reviewed, to overcome assumptions being made based on previous comments.”
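In code, those two techniques might look something like the sketch below; the field names and data structures are invented, not Applied’s actual schema:

```python
# Hypothetical sketch of anonymised, question-by-question review.
import random

IRRELEVANT_FIELDS = {"name", "address", "hobbies", "education"}

def anonymise(application: dict) -> dict:
    """Strip fields that may introduce bias before anyone reviews them."""
    return {k: v for k, v in application.items() if k not in IRRELEVANT_FIELDS}

def review_order(applications, question):
    """Randomise the sequence in which answers to one question are read,
    so earlier answers don't anchor judgements about later ones."""
    answers = [app["answers"][question] for app in applications]
    random.shuffle(answers)
    return answers

apps = [
    {"name": "A. Candidate", "address": "…", "answers": {"q1": "First answer"}},
    {"name": "B. Candidate", "address": "…", "answers": {"q1": "Second answer"}},
]
print([anonymise(app) for app in apps])
print(review_order(apps, "q1"))
```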

Beyond the recruitment space specifically, there is a growing number of open-source AI de-biasing resources and methodologies available.

Late last year IBM launched AI Fairness 360. It provides 10 mitigating algorithms that are designed to reduce bias throughout AI systems. One algorithm, for example, gives favourable outcomes to underprivileged groups and unfavourable outcomes to privileged groups (reject option classification), and another edits feature values to improve group fairness while preserving rank-ordering within groups (disparate impact remover). Additional algorithms focus on reweighing, prejudice removal and obfuscating certain information.
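AI Fairness 360 is pip-installable as aif360, and its reweighing algorithm can be applied in a few lines. The toy hiring dataset below is invented, but the API calls are the library’s own:

```python
# Reweighing with IBM's AI Fairness 360 (pip install aif360)
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Invented hiring outcomes; 0 = unprivileged group, 1 = privileged group
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1, 1, 1],
    "hired": [0, 1, 0, 1, 1, 1, 0, 1],
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])
groups = dict(unprivileged_groups=[{"sex": 0}],
              privileged_groups=[{"sex": 1}])

print("disparate impact before:",
      BinaryLabelDatasetMetric(dataset, **groups).disparate_impact())

# Reweighing assigns instance weights that equalise weighted hiring
# rates across the two groups
reweighed = Reweighing(**groups).fit_transform(dataset)
print("disparate impact after: ",
      BinaryLabelDatasetMetric(reweighed, **groups).disparate_impact())
```

A disparate impact of 1.0 means the weighted hiring rates match, so a model trained with those weights sees a balanced picture of the two groups.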

An alternative approach to de-biasing AI is regulating it, as Scott-Parker suggests: “What’s the difference between an AI tool that stops millions from competing fairly, and a teddy bear with button eyes that a child might swallow? Answer: the consumer protection regulator pulls the toy bear off the market.”

Strong regulatory controls, however, seem a long way off. So in the meantime, it’s up to those using AI-powered tools to do so carefully.

As Glazebrook says: “While computers help us analyse and manage data, humans are still critical to ensuring that computational power is spent answering the right questions, with the right inputs, and that we make sensible inferences from it.”

This article originally appeared in the March 2019 edition of HRM magazine.


To hear more from Susan Scott-Parker, register to attend AHRI’s National Convention and Exhibition in September this year.
