The truth about AI? It’s not as free from unconscious bias as we think


Beauty is in the eye of the beholder, which raises the sticky issue of unconscious bias. Replacing humans with artificial intelligence should fix this, right? Turns out, it’s not that simple.

Beauty.AI, the first international beauty contest judged by a computer program, was meant to assess contestants on ‘objective’ factors such as facial symmetry, wrinkles, skin colour, age and ethnicity. The idea was that, without unconscious bias or human influence, the winners would be as close to ‘perfect’ as possible.

When the winners were announced, the creators were dismayed to see that few people of colour had been chosen. More than 6,000 people from 100 countries submitted photos. Of the 44 judged most attractive, nearly all were white, a few were Asian, and only one had dark skin.

Obviously, the creators did not consciously set out to create a program that favours white candidates, but Beauty.AI’s Chief Science Officer Alex Zhavoronkov acknowledged that part of the problem was that the data used to establish standards of attractiveness did not include enough minorities.

“If you have not that many people of colour in the dataset, then you might actually have biased results,” Zhavoronkov says. “When you’re training an algorithm to recognise certain patterns, you might not have enough data, or the data might be biased.”
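How a skewed dataset produces a skewed result can be shown with a toy model. The sketch below is purely hypothetical – it is not Beauty.AI’s actual algorithm, and the feature values are invented stand-ins for whatever a real system would measure – but it captures the mechanism: a program that learns its ‘standard’ by averaging its training examples will inherit whatever imbalance those examples contain.

```python
# Hypothetical sketch of training-data bias (not Beauty.AI's real method).
# Each sample is a pair of made-up feature values.

def learn_standard(samples):
    """'Train' by averaging each feature across the training set."""
    dims = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(dims)]

def score(sample, standard):
    """Rate a sample by closeness to the learned standard (1.0 = identical)."""
    dist = sum((a - b) ** 2 for a, b in zip(sample, standard)) ** 0.5
    return 1 / (1 + dist)

typical_a = [0.2, 0.8]  # typical sample from the well-represented group
typical_b = [0.8, 0.2]  # typical sample from the under-represented group

# Skewed training data: nine group-A samples for every group-B sample.
skewed = learn_standard([typical_a] * 9 + [typical_b] * 1)
print(score(typical_a, skewed) > score(typical_b, skewed))  # True

# Retraining on balanced data closes the gap.
balanced = learn_standard([typical_a] * 5 + [typical_b] * 5)
print(round(score(typical_a, balanced), 6) ==
      round(score(typical_b, balanced), 6))  # True
```

Note that nothing in the code ever mentions group identity: the under-represented group simply ends up further from the learned standard and is rated lower, which is exactly the pattern the contest produced.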

This comes shortly after Facebook took heat for firing its entire trending news editorial staff and replacing it with an algorithm. The idea was that getting rid of humans would address concerns that its coverage was biased, but so far it’s been nothing short of a disaster for the company. Without humans to moderate the content, false news stories, inappropriate topics and slurs crept into the feed.

The idea that AI isn’t as free from unconscious bias as we assume has serious implications for HR processes such as recruitment. Using computer programs rather than humans to scan applications is often proposed as a way to increase age, gender, and racial and ethnic diversity in organisations. However, programs still need to be written by humans, meaning that bias can creep in at any point.

On a positive note, artificial intelligence can learn from its mistakes. Programs make judgements based on the data their designers feed them, so when bias is discovered – as it was with Beauty.AI – acknowledging it and correcting the underlying data can gradually code the bias out of the program.

In the meantime, there are ways HR can eliminate some of these problems. One is removing application fields that ask for age, gender, race or ethnicity – and even name. Another is not looking at photos of candidates – or asking for them with the application – before inviting them to interview; one Belgian study found recruiters make assumptions about a candidate’s personality based on their LinkedIn and Facebook profiles and images.

Microsoft CEO Satya Nadella, in an article for Slate, says, “The most critical next step in our pursuit of AI is to agree on an ethical and empathetic framework for its design.” So while artificial intelligence is making recruitment easier and in many ways better, for now it still needs a human touch.

Max Underhill (Guest):

Artificial intelligence has been around a long time (and is improving). In 1994, Kellogg used it in assessing competence development to approve progression to new models. If we treat AI as a stakeholder, knowing what outcomes it can produce (and how to measure the success of its delivery), then we will have a realistic understanding of its actual contribution. In other words, we can design the “AI tool position” as we do any other contributing element, whether human or technology. The technology largely has people as its stakeholder, i.e. it’s there to produce outcomes for people. It is these outcomes we…
