AI and privacy: How HR can influence the conversation

Technology · AI · Privacy

By Girard Dorney, February 1, 2018

How should HR be talking about the critical issues of AI, automation, and privacy at work? Anthropologist Genevieve Bell offers her perspective.

If you were advising an internationally renowned tech company on how to find a stellar employee for a position they’ve never had before, would you tell them to start their search in a bar? Definitely not – and you would have missed out. Because that was the beginning of anthropologist Genevieve Bell’s move to Intel in 1998.

But after twenty years working in the highest echelons of the US tech industry, through the heady days of the early internet and the rise of smartphones, Bell has returned to Australia. She is now a professor at the Australian National University, and an upcoming speaker at the AHRI National Convention and Exhibition.

Even before the 2017 Boyer Lectures, she had established herself as a fascinating voice in the world of technological development, one providing historical and cultural perspectives to conversations that all too often lack both.

HRM talked to her about HR professionals in the 21st century, and how they can shape the conversations in their organisations around changing technology.

The following is an abridged version of an interview that will feature in the March edition of HRM magazine.

You’ve spoken about how privacy is being lost in our daily lives, as our data is given to companies and used in ways we have no control over. Do you think that same dynamic is taking place in our work lives?

If you think about the images of workplaces and work, the notion that companies are keeping track of employees is a longstanding anxiety. Think about punching into work – clocking in with time cards and time sheets. Most of that management of a workforce was highly visible. There was a card in a machine, there was a door you walked through.

When [Henry] Ford was building up his workforce in Dearborn, Michigan, he had a thing called the Five-Dollar Day program. It was this new form of payment for work – it was a highly paid job, five dollars a day was a lot of money back then.

But the money only came to you at the end of every fortnight if you agreed to have someone come to your house and ask questions about the food you were serving, whether you were drinking or not, whether your kids were in school, whether you were speaking English at home – it was this incredible surveillance of his workers. In the context of creating what he thought was an appropriate workforce.

So it’s interesting, we’ve had ideas of how to navigate through those things for a long time. And they’ve always been complicated.

Now imagine a world where many of the ways you are being managed are invisible to you. I think about some of the practices in Silicon Valley, where when you sign the employee contract with some of the companies, one of the things you agree to is that your company will scrutinise your social media profiles. And they’ll track what you’re posting on Facebook as part of your work.

That’s a very different way of thinking about what being good at your job might look like. So how we think about our jobs hasn’t necessarily caught up with the ways our jobs really are.

You’ve talked before about how our current conversation about AI is all doom and gloom for cultural reasons. And that in, say, Japan they just don’t talk about evil robots coming to take their jobs. So how do you think HR should be influencing the conversation around AI and automation?

The first thing is doing a better job of representing the research that already exists. So there is a body of research that often gets cited by way of explaining how it is that all these jobs are going to go away.

For instance, the “30 per cent job loss by 2020” statistic that always gets cited. But if you actually go and look at the original study, it says that certain tasks are going away and that the automation of those tasks will change the nature of jobs. And that some jobs have more automatable tasks than others.

So part of it is: how can HR do a better job of telling a more accurate story that reflects the research that’s already been done? Is it the case that jobs will go away? Well, some might. But it’s more likely that certain tasks will be augmented by technology, and we’ll still need to have people in the equation.

Part of that is figuring out what are the things that we will need that will remain exquisitely human. So, let’s say certain kinds of negotiations are better with computation, others are not. The challenge there is: who is setting those terms?

Would an algorithm be useful for ensuring we had pay equity? We know there are long-standing gender pay gaps in Australia. So maybe if you used an AI engine, it could ensure that everybody got paid fairly. Because it wouldn’t be affected by issues such as the way men negotiate being different from the way women negotiate – so those forms of micro-inequity could disappear.

That would be excellent, until you start to ask the question of how you’re going to program that AI object to do that? If you used existing pay data, that would just enshrine inequities where they currently exist. So that wouldn’t be any good.

If you did it by asking the person to answer a series of questions, how would you write the questions in such a way that they did not reveal information that could later have a bias written on top of it? So in fact there are some really interesting HR-related questions about how we start to build out this technology in the workforce, to ensure that we are creating the kind of world we want to work in.

One of the questions companies and HR should be asking is, what is the result you’re looking for? Is it about efficiency? And if so, what’s the efficiency being gained, the time spent processing an employee request or complaint? But should that even be the measure of success, or should it be whether employees feel like they’re being appropriately serviced by HR – is it the amount of time it took, versus the fact that I was treated like I mattered?

And those questions may get you to very different notions of where you use AI versus where you don’t.

Gain fresh insights from cultural anthropologist and technologist Genevieve Bell and other global thinkers and business leaders at the AHRI National Convention and Exhibition in Melbourne on 28–30 August 2018. Early bird registration is now open.
