It’s supposed to be quid pro quo: we give up some of our privacy and, in turn, our lives are made easier. But is there a tipping point? And even if we wanted to stop this process, could we?
At this year’s Build developers conference in Seattle, Microsoft put on one hell of a show. Through videos and live demonstrations they revealed a future of work that Gizmodo has already labelled “creepy”.
As reported in Fast Company, Microsoft showed off “intelligent edge”, a network of cameras and sensors that feeds into an internet-based AI and analyses the behaviour of employees 24/7. Most of the scenes took place on a construction site. A camera spotted a staff member without safety goggles in a shop, so the AI contacted their supervisor. The moment a hazardous chemical was spilled, image recognition software knew what to do. Something as specific as an employee taking a selfie with a jackhammer? Robocall the manager.
“The solution is running more than 27 million recognitions per second across people, objects, and activities,” said an intelligent edge presenter.
It’s a stretch of the imagination you’ve probably already made: what if this technology were applied to something other than safety? Remember, this is a developers conference, and Microsoft is showing off a technical capability. It’s up to the developers to make use of it.
What if, instead of looking out for hazards, they designed a system to maximise productivity? It could track higher than average trips to the bathroom or understand someone’s spouse is distracting them at their desk, and before you know it, your manager has been alerted and is right there asking you what’s wrong. “Are you sick? Is everything okay at home? All good? Get back to work then.”
Microsoft is more than aware of the danger. CEO Satya Nadella told the developers, “What Orwell prophesied in 1984, where technology was being used to monitor, control, dictate, or what Huxley imagined we may do just by distracting ourselves without any meaning or purpose – neither of these futures is something that we want.
“The future of computing is going to be defined by the choices that you as developers make and the impact of those choices on the world.”
But can it be stopped?
It’s not just about whether developers turn technology to sinister ends, it’s on everyone – employees, executives and HR professionals – to understand where the line is. One of the difficulties with finding that line is that privacy isn’t being breached in a single, obvious moment. It’s being negotiated away piecemeal.
How many of us who were initially sceptical of technology now rely on it to keep track of both work and social events? People are already living with devices such as Amazon’s Echo/Alexa, which sit in their homes listening at all times. Others have been shocked to find they’d bought TVs that can be hacked to do the same. Samsung even warned users – who would be in their own homes – not to say things in front of their TVs that they wouldn’t repeat in public.
Also demonstrated at Build were new features of Microsoft’s voice assistant, Cortana, which can now integrate more fully with your job. In one demo, it anticipated bad traffic, instantly informed work that the employee would be late for a meeting, and automatically dialled her into a different ongoing meeting.
On the one hand, this sounds like the height of convenience. No more frantic phoning of colleagues. On the other, how much do you really want work to be present on your home devices? Could you be venting about your boss to a family member, only for your phone to start texting your manager your private complaints?
Some have claimed that our privacy is a very reasonable price to pay for the seamless convenience of modern devices – as one Gizmodo writer put this argument, “Sorry, George Orwell. I don’t give a f**k”.
But does this rationale make sense in the world of work? The kind of technologically enabled privacy breaches we’re talking about are most likely to be one-way and inherently unfair: if you want the job, you put up with the monitoring. And being watched doesn’t mean everyone will behave. The Uber sexual harassment case from earlier this year is evidence that organisations can be aware of harassment – they have “surveillance” on it – and decide that it’s okay.
How will HR manage the future of privacy in their organisation? We would love to hear from you.