AI: Should you be hopeful or terrified?


Two tech giants have been arguing about the future of AI. One is optimistic, the other is… let’s go with worried.

Recently Elon Musk, CEO of Tesla and SpaceX, made news by tweeting “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.” This is not the first time Musk has issued a warning about AI, though the thing he’s worried about is probably not what you would think. But more on that in a bit.

A month ago Mark Zuckerberg, CEO of Facebook, pushed back against Musk’s repeated admonitions. During a Facebook Live broadcast he answered a question that referred directly to Musk’s previous warnings about AI, and gave his own opinion.

“I am optimistic,” the Facebook founder said, “And I think people who are naysayers and try to drum up these doomsday scenarios – I just don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.

“One of the top causes of death for people is car accidents still and if you can eliminate that with AI, that is going to be just a dramatic improvement.”

(Want to learn about the latest in HR tech? Be a part of Australia’s largest HR exhibition for FREE. Register online or onsite to visit the AHRI Exhibition at the ICC Sydney on 22 and 23 August.)

So what is Musk’s concern? There are numerous dire predictions about AI, some of which HRM has written about before. They include many things of interest to HR such as:

  1. AI streamlining repetitive aspects of our jobs.
  2. The complete automation of certain jobs (and perhaps all jobs) – a fear recently expressed by the CIO of Westpac.
  3. Bot armies that can sway public opinion (think fake news in last year’s US presidential election).

Musk might be worried about such potential futures, but that’s not why he has referred to the development of AI as “summoning the demon”. And this gets to the point of how this spat is, to some extent, two people talking past each other.

While Zuckerberg is cheerily hopeful about the AI that’s just around the corner – the one that will drive our cars and optimise our work – Musk is trying to warn humanity about the development of a superintelligence that will make us second-class citizens, or kill us all.

Wait, a superintelligent demon that will kill us all?

Yes, something like a superintelligent demon that will kill us all.

The concept is too complex to explain in detail here, but some experts believe we could create such a thing in the not too distant future. This superintelligence would be a machine capable of increasing its own intelligence exponentially, without the aid of humans, until it’s so smart that in no time it will view us the same way we view single-celled organisms. Then it will make itself even smarter.

It will not relegate humanity to second-class status or render us extinct because it’s evil or because it hates us (it won’t understand what those words mean to humans, or our values in general); it will do so because we’re a nuisance – though again, it won’t think in those terms.

All this being said, some experts believe we can shape the superintelligence to be benevolent and to provide us all with immortality. Which is nice.

It definitely sounds crazy – and there are AI researchers who doubt it can ever happen. But on the other hand, the internet and supersonic flying transport machines would’ve sounded insane to medieval people. One of Musk’s fears is that any developer working on AI could accidentally unlock the holy grail, and give their machine the capability to reprogram itself for greater intelligence. And then it’s off to the races.

Think about it this way: if you heard that companies around the world were developing something that might accidentally turn into a nuclear bomb, even if that chance was remote, you would want their work regulated.

This is the argument of Musk, a man who owns and is invested in several businesses focused on AI. And of Zuckerberg he tweeted, “I’ve talked to Mark about this. His understanding of the subject is limited.”

(For a more thorough breakdown of the theory behind an AI superintelligence, one that also explains why people are predisposed to optimism and to not believing in things they’ve never experienced, read this Wait But Why blog post.)

… and back to our regular programming

Leaving aside the “demon”, what are the latest predictions about AI’s impact on the future of work?

In May, a survey of AI researchers and experts found that a majority believe machine intelligence will outperform human intelligence in many activities within the next two decades. These activities include translating languages, driving a truck, writing a high-school essay and working in retail. The experts predict that AI will write a bestselling book by 2049, and that all human labour will be fully automated within 120 years.

If you think of this future as dark, and you’re looking for hope in this report, you might find the following amusing. The last job these AI researchers believe will be taken over by AI? The humble occupation of being an AI researcher. There’s some optimism bias for you.

Photo credit: Abode of Chaos / CC BY

John Carruthers
Excellent article. I’m with Musk – mainly because I understand how people are predisposed to an optimism bias and how that lets us down when we face big but non-proximate problems (like climate change, AI or financial crashes). Readers wanting an economic and social investigation of how AI will impact us should read Ryan Avent’s acclaimed “The Wealth of Humans”. Avent is a senior writer at The Economist.

More on HRM
