Recent research has found that most global organisations are considering a ban on ChatGPT at work. However, employee sentiment checks indicate that this might not be the wisest move.
Leaders worldwide are toying with the possibility of prohibiting the use of ChatGPT and other generative AI platforms, according to a recent report by BlackBerry.
The research, based on surveys of 2,000 IT decision-makers across Australia, the US, Canada, the UK, France, Germany, the Netherlands and Japan, found that 75 per cent of employers are currently considering or implementing bans on using this technology in the workplace.
The top reasons cited for banning generative AI were its perceived potential risks to data security and privacy (67 per cent) and potential risks to corporate reputation (57 per cent).
The pace of change when it comes to AI developments means that many employers are struggling to understand and mitigate emerging threats, making a blanket ban on the technology an attractive option as they gain a better handle on its risks.
While this reaction is completely natural, it ignores the fact that the pace of AI progress shows no sign of slowing down, says Dr Sean Gallagher, Director at the Centre for the New Workforce and AHRI Future of Work Advisory Panel member.
“AI was expected to reach the median level of human creativity at around 2037. It achieved that this year. It wasn’t expected to be able to write at a top-quartile human level until 2050. It’s now expected to get there next year,” he told the audience at AHRI National Convention and Exhibition last month.
“This is the most disruptive technology we have probably ever seen in our lives.
“Bill Gates says AI, [particularly] generative AI, is going to be as powerful as the mobile phone or the internet. Others are saying it is even more powerful than that, comparing it to electricity: a general-purpose technology that is going to transform everything,” he says.
What are the consequences of a ChatGPT ban?
If leaders choose to ban AI and push the use of these platforms into the shadows, they risk creating a culture where employees use this technology in secret, says Gallagher.
He cites a US study which found that seven in 10 workers who use ChatGPT at work are not telling their employers about it. While Australian workers reported being more open about its use, the proportion of Aussie employees who would share their use of AI with their supervisors was still less than half (43 per cent).
“AI was expected to reach the median level of human creativity at around 2037. It achieved that this year.” – Dr Sean Gallagher, Director, Centre for the New Workforce and AHRI Future of Work Advisory Panel member.
Given the accessibility of ChatGPT and other generative AI platforms, Gallagher believes it has become extremely difficult for leaders to effectively enforce a blanket ban on their use.
He warns that leaders who prohibit these platforms are likely to see workers simply using them on their mobile phones and copying the output onto their computers.
Some web developers have created apps that can theoretically detect text that has been written by ChatGPT. However, users can evade these programs fairly easily by paraphrasing a few words or sentences in AI-generated text.
Think you can tell the difference between human- and AI-generated content? Take HRM’s quiz to find out.
Even in the education sector, where concerns are rife about AI’s impact on academic study, policymakers seem to be coming to the conclusion that knee-jerk bans on AI tend to be ineffective, counterproductive or both.
Currently, most students are banned from using artificial intelligence applications in public schools in all states apart from South Australia. However, in July this year, Federal Education Minister Jason Clare suggested that the ban on public school students using ChatGPT and similar tools may be reversed next year, pending the development of a draft framework addressing concerns about plagiarism and impacts on student learning.
“This is the sort of thing that students are going to need to learn how to use properly,” Clare said in an interview with Sky News.
“You can’t just put it away and assume that students won’t use it. But at the same time, I want to make sure that students are getting the marks they deserve, and can’t use it to cheat.”
For both educators and employers, a structured approach that considers and incorporates AI rather than outlawing it could be the smartest move for a digital-first, innovative and transparent culture.
Create your own guardrails
For leaders concerned about the risks of generative AI, a much more effective approach to mitigating them is creating robust policies and training guidelines for their use, suggests Gallagher.
He says one factor behind widespread employee concern about AI is that it is not openly embraced at work. Recent research from the University of Queensland and KPMG found that 75 per cent of people were concerned about the risks of using AI at work.
HR therefore needs to be more proactive in helping employees explore and implement these models. This will assist in quelling fears about the technology and fostering a culture of openness and curiosity about AI.
To ensure that company policies around AI are enforced effectively, the tech should be implemented slowly from the bottom up, says Gallagher. In other words, employers should begin to test it in small, repetitive and routine tasks while monitoring any risks or challenges.
The future of human work in the age of AI
As well as being hard to enforce, a ban on generative AI also deprives organisations of the significant productivity gains it can offer. By taking on routine, time-consuming tasks, this technology can free up time for employees to focus on more impactful work.

Gallagher points to an MIT study involving two groups of professionals completing writing tasks, where only one group was allowed to use ChatGPT to help them complete the task.
The participants who used ChatGPT completed tasks 11 minutes faster, with an 18 per cent increase in output quality.
“Just as importantly – and this is a key take home message for HR leaders – is that when empowered to use these tools, workers reported much higher levels of job satisfaction and much higher levels of self-efficacy, meaning they felt less threatened by it and they felt more empowered,” says Gallagher.
“When empowered to use these tools, workers reported much higher levels of job satisfaction and much higher levels of self-efficacy.” – Dr Sean Gallagher, Director, Centre for the New Workforce and AHRI Future of Work Advisory Panel member.
To reap these benefits in a secure way, he suggests allowing time for experimentation and peer-to-peer training with these tools, which will help create a psychologically safe space for continuous learning as AI’s capabilities continue to accelerate.
With this in mind, organisations adopting generative AI into their operations should be thinking about how they want their employees to spend that additional time, he says.
Gallagher’s vision of the future of human work in the AI age boils down to four main areas:
1. Navigating uncertainty
AI is most effective when fed plenty of information about past data and patterns, which can help it complete repetitive tasks with ease. However, when an unexpected or unprecedented event presents itself, humans are best-placed to step in and take charge.
“We can figure out the future when we’re working together with lots of other different humans who have different perspectives and vantage points,” says Gallagher.
2. Abstract thinking
For the same reasons, humans’ propensity for conceptual thinking makes their input preferable to AI’s when it comes to thinking and working outside the box.
Relieving workers of mundane, repetitive tasks and replacing them with activities that require abstract thinking can also help contribute to a strong culture of creativity and curiosity.
3. Profound understanding of people
Now that AI can take care of the less-human-centric parts of work, humans can place more focus on the nuances of relationships with various stakeholders and how to strengthen them.
“Figuring out the change in demand patterns of your customers and your clients, and how you’re going to meet them in the context of everything that’s going on… that is incredibly valuable and incredibly human work.”
4. Context, meaning and judgement
While AI can be a useful tool to aid decision-making, this should ultimately come down to human judgement, says Gallagher.
“It should go without saying… Humans make the decisions. Every single word that any person puts on a page, they have to own it – every line of code, every image. Even if generative AI has done most of the work.”
Focusing training and culture on these four areas can allow organisations to increase their capacity for strategic work and protect their people’s sense of job security.
“Fortunately, these are the highest-value-creating activities in every organisation,” he says.
As leaders consider their approaches to ChatGPT, they should keep in mind that the future of work is inevitably intertwined with AI, he says. The challenge lies in integrating it thoughtfully for the benefit of businesses and employees alike.
Need help navigating workplace change? AHRI’s short course will arm you with the skills to understand change dynamics at an individual, team and organisational level.