Like many other emerging technologies, ChatGPT has the ability to make work easier, but it could also make things a whole lot more difficult. How should HR approach conversations about the use of this technology at work?
By now you’ve probably heard of ChatGPT – the powerful AI language model that spits out human-like text based on the prompts it receives.
While the platform is yet to completely nail complex styles of writing (although it’s not far off), it has blown users away with its ability to produce straightforward content at warp speed.
Whether it will become a staple in organisations remains to be seen, but it’s clearly making its mark now, hitting over a million users in its launch week (for context, it took Twitter and Facebook 24 and 10 months respectively to reach over one million users).
So what factors should we consider before introducing this AI-enabled technology into our daily work practices? And what do we put at risk by relying on it too much?
The power of ChatGPT
For those who’ve not yet played around with ChatGPT, here’s an example of what I was able to do in under a minute.
Prompt #1: Write one sentence about what HR professionals should consider when implementing ChatGPT.
“HR professionals implementing ChatGPT should consider the ethical and legal implications, such as data privacy and bias in the model’s responses, and ensure its smooth integration into HR processes and effective support of HR functions.”
Prompt #2: Now rewrite this as if it was a Shakespeare play:
“When ChatGPT doth grace the HR realm,
Considerations of ethics and law at the helm.”
Prompt #3: Now rewrite this as if it was a TV Western:
“As HR draws near to implementin’ ChatGPT, they gotta keep a sharp eye on the legal trail and make sure the AI don’t rustle any ethical cattle.”
Prompt #4: Now rewrite this as if it were part of a movie trailer:
“In the HR world, a revolution is brewing. The arrival of ChatGPT brings a new era of efficiency, but at what cost? As HR professionals face the challenge of integration, they must navigate the perils of ethical considerations and data privacy. Will they rise to the occasion and lead their organisation into the future, or will they be left behind? Find out in the thrilling tale of HR and ChatGPT.”
It’s not exactly Pulitzer Prize-winning content, but it’s pretty impressive. And this is just version one.
“This will definitely become a tool that’s useful for a wide range of professionals. It’s going to become part of our daily life,” says Dr Marcel Scharth, lecturer in Business Analytics at the University of Sydney Business School.
“ChatGPT and similar technologies are definitely here to stay. The capabilities are only going to improve.”
However, we’re still pretty far from this technology becoming as ubiquitous as some people are predicting it will be.
“When we look at tech hype cycles, we’re still looking at least five years, but probably 10+ years, until something like this becomes mainstream,” says Aaron McEwan FAHRI, Vice President of Research and Advisory at Gartner.
“My sense is that it will probably be used within HR for things like creating draft policies, but the take-up will be higher outside of work. You might find that freelancers, without being hamstrung by a legal department, are using this technology, learning from it and vastly accelerating the value that they can add.
“I think a lot of people will experiment with it, and the technology will get better because of this, but there are a whole lot of challenges before the wide-scale adoption [of ChatGPT],” he says.
“Also, technology always moves faster than legislation, so there will be that lag period.”
Plus, McEwan doesn’t think most businesses are ready for this technology yet, partly because larger businesses are conservative by nature.
“[Large organisations] will take time to adopt this technology. Only 16 per cent of leaders outside of the IT function have high digital dexterity,” he says.
This means the key decision-makers might be wary of signing off on the use of platforms like ChatGPT.
And even when it does reach its peak, it won’t destroy jobs, he says. It will just change them.
“[Jobs] will just grow in complexity and value,” says McEwan.
Scharth’s important caveat to this point is that “AI won’t take your job, but someone who knows how to use it well might.”
The HR use case
“I’m equally fascinated, concerned and excited by what this tool can offer,” says Gillian Kelly, Director and Head of Talent Marketing at Outplacement Australia.
Kelly can see ChatGPT assisting in the recruitment process: writing job descriptions, designing interviews, engaging candidates, refining interview questions and identifying skills based on candidates’ work experience.
“The challenge is that all these things come with risks based on the limitations of the tool and how it’s used,” she says. “If people don’t do their own fact-checking or ask the wrong questions, it may lead them in the wrong direction.
“If it’s used without substantial input and editing, candidates may end up with resumes and interview answers based on ChatGPT content rather than their own valid insights. This [could] add a new level of complexity to hiring.”
It will be critical that HR professionals and those charged with recruiting continue to apply a rigorous human lens to all these processes, she says. AI-enabled tech should be viewed as the first draft – the foundation upon which a human can build with their own unique insights, skills and lived experiences.
“AI won’t take your job, but someone who knows how to use it well might.” – Dr Marcel Scharth, lecturer in Business Analytics at the University of Sydney Business School
Scharth says ChatGPT could also be used as a brainstorming tool. For example, you might say, ‘Give me ten ideas for an employee engagement program.’ At this stage, the responses might be rudimentary, but they’ll act as a useful platform to build on.
Scharth’s students are already using this technology. Perhaps you might feel concerned that this could impede a future generation’s ability to engage in deep research, creativity and strategic thinking (that’s my take, anyway). But Scharth is more optimistic.
“They’re using it to answer questions about their studies. They can copy and paste a textbook into ChatGPT and ask it to come up with questions for them so they can study it. It caught my attention quickly because it was correctly answering a lot of my exam questions,” he says.
“I think it’s a very powerful learning tool. For example, we can undoubtedly say that having access to the internet makes it so much easier to study.”
Again, the realities of this are unknown. Perhaps it will impede creativity or researching abilities, but both Scharth and McEwan feel confident it won’t ever replace the human touch.
“It’s very nascent and one of the things it doesn’t do well yet is offer true insight,” says McEwan. “To me, the cool part about it is that it takes away some of the drudgery. Like a lot of the emerging future of work tech, it tends to push things up the value bar.”
Other risks to keep in mind
The argument that McEwan and other experts are making is that humans will soon be freed up to work on the big-picture stuff.
“Time and energy are finite resources, so you probably want to use them on that analysis and strategic thinking,” says McEwan.
He likens ChatGPT to a calculator, which has made life easier for the many people who find mathematics mind-boggling (myself and McEwan included). He says, “Do I really need a calculator to do my job? Probably not.”
This is true. The short-term use case is obviously very attractive, especially for a function like HR, which is constantly trying to balance a workload of short-term tasks (writing policies, recruiting, compliance and risk management, etc.) with longer-term challenges, such as designing optimal work environments for the future or developing asynchronous work styles.
But let’s consider 10, 20, 30 years into the future. How might this ‘shortcut’ method to researching, writing and generally getting things done impact us negatively?
For starters, it could perpetuate our existing culture of hyperproductivity. If AI is ‘freeing people up’ by taking away the menial tasks, will all employers really give this time back to employees to think strategically, or will they simply give them more work? If someone can now produce 10 as opposed to five widgets of work, what’s stopping an employer from demanding 15, 20 or 30? The risk is that technology could create a do-more-with-less culture on steroids.
The introduction of email was meant to free us up from the time-consuming process of writing, typing and faxing information, and it did. But because it was so effective and efficient, we’ve simply filled that ‘free’ time with… yep, you guessed it, more emails. Research from 2020 suggests some employees spent 6.5 hours per day reading and replying to emails.
“We’re already in a culture of hyperproductivity,” says McEwan. “So the question is, will [AI tech] exacerbate that? I think there is the potential for that.
“But I also think the pandemic has woken people up to this problem. For whatever reason, we had convinced ourselves that work was the centre of our lives. Collectively, in many parts of the world, that’s being pushed back on.”
Along with wellbeing concerns, there are also potential legal risks to keep in mind, such as copyright challenges and intellectual property complexities – who owns a ChatGPT article? The person who input the request, the machine that wrote it or the many, many authors of the content that the AI has trawled through? (Keep an eye out next week for an article unpacking some of the legal implications of ChatGPT.)
“And then there’s the risk of PR disasters,” says McEwan. “Imagine an employee gets the program to write something for them and then is not diligent enough to proofread it correctly.”
Accuracy is another common concern, says Scharth.
“Generating truthful information is not built into the algorithm. It’s not completely reliable. If the workforce isn’t aware of that… it could unwittingly generate poor-quality work, such as software with bugs. It’s not something to be trusted blindly. If it generates inaccurate information, you have no way of knowing.”
And as to whether or not this piece has been written using ChatGPT, you’ll just have to take my word that it’s not. Although, really, you’ll never know for sure.
Need help navigating workplace change? AHRI’s short course will arm you with the skills to understand change dynamics at an individual, team and organisational level.