An Australian HR consultancy has added its first AI employee, ‘Xai’, to the organisational chart. But it needed to put guardrails in place for Xai to work cohesively with its human colleagues.
There are plenty of reasons for HR to get excited about generative AI technology, such as ChatGPT. For a function that often straddles the administrative and strategic sides of work, this technology could free up HR professionals to sink their teeth into bigger-picture projects.
However, we’re perhaps a fair way off from this technology becoming as widespread as some people are suggesting.
Gartner predicts that we could be at least five years away from this technology becoming commonplace in the average large organisation. However, some companies have already entered experimentation mode.
HumanX HR, an Australian HR agency, recently ‘hired’ Xai to join its team. Xai is an AI bot, but it still has a spot on the org chart and a job description (JD).
Tahnee McWhirter, Partner at HumanX HR, says the JD is “critical” in bringing a level of accountability to the role and making it clear to the human workforce how Xai should be incorporated into their workflow.
“The JD describes the purpose of the role and sets out measures for success. All these factors are important to scope, clarify and monitor,” says McWhirter. “By placing them in the context of a job description, it makes it clear that we’re setting and refining expectations – not policing with definitive rules. We don’t actually know where the boundaries lie. We’re playing and testing.”
Is there a place for AI on your org chart?
Xai is charged with tasks such as drafting ideas for communications to employees (e.g. newsletters and recruitment materials), data analysis, report generation and proposing strategies to improve HR processes.
However, it’s important that guardrails are put in place. For example, none of Xai’s work is to be sent out unless it has had a human eye on it, and staff aren’t allowed to rely on Xai’s information alone.
McWhirter treats Xai as she would a graduate employee – part of the team and a useful pair of hands, but in need of consistent oversight. Xai is also barred from certain tasks, such as handling confidential employee or client information, intellectual property or employee relations materials.
“[Xai is] someone who needs work reviewed, guidance and lots of feedback,” she says. “Our clients rely on us for creative, bespoke and compliant work that’s designed for humans. Xai is here to create space for us to do our human work.”
There was also a change management piece for McWhirter to factor in: how employees would react to their new AI colleague.
“Some in our team approach Xai with a healthy dose of skepticism, and others need to ‘get to know’ Xai purely from a tech-comfort point of view. The beauty of this trial is that feedback is encouraged, so challenging questions, trepidation and even fear are acceptable responses.
“The expectation is that the team will engage in respectful and healthy conversation to solve challenges, or use the trepidation to fuel further inquisition. Positioning Xai as a team member that we need to maintain a level of respect for ensures we stick to above-the-line behaviours and don’t fall into the trap where fear leads to closed-mindedness.”
Risks to keep in mind
Even though McWhirter and her team are seeing benefits from using Xai, there are myriad risks to keep in mind when engaging with AI technology.
For starters, it could perpetuate our existing culture of hyper-productivity. If AI is ‘freeing people up’ by taking away the menial tasks, will employers really give this time back to employees to think strategically, or will they simply give them more work?
For example, while the introduction of email was meant to free us from the time-consuming process of writing and faxing information, it was so effective and efficient that we’ve now filled that ‘free’ time with… yep, more emails.
“Smartphones and emails created a norm of around-the-clock responsiveness. So when a bot delivers its output quickly, the employee may feel the pressure to immediately action it,” says Neomal Silva, Chief Engagement Officer at Neomal Silvas Meditation.
Silva says loneliness is another concern to keep in mind.
“If you’re interacting with AI, rather than a human assistant, work may feel lonelier. Also, as managers get used to giving the bot an input and getting a fast output, the risk is that they inadvertently treat their employees in a similarly transactional way. This can adversely impact employee morale and wellbeing.”
On the positive side, Aaron McEwan FAHRI, Vice President of Research and Advisory at Gartner, says these emerging technologies will also give us important data that could help employers optimise work.
“[We need to know] when we need to stop being innovative because we’re exhausted and move on to doing some admin. These tools will recognise when we need to stop.
“My hope is that we get to a place where the leaders of organisations have dashboards that tell them that the best way to run a workplace is to not work [employees] to the bone and throw productivity tools at people. It’s to give them time and space to think.”
If we become too reliant on AI platforms at work, their use might creep into sensitive HR areas, such as logging OH&S matters, says Silva.
“If you’re dealing with a stressful matter like bullying in your team, wouldn’t you rather run it by an empathetic human, rather than get the policy from a soulless bot?”
There are also potential risks for your customer experience, he adds.
“If organisations use ChatGPT in customer service, companies might need to retain only a small number of customer service personnel who deal with queries the bot can’t answer.
“However, by that stage of the conversation, the customer, having perhaps tried several times to have the bot answer their query, may be frustrated. Consequently, [human] customer service workers risk burnout, deteriorating mental health, increased sick leave and higher turnover rates [due to dealing with an increase in annoyed customers].”
Think you can tell the difference between human- and AI-generated content? Take HRM’s quiz to find out.
Prepare for the inevitable
“AI will be part of our workplaces, so HR should prepare for that,” says Silva. “[This means creating] training plans, policies and helping employees handle the transition via upskilling, reskilling and education and wellbeing initiatives.”
As with other conversations about emerging technologies – the metaverse, Web3, etc. – it’s important that HR is curious to learn more and willing to apply a human lens to its implementation.
But you don’t need to be an expert, says McWhirter.
“We’d rather be a part of the conversation than following the pack. Our intention was to empower the team to engage with AI in a safe environment, where we have the opportunity to set a framework they can play in,” she says.
“Before everything hides behind a paywall, play. Set your principles to ensure you engage with AI ethically and pilot some simple use cases to understand how AI can create space for you to do high-value human work.
“The challenges will lie in understanding how humans and AI will work together – not how we will ‘use’ AI, but how we will engage with it. Policies and legislation will come in due course, but more importantly, we need to develop the mindsets to achieve an ethical culture. From an HR perspective, overcoming bias, maintaining creativity and managing confidential information will be the pressing challenges to solve.”
As to whether this piece has been written using ChatGPT, you’ll just have to take my word that it wasn’t. Although, you’ll never really know.
A longer version of this article first appeared in the May 2023 edition of HRM Magazine.