Australia needs better AI regulation, says Human Rights Commission


As AI grows more powerful and prolific in our workplaces, the Australian Human Rights Commission has called for greater government regulation to manage its risks.

In response to the government’s recent Supporting Responsible AI discussion paper, the Australian Human Rights Commission (AHRC) has proposed stricter regulations to manage the risks associated with artificial intelligence.

The original discussion paper, published by the Department of Industry, Science and Resources, emphasises the huge opportunities for AI to boost economic and social outcomes. It cites McKinsey’s estimate that AI and automation could add between $1.1 trillion and $4 trillion to the Australian economy by the early 2030s.

However, the paper also acknowledges the potential for AI to be employed for harmful purposes, such as creating deepfakes and misinformation. It also lays out other challenges such as privacy concerns, a lack of transparency and algorithmic bias.

As AI continues to evolve and permeate our daily lives, these hazards are bound to escalate if left unchecked, said the AHRC in its submission. And AI’s increasing integration with neurotechnology and metaverse/virtual reality technologies will only compound its potential for harm.

The AHRC has put forth a series of recommendations to the government for better AI regulation, with many tied to the use of AI in the workplace.

Some notable recommendations include:

  • Privacy and data protection: The government should explore alternative models of privacy and data protection that don’t place the primary responsibility on individuals to safeguard their data.
  • User protection: AI chatbots should have robust safeguards to protect users, with testing to ensure they do not produce harmful responses.
  • Misinformation and disinformation: Clear requirements and pathways should be established for organisations to identify and report suspected misinformation and disinformation.
  • Education and training: Greater investment is needed in training government, private enterprise and consumers on the safety and limitations of AI products and how to scrutinise AI-informed decisions or recommendations.
  • Environmental impact: Organisations deploying AI should report on the environmental impact of their AI-related work.

These recommendations reflect growing public concerns about the knock-on effects of incorporating AI into our ways of working.

HRM recently spoke with Tracey Spicer, award-winning journalist, broadcaster and author of Man-Made: How the bias of the past is being built into the future, to unpack some of the key issues these recommendations are trying to address and how employers can proactively regulate AI within their own organisations.

Proposed measures to eliminate bias

Among the 47 recommendations put forward by the AHRC, a recurring theme is the potential for bias to creep into AI-powered software.

The nature of machine learning means AI bias tends to get worse with time, says Spicer. 

AI’s output heavily relies on the data it is trained on; when the input dataset is limited or skewed, the resulting bias becomes increasingly ingrained and entrenched within the software.

“[This] can be a matter of life or death,” she says. “For example, bias in AI used in a hospital setting could misdiagnose a person of colour if it’s been trained on a predominantly white dataset. In banking, algorithms could reject home loan or credit card applications from women, while AI hiring tools may sideline CVs from people living with disabilities.”

She warns that ChatGPT, one of the most popular and prolific AI technologies to emerge, is subject to many of these risks.

“[It] was predominantly tested on young white men, before being unleashed on an unsuspecting public,” she says. “The trend for tech giants to release products without proper testing is really problematic.”

Spicer emphasises the need for robust internal review processes for each piece of AI software being used in an organisation.

This is especially crucial when AI algorithms are used in the recruitment process. If historical data shows a bias towards certain demographics, such as a preference for male candidates or applicants from particular educational backgrounds, the AI system may learn these biases and replicate them in the selection process.
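To make this mechanism concrete, here is a minimal, hypothetical sketch in Python. It uses synthetic data, not any real vendor’s system: a model is trained on past hiring decisions that favoured male candidates independently of skill, and it ends up scoring two equally skilled candidates differently.

# A minimal, hypothetical sketch (synthetic data, not any vendor's actual
# system) showing how a model trained on historically biased hiring
# decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic history: one genuine skill signal, one protected attribute.
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Past decisions favoured male candidates regardless of skill.
hired = ((skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

# Training on those decisions bakes the historical preference into the model.
model = LogisticRegression().fit(np.column_stack([skill, is_male]), hired)

# Two candidates with identical skill, different gender.
candidates = np.array([[0.5, 1], [0.5, 0]])
print(model.predict_proba(candidates)[:, 1])  # the male candidate scores higher

The point of the sketch is that the problem starts with what the training labels encode, which is why auditing the data matters as much as auditing the model.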

“Bias audits are crucial. They need to be done early and regularly, because the AI can deepen bias over time.

“If the automated hiring platform has been created by a third-party vendor, you need to ask some questions.”

These questions might include: 

  • What did the dataset used for this product look like? 
  • Was the dataset cleaned prior to use (i.e. purged of incorrect, corrupted, misformatted, duplicate or incomplete data)?
  • Who is building these algorithms/programs? And what biases might they have?
  • Who is responsible for ensuring platforms are assessed for bias? Is that HR, IT or the managers utilising the platforms?
  • Are there multiple sources of data feeding into this platform?
  • How can we make this product more inclusive?

“I would recommend that [the bias audit] be done in an external manner, to reduce unconscious bias from within,” says Spicer. 
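One way to make such an audit concrete is to compare selection rates across groups and flag large gaps. The sketch below applies the ‘four-fifths rule’ benchmark (a common threshold drawn from US EEOC guidance) to invented screening data; the column names and numbers are illustrative assumptions, not output from any real platform.

# A minimal bias-audit sketch: compare selection rates across groups.
# Data, column names and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def adverse_impact_ratio(df, group_col, outcome_col):
    # Selection rate per group, divided by the highest group's rate.
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical shortlisting outcomes from an automated hiring platform.
outcomes = pd.DataFrame({
    "gender": ["male"] * 50 + ["female"] * 50,
    "shortlisted": [1] * 30 + [0] * 20 + [1] * 18 + [0] * 32,
})

ratios = adverse_impact_ratio(outcomes, "gender", "shortlisted")
print(ratios)        # male 1.00; female 0.60 (a 0.36 rate vs a 0.60 rate)
print(ratios < 0.8)  # ratios under ~0.8 warrant closer scrutiny

Run early and repeated regularly, a check like this gives HR a baseline to take back to the vendor alongside the questions above.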

“The trend for tech giants to release products without proper testing is really problematic.” – Tracey Spicer, journalist, broadcaster and author

A proactive approach to AI regulation

Research indicates that the majority of employees have long been aware of – and concerned about – the human rights risks associated with AI.

The government’s Australian Community Attitudes to Privacy Survey 2020 showed 84 per cent of respondents believed people should have a right to know if a decision affecting them is made using an AI algorithm. Meanwhile, 78 per cent believed individuals should be told what factors and personal information are considered by the algorithm and how these factors are weighted.

These findings demonstrate that, whether an employer is using AI responsibly or not, a lack of transparency can erode employees’ trust in the organisation.

Rather than treating AI regulation as an issue for leaders alone to deal with, organisations can democratise the conversation, which goes a long way towards maintaining trust and encouraging employees to share any concerns they may have.

HR can aid this process by promoting a culture of continuous learning and conversation about AI, keeping employees and stakeholders informed about the latest developments in AI ethics, best practices and regulatory requirements.

Spicer points to a global movement towards ‘mindful AI’, which encourages employers to broaden their view of AI beyond the tasks it can complete. This means taking a holistic approach that considers fairness, transparency and accountability, and always prioritises the needs and rights of people.

Whether or not the AHRC’s calls for greater government regulation are answered, taking preventive measures to minimise AI risks at an organisational level is crucial to positioning your company as a trustworthy employer.

“It’s certainly beneficial from a social justice perspective,” she says. “But it also benefits the bottom line. 

“If you create and use products with a greater level of diversity and inclusion, you’ll attract and retain talent and widen your potential market.”

What are your main concerns regarding the use of AI at work? Let us know in the comment section.


Learn more about the possibilities and risks of AI by accessing AHRI’s on-demand webinar, Generative AI For HR, via the member portal. Visit the webinars homepage for more information.


 

