Concerning new issue in Aussie schools

Online agencies have predicted a rapid increase in complaints over harmful use of generative AI.

Australia’s online safety regulator has urged schools to overhaul their safety policies after recording its first-ever case of school students using artificially generated sexually explicit material to bully others.

Major concerns were flagged during a federal probe into the use of artificial intelligence, during which committee members admitted they were struggling to play “catch up” in a rapidly changing digital space.

“The issue of generative AI really accelerates the ability to manipulate voice and images, which increases the risk of cyber-bullying to students and to teachers,” eSafety Executive Manager Paul Clark told a hearing on Wednesday.

“We wanted to make the committee aware of this development as our submission in July has noted we had not received a complaint as such.”

A dangerous new form of cyber-bullying has seen young people use AI to produce harmful images of their peers.

eSafety Commissioner Julie Inman Grant alerted education ministers that the first case of AI-generated sexually explicit content produced by school students to bully others was reported in August.

Ms Inman Grant said the case was “just the tip of the iceberg.”

Major search engines like Google will be subject to a new industry code as of March 2024, the inquiry heard, which will put forward general measures to prevent explicit AI content from being created and shared online.

When asked if schools urgently needed to be equipped with tools to confront AI risks, an eSafety spokesperson said the watchdog would adapt existing resources to deal with the issue.

“Cyber-bullying has obviously been a problem for quite some time, generative AI sort of changes the potential to amplify it and it’s a new vector potentially, but it’s not a new issue,” they said.

Roughly 42 per cent of high school students in Australia use artificial intelligence programs like ChatGPT during school hours, according to recent studies.

While AI offers “massive benefits” for young people, the probe heard it also poses major risks including stripping young people of their ability to think critically and exposing them to harmful stereotypes through biased algorithms.

Digital wellbeing groups have called for caution in implementing AI in schools.

The CEO of the Centre for Digital Wellbeing, Carla Wilshire, said there was an urgent need to develop a better framework to monitor which AI systems are being used in schools.

She said other countries use accreditation tests on AI programs to ban the use of unethical systems outright, a move which Australia should seriously consider.

“One of the key issues comes around ethics and bias, particularly if you’re talking about regional areas where First Nations children are represented significantly,” Ms Wilshire said.

“Algorithm bias becomes a critical issue and to some extent, we would exercise caution in dropping standards to push technology faster into lower socio-economic schools.

“One of the risks you run is you are effectively creating another layer of technological disadvantage.”