Welsh public concerned about child safety in AI products, says NSPCC

Over three quarters of the Welsh public want child safety checks on new generative AI products.
That is according to new research undertaken by leading children’s safeguarding charity, the NSPCC.
With the use of artificial intelligence becoming more widespread, a survey commissioned by the organisation has revealed the ways that generative AI is being used to groom, harass and manipulate children and young people.
Savanta surveyed 3,356 people from across the UK, including 158 people in Wales, and found that most of the public – 89 per cent in Wales and 89 per cent across the UK – said that they have some level of concern that “this type of technology may be unsafe for children”.
Most of the public – 76 per cent in Wales and 78 per cent across the UK – said they would prefer safety checks on new generative AI products, even if this delayed their release, over a speedy roll-out without safety checks.
Key findings
The new NSPCC paper also shares key findings from research conducted by AWO, a legal and technology consultancy.
The research, Viewing Generative AI and children’s safety in the round, identifies seven key safety risks associated with generative AI: sexual grooming, sexual harassment, bullying, financially motivated extortion, child sexual abuse and exploitation material, harmful content, and harmful ads and recommendations.
Generative AI is currently being used to generate sexual abuse images of children, enable perpetrators to more effectively commit sexual extortion, groom children and provide misinformation or harmful advice to young people.
The NSPCC has been receiving contacts about AI from children via Childline since as early as 2019.
One boy, aged 14, told the service*: “I’m so ashamed of what I’ve done, I didn’t mean for it to go this far.
“A girl I was talking to was asking for pictures and I didn’t want to share my true identity, so I sent a picture of my friend’s face on an AI body.
“Now she’s put that face on a naked body and is saying she’ll post it online if I don’t pay her £50.
“I don’t even have a way to send money online, I can’t tell my parents, I don’t know what to do.”
One girl, aged 12, asked Childline*: “Can I ask questions about ChatGPT? Like how accurate is it?
“I was having a conversation with it and asking questions, and it told me I might have anxiety or depression. It’s made me start thinking that I might?”
The NSPCC paper outlines a range of solutions to address these concerns, including stripping child sexual abuse material out of AI training data and conducting robust risk assessments on models to ensure they are safe before they are rolled out.
A member of the NSPCC Voice of Online Youth, a group of young people aged 13-17 from across the UK, said: “A lot of the problems with Generative AI could potentially be solved if the information [that] tech companies and inventors give [to] the Gen AI was filtered and known to be correct.”
Call for UK Government action
The Government is currently considering new legislation to help regulate AI. A global summit in Paris this February will bring together policymakers, tech companies and third-sector organisations, including the NSPCC and its Voice of Online Youth, to discuss the benefits and risks of using AI.
The NSPCC is calling on the Government to adopt specific safeguards for children in its legislation. The charity says four urgent actions are needed by Government to ensure generative AI is safe for children:
- Adopt a Duty of Care for Children’s Safety: Gen AI companies must prioritise the safety, protection, and rights of children in the design and development of their products and services.
- Embed a Duty of Care in Legislation: The charity says it is imperative that the Government enacts legislation placing a statutory duty of care on Gen AI companies, ensuring they are held accountable for the safety of children.
- Place Children at the Heart of Gen AI Decisions: The needs and experiences of children and young people must be central to the design, development, and deployment of Gen AI technologies.
- Develop the Research and Evidence Base on Gen AI and Child Safety: The Government, academia, and relevant regulatory bodies should invest in building capacity to study these risks and support the development of evidence-based policies.
“Generative AI is a double-edged sword. On the one hand it provides opportunities for innovation, creativity and productivity that young people can benefit from; on the other it is having a devastating and corrosive impact on their lives,” said Chris Sherwood, CEO at the NSPCC.
“We can’t continue with the status quo where tech platforms ‘move fast and break things’ instead of prioritising children’s safety.
“For too long, unregulated social media platforms have exposed children to appalling harms that could have been prevented.
“Now, the Government must learn from these mistakes, move quickly to put safeguards in place and regulate generative AI, before it spirals out of control and damages more young lives.
“The NSPCC and the majority of the public want tech companies to do the right thing for children and make sure the development of AI doesn’t race ahead of child safety.
“We have the blueprints needed to ensure this technology has children’s wellbeing at its heart; now both Government and tech companies must take the urgent action required to make Generative AI safe for children and young people.”
