Data protection authorities issue joint statement on AI-generated imagery and privacy concerns

Published on:
Thursday, Apr 02, 2026

The OIC has signed a joint statement with 60 other domestic and global privacy authorities, highlighting concerns about artificial intelligence (AI) systems that generate realistic images and videos depicting identifiable individuals without their knowledge or consent.

Queensland’s Information Commissioner, Joanne Kummrow, and Privacy Commissioner, Alexander White, signed the statement which was coordinated by the International Enforcement Cooperation Working Group (part of the Global Privacy Assembly).

Mr White said the united position highlights the seriousness of the issue, as well as the need for organisations developing and using AI systems to comply with applicable legal frameworks, including data protection and privacy rules.

“We are especially concerned about potential harm to vulnerable groups in the community, including children. This misuse of personal information can result in serious harms, such as reputational and emotional harm from cyber-bullying and exploitation,” Mr White said.

“While technological advancements can bring positives to our world, we must ensure AI content generation systems do not encroach on people’s privacy rights, dignity, safety and other fundamental rights.”

The joint statement is provided below.

Joint Statement on AI-Generated Imagery and the Protection of Privacy

23 February 2026

The co-signatories below are issuing this Joint Statement in response to serious concerns about artificial intelligence (AI) systems that generate realistic images and videos depicting identifiable individuals without their knowledge and consent.

While AI can bring meaningful benefits for individuals and society, recent developments - particularly AI image and video generation integrated into widely accessible social media platforms - have enabled the creation of non-consensual intimate imagery, defamatory depictions, and other harmful content featuring real individuals. We are especially concerned about potential harms to children and other vulnerable groups, such as cyber-bullying and/or exploitation.

Expectations for Organisations

The co-signatories remind all organisations developing and using AI content generation systems that such systems must be developed and used in accordance with applicable legal frameworks, including data protection and privacy rules.

We also highlight that the creation of non-consensual intimate imagery can constitute a criminal offence in many jurisdictions.

Whilst specific legal requirements vary by jurisdiction, fundamental principles should guide all organisations developing and using AI content generation systems, including:

  • Implement robust safeguards to prevent the misuse of personal information and generation of non-consensual intimate imagery and other harmful materials, particularly where children are depicted.
  • Ensure meaningful transparency about AI system capabilities, safeguards, acceptable uses and the consequences of misuse.
  • Provide effective and accessible mechanisms for individuals to request the removal of harmful content involving personal information and respond rapidly to such requests.
  • Address specific risks to children through implementing enhanced safeguards and providing clear, age-appropriate information to children, parents, guardians and educators.

Coordinated Response

The harms arising from non-consensual generation of intimate, defamatory, or otherwise harmful content depicting real individuals are significant and call for urgent regulatory attention.

To encourage the development of innovative and privacy-protective AI, the co-signatories of this statement are united in expressing their concern about the potential harms from the misuse of AI content generation systems. The co-signatories aim to share information on their approaches to addressing these concerns, which can include enforcement, policy and education, as appropriate and to the extent that such sharing is consistent with applicable laws. This reflects our shared commitment and joint effort in addressing a global risk.

Conclusion

We call on organisations to engage proactively with regulators, implement robust safeguards from the outset, and ensure that technological advancement does not come at the expense of privacy, dignity, safety, and other fundamental rights - particularly for the most vulnerable of our global society.