
A Deep Dive into Deep Fakes: AI is the Problem, Can it also be the Solution?

  • vanessabland
  • 6 hours ago
  • 7 min read

Editorial Assistant Siya Pujari discusses this vexed issue with Professor Niloufer Selvadurai


There has been significant movement towards introducing effective regulation for AI-centric technology, including apps and tools that allow for illegal ‘deepfaking’ of individuals and facilitate online sexual abuse. ‘Deep fakes’ are AI-generated videos, images, or audio recordings. They include ‘face swaps’ where a person’s face is replaced with another and ‘voice deep fakes’ which involve the cloning of another person's voice. A particularly troubling recent form is the ‘nudify’ app which uses AI to generate fake but realistic sexually explicit images of individuals.[1] 


While AI issues relating to the protection of artists working in creative industries and the detrimental use of AI by students in education have attracted public discourse and governmental scrutiny, AI regulation and digital content legal infrastructure aimed at reducing online sexual abuse have attracted less attention.



The current laws regarding deepfakes can be found in the Criminal Code Amendment (Deepfake Sexual Material) Act 2024. This Act amended the Criminal Code Act 1995 to create offences regarding the non-consensual sharing of sexually explicit media, such as images and videos, including media created or altered through AI technology. Specifically, section 474.14 highlights the ability of apps to be used as tools in these offences by providing that using a carriage service to transmit sexual material, such as deepfakes, without consent is a criminal offence punishable by six years’ imprisonment.


However, it is not clear how effective these laws are. In June 2025, the eSafety Commissioner, Julie Inman Grant, disclosed that reports involving non-consensual deepfakes and digital images of children under eighteen years old had more than doubled in the past eighteen months, with four out of five reports involving young women being targeted.[3] She shared that a new Online Safety Advisory alert had been released to parents and schools about the significant extent to which ‘nudify’ apps had been utilised, owing to their accessibility.[4]


Illustrating the issue further, an MIT Technology Review report found that a public bot on the platform Telegram provided free deepfake pornographic images, requiring only an image from the user. By July, the bot had been used to target at least 100,000 young women, most of them underage, according to the co-author of the report. The bot was subsequently used in various channels on the platform “that would award images with the most likes with tokens for their creators to access the bot’s premium feature,” incentivising the digital abuse of women on tech platforms.[6]


In such a troubling context, the 2 September 2025 media release by the Minister for Communications, Anika Wells, is to be welcomed. It stated that the government intends to focus on closing gaps in the current law on technology used in abusive ways, as well as placing the onus on tech companies to prevent the use of these tools on their platforms. Wells states that “these new, evolving, technologies require [...] a new, proactive, approach to harm prevention — and we’ll work closely with industry to achieve this.”[2]


Through the new regulation, the government proposes to hold tech companies and their platforms legally responsible for failing to prevent user interaction with the ‘nudify’ tools they host. With the new plans focused on the role that tech companies play in the proliferation of these apps, it is important to note that multiple websites and apps providing ‘nudify’ services run on Amazon’s and Cloudflare’s hosting and content delivery services,[7] while also profiting in the millions.[8] Visa also remains a payment method for users on some sites and apps that sell and provide deepfake pornography.[9] The lack of specificity provided by the government leaves much to be contemplated in terms of how the ban will actually limit the use of ‘nudify’ apps.


To gain more insight into the broader legal dialogue on AI, I spoke to Professor Niloufer Selvadurai, the Co-Director of the Data Horizons Research Centre at Macquarie University.


What do you think the government should implement specifically in the new legislation in order to avoid potential loopholes that can be created by these AI laws e.g. users being able to avoid potential geographic restrictions by using VPNs or proxy servers?


“Before imposing regulation on tech companies, it is useful to calibrate the risks and rewards of any such regulation. As the Productivity Commission pointed out in a report earlier this year, Australia’s productivity is at a 60 year low. Through the ages, technological innovation has been the single most powerful driver of productivity growth. But while the benefits of regulation are typically easy to see, the potential economic effects on investment and venture capital are often a little less easy to identify and quantify. So it is a delicate exercise. The Wall Street Journal reported that venture capital is flowing out of Europe into the USA as a result of the European Union’s AI Act. Of course, in many cases, the economic cost is justified by the social utility, but this is not always the case. It is a complex decision-making process to be made on a case by case basis. And as we have seen from the media bargaining code discourse, tech companies exist in sensitive commercial ecosystems, sometimes supporting other traders and startups. So when you design a regulation, it is valuable to see the full picture.”


How can or should regulation extend to cover “off the radar” forms of deepfaking, including file-sharing platforms, private forums, or messaging apps such as Telegram that have already hosted ‘nudify’ chatbots?



“As the term ‘carriage service’ in the present legislation is broad, it encompasses all sorts of entities using telecommunications services, and also potentially intermediaries. The 2024 law also expands the content covered. The critical problem now is the effective identification and enforcement of this law. If a person complains, it activates the investigative process. If no one complains, then the law is a bit less effective. In this context, a ‘compliance by design’ approach, whereby digital platforms and others in the internet supply chain are required to adopt technical means for automatically monitoring and removing offensive content, including tools using AI, can be a useful solution. So while technology facilitates this abuse, it can also be part of the solution. The proposed 2025 laws go further, making digital platforms legally responsible for not preventing user interaction with the AI tools they host.”


Ultimately, is there a way to reconcile the constantly changing nature of generative AI, as seen in the drastic improvement of deepfake quality over the past few years, with the somewhat slow and stagnant nature of creating legal measures to restrict and regulate the use of AI through the parliamentary process? In other words, can you foresee a solution to a problem that will invariably continue to pop up in the legal and social zeitgeist, just in different forms?


“Yes, and I am so very glad you asked – this is exactly the problem I am seeking to solve in my present research. As innovations in generative AI, and now agentic AI, accelerate, the traditional machinery of lawmaking and enforcement is no longer effective. Specifically, as AI systems and agents become progressively opaque and ubiquitous, it is difficult for regulators to monitor their effects, identify malfeasance, and gather the necessary evidence to support legal proceedings. This vacuum of effective regulation is undermining safety and commercial certainty, reducing R&D investment and consumer confidence in early adoption. This in turn threatens productivity and economic prosperity. A new legal paradigm is required. My novel ‘compliance by design’ research involves embedding the law within the design of AI systems and agents, as well as within institutional practices, to mandate compliance. The framework supports compliance with privacy, finance, health, consumer, anti-discrimination and other laws. It seeks to provide a logistically feasible, multidisciplinary solution to a critical problem of our time – safe and lawful AI. If you would like to read more, please see my article ‘Advancing lawful AI through “compliance by design”’ (2025) 31(2) Computer and Telecommunications Law Review 35-38, Thomson Reuters, UK. As you know, Macquarie is involved in an exciting new research initiative in the field of bioinnovation. And a Macquarie legal research team has recently applied this paradigm to bioinnovation – see ‘A Regulatory Framework for Calibrating Risks and Rewards of Bioengineering: The Merits of a Compliance by Design Approach,’ Niloufer Selvadurai, Danielle Moon, Sarah Sorial and Robert Stokes, ANU Journal of Law and Technology, coming soon! Professor Sarah Sorial is the Deputy Dean of Research and Innovation for the Faculty of Arts, but we were fortunate enough to have Sarah as part of our legal research team for this project.”


Ultimately, and paradoxically, AI can be both destroyer and fixer: AI is the problem, but once regulated effectively, it can also be the solution. Ideally, the new laws will allow for greater personal and legal protection in the digital space as novel AI technologies continue to improve daily.




ENDNOTES 

[1] Bates, Laura. The New Age of Sexism: How the AI revolution is reinventing misogyny. Simon & Schuster UK, 2025. 

[2] Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts. Taking a stand against abusive technology. Australian Government, 2025, https://minister.infrastructure.gov.au/wells/media-release/taking-stand-against-abusive-technology

[3] eSafety Commissioner. eSafety urges schools to report deepfakes as numbers double. Australian Government, 2025, https://www.esafety.gov.au/newsroom/media-releases/esafety-urges-schools-to-report-deepfakes-as-numbers-double

[4] eSafety Commissioner. eSafety urges schools to report deepfakes as numbers double. Australian Government, 2025, https://www.esafety.gov.au/newsroom/media-releases/esafety-urges-schools-to-report-deepfakes-as-numbers-double

[5] Bates, Laura. The New Age of Sexism: How the AI revolution is reinventing misogyny. Simon & Schuster UK, 2025.

[6] Bates, Laura. The New Age of Sexism: How the AI revolution is reinventing misogyny. Simon & Schuster UK, 2025.

[7] Misha Ketchell. “Australia set to ban ‘nudify’ apps. How will it work?” The Conversation, 3 Sep. 2025, https://theconversation.com/australia-set-to-ban-nudify-apps-how-will-it-work-264349

[8] Misha Ketchell. “Australia set to ban ‘nudify’ apps. How will it work?” The Conversation, 3 Sep. 2025, https://theconversation.com/australia-set-to-ban-nudify-apps-how-will-it-work-264349

[9] Bates, Laura. The New Age of Sexism: How the AI revolution is reinventing misogyny. Simon & Schuster UK, 2025. 




Grapeshot acknowledges the traditional owners of the Wallumattagal land that we produce and distribute the magazine on, both past and present. It is through their traditional practices and ongoing support and nourishment of the land that we are able to operate. 

Always Was, Always Will Be 
