expert reaction to recent discussions around AI-generated non-consensual sexualised deepfakes

Scientists comment on discussions around AI-generated sexualised deepfakes (‘AI nudification’).

 

Prof Sandra Wachter, Professor of Technology and Regulation, Oxford Internet Institute (OII), University of Oxford, said:

“If child sexual abuse images and sexual violence against women do not prompt regulators to act, I don’t know what will. I think we have waited long enough for companies to make responsible design choices, and they have failed in this case. This clearly shows that innovation is not always good, and can indeed be harmful, especially to groups that already face enormous violence.

“The EU and UK have both enacted laws in the last couple of years to tackle some of these issues. The Digital Services Act, AI Act, General Data Protection Regulation and new liability regimes were created to protect citizens from these harms, but at the moment geopolitical pressure risks some of these being shelved or watered down in order to ‘cut the red tape’ and ‘not stifle innovation’.

“The role of regulators is to protect citizens and so in my view it is time for clear, hard and enforceable laws because rampant sexual violence is what you get if there are no laws.”

 

Professor Brent Mittelstadt, Professor of Data Ethics and Policy at the Oxford Internet Institute (OII) at the University of Oxford, said:

“The existence of this problem on X is unsurprising given the reduction of AI safety, ethics, and legal compliance capacity at the platform since the Twitter takeover in 2022. Work in these areas is meant to prevent precisely these sorts of risks and harmful capabilities and use cases from arising and spreading in the first place. In my view, these companies should be held to account to uphold a basic level of ethical standards, and UK regulatory bodies should bring in quick and decisive change in these cases of potential harm.

“The ‘AI nudification’ phenomenon with X/Grok is unfortunately only the tip of the iceberg concerning uses of AI to generate non-consensual intimate imagery. It is of course an important and immediate problem to fix, but the problem of using AI to generate harmful media goes much deeper. My recent research [1] shows that over 35,000 text-to-image models designed to generate non-consensual intimate imagery are freely available on public platforms. These models can be run locally on a person’s computer or smartphone, are computationally cheap, and need only a few images to generate fake images of specific people. Fixing the current problem with Grok and X will not eliminate the demand for this type of technology, nor reduce the availability of open-access models capable of generating equally harmful imagery.”

[1] https://dl.acm.org/doi/10.1145/3715275.3732107

 

Dr Jonathan Aitken, Senior University Teacher in Robotics, University of Sheffield, said:

“The development of AI tools to the point that they can be deployed into the mainstream has led to what was really a predictable outcome. Concern over AI has focused on its capability and the risk it poses; however, that concern needs to shift to the potential risks posed by misuse and abuse. Whenever a robotic system is designed, international safety standards (e.g. ISO 10218 on industrial robot safety) force developers to ask and answer questions about what might happen if the system were not used as intended. Such systems are surrounded by a myriad of further safety features to reduce risk and prevent unintended use.

“An AI tool with an interface to a platform such as X has had little to no thought given to use and misuse. The policy that users will be caught afterwards is irresponsible: once the prompts have been made and the images generated, they are out in the world, and the victims cannot undo what has been done.

“There is a real risk that AI development and deployment in the world remains the wild west, with tools sometimes released with little forethought about the consequences of their use. Attention must be focused on the development processes in the companies releasing these tools, to ensure that proper protections are in place to catch and then remove the potential for misuse. Limiting the use of the tool to paid users compounds the error: it leaves the loophole open and still does not protect potential victims; rather, it looks like an attempt to capitalise financially on the situation.

“Adequate systems and safeguards cannot be designed without consideration of misuse and abuse. When the consequences are as severe as seen in this case, it looks as though the question being asked is whether we can, rather than whether we should. Whilst that remains the case, the call for further regulation is well justified. Ethical research practice requires that we think about the context of what we are doing before we do it, and for a developer or practitioner this is a key tenet of designing systems. However, it is clear that this practice is not being adhered to as tools like this find their way into the wild.”

 

 

Declared interests

Dr Jonathan Aitken: “No COIs.”

For all other experts, no reply to our request for DOIs was received.

 

 

 

 
