Maria Miller responded to the Government's AI consultation to highlight the threats faced by women and girls from rapidly developing AI technologies. Read the full response below:
AI Regulation: A Pro-Innovation Approach - Consultation Response
Overview
AI presents a real and credible threat to women and girls if appropriate safeguards are not put in place at an early stage. There are many directions in which AI could develop; the Government's white paper identifies deepfakes as a risk to human rights and, while I acknowledge the threats to democracy and the national security implications of deepfakes created by hostile governments or organisations to influence democracies, it is important that the Government also focuses its response on the real and specific risk AI already poses to women.
In September 2019, the AI firm Deeptrace estimated that there were around 15,000 deepfakes online, a number which had doubled in the previous nine months. On analysing these deepfakes, it found that 96% were pornographic and that, of those, 99% depicted women: female celebrities whose faces had been mapped onto the bodies of pornographic performers. In other words, roughly 14,000 of the 15,000 deepfakes identified were pornographic content depicting women. It is evident that the overwhelming use of deepfake technology is to create fake pornography.
With the technology available in 2019, producing a realistic deepfake required hundreds of photographs or video recordings to map a face, or hours of audio to convincingly clone a voice – celebrity women in the public eye were, therefore, the only feasible targets. In just a few short years the technology has advanced to the point where huge data inputs are no longer required: today, a single photograph, perhaps easily acquired from a social media account, suffices for a credible deepfake. As we have seen with the rise of intimate image abuse, women who are not in the public eye will form the bulk of the victims of deepfake pornography, and there must be laws and systems in place that recognise the disproportionate threat they face and protect them.
The data already shows that women are affected disproportionately: according to Sensity AI, between 90% and 95% of the deepfakes found since December 2018 are fake nude images of women.
The Online Safety Bill is shortly due to be amended to make the sharing of intimate images without consent an offence. This will provide some legal redress for victims of deepfake pornography; however, it does not address the making of such images, nor remove those that already exist on the internet.
Question Responses
- Do you agree that requiring organisations to make it clear when they are using AI would adequately ensure transparency?
Making AI use clear would go a considerable way towards improving transparency; however, it would not in itself eliminate the harm of deepfake pornography, which continues to appear realistic and to remain online.
For example, online 'nudification' software, which virtually strips women of their clothing, creates credible manipulated images. Use of nudification software is increasing at a rapid pace: DeepSukebe, a website launched in 2020, received 38 million hits in 2021. These are not sites with an innocent purpose which some users then abuse – DeepSukebe's own Twitter bio described it as an 'AI-leveraged nudifier' whose mission was to 'make all men's dreams come true'. The website offered paid tiers, payable in cryptocurrency, and all images had to be of women; the software did not work on men.
Where such software is used, it is important that the manipulated images are clearly marked. As it stands, these images are very convincing and can be used maliciously. The making of nudified or other intimate images without consent should therefore be made a criminal offence.
- What other transparency measures would be appropriate, if any?
A digital watermark on AI-generated and AI-modified content should be mandatory, to make clear which images have been created or manipulated by AI.
Such a watermark, applied automatically when an image is generated or modified, would not remove all of the consequences for the victim of such an image being posted, but it would at least verify that the image is not real. With technology ever improving, there will soon come a point at which it is impossible to distinguish a genuine image from one generated by AI; we will not then be able to rely on a viewer identifying what is real and what is not.
I would support organisations being required to embed watermarks in their software at the point an image is created. However, this requires organisations to act in good faith: those that intend to follow the law will do so, but those acting with nefarious motives are unlikely to implement such a step. Sharing intimate images without consent will soon become an offence in the UK. Given the pace of AI development, the making and taking of intimate images without consent should be made a criminal offence too, without delay; this was not possible within the scope of the Online Safety Bill.
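To make the watermarking proposal concrete, the sketch below shows one simple way a generator could embed, and a platform later detect, a machine-readable "AI-generated" mark. It is purely illustrative: the function names and marker string are hypothetical, a pixel-level mark like this would not survive hostile editing, and real provenance standards such as C2PA attach cryptographically signed metadata instead.

```python
# Illustrative sketch only: embed a hypothetical "AI-generated" marker in an
# image's least-significant bits, and check for it later. Real provenance
# schemes (e.g. C2PA) use signed metadata, which is far more robust.
import numpy as np
from PIL import Image

MARK = "AI-GENERATED"  # hypothetical marker string

def embed_mark(in_path: str, out_path: str) -> None:
    """Write MARK's bits into the least-significant bits of the blue channel."""
    pixels = np.array(Image.open(in_path).convert("RGB"))
    bits = [int(b) for byte in MARK.encode() for b in f"{byte:08b}"]
    flat = pixels[..., 2].flatten()
    assert flat.size >= len(bits), "image too small to hold the marker"
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # set each LSB
    pixels[..., 2] = flat.reshape(pixels.shape[:2])
    Image.fromarray(pixels).save(out_path, format="PNG")  # lossless, keeps LSBs

def has_mark(path: str) -> bool:
    """Recover the leading LSBs and test whether they spell out MARK."""
    pixels = np.array(Image.open(path).convert("RGB"))
    flat = pixels[..., 2].flatten()
    n = len(MARK) * 8
    bit_string = "".join(str(b & 1) for b in flat[:n])
    data = bytes(int(bit_string[i : i + 8], 2) for i in range(0, n, 8))
    return data.decode(errors="replace") == MARK
```

The limitation noted above is visible even in this toy example: a bad-faith operator simply would not call embed_mark at all, which is why watermarking must be paired with a criminal offence for making the images in the first place.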
- Do you agree that current routes to contestability or redress for AI-related harms are adequate?
No. As it stands, it is very difficult to remove images from the internet once they are there. In December 2021, Abdul Elahi was sentenced to 32 years in prison for sexually blackmailing hundreds of victims, following the National Crime Agency's Operation Makedom. Despite the seriousness of the crime and the conviction, the Revenge Porn Helpline has been unable to have all of the images removed, because UK ISPs considered them lawful and would not take the same blocking action as they do with illegal content. Of the 150,000 pieces of content reported through Operation Makedom, 15,000 remain unactioned for this reason. The Revenge Porn Helpline reports that, when victims approach it for help, the action they want above all others is to have the content removed.
This situation highlights a key problem which will affect victims of deepfake pornography – there is currently no reliable recourse for getting images taken down or blocked, even though this is the most distressing aspect for victims. This must be addressed in law.
- How could routes to contestability or redress for AI-related harms be improved, if at all?
One of the key reasons this situation arises is that the content is not considered unlawful by ISPs. To help rectify this, we urgently need legislation extending the sharing offences planned for the Online Safety Bill to the making and taking of intimate images. By defining an intimate image to include those which have been modified, in whole or in part, or generated computationally, deepfakes created by generative AI would be brought within scope. Where consent to make, take, or share the image cannot be shown, the image would be unlawful, which could open the door to ISPs taking action along the same lines as they must for child sexual abuse material (CSAM).
AI can already be used to help identify victims of CSAM, and the same technology could be used to identify victims of deepfake pornography and intimate image abuse so that these images can be taken down.
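As a concrete illustration of how such matching works in practice: systems like Microsoft's PhotoDNA (used against CSAM) and StopNCII rely on perceptual hashing, where a fingerprint of a reported image is stored and uploads are compared against it, so that resized or re-encoded copies are still caught. The sketch below, using the open-source imagehash library, is a minimal example of the idea; the threshold value and file names are hypothetical.

```python
# Minimal sketch of perceptual-hash matching for takedown, in the spirit of
# PhotoDNA/StopNCII. The imagehash library is real; the threshold value and
# file paths are hypothetical placeholders.
from PIL import Image
import imagehash

HAMMING_THRESHOLD = 8  # hypothetical tolerance for near-duplicate matches

def hash_reported_image(path: str) -> imagehash.ImageHash:
    """Fingerprint an image reported by a victim (robust to resizing etc.)."""
    return imagehash.phash(Image.open(path))

def matches_reported(upload_path: str,
                     reported: list[imagehash.ImageHash]) -> bool:
    """True if an upload is a near-duplicate of any reported image."""
    h = imagehash.phash(Image.open(upload_path))
    # Subtracting two ImageHash values gives the Hamming distance between them.
    return any(h - r <= HAMMING_THRESHOLD for r in reported)

# Usage: compare a new upload against a database of reported fingerprints.
# reported = [hash_reported_image("reported.png")]
# if matches_reported("upload.jpg", reported):
#     print("Near-duplicate of a reported image - queue for takedown review")
```

Crucially, only the fingerprint needs to be shared with platforms, never the image itself, which is how StopNCII protects victims' privacy while still enabling removal.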
There is also a responsibility to properly support victims. At the moment, the Revenge Porn Helpline is staffed by a small team with limited resources. There should be an established funding route whereby fines collected by Ofcom for violations of the Online Safety Bill go directly to victim support groups. Better funding means larger teams able to work on taking down non-consensual intimate images.
Conclusion
AI is developing faster than regulation can keep up. It is therefore vital that we put in place robust structures at the earliest opportunity to pre-empt the problems we are likely to see.
In short, we must:
- Ensure AI-manipulated or -generated images are clearly marked.
- Further change the law on intimate image abuse to ensure that the making, taking, and sharing of intimate images without consent is fully covered, and that deepfake images are included in the relevant definitions.
- Put in place a reliable system for removing harmful deepfake images, ensuring images are correctly categorised for timely takedown.
- Use existing AI technology to identify victims of intimate image abuse.
- Ensure fines levied under the Online Safety Bill are used to directly support victim groups.
Deepfakes are wide-ranging in their uses, but there is no doubt that their harms will fall on women and girls far more than on any other group. We must have systems in place to protect them as we navigate this rapidly changing technological field.