
How to tackle the abuse of generative AI | Microsoft White Paper

As Microsoft launches its ‘Protecting the Public from Abusive AI-Generated Content’ White Paper, Hugh Milward, Vice President External Affairs, argues that we need to take a “technological, legal and partnership approach” to the issue.

Q. What prompted the creation of this White Paper?

A. The technologies we develop are incredibly powerful tools that help society advance and achieve meaningful things that improve the lives of individuals.

But at the same time, these tools can be used as weapons in the hands of people with bad intentions.

Bad actors can create deepfakes to mislead the public and interfere with democratic processes, for example, or generate sexually explicit fake images that the person in the original photo or video has not consented to. That’s a horrendous, humiliating experience to have to go through. We should never forget that abusive AI affects real people in profound ways.

Fraudsters also use generative AI to scam people, often vulnerable older people, using deepfakes of celebrities to ‘endorse’ fake products or financial schemes. And images showing child sexual exploitation are a particularly loathsome example of the abuse of generative AI.

This is a global problem affecting millions of people.


As a leading technology company that creates these tools, we’re committed to constantly improving their safety in order to build trust. We are thinking hard about how our technologies might be used in negative ways and what we can do to combat such usage.

Generative AI is a once-in-a-generation technology that could boost economic growth and radically improve the way we operate across the public and private sectors. But if people don’t trust it, they won’t use it, and those opportunities to change lives for the better will be lost.

Q. What steps is Microsoft taking to combat abusive AI?

A. We believe there should be a technological, legal and partnership approach to tackle abusive AI-generated content and protect the public.

We’ve set out a comprehensive approach across six focus areas:

  • a strong safety architecture;
  • provenance and watermarking;
  • safeguarding our services from abusive conduct and content;
  • robust collaboration across industry, government and civil society;
  • updated legislation;
  • public awareness and education.  

So there is a lot to do for technology companies like Microsoft, and a lot that will require partnerships.

As an example of one of these layers, we have been working on content credentials, a technology that stores basic details about a piece of media, including whether AI has been used, as metadata.

We believe that content credentials can play a role in helping people to know where an image is from and whether it has been created using AI.

Content credentials are based on a technology standard that others in the industry have joined forces to create, but they will require public policy and public awareness if they are to be effective.
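To make the idea concrete, here is a minimal sketch of the kind of information a provenance manifest might record and how it can be bound to a specific file with a cryptographic hash. This is a simplified illustration, not the actual Content Credentials (C2PA) format, and the field names and `make_manifest`/`verify_manifest` helpers are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_manifest(media_path: str, generator: str, ai_generated: bool) -> dict:
    """Build a simplified, illustrative provenance manifest for a media file.

    This is NOT the C2PA / Content Credentials format; it only sketches the
    kind of metadata such a standard records: which tool produced the file,
    whether generative AI was involved, when it was created, and a hash that
    ties the manifest to the exact bytes of the media.
    """
    with open(media_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()

    return {
        "claim_generator": generator,           # tool that produced the file (hypothetical field)
        "ai_generated": ai_generated,           # was generative AI used?
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": content_hash,         # binds the manifest to these exact bytes
    }

def verify_manifest(media_path: str, manifest: dict) -> bool:
    """Check that the media file still matches the hash recorded in the manifest."""
    with open(media_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == manifest["content_sha256"]

if __name__ == "__main__":
    # Example usage with a placeholder file and tool name.
    manifest = make_manifest("example.png", generator="ExampleImageTool 1.0", ai_generated=True)
    print(json.dumps(manifest, indent=2))
    print("matches file:", verify_manifest("example.png", manifest))
```

In the real standard the manifest is also cryptographically signed and carried with the media, which is why the approach depends on industry-wide adoption as well as the public policy and awareness mentioned above.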

We’ve also been working with partners such as Stop Non-consensual Intimate Image Abuse (StopNCII.org) to develop tools that can identify and take down abusive images.

And we’re also investing in digital literacy programs to improve education and awareness of the risks among the public.

Q. What does Microsoft want from government?  

A. Under UK law, in some important ways, AI-generated content is already covered – which is a good start. However, we see some key areas where the government could go further, both to plug some potential gaps and to strengthen the disincentives for criminals. Specifically, this would include criminalising the creation and distribution of non-consensual sexual images, including deepfakes.

More broadly we’re also asking the government to review whether the law currently addresses the challenges posed by AI both for child sexual abuse and non-consensual intimate imagery.

The government could also play an important role in the implementation of content provenance tools: both by using these tools within the public sector to help people have trust in official information, and also through policy to require AI system providers to ensure that people know they are interacting with AI.

Q. What can people do if they are victims of abusive AI?

A. It’s becoming much easier to report cases of abusive AI. Any adult can request the removal of a nude or sexually explicit image or video of themselves that has been shared without their consent through Microsoft’s centralised reporting portal.


Young people who are concerned about the release of their intimate imagery can also report to the National Center for Missing & Exploited Children’s ‘Take It Down’ service.

Q. Aren’t we always going to be one step behind the bad guys?

A. Every new technology can be used for good or bad. And the bad actors are often the early adopters of new technologies. So even if the bad guys are always trying new things, we will still make it as difficult as possible for them to use our technologies for harm.

Q. What more should we be doing?

A. We’ve shown that when we work closely with law enforcement we can effectively prevent fraud. And our collaboration with election officials during the UK General Election helped neutralise misinformation campaigns.

But abusive AI is a problem that is likely to be with us for some time, so we need to redouble our efforts and collaborate creatively with tech companies, charity partners, civil society and government to address this issue. We can’t do this alone.

We need to encourage all stakeholders to think more creatively about the challenge and question the conclusions we may have already reached on how to tackle it.

The technology is moving fast, so we must move just as fast to protect our fellow citizens and make life as difficult as possible for the bad guys.

Download the White Paper here.