Ensuring the ethical use of AI voice technology

Zeena Qureshi

CEO & Co-founder

July 20, 2021

The case of Anthony Bourdain

“Roadrunner”—the new documentary on celebrity chef, author, and world-traveller Anthony Bourdain—raises important ethical questions about how AI voice technology should be used. In the film, an AI re-creation of Bourdain’s voice reads snippets of text that he wrote but never actually spoke aloud. Some of that text came from a personal email he sent to a friend—the last email the friend received before Bourdain’s death by suicide in 2018.

The film’s director, Morgan Neville, has faced backlash on social media and in the press for this use of AI—in part because it was not disclosed to the film’s audience. Documentaries frequently use actors to perform lines previously spoken or written by historical figures, but in those cases it’s clear to the audience that an actor is performing. In “Roadrunner,” the lines generated by the AI model could be mistaken for an authentic recording.

In an article for The New Yorker, Neville was dismissive of the ethical issues: “We can have a documentary-ethics panel about it later.” In a separate statement, Neville claimed that his team had received permission from Bourdain’s estate and literary agent to use the AI voice. However, Bourdain’s wife (from whom he was separated) emphatically stated that she did not give her consent. Clearly, questions about the ethical use of AI in the documentary remain.

Sonantic technology was not used for “Roadrunner,” but we at Sonantic can certainly envision projects in which entertainment studios would want to create custom AI models based on the voices of real-life individuals, including those who are deceased. How can we help those studios realise their creative vision in an ethical way? And how can we prevent other types of unethical applications of AI voice technology?

“Deepfakes” and other misuses of AI voices

AI voice technology has tremendous potential for enhancing the creativity and increasing the efficiency of content production. But for both technology innovators and the studios that employ this technology, ensuring the ethical use of AI voice models must be a top priority. 

The possibility of employing AI voice technology to create “deepfake” videos is a particularly serious concern. Using a voice model to make it appear as if a politician said something they didn’t really say, for example, could have grave repercussions. 

AI models could also be used to capitalise on vocal talent without securing permission or providing adequate compensation. For example, unscrupulous companies could use AI models of celebrity voices for unauthorised product endorsements or unpaid participation in creative projects.

How can technology companies and content creators prevent these and other types of unethical uses of AI voice technology?

Ensuring the ethical use of AI voice technology at Sonantic

At Sonantic, we recognise that we have a special responsibility to help prevent the misuse of AI voice technology. From the start, we’ve taken a multi-faceted approach to ensuring ethical uses of AI. 

First, we follow the European Union’s Ethics Guidelines for Trustworthy Artificial Intelligence. Among the requirements for “trustworthy” AI is the transparent disclosure of its use: When humans interact with AI systems, they should know they are doing so.

Building on those guidelines, we’ve also instituted key principles that are integrated into our business model and technology development: 

  1. We are a B2B company that works only with creative organisations. 
  2. We partner with our actors, ensuring they are a part of the process. 
  3. We do not train algorithms on publicly available data, and we do not create voices where the owner of the voice—or the owner’s estate—is unaware that the voice is being repurposed. 
  4. We enforce usage restrictions throughout the lifecycle of each client’s projects.

How are those principles applied in practice? First, we always strive to understand the scope of a new project before committing to work. Then, if a client requests a custom voice based on the voice of a living actor, we ensure that we receive permission from that actor. We always make sure that actors have a hand in training their own AI model. And we enforce usage restrictions by using a disclosure process and detection capabilities that we’ve developed.
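Sonantic hasn’t published the mechanics of its disclosure and detection tooling, but the underlying idea can be illustrated with a minimal sketch. The Python below is hypothetical (the `make_manifest` and `check_use` functions and the manifest fields are illustrative inventions, not Sonantic’s actual system); it shows how a synthesized clip might be bound to a consent record and a set of licensed uses that downstream parties can verify.

```python
import hashlib
import json

def make_manifest(audio: bytes, voice_owner: str, licensed_uses: set) -> dict:
    """Bind a synthesized clip to its consent record and licensed uses."""
    return {
        "sha256": hashlib.sha256(audio).hexdigest(),  # fingerprint of the exact clip
        "voice_owner": voice_owner,                   # whose voice was modelled
        "licensed_uses": sorted(licensed_uses),       # uses the client has cleared
        "synthetic": True,                            # always disclosed as AI-generated
    }

def check_use(manifest: dict, audio: bytes, intended_use: str) -> bool:
    """Return True only if the clip is unmodified and the use is licensed."""
    if hashlib.sha256(audio).hexdigest() != manifest["sha256"]:
        return False  # clip was altered after the manifest was issued
    return intended_use in manifest["licensed_uses"]

clip = b"...synthesized audio bytes..."
manifest = make_manifest(clip, "Jane Doe", {"film", "trailer"})
print(json.dumps(manifest, indent=2))
print(check_use(manifest, clip, "film"))    # True: a licensed use
print(check_use(manifest, clip, "advert"))  # False: not licensed
```

In practice, one could imagine such a manifest travelling with each deliverable, with detection tooling flagging any clip whose fingerprint doesn’t match an issued manifest.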

If we are asked to create a custom model based on the voice of a deceased actor, we take further measures. 

  • We first gain a clear understanding of how the AI model will be used, and then we ask the client to obtain permission from the actor’s estate. 
  • We also consider the deceased actor’s legacy. By talking to friends, family, and colleagues, we make sure that use of a custom model will only sustain or enhance—and not tarnish—that legacy. 
  • To provide full transparency to the audience, we also ask the studio to disclose the use of the AI voice. 
  • Lastly, we ask the client to have the estate sign off on the final deliverable. 

Taking these extra steps helps to confirm that the studio has authorisation, from the people the actor knew and trusted most, to use the deceased actor’s voice. In addition, disclosing the use of an AI voice avoids any possible confusion or sense of deception among audiences. 

Building on an ethical foundation

At Sonantic, we believe that following established guidelines and our own ethical principles can help studios to steer clear of potentially damaging ethical issues while also preventing abuses. By building on an ethical foundation, we can help studios and actors make the most of this powerful technology.
