“BEWARE!!” wrote Tom Hanks on social media recently, referring to a video circulating online in which he appeared to be promoting a… dental plan. Except that it wasn’t Tom Hanks. It was an AI-generated version of Tom Hanks, which Tom Hanks had no involvement in and had not consented to. The context may seem comparatively trivial (how many people are watching – or care about – random online videos regarding dental plans?). But that’s not the point. The threat posed by AI (and, let’s be honest, the benefits it offers) amounts to a revolution like nothing the entertainment industry has seen or faced before.
Hollywood labour action
The last time the Writers Guild of America chose to strike was in 2007-08, in relation to residuals derived from content created for new media. The previous significant strike by the actors now represented by SAG-AFTRA was in 1978-79, and it targeted the producers of radio and television commercials. Not exactly the jugular of Hollywood’s economy. The fact that both the WGA and SAG-AFTRA chose to strike concurrently in 2023, at a particularly febrile time for the industry in the immediate post-Covid era, demonstrates how serious a threat this technology poses to those within the industry.
While we have not seen the details of the deal reached between the WGA and the AMPTP, and SAG-AFTRA is still in negotiations, we know that AI is a central issue that both the actors’ and writers’ unions are particularly concerned about.
AI’s potential brings with it heightened concerns about the unresolved legal and reputational issues facing talent in the entertainment industry. One of the most significant concerns that actors face is the creation of AI-generated deepfakes. Deepfakes, the technology capable of creating hyper-realistic yet fictitious audiovisual content, stand at the forefront of this existential predicament. AI can now clone faces and voices with striking accuracy, and the technology is advancing faster than existing laws can adapt. When shared or circulated on social media or distributed online, these videos and images have the potential to seriously damage a person’s reputation or brand or violate their privacy, leading to severe consequences for their careers and personal lives.
The technology has shown its benefits, enabling the creation of realistic digital characters, enhancing special effects, and even generating entire storylines. Tom Hanks himself was digitally de-aged in the recently released “A Man Called Otto” using AI technology. However, this progress has also given rise to AI-generated content that blurs the lines between reality and fiction, which raises difficult questions: what consent is required before a production company unilaterally recreates an actor’s likeness? What can an actor or musician do when their image (or their music) is recreated using technology without their permission?
Legal complications and reputational risk
The English legal landscape surrounding AI and talent rights remains largely unresolved. Current laws struggle to keep pace with the rapid advancements in technology, leaving talent vulnerable to various forms of exploitation. The Online Safety Bill, which is due to become law in England & Wales shortly, creates a new criminal offence of sharing “deepfake” pornography. However, it stops short of outlawing any other type of AI-generated content created without the subject’s consent; in all other circumstances, individuals are left to rely on existing laws.
Beyond legal complications, AI deepfakes pose significant reputational risks. Talent can fall victim to false narratives, inflammatory statements, or explicit content that they never consented to. AI-generated content can spread rapidly through social media platforms, and as it becomes ever more realistic, the risks increase. Tom Hanks was addressing a single deepfake video of him. Were there hundreds or thousands more deepfake videos of him online, identifying where the truth lies would be far harder.
Reputational risks extend far beyond the immediate embarrassment of a falsely portrayed action or event. Deepfakes can erode public trust (both generally and in a personal brand) and, in the hands of malevolent actors, can fuel misinformation and deception campaigns.
To combat the rise of AI deepfakes, English law provides a range of possible options for those who fall victim to the technology:
1. Defamation
2. Privacy and harassment
3. Data protection
4. Intellectual property
5. Criminal law
AI itself can provide solutions to detect manipulated content and distinguish it from genuine material. The integration of blockchain technology, which enables secure and transparent digital transactions, could help certify genuine content and assist talent in protecting their rights.
A bright future?
While AI presents various challenges, it is crucial to recognise its potential as a powerful ally for talent in the entertainment industry. Collaboration between talent and AI can lead to innovative projects and creative opportunities. By proactively engaging with AI and being part of the technological conversation, talent can influence the development of AI tools in the entertainment sector and simultaneously ensure their own protection.
Please note this content was originally published by Broadcast.
This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.
© Farrer & Co LLP, October 2023