Kernochan Symposium 2025: Part 1
Session 1: Morning
Transcript
Kernochan Symposium 2025 - Deepfakes: In Search of Global Solutions - Morning Session
[PIPPA LOENGARD] Good morning. My name is Pippa Loengard. I'm the executive director of the Kernochan Center, and my colleagues, Jane Ginsburg, Shyam Balganesh, Cate McGrail, Samara Weiss, and the Columbia Alliance Program, which coordinates our Columbia Law School programming with our French colleagues at comparable law schools in France, are delighted to welcome you all to the "Kernochan Center
Symposium: Deepfakes: Problems and Potential Solutions in Comparative and International IP Law."
Today, we're going to look at one of the potentially more troubling issues surrounding the rise of artificial intelligence, how various communities across the globe are trying to combat deepfakes, and what could be some successful solutions to a potentially dangerous phenomenon. I would like all of you who are interested in this subject, drawn to today's program because of its comparative nature, and wishing to stay abreast of these topics -- and this is a shameless plug, but that's why I've got the microphone --
to think about joining the US chapter of ALAI, which -- excuse my French among the native speakers around here -- is the Association Littéraire et Artistique Internationale, a name I really was hesitant to say in front of as many French speakers as we have. But you can learn more about this organization, which works on these issues of international law, at our website, alaiusa.org.
Before we get started, I would like to go over a few logistics. You will all find the bios for the moderators and the readings for today using the QR code, which was at registration. If for some reason you did not, please feel free now to get up and grab that because we are not going to be introducing all of our speakers in advance of the panels. Please be aware that today's symposium is being recorded, and the video will be available on our website in due course. As such, please use microphones during the question-and-answer periods to ensure that questions can be heard on the recording, and you just push the button in front of you to turn on the microphone. And each seat has its own.
This is the first of many reminders today that you will hear about CLE credit. If you would like CLE credit and you have not signed in yet at the desk outside this room, please, now is the time to do so. And we are giving credit for the morning sessions and then the afternoon sessions. There is no partial per-session credit, so please be aware of that.
Bathrooms, perhaps the most important thing for the day, are past the elevators on your right. We'll begin in a moment with Professor Ginsburg and our former student, whom we're very proud of, Makena Binker Cosen, demonstrating what powers technology has afforded us in the field of replication and editing. We'll follow that with a short overview of the problems society is facing.
Our next panel will look at whether and how individual rights can be used as a means of protecting one's own ability to retaliate, I guess is the word, against deepfakes. And then we'll have a half-hour break before our third panel on what transparency obligations, if any, there are in US and EU law. If you need to make a call or do something else during the break, there is a room across the way that we have reserved as well. That should be a quiet room.
We will be simulcasting the program there in case we do overflow this room, but right now it is a quiet room if you need some space. After the transparency obligations panel comes the most important part of the day, lunch. And then we will begin a robust afternoon of programming. All right? But for now, I thank you all again. And we'll turn the program over to Jane Ginsburg and Makena Binker Cosen.
[APPLAUSE]
[JANE GINSBURG] And while they're getting set up: you have the fuller bio introductions in your materials via the QR codes. But our first speaker will be Jennifer Rothman, who is a professor at the University of Pennsylvania Law School, and, as I said, the world's leading expert on the right of publicity. I highly recommend her website, Rothman's Roadmap to the Right of Publicity.
She will be followed by Graeme Austin -- I guess you want to stay there so you can see the slide. Yeah. -- who is a professor at Victoria University of Wellington and also the University of Melbourne, and my co-author, and who will be talking about the picture in the Commonwealth. And finally, Valérie-Laure Benabou, who is a professor at the University of Paris-Saclay, who will be talking about French and EU protections.
[JENNIFER ROTHMAN] All right. Well, thank you, Jane, for that introduction, and to you and Pippa and the rest of the organizers for putting on this symposium and inviting me to speak. I was asked to speak about the current legal landscape in the United States that regulates deepfakes. And I may have some different takes than Dana and some areas of agreement as well, of course.
But in addition to what currently regulates deepfakes, what's on the horizon -- being able to talk about all of this in the 30 minutes I've been allotted would be nearly impossible, because over the last few years dozens, or depending how you categorize it, hundreds of laws have been passed that either specifically address deepfakes or address things that overlap with and cover, in part, deepfakes. Just take California as an example, which seems to pass new AI-related bills, some of which cover deepfake issues, almost every week -- six were passed in the last three weeks alone. Six new laws.
So instead of trying to cover everything, I want to start by setting forth some guideposts for sorting through this increasingly complicated landscape, and only then consider some of the existing laws and those being proposed. So, as part of this guidepost, I want to propose a taxonomy of deepfakes that will guide our tour. Developing such a taxonomy, I think, is desperately needed, as urgent calls for legislative fixes to address deepfakes have largely collapsed distinct types of deepfakes into a single monolith.
And this lack of nuance in speaking about deepfakes has masked some of the problems at issue and obscured the applicability of existing legal structures to combat them. In addition, this lack of precision about why we care about deepfakes and different types of deepfakes has led to much newly enacted and proposed legislation that may actually worsen the dangers of deepfakes rather than combat them. So before developing this taxonomy, I want to take a few moments to develop a common understanding of deepfakes.
You're like, I already know. That's why I'm here. I know what it is. I'm talking about it. I'm on it. But actually, there are different meanings of the term, and I want us all to be on the same page as we parse what we think we're talking about and what we think the law should cover or not cover. So as some of you know, the term deepfake actually originated with a Reddit user of that name in 2017 in the context of porn, like most things on the internet, and it has since exploded -- exploded in the use of the term, and exploded in the actual creation and spread of deepfakes -- and has expanded beyond the world of porn.
So we now think of deepfakes as maybe illustrated by porn, but not exclusively porn. We see laws and proposed bills using different terms to get at deepfakes, using the term digital forgeries, other digital replicas, or voice clones. I will not use my time to go through the numerous different definitions, and I will say even the definitions of digital replicas vary widely across different bills and laws that have been passed. Instead, I want to home in on two material differences across these definitions that are essential to pick sides on as we discuss deepfakes today.
So one is, do deepfakes have to be deceptive? Some definitions say yes. Others don't. Some say likely to deceive, but not necessarily deceptive. Some require, for liability, an intent to deceive. And a second area of dispute is, are deepfakes just about people? Thus far, we've been talking about them as just about people. But deepfakes could also be fakes of objects, places, people, events, or entities.
And in fact, the European Union's AI Act defines deepfakes more broadly to include all of these categories. For our purposes, I'm going to have our operative definition be one of human beings, people, deepfakes of people. And I'm choosing this in part because this is the main focus of concern, both among those who are in the room and of legislatures around the country and around the globe.
Second, I am not going to require that deepfakes be deceptive.
They do not need to deceive the public to be defined as a deepfake, but they need to appear to be an authentic recording of a person when they are not one. And then we can get at whether it's deceptive. And I'll highlight that authenticity -- seeming authentic -- doesn't mean it needs to be a realistic capture of the person or in a realistic context, but it needs to be something that could be perceived as an authentic recording of them. And note, in spite of its etymological origin, I'm using depict to capture both the use of someone's voice and of their likeness.
All right. So with that sort of working definition in hand, I want to briefly touch upon the harms of deepfakes because again, we can't really evaluate what we're dealing with, even a taxonomy of deepfakes or the validity of current laws or the value of future ones, without knowing what harms are involved. And again, I'm anticipating that we have a fairly sophisticated audience. You may already know what you think the harms are, but it's worth just a brief foray into some of the articulated harms, which largely fall into three categories.
Those that affect those who are depicted in deepfakes, members of the public who may be deceived by deepfakes, and then those that would affect other stakeholders -- for example, those who are connected with the person depicted, such as relatives, as well as those who may have a financial stake in the person or the person's voice that appears in some of the deepfakes, particularly record labels or other copyright holders.
All of these harms center on two key considerations that I want you to keep in mind throughout my talk, hopefully throughout the day, and then as you leave the space as well: deepfakes are harmful when they are not authorized by the person depicted in the deepfake. They are also harmful, particularly to the public, when they are deceptive as to their authenticity.
So those who are depicted in deepfakes suffer a variety of harms, from losing control over their own identity, which works an injury to their rights of self-determination and autonomy. It could also injure their dignity and reputation, particularly if they're put in a humiliating setting such as a pornographic one, or shown doing or saying something that they never said that may be truly shocking or offensive. There are also a variety of market harms that could befall a person who is depicted in a deepfake.
They could lose job opportunities, endorsement deals, have reduced salaries, lose licensing opportunities, or be in breach of merchandising contracts, and overall, have their goodwill diminished. This would be particularly true for those who are well-known performers, who are commercializing their identities, or whose performances themselves might be substituted for. But market harms could befall even ordinary people who are depicted. Harms to the public largely center on whether the public is deceived, and this harm to the public occurs without regard to whether the deepfake is authorized or not by the person depicted.
The harm stems from the public thinking that something is authentic that is not. This sort of deception of truth could destabilize our political system by circulating fake images and recordings of political figures saying and doing things they never did, in ways that could affect voter perceptions of these individuals and alter outcomes of elections. Deepfakes of politicians could cause civil unrest and even global catastrophes by inciting wars or conflicts engendered by false statements or actions appearing to be the authentic speech of world leaders.
Deceptive deepfakes can also more broadly destabilize our access to information and truth. As Brian Chen recently wrote in The New York Times, we may be facing the end of visual fact. Can civil society survive if we not only don't have common references and sources, but also do not have reliable documentation of real-world events? The criminal justice system and the tort system themselves may be threatened by the undermining of image- and voice-based evidence.
And we as a society may also be impoverished by an AI-generated slop of culture in place of high-quality, human-driven content. Now, this could, of course, happen with non-deceptive and even authorized deepfakes, which could maybe lower the quality of culture. But the law may have a place in regulating our knowledge of whether we're seeing authentic performances or not, so that the public can choose between them. Deceptive deepfakes could also affect consumer purchasing decisions in ways that could be harmful.
And our final category, harms to related parties, I think is not as central as the last two, but it is also something we should be cognizant of, particularly in the context of unauthorized deepfakes, and we should recognize that there are market injuries, particularly to those, such as record labels, that are very concerned about the spread of deepfakes.
But again, the primary center of these harms is whether they are authorized by the person depicted and whether they deceive the public. So with this in hand, let's consider how we can distinguish deepfakes from one another, because they're not all the same. To the extent that deepfakes are distinguished from one another in discussions, whether legislative or in the media, it has usually been on the basis of the context in which the fakes appear.
For example, to distinguish among deepfakes that appear in political contexts, or that show people in pornographic contexts, or that depict performers and may substitute for the value of their works. These contextual distinctions have obscured deeper thinking about whether deepfakes across these contexts are or should be considered different from one another from a jurisprudential perspective. A more nuanced parsing of deepfakes is essential to better distinguish between the problems that are appropriate for legal redress versus those that are more appropriate for collective bargaining or market based solutions, or may simply need to be tolerated, or in some instances, even celebrated.
The focus on the context in which deepfakes appear has also led to a lot of very specific deepfake-focused and AI-focused regulation in different contexts: in the context of elections, in the context of pornography, in the context of likely media plaintiffs. And this has obscured the addressing of some of the harms that I just identified. But it also has led to the passage of a number of laws that I think sit on shaky constitutional ground for their lack of inclusivity, in terms of being so narrowly targeted.
With that as background, I propose a different approach to thinking about deepfakes and putting them into the following categories: Those which are unauthorized by the person depicted, those which are authorized by the person depicted, those which are deceptively authorized, and those which are fictional. What do I mean?
Unauthorized deepfakes are what we talk about most of the time and see the most outrage about. These are ones in which the person depicted never agreed to appear. These are the high-profile examples that have been wielded to pass laws, to propose bills, and that are discussed on Capitol Hill. Recent calls for action at the federal level were largely driven by a 2023 viral AI generated song, "Heart On My Sleeve," which sort of successfully imitated the voices of the artists Drake and The Weeknd.
Numerous other well-known recording artists and actors and celebrities have been faked, including Tom Hanks -- an AI generated Tom Hanks hawking dental services. But it's not just the famous. It's also the ordinary. On the top right is an image of one of the Jersey teens who had her face swapped into pornography by her lovely classmates in her New Jersey high school, and who then advocated for New Jersey to adopt an intimate image law that would address this.
Politicians, too, have been victims of unauthorized and deceptive uses of their identities. In the lower left, you see one recent-ish version of a deepfake, as imagined and celebrated by President Trump, of President Obama being arrested in the Oval Office. Deepfake voice clones have been used to scam family members by using voices of loved ones. And in the lower right, you see a Sora 2 image -- I'm not going to play the video -- that has Jenna Ortega reanimated, speaking in the voice of the character Wednesday, as well as using copyrighted characters.
These unauthorized deepfakes present all of the harms that I articulated. They cause the personality and market-based injuries to the person depicted who didn't authorize them and, if deceptive, can also injure the public more broadly. These contrast with authorized deepfakes. Authorized deepfakes don't cause the personality injuries or market-based injuries to the person depicted.
And if they're not deceptive, they don't harm the public either, and they may even be things that we would want to celebrate, like Eminem being replicated dancing with a younger version of himself, or de-aging actors, or YouTube's new Dream Track, which does, in a more seamless, very rapid way, what we just saw in the Taylor Swift version, where you can have AI generate your lyrics on a particular topic and then have it voiced in an authorized version of a famous singer's voice.
Charlie Puth is one of the artists who agreed to do this. Or the Speechify app, which allows things to be read to you in the voice of someone who authorized the use of their voice to do so. The law should regulate authorized deepfakes that are deceptive, where the public is deceived. Even if it's authorized, a deceptive deepfake still causes all the harms to the public. But if it's authorized and not deceptive, that should be fair game. The third category I identify is one that has largely been overlooked, but is essential to understand.
And this is the category of deceptively authorized deepfakes. Here, a person may have agreed to appear in one work or recording, but did not agree, did not agree to have their voice, likeness, or performance reused in a new context, such as a deepfake. Or alternatively, the depicted person may not own or control the rights to their own name, likeness, or voice.
In each of these scenarios, a deepfake might be categorized as authorized in a technical legal sense but, in fact, be unauthorized in the most important sense because the person whose voice or image is used in the deepfake did not knowingly approve of the specific use, something that causes the very same harms to the person depicted as an entirely unauthorized deepfake, and would likely be, per se, deceptive to the public because of the misperception of authorization.
I have questioned elsewhere the legitimacy and constitutionality of allowing someone other than the person themselves, which I have dubbed the identity holder, to own that person's name, likeness, or voice. And I've also warned about broad licenses that would give long-term, expansive control over a person's identity to someone other than the person themselves. Yet, some new and long-standing state laws, and some being proposed at the federal level, would allow such transfers and broad licenses and allow someone other than the person themselves to own or control that person's digital replica.
The digital replica bill being considered in Congress that has the most support right now, the NO FAKES Act, allows for long-term licenses of another person's digital replica but does not require ongoing knowledge and approval by the person depicted of those replicas and how they're used, and the bill expressly allows authorized representatives to approve such licenses, such long-term licenses, without the person knowing that those licenses have even been entered.
Minor student athletes, aspiring actors, recording artists, and models may be particularly vulnerable to having others take control and even ownership of their voices, likenesses, and performances. And it's not just people who are trying to be in the public eye. It may be all of us who, without thinking, agree to online terms of service that claim to be able to use, in any new context, our images, voices, and recordings. So you may find a deepfake of yourself out there and it would technically be authorized, but in this deceptive way, which should be categorized as unauthorized when we think about remedying the harms of deepfakes.
Deceptively authorized deepfakes raise complicated questions that I can't fully engage with now, at the intersection of a variety of legal regimes, including contract law, state publicity rights, and federal copyright law. On the left of the screen are Lehrman and his co-plaintiff in the Lehrman v Lovo case that Jane talked about earlier. Here, the company reached out to them and they agreed to have their voices used as voice clones, but then the voices were used beyond the scope of the contractual agreement. In that instance -- and I will turn to the New York Civil Rights Law -- New York's right of publicity and privacy laws did protect them and gave them a claim.
But this is not a deepfakes problem. This is a long-standing problem in terms of people agreeing to some sort of copyrighted recording that might be used in ways they don't like. And generally, copyright law has been held to preempt state law that prevents these unauthorized uses of a person's identity. There's the famous Laws v Sony case, which allowed the sampling of a recording in a new recording without the performer's approval, as well as the reuse in video games, dating back to the 1990s, of performers who agreed to appear in one video game and were then reused, because of copyright law, in a second video game -- which might be highly relevant for digital replicas.
So again, deceptively authorized deepfakes cause all of the harms we care about with deepfakes. They can be deceptive, and they can injure the person being depicted. The last category I want to highlight is fictional deepfakes. Sometimes we don't think of them as deepfakes, but they really are, and we should. These are things like the recent coverage of Tilly Norwood, a completely AI generated actor who, at least for short periods of time, seems real and can speak with emotional gravitas, or models who are being AI generated, or songs which are being AI generated.
And I will note that the record labels are engaged with Spotify in creating intentionally AI generated music. Some of these may make us uncomfortable, but if they don't deceive the public, the person depicted doesn't exist, isn't injured, and if the public, as I said, knows that they are not real, then we don't have those harms either. And we may just need to tolerate this in the same way some of us need to tolerate Love Island or other reality shows or animated things. [AUDIENCE LAUGHS]
So just some of these examples hopefully suggest that there can be good deepfakes. They're not all bad. And so we want to leave room for them, both from a speech perspective and from an artistic-expression perspective, and this technology has also been tremendously helpful in ways for people with disabilities. But I am cognizant of my time, so I will be happy to discuss that more in Q&A.
But I want to use these benchmarks -- the different types of deepfakes, our focus on whether deepfakes are authorized by the person depicted, and the question of whether they deceive the public into thinking that fakes are authentic -- as we consider the legal landscape in the United States. The center of regulation of identity rights in the United States is right of publicity laws, which are state-based. Although the recent vintage of the term deepfakes, I think, has caused people not to realize it, we actually have a whole bunch of laws on the books that cover unauthorized uses of people's identities and protect them from being misused, even in the context of new technology.
So at the heart of this law is a state law right that protects against unauthorized uses of a person's identity, including use of name, likeness, voice, or other indicia of identity. These laws emerged in the early 1900s. In most states, the right of publicity and the privacy-based appropriation tort are interchangeable; most states treat them as the same unless they've adopted a separate statute. There are a few states, like New York, which has adopted a privacy law by statute that is essentially equivalent, in many respects, though not identical, to a common law privacy-based appropriation tort.
As Jane pointed out earlier, the right of publicity, because it is a state law, varies from state to state across the 50 states. There are common features across many, and also some very difficult-to-navigate differences. Which is why, out of frustration, I started my website, which has a helpful map, so I and all of you can keep track of the different state laws. So I want to give a few illustrative examples to see how they apply to deepfakes.
Beginning with California, one of the dominant marketplaces for the right of publicity and entertainment: California actually has, depending how you count, three or four different publicity or appropriation-based privacy laws, starting with the common law. The common law is very broad. It unquestionably covers deepfakes, whether they're in a commercial context or not, so Taylor Swift potentially would have a claim even for this non-commercial use.
California's statutory right of publicity, which was actually passed as a privacy law to protect ordinary people by extending statutory damages and a fee-shifting provision, also would apply to deepfakes, but perhaps a little more narrowly, such that the use might need to be in the stream of commerce. California also has a postmortem right of publicity, and this was revised in 2024, going into effect this year, to expressly include digital replicas.
The main concern before this was that the postmortem right in California, which is only a statutory right, exempted audiovisual works in certain contexts. So this 2024-25 change now makes it so they are not exempted, with some exceptions that I don't have time to go into here, and it largely covers digital replicas and deepfakes.
New York's statutory right of publicity, which is actually called its right of privacy in the state, applied, as Jane talked about, to the voice clone situation in Lehrman v Lovo. It will apply without regard to the commerciality of the person's identity, but it is limited to uses for the purposes of trade, though this is broadly interpreted to apply to video games and movies. It might not apply to educational uses, as Jane suggested.
In 2024, Tennessee adopted the ELVIS Act, a very long statute which I can't possibly do justice to here, but which explicitly addresses the use of digital replicas and deepfakes as well as the software used to create them. It applies in commercial and non-commercial contexts, and it does have a number of exceptions in certain instances, but is quite broad. So in short, state right of publicity laws do cover deepfakes, but they may not do as much work as we want.
On their face, they perhaps do a good job of stopping unauthorized uses in deepfakes, but most allow deceptive but authorized deepfakes and do nothing whatsoever to protect the public from being deceived by deepfakes. Second, because some of them, explicitly or in practice, allow ownership and control by someone other than the person depicted, they also leave open the possibility of deceptively authorized deepfakes, which cause the same harms to the people depicted.
And while many, maybe most states allow claims outside of the commercial use context, some limit who can bring claims, and some limit liability to the commercial use context, which would be insufficient for many of the deepfakes. Damages may also be hard to prove, especially for ordinary people. And although some states have statutory damages to address this, not all do. Notably, California in the last three weeks adopted a statutory damages provision of $250,000 for intimate image deepfakes.
There also may be the problem of copyright preemption, which I mentioned: if someone agreed to be in one copyrighted work, somebody could wield copyright law to recreate that performance in a new context as a digital replica and use copyright to enable it. There's also a Section 230 problem. For those who know it, Section 230 immunizes platforms from liability, and may prevent victims from forcing platforms to take down deepfakes on the basis of publicity laws. But there is a circuit split about whether the right of publicity falls within this immunity or not. And of course, then there's the 50-state approach problem.
All right. So right of publicity does a lot of work, but maybe not as much as we would like. But states have passed a slew of other laws targeted to protect people's identities -- all of them fairly recent, some from the last couple of years, some from the last 10 to 20 years. These are intimate image laws that specifically focus on deepfakes and other intimate image circulation, laws focused on biometric privacy, notably in Illinois but also other states, and catfishing and impersonation laws.
There are a host of student athlete name, image, and likeness laws that apply in this space, specific digital replica laws that have been passed, some AI-specific laws that overlap, some election-related laws that address deepfakes, and even labor laws have gotten in the game in the last year or two. Some of these adequately protect against deepfakes, but very few of them focus both on the question of whether the public is deceived by these uses and on whether they protect the people depicted from having others control their identities and authorize these deepfakes.
Other state statutes also cover identity rights and would apply to deepfakes. So Taylor Swift's menu of options here is getting very long. There are false advertising and consumer protection laws. In the context of pornographic uses, there's obscenity and child pornography law. And I think defamation, the false light tort, and infliction of emotional distress torts could also do a lot of work in this space.
And then, of course, there are both state and federal trademark and unfair competition claims for those who are using their names or identities to sell products or services. These have some of the limitations that Jane pointed out: there needs to be a likelihood of confusion from the deepfake, and the person needs to actually be engaging in commerce. Copyright laws have some limits, although I do think it's early days. There was recently a decision in Concord Music v Anthropic, which rejected a fair use defense to the ingestion of copyrighted works for training data.
So I do think it's early. It's not clear yet and the Copyright Office has not decided whether digital replicas could be copyrighted under current law, and there are also a number of pending suits brought by the music industry based on sound recordings. Also, notably, the federal government this year passed and President Trump signed into law the Take It Down Act, which specifically targets deepfakes in the intimate image context, making sure that platforms have to take them down and also providing criminal consequences.
The last bit of legislation I'm going to focus on -- I'm almost done -- is the bills under consideration in Congress. There are a host of them. I'm going to focus in on the NO FAKES Act because, as I said, that's the one that seems to have the most support. NO FAKES stands for the Nurture Originals, Foster Art, and Keep Entertainment Safe Act. The title itself highlights that this was drafted with the entertainment industries, particularly the record labels, in mind, even though it would apply to everyone.
The bill creates a federal digital replica right, including a postmortem right, and provides liability as well as statutory damages. It does provide some breathing room for the reuse of copyrighted works, although with some limitations. It has notice-and-takedown provisions. The bill runs 39 pages in length; I would not have been able to cover it even if that's all I did during my 30 minutes, so I want to highlight a few key concerns with it.
It won't surprise you that this bill doesn't address our two key considerations that flow from our worries about deepfakes. It doesn't address whether deepfakes or digital replicas are deceptive at all. In fact, it allows them to be deceptive as long as they're authorized. So this seems like a worrisome primary federal law that doesn't focus at all on whether the public is deceived and potentially incentivizes a market that can profit from deceptive deepfakes. Secondly, as drafted, it exacerbates the problem of deceptively authorized deepfakes. Although-- my screen just went out.
Although I will say that it thankfully doesn't allow other people to own someone's digital replica, it allows a 10-year licensing regime with few limits and also allows, as I mentioned earlier, authorized representatives to control the person's identity and sign these licensing agreements. There are a host of other concerns with NO FAKES, which I'm happy to discuss in Q&A, but I want to keep us centered on the deepfakes issue. As you can see, there is what I have dubbed elsewhere an identity thicket in the United States of overlapping laws that cover people's identities.
It's growing, apparently weekly or maybe daily, with different laws that empower different people to control rights based in a single person's identity. And these conflicting and overlapping rights are difficult to parse. With all of that said, and this focus on law, I would be remiss-- thank you. I guess this slide, which I couldn't see because I couldn't see the screen, just reminds us what we should be focusing on.
But with all of this said, law is only one tool, and we should also think about ways in which the law could support technological and market based solutions to deepfakes, encouraging the building in of guardrails, the development of authentication software, disclosure and transparency requirements, and detection software. Much of this is already in process, but laws could also consider ways to incentivize this. And then, of course, there's market preferences. It's early days of generative AI technology. We don't know where it's going to go, but it may be that people will crave human interaction and human performances and authenticity.
Here we are in the epicenter of theater. It's possible the theater will thrive and revive when people want to see live, verifiably authentic people performing. So in our rush to fix the problems of deepfakes, we should make sure that the laws that are passed do not worsen the problem by giving legitimacy to deceptively authorized deepfakes, or by ignoring the problems of even authorized deepfakes that deceive the public. Unfortunately, too much of the recent proposed and enacted legislation does exactly this.
[APPLAUSE]
[GINSBURG] Are you going to go next? I thought you were going at the end.
[BEN SHEFFNER] I don't mean to.
[GINSBURG] No, no. So I neglected to introduce our next speaker, Ben Sheffner of the Motion Picture Association, who's, I think, going to be talking about authorized and maybe deceptively authorized deepfakes in light of Jennifer's exposition of the situation in the United States. And then we will move to the international context.
[SHEFFNER] Thank you very much, Professor Ginsburg and Pippa and the Kernochan Center and Columbia Law School for the opportunity to speak with you all today. I'm very much a US lawyer, so I'm going to be learning a lot today from the international perspective. And also, just a little bit of scene setting here: I come at this from a much different perspective than most of the speakers here today.
I'm not an academic. I don't have tenure. I'm an advocate for the members of the Motion Picture Association, which are the seven major motion picture and television producers and distributors here in the US. So we have an interest, of course, in combating abusive uses of deepfakes. We don't want to deceive anybody. We don't want to misappropriate people's identity in ways that are unfair. But we also have an interest in making movies and television shows that use new technologies to depict people, which is, of course, what movies and television shows do every day.
So also, I really appreciate Professor Rothman setting the table and giving us all a crash course in the various laws that already protect people's identities in various ways and some of the proposals. I'm an attorney, but most of my job actually involves not traditional law practice but policy and advocacy, which means that on all of these laws and bills that Professor Rothman was talking about, at both the federal and the state level, legislators often come to us and ask for our input.
We also spend a lot of time talking with other stakeholders in this area, whether it's record labels, representatives of recording artists, the union that represents actors, the major internet platforms, social media platforms, video game publishers, et cetera, all of whom have an interest in this area of law. And often, we try to get together and come to agreement on legislation that we could all support before it moves through Congress or state legislatures.
So before getting into the substance, I also want to do a little bit of definition of terms, which Professor Rothman did as well. I'll be using the term deepfakes largely interchangeably with the term digital replica. Deepfakes, I think, sometimes implies an element of deception, but both deepfakes and digital replicas, those terms can be used in contexts that are both deceptive and non-deceptive. Digital replica is the more common term in the motion picture industry where I work, so I will mainly use that term.
So this issue around deepfakes or digital replicas has taken on great importance for our industry over the last several years with the rise of increasingly advanced generative AI systems. But the rules around how a studio is allowed to depict people on the screen have been extremely important to our industry for a long time, well before anybody had heard of deepfakes or Sora 2.
As we heard from Professor Rothman a few minutes ago, the main body of law here in the US that regulates depictions of individuals is right of publicity. While right of publicity is considered a form of intellectual property, as Professor Rothman said, its origins are in privacy law. The two have melded together in recent years. Very importantly, from our perspective, right of publicity, properly understood, should be focused on what we call commercial uses. And in this context, commercial does not simply mean making money.
Rather, at least as we see it, it means uses in advertisements or on merchandise. So if a company uses a celebrity's name or anybody's face, really, to advertise a product without his or her permission, that's a violation of the person's right of publicity. And that's not very controversial. Certainly, we who represent motion picture producers have no problem with a law that says you can't use somebody's image, likeness, or voice to advertise a product without their permission.
But where we at the MPA become concerned is when individuals seek to invoke right of publicity laws to prevent depictions of individuals in what we call expressive works, including movies and television shows. In a series of cases over the last several decades, people who are depicted in movies and television shows, usually in ways they don't like, but sometimes just because they want more money, have sued producers, claiming that such depictions violate their right of publicity.
Examples include the movie The Hurt Locker, which won Best Picture about 15 years or so ago, where an Army sergeant named Jeffrey Sarver claimed that the main character of that movie was actually him. And Olivia de Havilland, the famous actress who was portrayed by Catherine Zeta-Jones in the TV series Feud, alleged that the producers needed her permission to portray her. The courts here in the US have almost universally rejected such claims, and many state right of publicity laws now explicitly exclude uses in expressive works, either in the statutes themselves or through the body of case law that's developed around them.
So by the late 2010s, there was relative stability in right of publicity law. Again, if you want to use somebody's likeness in an advertisement, you need to get permission. But if you want to portray them in a movie, you don't. That somewhat easy détente that I just described, however, was upended starting around a decade ago with the rise of generative AI technologies that allowed anyone to create increasingly realistic videos of people doing things they didn't do or saying things they didn't say.
And I want to say that the presentation we had at the outset this morning was really terrific in demonstrating some of these technologies. Much of the concern, as Professor Rothman pointed out, was around so-called deepfake pornography, in which the faces of both famous and non-famous women -- and yes, it's 99% women -- were digitally inserted into pornographic videos against their will. The representatives of actors were also concerned that digital replicas of them could soon be inserted into movies and TV programs, potentially undermining their ability to make a living.
Those concerns that I just described led the representatives of actors and others who were being victimized by deepfakes or digital replicas, or saw that they could be soon, to question whether existing law would provide a remedy for such abuses. The first place to look, of course, was right of publicity, which is the body of law that we have traditionally used to regulate depictions of individuals.
But as I described, right of publicity law is often limited to advertising and merchandising uses, and many of the new or anticipated abuses of digital replicas were not in advertisements or merchandise. Instead, they were in expressive works, whether random videos on YouTube or TikTok, or potentially in movies and television shows. So representatives of those victims or potential victims of deepfakes or digital replicas realized they probably needed new laws to address this new problem. We've seen various approaches here in the US.
Many states have recently enacted new legislation that addresses very specific forms of deepfake-related abuses -- for example, deepfake pornography or deepfakes of political candidates during election season. And as Professor Rothman mentioned, in May of this year, President Trump signed into law a new federal law called the Take It Down Act, which specifically addresses the problem of pornographic deepfakes through both criminal law and a right to have pornographic deepfakes of a person taken down from social media platforms.
But we have also seen the introduction of a large volume of legislation to broadly regulate uses of digital replicas, including in expressive works. In 2024, 10 states introduced bills to broadly regulate the use of digital replicas in expressive works, and bills passed in four states, including the ELVIS Act -- which, of course, has nothing to do with a certain performer from Memphis, Tennessee, but is the Ensuring Likeness, Voice, and Image Security Act.
For those of you not from the US, we here tend to do something really strange. Usually you start with a group of words and create an acronym. Legislatures here do the opposite. They start with a catchy acronym and then reverse engineer it and shove in words that sometimes make sense and often don't. And so we had four bills actually pass in 2024: Tennessee, California, Illinois, and here in New York. This year, in 2025, bills have been introduced in 14 states and passed so far in three -- Montana, Illinois, and New York, although the one here in New York has not yet been signed by the governor.
And as Professor Rothman mentioned, an important piece of legislation has been introduced in the US Congress called the NO FAKES Act, which, again, is the Nurture Originals, Foster Art, and Keep Entertainment Safe Act. The NO FAKES Act would establish a brand new federal intellectual property right in one's likeness and voice, standing alongside other existing federal intellectual property rights, including copyright, patent, trademark, and more recently, trade secrets.
While the NO FAKES Act has not yet passed Congress, it does have support from many of the major stakeholders with interests in this issue, including the Motion Picture Association, SAG-AFTRA, the union which represents actors, the Recording Industry Association of America, which represents the major record labels, the Recording Academy, which represents individual recording artists, OpenAI, IBM, and Google and YouTube. So why did we at the Motion Picture Association endorse the NO FAKES Act?
After all, most companies don't like regulation, especially regulation that gives individuals the right to sue them. So there are several parts to my answer. First of all, we agree with actors and recording artists on the fundamental premise of the bill, which is that one should generally not be able to replicate others' likenesses and voices in new works in which they did not actually perform. In fact, the MPA's members endorsed that principle outside the legislative context in their 2023 collective bargaining agreement that resolved the strike by SAG-AFTRA.
In short, that agreement guaranteed actors what we sometimes call the three Cs. That's consent, compensation, and control. If you want to use a digital replica of an actor, you need to obtain their consent, you need to pay them, and you need to give them control over the uses of that digital replica. But back to the NO FAKES Act.
We at the MPA endorsed it, despite some misgivings about expansion of right of publicity law into expressive works, quite explicitly, because we believe it contains adequate safeguards that protect the ability of movie studios to depict individuals using innovative technologies in ways that we believe are protected by the First Amendment to the US Constitution, and thus do not require permission from the depicted individuals. I'll focus on two of those safeguards. First, the term digital replica is defined narrowly so that it only includes highly realistic depictions of the individual.
The definition would not include, for example, cartoon versions of an individual like you might see on the shows The Simpsons or South Park, even if it's apparent to the audience whom the cartoon is depicting. And second, and arguably most important from our perspective, the NO FAKES Act includes a robust set of exceptions that are intended to protect the ability to use digital replicas consistent with principles of free expression. Those exceptions include the following types of uses:
Bona fide news, public affairs, or sports broadcasts or accounts; depiction of an individual in a documentary or in a historical or biographical manner, including some degree of fictionalization -- which means basically docudramas and biopics; bona fide commentary, criticism, scholarship, satire, and parody; minor uses; and then use of a digital replica to advertise those works in which the digital replica actually appeared.
Let's consider examples of the types of uses where the permission of the depicted individual would not be required. One of my favorite examples is actually quite old. It's the movie Forrest Gump, which came out way back in 1994, 31 years ago. That fictional movie featured the title character, Forrest, navigating American life from the 1950s to the 1980s, sometimes interacting with actual historical figures. Famously, the producers, using digital replica technology available at the time, featured Forrest meeting and conversing with Presidents Kennedy, Johnson, and Nixon.
The NO FAKES exception for depictions of an individual, quote, "in a historical or biographical manner, including some degree of fictionalization" would ensure that today filmmakers could do the same sort of thing using modern digital replica technology. And I should mention, it's been publicly reported that Paramount, the producer of Forrest Gump, did not obtain permission from the heirs of those three presidents when they depicted them in the movie. They felt they were protected by the First Amendment and there were never any claims.
To take a more modern example, there's a series called For All Mankind, which is streamed on Apple TV and produced by our member Sony. It's an alternative history version of the US-USSR space race, and I highly recommend it. The show uses digitally manipulated videos to present a fictional version of history that incorporates real people, including John Lennon and President Reagan, doing and saying things they did not actually do or say in real life. But those depictions add verisimilitude to the show's fictional narrative.
Another important exception is for parody and satire, forms of commentary that the US Supreme Court has told us several times are protected by the First Amendment. A show like Saturday Night Live often features actors depicting real individuals, including politicians, in order to poke fun at them or make a political point. But under the NO FAKES Act, the producers could also use a digital replica of, for example, President Trump, depicting him saying things even more outlandish than he actually does in real life.
And turnabout, of course, is fair play. President Trump would be protected when he engages in one of his favorite pastimes, posting to Truth Social AI generated videos mocking his political opponents. The NO FAKES Act is not perfect. Like almost all legislation, it reflects sometimes painful compromises that are necessary to get a deal done. For example, the NO FAKES Act protects not only living individuals but also extends protections 70 years after an individual's death, mirroring the term of copyright, despite the significantly different nature of these rights and the justifications for them.
We at MPA are concerned that such a lengthy postmortem term creates unnecessary risks regarding the depiction of historical figures without a countervailing justification for such a lengthy right, although we acknowledge that these risks are significantly mitigated by the presence of the exceptions that I detailed just a couple minutes ago. I do want to address a point that Professor Rothman made: one of her criticisms of the NO FAKES Act is that it does not include an element of deception.
And that's accurate. And there were a couple reasons for that. One of them is that the representatives of the actors and the recording artists and the recording companies who were involved in the negotiations vehemently objected to a deception element. Their point is you should not be able to make a new song that sounds exactly like Taylor Swift, even if everybody knows that it's not actually Taylor Swift, or use a digital replica of an actor to appear in a movie without his or her permission, even if everybody knows that, for example, the person is dead and could not have actually acted.
Their point is that the deception is not relevant to the harm that the depicted individual would suffer. And the other reason, as Professor Rothman also mentioned, is that there are lots of other laws out there. The NO FAKES Act does not solve all the problems in the world associated with deepfakes. We still have defamation laws. We still have laws against fraud. We still have the Lanham Act, all of which may apply depending on the circumstances.
So while it's very difficult to get any legislation enacted in Congress these days -- you may be aware that our Congress can't even agree to fund the government's basic operations, at least at the moment -- we do believe this bill has a better chance than most, given its bipartisan support and endorsements from such a broad array of stakeholders.
Lastly, I've been asked to address the issue of commercialization of deepfakes or digital replicas. The idea here is that actors or other personalities would authorize creation of digital replicas of themselves and have those replicas endorse products, or even act in new movies, or sing in new songs without having to do the hard work of showing up on set or in a recording studio. And in theory, these replicas could continue to act or sing or endorse products even after the individual is dead.
For example, talent agency CAA announced in 2023 the creation of a so-called "CAA Vault," which can securely hold digital replicas of its clients for potential licensing opportunities. But what we haven't seen yet is significant deployment of such digital replicas by the talent themselves. One reason, I'm sure, is that the technology, while amazing and improving rapidly almost every day, still has not progressed to the point where a digital replica is truly lifelike enough to replace a human actor, at least in works longer than a few seconds.
And second, the public reaction to these digital replicas, and especially digital replicas of deceased individuals, has been almost uniformly negative. The words I hear most often in reaction to such uses include "creepy" and "ghoulish," words few brands want to be associated with. In closing, this is an area of law in considerable flux. New technologies are putting significant pressure on old laws, which are arguably inadequate to address current problems.
We can debate here exactly what is the right approach to addressing the concerns raised by actors, recording artists, and others about abusive uses of digital replicas. But politicians are not waiting for people like us to come up with the optimal solution. Instead, they're forging ahead with legislation, often inspired by anecdotes and examples that tug at the heartstrings but may actually already be addressed by existing law. It's my sincere hope that discussions, like the ones we're having today, will help steer policymakers in the right direction. Thank you.
[APPLAUSE]
[GRAEME AUSTIN] Hello, everyone. My name is Graeme Austin. First of all, I'd like to thank Jane Ginsburg for the kind invitation to be here at the Kernochan Center. It's lovely to be back at Columbia. Jane gave me the brief of trying to capture some of the legal developments in Commonwealth nations. I'm going to focus mainly on civil law, but it is worth mentioning criminal law in this context from a legislative design perspective.
One of the things that criminal law can do in certain jurisdictions is to give the victims of deepfakes access to things like victim support funding, which is relevant. I also think it's relevant, when thinking about legislative design, to think about access-to-justice questions, and cost structures and attorneys' fees, for example. I think that makes a big difference in terms of how you might think about a legislative design to meet the problem of deepfakes.
But I thought I'd start with the pessimistic news, a statement from Lord Walker in a very famous UK Supreme Court decision. It's the juggernaut case on economic torts in the United Kingdom. And his lordship says, "[U]nder English law it is not possible for a celebrity to claim a monopoly in his or her image, as if it were a trademark or brand." I want to develop two themes in my remarks.
First of all, building on what other speakers have said, notwithstanding this statement from the UK Supreme Court about the absence, at least in the law of England and Wales, of a personality right of the kind that you'd see in many jurisdictions here, other legal vehicles have stepped up to do a lot of the work that a bespoke right of publicity would do. And the second point is that in some Commonwealth jurisdictions, we do have rights of publicity that would look much more familiar to United States attorneys.
Those are interesting because the tentative thesis that I want to develop is that they derive from and are infused with ideas of human dignity that we find in the new constitutions. So the post-apartheid constitution in South Africa, and the very long constitution of India, are infused with notions of rights of dignity, and that is starting to influence the tort law thinking. So those are the two themes. I thought I'd start off, though, with defamation. As Jennifer mentioned, defamation does some of this work.
And there's a very old case of Tolley and Fry. The case illustrates how far we've come. So Cyril Tolley was an amateur golfer. He was extraordinarily famous. This is an invitation to a dinner that he appeared at, I think at Oxford. It might have been Cambridge. He was often depicted in this cartoon-like fashion. What you have to imagine now is that there's a chocolate bar sticking out of his pocket. I've asked people to try and find the original drawing and have not been able to do this.
That was done without his authorization. And very sweetly, the advertisement said, "The caddy to Tolley said, 'Oh, Sir! Good shot, sir, that ball, see it go, Sir! My word, how it flies, like a cartet of Fry's. They're handy, they're good and priced low, Sir!'" Fry's was the originator of the chocolate bar, and the company produced the advertisement completely without his authorization. He was depicted hawking these chocolate bars. This was considered to be defamatory because he was an amateur.
In fact, the House of Lords in the case said this was defamatory because he appeared to prostitute himself by being associated with a commercial product. You see how far we've come. I just thought I'd talk about a few defamation cases in Commonwealth jurisdictions. The Tolley and Fry decision I just mentioned was used relatively recently in Singapore, involving a politician who had appeared singing for charity in a restaurant, in a karaoke bar-- if I sing a song, you'll contribute $1,000 to this charity.
The restaurant then used his photograph without authorization to advertise its restaurant. This was defamatory because it sullied the pristine image of politicians. It probably says something about the fastidiousness required of Singaporean politicians. There's nowhere good I can go with that in this context. So again, this was suggesting that he was making money out of his image, or money out of his position as a politician, which was considered by the Singaporean court to be defamatory.
Sex does well in the context of defamation. The next case there, the Hussey case, involved a well-known Singaporean model whose image was then used for an escort service. Again, defamatory. In Australia, there's a case, the Australian Consolidated Press case, which involved depiction of an Australian rugby player in the shower, published in an English newspaper where his genitals could be sort of made out. Given the physique of most Australian rugby players, it's difficult to know why that would take him down in the eyes of right thinking people.
But the defamatory sting in the case was that somebody of his standing would not give permission to his image being used in a newspaper like this. It also detracted from some of the charities that he was associated with, particularly children's charities. The Charleston case. Again, this is a decision of the House of Lords in the United Kingdom. This was really an early deepfakes case.
It involved depictions of actors in an implausibly successful Australian TV program called Neighbours. Many of the people here won't know it, but if you're from Britain, you know it. It was extraordinarily successful, as they say. And somebody had developed a computer game that depicted leading actors from the program appearing naked and in sexually explicit contexts. One of the British tabloids published this.
There was a map of Australia covering the genitals of the actors in the images. You get the flavor of the case from a statement in the article: "The famous faces from the TV soap are the unwilling stars of a sordid computer game that is available to their child fans. The game superimposes stars' heads on near-naked bodies of real porn models. The stars knew nothing about it." The actors sued for defamation.
As Lord Wright said, though, there was, in that style of British tabloids, a tone of self-righteous indignation directed at the makers of the game, which contrasted oddly with the prominence given to the main photographs. But the reasonable reader of a British tabloid newspaper-- if that's not an oxymoron-- must be assumed to have read the whole article, which explained that the actors were outraged about this and knew nothing about it. So it took away the defamatory sting.
There's one last defamation case that I do want to mention. It's an obscure South African case. It's not reported anywhere. But it involved a 12-year-old girl, a surfer who was photographed on the beach. And then she was looking away from the camera, but her image was then used on the cover of a surfing magazine in South Africa called Zigzag. And then underneath the picture was the word "filth." Does anyone know what "filth" means in that context? No?
Columbia is probably not all that associated with surfing. Maybe if I asked the same question in California, we'd have a different answer. "Filth" means really great in surfing language, but of course, it had that sexual connotation. There, she succeeded in her defamation claim. The court also said she would probably succeed in a kind of right of publicity claim as well. What's interesting about the case for our perspectives in the light of the NO FAKES Act, this is a case of an ordinary person who succeeded with these torts, not a celebrity.
All right. So defamation does some of the work. But I think it would be fair to say that it does most of its work in sexual contexts. Breach of confidence. That Allan case that I referred to at the beginning was actually a breach of confidence case, in some respects, and it involved the famous scoop of the photographs of the wedding of Michael Douglas and Catherine Zeta-Jones who, as we know, played Olivia de Havilland in that TV show that Ben mentioned.
The couple had given exclusive rights to one magazine, and a photographer came in and scooped the photographs himself. It succeeded as a breach of confidence case, on the basis that the images of the wedding were confidential. For our purposes, I think what's interesting in the case was some of the evidence that was there. So even though this was an economic tort case, the evidence from Catherine Zeta-Jones went along these lines: the hard reality of the film industry is that preserving my image, particularly as a woman, is vital to my career.
So it was getting at some of the harms that Jennifer Rothman mentioned early on. So a breach of confidence is another vehicle for protecting some of the interests that underlie the right of publicity torts. Canada is an interesting case. In some of the provinces, four or five of the provinces, there are bespoke privacy statutes. In British Columbia, the privacy statute, as we see also in the United States, provides a kind of right of publicity statutory tort.
So if you just look at the wording there, it's a tort actionable without proof of damage to use the name or portrait of another for the purpose of advertising or promoting the sale of, or other trading in, property or services. And the interesting question is whether this would be considered to cover services for the creation of deepfakes. And "portrait" is defined quite broadly; it does cover caricature, the kind of wrong that arguably is not covered in the NO FAKES Act.
Where there has been the most development in some jurisdictions, notwithstanding the UK Supreme Court saying there is no right of publicity, is passing off. And the greatest development, I think, is in Australia, which very early on adapted passing off to the right of publicity. This was the first case, Henderson. It involved ballroom dancers. They were quite famous, and their image was used on a record sleeve for ballroom dancing music. The legal innovation in the case was that passing off no longer required the plaintiff and the defendant to be engaged in the same kind of business.
So this was considered to be a passing off. And then we get the greatest development-- you'd have to check my cultural reference points-- with the litigation that came out of the Crocodile-- I see familiar people nodding at me-- the Crocodile Dundee franchise. There is, some of you might remember, an iconic scene in Crocodile Dundee where, in New York, where we are, he and his girlfriend are about to be mugged. The girlfriend says, hand over your wallet, he's got a knife.
Paul Hogan brings out an enormous knife and says, no, that's a knife. And this was used in a number of television commercials and in stores. There was one store that depicted koalas holding large knives. It's a kind of marsupial deepfake that was used here. That was considered to be passing off. And then it was also used in an advertisement for shoes where the girlfriend says, "Hand over your wallet. He's got leather shoes." And the Paul Hogan figure, the simulation of Paul Hogan says, you call those shoes? These are leather shoes.
The high point-- and this was the case that we put in the materials-- was about Rihanna. This is a Court of Appeal decision for England and Wales where a shop, a store sold t-shirts depicting Rihanna. They had permission from a copyright perspective to do this, but this was considered to be passing off, notwithstanding the absence of a bespoke tort there. One of the reasons was fans would know that image because of the Talk That Talk album, which had used very similar imagery.
All right. So those are jurisdictions without a right of publicity, mostly, but where other causes of action have come into play. I'll just briefly mention consumer protection and media standards regulation. I found a Singaporean advertising code of practice that's very broad: advertisements and sales promotions should not manipulate, such as through electronic morphing, any person to create a misleading or untruthful presentation. So it's often useful to take those into account.
Now very quickly, I just want to finish with right of publicity and privacy rights cases. I've mentioned Canada. Canada also has a common law right of publicity that has developed. There was a famous water skier whose image was used in advertising, which made out the right of publicity tort in Ontario. Then the Gould Estate one. This was an unsuccessful claim where the estate of the famous pianist Glenn Gould sued when a book was published about him.
This is where the Ontario Court developed this distinction between sales and subject. If the celebrity is the subject of the depiction, that does not give rise to a tort. But if the celebrity is used to sell something, then that gives rise to a tort. And that's their vehicle for infusing this area of law with concerns about freedom of expression, that talking about the person is fine.
South Africa. I want to just very quickly move to South Africa and India in the last two minutes that I have. But I wanted to quote this from one of the leading South African cases. There aren't many. This was used in that case about the child surfer. But the South African court said, "The value of human dignity in our Constitution is not only concerned with an individual's sense of self-worth, but constitutes an affirmation of the worth of human beings in our society.
It includes the intrinsic worth of human beings shared by all people, as well as the individual reputation of each person built upon his or her own individual achievements." So it's linking what are largely commercial interests in many contexts to these kinds of dignitary interests, expression of which we find in new constitutions like the South African Constitution.
And then one final example comes from India. It's a decision involving Aishwarya Rai. She is probably one of the most famous people in the world. I'll raise you Taylor Swift on this one. She was a Miss World, an extraordinarily successful Bollywood star. She has, as you would expect, cultivated a particular image. She is a brand ambassador. She makes a lot of money out of her personality, out of her image, as well as being an extraordinarily highly respected actor in Bollywood movies.
We've come full circle from the Tolley and Fry case. Here, the right of publicity was recognized because of her commercial success. In Tolley and Fry using defamation, he had a claim because of his amateur status. And the last thing I'll end with is this was an interim decision. We don't have a final judgment. But if you just have a look at the kinds of things that she was claiming for, and the court said on an interim basis, we're giving relief to all of this. So a website representing itself as the plaintiff's official website.
A site that allowed downloadable wallpapers, a site that purveyed t-shirts featuring the plaintiff's name and photographs, e-commerce platforms selling and facilitating images, all that familiar stuff. This seems to happen a lot in India. A motivational feature using the plaintiff's name and image. You find other Bollywood stars having this happen to them-- we've got her on our books, when they don't. And then we get to the deepfakes: a chatbot enabling users to engage with an impersonation of the plaintiff, including sexualized content.
The YouTube channel, Google in its capacity as owner of YouTube, and various John Doe defendants who had used her image in the deepfake context. The procedural posture in the case was quite interesting, as she was successfully seeking exemption from a procedural requirement in some of the courts in India that parties go to mediation first before litigation. The judge said, you don't have to go to mediation with this kind of case.
But also, on an interim basis, the judge provided civil remedies, injunctions, and takedown notices in respect of all of those defendants. So a couple of points to conclude. Piecemeal laws provide these kinds of remedies across a number of jurisdictions. And then, I think, the emergence of right of publicity slash privacy laws-- private law claims, but infused with constitutional focal points on the dignity of the individual. Thank you.
[APPLAUSE]
[VALÉRIE-LAURE BENABOU] That's what I say to my students when they get emotional about speaking in front of the public: drink a little bit of water. So that's what I did. Thank you so much for welcoming me. And I hope that even if my English is not native and I may make some mistakes, you can understand a European, French perspective on the question.
I must say that after listening to my previous colleague, I was wondering about the choices I made in my presentation not to talk about these issues of defamation or passing off or parasitism, as we have in France, or unfair competition, which are actually also in scope. But deepfakes cover so many issues that you can address them through various bodies of legislation. So my choices are maybe disputable, but I decided, nevertheless, to select some issues in Europe and in France.
And first I wanted, as my colleague did, to start with the consideration that criminal law in France and in different countries of the EU already bans the unauthorized representation of a person. In this perspective, deepfakes are not the target; they are only the means to commit the offense, like harassment through deepfakes or offenses against privacy through deepfakes. But it seems to me that it was not my task to talk about that criminal law, because we are addressing private rights.
Deepfakes were not the subject matter of that legislation. Still, lately, the EU has addressed the question of deepfakes per se in the AI Act, which my colleague Celia will discuss this afternoon. And in the definition of the AI Act, the deepfake seems to be an unrevealed alteration of reality. So it's the untruthfulness which is at stake now.
Whereas in France, lately we have updated our legislation on what we also call deepfakes. But a deepfake here is the unrevealed use of a technique. It's not a question of whether it is true or not true; it's a use of the image which is not consented to by the person, where it's not obvious that this is algorithmically generated content, or it's not expressly mentioned. So the problem of fakeness, may I say, is the unrevealed use of a specific technique.
And we don't address the question of the truth of what has been said or done. That's apart; it's something else. So even in the EU, we are not really very precise on what we call deepfakes. But what I wanted to say is that, for me, the deepfake is a question of altered reality. Etymologically speaking, alter means two different things. Other-- alter, the other. Or alter, to distort, to destroy, to alter the image.
And I guess that may be something we can keep in mind as we try to distinguish between deepfakes and replicas. It's something else; it's the other. But it's also sometimes something which is deceiving. And we have both considerations to take into account. So starting from that, what are private rights and deepfakes? Mostly, the consent of the person who is depicted is relevant to decide whether it is a criminal offense or not.
So private rights may be relevant in the inner circle of what is not a criminal offense. So meaning, what is my power as an individual person to control or to oppose a deepfake which would not be contrary to the criminal law? Do I have a margin of maneuver to say if it's a fake or if it's something that is normal and that I can do business with? So it's problematic because the line between what is legal and not legal will depend on the consent of the person.
But we know that in intellectual property, for sure: when we give consent to something, well, it's legal, and when we don't, it's illegal. And you can't sue on criminal grounds. So let's start from that. And I wanted also to share this dilemma. Is a representation of oneself a dimension of oneself, of myself? If I have my image reproduced, is it me? Or is it something other than me-- an image created by someone else, or by myself, over which I may have a different kind of control of my own attributes?
And for me, this contributes to drawing a line between what can be a property right and what is a personality right. And we'll see that this is the relevant distinction for us. But it's quite difficult, because when am I really myself? When I'm creating a fake image of myself-- like when I put on some makeup-- is it me, or is it an image that I am building? And can I claim property in it, or is it only a dimension of my personality?
So we have different types of protection, and I will really miss some. But you have had these wonderful speeches from my colleagues, more or less. Well, it's not the same in civil law as in common law, obviously, but we share some elements on defamation and privacy. I will focus on intellectual property and personality rights. Intellectual property-- I will go through that very quickly, because I read the NO FAKES Act, and it seems to me that, well, the purpose is to create a new intellectual property right.
And my concern was, what is the counterpart-- what is the social utility brought by the person, by the image of a person in day-to-day life, on which he could claim a kind of intellectual property? And it seems to me that it's dangerous to extend IP rights whenever there is no social utility for the creation, for enhancing creation or investment in something that we can all share.
And I'm not sure that it's a good thing to grant a person a private intellectual property right in his own image only in order to prevent and to ban the use of the image, whenever we may have an interest, a public interest, to discuss, to share, to use the image or the voice of the person. If it's an IP right, then you have an exclusive right, which means that you have the right to oppose, so you can control the absence of use.
You can make sure that there is no deepfake, but you can also license and agree. And this is a market. This is business. This is, well, a problem of sharing the revenue of the exploitation of the deepfake. In the EU, we have lately harmonized a part of our contract law, and we provide specific protection for authors and performers, like individual protection-- a right to be associated with the exploitation, without any possibility of buyout. We can discuss that.
And private international law may change the game, but under EU law, if a performer or author assigns rights for deepfake exploitation, he should be associated with all the profits of exploitation, and there is no lump sum. So it's a little bit different from what has been explained earlier. The problem is, do we have a case there? Is copyright relevant? And Jane told us that, well, it's sometimes complicated to claim copyright.
It's complicated because we have this distinction between idea and form, or expression, meaning that style is not protected. It's in the public domain. So if you're making a fake which is in the style of a creator, well, it's free; you can do that. And if you want to claim that your copyright has been infringed, you must show that there has been copying or communication to the public of a piece of your work which is still recognizable, and which must also be original-- so you have a threshold of originality to demonstrate in order to claim that someone cannot use that part of your work in the deepfake.
And that may be complicated for the training or for the output. Whereas-- sorry-- for the producer's right, it's easier, because the producer's right is a right on the fixation. And the Court of Justice, in a very famous decision of July 2019, decided that with a sample-- so you reproduce a short excerpt of a phonogram-- you have a right, unless the sample is included in the new phonogram in a modified form unrecognizable to the ear.
Meaning that whenever you listen to the voice of a singer on something which was a fixation by the producer, and you can recognize the voice, notwithstanding the means that were necessary to get to the result-- if you recognize the voice, then you have a use, a communication to the public, of the phonogram that the producer can oppose. I'm sorry, I don't have any screen anymore.
So the producer has a claim to say, you cannot do that. OK. The problem is-- thank you. The problem is, are performers also entitled to sue if I can recognize the voice? And we have a problem here, because we usually consider that the performer is only protected if he performs a work. So if you use the voice of an artist or a singer, but he is not interpreting a performance he has done, then you may not consider that it's covered by this right.
So my thinking about that is, well, sure, the voice of a singer itself, used in another work, is not a performance. So I cannot claim my exclusive rights as a performer. But if what is used is my voice singing-- I am an opera singer and the deepfake shows me singing, so my work, my performance, was extracted-- this is the value that has been extracted. It's not my day-to-day voice, but my voice as a singer. And this value can be considered, from my point of view, as an expression of a performance.
And therefore, we can claim that maybe the performer has standing. If I use the image of Jim Carrey when he is buying his milk in the morning, he has no performer's right. But if I use the grimace of Jim Carrey in a deepfake, then, considering that it's his work-- it's his job to make the grimaces-- I think the value that is extracted is that of the performer.
The problem is, do I perform myself? Am I myself a work? It seems to me that we can split it. If I am interpreting a work-- a character that I am creating or that has been created by a creator-- then I have a copyright, or a related right. If it's only the reproduction of my image and my voice during my day-to-day life, well, no work is being interpreted, and I cannot extend performers' rights to this situation.
Well, very quickly, we have a database sui generis right in the EU that has been harmonized. Not really interesting, but I was wondering whether I can consider myself as a database of my own data. And if so, whether I can consider that self-care and self-education are investment in my database, so that I can claim that anyone extracting my data is actually infringing my sui generis database right. It's a hypothesis.
Quite strange, but I wanted to share that with you. Trademark-- I won't go into the details. In my view, it's complicated, under EU trademark law, to consider that the image of a person can be trademarked for the image itself. It's only if it's related to products. There's no protection of notoriety by itself through trademarks, so it won't be sufficient unless you have registered a trademark for a separate product.
What is really relevant for us today is that we also have moral rights, not only economic rights. We have that at the international level, but as you know, Article 6bis of the Berne Convention is subject to reservations. You don't have it in the US as we have it in some EU laws-- not all of us. So moral right, unfortunately, has not been harmonized. That's why I will give the French example.
We have a very broad moral right that encompasses several prerogatives, mainly the right to claim attribution and the right to the integrity of the work. And this has been interpreted very broadly by the courts, like in this decision of the Cour de cassation, where Jean Ferrat, a famous singer, wanted to oppose the use of his song in a compilation in which there also appeared some singers who had collaborated with the Nazis during the war. He was a Communist, and he considered that he was not very comfortable being in this compilation.
And the court decided that the artist could absolutely oppose this compilation, because it was likely to alter the meaning of his expression, and that he could, on the basis of his moral right, override the authorization that the producer had given for the compilation. Moral right is really interesting because it is not assignable. So even if I assign my exploitation rights, I can still and always oppose the alteration of my work or my performance.
So this is a guarantee that even if there is bargaining between me and someone, they cannot bypass my consent by going further and stretching my authorization to cover a situation that harms my dignity, my integrity, my paternity. But it is still a private right. It's not something that attests to the authenticity of the work or of the performance. This has nothing to do with the truth. Someone can absolutely use this moral right to deny attribution of a work that he has in fact made. We have had that case several times.
So very quickly, because I only have one minute. We also have limits to IP rights, and those limits are parody, pastiche, caricature, quotation. And lately, in the opinion of Advocate General Emiliou of June 2025, the Advocate General has drawn a very interesting distinction between parody, pastiche, and quotation.
And what is relevant for me in this comparison is that it stresses the fact that the right holder can keep his monopoly whenever the public may be deceived as to whether the use of the work has been authorized, or where the public cannot trace the origin of the parody or the pastiche. So I think it's really interesting that in those cases, what is meaningful is what the public has understood of this distancing between the genuine work and the parody.
And it seems to me very relevant to focus on that-- on how we take into account the public's comprehension of what is being done with the work or with the performance. Is it understood that it is not the genuine intent of the author or the performer to do what has been done with his creation?
Finally, in the Deckmyn case there is a very interesting theme, which is that the exception of parody had been used by a far-right group in Belgium to justify the imitation of a cartoon, but with an underlying discriminatory message. And the Court of Justice said, well, as the author, you have the right not to be associated with such a discriminatory message, even if this was not grounded on what we call moral right, which, as I said, is not harmonized.
We can see in the Deckmyn case, maybe, the emergence of something like an embryo of a moral right at the EU level, saying to right holders, you can oppose something presented as caricature or parody but that goes far beyond what is considered a mockery and that embeds discriminatory messages. I will stop with this.
Just to say that-- I'll just skip to the end. Oops, sorry. Just to share some thoughts about the protection of dead persons. We have, in privacy rights and also in the GDPR, a protection of the individual regarding his likeness, voice, and also data. But this protection ends with the life of the person in both cases. After the life of the person, no protection is granted on this.
And the idea in the NO FAKES Act of these IP rights lasting 70 years after the death of the person seems to me very dangerous-- to extend such a right of privacy, or of control over one's image, that far. I think we should keep in mind that if it's a right of an individual, and not of a creator or a performer, there is no interest in fighting for a longer period after the death of the person, whether on the ground of IP rights or on the ground of a privacy or personality right. I can elaborate more in the Q&A. So thank you so much.
[APPLAUSE]
[GINSBERG] You don't have a seat? You do?
[ROTHMAN] No, I do. I'm just--
[GINSBERG] Have a seat. Yes. So thank you to all the panel members. I'm going to start the Q&A with first asking if any of the members of the panel want to react to something that another member of the panel said.
[AUSTIN] Thanks, Jen. I wanted to pick up, just because it's so immediate, on your last point, Valerie, about your discomfort with the extension of these rights and the endurance of these rights after death. And I wondered how we think about that in the context of some of the remarks that Jennifer made when she so carefully outlined the harms, including harms to family members and related people as well. And I wonder if we can be so sure that those kinds of harms do not endure after the performer has died.
[BENABOU] OK. I kept the photo of François Mitterrand dead on his bed. That was the starting point for a trial in France, deciding that there is no privacy right or personality right after the death of the person. But the heirs, if they can demonstrate that they have suffered a separate harm of their own, they can claim. But they are not an extension of the person. They are separate persons, and they suffer harm because, obviously, seeing my husband dead in the press is something that is harmful-- but it's not François Mitterrand's problem anymore.
[ROTHMAN] Yeah. Just go. I don't know if this is on. So I recently wrote, with a co-author, Anita Allen, an article on postmortem privacy which, if this topic interests you, goes into more depth than I possibly could here. And I cut postmortem out of my talk for time. But I think that's right. We have very different harms between the living and the dead. But we might care very much-- our living selves might care very much-- about how we anticipate we will be depicted in the afterlife, which could affect us more broadly as a society.
And our relatives may have their own experiences. And the way in which current postmortem laws on the books are drafted, and the way the NO FAKES Act proposes to create this postmortem right, focuses on those who commercialize their identities after death, creating a market in the dead rather than protecting the dignity and reputation of the deceased or of the loved ones who might want to limit that commercialization-- which I think has significant inequities and is largely focused on the wrong problems.
And in addition, for those who, like it or not, are familiar with tax law: the way the system in the US is currently designed-- and the way NO FAKES proposes-- if you're a well-known individual who could have valuable commercial rights, it would actually force people to commercialize the dead, even against the wishes of the deceased and their families, to pay off an estate tax which would be assessed at the fully commercialized value.
So the NO FAKES Act actually could be very, very challenging. And you may have seen some reactions by Robin Williams' daughter to a recent AI-generated version of her father. Rather than being able to stop those uses, the way NO FAKES and other laws are being drafted would actually force her, against her own wishes, to commercialize him to pay off that tax bill.
[GINSBERG] I do have one reaction to something else.
[ROTHMAN] Unsurprisingly, for Ben. So, as Ben knows, I am very supportive of the protection of expressive works and creative works, and I think that is essential, which is why I tried to highlight that there are so many wonderful uses of this technology. We tend to think of deepfakes as pejorative, but-- whether we call them digital replicas, so we don't bring that baggage with us, or something else-- there are wonderful effects and creative works that can be created, and that needs to be kept in mind.
But with that said, I think that while some states limit the right of publicity to the advertising and merchandise context, the vast majority do not. And this dates back to the origin of these laws and continues today. So the examples that Ben referred to, the Sarver case and de Havilland, both of which I think were correct-- and as Ben may remember, I was the lead counsel for the intellectual property and constitutional law professors on the de Havilland case and argued it in the Court of Appeals on our behalf, which we won.
Defending the right to depict de Havilland in the series. But these were First Amendment decisions, saying that the First Amendment protected these uses, not that the plaintiffs failed to make out a prima facie case. And so when we're talking about deepfakes, I think that's very important. There are going to be a lot of things that are protected by the First Amendment and fair use. Good actors in the movie industry are all going to fit into that category with most uses, certainly the way it's currently being conceptualized. But bad actors could escape liability if we overly narrow the scope of these laws.
And just one more thing about what Ben said with regard to the wide support of NO FAKES. There were a lot of glaring absences in that list of supporters, namely any individual performing artist and any member of the general public. And so it's no surprise that the bill is drafted in a way that has exemptions for the motion picture industry, gives the record labels broad standing, and has a carve-out for those who are subject to collective bargaining agreements, like the Screen Actors Guild. So anyone who doesn't fit into those boxes is not well protected or served by this law as currently drafted.
[GINSBERG] OK. We have a little bit of time for Josh. And please say who you are.
[JOSH BERLOWITZ] Hi. I'm Josh Berlowitz from Kirkland and Ellis. This was fabulous. Thank you so much, all of you. It was very interesting. I want to pick up where Professor Rothman just left off, which is the harm to the public. And for a variety of reasons, from all of your talks, I think it makes sense that the efforts to combat deepfakes and digital replicas have been focused on intellectual property rights, rights of publicity, personality rights, moral rights, and all sorts of things that an individual can enforce. But the problem is that limits the consideration of the public.
And we don't have a lot of-- or we don't have that many rights that the public can enforce in the US. Consumer protection laws are fairly narrow that individuals can have a private right of action on. And I'm thinking about false advertising laws where an individual can bring a class action and say, we as the public were deceived and we bought this product and we didn't mean to, but we were deceived by this company for x, y, and z reasons and recover.
And you can figure out who was harmed and you can grant a remedy, assuming the case is made out. And I'm wondering what can be done to protect the public. What would a public right not to be deceived by digital replicas look like?
[GINSBERG] I just want to point out that our next panel is going to be on transparency. But transparency does not exhaust the scope of your question. So Ben.
[SHEFFNER] Yeah. I mean, I think your question is part of the answer. There already are existing causes of action that can be used in the scenario you pointed out. You remember Professor Rothman's presentation. She had the example of Tom Hanks. There was a video circulating-- I can't remember exactly-- a fake ad that purported to show him endorsing some sort of dental service, and there was outrage about that, understandably.
But he has a cause of action under state right of publicity law in probably every state in the country, probably under the federal Lanham Act as well. And as you pointed out, I'm not as familiar with this body of law, but there are consumer protection statutes if somebody was deceived into purchasing that product because they falsely believed that Tom Hanks had actually endorsed it, and maybe a whole class would have a cause of action.
In my conversations with legislators, when we're talking about these issues, I often try to say, hey, take a pause. Slow down before you enact this broad new bill that says all digital replicas are illegal, but here's a bunch of exceptions. And stop and ask, is the harm that you're actually worried about already covered by existing law? And in a lot of cases, it will be.
Maybe not every case, and there are some gaps. That's why, at least initially, there was this focus on protecting professional performers who were worried about having their performances replicated without their permission by digital replicas. But again, if a digital replica is used to endorse a product, I'm pretty sure that the victim, the person who was falsely depicted there, would already be able to sue under existing right of publicity law in all 50 states.
[AUSTIN] OK. I think that's a-- I think that's a great question. When preparing for this, I was doing a thought experiment, trying to superimpose-- it's Federalist 52, isn't it, where Madison says, with copyright, the public good and the private right fully coincide. Yeah.
[GINSBERG] Federalist 43.
[AUSTIN] Thank you. Can you make the same claim with these kinds of rights, where you have, at a systemic level, the public good and the private right aligning in the way that is claimed for copyright and patent by the framing generation? And one of my reflections on this-- I have a hunch that the United Kingdom Supreme Court, a court that is very focused on commercial interests and the integrity of private law, sets its face against publicity rights because of a conviction that things like passing off and breach of confidence put a heavy thumb on the public interest in a way that these other rights might not, at least in their view.
I think there are claims for the public good that can be made with these kinds of rights that they are not focusing on. And that brings me back to this idea that these new constitutions, or rights inherent in some of the constitutions in the Commonwealth, like the Canadian Charter of Rights, protecting dignity, lead to an infusion of those ideas into tort law. But it's also important not to lose sight of the public-serving aspect of those torts that you find more fully committed to in the United Kingdom.
[GINSBERG] In that Federalist 43, Madison also said the states cannot separately make effectual provision for the protection of patent and copyright. And I think that's part of our problem here. Did you want to say anything else?
[ROTHMAN] I just wanted to add to that. So I love that question. I've been thinking about that a lot. And some of it is, I think, inflected in existing law, but largely in consumer protection. I think some of the protection for dignity resonates from the EU, in France and in Commonwealth countries, and it is also in aspects of our law, particularly privacy and privacy laws. But I think going forward, our focus on trying to center concerns over deceiving the public can drive some of our choices.
So, separate from existing laws, we could be creating mechanisms for government enforcement of criminal and civil penalties for deceptive deepfakes. We need to have a functional regulatory state, but if it worked, we could be setting that up. I think we could also, as I mentioned towards the end of my comments, support and encourage through legislation the adoption of technology-- and I guess we're going to talk a little bit more about that later today-- which facilitates authentication, detection, and transparency.
And importantly, something to keep in mind as we think about passing new legislation, both at the federal and state level: is this creating an architecture which will give more fuel to circulating deceptive deepfakes? I think some of the ways these laws have been drafted actually enhance the likelihood of deceiving the public rather than mitigate it. And that makes them, in my book, bad law.
[BENABOU] I just wanted to add something about deepfakes, which is fake news, because we didn't address the question of fake news. When the press publishers were advocating, were lobbying in the EU to get the related right for press publishers, they relied a lot on the risk of fake news, saying, we need to have control over the press publication to fight against fake news.
And I wonder whether it's relevant to give the press publisher control over the information on the theory that they will oppose the fake news. Because so far, I haven't seen the result of that. Maybe. But I was wondering. And I also thought that the public interest may be represented in some of the tools for the protection of cultural heritage, because the authenticity of something-- the historical truth-- is something that we all share.
And it seems to me that we should maybe address the issue not through an IP right but through something like cultural heritage. For the dead person, for example-- the protection of the dead person, not falsely portraying Martin Luther King. It's our cultural heritage, and someone should represent the public interest in not being deceived by this kind of fake person.
[GINSBERG] We had one more question, and I know it's our coffee break. But I think I will invade the coffee break. Actually, Ted, it was the person behind you who had raised his hand. So sorry, Ted.
[CHARLES BOWDEN] Thank you very much for your whole presentation. It was very interesting. My name is Charles Bowden. I'm a PhD student in philosophy at the Sorbonne. And my question is actually about the distinction and the categories Professor Rothman brought up, but this question is open to all of the participants. You made a distinction between authorized, non-authorized, and fictional deepfakes. And I'm interested in the last category, the fiction one. Can't we consider that all deepfakes are fiction? And if yes, can we make distinctions inside this category? And if not, what could be, in your perspective, a non-fictional deepfake? Thank you.
[SHEFFNER] Maybe I'll take it. I'll take this one. So the term that we in the motion picture industry use for that category of deepfake, let's call them-- like the Tilly Norwood fake actress-- is synthetic performers. And this was actually a big topic of discussion in the negotiations between the studios and SAG-AFTRA, the union representing actors, back in 2023. And there was a big fight over this.
And the ultimate resolution is that if the studio wants to use a synthetic performer, meaning a performer that does not actually resemble any one particular actor, they have to let the union know about it and give them the opportunity to bargain over that use. And you would say, well, why should the union care? Because that person is not an actual human being actor. They don't pay union dues or anything.
But their answer would be that a synthetic performer like Tilly Norwood only exists because it was trained-- because the AI model that produced it was trained on material that embodies hundreds, maybe even thousands or tens of thousands of performances by union members. In other words, again, Tilly Norwood wouldn't exist but for real actors.
I actually just heard from somebody at SAG-AFTRA the other day who said that since that agreement was entered into in late 2023, there has not been a single instance of a studio actually going to the union and saying, we want to use a synthetic performer, let's talk about it. But it's something that the actors are very concerned about-- they don't like it at all. But if a studio is going to use one, they believe they should share in the benefit, because, again, they believe it was created by taking little bits of thousands of performances and assembling them into a new synthetic performance.
[ROTHMAN] I like the term synthespian better. So I don't claim that this is epistemologically necessarily the best way to frame it. What I'm focused on-- and you're right. They're all, in one sense, fictional because they're fake. They're not something that happened, and so they're all fictionalized in that sense. So what I meant in this sense was that it's a fictional person depicted. And so in the other ones, we have depictions of real people, even if the deepfake itself is fictional.
And so I wanted to just highlight that and problematize even that category, given how we frame and understand the harms that flow from deepfakes. How do we understand these sorts of fictionalized deepfakes, where there's not actually a real person depicted? Is that a deepfake, or is that not a deepfake? Because we could define it that way-- some of the laws and definitions of deepfakes and digital replicas say it has to depict a real person and simulate a real person.
So we could define it that way, or we could not define it that way. And so by having that category, I was trying to highlight, let's think about it. We're not hurting the individual depicted because they're not real. But we might be deceiving the public in the same way as the others, and so we might want to keep it in.
[GINSBERG] And you might be substituting for the livelihoods of real actors. Right. So I've now allowed us to invade the coffee break by more than 10 minutes. So I think we should take our break, with apologies to those who wanted to continue with the Q&A. So we will return at noon, or as close to noon as possible. So our 30-minute break is a 20-minute break. And when we come back, we will talk about transparency.
[APPLAUSE]
[LOENGARD] OK. I think we're going to start. Just to preview, lunches, box lunches will be available where breakfast is now. You're welcome to eat them here or in the room across the hall. You're welcome to go outside. Whatever. I have not been outside in five hours, so for all I know, there's a tornado. But in theory, you're welcome to go wherever is most comfortable for you. And then we'll reconvene.
I don't have my schedule, but I'm going to say 2 o'clock. And if I'm wrong, go by the schedule, not by me. So those will be available right after this amazing session that we are pleased to host next, which features Fordham Law Professor Olivier Sylvain and Professor Celia Zolynski of the University of Paris Pantheon-Sorbonne. Again with the French names in front of the French speakers. It's deadly.
They will discuss the intersection of deepfakes and free expression, what protections transparency measures can give, and what newly proposed or enacted policies in the United States and the EU offer in terms of combating the unauthorized manipulation of images. So we have had our preview, and I leave it to Celia to take us forward.
[CELIA ZOLYNSKI] Thank you so much. So first of all, I would like to thank Professor Jane Ginsberg and all the organizers, and also the Alliance Program, for making this comparative symposium possible. And I'm delighted to be with you today. I'm going to present an overview of the topic based on the first results of research I've been leading, and continue to pursue, on deepfake technologies and the legal framework-- especially an opinion for the French commission on human rights about teen intimacy and digital services.
This opinion, published in February 2024, analyzed the impacts of non-consensual sexually explicit deepfakes. There's also a research project in my research center dedicated to the production of an open-source large language model. In this project, we are studying with partners the technical and legal challenges of watermarking. And we are also currently finishing a legal study for the French health agency about safety on social media and risks for teens.
And I'm beginning a mission for the minister of culture about deepfakes in the creative sector. So with all this research, I propose to share my thoughts-- maybe not really answers, but thoughts-- about deepfakes and the current legal framework, and maybe the evolution of this legal framework from the EU perspective.
So let's begin with what we are talking about. This morning we understood that the very notion of deepfakes is not so clear, and that we have a lot of definitions of what could be, what is, what should be a deepfake. Because the previous speakers have made brilliant presentations, I just want to remind you that we have a legal definition in the EU, in the AI Act.
So Article 3(60) of the AI Act defines deepfakes as AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful. I'm sorry, I lost my screen. OK, I continue. So as you understand, it is a very comprehensive definition, the one we have now in the EU.
So it's focusing on synthetic content. It's not only focusing on a person, but includes a lot of other things, including events. And we had a lot of deepfakes during the Olympic Games in Paris, as you may know. So the question is still whether we need a misleading purpose-- and what I can add now is that the AI Act does not require proving the goal pursued by the producer of the content. So maybe-- OK.
The next question is, why are we focusing on deepfakes? What's new with deepfakes? We know that a drawing, an image, or even a video does not constitute-- I'm sorry for this-- proof of reality. It is subjective. It is a representation perceived or constructed by the author. So this brings us back to the classic debate about the relationship between the audience and fiction. And many consider that this debate is renewed with deepfakes, because some believe that hyperrealistic AI-generated or AI-manipulated content could prevent the public from taking a step back.
And this could blur the line between fiction and reality-- that is what my colleagues in art tell me when we sit together in an interdisciplinary perspective and ask what deepfakes are or could be. We must therefore understand, as Jennifer Rothman explained this morning, that deepfakes are not a single phenomenon. The context and the goal pursued by the author vary, so they should not be treated as one unique category. We understood this very well, and we have to take this into account.
We must also add the distortion of the information space, with multiple synthetic media now presented as if they were authentic. So we have also the impact of a saturation of the digital space with the dissemination of a massive amount of inauthentic content, and the impact of the new features of what Tim Wu called the attention economy. Discussing deepfakes also requires, I would say most importantly, taking into account the massive infringement of individual rights that can result from the production and the sharing of non-consensual digital forgeries, especially non-consensual intimate images and child sexual abuse material.
So many of these issues were already highlighted by various authors several years ago, and some are now widely recognized. This was clearly mentioned in the report of the international summit on AI Action held in Paris in 2025. This report about AI safety highlights, as you can read on the slide, that we have risks regarding individuals and societies, such as misinformation, gender-based violence, erosion of public trust in digital media, and so on.
So what we need, first, as we understand, is to define precisely what a deepfake is when we are studying the legal framework, of course. And we also have to take into account the various domains in which the public can be exposed to deepfakes in order to define this legal framework. So what about EU law? What about EU regulation regarding deepfakes?
Given the issues I've just mentioned, the European authorities decided to tackle this issue by adopting the AI Act in 2024. It was one of the most debated issues during the negotiation of the AI Act. Just note that it was also in 2024 that French law was adapted to address specific risks under criminal law. So, a word to help you better understand if you don't know the AI Act-- but it's very famous, so you probably know the EU AI Act. Just a few words about the general context of this AI Act.
I would like to underline that this AI Act is a transversal regulation that aims to create a single EU market and harmonized rules in the EU for the promotion of trustworthy and human-centric AI. And this new piece of regulation is based on compliance mechanisms and what we call a risk-analysis approach. The goal is to promote innovation, but also to tackle the levels of risk regarding safety and regarding human rights, including democracy and the rule of law. In this perspective, one of the principles of this AI Act is not to consider the technique itself, but to consider the uses of the technique.
That's why deepfakes are captured by the AI Act through several layers of the regulation. So here we will see that deepfake regulation is a perfect example of the regulatory approach we decided to adopt with the AI Act. That means raising flags, sometimes red flags, but also pushing innovation. It's quite difficult sometimes to do both, to promote both at the same time, as we will see.
So deepfake techniques, as you know, can be used for various purposes and in various contexts. Sometimes, we know, they offer great opportunities, opportunities that are socially desirable: education, information, creation, and so on. And they can also cause massive harms: disinformation, fraud, bullying, harassment, and infringements of dignity. That's why deepfakes have not been prohibited per se, as a matter of principle, by the AI Act. It was discussed, but in the end, they are not prohibited per se.
But in keeping with the logic of the risk-analysis regulatory approach adopted by the AI Act, the EU authorities have particularly identified the need to avoid specific risks of manipulation of the public, and especially to avoid malicious uses, in order to preserve the public interest. That was the main goal of the AI Act during the negotiation and at the moment of its adoption, in July 2024.
The idea was to tackle the risks of impersonation and deception, and also-- this is very important-- to tackle risks regarding elections and the integrity of the information ecosystem. So, as you may see in this pyramid, we have the various layers of the AI Act, and deepfakes can be captured by these various layers. The first layer covers all deepfakes-- I take the last, yellow part.
Here, that means that for all deepfakes, the AI Act imposes transparency requirements. This need to be more transparent-- the necessity to differentiate AI-generated or manipulated content from human-created content-- has been one of the most important goals of this AI Act. It was like an obsession. Everyone was talking about this.
But we also have uses of deepfakes that could be considered high risk. We have a category of high-risk uses of AI systems, and this qualification determines the application of most of the AI Act's compliance system. And here, if you consider deepfakes, you can observe that only one specific use of deepfakes is qualified as high risk: use in an electoral or referendum context. Only this category falls under the high-risk qualification.
What about the prohibited uses? Prohibited uses, the top of the pyramid: we have a list of prohibited uses considered to pose unacceptable risks regarding safety and fundamental rights. And in the list of Article 5 of the AI Act, you do not find any mention of deepfakes. So here we have a current debate, because after the adoption of the AI Act, the EU legislator realized, too late I would add, that we have these very harmful issues of non-consensual sexual deepfakes and child sexual abuse material.
The EU Commission published guidelines last February, February 2025, to interpret the prohibitions of Article 5. And in this document published by the EU Commission, NCII and CSAM are mentioned as possible prohibited uses. But it is now clearly debated, because the conditions for applying the prohibitions of Article 5 are very strict, and we are not sure that all the conditions can be satisfied in these cases. So this is a main question and a potential issue we now have regarding EU law.
And finally, this whole structure is completed by imposing specific obligations on providers of GPAI systems and GPAI models that could cause systemic risks. This is now described by the code of practice published by the EU in July 2025. And this code of practice mentions that specific uses of deepfakes, such as CSAM and non-consensual intimate images, can generate systemic risks. So here again, quite late, these deepfakes are taken into account, at least indirectly.
Considering all that, we understand that under the AI Act, most deepfakes are captured only by transparency requirements. The idea is to preserve the public interest from malicious uses and disinformation. So let's dive into this specific regulatory tool. We need to understand whether these transparency requirements should be the cornerstone of the regulation. In other words, does transparency offer sufficient remedies, or should it be considered insufficient to ensure the public interest? That is the question we now have to address.
My first point is to identify the questions raised by the approach adopted by the EU in the AI Act. And my second point is to determine how transparency could be an effective means of addressing the potential risks I have mentioned. The microphone is not-- OK. So first of all, I would like to consider with you the many questions we have: why transparency is imposed regarding deepfakes, how to implement it, and also the limits of an approach that imposes transparency requirements through the AI Act.
So first of all, we mentioned why, so I will skip this. I just want to specify how we apply these transparency requirements. This is set out in Article 50 of the AI Act. Article 50 introduces two levels of transparency requirements that will become applicable in August 2026. First of all, as you see on the slide, we have a marking obligation, imposed on providers, AI providers.
They must design their AI systems, and this includes general-purpose ones, to mark outputs as artificially generated or manipulated in a machine-readable format that can be detected. And we have recitals in the AI Act-- this EU legislation is quite a long one-- specifying what kinds of technical tools could be used, techniques such as watermarking, for example. The aim here is to facilitate trustworthy detection and identification of AI-generated and manipulated content.
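To make the idea of a machine-readable, detectable mark concrete, here is a minimal sketch that assumes nothing about the techniques the AI Act or its recitals actually require: a naive least-significant-bit mark written into an image array. Real provider-side schemes use far more robust, imperceptible watermarks; the payload and function names below are purely illustrative.

```python
# Toy illustration only: a naive least-significant-bit (LSB) mark embedded in an
# image array. It is far weaker than the robust watermarking contemplated by the
# AI Act's recitals, but it shows the idea of a machine-readable, detectable signal.
import numpy as np

TAG = b"AI-GENERATED"  # illustrative payload; real schemes embed richer, signed data


def embed_mark(pixels: np.ndarray, payload: bytes = TAG) -> np.ndarray:
    """Write the payload bits into the least-significant bits of the first pixels."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.astype(np.uint8).flatten()
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)


def detect_mark(pixels: np.ndarray, payload: bytes = TAG) -> bool:
    """Check whether the expected payload sits in the image's leading LSBs."""
    n_bits = len(payload) * 8
    bits = pixels.astype(np.uint8).flatten()[:n_bits] & 1
    return np.packbits(bits).tobytes() == payload


if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_mark(image)
    print(detect_mark(marked))  # True
    print(detect_mark(image))   # False, with overwhelming probability
```

A scheme this simple is trivially destroyed by re-encoding or cropping, which is exactly the robustness problem the speaker returns to below.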
In addition, we have another requirement: labeling. Labeling is imposed on deployers of AI systems. They have to label deepfakes in such a way that the public can be informed of the synthetic nature of the content. And because these provisions could be quite difficult for these actors to implement, the EU Commission has launched a consultation to better define the soft law, I mean the guidelines and code of conduct that will be published. There are a lot of acts to prepare.
So guidelines and a code of conduct will soon be published to explain more precisely how to comply with these transparency requirements. And I recommend that you read the various responses of the stakeholders. It is very interesting to see the questions raised from a practical perspective, the requests to take context into account, and of course the challenges, because there are a lot of challenges.
We can identify challenges that are taken into account by the AI Act itself. The AI Act, for example-- and I take just this one because time is running-- takes into account the need to preserve freedom of expression, freedom of creation, and freedom of science. We know that deepfake techniques can be used, for example, for historical purposes, for scientific purposes, and so on.
So here, to preserve this freedom of art and science, the AI Act provides that the label is still necessary. There is no total exemption for creative or scientific content, but the label has to be adapted so as not to hamper the display or enjoyment of the work. So the question we have to address is, what is artistic content? The AI Act provides that the content has to be evidently artistic, creative, satirical, fictional, or analogous. Sorry for my horrible French accent. So let's keep this: artistic, creative, satirical, and fictional.
Another question could be, what about using the artistic nature of the content to reach another goal? We have a lot of difficulty drawing the line, the frontier, between what can benefit from this exemption and what cannot, especially with the uses of parody we see with generative AI. Other limits have to be considered as well, such as, as you know, technical challenges. We are currently working on standardization of watermarking.
For example, we have coalitions of actors such as the Coalition for Content Provenance and Authenticity, the C2PA, with watermarking work in progress. And in France we have an initiative called Provenance For Trust, which is a coalition of actors such as operators, experts on labeling content, experts in detection of AI-generated content, and also a journalists' initiative, the Initiative of [FRENCH], to promote certification for media.
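As a rough illustration of the provenance idea behind C2PA-style efforts, the sketch below binds a hash of a media file to a few assertions and makes the bundle tamper-evident. It does not follow the actual C2PA specification, which defines its own signed manifest format embedded in the asset; all field names here are assumptions, and an HMAC stands in for real public-key signatures.

```python
# Sketch of the provenance idea: bind a content hash to assertions about how the
# media was made, then make the bundle tamper-evident. Field names are illustrative.
import hashlib
import hmac
import json


def build_manifest(media_bytes: bytes, generator: str, ai_generated: bool) -> dict:
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,
        "ai_generated": ai_generated,
    }


def sign_manifest(manifest: dict, key: bytes) -> str:
    # Real systems use public-key signatures and certificate chains; an HMAC is
    # used here purely to show that the claim must be tamper-evident.
    blob = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha256).hexdigest()


def verify(media_bytes: bytes, manifest: dict, signature: str, key: bytes) -> bool:
    untampered = hmac.compare_digest(sign_manifest(manifest, key), signature)
    matches = manifest["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return untampered and matches


if __name__ == "__main__":
    media = b"...synthetic video bytes..."
    key = b"demo-signing-key"
    manifest = build_manifest(media, generator="some-image-model", ai_generated=True)
    sig = sign_manifest(manifest, key)
    print(verify(media, manifest, sig, key))            # True
    print(verify(b"edited bytes", manifest, sig, key))  # False: hash no longer matches
```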
We know that we have a lot of technical challenges to consider, especially regarding the robustness and accuracy of watermarks. That's what we are currently studying in my research project to build an open LLM that respects all these issues. We also have cognitive challenges, because we know that a label can have very important limits when it comes to informing the public of the true nature of the deepfake. And here I would like to stress the need to involve academics and researchers to better identify effective labels and to address cognitive biases in this particular context.
We also have what we can call epistemic challenges, because even if a deepfake is labeled as such, we have to consider the impact of the narrative behind the deepfake. And in this context, we can consider that transparency requirements are not enough to prevent public manipulation. So we can conclude here that reflexivity is the real challenge.
We have to build, to propose, to consider reflexivity as the challenge: to consider the relationship between users and content, between users and the information space, and to ensure better user agency. Considering all of this, we need to go a step further to understand whether we can consider transparency a remedy to limit the impact of deepfakes from the public's perspective.
So here, we can argue that we need to consider the conditions under which transparency can fully play its role in limiting risks in order to protect the rights of the public in this specific context of deepfakes. And to reach this goal, we need to consider transparency not only as a set of specific requirements, but as real transparency.
That's what we are promoting in the EU: what we can call regulation based on transparency. Here we can take the example of another piece of regulation, the Digital Services Act, which ensures the safety of the digital space by imposing new obligations on online platforms, especially the very large ones, which have a very large audience.
Under the DSA, online services, especially very large online platforms, have to respect transparency requirements with their terms and conditions, the design of their service, their algorithmic systems, and so on. But they also have to respect other obligations: for example, to publish transparency reports on their activities, and to produce periodic risk assessment reports analyzing the systemic risks their activities can cause, for example, to personal data, privacy, dignity, the mental health of end users, children's rights, and even media pluralism.
And considering deepfakes, it is particularly important to monitor the effectiveness of the mitigation measures they have to take to address these risks, for example, for individuals or for democratic debate. The DSA also imposes external and independent audits. And here we are studying how we can ensure independent adversarial audits to challenge the guardrails implemented by the AI provider, for example, to avoid specific kinds of deepfakes.
And last, but not least, this regulation based on transparency is possible only if we organize access to data: access to data for the regulator, but also for academics and non-profit organizations, to challenge the responsibility of these providers. So we can say that technical transparency has to be complemented by public transparency.
I will finish by mentioning that we also need to organize a systemic regulation of deepfakes, taking into account the question of the propagation of deepfakes, because it is a very important issue, as we know. And this is taken into account by the DSA, which requires very large online platforms, as we mentioned, to conduct risk analyses and take mitigation measures, including labeling deepfakes and ensuring that the label stays with the content even when the content is shared with other users.
And here I just want to mention, to conclude, that we have various clarifications published by the EU Commission regarding, first, the context of elections. We have guidance on deepfakes in the specific context of elections, requiring very large online platforms to control the diffusion of deepfakes. And we also have specific guidance on the protection of minors, which is one of the goals of the DSA, to ensure a high level of protection of minors as users.
And here we have specific provisions regarding deepfakes that have just been added by further guidelines published by the EU Commission in July. So you see that EU regulation tackles a lot of consequences and issues. And this will be my conclusion: we also need to recognize that regulation is not sufficient. As we already mentioned, we have to consider education and literacy as essential to ensure the reflexivity and resilience of the public.
And I will take the chance to mention what we have been promoting at the AI Observatory of Paris University for a few years now: conferences and podcasts to make the public, especially teens, more aware of the risks of manipulation posed by deepfakes, and to prevent harms such as CSAM and non-consensual intimate images, especially sextortion scams. You can find more information with the QR code, and don't hesitate to send us an email if you want to be involved in such initiatives. Thank you so much.
[APPLAUSE]
[OLIVIER SYLVAIN] Hi, everyone. It's great to be here, and I'm so pleased to have been invited to join this conversation. And I come to you not as an IP specialist. I am a public law person, and I tend to think of these problems as public law problems. So that's why I was intrigued by Josh's question and other kinds of things that have come up towards the end of the last panel discussion.
So I also want to take up Jennifer's wonderful intervention, her introduction, really, to maybe add to or amend the list of things to think about. So I'm drawn to the idea of thinking about authorization and deception as the sorts of priorities for attending to deepfakes and related AI abuses. And then Jennifer, you also asked, is this also limited to just people? Which I think is a great way of framing this-- of reframing this.
And to the extent we're thinking about people, are we thinking about individual people, or are we thinking about the greater public? And for that, I'll start with this and I'll end with this. There are very few institutions that are designed to attend to public harms. And I think we generally associate them with agencies, federal agencies, state agencies. The problem here is that this is contingent on the efficacy of the administrative state in any given moment.
But there's also another formidable problem, and that's what I'm going to take up here. And that is apart from the operational problems, it's a constitutional one. What happens when a federal agency has authority to regulate the kinds of information that we've been talking about today? So that's where I'll end up. In order to really put a context to all this, I want to make sure you understand that I'm not coming at this as an IP problem.
Now, information disclosure, or transparency, is a broad category of regulatory intervention. It can be useful for many reasons that are not really related just to the conveyance of information to an individual consumer. Information forcing can be good for learning and research. I'll talk a little bit about that in a moment. It also, as you likely know if you're a student of anything Cass Sunstein has written, has behavioral impacts: it nudges people to attend to potential harms to the extent they have to attend to risks.
And this is not new. In environmental regulation, the National Environmental Policy Act of 1969 sets out the impact assessment obligation. And you all likely know that in the context of civil rights, we think about disparate impact assessments and privacy assessments. These are all designed not merely for disclosure to an individual, but also, presumably, for habit forming. So this sounds more like a regulatory intervention to me when I think about it. There's also probably a taxonomy of transparency that we should have some clarity on. And we've actually already seen some of it in Celia's presentation.
One is mandated disclosures, which I think risk assessments probably fall into the category of, although there are many other kinds of mandated disclosures: nutrition labels and the ways in which we think about labeling, or breach disclosures, for those of you who attend to cybersecurity issues. Audit requirements are a kind of disclosure, but they don't do the same thing as a mandated risk assessment. Then there's the counter-notification process.
The Digital Millennium Copyright Act, as many of you likely know, has a mechanism to publicize, although to just one person, a potentially aggrieved party, the possibility of a takedown. Counter-notification is something that comes up in the Take It Down Act, and most of the things I'll say will be addressed to that. There's also the appellate process.
Any given platform or company that decides to take something down ostensibly ought to be able to give individuals whose content is taken down the opportunity to appeal after some explanation. That's a version of transparency, I would say, even if it sounds in due process. And Danielle Citron has written about administrative due process in this context. I worked for two years under Chair Khan, Lina Khan, as senior advisor to the chair, and I learned a lot about the civil investigative demands that the FTC would issue.
They would issue them not necessarily for the purpose of just commencing an investigation, but for producing a public report about an industry. Broadband was the subject of one 6(b) report, and broadband providers' data practices produced a lot of useful information because of the kind of access the FTC has to private organizations, more than other agencies have.
And finally, data access. I often think of this as related to researcher-- that researchers have access to the ways in which platforms use data. There's a lot of learning that has to happen in that space. I'm a senior fellow at the Knight First Amendment Institute here at Columbia, and that is one of the priorities for them, for example.
Given that taxonomy, my focus here is going to be very narrow, and it's going to be on the kinds of mandated disclosures and risk assessments that we've already been hearing about. And it's going to be also narrowed in the context of elections and consumer harm. We can think of assessments as being applied to a variety of different settings. But to be clear, these are the areas that have come up. I'm not talking here about provenance for the purposes of IP holders or creators' rights.
I'm really talking about public harms, the kinds of harms for which agencies and governments ostensibly stand in the shoes of consumers. But here, what's different between the EU and the US? There are many things that are different between the EU and the US. But one, of course, is the First Amendment. This is why transparency is a tricky regulatory intervention here. And I'm going to start by talking about this in the context of social media regulation, because that's actually the area in which the Supreme Court has recently given us some guidance about what transparency requirements may or may not do.
And as many of you likely know, the big case that the Supreme Court decided a couple of years ago in the summer of 2024 involved regulation or state legislation out of Texas and Florida that regulated principally the content moderation practices of the big platforms. Now, the regulations were principally addressed to the obligations these companies had to attend to certain kinds of content or actually forbid discriminatory takedowns and suspensions of users. And that's principally what Justice Kagan's opinion is addressed to.
But those statutes also had transparency provisions. They required social media platforms to provide users with notice and an individualized explanation for why content would be taken down. Texas's law also required platforms to afford users the opportunity to appeal those decisions. And we have some language in the Supreme Court opinion about this. Not a lot. Now, NetChoice and industry folks and First Amendment advocates brought cases against the state laws, arguing that they were violations of the First Amendment because they burdened the editorial decisions of the companies.
To the extent that a company has to attend to an explanation every time it takes something down, that visits a burden on it with regard to the content it has taken down. And so that affects its editorial decision making. This is actually pretty intuitive in First Amendment doctrine. The Zauderer case, which I'll mention briefly in a second, is the principal case that talks about this. It's a kind of balancing, but it's a formidable balancing, given the speech interests at stake.
The 11th Circuit, reviewing the Florida case, said that the individualized explanation requirements were unduly burdensome. The Fifth Circuit didn't think so. They didn't think that Texas's approach was burdensome, which is suggestive of some confusion in the doctrine about what ought to happen. Justice Kagan's opinion pours a lot of cold water on any effort to regulate content moderation. That's the main takeaway. Even though I don't think the court assertively says so, they remand this back because the challenge that NetChoice and others bring is a facial challenge, and the court says no.
To do a proper facial challenge analysis, you have to know whether a substantial range of the law's applications would be affected by the regulation. So the cases were remanded back to the courts below. But part of the analysis was the consideration of whether or not the disclosure requirement or the explanation requirement imposed a burden on the speech interests of the companies. We don't have an answer on the First Amendment, but we have strong indications that the Supreme Court would ultimately strike down the statutes when the issue comes back after being fully fleshed out below.
Justice Thomas, who is apt to invite all kinds of litigation involving matters that worry him, has said that he would want to revisit the Zauderer test for how to evaluate whether something is too unduly burdensome of a speech interest. And he's skeptical that Zauderer actually articulates a view that is consistent with First Amendment norms. He would actually, if not do away with it, substantially narrow the claim that there's a burden on speech interests, which is an interesting intervention.
So the split here between the 11th and Fifth Circuit is actually a story that reveals tension, but also a story that's consistent with the American experiment, and that is that the states are supposed to be labs of experimentation. This is what students learn in law school. In our current climate, it's a bit-- we can probably put it a little bit more crisply. And that is that California and Texas are the big labs for experimentation here.
And I want to talk about California's laws, but it's worth saying that 26 states have passed laws regulating political deepfakes in particular. And many of these have prohibitions, but moreover, disclosure and transparency requirements. OK. I want to make sure that I do recognize that Congress has been thinking about this. No matter what Congress is doing now, which is nothing, there has been-- except for twiddling their thumbs, I guess.
There have been proposals put forward on regulating this space. The Protect Elections From Deceptive AI Act is a bipartisan bill that would prohibit the distribution of materially deceptive media that is generated by AI relating to federal candidates. The federal candidate can bring an action, which brings up all sorts of things that came up before about how to vindicate harms. And there's a First Amendment exception for parody and content involving news broadcasts.
I want to talk now about California. I don't want to linger too much on it, since you've heard some about California. But to the extent we have litigation on transparency, it really does involve California laws. There are two statutes that were passed late last year, AB 2839 and AB 2655. AB 2655 has, speaking of acronyms for titles of a statute, one that I need to repeat: the Defending Democracy from Deepfake Deception Act. So DDDDA. Someone decided to do that on purpose.
It requires large platforms to label certain content as inauthentic, fake, or false during the 120 days of the election cycle right before the election, and imposes disclosure requirements after the election. Content that portrays candidates for elective office or current elected officials has to include a statement that says: this image, audio, or video has been manipulated and is not authentic. Given what you've heard from me about burdens on speech, it is suggestive that this is potentially the kind of thing that the doctrine wouldn't allow.
Well, and indeed, this has been the subject of a lawsuit. So there is a substantive deceptive media and advertisements provision that California has produced. I think maybe Doug might mention a bit about that later. I don't want to impose too much on you. But there is also the transparency provision.
There is a lawsuit against the substantive provisions that has produced orders suggesting that the law is unconstitutional as a matter of First Amendment doctrine, because it is viewpoint based and content based, focusing on particular candidates, and only on the extent to which the content is undermining the confidence-- under the language of the statute, undermining the confidence of the public. It is viewpoint based because it does not say anything about any positive representations that might happen in AI-generated content.
So the district court in the Eastern District of California has declared the substantive provisions unconstitutional. With regard to the transparency provisions, there's a weird order from the bench from Judge Mendez, the same judge, who says, no, this is a case that can't move forward. That is to say that this statute can't move forward because Section 230, a provision that Jennifer mentioned, preempts the state's effort to regulate the distribution of user-generated content. I'll return to this later on.
OK. I think I need to speed ahead and just talk about the Take It Down Act. So we have cases that are addressed to transparency. We have a standard for evaluating whether it's unduly burdensome for speakers. And we don't have any clear direction from the Supreme Court. But we have some inkling, given the Moody versus NetChoice case-- that is the Texas and Florida cases.
The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act-- that's the Take It Down Act-- criminalizes the non-consensual distribution of intimate images, whether authentic or digitally manipulated. There are definitions of what is an intimate visual depiction that are drawn from another provision in the US Code; the Consolidated Appropriations Act has a definition of this. And there are distinctions in the statute between intimate visual depictions of adults and those involving children.
With regard to adults, among other things, the intimate visual depiction has to have been obtained or created under circumstances in which the person who posted it knew or reasonably should have known that the identifiable individual had a reasonable expectation of privacy. So the person, whoever posted it, had some expectation that the other person had an expectation of privacy. Alternatively, the inauthentic intimate visual depiction is disclosed without consent. So there's our authorization mechanism.
With regard to minors, the statute says that knowingly publishing intimate visual depictions with the intent to abuse, humiliate, harass, or degrade the minor, or to arouse or gratify the sexual desire of any person, is a violation. And, for what it's worth, this is consonant with other ways in which, I think, public law addresses harms to children, and with obscenity laws more generally. The penalties are criminal: a fine and imprisonment are both possibilities.
I'm not completely sure about the civil penalty, but the offenses involving adults can put someone in prison for no more than two years, and those involving minors for no more than three years. That's the substantive obligation. Now, with regard to notice and takedown. The Take It Down Act, which I should have said was passed with the President's signature in April to great fanfare-- and importantly, Melania Trump supported it as well-- requires covered platforms to remove non-consensual intimate visual depictions within 48 hours of having notice of them.
The difference between these notice and takedown provisions and the criminal provisions is that there is no similar cabining of what is an intimate image for the purposes of the statute. And this is going to be important for thinking about the vulnerabilities of this law. The platform must post clear and conspicuous information about the removal process. The FTC has enforcement authority to issue penalties for non-compliance. There is no private right of action. There is a safe harbor for platforms that, in good faith, remove content when they have notice of it.
This parallels the so-called immunity under Section 230 for interactive computer services. By the way, this provision is an amendment to Section 223, which is the neighbor of Section 230, for those of you who pay attention. And the last thing I'll say about this is that this is a law that passed-- remarkably-- in this Congress, and there was bipartisan consensus for it in April. So you'd think that this would mean everything is in the clear. After all, everybody wants to protect the kids.
But there are some flaws here, and I'll just identify a couple, given the limitations of time; there's really not much I can say. So of course, I'll make the observation that the companies didn't love the 48-hour takedown requirement. Once you have notice, you have 48 hours to take it down. The companies didn't love that, but it's in there.
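As a minimal sketch of what the 48-hour window means operationally for a platform, assuming a workflow the statute itself does not prescribe, one might track each notice against its removal deadline roughly like this; the data model and names are hypothetical.

```python
# Toy sketch of tracking the 48-hour removal window after a takedown notice
# arrives. The Take It Down Act prescribes no particular workflow; this data
# model is purely illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

REMOVAL_WINDOW = timedelta(hours=48)


@dataclass
class TakedownNotice:
    content_id: str
    received_at: datetime
    removed_at: Optional[datetime] = None

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        return self.removed_at is None and now > self.deadline


def overdue_notices(notices: list, now: datetime) -> list:
    """Content IDs whose removal deadline has passed without action."""
    return [n.content_id for n in notices if n.is_overdue(now)]


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    queue = [
        TakedownNotice("post-123", received_at=now - timedelta(hours=50)),
        TakedownNotice("post-456", received_at=now - timedelta(hours=10)),
    ]
    print(overdue_notices(queue, now))  # ['post-123']
```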
There is a potential overbreadth problem here. The criminal provisions are constrained, but the notice and takedown provisions do not have similar constraints. And so you might see protected content getting taken down. And maybe you might even count on that: a journalist's photographs of a topless protest, for example, could potentially be something that is taken down.
Now, there is a more pernicious problem, and it is what makes this upside down in many regards for people who are worried about gender based abuse and systemic harms, and that is effectively an exception for abusers. There are exceptions in it for law enforcement and intelligence gathering. But there is also an exception for a person who possesses or publishes a digital forgery of himself or herself, engaged in nudity or sexually explicit conduct.
That is to say, if you were a partner with someone at the time, and you're in the video with someone else, the fact that you're in it provides that you are exempt. And this is precisely the kind of exemption that you might expect to be abused by abusers. And so this returns me to the core concern for public law interventions. There is a remedy for systemic harms. And the remedy is not necessarily in individual actions but in interventions by public agencies.
The danger is that the laws can't be written in ways that are too broad or overbroad; they have to be cabined in a way that attends to real speech interests. So I think I will stop here. Actually, I'll make one observation with regard to the dangers associated with something that's too broadly written. And I want to refer to the Federal Communications Commission's recent threats-- the recent threats of the chair of the FCC, Brendan Carr, under the news distortion guidance and the public interest regulation.
Broadly worded statute, not sufficiently constrained, potentially invasive of protected speech. And for the same reason, I think we want to worry about that control that a federal agency has. But I do think there are very few institutions or entities that are capable of addressing the problems I described outside of federal agencies. Thank you.
[APPLAUSE]
[GINSBERG] Sure. We have time. We have time for a couple questions. Do we have any questions for Olivier and Celia? Over there.
[SIDE CONVERSATION]
[AUDIENCE MEMBER] Like that? Hi. Thank you so much for your lecture-- for your performance. And I have a question to Olivier about the European Union AI Act. As I understand it, the main purpose of this act is to distinguish between AI-generated content and real content. How do you plan to deal with users who can remove AI watermarks from their content? Because it's very easy to add a watermark or to remove it. And how can you control this action in terms of transparency?
[ZOLYNSKI] The real question you ask-- we don't know yet how we can build a very robust-- I have a problem with microphones today-- a very robust watermark. So it's a technical challenge, and this is a main question asked in the public consultation I've mentioned for the EU Commission. I will speak like this. So we are currently studying this-- not my research team, because we are only in law, but our technical partners, so [FRENCH NAME] in France-- and I could share some more information with you in a few months.
We are finishing our research project in July, trying to identify a robust watermark. And a new challenge is watermarking for text. We know that watermarks can be developed for images and videos, but for text it is another challenge. So I cannot respond precisely to your question. That's why-- and this does not address all the issues-- we try to enforce the responsibility of online platforms, especially social media, under the DSA, and force them to deploy research and robust techniques to ensure that watermarks and labels cannot be removed when content is shared on their platforms, because they are obliged to.
This follows from the obligations imposed by the DSA on very large online platforms. So we promote this and push them to deploy specific research; if they do not, they could face sanctions under the DSA. This is another logic: to force the providers to deploy and invest-- this is the real point-- to invest a lot in such techniques.
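For readers unfamiliar with why text watermarking is harder than marking pixels, here is a minimal sketch of one published research approach, the statistical "green-list" scheme: at generation time the model is nudged toward a pseudorandom subset of the vocabulary keyed on the preceding token, and the detector counts how often that subset appears. This is not anything mandated by the AI Act or used by the speaker's project; it only illustrates the idea, and why light paraphrasing can already weaken the signal.

```python
# Minimal sketch of green-list text-watermark DETECTION only. A watermarking
# generator would bias its sampling toward "green" words; ordinary text should
# score near zero, heavily watermarked text several standard deviations above.
import hashlib
import math


def is_green(prev_word: str, word: str, green_fraction: float = 0.5) -> bool:
    """Pseudorandomly assign `word` to the green list, keyed on the previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] / 255.0 < green_fraction


def green_z_score(text: str, green_fraction: float = 0.5) -> float:
    """How far the observed green-token count deviates from chance, in std devs."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    hits = sum(is_green(prev, word, green_fraction) for prev, word in pairs)
    n = len(pairs)
    expected = n * green_fraction
    std = math.sqrt(n * green_fraction * (1 - green_fraction))
    return (hits - expected) / std


if __name__ == "__main__":
    print(round(green_z_score("the quick brown fox jumps over the lazy dog"), 2))
```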
[AUDIENCE MEMBER] Thank you.
[GINSBERG] David and then I'm going to say, if we can keep it a little brief, we'll go to lunch.
[AUDIENCE MEMBER] Yeah. Thank you so much. My question is for Olivier, and I'm just kind of curious if you're comfortable speculating about what you anticipate will happen in 2026 once the platform obligations go into effect. Because I can see a universe where this starts off a little rocky and then turns out, kind of similar to the Copyright takedown request process, which I think at this point in 2025 is not overly controversial in terms of the broad scope of the way that the process works.
But I could also see either, as you said, the FTC being somewhat opportunistic in the way that it's enforcing it. And I could also see a NetChoice type challenge from one or more platforms to raise the overbreadth issues. Just, to the extent you feel comfortable speculating what you think may happen, I'd be very curious, since we've kind of had this year long period of waiting to find out.
[SYLVAIN] My speculation is going to be as good as the speculation pregnant in your question. I agree it's subject to manipulation. There is no counter notification process, as you know. The FTC, the question of whether to go after a platform will be contingent on the FTC's regulatory priorities. And as someone who believes in agencies-- believe it or not, I do-- this gives me a special concern. And this is not something that is inevitable. Congress could write a law that attends to these problems, but I'm afraid may not have. So I don't know what's going to happen, but I think whatever you are guessing, your guess is as good as mine.
[AUDIENCE MEMBER] Thank you.
[APPLAUSE]
[JANE GINSBERG] And while they're getting set up, you have the fuller bio introductions in your materials via the QR codes. But our first speaker will be Jennifer Rothman, who is a professor at the University of Pennsylvania Law School, and as I said, the world's leading expert on the right of publicity. I highly recommend her website, Rothman's Roadmap to the Right of Publicity.
She will be followed by Graeme Austin-- I guess you want to stay there so you can see the slide. Yeah-- who is a professor at Victoria University of Wellington and also the University of Melbourne, and my co-author, who will be talking about the picture in the Commonwealth. And finally, Valerie-Laure Benabou, who is a professor at the University of Paris-Saclay, who will be talking about French and EU protections.
[JENNIFER ROTHMAN] All right. Well, thank you, Jane, for that introduction, and to you and Pippa and the rest of the organizers for putting on this symposium and inviting me to speak. I was asked to speak about the current legal landscape in the United States that regulates deepfakes. And I may have some different takes than Dana and some areas of agreement as well, of course.
But covering what currently regulates deepfakes, plus what's on the horizon, in the 30 minutes I've been allotted would be nearly impossible, because over the last few years dozens, or depending on how you categorize it, hundreds of laws have been passed that either specifically address deepfakes or address things that overlap with and cover, in part, deepfakes. Just take California as an example, which seems to pass new AI-related bills, some of which cover deepfake issues, almost every week-- six were passed in the last three weeks alone. Six new laws.
So instead of trying to cover everything, I want to start by setting forth some guideposts for sorting through this increasingly complicated landscape, and only then consider some of the existing laws and those being proposed. As part of these guideposts, I want to propose a taxonomy of deepfakes that will guide our tour. Developing such a taxonomy, I think, is desperately needed, as urgent calls for legislative fixes to address deepfakes have largely collapsed distinct types of deepfakes into a single monolith.
And this lack of nuance in speaking about deepfakes has masked some of the problems at issue and obscured the applicability of existing legal structures to combat them. In addition, this lack of precision about why we care about deepfakes, and about the different types of deepfakes, has led to much newly enacted and proposed legislation that may actually worsen the dangers of deepfakes rather than combat them. So before developing this taxonomy, I want to take a few moments to develop a common understanding of deepfakes.
You're like, I already know. That's why I'm here. I know what it is. I'm talking about it. I'm on it. But actually, there are different meanings of the term, and I want us all to be on the same page as we parse what we think we're talking about and what we think the law should cover or not cover. So as some of you know, the term deepfake actually originated with a Reddit user of that name in 2017 in the context of porn, like most things on the internet. Since then the term has exploded-- exploded in use, in the actual creation and spread of deepfakes, and has expanded beyond the world of porn.
So we now think of deepfakes as maybe illustrated by porn, but not exclusively porn. We see laws and proposed bills using different terms to get at deepfakes: digital forgeries, digital replicas, or voice clones. I will not use my time to go through the numerous different definitions, and I will say that even the definitions of digital replicas vary widely across the different bills and laws that have been passed. Instead, I want to hone in on two material differences across these definitions that are essential to pick sides on as we discuss deepfakes today.
So one is, do deepfakes have to be deceptive? Some definitions say yes. Others don't. Some say likely to deceive, but not necessarily deceptive. Some require, for liability, an intent to deceive. And a second area of dispute is, are deepfakes just about people? Thus far, we've been talking about them as just about people. But deepfakes could also be fakes of objects, places, people, events, or entities.
And in fact, the European Union's AI Act defines deepfakes more broadly to include all of these categories. For our purposes, I'm going to have our operative definition be one of human beings, people, deepfakes of people. And I'm choosing this in part because this is the main focus of concern, both among those who are in the room and of legislatures around the country and around the globe.
Second, I am not going to require that deepfakes be deceptive.
They do not need to deceive the public to be defined as a deepfake, but they need to appear to be an authentic recording of a person when they are not one. And then we can get at whether it's deceptive. And I'll highlight that seeming authentic doesn't mean it needs to be a realistic capture of the person or in a realistic context, but it needs to be something that could be perceived as an authentic recording of them. And note, in spite of its etymological origin, I'm using depict to capture both the use of someone's voice and their likeness.
All right. So with that sort of working definition in hand, I want to briefly touch upon the harms of deepfakes because again, we can't really evaluate what we're dealing with, even a taxonomy of deepfakes or the validity of current laws or the value of future ones, without knowing what harms are involved. And again, I'm anticipating that we have a fairly sophisticated audience. You may already know what you think the harms are, but it's worth just a brief foray into some of the articulated harms, which largely fall into three categories.
There are those that affect the people depicted in deepfakes, those that affect members of the public who may be deceived by deepfakes, and then those that affect other stakeholders: for example, those connected with the people depicted, such as relatives, as well as those who may have a financial stake in the person or in the person's voice that appears in some of the deepfakes, particularly record labels or other copyright holders.
All of these harms center on two key considerations that I want you to keep in mind throughout my talk, hopefully throughout the day, and as you leave this space as well: deepfakes are harmful when they are not authorized by the person depicted in the deepfake, and they are also harmful, particularly to the public, when they are deceptive as to their authenticity.
So those who are depicted in deepfakes suffer a variety of harms, from losing control over their own identity, which works injuries to their rights of self-determination and autonomy. It also could injure their dignity and reputation, particularly if they're put in a humiliating setting such as a pornographic one, or shown doing something or saying something that they never said that may be truly shocking or offensive. There are also a variety of market harms that could befall a person who is depicted in a deepfake.
They could lose job opportunities, endorsement deals, have reduced salaries, lose licensing opportunities, or be in breach of merchandising contracts, and overall, have their goodwill diminished. This would be particularly true for those who are well-known performers, who are commercializing their identities, or whose performances themselves might be substituted for. But market harms could befall even ordinary people who are depicted. Harms to the public largely center on whether the public is deceived, and this harm to the public occurs without regard to whether the deepfake is authorized or not by the person depicted.
The harm stems from the public thinking that something is authentic that is not. This sort of deception could destabilize our political system by circulating fake images and recordings of political figures saying and doing things they never did, in ways that could affect voter perceptions of these individuals and alter outcomes of elections. Deepfakes of politicians could cause civil unrest and even global catastrophes by inciting wars or conflicts engendered by false statements or actions appearing to be the authentic speech of world leaders.
Deceptive deepfakes can also more broadly destabilize our access to information and truth. As Brian Chen recently wrote in The New York Times, we may be facing the end of visual fact. Can civil society survive if we not only don't have common references and sources, but also do not have reliable documentation of real world events? The criminal justice system and the tort system themselves may be threatened by the undermining of image and voice based evidence.
And we as a society may also be impoverished by AI-generated slop culture in place of high-quality, human-driven content. Now, this could, of course, happen with non-deceptive and even authorized deepfakes that could maybe lower the quality of culture. But the law may have a place in regulating our knowledge of whether we're seeing authentic performances or not, so that the public can choose between them. Deceptive deepfakes could also affect consumer purchasing decisions in ways that could be harmful.
And our final category, harms to related parties, is, I think, not as central as the last two, but these are also things we should be cognizant of, particularly in the context of unauthorized deepfakes, and we should recognize that there are market injuries, particularly to those such as record labels, that are very concerned about the spread of deepfakes.
But again, the primary center of these harms is, are they authorized by the person depicted and do they deceive the public. So with this in hand, let's consider how we can distinguish deepfakes from one another, because they're not all the same. To the extent that deepfakes are distinguished from one another in discussions, whether legislative or in the media, it has usually been on the basis of the context in which the fakes appear.
For example, to distinguish among deepfakes that appear in political contexts, or that show people in pornographic contexts, or that depict performers and may substitute for the value of their works. These contextual distinctions have obscured deeper thinking about whether deepfakes across these contexts are or should be considered different from one another from a jurisprudential perspective. A more nuanced parsing of deepfakes is essential to better distinguish between the problems that are appropriate for legal redress versus those that are more appropriate for collective bargaining or market based solutions, or may simply need to be tolerated, or in some instances, even celebrated.
The focus on the context in which deepfakes appear has also led to a lot of very specific deepfake-focused and AI-focused regulation in different contexts: in the context of elections, in the context of pornography, in the context of likely media plaintiffs. And this has obscured the addressing of some of the harms that I just identified. But it also has led to the passage of a number of laws that I think sit on shaky constitutional ground because they are so narrowly targeted as to be underinclusive.
With that as background, I propose a different approach to thinking about deepfakes and putting them into the following categories: Those which are unauthorized by the person depicted, those which are authorized by the person depicted, those which are deceptively authorized, and those which are fictional. What do I mean?
Unauthorized deepfakes are what we talk about most of the time and see the most outrage about. These are ones in which the person depicted never agreed to appear. These are the high-profile examples that have been wielded to pass laws, to propose bills, and are discussed on Capitol Hill. Recent calls for action at the federal level were largely driven by a 2023 viral AI-generated song, "Heart On My Sleeve," which sort of successfully imitated the voices of the artists Drake and The Weeknd.
Numerous other well-known recording artists and actors and celebrities have been faked, including Tom Hanks-- an AI-generated Tom Hanks hawking dental services. But it's not just the famous. It's also the ordinary. On the top right is an image of one of the New Jersey teens who had her face swapped into pornography by her lovely classmates at her New Jersey high school, and who then advocated for New Jersey to adopt an intimate image law that would address this.
Politicians, too, have been victims of unauthorized and deceptive uses of their identities. In the lower left, you see one recent-ish deepfake, as imagined and celebrated by President Trump, of President Obama being arrested in the Oval Office. Deepfake voice clones have been used to scam family members by using the voices of loved ones. And in the lower right, you see a Sora 2 image-- I'm not going to play the video-- that has Jenna Ortega reanimated, speaking in the voice of the character Wednesday, as well as using copyrighted characters.
These unauthorized deepfakes present all of the harms that I articulated. They cause the personality and market-based injuries to the person depicted who didn't authorize them and, if deceptive, can also injure the public more broadly. These contrast with authorized deepfakes. Authorized deepfakes don't cause the personality injuries or market-based injuries to the public-- I mean, to the person depicted.
And if they're not deceptive, they don't harm the public either, and they may even be things that we would want to celebrate, like Eminem being replicated, dancing with a younger version of himself, or to de-age actors, or YouTube's new Dream Tracks, which does, in a more seamless, very rapid way, what we just saw in the Taylor Swift version, where you can have AI generate your lyrics on a particular topic and then have it voiced in an authorized version of a famous singer's voice.
Charlie Puth is one of the artists who agreed to do this. Or the Speechify app, which allows things to be read to you in the voice of someone who authorized the use of their voice to do so. The law should regulate deceptive authorized deepfakes, where the public is deceived. Even if a deepfake is authorized, if it is deceptive it still causes all the harms to the public. But if it's authorized and not deceptive, that should be fair game. The third category I identify is one that has largely been overlooked, but is essential to understand.
And this is the category of deceptively authorized deepfakes. Here, a person may have agreed to appear in one work or recording, but did not agree, did not agree to have their voice, likeness, or performance reused in a new context, such as a deepfake. Or alternatively, the depicted person may not own or control the rights to their own name, likeness, or voice.
In each of these scenarios, a deepfake might be categorized as authorized in a technical legal sense but, in fact, be unauthorized in the most important sense because the person whose voice or image is used in the deepfake did not knowingly approve of the specific use, something that causes the very same harms to the person depicted as an entirely unauthorized deepfake, and would likely be, per se, deceptive to the public because of the misperception of authorization.
I have questioned elsewhere the legitimacy and constitutionality of allowing someone other than the person themselves, which I have dubbed the identity holder, to own that person's name, likeness, or voice. And I've also warned about broad licenses that would give long-term, expansive control over a person's identity to someone other than the person themselves. Yet, some new and long-standing state laws, and some being proposed at the federal level, would allow such transfers and broad licenses and allow someone other than the person themselves to own or control that person's digital replica.
The digital replica bill being considered in Congress that has the most support right now, the NO FAKES Act, allows for long-term licenses of another person's digital replica but does not require the ongoing knowledge and approval of the person depicted as to those replicas and how they're used, and the bill expressly allows authorized representatives to approve such long-term licenses without the person knowing that those licenses have even been entered into.
Minor student athletes, aspiring actors, recording artists, and models may be particularly vulnerable to having others take control and even ownership of their voices, likenesses, and performances. And it's not just people who are trying to be in the public eye. It may be all of us who, without thinking, agree to online terms of service that claim to be able to use, in any new context, our images, voices, and recordings. So you may find a deepfake of yourself out there and it would technically be authorized, but in this deceptive way, which should be categorized as unauthorized when we think about remedying the harms of deepfakes.
Deceptively authorized deepfakes raise complicated questions that I can't fully engage with now, at the intersection of a variety of legal regimes, including contract law, state publicity rights, and federal copyright law. On the left of the screen is Lehrman and his co-plaintiff in the Lehrman v. Lovo case that Jane talked about earlier. Here, the company reached out to them and they agreed to have their voices used as voice clones, but then the voices were used beyond the scope of the contractual agreement. In that instance, as I will discuss when I turn to the New York civil rights law, New York's right of publicity and privacy laws did protect them and gave them a claim.
But this is not a deepfakes problem. This is a long-standing problem of people agreeing to appear in some sort of copyrighted recording that might then be used in ways they don't like. And generally, copyright law has been held to preempt state law that prevents these unauthorized uses of a person's identity. There's the famous Laws v Sony case, which allowed the sampling of a recording in a new recording without the performer's approval, as well as the reuse of performers in video games, dating back to the 1990s, where performers who agreed to appear in one video game were reused, because of copyright law, in a second video game, which might be highly relevant for digital replicas.
So again, deceptively authorized deepfakes cause all of the harms we care about with deepfakes: they can be deceptive, and they can injure the person being depicted. The last category I want to highlight is fictional deepfakes. Sometimes we don't think of them as deepfakes, but they really are, and we should. These are things like the recent coverage of Tilly Norwood, a completely AI-generated actor who, at least for short periods of time, seems real and can speak with emotional gravitas, or models who are being AI generated, or songs which are being AI generated.
And I will note that the record labels are engaged with Spotify in creating intentionally AI-generated music. Some of these may make us uncomfortable, but if they don't deceive the public, the person depicted doesn't exist and so isn't injured, and if the public, as I said, knows that they are not real, then we don't have those harms either. And we may just need to tolerate this in the same way some of us need to tolerate Love Island or other reality shows or animated things. [AUDIENCE LAUGHS]
So just some of these examples hopefully suggest that there can be good deepfakes. They're not all bad. And so we want to leave room for them, both from a speech perspective and from an artistic expression perspective, and this technology has also been tremendously helpful for people with disabilities. But I am cognizant of my time, so I will be happy to discuss that more in Q&A.
But I want to use these benchmarks, both the different types of deepfakes and our focus on whether deepfakes are authorized by the person depicted, and the question of whether they deceive the public into thinking that fakes are authentic, as we consider the legal landscape in the United States. The center of regulation of identity rights in the United States is right of publicity law. Although the recent vintage of the term deepfakes, I think, has caused people to not realize it, we actually have a whole bunch of laws on the books that cover unauthorized uses of people's identities, even in the context of new technology, and protect them from being misused.
So at the heart of this area is state law that protects against unauthorized uses of a person's identity, including use of name, likeness, voice, or other indicia of identity. These laws emerged in the early 1900s. In most states, the right of publicity and the privacy-based appropriation tort are interchangeable; most states treat them as the same unless they've adopted a separate statute. There are a few states, like New York, which has adopted a privacy law by statute that is essentially equivalent in many respects, though not identical, to the common law privacy-based appropriation tort.
As Jane pointed out earlier, the right of publicity, because it is a state law, varies from state to state across the 50 states. There are common features of many, and also some very difficult to navigate differences. Which is why, out of frustration, I started my website, which has a helpful map, so I and all of you can keep track of the different state laws. So I want to give a few illustrative examples to see how they apply to deepfakes.
Beginning with California, one of the dominant marketplaces for the right of publicity and entertainment: California actually has, depending on how you count, three or four different publicity or appropriation-based privacy laws, starting with the common law. The common law is very broad. It unquestionably covers deepfakes, whether they're in a commercial context or not, so Taylor Swift would potentially have a claim even for this non-commercial use.
California's statutory right of publicity, which was actually passed as a privacy law to protect ordinary people by extending statutory damages and a fee-shifting provision, also would apply to deepfakes, but perhaps a little more narrowly, such that the use might need to be in the stream of commerce. California also has a postmortem right of publicity, which was revised in 2024, going into effect this year, to expressly include digital replicas.
The main concern before this was that exemptions to the postmortem right in California, which is only a statutory right, exempted audiovisual works in certain contexts. This 2024-25 change now makes it so they are not exempted, with some exceptions that I don't have time to go into here, and it largely covers digital replicas and deepfakes.
New York's statutory right of publicity, which is actually called a right of privacy in that state, applied, as Jane talked about, to the voice clone situation in Lehrman v Lovo. It will apply without regard to the commerciality of the person's identity, but it is limited to uses for the purposes of trade, though this is broadly interpreted to apply to video games and movies; it might not apply to educational uses, as Jane suggested.
Tennessee adopted in 2024 a very long ELVIS Act, which I can't possibly do justice to here, but it explicitly addresses the use of digital replicas and deepfakes as well as the software used to create them. It applies in commercial and non-commercial contexts, and it does have a number of exceptions in certain instances, but it is quite broad. So in short, state right of publicity laws do cover deepfakes, but they may not do as much work as we want.
On their face, they do perhaps a good job of stopping unauthorized uses in deepfakes, but most allow deceptive but authorized deepfakes and do nothing whatsoever to protect the public from being deceived by deepfakes. Second, because some of them, explicitly or in practice, allow ownership and control by someone other than the person depicted, they also leave open the possibility of deceptively authorized deepfakes, which cause the same harms to the people depicted.
And while many, maybe most, states allow claims outside of the commercial use context, some limit who can bring claims, and some limit liability to the commercial use context, which would be insufficient for many deepfakes. Damages may also be hard to prove, especially for ordinary people. And although some states have statutory damages to address this, not all do. Notably, California in the last three weeks adopted a statutory damages provision of $250,000 for intimate image deepfakes.
There also may be the problem of copyright preemption, which I mentioned: if someone agreed to be in one copyrighted work, somebody could wield copyright law to recreate that performance in a new context as a digital replica and use copyright to enable it. There's also a Section 230 problem. For those who know it, Section 230 immunizes platforms from liability, and may prevent deepfakes from being taken down on the basis of publicity laws. But there is a circuit split about whether the right of publicity falls within this immunity or not. And of course, then there's the 50-state-approach problem.
All right. So the right of publicity does a lot of work, but maybe not as much as we would like. But states have passed a slew of other laws targeted at protecting people's identities. Some of them are from the last couple of years; some are from the last 10 to 20 years. These are intimate image laws that specifically focus on deepfakes and other intimate image circulation, and laws focused on biometric privacy, notably in Illinois but also in other states. Catfishing and impersonation laws also apply.
There are a host of student athlete name, image, and likeness laws that apply in this space, specific digital replica laws that have been passed, some AI-specific laws that overlap, some election-related laws that address deepfakes, and even labor laws have gotten in the game in the last year or two. Some of these adequately address deepfakes, but very few of them focus both on the question of whether the public is deceived by these uses and on whether they're protecting the people depicted from having others control their identities and authorize these deepfakes.
Other state statutes also cover identity rights and would apply to deepfakes. So Taylor Swift's menu of options here is getting very long. There are false advertising and consumer protection laws. In the context of pornographic material, there are obscenity and child pornography laws. Defamation, the false light tort, and infliction of emotional distress could also do a lot of work in this space.
And then, of course, there are both state and federal trademark and unfair competition claims for those whose names or identities are being used to sell products or services. These have some of the limitations that Jane pointed out: there needs to be a likelihood of confusion from the deepfake, and the person needs to actually be engaging in commerce. Copyright laws have some limits, although I do think it's early days. There was recently a decision in Concord Music v Anthropic, which rejected a fair use defense to the ingestion of copyrighted works as training data.
So I do think it's early. It's not clear yet, and the Copyright Office has not decided, whether digital replicas could be copyrighted under current law, and there are also a number of pending suits brought by the music industry based on sound recordings. Also, notably, the federal government this year passed, and President Trump signed into law, the Take It Down Act, which specifically targets deepfakes in the intimate image context, making sure that platforms have to take them down and also providing criminal consequences.
The last bit of legislation I'm going to focus on-- I'm almost done-- is the bills under consideration in Congress. There are a host of them. I'm going to focus on the NO FAKES Act because, as I said, that's the one that seems to have the most support. NO FAKES stands for the Nurture Originals, Foster Art, and Keep Entertainment Safe Act. The title itself highlights that this was drafted with the entertainment industries, particularly the record labels, in mind, even though it would apply to everyone.
The bill creates a federal digital replica right, including a postmortem right, and provides liability as well as statutory damages. It does provide some breathing room for the reuse of copyrighted works, although with some limitations, and it has notice and takedown provisions. The bill runs 39 pages, and I could not have done it justice even if that's all I talked about during my 30 minutes, so I want to highlight a few key concerns with it.
It won't surprise you that this bill doesn't address our two key considerations that flow from our worries about deepfakes. It doesn't address whether deepfakes or digital replicas are deceptive at all. In fact, it allows them to be deceptive as long as they're authorized. So this seems like a worrisome primary federal law that doesn't focus at all on whether the public is deceived and potentially incentivizes a market that can profit from deceptive deepfakes. Secondly, as drafted, it exacerbates the problem of deceptively authorized deepfakes. Although-- my screen just went out.
Although I will say that it thankfully doesn't allow other people to own someone's digital replica, it allows a 10-year licensing regime with few limits and also allows, as I mentioned earlier, authorized representatives to control the person's identity and sign these licensing agreements. There are a host of other concerns with NO FAKES, which I'm happy to discuss in Q&A, but I want to keep us centered on the deepfakes issue. As you can see, there is what I have dubbed elsewhere an identity thicket in the United States of overlapping laws that cover people's identities.
It's growing, apparently weekly or maybe daily, with different laws that empower different people to control rights based in a single person's identity. And these conflicting and overlapping rights are difficult to parse. With all of that said and this focus on law, I would be remiss-- thank you. I guess this slide, which I couldn't see because the screen was out, just reminds us what we should be focusing on.
But with all of this said, law is only one tool, and we should also think about ways in which the law could support technological and market based solutions to deepfakes, encouraging the building in of guardrails, the development of authentication software, disclosure and transparency requirements, and detection software. Much of this is already in process, but laws could also consider ways to incentivize this. And then, of course, there's market preferences. It's early days of generative AI technology. We don't know where it's going to go, but it may be that people will crave human interaction and human performances and authenticity.
Here we are in the epicenter of theater. It's possible the theater will thrive and revive when people want to see live, verifiably authentic people performing. So in our rush to fix the problems of deepfakes, we should make sure that the laws that are passed do not worsen the problem by giving legitimacy to deceptively authorized deepfakes, or by ignoring the problems of even authorized deepfakes that deceive the public. Unfortunately, too much of the recent proposed and enacted legislation does exactly this.
[APPLAUSE]
[GINSBERG] Are you going to go next? I thought you were going at the end.
[BEN SHEFFNER] I don't mean to.
[GINSBERG] No, no. So I neglected to introduce our next speaker, Ben Sheffner of the Motion Picture Association, who's, I think, going to be talking about authorized and maybe deceptively authorized deepfakes in light of Jennifer's exposition of the situation in the United States. And then we will move to the international context.
[SHEFFNER] Thank you very much, Professor Ginsburg and Pippa and the Kernochan Center and Columbia Law School for the opportunity to speak with you all today. I'm very much a US lawyer, so I'm going to be learning a lot today from the international perspective. And also just a little bit of scene setting here: I come at this from a much different perspective than most of the speakers here today.
I'm not an academic. I don't have tenure. I'm an advocate for the members of the Motion Picture Association, which are the seven major motion picture and television producers and distributors here in the US. So we have an interest, of course, in combating abusive uses of deepfakes. We don't want to deceive anybody. We don't want to misappropriate people's identities in ways that are unfair. But we also have an interest in making movies and television shows that use new technologies to depict people, which is, of course, what movies and television shows do every day.
So also, I really appreciate Professor Rothman setting the table and giving us all a crash course in the various laws that already protect people's identities in various ways and some of the proposals. I'm an attorney, but most of my job actually involves not traditional law practice but policy and advocacy, which means that, on all of these laws and bills that Professor Rothman was talking about, at both the federal and the state level, legislators often come to us and ask for our input.
We also spend a lot of time talking with other stakeholders in this area, whether it's record labels, representatives of recording artists, the union that represents actors, the major internet platforms, social media platforms, video game publishers, et cetera, all of whom have an interest in this area of law. And often, we try to get together and come to agreement on legislation that we could all support before it moves through Congress or state legislatures.
So before getting into the substance, I also want to do a little bit of definition of terms, which Professor Rothman did as well. I'll be using the term deepfakes largely interchangeably with the term digital replica. Deepfakes, I think, sometimes implies an element of deception, but I think both deepfakes and digital replicas, those terms can be used in contexts that are both deceptive and non-deceptive. Digital replica is the more common term in the motion picture industry where I work, so I will mainly use those terms.
So this issue around deepfakes or digital replicas has taken on great importance for our industry over the last several years with the rise of increasingly advanced generative AI systems. But the rules around how a studio is allowed to depict people on the screen have been extremely important to our industry for a long time, well before anybody had heard of deepfakes or Sora 2.
As we heard from Professor Rothman a few minutes ago, the main body of law here in the US that regulates depictions of individuals is right of publicity. While right of publicity is considered a form of intellectual property, as Professor Rothman said, its origins are in privacy law. The two have melded together in recent years. Very importantly, from our perspective, right of publicity, properly understood, should be focused on what we call commercial uses. And in this context, commercial does not simply mean making money.
Rather, at least as we see it, it means uses in advertisements or on merchandise. So if a company uses a celebrity's name or anybody's face, really, to advertise a product without his or her permission, that's a violation of the person's right of publicity. And that's not very controversial. Certainly, we who represent motion pictures producers have no problem with a law that says you can't use somebody's image, likeness, or voice to advertise a product without their permission.
But where we at the MPA become concerned is when individuals seek to invoke right of publicity laws to prevent depictions of individuals in what we call expressive works, including movies and television shows. In a series of cases over the last several decades, people who are depicted in movies and television shows, usually in ways they don't like, but sometimes just because they want more money, have sued producers, claiming that such depictions violate their right of publicity.
Examples include the movie The Hurt Locker, which won Best Picture about 15 years or so ago, where an Army sergeant named Jeffrey Sarver claimed that the main character of that movie was actually him. And Olivia de Havilland, the famous actress who was portrayed by Catherine Zeta-Jones in the TV series Feud, alleged that the producers needed her permission to portray her. The courts here in the US have almost universally rejected such claims, and many state right of publicity laws now explicitly exclude uses in expressive works, either in the statutes themselves or through the body of case law that's developed around them.
So by the late 2010s, there was relative stability in right of publicity law. Again, if you want to use somebody's likeness in an advertisement, you need to get permission. But if you want to portray them in a movie, you don't. That somewhat easy détente that I just described, however, was upended starting around a decade ago with the rise of generative AI technologies that allowed anyone to create increasingly realistic videos of people doing things they didn't do or saying things they didn't say.
And I want to say that the presentation we had at the outset this morning was really terrific in demonstrating some of these technologies. Much of the concern, as Professor Rothman pointed out, was around so-called deepfake pornography, in which the faces of both famous and non-famous women-- and yes, it's 99% women-- were digitally inserted into pornographic videos against their will. The representatives of actors were also concerned that digital replicas of them could soon be inserted into movies and TV programs, potentially undermining their ability to make a living.
Those concerns that I just described led the representatives of actors and others who were being victimized by deepfakes or digital replicas, or saw that they could be soon, to question whether existing law would provide a remedy for such abuses. The first place to look, of course, was right of publicity, which is the body of law that we have traditionally used to regulate depictions of individuals.
But as I described, right of publicity law is often limited to advertising and merchandising uses, and many of the new or anticipated abuses of digital replicas were not in advertisements or merchandise. Instead, they were in expressive works, whether random videos on YouTube or TikTok, or potentially in movies and television shows. So representatives of those victims or potential victims of deepfakes or digital replicas realized they probably needed new laws to address this new problem. We've seen various approaches here in the US.
Many states have recently enacted new legislation that addresses very specific forms of deepfake-related abuses-- for example, deepfake pornography or deepfakes of political candidates during election season. And as Professor Rothman mentioned, in May of this year, President Trump signed into law a new federal law called the Take It Down Act, which specifically addresses the problem of pornographic deepfakes through both criminal law and a right to have pornographic deepfakes of a person taken down from social media platforms.
But we have also seen the introduction of a large volume of legislation to broadly regulate uses of digital replicas, including in expressive works. In 2024, 10 states introduced bills to broadly regulate the use of digital replicas in expressive works, and bills passed in four states, including the ELVIS Act, which, of course, has nothing to do with a certain performer from Memphis, Tennessee, but is the Ensuring Likeness, Voice, and Image Security Act.
For those of you not from the US, we here tend to do something really strange. Usually you start with a group of words and create an acronym. Legislatures here do the opposite: they start with a catchy acronym and then reverse engineer it, shoving in words that sometimes make sense and often don't. And so we had four bills actually pass in 2024: Tennessee, California, Illinois, and here in New York. This year, in 2025, bills have been introduced in 14 states and passed so far in three-- Montana, Illinois, and New York, although the one here in New York has not yet been signed by the governor.
And as Professor Rothman mentioned, an important piece of legislation has been introduced in the US Congress called the NO FAKES Act, which, again, is the Nurture Originals, Foster Art, and Keep Entertainment Safe Act. The NO FAKES Act would establish a brand new federal intellectual property right in one's likeness and voice, standing alongside other existing federal intellectual property rights, including copyright, patent, trademark, and more recently, trade secrets.
While the NO FAKES Act has not yet passed Congress, it does have support from many of the major stakeholders with interests in this issue, including the Motion Picture Association, SAG-AFTRA, the union which represents actors, the Recording Industry Association of America, which represents the major record labels, the Recording Academy, which represents individual recording artists, OpenAI, IBM, and Google and YouTube. So why did we at the Motion Picture Association endorse the NO FAKES Act?
After all, most companies don't like regulation, especially regulation that gives individuals the right to sue them. So there are several parts to my answer. First of all, we agree with actors and recording artists on the fundamental premise of the bill, which is that one should generally not be able to replicate others' likenesses and voices in new works in which they did not actually perform. In fact, the MPA's members endorsed that principle outside the legislative context in their 2023 collective bargaining agreement that resolved the strike by SAG-AFTRA.
In short, that agreement guaranteed actors what we sometimes call the three Cs. That's consent, compensation, and control. If you want to use a digital replica of an actor, you need to obtain their consent, you need to pay them, and you need to give them control over the uses of that digital replica. But back to the NO FAKES Act.
We at the MPA endorsed it, despite some misgivings about expansion of right of publicity law into expressive works, quite explicitly, because we believe it contains adequate safeguards that protect the ability of movie studios to depict individuals using innovative technologies in ways that we believe are protected by the First Amendment to the US Constitution, and thus do not require permission from the depicted individuals. I'll focus on two of those safeguards. First, the term digital replica is defined narrowly so that it only includes highly realistic depictions of the individual.
The definition would not include, for example, cartoon versions of an individual like you might see on the shows The Simpsons or South Park, even if it's apparent to the audience whom the cartoon is depicting. And second, and arguably most important from our perspective, the NO FAKES Act includes a robust set of exceptions that are intended to protect the ability to use digital replicas consistent with principles of free expression. Those exceptions include the following types of uses:
Bona fide news, public affairs, or sports broadcasts or accounts; depiction of an individual in a documentary or in a historical or biographical manner, including some degree of fictionalization-- that means basically docudramas and biopics; bona fide commentary, criticism, scholarship, satire, and parody; minor uses; and the use of a digital replica to advertise those works in which the digital replica actually appeared.
Let's consider examples of the types of uses where the permission of the depicted individual would not be required. One of my favorite examples is actually quite old. It's the movie Forrest Gump, which came out way back in 1994, 31 years ago. That fictional movie featured the title character, Forrest, navigating American life from the 1950s to the 1980s, sometimes interacting with actual historical figures. Famously, the producers, using digital replica technology available at the time, featured Forrest meeting and conversing with presidents Kennedy, Johnson, and Nixon.
The NO FAKES exception for depictions of an individual, quote, "in a historical or biographical manner, including some degree of fictionalization" would ensure that today filmmakers could do the same sort of thing using modern digital replica technology. And I should mention, it's been publicly reported that Paramount, the producer of Forrest Gump, did not obtain permission from the heirs of those three presidents when they depicted them in the movie. They felt they were protected by the First Amendment and there were never any claims.
To take a more modern example, there's a series called For All Mankind, which is streamed on Apple TV and produced by our member Sony. It's an alternative history version of the US-USSR space race, and I highly recommend it. The show uses digitally manipulated videos to present a fictional version of history that incorporates real people, including John Lennon and President Reagan, doing and saying things they did not actually do or say in real life. But those depictions add verisimilitude to the show's fictional narrative.
Another important exception is for parody and satire, forms of commentary that the US Supreme Court has told us several times are protected by the First Amendment. A show like Saturday Night Live often features actors depicting real individuals, including politicians, in order to poke fun at them or make a political point. But under the NO FAKES Act, the producers could also use a digital replica of, for example, President Trump, depicting him saying things even more outlandish than he actually does in real life.
And turnabout, of course, is fair play. President Trump would be protected when he engages in one of his favorite pastimes, posting to Truth Social AI generated videos mocking his political opponents. The NO FAKES Act is not perfect. Like almost all legislation, it reflects sometimes painful compromises that are necessary to get a deal done. For example, the NO FAKES Act protects not only living individuals but also extends protections 70 years after an individual's death, mirroring the term of copyright, despite the significantly different nature of these rights and the justifications for them.
We at MPA are concerned that such a lengthy postmortem term creates unnecessary risks regarding the depiction of historical figures without a countervailing justification for such a lengthy right, although we acknowledge that these risks are significantly mitigated by the presence of the exceptions that I detailed just a couple minutes ago. I do want to address a point that Professor Rothman made: one of her criticisms of the NO FAKES Act is that it does not include an element of deception.
And that's accurate. And there were a couple reasons for that. One of them is that the representatives of the actors and the recording artists and the recording companies who were involved in the negotiations vehemently objected to a deception element. Their point is you should not be able to make a new song that sounds exactly like Taylor Swift, even if everybody knows that it's not actually Taylor Swift, or use a digital replica of an actor to appear in a movie without his or her permission, even if everybody knows that, for example, the person is dead and could not have actually acted.
Their point is that the deception is not relevant to the harm that falls upon the depicted individual. And the other reason, as Professor Rothman also mentioned, is that there are lots of other laws out there. The NO FAKES Act does not solve all the problems in the world associated with deepfakes. We still have defamation laws. We still have laws against fraud. We still have the Lanham Act, all of which may apply depending on the circumstances.
So while it's very difficult to get any legislation enacted in Congress these days-- you may be aware that our Congress can't even agree to fund the government's basic operations, at least at the moment-- we do believe this bill has a better chance than most, given its bipartisan support and endorsements from such a broad array of stakeholders.
Lastly, I've been asked to address the issue of commercialization of deepfakes or digital replicas. The idea here is that actors or other personalities would authorize creation of digital replicas of themselves and have those replicas endorse products, or even act in new movies, or sing in new songs without having to do the hard work of showing up on set or in a recording studio. And in theory, these replicas could continue to act or sing or endorse products even after the individual is dead.
For example, talent agency CAA announced in 2023 the creation of a so-called "CAA vault," which can securely hold digital replicas of its clients for potential licensing opportunities. But what we haven't seen yet is significant deployment of such digital replicas by the talent themselves. One reason I'm sure is the technology, while it's amazing and it's improving rapidly almost every day, still has not progressed to the point where a digital replica is truly lifelike enough to replace a human actor, at least in works longer than a few seconds.
And second, the public reaction to these digital replicas, and especially digital replicas of deceased individuals, has been almost uniformly negative. The words I hear most often in reaction to such uses include "creepy" and "ghoulish," words few brands want to be associated with. In closing, this is an area of law in considerable flux. New technologies are putting significant pressure on old laws, which are arguably inadequate to address current problems.
We can debate here exactly what is the right approach to addressing the concerns raised by actors, recording artists, and others about abusive uses of digital replicas. But politicians are not waiting for people like us to come up with the optimal solution. Instead, they're forging ahead with legislation, often inspired by anecdotes and examples that tug at the heartstrings but may actually already be addressed by existing law. It's my sincere hope that discussions, like the ones we're having today, will help steer policymakers in the right direction. Thank you.
[APPLAUSE]
[GRAEME AUSTIN] Hello, everyone. My name is Graeme Austin. First of all, I'd like to thank Jane Ginsburg for the kind invitation to be here at the Kernochan Center. It's lovely to be back at Columbia. Jane gave me the brief of trying to capture some of the legal developments in Commonwealth nations. I'm going to focus mainly on civil law, but it is worth mentioning criminal law in this context from a legislative design perspective.
One of the things that criminal law can do in certain jurisdictions is give the victims of deepfakes access to things like victim support funding, which is relevant. I also think it's relevant, when thinking about legislative design, to think about access-to-justice questions, and cost structures and attorneys' fees, for example. I think that makes a big difference in terms of how you might think about a legislative design to meet the problem of deepfakes.
But I thought I'd start with the pessimistic news, a statement from Lord Walker in a very famous UK Supreme Court decision. It's the juggernaut case on economic torts in the United Kingdom. And his lordship says, "[U]nder English law it is not possible for a celebrity to claim a monopoly in his or her image, as if it were a trademark or brand." I want to develop two themes in my remarks.
First of all, building on what other speakers have said, notwithstanding this statement from the UK Supreme Court about the absence, at least in the law of England and Wales, of a personality right of the kind that you'd see in many jurisdictions here, other legal vehicles have stepped up to do a lot of the work that a bespoke right of publicity would do. And the second point is that in some Commonwealth jurisdictions, we do have rights of publicity that would look much more familiar to United States attorneys.
Those are interesting because the tentative thesis that I want to develop is that they derive from and are infused with ideas of human dignity that we find in the new constitutions. So the post-apartheid constitution in South Africa and the very long constitution of India are infused with notions of rights of dignity, and that is starting to influence tort law thinking. So those are the two themes. I thought I'd start off, though, with defamation. As Jennifer mentioned, defamation does some of this work.
And there's a very old case of Tolley and Fry. The case illustrates how far we've come. So Cyril Tolley was an amateur golfer. He was extraordinarily famous. This is an invitation to a dinner that he appeared at, I think at Oxford. It might have been Cambridge. He was often depicted in this cartoon-like fashion. What you have to imagine now is that there's a chocolate bar sticking out of his pocket. I've asked people to try and find the original drawing and have not been able to do this.
That was done without his authorization. And very sweetly, the advertisement said, "The caddy to Tolley said, 'Oh, Sir! Good shot, sir, that ball, see it go, Sir! My word, how it flies, like a cartet of Fry's. They're handy, they're good and priced low, Sir!'" Fry's was the company that made the chocolate bar, and the advertisement was created completely without his authorization. He was depicted hawking these chocolate bars. This was considered to be defamatory because he was an amateur.
In fact, the House of Lords in the case said this was defamatory because he appeared to prostitute himself by being associated with a commercial product. You see how far we've come. I just thought I'd talk about a few defamation cases in Commonwealth jurisdictions. The Tolley and Fry decision I just mentioned was used relatively recently in Singapore, where a politician had appeared singing for charity in a restaurant, in a karaoke bar-- if I sing a song, you'll contribute $1,000 to this charity.
The restaurant then used his photograph without authorization to advertise its restaurant. This was defamatory because it sullied the pristine image of politicians. It probably says something about the fastidiousness required of Singaporean politicians. There's nowhere good I can go with that in this context. So again, this was suggesting that he was making money out of his image, or money out of his position as a politician, which was considered by the Singaporean court to be defamatory.
Sex does well in the context of defamation. The next case there, the Hussey case, involved a well-known Singaporean model whose image was then used for an escort service. Again, defamatory. In Australia, there's a case, the Australian Consolidated Press case, which involved depiction of an Australian rugby player in the shower, published in an English newspaper where his genitals could be sort of made out. Given the physique of most Australian rugby players, it's difficult to know why that would take him down in the eyes of right thinking people.
But the defamatory sting in the case was that somebody of his standing would not give permission to his image being used in a newspaper like this. It also detracted from some of the charities that he was associated with, particularly children's charities. The Charleston case. Again, this is a decision of the House of Lords in the United Kingdom. This was really an early deepfakes case.
It involved depictions of actors in an implausibly successful Australian TV program called Neighbours. Many of the people here won't know it, but if you're from Britain, you know it. It was extraordinarily successful, as they say. And somebody had developed a computer game that made depictions of the leading actors in the show appearing naked and in sexually explicit contexts. One of the British tabloids published this.
There was a map of Australia covering the genitals of the actors in the images, and you get the flavor of the case from a statement in the article: the famous faces from the TV soap are the unwilling stars of a sordid computer game that is available to their child fans. The game superimposes stars' heads on near-naked bodies of real porn models. The stars knew nothing about it. The actors sued for defamation.
As Lord Wright said, though, it was written, in that style of British tabloids, in a tone of self-righteous indignation directed at the makers of the game, which contrasts oddly with the prominence given to the main photographs. But the reasonable reader of a British tabloid newspaper-- if that's not an oxymoron-- must be assumed to have read the whole article, which explained that the actors were outraged about this and knew nothing about it. So it took away the defamatory sting.
There's one last defamation case that I do want to mention. It's an obscure South African case; it's not reported anywhere. It involved a 12-year-old girl, a surfer who was photographed on the beach. She was looking away from the camera, but her image was then used on the cover of a surfing magazine in South Africa called Zigzag, and underneath the picture was the word "filth." Does anyone know what "filth" means in that context? No?
Columbia is probably not all that associated with surfing. Maybe if I asked the same question in California, we'd have a different answer. "Filth" means really great in surfing language, but of course, it had that sexual connotation. There, she succeeded in her defamation claim. The court also said she would probably succeed in a kind of right of publicity claim as well. What's interesting about the case for our perspectives in the light of the NO FAKES Act, this is a case of an ordinary person who succeeded with these torts, not a celebrity.
All right. So defamation does some of the work, but I think it would be fair to say that it does most of its work in sexual contexts. Breach of confidence. That Allan case that I referred to at the beginning was actually a breach of confidence case, in some respects, and it involved the famous scoop of the photographs of the wedding of Michael Douglas and Catherine Zeta-Jones, who, as we know, played Olivia de Havilland in that TV show that Ben mentioned.
The couple had given exclusive rights to one magazine, and a photographer came in and scooped the photographs himself. The claim succeeded as a breach of confidence case: the images of the wedding were confidential. For our purposes, I think what's interesting in the case was some of the evidence. So even though this was an economic tort case, the evidence from Catherine Zeta-Jones went along these lines: the hard reality of the film industry is that preserving my image, particularly as a woman, is vital to my career.
So it was getting at some of the harms that Jennifer Rothman mentioned early on. So a breach of confidence is another vehicle for protecting some of the interests that underlie the right of publicity torts. Canada is an interesting case. In some of the provinces, four or five of the provinces, there are bespoke privacy statutes. In British Columbia, the privacy statute, as we see also in the United States, provides a kind of right of publicity statutory tort.
So if you just look at the wording there, it's a tort, actionable without proof of damage, to use the name or portrait of another for the purpose of advertising or promoting the sale of, or other trading in, property or services. And the interesting question is whether this would be considered to cover the service of creating deepfakes. And "portrait" is defined quite broadly; it does cover caricature, a kind of wrong that is arguably not covered in the NO FAKES Act.
Where there has been the most development in some jurisdictions, notwithstanding the UK Supreme Court saying there is no right of publicity, is passing off. And the greatest development, I think, is in Australia, which very early on adapted passing off to the right of publicity. This was the first case: Henderson. It involved ballroom dancers. They were quite famous, and their image was used on a record sleeve for ballroom dancing music. The legal innovation in the case was that passing off no longer required the plaintiff and the defendant to be engaged in the same kind of business.
So this was considered to be passing off. And then we get the greatest development-- we'd have to check my cultural reference points-- with the litigation that came out of the Crocodile-- I see familiar people nodding at me-- the Crocodile Dundee franchise. There is, some of you might remember, an iconic scene in Crocodile Dundee, set in New York, where we are, where he and his girlfriend were about to be mugged. The girlfriend says, hand over your wallet; he's got a knife.
Paul Hogan brings out an enormous knife and says, no, that's a knife. And this was used in a number of television commercials and in stores. There was one store that depicted koalas holding large knives-- a kind of marsupial deepfake. That was considered to be passing off. And then it was also used in an advertisement for shoes, where the girlfriend says, "Hand over your wallet. He's got leather shoes." And the Paul Hogan figure, the simulation of Paul Hogan, says, you call those shoes? These are leather shoes.
The high point-- and this was the case that we put in the materials-- was about Rihanna. This is a Court of Appeal decision for England and Wales where a store sold t-shirts depicting Rihanna. They had permission from a copyright perspective to do this, but it was considered to be passing off, notwithstanding the absence of a bespoke tort there. One of the reasons was that fans would know that image because of the Talk That Talk album, which had used very similar imagery.
All right. So those are jurisdictions without a right of publicity, mostly, but where other causes of action have come into play. I'll just briefly mention consumer protection and media standards regulation. I found a Singaporean advertising code of practice that's very broad: advertisements and sales promotions should not manipulate, such as through electronic morphing, any person to create a misleading or untruthful presentation. So it's often useful to take those into account.
Now very quickly, I just want to finish with right of publicity and privacy rights cases. I've mentioned Canada. Canada also has a common law right of publicity that has developed. This was a famous water skier whose image was used in advertising, which made out the tort of right of publicity in Ontario. Then the Gould Estate case. This was an unsuccessful claim where the estate of the famous pianist Glenn Gould sued when a book was published about him.
This is where the Ontario Court developed this distinction between sales and subject. If the celebrity is the subject of the depiction, that does not give rise to a tort. But if the celebrity is used to sell something, then that gives rise to a tort. And that's their vehicle for infusing this area of law with concerns about freedom of expression, that talking about the person is fine.
South Africa. I want to just very quickly move to South Africa and India in the last two minutes that I have. But I wanted to quote this from one of the leading South African cases-- there aren't many-- which was used in that case about the child surfer. The South African court said, "The value of human dignity in our Constitution is not only concerned with an individual's sense of self-worth, but constitutes an affirmation of the worth of human beings in our society.
It includes the intrinsic worth of human beings shared by all people, as well as the individual reputation of each person built upon his or her own individual achievements." So it's linking what are largely commercial interests in many contexts to these kinds of dignitary interests, expression of which we find in new constitutions like the South African Constitution.
And then one final example comes from India. It's a decision involving Aishwarya Rai. She is probably one of the most famous people in the world. I'll raise you Taylor Swift on this one. She was a Miss World, an extraordinarily successful Bollywood star. She has, as you would expect, cultivated a particular image. She is a brand ambassador. She makes a lot of money out of her personality, out of her image, as well as being an extraordinarily highly respected actor in Bollywood movies.
We've come full circle from the Tolley and Fry case. Here, the right of publicity was recognized because of her commercial success; in Tolley and Fry, using defamation, he had a claim because of his amateur status. And the last thing I'll end with: this was an interim decision. We don't have a final judgment. But just have a look at the kinds of things that she was claiming for, and the court said, on an interim basis, we're giving relief for all of this. So a website representing itself as the plaintiff's official website.
A site that allowed downloadable wallpapers, a site that purveyed t-shirts featuring the plaintiff's name and photographs, e-commerce platforms selling and facilitating images, all that familiar stuff. This seems to happen a lot in India. A motivational feature using the plaintiff's name and image-- you find other Bollywood stars having this happen to them: we've got her on our books, when they don't. And then we get to this sort of deepfakes: a chatbot enabling users to engage with an impersonation of the plaintiff, including sexualized content.
The YouTube channel; Google, in its capacity as owner of YouTube; and various John Doe defendants who had used her image in the deepfake context. The procedural posture in the case was quite interesting: she was successfully seeking exemption from the procedural requirement in some of the courts in India that parties go to mediation before litigation. The judge said, you don't have to go to mediation with this kind of case.
But also, on an interim basis, the judge provided civil remedies, injunctions, and takedown notices in respect of all of those defendants. So a couple of points to conclude. There are piecemeal laws providing these kinds of remedies across a number of jurisdictions. And then, I think, the emergence of right of publicity slash privacy laws-- private law claims, but infused with constitutional focal points on the dignity of the individual. Thank you.
[APPLAUSE]
[VALÉRIE-LAURE BENABOU] That's what I say to my students when they get emotional about speaking in front of the public: drink a little bit of water. So that's what I did. Thank you so much for welcoming me. And I hope, even if my English is not native and I may make some mistakes, that you can understand a European and French perspective on the question.
I must say that after listening to my previous colleague, I was wondering about the choices I made in my presentation not to talk about these issues of defamation or passing off or parasitism, as we have in France, or unfair competition, which are actually also in scope. But deepfakes cover so many issues that you can address them through various kinds of legislation. So my choices are maybe disputable, but I decided, nevertheless, to select some issues in Europe and in France.
And first I wanted, like my colleague, to start with the consideration that criminal law in France and in different countries of the EU already bans the unauthorized representation of a person. In this perspective, deepfakes are not the target; they are only the means to commit the offense, like harassment through deepfakes or offenses against privacy through deepfakes. But it seems to me that it was not my burden to talk about that criminal law, because we are addressing private rights.
And deepfakes were not the subject matter of that legislation. Still, lately, the EU has addressed the question of deepfakes per se, as such, in the AI Act, which my colleague Celia will discuss this afternoon. And in the definition of the AI Act, the deepfake seems to be an unrevealed alteration of reality. So it's the untruthfulness which is at stake there.
Whereas in France, we have lately updated our legislation on what we also call deepfakes. But a deepfake here is the unrevealed use of a technique. It's not a question of whether it is true or not true; it's a use of the image which is not consented to by the person, where it's not obvious that this is algorithmically generated content, or where that is not expressly mentioned. So the problem of fakeness, if I may say that, is the undisclosed use of a specific technique.
And we don't see the question of the truth of what has been said or done; that's apart, something else. So even in the EU, we are not really very precise about what we call deepfakes. But what I wanted to say is that, for me, the deepfake is the question of alter reality. Etymologically speaking, alter means two different things: other-- alter, the other-- or to alter, to destroy, to alter the image.
And I guess that may be something we can keep in mind as we try to distinguish between deepfakes and replicas: a deepfake is something else, it's the other, but it's also sometimes something which is deceiving. And we have both considerations to take into account. So, starting from that, what are private rights and deepfakes? Mostly, the consent of the person who is depicted is relevant to decide whether it is a criminal offense or not.
So private rights may be relevant in the inner circle of what is not a criminal offense. Meaning, what is my power as an individual person to control or to oppose a deepfake which would not be contrary to the criminal law? Do I have a margin of maneuver to say whether it's a fake, or whether it's something that is normal and that I can do business with? It's problematic because the line between what is legal and not legal will depend on the consent of the person.
But we know that in intellectual property, for sure: when we give consent to something, well, it's legal, and when we don't, it's illegal, and you can't sue on criminal grounds. So let's start from that. I wanted also to share this dilemma: is a representation of oneself a dimension of oneself, of myself? If I have my image reproduced, is it me? Or is it something other than me-- an image created by someone else, or by myself-- over which I may have a different kind of control than over my own attributes?
And for me, this contributes to drawing a line between what can be a property right and what is a personality right. And we'll see that this is the relevant distinction for us. But it's quite difficult, because when am I really myself? When I'm creating a fake image of myself-- like when I do some makeup-- is it me, or is it an image that I am building, upon which I can claim property, or is it only a dimension of my personality?
So we have different types of protection, and I will really miss some. But you have had the wonderful speeches of my colleagues, more or less. Well, it's not the same in civil law as in common law, obviously, but we share some elements on defamation and privacy. I will focus on intellectual property and personality rights. Intellectual property I will go through very quickly, because I read the NO FAKES Act, and it seems to me that, well, the purpose is to create a new intellectual property right.
And my concern was, what is the counterpart-- what is the social utility brought by the person, by the image of a person in day-to-day life, on which he could claim a kind of intellectual property? And it seems to me that it's dangerous to extend IP rights whenever there is no social utility in the creation, no enhancement of creation or investment in something that we can all share.
And I'm not sure that it's a good thing to give a person a private intellectual property right in his own image only in order to prevent and to ban the use of the image, whenever we may have an interest, a public interest, to discuss, to share, to use the image or the voice of the person. If it's an IP right, then you have an exclusive right, which means that you have the right to oppose uses, so you can control the absence.
You can make sure that there is no deepfake, but you can also license and agree. And this is a market. This is business. This is, well, a problem of sharing the revenue of the exploitation of the deepfake. In the EU, we have lately harmonized a part of our contract law, and we provide some specific protections for authors and performers-- individual protections-- such as a right to be associated with the exploitation, without any possibility of buyout. We can discuss that.
And private international law may change the game, but under EU law, if a performer or author assigns rights for deepfake exploitation, he or she should be associated with all the profits of that exploitation, and there is no lump-sum buyout. So it's a little bit different from what has been explained earlier. The problem is, do we have a case there? Is copyright relevant? And Jane told us that, well, it's sometimes complicated to claim copyright.
It's complicated because we have this distinction between idea and form, idea and expression, meaning that style is not protected; it's in the public domain. So if you're making a fake which is in the style of a creator, well, it's free. You can do that. And if you want to claim that your copyright has been infringed, you must show that there has been copying or communication to the public of a piece of your work which is still recognizable and which should also be original, so you have a threshold of originality to demonstrate in order to claim that someone cannot use that part of your work in the deepfake.
And that may be complicated in the training or in the output. Whereas-- sorry-- for the producer's right, it's easier, because the producer's right is a right on the fixation. And the Court of Justice, in a very famous decision of July 2019, decided that with a sample-- so you reproduce a short excerpt of a phonogram-- the producer has a right, unless the sample is included in the new phonogram in a modified form unrecognizable to the ear.
Meaning that whenever you listen to the voice of a singer on something which was a fixation by the producer, and you can recognize the voice, notwithstanding whatever means were necessary to get to the result-- if you recognize the voice, then you have a use or a communication to the public of the phonogram. And the producer can-- I'm sorry. I don't have any screen anymore.
So the producer has a claim in saying you cannot do that. OK. The problem is-- thank you. The problem is, are the performers also entitled to sue if I can recognize the voice? And we have a problem here because we usually consider that the performer is only protected if he performs a work. So if you use the voice of an artist or a singer, but he is not interpreting a performance he has done, then you may not consider that it's covered by this right.
So my thinking about that is, well, sure, the voice of a singer in itself, used in another work, is not a performance. So I cannot claim my exclusive rights as a performer. But if what is used is my voice singing-- I am an opera singer, and in the deepfake I appear singing, so my work, my performance, is what was extracted-- this is the value that has been extracted. It's not my day-to-day voice, but my voice as a singer. And this value can be considered, from my point of view, as an expression of a performance.
And therefore, we can claim that maybe the performer has standing. If I use the image of Jim Carrey when he is buying his milk in the morning, he has no performer's right. But if I use the grimace of Jim Carrey in a deepfake, then, considering that it's his work, it's his job to make these grimaces, I think the value extracted is that of the performer.
The problem is, do I perform myself as being a work? It seems to me that we can split this. If I am interpreting a work, a character that I am creating or that has been created by a creator, then I have a copyright, I have a related right. If what is reproduced is only my image and my voice in my day-to-day life, well, no work is being interpreted, and I cannot extend the performance rights to this situation.
Well, very quickly, we have a database sui generis right in the EU that has been harmonized. Not really interesting here, but I was wondering whether I can consider myself a database of my own data. And if so, whether I can consider that self-care and self-education are investments in my database, so that I can claim that anyone extracting my data is actually infringing my database sui generis right. It's a hypothesis.
Quite strange, but I wanted to share that with you. Trademark. I won't go into the details. To my mind, it's complicated, under EU trademark law, to consider that the image of a person can be trademarked for the image itself. It's only if it's related to products. There's no protection of notoriety by itself through trademarks, so it won't be sufficient unless you have registered a trademark for particular products.
What is really relevant for us today is that we also have moral rights, not only economic rights. We have that at the international level, but as you know, Article 6bis of the Berne Convention is subject to reservation. You don't have it in the US as we have it in some EU member states, not all of us. So moral right, unfortunately, is not harmonized. That's why I will give the French example.
We have a very broad moral right that encompasses several prerogatives, mainly the right to claim attribution and the right to the integrity of the work. And this has been interpreted very broadly by the courts, as in this decision of the Cour de cassation, where Jean Ferrat, a famous singer, wanted to oppose the use of his song in a compilation in which there also appeared singers who had collaborated with the Nazis during the war. He was a Communist, and he was not comfortable being in this compilation.
And the court decided that the artist may absolutely oppose this compilation because it was likely to alter the meaning of his expression, and that he could, according to his moral right, refuse and override even the authorization the producer had given for the compilation. Moral right is really interesting, because moral right is not assignable. So even if I assign my exploitation rights, I can still and always oppose the alteration of my work or my performance.
So this is a guarantee that even if there is a bargain between me and someone, it cannot bypass my consent by going further and stretching my authorization to cover a situation that harms my dignity, my integrity, my attribution. But it is still a private right. It's not something that testifies to the authenticity of the work or of the performance. This has nothing to do with the truth. Someone can absolutely use this moral right to deny attribution of a work that he has in fact made. We have had that case several times.
So very quickly, because I only have one minute. We also have limits to IP rights. And those limits are parody, pastiche, caricature, quotation. And lately, in the Opinion of Advocate General Emiliou of June 2025, the Advocate General has drawn a very interesting distinction between parody, pastiche, and quotation.
And what is relevant for me in this comparison is that it stresses the fact that the right holder can keep his monopoly whenever the public may be deceived as to whether the use of the work has been authorized or not, or whenever the public cannot trace the origin of the parody or the pastiche. So I think it's really interesting that in those cases, what is meaningful is what the public has understood of this distance between the genuine work and the parody.
And it seems to me very relevant to focus on that, which is how we take into account the public's comprehension of what is being done with the work or with the performance. Is it understood that it is not the genuine intent of the author or of the performer to do what has been done with his creation?
Finally, in the Deckmyn case there is a very interesting theme, which is that the parody exception had been invoked by a far-right group in Belgium to justify the imitation of a cartoon, but with an underlying discriminatory message. And the Court of Justice said, well, as the author, you have the right not to be associated with such a discriminatory message, even if that was not grounded on what we call moral right, which, as I said, is not harmonized.
So we can see, maybe, in the Deckmyn case the emergence of something like an embryo of moral right at the EU level, saying to the right holders, you can oppose something presented as caricature or parody but that goes far beyond what is considered mockery and that carries discriminatory messages. I will stop with this.
Just to say that-- I just skip to the end. Oops, sorry. Just to share some thoughts about the protection of dead persons. We have, in privacy law and also in the GDPR, a protection of the individual's likeness, voice, and data. But this protection ends with the life of the person in both cases. After the life of the person, no protection is granted on this basis.
And the idea in the NO FAKES Act of an IP right lasting 70 years after the death of the person seems to me very dangerous, extending such a right of privacy or of control over one's image that far. I think we should keep in mind that if it's a right of an individual who is not a creator, not a performer, there's no interest in maintaining it for a long period after the death of the person, whether on the ground of IP rights or on the ground of a privacy or personality right. I can elaborate more in the Q&A. So thank you so much.
[APPLAUSE]
[GINSBERG] You don't have a seat? You do?
[ROTHMAN] No, I do. I'm just--
[GINSBERG] Have a seat. Yes. So thank you to all the panel members. I'm going to start the Q&A with first asking if any of the members of the panel want to react to something that another member of the panel said.
[AUSTIN] Thanks, Jane. I wanted to pick up, just because it's so immediate, on your last point, Valérie, about your discomfort with the extension of these rights and the endurance of these rights after death. And I wondered how we think about that in the context of some of the remarks that Jennifer made, when Jennifer so carefully outlined the harms, including harms to family members and related people as well. And I wonder if we can be so sure that those kinds of harms do not endure after the performer has died.
[BENABOU] OK. I kept the photo of François Mitterrand on his deathbed. That was the starting point for a case in France deciding that there is no privacy right or personality right after the death of the person. But the heirs, if they can demonstrate that they have suffered a separate harm of their own, can claim. But they are not an extension of the person. They are separate persons and they suffer their own harm, because, obviously, seeing my husband dead in the press is something that is harmful, but it's not François Mitterrand's problem anymore.
[ROTHMAN] Yeah. Just go. I don't know if this is on. So I recently wrote, with a co-author, Anita Allen, an article on postmortem privacy, which, if this topic interests you, goes into more depth than I possibly could here. And I cut postmortem out of my talk for time. But I think that's right. We have very different harms between the living and the dead. But we might care very much-- our living selves might care very much-- about how we anticipate we will be depicted after death, which could affect us more broadly as a society.
And our relatives may have their own experiences. And the way in which current postmortem laws on the books are drafted, and the way the NO FAKES Act proposes to create this postmortem right, focuses on those who commercialize their identities after death, creating a market in the dead rather than protecting the dignity and reputation of the deceased or of the loved ones who might want to limit that commercialization, which I think has significant inequities and is largely focused on the wrong problems.
And in addition, for those who like, or don't like, but are familiar with tax law: under the US system, if you're a well-known individual who could have valuable commercial rights, the way the system is currently designed-- and the way NO FAKES proposes-- it would actually force people to commercialize the dead, even against the wishes of the deceased and their families, to pay off an estate tax which would be assessed at the fully commercialized value.
So the NO FAKES Act actually could be very, very challenging. And you may have seen some reactions by Robin Williams' daughter to a recent AI-generated version of her father. And rather than being able to stop those uses, the way NO FAKES and other laws are being drafted would actually force her, against her own wishes, to commercialize him to pay off that tax bill.
[GINSBERG] I do have one reaction to something else.
[ROTHMAN] Unsurprisingly, it's for Ben. So, as Ben knows, I am very supportive of the protection of expressive and creative works, and I think that is essential, which is why I tried to highlight that there are so many wonderful uses of this technology. We tend to think of "deepfakes" as pejorative, but-- whether we call them digital replicas, so we don't bring that baggage with us, or something else-- there are wonderful effects and creative works that can be created, and that needs to be kept in mind.
But with that said, I think that while some states limit the right of publicity to the advertising and merchandise context, the vast majority do not. And this dates back to the origin of these laws and continues today. So the examples that Ben referred to, the Sarver case and de Havilland, both of which I think were correct-- and as Ben may remember, I was the lead counsel for the intellectual property and constitutional law professors on the de Havilland case and argued it in the Court of Appeals on our behalf, which we won.
Defending the right to depict de Havilland in the series. But these were First Amendment decisions, saying that the First Amendment protected these uses, not that a prima facie case hadn't been made out. And so when we're talking about deepfakes, I think that's very important. There are going to be a lot of things that are protected by the First Amendment and fair use. Good actors in the movie industry are all going to fit into that category with most uses, certainly the way it's currently being conceptualized. But bad actors could escape liability if we overly narrow the scope of these laws.
And just one more thing about what Ben said with regard to the wide support of NO FAKES. There were a lot of glaring absences in that list of supporters, which is any individual performing artist and any member of the general public. And so it's no surprise that the bill is drafted in a way that has exemptions for the motion picture industry, gives the record labels broad standing, and, as a carve out, for those who are subject to collective bargaining agreements, like the Screen Actors Guild. So anyone who doesn't fit into those boxes is not well-protected or served by this law as currently drafted.
[GINSBERG] OK. We have a little bit of time for Josh. And please say who you are.
[JOSH BERLOWITZ] Hi. I'm Josh Berlowitz from Kirkland and Ellis. This was fabulous. Thank you so much, all of you. It was very interesting. I want to pick up where Professor Rothman just left off, which is the harm to the public. And for a variety of reasons, from all of your talks, I think it makes sense that the efforts to combat deepfakes and digital replicas have been focused on intellectual property rights, rights of publicity, personality rights, moral rights, and all sorts of things that an individual can enforce. But the problem is that limits the consideration of the public.
And we don't have a lot of-- we don't have that many rights that the public can enforce in the US. The consumer protection laws that individuals have a private right of action under are fairly narrow. And I'm thinking about false advertising laws, where an individual can bring a class action and say, we as the public were deceived, and we bought this product, and we didn't mean to, but we were deceived by this company for x, y, and z reasons, and recover.
And you can figure out who was harmed and you can grant a remedy, assuming the case is made out. And I'm wondering what can be done to protect the public. What would a public right not to be deceived by digital replicas look like?
[GINSBERG] I just want to point out that our next panel is going to be on transparency. But transparency does not exhaust the scope of your question. So Ben.
[SHEFFNER] Yeah. I mean, I think your question is part of the answer. There already are existing causes of action that can be used in the scenario you pointed out. You remember Professor Rothman's presentation. She had the example of Tom Hanks. There was a video circulating-- I can't remember exactly-- a fake ad that purported to show him endorsing some sort of dental service, and there was outrage about that, understandably.
But he has a cause of action under state right of publicity law in probably every state in the country, probably under the federal Lanham Act as well. And as you pointed out, I'm not as familiar with this body of law, but there are consumer protection statutes if somebody was deceived into purchasing that product because they falsely believed that Tom Hanks had actually endorsed it, and maybe a whole class would have a cause of action.
In my conversations with legislators, when we're talking about these issues, I often try to say, hey, take a pause. Slow down before you enact this broad new bill that says all digital replicas are illegal, but here's a bunch of exceptions. And stop and ask, is the harm that you're actually worried about already covered by existing law? And in a lot of cases, it will be.
Maybe not every case, and there are some gaps. That's why, at least initially, there was this focus on protecting professional performers who were worried about having their performances replicated without their permission by digital replicas. But again, if a digital replica is used to endorse a product, I'm pretty sure that the victim, the person who was falsely depicted there, would already be able to sue under existing right of publicity law in all 50 states.
[AUSTIN] OK. I think that's a-- I think that's a great question. When preparing for this, I was doing a thought experiment, trying to superimpose. It's Federalist 52, isn't it, where Madison says that, with copyright, the public good and the private right fully coincide? Yeah.
[GINSBERG] Federalist 43.
[AUSTIN] Thank you. Can you make the same claim with these kinds of rights: that you have, at a systemic level, the public good and the private right aligning in the way that is claimed for copyright and patent by the framing generation? And one of my reflections on this-- I have a hunch that the United Kingdom Supreme Court, a court that is very focused on commercial interests and the integrity of private law, sets its face against publicity rights because of a conviction that things like passing off and breach of confidence have a heavy thumb on the public interest in a way that these other rights might not, at least in their view.
I think there are claims for the public good that can be made with these kinds of rights that they are not focusing on. And by the time I finish, I want to develop this idea that the new constitutions, or rights inherent in some of the constitutions in the Commonwealth, like the Canadian Charter of Rights, protecting dignity, lead to an infusion of those ideas into tort law. But it's also important not to lose sight of the public-serving aspect of those torts that you find a fuller commitment to in the United Kingdom.
[GINSBERG] In that Federalist 43, Madison also said the states cannot separately make effectual provision for the protection of patent and copyright. And I think that's part of our problem here. Did you want to say anything else?
[ROTHMAN] I just wanted to add to that. So I love that question. I've been thinking about that a lot. And some of it is, I think, reflected in existing law, but largely in consumer protection. I think some of the protection for dignity resonates from the EU, from France and Commonwealth countries, and is also in aspects of our law, particularly privacy law. But I think going forward, our focus on trying to center concerns over deceiving the public can drive some of our choices.
So it's separate from existing laws. We could be creating mechanisms for government enforcement of criminal and civil penalties for deceptive deepfakes. We need to have a functional regulatory state, but if it worked, we could be setting that up. I think we could also, as I mentioned towards the end of my comments, support and encourage through legislation the adoption of technology facilitating authentication, detection, and transparency. And I guess we're going to talk a little bit more about that later today.
And importantly, to keep in mind as we think about passing new legislation, both at the federal and state level, is this creating an architecture which will give more fuel to circulating deceptive deepfakes? And I think some of the ways they've been drafted actually enhance the likelihood of deceiving the public rather than mitigate it. And that makes it, in my book, a bad law.
[BENABOU] I just wanted to add something about a type of deepfake we didn't address, which is fake news. When the press publishers were advocating, lobbying in the EU, to get the related right for press publishers, they relied a lot on the risk of fake news, saying, we need to have control over press publications to fight against fake news.
And I wonder whether it's relevant to give the press publisher control over information on the theory that they will oppose fake news, because so far, I haven't seen the results of that. Maybe. But I was wondering. And I also thought that the public interest may be represented in some of the tools for the protection of cultural heritage, because the authenticity of something-- the historical truth-- is something that we all share.
And it seems to me that we should maybe address the issue not through an IP right but through something like cultural heritage. For the dead person, for example: protection against falsely portraying Martin Luther King. It's our cultural heritage, and someone should represent the public's interest in not being deceived by this kind of fake person.
[GINSBERG] We had one more question, and I know it's our coffee break. But I think I will invade the coffee break. Actually, Ted, it was the person behind you who had raised his hand. So sorry, Ted.
[CHARLES BOWDEN] Thank you very much for your whole presentation. It was very interesting. My name is Charles Bowden. I'm a PhD student in philosophy at the Sorbonne. And my question is actually about the distinction and the categories Professor Rothman brought up, but this question is open to all of the participants. You made a distinction between authorized, non-authorized, and fictional deepfakes. And I'm interested in the last category, the fictional one. Can't we consider that all deepfakes are fiction? And if yes, can we make distinctions inside this category? And if not, what could be, in your perspective, a non-fictional deepfake? Thank you.
[SHEFFNER] Maybe I'll take it. I'll take this one. So the term that we in the motion picture industry use for that category of deepfake, let's call them-- like the fake actress Tilly Norwood-- is synthetic performers. And this was actually a big topic of discussion in the negotiations between the studios and SAG-AFTRA, the union representing actors, back in 2023. And there was a big fight over this.
And the ultimate resolution is that if the studio wants to use a synthetic performer, meaning a performer that does not actually resemble any one particular actor, they have to let the union know about it and give them the opportunity to bargain over that use. And you would say, well, why should the union care? Because that person is not an actual human being actor. They don't pay union dues or anything.
But their answer would be that a synthetic performer like Tilly Norwood only exists because the AI model that produced it was trained on material embodying hundreds, maybe even thousands or tens of thousands, of performances by union members. In other words, Tilly Norwood wouldn't exist but for real actors.
I actually just heard from somebody at SAG-AFTRA the other day who said that since that agreement was entered into in late 2023, there has not been a single instance of a studio actually going to the union and saying, we want to use a synthetic performer, let's talk about it. But it's something that the actors are very concerned about. They don't like it at all. But if a studio is going to use one, they believe they should share in the benefit, because, again, they believe it was created by taking little bits of thousands of performances and assembling them into a new synthetic performance.
[ROTHMAN] I like the term synthespian better. So I don't claim that this is epistemologically necessarily the best way to frame it. What I'm focused on-- and you're right. They're all, in one sense, fictional because they're fake. They're not something that happened, and so they're all fictionalized in that sense. So what I meant in this sense was that it's a fictional person depicted. And so in the other ones, we have depictions of real people, even if the deepfake itself is fictional.
And so I wanted to just highlight, and problematize, even that category. Given how we frame and understand the harms that flow from deepfakes, how do we treat these fictionalized deepfakes where there's not actually a real person depicted? Is that a deepfake, or is that not a deepfake? Because we could define it either way, and some of the laws and definitions of deepfakes and digital replicas say it has to depict and simulate a real person.
So we could define it that way, or we could not define it that way. And so by having that category, I was trying to highlight, let's think about it. We're not hurting the individual depicted because they're not real. But we might be deceiving the public in the same way as the others, and so we might want to keep it in.
[GINSBERG] And you might be substituting for the livelihood of real actors. Right. So I've now allowed us to invade the coffee break by more than 10 minutes. So I think we should take our break, with apologies to those who wanted to continue with the Q&A. So we will return at noon or as close to noon as possible. So our 30-minute break is a 20-minute break. And when we come back, we will talk about transparency.
[APPLAUSE]
[LOENGARD] OK. I think we're going to start. Just to preview, lunches, box lunches will be available where breakfast is now. You're welcome to eat them here or in the room across the hall. You're welcome to go outside. Whatever. I have not been outside in five hours, so for all I know, there's a tornado. But in theory, you're welcome to go wherever is most comfortable for you. And then we'll reconvene.
I don't have my schedule, but I'm going to say 2 o'clock. And if I'm wrong, go by the schedule, not by me. So those will be available right after this amazing session that we are pleased to host next, which features Fordham Law Professor Olivier Sylvain and Professor Celia Zolynski of the University of Paris Panthéon-Sorbonne. Again with the French names in front of the French speakers. It's deadly.
They will discuss the intersection of deepfakes and free expression, what protections transparency measures can give, and what new proposed or enacted policies in the United States and the EU offer in terms of combating the unauthorized manipulation of images. So we have had our preview, and I leave it to Celia to take us forward.
[CELIA ZOLYNSKI] Thank you so much. So first of all, I would like to thank Professor Jane Ginsberg and all the organizers, and also the Alliance Program, for making this comparative symposium possible. And I'm delighted to be with you today. I'm going to present an overview of the topic based on the first results of research I have been leading, and continue to lead, on deepfake technologies and the legal framework, especially an opinion for the French commission on human rights about teen intimacy and digital services.
This opinion, published in February 2024, analyzed the impacts of non-consensual sexually explicit deepfakes. There is also a research project in my research center dedicated to the production of an open-source large language model. In this project, we are studying with partners the technical and legal challenges of watermarks. And we are also currently finishing a legal study for the French health agency about safety on social media and risks for teens.
And I'm beginning a mission for the Ministry of Culture about deepfakes in the creative sector. So with all this research, I propose to share my thoughts-- maybe not really answers, but thoughts-- about deepfakes and the current legal framework, and maybe the evolution of this legal framework from the EU perspective.
So let's begin with what we are talking about. This morning we understood that the very notion of deepfakes is not so clear, and that we have a lot of definitions of what a deepfake could be, what it is, what it should be. Because the previous speakers have made brilliant presentations, I just want to remind you that we have a legal definition in the EU, in the AI Act.
So Article 3(60) of the AI Act defines a deepfake as AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful. I'm sorry. I lost my screen, so. OK. I continue. So as you understand, it is a very comprehensive definition, the one we now have in the EU.
So it's focusing on synthetic content. It's not only focusing on persons but includes a lot of other things, including events. And we had a lot of deepfakes during the Olympic Games in Paris, as you may know. So the question remains whether we need a misleading purpose-- and what I can add now is that the AI Act does not require proving the goal pursued by the producer of the content. So maybe-- OK.
The next question is, why are we focusing on deepfakes? What's new with deepfakes? We know that drawing an image, or even a video, does not constitute-- I'm sorry for this-- proof of reality. It is subjective. It is a representation perceived or constructed by the author. So this brings us back to the classic debate about the relationship between the audience and fiction. And many consider that this debate is renewed with deepfakes, because some believe that hyperrealistic AI-generated or AI-manipulated content could prevent the public from taking a step back.
And this could blur the line between fiction and reality. That is what my colleagues in the arts tell me when we sit together in an interdisciplinary perspective to ask what deepfakes are or could be. We must also understand, as Jennifer Rothman explained this morning, that deepfakes are not a single, uniform phenomenon. The context and the goal pursued by the author differ from case to case. So we understood this very well, and we have to take it into account.
We must therefore add the risk of a distortion of the information space, with multiple synthetic media now presented as if they were authentic. So we also have the impact of a saturation of the digital space with the dissemination of a massive amount of inauthentic content, and the impact of the features of what Tim Wu called the attention economy. Discussing deepfakes also requires, I would say most importantly, taking into account the massive infringement of individual rights that can result from the production and sharing of non-consensual digital forgeries, especially non-consensual intimate images and child sexual abuse material.
So many of these issues were already highlighted by various authors several years ago, and some are now widely recognized. And this was clearly mentioned in the report prepared for the international AI Action Summit held in Paris in 2025. This report about AI safety highlights, as you can read on the slide, risks for individuals and societies, such as misinformation, gender-based violence, erosion of public trust in digital media, and so on.
So what we need, first, as we understand, is to define precisely what a deepfake is when we are studying the legal framework, of course. And we also have to take into account the various domains in which the public can be exposed to deepfakes in order to define this legal framework. So what about EU law? What about EU regulation regarding deepfakes?
Given the issues I've just mentioned, the European authorities decided to tackle this issue by adopting the AI Act in 2024. It was one of the most debated issues during the negotiation of the AI Act. Just note that it was also in 2024 that French law was adapted to address specific risks under criminal law. Just a few words about the general context of this AI Act, in case you don't know it-- though it's very famous, so you probably know the EU AI Act.
I would like to underline that this AI Act is a transversal regulation that aims to create a single EU market and harmonized rules in the EU for the promotion of trustworthy and human-centric AI. And this new piece of regulation is based on compliance mechanisms and what we call a risk-analysis approach. The goal is to promote innovation, but also to address the level of risk regarding safety and fundamental rights, including human rights, democracy, and the rule of law. In this perspective, one of the principles of this AI Act is not to consider the technique itself, but to consider the uses of the technique.
That's why deepfakes are captured by several layers of the AI Act. So here we will see that deepfake regulation is a perfect example of the regulatory approach we decided to adopt with the AI Act. That means raising flags, sometimes red flags, but also pushing innovation. It's quite difficult sometimes to do both, to promote both at the same time, as we will see.
So deepfake techniques, as you know, can be used for various purposes and in various contexts. Sometimes, we know, this offers great opportunities, opportunities that are socially desirable: education, information, creation, and so on. And they can also cause massive harms: disinformation, fraud, bullying, harassment, and infringements of dignity. That's why deepfakes have not been prohibited per se, as a matter of principle, by the AI Act. There was a discussion, but in the end, they are not prohibited per se.
But in keeping with the logic of the risk-analysis regulatory approach adopted by the AI Act, the EU authorities have particularly identified the need to avoid specific risks of manipulation of the public, and especially to avoid malicious uses, in order to preserve the public interest. That was the main goal during the negotiation and at the moment of the adoption of the AI Act, in July 2024.
The idea was to tackle risks of impersonation and deception, and also, very importantly, risks regarding elections and the integrity of the information ecosystem. So, as you may see in this pyramid, we have the various layers of the AI Act, and deepfakes can be captured by these various layers. The first layer concerns all deepfakes-- I take the last part, the yellow one at the bottom.
Here, that means that for all deepfakes, the AI Act imposes transparency requirements. This need for more transparency-- the necessity to differentiate AI-generated or manipulated content from human-created content-- has been one of the most important goals of this AI Act. It was like an obsession. Everyone was talking about this.
But we also have uses of deepfakes that could be considered high risk. We have a category of high-risk uses of AI systems, and this qualification determines the application of most of the AI Act's compliance system. And here, if you consider deepfakes, you can observe that only one specific use of deepfakes is qualified as high risk: the electoral or referendum context. Only this category falls under the qualification of high risk.
What about the prohibited uses? For prohibited uses, the top of the pyramid, we have a list of uses considered to present unacceptable risks regarding safety and fundamental rights. And in the list in Article 5 of the AI Act, you don't find any mention of deepfakes. So here we have a current debate, because after the adoption of the AI Act, the EU legislator realized-- too late, I would add-- that we have these very harmful issues of non-consensual sexual deepfakes and child sexual abuse material.
And the EU Commission published last February guidelines to interpret what Article 5 covers. And in this document published by the EU Commission in February 2025, NCII and CSAM are mentioned as possible prohibited uses. But it is now clearly debated, because the conditions for applying the prohibition of Article 5 are very strict, and we are not sure that all the conditions would be met in these cases. So this is a main question and a potential issue we now have regarding EU law.
And finally, this whole structure is completed by imposing specific obligations on providers of GPAI systems and GPAI models that could cause systemic risks. And this is now described by the code of practice published by the EU in July 2025. And this code of practice mentions that specific uses of deepfakes, such as CSAM and non-consensual intimate images, can generate systemic risks. So here again, quite late, we take these deepfakes into account indirectly.
Considering that, under the AI Act, we understand that most deepfakes are captured only by transparency requirements. The idea is to preserve the public interest from malicious uses and disinformation. So let's dive into this specific tool of regulation. We need to understand whether these transparency requirements should be the cornerstone of the regulation. In other words, does transparency offer sufficient remedies, or should it be considered insufficient to ensure the public interest? That is the question we now have to address.
My first point is to identify the questions raised by the approach adopted by the EU in the AI Act. And my second point is to determine how transparency could be an effective means of addressing the potential risks we have mentioned. The microphone is not-- OK. So first of all, I would like to consider with you a number of questions: why transparency is imposed regarding deepfakes, how to implement it, and what the limits of such an approach are.
So first of all, we mentioned why, so I skip this. I just want to make precise how these transparency requirements apply. This is specified in Article 50 of the AI Act. Article 50 introduces two levels of transparency requirements that will become applicable in August 2026. First, as you see on the slide, we have a marking obligation, a marking obligation imposed on providers, AI providers.
They must design their AI systems-- and this includes general-purpose ones-- to mark outputs as artificially generated or manipulated in a machine-readable format that can be detected. And we have recitals in the AI Act-- this EU legislation is quite a long one-- specifying what kind of technical tools could be used, techniques such as watermarking, for example. The aim here is to facilitate trustworthy detection and identification of AI-generated and manipulated content.
In addition, we have another requirement: labeling. Labeling is imposed on deployers of AI systems. They have to label deepfakes in such a way that the public can be informed of the synthetic nature of the content. And because these provisions could be quite difficult for these actors to implement, the EU Commission has launched a consultation to better define the soft law-- I mean, the guidelines and code of conduct that will be published. We have a lot of acts to prepare.
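[A minimal illustrative sketch of the two levels just described, added for readers. It is not the speaker's, and the AI Act does not prescribe any particular format; all function and field names here are invented for illustration. Real systems would rely on robust watermarking or signed provenance manifests such as C2PA rather than plain metadata.]

```python
import hashlib
import json
from datetime import datetime, timezone

def mark_output(content_bytes: bytes, model_id: str) -> dict:
    """Provider-side marking (hypothetical): attach machine-readable
    provenance metadata to an AI-generated output, in the spirit of
    Article 50(2) AI Act. A plain JSON record stands in here for a
    robust watermark or signed manifest."""
    return {
        "ai_generated": True,                                  # machine-readable flag
        "generator": model_id,                                 # which system produced it
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "marked_at": datetime.now(timezone.utc).isoformat(),
    }

def label_for_public(mark: dict) -> str:
    """Deployer-side labeling (hypothetical): turn the machine-readable
    mark into a disclosure the public can read, in the spirit of
    Article 50(4) AI Act for deepfakes."""
    if mark.get("ai_generated"):
        return "This image, audio, or video has been generated or manipulated by AI."
    return ""

if __name__ == "__main__":
    synthetic_output = b"...synthetic video bytes..."
    mark = mark_output(synthetic_output, model_id="example-video-model")
    print(json.dumps(mark, indent=2))   # what detection tools would parse
    print(label_for_public(mark))       # what end users would see
```

[The only point of the sketch is the division of labor Article 50 draws: providers make outputs detectable by machines, while deployers make the synthetic nature visible to people.]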
So guidelines and a code of conduct will soon be published to explain more precisely how to respect these transparency requirements. And I recommend you read the various responses of the stakeholders. It's very interesting to identify the questions raised from a practical standpoint and the requests to take the context into account, and of course also the challenges, because there are a lot of challenges.
We can identify challenges that are taken into account by the AI Act itself. For example-- and I just take this one because time is running out-- the AI Act takes into account the necessity to preserve freedom of expression, freedom of creation, and freedom of science. We know that deepfake techniques can be used, for example, for a historical purpose, for a scientific purpose, and so on.
So here, to preserve this freedom of art and science, the AI Act provides that the label is still necessary. There is no total exemption for creative or scientific content, but the label has to be adapted so as not to hamper the display or enjoyment of the work. So the question we have to address is, what is artistic content? And the AI Act provides that the content has to be evidently artistic, creative, satirical, fictional, or analogous. Sorry for my horrible French accent. So let's keep this: artistic, creative, satirical, and fictional.
Another question could be, what about using the artistic nature of the content to reach another goal? And we have a lot of difficulty drawing the line, the frontier, between what can benefit from this exemption and what cannot, especially with the uses of parody that we have with generative AI. Other limits also have to be considered, such as, as you know, technical challenges. We are currently working on the standardization of watermarking.
For example, we have coalitions of actors such as the Coalition for Content Provenance and Authenticity, with the C2PA watermarking work in progress. And in France we have an initiative called Provenance For Trust, which is a coalition of actors such as operators, experts in labeling content, experts in the detection of AI-generated content, but also the journalists' initiative, the Initiative of [FRENCH], to promote certification for media.
We know that we have a lot of technical challenges to consider, especially regarding the robustness and accuracy of watermarks. That's what we are currently studying in my research project to build an open LLM respecting all these issues. We also have cognitive challenges, because we know that the label can have very important limits in actually informing the public of the nature of the deepfake. And here I would like to stress the necessity of involving academics and researchers to better identify the right label and to address cognitive biases in this particular context.
We also have what we can call epistemic challenges, because even if a deepfake is labeled as such, we have to consider the impact of the narrative behind the deepfake. And in this context, we can consider that transparency requirements are not enough to prevent public manipulation. So we can conclude here that reflexivity is the real challenge.
We have to build, to propose, to consider reflexivity as a challenge, to consider the relationship between users and content, between users and the information space, and to ensure better user agency. So considering all of this, we need to go a step further to understand whether we can consider transparency a remedy to limit the impact of deepfakes from the public's perspective.
So here, we can argue that we need to consider the conditions under which transparency can fully play its role in limiting risks, in order to protect the rights of the public in this specific context of deepfakes. And to reach this goal, we need to consider transparency not only as a set of specific requirements but as real transparency.
That's what we are promoting in the EU: what we can call regulation based on transparency. Here we can take the example of another piece of regulation, the Digital Services Act, which ensures the safety of the digital space by imposing new obligations on online platforms, especially the very large ones, which have a very large audience.
Under the DSA, online services, especially very large online platforms, have to respect transparency requirements with respect to their terms and conditions, the design of their service, their algorithmic systems, and so on. But they also have to respect other obligations: for example, to publish transparency reports on their activities, and to produce periodic risk assessment reports analyzing the systemic risks their activities can cause, for example, to personal data, privacy, dignity, the mental health of end users, children's rights, and even media pluralism.
And considering deepfakes, it is particularly important to monitor the effectiveness of the mitigation measures they have to take to address these risks, for example, for individuals or for democratic debate. The DSA also imposes external and independent audits. And here we are studying how we can ensure independent adversarial audits to challenge the guardrails implemented by the AI provider, for example, to avoid specific kinds of deepfakes.
And last but not least, this regulation based on transparency is possible only if we organize access to data-- access to data for the regulator, but also for academics and non-profit organizations, to challenge the responsibility of these providers. So we can say that technical transparency has to be complemented by public transparency.
I will finish by mentioning that what we also need is to organize a systemic regulation of deepfakes, taking into account the question of the propagation of deepfakes, because it is a very important issue, as we know. And this is taken into account by the DSA, which imposes on very large online platforms, as we mentioned, the obligation to make risk analyses and take mitigation measures, including labeling deepfakes and ensuring that this label stays with the content even if the content is shared with other users.
And here I just want to mention, to conclude, that we have various clarifications published by the EU Commission regarding, first, the context of elections. So we have precisions regarding deepfakes in the very specific context of elections, requiring very large online platforms to control the diffusion of deepfakes. And we also have specific attention to the protection of minors, which is one of the goals of the DSA: to ensure a high level of protection of minors as users.
And here we have specific provisions regarding deepfakes that have just been added by other guidelines published by the EU Commission in July. So you see that EU regulation tackles a lot of consequences and issues. And this will be my conclusion. We need to consider that regulation alone is not sufficient. As we already mentioned, we have to consider education and literacy as essential to ensure the reflexivity and resilience of the public.
And I will take the chance to mention what we have been promoting in the AI Observatory of Paris University for a few years now: conferences and podcasts to make the public, especially teens, more aware of the risks of manipulation involving deepfakes, and to prevent risks of harm such as CSAM and non-consensual intimate images, especially in connection with sextortion. So you can find more information with the QR code, and don't hesitate to send us an email if you want to be involved in such initiatives. Thank you so much.
[APPLAUSE]
[OLIVIER SYLVAIN] Hi, everyone. It's great to be here, and I'm so pleased to have been invited to join this conversation. And I come to you not as an IP specialist. I am a public law person, and I tend to think of these problems as public law problems. So that's why I was intrigued by Josh's question and other kinds of things that have come up towards the end of the last panel discussion.
So I also want to take up Jennifer's wonderful intervention, her introduction, really, to maybe add to or amend the list of things to think about. So I'm drawn to the idea of thinking about authorization and deception as the sorts of priorities for attending to deepfakes and related AI abuses. And then Jennifer, you also asked, is this also limited to just people? Which I think is a great way of framing this-- of reframing this.
And to the extent we're thinking about people, are we thinking about individual people, or are we thinking about the greater public? And for that, I'll start with this and I'll end with this. There are very few institutions that are designed to attend to public harms. And I think we generally associate them with agencies, federal agencies, state agencies. The problem here is that this is contingent on the efficacy of the administrative state in any given moment.
But there's also another formidable problem, and that's what I'm going to take up here. And that is apart from the operational problems, it's a constitutional one. What happens when a federal agency has authority to regulate the kinds of information that we've been talking about today? So that's where I'll end up. In order to really put a context to all this, I want to make sure you understand that I'm not coming at this as an IP problem.
Now, information disclosure, or transparency, is a broad category of regulatory intervention. It can be useful for many reasons that are not really related just to the conveyance of information to an individual consumer. Information forcing could be good for learning and research; I'll talk a little bit about that in a moment. And as you likely know if you're a student of anything Cass Sunstein has written, there are also behavioral impacts associated with transparency. It nudges people to attend to potential harms to the extent they have to attend to risks.
And this is not new. In environmental regulation, the National Environmental Policy Act of 1969 sets out the impact assessment obligation. And you all likely know that in the context of civil rights, we think about disparate impact assessments, and privacy assessments as well. These are all designed not merely for disclosure to an individual, but also, presumably, for habit forming. So this sounds more like a regulatory intervention to me when I think about it. There's also probably a taxonomy of transparency that we should have some clarity on. And we've actually already seen some of it in Celia's presentation.
One is mandated disclosures, which I think risk assessments probably fall into, although there are many other kinds of mandated disclosures: nutrition labels and the ways in which we think about labeling as mandated disclosure, or breach disclosures, for those of you who attend to cybersecurity issues. Audit requirements are a kind of disclosure, but they don't do the same thing as a mandated risk assessment. Then there are counter-notification processes.
The Digital Millennium Copyright Act, as many of you likely know, has a mechanism to publicize, although to just one person, a potential aggrieved party, the possibility of a takedown. Counternotification is something that comes up in the Take It Down Act, and I'm going to talk-- most of my conversation or most of the things I'll say will be addressed to that. There's also appellate process.
Any given platform or company that decides to take something down ostensibly ought to be able to give individuals whose content is taken down the opportunity to appeal after some explanation. That's a version of transparency, I would say, even if it sounds in due process. And Danielle Citron has written about administrative due process in this context. I worked for two years under Chair Lina Khan as senior advisor to the chair, and I learned a lot about the civil investigative demands that the FTC would issue.
They would issue them not necessarily for the purpose of commencing an investigation, but for producing a public report about an industry. Broadband was the subject of one 6(b) report, and data practices were the kinds of things that produced a lot of useful information, because of the kind of access the FTC has to private organizations, more than other agencies.
And finally, data access. I often think of this as related to researcher-- that researchers have access to the ways in which platforms use data. There's a lot of learning that has to happen in that space. I'm a senior fellow at the Knight First Amendment Institute here at Columbia, and that is one of the priorities for them, for example.
Given that taxonomy, my focus here is going to be very narrow, and it's going to be on the kinds of mandated disclosures and risk assessments that we've already been hearing about. And it's going to be also narrowed in the context of elections and consumer harm. We can think of assessments as being applied to a variety of different settings. But to be clear, these are the areas that have come up. I'm not talking here about provenance for the purposes of IP holders or creators' rights.
I'm really talking about public harms, the kinds of harms for which agencies and governments ostensibly stand in the shoes of consumers. But here is what's different between the EU and the US. There are many things that are different between the EU and the US. But one, of course, is the First Amendment. This is why transparency is a tricky regulatory intervention here. And I'm going to start by talking about this in the context of social media regulation, because that's actually the area in which the Supreme Court has recently given us some guidance about what transparency requirements may or may not do.
And as many of you likely know, the big case that the Supreme Court decided a couple of years ago in the summer of 2024 involved regulation or state legislation out of Texas and Florida that regulated principally the content moderation practices of the big platforms. Now, the regulations were principally addressed to the obligations these companies had to attend to certain kinds of content or actually forbid discriminatory takedowns and suspensions of users. And that's principally what Justice Kagan's opinion is addressed to.
But those statutes also had transparency provisions. They required social media platforms to provide users with notice and an individualized explanation for why content would be taken down. Texas's law also required platforms to afford users the opportunity to appeal those decisions. And we have some language in the Supreme Court opinion about this. Not a lot. Now, NetChoice and industry folks and First Amendment advocates brought cases against the state laws, arguing that they violated the First Amendment because they burdened the editorial decisions of the companies.
To the extent that a company has to attend to an explanation every time it takes something down, that visits a burden on it with regard to the content it has taken down. And so that affects its editorial decision making. This is actually pretty intuitive in First Amendment doctrine. The Zauderer case, which I'll mention briefly in a second, is the principal case that talks about this. It's a kind of balancing, but it's a formidable balancing, given the speech interests at stake.
The 11th Circuit, reviewing the Florida case, said that the individualized explanation requirements were unduly burdensome. The Fifth Circuit didn't think so; it didn't think that Texas's approach was burdensome, which is suggestive of some confusion in the doctrine about what ought to happen. Justice Kagan's opinion pours a lot of cold water on any effort to regulate content moderation. That's the main takeaway. Even though I don't think the court says so assertively, it remands the cases because the challenge that NetChoice and others brought is a facial challenge, and the court says no.
To do a proper facial challenge analysis, you have to know whether a substantial range of the law's applications would be affected by this regulation. So the cases were remanded to the courts below. But part of the analysis was the consideration of whether the disclosure requirements, the explanation requirement, imposed a burden on the speech interests of the companies. We don't have an answer on the First Amendment, but we have strong indications that the Supreme Court would ultimately strike down the statutes when the issue comes back and has been fully fleshed out below.
Justice Thomas, who is apt to invite all kinds of litigation involving matters that worry him, has said that he would want to revisit the Zauderer test for how to evaluate whether something is too unduly burdensome of a speech interest. And he's skeptical that Zauderer actually articulates a view that is consistent with First Amendment norms. He would actually, if not do away with it, substantially narrow the claim that there's a burden on speech interests, which is an interesting intervention.
So the split here between the 11th and Fifth Circuit is actually a story that reveals tension, but also a story that's consistent with the American experiment, and that is that the states are supposed to be labs of experimentation. This is what students learn in law school. In our current climate, it's a bit-- we can probably put it a little bit more crisply. And that is that California and Texas are the big labs for experimentation here.
And I want to talk about California's laws, but it's worth saying that 26 states have passed laws regulating political deepfakes in particular. Many of these include prohibitions, but also disclosure and transparency requirements. OK. I want to make sure that I recognize that Congress has been thinking about this, no matter what Congress is doing now, which is nothing, except for twiddling their thumbs, I guess.
There have been proposals put forward to regulate this space. The Protect Elections from Deceptive AI Act is a bipartisan bill that would prohibit the distribution of materially deceptive media generated by AI relating to federal candidates. The federal candidate can bring an action, which brings up all sorts of things that came up before about how to vindicate harms. And there's an exception, driven by the First Amendment, for parody and content involving news broadcasts.
I want to talk now about California. I don't want to linger too much on it, since you've heard some about California already. But to the extent we have litigation on transparency, it really does involve California laws. There are two statutes that were passed late last year, AB 2839 and AB 2655. AB 2655 has, speaking of acronyms for statute titles, one that I need to repeat: the Defending Democracy from Deepfake Deception Act. Someone decided to do that on purpose.
It requires large platforms to label certain content as inauthentic, fake, or false during the 120 days of the election cycle right before the election, and it imposes disclosure requirements after the election. Content that portrays candidates for elective office and current elected officials has to include a statement that says, this image, audio, or video has been manipulated and is not authentic. Given what you've heard from me about burdens on speech, it is suggestive that this is potentially the kind of thing the doctrine wouldn't allow.
Well, and indeed, this has been the subject of a lawsuit. There is also a substantive deceptive media and advertisements provision that California has enacted. I think maybe Doug might mention a bit about that later; I don't want to impose too much on you. But there is the transparency provision.
There is a lawsuit against the substantive provisions that has produced orders suggesting they are unconstitutional as a matter of First Amendment doctrine, because they are viewpoint based and content based, focusing on particular candidates, and only to the extent the content is, in the language of the statute, undermining the confidence of the public. It is viewpoint determined because it does not say anything about any positive representations that might appear in AI-generated content.
So the district court, the Eastern District of California, has declared the substantive provisions unconstitutional. With regard to the transparency provisions, there's a curious order from the bench from Judge Mendez, the same judge, who says, no, this case can't move forward. That is to say, this statute can't move forward because Section 230, a provision that Jennifer mentioned, preempts the state's effort to regulate the distribution of user-generated content. I'll return to this later on.
OK. I think I need to speed ahead and just talk about the Take It Down Act. So we have cases addressed to transparency. We have a standard for evaluating whether a requirement is unduly burdensome for speakers. And we don't have any clear direction from the Supreme Court, but we have some inkling, given the Moody versus NetChoice case, that is, the Texas and Florida cases.
The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act-- that's the Take It Down Act-- criminalizes the non-consensual distribution of intimate images, whether authentic or digitally manipulated. There are definitions of what is an intimate visual depiction, drawn from another provision in the US Code; the Consolidated Appropriations Act has a definition of this. And there are distinctions in the statute between intimate visual depictions of adults and those involving children.
With regard to adults, among other things, the intimate visual depiction has to have been obtained or created under circumstances in which the person who posted it knew, or reasonably should have known, that the identifiable individual had a reasonable expectation of privacy. So whoever posted it had some awareness that the other person had an expectation of privacy. Alternatively, the inauthentic intimate visual depiction is disclosed without consent. So there's our authorization mechanism.
With regard to minors, the statute says that knowingly publishing intimate visual depictions with the intent to abuse, humiliate, harass, or degrade the minor, or to arouse or gratify the sexual desire of any person, is a violation. And, for what it's worth, this is consonant with other ways in which, I think, public law addresses harms to children, and with obscenity laws more generally. The penalties include a criminal fine and criminal imprisonment as a possibility.
I'm not completely sure about the civil penalty, but offenses involving adults can put someone in prison for no more than two years, and those involving minors for no more than three years. That's the substantive obligation. Now, with regard to notice and takedown. The Take It Down Act, which I should have said was signed by the President in April to great fanfare-- and importantly, Melania Trump supported it as well-- requires covered platforms to remove non-consensual intimate visual depictions within 48 hours of having notice of them.
The difference between these notice and takedown provisions and the criminal provisions is that there is no similar cabining of what counts as an intimate image for the purpose of the statute. And this is going to be important for thinking about the vulnerabilities of this law. The platform must post clear and conspicuous information about the removal process. The FTC has enforcement authority to issue penalties for non-compliance. There is no private right of action. There is a safe harbor for platforms that, in good faith, remove content when they have notice of it.
This parallels the so-called immunity under Section 230 for interactive computer services. By the way, this provision is an amendment to Section 223, which is the neighbor of Section 230, for those of you who pay attention. And the last thing I'll say about this is that it is remarkable that this law passed in this Congress, with bipartisan consensus, in April. So you'd think that this would mean everything is in the clear. After all, everybody wants to protect the kids.
But there are some flaws here, and I'll just identify a couple. Given the limitations of time, there's really not much I can say, so of course I'll make the observation anyway. There's a 48-hour takedown requirement: once you have notice, you have 48 hours to take it down. The companies didn't love that, but it's in there.
There is a potential overbreadth problem here. Unlike the criminal provisions, the notice and takedown provisions do not have similar constraints. And so you might see protected content getting taken down; you might even count on it. A journalist's photographs of a topless protest, for example, could potentially be something that is taken down.
Now, there is a more pernicious problem, and it is what makes this upside down in many regards for people who are worried about gender-based abuse and systemic harms: there is, effectively, an exception for abusers. There are exceptions for law enforcement and intelligence gathering. But there is also an exception for a person who possesses or publishes a digital forgery of himself or herself engaged in nudity or sexually explicit conduct.
That is to say, if you were a partner with someone at the time and you are in the video with someone else, the fact that you are in it means the provision exempts you. And this is precisely the kind of exemption that you might expect to be abused by abusers. And so this returns me to the core concern for public law interventions. There is a remedy for systemic harms, and the remedy is not necessarily individual actions but interventions by public agencies.
The danger is that the laws will be written in ways that are too broad or overbroad; they have to be cabined in ways that attend to real speech interests. So I think I will stop here. Actually, I'll make one observation with regard to the dangers associated with something that's too broadly written, and I want to refer to the recent threats by the chair of the Federal Communications Commission, Brendan Carr, under the news distortion guidance and the public interest regulation.
A broadly worded statute, not sufficiently constrained, is potentially invasive of protected speech. And for the same reason, I think we want to worry about the control that a federal agency has. But I do think there are very few institutions or entities capable of addressing the problems I described outside of federal agencies. Thank you.
[APPLAUSE]
[GINSBURG] Sure. We have time. We have time for a couple of questions. Do we have any questions for Olivier and Celia? Over there.
[SIDE CONVERSATION]
[AUDIENCE MEMBER] Like that? Hi. Thank you so much for your lecture-- for your performance. I have a question for Olivier about the European Union AI Act. As I understand it, the main purpose of this act is to distinguish between AI-generated content and real content. How do you plan to deal with users who can remove AI watermarks from their content? Because it's very easy to add a watermark or to remove one. How can you control this in terms of transparency?
[ZOLYNSKI] The real question you ask is that we don't know yet how we can build a very robust-- I'm having problems with the microphones today-- a very robust watermark. So it's a technical challenge, and this is a main question asked in the public consultation I've mentioned for the EU Commission. I will speak like this. We are currently studying this, not my research team, because we work only in law, but the technical partners, [FRENCH NAME] in France, and I could share some more information with you in a few months.
We are finishing our research project in July, trying to identify a robust watermark. An additional challenge is watermarking for text. We know that watermarks can be developed for images and videos, but text is another challenge. So I cannot respond precisely to your question. That's why we try-- this does not address all the issues, but we try to enforce the responsibility of online platforms, especially social media, under the DSA, and force them to deploy research and robust techniques to ensure that watermarks and labeling cannot be removed when content is shared on their platforms, because they are obliged to.
This follows from the obligations imposed by the DSA on very large online platforms. So we promote this, and if they do not deploy specific measures, they could face sanctions under the DSA. This is another logic: forcing the providers to deploy and to invest-- this is the real point-- to invest a lot in such techniques.
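[Editor's note: To illustrate why a "robust" watermark is a genuine technical challenge, here is a minimal Python sketch of a naive invisible watermark, a least-significant-bit embedding, which is a stand-in technique not referenced by the speakers. The function names, the 8-bit mark, and the random stand-in image are all hypothetical. The point it demonstrates is modest: a fragile mark survives a lossless copy but is typically destroyed by routine JPEG re-encoding of the sort platforms perform on upload, which is the fragility the research efforts described above aim to overcome.]

```python
# Illustrative only: a naive least-significant-bit (LSB) watermark, a stand-in
# technique, to show why robust watermarking is hard. Requires numpy and Pillow.
import io

import numpy as np
from PIL import Image


def embed_lsb(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write `bits` into the least-significant bits of the first len(bits) pixels."""
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return flat.reshape(pixels.shape)


def read_lsb(pixels: np.ndarray, n: int) -> list[int]:
    """Read back the first n least-significant bits."""
    return [int(v & 1) for v in pixels.flatten()[:n]]


watermark = [1, 0, 1, 1, 0, 0, 1, 0]                          # hypothetical 8-bit mark
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in grayscale image
marked = embed_lsb(image, watermark)

# A lossless copy preserves the mark exactly.
assert read_lsb(marked, len(watermark)) == watermark

# Routine lossy re-encoding (as when a platform recompresses an upload)
# typically scrambles the least-significant bits and erases the mark.
buffer = io.BytesIO()
Image.fromarray(marked).save(buffer, format="JPEG", quality=90)
buffer.seek(0)
recompressed = np.array(Image.open(buffer))
print("after JPEG:", read_lsb(recompressed, len(watermark)))  # usually != watermark
```

A robust scheme, by contrast, spreads the signal across the whole image so that it survives compression, cropping, and re-sharing; achieving that reliably, and doing anything comparable for text, is the open problem the speakers describe.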
[AUDIENCE MEMBER] Thank you.
[GINSBURG] David, and then, if we can keep it a little brief, we'll go to lunch.
[AUDIENCE MEMBER] Yeah. Thank you so much. My question is for Olivier, and I'm just curious whether you're comfortable speculating about what you anticipate will happen in 2026 once the platform obligations go into effect. Because I can see a universe where this starts off a little rocky and then turns out kind of similar to the copyright takedown request process, which I think at this point in 2025 is not overly controversial in terms of the broad scope of the way that the process works.
But I could also see, as you said, the FTC being somewhat opportunistic in the way it enforces it. And I could also see a NetChoice-type challenge from one or more platforms raising the overbreadth issues. So, to the extent you feel comfortable speculating about what you think may happen, I'd be very curious, since we've kind of had this year-long period of waiting to find out.
[SYLVAIN] My speculation is going to be only as good as the speculation pregnant in your question. I agree it's subject to manipulation. There is no counter-notification process, as you know. For the FTC, the question of whether to go after a platform will be contingent on its regulatory priorities. And as someone who believes in agencies-- believe it or not, I do-- this gives me a special concern. This is not inevitable; Congress could write a law that attends to these problems, but I'm afraid it may not have. So I don't know what's going to happen, but whatever you are guessing, your guess is as good as mine.
[AUDIENCE MEMBER] Thank you.
[APPLAUSE]
