Kernochan Symposium 2024: Panel 2
[CAITE MCGRAIL] So welcome back, everyone. If everyone could grab their coffees and get back to their seats, we're going to get started. So my name is Caite McGrail. I'm the fellow here at the Kernochan Center. And it's my pleasure to moderate this panel on licensing images and motion pictures.
So we're thrilled to have Ron Wheeler. He's the head of business and legal affairs at the Motion Picture Licensing Corporation, which handles non-theatrical licensing for the motion picture industry. And we have Benjamin Marks. He's the head of the IP and media practice at Weil, Gotshal & Manges. And he's also the counsel for Getty Images in their lawsuit against Stability AI. And finally, Nancy Wolff, she's the partner and the co-chair of both the litigation department and the Art Law Group at Cowan, DeBaets, Abrahams & Sheppard LLP. And she's also counsel to the Digital Media Licensing Association on our panel today.
So their full bios, as I mentioned, are available using the QR code on your agenda. In today's panel, we're hoping to discuss the current licensing regimes in these industries. We're going to consider the impact of AI. And we're also going to evaluate the viability of new licensing schemes. So we're going to examine licensing as a potential solution, whether current licensing schemes are adequate or new models are needed, and if so how these new regimes should be structured.
So with that, I'd like to begin. If each of the panelists could provide a brief background on their industries and their current distribution models to consumers. And I'd like to start with Nancy. And in particular, of course, in the image industry, there are individual photographers. But also, I think we'd like to understand the role of Shutterstock and Getty Images. I think these might be familiar to most of our audience, but maybe not exactly how those distribution models work.
[NANCY WOLFF] OK. Thank you. So I'll jump right in. The stock photo industry is one of really the oldest, I think, licensing models there is. It's been around for probably over 100 years. It started in analog, and then ended up in digital format. And it's an industry that really took advantage of the fact that copyright is divisible, and an image that was taken on a magazine shoot can be used, or the outtakes from the same shoot, on another book cover, in a brochure for many different reasons.
And in the early days of when photographs were taken with cameras that had film, which some people here may know--
[RON WHEELER] No film.
[WOLFF] --to getting black and white, and then color, there were a lot of very specialized libraries that were in the arts or science or medical. And there were picture researchers that would go through file cabinets and find images that were all in folders by subject matter. Then came the 1990s, when digital cameras started evolving, and then the CD-ROMs that started distributing images. And finally, we had a lot of consolidation in the industry, where many of these smaller specialty libraries were being purchased by companies that really had the resources to develop online platforms that would allow users to search by using keywords that were attached to now-digital images, whether they were taken with digital cameras or scanned, and be able to search and find images.
And the whole purpose of stock is that you have an existing image, whether it's a still image, now video illustration, that already exists. So you don't have to send a photographer on assignment to yet another wonderful tropical island to get the picture of someone walking on the beach. You can search for something there and make selections of what images would be most appropriate for your content.
So as part of the consolidation in the industry, we ended up with many larger companies that really are tech companies. So Shutterstock is one of them. And they have a large number of contributors, as well as, I would say, Getty Images. There was Corbis, which was around from the, I guess, mid-1990s until a number of years ago when they sold their company to the Chinese and are now distributed by Getty Images.
There's Alamy, which is an interesting platform because anyone, whether you're a professional or a collection of historic images or even another image library, can upload their content. And they really don't curate. They just have a lot of content.
Adobe, the company that's known for all its creative software products, also acquired an image library. And so now there's Adobe Images. And if you look at them as aggregators, I think Alamy now has about 380 million items of content, whether it's video, images, illustration, vector. Shutterstock says they have over 450 million. Getty Images has millions of images. It may be a little more curated. And I don't know the number for Adobe.
But as you can see, there are a lot of aggregators that represent content that's either wholly owned by them or owned by contributors who have a contract with these companies. And the contributors will get a percentage share of any license that's made that relates to their content. And so they're big players in the market. And I know we're going to get into this later, but they've really been in the business of licensing their content and their data for a long time.
And I actually saw an interesting fact this morning that I think the first stock photo was really from someone named H. Armstrong Roberts, who when taking a photo of a group of-- it was 1920-- a group of people in front of some tri-motor airplane decided it would be a good idea to get a model release. And so that was the beginning of the stock industry. And that company lasted for three generations and just closed this year when the last H. Armstrong Roberts passed away.
[MCGRAIL] Thanks, Nancy. And so, Ron, could you give us a similar sense of the movie industry's current distribution model and then, of course, its history?
[WHEELER] Sure. Yeah. So if you put all those pictures together in a series running 22 or 48 minutes, or 2 hours, then you get a motion picture. And obviously, that's the origin of it-- it's the next-generation technology of still photography, with the addition, of course, of audio in 1927 with The Jazz Singer.
Anyway, the licensing model is the thing that's a little different from the still photography licensing model. The individual motion picture, whether it's a television episode or a movie, is an entire work in itself. And it's actually the performance of the work, each individual work, that is the licensed object. So it's not an input into another work, generally speaking. Obviously, you have clip licensing, which is not what we do. That's more like still photography than motion pictures per se.
But as Roy reminded me, Motion Picture Licensing Corporation, the company I work for, is one of only two IMEs, Independent Management Entities, in European parlance in the US. And what we do is we also aggregate.
So we're similar to the companies that Nancy mentioned in that regard. We aggregate TV episodes and movies from-- we have over 1,000 producers. So we exclusively represent producers. We don't represent directors or any other talent. We have the producer rights that we license as a group in a blanket license, similar to what CMOs license in Europe.
And we operate not all over the world, but in about 38 countries around the world. And the license we offer, we actually offer outside the US, not in the US or the UK. But we offer individual licenses to the individual pieces as well. But the group license, the blanket license is to play anything. It's all you can eat, similar to the stock licenses that Nancy mentions.
What else can I say about it? I think that's probably a pretty good introduction for the motion picture.
[MCGRAIL] Thank you. And so, Ben, because you have such a breadth and depth of experience in these industries and others we're going to hear about today, could you share the role of statutory and compulsory licensing in these, and just sort of creative industries in general? The music industry is the most obvious comparison. I know we don't want to steal the thunder of the next panel, but I think it would be helpful to have a sense of how those schemes fit into this larger picture.
[BENJAMIN MARKS] Sure. I mean, I think the largest point-- there are points of distinctions among the various industries. And when we think about how licensing markets will evolve, I think it's important to recognize that there's not going to be one model that will fit across all industries. And there are unique circumstances within each industry that will suggest different types of forms of licensing, whether it's collective or statutory or individual, may make sense.
I don't think there's anything intrinsic about AI that suggests you need different rules than have already emerged in those marketplaces. And so what do I mean by that? The image licensing market, for example, is what I think of as a fully functioning, competitive marketplace. Unlike some of the other industries that we'll talk about, you don't see antitrust lawsuits. You don't have monopolists or complementary oligopolies that create inefficient markets.
And so there has not been-- you have not had government oversight. I'm not aware of any government consent decrees in the image licensing business. There have been in the motion picture licensing industry for many years. There were what were called the Paramount decrees that affected the way-- that really prevented vertical integration of movie studios and movie theaters. That was terminated, what, in the last 10 years or so?
[WHEELER] Yeah, 2017.
[MARKS] So that was terminated seven or eight years ago, when you had the rise in competition from vertically integrated streaming services and the type of competition that that would engender, and a desire to ensure a level playing field across that new medium. So it really depends on what the marketplace looks like.
The music industry, as we'll hear later, or as we'll talk about, has long had statutory licensing for some types of exploitation. It's had collective licensing for other types of exploitation. And those collective licensing organizations, the largest ones, have been subject to antitrust consent decrees. Even with those antitrust consent decrees, you've had antitrust litigation on and off for over 100 years in the music industry.
And so the rules and the licensing models that emerge may be very different, given whatever the existing paradigms are, what degree of government intervention there has been, and what role there has been for collective licensing. So I think that's an important framing as we talk about each of these different media and how they work.
And I think the conference today is helpfully organized around some of these different ones. So we can talk about them individually. But I think those different dynamics will have a lot to say about what licensing models already exist, which ones are likely to emerge, what role there is for government to play, and whether there's a need for government to play a role. We're not starting from scratch just because now we have AI. I think we'll get different answers, depending on what the existing dynamics of the various marketplaces are.
[MCGRAIL] Yeah. So turning to AI, now that we have a sense of how the landscape was pre-AI, I think I'd like to give everyone a sense of how AI has impacted these industries. Some of them may be quite obvious. We've talked about them today.
But Nancy, in particular with the image industry, can you give a sense of how the day-to-day impact of this sea change has been felt? And in particular, I think we'd be curious to know about maybe the less obvious implications of AI, for example, how AI companies are interested in copying not only the images from Getty, but also the metadata, and how that's incredibly important to them as well.
[WOLFF] Yeah. Thank you. So the image industry has been looking at AI. And the different companies have taken different approaches. Getty Images is involved in litigation. And most of the large libraries have actually been working on having their own sort of curated generative AI models.
But the real concern is that the industry believes in AI and believes in innovation as a tool, but it wants to have a license market that is still based on contractual transactions that take advantage of the value of their work. And one thing to think about with images and generative AI: if you don't have very good, accurate metadata that relates to the image, the training will not be accurate.
And the image industry, because it's always been driven by how customers search for things, has very robust and accurate metadata, not only in the captions of editorial images, which have lots of descriptions about what is in the image, what the event is, where it took place, the date, and descriptions of the content of the event.
But when it comes to the commercial images as well, which all have model releases, there's information about not only the description of what's in the content, but how people search for images. It talks about gestures and moods, and all kinds of information that the industry believes is very valuable. And that value is why-- and we'll get into this later-- that data has been a licensing model for this industry, as well as the individual images, for some time.
When you look at-- even if you take your smartphone out, there's a lot of licensing behind the scenes that goes on. So the camera companies and the phone companies understand things like skin tones, because when you look at these large image libraries, they have quite diverse images. And so you will get different ethnicities. You'll have content from around the world. And it will not have some of the biases you'd get if you just scrape content from the internet. So if you type in "judge," you're not just going to get a white man, hopefully.
But I think all this sort of data about the data is so valuable when anyone is considering any kind of model, whether it's a large model or a smaller model. The data, as well as the individual images, is so valuable and has a lot of use in training models.
[MCGRAIL] Thanks, Nancy. And so now, Ron, turning to the movie industry, where is the industry feeling the pain, I guess, of the AI, advent of the AI age?
[WHEELER] Yeah, I think, really, that's on two different fronts. On the one hand, producers themselves, the people we represent, are very interested in AI as an input to their own productions. So just think of the concept of what traditionally was called "extras." That's not the word anymore. It's called "background actors." But they're day players. They're obviously relatively interchangeable.
If you can just imagine that background actors would be AI generated in the future, that would save producers a lot of money. Obviously, background actors themselves are not very excited about that. You may remember that one of the big issues in the strikes last year, SAG and the Writers Guild, was precisely basically replacement of their creative works by AI generation.
So that's one front. It's basically trying to get some of the exorbitant costs of production under control using AI. So that's very positive, obviously, for producers, a mixed bag, perhaps, for talent themselves.
On the output side, the generative side, if you will, I think it's the reverse. In other words, the producers themselves are worried about substitution of AI-generated video for creatively produced video. So I think in that regard, some producers are increasingly trying to get ahead of the curve a bit.
You may have seen that Lionsgate, one of the mini majors in Hollywood, negotiated a deal just two weeks ago, I think it was, with Runway. And that's an exclusive deal for Lionsgate's own library of films and TV shows, to train its producers, if you will, and the associated talent on the creative works that have gone before, so that they can have ideas.
The classic concept is the elevator pitch, like, "Well, it's Star Wars meets Breaking Bad," that sort of thing. If you think about that, that's essentially an AI concept. What would Star Wars be like if it were married with Breaking Bad? You could have an AI generate that idea. Then humans would have to write it up and act in it. At least the lead actors would be human.
So I think it's early days in the motion picture industry, but I think those are the two poles of where AI is going to play a significant increasing role.
[MCGRAIL] Thanks. And so now that I think we've sort of laid the groundwork to understand where these industries stand today and as regards to AI, Ben, I want to turn to the question of licensing. And is this a workable solution to this impact? And as you alluded to earlier, is it better suited for some industries more than others?
[MARKS] Look, I clearly think licensing is a solution, and it's the right solution. What those licenses look like may vary by industry, and what the concerns are and what things you need to think about in drafting and negotiating the licenses may vary depending on the use cases, or will vary, depending on the use cases. But we've seen licensing deals in the image licensing space.
Ron mentioned we've just seen one in the motion picture space. I expect the motion picture space will follow. The technology is a little harder. It has been slower to develop for text-to-video than it was for text-to-image or text-to-text, just because it's more complicated to generate video artificially.
But that technology is clearly coming. And it's coming faster than anybody would have predicted even 12 or 24 months ago. The time frame is near-term, not even medium-term or long-term.
So you are seeing license deals being negotiated. I think Matt Stratton mentioned the several dozen publicly reported deals. There are far more deals in the marketplace than those that have been publicly reported. And so you're seeing those emerge. As for how the licenses will emerge over time and how they'll adapt, I think the last panel mentioned that a number of agreements are being styled as data access agreements.
That's largely driven by litigation posturing about what people want to say or not say in terms of-- because there is pending litigation. As court decisions come down and settle out rules, the licensing market will adapt to whatever the courts say, as they do in every other context.
But I do think licensing is the solution. And you will have-- there are a lot of things that you can control with licensing that will make more sense in bilateral negotiations than having the blunter instrument of a one size fits all compulsory license.
So let me give you an example: in the image licensing space, it's not just that there are copyright concerns or financial concerns of, I own the rights to a bunch of images, somebody wants to use them to train AI, I want to get paid, and I want a cut of whatever they're going to earn when they commercialize their product. Yes, that's a concern. I don't want to suggest otherwise.
But there are other concerns, which is that very often the pictures are of real people. You have right of personality concerns. You may have images of Taylor Swift and Travis Kelce in your image library, because you have pictures of her concerts and pictures of his football games and pictures of them out in the world. You may want to take steps to control, when somebody uses images for training, that they are training in a way that doesn't allow the user of that model to create their own homemade pornography of Taylor Swift and Travis Kelce.
So you can take steps to prevent that in a licensing model. If you just said a government rate of you can train whatever you want as long as you pay a penny rate to the image licensor, you're not going to be able to address those nuances. So you have a lot more ability to make sure that-- or you have a lot more input.
And it's one of the-- copyright interests protect more than just the financial remuneration. They protect how it's used and how your works are portrayed. And it gives the rights holder the flexibility to negotiate for that. Whether they will or won't, that's what the market will determine. And that's who should determine it. The marketplace should sort itself out. Maybe some licensors won't care what use is made. Maybe some licensors will care greatly. And that'll get negotiated.
But I think the direct licensing market, certainly for the image marketplace, makes the most sense. As I said, whether or not there are unique factors in other markets, where some degree of government intervention is warranted, there may be. There very well may be. But it ought to be a targeted approach.
And that is how we've dealt with licensing in other contexts. You have compulsory licenses and must-carry rules in the cable television industry. You had the Paramount decrees in the movie industry. You have the compulsory licenses and antitrust consent decrees in the music industry. But you don't really have anything comparable in book publishing. You don't really have anything comparable in image licensing. And you don't really have anything comparable in news media.
So I think that's the right path: use government intervention when you're fixing a specific market problem. But as we're seeing with this body of voluntary deals emerging in the marketplace, I haven't seen the case made that you have to have government intervention for all use cases in the AI space.
[MCGRAIL] Thanks, Ben. And so, Nancy, Ben was alluding to the images industry. And can you give us a sense of what form of licensing that you think is most appropriate?
[WOLFF] Yes. And in fact, licensing has been going on. And for all the reasons Ben said, the terms of the licenses and the images that are chosen to be used make a lot of difference. So the image industry has images without releases, because the industry deals with news pictures, what's called editorial. And those need, of course, to accurately depict the events that are going on.
And so many of the internal AI models that companies such as Getty Images or Shutterstock are building will limit the training of their own models to what's known as their commercial images, where there are releases for the people. And if there are any identifiable objects or products or buildings, there might be some property releases as well.
And so you reduce some of the risk, because there's risk under the privacy laws. There's risk of revealing personal information. So if an image library is licensing its data, it will make sure that model names are removed and things like that, to protect the personal identity of the subjects.
And when you have one-on-one transactions between companies and image libraries, the image libraries can set the terms of how the content can be used. And they can also vet the purposes. So they can vet any people or companies that would want to license their data for purposes that may not serve the public interest. And they can choose the partners they want to work with. And so, for example, I think there are deals with NVIDIA.
And the other important part is that it's difficult, when you look at the output, to see which images were used for any particular output. Maybe through tokens, in the future, that'll be easier. But what all the companies, to my knowledge, are doing, whether it's Adobe, Shutterstock, or Getty, is setting up pools so that there's compensation both for the input side, when copies are being made, and also for the output. So there'll be some compensation to the contributors based on the use of the images in the output.
And I think one issue that is also leading to licensable content is that, because there are copyrightability issues about whether the output can be owned, commercial users are still a little bit hesitant to use completely generative AI for commercial projects. For example, if you're an ad agency making content for a client, you have to deliver content for which you can satisfy reps and warranties, particularly about ownership.
So a lot of this generative AI may be very useful for ideation. But if you're giving advice on best practices to clients in the design space and the advertising space, they're not going to want to use it as the solution for the final project, because the output, at this time, isn't copyrightable, and they can't turn over anything the client can own.
I know there was a real concern that as generative AI gets better and better, it could replace all stock photography. But right now, I think even the companies that have their own internal AI, where you can choose to license a stock photo or create something with AI, are finding that people are using it more just for fun and to try it out. It's not a strong licensing market yet.
They're still looking to license traditional stock, because you can get more rights. You can license it. And so it is going to keep evolving. But there has been a strong market for the data that these companies have because it is so much more reliable.
You can search, or I guess you can license subsets if you just need scientific content, or something that deals with the arts. There's a lot of curation that can be done when you're working with a reputable image library that has a lot of information associated with the content.
[MARKS] The other observation-- I want to come back to a point that Professor Ginsburg made earlier, which is, when you're talking about these licenses, is it on the input side or on the output side? And the answer is it can be one, or it can be both. I don't think you have any that are just on the output side but not the input side. But you can have input and output.
So there are some generative AI use cases where the objective really is for the output not to bear any resemblance to any input. And when that happens-- when you're seeing models generate clearly infringing content, where the model has memorized a particular set of lyrics or a particular image or what have you-- that is to the chagrin of the model developer. They didn't mean for that to happen.
You have other use cases where they very much want to be able to attribute. They want display rights to particular content, or some aspect of the output will be artificially generated, but for other aspects they'll want authenticated content. So for instance, if the objective is, I want an image of a boy walking on the beach holding a balloon, because you're making a birthday card, or you're selling birthday cards, or you want it for an ad campaign, it may not matter to you exactly what it looks like or whether it's authenticated content. Artificially generated content may work perfectly well for that scenario.
If what you're doing is using artificial intelligence to generate a term paper on the 1960 election, you may want pictures of the candidates that are authentic pictures of the candidates, or you may want the actual clip of the debate between President Nixon and President Kennedy when they were candidates. And that's important to have that actual content.
And so the model will want the ability to display authentic content, presumably with attribution, and the like. And so you will see variances. And again, this is another argument for why directly negotiated licenses will make more sense, because there will be lots of variations and lots of gradations as the marketplace emerges.
And the other point I would make in that regard is that there are two different things to consider when we think about how these marketplaces should emerge: the commercial harms, or potential commercial harms, on the rights-holder side vary by market. And the public policy concerns vary by market.
So you've seen-- there have been some highly publicized instances of people using AI-generated music to manipulate the statutory license payments that are available from interactive streaming services. There was the highly publicized case of the fake Drake recording.
There are commercial harms. There are reputational harms to some extent. That's very different from the political misinformation we see with fake news, with manipulated news articles, with images of people that look real but are not. The images that were circulating on the internet that purported to show Kamala Harris with-- I mean, it was photoshopped, not artificially generated. But the potential societal harms may vary greatly.
If you have artificially generated medical information that's wrong, it's obviously a different level of public health concern than, oh, was that really a particular musician or not? I don't want to minimize any of the concerns, but the concerns really will vary by use case.
[WHEELER] Yeah. I just would add briefly with another reference to the Lionsgate example I mentioned earlier. So Lionsgate itself wants AI generated ideas, really. They're not interested in Runway displaying anything, but Runway is essentially a vendor to Lionsgate to generate ideas for future projects. That's an example of the licensing model being very different from the other AI examples that we've been discussing. And that's why licensing is the way to go here.
[WOLFF] And the image industry has had a concern about authentication for some years. Even before AI, Adobe was instrumental in starting what's now an independent non-profit standards organization called the Content Authenticity Initiative. And the concern has always been that images can be manipulated. Photoshop could do it before there was generative AI; it's just so much faster and easier now. And the concern is that people will no longer trust images.
So the Content Authenticity Initiative would allow someone, by clicking on an icon, to see the changes that have happened to any image. And it's getting harder and harder to figure out whether images have been manipulated now. Think of the story in the UK of the Mother's Day image that had been photoshopped, and all the news outlets had to retract that image because it was altered. So it's very important in the news industry that you have authentic images.
And in the commercial side, where you may be just coming up with a creative campaign, it could be fun to create the pig with wings or whatever you can do with AI or right now cats coming out of ovens and pants. No.
[WHEELER] No, cats with scarves.
[WOLFF, LAUGHING] Sorry.
[MARKS] Yes. Cats with scarves.
[WOLFF] But there's a lot of satire and humor you can use with generative AI, which would not be appropriate in the news realm.
[MCGRAIL] Sure. And so, Ron, I want to give you a chance to talk about where you think-- what form of licensing you think is most appropriate in the movie industry and where you see that's headed.
[WHEELER] So again, back to my initial comment that AI ultimately could be perceived as a threat to the motion picture industry. I think deals like the Lionsgate deal that I've talked about a couple of times now are at least the short-term future of AI in the motion picture industry, where basically it's a tool for producers to generate new content, new original and creative content, fully authenticated.
I think that's going to come along. Right now, it's mostly a still photography issue, but I think it's only a matter of time before fake video TV episodes are ubiquitous on the internet and elsewhere.
So I think that the short-term, it's going to be a tool used by producers for their internal purposes. I suppose, obviously, if the money is right, apropos of the licensing discussion that we've had all morning, if the money is right, producers might say, "Well, gee, if you pay me enough, I'll let you use my images to create your videos, your TV and movies." But I think that's a ways off because of the threat model that I mentioned earlier.
[MCGRAIL] Thanks. And so I also want to understand, are there challenges relating to the impact of AI that licensing doesn't address? It does seem like it addresses, obviously, the remuneration issue. And we've also alluded to in the image industry, and obviously the movie industry, too, that the licenses can sometimes in some ways control the outputs. But are there additional challenges, particularly as it relates to the creators and the authors, that licensing doesn't assist with?
[MARKS] Yes. There are some that-- look, the technological tools may allow-- will displace certain types of creators in some ways in some circumstances. I don't think there's any doubt about that. Whether or not that's ultimately healthy for society or not, that's a separate question. That's probably beyond the scope of today's panel, or at least we need a few more days to sort through all the issues.
But there will be dislocations. On the other hand, it will open up new opportunities for types of things. So there will be harms, the harms that come to some members of the creative ecosystem, the creative copyright ecosystem. Even if you license it, once these tools are built and then you no longer need stock photography, well, then you no longer need stock photography because these tools have been built, and maybe people have been compensated. But you won't need another person to go take the picture of the kid with a balloon on the beach because you just have tools that can create that if you need it.
Other types of-- other aspects of the image licensing business, there will still be a need-- knock on wood-- still be a need for authentic news photographs. So no matter how good the tools get, you will still want actual coverage of actual events, actual wars, actual political events, actual scandals, whatever else it is. So there should still be a need for news photojournalists. So I think the impacts will really vary depending on how it goes.
But there's no question that generative AI will infiltrate workstreams across broad sectors of the economy. We've seen it. I read-- I get emails every day with how x firm or y firm is incorporating generative AI into their own workstreams of producing legal work product.
We've seen some spectacular flops where people use generative AI to write legal briefs, and then they don't accurately cite check and remove all the fake cases. So that, I think, will work itself out as people figure out how to make sure they're not doing that. And as associates who do that understand they're going to lose their jobs if they don't cite to real cases in briefs that get filed in courtrooms.
So there's clearly going to be an impact. And it's going to change the way we work. I think there's no doubt about that. And copyright licensing won't solve that issue of how society adapts to these changes-- adapts to these new tools. We will have to adapt, but hopefully it's within a paradigm-- it's within a copyright paradigm that works for both creators and users.
[WOLFF] To add, I think AI also can reduce some of the tedious and dreary jobs. For example, matching keywords to images can be a lot easier if a lot of the descriptives can be done. I don't know if they'll be able to figure out the gestures and the moods, but they could learn that.
And so there's a lot of real useful issues, useful tools. And the photography industry has always sort of been a combination of human creativity and tools. I mean, cameras right now do a lot, but it's still the photographer's eye. And proof of that is when I take a picture with my iPhone, it looks like I took a picture with an iPhone, and no one would want it. When I see some of my clients' work that maybe they'll share on Instagram and stuff, it's their eye.
I mean, again, AI learns from repetition. It learns from what it's been given before. And hopefully, there will be creative people that use this as a tool that will create things that AI will not know to create-- some standard kind of stock images with the white backgrounds maybe. But if you're still a luxury good company, you're going to want real models holding your handbag. You're not going to really want some fake person, probably, you would think.
And I talked to documentary filmmakers that see such potential that if you're doing a film about a historical figure from the past and all you have are letters that you could actually, instead of doing an illustration and a reenactment, you can do an audiovisual sort of reenactment of what's in the historical materials. I mean, people are thinking of creative ways of using that that will enhance people's enjoyment, as well as just make pictures for us to laugh at.
[MARKS] Our PowerPoint decks will get a lot more elaborate.
[WHEELER] Yeah, exactly. So I've already mentioned the background actor/extra issue in motion picture production, but you can also think of CGI. So the movie Gladiator, one of my favorite movies of all time, has CGI generated pictures of-- images of the crowd at the Colosseum and other places where the gladiators battle.
You could easily imagine AI making that way better. I mean, if you guys remember, it actually looks fake a little bit because it's a 20-year-old film-- more than that, actually, I think. But anyway, so there's lots of exciting things that AI can do for the motion picture industry in a sort of an input phase. And I think you'll see more and more of that in the early phases. Later on, there's going to be the issue of fake Gladiator. But that's a ways down the pike, I think.
[MCGRAIL] Yeah. So as we've been talking today, I think the specter of fair use in the ongoing litigation has been-- is sort of part and parcel with the licensing conversation. And so I'd like to understand, do you think that there needs to be a finding on fair use from a court before sort of licensing becomes part and parcel of this industry?
And Nancy, you said that certain of the AI companies actually are licensing images. And so do you think that that's on an individual basis and that we need a finding on fair use before it becomes industry wide? But I'd like to understand your perspectives on that question, considering especially that the litigation is moving at a snail's pace.
[WOLFF] Well, I think licensing is happening despite litigation, because, one, litigation is uncertain. You're not going to know the outcome. And I think there's value to the licensable data sets that you can obtain from the industry that are far greater than what you can rely on in fair use and just scraping.
There are a lot of faults in what you can scrape. You're going to get biases. You're going to get the issue with hallucinations. And then you're going to have the problems which are sort of fundamental and why Getty Images brought that case. You're going to have some subjects that maybe there's not that much content of. And it's going to look a lot like the images that Getty has in its editorial library.
I think the ones-- the examples in the complaint are these soccer shots, where you still have the watermark from Getty on the output, because these models don't need high res images. They just need lots of low res images, which are also what's being licensed. And so you're going to have these types of problems if you're just going to rely on what's out there.
And so I think the value of having content that you can rely on that has releases, so you don't worry about violating a lot of the EU-- and also, a lot of states in the US now have a lot of privacy issues, dealing with facial recognition. I think it's both-- I think Texas, California, maybe Oregon, and I'm sure it's going to just keep growing.
But the regulations, as you heard in the first panel, that may apply to the EU. I mean, content is global. And these types of issues are going to come to everyone's mind, just like everyone had to jump in and figure out GDPR. We're going to have to figure out how to live within the bounds of a lot of these directives as well.
So these licenses are going on now. There is, I think, a healthy market. And there's also a market for creating content that can be used to train models. I know photographers and other artists are being paid to take photographs, to make illustrations, and art that don't look like anything else, or don't look like their style, because I think it's a lot more controlled when you know that an apple is an apple and not whatever may be on the internet.
The one thing is, with all this metadata that's so valuable, often the software that's used when someone uploads content online strips out most of that metadata. So you're not going to have something that says a cat is a cat. Or it may be wrong. It may be mislabeled.
So I think that the data that this industry has is valuable. It's already being licensed. When there's a license, you can control the guardrails to prevent things like fake news from being created, and things like that. So I don't think there's going to be-- what I'm trying to say a long way is I don't think we're going to wait for a decision when they need good content and they need reliable content.
[MCGRAIL] That sounds good. So I think we can open up questions and answers to the audience. Yes, Professor Ginsburg.
[PROFESSOR JANE GINSBURG] This is a question for Ron.
[WHEELER] Sure.
[GINSBURG] You said that Lionsgate is making deals with itself. And I'm wondering if the reason for that is the old license, new media problem. That is that the various contributors to a motion picture may have signed contracts well before the advent of AI. And there's some ambiguity as to whether that contract covers this use.
[WHEELER] Yeah, absolutely true. In fact, Nancy has mentioned several times the release issue. So obviously-- not obviously-- but in the case of motion pictures, shame on any producer who doesn't have releases-- contracts with every single element of the production. And they either do or do not permit licensing for AI purposes.
My guess-- I obviously don't work at Lionsgate. I have no familiarity with the details of that particular deal. I've only read publicly available articles about it. But my guess is since they did license the content, that there's probably-- the analysis is probably that since it's actually only for Lionsgate's use, that it was well within the bounds of their contractual arrangements with the talent.
But that's a guess on my part. But I have a feeling that they have good lawyers over there. They're one of our clients, actually. So I think they probably checked that all out before they signed the deal with Runway.
But it's a good point. I mean, the actual talent is the source of the creative work, obviously. And so their interests have to be taken into account. And they have representatives, of course, individually, as well as collectively.
In terms of the guilds, the guilds actually have played-- I mentioned the strikes, actually, last year. A huge element of the settlement of the strike was arrangements about compensation for AI uses. And in fact, there's also recently, just again earlier this month, new law putting some of those protections into California law that Governor Newsom signed at, I think, DGA headquarters in LA.
[AUDIENCE MEMBER] Further to what you were just saying--
[GINSBURG] Could you introduce yourself?
[AUDIENCE MEMBER] I'm sorry.
[GINSBURG] Introduce yourself.
[SHARON WEINSTOCK] Oh, hi. I'm Sharon Weinstock. I'm with--
[GINSBURG] We can't hear back here.
[WHEELER] Just maybe this-- is there a volume?
[WEINSTOCK] Hi, I'm Sharon Weinstock. I'm with McGraw Hill. And further to your comment about or your letting us know about that internal licensing for internal use by Lionsgate, is there money flowing back and forth from which the original writers, producers are getting paid in some way for the ideas that are being generated from that review of the existing scripts?
[WHEELER] Works. Yeah, I have to say, I don't know. I only read what's in the article. And I don't believe the article discusses any financial terms. Clearly, Runway is the name of the company. They clearly don't work for free. So my guess is probably that Lionsgate paid them to perform the generative AI.
Again, it's a guess. I'm not knowledgeable about the details of the-- which in that case, there would be no money coming to Lionsgate, and therefore nothing to distribute to the talent. On the other hand, back to the talent contracts that Professor Ginsburg referred to, it may well be that it says, "If you license for AI purposes, even for your internal use, and something is created from it, then we get a residual." I can just imagine that.
But again, the creative, the lawyers and the agents for the talent would be the ones to, again, control the licensing. Another example of why licensing is the way to go, rather than have it be fair use or anything else.
[MCGRAIL] Yes.
[AUDIENCE MEMBER] Hi, I'm Olivia. I'm a student from Columbia Law. And I have a very specific question for Ms. Nancy Wolff. Thank you so much for your presentation. So in a significant AI paper, "Attention Is All You Need," it mentioned that--
[WOLFF] Speak a little closer to the microphone.
[OLIVIA] So in a significant AI paper, "Attention Is All You Need," it mentioned how AI generates content through a transformer system, with an encoder encoding the input language and a decoder creating the response. As we all mentioned and we all know, ChatGPT can use this transformer system by using the amount of text online.
In your book, The Routledge Companion to Copyright and Creativity in the 21st Century, you mentioned the Copyright Act permits authors or their heirs under certain circumstances to terminate the exclusive or non-exclusive grant of an author's copyright. If ChatGPT grabbed the information online, how would it influence the termination of the exclusive or non-exclusive grants of the author's copyrights?
[MCGRAIL] Maybe you could just rephrase the last piece of your question.
[OLIVIA] Yeah. So you mentioned that the Copyright Act permits authors or their heirs to terminate the exclusive or non-exclusive grants of the author's copyright. And if the ChatGPT--
[WOLFF] Who's-- what is authorizing the--
[MARKS] I think the question relates to how does the right of termination under the Copyright Act interplay with generative AI licenses. Like, once you've licensed works to a large language model, does the author's right to terminate the grant to the distributor-- is that the question?
[OLIVIA] Yes, exactly.
[WOLFF] Well, termination also just only applies for rights in the US. And it's, I believe, only for exclusive rights, not non-exclusive rights. So I think-- and it would prevent new uses from being made. So if the license has been to train in a model, then it couldn't be trained in any new models.
Now, the question, I guess, would be, can it be removed for derivatives? I don't know the technology well enough to know that. Termination is very specific and needs to be filed. And a lot of, I think, individual content owners may not go through that whole process because the point of the termination was really to put yourself in a better bargaining position than you could at the time to make another deal. And will all this be worth it for licensing it again for another language model? I don't know. I think it's too soon to tell.
[OLIVIA] Thank you so much.
[MCGRAIL] Yes, thanks.
[AUDIENCE MEMBER] This is maybe more of a technical question than legal. But we're talking about doing licenses that bake in guardrails to protect against competitive output. How comfortable are we that this is enforceable? I just-- I can't really get my mind around how one would enforce that or police it even?
[MCGRAIL] Would you mind just introducing yourself?
[SUZANNE KELSEY] Oh, sorry. Suzanne Kelsey at McGraw Hill.
[MCGRAIL] Thanks.
[MARKS] Sorry. The question is, how do you enforce what? The guardrails on display.
[KELSEY] Are we concerned about how to enforce those guardrails? And are they-- is it-- I don't-- I mean, again, it's sort of maybe a technology question. But it's all well and good for an AI company to say, well, we don't want to reproduce your content. We want to get rights to train. And we will not have output that-- well, how do they protect against that? I don't understand how that could be.
[MARKS] Well, you want to do it?
[WOLFF] Well, I mean, I could talk about what the stock photo industry is doing for their own AI libraries is that they don't allow all prompts. You cannot have a prompt that says, "Make me something that's in the style of a living artist." Or I believe Getty Images, because it's on commercial material, doesn't allow prompts that if you typed in, "Make me a picture of Kamala Harris with Putin," you couldn't do it. It won't allow that. So it's through, I think, some of the prompts that you can't do that. And they just don't allow someone to type in anything and get content.
[MARKS] And there are-- Suzanne, there are two separate aspects. One is, how do you enforce it? Well, you can come up with mechanisms in the contract that decide how you're going to enforce it, whether it's liquidated damages, or some other provision.
There's a separate question, which is the detection piece. And that may be what you're asking about, which is if you say-- let's say the contract that allows for the access and the training, and allows limited display rights of up to X number of words, if it's a text, if what you're licensing is text. It says, the display of X number of consecutive words or substantially similar will be deemed permitted.
And let's say that limit is-- I'll make a number up-- 100. And then the output is sometimes giving 105 or 110 words. That may be hard to police, but it's not impossible to police. And you can have people doing checks, like as you do with any other--
[WHEELER] You can have AI doing checks.
[MARKS] Yeah. How is it that people detect when a textbook publisher allegedly exceeds a print run by a few hundred copies? People-- there may be some undetected breaches of the contract. And there will be others that are detected, and they can enforce it. And it'll be policed like everything else.
So it is a challenge. And you're absolutely right that it's a challenge, but I don't think it's an insurmountable challenge. I think people can figure out how to set up monitoring, how to do audits, how to do-- I mean, this is not-- this would not be the first time people have figured out that there are technical things happening behind the scenes and they need some right to go in and inspect the plumbing every once in a while to make sure someone's living up to their contractual obligations.
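The consecutive-words check Marks describes can be sketched in a few lines. This is a minimal illustration only; the function names and the word limit are hypothetical, not any party's actual contract-monitoring tooling:

```python
def longest_shared_run(source: str, output: str) -> int:
    """Length, in words, of the longest run of consecutive words
    that appears in both texts."""
    src_words = source.lower().split()
    out_words = output.lower().split()
    # Index every position of each word in the source for quick lookup.
    positions = {}
    for i, w in enumerate(src_words):
        positions.setdefault(w, []).append(i)
    best = 0
    for j, w in enumerate(out_words):
        for i in positions.get(w, []):
            run = 0
            while (i + run < len(src_words) and j + run < len(out_words)
                   and src_words[i + run] == out_words[j + run]):
                run += 1
            best = max(best, run)
    return best


def exceeds_display_limit(source: str, output: str, limit: int = 100) -> bool:
    """True if the output reproduces more than `limit` consecutive
    words from the licensed source."""
    return longest_shared_run(source, output) > limit
```

In practice the comparison would run over normalized text and at scale, but the core idea is the same: measure the longest run of words the output shares with the licensed source and compare it to the contractual limit.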
[WHEELER] Yeah.
[WOLFF] I think it's somewhat similar to any contract where you have a royalty that's based on net earnings that you're going to always want to have some kind of audit clause that you can enforce at some time to check. And I think if there's some kind of limitations, you can audit, whether if you put in something that exceeds those limitations, you can see if that's happening or not. But just like every time you speed, you don't get a ticket. You're not going to catch everything.
[KELSEY] In the book world, I can envision lots of output that perhaps would not violate that, what you just described, those guardrails, but still be potentially infringing, and certainly competitive.
[MARKS] And to me, then the answer is if as you're looking at the model, if it's likely to generate competitive output, then you just have to have the license terms make sense. If you're operating within a license paradigm on both sides and everyone's acknowledging the need for a license to do the training, then you can address, well, hey, how are we going to prevent this from being competitive with our output? That will affect presumably the price at which you're willing to do the deal or not do the deal.
[WHEELER] I was just going to make the point that your question is far more apt for the unlicensed case, where you're trying to figure out whether your creative work has been scraped and fed into a large language model or some other generative AI. That's much more challenging than enforcing the terms of a contract.
[MCGRAIL] Yes. Thank you.
[ROBERT ROTH] Thanks. My name is Robert Roth. I'm a media lawyer and photojournalist. If I may-- excuse me-- if I may follow up with Nancy, please.
[WOLFF] Sure.
[ROTH] You made a point before about how certain systems are barring prompts, which would avoid people creating, for example, pictures of Vice President Harris looking in a way that she doesn't look in real life. Well, in that case, how did we wind up with a very well circulated picture of Pope Francis looking like the Michelin Man?
And if you're going to tell me-- [INAUDIBLE] if you're going to tell me that was independently created, let's say, by Photoshop, which may be the case, in that case, isn't that picture, the false picture of Pope Francis looking like the Michelin Man, then being used to train other databases so that they will use that to create other pictures?
[WOLFF] Well, I was referring to the AI platforms that the industry has created within their own platforms, such as Adobe and Getty and Shutterstock. So the Pope in the puffer jacket was created on-- I believe that was Midjourney, which is an unlicensed platform that has scraped everything on the internet, which, of course, Getty and no one can control, because that's outside of the platforms that exist within the individual companies.
[MARKS] That's the difference between the licensed paradigm and the unlicensed paradigm. In one, the rights owners have a way to control and prevent that, if that's an outcome they don't want. I'm not suggesting that if somebody wanted to have the rights to pictures of Pope Francis as the Michelin Man, they couldn't license it to combine those two for whatever purposes. But the image that you're talking about was created outside of a licensing paradigm. And so I think that's a distinction.
[WHEELER] Fake Pope.
[ROTH] If it was done with Midjourney, or it was done with DALL-E, or any other program like that, they had to first obtain a picture of the Pope.
[WOLFF] They scraped the internet.
[MARKS] They scraped it from the internet.
[WOLFF] They scraped the internet.
[MARKS] Without permission.
[WHEELER] They didn't license it.
[WOLFF] It's not licensed. So there's no-- when you don't license something, there's no restrictions.
[WHEELER, CHUCKLING] Yeah.
Very difficult to police those licenses, those non-licenses.
[MCGRAIL] Yes, please.
[AUDIENCE MEMBER] Hi. You all have mentioned several times the importance of metadata. And one question I have for you is, some of these social media platforms sort of routinely strip out all of the metadata. They may or may not be allowed to do that based on their licenses. But how do you deal with a situation like that, where you're trying to track the usage of your works, but the metadata is gone?
[MARKS] Well, it's obviously harder to track, but I think the point about-- one of the points about the metadata is it's what makes a license from an image licensing library valuable is that you get the metadata and the captions associated with the work, so that when the model is ingesting images, they know that picture one is a picture of a dog and not a cat. And they know that picture two is a picture of a cat and not a horse.
And so when the model gets trained, and then people say, I want a picture of a cat, it's got a picture of a cat and not a horse. So if you're scraping just from the internet writ large, where you don't have the detailed captions and metadata associated with it, it's harder to train the model. So that's just part of the value proposition: getting the metadata and the image at the same time and having the descriptive information linked to particular images as a licensed set.
It doesn't mean that you couldn't figure out some way to train a large language model if all you had were photographs. It just-- it would be harder to train the model than if you also have all of this rich, descriptive metadata.
But it doesn't address the problem, the related problem that you're talking about is if you're a rights owner and a platform is stripping out the metadata and the credit information, and posting it, how do you police it? You're right. That makes it harder to police. I don't know that there's a solution for that. But I think that's probably a separate problem.
[WOLFF] There is image recognition technology.
[WHEELER] Image recognition, watermarks.
[WOLFF] It doesn't require metadata. So you can at least find the content. You know if it was licensed or not.
[WHEELER] Watermarks also are a great detection tool.
[WOLFF] And a lot of software strips metadata, not for legal purposes, just because they're trying to make the content smaller. Engineers like fast data, and metadata slows things down. So there's a lot of software that just automatically strips things.
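The "image recognition technology" the panelists mention can be illustrated with a perceptual hash. The sketch below is illustrative only: it computes a difference hash over an already-downscaled grayscale pixel grid, whereas a real pipeline would first resize and grayscale the image with an imaging library. Because the hash depends only on pixel content, it survives metadata stripping:

```python
def dhash(grid):
    """Difference hash of a small grayscale grid (rows of equal length).
    Each bit records whether a pixel is brighter than its right-hand
    neighbor; visually similar images produce similar bit patterns."""
    bits = 0
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")
```

Two near-identical images yield hashes a small Hamming distance apart, so a rights holder can match found images against a licensed library even when every byte of metadata has been removed.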
[MCGRAIL] Makena, yeah.
[MAKENA JOY BINKER COSEN] So you guys talked about how for commercial purposes, generative AI may be used more so for ideation than the final product, because companies are interested in owning and making sure that whatever they put out, they have some level of control over it. So what I'm curious about is that ideas aren't copyrightable. But if we're using generative AI to make ideas, it may suggest expressive elements, not just you're mashing up Star Wars and Breaking Bad or whatever and here's an idea, but like, oh, the frame is set up in this way, that it might have learned going through images, going through movies of what are typical forms of expression.
So when you take that idea and you then use it to make your own commercial image or movie, how much does using generative AI for the ideation process limit your ownership, if at all, of a recreation of that with your own photographers, with your own filmmakers, if at all. And if it does limit it, how much do directors need to change elements of whatever is generated by AI to be able to have a commercial purpose of actually selling it? Because the value they're adding is ownership.
[WHEELER] Yeah. Well, it has to be copyrightable, right? That's the whole discussion about whether anything generated in whole by AI is, at least so far in the US anyway, considered not to be copyrightable. But how about partial, or the ideas behind it, so on. I think I'm interested in what Nancy and Ben have to say about it, but that's the whole copyrightability question.
If you're Lionsgate, actually, you absolutely want to have a copyrightable product when you take whatever you take from your Runway deal and turn it into Breaking Bad 2, or whatever it is.
[WOLFF] Well, I think creatives have always used source material for ideation, whether they've just downloaded stock images for ideas. And that's what I think they're using AI for-- and I've been shown some examples when they're worried. It's like, does this look close enough? Help me out.
But a lot of it, it's really a fast way to see if you like the way something looks that you would normally have done with images and Photoshop, or something like that. But now you can say, well, do I like something over here better than over that? So it's always been done. It's just it can be done much more quickly now.
[WHEELER] More thoroughly, more thoroughly.
[WOLFF] Right. And so it sort of speeds the process up. And then it's used as what they call inspiration. Now, of course, there's always been issues of where the line is an inspiration, and make sure that your end product is your own creative result. But this has always gone on, that you get concepts and ideas from source materials. And it's just a very fast way to try things out, like, do I want it in pink? Do I want it in blue? I don't know.
[MARKS] And not only do you get concepts and ideas in source materials, but often like roughs of products, whether it's in the television industry, or in the advertising industry, or whatever, will use copyrighted materials for the internal pitches and presentations. So this is what it's going to look like. Or they'll put in a popular song as an example that's not cleared for the movie, but they're using that, like, this is the kind of thing we're going to have. And then maybe they--
[WHEELER] The elevator pitch.
[MARKS] Yeah. Before they do the commercial release, they either go clear that song, or they substitute it out for something that they do have the rights to. But people have been making those kinds of uses all the time. And I don't think AI would be any different than the internal before you go commercial release.
And in the image licensing space, there are licensed models that allow access to image libraries for those types of uses. You can--
[WOLFF] It's a comp use.
[MARKS] Yeah. It's a comp use. So you can take anything from a library for the pitch to put your deck together to show what it would look like. But before you actually go live with it, you then have to come back to us, and pay for it, and clear the rights, and make sure we're all good. So that's, I think, the role that generative AI would play in those sets. It would be comparable to what the pre-generative AI use of copyrighted material in a similar way, but where you still would have to clear all the rights before you release your product commercially.
[WOLFF] And we're talking about generative AI. But AI has been used in so much software already. If you look at all the creative suites that Adobe has, you can have backgrounds, or in your phone now, remove someone from a picture, change the background. It's there.
And so what we're talking about now is generative AI, but everything you-- there's so much AI in this phone already that lets you do things so much easier than you could do before. It allows all of us to make semi-good pictures.
[MARKS] It comes back to what Roy was saying earlier, that we've been licensing AI. We just weren't calling it AI.
[WHEELER] That's true.
[AUDIENCE MEMBER] My name is [INAUDIBLE]. I'm a judge from Korea and a recent Columbia Law School graduate. And my question is, when image creators or movie creators license their works to AI, can they limit the use of their work-- say, "I don't want this to represent certain propaganda, politically or commercially"? Because unlike other industries, such as publishing or music, I think images can be so easily manipulated to represent misleading ideas, or ideas that the creators don't agree with.
[WOLFF] If you look at any license agreement you get now, if you went to any image licensing company, there will be terms and conditions. And one that's always there is you can't create a work that is illegal, defamatory. And then there's also some that say if you're going to use anything that would be offensive for all these reasons, you have to label it as something that's been done for illustrative purposes. So contractually, there's ways to do that.
Obviously, you can't always control the output. But I suppose if we're talking generative AI, there could be certain prompts that you couldn't do. There's a lot of-- the industry is already preventing users from doing things, and even searching for terms that would be dangerous to children. I mean, there's a lot of things going on already that are being vetted out.
[WHEELER] So just to take an example, another slightly different example, in all of our customer agreements with people who display the movies and TV shows that we licensed for public performance, say you may not use any of our content to endorse another product or service of yours or anyone else's. So that's, again, a challenge to enforce perhaps, sometimes, but absolutely prohibit use that you don't want.
[MARKS] You've hit on a very complicated issue because the line between despicable political misinformation and hilarious parody is not always going to be clear, and it will depend entirely on someone's perspective. And so it will be a challenge to sort all of that out about what kind of guardrails content owners-- the copyright question is, what kind of content-- what kind of guardrails does the content owner have the right to impose and enforce as a matter of licensing or withholding the rights to use in connection with generative AI? And then the separate question of what as a society do we want to tolerate. Because the copyright question doesn't solve the political misinformation question, except in limited circumstances.
[WOLFF] And when you come to libel and defamation or copyright infringement, the US, at least, has a strong First Amendment that does allow parody and some sort of making fun, as long as you know it's making fun, like we had the RuPaul Republicans at one time. And so not all countries have the same First Amendment, but they do have directives that protect speech, as well as privacy. That all gets balanced.
So there's always going to be some room for political satire and speech. The question is whether you know something's real or someone's making fun of it. It may be that there will have to be some kind of labeling that something is AI, so you know that it's been generated. And that's part of what a lot of the discussions are: the transparency of what's going in, and then giving information to the public so they know what they're looking at. Because the real concern is that you're never going to trust images anymore.
[MARKS] The one thing everybody should be cognizant of is that the technological abilities and the implementation of these tools for nefarious purposes will be first in line, and the legislation will follow. We're not going to prescribe it in advance. So we're going to have a bumpy period as we sort through what the implications of the way people are using these tools are and how we want to regulate them and what our ability to regulate is. But I think the technology will be a step ahead of the regulation.
[WHEELER] Yeah, as always.
[WOLFF] If you look at the language of the proposed No Fakes Act, they do have carve outs for things that would be political speech, and things that would be newsworthy, documentary style work, things that would be protected by the First Amendment.
[MCGRAIL] Thanks, yes.
[WOLFF] Not on transparency, though.
[MCGRAIL] Go ahead.
[DAVID STRICKLER] David Strickler. I'm a judge on the Copyright Royalty Board. I have a question for the panel. There have been a number of comments from the panel and from the audience about the difficulty of enforcing a lot of these contract clauses. And I think Ben mentioned the existence in these contracts of liquidated damages clauses. In light of the fact that enforcement is so difficult, or assuming it is in fact so difficult, is there an attempt to make these liquidated damages clauses reflective of that by making them as onerous, difficult, and, I would use the word, painful as possible, without spilling over and becoming unenforceable penalty clauses?
[MARKS] While we are certainly seeing the growth of the marketplace, I'm not aware of any litigation over whether or not a liquidated damages provision in any of these types of contracts was enforceable. So I think that's something that will sort itself out.
But I do think that there are negotiations over concepts like liquidated damages if something slips through: if somebody says, I'll only train for this purpose but not for that purpose, or I'll only use it for this. People are talking about what the enforcement should be. I don't think we're at the stage of the evolution yet where we know the metes and bounds of what's an enforceable provision and what's not.
Certainly, the rights owner side will want to make the penalties onerous, and the licensee side will want to make the provisions less onerous. But how that sorts out, I think that will just be a process that will play out over time. I don't have a sense of where those lines are.
[WOLFF] I don't know either. But I do know that if it is a penalty, that makes it unenforceable. I mean, you have to somehow figure out what a realistic damage would be. And I know that is a concern, because we do licensing for AI for clients in the office. And it is hard to figure out. What happens if someone breaches? What do you do?
You can't easily, as I understand it, unlearn it. I think we're not at the point where we could tell a model to unlearn something it's been trained on, for some of the models that they're licensing for right now.
[STRICKLER] Right. The point, or at least the primary point, would not be whether the liquidated damages clause would ultimately be enforceable or not. The question is whether it's sufficiently onerous on paper that it makes one who is otherwise going to infringe reluctant to test it: you make sure the liquidated damages clause is painful enough that no would-be infringer wants to test it.
In the same way, by analogy, someone mentioned driving. And you can drive a little over the speed limit, and you won't get a ticket. But you could also drive under the influence, and you're likely not to get a ticket. But if you do get a ticket, it's incredibly onerous. And therefore, you may want to avoid it. So that would be the same general principle of inducement and incentive.
[WOLFF] So I think there's two issues. First, when you're doing licensing, it's generally with the company that's creating the model. And then there'll be a third-party user who goes on the platform to create something. So you'd have to hold the model company liable for the acts of its users. And generally, that's also going to be a contractual relationship, through the terms of use: where that risk is allocated, and how you can enforce it against a user that's not your customer.
[WHEELER] I was just going to add the comment that, in my personal experience, liquidated damages clauses are not as valuable as people sometimes think they are, because, of course, you have to collect on them; liquidated damages are not self-enforcing. And if someone disagrees that the clause has been triggered, then you don't get the liquidated damages, absent litigation.
So I think the strongest element that a licensor has is termination, recognizing the issue about unlearning, which is yet to be decided: whether termination can be effective once something has already been trained. But to me, that's the strongest weapon that any licensor has.
[MCGRAIL] Go ahead.
[ROY KAUFMAN] Hi. Roy Kaufman from CCC. I hope in five years we're living in a world where the biggest problems we have are people exceeding license terms and not [INTERPOSING VOICES].
[WHEELER] We should be so lucky, right?
[KAUFMAN] --as opposed to just telling us to go pound sand.
[WHEELER] Exactly.
[KAUFMAN] I have a question, Nancy and Ron, primarily for you, but obviously, I welcome Ben's comments on this. I mentioned Open Science and Creative Commons licenses briefly in my talk. And we're in a world of user-generated content and photographers who are putting stuff on sites where they might put up a Creative Commons CC BY-NC license, or even a CC BY, give-me-credit reuse license. Certainly, YouTube creators do use open licenses. And I believe every creator should be able to set the terms that they want, including open licensing.
In our industry, we're starting to hear people question the decisions that they made five years ago around applying a Creative Commons license, because all of a sudden now something that they hadn't thought of, and where's that attribution that they're getting when something gets scraped and trained, and blah, blah, blah. What are you guys hearing in your industries about practices around various open licenses and whether the AI world is causing them to rethink perhaps what they did in the past?
[WHEELER] Do you want to go first, Nancy?
[WOLFF] Well, most of the licenses that are done through the aggregators, I think, are not CC licenses. It's more individuals who want to use sharing platforms, and they want to control it somewhat by choosing the CC license, whether they get attribution or not. So that's not my general experience. I know that there are some free licenses within the stock image model, but they do have some restrictions.
Unsplash is a part of Getty. And I don't know if it's a true CC license, or it's just a broad rights license that's used.
[WHEELER] I was just--
[WOLFF] Yeah, I don't know.
[WHEELER] I was just going to comment about-- you mentioned YouTube creators. Actually, my own experience with YouTube-- I used to work almost 25 years at 20th Century Fox. It was, of course, a piracy platform at the beginning. And then it became user generated content. And now if you go on YouTube, as I do, it's increasingly licensed copyrighted works. Full movies are on there, sometimes free with a subscription, sometimes you pay on a per use basis.
My general sense of the world is that everyone who started off looking for name recognition, and clicks, and so on, eventually says, show me the money. So I think that the direction of the world-- not exclusively, of course, there are plenty of platforms where free content is available without a license or maybe a Creative Commons license. But I think the long run trend is actually, to use the theme of this conference, licensing copyrightable works.
[MCGRAIL] Thank you. So I don't think I see-- oh, yeah, question in the back.
[PETER CRAMER] Hi. Peter Cramer from Proskauer. You mentioned the idea of payments, by which I assume you meant royalties at the output step from these models. Curious as to your thoughts from a practical perspective: given that these models are black boxes, and there's no direct line from a piece of training data to a generated output, how do you calculate that? How do you determine whether you or some other licensor is entitled to a royalty payment? To use the boy-on-the-beach-with-the-balloon example, if you don't know which of the zillions of little-boy, balloon, beach, sky, sand, and ocean photos went into a generated output, or how much each contributed, who is entitled to payment, and how much?
[MARKS] Well, you raise a good point. I think it's complicated. But one model that you can see in some deals is limited output rights, including with attribution. Whether those are on a per-output basis, or are part of a flat fee within the overall economics of the deal, varies. In the public descriptions of some of the deals with licensors, they include some limited display rights to attribute clips to whatever it is: a magazine, or a newspaper, or a publisher.
As for what the actual economics are: are they tracing it back to the actual individual author of a particular work on a per-display basis, or is that part of the broader economics of the license? That's going to vary by agreement and negotiation and get worked out.
[WOLFF] So in the photo licensing or stock image licensing arena, the companies are creating funds for contributors, because it is difficult. I think at some point, there will be ways to figure out what the output is comprised of. I believe, and I don't understand it that well, that there may be things you can tokenize, so you can figure out the elements. But they are creating funds that will go to the contributors because they realize it is difficult. So they are just coming up with a fund, and coming up with a formula that they feel is reasonable, until such time as it can be more accurate.
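[As a purely illustrative sketch, not any company's actual formula: the kind of contributor fund Wolff describes could, for example, be split pro rata by each contributor's share of the licensed collection. The weighting scheme and names here are hypothetical.]

```python
def distribute_fund(fund_total, contributions):
    """Split a flat AI-licensing fund pro rata among contributors.

    `contributions` maps each contributor to a weight, e.g. the number of
    their images in the licensed collection. Real funds may use different,
    more nuanced formulas; this is only a minimal pro-rata sketch.
    """
    total_weight = sum(contributions.values())
    return {
        name: fund_total * weight / total_weight
        for name, weight in contributions.items()
    }

# Hypothetical example: a $90,000 fund split across three contributors
# weighted by how many of their images are in the licensed collection.
payouts = distribute_fund(90_000, {"alice": 600, "bob": 300, "carol": 100})
# alice receives 54000.0, bob 27000.0, carol 9000.0
```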
[MARKS] And there are, admittedly, very rough parallels to what other content industries have experienced in other circumstances. So if you look at the way music royalties get distributed now, the granularity and accuracy of paying artists and composers the right amount for the right number of performances is easier when you're licensing a streaming service, because the streaming service can report exactly how many songs they played to exactly how many people, than for a radio station that says, we put it out over the air, and we think we had a bunch of listeners.
And there were ratings services that would say, this is a more popular station than that, but nobody knows exactly who was listening when to what song. And so they use blunter instruments to distribute for some kinds of media than for others. So television is the same way in terms of figuring out--
[WHEELER] Nielsen ratings.
[MARKS] Nielsen ratings. And Nielsen ratings have gotten better and better over time, or probably more and more accurate over time. Or I guess maybe I shouldn't say that so declaratively. People may have different perspectives on--
[WHEELER] Paramount just dumped Nielsen yesterday.
[MARKS] --on how accurate they are. I realize I may be getting out over my skis on that one. But it is an issue that the industry will have to confront, though it's not different in kind from similar issues that other media have figured out over time in distributing to creatives. So I think that'll just be something that the marketplace will work out over time.
And it will probably get easier and easier to do it accurately as these tools progress and people figure out, hey, we need some way to pay people accurately, and then develop a tool to do that. I don't know to what extent that exists yet. But it's easy to envision that piece solving itself over time.
[WHEELER] Especially as streaming becomes more ubiquitous. That's definitely the most accurate way to count: you had this many streams to this many people.
[MCGRAIL] OK. Well, thank you to our wonderful panelists for this really illuminating panel. And lunch is available now in room 107, which you're going to go down the hall, and then it's all the way down to the left. And there'll be people and signs to hopefully direct you. And we'll be back after lunch. And also, please sign out for CLE before you go to lunch. And then you're going to sign back in when you come back. So thank you again so much.
[APPLAUSE]