OpenAI Retreats From Unpopular Sora Opt-Out Policy
OpenAI is changing how its AI video generator Sora handles copyrighted material after a wave of criticism from Hollywood and other rightsholders over its initial opt-out copyright policy, which we covered in a previous piece.
In a blog post late Friday (copied in full at the bottom of this piece), CEO Sam Altman said the company will give rightsholders “more granular control” over the use of their characters in Sora videos. Until now, studios and artists had to proactively ask OpenAI to block their characters or file takedown requests after they appeared. Altman said the new approach will move closer to Sora’s existing opt-in model for personal likenesses, with added tools letting rightsholders decide how (or whether) their characters can be used.
“We are hearing from a lot of rightsholders who are very excited for this new kind of interactive fan fiction … but want the ability to specify how their characters can be used (including not at all),” Altman wrote.
Altman also said OpenAI is exploring a revenue-sharing system for creators and companies who allow their intellectual property to be used in user-generated Sora videos. The company has been surprised by how much video people are generating, often for very small audiences, and says it must find a way to monetize the platform while rewarding rightsholders. The exact payout model has not been finalized, but it is expected to roll out soon.
The shift comes after Sora, launched last week on an invite-only basis, quickly filled with clips of well-known characters from shows such as South Park, Dragon Ball Z, Rick and Morty, and SpongeBob SquarePants. Talent agencies and studios warned clients to opt out, and legal experts predicted potential copyright suits if the policy remained unchanged.
Altman compared the current pace of change to the early days of ChatGPT, saying OpenAI will “iterate quickly” and fix missteps as it goes.
Altman’s blog post:
Sora update #1
We have been learning quickly from how people are using Sora and taking feedback from users, rightsholders, and other interested groups. We of course spent a lot of time discussing this before launch, but now that we have a product out we can do more than just theorize.
We are going to make two changes soon (and many more to come).
First, we will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls.
We are hearing from a lot of rightsholders who are very excited for this new kind of “interactive fan fiction” and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all). We assume different people will try very different approaches and will figure out what works for them. But we want to apply the same standard towards everyone, and let rightsholders decide how to proceed (our aim of course is to make it so compelling that many people want to). There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration.
In particular, we’d like to acknowledge the remarkable creative output of Japan; we are struck by how deep the connection between users and Japanese content is!
Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences. We are going to try sharing some of this revenue with rightsholders who want their characters generated by users. The exact model will take some trial and error to figure out, but we plan to start very soon. Our hope is that the new kind of engagement is even more valuable than the revenue share, but of course we want both to be valuable.
Please expect a very high rate of change from us; it reminds me of the early days of ChatGPT. We will make some good decisions and some missteps, but we will take feedback and try to fix the missteps very quickly. We plan to do our iteration on different approaches in Sora, but then apply it consistently across our products.