Like many of you, I read our coverage of the Sora text-to-video app yesterday with more fear than excitement. It hails from OpenAI and basically allows you to type in a prompt that is immediately translated into video. The results can range from goofy to photorealistic.
As you’ve surely heard a million times, film and TV are visual mediums. When a new tool that extracts visuals from prompts gets introduced, there will obviously be ramifications within Hollywood.
So, let’s unpack a few.
What are the Ramifications of the New ‘Sora’ OpenAI App on Hollywood?
here is sora, our video generation model:
https://t.co/CDr4DdCrh1
today we are starting red-teaming and offering access to a limited number of creators.
@_tim_brooks @billpeeb @model_mechanic are really incredible; amazing work by them and the team.
remarkable moment.
As you can see from Sam Altman’s above Tweet, OpenAI has introduced a new AI model called “Sora,” which is a text-to-video generator. This innovative tool allows for the creation of videos from text instructions, making it a significant advancement in AI-driven content generation.
When it comes to Hollywood, this program is going to absolutely change how things are done at every step of the production process.
Take pre-production: when it comes time to make a movie or TV show, this program could take care of all the previs, almost creating a shot-for-shot version of the movie based on the prompts you give it. Like moving storyboards.
When it comes to production, it’s not out of the realm of possibility that as this tech gets better, studios will be able to generate finished visuals from their ideas without having to pay teams of animators to create them.
The ease of creating diverse visual content could lead to the emergence of new genres and formats that blend live-action, animation, and AI-generated content in novel ways, pushing the boundaries of current storytelling paradigms.
But it could also decrease professionalism. If anyone can just do this stuff, do we all become “content creators” instead of filmmakers and storytellers?
Will the industry I love begin to shrink uncontrollably because anyone can prompt their ideas to life?
Can you have a great idea or screenplay that cuts through the noise if everyone is generating slop?
Will Movies and TV Become Like TikTok?
Of course, as this tech improves, people can animate their own stories. And right now, most people consume stories via their phones, watching short snippets.
I have a real worry that Generation Z has trained themselves on short-form content in such a way that it will slowly begin to replace longer things like movies and TV shows.
That might be an overreaction, but it crosses my mind when I see how Reels and TikTok have incorporated product placement, and how many amateur content creators have dominated that medium.
What happens when they can make their own animated stories to release on those platforms? We already get the AI voiceover videos and images. This is just the next logical step.
Do We Have Any Legal Protections?
The secondary worry is that as the photorealism gets better and better, people will be able to replicate someone’s likeness or generate content starring them without their consent.
There was already a huge news story involving fake pornographic images of Taylor Swift generated by AI. Where does the line get drawn?
And what kinds of protections can we assume?
Right now, we have nothing in place. There will have to be congressional hearings and we will have to look into the ethics of all this.
The reason is that AI cannot generate ideas from thin air. It scours the internet and takes in visuals created by human beings: art, photos, drawings, and anything else it can find. Then it amalgamates all of that and adapts from it.
We’re still trying to define if this is plagiarism or stealing.
All of this needs to happen soon, before these programs are readily used by the public.
The Federal Trade Commission has proposed rules that would make AI impersonations of real people illegal.
In a news release, the FTC wrote: “The agency is taking this action in light of surging complaints around impersonation fraud, as well as public outcry about the harms caused to consumers and to impersonated individuals. Emerging technology — including AI-generated deepfakes — threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud.”
In summary, this technology could revolutionize how content is produced, making it more accessible and efficient while also raising essential discussions around creativity, copyright, and the ethical use of AI in media.
For now, this stuff is moving fast and we don’t totally have a grip on what it means, but these are my thoughts.
Let me know what you think in the comments.
Author: Jason Hellerman
This article comes from No Film School and can be read on the original site.