In an exclusive interview with C21, OpenAI’s Chad Nelson discusses how recently launched AI video generator Sora will push the boundaries of what’s possible with video storytelling.

Chad Nelson
“Mind-blowing.”
That’s how one Content London delegate described Chad Nelson’s demonstration of Sora, the text-to-video generator from artificial intelligence giant OpenAI, at the event earlier this month.
A few days later, ChatGPT’s parent company released the tool to users in the US and several other countries, having teased an early version in February that shook the foundations of the film and TV business.
The tech is now available as a standalone product at Sora.com to ChatGPT Plus and Pro users in any market where ChatGPT is available, except the UK, Switzerland and the European Economic Area. Expanded access in some of these markets is expected in the coming months.
When an early version of Sora was revealed in February, it was only made available to a group of around 300 testers, ‘red teamers’ and other creative partners. According to the company, the newly released version, Sora Turbo, is “significantly faster” than that preview model.
Sora Turbo users can generate clips of up to 20 seconds at resolutions up to 1080p, in widescreen, vertical or square aspect ratios.
OpenAI has said Sora Turbo features interfaces that will make it easier to prompt Sora with video, images and text. However, it noted that the latest version of Sora has “many limitations” and “often generates unrealistic physics and struggles with complex actions over long durations.”
OpenAI, which has a valuation of around US$157bn after its most recent capital raise, said it was “introducing our video generation technology now to give society time to explore its possibilities and co-develop norms and safeguards that ensure it’s used responsibly as the field advances.”

Sora users can create stunningly realistic video from simple prompts
It is widely expected that putting Sora into the hands of the public will have major implications for the future of the film and TV business. OpenAI says it wants this “early version” of Sora to enable people to explore new forms of creativity, tell their stories and push the boundaries of what’s possible with video storytelling.
There’s no doubt the demand is there, with both ChatGPT and Sora experiencing usage-related outages following the release. In response, OpenAI CEO Sam Altman said on X: “We significantly underestimated demand for Sora; it is going to take a while to get everyone access.”
But there are plenty of people refusing to believe the hype. Sora is "too expensive to scale and not good enough to make [OpenAI] money," according to influential tech critic Ed Zitron, one of the main proponents of the theory that big tech's multibillion-dollar bet on AI is creating a bubble that will eventually burst.
Nevertheless, one of the main takeaways from Content London was that, combined with the power of AI tools, the creator economy being established on social media could be “supercharged” in the years ahead, with the potential to change the whole economic model of film and TV.
Nelson – creative specialist at OpenAI – says the tech has the potential to turn an industry he believes is “too much work-for-hire” into one where any individual could create the equivalent of a high-end film or TV show without the need for strong financial backing or formal distribution.
On the possibility that Sora and other text-to-video tools like it released by OpenAI’s competitors, such as Meta, Runway and Google, could unleash a tsunami of video content, Nelson suggests we could be “about to enter a world where 50 Lord of the Rings might be released every day.”
In Sora’s current form, Nelson says it could be used for conceptual development, pre-visualisation, “light” storytelling, post-production and VFX iterations, rapid marketing and social media, but future versions of the tech will be even more advanced. It does not currently provide audio to accompany its video.
Demonstrating Sora’s capabilities, Nelson presents examples of ways he has prompted the tool to create anything from a Viking battle scene, lunar new year celebrations and a boy running through nature to an exposition shot for a sci-fi project and a highly realistic video of a couple kissing in a photo booth on their third date.

A still from the Sora-created ad for a stop-motion-style video game concept titled Forever Land
Nelson says VFX shots that would have previously taken a week to put together could now be done in four hours using Sora, while he has also used Sora to generate an ad for a video game filled with cute characters made entirely of felt and yarn, titled Forever Land.
Mimicking the kind of painstaking stop-motion animation that takes months, sometimes years, to produce, it's the latest evidence of how tools such as Sora will disrupt the VFX and animation industries, with indies set to be able to incorporate special effects on a par with those in Hollywood blockbusters into their productions.
But Sora is by no means perfect, with glitches particularly evident in shots depicting fast movement, while a video of Sora’s failed attempt to replicate a gymnast’s movement went viral after its launch.
Nelson says: “It’s still weak at certain things – sports, gymnastics, even juggling. Will it get to true physics? Yes, but there are still some gaps. There’s no magic button. You still have to curate it. There’s still so much artistry that goes into visual storytelling. But it will come down to what resonates with audiences, rather than who has the best VFX.”
The exec adds that he hopes Sora will be able to render 4K-quality video by some point in 2025. Such is Sora’s ability to create realistic high-resolution video from text instructions, its release was purposefully held back until after the recent US election, Nelson says.
Mammoth questions around copyright and ethics remain, and OpenAI admits its safeguards – such as default visible watermarks and an internal search tool that uses technical attributes to help verify if content came from Sora – are “imperfect.”
The company says it is committed to blocking particularly damaging forms of misuse, such as child sexual abuse material and sexual deepfakes. Uploads of people are being limited at launch, but it intends to roll the feature out to more users as it refines its “deepfake mitigations.”
To prepare Sora for broader use, OpenAI worked with what it calls red teamers – domain experts in areas such as disinformation, illegal content and safety – whom it says “rigorously” tested the model to identify potential risks. Their feedback played a key role in shaping Sora, helping OpenAI fine-tune safeguards while making the model as useful as possible.
“We will actively monitor patterns of misuse, and when we find it we will remove the content, take appropriate action and use these early learnings to iterate on our approach to safety,” OpenAI has said.
Nelson, who has a background in animation and is the creator of AI-generated short film Critterz, claims the tech will “supercharge creativity,” having spoken at Content London last year when he was creative director of Native Foreign and a consultant for OpenAI, which went on to bring him on board as a creative specialist.

The first glimpse of Sora’s capabilities made waves across the business early in 2024
“What happens if we are able to brainstorm in the final output? What does that do to our creativity? We wanted to design Sora so it wasn’t only engineers who could use it. We want writers to use it. We want it to be a productive tool,” he notes, pointing to its ability to create high-quality pitch materials.
Discussing how it could impact what people watch and the quality level of content, Nelson says Sora may raise the bar by allowing creatives to focus more on writing, while also breaking down barriers to entry for young filmmakers.
“Writing is going to be even more important in this future, because production value might not be the differentiator. It’s going to come down to how much you can connect through the writing, characters and story,” he says.
“Maybe a lot of derivative type of content will still have a market, but it’s going to be interesting to see [the impact of] focusing more of your time on storytelling and building character as opposed to the steps it takes to get a story on screen.
“I don’t think anyone sets out to create anything mediocre. A lot of people say if they had more time, they could take it to the next level. These tools will allow good, creative storytellers to really push forward and go beyond what they thought they could do and experiment more.
“If I was a young filmmaker, how do you get to prove your vision and see it come to life? Most studios I know aren’t willing to give US$90m to an up-and-coming filmmaker. It could help get some of those risky things funded.
“Those who are already in the business, I’m sure all of you have had projects that never got funded or are passion projects. Some of that can come to life.”
He adds: “The other thing that’s interesting is the whole economic model of this industry and how things are funded and ownership. If you can start to produce and build more of your content without a US$5m investment just to get anything on screen, how does that change the economic model and IP ownership?”
Nelson admits new tech such as Sora can be “scary” for an industry contemplating its future, with the potential to both bring down production costs and result in job losses that could ravage the business.
“My whole career, I’ve loved entertainment and technology. I can’t wait to see what people do with it,” he says.
Interview by David Jenkinson, C21’s editor-in-chief and managing director, at Content London 2024. With additional reporting by Jordan Pinto.