
State of Slop: Disney's Sora Disaster

Published April 14, 2026

Cover image by OKSLOP contributor Blueprint Body

In December 2025, Disney announced a $1 billion partnership with OpenAI. The pitch: integrate Sora-generated video into Disney+, letting subscribers create content featuring 200+ Disney characters. Bob Iger called it "thoughtfully and responsibly extending the reach of our storytelling through generative AI."

By March 2026, three months later, Disney had abandoned the entire project and pulled out of its OpenAI investment completely.

What happened? And what does it tell us about where AI actually is?

The bet

Disney's vision wasn't subtle. They imagined a future where AI video generation would transform how people consume (and create) entertainment. Subscribers would use Sora to generate personalized content with their favorite characters. New revenue streams. Engagement metrics through the roof. The future of media.

This wasn't a pilot or a test. A billion dollars is a statement. Disney believed AI video had crossed the threshold from "interesting demo" to "production-ready product."

They were wrong.

What actually happened

When Disney+ subscribers got access to Sora-powered creation tools, they mostly didn't use them. The AI-generated content that did appear saw low engagement. And the platform got flooded with low-quality variations of SpongeBob, Peppa Pig, and South Park, characters Disney doesn't even own.

Meanwhile, the broader Sora ecosystem developed a different reputation. The Sora subreddit became a ghost town except for users attempting to generate explicit content. The tool's primary use cases drifted toward pornography, nonconsensual imagery, and disinformation.

As 404 Media put it: "It turns out when you try to serve slop on a product people pay for, no one wants it."

The pattern

Disney isn't alone. TCL's "Film Machine," once positioned as the AI that would automate moviemaking, has been quietly shelved. Their promotional videos from over a year ago average a few thousand views each, comments disabled.

The much-hyped AI-generated fight scene between deepfake Brad Pitt and Tom Cruise? Industry observers noted it retained "continuity errors" and an "uncanny feel." Impressive as a tech demo. Unwatchable as entertainment.

The pattern repeats: breathless announcement, modest demo, quiet retreat.

The misperception

Here's what the AI-will-replace-Hollywood narrative keeps getting wrong: the assumption that generating video frames is the hard part of making movies.

It's not. The hard part is the thousand decisions that make a story work. Pacing. Performance. What the camera emphasizes. What it doesn't show. The cut that lands. The moment of silence before a line.

AI can generate pixels. It cannot yet make the decisions that give those pixels meaning. And the gap between "technically possible" and "actually good" is vast.

Where AI actually works

Studios are using AI, just not how the hype suggested. The current reality: editing assistance, storyboarding, scratch voiceover, concept visualization. Behind-the-scenes tools that speed up workflows without replacing creative judgment.

That's useful. That's real. But it's not "AI will put Hollywood out of work." It's "AI is another tool in the toolkit, like digital color grading or CGI before it."

The difference matters. One story generates headlines and investment. The other describes what's actually happening.

What this means for AI images

We build an AI image platform, so we think about this stuff.

The Disney disaster reinforces something we've believed from the start: AI-generated content works when it's honest about what it is and appropriate for the context.

Stock photos for a blog post? Good enough is good enough. A placeholder image while you're building a prototype? Perfect use case. Visualization to communicate an idea quickly? Exactly right. (For a full decision tree, see When to Use AI Images.)

Feature film distributed to paying subscribers? Not there yet. And pretending otherwise, betting a billion dollars on it, doesn't make it true.

The hype cycle is the problem

AI video generation will get better. So will image generation. The underlying technology genuinely is improving.

But the hype cycle creates unrealistic expectations, which create high-profile failures, which create backlash against AI tools broadly, including the ones that work fine for their intended purpose.

Every "AI will replace X" headline that doesn't pan out makes it harder to have honest conversations about what AI tools can and can't do. It poisons the discourse with false promises and inevitable disappointment.

The honest version

Here's what we'd say if we were Disney's PR team (they wouldn't hire us):

"AI video generation isn't ready for premium entertainment. The technology produces output that viewers immediately recognize as artificial and low-effort. We tried to force a use case that doesn't exist yet. We were wrong."

That's not fun to admit when you've spent a billion dollars. But it's true. And the sooner the industry accepts where AI actually is, instead of where investors wish it were, the sooner we can build things that actually work.


State of Slop is an occasional series where we look at the AI content landscape: what's working, what's failing, and what the gap between hype and reality actually looks like.