This week, OpenAI announced it would be shutting down Sora, its popular AI video app. This is not just about killing a video toy; it signals a strategic pivot at OpenAI.
You probably weren’t Sora’s target user, but watching this montage of its top clips is a great way to see how far this impressive tech has come.
Top Sora Clips Video via YouTube.
It’s both fun and scary to think about how fast technologies like this have evolved … and what they will make possible.
It’s easy to think Sora’s shutdown isn’t a big deal … but it’s a signal of OpenAI’s new playbook on infrastructure, partnerships, and profit.
And with that new playbook, OpenAI announced several other important changes this week. Here are a few of the highlights.
The End of Their Disney Partnership
Shutting down Sora also forced the termination of a major $1 billion investment deal between OpenAI and Disney, as well as licensing agreements that allowed the use of Disney-owned characters in AI-generated video content.
It’s a reminder that when OpenAI prunes products like Sora, it’s also pruning capital-intensive bets and risky content partnerships.
Pushing Pause on “Adult Mode”
Last October, Sam Altman announced plans for an erotica mode. Since then, the tension between boldness and caution has shown up in the gap between OpenAI’s ‘not the morality police’ rhetoric and its quiet slowdown on controversial features.
The Financial Times later reported that the pause is “indefinite,” with reporter Cristina Criddle citing challenges around “sexual datasets and eliminating illegal content.” The move reflects the growing regulatory and reputational risk around generative sexual content.
ChatGPT Just Got More Reliable
OpenAI updated ChatGPT with a 33% reduction in factual errors, plus a significantly expanded memory for longer conversations.
Changes like these hint at where OpenAI wants to focus: scalable, everyday systems that drive recurring revenue.
And it doesn’t stop there …
The Great DRAM Over-Buy
Early reports indicated that OpenAI had secured forward commitments for up to 40% of the world’s DRAM supply. The goal was to support its future data-center buildout as AI demand increases.
In plain English, DRAM is the short-term memory that lets these models think; if you want bigger, smarter models, you need a lot of it.
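To make the scaling intuition concrete, here is a rough back-of-envelope sketch of why bigger models need so much memory. The parameter counts and byte sizes below are illustrative assumptions, not OpenAI’s actual figures: memory to hold a model’s weights scales linearly with parameter count and numeric precision.

```python
# Illustrative sketch only: weight memory scales with parameter count and precision.

def weight_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate memory (GB) needed just to hold a model's weights.

    bytes_per_param: 2 for fp16/bf16, 4 for fp32, 1 for 8-bit quantized.
    """
    return params_billion * 1e9 * bytes_per_param / 1e9  # simplifies to params_billion * bytes_per_param

# A hypothetical 70B-parameter model in fp16 needs ~140 GB for weights alone,
# before activations, KV cache, or training state (which can multiply this several-fold).
print(weight_memory_gb(70))     # 140.0
print(weight_memory_gb(70, 1))  # 70.0 with 8-bit quantization
```

This is also why compression and quantization matter for the memory market: halving bytes-per-parameter halves the DRAM a deployment needs.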
As these announcements roll in, many are also scrutinizing how much RAM OpenAI locked up in advance.
With this, I think the memory bull run (which began over 2 years ago) is coming to an end. Many of the large AI labs have secured more DRAM via forward contracts than what they will realistically need. This has created the sense of an artificial shortage supported by essentially FOMO on DRAM supply. Like in previous cycles, this will unwind.
– Seeking Alpha
With Google’s new TurboQuant AI compression algorithm and OpenAI switching focus, many see the drop in RAM prices as more than a blip — potentially a real change in the cycle.
Where OpenAI Goes Next …
From Owning to Orchestrating Infrastructure
After initially pursuing massive, vertically integrated infrastructure through its multi-hundred-billion-dollar Stargate initiative, OpenAI has begun shifting toward a more flexible, capital-efficient model.
If labs over-bought memory during the AI gold rush, then shifting from owning massive data centers to orchestrating capacity from partners starts to look less like backtracking and more like smart risk management.
Instead of owning and operating the bulk of its global compute footprint, OpenAI is increasingly leaning on partnerships and leased capacity from cloud providers. Internally, this has been reflected in a restructuring that separates infrastructure design, partner management, and operations — signaling a shift from a “build everything” strategy to a “coordinate and optimize” approach (e.g., using multiple cloud providers, negotiating for power in different regions, etc.).

At the same time, the company is clearly narrowing its product focus.
Video apps like Sora are entertaining for users, but they’re also brutally compute-intensive for the providers. Looking at Anthropic’s revenue and that of other competitors, it’s clear that chat, code, and enterprise use are where the immediate growth and low-hanging fruit lie.
How This Fits the Longer-Term Plan
AI has already consumed massive funding to get here — and it will require even more to reach the next plateau.
Rather than a retreat, this shift aligns with a longer-term strategy: preserving capital, accelerating deployment, and keeping options open in a rapidly evolving compute landscape. Leveraging partners allows OpenAI to scale faster while avoiding bottlenecks tied to financing, power availability, and hardware cycles.
In that context, “Stargate” appears to be evolving—from a fixed set of owned assets into a broader, more modular strategy for bringing compute online wherever it is most efficient.
The end goal hasn’t changed: securing enough compute to train and deploy increasingly powerful AI systems. What has changed is the path — shifting from infrastructure ownership to infrastructure orchestration, and from experimental breadth to commercial depth.
This aligns with their restructuring from a non-profit toward an eventual IPO. They’re clearly focused on profitability in the near term, not just the long term.
But these shifts could also open opportunities for more players to enter the space and carve out their own slice of the digital landscape.
I’ll continue to watch how OpenAI manages the delicate balance between rapid innovation, financial pressures, and the broader public good. The story is still unfolding, and what happens next will shape the technological future we all live in.
How It Shows Up in Everyday Use
All of this might sound abstract, but you can feel these shifts in everyday usage too. If you’re curious, I use a paid version of ChatGPT throughout the day; I’ve gotten used to it and know when to listen and when to ignore it. I’ve also been happy to pay for Perplexity, though I use it in much more limited circumstances; it gives me access to different models and has felt like a good value. And today I finally subscribed to Anthropic as well, because the quality of the responses I’ve been getting has changed my usage behavior.
Interestingly, if I ask different models a question and then show their answers to ChatGPT, ChatGPT often favors Claude’s responses as well.
I know all of that is subject to change, and tools are leapfrogging one another with increasing frequency. With that said, I thought it was worth sharing.
Let me know which tools you use and rely on most.
Onwards!
