Thoughts about the markets, automated trading algorithms, artificial intelligence, and lots of other stuff

  • The Distance Between Then And Now

    We just got back from Portland, where we were visiting my oldest son — and meeting my newborn grandson.

    It was a great trip. Nothing monumental happened, but years from now, we’ll continue to look back on it fondly.

    I got to hold my grandson for the first time. I got to play with my granddaughter. And I got to remember how much work play takes when you are doing it intentionally. Lifting her up, bouncing her on my leg, jumping, reading, getting down on the floor to see the world from her height. Let’s just say, my body is reminding me of how much fun we had. But it was worth it.

    That alone would’ve been enough. But trips like this tend to stir up more than just memories — they stir perspective.

    What Once Was …

    It reminded me of my grandfather.

    Albert Getson wrestling as the Green Hornet in the 1950s.

    His body was wrecked by years of professional wrestling as the Green Hornet. By the time I knew him, “playing” looked different. He’d lie on the couch, and I’d climb on top of him. He called it “playing on the second floor.”

    Me and my Grandfather in 1967.

    At the time, to me, it just felt like fun. Looking back, it was an adaptation. It was love, finding a way.

    And then there’s the harder realization: by my age, my grandpa was already dead, and my dad was already gone because of a cancer that would be caught much earlier and treated today.

So, yeah, feeling sore after playing with my granddaughter hits a little differently. It’s a reminder that I still get to show up and participate … that I still have time. That’s not something to take for granted.

    It reminds me of Ray Kurzweil’s “Longevity Escape Velocity,” which is the idea that medical and biotechnological progress will reach a point where, each year, remaining life expectancy increases by more than one year, so you are effectively “outrunning” aging over time.

    … but try not to die before that happens.
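The arithmetic behind "Longevity Escape Velocity" can be sketched in a few lines. This is a toy model under a single assumption (that medical progress adds a fixed number of years of remaining life expectancy per calendar year), not a forecast:

```python
# Toy model of "Longevity Escape Velocity" (illustrative only, not a forecast).
# Assumption: each calendar year, progress adds `gain` years of remaining life
# expectancy. Below 1.0, time wins; above 1.0, remaining life expectancy grows
# even as the years pass -- the "escape velocity" idea.

def years_remaining(start_remaining, gain, horizon):
    """Track remaining life expectancy over `horizon` calendar years."""
    remaining = start_remaining
    history = [remaining]
    for _ in range(horizon):
        remaining = remaining - 1 + gain  # one year passes; science gives back `gain`
        history.append(remaining)
    return history

# Before escape velocity (gain = 0.5): remaining years shrink over time.
print(years_remaining(20, 0.5, 4))
# At escape velocity (gain > 1.0): remaining years grow over time.
print(years_remaining(20, 1.2, 4))
```

The crossover at `gain = 1.0` is the whole point of Kurzweil's framing: the question isn't whether progress helps, but whether it helps faster than a year per year.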

    Don’t Touch That Dial …

    In part, that’s why this visit also had me thinking about technology.

    Who’s surprised?

    We were talking about air conditioning — how recent it really is in the grand scheme of things, and how quickly it’s become something we can’t imagine living without. Take it away, and most of us would struggle immediately.

    Or think about this: my great-grandmother was born before cars or planes existed.

    Or that widespread access to electricity in cities started to roll out in the 1920s.

    Think about how technologies like these have reshaped where and how people live. Entire regions went from inhospitable to must-see travel destinations.

    And then I think about my own timeline.

    I was born before hand-held calculators were invented or color TVs were standard.

My kids? They were born before Wi-Fi, before smartphones, before MP3s. They remember floppy disks, dial-up modems, and landlines. They remember printing directions or following someone who inevitably sped through a yellow light, leaving them guessing at the next turn.

    Some things haven’t changed, though. Human nature stays frustratingly the same. My father yelling at early robo-receptionists in the 1990s feels surprisingly modern.

    Through all of it, I’ve always taken a certain pride in being able to keep up. I may not set up my own tech anymore, but I still understand it well enough to be dangerous. My team sees it in the way I think through problems and, even more so, in the types of prompts I write.

    I enjoy working with AI. It gives me energy and hope.

    But this weekend was a reminder: there’s always another level.

    The More Things Change, The More They Stay The Same

    My youngest son works with me. My oldest son works in an AI-adjacent space. He is deeply technical, has the kind of mind that builds the systems the rest of us use, and he’s helped improve things you’d definitely recognize. For what it is worth, though, it has always surprised me how differently he and I use technology.

    We started talking about LLMs. I told him how impressed I was with the pace of progress and how much better it is than I imagined it could be in so little time.

    We talked about how the fear of missing out is so prevalent today because everyone knows somebody using AI for something they hadn’t thought of or doing something they wish they could.

    As our conversation progressed, I told him that a year and a half ago, I was focused on learning how to prompt better, but now I believe it’s more important to tell AI what you want and ask it to help figure out how to get it.

    As any good son would, he explained it with just a hint of … let’s call it “constructive skepticism” about my approach. He criticized what I was doing as still telling the AI too much and putting too many of my constraints on its ability to do things. He explained that the next generation of agentic swarms is designed to bypass those limitations.

    He then gave me a little demo, and I had FOMO again.

    And that’s kind of the point.

    No matter how much you think you understand something or how proud you are about what you can do, there’s always more.

    I almost want to describe the demo in detail and explain some of the business ideas it gave me. But the point isn’t about the technology; it’s about change (and what we make of that).

    The pace of change right now is staggering. These tools aren’t just improving year over year — they’re improving constantly.

    And that compresses everything.

    Learning curves. Advantage windows. Expectations.

    It also makes perspective more valuable, not less.

    Because when you zoom out far enough (from wrestling grandfathers to newborn grandsons, from no cars to self-driving ones, from no air conditioning to climate-controlled everything) you start to see the pattern.

    We adapt.

    We build.

    We take things for granted.

    And if we’re lucky, we get the chance to notice it while it’s happening.

    This weekend, I did … And it felt like a gift.

    Onwards!

  • Allbirds: The Sole of a Company Rewritten in Code

    Sometimes, the truth is stranger than fiction. For some reason, that seems truer than it used to.

    This week, Allbirds, the eco-friendly sneaker brand formerly valued at over $4 billion, announced it is exiting the shoe industry to shift completely into artificial intelligence infrastructure.

    Allbirds will sell its remaining intellectual property and shoe assets (for $39 million) and rebrand as NewBird AI.

    It’s not exactly a natural or expected move into AI, but the stock still shot up more than 500% on the news. Yikes! Stories like this make it harder to argue against the AI bubble discussion or that Wall Street is just frothing at anything in the space.

    Where to Start …

    While this sounds like AI-fabricated fake news, it’s real.

    And while it’s easy to dismiss this as a play for attention (or even a nod to meme-stock energy), the rationale runs deeper than that.

    Still, history matters. A late start in AI infrastructure, combined with a legacy brand built for something entirely different, creates as many challenges as opportunities.

    That said, choosing to pivot rather than shut it down makes more sense than it first appears.

    Why Not Just Cut Your Losses and Start Over?

    At first glance, this kind of move feels random. But it might make sense from a financial engineering perspective.

    If you want to become an AI infrastructure company, why not just start one? Clean slate. Clean story. No baggage from a struggling consumer brand that used to be on top of the world. No confused customers wondering why their favorite shoes are suddenly talking about GPUs.

    But that’s not really how the game works.

    Because what Allbirds has (despite everything) is something a brand-new AI startup doesn’t: structure.

    • It’s already public.
    • It already has access to capital markets.
    • It already has a ticker, a shareholder base, and the ability to raise money without starting from zero.

    To some people, that matters more than the logo on the door. And if you wanna play that game, you can monetize the logo on the door too.

    In a world where AI infrastructure is capital-intensive from day one, speed is necessary for survival.

    Starting a new entity means building credibility, raising initial funding rounds, assembling a board, and proving your thesis (often before you even get to compete). In this market, even after doing all that, some would argue they’re still behind.

    Repurposing an existing public company dramatically compresses that timeline.

    Investors understand the story: compute demand is exploding, infrastructure is scarce, and the winners could be massive. You’re no longer asking the market to believe in better shoes—you’re asking it to believe in a bigger trend.

    Fixing your brand positioning and supply chain, and recovering a business in steep decline, is a monumental task.

    AI is hot, and apparently a much shorter leap.

    Keeping the existing entity also allows management to use what’s left (cash, brand equity, public listing) as a kind of launchpad.

    In some ways, it’s closer to a merger with the future than a continuation of the past.

    The real question is whether that’s enough to earn a place in a market that’s already moving this fast.

    Still, The Other Shoe Drops …

    Of course, the pivot comes with tradeoffs.

    You inherit expectations that no longer match reality. You risk alienating the people who believed in the original mission.
    And you invite a certain amount of skepticism about whether this is ‘vision’ or gimmicky opportunism.

    It’s a risky play … but you never know. And you can’t win if you don’t play.

    Does The Glass Slipper Fit?

    Zoom out, and it fits a broader pattern.

    We’re in a moment where identity is fluid, timelines are compressed, and the cost of being late feels existential. Companies aren’t just evolving — they’re jumping tracks.

    Within that paradigm, you could argue that starting from scratch is the slower and riskier move.

    While it sounds silly … can you blame them?

It doesn’t mean they’ll be a success. There will be more losers than winners in this transition period. But at least they’re playing the game.

While that’s not a ship I’d want to be sailing on, I can’t blame them for trying to stay afloat.

  • Artemis II and the Pale Blue Dot

    Artemis II was a nine-day lunar flyby mission with a crew of four astronauts, launched on April 1, 2026. It was the first crewed NASA-led Artemis flight and the first human journey beyond low Earth orbit since Apollo 17 in 1972. 

During their lunar flyby, the crew set the record for the farthest distance humans have traveled from Earth, reaching 252,756 miles (406,771 km) and surpassing Apollo 13’s previous record of 248,655 miles (400,171 km).

    Friday, they splashed down safely in the Pacific Ocean.

    Artemis II astronauts Jeremy Hansen, Christina Koch, Victor Glover, and Reid Wiseman are seen onstage Saturday at Ellington Field at Johnson Space Center in Houston.  – Ronaldo Schemidt/AFP/Getty Images

    “Victor, Christina and Jeremy, we are, we are bonded forever, and no one down here is ever going to know what the four of us just went through … And it was the most special thing that will ever happen in my life.” – Reid Wiseman

    This is the kind of story that’s easy to file under ‘space news’ – but for entrepreneurs, investors, and leaders, it’s also a case study in how fast the frontier moves when compounding technology meets long‑term conviction.

    As we move forward, we’ll talk more about the emerging business landscape around space (from connectivity and Earth‑observation data to in‑orbit manufacturing, commercial stations, logistics, and even space‑based energy). But today’s piece is really about something more fundamental: marking a milestone on the path and widening our sense of where we are and what’s possible.

    From Humble Beginnings …

To appreciate how far we’ve come, I think it’s helpful to think about the early days of space travel. In 1977, Voyager 1 launched into space. Just over a dozen years later, it had traveled farther than any spacecraft/probe/human-made anything had gone before, approximately 6 billion kilometers from Earth. At that point, at Carl Sagan’s urging, Voyager 1 was “told” to turn around and take one last photo of the Earth … a pale blue dot.

    The resulting photo is impressive precisely because it shows so little in so much.

A photo showing one blue pixel, the Earth, taken by Voyager 1 in 1990

    “Every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.”  – Carl Sagan

    Earth is in the far-right sunbeam – a little below halfway down the image. This image (and the ability to send it back to Earth) was the culmination of years of effort, technological advancement, and the dreams of mankind.

    Carl Sagan’s Pale Blue Dot speech is still profound and moving. Invest three minutes to watch and listen.

    Carl Sagan via YouTube
     

    Here’s the transcript:

    Look again at that dot. That’s here. That’s home. That’s us.

    On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives.

    The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there–on a mote of dust suspended in a sunbeam.

    The Earth is a very small stage in a vast cosmic arena.

    Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds.

    Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.

    The Earth is the only world known so far to harbor life.

    There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand.

    It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known.

    How powerful a statement from a grainy pixel.

    … To New Heights

Today, we have people living in space, posting videos from the ISS, and high-resolution images of space and galaxies near and far. Artemis II shows we’re going back to the moon, and that’s only the beginning. We also recently talked about the other new goals and explorations already on the proverbial docket.

    We take for granted the scale of the technological phase shift. The smartphone in your pocket has more computing power than the systems that first took us to the moon – and it has for decades.

    As humans, we’re wired to think locally and linearly. We evolved to live our lives in small groups, to fear outsiders, and to stay in a general region until we die. We’re not wired to think about the billions and billions of individuals on our planet, or the rate of technological growth – or the minuteness of that all compared to the vastness of space.  

    However, today’s reality necessitates that we think about the world, our impact, and what’s now possible for us.

    We’ve created better, faster ways to travel, instantaneous communication networks across vast distances, and megacities. Our tribes have gotten much bigger – and with that, our ability to enact massive change has grown as well. 

    Space was the proving ground for many of today’s breakthrough technologies. Now, similar waves are building in AI, medicine, genetic engineering, robotics, and even ‘world‑building’—not just in virtual environments, but in how we design cities, companies, and economies. As leaders, our job is to spot these trajectories early, place disciplined bets, and build systems that can adapt as the frontier moves.

    It’s hard to comprehend the scale of the universe and the scale of our potential – but that’s exactly why it’s worth exploring. The view from a ‘pale blue dot’ reminds us that most of what feels urgent today won’t matter in a decade, but the systems we build and the bets we make will. This week, ask yourself: where are you still thinking locally and linearly in a world that rewards global, exponential thinking? 

    Onwards!

  • When Does Helpful Become Too Much? Rethinking Our Relationship with AI

    I went to dinner with my good friend John Raymonds and our sons this week. John is a deep thinker and an experienced entrepreneur. Unsurprisingly, the conversation turned to AI. I thought I’d share some of the things we examined and discussed.

    John and I at a Texas steak house with our sons,

    As we all admitted to using AI more often and for more things, what stood out wasn’t our agreement, but rather the tension around our use cases. While we’re all very bullish on AI (excited, even), we kept circling two questions: “How much AI is too much?” and “How important is it to preserve your own voice (and thinking) in the wake of AI content generation?” Neither had clean answers, yet both felt increasingly important and relevant.

    How much AI is too much?

    When you see AI everywhere all the time, you might have passed the point of diminishing returns (or you might be passionate about the discovery of something as important as the discovery of fire, electricity, or the Internet).

    The real issue is that “too much AI” is not about volume of usage but about when AI starts to (a) complicate your process or (b) dilute your voice. Ultimately, you must consciously decide where the line is for you.

    Here’s an example we talked about, stemming from Zach and me writing this weekly commentary together. I’ve been experimenting with something I call “content pillars.” That is where I combine various sources (including research, articles, notes, recordings, etc.) on a subject, run them through various AI tools with layered prompts, and distill everything into a multi-faceted outline that provides a more complete, multi-dimensional view of the material and its meaning. The goal is to get a better sense of the big picture (and make it easier to spot patterns, overlaps, contradictions, tensions, and agreements). I include most of the process steps in the content pillar. The result is dense, sometimes overwhelming, but undeniably rich. I used to do a lot of that in my head. This consolidates all of that in one place, making it easy to save for reference or reuse. To me, this was a step forward.
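The workflow above can be sketched in a few lines of Python. Everything here is hypothetical: `run_model` is a stand-in for whatever AI tool handles each layer (any chat-completion API would do), and the prompts are just examples of the kind of layering described:

```python
# A minimal sketch of the "content pillars" workflow: combine sources,
# then pass the result through layered prompts, each layer refining
# the previous layer's output into a multi-faceted outline.

def run_model(prompt: str, material: str) -> str:
    # Placeholder: in practice this would call an LLM of your choice.
    return f"[{prompt}]\n{material}"

def build_content_pillar(sources: list[str], layered_prompts: list[str]) -> str:
    """Distill combined source material through a sequence of prompts."""
    combined = "\n\n".join(sources)   # research, articles, notes, recordings, etc.
    output = combined
    for prompt in layered_prompts:    # e.g. summarize -> find tensions -> outline
        output = run_model(prompt, output)
    return output                     # the dense, multi-dimensional "pillar"

pillar = build_content_pillar(
    ["article text ...", "interview notes ..."],
    ["Summarize the key claims.",
     "List contradictions and tensions.",
     "Produce a multi-dimensional outline."],
)
```

The design choice worth noticing is that each layer consumes the whole of the previous layer's output, which is exactly how the richness (and the bloat) compounds.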

    My son pushed back.

    At some point, he argued, the process becomes so complex that it requires its own layer of AI just to consume it. The time investment balloons. What started as a tool to simplify thinking and streamline our process can quietly become something that complicates it.

    He’s not wrong.

    But I don’t think that invalidates the process … or the end result. Beyond the output, there’s value in building and using these systems (exploring, experimenting, and stretching a different kind of mental muscle). The process itself becomes the product, at least in part.

    Still, the question lingers: if the tool designed to accelerate us begins to slow us down, where’s the line?

    I won’t really attempt to answer that here. However, I will note that I originally created the process to augment, automate, and extend parts of my work. Over time, I refined the process to the point where I wanted to share it. That’s when I was confronted by a stubborn truth I’ve battled many times: a process designed for you won’t necessarily please others.

    What I found is that very few people want information in the quantity, velocity, depth, or breadth that I would choose. In fact, it became clear to me that I wasn’t even the real audience anymore. As I continued to build these content pillars, they expanded as I began to view each pillar as a new, richer data set to feed the machine (rather than producing something I’d want to consume myself or share with others).

But the point remains: if you design something to satisfy a machine, it shouldn’t surprise you when it doesn’t satisfy a human.

    How much does voice matter?

    The second tension was more subtle and subjective.

    Something shifts as AI becomes more embedded in writing, editing, and content generation. It doesn’t take a literary genius to see or feel it.

    Sentences smooth out. Paragraphs tighten. Structure improves. But sometimes, the voice flattens. Most of us can tell when something is written by AI, even if we can’t always tell when something is written with the help of AI.

    Yet, even tools like Grammarly optimize toward familiarity. They rely on proven patterns, common phrasing, and widely accepted “good writing.” The result is predictably better writing… but also just predictable writing.

    Of course, there’s a tradeoff.

    AI enables depth. It helps us see angles we might have missed, incorporate ideas we wouldn’t have found, and build more comprehensive pieces. It has also been vital in catching when we’re making assumptions or claims without backing them up. Our writing becomes more informed, more structured, and often more valuable to the reader.

    But at what cost?

    I see it firsthand. My son spends extra time pulling our writing back toward something that feels like us (restoring tone, rhythm, personality). It’s deliberate work, and it can be frustrating.

    Our previous rhythm was relatively painless, but we’d also plateaued in the caliber and tone of our articles.

    So the question becomes: Is added value worth a diluted voice? Or is voice itself part of the value we’re trying to create?

    Could we spend extra time improving our prompts so that our voice is more carefully curated? If the voice is there but we didn’t write it, is it still our article?

    Different generations, different instincts

    What became clear over dinner wasn’t just disagreement—it was a difference in posture.

    John and I are leaning in hard. There’s a kind of curiosity that borders on recklessness. We’re exploring, testing limits, and integrating AI into everything we can. Not because we have to, but because we want to see what’s possible. John even built a niche AI app recently, just to prove he could.

    There’s joy in that.

    Our sons, on the other hand, seem to play a different role. Not resistant or disengaged (they both use these tools extensively), but more measured. More aware of the tradeoffs and the nature of their parents. More willing to question whether efficiency is always the goal.

    If we are accelerating, they are steering.

Perhaps it’s a result of them being so close to us that they end up playing “defense.” And maybe that balance matters more than either side being “right.”

    We hear it all the time: too much of a good thing becomes a bad thing.

    But AI complicates that idea. Because it’s not just a tool — it’s a multiplier of output, of speed, of ideas … and of noise.

    So how do we know when we’ve crossed the line?

    That is a question worth sitting with.

    Maybe it’s not about a universal threshold. Maybe it’s more personal, more situational. Maybe the better question isn’t “how much is too much?” but:

    • Is this helping me think more clearly, or just more quickly?
    • Is this enhancing my voice, or replacing it?
    • Am I using the tool, or adapting myself to fit the tool?

    There may not be definitive answers yet.

    But the act of asking — of pausing long enough to notice how these tools are shaping not just what we produce, but how we think — might be the most important habit we can build right now.

  • Happy Easter: Spring As An Opportunity For Rebirth

Today is Easter – and also part of Passover, the Jewish holiday that recounts the story of Exodus.

    The overlap is evident in Da Vinci’s Last Supper, which depicts a Passover Seder (the traditional meal that commemorates the Exodus) and Jesus’s last meal before his crucifixion. 

    Part of the Passover Seder tradition involves discussing how to share the story in ways that resonate with different people, recognizing that everyone understands and relates to things differently. This echoes our previous discussion on happiness and how that feeling varies for each of us.

    To do this, we examine the Passover story through the lens of four archetypal children — the Wise Child, the Wicked Child, the Simple Child, and the Child Who Does Not Know How to Ask.

    The four children reflect different learning styles — intellectual (Wise), skeptical (Wicked), curious (Simple), and passive (Silent) — and highlight the need to adapt communication to the diverse personalities and developmental stages of our audience.

    This seems even more relevant today, as we struggle to come to a consensus on what to believe and how to communicate with people (or machines) who think differently. 

On a lighter note, one of the memorable phrases from Exodus is Moses’ “Let my people go!” For generations, people assumed he was talking to the Pharaoh about his people’s freedom. But after a week of eating clogging foods like matzoh, matzoh balls, and even fried matzoh … for many Jews, “Let my people go” takes on a different meaning.

After Passover, and as we enter a new season, it’s a great time for a mental and physical ‘Spring Cleaning’: a chance to delve into your experiences and cultivate more of what you desire and less of what you don’t.

    Here is to Spring, Re-Birth, and Spring Cleaning.

    As a reminder, it doesn’t take a new year to start good habits.

    Hope you had a great and meaningful weekend.

  • Are You Happy? Or Do You Just Think You Are?

    I am often amazed at how little human nature has changed throughout recorded history.

    Despite the exponential progress we’ve made in health, wealth, society, tools, and understanding … we still struggle to find meaning, purpose, and happiness in our lives and our existence.

    Last week, I shared an article on Global Happiness Levels in 2026. Here are a few bullets that summarize the findings: 

    • We underestimate others’ kindness, but it’s more common than we think.
    • Community boosts happiness—eating and living with others matter. Social media is a poor replacement and can actually be hurting your sense of community.
    • Despair is falling globally, except in isolated, low-trust places like the U.S.
    • Hope remains—trust and happiness can rebound with connection and a sense of purpose.

    That post didn’t attempt to define happiness. Instead, it categorized data on people’s reported feelings about happiness. This post will focus philosophically on the definition of happiness.

    While it seems like a simple concept, happiness is complex. We know many things that contribute to and detract from it; we know humans strive for it, but it is still surprisingly challenging to give it a uniform definition. 

    A few years ago, a hobbyist philosopher analyzed 93 philosophy books, spanning from 570 BC to 1588, in an attempt to find a universal definition of Happiness. Here are those findings.

    via Reddit.

    It starts with a simple list of definitions from various philosophers. It does a meta-analysis to create some meaningful categories of definition. Then it presents the admittedly subjective conclusion that:

    Happiness is to accept and find harmony with reason

My son, Zach, pointed out that while “happiness” is a conscious choice, paradoxically, the “pursuit of happiness” often backfires. Why? Because happiness is a result of acceptance. However, when happiness is the goal, people often focus on what they lack … rather than what they already have or the progress they’ve made. That perspective shift drops them into the ‘Gap’ instead of letting the ‘Gain’ lift them.

So, it got me thinking – and that led me to play around with search and AI a little to broaden my data sources and perspectives. If you would like to view the raw data, here are the notes I compiled (along with the AI-generated version of what this article could have been if it had been left to AI rather than to Zach and me). A side note: It made Zach happy that we didn’t include them in the article, while it made me happy to collect them and keep them in my notes.

    Across centuries, philosophers have wrestled with a deceptively simple question: What does it mean to live a good life?

    As entrepreneurs, investors, and leaders, we often chase performance, innovation, or edge — but underneath it all, there’s a quieter inquiry: Am I living well?

Happiness aside, across 93 influential philosophical texts spanning two millennia, one word consistently reappears: Eudaimonia. This is not happiness in the modern sense of pleasure, but a richer concept of human flourishing: a life filled with purpose, virtue, and meaning.

    Ancient thinkers didn’t view happiness as just a passing feeling but as something deeper — a life lived in line with purpose and virtue. Some focused on developing strong inner character, while others believed it came from living in harmony with nature or a higher power. There has always been debate about how much external factors like wealth, luck, or relationships genuinely matter, and that question still isn’t fully settled. By the time the Renaissance arrived, the discussion began to shift more toward personal, subjective experience. But across all these perspectives and eras, one idea keeps recurring: happiness is something you nurture over time, not just something you consume.

    Contradictions and Tensions

Thoughts on happiness contain paradoxes, contradictions, and tensions. Insights tend to occur at the boundaries between what you are certain of and what you are uncertain of.

    Here are a few to get you started.

    • Virtue vs. External Goods: Aristotle acknowledges external goods (wealth, friends) as necessary for complete happiness, while Stoics claim virtue alone suffices. This tension challenges the simplicity of virtue-based happiness, suggesting a nuanced balance between inner character and outer circumstances.
• Subjective vs. Objective Happiness: Ancient philosophers often defined happiness as an objective state (living virtuously or intellectually flourishing), whereas modern definitions more often emphasize subjective satisfaction that varies from individual to individual. This tension probes whether happiness is a universal or personal experience.
    • Happiness as Pleasure vs. Happiness as Duty/Struggle: Epicureanism equates happiness with pleasure (the absence of pain), whereas Cynics and Stoics emphasize enduring hardship and discipline as the path to happiness, creating a paradox between comfort and resilience.

    Three Metaphors To Help You Think About Happiness

    As we’ve seen, trying to answer the question “what is happiness?” quickly leads to a mix of data, perspectives, and even contradictions — between internal and external factors, momentary feelings and lifelong fulfillment, control and circumstance.

    To help make sense of that complexity, metaphors offer a useful kind of structure.

    Think of the Stoic ship captain: you can’t control the ocean, but you can steer your ship, anchoring the idea that happiness is tied to how we manage our internal world amid external uncertainty. Or, Plato’s vision of the soul as a team adds another layer: reason guides spirit and appetite, suggesting that happiness depends on internal alignment and self-governance. And Aristotle’s garden reminds us that happiness isn’t a single outcome, but something cultivated over time, shaped by effort, environment, and care. Together, these frameworks don’t resolve every tension, but they give us a clearer way to navigate them … turning an abstract question into something we can actually work with.

    Happiness Isn’t a Destination — It’s a Design

    The philosophers didn’t agree on everything, but they aligned on one thing: happiness isn’t something you find. It’s something you build.

    Perhaps happiness is less about chasing peaks and more about tending the kind of life you won’t regret.

    I’m curious how this lands for you — especially if you’re building something big.

    Reach out and let me know what happiness looks like from your vantage point.

    Onwards!

  • The End of Sora and the Future of OpenAI

    This week, OpenAI announced it would be shutting down Sora, its popular AI video app. This is not just about killing a video toy; it signals a strategic pivot at OpenAI.

    You probably weren’t Sora’s target user, but watching this montage of its top clips is a great way to see how far this impressive tech has come.

    Top Sora Clips Video via YouTube.

    It’s both fun and scary to think about how fast technologies like this have evolved … and what they will make possible.

    It’s easy to think Sora’s shutdown isn’t a big deal … but it’s a signal of OpenAI’s new playbook on infrastructure, partnerships, and profit.

    And with that new playbook, OpenAI announced several other important changes this week. Here are a few of the highlights.

    The End of Their Disney Partnership

    Shutting down Sora also forced the termination of a major $1 billion investment deal between OpenAI and Disney, as well as licensing agreements that allowed the use of Disney-owned characters in AI-generated video content.

    It’s a reminder that when OpenAI prunes products like Sora, it’s also pruning capital-intensive bets and risky content partnerships.

    Pushing Pause on “Adult Mode”

    Last October, Sam Altman announced plans for an erotica mode. However, the tension between boldness and caution shows up in the gap between OpenAI’s ‘not the morality police’ rhetoric and its quiet slowdown on controversial features.

    The Financial Times later reported that the pause is “indefinite,” with Cristina Criddle citing “sexual datasets and eliminating illegal content” as challenges for OpenAI. This reflects the growing regulatory and reputational risk around generative sexual content.

    ChatGPT Just Got More Reliable

    OpenAI updated ChatGPT with a 33% reduction in factual errors, plus a significantly expanded memory for longer conversations.

    Changes like these hint at where OpenAI wants to focus: scalable, everyday systems that drive recurring revenue.

    And it doesn’t stop there …

    The Great DRAM Over-Buy

    Originally, it was reported that OpenAI had secured forward commitments for up to 40% of the world’s DRAM supply. This was to help their future data center growth as AI demand increases.

    In plain English, DRAM is the short-term memory that lets these models think; if you want bigger, smarter models, you need a lot of it.
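    To make the “bigger models need more memory” point concrete, here’s a minimal back-of-envelope sketch (my own illustration, not from the article): a model’s weights alone require roughly the parameter count times the bytes per parameter, and real serving needs still more memory for activations and caches.

    ```python
    # Hedged back-of-envelope sketch: why model scale drives memory demand.
    # Counts only the weights; real deployments need extra DRAM for
    # activations, KV caches, batching, etc.

    def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
        """Approximate memory (in GB) needed just to hold model weights."""
        return num_params * bytes_per_param / 1e9

    # A hypothetical 70-billion-parameter model at 16-bit (2-byte) precision:
    print(weight_memory_gb(70e9))  # 140.0 -> ~140 GB for the weights alone
    ```

    Multiply that by fleets of serving instances and training clusters, and it’s easy to see why labs lock in DRAM supply years in advance.
    
    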

    As these announcements roll in, many are also scrutinizing how much RAM OpenAI locked up in advance.

    With this, I think the memory bull run (which began over 2 years ago) is coming to an end. Many of the large AI labs have secured more DRAM via forward contracts than what they will realistically need. This has created the sense of an artificial shortage supported by essentially FOMO on DRAM supply. Like in previous cycles, this will unwind.
    Seeking Alpha

    With Google’s new TurboQuant AI compression algorithm, and OpenAI switching focus, many see the drop in RAM prices as more than a blip — potentially a real change in the cycle.

    Where OpenAI Goes Next …

    From Owning to Orchestrating Infrastructure

    After initially pursuing massive, vertically integrated infrastructure through its multi-hundred-billion-dollar Stargate initiative, OpenAI has begun shifting toward a more flexible, capital-efficient model.

    If labs over-bought memory during the AI gold rush, then shifting from owning massive data centers to orchestrating capacity from partners starts to look less like backtracking and more like smart risk management.

    Instead of owning and operating the bulk of its global compute footprint, OpenAI is increasingly leaning on partnerships and leased capacity from cloud providers. Internally, this has been reflected in a restructuring that separates infrastructure design, partner management, and operations — signaling a shift from a “build everything” strategy to a “coordinate and optimize” approach (e.g., using multiple cloud providers, negotiating for power in different regions, etc.).

    At the same time, the company is clearly narrowing its product focus.

    Video apps like Sora are entertaining for users, but they’re also brutally compute-intensive for the providers. Looking at Anthropic’s revenue and that of other competitors, it’s clear that chat, code, and enterprise use are where the immediate growth and low-hanging fruit lie.

    How This Fits the Longer-Term Plan

    AI has already consumed massive funding to get here — and it will require even more to reach the next plateau.

    Rather than a retreat, this shift aligns with a longer-term strategy: preserving capital, accelerating deployment, and keeping options open in a rapidly evolving compute landscape. Leveraging partners allows OpenAI to scale faster while avoiding bottlenecks tied to financing, power availability, and hardware cycles.

    In that context, “Stargate” appears to be evolving—from a fixed set of owned assets into a broader, more modular strategy for bringing compute online wherever it is most efficient.

    The end goal hasn’t changed: securing enough compute to train and deploy increasingly powerful AI systems. What has changed is the path — shifting from infrastructure ownership to infrastructure orchestration, and from experimental breadth to commercial depth.

    This aligns with their move from non-profit to IPO. They’re clearly focused on profitability in the near term, not just the long term.

    But these shifts could also signal changes that open opportunities for more players to enter the space and carve out their own slice of the digital landscape.

    I’ll continue to watch how OpenAI manages the delicate balance between rapid innovation, financial pressures, and the broader public good. The story is still unfolding, and what happens next will shape the technological future we all live in.

    How It Shows Up in Everyday Use

    All of this might sound abstract, but you can feel these shifts in everyday usage too. If you’re curious, I use a paid version of ChatGPT throughout the day. I’ve gotten used to it; I understand when to listen and when to ignore it. With that said, I’ve also been happy to pay for Perplexity (but I use it in much more limited circumstances). It gives me access to different models, and I feel like it’s been a good value. However, today I finally decided to pay for Anthropic as well because the quality of the responses I’ve been getting has led me to change my usage behavior.

    Interestingly, if I ask different models a question and then show their answers to ChatGPT, ChatGPT often favors Claude’s responses as well.

    I know all of that is subject to change, and tools are leapfrogging one another with increasing frequency. With that said, I thought it was worth sharing.

    Let me know which tools you use and rely on most.

    Onwards!