Web/Tech

  • Choosing To Be Mindful in the Age of AI

    In the age of AI, we’re obsessed with better answers. But the real leverage may come from better questions.

    It’s easier to solve someone else’s problem than your own. Why? Because your biases, emotions, and problem-solving frameworks become part of the problem. Likewise, your blind spots likely go unexamined when you’re both the observer and the subject.

    As an entrepreneur, I strive to be objective about the decisions I make. Towards that goal, using key performance indicators, getting different perspectives from trusted advisors, and relying on tried-and-true decision frameworks all help. 

    Mindfulness as a Decision Framework

    Combining all three creates a form of “mindfulness”: dispassionately observing from a vantage point that takes every perspective into account.

    That almost-indifferent, objective approach is also where exponential technologies like AI excel. They amplify intelligence by helping us make better decisions, take smarter actions, and continually improve performance.

    In 2021, I shot a video about mindfulness and the future of AI. I think it has held up remarkably well.

    via YouTube

    When I shot this video, AI was still relatively limited.

    In just a few years, the technology has come so far. When I originally published the video, I suggested that:

    The future of AI will likely be based on swarm intelligence, where many specialist components communicate, coordinate, and collaborate to view a situation more objectively, better evaluate the possibilities, and determine the best outcome in a dynamic and adaptable way that adds a layer of objectivity and nuance to decision-making.

    Five years later, that prediction has largely materialized. Multi-agent frameworks, retrieval-augmented generation, and tool-using LLMs now orchestrate specialized components to tackle complex problems. The architecture isn’t identical to biological swarm intelligence, but the principle holds: better decisions emerge from coordinated, specialized perspectives, and from understanding the actual purpose of your tools.
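    To make that principle concrete, here’s a toy sketch of coordinated, specialized perspectives: several “specialist” evaluators each score a proposal, and a coordinator aggregates their views into one recommendation. The names and heuristics are illustrative stand-ins; in a real multi-agent framework, each function would be an LLM call or tool.

```python
# Toy sketch of "coordinated, specialized perspectives."
# Each specialist is a placeholder heuristic, not a real agent.

def risk_specialist(proposal: str) -> float:
    """Score 0-1: how risky does this look? (toy heuristic)"""
    return 0.8 if "untested" in proposal else 0.2

def cost_specialist(proposal: str) -> float:
    """Score 0-1: how expensive does this look? (toy heuristic)"""
    return 0.7 if "scale" in proposal else 0.3

def upside_specialist(proposal: str) -> float:
    """Score 0-1: how big is the potential payoff? (toy heuristic)"""
    return 0.9 if "new market" in proposal else 0.4

def coordinate(proposal: str) -> dict:
    """Combine the specialist views into a single recommendation."""
    scores = {
        "risk": risk_specialist(proposal),
        "cost": cost_specialist(proposal),
        "upside": upside_specialist(proposal),
    }
    # Simple rule: pursue only if upside outweighs the average downside.
    downside = (scores["risk"] + scores["cost"]) / 2
    scores["recommendation"] = "pursue" if scores["upside"] > downside else "hold"
    return scores

result = coordinate("untested product for a new market at scale")
```

    The structure, not the heuristics, is the point: no single evaluator decides, and the aggregate view is harder to bias than any one perspective.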

    What Hasn’t Changed

    AI is a powerful solution for a seemingly infinite number of problems. But, much like the internet, it’s easy to get distracted by shiny objects, flashy intrusions, or compelling answers.

    It is important to stay mindful and diligent as you apply AI and AI agents to your business.

    Many of my friends are getting excited about these tools, and they’re using them for countless tasks, but they’re not necessarily doing a good job of evaluating whether they should be.

    Sometimes, you shouldn’t even be looking for the right answer; you should be looking for the right question.

    The Importance of Better Questions

    One of the lessons I teach to our younger employees is that an answer is not THE answer. It’s intellectually lazy to think you’re done simply because you come up with a solution. There are often many ways to solve a problem, and the goal is to determine which yields the best results.

    Even if you find THE answer, it is likely only THE answer temporarily. It is a step in the right direction that buys you time to learn, improve, and re-evaluate.

    Mindfulness comes from slowing down, stepping back, and looking at something from multiple perspectives, and AI can be a powerful tool for that when used intentionally. It can help us explore different viewpoints, challenge assumptions, and think more broadly.

    But the greatest benefit of AI may not be in generating better answers. More often, it comes from helping us ask better questions.

    Used mindfully, AI becomes less of a shortcut to conclusions and more of a tool for deeper thinking.

    Recently, I’ve started using AI to sharpen my questions, and it’s changing the way I approach problems. At first, that sounds abstract, but in practice it forces a very different kind of thinking. Instead of immediately searching for conclusions, you start asking what actually makes a question “better” in the first place. How do you move from a vague sense of uncertainty to a question precise enough to reveal something useful?

    When I’m evaluating a project now, I rarely ask AI something broad like, “Is this a good opportunity?” Questions like that usually produce predictable answers. Instead, I use AI to pressure-test my own thinking. I’ll ask it to identify the assumptions underneath the idea, explore what would have to be true for the project to fail, or point out the questions I haven’t considered yet. The process feels less like outsourcing thought and more like refining it.

    That shift — from answer-seeking to question-sharpening — has changed how I handle ambiguity and make decisions. It has also changed what I consider trustworthy. I’ve started building what I think of as a “question pattern library”: prompts and frameworks that consistently help add structure to messy situations. Some questions help clarify the framing by forcing you to define the real decision being made rather than reacting to surface-level symptoms. Others establish criteria, helping determine how success should actually be measured before debating solutions. And some are designed to expose bottlenecks by identifying which assumption, if proven false, would completely change the next step.

    Over time, I’ve realized these questions work best when they build on each other. At important checkpoints, I’ll often run through a simple sequence: What became clearer? What does this change? Why does it matter? What’s the next best move? The answers themselves matter less than the way the questions force clearer thinking.
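    For readers who think in code, the “question pattern library” and checkpoint sequence described above can be sketched as plain data plus a helper that turns a pattern into a prompt. The categories and wording below are illustrative examples, not a fixed framework.

```python
# A minimal sketch of a "question pattern library": reusable question
# templates grouped by what they're meant to expose, plus the simple
# checkpoint sequence. All wording here is an example, not a standard.

QUESTION_PATTERNS = {
    "framing": [
        "What is the real decision being made here, beneath the symptoms?",
        "What would have to be true for this project to fail?",
    ],
    "criteria": [
        "How should success actually be measured before debating solutions?",
    ],
    "bottlenecks": [
        "Which assumption, if proven false, would change the next step entirely?",
    ],
}

CHECKPOINT_SEQUENCE = [
    "What became clearer?",
    "What does this change?",
    "Why does it matter?",
    "What's the next best move?",
]

def build_prompt(topic: str, category: str) -> str:
    """Turn a pattern category into a prompt you could hand to an AI."""
    questions = "\n".join(f"- {q}" for q in QUESTION_PATTERNS[category])
    return f"Before proposing answers about {topic}, work through:\n{questions}"

prompt = build_prompt("a new product launch", "framing")
```

    Keeping the patterns as data (rather than ad hoc prompts) is what makes them reusable: the same framing questions apply whether the topic is a product launch or a hiring decision.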

    The more I use AI this way, the more I think its greatest value may not be generating better answers at all. Used mindfully, its real strength is helping us examine our own thinking more carefully. Better questions create better distinctions, and better distinctions usually lead to better judgment. So before asking AI for an answer this week, it may be worth asking it to help you frame a better question first. You might discover that the most valuable part of the interaction isn’t the response, but the thinking process that led to it.

  • The New AI Leaderboard … and the Cost of Staying On It

    For decades, I’ve been an early adopter of technologies. I love exploring tools to get an idea of where things are going and what’s possible.

    In part, that means I don’t wait for things to settle down and a clear winner to arrive. Instead, I tend to try several tools that claim to do something that excites me.

    On one hand, my wife questions whether this is a waste of time, energy, and money. But the practical realities of technology businesses make it a workable strategy for me in my role.

    Companies have different levels of access to talent, opportunities, and resources. Consequently, the first tool that does something cool isn’t necessarily the one that takes off or gets big (or the one that continues to play the game, even if it does so slowly, committed to getting better till it wins). This is especially true in highly contested areas like large language models.

    A Look At My AI Usage …

    Like many of you, I use many AI tools every day. I pay for ChatGPT, Claude, Perplexity, and Microsoft Copilot. I also pay for limited subscription access to Google Gemini and Elon Musk’s Grok (and for a host of other useful special-purpose tools like Grammarly, Granola, and Wispr Flow).

    For a while, ChatGPT has been my default. Projects tend to start there and end there. It’s been my source of comfort.

    Even though I start in ChatGPT, I might then show it to Perplexity and say, “Hey, here’s something I built in ChatGPT. What do you think and what would you change?” This process often results in a new idea or a different perspective. I tend to bring those ideas or perspectives back to ChatGPT, saying, “Hey, Perplexity recommended this … What do you think?”

    As you might guess, I’ve tried various iterations of that game. For example, I might start something in Perplexity or Google Gemini … but over time (at least for the type of work that I do), ChatGPT earned its place as my default.

    Now, in part, I’m writing this post because Claude has started taking more and more of my cycles: the answers it gives, the user interface, the integrations. It’s really interesting to see how fast it’s improving. Obviously, ChatGPT just released a new interim version to counter the momentum shift Claude is gaining from so much favorable press.

    There’s another reason that I know Claude is getting better. It’s still critical of things I produce in other models, but other models are increasingly impressed with what I produce in Claude.

    That’s notable because AI systems typically prefer their own outputs. The fact that one model regularly elevates another suggests something else is happening.

    Meanwhile, the gap at the top is narrowing. And it’s changing quickly in part because people share outputs from one model with another. This process is a form of cross-pollination that allows LLMs to see (and learn from) a wider range of perspectives and techniques.

    So, objectively, which models are really the best? That’s where things get murky. Benchmarks try to answer the question, but they only capture slices of capability.

    The Smartest AI Models of 2026 … Well, April 2026

    via visualcapitalist

    Lists like this are less a “stamp of approval” and more a snapshot in time. Models aren’t just getting better every day; new models built on radically different architectures and capabilities are also being released in ever-shorter cycles.

    In the list above, Grok-4.20 Expert Mode and OpenAI GPT 5.4 Pro (Vision) tie for the top spot (based on TrackingAI’s April 2026 Mensa Norway benchmark), each scoring 145. The top tier is becoming more crowded, with leading models separated by just a few points. Scores have increased significantly since 2025, demonstrating the rapid progress in frontier AI reasoning on visual pattern-recognition tests. But even that doesn’t account for the fact that ChatGPT’s version 5.5 was released this week.

    While this is only one test of AI capabilities, it’s very interesting to see how close the best models have gotten. It’s also worth noting that in 2025, the highest score was 135.

    Meanwhile, use of these tools is skyrocketing.

    Using cutting-edge AI isn’t a differentiator anymore — it’s the price of admission. The real question isn’t who has the best AI; it’s who can afford to keep up with the pace of change.

    Which raises a more important question than “Who’s winning?”:

    Can AI Firms Afford to Keep Up?

    Last week, I talked about my eldest son lightly teasing me for still trying to overly direct Claude in performing tasks. It’s not just indicative of me getting older … it’s a broader, faster shift.

    via visualcapitalist

    Early AI development was talent-driven. The limiting factor was human capital — researchers, engineers, and domain experts pushing systems forward.

    That constraint is shifting. Today, leading AI firms are increasingly defined by access to compute. Training, fine-tuning, and running these models at scale require massive infrastructure investments, often dwarfing even the highest salaries in tech.

    Anthropic spent almost $7 billion on compute in 2025.

    Talent still matters, but it’s no longer the primary bottleneck.

    Can You Afford To Keep Up?

    As companies start leaning more heavily on tools like ChatGPT and Claude, the economics get a little less straightforward.

    At first, AI feels like a no-brainer. You’re getting more done, faster: emails, summaries, code, all of it. And the cost? It barely registers. A few cents here and there, easy to ignore. But then usage creeps up. And with automation and agents, what was occasional becomes constant. It gets baked into workflows, products, and day-to-day habits.

    And since everything runs on tokens, the meter is always running in the background.

    Suddenly, AI stops feeling like “free leverage” and starts acting more like a quiet, always-on teammate. A fantastic and efficient teammate … but one that happens to bill you for every task, and more when you ask it to show its work. At that point, it’s not surprising that the costs can stack up to something meaningful. At scale, AI can now cost more than human workers.
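    A back-of-the-envelope model shows how this happens. The per-token prices below are placeholder assumptions (not any vendor’s actual rates); the point is how “occasional” usage becomes a constant line item once agents run all day.

```python
# Back-of-the-envelope token cost model. Prices are ASSUMED
# placeholders for illustration, not any vendor's real rates.

PRICE_PER_1M_INPUT = 3.00    # assumed $ per 1M input tokens
PRICE_PER_1M_OUTPUT = 15.00  # assumed $ per 1M output tokens

def monthly_cost(calls_per_day: int, in_tokens: int, out_tokens: int,
                 days: int = 30) -> float:
    """Estimated monthly spend for one workflow at steady usage."""
    daily = calls_per_day * (
        in_tokens * PRICE_PER_1M_INPUT / 1_000_000
        + out_tokens * PRICE_PER_1M_OUTPUT / 1_000_000
    )
    return daily * days

# A person asking a few questions a day barely registers...
human_use = monthly_cost(calls_per_day=10, in_tokens=1_000, out_tokens=500)

# ...but an agent looping through a workflow is a different story.
agent_use = monthly_cost(calls_per_day=5_000, in_tokens=4_000, out_tokens=2_000)
```

    Under these assumed rates, the casual pattern costs a few dollars a month while the agentic pattern runs into the thousands; the multiplier comes almost entirely from call volume, which is exactly what automation increases.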

    That’s not a knock on AI — it’s just the reality of using industrial-grade AI at scale.

    It’s easy to think of AI as a pure efficiency gain, something that just improves margins. But in practice, it’s both sides of the equation. It drives output, and it adds cost. The companies building these tools have always known that. Now the companies using them are starting to see it too.

    I’m fully committed to AI, and yet I somehow continue to explore even further. But the deeper you delve, the more important it becomes to pause and catch your breath.

    Activity isn’t progress if it doesn’t move you in the right direction.

    Onwards!

  • The Distance Between Then And Now

    We just got back from Portland, where we were visiting my oldest son — and meeting my newborn grandson.

    It was a great trip. Nothing monumental happened, but years from now, we’ll continue to look back on it fondly.

    I got to hold my grandson for the first time. I got to play with my granddaughter. And I got to remember how much work play takes when you are doing it intentionally. Lifting her up, bouncing her on my leg, jumping, reading, getting down on the floor to see the world from her height. Let’s just say, my body is reminding me of how much fun we had. But it was worth it.

    That alone would’ve been enough. But trips like this tend to stir up more than just memories — they stir perspective.

    What Once Was …

    It reminded me of my grandfather.

    Albert Getson wrestling as the Green Hornet in the 1950s.

    His body was wrecked by years of professional wrestling as the Green Hornet. By the time I knew him, “playing” looked different. He’d lie on the couch, and I’d climb on top of him. He called it “playing on the second floor.”

    Me and my Grandfather in 1967.

    At the time, to me, it just felt like fun. Looking back, it was an adaptation. It was love, finding a way.

    And then there’s the harder realization: by my age, my grandpa was already dead, and my dad was already gone because of a cancer that would be caught much earlier and treated today.

    So, yeah, feeling sore after playing with my granddaughter hits a little differently. It’s a reminder that I still get to show up and participate … that I still have time. That’s not something to take for granted.

    It reminds me of Ray Kurzweil’s “Longevity Escape Velocity,” which is the idea that medical and biotechnological progress will reach a point where, each year, remaining life expectancy increases by more than one year, so you are effectively “outrunning” aging over time.

    … but try not to die before that happens.

    Don’t Touch That Dial …

    In part, that’s why this visit also had me thinking about technology.

    Who’s surprised?

    We were talking about air conditioning — how recent it really is in the grand scheme of things, and how quickly it’s become something we can’t imagine living without. Take it away, and most of us would struggle immediately.

    Or think about this: my great-grandmother was born before cars or planes existed.

    Or that widespread access to electricity in cities started to roll out in the 1920s.

    Think about how technologies like these have reshaped where and how people live. Entire regions went from inhospitable to must-see travel destinations.

    And then I think about my own timeline.

    I was born before hand-held calculators were invented or color TVs were standard.

    My kids? They were born before Wi-Fi, before smartphones, before MP3s. They remember floppy disks, dial-up modems, and landlines. They remember printing directions or following someone who inevitably sped through a yellow light, leaving you guessing at the next turn.

    Some things haven’t changed, though. Human nature stays frustratingly the same. My father yelling at early robo-receptionists in the 1990s feels surprisingly modern.

    Through all of it, I’ve always taken a certain pride in being able to keep up. I may not set up my own tech anymore, but I still understand it well enough to be dangerous. My team sees it in the way I think through problems and, even more so, in the types of prompts I write.

    I enjoy working with AI. It gives me energy and hope.

    But this weekend was a reminder: there’s always another level.

    The More Things Change, The More They Stay The Same

    My youngest son works with me. My oldest son works in an AI-adjacent space. He is deeply technical, has the kind of mind that builds the systems the rest of us use, and he’s helped improve things you’d definitely recognize. For what it’s worth, though, it has always surprised me how differently he and I use technology.

    We started talking about LLMs. I told him how impressed I was with the pace of progress and how much better it is than I imagined it could be in so little time.

    We talked about how the fear of missing out is so prevalent today because everyone knows somebody using AI for something they hadn’t thought of or doing something they wish they could.

    As our conversation progressed, I told him that a year and a half ago, I was focused on learning how to prompt better, but now I believe it’s more important to tell AI what you want and ask it to help figure out how to get it.

    As any good son would, he explained it with just a hint of … let’s call it “constructive skepticism” about my approach. He criticized what I was doing as still telling the AI too much and putting too many of my constraints on its ability to do things. He explained that the next generation of agentic swarms is designed to bypass those limitations.

    He then gave me a little demo, and I had FOMO again.

    And that’s kind of the point.

    No matter how much you think you understand something or how proud you are about what you can do, there’s always more.

    I almost want to describe the demo in detail and explain some of the business ideas it gave me. But the point isn’t about the technology; it’s about change (and what we make of that).

    The pace of change right now is staggering. These tools aren’t just improving year over year — they’re improving constantly.

    And that compresses everything.

    Learning curves. Advantage windows. Expectations.

    It also makes perspective more valuable, not less.

    Because when you zoom out far enough (from wrestling grandfathers to newborn grandsons, from no cars to self-driving ones, from no air conditioning to climate-controlled everything) you start to see the pattern.

    We adapt.

    We build.

    We take things for granted.

    And if we’re lucky, we get the chance to notice it while it’s happening.

    This weekend, I did … And it felt like a gift.

    Onwards!

  • Allbirds: The Sole of a Company Rewritten in Code

    Sometimes, the truth is stranger than fiction. For some reason, that seems truer than it used to.

    This week, Allbirds, the eco-friendly sneaker brand formerly valued at over $4 billion, announced it is exiting the shoe industry to shift completely into artificial intelligence infrastructure.

    Allbirds will sell its remaining intellectual property and shoe assets (for $39 million) and rebrand as NewBird AI.

    It’s not exactly a natural or expected move into AI, but the stock still shot up more than 500% on the news. Yikes! Stories like this make it harder to argue against the AI-bubble narrative – or the idea that Wall Street is simply frothing at anything in the space.

    Where to Start …

    While this sounds like AI-fabricated fake news, it’s real.

    And while it’s easy to dismiss this as a play for attention (or even a nod to meme-stock energy), the rationale runs deeper than that.

    Still, history matters. A late start in AI infrastructure, combined with a legacy brand built for something entirely different, creates as many challenges as opportunities.

    That said, choosing to pivot rather than shut it down makes more sense than it first appears.

    Why Not Just Cut Your Losses and Start Over?

    At first glance, this kind of move feels random. But it might make sense from a financial engineering perspective.

    If you want to become an AI infrastructure company, why not just start one? Clean slate. Clean story. No baggage from a struggling consumer brand that used to be on top of the world. No confused customers wondering why their favorite shoes are suddenly talking about GPUs.

    But that’s not really how the game works.

    Because what Allbirds has (despite everything) is something a brand-new AI startup doesn’t: structure.

    • It’s already public.
    • It already has access to capital markets.
    • It already has a ticker, a shareholder base, and the ability to raise money without starting from zero.

    To some people, that matters more than the logo on the door. And if you want to play that game, you can monetize the logo on the door, too.

    In a world where AI infrastructure is capital-intensive from day one, speed is necessary for survival.

    Starting a new entity means building credibility, raising initial funding rounds, assembling a board, and proving your thesis (often before you even get to compete). In this market, even after doing all that, some would argue they’re still behind.

    Repurposing an existing public company dramatically compresses that timeline.

    Investors understand the story: compute demand is exploding, infrastructure is scarce, and the winners could be massive. You’re no longer asking the market to believe in better shoes—you’re asking it to believe in a bigger trend.

    Fixing your brand positioning and supply chain, and recovering a business in steep decline, is a monumental task.

    AI is hot, and apparently a much shorter leap.

    Keeping the existing entity also allows management to use what’s left (cash, brand equity, public listing) as a kind of launchpad.

    In some ways, it’s closer to a merger with the future than a continuation of the past.

    The real question is whether that’s enough to earn a place in a market that’s already moving this fast.

    Still, The Other Shoe Drops …

    Of course, the pivot comes with tradeoffs.

    You inherit expectations that no longer match reality. You risk alienating the people who believed in the original mission. And you invite a certain amount of skepticism about whether this is ‘vision’ or gimmicky opportunism.

    It’s a risky play … but you never know. And you can’t win if you don’t play.

    Does The Glass Slipper Fit?

    Zoom out, and it fits a broader pattern.

    We’re in a moment where identity is fluid, timelines are compressed, and the cost of being late feels existential. Companies aren’t just evolving — they’re jumping tracks.

    Within that paradigm, you could argue that starting from scratch is the slower and riskier move.

    While it sounds silly … can you blame them?

    It doesn’t mean they’ll be a success. There will be more losers than winners in this period of transition. But at least they’re playing the game.

    While that’s not a ship I’d want to be riding on, I can’t blame them for trying to stay afloat.

  • Artemis II and the Pale Blue Dot

    Artemis II was a nine-day lunar flyby mission with a crew of four astronauts, launched on April 1, 2026. It was the first crewed NASA-led Artemis flight and the first human journey beyond low Earth orbit since Apollo 17 in 1972. 

    During their lunar flyby, the crew set the record for the farthest distance from Earth ever traveled by humans, reaching 252,756 miles (406,771 km) and surpassing Apollo 13’s previous record of 248,655 miles (400,171 km).

    Friday, they splashed down safely in the Pacific Ocean.

    Artemis II astronauts Jeremy Hansen, Christina Koch, Victor Glover, and Reid Wiseman are seen onstage Saturday at Ellington Field at Johnson Space Center in Houston.  – Ronaldo Schemidt/AFP/Getty Images

    “Victor, Christina and Jeremy, we are, we are bonded forever, and no one down here is ever going to know what the four of us just went through … And it was the most special thing that will ever happen in my life.” – Reid Wiseman

    This is the kind of story that’s easy to file under ‘space news’ – but for entrepreneurs, investors, and leaders, it’s also a case study in how fast the frontier moves when compounding technology meets long‑term conviction.

    As we move forward, we’ll talk more about the emerging business landscape around space (from connectivity and Earth‑observation data to in‑orbit manufacturing, commercial stations, logistics, and even space‑based energy). But today’s piece is really about something more fundamental: marking a milestone on the path and widening our sense of where we are and what’s possible.

    From Humble Beginnings …

    To appreciate how far we’ve come, it helps to think about the early days of space travel. In 1977, Voyager 1 launched. Just over a dozen years later, it had traveled farther than any human-made object before it – approximately 6 billion kilometers from Earth. At that point, at Carl Sagan’s urging, Voyager 1 turned its camera around and took one last photo of Earth … a pale blue dot.

    The resulting photo is impressive precisely because it shows so little in so much.

    A photo showing one blue pixel – the Earth – taken by Voyager 1 in 1990.

    “Every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.”  – Carl Sagan

    Earth is in the far-right sunbeam – a little below halfway down the image. This image (and the ability to send it back to Earth) was the culmination of years of effort, technological advancement, and the dreams of mankind.

    Carl Sagan’s Pale Blue Dot speech is still profound and moving. Invest three minutes to watch and listen.

    Carl Sagan via YouTube
     

    Here’s the transcript:

    Look again at that dot. That’s here. That’s home. That’s us.

    On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives.

    The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there–on a mote of dust suspended in a sunbeam.

    The Earth is a very small stage in a vast cosmic arena.

    Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds.

    Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.

    The Earth is the only world known so far to harbor life.

    There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand.

    It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known.

    How powerful a statement from a grainy pixel.

    … To New Heights

    Today, we have people living in space and posting videos from the ISS, and we have high-resolution images of galaxies near and far. Artemis II shows we’re going back to the moon – and that’s only the beginning. We also recently talked about the other new goals and explorations already on the proverbial docket.

    We take for granted the scale of the technological phase shift. The smartphone in your pocket has more computing power than the systems that first took us to the moon – and it has for decades.

    As humans, we’re wired to think locally and linearly. We evolved to live our lives in small groups, to fear outsiders, and to stay in a general region until we die. We’re not wired to think about the billions and billions of individuals on our planet, or the rate of technological growth – or the minuteness of that all compared to the vastness of space.  

    However, today’s reality necessitates that we think about the world, our impact, and what’s now possible for us.

    We’ve created better, faster ways to travel, instantaneous communication networks across vast distances, and megacities. Our tribes have gotten much bigger – and with that, our ability to enact massive change has grown as well. 

    Space was the proving ground for many of today’s breakthrough technologies. Now, similar waves are building in AI, medicine, genetic engineering, robotics, and even ‘world‑building’—not just in virtual environments, but in how we design cities, companies, and economies. As leaders, our job is to spot these trajectories early, place disciplined bets, and build systems that can adapt as the frontier moves.

    It’s hard to comprehend the scale of the universe and the scale of our potential – but that’s exactly why it’s worth exploring. The view from a ‘pale blue dot’ reminds us that most of what feels urgent today won’t matter in a decade, but the systems we build and the bets we make will. This week, ask yourself: where are you still thinking locally and linearly in a world that rewards global, exponential thinking? 

    Onwards!

  • When Does Helpful Become Too Much? Rethinking Our Relationship with AI

    I went to dinner with my good friend John Raymonds and our sons this week. John is a deep thinker and an experienced entrepreneur. Unsurprisingly, the conversation turned to AI. I thought I’d share some of the things we examined and discussed.

    John and me at a Texas steakhouse with our sons.

    As we all admitted to using AI more often and for more things, what stood out wasn’t our agreement, but rather the tension around our use cases. While we’re all very bullish on AI (excited, even), we kept circling two questions: “How much AI is too much?” and “How important is it to preserve your own voice (and thinking) in the wake of AI content generation?” Neither had clean answers, yet both felt increasingly important and relevant.

    How much AI is too much?

    When you see AI everywhere, all the time, you might have passed the point of diminishing returns (or you might be witnessing something as important as the discovery of fire, electricity, or the Internet).

    The real issue is that “too much AI” is not about volume of usage but about when AI starts to (a) complicate your process or (b) dilute your voice. Ultimately, you must consciously decide where the line is for you.

    Here’s an example we talked about, stemming from Zach and me writing this weekly commentary together. I’ve been experimenting with something I call “content pillars”: I combine various sources (including research, articles, notes, recordings, etc.) on a subject, run them through various AI tools with layered prompts, and distill everything into a multi-faceted outline that provides a more complete, multi-dimensional view of the material and its meaning. The goal is to get a better sense of the big picture (and make it easier to spot patterns, overlaps, contradictions, tensions, and agreements).

    I include most of the process steps in the content pillar. The result is dense, sometimes overwhelming, but undeniably rich. I used to do a lot of that in my head; this consolidates all of it in one place, making it easy to save for reference or reuse. To me, this was a step forward.

    My son pushed back.

    At some point, he argued, the process becomes so complex that it requires its own layer of AI just to consume it. The time investment balloons. What started as a tool to simplify thinking and streamline our process can quietly become something that complicates it.

    He’s not wrong.

    But I don’t think that invalidates the process … or the end result. Beyond the output, there’s value in building and using these systems (exploring, experimenting, and stretching a different kind of mental muscle). The process itself becomes the product, at least in part.

    Still, the question lingers: if the tool designed to accelerate us begins to slow us down, where’s the line?

    I won’t really attempt to answer that here. However, I will note that I originally created the process to augment, automate, and extend parts of my work. Over time, I refined the process to the point where I wanted to share it. That’s when I was confronted by a stubborn truth I’ve battled many times: a process designed for you won’t necessarily please others.

    What I found is that very few people want information in the quantity, velocity, depth, or breadth that I would choose. In fact, it became clear to me that I wasn’t even the real audience anymore. As I continued to build these content pillars, they expanded as I began to view each pillar as a new, richer data set to feed the machine (rather than producing something I’d want to consume myself or share with others).

    But the point remains: if you design something to satisfy a machine, it shouldn’t surprise you when it doesn’t satisfy a human.

    How much does voice matter?

    The second tension was more subtle and subjective.

    Something shifts as AI becomes more embedded in writing, editing, and content generation. It doesn’t take a literary genius to see or feel it.

    Sentences smooth out. Paragraphs tighten. Structure improves. But sometimes, the voice flattens. Most of us can tell when something is written by AI, even if we can’t always tell when something is written with the help of AI.

    Yet, even tools like Grammarly optimize toward familiarity. They rely on proven patterns, common phrasing, and widely accepted “good writing.” The result is predictably better writing… but also just predictable writing.

    Of course, there’s a tradeoff.

    AI enables depth. It helps us see angles we might have missed, incorporate ideas we wouldn’t have found, and build more comprehensive pieces. It has also been vital in catching when we’re making assumptions or claims without backing them up. Our writing becomes more informed, more structured, and often more valuable to the reader.

    But at what cost?

    I see it firsthand. My son spends extra time pulling our writing back toward something that feels like us (restoring tone, rhythm, personality). It’s deliberate work, and it can be frustrating.

    Our previous rhythm was relatively painless, but we’d also plateaued in the caliber and tone of our articles.

    So the question becomes: Is added value worth a diluted voice? Or is voice itself part of the value we’re trying to create?

    Could we spend extra time improving our prompts so that our voice is more carefully curated? If the voice is there but we didn’t write it, is it still our article?

    Different generations, different instincts

    What became clear over dinner wasn’t just disagreement—it was a difference in posture.

    John and I are leaning in hard. There’s a kind of curiosity that borders on recklessness. We’re exploring, testing limits, and integrating AI into everything we can. Not because we have to, but because we want to see what’s possible. John even built a niche AI app recently, just to prove he could.

    There’s joy in that.

    Our sons, on the other hand, seem to play a different role. Not resistant or disengaged (they both use these tools extensively), but more measured. More aware of the tradeoffs and the nature of their parents. More willing to question whether efficiency is always the goal.

    If we are accelerating, they are steering.

    Perhaps it’s a result of them being so close to us that they end up playing “defense.” And maybe that balance matters more than either side being “right.”

    We hear it all the time: too much of a good thing becomes a bad thing.

    But AI complicates that idea. Because it’s not just a tool — it’s a multiplier of output, of speed, of ideas … and of noise.

    So how do we know when we’ve crossed the line?

    That is a question worth sitting with.

    Maybe it’s not about a universal threshold. Maybe it’s more personal, more situational. Maybe the better question isn’t “how much is too much?” but:

    • Is this helping me think more clearly, or just more quickly?
    • Is this enhancing my voice, or replacing it?
    • Am I using the tool, or adapting myself to fit the tool?

    There may not be definitive answers yet.

    But the act of asking — of pausing long enough to notice how these tools are shaping not just what we produce, but how we think — might be the most important habit we can build right now.

  • The End of Sora and the Future of OpenAI

    This week, OpenAI announced it would be shutting down Sora, its popular AI video app. This is not just about killing a video toy; it signals a strategic pivot at OpenAI.

    You probably weren’t Sora’s target user, but watching this montage of its top clips is a great way to see how far this impressive tech has come.

    Top Sora Clips Video via YouTube.

    It’s both fun and scary to think about how fast technologies like this have evolved … and what they will make possible.

    It’s easy to think Sora’s shutdown isn’t a big deal … but it’s a signal of OpenAI’s new playbook on infrastructure, partnerships, and profit.

    And with that new playbook, OpenAI announced several other important changes this week. Here are a few of the highlights.

    The End of Their Disney Partnership

    Shutting down Sora also forced the termination of a major $1 billion investment deal between OpenAI and Disney, as well as licensing agreements that allowed the use of Disney-owned characters in AI-generated video content.

    It’s a reminder that when OpenAI prunes products like Sora, it’s also pruning capital-intensive bets and risky content partnerships.

    Pushing Pause on “Adult Mode”

    Last October, Sam Altman announced plans for an erotica mode. However, the tension between boldness and caution shows up in the gap between OpenAI’s ‘not the morality police’ rhetoric and its quiet slowdown on controversial features.

    The Financial Times later reported that the pause is “indefinite,” with Cristina Criddle citing “sexual datasets and eliminating illegal content” as challenges for OpenAI. This reflects the growing regulatory and reputational risk around generative sexual content.

    ChatGPT Just Got More Reliable

    OpenAI updated ChatGPT with a 33% reduction in factual errors, plus a significantly expanded memory for longer conversations.

    Changes like these hint at where OpenAI wants to focus: scalable, everyday systems that drive recurring revenue.

    And it doesn’t stop there …

    The Great DRAM Over-Buy

    Originally, it was reported that OpenAI had secured forward commitments for up to 40% of the world’s DRAM supply to support its future data center growth as AI demand increases.

    In plain English, DRAM is the short-term memory that lets these models think; if you want bigger, smarter models, you need a lot of it.

    As these announcements roll in, many are also scrutinizing how much RAM OpenAI locked up in advance.

    With this, I think the memory bull run (which began over 2 years ago) is coming to an end. Many of the large AI labs have secured more DRAM via forward contracts than what they will realistically need. This has created the sense of an artificial shortage supported by essentially FOMO on DRAM supply. Like in previous cycles, this will unwind.
    via Seeking Alpha

    With Google’s new TurboQuant AI compression algorithm, and OpenAI switching focus, many see the drop in RAM prices as more than a blip — potentially a real change in the cycle.

    Where OpenAI Goes Next …

    From Owning to Orchestrating Infrastructure

    After initially pursuing massive, vertically integrated infrastructure through its multi-hundred-billion-dollar Stargate initiative, OpenAI has begun shifting toward a more flexible, capital-efficient model.

    If labs over-bought memory during the AI gold rush, then shifting from owning massive data centers to orchestrating capacity from partners starts to look less like backtracking and more like smart risk management.

    Instead of owning and operating the bulk of its global compute footprint, OpenAI is increasingly leaning on partnerships and leased capacity from cloud providers. Internally, this has been reflected in a restructuring that separates infrastructure design, partner management, and operations — signaling a shift from a “build everything” strategy to a “coordinate and optimize” approach (e.g., using multiple cloud providers, negotiating for power in different regions, etc.).

    At the same time, the company is clearly narrowing its product focus.

    Video apps like Sora are entertaining for users, but they’re also brutally compute-intensive for the providers. As you look at Anthropic’s revenue and that of other competitors, it’s clear that chat, code, and enterprise use are where the immediate growth and low-hanging fruit lie.

    How This Fits the Longer-Term Plan

    AI has already consumed massive funding to get here — and it will require even more to reach the next plateau.

    Rather than a retreat, this shift aligns with a longer-term strategy: preserving capital, accelerating deployment, and keeping options open in a rapidly evolving compute landscape. Leveraging partners allows OpenAI to scale faster while avoiding bottlenecks tied to financing, power availability, and hardware cycles.

    In that context, “Stargate” appears to be evolving—from a fixed set of owned assets into a broader, more modular strategy for bringing compute online wherever it is most efficient.

    The end goal hasn’t changed: securing enough compute to train and deploy increasingly powerful AI systems. What has changed is the path — shifting from infrastructure ownership to infrastructure orchestration, and from experimental breadth to commercial depth.

    This aligns with their move from non-profit to IPO. They’re clearly focused on profitability in the near term, not just the long term.

    But these shifts could also signal changes that open opportunities for more players to enter the space and carve out their little slice of the digital landscape.

    I’ll continue to watch how OpenAI manages the delicate balance between rapid innovation, financial pressures, and the broader public good. The story is still unfolding, and what happens next will shape the technological future we all live in.

    How It Shows Up in Everyday Use

    All of this might sound abstract, but you can feel these shifts in everyday usage too. If you’re curious, I use a paid version of ChatGPT throughout the day. I’ve gotten used to it; I understand when to listen and when to ignore it. With that said, I’ve also been happy to pay for Perplexity (but I use it in much more limited circumstances). It gives me access to different models, and I feel like it’s been a good value. However, today I finally decided to pay for Anthropic as well because the quality of the responses I’ve been getting has led me to change my usage behavior.

    Interestingly, if I ask different models a question and then show their answers to ChatGPT, ChatGPT often favors Claude’s responses as well.

    I know all of that is subject to change, and tools are leapfrogging one another with increasing frequency. With that said, I thought it was worth sharing.

    Let me know which tools you use and rely on most.

    Onwards!

  • A Look at Global Happiness Levels in 2026 (and Over the Past 10 Years)

    Are you Happy?

    Asking whether someone is happy seems like a simple question. But what does the question really ask, and what does ‘happy’ really mean?

    Once you define happiness (for you), how, when, and for how long do you measure it?

    These are some reasons why measuring happiness is harder and more complex than it might seem at first glance.

    What Does Happiness Really Measure?

    At its core, happiness means experiencing more positive emotions than negative ones. With a bit of reflection, you see that happiness is reinforced by comfort, freedom, financial security, and other things people aspire to. 

    Regardless of how hard it is to describe (let alone quantify) … humans strive for happiness.

    Likewise, it is hard to imagine a well-balanced and objective “Happiness Report” because much of the data needed to compile it is subjective and relies on self-reporting. 

    Nonetheless, the World Happiness Report takes an annual look at quantifiable factors (such as health, wealth, GDP, and life expectancy) and more intangible factors (such as social support, generosity, emotions, and perceptions of local government and businesses). Below is an infographic highlighting the World Happiness Report data for 2026.

    World Happiness Report via Visual Capitalist

    Despite the news, global happiness hasn’t collapsed – but it has become more uneven.

    Click here to see a dashboard with the raw worldwide data.

    In 2022, when I shared this, we were seeing the immediate ramifications of COVID-19 on happiness levels. There was a significant increase in negative emotions reported – specifically, worry and sadness. And yet, happiness scores are relatively resilient and stable, and humanity persevered in the face of economic insecurity, anxiety, and more.

    In the 2025 report, one of the key focuses was the increase in pessimism about others’ benevolence. There seems to be a rise in distrust that doesn’t match the actual statistics on acts of goodwill. For example, when researchers dropped wallets on the street, the proportion of wallets returned was far higher than people expected. 

    Unfortunately, our well-being depends on both our perception of others’ benevolence and their actual benevolence. 

    The World’s Happiest Countries in 2026

    Before we dive into the global trends, a surface-level view shows that Nordic nations (e.g., Finland and Denmark) boast the happiest people. Unfortunately, those represent a small fraction of the world’s population.

    All of the top 10 nations have populations under 20 million. Interestingly, Mexico is a significant outlier, ranking #12 with a population of 131.9 million.

    And despite what you may think, the US is also among the few large nations in the top 50.

    Building a Case Against Social Media Usage

    Diving deeper into the results, young people are significantly less happy than they were 15 years ago. You might assume war, economic anxiety, politics, or family structure are to blame — but much of the decline appears tied to social media use. That may sound like a convenient scapegoat or an oversimplified hypothesis, but the research supports it.

    Digging into the numbers, the PISA study of 15-year-olds in 47 countries shows that those who use social media for over seven hours a day have much lower well-being than those who use it for less than one hour. In a sample of US college students, the majority wish that social media platforms did not exist.

    These numbers are significantly worse in Western countries than in Eastern and African countries.

    Based on their research, the Report argues that the rapid adoption of “always-available social media” by adolescents in the early 2010s is a statistically significant contributor to the population-level increases in mental illness in Western nations.

    Social media is so toxic that it’s affecting the population at large … not just the most at-risk.

    So, Why Do People Use It?

    Many empirical studies cast doubt on whether social media actually makes people happy. The main takeaway is that many individuals use social media mainly because others do; they don’t want to be left out. If social media use were reduced or eliminated, many people would benefit, and they are aware of this.

    Last year, we talked about the importance of trust and social connections for well-being, but also how social media had created a very low-trust society, as evidenced by the political silos and online vitriol. Unsurprisingly, the estimated relationship between internet use and well-being differs significantly across generations, genders, and regions. It is highly negative for Gen Z, moderately negative for Millennials, near zero for Gen X, and slightly positive for Baby Boomers.

    Older adults enjoy the benefits of stable trust, growing attachment, enhanced safety, and moderate digital engagement. In contrast, younger adults often experience a decline in these foundations within highly saturated digital environments.

    Clearly, social media isn’t creating the healthy connections our younger generations need. Meanwhile, generations that grew up with less digital-centric relationships seem to be handling the changes more robustly.

    Longer-term Trends

    World Happiness Report via Visual Capitalist

    Over the last decade, the top of the world’s happiest countries list has remained remarkably consistent.

    So have global happiness levels as a whole. The relative balance demonstrated in the face of such adversity may point towards the existence of a hedonic treadmill – or a set-point of happiness.

    Large countries, like India, often bring down the averages, but even that has remained relatively consistent.

    Despite that, the distribution of happiness has changed significantly. While life satisfaction is stable, how people feel day to day has shifted downward. Stress, worry, and sadness have increased globally, and younger generations are impacted to a larger degree.

    Takeaways

    Happiness hasn’t collapsed globally—but it’s become more uneven and, in some ways, more fragile.

    In the US and a few other regions, the decline in happiness and social trust tracks the rise in political polarization and distrust of “the system.” As life satisfaction declines, anti-system votes rise.

    Worsening the situation is our growing dependence on social media instead of face-to-face relationships. Although most people realize it’s harmful and don’t want to use it, many can’t imagine missing out. As a result, they spend time and energy passively consuming content instead of being active in their own lives.

    As AI‑generated content and AI chatbot partners become more common, it will be interesting to see how they reshape this data.

    Still, humanity has always proved resilient and enduring. Regardless of the circumstances, people can focus on what they choose, define what it means to them, and act accordingly.

    Remember, throughout history, things have gotten better. There are dips here and there, but like the S&P 500 … we rally eventually.

    The data show that happiness is surprisingly resilient but not guaranteed.

    Younger generations are paying the price for a world built around screens, feeds, and algorithms — but they also have the most to gain from changing course.

    We may not control global trends, but we do control how we spend our attention, who we spend time with, and what we build together. Those choices compound, just like returns in the S&P 500. Over time, they can move both our personal and collective happiness charts in the right direction.

    Onwards!

  • Energym: AI Satire or Eventual Reality?

    A few weeks ago, I shared an AI music video. It seemed noteworthy at the time because even though the music and video were AI-generated, the result felt surprisingly human.

    Here’s a question for you …

    Once AI can convincingly create art, what meaningful work is left uniquely for humans?

    That’s the central tension in this mockumentary-style ad for Energym. Click below to watch. It was clever … and mildly unsettling in its plausibility.

    The Energym parody imagines a 2036 where humans have lost their sense of purpose. So what do they do? Exercise so hard that they generate the energy needed for the very AI that took their jobs. The video features cameos from Elon Musk, Jeff Bezos, and Sam Altman (well, at least their 10-years-older personages).

    Energym is funny because it’s not as far from reality as we’d like — and it quietly says something important about our evolving relationship with AI.

    Ironically, there is a real Energym exercise bike designed for fitness and energy production (though I assume it’s unrelated). When a parody and a product look this similar … it’s hard to tell whether it’s a cautionary tale or a potential roadmap.

    Good humor is often rooted in truth. Perhaps healthy dystopian fears are, too.

    When Satire Starts To Feel Real

    Obviously, satire is tongue-in-cheek and often exaggerates real fears. Expect to see more content poking fun at our growing dependence on artificial intelligence.

    The Energym video was produced by Hans Buyse and Jan De Loore. De Loore, who authored the script, edited, and produced the video, is also a cofounder of Kitchhock, a solo AI creative studio based in Belgium. De Loore also applies his creative expertise and the latest generative video AI technology to produce real advertisements for Belgian companies through his AI video studio, AiCandy.

    AI as an Amplifier, Not a Replacement

    To me, this video shows where AI truly excels: helping you bring new, unusual ideas to life that would have been hard or expensive to produce before.

    I’ve seen an explosion of creative work built with new AI tools, and for the most part, that’s great. The danger is letting them automate away your own creativity and critical thinking instead of amplifying them.

    If you do decide to let it replace you, at least you might get ripped in the process.

    Onwards.

  • Turning Friends Into Frenemies: A Powerful Prompting Framework

    If you’re reading this, you’re probably using AI more than you used to — but how has your use actually evolved?

    The more I use AI, the more I worry it agrees with me too much … or worse, that I agree with it too quickly.

    For me, as AI becomes more powerful, I’m using it in more places more often. It’s becoming a step in almost every process I do.

    At the beginning, my use was very simple. I would highlight a sentence and say, “Improve this.” I was often surprised by an LLM’s ability to take a jumble of words and distill something shorter and more meaningful. I’m sure many users felt, for the first time, that they could truly give voice to their thoughts.

    Then, I was impressed by AI’s ability to turn long articles or collections of sources into tight summaries that made clear why they mattered and what to do next.

    Over time, I learned to use AI to help me do things I already did, to the point where it enhanced my ability to do it … or freed me up to do a little bit more.

    Now, if I’m doing something repeatedly and not using AI or automation, I assume that’s a problem.

    Not everyone feels that way.

    For example, this weekly commentary is still primarily written by humans (my son, Zach, and me). As AI becomes a larger part of the production process, Zach becomes increasingly dubious of how I use AI. He worries that AI-in-the-loop processes impact our writing in ways that we quickly become desensitized to or stop noticing altogether (for example, logical patterns, word choices, common idiosyncrasies, or misplaced confidence).

    Too Much of Anything Isn’t Good — Even Agreement

    As amazing as AI tools are, it’s well-documented that they can be sycophantic, can hallucinate and fabricate, and can be surprisingly rigid in their process … if you don’t have a good enough process in place to manage those tendencies.

    Meanwhile, prompt researchers found that making AI agents ruder resulted in better performance on complex reasoning tasks.

    So I wanted an AI that communicates with me like a sharp board member — not a flattering intern. That’s where prompts like the Frenemy Prompt come in.

    I saw this on Tech Radar. Here is the basic idea.

    Respond with direct, critical analysis. Prioritize clarity over kindness. Do not compliment me or soften the tone of your answer. Identify my logical blindspots and point out the flaws in my assumptions. Fact-check my claims. Refute my conclusions where you can.
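    In practice, prompt text like this usually goes in as a system message so it persists across turns. Here is a minimal sketch of that wiring; the SDK call in the comment assumes the OpenAI Python client, and the model name is purely illustrative:

```python
# Sketch: pinning the Frenemy prompt into the system role of a chat payload.
# The commented-out SDK call assumes the OpenAI Python client; the model
# name and client usage are illustrative assumptions, not a prescription.

FRENEMY_PROMPT = (
    "Respond with direct, critical analysis. Prioritize clarity over kindness. "
    "Do not compliment me or soften the tone of your answer. Identify my logical "
    "blindspots and point out the flaws in my assumptions. Fact-check my claims. "
    "Refute my conclusions where you can."
)

def frenemy_messages(user_text: str) -> list[dict]:
    """Build a chat payload that keeps the critic persona in the system role."""
    return [
        {"role": "system", "content": FRENEMY_PROMPT},
        {"role": "user", "content": user_text},
    ]

# Typical use (requires an API key and the `openai` package):
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-4o",  # illustrative model name
#       messages=frenemy_messages("Here is my draft thesis: ..."),
#   )
```

    Keeping the prompt in the system role (rather than pasting it before each question) is what stops a long conversation from drifting back into flattery.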

    The Frenemy Fact-Check

    This prompt turns an AI into a hostile-but-useful executive reviewer, one that converts text into decision-grade clarity by separating what’s said, what’s assumed, what’s missing, and what actually matters.

    It’s designed to:

    • Take a piece of text (an article, memo, thread, transcript)
    • Refuse to trust it
    • Separate what is actually said from what is assumed
    • Stress‑test it like a skeptical board member, and
    • End with a clear decision posture and a best next step

    A Fluff-Busting Example:

    If you’ve been a reader for a while, then you know I share a lot of links to a wide range of materials. What you see is the filtered list after I’ve read it, tagged it, ranked it, and decided what to share … but what I see is much broader. Some of it is AI slop, conspiracy theory madness, or aspirational thinking masquerading as strategic thinking. I wanted something that helped me sort, sift, and filter what comes across my screen.

    For example, this week I clicked on this article claiming that new experiments confirm the existence of parallel universes. If I simply asked AI to summarize the article, it would. But this prompt gives me something different. Its response started this way:

    Executive Brief:

    Posture: Probe — The piece argues that multiple experimental and theoretical threads make the many‑worlds / multiverse idea increasingly plausible; it’s an interpretive synthesis, not a proof.

    Biggest risk: Conflating interpretation and empirical demonstration — many claims are inference/speculation built on experimental results.

    Next action: Identify the article’s specific factual claims and separate which are quote-backed, which are inference, and which require verification.
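    Because the brief above always has the same three parts, it is easy to validate downstream in code. A minimal sketch, assuming you parse the model’s output into these fields (the class and field names are my own labels, not part of the framework):

```python
from dataclasses import dataclass

# Allowed decision postures (Proceed / Pause / Probe / Pivot, per the framework).
POSTURES = {"Proceed", "Pause", "Probe", "Pivot"}

@dataclass
class ExecutiveBrief:
    posture: str       # one of POSTURES
    biggest_risk: str  # the single claim most likely to mislead
    next_action: str   # the concrete step that resolves the risk

    def validate(self) -> None:
        """Reject briefs that invent a posture or skip a required field."""
        if self.posture not in POSTURES:
            raise ValueError(f"unknown posture: {self.posture!r}")
        if not self.biggest_risk or not self.next_action:
            raise ValueError("brief must name a biggest risk and a next action")
```

    A validation step like this is what turns a free-form AI answer into something a pipeline (or a skeptical human) can rely on run after run.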

    That prompt then goes on to identify all the different claims to verify, fluff to bust, and even makes it easy for me to expand the research or reconcile the language. Here is the output of the first step if you are curious.

    The point of this article isn’t to share a polished prompt. My production version is long, messy, and customized to my workflow and input sources. However, if you’re interested, here is a basic Frenemy Fact-Check Framework prompt that you can customize.

    I’m sharing the idea as a seed — useful on its own, but far more powerful once you make it simple, repeatable, consistent, and scalable.

    For context, my current version, 7.0, is over twice as long, has portions that a human won’t understand, and understands me and my needs much better than this seed.

    And it was AI that helped me iterate on the prompt until it reached that point.

    Creating a Production-Grade Process

    The way you do that is by analyzing what you’re doing, both in terms of what the audience sees (front stage) and what is required to reliably produce the front-stage experience (backstage).

    Most prompts focus on the front stage and don’t handle the backstage well enough to be reliable in production.

    Front Stage vs. Back Stage

    Front stage, it looks like: “AI reads something and gives a sharp executive review.”

    Backstage, it’s doing something much more important: It’s not focused on “smartness” or “creativity”… it is manufacturing reliability.

    Think of it like a restaurant:

    • The dining room is what customers see (front stage).
    • The kitchen is why the same dish comes out the same way every night (backstage).

    A professional-grade Frenemy prompt must include the kitchen spec for decision-grade analysis.

    Here are some high-level concepts to consider in a prompt like this.

    First Principles of the Prompt

    At its heart, the system enforces three laws:

    Law 1: Words ≠ Truth

    If it’s not quoted, it’s not solid.

    Anything not directly supported by text must be labeled:

    • Inference (reasonable but not stated)
    • Speculation (guessing)
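    Law 1 is mechanical enough to enforce in code: every extracted claim must carry one of the three evidence labels above, and a “quoted” claim must actually carry its quote. A minimal sketch (the label names come from the law; everything else is illustrative):

```python
from dataclasses import dataclass

# Evidence labels from Law 1: if it's not quoted, it's not solid.
LABELS = ("quoted", "inference", "speculation")

@dataclass
class Claim:
    text: str
    label: str              # one of LABELS
    source_quote: str = ""  # required when label == "quoted"

def check_claim(claim: Claim) -> Claim:
    """Reject claims that skip labeling or cite no text for a 'quoted' label."""
    if claim.label not in LABELS:
        raise ValueError(f"unlabeled claim: {claim.text!r}")
    if claim.label == "quoted" and not claim.source_quote:
        raise ValueError("a 'quoted' claim must carry its supporting quote")
    return claim
```

    The point is that the label is not optional metadata; a claim without a label never makes it into the analysis.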

    Law 2: Structure Beats Intelligence

    There is a real difference between “clever but inconsistent” and “structured and reliable.” My production prompt doesn’t rely on the model being “smart.”

    It relies on:

    • Rigid section definitions
    • Mandatory labels
    • Forced ordering
    • Hard cap limits

    This is why it’s long. But it’s not verbosity — it’s scaffolding.
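    That scaffolding can also be checked after the fact. Here is a sketch of the kind of structural lint a production pipeline can run on the model’s output; the section names and the word cap are illustrative assumptions, not the framework’s actual spec:

```python
# Sketch: mechanical checks for forced ordering and hard caps (Law 2).
# REQUIRED_ORDER and HARD_CAPS are illustrative example values.

REQUIRED_ORDER = ["Executive Brief", "Claims", "Stress Test", "Next Action"]
HARD_CAPS = {"Executive Brief": 120}  # max words per section (example cap)

def check_structure(sections: dict[str, str]) -> list[str]:
    """Return a list of protocol violations; an empty list means it passes."""
    problems = []
    names = list(sections)  # dicts preserve insertion order in Python 3.7+
    if names != REQUIRED_ORDER:
        problems.append(f"section order {names} != {REQUIRED_ORDER}")
    for name, cap in HARD_CAPS.items():
        words = len(sections.get(name, "").split())
        if words > cap:
            problems.append(f"{name} exceeds hard cap ({words} > {cap} words)")
    return problems
```

    When a run fails these checks, you re-prompt rather than hand-fix the output; that is what keeps the process repeatable instead of artisanal.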

    Law 3: Decisions Are the Point

    Every run ends with:

    • A posture (Proceed / Pause / Probe / Pivot)
    • A biggest risk
    • A next action
    • A control panel that helps the user choose what happens next

    As AI makes analysis easier to generate, it becomes even more important not to automate “analysis for analysis’s sake.” This prompt framework was designed to encourage the right actions.

    The longer the content and the more complex the project you give AI, the more likely it is to break protocol and make mistakes. A production-grade prompt like this constrains the AI so it can’t “help” in the wrong way, and blocks hallucinations or fake precision by default. It turns raw text into structured evidence, labels ambiguity clearly, and keeps outputs consistent and stable — even under pressure or long inputs. Most importantly, it keeps humans in control through a clear command interface, which is why it’s far more reliable than the average prompt.

    I’d love to hear about ways you’re using AI to improve the quality of your output, enhance your performance, or expand what you believe is possible.