Thoughts about the markets, automated trading algorithms, artificial intelligence, and lots of other stuff

  • The New AI Leaderboard … and the Cost of Staying On It

    For decades, I’ve been an Early Adopter of technologies. I love exploring tools to get an idea of where things are going and what’s possible.

    In part, that means I don’t wait for things to settle down and a clear winner to arrive. Instead, I tend to try several tools that claim to do something that excites me.

    On one hand, my wife questions whether this is a waste of time, energy, and money. But the practical realities of technology businesses make it a workable strategy for me in my role.

Companies have different levels of access to talent, opportunities, and resources. Consequently, the first tool that does something cool isn’t necessarily the one that takes off or gets big (or the one that keeps playing the game, even if slowly, committed to getting better until it wins). This is especially true in highly contested areas like large language models.

    A Look At My AI Usage …

Like many of you, I use many AI tools every day. I pay for ChatGPT, Claude, Perplexity, and Microsoft Copilot. I also pay for limited subscription access to Google Gemini and Elon Musk’s Grok (and for a host of other useful special-purpose tools like Grammarly, Granola, and Wispr Flow).

    For a while, ChatGPT has been my default. Projects tend to start there and end there. It’s been my source of comfort.

    Even though I start in ChatGPT, I might then show it to Perplexity and say, “Hey, here’s something I built in ChatGPT. What do you think and what would you change?” This process often results in a new idea or a different perspective. I tend to bring those ideas or perspectives back to ChatGPT, saying, “Hey, Perplexity recommended this … What do you think?”

    As you might guess, I’ve tried various iterations of that game. For example, I might start something in Perplexity or Google Gemini … but over time (at least for the type of work that I do), ChatGPT earned its place as my default.
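
    If you wanted to script that back-and-forth instead of copy-pasting between tabs, a minimal sketch might look like the following. This is illustrative only: it assumes the official OpenAI and Anthropic Python SDKs with API keys in the environment, uses Anthropic in place of Perplexity purely as an example of a second opinion, and the model names and prompts are placeholders rather than my actual setup.

    ```python
    # A minimal sketch of the draft -> second opinion -> revise loop described above.
    # Illustrative only: providers, model names, and prompts are placeholders.
    from openai import OpenAI
    from anthropic import Anthropic

    openai_client = OpenAI()        # reads OPENAI_API_KEY from the environment
    anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def draft(prompt: str) -> str:
        """First pass: build something in the 'default' model."""
        resp = openai_client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def second_opinion(work: str) -> str:
        """Show the draft to a different model and ask what it would change."""
        resp = anthropic_client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": "Here's something I built in another tool. "
                           "What do you think, and what would you change?\n\n" + work,
            }],
        )
        return resp.content[0].text

    first_pass = draft("Outline a short post on how AI benchmarks capture only slices of capability.")
    feedback = second_opinion(first_pass)
    revised = draft(
        "Another reviewer recommended the following changes:\n" + feedback +
        "\n\nWhat do you think? Revise this draft accordingly:\n" + first_pass
    )
    print(revised)
    ```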

Now, in part, I’m writing this post because Claude has started taking more and more of my cycles: the answers it gives, the user interface, the integrations. It’s really interesting to see how fast it’s improving. Not coincidentally, OpenAI just released a new interim version of ChatGPT to counter the momentum shift Claude is gaining from so much favorable press.

    There’s another reason that I know Claude is getting better. It’s still critical of things I produce in other models, but other models are increasingly impressed with what I produce in Claude.

    That’s notable because AI systems typically prefer their own outputs. The fact that one model regularly elevates another suggests something else is happening.

    Meanwhile, the gap at the top is narrowing. And it’s changing quickly in part because people share outputs from one model with another. This process is a form of cross-pollination that allows LLMs to see (and learn from) a wider range of perspectives and techniques.

    So, objectively, which models are really the best? That’s where things get murky. Benchmarks try to answer the question, but they only capture slices of capability.

    The Smartest AI Models of 2026 … Well, April 2026

    via visualcapitalist

Lists like this are less a “stamp of approval” and more a snapshot in time. Models aren’t just getting better every day; new models with markedly different capabilities are being created and released on ever-shorter cycles, too.

    In the list above, Grok-4.20 Expert Mode and OpenAI GPT 5.4 Pro (Vision) tie for the top spot (based on TrackingAI’s April 2026 Mensa Norway benchmark), each scoring 145. The top tier is becoming more crowded, with leading models separated by just a few points. Scores have increased significantly since 2025, demonstrating the rapid progress in frontier AI reasoning on visual pattern-recognition tests. But even that doesn’t account for the fact that ChatGPT’s version 5.5 was released this week.

    While this is only one test of AI capabilities, it’s very interesting to see how close the best models have gotten. It’s also worth noting that in 2025, the highest score was 135.

    Meanwhile, use of these tools is skyrocketing.

    Using cutting-edge AI isn’t a differentiator anymore — it’s the price of admission. The real question isn’t who has the best AI; it’s who can afford to keep up with the pace of change.

    Which raises a more important question than “Who’s winning?”:

    Can AI Firms Afford to Keep Up?

Last week, I talked about my eldest son lightly teasing me for still trying to overly direct Claude in performing tasks. It’s not just a sign that I’m getting older … it points to a broader, faster shift.

    via visualcapitalist

    Early AI development was talent-driven. The limiting factor was human capital — researchers, engineers, and domain experts pushing systems forward.

    That constraint is shifting. Today, leading AI firms are increasingly defined by access to compute. Training, fine-tuning, and running these models at scale require massive infrastructure investments, often dwarfing even the highest salaries in tech.

    Anthropic spent almost $7 billion on compute in 2025.

    Talent still matters, but it’s no longer the primary bottleneck.

    Can You Afford To Keep Up?

    As companies start leaning more heavily on tools like ChatGPT and Claude, the economics get a little less straightforward.

At first, AI feels like a no-brainer. You’re getting more done, faster: emails, summaries, code, all of it. And the cost? It barely registers. A few cents here and there, easy to ignore. But then usage creeps up. And with automation and agents, what was occasional becomes constant. It gets baked into workflows, products, and day-to-day habits.

    And since everything runs on tokens, the meter is always running in the background.

Suddenly, AI stops feeling like “free leverage” and starts acting more like a quiet, always-on teammate. A fantastic and efficient teammate … but one that happens to bill you for every task, and more when you ask it to show its work. At that point, it’s not surprising that the costs can stack up to something meaningful. In some workflows, AI can now cost more than the human workers it augments.
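
    To put rough numbers on that always-on meter, here’s a back-of-the-envelope sketch. The per-token prices, step counts, and task volumes are assumptions chosen for illustration, not quotes from any provider:

    ```python
    # Back-of-the-envelope token economics. All numbers are illustrative assumptions;
    # plug in your own provider's pricing and your own usage pattern.
    PRICE_PER_1M_INPUT = 3.00    # dollars per million input tokens (assumed)
    PRICE_PER_1M_OUTPUT = 15.00  # dollars per million output tokens (assumed)

    def call_cost(input_tokens: int, output_tokens: int) -> float:
        """Cost of a single model call."""
        return (input_tokens / 1_000_000) * PRICE_PER_1M_INPUT \
             + (output_tokens / 1_000_000) * PRICE_PER_1M_OUTPUT

    # An occasional, human-triggered task: a few cents, easy to ignore.
    one_off = call_cost(input_tokens=2_000, output_tokens=1_000)

    # The same task baked into an agentic workflow: several reasoning steps,
    # retries, and "show your work" traces, run hundreds of times a day.
    steps_per_task = 8
    tasks_per_day = 500
    daily = call_cost(2_000 * steps_per_task, 1_000 * steps_per_task) * tasks_per_day

    print(f"One-off task:     ${one_off:.3f}")
    print(f"Automated, daily: ${daily:,.2f}  (~${daily * 30:,.0f}/month)")
    ```

    Under these assumed numbers, the same task goes from about two cents when you trigger it by hand to thousands of dollars a month once it runs constantly inside an agentic workflow. Nothing about the model changed; only how often, and how elaborately, it runs.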

    That’s not a knock on AI — it’s just the reality of using industrial-grade AI at scale.

    It’s easy to think of AI as a pure efficiency gain, something that just improves margins. But in practice, it’s both sides of the equation. It drives output, and it adds cost. The companies building these tools have always known that. Now the companies using them are starting to see it too.

    I’m fully committed to AI, and yet I somehow continue to explore even further. But the deeper you delve, the more important it becomes to pause and catch your breath.

    Activity isn’t progress if it doesn’t move you in the right direction.

    Onwards!

  • AI in Education: Opportunity, Acceleration … and the Inevitable Tradeoffs

    Artificial intelligence has moved from the edges of education to the center of it (in many respects, faster than expected).

    What started as a tool for efficiency is now reshaping how students learn, how teachers teach, and how institutions operate.

    The question isn’t whether AI belongs in education — that ship has sailed. The real question is simpler and harder: Is AI making students better thinkers, or just faster ones? The answer depends almost entirely on how it’s used. AI doesn’t change education so much as it amplifies it — raising the ceiling for motivated learners while lowering the floor for disengaged ones.

    The Upside: More Access, More Personalization, More Speed

    At its best, AI expands what education can be.

    The Microsoft 2025 AI in Education report highlights a shift from AI as a “time-saver” to a tool that increases student agency, giving learners more control over how they engage with material.

    That shows up in a few key ways:

    • Personalized learning: AI systems adapt content, pacing, and feedback to individual students, improving outcomes and engagement. When a child is stuck, having an AI tool to work with can be the difference between learning and being left behind.
    • Accessibility: Translation, transcription, and text-to-speech tools make content available to more learners, including those with disabilities or language barriers.
    • Immediate feedback: Students can learn at their own pace and iterate more quickly, closing gaps in understanding as they arise. And individual students can receive customized responses when they need or want them, even as teachers are assigned more students.
    • Operational efficiency: Schools are using AI to streamline administrative work, allowing them to focus more on teaching. AI isn’t just for students; it’s for teachers as well.

    Adoption reflects this value. Roughly 86% of education organizations are already using generative AI, making it one of the fastest-adopting sectors.

    Students, unsurprisingly, are already ahead of institutions — and often ahead of policy.

    The Downside: Dependency, Shortcuts, and Skill Erosion

    But the same strengths that make AI powerful also introduce real risk.

    A consistent theme across research is that AI doesn’t just make learning easier; it can make it shallower. Students themselves often describe AI-assisted work as “too easy,” which may sound like efficiency but can come at the cost of effort, original thinking, and the character-building struggle that fosters understanding and the ability to do hard things. Over time, that convenience can turn into dependence. Tasks get completed faster, but with less depth, and core skills like critical thinking, creativity, and problem-solving begin to erode.

    There are structural concerns as well:

    • Academic integrity: AI-generated work blurs the line between assistance and substitution.
    • Accuracy and trust: AI systems can hallucinate or provide incorrect information.
    • AI literacy gap: Even as usage rises, fewer than half of educators and students feel confident using it effectively.

In other words, adoption is outpacing understanding. That gap creates the potential for worse long-term outcomes, especially for at-risk students.

    Education has always aimed to do more than just teach children literacy and numeracy. It focuses on critical thinking and developing skills that apply in the real world. As AI becomes more widespread, it’s important to balance teaching these skills with recognizing how the “real world” continues to evolve.

    The Reality: A Tool That Amplifies Intent

The emerging consensus is less about “AI is good” or “AI is bad,” and more that AI amplifies whatever learning behaviors already exist.

    Used well, it deepens understanding … helping students explore ideas, iterate faster, and engage more meaningfully. Used poorly, it becomes a shortcut that replaces the very thinking education is meant to build.

    Even the research reflects this duality: with intentional design and guidance, AI can deepen learning rather than replace it.

    AI in education isn’t a binary shift. It’s a leverage point.

    It raises the ceiling for what motivated, curious students can achieve. It also lowers the floor for disengaged students to bypass learning entirely.

    The same dynamic playing out in classrooms is also appearing in workplaces.

    • AI is making the least curious people less curious, but it is also allowing creative people to do more and expand possibilities.
• AI isn’t going to steal your job, but a smart person who uses AI tools effectively will.

    That gap will widen. And the differentiator won’t be access to AI, but how it’s used, taught, and governed.

    Education has always been about more than answers.
    AI just makes that distinction impossible to ignore.

  • The Distance Between Then And Now

    We just got back from Portland, where we were visiting my oldest son — and meeting my newborn grandson.

    It was a great trip. Nothing monumental happened, but years from now, we’ll continue to look back on it fondly.

    I got to hold my grandson for the first time. I got to play with my granddaughter. And I got to remember how much work play takes when you are doing it intentionally. Lifting her up, bouncing her on my leg, jumping, reading, getting down on the floor to see the world from her height. Let’s just say, my body is reminding me of how much fun we had. But it was worth it.

    That alone would’ve been enough. But trips like this tend to stir up more than just memories — they stir perspective.

    What Once Was …

    It reminded me of my grandfather.

    Albert Getson wrestling as the Green Hornet in the 1950s.

    His body was wrecked by years of professional wrestling as the Green Hornet. By the time I knew him, “playing” looked different. He’d lie on the couch, and I’d climb on top of him. He called it “playing on the second floor.”

    Me and my Grandfather in 1967.

    At the time, to me, it just felt like fun. Looking back, it was an adaptation. It was love, finding a way.

    And then there’s the harder realization: by my age, my grandpa was already dead, and my dad was already gone because of a cancer that would be caught much earlier and treated today.

So, yeah, feeling sore after playing with my granddaughter hits a little differently. It’s a reminder that I still get to show up and participate … that I still have time. That’s not something to take for granted.

    It reminds me of Ray Kurzweil’s “Longevity Escape Velocity,” which is the idea that medical and biotechnological progress will reach a point where, each year, remaining life expectancy increases by more than one year, so you are effectively “outrunning” aging over time.

    … but try not to die before that happens.

    Don’t Touch That Dial …

    In part, that’s why this visit also had me thinking about technology.

    Who’s surprised?

    We were talking about air conditioning — how recent it really is in the grand scheme of things, and how quickly it’s become something we can’t imagine living without. Take it away, and most of us would struggle immediately.

    Or think about this: my great-grandmother was born before cars or planes existed.

    Or that widespread access to electricity in cities started to roll out in the 1920s.

    Think about how technologies like these have reshaped where and how people live. Entire regions went from inhospitable to must-see travel destinations.

    And then I think about my own timeline.

    I was born before hand-held calculators were invented or color TVs were standard.

My kids? They were born before Wi-Fi, before smartphones, before MP3s. They remember floppy disks, dial-up modems, and landlines. They remember printing directions or following someone who inevitably sped through a yellow light, leaving them guessing at the next turn.

    Some things haven’t changed, though. Human nature stays frustratingly the same. My father yelling at early robo-receptionists in the 1990s feels surprisingly modern.

    Through all of it, I’ve always taken a certain pride in being able to keep up. I may not set up my own tech anymore, but I still understand it well enough to be dangerous. My team sees it in the way I think through problems and, even more so, in the types of prompts I write.

    I enjoy working with AI. It gives me energy and hope.

    But this weekend was a reminder: there’s always another level.

    The More Things Change, The More They Stay The Same

    My youngest son works with me. My oldest son works in an AI-adjacent space. He is deeply technical, has the kind of mind that builds the systems the rest of us use, and he’s helped improve things you’d definitely recognize. For what it is worth, though, it has always surprised me how differently he and I use technology.

    We started talking about LLMs. I told him how impressed I was with the pace of progress and how much better it is than I imagined it could be in so little time.

    We talked about how the fear of missing out is so prevalent today because everyone knows somebody using AI for something they hadn’t thought of or doing something they wish they could.

    As our conversation progressed, I told him that a year and a half ago, I was focused on learning how to prompt better, but now I believe it’s more important to tell AI what you want and ask it to help figure out how to get it.
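
    To make that shift concrete, here’s a toy contrast between the two framings. The prompts are invented for illustration, not the actual ones from our conversation:

    ```python
    # Illustrative only: the same request framed two ways.
    # "Directive" spells out the how; "outcome-oriented" states the goal and asks
    # the model to help figure out the how.

    directive_prompt = """Write a 600-word commentary on AI infrastructure costs.
    Use exactly three sections titled Setup, Signal, and Takeaway.
    Open each section with a one-sentence summary, then two short paragraphs."""

    outcome_prompt = """I want a commentary on AI infrastructure costs that a busy
    reader can skim in two minutes and still remember the key point a week later.
    Before you write anything, propose how you'd structure and research it, and why.
    Then we'll draft it together."""
    ```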

    As any good son would, he explained it with just a hint of … let’s call it “constructive skepticism” about my approach. He criticized what I was doing as still telling the AI too much and putting too many of my constraints on its ability to do things. He explained that the next generation of agentic swarms is designed to bypass those limitations.

    He then gave me a little demo, and I had FOMO again.

    And that’s kind of the point.

    No matter how much you think you understand something or how proud you are about what you can do, there’s always more.

    I almost want to describe the demo in detail and explain some of the business ideas it gave me. But the point isn’t about the technology; it’s about change (and what we make of that).

    The pace of change right now is staggering. These tools aren’t just improving year over year — they’re improving constantly.

    And that compresses everything.

    Learning curves. Advantage windows. Expectations.

    It also makes perspective more valuable, not less.

    Because when you zoom out far enough (from wrestling grandfathers to newborn grandsons, from no cars to self-driving ones, from no air conditioning to climate-controlled everything) you start to see the pattern.

    We adapt.

    We build.

    We take things for granted.

    And if we’re lucky, we get the chance to notice it while it’s happening.

    This weekend, I did … And it felt like a gift.

    Onwards!

  • Allbirds: The Sole of a Company Rewritten in Code

    Sometimes, the truth is stranger than fiction. For some reason, that seems truer than it used to.

    This week, Allbirds, the eco-friendly sneaker brand formerly valued at over $4 billion, announced it is exiting the shoe industry to shift completely into artificial intelligence infrastructure.

    Allbirds will sell its remaining intellectual property and shoe assets (for $39 million) and rebrand as NewBird AI.

It’s not exactly a natural or expected move into AI, but the stock still shot up more than 500% on the news. Yikes! Stories like this make it harder to dismiss the AI-bubble talk, or the sense that Wall Street will froth at anything in the space.

    Where to Start …

    While this sounds like AI-fabricated fake news, it’s real.

    And while it’s easy to dismiss this as a play for attention (or even a nod to meme-stock energy), the rationale runs deeper than that.

    Still, history matters. A late start in AI infrastructure, combined with a legacy brand built for something entirely different, creates as many challenges as opportunities.

    That said, choosing to pivot rather than shut it down makes more sense than it first appears.

    Why Not Just Cut Your Losses and Start Over?

    At first glance, this kind of move feels random. But it might make sense from a financial engineering perspective.

    If you want to become an AI infrastructure company, why not just start one? Clean slate. Clean story. No baggage from a struggling consumer brand that used to be on top of the world. No confused customers wondering why their favorite shoes are suddenly talking about GPUs.

    But that’s not really how the game works.

    Because what Allbirds has (despite everything) is something a brand-new AI startup doesn’t: structure.

    • It’s already public.
    • It already has access to capital markets.
    • It already has a ticker, a shareholder base, and the ability to raise money without starting from zero.

To some people, that matters more than the logo on the door. And if you want to play that game, you can monetize the logo on the door too.

    In a world where AI infrastructure is capital-intensive from day one, speed is necessary for survival.

    Starting a new entity means building credibility, raising initial funding rounds, assembling a board, and proving your thesis (often before you even get to compete). In this market, even after doing all that, some would argue they’re still behind.

    Repurposing an existing public company dramatically compresses that timeline.

    Investors understand the story: compute demand is exploding, infrastructure is scarce, and the winners could be massive. You’re no longer asking the market to believe in better shoes—you’re asking it to believe in a bigger trend.

    Fixing your brand positioning and supply chain, and recovering a business in steep decline, is a monumental task.

    AI is hot, and apparently a much shorter leap.

    Keeping the existing entity also allows management to use what’s left (cash, brand equity, public listing) as a kind of launchpad.

    In some ways, it’s closer to a merger with the future than a continuation of the past.

    The real question is whether that’s enough to earn a place in a market that’s already moving this fast.

    Still, The Other Shoe Drops …

    Of course, the pivot comes with tradeoffs.

    You inherit expectations that no longer match reality. You risk alienating the people who believed in the original mission.
    And you invite a certain amount of skepticism about whether this is ‘vision’ or gimmicky opportunism.

    It’s a risky play … but you never know. And you can’t win if you don’t play.

    Does The Glass Slipper Fit?

    Zoom out, and it fits a broader pattern.

    We’re in a moment where identity is fluid, timelines are compressed, and the cost of being late feels existential. Companies aren’t just evolving — they’re jumping tracks.

    Within that paradigm, you could argue that starting from scratch is the slower and riskier move.

    While it sounds silly … can you blame them?

It doesn’t mean they’ll be a success. There will be more losers than winners in this transition period. But, at least they’re playing the game.

    While that’s not a ship I’d want to be riding on, I can’t blame them for trying to stay afloat.

  • Artemis II and the Pale Blue Dot

    Artemis II was a nine-day lunar flyby mission with a crew of four astronauts, launched on April 1, 2026. It was the first crewed NASA-led Artemis flight and the first human journey beyond low Earth orbit since Apollo 17 in 1972. 

    During their lunar flyby, the crew achieved the record for the farthest distance from Earth by humans, reaching 252,756 miles (406,771 km), surpassing Apollo 13’s previous record of 248,655 miles (400,171 km).

    Friday, they splashed down safely in the Pacific Ocean.

    Artemis II astronauts Jeremy Hansen, Christina Koch, Victor Glover, and Reid Wiseman are seen onstage Saturday at Ellington Field at Johnson Space Center in Houston.  – Ronaldo Schemidt/AFP/Getty Images

    “Victor, Christina and Jeremy, we are, we are bonded forever, and no one down here is ever going to know what the four of us just went through … And it was the most special thing that will ever happen in my life.” – Reid Wiseman

    This is the kind of story that’s easy to file under ‘space news’ – but for entrepreneurs, investors, and leaders, it’s also a case study in how fast the frontier moves when compounding technology meets long‑term conviction.

    As we move forward, we’ll talk more about the emerging business landscape around space (from connectivity and Earth‑observation data to in‑orbit manufacturing, commercial stations, logistics, and even space‑based energy). But today’s piece is really about something more fundamental: marking a milestone on the path and widening our sense of where we are and what’s possible.

    From Humble Beginnings …

To appreciate how far we’ve come, I think it’s helpful to look back at the early days of space travel. Voyager 1 launched in 1977. Just over a dozen years later, it had traveled farther than any human-made object before it: approximately 6 billion kilometers from Earth. At that point, at Carl Sagan’s urging, Voyager 1 turned its camera back toward home and took one last photo of the Earth … a pale blue dot.

    The resulting photo is impressive precisely because it shows so little in so much.

A photo showing a single blue pixel, the Earth, taken by Voyager 1 in 1990

    “Every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.”  – Carl Sagan

    Earth is in the far-right sunbeam – a little below halfway down the image. This image (and the ability to send it back to Earth) was the culmination of years of effort, technological advancement, and the dreams of mankind.

    Carl Sagan’s Pale Blue Dot speech is still profound and moving. Invest three minutes to watch and listen.

    Carl Sagan via YouTube
     

    Here’s the transcript:

    Look again at that dot. That’s here. That’s home. That’s us.

    On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives.

    The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there–on a mote of dust suspended in a sunbeam.

    The Earth is a very small stage in a vast cosmic arena.

    Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds.

    Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.

    The Earth is the only world known so far to harbor life.

    There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand.

    It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known.

    How powerful a statement from a grainy pixel.

    … To New Heights

    Today, we have people living in space, posting videos from the ISS, and high-resolution images of space and galaxies near and far. Artemis II shows we’re going back to the moon, and that that’s only the beginning. We also recently talked about the other new goals and explorations already on the proverbial docket.

    We take for granted the scale of the technological phase shift. The smartphone in your pocket has more computing power than the systems that first took us to the moon – and it has for decades.

    As humans, we’re wired to think locally and linearly. We evolved to live our lives in small groups, to fear outsiders, and to stay in a general region until we die. We’re not wired to think about the billions and billions of individuals on our planet, or the rate of technological growth – or the minuteness of that all compared to the vastness of space.  

    However, today’s reality necessitates that we think about the world, our impact, and what’s now possible for us.

    We’ve created better, faster ways to travel, instantaneous communication networks across vast distances, and megacities. Our tribes have gotten much bigger – and with that, our ability to enact massive change has grown as well. 

    Space was the proving ground for many of today’s breakthrough technologies. Now, similar waves are building in AI, medicine, genetic engineering, robotics, and even ‘world‑building’—not just in virtual environments, but in how we design cities, companies, and economies. As leaders, our job is to spot these trajectories early, place disciplined bets, and build systems that can adapt as the frontier moves.

    It’s hard to comprehend the scale of the universe and the scale of our potential – but that’s exactly why it’s worth exploring. The view from a ‘pale blue dot’ reminds us that most of what feels urgent today won’t matter in a decade, but the systems we build and the bets we make will. This week, ask yourself: where are you still thinking locally and linearly in a world that rewards global, exponential thinking? 

    Onwards!

  • When Does Helpful Become Too Much? Rethinking Our Relationship with AI

    I went to dinner with my good friend John Raymonds and our sons this week. John is a deep thinker and an experienced entrepreneur. Unsurprisingly, the conversation turned to AI. I thought I’d share some of the things we examined and discussed.

John and me at a Texas steakhouse with our sons.

    As we all admitted to using AI more often and for more things, what stood out wasn’t our agreement, but rather the tension around our use cases. While we’re all very bullish on AI (excited, even), we kept circling two questions: “How much AI is too much?” and “How important is it to preserve your own voice (and thinking) in the wake of AI content generation?” Neither had clean answers, yet both felt increasingly important and relevant.

    How much AI is too much?

    When you see AI everywhere all the time, you might have passed the point of diminishing returns (or you might be passionate about the discovery of something as important as the discovery of fire, electricity, or the Internet).

    The real issue is that “too much AI” is not about volume of usage but about when AI starts to (a) complicate your process or (b) dilute your voice. Ultimately, you must consciously decide where the line is for you.

Here’s an example we talked about, stemming from Zach and me writing this weekly commentary together. I’ve been experimenting with something I call “content pillars.” That is where I combine various sources (including research, articles, notes, recordings, etc.) on a subject, run them through various AI tools with layered prompts, and distill everything into a multi-faceted outline that provides a more complete, multi-dimensional view of the material and its meaning.

    The goal is to get a better sense of the big picture (and make it easier to spot patterns, overlaps, contradictions, tensions, and agreements). I include most of the process steps in the content pillar. The result is dense, sometimes overwhelming, but undeniably rich.

    I used to do a lot of that in my head. This consolidates all of that in one place, making it easy to save for reference or reuse. To me, this was a step forward.
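
    For the technically curious, here’s a stripped-down sketch of what a “content pillar” pipeline can look like. Everything in it (the layer prompts, the model name, the helper functions) is a hypothetical simplification for illustration, not my actual tooling:

    ```python
    # A stripped-down sketch of the "content pillar" idea: combine sources, run them
    # through layered prompts, and distill a multi-faceted outline. Prompts, model
    # name, and structure are illustrative simplifications, not the real pipeline.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    LAYERS = [
        "Summarize the key claims and supporting evidence across these sources.",
        "Identify patterns, overlaps, contradictions, tensions, and agreements.",
        "Distill everything into a multi-dimensional outline: themes, open questions, implications.",
    ]

    def run_layer(instruction: str, material: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": instruction},
                {"role": "user", "content": material},
            ],
        )
        return resp.choices[0].message.content

    def build_pillar(sources: list[str]) -> str:
        """Each layer's output is appended to the material fed to the next layer."""
        material = "\n\n---\n\n".join(sources)
        sections = []
        for instruction in LAYERS:
            output = run_layer(instruction, material)
            sections.append(instruction + "\n" + output)
            material += "\n\n" + output  # the pillar keeps feeding itself
        return "\n\n".join(sections)

    pillar = build_pillar(["research notes ...", "article excerpts ...", "meeting transcript ..."])
    print(pillar)
    ```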

    My son pushed back.

    At some point, he argued, the process becomes so complex that it requires its own layer of AI just to consume it. The time investment balloons. What started as a tool to simplify thinking and streamline our process can quietly become something that complicates it.

    He’s not wrong.

    But I don’t think that invalidates the process … or the end result. Beyond the output, there’s value in building and using these systems (exploring, experimenting, and stretching a different kind of mental muscle). The process itself becomes the product, at least in part.

    Still, the question lingers: if the tool designed to accelerate us begins to slow us down, where’s the line?

    I won’t really attempt to answer that here. However, I will note that I originally created the process to augment, automate, and extend parts of my work. Over time, I refined the process to the point where I wanted to share it. That’s when I was confronted by a stubborn truth I’ve battled many times: a process designed for you won’t necessarily please others.

    What I found is that very few people want information in the quantity, velocity, depth, or breadth that I would choose. In fact, it became clear to me that I wasn’t even the real audience anymore. As I continued to build these content pillars, they expanded as I began to view each pillar as a new, richer data set to feed the machine (rather than producing something I’d want to consume myself or share with others).

But the point remains: if you design something to satisfy a machine, it shouldn’t surprise you that it doesn’t satisfy a human.

    How much does voice matter?

    The second tension was more subtle and subjective.

    Something shifts as AI becomes more embedded in writing, editing, and content generation. It doesn’t take a literary genius to see or feel it.

    Sentences smooth out. Paragraphs tighten. Structure improves. But sometimes, the voice flattens. Most of us can tell when something is written by AI, even if we can’t always tell when something is written with the help of AI.

    Yet, even tools like Grammarly optimize toward familiarity. They rely on proven patterns, common phrasing, and widely accepted “good writing.” The result is predictably better writing… but also just predictable writing.

    Of course, there’s a tradeoff.

    AI enables depth. It helps us see angles we might have missed, incorporate ideas we wouldn’t have found, and build more comprehensive pieces. It has also been vital in catching when we’re making assumptions or claims without backing them up. Our writing becomes more informed, more structured, and often more valuable to the reader.

    But at what cost?

    I see it firsthand. My son spends extra time pulling our writing back toward something that feels like us (restoring tone, rhythm, personality). It’s deliberate work, and it can be frustrating.

    Our previous rhythm was relatively painless, but we’d also plateaued in the caliber and tone of our articles.

    So the question becomes: Is added value worth a diluted voice? Or is voice itself part of the value we’re trying to create?

    Could we spend extra time improving our prompts so that our voice is more carefully curated? If the voice is there but we didn’t write it, is it still our article?

    Different generations, different instincts

    What became clear over dinner wasn’t just disagreement—it was a difference in posture.

    John and I are leaning in hard. There’s a kind of curiosity that borders on recklessness. We’re exploring, testing limits, and integrating AI into everything we can. Not because we have to, but because we want to see what’s possible. John even built a niche AI app recently, just to prove he could.

    There’s joy in that.

    Our sons, on the other hand, seem to play a different role. Not resistant or disengaged (they both use these tools extensively), but more measured. More aware of the tradeoffs and the nature of their parents. More willing to question whether efficiency is always the goal.

    If we are accelerating, they are steering.

Perhaps it’s a result of them being so close to us that they end up playing “defense.” And maybe that balance matters more than either side being “right.”

    We hear it all the time: too much of a good thing becomes a bad thing.

    But AI complicates that idea. Because it’s not just a tool — it’s a multiplier of output, of speed, of ideas … and of noise.

    So how do we know when we’ve crossed the line?

    That is a question worth sitting with.

    Maybe it’s not about a universal threshold. Maybe it’s more personal, more situational. Maybe the better question isn’t “how much is too much?” but:

    • Is this helping me think more clearly, or just more quickly?
    • Is this enhancing my voice, or replacing it?
    • Am I using the tool, or adapting myself to fit the tool?

    There may not be definitive answers yet.

    But the act of asking — of pausing long enough to notice how these tools are shaping not just what we produce, but how we think — might be the most important habit we can build right now.

  • Happy Easter: Spring As An Opportunity For Rebirth

Today is Easter – and it also falls during Passover, the Jewish holiday that recounts the story of Exodus.

    The overlap is evident in Da Vinci’s Last Supper, which depicts a Passover Seder (the traditional meal that commemorates the Exodus) and Jesus’s last meal before his crucifixion. 

    Part of the Passover Seder tradition involves discussing how to share the story in ways that resonate with different people, recognizing that everyone understands and relates to things differently. This echoes our previous discussion on happiness and how that feeling varies for each of us.

    To do this, we examine the Passover story through the lens of four archetypal children — the Wise Child, the Wicked Child, the Simple Child, and the Child Who Does Not Know How to Ask.

    The four children reflect different learning styles — intellectual (Wise), skeptical (Wicked), curious (Simple), and passive (Silent) — and highlight the need to adapt communication to the diverse personalities and developmental stages of our audience.

    This seems even more relevant today, as we struggle to come to a consensus on what to believe and how to communicate with people (or machines) who think differently. 

On a lighter note, one of the memorable phrases from Exodus is Moses’ “Let my people go!” For generations, people assumed he was talking to the Pharaoh about his people’s freedom. But after a week of eating clogging foods like matzoh, matzoh balls, and even fried matzoh … for many Jews, “Let my people go” takes on a different meaning.

After Passover, and as we enter a new season, it’s a great time for a mental and physical ‘Spring Cleaning’: delve into your experiences and cultivate more of what you desire and less of what you don’t.

Here is to Spring, Rebirth, and Spring Cleaning.

    As a reminder, it doesn’t take a new year to start good habits.

    Hope you had a great and meaningful weekend.