Web/Tech

  • Which Jobs Are The Most at Risk of AI Disruption?

    Everywhere you look, someone is predicting which jobs AI will eliminate or automate away next. For many people, the real question is more personal: Is my job safe — or will my company survive?

    To answer that, it helps to zoom out.

    Back in 2018, I asked a simple question: Which industries were most at risk of disruption? This was pre‑AI boom, so the focus was on digitization and automation (rather than large language models or copilots). That article identified the key signals that an industry was ripe for disruption. That simple framework still applies today.

    Here’s a brief summary of the findings.

    1. Digitization Level – Industries like agriculture, construction, hospitality, healthcare, and government were among the least digitized, yet they still accounted for 34% of GDP and 42% of employees.
    2. Regulation Intensity – In heavily regulated industries, companies that find ways to work around legacy rules can become effective competitors quickly (e.g., Lyft or Tesla).
    3. Number of Competitors – Crowded markets with excess capacity or wasted resources (like taxis waiting for fares or empty airplane seats) are vulnerable to new business models. 
    4. Automatability – Even in 2018, many industries and tasks were ready to be automated but hadn’t been due to the cost or labor of switching to new technologies.

    Ultimately, disruption was about relieving a customer’s headache while lowering costs for the producer, the customer, or both.

    Today, AI’s inexorable march is unmistakable as it takes over more tasks and more of the content we create.

In 2024, the WEF evaluated which jobs were most prone to small or significant alteration by AI. IT and finance have the highest share of tasks expected to be ‘largely’ impacted by AI (which is not particularly surprising), followed by customer sales, operations, HR, marketing, legal, and (lastly) supply chain.

    Now, new Microsoft data takes a more granular look at which specific jobs are most exposed to generative AI.

(via Visual Capitalist)

    Microsoft assessed AI exposure using three indicators derived from Copilot usage:

    • Coverage: how often tasks associated with a job appear in Copilot conversations
    • Completion: how frequently Copilot successfully completes those tasks
    • Overall AI Applicability Score: a combined metric indicating how well AI can support or execute the tasks within a specific role
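Microsoft hasn’t published the exact formula behind the combined score, but the idea is easy to sketch. The weights and numbers below are hypothetical; the sketch simply shows how coverage and completion could roll up into a single exposure metric:

```python
from dataclasses import dataclass

@dataclass
class OccupationSignals:
    """Per-occupation signals, expressed as fractions in [0, 1]."""
    coverage: float    # share of the job's tasks that show up in AI conversations
    completion: float  # share of those tasks the assistant completes successfully

def applicability_score(sig: OccupationSignals,
                        coverage_weight: float = 0.5) -> float:
    """Combine coverage and completion into a single 0-1 exposure score.

    The real metric's formula isn't public; this weighted average
    is purely illustrative.
    """
    w = coverage_weight
    return w * sig.coverage + (1 - w) * sig.completion

# Hypothetical numbers for two roles at opposite ends of the spectrum:
interpreter = OccupationSignals(coverage=0.9, completion=0.8)
roofer = OccupationSignals(coverage=0.1, completion=0.3)

print(applicability_score(interpreter))  # language-heavy role: high exposure
print(applicability_score(roofer))       # hands-on role: low exposure
```

The point isn’t the specific weights; it’s that “exposure” is a composite of how often AI touches a job’s tasks and how well it actually finishes them.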

Language-heavy and research-based roles are at the highest risk of disruption. Think interpreters, historians, writers, and customer service representatives.

    But exposure does not automatically mean replacement. Augmenting roles with AI will become increasingly common.

    Even though creative and communication roles sit near the top, more technical roles will still feel a meaningful impact as well.

    Fear not … there is still a place for humans. In many cases, AI functions as a complement rather than a substitute, because these jobs still require judgment, creativity, and human interaction.

    Are you using AI in your daily process yet?

    At Capitalogix, we focus on amplifying intelligence. To us, that means the ability to make better decisions, take smarter actions, and continuously improve performance. In many ways, it comes down to better real-time decision-making. Practically, that means using technology to calculate, find, or know easy things faster … rather than predicting harder things better.

    You don’t have to predict every change. You do have to build the habit of experimenting with AI in the work you already do. The gap between winners and losers will be about learning speed, not job title.

    In the next few years, the biggest divide will not be between ‘AI jobs’ and ‘non‑AI jobs.’ It will be between people who learn to wield AI and people who pretend it is not their problem.

    A few years from now, when I write a follow‑up to this article, I suspect we will look back and clearly see the gap between winners and losers. It might come down to something as simple as this question:

    What are you doing to make sure that you ride the wave, rather than getting crushed by it?

  • “Real” Doesn’t Mean What It Used To Anymore

    Your Brand Style Guide Isn’t Enough Anymore

Not long ago, high-quality art, music, and video had a built-in bottleneck: skill. If you wanted a specific emotional effect—or a certain level of craftsmanship—you either had to learn the craft yourself or hire someone who had.

    That bottleneck is dissolving.

    I recently watched an AI-generated music video on YouTube; if I hadn’t paid attention, I might not have known it was entirely created by technology rather than humans. Here’s the link.

Artist & Song: Lolita Cercel – Pe peronu’ de la gară – AI Artist & Music Video

Don’t expect to be wowed. I didn’t love the music or the video. But it’s still a notable achievement. For example, recognize how much it feels like a professionally produced music video. While there are some clear limitations in the production, it doesn’t feel like a party trick (even though, technologically, it still is one). It feels like art.

    When I first watched it, I remember thinking it reminded me of a slightly older style of music. I couldn’t tell whether the words were Portuguese or Romanian. But I was focused on the little details, rather than its slick production or cool technology.

    The singer, Lolita Cercel, is entirely a construct of Tom, a Bacau-based video designer. She doesn’t exist except in AI.

    Neither did the music. Tom wanted to convey emotion through his song lyrics, and he decided AI was a powerful tool to turn his thoughts into things.

“I tried to make it as realistic as possible. The inspiration came from an 80-year-old collection of poems by a Romanian author who used colloquial, slum language. I liked the style and adapted it for ‘Lolita’ to make it authentic … It’s a mix of artificial intelligence and classical music. I work on several videos in parallel, shooting, editing, adjusting. Technology has allowed me to bring my ideas to life.”

— Tom

    That moment matters because the world doesn’t need perfection for the game to change.

When the market believes “you can’t tell” whether something was produced by humans or technology, the operating assumptions of media, marketing, and trust start rewriting themselves.

    Now, for the sake of this article, I’m not focused on the nature of art and artists. I’m focused on media and the nature of attraction and consumption, particularly in business contexts.

    The Skill Shift: From “Making” to “Specifying + Judging”

    Until recently, to create something truly captivating, you had to pay the best and the brightest and hope for the best.

    It’s only really in the last 20 years that the average business could effectively test an ad before releasing it. Ad agencies hired ‘Mad Men’ savants, and a team of writers, designers, composers, artists, editors, and more, to create a piece that would hopefully stand the test of time … or at least drive some sales.

    The new advantage is more subtle — and ultimately more powerful: the ability to specify what you want and judge whether you got it. Often, with a minimal team.

    Everyone can watch and react to content. Far fewer can define (clearly and repeatably) what they want to produce in the mind of another human (e.g., trust, reassurance, curiosity, confidence, or urgency). And even fewer can define what “good enough” means (or how they will measure it) before they generate the content.

    In a world where production becomes cheap, taste becomes expensive.

    From Brand Book to Brand Operating System

    Style guides and brand books still matter. Voice, formatting, color choices, visual identity—none of that disappears.

AI changes the game by altering the volume and nature of what gets produced. As people are exposed to more and more content of similar quality and production values, what really changes is the bar for what counts as “average”.

    With endless opportunities and distractions, the differentiator becomes consistency: your ability to deliver your promise again and again across channels and formats — without drifting into generic sameness.

    That’s where a Brand Operating System comes in.

    While a brand book is static, a Brand Operating System is a living specification that reliably turns identity into output and serves as a robust framework for AI initiatives.

    A BrandOS includes:

    • Audience psychology: what your audience hopes for, fears, rejects, and values
    • Proof standards: what they require to trust you (and what triggers skepticism)
    • Ambiguity tolerance: how much uncertainty they’ll accept before confidence drops
    • Response targets: the emotional outcomes you want to reliably provoke
    • Guardrails: what you never do (tone, claims, promises, compliance boundaries)
    • A recipe: the variables that make the output recognizably you

    Put differently: the BrandOS is how you scale production without losing the signal or the soul of what makes you … you.
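As a concrete illustration, a BrandOS can start life as structured data rather than a PDF: a spec that generation and review pipelines can actually read. The field names and example values below are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class BrandOS:
    """A minimal, hypothetical BrandOS spec: the brand book's rules
    expressed as data that tooling can consume."""
    audience_hopes: list[str]       # what your audience wants more of
    audience_fears: list[str]       # what triggers their skepticism
    proof_standards: list[str]      # what counts as evidence before a claim ships
    response_targets: list[str]     # emotions the content should reliably provoke
    forbidden_moves: list[str]      # things the brand never does
    signature_variables: list[str]  # what makes output recognizably "you"

# Entirely made-up example values:
example_brand = BrandOS(
    audience_hopes=["better decisions", "durable edge"],
    audience_fears=["hype", "black-box risk"],
    proof_standards=["cite data sources", "show the track record"],
    response_targets=["trust", "curiosity"],
    forbidden_moves=["absolute certainty", "manufactured intimacy"],
    signature_variables=["metaphors", "constructive challenge"],
)
```

Once the spec is data, every prompt, review checklist, and automated check can reference the same source of truth instead of someone’s memory of the brand book.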

    “Experience” Is the Product & Feedback Loops Are the Engine

    Here’s the thing: in a lot of these markets, results aren’t enough. Everyone can point to returns, claims, outputs—whatever. That stuff commoditizes fast.

    What actually sticks is how the system behaves over time. Does it feel consistent? Does it make sense? Do you understand what it’s doing when things go right and when they don’t? That’s where trust comes from.

    Under the surface, as AI or technology becomes more advanced, it’s harder for people to understand what it does. That’s why experience itself becomes the differentiator …

    Good systems adapt over time. They are not only focused on the immediate outcome. They focus on learning, growing, and adapting to the practical realities of the environment and audience. One way to accomplish that is to use feedback loops to provide the system with better context on what’s happening, how it’s performing, and which areas may need attention or improved data.

    I’ve been enjoying an app called Endel lately. It generates music on demand and can link to biometric signals. When I select the “Move” module, it uses data from devices such as an Apple Watch to adjust what it plays. As my pace changes — from walking to jogging — the cadence of the music shifts with me. It feels responsive, as if the system is listening, pacing, or even leading.

That’s the shift: closed-loop generation, meaning generation that adapts to feedback.

    We already do this in business:

    • In marketing: opens, engagement, retention curves, where people stop watching
    • In trading and investing: risk-adjusted targets, volatility stability, whether outcomes reflect skill or luck

    A Brand Operating System is what happens when you make those loops explicit, measurable, and repeatable.


    “Enough of Me” Has to Be Specified

    If you want AI to magnify you instead of replacing you, you have to define what “you” means.

    For me, “enough of me” looks like:

1. A signature point of view: a high-level perspective on what’s happening and what’s possible
    2. Metaphors: because they compress complexity into something people can carry
    3. Constructive challenge: not to tear things down, but to test what to trust

    Every person and every company has an equivalent set of signature variables—whether they’ve articulated them or not.

    If you don’t specify them, the system will default to what it thinks performs. And performance alone often converges on generic engagement rather than authentic resonance.

    Guardrails: The Power of “Forbidden Moves”

    Here’s a practical truth: At scale, the most important part of your BrandOS isn’t what it produces … It’s what it refuses to produce.

    Forbidden moves are how you protect trust. They ensure you get more of what you want and less of what you don’t—especially when content is manufactured at volume.

    Examples of forbidden moves (adapt these to your domain):

    • No absolute certainty in probabilistic environments
    • No hype language that undermines trust with sophisticated audiences
    • No claims without proof standards (define what counts as proof)
    • No manufactured intimacy that mimics a relationship you didn’t earn
    • No tone drift that breaks your promise (snarky, overly casual, overly salesy—whatever is off-brand)
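To make “forbidden moves” operational rather than aspirational, you can encode them as automated checks that run before content ships. This is a minimal sketch; the rule names and word lists are made-up placeholders you’d replace with your own guardrails.

```python
import re

# Hypothetical guardrail rules: each forbidden move becomes a named
# pattern the pipeline checks before any content ships.
FORBIDDEN_MOVES = {
    "absolute_certainty": r"\b(guaranteed|always wins|can't lose|risk-free)\b",
    "hype": r"\b(revolutionary|game-changing|once-in-a-lifetime)\b",
    "unearned_intimacy": r"\b(as your close friend|we've always been there for you)\b",
}

def check_guardrails(draft: str) -> list[str]:
    """Return the names of every forbidden move the draft trips."""
    violations = []
    for name, pattern in FORBIDDEN_MOVES.items():
        if re.search(pattern, draft, flags=re.IGNORECASE):
            violations.append(name)
    return violations

draft = "This revolutionary strategy is guaranteed to outperform."
print(check_guardrails(draft))  # trips both the certainty and hype rules
```

Real systems would use something richer than keyword matching (a classifier, or an LLM acting as reviewer), but the architecture is the same: explicit, named rules that every piece of output passes through.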

    Guardrails aren’t constraints. They’re how you keep the system aligned with the asset you’re actually building: credibility.

    Entropy Is Inevitable—So Detect It Early

    The risk of outsourcing capability is that the tool changes. Models update. Distribution shifts. Channels fatigue. What worked last quarter can quietly stop working next month.

    We’ve discussed this before, but almost everything decays or drifts over time. It’s important to be able to measure that. Here are two examples:

    • Marketing drift: if open rates drop materially or engagement falls, something is drifting.
    • Trading drift (high level): if risk-adjusted targets degrade, volatility exceeds targets, or outcomes start to look like luck rather than understanding, something is drifting.
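Measuring drift doesn’t require anything exotic. Here’s a minimal sketch, assuming you track a stable metric (like weekly open rates) over time: flag when the recent average falls well below the historical baseline. The threshold and numbers are illustrative.

```python
from statistics import mean, stdev

def drift_alert(history: list[float], recent: list[float],
                z_threshold: float = 2.0) -> bool:
    """Flag drift when the recent average falls more than `z_threshold`
    standard deviations below the historical baseline.

    `history` and `recent` could be weekly open rates, risk-adjusted
    returns, or any metric you expect to stay stable.
    """
    baseline, spread = mean(history), stdev(history)
    if spread == 0:  # a flat baseline: any change at all is drift
        return mean(recent) != baseline
    z = (mean(recent) - baseline) / spread
    return z < -z_threshold

# Illustrative weekly open rates:
open_rates = [0.42, 0.40, 0.43, 0.41, 0.39, 0.42, 0.40, 0.41]

print(drift_alert(open_rates, [0.31, 0.30, 0.29]))  # a sharp drop flags drift
print(drift_alert(open_rates, [0.41, 0.40, 0.42]))  # a stable window does not
```

The specific statistic matters less than the habit: pick a metric, define “normal,” and let the system tell you when reality has left that range.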

    No technique always works.

    But something is always working.

    The winners aren’t the ones who find a trick and freeze it. They’re the ones who build systems that notice change early, recalibrate, and keep moving forward.

    The Real Choice

    Your choice isn’t really whether or not to use AI. If you don’t, you’re going to get left behind.

AI will continue to make ‘real’ cheap; your BrandOS is how to keep your ‘meaning’ valuable.

    Your choice is whether you’ll let AI optimize you into generic engagement, and eventual irrelevancy … or whether you’ll build a BrandOS that protects what makes you you, while adapting fast enough to stay ahead of drift.

  • Carving a New Path: Humanizing The Exceptional

    How automated is too automated? 

“To speak to a representative, say … representative …”

    “Representative.” 

    “Sorry I didn’t catch that … would you like for me to repeat the options menu?” 

    “NO” 

    “Sorry I didn’t catch that … please state wh…” 

    “REPRESENTATIVE” 

    “Sorry, all of our representatives are busy helping others at the moment … Goodbye.” 

    *CALL ENDS* 

    How many of us have been in this scenario when on the phone with an airline, insurance company, or any other automated call center?

    Where are the people? Why can’t I speak to a human?  

    One of my son’s few memories of my Dad involved listening to him go through a scenario like this with a late-1990s auto-attendant. It was funny. My Dad became increasingly frustrated that he couldn’t get to an actual human being. It devolved into: “Shut up! Stop talking! I’ll give you $50 if you let me talk with a real person.” And it went downhill from there.

    Despite being frustrating, these systems save companies time, money, and resources. And in an ideal world, they streamline callers into organized categories, resulting in a more efficient experience.  They’re clearly working on some level because you’re seeing increased adoption of AI chatbots, robo-callers, and digital support systems. 

The evolution of this technology is already replacing people in marketing, sales, consulting, coaching, and even therapy, sometimes with mixed results.

    But does the efficiency or effectiveness it creates justify the lack of human connection?  Why did so many of the legacy call systems get rated so poorly?

There’s hope, though. I remember air travel before apps let me check in online and skip the counter. I remember banks before ATMs. In both of those situations, I was so anchored in my past experience that I was more aware of what I was missing than of what I was gaining.

    Recently, I came across an article highlighting a trendy new restaurant in Venice, Italy. They serve the best dishes from several popular restaurants across the city! They must have a massive kitchen and extensive staff to take on such a task, right? Wrong. This restaurant is fully automated; you order and receive food via … vending machines. 

    My first reaction was this … the convenience sounds fantastic, but wouldn’t that turn a valuable part of the experience into a commodity? It seems like you’d lose so much of the community, human interaction, and pampering that you enjoy when going to a nice restaurant. As I continued to read, however, the article explained that, to “humanize” the restaurant, it is used as a meeting place for food tastings, community gatherings, and question-and-answer sessions. As the world changes, so do the types of experiences people crave.

    Humanity and automation merged beautifully.   

    Semi-Automated Often Beats Fully Automated  

    Systemize the predictable so you can humanize the exceptional

    — Isadore Sharp, Four Seasons

Earlier, I mentioned automated call centers and how frustrating they can be. I’ve come in contact with several companies that have found a healthy balance in how they automate their systems.

    For example, an apartment complex near me uses an AI agent to screen calls and send them to the correct department.

    Often, the automation tags and organizes calls before routing them to their intended destination, or answers frequently asked questions without connecting them to a human. Either way, it reduces the need to transfer calls to find the correct department or gets the caller the information they need without tying up phone lines and wasting their and the receptionists’ time with basic questions.  
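The routing logic behind an agent like that can be surprisingly simple at its core. This toy sketch (with made-up departments, keywords, and FAQ text) shows the triage pattern: answer known questions directly, tag and route everything else, and fall back to a human.

```python
# A toy intent router, the kind of triage an AI phone agent might do.
# Departments, keywords, and FAQ text are invented for illustration;
# real systems would use an intent classifier, not naive substring checks.
ROUTES = {
    "leasing": ["tour", "apply", "application", "availability"],
    "maintenance": ["leak", "repair", "broken", "heat"],
    "billing": ["rent", "payment", "late fee", "balance"],
}

FAQ = {
    "office hours": "The office is open 9am-5pm, Monday through Friday.",
}

def route_call(transcript: str) -> str:
    text = transcript.lower()
    for question, answer in FAQ.items():   # answer FAQs without a human
        if question in text:
            return f"ANSWER: {answer}"
    for dept, keywords in ROUTES.items():  # otherwise tag and route the call
        if any(k in text for k in keywords):
            return f"ROUTE: {dept}"
    return "ROUTE: receptionist"           # when in doubt, hand off to a person

print(route_call("Hi, my sink has a leak"))
```

Notice the last line: the fallback is a human. That’s what separates “semi-automated” from the dead-end phone trees above.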

    There’s a lot of automation that can happen that isn’t a replacement of humans, but of mind-numbing behavior.

    — Stewart Butterfield

    This quote highlights the point of automation! Expedite the menial tasks, which in turn frees up the people working to provide a far more attentive experience.   

    Humans tend to seek ways to increase efficiency in every aspect of their world. But we are social creatures, craving meaningful connection and community. Therefore, the human element will not only persist but remain vital.

  • Language As A Limitation: Is Artificial Intelligence “Conscious”?

    Man acts as though he were the shaper and master of language, while in fact language remains the master of man. – Martin Heidegger

    Words are powerful. They can be used to define, obscure, or even to create reality. They can be taken alone, as precise definitions, or they can be part of a broader spectrum or scale. As such, they can create or destroy … uplift or demoralize. Their power is seemingly limitless.

    Language is like a hammer … you can use it to create or destroy something. Although it evolved to aid social interactions and facilitate our understanding of the world, it can also constrain how we perceive it and limit our grasp of technological advances and possibilities.

    Before I go into where language fails us, it’s essential to understand why language is so important.

    Language Facilitates Our Growth

    Because without our language, we have lost ourselves. Who are we without our words? – Melina Marchetta

    Language is one of the master keys to advanced thought. As infants, we learn by observing our environment, reading facial expressions and body language, and reflecting on our perceptions. As we improve our understanding and use of language, our brains and cognitive capabilities develop more rapidly.

Language also lets us cooperate and share expertise, and it’s that ability that has allowed us to build complex societies and advance technologically. However, as exponential technologies accelerate our progress, language itself may seem increasingly inadequate for the tasks at hand.

    What happens when we don’t have a word for something?

    The limits of my language mean the limits of my world – Ludwig Wittgenstein

English is famous for co-opting words from other languages; many languages have nuanced words that can’t be expressed well in others.

    • Schadenfreude – German for the pleasure derived from another person’s misfortune.
    • Layogenic – Tagalog for someone who looks good from afar but less attractive up close.
    • Koi No Yokan – Japanese for the sense, upon first meeting a person, that the two of you are going to fall in love.

    Expressing new concepts opens up our minds to new areas of inquiry. In the same vein, the lack of an appropriate concept or word often limits our understanding.

    Wisdom comes from finer distinctions … but sometimes we don’t have words for those distinctions. Here are two examples.

    • An art expert who has studied extensively for many years can somehow “know” that a work is a fake without being able to explain why.
    • A professional athlete can recognize the potential in an amateur better than a bystander can.

    How is that possible?

    They’re subconsciously recognizing and evaluating factors that others couldn’t assess consciously.

    Language as a Limitation

When it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images. – Niels Bohr

    In Buddhism, there’s the idea of an Ultimate Reality and a Conventional Reality. Ultimate Reality refers to the objective nature of something, while the Conventional Reality is tied inextricably to our thought processes, and is heavily influenced by our choice of language.

    Said differently, language is one of the most important factors in determining what you focus on, what you make it mean, and even what you choose to do. Ultimately, language conveys cultural and personal values and biases, and influences how we perceive “reality”.

    This is part of the challenge we have with AI systems. They have incredible power to shape our exposure to language and thought patterns. Consequently, it gives the platform significant power to shape its audience’s thoughts and perceptions. We talked about this in last week’s article. We’ll dive deeper in the future.

    To paraphrase philosopher David Hume, our perception of the world is drawn from ideas and impressions. Ideas can only ever be derived from our impressions through a process that often leads us to contradictions and logical fallacies.

    Instead of exploring the true nature of things or thinking abstractly, language sifts and categorizes experiences according to our prior heuristics. When you’re concerned about survival, those heuristics save you a lot of energy; when you’re trying to expand the breadth and depth of humanity’s capabilities, they’re potentially a hindrance. 

    The world around us is changing faster than ever, and complexity is increasing exponentially. It will only get harder to describe the variety and magnificence of existence with our lexicon … so why try?

    We personify the world around us, and it limits our creativity. 

    Many of humanity’s greatest inventions came from skepticism, abstractions, and disassociations from norms.

    A mind enclosed in language is in prison.  – Simone Weil

    What could we create if we let go of language and our intertwined belief systems?

    There has recently been a lot of press in which AI experts are saying that the next big jump in AI won’t come from large language models but from world models of intelligence.

    Likewise, AI consciousness and superintelligence have become more common topics of discussion and speculation.

    When will AI have human-like consciousness?

    I will try to answer that, but first, I want to deconstruct the idea a bit. The question itself makes assumptions based on how humans tend to personify things and rely on past patterns to evaluate what’s in front of us.

    Said differently, I’m not sure we want AI to think the way humans do. I think we want to make better decisions, take smarter actions, and improve performance. And that means thinking better than humans do.

    Back to the original question, I think the term “consciousness” is likely a misnomer, too.

    What is consciousness, and what makes us think that for technology to surpass us, it needs it? The idea that AI will eventually have a “consciousness” may be a symptom of our own linguistic biases. 

Artificial consciousness may not be anything like human consciousness, in the same way that alien lifeforms may not be carbon-based. An advanced AI could solve problems that even the brightest humans cannot. However, being made of silicon or graphene, it may not have a conscious experience. Even if it did, it likely wouldn’t feel emotions (like shame or greed) … at least not the way we describe them.

    Meanwhile, it seems like we pass some new hallmark of consciousness exhibited by increasingly sophisticated AIs every day. They even have their own AI-only social media network now.

    Humans Are The Real Black Box

    But if thought corrupts language, language can also corrupt thought – George Orwell

Humans are nuanced and surprisingly non-rational creatures. We’re prone to cognitive biases, fear, greed, and discretionary mistakes. We create heuristics from prior experiences (even when they don’t serve us), and we can’t process information as cleanly or efficiently as a computer. We unfailingly search for meaning, even where there often isn’t any. Though flawed, we’re perfect in our imperfections.

Even with expensive brain-scanning machines, scientists struggle to make sense of what they see. And when humans give explanations for their own behavior, they’re often inaccurate – more like retrospective rationalizations or confabulations than summaries of the complex computer that is the human brain.

    When I first wrote on this subject, I described Artificial Intelligence as programmed, precise, and predictable. At the time, AI was heavily influenced by the data fed into it and the programming of the human who created it. In a way, that meant AI was transparent, even if the logic was opaque.

    Today, AI can exhibit emergent capabilities, such as complex reasoning, in-context learning, and abstraction, that were not explicitly programmed by humans. These behaviors can be impressive and highly useful. They are beginning to extend far beyond what the original developers explicitly designed or anticipated (which is why we’re discussing user-sovereign systems versus institutional systems).

    In short, we don’t just need to understand how AI was built; we need frameworks for understanding how it acts in diverse contexts. If an AI system behaves consistently with its design goals, performs safely, and produces reliable results, then our trust in it can be justified even if we don’t have perfect insight into every aspect of its internal reasoning — but that trust should be based on rigorous evaluation, interpretability efforts, and awareness of limitations.

    Do you agree? Reach out and tell me what you think.

  • Staying Productive in the Age of Abundant Intelligence

    Recently, several savvy friends have sent me stories about using ChatGPT or other LLMs (like Grok) to beat the markets. The common thread is excitement: a stock pick that worked, or a small portfolio that made money over a few weeks.

    While that sounds impressive, I’m less excited than they are — and more interested in what it reveals about how we’re using AI and what’s becoming possible.

    Ultimately, in a world where intelligence is cheap and ubiquitous, discernment and system design are what keep you productive and sane.

    When I look at these LLM‑based trading stories, I find them interesting (and, yes, I do forward some of them internally with comments about key ideas or capabilities I believe will soon be possible or useful in other contexts).

    But interesting isn’t the same as exciting, useful, or trustworthy.

    While I’m still skeptical about using LLMs for autonomous trading, I’m thrilled by how far modern AI has come in reasoning about complex, dynamic environments in ways that would have seemed far-fetched not long ago. And I believe LLMs are becoming an increasingly important tool to use with other toolkits and system design processes.

LLM-based trading doesn’t excite me yet, because results like those in the stories above aren’t simple, repeatable, or scalable. Ten people running the same ‘experiment’ would likely get ten wildly different outcomes, depending on prompts, timing, framing, and interpretation. That makes it a compelling anecdote, not a system you’d bet your future on. With a bit of digging (or trial and error), you’ll likely find that for every positive result, there are many more stories of people losing more than they expected (especially over time).

    And that distinction turns out to matter a lot more than whether an individual experiment worked.

    Two very different ways to use AI today

    One way to make sense of where AI fits today is to separate use cases into two broad categories, like we did last week.

    The first is background AI. These are tools that quietly make things better without demanding much thought or oversight. Here are a few simple examples: a maps app rerouting around traffic, autocomplete finishing a sentence, or using Grammarly to edit the grammar and punctuation of this post. You don’t need a complex system around these tools, and you don’t have to constantly check or tune them. You just use them.

    There’s no guilt in that. There’s no anxiety about whether tools like these are changing your work in some fundamental way. They remove friction and fade into the background. In many cases, they’re already infrastructure.

    The second category is very different: foreground or high-leverage AI. These are areas where quality is crucial, judgment and taste are key, and missteps can subtly harm results over time.

    Writing is the most obvious example. AI can help generate drafts, outlines, and alternatives at remarkable speed. But AI writing also has quirks: it smooths things out, defaults to familiar phrasing, and often sounds confident even when it’s wrong or vague. Used lazily, it strips away your authentic voice. Even used judiciously, it can still subtly shift tone and intent in ways that aren’t always obvious until later.

    This is where the ‘just let the AI do it’ approach quietly breaks down.

    AI as a thought partner, not a ghostwriter

    For most use-cases, I believe the most productive use of AI isn’t to let it do the work for you, but to help you think. The distinction here is between an outsourcer (AI as the doer/finisher) and an amplifier (making you more precise, more aware, more deliberate).

    We’ve talked about it before, and it is similar to rubber-duck debugging. For example, when writing or editing these articles, I often use AI to homogenize data from different sources or to identify when I’ve been too vague (assuming the reader has knowledge that hasn’t been explicitly stated). AI also helps surface blind spots, improve framing, and generate alternatives when I’m struggling to be concise or to be better understood.

    Sometimes the AI accelerates my process (especially with administrivia), but more often, it slows me down in a good way by making me more methodical about what I’m doing. I’m still responsible for judgment and intent, but it helps surface opportunities to improve the quality of my output.

I have to be careful, though. Even though I’m not letting AI write my articles, I’m reading exponentially more AI-generated writing. As a result, it’s probably influencing my thought patterns and preferences, and changing my word usage more than I’d like to admit. It also nudges me toward structures, formatting, and ‘best practices’ that make my writing more polished — but also more predictable and less distinctive.

    Said differently, background AI is infrastructure, while foreground AI is where judgment, taste, and risk live. And “human-in-the-loop” framing isn’t about caution or control for its own sake. It’s about preserving quality and focus in places where it matters.

    From creating to discerning

    As AI becomes more capable, something subtle yet meaningful happens to human productivity. The constraint is no longer how much you can create or consume; it’s how well you can choose what to create and what’s worth consuming.

    I often say that the real AI is Amplified Intelligence (which is about making better decisions, taking smarter actions, and continuously improving performance) … but now it’s also Abundant Intelligence.

    As it becomes easier to create ideas, drafts, strategies, and variations, they risk becoming commodities. And it pays to remember:

    Noise scales faster than signal.

    In that environment, the human role shifts from pure creation to discernment: deciding what deserves attention, what’s a distraction, and what should be turned into a repeatable system.

    Tying that back to trading, an LLM can generate a thousand trade ideas; the hard part is deciding which, if any, deserve real capital.

    This is true in writing, strategy, and (more broadly) in work as a whole. AI is excellent at generating options. It is much less reliable at deciding which options matter over time, and at recognizing where it is biased or misinformed.

    Keeping your eyes on the prize

    All of this points to a broader theme: staying productive in a rapidly changing world is not about chasing every new tool or proving that AI can beat humans at specific tasks. It’s about knowing where automation helps and where it’s becoming a crutch or a hindrance.

    In a world of abundant intelligence, productivity is less about how much your AI can do and more about how clearly you decide what it should do — and what you must still own.

    Some problems benefit from general tools that “just work.” Others require careful system design, clear constraints, and ongoing human judgment. Some require fully bespoke systems, built by skilled teams over time, with decay‑filters to ensure longevity (like what we build at Capitalogix). Using one option when you really need another leads to fragile results and misplaced confidence.

    The advantage, going forward, belongs to people and organizations that understand this distinction — and design their workflows to keep humans engaged where they add the most value. In a world where intelligence is increasingly abundant, focus, judgment, and discernment become the real differentiators.

    Early adoption doesn’t require blind acceptance.

    Onwards!

  • Who’s Prompting Who? How AI Changes the Way You Think and Write

    We like to think we’re the ones training our dogs.

    It looks something like … Sit. Treat. Repeat.

    But every now and then, it’s worth asking a slightly uncomfortable question: what if the dog thinks it’s training you?

    From its perspective, it performs a behavior and you respond with a reward. Same loop. Same reinforcement. Just flipped.

    AI has quietly created a similar loop for writers.

    We think we’re prompting the machine. But more often, the machine is nudging us: toward familiar structures, familiar tones, familiar conclusions. It rewards certain styles by making them feel “done” before they’re actually saying anything new.

    That shift matters most for entrepreneurs, executives, and investors—people whose writing isn’t just content, but communication that moves decisions, capital, and teams.

    The real advantage isn’t AI writing for you

    The advantage isn’t AI writing for you.

    It’s AI forcing you to think before you write.

    Most people use AI like an answer vending machine: “Give me a post about X,” “Rewrite this,” “Summarize that,” “Make it punchier.”

    That’s fine if the goal is speed.

    But if the goal is signal—original insight, clear judgment, a point of view that people can trust—then outsourcing the thinking is the fastest way to produce more words with less value.

    Which brings us to the enemy.

    The enemy: careless, blustery & formulaic content

    AI makes nice-sounding snippets cheap and available. Your articles become more quotable, but often at the expense of conciseness and clarity.

    So the world fills up with writing that is:

    • polished enough to forward,
    • plausible enough to believe,
    • and forgettable enough that no one really needed it in the first place.

    That’s content inflation: more words competing for the same limited attention.

    It usually happens through two traps:

    1) Template trance

    AI is great at default frameworks: lists, pro/con structures, “here are 7 steps,” tidy summaries, executive tone, confident conclusions.

    Those patterns are useful. They’re also seductive.

    You start to expect them. You begin to think in them. And the output feels complete because it’s formatted like something you’ve seen a hundred times before. It even happens with sentence structure and punctuation, like the em-dash.

    2) Outsourced judgment

    But when left to its own devices, AI does more than write … it chooses.

    The emphasis, the framing, the “what matters,” the implied certainty, the vibe.

    And if you’re not careful, your job quietly shifts from “author” to “approver.”

    That’s how you end up with a lot more content … and a lot less you.

    A real mini-case: writing with my son Zach

    I’ve seen this clearly while writing the weekly commentary with my son, Zach. It’s become a recurring challenge, so today’s post documents the internal conflict we feel each week as we try to write something that both sounds like us and meets our new standards.

    Over time, Zach has become increasingly sensitive (and frustrated) with the AI-ification of output. AI subtly pushes writing toward what “performs” well—what fits algorithms, formats, engagement loops—rather than what actually sounds like us, and as I use more AI in the research process, it becomes more apparent.

    He’s right to be wary.

    The temptation is always there: AI can generate something polished in seconds. You can ship something that looks finished before you’ve done the thinking that makes it worth reading.

    What changed for us wasn’t “using AI less.”

    It was changing what role we gave it.

    Instead of letting AI impose structure, we used it to force judgment.

    We stopped asking it for answers and started asking it to push back:

    • What are you really trying to say?
    • What are you avoiding?
    • What’s your actual opinion versus a generic summary?
    • What would a skeptic challenge?
    • What example proves you mean this?

    That interrogation loop changed the writing. It felt less AI-produced and more real, human, and valuable.

    Here’s the uncomfortable truth: AI is designed to feel satisfying

    AI isn’t just intelligent.

    It’s friction-reducing and often sycophantic.

    Behind the scenes, it’s optimized to produce outputs that feel helpful, fast, and “complete”—often in fewer iterations. That’s great for productivity. It’s also exactly how you slide into template trance.

    There’s a reason AI output can feel like mental fast food, and it’s similar to what we’ve been yelling at kids for with TikTok and social media:

    • quick reward,
    • low effort,
    • easy consumption,
    • repeatable satisfaction.

    The problem isn’t that fast food exists.

    The problem is when it becomes the only thing you eat, or when you confuse it with nourishment.

    A simple discipline shift: declare what’s yours

    In an AI-saturated world, one of the most underrated credibility moves is simple:

    Declare what is yours.

    Not as a disclaimer. As a signal of integrity.

    Label the difference between:

    • your opinion vs a summary,
    • your questions vs your conclusions,
    • your hope or fear vs “the facts,”
    • your judgment vs a compilation.

    Many of our recent articles have focused on moving forward in an AI-centric society – and how to protect your humanity and productivity in the process.

    The core lesson is the same through all of them. The future isn’t just about production – it’s about trust & transparency.

    The fix: make AI question you first.

    If AI can herd you into defaults, you can also use AI to herd yourself into depth.

    The simplest change is to stop asking AI to answer first—and start requiring it to question you first.

    Here are the steps in the process:

    The Question-First “Who’s Prompting Who?” Writing Loop

    Use this whenever you want signal, not sludge. Ask AI to question you about what you want to write about. Have it ask you to:

    1. State the intent (plain English): What are you writing, for whom, and why?
    2. Explain it simply: Write a “smart 10-year-old” version of your point.
    3. Diagnose gaps: Identify: vague logic, missing steps, missing definitions, missing examples, missing counterpoints.
    4. Interrogate for specificity: Generate 3–7 targeted questions about: assumptions, tradeoffs, constraints, decision implications, audience objections.
    5. Refine and simplify: Re-write the thesis in one sentence. Then outline in 5 bullets.
    6. Working notes capture: Have AI keep a compact ledger: Thesis / Claims / Examples / Counterpoint / Takeaway.
    7. Only then, draft: Draft once the thinking is real.

    This question-first prompting loop is the difference between “AI makes words” and “AI makes thinking sharper.”
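
    The loop above can also be sketched in code. This is a minimal, hypothetical sketch: `ask_model` is a stand-in for whatever LLM client you actually use, and `author_answers` represents the human answering the model’s questions — the point is the order of operations (interrogation first, drafting last), not a real integration.

```python
# A minimal sketch of the question-first writing loop.
# `ask_model` is a placeholder for a real LLM call; it is stubbed here
# so the flow itself can run end to end.

def ask_model(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with your client of choice."""
    return f"[model response to: {prompt[:40]}...]"

def question_first_loop(topic: str, author_answers) -> dict:
    """Run the interrogation steps before any drafting happens.

    `author_answers` is a callable that returns the human's answer to
    each question -- the human stays in the loop throughout.
    """
    ledger = {"thesis": None, "claims": [], "examples": [],
              "counterpoint": None, "takeaway": None}

    # Steps 1-2: state the intent, then explain it simply.
    intent = author_answers(f"What are you writing about '{topic}', for whom, and why?")
    simple = author_answers("Explain your point to a smart 10-year-old.")

    # Steps 3-4: have the model diagnose gaps and interrogate for specificity.
    gaps = ask_model(f"Identify vague logic or missing steps in: {simple}")
    questions = ask_model(f"Ask 3-7 targeted questions about assumptions in: {intent}")

    # Steps 5-6: refine the thesis and keep a compact working ledger.
    ledger["thesis"] = author_answers("Restate your thesis in one sentence.")
    ledger["claims"].append(intent)
    ledger["counterpoint"] = gaps
    ledger["takeaway"] = questions

    # Step 7: only then, draft.
    draft = ask_model(f"Draft an article from this ledger: {ledger}")
    return {"ledger": ledger, "draft": draft}
```

    Notice that the draft is the last call, not the first — the model cannot produce words until the ledger exists.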

    The punchline: the dog isn’t the problem

    AI isn’t a villain, but if you use it recklessly, you can be.

    If you treat AI like a vending machine, it will happily feed you. And you may gradually trade judgment for velocity.

    But if you treat AI like an interrogator, it becomes something else entirely:

    A tool that helps you notice what you actually believe, pressure-test it, and articulate it in a way that sounds like a human with a spine.

    So yes: keep asking what you want AI to do.

    Just don’t forget the deeper question:

    Who’s prompting who?

    P.S. Keep reading for a behind-the-scenes look into how we used prompting to help write this article.


    Behind the Scenes: The Conversation That Wrote the Article (Without Writing It)

    This post didn’t start with an outline. It started with an interrogation. If you’re interested, here is a link to the chat transcript and prompt.

    In the thread that produced this piece, the key shift was role design: I didn’t want an answer machine. I wanted a Socratic interrogator — a system that makes me declare what I actually believe, separate my point of view from generic summary, and test the idea until it had a clear golden thread.

    That’s the point: the advantage isn’t AI writing for you. It’s AI interrogating you until your ideas are worth writing.

  • From Chatbots to Coworkers: The Architecture of True Delegation in Agentic AI

    For the last decade, artificial intelligence has been framed as a breakthrough in conversational technology (generating smarter answers, faster summaries, and more fluent chats). That framing is already obsolete.

    The consequential shift underway is not about conversation at all. It’s about delegation.

    AI is transitioning from a reactive interface to an agentic coworker: systems that draft, schedule, purchase, reconcile, and execute across tools, files, and workflows — without waiting for permission or direction.

    At Capitalogix, we built an agentic system that autonomously trades financial markets. Others have deployed AI that wires funds, adjusts pricing, and communicates with customers. The results are transformative. The risks are material.

    The critical question is no longer “How smart is the model?” It’s “What architecture governs its ability to act?” Digging deeper, do you trust the process enough to let it execute decisions that shape your business, your reputation, and your competitive position?

    That trust isn’t earned through better algorithms. It’s engineered through better architecture.

    Let’s examine what that actually requires.

    Delegation Beats Conversation

    Early AI systems were like automated parrots (they could retrieve and generate), but remained safely boxed inside a conversation or process. Agentic systems break those boundaries. They operate across applications, invoke APIs, move money, and trigger downstream effects.

    As a result, the conversation around AI fundamentally shifts. It’s no longer defined by understanding or expression, but by the capacity to perform multi-step actions safely, auditably, and reversibly.

    Those distinctions matter. Acting systems require invisible scaffolding (permissions, guardrails, audit logs, and recovery paths) that conversational interfaces never needed.

    In other words, delegation demands more than better models. It demands better control systems. To help with that, here is a simple risk taxonomy framework to evaluate agent delegations:

    • Execution risk: Agent does the wrong thing
    • Visibility risk: You can’t see what the agent did
    • Reversibility risk: You can’t undo what the agent did
    • Liability risk: You own the consequences of agent actions.
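
    To make the taxonomy above concrete, here is a hedged sketch of how it might be turned into a scoring checklist. The four category names come from the list above; the 1–5 scale, the threshold, and the example scores are illustrative assumptions, not a formal methodology.

```python
# A sketch of the four-part delegation-risk taxonomy as a checklist.
# Scales and threshold are illustrative, not a formal methodology.

from dataclasses import dataclass

@dataclass
class DelegationRisk:
    execution: int      # 1-5: how badly can the agent do the wrong thing?
    visibility: int     # 1-5: how hard is it to see what the agent did?
    reversibility: int  # 1-5: how hard is it to undo what the agent did?
    liability: int      # 1-5: how much of the consequence do you own?

    def total(self) -> int:
        return self.execution + self.visibility + self.reversibility + self.liability

    def requires_human_approval(self, threshold: int = 12) -> bool:
        """Gate high-risk delegations behind a human sign-off."""
        return self.total() >= threshold

# Example scores: wiring funds is high-risk on every axis; email triage is not.
wire_transfer = DelegationRisk(execution=5, visibility=3, reversibility=5, liability=5)
email_triage = DelegationRisk(execution=2, visibility=2, reversibility=1, liability=2)
```

    The value of a sketch like this isn’t the numbers — it’s forcing a delegation decision to name all four risks before the agent acts.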

    Organizations that treat agentic AI as “chat plus plugins” will underestimate both its upside and its risk. Those that treat it as a new layer of operational infrastructure (closer to an automation control plane than a productivity app) will be better positioned to scale it responsibly.

    Privacy’s Fork in the Road

    As agents gain autonomy, privacy becomes a paradox. Privacy-first designs (encrypted, device-keyed interactions where even vendors cannot access logs) unlock the potential for sensitive use cases like legal preparation, HR conversations, and personal counseling.

    But that same strength introduces tension. Encryption that protects users can also obstruct auditability, legal discovery, and incident response. When agents act on behalf of individuals or organizations, the absence of records is a major stumbling block.

    This forces a choice:

    • User-sovereign systems, where privacy is maximized and oversight is minimized.
    • Institutional systems, where compliance, accountability, and traceability are non-negotiable.

    Reconciling these paths will necessitate the development of new technical frameworks and policy requirements. Viewing privacy as an absolute good without addressing its trade-offs is no longer sustainable as systems become more autonomous.

    Standards Are Infrastructure, Not Plumbing

    History is clear on this point: standards create coordination, but they also concentrate power. Open governance can lower barriers and expand ecosystems. Vendor-controlled standards can just as easily become toll roads.

    Protocols like Google’s Universal Commerce Protocol (UCP) are not neutral technical conveniences; they are institutional levers.

    Whoever defines how agents authenticate, initiate payments, and complete transactions will shape:

    • Who captures margin,
    • Who bears liability, and
    • Who can compete.

    For businesses, protocol choices are strategic choices. Interoperability today determines negotiating leverage tomorrow.

    Ignoring this dynamic doesn’t make it disappear—it just cedes influence to those who understand it better.

    APIs, standards bodies, and partnerships quietly determine who becomes a gatekeeper and who remains interchangeable. The question of “who runs the agent” is inseparable from pricing power, data access, and long-term market structure.

    Organizations that control payment protocols become the new Visa. Those who define authentication standards become the new OAuth. And companies that treat these choices as “technical decisions” will wake up to discover they’ve locked themselves into someone else’s ecosystem — with pricing power, data access, and competitive flexibility determined by whoever wrote the rules.

    Last But Not Least: The UX Problem

    One of the most underestimated challenges in agentic AI is actually human understanding and adoption. Stated differently, human trust is the most underestimated challenge in AI adoption.

    The key is calibrating trust: users must feel confident enough not to intervene prematurely, yet vigilant enough to catch genuine errors.

    A related issue (especially when the process outpaces humans’ ability to follow what the AI is doing in real time) is that correctness becomes increasingly important. Why? Because errors executed at machine speed compound exponentially.

    Another challenge is that users lack shared mental models for delegation. They don’t intuitively grasp what an agent can do, when it will act, or how to interrupt it when something goes wrong … and thus, the average user still fears it.

    Trust is not built on raw performance. It’s built on predictability, transparency, and reversibility.

    Organizations that ignore this will face slow adoption, misuse, or catastrophic over-trust. Those who design explicitly for trust calibration will create a durable competitive advantage.

    The Architecture of The Future

    As we look at these various issues (privacy, UX, infrastructure), one thing becomes clear.

    The real transformation in AI is architectural, not conversational.

    Delegation at scale requires three integrated systems:

    • Leashes (controls, limits, audits),
    • Keys (privacy, encryption, access), and
    • Rules (standards, governance, accountability).

    Design any one in isolation, and the system fails (becoming either unusable or dangerously concentrated).

    At Capitalogix, we treat agentic AI as a system design challenge and infrastructure (not as a productivity feature). We measure risk, align incentives, and build governance alongside capability.

    This requires constant vigilance: updating rules, parameters, data sources, and privacy settings as conditions evolve. Likewise, every architectural decision needs an expiration date … because, without them, outdated choices become invisible vulnerabilities.

    This approach isn’t defensive — it’s how we scale responsibly.

    The winners in this transition won’t be those with the smartest models. They’ll be those who engineer trustworthy apprentices that can act autonomously while remaining aligned with organizational goals.

    Three Questions Before Deploying Agentic AI

    1. Can you audit every action this agent takes?
    2. Can you explain its decisions to regulators, customers, or boards?
    3. Can you revoke its authority without breaking critical workflows?
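
    The three questions above can be treated as a pre-deployment gate. Here is a minimal, hypothetical sketch — the boolean flags are placeholders; in practice each check would query your audit log, documentation, and access-control systems.

```python
# A sketch of the three questions as a pre-deployment gate.
# The flags are placeholders for real checks against your systems.

def preflight_check(agent: dict) -> list:
    """Return the unanswered questions blocking deployment."""
    blockers = []
    if not agent.get("audit_log_enabled"):
        blockers.append("Can you audit every action this agent takes?")
    if not agent.get("decisions_explainable"):
        blockers.append("Can you explain its decisions to regulators, customers, or boards?")
    if not agent.get("revocable_without_breakage"):
        blockers.append("Can you revoke its authority without breaking critical workflows?")
    return blockers  # deploy only when this list is empty

# Example: an agent that is auditable and explainable, but not safely revocable.
trading_agent = {"audit_log_enabled": True,
                 "decisions_explainable": True,
                 "revocable_without_breakage": False}
```

    A non-empty blocker list doesn’t mean “never deploy” — it means the architecture work isn’t finished yet.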

    The future isn’t smarter chat. It’s delegation you can trust.

    And trust, as always, is not just given; it’s engineered … then earned.

  • Generative AI’s Explosive Growth

    Generative AI has moved from novelty to necessity in under two years — and the data proves it.

    What started as a curiosity is now quietly rewiring how we work, create, and consume information. The result is an invisible revolution happening inside our apps, our workflows, and our daily decisions.

    The Invisible Revolution

    Gen AI apps are increasingly a part of my day.

    While I still supervise most AI tasks, these tools now touch nearly every aspect of my workflow.

    Gen AI also works quietly in the background — organizing and filtering emails, files, and news stories — even in places I’ve forgotten to ask it to help.

    That experience isn’t unique; it reflects a broad behavioral shift across age groups, industries, and geographies.

    Here’s a chart that shows the rise of generative AI apps compared to other app categories on popular platforms. There’s a phrase that captures what this chart reveals:

    One of these things is not like the others.

    Take a look.

    Chart showing generative AI app downloads vastly outpacing other mobile app categories on iOS and Google Play

    via visualcapitalist

    THE DATA: Growth Unlike Any Other Category 

    While this data covers the iOS and Google Play stores — which represent the majority of consumer app downloads — it doesn’t capture enterprise or web-based AI usage, where adoption may be even higher.

    AI‑generated text, images, and video have quickly become a major force in content creation and moderation. Many younger users may now consume a majority of their content through AI-mediated or AI-generated experiences (e.g., personalized feeds and AI‑curated playlists, as well as synthetic influencers and chat‑based companions).

    The trajectory becomes even more striking when you examine the financial projections.

    According to Sensor Tower, Generative AI apps are projected to reach 4 billion downloads, generate $4.8 billion in in-app purchase revenue, and account for 43 billion hours spent in 2025 alone. Generative AI applications are anticipated to reach over $10 billion in consumer spending by 2026. Additionally, by then, Gen AI is expected to be among the top five mobile app categories in terms of downloads, revenue, and user engagement.

    THE BEHAVIOR SHIFT: From Tools to Workflows

    Beyond installs and revenue, user engagement is accelerating, reflecting increased consumer willingness to pay for AI tools, subscriptions, and premium features as these apps become part of daily workflows.

    Key insight: This isn’t just another app category — it’s infrastructure. And, learning to work with AI is quickly becoming a baseline skill. Just as spreadsheets and email became non‑negotiable skills in earlier eras, fluency with AI tools will soon be assumed rather than optional.

    How to adapt, starting now

    • Audit where AI already touches your workflows—email, content, customer interactions—and identify obvious gaps or redundancies.
    • Pilot one or two Gen AI tools deeply rather than dabbling in many, and track the impact on time saved or output quality.
    • Establish simple guardrails for accuracy, privacy, and human review so AI becomes a reliable partner, not a blind spot.

    The momentum is undeniable. In the AI era, standing still means falling behind (but at an exponential pace). The question isn’t whether to adopt AI … it is how quickly you can adapt your workflows, teams, and strategies to use it well. Those who learn to partner with these tools now will define what ‘normal’ looks like in the years ahead.

    Onwards!

  • Getting To Know Yourself Better With Prompts

    As we approach year-end, my thoughts have been on finishing strong and planning for a great 2026.

    Last week, we looked at a prompt that created a new keystone habit. This week, I’m sharing another simple prompt that I found valuable and insightful. It’s designed to review your conversation history, conduct a mini-assessment, and give you a glimpse into your blind spot.

    Like last week’s prompt, as written, it’s somewhat generic and might hallucinate a little if it doesn’t have enough data. That’s easy to fix by improving the prompt. But for the purposes of getting started, this is good enough.

    Here is the base prompt to try in your primary AI tool.

    “From all of our interactions, what is one thing that you can tell me about myself that I may not know about myself?”

    Sometimes, less is more.

    There are lots of ways to use something like this. For example, you can tell it to be “brutally honest” or to “roast you” so that you hear it in humorous terms. With that in mind, here are a bunch of copy/paste prompt variants that produce the same kind of “surprising but grounded” self-insight, each from a different angle.

    Pattern + Blind Spot Variants

    • Strength-with-a-Shadow

    From our interactions, name one strength I clearly have and the most likely downside of that strength when overused. Give 2 examples from our chats and 1 practical guardrail.

    • Default Operating System

    What is my “default mode” behavior, under pressure, based on our interactions? What does it protect me from, and what does it cost me?

    • Hidden Constraint

    Identify one hidden assumption I seem to carry. Explain how it helps me, how it limits me, and one experiment to test it.

    • Blind Spot That Looks Like a Virtue

    What’s a behavior of mine that most people would praise, but that could quietly create problems? Be specific and non-psychological.

    Decision-making + execution variants

    • Where I Over-Engineer

    Where do I tend to add unnecessary complexity? Give one example pattern, why I do it, and a “2-step simplification rule” I can apply.

    • Where I Under-Commit

    Based on our interactions, where might I stay in analysis longer than needed? Give a “commitment trigger” and a script for making the decision.

    • One Question I Avoid

    What is one question I rarely ask, but should, given my goals? Provide the exact wording and when to use it.

    • My “Next Constraint”

    If I had to improve only one constraint in my system (time, focus, delegation, communication, risk), which one is highest leverage and why?

    Communication + Relationships Variants

    • How I’m Experienced by Others

    Based on my writing and requests, how might teammates/investors experience me on a good day vs a stressed day? Give 3 traits each and 1 calibration move.

    • Trust Friction

    Identify one way my communication style could unintentionally reduce trust or clarity. Give a rewrite pattern I can apply.

    • Authority vs Warmth Dial

    Where do I sit on the authority↔warmth spectrum in my messages? What’s the risk at my current setting, and how do I adjust without becoming fake?

    Energy + Focus Variants

    • My Energy Signature

    Infer my likely “energy curve” and where I do my best thinking. Give a schedule template that matches it and one rule for protecting it.

    • My Procrastination Costume

    What form of “productive procrastination” do I use (based on our chats)? Give a 60-second interrupt and a 10-minute re-entry plan.

    Identity + Growth Variants (Grounded, Non-Therapy)

    • My Core Values in Disguise

    What values do my patterns suggest (not what I claim)? Give 3 values, the evidence, and one way each can be expressed more cleanly.

    • My Edge

    What’s one capability I’m unusually strong at that I might be underpricing? Give one way to productize it and one way to teach it.

    Tighter “One Thing” Variants

    • One Sentence, Then Proof

    Tell me one thing about myself I might not know in a single sentence. Then justify it with 3 specific signals from our interactions and 1 counter-signal.

    • If-Then Insight

    If I keep doing X, then Y will happen (good and bad). Identify X and Y from our interactions, and give one small change.

    • The Uncomfortable Gift

    Give me one insight that’s slightly uncomfortable but genuinely helpful. Be kind, direct, and practical. End with one question for me.

    Hopefully, one of these prompts helped you find what you were looking for.

    It’s a good reminder that AI is not supposed to replace you … It’s supposed to amplify the best parts of you.

    A lot of these exercises and thought patterns are based on activities I used to do in my own planning, or with trusted advisors. As I use AI more in my everyday life, it has collected enough data to be a powerful analysis tool (and that is a scary reminder of how much it knows and remembers).

    I believe in examining your thinking – and using those insights to choose smarter and better actions. Prompts like this are a powerful tool for building that habit … but only if you remember that it is still you choosing and acting!

    Don’t outsource what makes you human to the machines … but that doesn’t mean you can’t use a helping hand.

  • What Not To Do: A Simple Lesson From Tech’s Recent Failures

    As technology gets bigger, its failures get bigger too — and sometimes so do the efforts to hide them. For example, a recent wave of stories exposed ‘AI’ products that were really human‑powered behind the scenes.

    I asked ChatGPT to make an image based on the context of this article, and it took me a little bit too literally. What do you think?

    A prominent example is Builder.AI, a London-based company once valued at $1.5 billion, which was exposed for secretly employing approximately 700 real people to perform services it marketed as AI-delivered. The company, which has since filed for bankruptcy, had received investments from major firms, including Microsoft.

    Other reports have highlighted similar patterns:

    • A company providing “AI-powered” voice interfaces for fast-food drive-thrus could only complete 30% of orders without human intervention.
    • Amazon was found to have secretly relied on real employees while promoting an “AI” product.
    • NEO, the home robot, was marketed as a butler that could perform any of your chores reliably … but took two minutes to fold a sweater, couldn’t crack a walnut, and was teleoperated the entire time.

    These incidents demonstrate a pattern of companies leveraging AI hype to win investment and customers, while hiding how much work is still done by humans.

    There’s nothing wrong with humans in the loop; the problem is pretending they aren’t there and selling that pretense as innovation.

    But hiding humans wasn’t the only way tech disappointed us this year.

    Finding New Ways to Fail

    This weekend, Waymo suspended its robotaxi service in San Francisco after a massive blackout appeared to leave many of its vehicles stalled on city streets.

    A recent ChatGPT update was sycophantic to a fault, assuring users that even their most mundane ideas were brilliant and incisive. Unfortunately, OpenAI responded by swinging the pendulum too far in the other direction. Their next update, GPT-5, was so cold that it prompted them to revive the ability to choose which model you used, and likely contributed to Altman’s recent “code red”.

    Or, you can point to the countless “meme coins” that made money only for their creators before being rug-pulled, such as the Hawk Tuah Coin.

    The Path To Success

    The common thread isn’t that technology is moving too fast — it’s that too many people are trying to leap over the boring parts.

    Many of this year’s failures were caused by people trying to skip the fundamentals.

    Meme coins didn’t fail because communities don’t matter — they failed because speculation was mistaken for value. The humanoid robots didn’t disappoint because robotics is a dead end — they disappointed because demos were sold as deployments. And the companies quietly swapping humans in for “AI” didn’t collapse because AI is useless — they collapsed because trust, once broken, is almost impossible to recover.

    A What Not To Do List

    These truths sound obvious, but the past year suggests many leaders still ignore them.

    • Don’t hide humans and call it AI.
    • Don’t sell demos as finished products.
    • Don’t mistake speculation for sustainable value.
    • Don’t optimize for virality at the expense of trust.

    For entrepreneurs, the lesson is uncomfortable but simple: reality wins in the long run. You can borrow attention for a moment, but you have to earn durability. Markets and customers will forgive slow progress, but they won’t forgive dishonesty.

    What To Do Instead

    • Validate in the real world.
    • Disclose human‑in‑the‑loop honestly.
    • Align metrics with durability.
    • Design for boring reliability before spectacle.

    In some ways, it’s easier than ever to ‘succeed’. With that said, does success simply mean building something that works, or does it mean building something that’s strategic and unique and captures the imagination and wallets of an audience big enough to fuel your desired bigger future?

    It’s the same paradox that AI‑created marketing faces. It’s now much easier to create something that sounds logical, but it is harder to stand out because you’re competing for attention in a growing sea of sameness and noise.

    The next generation of meaningful companies won’t be built by chasing the loudest narrative or the newest acronym. They’ll be built by founders who understand the difference between a prototype and a product, between timely and timeless, and between promise and proof.

    Hype can open the door. Execution keeps it open.