Thoughts about the markets, automated trading algorithms, artificial intelligence, and lots of other stuff

  • “Real” Doesn’t Mean What It Used To Anymore

    Your Brand Style Guide Isn’t Enough Anymore

    Not long ago, high-quality art, music, and video had a built-in bottleneck: skill. If you wanted a specific emotional effect—or a certain level of craftsmanship—you either had to earn the craft yourself or hire someone who had.

    That bottleneck is dissolving.

    I recently watched an AI-generated music video on YouTube; if I hadn’t paid attention, I might not have known it was entirely created by technology rather than humans. Here’s the link.

    Artist & Song: Lolita Cercel – Pe peronu’ de la gară – AI Artist & Music Video

    Don’t expect to be wowed. I didn’t love the music or the video. But it’s still a notable achievement. Notice, for example, how much it feels like a professionally produced music video. While there are some clear limitations in the production, it doesn’t feel like a party trick (even though, technologically, it still is a party trick). It feels like art.

    When I first watched it, I remember thinking it reminded me of a slightly older style of music. I couldn’t tell whether the words were Portuguese or Romanian. But I was focused on the little details, rather than its slick production or cool technology.

    The singer, Lolita Cercel, is entirely a construct of Tom, a Bacau-based video designer. She doesn’t exist except in AI.

    Neither does the music. Tom wanted to convey emotion through his song lyrics, and he decided AI was a powerful tool to turn his thoughts into things.

    “I tried to make it as realistic as possible. The inspiration came from an 80-year-old collection of poems by a Romanian author who used colloquial, slum language. I liked the style and adapted it for ‘Lolita’ to make it authentic … It’s a mix of artificial intelligence and classical music. I work on several videos in parallel, shooting, editing, adjusting. Technology has allowed me to bring my ideas to life.”

    Tom

    That moment matters because the world doesn’t need perfection for the game to change.

    When the market believes “you can’t tell” whether something was produced by humans or technology, the operating assumptions of media, marketing, and trust start rewriting themselves.

    Now, for the sake of this article, I’m not focused on the nature of art and artists. I’m focused on media and the nature of attraction and consumption, particularly in business contexts.

    The Skill Shift: From “Making” to “Specifying + Judging”

    Until recently, to create something truly captivating, you had to pay the best and the brightest and hope for the best.

    It’s only really in the last 20 years that the average business could effectively test an ad before releasing it. Ad agencies hired ‘Mad Men’ savants and teams of writers, designers, composers, artists, editors, and more to create a piece that would hopefully stand the test of time … or at least drive some sales.

    The new advantage is more subtle — and ultimately more powerful: the ability to specify what you want and judge whether you got it. Often, with a minimal team.

    Everyone can watch and react to content. Far fewer can define (clearly and repeatably) what they want to produce in the mind of another human (e.g., trust, reassurance, curiosity, confidence, or urgency). And even fewer can define what “good enough” means (or how they will measure it) before they generate the content.

    In a world where production becomes cheap, taste becomes expensive.

    From Brand Book to Brand Operating System

    Style guides and brand books still matter. Voice, formatting, color choices, visual identity—none of that disappears.

    AI changes the game by altering the volume and nature of what gets produced. As people are exposed to more and more content of similar quality and production values, what really changes is the bar for what counts as “average”.

    With endless opportunities and distractions, the differentiator becomes consistency: your ability to deliver your promise again and again across channels and formats — without drifting into generic sameness.

    That’s where a Brand Operating System comes in.

    While a brand book is static, a Brand Operating System is a living specification that reliably turns identity into output and serves as a robust framework for AI initiatives.

    A BrandOS includes:

    • Audience psychology: what your audience hopes for, fears, rejects, and values
    • Proof standards: what they require to trust you (and what triggers skepticism)
    • Ambiguity tolerance: how much uncertainty they’ll accept before confidence drops
    • Response targets: the emotional outcomes you want to reliably provoke
    • Guardrails: what you never do (tone, claims, promises, compliance boundaries)
    • A recipe: the variables that make the output recognizably you

    Put differently: the BrandOS is how you scale production without losing the signal or the soul of what makes you … you.
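
    To make that concrete, here is a minimal sketch of what a BrandOS might look like as a structured, living specification rather than a slide deck. It is written in Python, and the field names and example values are illustrative assumptions rather than an official schema:

      from dataclasses import dataclass, field

      @dataclass
      class BrandOS:
          """A living specification that turns identity into repeatable output."""
          audience_psychology: dict   # what your audience hopes for, fears, rejects, and values
          proof_standards: list       # what they require before they trust a claim
          ambiguity_tolerance: float  # 0.0 (needs certainty) to 1.0 (comfortable with open questions)
          response_targets: list      # emotional outcomes you want to reliably provoke
          guardrails: list            # forbidden moves the system must never make
          recipe: dict = field(default_factory=dict)  # variables that make output recognizably you

      brand = BrandOS(
          audience_psychology={"hopes": ["clarity"], "fears": ["hype"], "values": ["evidence"]},
          proof_standards=["cite data or a named source for any quantitative claim"],
          ambiguity_tolerance=0.6,
          response_targets=["trust", "curiosity"],
          guardrails=["no absolute certainty in probabilistic environments"],
          recipe={"point_of_view": "systems over predictions", "metaphors": True},
      )

    The syntax isn’t the point; the point is that every field is explicit enough for a team (or an AI) to check output against it.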

    “Experience” Is the Product & Feedback Loops Are the Engine

    Here’s the thing: in a lot of these markets, results aren’t enough. Everyone can point to returns, claims, outputs—whatever. That stuff commoditizes fast.

    What actually sticks is how the system behaves over time. Does it feel consistent? Does it make sense? Do you understand what it’s doing when things go right and when they don’t? That’s where trust comes from.

    Under the surface, as AI or technology becomes more advanced, it’s harder for people to understand what it does. That’s why experience itself becomes the differentiator …

    Good systems adapt over time. They are not only focused on the immediate outcome. They focus on learning, growing, and adapting to the practical realities of the environment and audience. One way to accomplish that is to use feedback loops to provide the system with better context on what’s happening, how it’s performing, and which areas may need attention or improved data.

    I’ve been enjoying an app called Endel lately. It generates music on demand and can link to biometric signals. When I select the “Move” module, it uses data from devices such as an Apple Watch to adjust what it plays. As my pace changes — from walking to jogging — the cadence of the music shifts with me. It feels responsive, as if the system is listening, pacing, or even leading.

    That’s the shift: closed-loop generation; generation that adapts to feedback.

    We already do this in business:

    • In marketing: opens, engagement, retention curves, where people stop watching
    • In trading and investing: risk-adjusted targets, volatility stability, whether outcomes reflect skill or luck

    A Brand Operating System is what happens when you make those loops explicit, measurable, and repeatable.
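
    To illustrate what “explicit, measurable, and repeatable” can mean in practice, here is a minimal closed-loop sketch: generate, measure, adjust, repeat. The generator, the metric, and the single “intensity” knob are stand-ins I made up for illustration; a real loop would plug in your actual content pipeline and engagement data:

      import random

      def generate(params):
          """Stand-in for a content generator; returns a draft tuned by one knob."""
          return f"draft with intensity {params['intensity']:.1f}"

      def measure(output):
          """Stand-in for a real metric (open rate, watch time, retention); here, just noise."""
          return random.uniform(0.2, 0.6)

      def feedback_loop(params, target=0.4, step=0.1, rounds=5):
          """Closed-loop generation: produce output, measure the response, adjust, repeat."""
          history = []
          for _ in range(rounds):
              score = measure(generate(params))
              history.append((dict(params), round(score, 2)))
              if score >= target:
                  break
              # Nudge one variable toward the target; real systems would test several.
              params["intensity"] = min(1.0, params["intensity"] + step)
          return history

      print(feedback_loop({"intensity": 0.5}))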


    “Enough of Me” Has to Be Specified

    If you want AI to magnify you instead of replacing you, you have to define what “you” means.

    For me, “enough of me” looks like:

    1. A signature point of view: a high-level perspective of perspectives and what’s possible
    2. Metaphors: because they compress complexity into something people can carry
    3. Constructive challenge: not to tear things down, but to test what to trust

    Every person and every company has an equivalent set of signature variables—whether they’ve articulated them or not.

    If you don’t specify them, the system will default to what it thinks performs. And performance alone often converges on generic engagement rather than authentic resonance.

    Guardrails: The Power of “Forbidden Moves”

    Here’s a practical truth: At scale, the most important part of your BrandOS isn’t what it produces … It’s what it refuses to produce.

    Forbidden moves are how you protect trust. They ensure you get more of what you want and less of what you don’t—especially when content is manufactured at volume.

    Examples of forbidden moves (adapt these to your domain):

    • No absolute certainty in probabilistic environments
    • No hype language that undermines trust with sophisticated audiences
    • No claims without proof standards (define what counts as proof)
    • No manufactured intimacy that mimics a relationship you didn’t earn
    • No tone drift that breaks your promise (snarky, overly casual, overly salesy—whatever is off-brand)

    Guardrails aren’t constraints. They’re how you keep the system aligned with the asset you’re actually building: credibility.
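
    They are also more useful when they are enforceable, not just written down. Here is a toy pre-publish check in which each forbidden move becomes an inspectable rule; the categories and trigger phrases are illustrative, not a real compliance filter, and anything flagged should route back to a human:

      # Toy guardrail check: each "forbidden move" becomes a simple, inspectable rule.
      FORBIDDEN = {
          "absolute certainty": ["guaranteed", "can't lose", "always works"],
          "hype language": ["revolutionary", "once-in-a-lifetime"],
          "manufactured intimacy": ["as your trusted friend"],
      }

      def guardrail_violations(draft):
          """Return the names of any forbidden moves detected in a draft."""
          text = draft.lower()
          return [rule for rule, phrases in FORBIDDEN.items()
                  if any(phrase in text for phrase in phrases)]

      print(guardrail_violations("This strategy is guaranteed to beat the market."))
      # ['absolute certainty'] -> send back to a human reviewer before it ships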

    Entropy Is Inevitable—So Detect It Early

    The risk of outsourcing capability is that the tool changes. Models update. Distribution shifts. Channels fatigue. What worked last quarter can quietly stop working next month.

    We’ve discussed this before, but almost everything decays or drifts over time. It’s important to be able to measure that. Here are two examples:

    • Marketing drift: if open rates drop materially or engagement falls, something is drifting.
    • Trading drift (high level): if risk-adjusted targets degrade, volatility exceeds targets, or outcomes start to look like luck rather than understanding, something is drifting.
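
    One simple way to measure that drift is to compare a recent window of a metric against its longer-run baseline and flag when the gap becomes material. The numbers and the 15% threshold below are made up for illustration; the pattern is the point:

      from statistics import mean

      def drift_alert(history, window=4, threshold=0.15):
          """Flag drift when the recent average falls materially below the baseline."""
          baseline = mean(history[:-window])   # longer-run performance
          recent = mean(history[-window:])     # the last few periods
          drop = (baseline - recent) / baseline
          return drop > threshold, round(drop, 3)

      # e.g., weekly email open rates (illustrative numbers)
      open_rates = [0.42, 0.44, 0.41, 0.43, 0.40, 0.36, 0.33, 0.31, 0.30]
      print(drift_alert(open_rates))   # (True, 0.226) -> something is drifting; investigate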

    No technique always works.

    But something is always working.

    The winners aren’t the ones who find a trick and freeze it. They’re the ones who build systems that notice change early, recalibrate, and keep moving forward.

    The Real Choice

    Your choice isn’t really whether or not to use AI. If you don’t, you’re going to get left behind.

    AI will continue to make ‘real’ cheap; your BrandOS is how you keep your “meaning” valuable.

    Your choice is whether you’ll let AI optimize you into generic engagement, and eventual irrelevancy … or whether you’ll build a BrandOS that protects what makes you you, while adapting fast enough to stay ahead of drift.

  • Carving a New Path: Humanizing The Exceptional

    How automated is too automated? 

    “To speak to a representative, say … representative …. “ 

    “Representative.” 

    “Sorry I didn’t catch that … would you like for me to repeat the options menu?” 

    “NO” 

    “Sorry I didn’t catch that … please state wh…” 

    “REPRESENTATIVE” 

    “Sorry, all of our representatives are busy helping others at the moment … Goodbye.” 

    *CALL ENDS* 

    How many of us have been in this scenario when on the phone with an airline, insurance company, or any other automated call center?

    Where are the people? Why can’t I speak to a human?  

    One of my son’s few memories of my Dad involved listening to him go through a scenario like this with a late-1990s auto-attendant. It was funny. My Dad became increasingly frustrated that he couldn’t get to an actual human being. It devolved into: “Shut up! Stop talking! I’ll give you $50 if you let me talk with a real person.” And it went downhill from there.

    Despite being frustrating, these systems save companies time, money, and resources. And in an ideal world, they streamline callers into organized categories, resulting in a more efficient experience.  They’re clearly working on some level because you’re seeing increased adoption of AI chatbots, robo-callers, and digital support systems. 

    The evolution of this technology is already replacing people in marketing, sales, consulting, coaching, and even therapy. Sometimes with mixed results.

    But does the efficiency or effectiveness it creates justify the lack of human connection?  Why did so many of the legacy call systems get rated so poorly?

    There’s hope, though. I remember air travel before apps let me check in online and skip the counter. I remember banks before ATMs. In both of those situations, I was so anchored in my past experience that I was more aware of what I was missing than of what I was getting.

    Recently, I came across an article highlighting a trendy new restaurant in Venice, Italy. They serve the best dishes from several popular restaurants across the city! They must have a massive kitchen and extensive staff to take on such a task, right? Wrong. This restaurant is fully automated; you order and receive food via … vending machines. 

    My first reaction was this … the convenience sounds fantastic, but wouldn’t that turn a valuable part of the experience into a commodity? It seems like you’d lose so much of the community, human interaction, and pampering that you enjoy when going to a nice restaurant. As I continued to read, however, the article explained that, to “humanize” the restaurant, it is used as a meeting place for food tastings, community gatherings, and question-and-answer sessions. As the world changes, so do the types of experiences people crave.

    Humanity and automation merged beautifully.   

    Semi-Automated Often Beats Fully Automated  

    Systemize the predictable so you can humanize the exceptional

    — Isadore Sharp, Four Seasons

    Earlier, I mentioned automated call centers and how frustrating they can be. I’ve come in contact with several companies that have found a healthy balance in how they automate their systems.

    For example, an apartment complex near me uses an AI agent to screen calls and send them to the correct department.

    Often, the automation tags and organizes calls before routing them to their intended destination, or answers frequently asked questions without connecting the caller to a human. Either way, it reduces the need to transfer calls to find the correct department or gets the caller the information they need without tying up phone lines and wasting their and the receptionists’ time with basic questions.

    There’s a lot of automation that can happen that isn’t a replacement of humans, but of mind-numbing behavior.

    — Stewart Butterfield

    This quote highlights the point of automation! Expedite the menial tasks, which in turn frees the people doing the work to provide a far more attentive experience.

    Humans tend to seek ways to increase efficiency in every aspect of their world. But we are social creatures, craving meaningful connection and community. Therefore, the human element will not only persist but remain vital.

  • Language As A Limitation: Is Artificial Intelligence “Conscious”?

    Man acts as though he were the shaper and master of language, while in fact language remains the master of man. – Martin Heidegger

    Words are powerful. They can be used to define, obscure, or even to create reality. They can be taken alone, as precise definitions, or they can be part of a broader spectrum or scale. As such, they can create or destroy … uplift or demoralize. Their power is seemingly limitless.

    Language is like a hammer … you can use it to create or destroy something. Although it evolved to aid social interactions and facilitate our understanding of the world, it can also constrain how we perceive it and limit our grasp of technological advances and possibilities.

    Before I go into where language fails us, it’s essential to understand why language is so important.

    Language Facilitates Our Growth

    Because without our language, we have lost ourselves. Who are we without our words? – Melina Marchetta

    Language is one of the master keys to advanced thought. As infants, we learn by observing our environment, reading facial expressions and body language, and reflecting on our perceptions. As we improve our understanding and use of language, our brains and cognitive capabilities develop more rapidly.

    It’s this ability to cooperate and share expertise that has allowed us to build complex societies and advance technologically. However, as exponential technologies accelerate our progress, language itself may seem increasingly inadequate for the tasks at hand.

    What happens when we don’t have a word for something?

    The limits of my language mean the limits of my world – Ludwig Wittgenstein

    English is famous for co-opting words from other languages; there are many cases of languages having nuanced words that you can’t express well in other languages.

    • Schadenfreude – German for pleasure derived by someone from another person’s misfortune.
    • Layogenic – Tagalog for someone who looks good from afar but appears less attractive up close
    • Koi No Yokan – Japanese for the sense upon first meeting a person that the two of you are going to fall in love 

    Expressing new concepts opens up our minds to new areas of inquiry. In the same vein, the lack of an appropriate concept or word often limits our understanding.

    Wisdom comes from finer distinctions … but sometimes we don’t have words for those distinctions. Here are two examples.

    • An artist who has studied extensively for many years can somehow “know” that a work is a fake without being able to explain why.
    • A professional athlete can better recognize the potential in an amateur than a bystander. 

    How is that possible?

    They’re subconsciously recognizing and evaluating factors that others can’t assess consciously.

    Language as a Limitation

    When it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images. – Niels Bohr

    In Buddhism, there’s the idea of an Ultimate Reality and a Conventional Reality. Ultimate Reality refers to the objective nature of something, while the Conventional Reality is tied inextricably to our thought processes, and is heavily influenced by our choice of language.

    Said differently, language is one of the most important factors in determining what you focus on, what you make it mean, and even what you choose to do. Ultimately, language conveys cultural and personal values and biases, and influences how we perceive “reality”.

    This is part of the challenge we have with AI systems. They have incredible power to shape our exposure to language and thought patterns. Consequently, it gives the platform significant power to shape its audience’s thoughts and perceptions. We talked about this in last week’s article. We’ll dive deeper in the future.

    To paraphrase philosopher David Hume, our perception of the world is drawn from ideas and impressions. Ideas can only ever be derived from our impressions through a process that often leads us to contradictions and logical fallacies.

    Instead of exploring the true nature of things or thinking abstractly, language sifts and categorizes experiences according to our prior heuristics. When you’re concerned about survival, those heuristics save you a lot of energy; when you’re trying to expand the breadth and depth of humanity’s capabilities, they’re potentially a hindrance. 

    The world around us is changing faster than ever, and complexity is increasing exponentially. It will only get harder to describe the variety and magnificence of existence with our lexicon … so why try?

    We personify the world around us, and it limits our creativity. 

    Many of humanity’s greatest inventions came from skepticism, abstractions, and disassociations from norms.

    A mind enclosed in language is in prison.  – Simone Weil

    What could we create if we let go of language and our intertwined belief systems?

    There has recently been a lot of press in which AI experts are saying that the next big jump in AI won’t come from large language models but from world models of intelligence.

    Likewise, AI consciousness and superintelligence have become more common topics of discussion and speculation.

    When will AI have human-like consciousness?

    I will try to answer that, but first, I want to deconstruct the idea a bit. The question itself makes assumptions based on how humans tend to personify things and rely on past patterns to evaluate what’s in front of us.

    Said differently, I’m not sure we want AI to think the way humans do. I think we want to make better decisions, take smarter actions, and improve performance. And that means thinking better than humans do.

    Back to the original question, I think the term “consciousness” is likely a misnomer, too.

    What is consciousness, and what makes us think that for technology to surpass us, it needs it? The idea that AI will eventually have a “consciousness” may be a symptom of our own linguistic biases. 

    Artificial consciousness may not be anything like human consciousness in the same way that alien lifeforms may not be carbon-based. An advanced AI could solve problems that even the brightest humans cannot. However, being made of silicon or graphene, it may not have a conscious experience. Even if it did, it likely wouldn’t feel emotions (like shame, or greed) … at least the way we describe them.

    Meanwhile, it seems like we pass some new hallmark of consciousness exhibited by increasingly sophisticated AIs every day. They even have their own AI-only social media network now.

    Humans Are The Real Black Box

    But if thought corrupts language, language can also corrupt thought – George Orwell

    Humans are nuanced and surprisingly non-rational creatures. We’re prone to cognitive biases, fear, greed, and discretionary mistakes. We create heuristics from prior experiences (even when it does not serve us), and we can’t process information as cleanly or efficiently as a computer. We unfailingly search for meaning, even where there often isn’t any. Though flawed, we’re perfect in our imperfections. 

    When scientists use expensive brain-scanning machines, they can’t make sense of what they see. When humans give explanations for their own behavior, they’re often inaccurate – more like retrospective rationalizations or confabulations than summaries of the complex computer that is the human brain.

    When I first wrote on this subject, I described Artificial Intelligence as programmed, precise, and predictable. At the time, AI was heavily influenced by the data fed into it and the programming of the human who created it. In a way, that meant AI was transparent, even if the logic was opaque.

    Today, AI can exhibit emergent capabilities, such as complex reasoning, in-context learning, and abstraction, that were not explicitly programmed by humans. These behaviors can be impressive and highly useful. They are beginning to extend far beyond what the original developers explicitly designed or anticipated (which is why we’re discussing user-sovereign systems versus institutional systems).

    In short, we don’t just need to understand how AI was built; we need frameworks for understanding how it acts in diverse contexts. If an AI system behaves consistently with its design goals, performs safely, and produces reliable results, then our trust in it can be justified even if we don’t have perfect insight into every aspect of its internal reasoning — but that trust should be based on rigorous evaluation, interpretability efforts, and awareness of limitations.

    Do you agree? Reach out and tell me what you think.

  • What are the Major Dangers and Opportunities in 2026?

    Over the past few weeks, we’ve discussed the threats and opportunities in AI. We’ve also recently taken a look at the themes that drove markets in 2025.

    To summarize:

    1. AI and data infrastructure were big winners in 2025. So were precious metals and emerging markets. Meanwhile, REITs, non-AI software, and oil & gas underperformed.
    2. AI is still an incredible opportunity, but adoption won’t give you a sustainable competitive advantage; that comes from using it better in focused and methodical ways.
    3. One of the key themes/challenges of the coming years is something I call “The Future of Work”. We have important thinking to do about better understanding AI, what it enables, and where humans fit in this changing equation.

    The Same Picture From a Different Perspective

    Visual Capitalist recently released two charts that I thought were interesting. The first looks at global GDP growth. The second examines the top global risks for the coming year across various domains.

    In the context of our recent discussions, I think they add value.

    Beyond surface-level data, they also help explain how fear and excitement affect sentiment.

    Who’s Powering Economic Growth in 2026?

    via visualcapitalist

    Global GDP growth is expected to be around 3% in 2026. A net positive. The infographic tells an interesting story as some of the larger economies slow and emerging markets grow.

    In fact, the U.S. and the EU account for less than 20% of expected growth. Meanwhile, the Asia-Pacific Region accounts for about 60% of the predicted growth, driven primarily by China and India.

    Both countries are incredibly populous and industrious, so their roles are unsurprising. However, the implications and second- and third-order effects of this might be surprising if the trends continue.

    Overall, the growth in 2026 is expected to be driven by emerging markets, supported by population and workforce growth, as well as rising consumption.

    Expected Stumbling Blocks to Growth

    via visualcapitalist

    The infographic depicts sentiment data collected by the World Economic Forum through interviews with over 1,300 experts.

    It doesn’t take much to realize the world is a powder keg of geopolitical and economic conflict. It’s undoubtedly been an underlying theme for many of our insights.

    In 2026, geoeconomic confrontation is the top global risk, driven by multiple factors, primarily the tenuous transatlantic alliances and competition between the U.S. and China.

    We live in a fascinating era. In addition to wars and the rapid growth of AI, we face increased polarization and misinformation. Meanwhile, environmental changes are evident through resource shortages and more severe weather events.

    Choosing Cautious Optimism

    It’s easy, looking at all of this together, to feel pulled in two directions at once.

    On one hand, the risks are real and increasingly interconnected. Some of the factors include: geopolitical tension, economic fragmentation, intentional misinformation, climate pressure, and a technology that’s moving faster than most institutions (or people) can comfortably absorb. That’s not noise. That’s signal.

    On the other hand, growth persists. Innovation continues. New regions, new populations, and new ideas are doing what they’ve always done: stepping into the gaps left by older systems.

    The center of gravity is shifting, not collapsing.

    This is where cautious optimism earns its place.

    History suggests that humanity rarely solves problems cleanly or quickly, but it does tend to solve them eventually … not through a single breakthrough or perfect plan, but through adaptation. You might even call it evolution.

    AI fits squarely into that pattern. It’s neither salvation nor doom. It’s leverage. And like all leverage, its impact depends on who uses it, how deliberately, and to what end. The real challenge ahead isn’t whether the technology works (because it already does) but whether humans can understand it well enough to integrate it responsibly into economic systems, organizations, and daily life.

    We’re entering a period where progress and instability coexist. That argues for selectivity over speed, yet curiosity over fear.

    I’m not calling for blind optimism or denying the challenges in front of us. But the opportunities are real, and they reward a willingness to think in longer arcs instead of short cycles.

    Onwards!

  • Staying Productive in the Age of Abundant Intelligence

    Recently, several savvy friends have sent me stories about using ChatGPT or other LLMs (like Grok) to beat the markets. The common thread is excitement: a stock pick that worked, or a small portfolio that made money over a few weeks.

    While that sounds impressive, I’m less excited than they are — and more interested in what it reveals about how we’re using AI and what’s becoming possible.

    Ultimately, in a world where intelligence is cheap and ubiquitous, discernment and system design are what keep you productive and sane.

    When I look at these LLM‑based trading stories, I find them interesting (and, yes, I do forward some of them internally with comments about key ideas or capabilities I believe will soon be possible or useful in other contexts).

    But interesting isn’t the same as exciting, useful, or trustworthy.

    While I’m still skeptical about using LLMs for autonomous trading, I’m thrilled by how far modern AI has come in reasoning about complex, dynamic environments in ways that would have seemed far-fetched not long ago. And I believe LLMs are becoming an increasingly important tool to use with other toolkits and system design processes.

    LLM-based trading doesn’t excite me yet, because results like those expressed in the example above aren’t simple, repeatable, or scalable. Ten people running the same ‘experiment’ would likely get ten wildly different outcomes, depending on prompts, timing, framing, and interpretation. That makes it a compelling anecdote, not a system you’d bet your future on. With a bit of digging (or trial and error), you’ll likely find that for every positive result, there are many more stories of people losing more than they expected (especially, over time).

    And that distinction turns out to matter a lot more than whether an individual experiment worked.

    Two very different ways to use AI today

    One way to make sense of where AI fits today is to separate use cases into two broad categories, like we did last week.

    The first is background AI. These are tools that quietly make things better without demanding much thought or oversight. Here are a few simple examples: a maps app rerouting around traffic, autocomplete finishing a sentence, or using Grammarly to edit the grammar and punctuation of this post. You don’t need a complex system around these tools, and you don’t have to constantly check or tune them. You just use them.

    There’s no guilt in that. There’s no anxiety about whether tools like these are changing your work in some fundamental way. They remove friction and fade into the background. In many cases, they’re already infrastructure.

    The second category is very different: foreground or high-leverage AI. These are areas where quality is crucial, judgment and taste are key, and missteps can subtly harm results over time.

    Writing is the most obvious example. AI can help generate drafts, outlines, and alternatives at remarkable speed. But AI writing also has quirks: it smooths things out, defaults to familiar phrasing, and often sounds confident even when it’s wrong or vague. Used lazily, it strips away your authentic voice. Even used judiciously, it can still subtly shift tone and intent in ways that aren’t always obvious until later.

    This is where the ‘just let the AI do it’ approach quietly breaks down.

    AI as a thought partner, not a ghostwriter

    For most use-cases, I believe the most productive use of AI isn’t to let it do the work for you, but to help you think. The distinction here is between an outsourcer (AI as the doer/finisher) and an amplifier (making you more precise, more aware, more deliberate).

    We’ve talked about it before, and it is similar to rubber-duck debugging. For example, when writing or editing these articles, I often use AI to homogenize data from different sources or to identify when I’ve been too vague (assuming the reader has knowledge that hasn’t been explicitly stated). AI also helps surface blind spots, improve framing, and generate alternatives when I’m struggling to be concise or to be better understood.

    Sometimes the AI accelerates my process (especially with administrivia), but more often, it slows me down in a good way by making me more methodical about what I’m doing. I’m still responsible for judgment and intent, but it helps surface opportunities to improve the quality of my output.

    I have to be careful, though. Even though I’m not letting AI write my articles, I’m reading exponentially more AI-generated writing. As a result, it’s probably influencing my thought patterns, preferences, and changing my word usage more than I’d like to admit. It also nudges me toward structures, formatting, and ‘best practices’ that make my writing more polished — but also more predictable and less distinctive.

    Said differently, background AI is infrastructure, while foreground AI is where judgment, taste, and risk live. And “human-in-the-loop” framing isn’t about caution or control for its own sake. It’s about preserving quality and focus in places where it matters.

    From creating to discerning

    As AI becomes more capable, something subtle yet meaningful happens to human productivity. The constraint is no longer how much you can create or consume; it’s how well you can choose what to create and what’s worth consuming.

    I often say that the real AI is Amplified Intelligence (which is about making better decisions, taking smarter actions, and continuously improving performance) … but now it’s also Abundant Intelligence.

    As it becomes easier to create ideas, drafts, strategies, and variations, they risk becoming commodities. And it pays to remember:

    Noise scales faster than signal.

    In that environment, the human role shifts from pure creation to discernment: deciding what deserves attention, what’s a distraction, and what should be turned into a repeatable system.

    Tying that back to trading, an LLM can generate a thousand trade ideas; the hard part is deciding which, if any, deserve real capital.

    This is true in writing, strategy, and (more broadly) in work as a whole. AI is excellent at generating options. It is much less reliable at deciding which options matter over time and where it is biased or misinformed.

    Keeping your eyes on the prize

    All of this points to a broader theme: staying productive in a rapidly changing world is not about chasing every new tool or proving that AI can beat humans at specific tasks. It’s about knowing where automation helps and where it’s becoming a crutch or a hindrance.

    In a world of abundant intelligence, productivity is less about how much your AI can do and more about how clearly you decide what it should do — and what you must still own.

    Some problems benefit from general tools that “just work.” Others require careful system design, clear constraints, and ongoing human judgment. Some require fully bespoke systems, built by skilled teams over time, with decay‑filters to ensure longevity (like what we build at Capitalogix). Using one option when you really need another leads to fragile results and misplaced confidence.

    The advantage, going forward, belongs to people and organizations that understand this distinction — and design their workflows to keep humans engaged where they add the most value. In a world where intelligence is increasingly abundant, focus, judgment, and discernment become the real differentiators.

    Early adoption doesn’t require blind acceptance.

    Onwards!

  • Who’s Prompting Who? How AI Changes the Way You Think and Write

    We like to think we’re the ones training our dogs.

    It looks something like … Sit. Treat. Repeat.

    But every now and then, it’s worth asking a slightly uncomfortable question: what if the dog thinks it’s training you?

    From its perspective, it performs a behavior and you respond with a reward. Same loop. Same reinforcement. Just flipped.

    AI has quietly created a similar loop for writers.

    We think we’re prompting the machine. But more often, the machine is nudging us: toward familiar structures, familiar tones, familiar conclusions. It rewards certain styles by making them feel “done” before they’re actually saying anything new.

    That shift matters most for entrepreneurs, executives, and investors—people whose writing isn’t just content, but communication that moves decisions, capital, and teams.

    The real advantage isn’t AI writing for you

    The advantage isn’t AI writing for you.

    It’s AI forcing you to think before you write.

    Most people use AI like an answer vending machine: “Give me a post about X,” “Rewrite this,” “Summarize that,” “Make it punchier.”

    That’s fine if the goal is speed.

    But if the goal is signal—original insight, clear judgment, a point of view that people can trust—then outsourcing the thinking is the fastest way to produce more words with less value.

    Which brings us to the enemy.

    The enemy: careless, blustery & formulaic content

    AI makes nice-sounding snippets cheap and available. Your articles become more quotable, but often at the expense of conciseness and clarity.

    So the world fills up with writing that is:

    • polished enough to forward,
    • plausible enough to believe,
    • and forgettable enough that no one really needed it in the first place.

    That’s content inflation: more words competing for the same limited attention.

    It usually happens through two traps:

    1) Template trance

    AI is great at default frameworks: lists, pro/con structures, “here are 7 steps,” tidy summaries, executive tone, confident conclusions.

    Those patterns are useful. They’re also seductive.

    You start to expect them. You begin to think in them. And the output feels complete because it’s formatted like something you’ve seen a hundred times before. It even happens with sentence structure and punctuation, like the em-dash.

    2) Outsourced judgment

    But when left to its own devices, AI does more than write … it chooses.

    The emphasis, the framing, the “what matters,” the implied certainty, the vibe.

    And if you’re not careful, your job quietly shifts from “author” to “approver.”

    That’s how you end up with a lot more content … and a lot less you.

    A real mini-case: writing with my son Zach

    I’ve seen this clearly while writing the weekly commentary with my son, Zach. It’s becoming a recurring challenge and a bigger issue, so today’s post documents and discusses the internal conflict we feel each week as we try to write something that both sounds like us and meets our new standards.

    Over time, Zach has become increasingly sensitive (and frustrated) with the AI-ification of output. AI subtly pushes writing toward what “performs” well—what fits algorithms, formats, engagement loops—rather than what actually sounds like us, and as I use more AI in the research process, it becomes more apparent.

    He’s right to be wary.

    The temptation is always there: AI can generate something polished in seconds. You can ship something that looks finished before you’ve done the thinking that makes it worth reading.

    What changed for us wasn’t “using AI less.”

    It was changing what role we gave it.

    Instead of letting AI impose structure, we used it to force judgment.

    We stopped asking it for answers and started asking it to push back:

    • What are you really trying to say?
    • What are you avoiding?
    • What’s your actual opinion versus a generic summary?
    • What would a skeptic challenge?
    • What example proves you mean this?

    That interrogation loop changed the writing. It felt less AI-produced and more real, human, and valuable.

    Here’s the uncomfortable truth: AI is designed to feel satisfying

    AI isn’t just intelligent.

    It’s friction-reducing and often sycophantic.

    Behind the scenes, it’s optimized to produce outputs that feel helpful, fast, and “complete”—often in fewer iterations. That’s great for productivity. It’s also exactly how you slide into template trance.

    There’s a reason AI output can feel like mental fast food, and it’s similar to what we’ve been yelling at kids for with TikTok and social media:

    • quick reward,
    • low effort,
    • easy consumption,
    • repeatable satisfaction.

    The problem isn’t that fast food exists.

    The problem is when it becomes the only thing you eat, or when you confuse it with nourishment.

    A simple discipline shift: declare what’s yours

    In an AI-saturated world, one of the most underrated credibility moves is simple:

    Declare what is yours.

    Not as a disclaimer. As a signal of integrity.

    Label the difference between:

    • your opinion vs a summary,
    • your questions vs your conclusions,
    • your hope or fear vs “the facts,”
    • your judgment vs a compilation.

    Many of our recent articles have focused on moving forward in an AI-centric society – and how to protect your humanity and productivity in the process.

    The core lesson is the same through all of them. The future isn’t just about production – it’s about trust & transparency.

    The fix: make AI question you first.

    If AI can herd you into defaults, you can also use AI to herd yourself into depth.

    The simplest change is to stop asking AI to answer first—and start requiring it to question you first.

    Here are the steps in the process:

    The Question-First “Who’s Prompting Who?” Writing Loop

    Use this whenever you want signal, not sludge. Ask AI to question you about what you want to write about, and work through these steps together:

    1. State the intent (plain English): What are you writing, for whom, and why?
    2. Explain it simply: Write a “smart 10-year-old” version of your point.
    3. Diagnose gaps: Identify: vague logic, missing steps, missing definitions, missing examples, missing counterpoints.
    4. Interrogate for specificity: Generate 3–7 targeted questions about: assumptions, tradeoffs, constraints, decision implications, audience objections.
    5. Refine and simplify: Re-write the thesis in one sentence. Then outline in 5 bullets.
    6. Working notes capture: Have AI keep a compact ledger: Thesis / Claims / Examples / Counterpoint / Takeaway.
    7. Only then, draft: Draft once the thinking is real.

    This question-first prompting loop is the difference between “AI makes words” and “AI makes thinking sharper.”
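
    If you want to operationalize the loop, one option is to bake it into a reusable system prompt and hand the resulting messages to whichever model you use. The wording and the helper below are one illustrative way to set it up, not a prescription:

      INTERROGATOR_PROMPT = """You are a Socratic interrogator, not a ghostwriter.
      Before any drafting, make me do the thinking:
      1. Ask what I am writing, for whom, and why (plain English).
      2. Ask for a "smart 10-year-old" version of my point.
      3. Diagnose gaps: vague logic, missing steps, definitions, examples, counterpoints.
      4. Ask 3-7 targeted questions about assumptions, tradeoffs, and likely objections.
      5. Ask me to restate the thesis in one sentence, then outline it in 5 bullets.
      6. Keep a compact ledger: Thesis / Claims / Examples / Counterpoint / Takeaway.
      Only after all of that, offer to help draft."""

      def interrogation_messages(topic):
          """Build the opening messages for a question-first session."""
          return [
              {"role": "system", "content": INTERROGATOR_PROMPT},
              {"role": "user", "content": f"I want to write about: {topic}. Question me first."},
          ]

      print(interrogation_messages("who's prompting who?"))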

    The punchline: the dog isn’t the problem

    AI isn’t a villain, but if you use it recklessly, you can be.

    If you treat AI like a vending machine, it will happily feed you. And you may gradually trade judgment for velocity.

    But if you treat AI like an interrogator, it becomes something else entirely:

    A tool that helps you notice what you actually believe, pressure-test it, and articulate it in a way that sounds like a human with a spine.

    So yes: keep asking what you want AI to do.

    Just don’t forget the deeper question:

    Who’s prompting who?

    P.S. Keep reading for a behind-the-scenes look into how we used prompting to help write this article.


    Behind the Scenes: The Conversation That Wrote the Article (Without Writing It)

    This post didn’t start with an outline. It started with an interrogation. If you’re interested, here is a link to the chat transcript and prompt.

    In the thread that produced this piece, the key shift was role design: I didn’t want an answer machine. I wanted a Socratic interrogator — a system that makes me declare what I actually believe, separate my point of view from generic summary, and test the idea until it had a clear golden thread.

    That’s the point: the advantage isn’t AI writing for you. It’s AI interrogating you until your ideas are worth writing.

  • From Chatbots to Coworkers: The Architecture of True Delegation in Agentic AI

    For the last decade, artificial intelligence has been framed as a breakthrough in conversational technology (generating smarter answers, faster summaries, and more fluent chats). That framing is already obsolete.

    The consequential shift underway is not about conversation at all. It’s about delegation.

    AI is transitioning from a reactive interface to an agentic coworker: systems that draft, schedule, purchase, reconcile, and execute across tools, files, and workflows — without waiting for permission or direction.

    At Capitalogix, we built an agentic system that autonomously trades financial markets. Others have deployed AI that wires funds, adjusts pricing, and communicates with customers. The results are transformative. The risks are material.

    The critical question is no longer “How smart is the model?” It’s “What architecture governs its ability to act?” Digging deeper, do you trust the process enough to let it execute decisions that shape your business, your reputation, and your competitive position?

    That trust isn’t earned through better algorithms. It’s engineered through better architecture.

    Let’s examine what that actually requires.

    Delegation Beats Conversation

    Early AI systems were like automated parrots (they could retrieve and generate), but remained safely boxed inside a conversation or process. Agentic systems break those boundaries. They operate across applications, invoke APIs, move money, and trigger downstream effects.

    As a result, the conversation around AI fundamentally shifts. It’s no longer defined by understanding or expression, but by the capacity to perform multi-step actions safely, auditably, and reversibly.

    Those distinctions matter. Acting systems require invisible scaffolding (permissions, guardrails, audit logs, and recovery paths) that conversational interfaces never needed.

    In other words, delegation demands more than better models. It demands better control systems. To help with that, here is a simple risk taxonomy framework to evaluate agent delegations:

    • Execution risk: Agent does the wrong thing
    • Visibility risk: You can’t see what the agent did
    • Reversibility risk: You can’t undo what the agent did
    • Liability risk: You own the consequences of agent actions.
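
    One way to apply that taxonomy before granting an agent new authority is to score each risk explicitly and gate the delegation on the result, as in the sketch below. The 0-3 scale, the hard rule, and the threshold are illustrative assumptions, not a standard:

      from dataclasses import dataclass

      @dataclass
      class DelegationRisk:
          """Score a proposed agent delegation on the four risks (0 = low, 3 = high)."""
          execution: int      # could it do the wrong thing?
          visibility: int     # can we see what it did?
          reversibility: int  # can we undo it?
          liability: int      # do we own the consequences?

          def approve(self, max_total=6):
              # Hard rule: actions that are both hard to see and hard to undo stay human-approved.
              if self.reversibility >= 2 and self.visibility >= 2:
                  return False
              total = self.execution + self.visibility + self.reversibility + self.liability
              return total <= max_total

      wire_transfer = DelegationRisk(execution=2, visibility=1, reversibility=3, liability=3)
      print(wire_transfer.approve())   # False -> keep a human in the loop for this one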

    Organizations that treat agentic AI as “chat plus plugins” will underestimate both its upside and its risk. Those that treat it as a new layer of operational infrastructure (closer to an automation control plane than a productivity app) will be better positioned to scale it responsibly.

    Privacy’s Fork in the Road

    As agents gain autonomy, privacy becomes a paradox. Privacy-first designs (encrypted, device-keyed interactions where even vendors cannot access logs) unlock the potential for sensitive use cases like legal preparation, HR conversations, and personal counseling.

    But that same strength introduces tension. Encryption that protects users can also obstruct auditability, legal discovery, and incident response. When agents act on behalf of individuals or organizations, the absence of records is a major stumbling block.

    This forces a choice:

    • User-sovereign systems, where privacy is maximized and oversight is minimized.
    • Institutional systems, where compliance, accountability, and traceability are non-negotiable.

    Reconciling these paths will necessitate the development of new technical frameworks and policy requirements. Viewing privacy as an absolute good without addressing its trade-offs is no longer sustainable as systems become more autonomous.

    Standards Are Infrastructure, Not Plumbing

    History is clear on this point: standards create coordination, but they also concentrate power. Open governance can lower barriers and expand ecosystems. Vendor-controlled standards can just as easily become toll roads.

    Protocols like Google’s Universal Commerce Protocol (UCP) are not neutral technical conveniences; they are institutional levers.

    Whoever defines how agents authenticate, initiate payments, and complete transactions will shape:

    • Who captures margin
    • Who bears liability, and
    • Who can compete

    For businesses, protocol choices are strategic choices. Interoperability today determines negotiating leverage tomorrow.

    Ignoring this dynamic doesn’t make it disappear—it just cedes influence to those who understand it better.

    APIs, standards bodies, and partnerships quietly determine who becomes a gatekeeper and who remains interchangeable. The question of “who runs the agent” is inseparable from pricing power, data access, and long-term market structure.

    Organizations that control payment protocols become the new Visa. Those who define authentication standards become the new OAuth. And companies that treat these choices as “technical decisions” will wake up to discover they’ve locked themselves into someone else’s ecosystem—with pricing power, data access, and competitive flexibility determined by whoever wrote the rules.

    Last But Not Least: The UX Problem

    One of the most underestimated challenges in agentic AI is actually human understanding and adoption. Stated differently, human trust is the most underestimated challenge in AI adoption.

    The key is calibrating trust: users must feel confident enough not to intervene prematurely, yet vigilant enough to catch genuine errors.

    A related issue (especially when the process exceeds humans’ ability to keep up with or understand what the AI is doing in real time) is that it becomes increasingly important for the answers to be correct. Why? Because errors executed at machine speed compound exponentially.

    Another challenge is that users lack shared mental models for delegation. They don’t intuitively grasp what an agent can do, when it will act, or how to interrupt it when something goes wrong … and thus, the average user still fears it.

    Trust is not built on raw performance. It’s built on predictability, transparency, and reversibility.

    Organizations that ignore this will face slow adoption, misuse, or catastrophic over-trust. Those who design explicitly for trust calibration will create a durable competitive advantage.

    The Architecture of The Future

    As we look at these various issues (privacy, UX, infrastructure), one thing becomes clear.

    The real transformation in AI is architectural, not conversational.

    Delegation at scale requires three integrated systems:

    • Leashes (controls, limits, audits),
    • Keys (privacy, encryption, access), and
    • Rules (standards, governance, accountability).

    Design any one in isolation, and the system fails (becoming either unusable or dangerously concentrated).

    At Capitalogix, we treat agentic AI as a system design challenge and infrastructure (not as a productivity feature). We measure risk, align incentives, and build governance alongside capability.

    This requires constant vigilance: updating rules, parameters, data sources, and privacy settings as conditions evolve. Likewise, every architectural decision needs an expiration date … because, without one, outdated choices become invisible vulnerabilities.

    This approach isn’t defensive — it’s how we scale responsibly.

    The winners in this transition won’t be those with the smartest models. They’ll be those who engineer trustworthy apprentices that can act autonomously while remaining aligned with organizational goals.

    Three Questions Before Deploying Agentic AI

    1. Can you audit every action this agent takes?
    2. Can you explain its decisions to regulators, customers, or boards?
    3. Can you revoke its authority without breaking critical workflows?
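
    Those three questions map directly onto the leashes, keys, and rules above. As a minimal sketch (the class and method names are hypothetical, not a real framework), an agent wrapper can permission-check every action, log it for audit, and let you revoke authority without dismantling the workflow:

      from datetime import datetime, timezone

      class GovernedAgent:
          """Minimal sketch: every action is permission-checked, logged, and revocable."""

          def __init__(self, allowed_actions):
              self.allowed = set(allowed_actions)   # the leash: explicit authority
              self.audit_log = []                   # the rules: an answerable record

          def act(self, action, payload):
              entry = {"time": datetime.now(timezone.utc).isoformat(),
                       "action": action, "payload": payload}
              if action not in self.allowed:
                  entry["result"] = "blocked"       # refused, but still auditable
              else:
                  entry["result"] = "executed"      # in reality, call the underlying tool here
              self.audit_log.append(entry)
              return entry

          def revoke(self, action):
              """Withdraw authority for one action without breaking the rest."""
              self.allowed.discard(action)

      agent = GovernedAgent({"draft_email", "schedule_meeting"})
      agent.act("draft_email", {"to": "client"})
      agent.revoke("draft_email")
      agent.act("draft_email", {"to": "client"})    # now blocked, and the block is logged
      print(len(agent.audit_log), "auditable entries")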

    The future isn’t smarter chat. It’s delegation you can trust.

    And trust, as always, is not just given; it’s engineered … then earned.