Thoughts about the markets, automated trading algorithms, artificial intelligence, and lots of other stuff

  • The “Chart Of The Century” In 2026: A Look At Consumer Price Inflation (CPI)

    If you ask most people whether life has gotten more affordable since 2000, the instinctive answer is ‘no.’ Groceries feel expensive, rent and healthcare feel punishing, and headlines about inflation haven’t helped.

Yet Mark Perry’s ‘Chart of the Century’ tells a more complicated story. Over the past 25 years, average wages have grown faster than overall inflation — meaning many Americans can buy more with each hour of work than they could in 2000. The challenge is that this progress is uneven and often invisible, especially in the everyday essentials where people feel prices most.

    The most current version reports price increases from 2000 through the end of 2025 for 14 categories of goods and services, along with the average wage and overall Consumer Price Index. Here are the key findings.

    • Wage growth has outpaced inflation by a significant margin (131% vs. 92.6%) from 2000 to 2025, resulting in a 20% increase in real purchasing power.
    • Sharp divergence exists between sectors: Technology and tradable goods have become much cheaper, while healthcare, education, and childcare costs soared.
    • Market competition and trade liberalization drive price declines, while regulated markets and limited competition drive price increases.
    • Despite objective improvements in purchasing power, many consumers still feel financial pressure due to changing consumption patterns and “quality of life creep”.
    • Policy challenges remain in balancing regulation with market forces, particularly in essential services like healthcare and education.

    Core Economic Metrics: The Big Picture

    The foundation of this analysis rests on three critical metrics that provide context for all other price trends:

    Metric                         Change (2000–2025)
    Consumer Price Index (CPI)     +92.6%
    Average Hourly Income          +131%
    Real Purchasing Power          +20%

    From January 2000 to now, the CPI for All Items has increased by over 90%. That is a big jump from the 59.6% increase it had logged by 2019, when I first shared this chart.

    These numbers tell a surprising story: despite widespread perceptions of economic hardship, Americans’ wages have grown significantly faster than inflation over these 25 years. This translates to a meaningful increase in real purchasing power – the ability to buy more goods and services with the same amount of work.

    However, this aggregate picture masks dramatic variations across different categories of goods and services. Let’s explore these divergent trends.

    The price of technology, electronics, and consumer goods — think toys and television sets — has tumbled over the past two decades. Why? These categories benefit from global competition, technological innovation, and manufacturing efficiencies.

    Meanwhile, the cost of hospital stays, childcare, and college tuition, to name a few, has surged. Why? These sectors share important characteristics: they are typically non-tradable services (cannot be imported), operate in markets with limited competition, and are often subject to extensive regulation.

    Below is Perry’s Chart of the Century.

    Mark Perry’s “Chart of the Century” Showing CPI of selected US Consumer Goods

    via Mark Perry

    Here is how to read this chart.

    • Think of the CPI line as the baseline: it’s the average rate of inflation for all items combined.
    • Any category’s line running far above that baseline is a “price climber” that has gotten significantly harder to afford over time.
    • Any line trending well below that baseline is a “price deflator” that has become more affordable relative to your income and other prices.

    Looking at the chart, the conventional wisdom holds: many ‘luxuries’ have gotten cheaper, while several everyday ‘necessities’ have become more expensive.

    For context, at the beginning of 2020, food, beverages, and housing were in line with inflation. They’ve now skyrocketed above inflation, which helps to explain the unease many households are feeling right now. Since last year, they’ve increased by another 3.1% (the CPI for ALL ITEMS increased by 2.7%). College tuition and hospital services have also continued to rise relative to inflation over the past few years.

    Market Dynamics: Understanding the Divergence

    What explains these dramatically different price trajectories? Here are several (though not all) of the key factors:

    Factors Driving Price Increases

    • Government regulation creates compliance costs and barriers to entry.
    • Quasi-monopolistic markets with limited price competition.
    • Non-tradable services are protected from foreign competition.
    • Limited technological disruption in certain service sectors.

    Factors Driving Price Decreases

    • Foreign competition putting downward pressure on prices.
    • Technological advancement reducing production costs.
    • Manufacturing optimization increasing efficiency.
    • Market competition forcing price discipline.
    • Trade liberalization expanding access to global markets.

    Looking at the prices that decrease the most, they’re all technologies. New technologies almost always become less expensive as we optimize manufacturing, components become cheaper, and competition increases. According to VisualCapitalist, at the turn of the century, a flat-screen TV would cost around 17% of the median income ($42,148). Since then, though, prices fell quickly. Today, a new TV typically costs less than 1% of the U.S. median income ($54,132).

    Longer-term trends also matter. For example, in 2020, I asked how the coronavirus would affect prices … and the actual impact turned out far less dramatic than many feared. If you don’t look at the rise in inflation but instead the change in trajectories, very few categories were heavily affected. While hospital services have increased significantly since 2019, they were already rising. There were some immediate impacts, but they went away relatively quickly. 

    Another key factor is average hourly income. Since 2000, overall inflation has increased by 92.6%, while average hourly income has increased by 131%. This means hourly income grew ~40% faster than prices, which works out to roughly 20% more purchasing power per hour worked (equivalently, a lower ‘time price’ for goods). So, if an hour of work earned 10 units of goods in 2000, it would now get you 12 units. This represents a mild increase in abundance since last year.
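    The arithmetic behind that claim is easy to check. Here is a minimal sketch using the article’s own figures (92.6% cumulative inflation, 131% wage growth); note that a ~20% gain in purchasing power corresponds to a ~17% drop in the time price, since the two are reciprocals:

    ```python
    # Check the purchasing-power math with the article's figures:
    # prices (CPI) rose 92.6% and hourly wages rose 131% from 2000 to 2025.
    cpi_growth = 0.926
    wage_growth = 1.31

    # Real purchasing power: how much more an hour of work buys now.
    real_power = (1 + wage_growth) / (1 + cpi_growth) - 1
    print(f"Real purchasing power change: {real_power:+.1%}")  # roughly +20%

    # The "time price" (hours of work per basket of goods) moved inversely.
    time_price_change = (1 + cpi_growth) / (1 + wage_growth) - 1
    print(f"Time price change: {time_price_change:+.1%}")      # roughly -17%
    ```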

    via Human Progress

    Although 10 of the 14 items rose in nominal prices over the past 25 years, only 5 had a higher time price when accounting for increases in hourly wages. Those items were medical care services, childcare and nursery school, college textbooks, college tuition and fees, and hospital services. 

    So how does this show up in real life?

    The Consumer Experience: Perception vs. Reality

    It’s striking to look at data like that, knowing that the average household is feeling a ‘crunch’ right now.

    My guess is that few consumers distinguish between perception and reality. However, feeling a crunch isn’t necessarily the same as being in a crunch.

    For instance, we must account for ‘quality of life creep,’ where people tend to splurge on luxuries as their standard of living improves. With the ease of online shopping and access to consumer credit, it has become increasingly easy to make impulse purchases, leading to reduced savings and feelings of financial scarcity. This phenomenon is a function of increased consumption (rather than inflation), yet it still leaves consumers feeling like they’re struggling to make ends meet. Our sense of what’s normal has risen, and that’s hard to unlearn. 

    Perry’s ‘Chart of the Century’ reveals the complex relationships between inflation, consumption, and economic growth. While households may feel financial strain, the data shows that income has outpaced inflation, and technology has made many goods more affordable. Nonetheless, there is still a real sense of economic struggle, especially in these last few months. 

    Economic Patterns: Regulated vs. Free Markets

    A clear pattern emerges when examining the relationship between market structures and price trends.

    Regulated Markets (such as healthcare and education) tend to have higher prices over time, less price competition, and limited consumer choice.

    Free Markets show price decreases over time, feature greater competition, and provide consumers with more options.

    This pattern raises important questions about the role of regulation in various economic sectors and the balance between consumer protection and market efficiency.

    With that in mind, how can policymakers address sectors experiencing significant price hikes, such as healthcare and education, without stifling innovation in tradable goods and services? 

    Future Outlook

    Beyond all that, here are three other key trends to watch.

    • AI Disruption: Telemedicine and online education could bend the cost curves for healthcare and education.
    • Trade Wars: While the tariffs were just struck down by the Supreme Court, that likely doesn’t mean the end of this saga.
    • Generational Shifts: Millennials prioritize experiences over goods, potentially easing service demand.

    As innovation and policy evolve, it will be interesting to see if we can make essential services as dynamically competitive as consumer electronics. While America excels in many ways, we lag behind several countries in healthcare and education in terms of cost and outcomes.

    I’d love to know what you think about this and how you see it playing out.

    Onwards.

  • A Look At The Megacities of Today & Tomorrow

    Studying long‑term population trends shows not just how we grew from small tribes to global megacities, but where capital, innovation, and geopolitical power are likely to concentrate next.

    From ancient civilizations to modern metropolises, population dynamics have influenced everything from economic prosperity to social structures.

    A Window Into Our Past Gives Us a Glimpse at Our Future.


    By studying this critical aspect of human history, we can gain valuable insights into the past, present, and future of societies.

    Population growth is a complex, multifaceted phenomenon with far‑reaching implications. It reveals the demographic forces that have shaped our world — and continue to influence where we’re headed.

    Historically, human populations grew steadily but relatively slowly, until that changed dramatically.

    Scientists estimate that humans have existed for at least 130,000 years. However, it took until 1804 for our population to reach 1 billion. We doubled that population by 1927 (123 years later), and then doubled it again only 47 years later (in 1974). 

    Early population growth was largely driven by the agricultural revolution. Since the Industrial Revolution, advances in health and safety and new technologies have significantly improved the quality of life, spurring rapid population growth.

    For investors, operators, and policymakers, demographic data is not trivia. It is one of the few datasets that let us reliably look decades into the future — and it shapes where markets, labor, and infrastructure demand will emerge.

    Here is a quick overview of key factors to consider.

    Demographics: A Glimpse of the Future

    It’s hard to predict some things accurately. Accordingly, one goal in data science is to figure out what we can “know” to “guess” less.

    Population growth is a prime example. One of the easiest ways to predict how many 60-year-olds there will be in 40 years is to look at how many 20-year-olds there are today. Obviously, the number won’t be exact, but it’s a pretty good head start.
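    To make that concrete, here is a toy cohort projection. The cohort size and per-decade survival rate below are made-up numbers for illustration, not real demographic data:

    ```python
    # Toy cohort projection: today's 20-year-olds are tomorrow's 60-year-olds.
    # The cohort size and survival rate are illustrative assumptions.
    def project_cohort(count_today: int, survival_per_decade: float, decades: int) -> int:
        """Estimate how many people in a cohort remain after N decades."""
        return round(count_today * survival_per_decade ** decades)

    twenty_year_olds_today = 4_000_000   # hypothetical cohort size
    survivors_at_60 = project_cohort(twenty_year_olds_today, 0.97, 4)
    print(survivors_at_60)               # a rough but useful head start
    ```

    The estimate won’t be exact — migration and shocks shift it — but, as the article notes, it is a far better starting point than a blind guess.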

    Likewise, demographic forecasts are powerful but not omniscient; wars, pandemics, and climate shocks can redraw maps. The point isn’t precision, but probabilities — and they still overwhelmingly favor certain regions and city types.

    This principle of using known data to make educated predictions applies to many aspects of future planning, including urban development and resource allocation. By leveraging current demographic information, we can better prepare for the challenges and opportunities that will likely impact the cities of tomorrow.

    • Economic Implications: A growing population can expand the workforce, fueling economic growth. However, it can also strain resources, requiring increased investment in infrastructure, education, and healthcare.
    • Social and Environmental Pressures: Demographic shifts, such as aging populations or youth bulges, can profoundly affect social structures, healthcare systems, and the environment.
    • Technological Acceleration: Dense urban populations can accelerate innovation, data creation, and the adoption of new technologies.

    These demographic patterns are not abstract — they have concrete consequences for economies, societies, and the environment.

    Why It Matters

    Population growth is more than just a numerical metric. It is a fundamental lens through which we can analyze:

    • Historical Development: By understanding past population trends, we can better appreciate the factors that have shaped human civilizations.
    • Future Planning: Governments, businesses, and organizations can use population data to inform decisions on resource allocation, infrastructure development, and social policies.

    Have World Population Growth Numbers Peaked?

    World population growth rates peaked in the late 1960s and have declined sharply over the past four decades, but we’re still on an upward trend. We’re expected to reach 9 billion people by 2050, but much of that growth comes from developing countries – and it’s almost exclusively from urban areas. 

    Axios created an interactive graphic that shows how birth rates and population structures vary across countries.

    via Axios (Click for an Interactive Graph)

    There are more people alive today than at any point in history — and the population is still growing, just more unevenly. Much of the future population growth will not occur in remote villages, but in dense urban centers. That shift is already reshaping the map of the world’s largest cities.

    Urbanization: Megacities

    For more than a century, humanity has been quietly reorganizing itself from villages into vast, dense cities. For example, in the 1800s, only about 10% of the population lived in urban areas. Since 2014, over 50% of the world’s population has lived in urban areas – today it’s approximately 55%. That number is growing.

    Ironically, as we grow more digitally connected, our world is shrinking, and our populations are concentrating. 

    What Megacities Change: Power, Capital, and Talent

    An interesting consequence of this rapid urbanization and population growth in developing countries has been the increased development of Megacities – defined as cities with populations greater than 10 million.

    Today, there are 33 megacities – more than triple the number in the 1990s. They will increasingly anchor a disproportionate share of the world’s talent, capital, and innovation.

    This creates a set of interesting opportunities and challenges. 

    For example, how will these cities deal with infrastructure – sanitation, transportation, etc?

    Visual Capitalist’s graphic below ranks the world’s 50 largest cities from 1975 to 2050.

    an infographic showing the expected 50 most populous cities in 2050

    via visualcapitalist

    Today, in most high-income countries, about 80% of the population lives in urban areas, in contrast to the predominantly rural populations of lower-income countries. 

    As a result, we see many of these megacities forming in developing countries.

    At the same time, many high‑income countries face aging or even shrinking populations. That divergence — young, rapidly growing cities in emerging markets versus older, slower‑growing cities in developed markets — will shape everything from capital flows to immigration policy.

    Tokyo and New York helped define the last century of urbanization. Dhaka, Lagos, and Jakarta may define the next.

    Looking ahead to 2050, Dhaka is expected to become the world’s most populous city, with more than 52 million inhabitants, just ahead of Jakarta. Meanwhile, Africa emerges as the world’s fastest-growing urban region, accounting for 13 of the 50 largest cities.

    As a side note, we’re also seeing countries like China making substantial investments and alliances in these developing areas. This is likely done to profit from the expected growth and also to shift the future balance of power in their favor. Sometimes it makes sense to focus on the marathon rather than just the sprint.

    Demographics Meets AI

    As the world becomes more digitally decentralized and globally connected, our physical lives are concentrating into a relatively small number of megacities. Understanding where those cities are, and how they grow, is one of the most reliable ways to anticipate where opportunity — and risk — will cluster next.

    Over the next few decades, two forces will shape the playing field more than any others: where people live (demographics) and how they work (AI). As cities like Dhaka and Jakarta become powerhouses, it will force us to rethink which places sit at the center of the global map.

    Over the next few decades, a handful of megacities will punch far above their weight in shaping markets, culture, and geopolitics. Not every megacity will thrive; governance quality, adaptability, and technology adoption will separate winners from cautionary tales.

    If most of the world’s growth concentrates in perhaps 50 megacities, what does a well‑positioned portfolio, business, or career look like — and how far is it from where you are today?

  • How To Detect Baloney with Carl Sagan: Trust, Tests, and Tiny Bets

    Information can glitter like gold — and still turn out to be worthless fool’s gold.

    Too often, organizations chase compelling narratives, market buzz, or charismatic claims instead of rigorous evidence. Decisions that matter need more than persuasion … they need proof.

    Carl Sagan had a name for the tools that keep you from falling for fool’s gold. He called it the “Baloney Detection Kit.” Sagan originally outlined them in The Demon-Haunted World (and they were recently summarized in Big Think ).

    A photo of Carl Sagan on a black background

    Collectively, they are a set of critical thinking tools to help separate fact from fiction. These ideas aren’t just for science; they form a solid foundation for any high‑stakes business decision.

    This post shows how to turn Sagan’s Baloney Detection Kit into concrete workflows, metrics, and tiny bets that make your organization more trustworthy and anti-fragile.

    Here are the basics.

    The Baloney Detection Kit

    At its core, the baloney detection kit pushes you to:

    1. Demand independent confirmation. Check claims with sources that weren’t involved in making them, while encouraging debate by all relevant experts.
    2. Avoid reliance solely on authority or persuasion. Experts can be wrong; evidence matters more than credentials alone.
    3. Create multiple hypotheses and test them. Don’t fixate on the first explanation; try to disprove competing ideas.
    4. Be your own fiercest critic. The hypothesis you like most is often the one you must test hardest.
    5. Quantify where possible and ensure every link in a reasoning chain holds up.
    6. Favor simplicity (Occam’s Razor) and insist that ideas be falsifiable — that there is some way to test whether they are wrong. The simplest answer is often the truth.

    Sagan’s emphasis is clear: skepticism is not cynicism — it’s a disciplined, systematic evaluation of evidence. Countless cognitive biases make stories appealing; rigorous scrutiny separates what’s reliable from what merely sounds good.

    That’s powerful when you’re evaluating a news story or a scientific claim. It’s even more powerful when you wire it into how your organization decides what to do next.

    From Personal Skepticism to Organizational Practice

    These ideas are powerful personal tools, but they’re also powerful organizational frameworks.

    1. Tag every substantive claim before it leaves the building.
    Each claim gets a status like:

    • VERIFIED — independently checked
    • PRELIMINARY — plausible but unconfirmed
    • UNVERIFIED — high uncertainty
      Require visible flags and named reviewers before high-impact claims go public.

    2. Ask the “Stop Question.”
    For every major decision, answer:

    “What single observation would make us reverse course?”

    If you can’t articulate that, treat the initiative as exploratory.

    3. Document provenance for numbers.
    Every quantitative claim must list source, method, scope, and uncertainty in one place. Without that, weight it less in decisions.

    4. Build a structured decision workflow.

    • Author fills verification details.
    • Reviewer assesses evidence quality.
    • Senior Approver signs off on high-stakes items.
    • Rotating External Reviewer audits samples regularly.

    Track metrics quarterly, such as: % verified vs. unverified claims, time to verification, and errors caught in adversarial review.
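    The claim-tagging workflow above can be sketched as a small data model. The statuses follow the list in this post; the class and field names themselves are my own illustrative choices, not a prescribed schema:

    ```python
    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        VERIFIED = "independently checked"
        PRELIMINARY = "plausible but unconfirmed"
        UNVERIFIED = "high uncertainty"

    @dataclass
    class Claim:
        text: str
        source: str                      # provenance: where the number came from
        status: Status = Status.UNVERIFIED
        reviewers: list = field(default_factory=list)

        def publishable(self, high_impact: bool) -> bool:
            # High-impact claims need verification AND a named reviewer.
            if high_impact:
                return self.status is Status.VERIFIED and bool(self.reviewers)
            return self.status is not Status.UNVERIFIED

    claim = Claim("Churn fell 12% after the redesign", source="Q3 analytics export")
    print(claim.publishable(high_impact=True))   # False: unverified, no reviewer
    claim.status = Status.VERIFIED
    claim.reviewers.append("analytics lead")
    print(claim.publishable(high_impact=True))   # True: verified and reviewed
    ```

    The payoff is that “% verified vs. unverified” and “time to verification” stop being aspirations and become queries you can run against your own records.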

    Why You Need A Risk-First Lens

    Most businesses get so excited about what could go right that they ignore what is most likely to go wrong.

    “What could go wrong?” is often a sarcastic throwaway, when it should be the most serious question you ask before any launch.

    We live in a speed-first world, but if speed is rewarded over accuracy, skepticism will be ignored.

    Culture and clear rules trump short‑term results, and prevent the attrition most ‘overnight successes’ experience.

    Can You Imagine …

    Imagine an organization where …

    Every bold claim carries its verified provenance …

    Where errors are corrected, not shamed, and publicly learned from …

    Where small but frequent probes guide larger tasks and keep them on the rails …

    Imagine the difference in the anti-fragility of that organization, or the longevity, or even just the trust and respect between employees.

    Ask yourself: What percentage of your important decisions are uncertain or unverified?

    The future rewards organizations that can quickly and reliably separate signal from noise.

    If you make testing basic, provenance visible, and tiny, reversible bets your default, you turn skepticism into a competitive edge — and persuasive stories into durable advantages.

  • Which Jobs Are The Most at Risk of AI Disruption?

    Everywhere you look, someone is predicting which jobs AI will eliminate or automate away next. For many people, the real question is more personal: Is my job safe — or will my company survive?

    To answer that, it helps to zoom out.

    Back in 2018, I asked a simple question: Which industries were most at risk of disruption? This was pre‑AI boom, so the focus was on digitization and automation (rather than large language models or copilots). That article identified the key signals that an industry was ripe for disruption. That simple framework still applies today.

    Here’s a brief summary of the findings.

    1. Digitization Level – Industries like agriculture, construction, hospitality, healthcare, and government were among the least digitized, yet they still accounted for 34% of GDP and 42% of employees.
    2. Regulation Intensity – In heavily regulated industries, companies that find ways to work around legacy rules can become effective competitors quickly (e.g., Lyft or Tesla).
    3. Number of Competitors – Crowded markets with excess capacity or wasted resources (like taxis waiting for fares or empty airplane seats) are vulnerable to new business models. 
    4. Automatability – Even in 2018, many industries and tasks were ready to be automated but hadn’t been due to the cost or labor of switching to new technologies.

    Ultimately, disruption was about relieving a customer’s headache while lowering costs for the producer, the customer, or both.

    Today, AI’s inexorable march is unmistakable as it takes over more tasks and more of the content we create.

    In 2024, the WEF evaluated which jobs were most prone to small or significant alteration by AI. IT and finance have the highest share of tasks expected to be ‘largely’ impacted by AI — which is not particularly surprising. They are followed by customer sales, operations, HR, marketing, legal, and (lastly) supply chain.

    Now, new Microsoft data takes a more granular look at which specific jobs are most exposed to generative AI.

    via visualcapitalist

    Microsoft assessed AI exposure using three indicators derived from Copilot usage:

    • Coverage: How often tasks associated with a job appear in Copilot conversations
    • Completion: Frequency of Copilot successfully completing those tasks
    • Overall AI Applicability Score: A combined metric indicating how well AI can support or execute tasks within a specific role.
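    One plausible way to combine the first two indicators into a single score is a weighted average. To be clear, the equal weighting below is my own assumption for illustration; the summary does not disclose Microsoft’s actual formula:

    ```python
    # Toy illustration of combining two exposure indicators into one score.
    # The equal weighting is an assumption; Microsoft's real formula is not
    # given in the source.
    def applicability_score(coverage: float, completion: float) -> float:
        """Combine task coverage and completion rates (each in [0, 1])."""
        return round(0.5 * coverage + 0.5 * completion, 3)

    # A role whose tasks show up often in Copilot chats (coverage 0.8) and
    # are usually completed successfully (completion 0.6):
    print(applicability_score(coverage=0.8, completion=0.6))  # 0.7
    ```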

    Language-heavy & research-based roles are at the highest risk of disruption. Think roles like interpreters, historians, writers, and customer service.

    But exposure does not automatically mean replacement. Augmenting roles with AI will become increasingly common.

    Even though creative and communication roles sit near the top, more technical roles will still feel a meaningful impact as well.

    Fear not … there is still a place for humans. In many cases, AI functions as a complement rather than a substitute, because these jobs still require judgment, creativity, and human interaction.

    Are you using AI in your daily process yet?

    At Capitalogix, we focus on amplifying intelligence. To us, that means the ability to make better decisions, take smarter actions, and continuously improve performance. In many ways, it comes down to better real-time decision-making. Practically, that means using technology to calculate, find, or know easy things faster … rather than predicting harder things better.

    You don’t have to predict every change. You do have to build the habit of experimenting with AI in the work you already do. The gap between winners and losers will be about learning speed, not job title.

    In the next few years, the biggest divide will not be between ‘AI jobs’ and ‘non‑AI jobs.’ It will be between people who learn to wield AI and people who pretend it is not their problem.

    A few years from now, when I write a follow‑up to this article, I suspect we will look back and clearly see the gap between winners and losers. It might come down to something as simple as this question:

    What are you doing to make sure that you ride the wave, rather than getting crushed by it?

  • “Real” Doesn’t Mean What It Used To Anymore

    Your Brand Style Guide Isn’t Enough Anymore

    Not long ago, high-quality art, music, and video had a built-in bottleneck: skill. If you wanted a specific emotional effect—or a certain level of craftsmanship—you either had to earn the craft yourself or hire someone who had.

    That bottleneck is dissolving.

    I recently watched an AI-generated music video on YouTube; if I hadn’t paid attention, I might not have known it was entirely created by technology rather than humans. Here’s the link.

    Artist & Song: Lolita Cercel – Pe peronu’ de la gară – AI Artist & Music Video

    Don’t expect to be wowed. I didn’t love the music or the video. But it’s still a notable achievement. For example, recognize how much it feels like a professionally produced music video. While there are some clear limitations in the production, it doesn’t feel like a party trick (even though, technologically, it still is a party trick). It feels like art.

    When I first watched it, I remember thinking it reminded me of a slightly older style of music. I couldn’t tell whether the words were Portuguese or Romanian. But I was focused on the little details, rather than its slick production or cool technology.

    The singer, Lolita Cercel, is entirely a construct of Tom, a Bacau-based video designer. She doesn’t exist except in AI.

    Neither did the music. Tom wanted to convey emotion through his song lyrics, and he decided AI was a powerful tool to turn his thoughts into things.

    “I tried to make it as realistic as possible. The inspiration came from an 80-year-old collection of poems by a Romanian author who used colloquial, slum language. I liked the style and adapted it for ‘Lolita’ to make it authentic … It’s a mix of artificial intelligence and classical music. I work on several videos in parallel, shooting, editing, adjusting. Technology has allowed me to bring my ideas to life.”

    Tom

    That moment matters because the world doesn’t need perfection for the game to change.

    When the market believes “you can’t tell” whether something was produced by humans or technology, the operating assumptions of media, marketing, and trust start rewriting themselves.

    Now, for the sake of this article, I’m not focused on the nature of art and artists. I’m focused on media and the nature of attraction and consumption, particularly in business contexts.

    The Skill Shift: From “Making” to “Specifying + Judging”

    Until recently, to create something truly captivating, you had to pay the best and the brightest and hope for the best.

    It’s only really in the last 20 years that the average business could effectively test an ad before releasing it. Ad agencies hired ‘Mad Men’ savants, and a team of writers, designers, composers, artists, editors, and more, to create a piece that would hopefully stand the test of time … or at least drive some sales.

    The new advantage is more subtle — and ultimately more powerful: the ability to specify what you want and judge whether you got it. Often, with a minimal team.

    Everyone can watch and react to content. Far fewer can define (clearly and repeatably) what they want to produce in the mind of another human (e.g., trust, reassurance, curiosity, confidence, or urgency). And even fewer can define what “good enough” means (or how they will measure it) before they generate the content.

    In a world where production becomes cheap, taste becomes expensive.

    From Brand Book to Brand Operating System

    Style guides and brand books still matter. Voice, formatting, color choices, visual identity—none of that disappears.

    AI changes the game by altering the volume and nature of what gets produced. As people are exposed to more and more content of similar quality and production values, what really changes is the level of what constitutes “average”.

    With endless opportunities and distractions, the differentiator becomes consistency: your ability to deliver your promise again and again across channels and formats — without drifting into generic sameness.

    That’s where a Brand Operating System comes in.

    While a brand book is static, a Brand Operating System is a living specification that reliably turns identity into output and serves as a robust framework for AI initiatives.

    A BrandOS includes:

    • Audience psychology: what your audience hopes for, fears, rejects, and values
    • Proof standards: what they require to trust you (and what triggers skepticism)
    • Ambiguity tolerance: how much uncertainty they’ll accept before confidence drops
    • Response targets: the emotional outcomes you want to reliably provoke
    • Guardrails: what you never do (tone, claims, promises, compliance boundaries)
    • A recipe: the variables that make the output recognizably you

    Put differently: the BrandOS is how you scale production without losing the signal or the soul of what makes you … you.
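    To make the idea tangible, the BrandOS elements above could live as a machine-readable spec that tooling can check before content ships. Every field name and value here is a hypothetical example, not a prescribed schema:

    ```python
    # Illustrative BrandOS spec as plain data; all names and values are
    # hypothetical examples, not a standard format.
    brand_os = {
        "audience_psychology": {"hopes": ["clarity"], "fears": ["hype"]},
        "proof_standards": ["cite a source for every statistic"],
        "ambiguity_tolerance": "low",
        "response_targets": ["trust", "curiosity"],
        "guardrails": ["no performance guarantees", "no unsourced claims"],
        "recipe": {"tone": "candid", "structure": "question-then-evidence"},
    }

    def is_complete(spec: dict) -> bool:
        """Verify the spec covers every element before content production starts."""
        required = {"audience_psychology", "proof_standards", "ambiguity_tolerance",
                    "response_targets", "guardrails", "recipe"}
        return required <= spec.keys()

    print(is_complete(brand_os))  # True: all six elements are specified
    print(is_complete({}))        # False: an empty spec fails the check
    ```

    The design choice is the point: once identity is data rather than a PDF, consistency across channels becomes something you can test rather than hope for.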

    “Experience” Is the Product & Feedback Loops Are the Engine

    Here’s the thing: in a lot of these markets, results aren’t enough. Everyone can point to returns, claims, outputs—whatever. That stuff commoditizes fast.

    What actually sticks is how the system behaves over time. Does it feel consistent? Does it make sense? Do you understand what it’s doing when things go right and when they don’t? That’s where trust comes from.

    Under the surface, as AI becomes more advanced, it gets harder for people to understand what it’s doing. That’s why the experience itself becomes the differentiator …

    Good systems adapt over time. They don’t focus only on the immediate outcome; they focus on learning, growing, and adapting to the practical realities of the environment and audience. One way to accomplish that is to use feedback loops to provide the system with better context on what’s happening, how it’s performing, and which areas may need attention or improved data.

    I’ve been enjoying an app called Endel lately. It generates music on demand and can link to biometric signals. When I select the “Move” module, it uses data from devices such as an Apple Watch to adjust what it plays. As my pace changes — from walking to jogging — the cadence of the music shifts with me. It feels responsive, as if the system is listening, pacing, or even leading.

    That’s the shift: closed-loop generation, where the output adapts to feedback.

    We already do this in business:

    • In marketing: opens, engagement, retention curves, where people stop watching
    • In trading and investing: risk-adjusted targets, volatility stability, whether outcomes reflect skill or luck

    A Brand Operating System is what happens when you make those loops explicit, measurable, and repeatable.


    “Enough of Me” Has to Be Specified

    If you want AI to magnify you instead of replacing you, you have to define what “you” means.

    For me, “enough of me” looks like:

    1. A signature point of view: a high-level perspective on the landscape and what’s possible
    2. Metaphors: because they compress complexity into something people can carry
    3. Constructive challenge: not to tear things down, but to test what to trust

    Every person and every company has an equivalent set of signature variables—whether they’ve articulated them or not.

    If you don’t specify them, the system will default to what it thinks performs. And performance alone often converges on generic engagement rather than authentic resonance.

    Guardrails: The Power of “Forbidden Moves”

    Here’s a practical truth: At scale, the most important part of your BrandOS isn’t what it produces … It’s what it refuses to produce.

    Forbidden moves are how you protect trust. They ensure you get more of what you want and less of what you don’t—especially when content is manufactured at volume.

    Examples of forbidden moves (adapt these to your domain):

    • No absolute certainty in probabilistic environments
    • No hype language that undermines trust with sophisticated audiences
    • No claims without proof standards (define what counts as proof)
    • No manufactured intimacy that mimics a relationship you didn’t earn
    • No tone drift that breaks your promise (snarky, overly casual, overly salesy—whatever is off-brand)

    Guardrails aren’t arbitrary constraints. They’re how you keep the system aligned with the asset you’re actually building: credibility.
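    One way to operationalize forbidden moves is a simple pre-publish filter over generated drafts. This is a minimal sketch; the pattern list below is an illustrative assumption, and a production system would use far richer checks than regexes:

    ```python
    import re

    # Illustrative forbidden-move patterns (these regexes are assumptions,
    # not a standard list; adapt them to your domain and compliance rules)
    FORBIDDEN = {
        "absolute certainty": r"\b(guaranteed|always works|can't lose|risk[- ]free)\b",
        "hype language": r"\b(revolutionary|game[- ]changing|unbelievable)\b",
        "manufactured intimacy": r"\b(as your close friend|i know you personally)\b",
    }

    def guardrail_violations(text: str) -> list[str]:
        """Return the names of every forbidden move a draft triggers."""
        lowered = text.lower()
        return [name for name, pattern in FORBIDDEN.items()
                if re.search(pattern, lowered)]

    draft = "This revolutionary system is guaranteed to outperform."
    print(guardrail_violations(draft))  # ['absolute certainty', 'hype language']
    ```

    The value isn’t in the regexes themselves; it’s in making refusals explicit and testable instead of leaving them to the taste of whoever reviews the content that day.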

    Entropy Is Inevitable—So Detect It Early

    The risk of outsourcing capability is that the tool changes. Models update. Distribution shifts. Channels fatigue. What worked last quarter can quietly stop working next month.

    We’ve discussed this before, but almost everything decays or drifts over time. It’s important to be able to measure that. Here are two examples:

    • Marketing drift: if open rates drop materially or engagement falls, something is drifting.
    • Trading drift (high level): if risk-adjusted targets degrade, volatility exceeds targets, or outcomes start to look like luck rather than understanding, something is drifting.

    No technique always works.

    But something is always working.

    The winners aren’t the ones who find a trick and freeze it. They’re the ones who build systems that notice change early, recalibrate, and keep moving forward.
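    The drift checks above can be sketched as a simple statistical comparison: measure how far a recent window of a metric has moved from its historical baseline. The numbers and the 2-sigma threshold below are illustrative assumptions, not recommendations:

    ```python
    from statistics import mean, stdev

    def drift_score(history: list[float], recent: list[float]) -> float:
        """How many baseline standard deviations the recent mean has moved."""
        baseline, spread = mean(history), stdev(history)
        return abs(mean(recent) - baseline) / spread if spread else 0.0

    # Illustrative email open-rate history and a recent window (made-up data)
    open_rates = [0.42, 0.44, 0.41, 0.43, 0.45, 0.42, 0.44, 0.43]
    recent     = [0.35, 0.33, 0.34]

    score = drift_score(open_rates, recent)
    if score > 2.0:  # the 2-sigma threshold is an assumption; tune per channel
        print(f"Drift detected: {score:.1f} sigma from baseline")
    ```

    The same scaffold works for trading metrics: swap open rates for rolling risk-adjusted returns or realized volatility, and alert when the recent window moves outside its historical band.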

    The Real Choice

    Your choice isn’t really whether or not to use AI. If you don’t, you’re going to get left behind.

    AI will continue to make “real” cheap; your BrandOS is how you keep your “meaning” valuable.

    Your choice is whether you’ll let AI optimize you into generic engagement, and eventual irrelevancy … or whether you’ll build a BrandOS that protects what makes you you, while adapting fast enough to stay ahead of drift.

  • Carving a New Path: Humanizing The Exceptional

    How automated is too automated? 

    “To speak to a representative, say … representative …. “ 

    “Representative.” 

    “Sorry I didn’t catch that … would you like for me to repeat the options menu?” 

    “NO” 

    “Sorry I didn’t catch that … please state wh…” 

    “REPRESENTATIVE” 

    “Sorry, all of our representatives are busy helping others at the moment … Goodbye.” 

    *CALL ENDS* 

    How many of us have been in this scenario when on the phone with an airline, insurance company, or any other automated call center?

    Where are the people? Why can’t I speak to a human?  

    One of my son’s few memories of my Dad involved listening to him go through a scenario like this with a late-1990s auto-attendant. It was funny. My Dad became increasingly frustrated that he couldn’t get to an actual human being. It devolved into: “Shut up! Stop talking! I’ll give you $50 if you let me talk with a real person.” And it went downhill from there.

    Despite being frustrating, these systems save companies time, money, and resources. And in an ideal world, they streamline callers into organized categories, resulting in a more efficient experience.  They’re clearly working on some level because you’re seeing increased adoption of AI chatbots, robo-callers, and digital support systems. 

    The evolution of this technology is already replacing people in marketing, sales, consulting, coaching, and even therapy. Sometimes with mixed results.

    But does the efficiency or effectiveness it creates justify the lack of human connection?  Why did so many of the legacy call systems get rated so poorly?

    There’s hope, though. I remember air travel before apps let me check in online and skip the counter. I remember banks before ATMs. In both cases, I was anchored in my past experience, more aware of what I was missing than what I was getting.

    Recently, I came across an article highlighting a trendy new restaurant in Venice, Italy. They serve the best dishes from several popular restaurants across the city! They must have a massive kitchen and extensive staff to take on such a task, right? Wrong. This restaurant is fully automated; you order and receive food via … vending machines. 

    My first reaction was this … the convenience sounds fantastic, but wouldn’t that turn a valuable part of the experience into a commodity? It seems like you’d lose so much of the community, human interaction, and pampering that you enjoy when going to a nice restaurant. As I continued to read, however, the article explained that, to “humanize” the restaurant, it is used as a meeting place for food tastings, community gatherings, and question-and-answer sessions. As the world changes, so do the types of experiences people crave.

    Humanity and automation merged beautifully.   

    Semi-Automated Often Beats Fully Automated  

    Systemize the predictable so you can humanize the exceptional

    — Isadore Sharp, Four Seasons

    Earlier, I mentioned automated call centers and how frustrating they can be. I’ve come across several companies that have found a healthy balance in how they automate their systems.

    For example, an apartment complex near me uses an AI agent to screen calls and send them to the correct department.

    Often, the automation tags and organizes calls before routing them to their intended destination, or answers frequently asked questions without connecting the caller to a human. Either way, it reduces the need to transfer calls between departments and gets callers the information they need without tying up phone lines or wasting the caller’s and the receptionist’s time on basic questions.
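    A toy version of that tag-and-route logic looks something like the sketch below. The departments, keywords, and FAQ text are all made-up assumptions; real systems use trained intent classifiers rather than keyword matching:

    ```python
    # Illustrative keyword-based call router (all routes/keywords are assumptions)
    ROUTES = {
        "leasing": ["tour", "apply", "availability", "lease"],
        "maintenance": ["repair", "broken", "leak", "heat"],
        "billing": ["rent", "payment", "charge", "fee"],
    }

    FAQ = {"office hours": "The office is open 9am-5pm, Monday through Friday."}

    def route_call(transcript: str) -> str:
        """Answer common questions directly, else tag and route the call."""
        text = transcript.lower()
        for topic, answer in FAQ.items():
            if topic in text:
                return answer  # resolved without tying up a human
        for dept, keywords in ROUTES.items():
            if any(kw in text for kw in keywords):
                return dept
        return "front desk"  # default: a human picks up

    print(route_call("My sink has a leak"))  # maintenance
    ```

    Note the design choice: the fallback is always a human. Automation handles the predictable; the exceptional still lands with a person.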

    There’s a lot of automation that can happen that isn’t a replacement of humans, but of mind-numbing behavior.

    — Stewart Butterfield

    This quote highlights the point of automation: expedite the menial tasks, which frees people to provide a far more attentive experience.

    Humans tend to seek ways to increase efficiency in every aspect of their world. But we are social creatures, craving meaningful connection and community. Therefore, the human element will not only persist but remain vital.

  • Language As A Limitation: Is Artificial Intelligence “Conscious”?

    Man acts as though he were the shaper and master of language, while in fact language remains the master of man. – Martin Heidegger

    Words are powerful. They can be used to define, obscure, or even to create reality. They can be taken alone, as precise definitions, or they can be part of a broader spectrum or scale. As such, they can create or destroy … uplift or demoralize. Their power is seemingly limitless.

    Language is like a hammer … you can use it to create or destroy something. Although it evolved to aid social interactions and facilitate our understanding of the world, it can also constrain how we perceive it and limit our grasp of technological advances and possibilities.

    Before I go into where language fails us, it’s essential to understand why language is so important.

    Language Facilitates Our Growth

    Because without our language, we have lost ourselves. Who are we without our words? – Melina Marchetta

    Language is one of the master keys to advanced thought. As infants, we learn by observing our environment, reading facial expressions and body language, and reflecting on our perceptions. As we improve our understanding and use of language, our brains and cognitive capabilities develop more rapidly.

    Language also lets us cooperate and share expertise, and it’s this ability that has allowed us to build complex societies and advance technologically. However, as exponential technologies accelerate our progress, language itself may seem increasingly inadequate for the tasks at hand.

    What happens when we don’t have a word for something?

    The limits of my language mean the limits of my world – Ludwig Wittgenstein

    English is famous for co-opting words from other languages, and many languages have nuanced words that can’t be expressed well in others.

    • Schadenfreude – German for pleasure derived by someone from another person’s misfortune.
    • Layogenic – Tagalog for someone who looks good from afar but less attractive up close
    • Koi No Yokan – Japanese for the sense upon first meeting a person that the two of you are going to fall in love 

    Expressing new concepts opens up our minds to new areas of inquiry. In the same vein, the lack of an appropriate concept or word often limits our understanding.

    Wisdom comes from finer distinctions … but sometimes we don’t have words for those distinctions. Here are two examples.

    • An artist who has studied extensively for many years can somehow “know” that a work is a fake without being able to explain why.
    • A professional athlete can recognize the potential in an amateur better than a bystander can.

    How is that possible?

    They’re subconsciously recognizing and evaluating factors that others can’t assess consciously.

    Language as a Limitation

    When it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images. -Niels Bohr

    In Buddhism, there’s the idea of an Ultimate Reality and a Conventional Reality. Ultimate Reality refers to the objective nature of something, while the Conventional Reality is tied inextricably to our thought processes, and is heavily influenced by our choice of language.

    Said differently, language is one of the most important factors in determining what you focus on, what you make it mean, and even what you choose to do. Ultimately, language conveys cultural and personal values and biases, and influences how we perceive “reality”.

    This is part of the challenge we have with AI systems. They have incredible power to shape our exposure to language and thought patterns. Consequently, it gives the platform significant power to shape its audience’s thoughts and perceptions. We talked about this in last week’s article. We’ll dive deeper in the future.

    To paraphrase philosopher David Hume, our perception of the world is drawn from ideas and impressions. Ideas can only ever be derived from our impressions through a process that often leads us to contradictions and logical fallacies.

    Instead of exploring the true nature of things or thinking abstractly, language sifts and categorizes experiences according to our prior heuristics. When you’re concerned about survival, those heuristics save you a lot of energy; when you’re trying to expand the breadth and depth of humanity’s capabilities, they’re potentially a hindrance. 

    The world around us is changing faster than ever, and complexity is increasing exponentially. It will only get harder to describe the variety and magnificence of existence with our lexicon … so why try?

    We personify the world around us, and it limits our creativity. 

    Many of humanity’s greatest inventions came from skepticism, abstractions, and disassociations from norms.

    A mind enclosed in language is in prison.  – Simone Weil

    What could we create if we let go of language and our intertwined belief systems?

    There has recently been a lot of press in which AI experts are saying that the next big jump in AI won’t come from large language models but from world models of intelligence.

    Likewise, AI consciousness and superintelligence have become more common topics of discussion and speculation.

    When will AI have human-like consciousness?

    I will try to answer that, but first, I want to deconstruct the idea a bit. The question itself makes assumptions based on how humans tend to personify things and rely on past patterns to evaluate what’s in front of us.

    Said differently, I’m not sure we want AI to think the way humans do. I think we want to make better decisions, take smarter actions, and improve performance. And that means thinking better than humans do.

    Back to the original question, I think the term “consciousness” is likely a misnomer, too.

    What is consciousness, and what makes us think that for technology to surpass us, it needs it? The idea that AI will eventually have a “consciousness” may be a symptom of our own linguistic biases. 

    Artificial consciousness may not be anything like human consciousness, in the same way that alien lifeforms may not be carbon-based. An advanced AI could solve problems that even the brightest humans cannot. However, being made of silicon or graphene, it may not have a conscious experience. Even if it did, it likely wouldn’t feel emotions (like shame or greed) … at least the way we describe them.

    Meanwhile, it seems like we pass some new hallmark of consciousness exhibited by increasingly sophisticated AIs every day. They even have their own AI-only social media network now.

    Humans Are The Real Black Box

    But if thought corrupts language, language can also corrupt thought – George Orwell

    Humans are nuanced and surprisingly non-rational creatures. We’re prone to cognitive biases, fear, greed, and discretionary mistakes. We create heuristics from prior experiences (even when it does not serve us), and we can’t process information as cleanly or efficiently as a computer. We unfailingly search for meaning, even where there often isn’t any. Though flawed, we’re perfect in our imperfections. 

    Even with expensive brain-scanning machines, scientists can’t fully make sense of what they see. And when humans give explanations for their own behavior, they’re often inaccurate – more like retrospective rationalizations or confabulations than summaries of the complex computer that is the human brain.

    When I first wrote on this subject, I described Artificial Intelligence as programmed, precise, and predictable. At the time, AI was heavily influenced by the data fed into it and the programming of the human who created it. In a way, that meant AI was transparent, even if the logic was opaque.

    Today, AI can exhibit emergent capabilities, such as complex reasoning, in-context learning, and abstraction, that were not explicitly programmed by humans. These behaviors can be impressive and highly useful. They are beginning to extend far beyond what the original developers explicitly designed or anticipated (which is why we’re discussing user-sovereign systems versus institutional systems).

    In short, we don’t just need to understand how AI was built; we need frameworks for understanding how it acts in diverse contexts. If an AI system behaves consistently with its design goals, performs safely, and produces reliable results, then our trust in it can be justified even if we don’t have perfect insight into every aspect of its internal reasoning — but that trust should be based on rigorous evaluation, interpretability efforts, and awareness of limitations.

    Do you agree? Reach out and tell me what you think.