Information can glitter like gold — and still turn out to be worthless fool’s gold.
Too often, organizations chase compelling narratives, market buzz, or charismatic claims instead of rigorous evidence. Decisions that matter need more than persuasion … they need proof.
Carl Sagan’s Baloney Detection Kit offers a set of critical thinking tools to help separate fact from fiction. These ideas aren’t just for science; they form a solid foundation for any high‑stakes business decision.
This post shows how to turn Sagan’s Baloney Detection Kit into concrete workflows, metrics, and tiny bets that make your organization more trustworthy and anti-fragile.
Here are the basics.
The Baloney Detection Kit
At its core, the baloney detection kit pushes you to:
Demand independent confirmation. Check claims with sources that weren’t involved in making them, while encouraging debate by all relevant experts.
Avoid reliance solely on authority or persuasion. Experts can be wrong; evidence matters more than credentials alone.
Create multiple hypotheses and test them. Don’t fixate on the first explanation; try to disprove competing ideas.
Be your own fiercest critic. The hypothesis you like most is often the one you must test hardest.
Quantify where possible and ensure every link in a reasoning chain holds up.
Favor simplicity (Occam’s Razor) and insist that ideas be falsifiable, meaning there is some way to test whether they are wrong. When two explanations fit the evidence equally well, the simpler one is usually the better bet.
Sagan’s emphasis is clear: skepticism is not cynicism. It’s a disciplined, systematic evaluation of evidence. Countless cognitive biases make stories appealing, but rigorous scrutiny separates what’s reliable from what merely sounds good.
That’s powerful when you’re evaluating a news story or a scientific claim. It’s even more powerful when you wire it into how your organization decides what to do next.
From Personal Skepticism to Organizational Practice
These ideas are powerful personal tools, but they’re also powerful organizational frameworks.
1. Tag every substantive claim before it leaves the building. Each claim gets a status like:
VERIFIED — independently checked
PRELIMINARY — plausible but unconfirmed
UNVERIFIED — high uncertainty
Require visible flags and named reviewers before high-impact claims go public.
2. Ask the “Stop Question.” For every major decision, answer:
“What single observation would make us reverse course?”
If you can’t articulate that, treat the initiative as exploratory.
3. Document provenance for numbers. Every quantitative claim must list source, method, scope, and uncertainty in one place. Without that, give it less weight in decisions.
Track metrics quarterly, such as: % verified vs. unverified claims, time to verification, and errors caught in adversarial review. The sketch below shows what that tracking could look like in practice.
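To make this tangible, here’s a minimal sketch of how tagged claims and those quarterly metrics could be tracked in code. It’s an illustration under assumed field and function names, not a prescribed tool.

```python
# A minimal sketch of claim tracking, assuming claims are logged as
# structured records. All names here are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    VERIFIED = "verified"        # independently checked
    PRELIMINARY = "preliminary"  # plausible but unconfirmed
    UNVERIFIED = "unverified"    # high uncertainty

@dataclass
class Claim:
    text: str
    status: Status
    source: str = ""              # provenance: where the number came from
    reviewer: str = ""            # named reviewer for high-impact claims
    days_to_verify: int | None = None

def quarterly_metrics(claims: list[Claim]) -> dict:
    """Compute the share of verified claims and average time to verification."""
    verified = [c for c in claims if c.status is Status.VERIFIED]
    share = len(verified) / len(claims) if claims else 0.0
    times = [c.days_to_verify for c in verified if c.days_to_verify is not None]
    return {
        "pct_verified": round(100 * share, 1),
        "avg_days_to_verify": round(sum(times) / len(times), 1) if times else None,
    }

claims = [
    Claim("Churn fell 12% QoQ", Status.VERIFIED, source="billing DB",
          reviewer="A.K.", days_to_verify=5),
    Claim("Competitor is exiting the market", Status.UNVERIFIED),
]
print(quarterly_metrics(claims))  # {'pct_verified': 50.0, 'avg_days_to_verify': 5.0}
```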
Why You Need A Risk-First Lens
Most businesses get so excited about what could go right that they ignore what is most likely to go wrong.
“What Could Go Wrong?” is often treated as a sarcastic throwaway, when it should be the most serious question you ask before any launch.
We live in a speed-first world, but if speed is rewarded over accuracy, skepticism will be ignored.
Culture and clear rules trump short‑term results and prevent the attrition most ‘overnight successes’ experience.
Can You Imagine …
Imagine an organization where …
Every bold claim carries its verified provenance …
Errors are corrected, not shamed, and publicly learned from …
Small but frequent probes guide larger tasks and keep them on the rails …
Imagine the difference in the anti-fragility of that organization, or the longevity, or even just the trust and respect between employees.
Ask yourself: What percentage of your important decisions rest on uncertain or unverified claims?
The future rewards organizations that can quickly and reliably separate signal from noise.
If you make testing routine, provenance visible, and tiny, reversible bets your default, you turn skepticism into a competitive edge, and persuasive stories into durable advantages.
Everywhere you look, someone is predicting which jobs AI will eliminate or automate away next. For many people, the real question is more personal: Is my job safe — or will my company survive?
To answer that, it helps to zoom out.
Back in 2018, I asked a simple question: Which industries were most at risk of disruption? This was pre‑AI boom, so the focus was on digitization and automation (rather than large language models or copilots). That article identified the key signals that an industry was ripe for disruption. That simple framework still applies today.
Here’s a brief summary of the findings.
Digitization Level – Industries like agriculture, construction, hospitality, healthcare, and government were among the least digitized, yet they still accounted for 34% of GDP and 42% of employment.
Regulation Intensity – In heavily regulated industries, companies that find ways to work around legacy rules can become effective competitors quickly (e.g., Lyft or Tesla).
Number of Competitors – Crowded markets with excess capacity or wasted resources (like taxis waiting for fares or empty airplane seats) are vulnerable to new business models.
Automatability – Even in 2018, many industries and tasks were ready to be automated but hadn’t been due to the cost or labor of switching to new technologies.
Ultimately, disruption was about relieving a customer’s headache while lowering costs for the producer, the customer, or both.
Today, AI’s inexorable march is unmistakable as it takes over more tasks and more of the content we create.
In 2024, the WEF evaluated which jobs were most prone to small or significant alteration by AI. IT and finance have the highest share of tasks expected to be ‘largely’ impacted by AI, which is not particularly surprising. They’re followed by customer sales, operations, HR, marketing, legal, and (lastly) supply chain.
Microsoft assessed AI exposure using three indicators derived from Copilot usage:
Coverage: How often tasks associated with a job appear in Copilot conversations
Completion: How often Copilot successfully completes those tasks
Overall AI Applicability Score: A combined metric indicating how well AI can support or execute tasks within a specific role.
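Microsoft hasn’t published an exact formula, so here’s a toy sketch of how indicators like these might combine into one score. Everything in it — names, numbers, and the combination rule — is an illustrative assumption, not Microsoft’s method.

```python
# Hypothetical sketch of an "AI applicability"-style score.
# This is NOT Microsoft's actual formula; it just illustrates how
# coverage and completion might combine into a single metric.

from dataclasses import dataclass

@dataclass
class RoleExposure:
    role: str
    coverage: float    # share of the role's tasks that appear in AI conversations (0-1)
    completion: float  # share of those tasks the AI completes successfully (0-1)

    def applicability(self) -> float:
        # One plausible combination: exposure matters only where the AI
        # both sees the task and can actually finish it.
        return self.coverage * self.completion

roles = [
    RoleExposure("interpreter", coverage=0.85, completion=0.70),
    RoleExposure("electrician", coverage=0.15, completion=0.30),
]
for r in sorted(roles, key=lambda r: r.applicability(), reverse=True):
    print(f"{r.role}: {r.applicability():.2f}")  # highest applicability first
```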
Language-heavy & research-based roles are at the highest risk of disruption. Think roles like interpreters, historians, writers, and customer service.
But exposure does not automatically mean replacement. Augmenting roles with AI will become increasingly common.
Even though creative and communication roles sit near the top, more technical roles will still feel a meaningful impact as well.
Fear not … there is still a place for humans. In many cases, AI functions as a complement rather than a substitute, because these jobs still require judgment, creativity, and human interaction.
Are you using AI in your daily process yet?
At Capitalogix, we focus on amplifying intelligence. To us, that means the ability to make better decisions, take smarter actions, and continuously improve performance. In many ways, it comes down to better real-time decision-making. Practically, that means using technology to calculate, find, or know easy things faster … rather than predicting harder things better.
You don’t have to predict every change. You do have to build the habit of experimenting with AI in the work you already do. The gap between winners and losers will be about learning speed, not job title.
In the next few years, the biggest divide will not be between ‘AI jobs’ and ‘non‑AI jobs.’ It will be between people who learn to wield AI and people who pretend it is not their problem.
A few years from now, when I write a follow‑up to this article, I suspect we will look back and clearly see the gap between winners and losers. It might come down to something as simple as this question:
What are you doing to make sure that you ride the wave, rather than getting crushed by it?
Not long ago, high-quality art, music, and video had a built-in bottleneck: skill. If you wanted a specific emotional effect, or a certain level of craftsmanship, you either had to learn the craft yourself or hire someone who had.
That bottleneck is dissolving.
I recently watched an AI-generated music video on YouTube; if I hadn’t paid attention, I might not have known it was entirely created by technology rather than humans. Here’s the link.
Artist & Song: Lolita Cercel – Pe peronu’ de la gară – AI Artist & Music Video
Don’t expect to be wowed. I didn’t love the music or the video. But it’s still a notable achievement. For example, notice how much it feels like a professionally produced music video. While there are some clear limitations in the production, it doesn’t feel like a party trick (even though, technologically, it still is one). It feels like art.
When I first watched it, I remember thinking it reminded me of a slightly older style of music. I couldn’t tell whether the words were Portuguese or Romanian. But I was focused on the little details, rather than its slick production or cool technology.
The singer, Lolita Cercel, is entirely a construct of Tom, a Bacau-based video designer. She doesn’t exist except in AI.
Neither does the music. Tom wanted to convey emotion through his song lyrics, and he decided AI was a powerful tool to turn his thoughts into things.
“I tried to make it as realistic as possible. The inspiration came from an 80-year-old collection of poems by a Romanian author who used colloquial, slum language. I liked the style and adapted it for ‘Lolita’ to make it authentic … It’s a mix of artificial intelligence and classical music. I work on several videos in parallel, shooting, editing, adjusting. Technology has allowed me to bring my ideas to life.”
That moment matters because the world doesn’t need perfection for the game to change.
When the market believes “you can’t tell” whether something was produced by humans or technology, the operating assumptions of media, marketing, and trust start rewriting themselves.
Now, for the sake of this article, I’m not focused on the nature of art and artists. I’m focused on media and the nature of attraction and consumption, particularly in business contexts.
The Skill Shift: From “Making” to “Specifying + Judging”
Until recently, to create something truly captivating, you had to pay the best and the brightest and hope it paid off.
It’s only really in the last 20 years that the average business could effectively test an ad before releasing it. Ad agencies hired ‘Mad Men’ savants and teams of writers, designers, composers, artists, and editors to create a piece that would hopefully stand the test of time … or at least drive some sales.
The new advantage is more subtle — and ultimately more powerful: the ability to specify what you want and judge whether you got it. Often, with a minimal team.
Everyone can watch and react to content. Far fewer can define (clearly and repeatably) what they want to produce in the mind of another human (e.g., trust, reassurance, curiosity, confidence, or urgency). And even fewer can define what “good enough” means (or how they will measure it) before they generate the content.
In a world where production becomes cheap, taste becomes expensive.
From Brand Book to Brand Operating System
Style guides and brand books still matter. Voice, formatting, color choices, visual identity—none of that disappears.
AI changes the game by altering the volume and nature of what gets produced. As people are exposed to more and more content of similar quality and production values, what really changes is the level of what constitutes “average”.
With endless opportunities and distractions, the differentiator becomes consistency: your ability to deliver your promise again and again across channels and formats — without drifting into generic sameness.
That’s where a Brand Operating System comes in.
While a brand book is static, a Brand Operating System is a living specification that reliably turns identity into output and serves as a robust framework for AI initiatives.
A BrandOS includes:
Audience psychology: what your audience hopes for, fears, rejects, and values
Proof standards: what they require to trust you (and what triggers skepticism)
Ambiguity tolerance: how much uncertainty they’ll accept before confidence drops
Response targets: the emotional outcomes you want to reliably provoke
Guardrails: what you never do (tone, claims, promises, compliance boundaries)
A recipe: the variables that make the output recognizably you
Put differently: the BrandOS is how you scale production without losing the signal or the soul of what makes you … you.
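To show what a “living specification” can mean in practice, here’s a minimal sketch of a BrandOS as a machine-readable spec. Every field name and value below is an illustrative assumption, not an established schema.

```python
# A minimal BrandOS spec as plain data. The point is that each element
# becomes explicit, loadable by a generation pipeline, and reviewable
# like any other spec. All fields and values are illustrative.

import json

brand_os = {
    "audience_psychology": {
        "hopes": ["clarity", "an edge"],
        "fears": ["hype", "being sold to"],
        "values": ["evidence", "candor"],
    },
    "proof_standards": [
        "every number cites a source",
        "show the method, not just the result",
    ],
    "ambiguity_tolerance": "hedge openly; never feign certainty",
    "response_targets": ["trust", "curiosity"],
    "guardrails": [
        "no absolute certainty in probabilistic claims",
        "no manufactured intimacy",
    ],
    "recipe": {
        "voice": "conversational, metaphor-driven",
        "signature_moves": ["constructive challenge"],
    },
}

print(json.dumps(brand_os, indent=2))  # shared context for both humans and models
```

Because it’s structured, you can version it, diff changes to it, and hand it to generators and reviewers as a single source of truth.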
“Experience” Is the Product & Feedback Loops Are the Engine
Here’s the thing: in a lot of these markets, results aren’t enough. Everyone can point to returns, claims, outputs—whatever. That stuff commoditizes fast.
What actually sticks is how the system behaves over time. Does it feel consistent? Does it make sense? Do you understand what it’s doing when things go right and when they don’t? That’s where trust comes from.
Under the surface, as AI or technology becomes more advanced, it’s harder for people to understand what it does. That’s why experience itself becomes the differentiator …
Good systems adapt over time. They are not only focused on the immediate outcome. They focus on learning, growing, and adapting to the practical realities of the environment and audience. One way to accomplish that is to use feedback loops to provide the system with better context on what’s happening, how it’s performing, and which areas may need attention or improved data.
I’ve been enjoying an app called Endel lately. It generates music on demand and can link to biometric signals. When I select the “Move” module, it uses data from devices such as an Apple Watch to adjust what it plays. As my pace changes — from walking to jogging — the cadence of the music shifts with me. It feels responsive, as if the system is listening, pacing, or even leading.
That’s the shift: closed-loop generation; generation that adapts to feedback.
We already do this in business:
In marketing: opens, engagement, retention curves, where people stop watching
In trading and investing: risk-adjusted targets, volatility stability, whether outcomes reflect skill or luck
A Brand Operating System is what happens when you make those loops explicit, measurable, and repeatable.
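Here’s a minimal closed-loop sketch, assuming a generator you can parameterize and a feedback signal you can measure. Every name in it is an illustrative stand-in.

```python
# A toy closed loop: generate, measure feedback, adjust, repeat.
# Stand-ins only; real loops would use real engagement or biometric data.

def generate(params: dict) -> dict:
    """Stand-in for any generative step (content, music, trade signals)."""
    return {"output": f"draft at intensity {params['intensity']:.2f}"}

def measure_feedback(result: dict) -> float:
    """Stand-in for a real signal: engagement, retention, pace, P&L."""
    return 0.6  # pretend the audience responded at 60% of target

def closed_loop(params: dict, target: float = 1.0, steps: int = 5, rate: float = 0.5) -> dict:
    for _ in range(steps):
        result = generate(params)
        signal = measure_feedback(result)
        # Nudge the generator toward the target instead of freezing it.
        params["intensity"] += rate * (target - signal)
    return params

print(closed_loop({"intensity": 0.5}))  # intensity rises as the loop chases its target
```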
“Enough of Me” Has to Be Specified
If you want AI to magnify you instead of replacing you, you have to define what “you” means.
For me, “enough of me” looks like:
A signature point of view: a high-level perspective on what’s happening and what’s possible
Metaphors: because they compress complexity into something people can carry
Constructive challenge: not to tear things down, but to test what to trust
Every person and every company has an equivalent set of signature variables—whether they’ve articulated them or not.
If you don’t specify them, the system will default to what it thinks performs. And performance alone often converges on generic engagement rather than authentic resonance.
Guardrails: The Power of “Forbidden Moves”
Here’s a practical truth: At scale, the most important part of your BrandOS isn’t what it produces … It’s what it refuses to produce.
Forbidden moves are how you protect trust. They ensure you get more of what you want and less of what you don’t—especially when content is manufactured at volume.
Examples of forbidden moves (adapt these to your domain):
No absolute certainty in probabilistic environments
No hype language that undermines trust with sophisticated audiences
No claims without proof standards (define what counts as proof)
No manufactured intimacy that mimics a relationship you didn’t earn
No tone drift that breaks your promise (snarky, overly casual, overly salesy—whatever is off-brand)
Guardrails aren’t mere constraints. They’re how you keep the system aligned with the asset you’re actually building: credibility.
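As a toy illustration, some forbidden moves can be expressed as detectable patterns and checked before anything ships. Real guardrails would be richer (classifiers, human review); the patterns below are illustrative assumptions.

```python
# A toy guardrail check that flags drafts containing forbidden moves.
# Patterns are illustrative; production systems would go well beyond regex.

import re

FORBIDDEN_PATTERNS = {
    "absolute certainty": re.compile(r"\b(guaranteed|can't lose|always works)\b", re.I),
    "hype language": re.compile(r"\b(revolutionary|game-changing|once-in-a-lifetime)\b", re.I),
}

def guardrail_violations(draft: str) -> list[str]:
    """Return the names of any forbidden moves a draft triggers."""
    return [name for name, pattern in FORBIDDEN_PATTERNS.items() if pattern.search(draft)]

draft = "This guaranteed, game-changing system always works."
print(guardrail_violations(draft))  # ['absolute certainty', 'hype language']
```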
Entropy Is Inevitable—So Detect It Early
The risk of outsourcing capability is that the tool changes. Models update. Distribution shifts. Channels fatigue. What worked last quarter can quietly stop working next month.
We’ve discussed this before, but almost everything decays or drifts over time. It’s important to be able to measure that. Here are two examples:
Marketing drift: if open rates drop materially or engagement falls, something is drifting.
Trading drift (high level): if risk-adjusted targets degrade, volatility exceeds targets, or outcomes start to look like luck rather than understanding, something is drifting.
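Drift like this can be flagged with something as simple as comparing a recent window of a metric against its baseline. A minimal sketch, with illustrative thresholds and data:

```python
# A minimal drift detector for any metric you track over time
# (open rates, risk-adjusted returns, etc.). Thresholds are illustrative.

from statistics import mean

def is_drifting(history: list[float], recent_window: int = 4,
                baseline_window: int = 12, tolerance: float = 0.15) -> bool:
    """Flag drift when the recent average falls materially below the baseline."""
    if len(history) < recent_window + baseline_window:
        return False  # not enough data to judge
    baseline = mean(history[-(recent_window + baseline_window):-recent_window])
    recent = mean(history[-recent_window:])
    return recent < baseline * (1 - tolerance)

open_rates = [0.32, 0.31, 0.33, 0.30, 0.32, 0.31, 0.30, 0.31,
              0.32, 0.30, 0.31, 0.32, 0.25, 0.24, 0.23, 0.22]
print(is_drifting(open_rates))  # True: recent opens sit well below the baseline
```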
No technique always works.
But something is always working.
The winners aren’t the ones who find a trick and freeze it. They’re the ones who build systems that notice change early, recalibrate, and keep moving forward.
The Real Choice
Your choice isn’t really whether or not to use AI. If you don’t, you’re going to get left behind.
AI will continue to make “real” cheap; your BrandOS is how you keep “meaning” valuable.
Your choice is whether you’ll let AI optimize you into generic engagement, and eventual irrelevancy … or whether you’ll build a BrandOS that protects what makes you you, while adapting fast enough to stay ahead of drift.
“To speak to a representative, say … representative …”
“Representative.”
“Sorry I didn’t catch that … would you like for me to repeat the options menu?”
“NO”
“Sorry I didn’t catch that … please state wh…”
“REPRESENTATIVE”
“Sorry, all of our representatives are busy helping others at the moment … Goodbye.”
*CALL ENDS*
How many of us have been in this scenario when on the phone with an airline, insurance company, or any other automated call center?
Where are the people? Why can’t I speak to a human?
One of my son’s few memories of my Dad involved listening to him go through a scenario like this with a late-1990s auto-attendant. It was funny. My Dad became increasingly frustrated that he couldn’t get to an actual human being. It devolved into: “Shut up! Stop talking! I’ll give you $50 if you let me talk with a real person.” And it went downhill from there.
Despite being frustrating, these systems save companies time, money, and resources. And in an ideal world, they streamline callers into organized categories, resulting in a more efficient experience. They’re clearly working on some level because you’re seeing increased adoption of AI chatbots, robo-callers, and digital support systems.
The evolution of this technology is already replacing people in marketing, sales, consulting, coaching, and even therapy. Sometimes to mixed effect …
But does the efficiency or effectiveness it creates justify the lack of human connection? Why did so many of the legacy call systems get rated so poorly?
There’s hope, though. I remember air travel before apps let me check in online and skip the counter. I remember banks before ATMs. In both of those situations, I was so anchored in my past experience that I was more aware of what I was missing than of what I was getting.
Recently, I came across an article highlighting a trendy new restaurant in Venice, Italy. They serve the best dishes from several popular restaurants across the city! They must have a massive kitchen and extensive staff to take on such a task, right? Wrong. This restaurant is fully automated; you order and receive food via … vending machines.
My first reaction was this … the convenience sounds fantastic, but wouldn’t that turn a valuable part of the experience into a commodity? It seems like you’d lose so much of the community, human interaction, and pampering that you enjoy when going to a nice restaurant. As I continued to read, however, the article explained that, to “humanize” the restaurant, it is used as a meeting place for food tastings, community gatherings, and question-and-answer sessions. As the world changes, so do the types of experiences people crave.
Humanity and automation merged beautifully.
Semi-Automated Often Beats Fully Automated
“Systemize the predictable so you can humanize the exceptional.”
— Isadore Sharp, Four Seasons
Earlier, I mentioned automated call centers and how frustrating they can be. Still, I’ve come across several companies that have found a healthy balance in how they automate their systems.
For example, an apartment complex near me uses an AI agent to screen calls and send them to the correct department.
Often, the automation tags and organizes calls before routing them to their intended destination, or it answers frequently asked questions without connecting the caller to a human. Either way, it reduces the need to transfer calls in search of the correct department, or it gets callers the information they need without tying up phone lines or wasting their time (and the receptionists’) on basic questions.
“There’s a lot of automation that can happen that isn’t a replacement of humans, but of mind-numbing behavior.”
— Stewart Butterfield
This quote highlights the point of automation! Expedite the menial tasks, which in turn frees people up to provide a far more attentive experience.
Humans tend to seek ways to increase efficiency in every aspect of their world. But we are social creatures, craving meaningful connection and community. Therefore, the human element will not only persist but remain vital.
Man acts as though he were the shaper and master of language, while in fact language remains the master of man. – Martin Heidegger
Words are powerful. They can be used to define, obscure, or even to create reality. They can be taken alone, as precise definitions, or they can be part of a broader spectrum or scale. As such, they can create or destroy … uplift or demoralize. Their power is seemingly limitless.
Language is like a hammer … you can use it to create or destroy something. Although it evolved to aid social interactions and facilitate our understanding of the world, it can also constrain how we perceive it and limit our grasp of technological advances and possibilities.
Before I go into where language fails us, it’s essential to understand why language is so important.
Language Facilitates Our Growth
Because without our language, we have lost ourselves. Who are we without our words? – Melina Marchetta
Language is one of the master keys to advanced thought. As infants, we learn by observing our environment, reading facial expressions and body language, and reflecting on our perceptions. As we improve our understanding and use of language, our brains and cognitive capabilities develop more rapidly.
It’s this ability to cooperate and share expertise that has allowed us to build complex societies and advance technologically. However, as exponential technologies accelerate our progress, language itself may seem increasingly inadequate for the tasks at hand.
What happens when we don’t have a word for something?
The limits of my language mean the limits of my world – Ludwig Wittgenstein
English is famous for co-opting words from other languages, and many languages have nuanced words that can’t be expressed well in English. For example:
Schadenfreude – German for pleasure derived by someone from another person’s misfortune.
Layogenic – Tagalog for someone who looks good from afar but less attractive up close
Koi No Yokan – Japanese for the sense upon first meeting a person that the two of you are going to fall in love
Expressing new concepts opens up our minds to new areas of inquiry. In the same vein, the lack of an appropriate concept or word often limits our understanding.
Wisdom comes from finer distinctions … but sometimes we don’t have words for those distinctions. Here are two examples.
An artist who has studied extensively for many years can somehow “know” that a work is a fake without being able to explain why.
A professional athlete can better recognize the potential in an amateur than a bystander.
How is that possible?
They’re subconsciously recognizing and evaluating factors that others couldn’t assess consciously.
Language as a Limitation
When it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images. – Niels Bohr
In Buddhism, there’s the idea of an Ultimate Reality and a Conventional Reality. Ultimate Reality refers to the objective nature of something, while the Conventional Reality is tied inextricably to our thought processes, and is heavily influenced by our choice of language.
Said differently, language is one of the most important factors in determining what you focus on, what you make it mean, and even what you choose to do. Ultimately, language conveys cultural and personal values and biases, and influences how we perceive “reality”.
This is part of the challenge we have with AI systems. They have incredible power to shape our exposure to language and thought patterns. Consequently, it gives the platform significant power to shape its audience’s thoughts and perceptions. We talked about this in last week’s article. We’ll dive deeper in the future.
To paraphrase philosopher David Hume, our perception of the world is drawn from ideas and impressions. Ideas can only ever be derived from our impressions through a process that often leads us to contradictions and logical fallacies.
Instead of exploring the true nature of things or thinking abstractly, language sifts and categorizes experiences according to our prior heuristics. When you’re concerned about survival, those heuristics save you a lot of energy; when you’re trying to expand the breadth and depth of humanity’s capabilities, they’re potentially a hindrance.
The world around us is changing faster than ever, and complexity is increasing exponentially. It will only get harder to describe the variety and magnificence of existence with our lexicon … so why try?
We personify the world around us, and it limits our creativity.
Many of humanity’s greatest inventions came from skepticism, abstractions, and disassociations from norms.
A mind enclosed in language is in prison. – Simone Weil
What could we create if we let go of language and our intertwined belief systems?
Meanwhile, AI consciousness and superintelligence have become more common topics of discussion and speculation.
When will AI have human-like consciousness?
I will try to answer that, but first, I want to deconstruct the idea a bit. The question itself makes assumptions based on how humans tend to personify things and rely on past patterns to evaluate what’s in front of us.
Said differently, I’m not sure we want AI to think the way humans do. I think we want to make better decisions, take smarter actions, and improve performance. And that means thinking better than humans do.
Back to the original question, I think the term “consciousness” is likely a misnomer, too.
What is consciousness, and what makes us think that for technology to surpass us, it needs it? The idea that AI will eventually have a “consciousness” may be a symptom of our own linguistic biases.
Artificial consciousness may not be anything like human consciousness, in the same way that alien lifeforms may not be carbon-based. An advanced AI could solve problems that even the brightest humans cannot. However, being made of silicon or graphene, it may not have a conscious experience. Even if it did, it likely wouldn’t feel emotions (like shame or greed) … at least not the way we describe them.
Meanwhile, it seems like increasingly sophisticated AIs pass some new hallmark of consciousness every day. They even have their own AI-only social media network now.
Humans Are The Real Black Box
But if thought corrupts language, language can also corrupt thought – George Orwell
Humans are nuanced and surprisingly non-rational creatures. We’re prone to cognitive biases, fear, greed, and discretionary mistakes. We create heuristics from prior experiences (even when they don’t serve us), and we can’t process information as cleanly or efficiently as a computer. We unfailingly search for meaning, even where there often isn’t any. Though flawed, we’re perfect in our imperfections.
When scientists use expensive brain-scanning machines, they often can’t make sense of what they see. When humans give explanations for their own behavior, they’re often inaccurate – more like retrospective rationalizations or confabulations than summaries of the complex computer that is the human brain.
When I first wrote on this subject, I described Artificial Intelligence as programmed, precise, and predictable. At the time, AI was heavily influenced by the data fed into it and the programming of the human who created it. In a way, that meant AI was transparent, even if the logic was opaque.
Today, AI can exhibit emergent capabilities, such as complex reasoning, in-context learning, and abstraction, that were not explicitly programmed by humans. These behaviors can be impressive and highly useful. They are beginning to extend far beyond what the original developers explicitly designed or anticipated (which is why we’re discussing user-sovereign systems versus institutional systems).
In short, we don’t just need to understand how AI was built; we need frameworks for understanding how it acts in diverse contexts. If an AI system behaves consistently with its design goals, performs safely, and produces reliable results, then our trust in it can be justified even if we don’t have perfect insight into every aspect of its internal reasoning — but that trust should be based on rigorous evaluation, interpretability efforts, and awareness of limitations.
Do you agree? Reach out and tell me what you think.
Over the past few weeks, we’ve discussed the threats and opportunities in AI. We’ve also recently taken a look at the themes that drove markets in 2025.
To summarize:
AI and data infrastructure were big winners in 2025. So were precious metals and emerging markets. Meanwhile, REITs, non-AI software, and oil & gas underperformed.
One of the key themes/challenges of the coming years is something I call “The Future of Work”. We have important thinking to do about better understanding AI, what it enables, and where humans fit in this changing equation.
The Same Picture From a Different Perspective
Visual Capitalist recently released two charts that I thought were interesting. The first looks at global GDP growth. The second examines the top global risks for the coming year across various domains.
In the context of our recent discussions, I think they add value.
Beyond surface-level data, they also help explain how fear and excitement affect sentiment.
Global GDP growth is expected to be around 3% in 2026. A net positive. The infographic tells an interesting story as some of the larger economies slow and emerging markets grow.
In fact, the U.S. and the EU account for less than 20% of expected growth. Meanwhile, the Asia-Pacific Region accounts for about 60% of the predicted growth, driven primarily by China and India.
Both countries are incredibly populous and industrious, so their roles are unsurprising. However, the implications and second- and third-order effects of this might be surprising if the trends continue.
Overall, the growth in 2026 is expected to be driven by emerging markets, supported by population and workforce growth, as well as rising consumption.
The second infographic depicts sentiment data collected by the World Economic Forum through interviews with over 1,300 experts.
It doesn’t take much to realize the world is a powder keg of geopolitical and economic conflict. It’s undoubtedly been an underlying theme for many of our insights.
In 2026, geoeconomic confrontation is the top global risk, driven by multiple factors, primarily the tenuous transatlantic alliances and competition between the U.S. and China.
We live in a fascinating era. In addition to wars and the rapid growth of AI, we face increased polarization and misinformation. Meanwhile, environmental changes are evident through resource shortages and more severe weather events.
Choosing Cautious Optimism
It’s easy, looking at all of this together, to feel pulled in two directions at once.
On one hand, the risks are real and increasingly interconnected. Some of the factors include: geopolitical tension, economic fragmentation, intentional misinformation, climate pressure, and a technology that’s moving faster than most institutions (or people) can comfortably absorb. That’s not noise. That’s signal.
On the other hand, growth persists. Innovation continues. New regions, new populations, and new ideas are doing what they’ve always done: stepping into the gaps left by older systems.
The center of gravity is shifting, not collapsing.
This is where cautious optimism earns its place.
History suggests that humanity rarely solves problems cleanly or quickly, but it does tend to solve them eventually … not through a single breakthrough or perfect plan, but through adaptation. You might even call it evolution.
AI fits squarely into that pattern. It’s neither salvation nor doom. It’s leverage. And like all leverage, its impact depends on who uses it, how deliberately, and to what end. The real challenge ahead isn’t whether the technology works (because it already does) but whether humans can understand it well enough to integrate it responsibly into economic systems, organizations, and daily life.
We’re entering a period where progress and instability coexist. That argues for selectivity over speed, and curiosity over fear.
I’m not calling for blind optimism or denying the challenges in front of us. But the opportunities are real, and they reward a willingness to think in longer arcs instead of short cycles.
Recently, several savvy friends have sent me stories about using ChatGPT or other LLMs (like Grok) to beat the markets. The common thread is excitement: a stock pick that worked, or a small portfolio that made money over a few weeks.
While that sounds impressive, I’m less excited than they are — and more interested in what it reveals about how we’re using AI and what’s becoming possible.
Ultimately, in a world where intelligence is cheap and ubiquitous, discernment and system design are what keep you productive and sane.
When I look at these LLM‑based trading stories, I find them interesting (and, yes, I do forward some of them internally with comments about key ideas or capabilities I believe will soon be possible or useful in other contexts).
But interesting isn’t the same as exciting, useful, or trustworthy.
While I’m still skeptical about using LLMs for autonomous trading, I’m thrilled by how far modern AI has come in reasoning about complex, dynamic environments in ways that would have seemed far-fetched not long ago. And I believe LLMs are becoming an increasingly important tool to use with other toolkits and system design processes.
LLM-based trading doesn’t excite me yet, because results like the ones in those stories aren’t simple, repeatable, or scalable. Ten people running the same ‘experiment’ would likely get ten wildly different outcomes, depending on prompts, timing, framing, and interpretation. That makes it a compelling anecdote, not a system you’d bet your future on. With a bit of digging (or trial and error), you’ll likely find that for every positive result, there are many more stories of people losing more than they expected (especially over time).
And that distinction turns out to matter a lot more than whether an individual experiment worked.
Two very different ways to use AI today
One way to make sense of where AI fits today is to separate use cases into two broad categories, like we did last week.
The first is background AI. These are tools that quietly make things better without demanding much thought or oversight. Here are a few simple examples: a maps app rerouting around traffic, autocomplete finishing a sentence, or using Grammarly to edit the grammar and punctuation of this post. You don’t need a complex system around these tools, and you don’t have to constantly check or tune them. You just use them.
There’s no guilt in that. There’s no anxiety about whether tools like these are changing your work in some fundamental way. They remove friction and fade into the background. In many cases, they’re already infrastructure.
The second category is very different: foreground or high-leverage AI. These are areas where quality is crucial, judgment and taste are key, and missteps can subtly harm results over time.
Writing is the most obvious example. AI can help generate drafts, outlines, and alternatives at remarkable speed. But AI writing also has quirks: it smooths things out, defaults to familiar phrasing, and often sounds confident even when it’s wrong or vague. Used lazily, it strips away your authentic voice. Even used judiciously, it can still subtly shift tone and intent in ways that aren’t always obvious until later.
This is where the ‘just let the AI do it’ approach quietly breaks down.
AI as a thought partner, not a ghostwriter
For most use-cases, I believe the most productive use of AI isn’t to let it do the work for you, but to help you think. The distinction here is between an outsourcer (AI as the doer/finisher) and an amplifier (making you more precise, more aware, more deliberate).
We’ve talked about it before, and it is similar to rubber-duck debugging. For example, when writing or editing these articles, I often use AI to homogenize data from different sources or to identify when I’ve been too vague (assuming the reader has knowledge that hasn’t been explicitly stated). AI also helps surface blind spots, improve framing, and generate alternatives when I’m struggling to be concise or to be better understood.
Sometimes the AI accelerates my process (especially with administrivia), but more often, it slows me down in a good way by making me more methodical about what I’m doing. I’m still responsible for judgment and intent, but it helps surface opportunities to improve the quality of my output.
I have to be careful, though. Even though I’m not letting AI write my articles, I’m reading exponentially more AI-generated writing. As a result, it’s probably influencing my thought patterns and preferences, and changing my word usage more than I’d like to admit. It also nudges me toward structures, formatting, and ‘best practices’ that make my writing more polished, but also more predictable and less distinctive.
Said differently, background AI is infrastructure, while foreground AI is where judgment, taste, and risk live. And “human-in-the-loop” framing isn’t about caution or control for its own sake. It’s about preserving quality and focus in places where it matters.
From creating to discerning
As AI becomes more capable, something subtle yet meaningful happens to human productivity. The constraint is no longer how much you can create or consume; it’s how well you can choose what to create and what’s worth consuming.
I often say that the real AI is Amplified Intelligence (which is about making better decisions, taking smarter actions, and continuously improving performance) … but now it’s also Abundant Intelligence.
As it becomes easier to create ideas, drafts, strategies, and variations, they risk becoming commodities. And it pays to remember:
Noise scales faster than signal.
In that environment, the human role shifts from pure creation to discernment: deciding what deserves attention, what’s a distraction, and what should be turned into a repeatable system.
Tying that back to trading, an LLM can generate a thousand trade ideas; the hard part is deciding which, if any, deserve real capital.
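That kind of discernment becomes repeatable once you write the criteria down. Here’s a toy screen with made-up criteria and weights; nothing in it reflects an actual trading process.

```python
# A toy screen for AI-generated ideas: score each idea against explicit
# criteria before it earns attention (or capital). All values illustrative.

def screen(ideas: list[dict], criteria: dict[str, float], threshold: float = 0.7):
    """Keep only ideas whose weighted score clears the bar."""
    survivors = []
    for idea in ideas:
        score = sum(weight * idea["scores"].get(name, 0.0)
                    for name, weight in criteria.items())
        if score >= threshold:
            survivors.append((idea["name"], round(score, 2)))
    return survivors

criteria = {"repeatable": 0.4, "testable": 0.3, "risk_bounded": 0.3}
ideas = [
    {"name": "momentum tweak", "scores": {"repeatable": 0.9, "testable": 0.8, "risk_bounded": 0.7}},
    {"name": "one-off anecdote", "scores": {"repeatable": 0.2, "testable": 0.3, "risk_bounded": 0.4}},
]
print(screen(ideas, criteria))  # [('momentum tweak', 0.81)]
```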
This is true in writing, strategy, and (more broadly) in work as a whole. AI is excellent at generating options. It is much less reliable at deciding which options matter over time and where it is biased or misinformed.
Keeping your eyes on the prize
All of this points to a broader theme: staying productive in a rapidly changing world is not about chasing every new tool or proving that AI can beat humans at specific tasks. It’s about knowing where automation helps and where it’s becoming a crutch or a hindrance.
In a world of abundant intelligence, productivity is less about how much your AI can do and more about how clearly you decide what it should do — and what you must still own.
Some problems benefit from general tools that “just work.” Others require careful system design, clear constraints, and ongoing human judgment. Some require fully bespoke systems, built by skilled teams over time, with decay‑filters to ensure longevity (like what we build at Capitalogix). Using one option when you really need another leads to fragile results and misplaced confidence.
The advantage, going forward, belongs to people and organizations that understand this distinction — and design their workflows to keep humans engaged where they add the most value. In a world where intelligence is increasingly abundant, focus, judgment, and discernment become the real differentiators.