January 25, 2026

  • Staying Productive in the Age of Abundant Intelligence

    Recently, several savvy friends have sent me stories about using ChatGPT or other LLMs (like Grok) to beat the markets. The common thread is excitement: a stock pick that worked, or a small portfolio that made money over a few weeks.

    While that sounds impressive, I’m less excited than they are — and more interested in what it reveals about how we’re using AI and what’s becoming possible.

    Ultimately, in a world where intelligence is cheap and ubiquitous, discernment and system design are what keep you productive and sane.

When I look at these LLM-based trading stories, I find them interesting (and, yes, I do forward some of them internally with comments about key ideas or capabilities I believe will soon be possible or useful in other contexts).

    But interesting isn’t the same as exciting, useful, or trustworthy.

    While I’m still skeptical about using LLMs for autonomous trading, I’m thrilled by how far modern AI has come in reasoning about complex, dynamic environments in ways that would have seemed far-fetched not long ago. And I believe LLMs are becoming an increasingly important tool to use with other toolkits and system design processes.

LLM-based trading doesn’t excite me yet, because results like the ones in those stories aren’t simple, repeatable, or scalable. Ten people running the same ‘experiment’ would likely get ten wildly different outcomes, depending on prompts, timing, framing, and interpretation. That makes it a compelling anecdote, not a system you’d bet your future on. With a bit of digging (or trial and error), you’ll likely find that for every positive result, there are many more stories of people losing more than they expected (especially over time).

    And that distinction turns out to matter a lot more than whether an individual experiment worked.

    Two very different ways to use AI today

    One way to make sense of where AI fits today is to separate use cases into two broad categories, like we did last week.

    The first is background AI. These are tools that quietly make things better without demanding much thought or oversight. Here are a few simple examples: a maps app rerouting around traffic, autocomplete finishing a sentence, or using Grammarly to edit the grammar and punctuation of this post. You don’t need a complex system around these tools, and you don’t have to constantly check or tune them. You just use them.

    There’s no guilt in that. There’s no anxiety about whether tools like these are changing your work in some fundamental way. They remove friction and fade into the background. In many cases, they’re already infrastructure.

    The second category is very different: foreground or high-leverage AI. These are areas where quality is crucial, judgment and taste are key, and missteps can subtly harm results over time.

    Writing is the most obvious example. AI can help generate drafts, outlines, and alternatives at remarkable speed. But AI writing also has quirks: it smooths things out, defaults to familiar phrasing, and often sounds confident even when it’s wrong or vague. Used lazily, it strips away your authentic voice. Even used judiciously, it can still subtly shift tone and intent in ways that aren’t always obvious until later.

    This is where the ‘just let the AI do it’ approach quietly breaks down.

    AI as a thought partner, not a ghostwriter

For most use cases, I believe the most productive use of AI isn’t to let it do the work for you, but to help you think. The distinction is between AI as an outsourcer (the doer and finisher) and AI as an amplifier (a partner that makes you more precise, more aware, and more deliberate).

We’ve talked about this before; it’s similar to rubber-duck debugging. For example, when writing or editing these articles, I often use AI to homogenize data from different sources or to flag where I’ve been too vague (assuming the reader has knowledge that hasn’t been explicitly stated). AI also helps surface blind spots, improve framing, and generate alternatives when I’m struggling to be concise or to be better understood.

    Sometimes the AI accelerates my process (especially with administrivia), but more often, it slows me down in a good way by making me more methodical about what I’m doing. I’m still responsible for judgment and intent, but it helps surface opportunities to improve the quality of my output.

I have to be careful, though. Even though I’m not letting AI write my articles, I’m reading far more AI-generated writing than ever. As a result, it’s probably influencing my thought patterns and preferences, and changing my word usage more than I’d like to admit. It also nudges me toward structures, formatting, and ‘best practices’ that make my writing more polished, but also more predictable and less distinctive.

    Said differently, background AI is infrastructure, while foreground AI is where judgment, taste, and risk live. And “human-in-the-loop” framing isn’t about caution or control for its own sake. It’s about preserving quality and focus in places where it matters.

    From creating to discerning

    As AI becomes more capable, something subtle yet meaningful happens to human productivity. The constraint is no longer how much you can create or consume; it’s how well you can choose what to create and what’s worth consuming.

    I often say that the real AI is Amplified Intelligence (which is about making better decisions, taking smarter actions, and continuously improving performance) … but now it’s also Abundant Intelligence.

    As it becomes easier to create ideas, drafts, strategies, and variations, they risk becoming commodities. And it pays to remember:

    Noise scales faster than signal.

    In that environment, the human role shifts from pure creation to discernment: deciding what deserves attention, what’s a distraction, and what should be turned into a repeatable system.

    Tying that back to trading, an LLM can generate a thousand trade ideas; the hard part is deciding which, if any, deserve real capital.

This is true in writing, strategy, and (more broadly) in work as a whole. AI is excellent at generating options. It is much less reliable at deciding which options matter over time, and at recognizing where it is biased or misinformed.

    Keeping your eyes on the prize

    All of this points to a broader theme: staying productive in a rapidly changing world is not about chasing every new tool or proving that AI can beat humans at specific tasks. It’s about knowing where automation helps and where it’s becoming a crutch or a hindrance.

    In a world of abundant intelligence, productivity is less about how much your AI can do and more about how clearly you decide what it should do — and what you must still own.

Some problems benefit from general tools that “just work.” Others require careful system design, clear constraints, and ongoing human judgment. Some require fully bespoke systems, built by skilled teams over time, with decay filters to ensure longevity (like what we build at Capitalogix). Using one option when you really need another leads to fragile results and misplaced confidence.

    The advantage, going forward, belongs to people and organizations that understand this distinction — and design their workflows to keep humans engaged where they add the most value. In a world where intelligence is increasingly abundant, focus, judgment, and discernment become the real differentiators.

    Early adoption doesn’t require blind acceptance.

    Onwards!

  • Who’s Prompting Who? How AI Changes the Way You Think and Write

    We like to think we’re the ones training our dogs.

    It looks something like … Sit. Treat. Repeat.

    But every now and then, it’s worth asking a slightly uncomfortable question: what if the dog thinks it’s training you?

    From its perspective, it performs a behavior and you respond with a reward. Same loop. Same reinforcement. Just flipped.

    AI has quietly created a similar loop for writers.

    We think we’re prompting the machine. But more often, the machine is nudging us: toward familiar structures, familiar tones, familiar conclusions. It rewards certain styles by making them feel “done” before they’re actually saying anything new.

    That shift matters most for entrepreneurs, executives, and investors—people whose writing isn’t just content, but communication that moves decisions, capital, and teams.

    The real advantage isn’t AI writing for you

    The advantage isn’t AI writing for you.

    It’s AI forcing you to think before you write.

    Most people use AI like an answer vending machine: “Give me a post about X,” “Rewrite this,” “Summarize that,” “Make it punchier.”

    That’s fine if the goal is speed.

    But if the goal is signal—original insight, clear judgment, a point of view that people can trust—then outsourcing the thinking is the fastest way to produce more words with less value.

    Which brings us to the enemy.

    The enemy: careless, blustery & formulaic content

AI makes nice-sounding snippets cheap and abundant. Your articles become more quotable, but often at the expense of conciseness and clarity.

    So the world fills up with writing that is:

    • polished enough to forward,
    • plausible enough to believe,
    • and forgettable enough that no one really needed it in the first place.

    That’s content inflation: more words competing for the same limited attention.

    It usually happens through two traps:

    1) Template trance

    AI is great at default frameworks: lists, pro/con structures, “here are 7 steps,” tidy summaries, executive tone, confident conclusions.

    Those patterns are useful. They’re also seductive.

You start to expect them. You begin to think in them. And the output feels complete because it’s formatted like something you’ve seen a hundred times before. It even happens at the level of sentence structure and punctuation, like the em-dash.

    2) Outsourced judgment

But when left to its own devices, AI does more than write … it chooses.

    The emphasis, the framing, the “what matters,” the implied certainty, the vibe.

    And if you’re not careful, your job quietly shifts from “author” to “approver.”

    That’s how you end up with a lot more content … and a lot less you.

    A real mini-case: writing with my son Zach

I’ve seen this clearly while writing the weekly commentary with my son, Zach. It’s become a recurring (and growing) challenge, so we decided to use today’s posts to document and discuss the internal conflict we feel each week as we try to write something that both sounds like us and meets our new standards.

Over time, Zach has become increasingly sensitive to (and frustrated by) the AI-ification of our output. AI subtly pushes writing toward what “performs” well (what fits algorithms, formats, and engagement loops) rather than what actually sounds like us, and as I use more AI in the research process, that pull becomes more apparent.

    He’s right to be wary.

    The temptation is always there: AI can generate something polished in seconds. You can ship something that looks finished before you’ve done the thinking that makes it worth reading.

    What changed for us wasn’t “using AI less.”

    It was changing what role we gave it.

    Instead of letting AI impose structure, we used it to force judgment.

    We stopped asking it for answers and started asking it to push back:

    • What are you really trying to say?
    • What are you avoiding?
    • What’s your actual opinion versus a generic summary?
    • What would a skeptic challenge?
    • What example proves you mean this?

    That interrogation loop changed the writing. It felt less AI-produced and more real, human, and valuable.

    Here’s the uncomfortable truth: AI is designed to feel satisfying

    AI isn’t just intelligent.

    It’s friction-reducing and often sycophantic.

    Behind the scenes, it’s optimized to produce outputs that feel helpful, fast, and “complete”—often in fewer iterations. That’s great for productivity. It’s also exactly how you slide into template trance.

    There’s a reason AI output can feel like mental fast food, and it’s similar to what we’ve been yelling at kids for with TikTok and social media:

    • quick reward,
    • low effort,
    • easy consumption,
    • repeatable satisfaction.

    The problem isn’t that fast food exists.

    The problem is when it becomes the only thing you eat, or when you confuse it with nourishment.

    A simple discipline shift: declare what’s yours

    In an AI-saturated world, one of the most underrated credibility moves is simple:

    Declare what is yours.

    Not as a disclaimer. As a signal of integrity.

    Label the difference between:

    • your opinion vs a summary,
    • your questions vs your conclusions,
    • your hope or fear vs “the facts,”
    • your judgment vs a compilation.

Many of our recent articles have focused on moving forward in an AI-centric society, and on how to protect your humanity and productivity in the process.

The core lesson is the same across all of them: the future isn’t just about production; it’s about trust & transparency.

    The fix: make AI question you first.

    If AI can herd you into defaults, you can also use AI to herd yourself into depth.

    The simplest change is to stop asking AI to answer first—and start requiring it to question you first.

Here are the steps in the process.

The Question-First “Who’s Prompting Who?” Writing Loop

Use this whenever you want signal, not sludge. Ask the AI to question you about what you want to write, and have it walk you through these steps (a minimal code sketch of the loop follows below):

1. State the intent (plain English): What are you writing, for whom, and why?
2. Explain it simply: Write a “smart 10-year-old” version of your point.
3. Diagnose gaps: Identify vague logic, missing steps, missing definitions, missing examples, and missing counterpoints.
4. Interrogate for specificity: Generate 3–7 targeted questions about assumptions, tradeoffs, constraints, decision implications, and audience objections.
5. Refine and simplify: Rewrite the thesis in one sentence. Then outline it in 5 bullets.
6. Capture working notes: Have the AI keep a compact ledger: Thesis / Claims / Examples / Counterpoint / Takeaway.
7. Only then, draft: Draft once the thinking is real.

    This question-first prompting loop is the difference between “AI makes words” and “AI makes thinking sharper.”
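If you want to make the loop concrete, here’s a minimal sketch of it as a script. It assumes the official OpenAI Python SDK and an OPENAI_API_KEY in your environment; the model name, system prompt, and the `interrogate` helper are illustrative assumptions, not the “right” ones:

```python
# question_first_loop.py: a minimal sketch of the question-first writing loop.
# Assumes the official OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment. The model name, prompts, and the
# `interrogate` helper are illustrative assumptions, not a prescription.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a Socratic interrogator, not a ghostwriter. Never draft prose. "
    "Each turn, ask 3-7 pointed questions about the writer's assumptions, "
    "tradeoffs, missing examples, and likely objections. Maintain a compact "
    "ledger: Thesis / Claims / Examples / Counterpoint / Takeaway."
)

def interrogate(intent: str, rounds: int = 3) -> None:
    """Run a few rounds of questioning before any drafting happens."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"What I want to write, for whom, and why: {intent}"},
    ]
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model="gpt-4o",  # illustrative; any capable chat model works
            messages=messages,
        )
        questions = reply.choices[0].message.content
        print(f"\n{questions}\n")
        answer = input("Your answers (or 'done' to stop): ")
        if answer.strip().lower() == "done":
            break
        # Feed the exchange back in so the interrogation builds on itself.
        messages.append({"role": "assistant", "content": questions})
        messages.append({"role": "user", "content": answer})

if __name__ == "__main__":
    interrogate("A post for founders on why AI should question them before it drafts.")
```

The design choice that matters is the system prompt: it forbids drafting and forces questions first, which is the entire point of the loop. The actual drafting (step 7) still belongs to you.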

    The punchline: the dog isn’t the problem

    AI isn’t a villain, but if you use it recklessly, you can be.

    If you treat AI like a vending machine, it will happily feed you. And you may gradually trade judgment for velocity.

    But if you treat AI like an interrogator, it becomes something else entirely:

    A tool that helps you notice what you actually believe, pressure-test it, and articulate it in a way that sounds like a human with a spine.

    So yes: keep asking what you want AI to do.

    Just don’t forget the deeper question:

    Who’s prompting who?

P.S. Keep reading for a behind-the-scenes look at how we used prompting to help write this article.


    Behind the Scenes: The Conversation That Wrote the Article (Without Writing It)

    This post didn’t start with an outline. It started with an interrogation. If you’re interested, here is a link to the chat transcript and prompt.

In the thread that produced this piece, the key shift was role design: I didn’t want an answer machine. I wanted a Socratic interrogator, a system that makes me declare what I actually believe, separate my point of view from a generic summary, and test the idea until it has a clear golden thread.

    That’s the point: the advantage isn’t AI writing for you. It’s AI interrogating you until your ideas are worth writing.