Staying Productive in the Age of Abundant Intelligence

Recently, several savvy friends have sent me stories about using ChatGPT or other LLMs (like Grok) to beat the markets. The common thread is excitement: a stock pick that worked, or a small portfolio that made money over a few weeks.

While that sounds impressive, I’m less excited than they are — and more interested in what it reveals about how we’re using AI and what’s becoming possible.

Ultimately, in a world where intelligence is cheap and ubiquitous, discernment and system design are what keep you productive and sane.

When I look at these LLM‑based trading stories, I find them interesting (and, yes, I do forward some of them internally with comments about key ideas or capabilities I believe will soon be possible or useful in other contexts).

But interesting isn’t the same as exciting, useful, or trustworthy.

While I’m still skeptical about using LLMs for autonomous trading, I’m thrilled by how far modern AI has come in reasoning about complex, dynamic environments in ways that would have seemed far-fetched not long ago. And I believe LLMs are becoming an increasingly important tool to use alongside other toolkits and system design processes.

LLM-based trading doesn’t excite me yet, because results like the ones in those stories aren’t simple, repeatable, or scalable. Ten people running the same ‘experiment’ would likely get ten wildly different outcomes, depending on prompts, timing, framing, and interpretation. That makes it a compelling anecdote, not a system you’d bet your future on. With a bit of digging (or trial and error), you’ll likely find that for every positive result, there are many more stories of people losing more than they expected (especially over time).
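To make that dispersion concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that each ‘experiment’ is 20 trades from a strategy with no real edge, and it models the differences in prompts, timing, and interpretation as nothing more than a different random seed:

```python
import random

# Purely illustrative assumptions: each "experiment" is 20 trades from a
# strategy with NO real edge (mean return 0%, stdev 2% per trade).
# Differences in prompts, timing, and interpretation are modeled as
# nothing more than a different random seed.
def run_experiment(seed, trades=20, mean=0.0, stdev=0.02):
    rng = random.Random(seed)
    equity = 1.0
    for _ in range(trades):
        equity *= 1 + rng.gauss(mean, stdev)
    return equity - 1  # total return over the experiment

for seed in range(10):  # ten people, ten "identical" experiments
    print(f"experiment {seed}: {run_experiment(seed):+.1%}")
# Expect a wide spread of wins and losses from the same zero-edge process.
```

Run it and the same zero-edge process produces a mix of winners and losers; a handful of impressive-looking results is exactly what you’d expect even if the LLM added nothing.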

And that distinction turns out to matter a lot more than whether an individual experiment worked.

Two very different ways to use AI today

One way to make sense of where AI fits today is to separate use cases into two broad categories, like we did last week.

The first is background AI. These are tools that quietly make things better without demanding much thought or oversight. Here are a few simple examples: a maps app rerouting around traffic, autocomplete finishing a sentence, or Grammarly tidying the grammar and punctuation of this post. You don’t need a complex system around these tools, and you don’t have to constantly check or tune them. You just use them.

There’s no guilt in that. There’s no anxiety about whether tools like these are changing your work in some fundamental way. They remove friction and fade into the background. In many cases, they’re already infrastructure.

The second category is very different: foreground or high-leverage AI. These are areas where quality is crucial, judgment and taste are key, and missteps can subtly harm results over time.

Writing is the most obvious example. AI can help generate drafts, outlines, and alternatives at remarkable speed. But AI writing also has quirks: it smooths things out, defaults to familiar phrasing, and often sounds confident even when it’s wrong or vague. Used lazily, it strips away your authentic voice. Even used judiciously, it can still subtly shift tone and intent in ways that aren’t always obvious until later.

This is where the ‘just let the AI do it’ approach quietly breaks down.

AI as a thought partner, not a ghostwriter

For most use cases, I believe the most productive use of AI isn’t to let it do the work for you, but to help you think. The distinction is between an outsourcer (AI as the doer and finisher) and an amplifier (AI that makes you more precise, more aware, and more deliberate).

We’ve talked about this before; it’s similar to rubber-duck debugging. For example, when writing or editing these articles, I often use AI to homogenize data from different sources or to flag where I’ve been too vague (assuming the reader has knowledge that hasn’t been explicitly stated). AI also helps surface blind spots, improve framing, and generate alternatives when I’m struggling to be concise or to be better understood.

Sometimes the AI accelerates my process (especially with administrivia), but more often, it slows me down in a good way by making me more methodical about what I’m doing. I’m still responsible for judgment and intent, but it helps surface opportunities to improve the quality of my output.

I have to be careful, though. Even though I’m not letting AI write my articles, I’m reading far more AI-generated writing than ever before. As a result, it’s probably influencing my thought patterns and preferences, and changing my word usage, more than I’d like to admit. It also nudges me toward structures, formatting, and ‘best practices’ that make my writing more polished — but also more predictable and less distinctive.

Said differently, background AI is infrastructure, while foreground AI is where judgment, taste, and risk live. And “human-in-the-loop” framing isn’t about caution or control for its own sake. It’s about preserving quality and focus in places where it matters.

From creating to discerning

As AI becomes more capable, something subtle yet meaningful happens to human productivity. The constraint is no longer how much you can create or consume; it’s how well you can choose what to create and what’s worth consuming.

I often say that the real AI is Amplified Intelligence (which is about making better decisions, taking smarter actions, and continuously improving performance) … but now it’s also Abundant Intelligence.

As it becomes easier to create ideas, drafts, strategies, and variations, they risk becoming commodities. And it pays to remember:

Noise scales faster than signal.

In that environment, the human role shifts from pure creation to discernment: deciding what deserves attention, what’s a distraction, and what should be turned into a repeatable system.

Tying that back to trading, an LLM can generate a thousand trade ideas; the hard part is deciding which, if any, deserve real capital.
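A toy multiple-comparisons sketch makes the point. The counts and return distribution below are illustrative assumptions, but the effect they demonstrate (selection bias) is real:

```python
import random
import statistics

rng = random.Random(42)

# Purely illustrative assumptions: 1,000 "trade ideas," each a 50-trade
# backtest of a strategy with zero real edge (0% mean, 2% stdev per trade).
def backtest(trades=50, mean=0.0, stdev=0.02):
    return sum(rng.gauss(mean, stdev) for _ in range(trades))

ideas = [backtest() for _ in range(1000)]
print(f"average idea:      {statistics.mean(ideas):+.1%}")
print(f"best-looking idea: {max(ideas):+.1%}")
# The average hovers near zero, but the best of 1,000 zero-edge ideas
# routinely shows a striking positive "return": pure selection bias.
# The more candidates you generate, the better the luckiest one looks.
```

That is why noise scales faster than signal: every extra candidate you generate raises the ceiling on how good the best-looking fluke appears, without adding any real edge.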

This is true in writing, strategy, and (more broadly) in work as a whole. AI is excellent at generating options. It is much less reliable at deciding which options matter over time, or at recognizing where it is biased or misinformed.

Keeping your eyes on the prize

All of this points to a broader theme: staying productive in a rapidly changing world is not about chasing every new tool or proving that AI can beat humans at specific tasks. It’s about knowing where automation helps and where it’s becoming a crutch or a hindrance.

In a world of abundant intelligence, productivity is less about how much your AI can do and more about how clearly you decide what it should do — and what you must still own.

Some problems benefit from general tools that “just work.” Others require careful system design, clear constraints, and ongoing human judgment. Some require fully bespoke systems, built by skilled teams over time, with decay‑filters to ensure longevity (like what we build at Capitalogix). Using one option when you really need another leads to fragile results and misplaced confidence.

The advantage, going forward, belongs to people and organizations that understand this distinction — and design their workflows to keep humans engaged where they add the most value. In a world where intelligence is increasingly abundant, focus, judgment, and discernment become the real differentiators.

Early adoption doesn’t require blind acceptance.

Onwards!
