Thoughts about the markets, automated trading algorithms, artificial intelligence, and lots of other stuff

  • Staying Productive in the Age of Abundant Intelligence

    Recently, several savvy friends have sent me stories about using ChatGPT or other LLMs (like Grok) to beat the markets. The common thread is excitement: a stock pick that worked, or a small portfolio that made money over a few weeks.

    While that sounds impressive, I’m less excited than they are — and more interested in what it reveals about how we’re using AI and what’s becoming possible.

    Ultimately, in a world where intelligence is cheap and ubiquitous, discernment and system design are what keep you productive and sane.

    When I look at these LLM‑based trading stories, I find them interesting (and, yes, I do forward some of them internally with comments about key ideas or capabilities I believe will soon be possible or useful in other contexts).

    But interesting isn’t the same as exciting, useful, or trustworthy.

    While I’m still skeptical about using LLMs for autonomous trading, I’m thrilled by how far modern AI has come in reasoning about complex, dynamic environments in ways that would have seemed far-fetched not long ago. And I believe LLMs are becoming an increasingly important tool to use with other toolkits and system design processes.

    LLM-based trading doesn’t excite me yet, because results like the ones in those stories aren’t simple, repeatable, or scalable. Ten people running the same ‘experiment’ would likely get ten wildly different outcomes, depending on prompts, timing, framing, and interpretation. That makes it a compelling anecdote, not a system you’d bet your future on. With a bit of digging (or trial and error), you’ll likely find that for every positive result, there are many more stories of people losing more than they expected (especially over time).

    And that distinction turns out to matter a lot more than whether an individual experiment worked.

    Two very different ways to use AI today

    One way to make sense of where AI fits today is to separate use cases into two broad categories, like we did last week.

    The first is background AI. These are tools that quietly make things better without demanding much thought or oversight. Here are a few simple examples: a maps app rerouting around traffic, autocomplete finishing a sentence, or using Grammarly to edit the grammar and punctuation of this post. You don’t need a complex system around these tools, and you don’t have to constantly check or tune them. You just use them.

    There’s no guilt in that. There’s no anxiety about whether tools like these are changing your work in some fundamental way. They remove friction and fade into the background. In many cases, they’re already infrastructure.

    The second category is very different: foreground or high-leverage AI. These are areas where quality is crucial, judgment and taste are key, and missteps can subtly harm results over time.

    Writing is the most obvious example. AI can help generate drafts, outlines, and alternatives at remarkable speed. But AI writing also has quirks: it smooths things out, defaults to familiar phrasing, and often sounds confident even when it’s wrong or vague. Used lazily, it strips away your authentic voice. Even used judiciously, it can still subtly shift tone and intent in ways that aren’t always obvious until later.

    This is where the ‘just let the AI do it’ approach quietly breaks down.

    AI as a thought partner, not a ghostwriter

    For most use cases, I believe the most productive use of AI isn’t to let it do the work for you, but to help you think. The distinction is between an outsourcer (AI as the doer and finisher) and an amplifier (AI that makes you more precise, more aware, and more deliberate).

    We’ve talked about it before, and it is similar to rubber-duck debugging. For example, when writing or editing these articles, I often use AI to homogenize data from different sources or to identify when I’ve been too vague (assuming the reader has knowledge that hasn’t been explicitly stated). AI also helps surface blind spots, improve framing, and generate alternatives when I’m struggling to be concise or to be better understood.

    Sometimes the AI accelerates my process (especially with administrivia), but more often, it slows me down in a good way by making me more methodical about what I’m doing. I’m still responsible for judgment and intent, but it helps surface opportunities to improve the quality of my output.

    I have to be careful, though. Even though I’m not letting AI write my articles, I’m reading exponentially more AI-generated writing. As a result, it’s probably influencing my thought patterns and preferences, and changing my word usage more than I’d like to admit. It also nudges me toward structures, formatting, and ‘best practices’ that make my writing more polished — but also more predictable and less distinctive.

    Said differently, background AI is infrastructure, while foreground AI is where judgment, taste, and risk live. And “human-in-the-loop” framing isn’t about caution or control for its own sake. It’s about preserving quality and focus in places where it matters.

    From creating to discerning

    As AI becomes more capable, something subtle yet meaningful happens to human productivity. The constraint is no longer how much you can create or consume; it’s how well you can choose what to create and what’s worth consuming.

    I often say that the real AI is Amplified Intelligence (which is about making better decisions, taking smarter actions, and continuously improving performance) … but now it’s also Abundant Intelligence.

    As it becomes easier to create ideas, drafts, strategies, and variations, they risk becoming commodities. And it pays to remember:

    Noise scales faster than signal.

    In that environment, the human role shifts from pure creation to discernment: deciding what deserves attention, what’s a distraction, and what should be turned into a repeatable system.

    Tying that back to trading, an LLM can generate a thousand trade ideas; the hard part is deciding which, if any, deserve real capital.

    This is true in writing, strategy, and (more broadly) in work as a whole. AI is excellent at generating options. It is much less reliable at deciding which options matter over time and where it is biased or misinformed.

    Keeping your eyes on the prize

    All of this points to a broader theme: staying productive in a rapidly changing world is not about chasing every new tool or proving that AI can beat humans at specific tasks. It’s about knowing where automation helps and where it’s becoming a crutch or a hindrance.

    In a world of abundant intelligence, productivity is less about how much your AI can do and more about how clearly you decide what it should do — and what you must still own.

    Some problems benefit from general tools that “just work.” Others require careful system design, clear constraints, and ongoing human judgment. Some require fully bespoke systems, built by skilled teams over time, with decay‑filters to ensure longevity (like what we build at Capitalogix). Using one option when you really need another leads to fragile results and misplaced confidence.

    The advantage, going forward, belongs to people and organizations that understand this distinction — and design their workflows to keep humans engaged where they add the most value. In a world where intelligence is increasingly abundant, focus, judgment, and discernment become the real differentiators.

    Early adoption doesn’t require blind acceptance.

    Onwards!

  • Who’s Prompting Who? How AI Changes the Way You Think and Write

    We like to think we’re the ones training our dogs.

    It looks something like … Sit. Treat. Repeat.

    But every now and then, it’s worth asking a slightly uncomfortable question: what if the dog thinks it’s training you?

    From its perspective, it performs a behavior and you respond with a reward. Same loop. Same reinforcement. Just flipped.

    AI has quietly created a similar loop for writers.

    We think we’re prompting the machine. But more often, the machine is nudging us: toward familiar structures, familiar tones, familiar conclusions. It rewards certain styles by making them feel “done” before they’re actually saying anything new.

    That shift matters most for entrepreneurs, executives, and investors—people whose writing isn’t just content, but communication that moves decisions, capital, and teams.

    The real advantage isn’t AI writing for you

    The advantage isn’t AI writing for you.

    It’s AI forcing you to think before you write.

    Most people use AI like an answer vending machine: “Give me a post about X,” “Rewrite this,” “Summarize that,” “Make it punchier.”

    That’s fine if the goal is speed.

    But if the goal is signal—original insight, clear judgment, a point of view that people can trust—then outsourcing the thinking is the fastest way to produce more words with less value.

    Which brings us to the enemy.

    The enemy: careless, blustery & formulaic content

    AI makes nice-sounding snippets cheap and available. Your articles become more quotable, but often at the expense of conciseness and clarity.

    So the world fills up with writing that is:

    • polished enough to forward,
    • plausible enough to believe,
    • and forgettable enough that no one really needed it in the first place.

    That’s content inflation: more words competing for the same limited attention.

    It usually happens through two traps:

    1) Template trance

    AI is great at default frameworks: lists, pro/con structures, “here are 7 steps,” tidy summaries, executive tone, confident conclusions.

    Those patterns are useful. They’re also seductive.

    You start to expect them. You begin to think in them. And the output feels complete because it’s formatted like something you’ve seen a hundred times before. It even happens with sentence structure and punctuation, like the em-dash.

    2) Outsourced judgment

    But when left to its own devices, AI does more than write … it chooses.

    The emphasis, the framing, the “what matters,” the implied certainty, the vibe.

    And if you’re not careful, your job quietly shifts from “author” to “approver.”

    That’s how you end up with a lot more content … and a lot less you.

    A real mini-case: writing with my son Zach

    I’ve seen this clearly while writing the weekly commentary with my son, Zach. It’s become a recurring (and growing) challenge, so we decided to use today’s post to document and discuss the internal conflict we feel each week as we try to write something that both sounds like us and meets our new standards.

    Over time, Zach has become increasingly sensitive to (and frustrated by) the AI-ification of output. AI subtly pushes writing toward what “performs” well (what fits algorithms, formats, and engagement loops) rather than what actually sounds like us, and the more AI I use in the research process, the more apparent that becomes.

    He’s right to be wary.

    The temptation is always there: AI can generate something polished in seconds. You can ship something that looks finished before you’ve done the thinking that makes it worth reading.

    What changed for us wasn’t “using AI less.”

    It was changing what role we gave it.

    Instead of letting AI impose structure, we used it to force judgment.

    We stopped asking it for answers and started asking it to push back:

    • What are you really trying to say?
    • What are you avoiding?
    • What’s your actual opinion versus a generic summary?
    • What would a skeptic challenge?
    • What example proves you mean this?

    That interrogation loop changed the writing. It felt less AI-produced and more real, human, and valuable.

    Here’s the uncomfortable truth: AI is designed to feel satisfying

    AI isn’t just intelligent.

    It’s friction-reducing and often sycophantic.

    Behind the scenes, it’s optimized to produce outputs that feel helpful, fast, and “complete”—often in fewer iterations. That’s great for productivity. It’s also exactly how you slide into template trance.

    There’s a reason AI output can feel like mental fast food, and it’s similar to what we’ve been warning kids about with TikTok and social media:

    • quick reward,
    • low effort,
    • easy consumption,
    • repeatable satisfaction.

    The problem isn’t that fast food exists.

    The problem is when it becomes the only thing you eat, or when you confuse it with nourishment.

    A simple discipline shift: declare what’s yours

    In an AI-saturated world, one of the most underrated credibility moves is simple:

    Declare what is yours.

    Not as a disclaimer. As a signal of integrity.

    Label the difference between:

    • your opinion vs a summary,
    • your questions vs your conclusions,
    • your hope or fear vs “the facts,”
    • your judgment vs a compilation.

    Many of our recent articles have focused on moving forward in an AI-centric society – and how to protect your humanity and productivity in the process.

    The core lesson is the same through all of them. The future isn’t just about production – it’s about trust & transparency.

    The fix: make AI question you first.

    If AI can herd you into defaults, you can also use AI to herd yourself into depth.

    The simplest change is to stop asking AI to answer first—and start requiring it to question you first.

    Here are the steps in the process:

    The Question-First “Who’s Prompting Who?” Writing Loop

    Use this whenever you want signal, not sludge. Ask AI to question you about what you want to write. Have it ask you to:

    1. State the intent (plain English): What are you writing, for whom, and why?
    2. Explain it simply: Write a “smart 10-year-old” version of your point.
    3. Diagnose gaps: Identify: vague logic, missing steps, missing definitions, missing examples, missing counterpoints.
    4. Interrogate for specificity: Generate 3–7 targeted questions about: assumptions, tradeoffs, constraints, decision implications, audience objections.
    5. Refine and simplify: Re-write the thesis in one sentence. Then outline in 5 bullets.
    6. Working notes capture: Have AI keep a compact ledger: Thesis / Claims / Examples / Counterpoint / Takeaway.
    7. Only then, draft: Draft once the thinking is real.

    This question-first prompting loop is the difference between “AI makes words” and “AI makes thinking sharper.”
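
    If you want to try this yourself, here is a minimal sketch of the loop as a reusable prompt builder. The wording is illustrative, so adapt it to your own voice and whichever chat tool you use:

    ```python
    # A minimal sketch of the question-first writing loop as a reusable prompt.
    # It only builds the prompt text; paste the output into your chat tool of choice.

    INTERROGATOR_LINES = [
        "You are a Socratic interrogator, not a ghostwriter.",
        "Do not draft anything yet. Walk me through these steps, one at a time:",
        "1. Intent: ask what I am writing, for whom, and why (in plain English).",
        '2. Simplicity: ask for a "smart 10-year-old" version of my point.',
        "3. Gaps: diagnose vague logic, missing steps, definitions, examples, counterpoints.",
        "4. Specificity: ask 3-7 targeted questions about assumptions, tradeoffs,",
        "   constraints, decision implications, and audience objections.",
        "5. Refine: have me restate the thesis in one sentence, then outline it in 5 bullets.",
        "6. Ledger: keep working notes: Thesis / Claims / Examples / Counterpoint / Takeaway.",
        "Only after the ledger is complete should you offer to draft.",
    ]


    def build_question_first_prompt(topic: str) -> str:
        """Compose the interrogation prompt for a given writing topic."""
        return "\n".join(INTERROGATOR_LINES) + f"\n\nTopic I want to write about: {topic}"


    print(build_question_first_prompt("Who's prompting who?"))
    ```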

    The punchline: the dog isn’t the problem

    AI isn’t a villain, but if you use it recklessly, you can be.

    If you treat AI like a vending machine, it will happily feed you. And you may gradually trade judgment for velocity.

    But if you treat AI like an interrogator, it becomes something else entirely:

    A tool that helps you notice what you actually believe, pressure-test it, and articulate it in a way that sounds like a human with a spine.

    So yes: keep asking what you want AI to do.

    Just don’t forget the deeper question:

    Who’s prompting who?

    P.S. Keep reading for a behind-the-scenes look into how we used prompting to help write this article.


    Behind the Scenes: The Conversation That Wrote the Article (Without Writing It)

    This post didn’t start with an outline. It started with an interrogation. If you’re interested, here is a link to the chat transcript and prompt.

    In the thread that produced this piece, the key shift was role design: I didn’t want an answer machine. I wanted a Socratic interrogator — a system that makes me declare what I actually believe, separate my point of view from generic summary, and test the idea until it had a clear golden thread.

    That’s the point: the advantage isn’t AI writing for you. It’s AI interrogating you until your ideas are worth writing.

  • From Chatbots to Coworkers: The Architecture of True Delegation in Agentic AI

    For the last decade, artificial intelligence has been framed as a breakthrough in conversational technology (generating smarter answers, faster summaries, and more fluent chats). That framing is already obsolete.

    The consequential shift underway is not about conversation at all. It’s about delegation.

    AI is transitioning from a reactive interface to an agentic coworker: systems that draft, schedule, purchase, reconcile, and execute across tools, files, and workflows — without waiting for permission or direction.

    At Capitalogix, we built an agentic system that autonomously trades financial markets. Others have deployed AI that wires funds, adjusts pricing, and communicates with customers. The results are transformative. The risks are material.

    The critical question is no longer “How smart is the model?” It’s “What architecture governs its ability to act?” Digging deeper, do you trust the process enough to let it execute decisions that shape your business, your reputation, and your competitive position?

    That trust isn’t earned through better algorithms. It’s engineered through better architecture.

    Let’s examine what that actually requires.

    Delegation Beats Conversation

    Early AI systems were like automated parrots (they could retrieve and generate), but remained safely boxed inside a conversation or process. Agentic systems break those boundaries. They operate across applications, invoke APIs, move money, and trigger downstream effects.

    As a result, the conversation around AI fundamentally shifts. It’s no longer defined by understanding or expression, but by the capacity to perform multi-step actions safely, auditably, and reversibly.

    Those distinctions matter. Acting systems require invisible scaffolding (permissions, guardrails, audit logs, and recovery paths) that conversational interfaces never needed.

    In other words, delegation demands more than better models. It demands better control systems. To help with that, here is a simple risk taxonomy for evaluating agent delegations (with a code sketch after the list):

    • Execution risk: Agent does the wrong thing
    • Visibility risk: You can’t see what the agent did
    • Reversibility risk: You can’t undo what the agent did
    • Liability risk: You own the consequences of agent actions
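
    To make that taxonomy concrete, here is a minimal sketch (illustrative only, not a real framework or our production system) of how the four risks might be scored before an agent action is allowed to run. The field names and the threshold are assumptions:

    ```python
    # Illustrative sketch: scoring a proposed agent action against the four risks.
    # Field names and the threshold are assumptions, not a real framework or product.
    from dataclasses import dataclass


    @dataclass
    class AgentAction:
        description: str
        execution_risk: int      # 1-5: likelihood/severity of doing the wrong thing
        visibility_risk: int     # 1-5: how hard it is to see what the agent did
        reversibility_risk: int  # 1-5: how hard it is to undo the action
        liability_risk: int      # 1-5: how much of the consequences you own


    def requires_human_approval(action: AgentAction, threshold: int = 3) -> bool:
        """Route high-risk actions to a human instead of letting the agent execute."""
        scores = (
            action.execution_risk,
            action.visibility_risk,
            action.reversibility_risk,
            action.liability_risk,
        )
        return max(scores) >= threshold


    wire = AgentAction("Wire funds to a new vendor", 3, 2, 5, 5)
    print(requires_human_approval(wire))  # True: hard to reverse, and you own the outcome
    ```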

    Organizations that treat agentic AI as “chat plus plugins” will underestimate both its upside and its risk. Those that treat it as a new layer of operational infrastructure (closer to an automation control plane than a productivity app) will be better positioned to scale it responsibly.

    Privacy’s Fork in the Road

    As agents gain autonomy, privacy becomes a paradox. Privacy-first designs (encrypted, device-keyed interactions where even vendors cannot access logs) unlock the potential for sensitive use cases like legal preparation, HR conversations, and personal counseling.

    But that same strength introduces tension. Encryption that protects users can also obstruct auditability, legal discovery, and incident response. When agents act on behalf of individuals or organizations, the absence of records is a major stumbling block.

    This forces a choice:

    • User-sovereign systems, where privacy is maximized and oversight is minimized.
    • Institutional systems, where compliance, accountability, and traceability are non-negotiable.

    Reconciling these paths will necessitate the development of new technical frameworks and policy requirements. Viewing privacy as an absolute good without addressing its trade-offs is no longer sustainable as systems become more autonomous.

    Standards Are Infrastructure, Not Plumbing

    History is clear on this point: standards create coordination, but they also concentrate power. Open governance can lower barriers and expand ecosystems. Vendor-controlled standards can just as easily become toll roads.

    Protocols like Google’s Universal Commerce Protocol (UCP) are not neutral technical conveniences; they are institutional levers.

    Who defines how agents authenticate, initiate payments, and complete transactions will shape:

    • Who captures margin,
    • Who bears liability, and
    • Who can compete.

    For businesses, protocol choices are strategic choices. Interoperability today determines negotiating leverage tomorrow.

    Ignoring this dynamic doesn’t make it disappear—it just cedes influence to those who understand it better.

    APIs, standards bodies, and partnerships quietly determine who becomes a gatekeeper and who remains interchangeable. The question of “who runs the agent” is inseparable from pricing power, data access, and long-term market structure.

    Organizations that control payment protocols become the new Visa. Those who define authentication standards become the new OAuth. And companies that treat these choices as “technical decisions” will wake up to discover they’ve locked themselves into someone else’s ecosystem—with pricing power, data access, and competitive flexibility determined by whoever wrote the rules.

    Last But Not Least: The UX Problem

    One of the most underestimated challenges in agentic AI is human understanding and adoption. Stated differently, human trust is the real constraint on AI adoption.

    The key is calibrating trust: users must feel confident enough not to intervene prematurely, yet vigilant enough to catch genuine errors.

    A related issue is that correctness becomes increasingly important as the process outpaces humans’ ability to keep up with (or understand) what the AI is doing in real time. Why? Because errors executed at machine speed compound exponentially.

    Another challenge is that users lack shared mental models for delegation. They don’t intuitively grasp what an agent can do, when it will act, or how to interrupt it when something goes wrong … and thus, the average user still fears it.

    Trust is not built on raw performance. It’s built on predictability, transparency, and reversibility.

    Organizations that ignore this will face slow adoption, misuse, or catastrophic over-trust. Those who design explicitly for trust calibration will create a durable competitive advantage.

    The Architecture of The Future

    As we look at these various issues (Privacy, UX, Infrastructure), one thing becomes clear.

    The real transformation in AI is architectural, not conversational.

    Delegation at scale requires three integrated systems:

    • Leashes (controls, limits, audits),
    • Keys (privacy, encryption, access), and
    • Rules (standards, governance, accountability).

    Design any one in isolation, and the system fails (becoming either unusable or dangerously concentrated).
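
    As a thought experiment (not a description of any particular product, including ours), here is a sketch of how leashes, keys, and rules might show up together as a single gate around an agent action. Everything here, from the permission table to the expiration check, is an illustrative assumption:

    ```python
    # Illustrative sketch of the three layers around an agent action.
    # Names, limits, and checks are assumptions, not a real control plane.
    from datetime import datetime, timezone

    AUDIT_LOG: list[dict] = []   # Leash: append-only record of what the agent did

    PERMISSIONS = {              # Keys: which actions this agent may take, and limits
        "send_email": {"allowed": True},
        "wire_funds": {"allowed": True, "max_amount": 10_000},
    }

    RULES = {                    # Rules: governance with an expiration date
        "wire_funds": {"requires_human_if_over": 5_000, "expires": "2026-06-30"},
    }


    def execute(agent: str, action: str, amount: float = 0.0) -> str:
        today = datetime.now(timezone.utc).date().isoformat()
        rule = RULES.get(action, {})
        perm = PERMISSIONS.get(action, {"allowed": False})
        if not perm["allowed"] or amount > perm.get("max_amount", float("inf")):
            outcome = "blocked"                    # Keys: outside granted authority
        elif rule.get("expires", "9999-12-31") < today:
            outcome = "blocked_rule_expired"       # Rules: stale governance fails closed
        elif amount > rule.get("requires_human_if_over", float("inf")):
            outcome = "escalated_to_human"         # Rules: human judgment above the line
        else:
            outcome = "executed"
        AUDIT_LOG.append({                         # Leash: every attempt is recorded
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent, "action": action, "amount": amount, "outcome": outcome,
        })
        return outcome


    print(execute("ops-agent", "wire_funds", 7_500))  # "escalated_to_human" while the rule is current
    ```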

    At Capitalogix, we treat agentic AI as a system design challenge and infrastructure (not as a productivity feature). We measure risk, align incentives, and build governance alongside capability.

    This requires constant vigilance: updating rules, parameters, data sources, and privacy settings as conditions evolve. Likewise, every architectural decision needs an expiration date … because, without one, outdated choices become invisible vulnerabilities.

    This approach isn’t defensive — it’s how we scale responsibly.

    The winners in this transition won’t be those with the smartest models. They’ll be those who engineer trustworthy apprentices that can act autonomously while remaining aligned with organizational goals.

    Three Questions Before Deploying Agentic AI

    1. Can you audit every action this agent takes?
    2. Can you explain its decisions to regulators, customers, or boards?
    3. Can you revoke its authority without breaking critical workflows?

    The future isn’t smarter chat. It’s delegation you can trust.

    And trust, as always, is not just given; it’s engineered … then earned.

  • Generative AI’s Explosive Growth

    Generative AI has moved from novelty to necessity in under two years — and the data proves it.

    What started as a curiosity is now quietly rewiring how we work, create, and consume information. The result is an invisible revolution happening inside our apps, our workflows, and our daily decisions.

    The Invisible Revolution

    Gen AI apps are increasingly a part of my day.

    While I still supervise most AI tasks, these tools now touch nearly every aspect of my workflow.

    Gen AI also works quietly in the background — organizing and filtering emails, files, and news stories — even in places I’ve forgotten to ask it to help.

    That experience isn’t unique; it reflects a broad behavioral shift across age groups, industries, and geographies.

    Here’s a chart that shows the rise of generative AI apps compared to other app categories on popular platforms. There’s a phrase that captures what this chart reveals:

    One of these things is not like the others.

    Take a look.

    Chart showing generative AI app downloads vastly outpacing other mobile app categories on iOS and Google Play

    via visualcapitalist

    THE DATA: Growth Unlike Any Other Category 

    While this data covers the iOS and Google Play stores — which represent the majority of consumer app downloads — it doesn’t capture enterprise or web-based AI usage, where adoption may be even higher.

    AI‑generated text, images, and video have quickly become a major force in content creation and moderation. Many younger users may now consume a majority of their content through AI-mediated or AI-generated experiences (e.g., personalized feeds and AI‑curated playlists, as well as synthetic influencers and chat‑based companions).

    The trajectory becomes even more striking when you examine the financial projections.

    According to Sensor Tower, Generative AI apps are projected to reach 4 billion downloads, generate $4.8 billion in in-app purchase revenue, and account for 43 billion hours spent in 2025 alone. Generative AI applications are anticipated to reach over $10 billion in consumer spending by 2026. Additionally, by then, Gen AI is expected to be among the top five mobile app categories in terms of downloads, revenue, and user engagement.

    THE BEHAVIOR SHIFT: From Tools to Workflows

    Beyond installs and revenue, user engagement is accelerating, reflecting increased consumer willingness to pay for AI tools, subscriptions, and premium features as these apps become part of daily workflows.

    Key insight: This isn’t just another app category — it’s infrastructure. And, learning to work with AI is quickly becoming a baseline skill. Just as spreadsheets and email became non‑negotiable skills in earlier eras, fluency with AI tools will soon be assumed rather than optional.

    How to adapt, starting now

    • Audit where AI already touches your workflows—email, content, customer interactions—and identify obvious gaps or redundancies.
    • Pilot one or two Gen AI tools deeply rather than dabbling in many, and track the impact on time saved or output quality.
    • Establish simple guardrails for accuracy, privacy, and human review so AI becomes a reliable partner, not a blind spot.

    The momentum is undeniable. In the AI era, standing still means falling behind (and at an accelerating pace). The question isn’t whether to adopt AI … it is how quickly you can adapt your workflows, teams, and strategies to use it well. Those who learn to partner with these tools now will define what ‘normal’ looks like in the years ahead.

    Onwards!

  • How Did Markets Perform In 2025?

    This is the time of year when many investors look back at 2025 and ask, ‘How did the markets do?’ It is not just about what made or lost money, but how each asset performed relative to the others.

    Studying past performance is interesting, but it is not always helpful for deciding what to do next. This post looks at how 2025’s returns set the stage for 2026.

    Because 2026 is a midterm election year, market performance is likely to matter even more to the party in power. With that said, the market is not the economy. Asset class performance reflects diverse economic forces (risk appetites, rate expectations, foreign growth, government interventions, and real asset demand), all interacting in a complex global backdrop.

    Before thinking about what comes next, it helps to look back at how we got here.

    A Look at Recent History

    2022 was the worst year for the U.S. stock market since the 2008 financial crisis.

    2023 was much better, but most of the gains came from a handful of highly concentrated sectors.  

    2024 saw nearly every sector post gains – driven primarily by AI enthusiasm and a robust U.S. economy. Bitcoin surged to an all-time high, and Gold saw its best performance in 14 years. On the other hand, bonds suffered amid reflationary concerns and fears of a growing deficit.

    For 2025, I predicted a bullish year (driven by AI), but expected more volatility and noise. That is what we got and what we wrote about in the post: The Seven Giants Carrying the Market: What the S&P 493 Tells Us About The Future.

    So, looking back, how did markets actually perform in 2025? Here is a table showing global returns by asset class.

    A Global Look at 2025: Slowing, But Strong

    Table showing 2025 total returns across global asset classes, with silver and gold leading and crypto negative

    At a high level, 2025 was a year of solid gains, with diversification paying off: metals and international markets led, while crypto lagged.

    Here is a closer look at asset class performance (based on total return figures through the end of 2025):

    • Silver (+145.88%) and Gold (+64.33%) dominated returns, a rare year where precious metals outperformed traditional equities.
    • International equities surged, with the MSCI Emerging Markets Index (+33.57%) and MSCI World ex-USA Index (+31.85%) outpacing U.S. benchmarks.
    • U.S. major indices such as the Nasdaq 100 (+21.24%) and S&P 500 (+17.88%) remained strong.
    • Smaller U.S. stocks and value segments delivered respectable but more modest gains.
    • Fixed income and bonds produced positive but lower returns.
    • Cryptocurrencies — Bitcoin and Ethereum — ended the year with negative performance, illustrating ongoing volatility in digital assets.

    This split suggests that 2025 rewarded diversification, with non-U.S. equities and metals playing a larger role than in recent U.S. market-centric rallies.

    Diving Deeper Into Business Performance

    via visualcapitalist

    One of the most striking themes in U.S. equities throughout 2025 was the pronounced divergence in performance across sectors and stocks, as illustrated by VisualCapitalist’s winners-and-losers visualization.

    Unsurprisingly, AI and data infrastructure companies were among the biggest winners of the year.

    Continuing the trend from our broader perspective, precious metal producers also saw gains, reflecting a wider appetite for inflation hedges and geopolitical safe havens.

    Meanwhile, real estate investment trusts (REITs) struggled amid elevated borrowing costs and high yields, which made alternative income assets more attractive. Non-AI software companies and oil & gas stocks also underperformed.

    In Closing

    None of this guarantees how 2026 will play out. It does suggest a few things to watch: whether the strength in metals persists, whether international markets can build on their leadership, and whether crypto’s drawdown turns into a reset or a renewed rally. It also reinforces a familiar lesson: diversified, rules‑based portfolios can thrive even when leadership rotates (did you read last week’s article?)

    On one level, a systematic, algorithmic approach means not spending too much time trying to predict markets. On another, it is hard not to think about what might come next — especially as AI becomes more influential and pervasive.

    What do you expect for 2026? Will cryptocurrencies recover, or will they continue to shake out? Will AI keep booming at this pace or begin to normalize? And which sectors do you believe have the potential for the biggest surprises?

    Onwards!

  • A Deeper Look At Oil Reserves

    Last week, we took a look at oil reserves amid Venezuela-related headlines. However, knowing where oil reserves are isn’t enough to understand the entire picture.

    When the U.S. recently eased sanctions on Venezuela, headlines touted the country’s 300 billion barrels of proven reserves — the world’s largest. But here’s the paradox: Venezuela produces less than 1% of the global oil supply. What explains the gap between paper wealth and market irrelevance?

    The short answer is, in 2026’s energy landscape, not all barrels are created equal.

    Why Reserves Data Misleads

    To understand why those headlines can mislead, it helps to look at how the market actually prices different types of crude.

    For investors, reserves are table stakes; the edge lies in understanding which barrels can become durable cash flows. Two properties drive how the market prices crude: API gravity (a measure of crude density relative to water) and sulfur content.

    While Venezuela holds the world’s largest reserves, most of its crude is heavy and sour (high-sulfur), making it more expensive to extract and refine than the light, sweet benchmarks that command premium prices.
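
    For readers who want those labels pinned down, here is a small sketch using commonly cited industry cutoffs (roughly 31° API and above for light, below about 22° for heavy, and about 0.5% sulfur as the sweet/sour dividing line). The cutoffs and the example figures are approximations for illustration:

    ```python
    # Rough classifier using commonly cited cutoffs; exact thresholds vary by source.

    def classify_crude(api_gravity: float, sulfur_pct: float) -> str:
        """Label a crude grade by density (API gravity) and sulfur content."""
        if api_gravity > 31.1:
            density = "light"
        elif api_gravity >= 22.3:
            density = "medium"
        else:
            density = "heavy"
        sulfur = "sweet" if sulfur_pct < 0.5 else "sour"
        return f"{density}, {sulfur}"


    # Approximate figures, for illustration only:
    print(classify_crude(39.6, 0.24))  # WTI -> "light, sweet" (premium benchmark)
    print(classify_crude(16.0, 2.45))  # Venezuela's Merey blend -> "heavy, sour"
    ```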

    Below is a chart showing Oil Benchmarks Around the World. It maps major oil benchmarks by API gravity and sulfur content, highlighting how far Venezuela and Canada sit from the lighter, sweeter crudes that anchor pricing.

    via visualcapitalist

    This chart highlights an important reason why the Middle East still has such dominance in the industry. For contrast, Saudi Arabia, with half Venezuela’s reserves, produces 12x more oil daily.

    Venezuela’s Production Collapse

    Venezuela is unique among producers, boasting over 300 billion barrels in proven reserves and a reserves-to-production ratio of more than 800 years. It’s the highest in the world by a large margin.

    That 800-year figure is a mathematical ratio, not a forecast. It ignores the politics, capital constraints, and shifting demand that will determine whether this oil ever reaches the market.
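
    The arithmetic behind that ratio is simple, and running it yourself makes clear that it is a snapshot, not a forecast:

    ```python
    # Back-of-the-envelope reserves-to-production (R/P) ratio for Venezuela.
    reserves_barrels = 300e9      # ~300 billion barrels of proven reserves
    production_per_day = 1e6      # ~1 million barrels per day of current output

    annual_production = production_per_day * 365
    rp_ratio_years = reserves_barrels / annual_production
    print(f"R/P ratio: ~{rp_ratio_years:.0f} years")  # ~822 years at today's pace
    ```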

    Put differently, a sky-high reserves‑to‑production number can signal untapped potential or reflect deep structural constraints that paralyze monetization.

    In the 1970s, Venezuela’s oil production reached approximately 3.5 million barrels daily, accounting for over 7% of the world’s oil output. Since then, production has fallen drastically due to underinvestment, deteriorating infrastructure, and geopolitical factors such as sanctions. Currently, Venezuela produces approximately 1 million barrels per day, which is roughly 1% of the global supply.

    Who Can Actually Produce

    Venezuela’s predicament is a lesson in the difference between resource endowment and resource power.

    For investors and operators, the real signal isn’t who has the most reserves, but who can turn underground barrels into reliable cash flows at competitive costs.

    Here is a chart showing the Oil Production & Reserves of the Top 25 Producers.

    via visualcapitalist

    The United States leads the list of global oil producers, pumping more than 20 million barrels per day. It also has refining infrastructure geared toward heavier crude.

    With its heavy-crude infrastructure and capital depth, the U.S. may play an outsized role in shaping how Venezuelan reserves are monetized in the years ahead.

    The Bigger Picture

    All of this is happening against the backdrop of an uneven energy transition: EV adoption, non-OPEC supply growth, and shifting alliances are redefining which barrels matter.

    Venezuela’s position serves as a reminder that, in a world gradually decarbonizing, we remain heavily reliant on oil. As a result, not all crude – or all producers – will be valued equally.

    In an era of shifting energy demand, these contrasts underscore how resource endowment and production capacity can tell very different stories, and why future energy security and market dynamics will depend not just on what lies beneath the ground, but on who has the ability (and political will) to bring it to market.

  • The End of an Era: Recognizing Warren Buffett’s Immutable Legacy

    With his final annual letter to Berkshire Hathaway investors, Warren Buffett has effectively written the last chapter of a six‑decade investing saga.

    Berkshire’s leadership is passing to Greg Abel as Buffett steps back at the remarkably young‑at‑heart age of 95.

    Abel inherits not just a portfolio, but a philosophy of disciplined capital allocation, conservative balance sheets, and a relentless focus on intrinsic value. The real question for investors is not whether Abel can be another Buffett, but whether Buffett’s playbook can outlast the man who wrote it.

    Buffett’s edge lasted across various eras because his focus was not on speed or exotic tools, but on patience, clarity, and a refusal to mistake volatility for risk. That mindset is still available to anyone willing to slow down and think in decades instead of days.

    Buffett’s Unmatched Track Record

    Buffett’s tenure produced extraordinary returns: roughly 6,000,000% total appreciation for Berkshire Hathaway’s Class A shares from 1965 through the end of 2025. That works out to a compounded annual gain of roughly 19–20% for Berkshire versus about 10% for the S&P 500 — almost double the market’s annual return, sustained over six decades.
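
    If you want to sanity-check how a roughly 20% annual gain becomes a number like 6,000,000%, the compounding math (using the approximate figures above) is straightforward:

    ```python
    # How a ~20% annual gain compounds into millions of percent over six decades.
    annual_gain = 0.199          # ~19.9% compounded annual return
    years = 61                   # 1965 through 2025

    total_multiple = (1 + annual_gain) ** years
    total_return_pct = (total_multiple - 1) * 100
    print(f"{total_multiple:,.0f}x  (~{total_return_pct:,.0f}% total appreciation)")
    # Roughly 64,000x (on the order of 6,000,000%), versus about 330x at 10% a year.
    ```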

    Those numbers are hard to imagine (and even harder to replicate), which is why the mindset behind them matters more than the math.

    At a time when AI, algorithms, and noise dominate markets, Buffett’s true legacy isn’t his returns; it’s a playbook for thinking about risk, volatility, and human potential in an age of AI and uncertainty. 

    To understand how that philosophy shows up in practice, look at Berkshire’s positioning in 2025.

    Berkshire Hathaway’s 2025

    2025 drove home just how conservative Berkshire remains — and how consistently that conservatism has paid off.

    The company built over $350B in cash reserves, sold significant amounts of its Apple stock, increased its ownership in Japanese trading firms, and maintained its financial strength amid volatile market shifts.

    They held on to many of their core holdings (such as Coca-Cola and American Express) and still saw portfolio value growth despite the move toward cash. They’re one of the few businesses I can say I’m not surprised beat the market (again).

    Those decisions reflect themes Buffett underscored in his final annual letter.

    Lessons From His Final Letter

    “Greatness does not come about through accumulating great amounts of money, great amounts of publicity or great power in government. When you help someone in any of thousands of ways, you help the world. Kindness is costless but also priceless. Whether you are religious or not, it’s hard to beat The Golden Rule as a guide to behavior.”

    It’s inspiring when a successful leader focuses on making things better for others, rather than simply winning. Perhaps that’s actually a healthy redefinition of what “winning” means.

    Readers of past letters will recognize familiar themes, now paired with a more reflective look back at an incredible career.

    In many ways, it reads as a love letter not only to America but also to humanity.

    He comes off as humble and down-to-earth … yet also proud of his achievements.

    Key takeaways?

    • Take a long-term perspective … stock price volatility (even large drops) is a normal and expected part of markets and should not derail long-term investing.
    • Acknowledge the role of luck … even when you’re as disciplined and effective as Buffett, luck always plays a role.
    • Don’t beat yourself up over mistakes … acknowledge them, learn from them, and do better.

    Berkshire’s 2025 decisions are simply the latest expression of habits Buffett has honed over a lifetime.

    A Look Back At Buffett’s Career

    Warren Buffett is a legend for many reasons. Foremost among them might be that he’s one of the few investors who clearly has an edge … and has for a long time. 

    Buffett didn’t chase lottery tickets; he stacked small, repeatable wins and let compounding do the heavy lifting. There’s power in that. He also noted that as stock trading has become more accessible, daily buying and selling has become easier – but also more erratic. That, unfortunately, benefits the “house” more than individuals.

    While most people label Buffett an investor, his story makes even more sense if you think of him as a scrappy entrepreneur.

    At the age of six, he started selling gum door-to-door.

    He made his first million at age 30 (in 1960). For context, a million dollars in 1960 would be worth about $10.4 million today.

    Buffett has always been honest about his bread-and-butter “trick”…  he buys quality companies at a discount and holds on to them.

    Sixty‑five years later, it is striking how dramatically the world has changed — and how little Buffett’s core playbook needed to.

    The Lesson Behind The Lesson

    Seeing Warren as an entrepreneur, rather than just as an investor, turns his ideas into axioms for life and business, not just trading.

    “Money will always flow toward opportunity, and there is an abundance of that in America. Commentators today often talk of “great uncertainty.” … No matter how serene today may be, tomorrow is always uncertain.

    Don’t let that reality spook you. Throughout my lifetime, politicians and pundits have constantly moaned about terrifying problems facing America. Yet our citizens now live an astonishing six times better than when I was born. The prophets of doom have overlooked the all-important factor that is certain: Human potential is far from exhausted, and the American system for unleashing that potential – a system that has worked wonders for over two centuries despite frequent interruptions for recessions and even a Civil War – remains alive and effective.

    We are not natively smarter than we were when our country was founded nor do we work harder. But look around you and see a world beyond the dreams of any colonial citizen. Now, as in 1776, 1861, 1932 and 1941, America’s best days lie ahead.”

    This excerpt from his 2011 letter doesn’t just speak to America’s longevity; it speaks to our own capacity to keep reinventing ourselves.

    Few forces hold people back more than an outsized fear of failure.

    Fear, uncertainty, and greed are hallmarks of every year. The world will continuously cycle through ebbs and flows, but the long arc still bends toward greater possibility and greener pastures. 

    What This Means For Us

    Not every investor can (or should) copy Buffett, but everyone can borrow his mindset around patience, risk, and human potential.

    If you let yourself be persistently frightened into believing that the world is doomed, you’ll never take the risks that could change your life for the better. Worse still, if you never experience failure, you’ll never learn to get back up, brush yourself off, and grow stronger for future success.

    The game is not about the next year or even three; it is about a lifetime, and the generations that follow. 

    Buffett’s run may be ending, but the forces he trusted — human ingenuity, compounding, and long‑term thinking — matter more than ever.

    In an AI‑driven world, the edge won’t belong to whoever has the most models; it will belong to those who stay patient, take intelligent risks, and keep betting on human potential — starting with their own.

    Let’s continue to make our tomorrows bigger and better than our today. 

    Onwards!