Market Commentary

  • The Law (And Flaw) Of Averages

    The law of averages is a principle that supposes most future events are likely to balance any past deviation from a presumed average.

    Take, for example, flipping a coin.  If you happen to get 5 "Heads" in a row, you'd most likely assume the next one should be "Tails" … even though each flip has a 50/50 chance of landing on either. 

    Even from this example, you can tell it's a flawed law.  While there are some reasonable mathematical uses of the law of averages, in everyday life, this "law" mostly represents wishful thinking. 


It's also one of the most common fallacies that gamblers and traders fall for.

    The concept of "Average" is more confusing and potentially damaging than you might suspect.

Perhaps you've heard the story about how the U.S. Air Force discovered the flaw of averages. It designed cockpits around the average height, width, arm length, and other measurements of over 4,000 pilots.  Despite engineering the cockpit to those precise specifications, pilots kept crashing their planes at an alarming rate.

    The reason?  With hindsight, they learned that very few of those 4,000 pilots were actually "average".  Ultimately, the Air Force re-engineered the cockpit and fixed the problem. 

It's a good reminder that 'facts' can lie, and assumptions and interpretations are dangerous.  It's why I prefer taking decisive action on something known, rather than tentative action on something guessed.

    via ReasonTV

    Our Brains and the Illusion of Balance

    Our brains are wired to find patterns, even in random events.  This tendency, known as apophenia, can lead us to see connections where none exist.

    The Misleading Law of Averages

    It's this very tendency that fuels the misconception of the law of averages.  We expect randomness to "even out" because we see patterns in short sequences.  This can be tempting to believe, especially when dealing with chance events.

    The law of averages is a common idea that suggests future events will even out past results to reach some average outcome.  For instance, going back to our earlier coin-flipping example,  after getting five heads in a row, it's natural to assume the next flip is "due" to be tails.  However, that's not how probability works.  Each coin flip is an independent event (with a 50% chance of landing on heads or tails), regardless of previous flips.  The coin doesn't "remember" what happened before.
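To see why the coin has no memory, here's a minimal simulation (plain Python, purely illustrative): it estimates the chance of heads immediately after five heads in a row, which stays right around 50%.

```python
import random

def prob_heads_after_streak(n_flips=1_000_000, streak=5, seed=42):
    """Estimate P(heads | previous `streak` flips were all heads) for a fair coin."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(n_flips)]  # True = heads
    streak_count = 0   # how many times we saw `streak` heads in a row
    heads_after = 0    # how many of those streaks were followed by another heads
    for i in range(streak, n_flips):
        if all(flips[i - streak:i]):
            streak_count += 1
            heads_after += flips[i]
    return heads_after / streak_count

print(f"P(heads | 5 heads in a row) ≈ {prob_heads_after_streak():.3f}")  # ≈ 0.500 – no flip is ever "due"
```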

    Apophenia isn't limited to coin flips.  For instance, you might see your lucky number appearing repeatedly throughout the day, leading you to believe it has a special meaning – even though each instance is completely independent.

    This natural desire for order and predictability can lead us astray when dealing with chance events.

    Why is it Flawed?

    The law of averages often leads to a misconception called the gambler's fallacy.  This fallacy is the belief that random events can somehow "correct" themselves to reach an average.  In reality, every coin flip, roll of the dice, or spin of the roulette wheel is a fresh start with its own discrete probabilities.  The odds remain the same no matter how long the losing streak persists.

    Are there ever times when it applies?

It's important to distinguish the law of averages from the law of large numbers, a well-established statistical principle.  The law of large numbers states that as the number of random events increases, the average outcome gets closer to the expected value.  Individual events remain independent, but the law describes the behavior of the average over a large number of trials.  For instance, the average weight of a large sample of apples will likely be close to the expected average weight of an apple, even if some individual apples are heavier or lighter than expected.
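For contrast, here's an equally small sketch of the law of large numbers in action: the running average of fair die rolls drifts toward the expected value of 3.5 as the number of rolls grows, even though no individual roll is influenced by the ones before it.

```python
import random

def running_average_of_die_rolls(n_rolls=100_000, seed=7):
    """Law of large numbers: the running average of fair die rolls converges toward 3.5."""
    rng = random.Random(seed)
    total = 0
    checkpoints = {10, 100, 1_000, 10_000, 100_000}
    for i in range(1, n_rolls + 1):
        total += rng.randint(1, 6)
        if i in checkpoints:
            print(f"after {i:>7,} rolls: average = {total / i:.3f}")

running_average_of_die_rolls()
# The average settles near 3.5 over many trials; the law says nothing about the next single roll.
```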

    However, in everyday situations (with a limited number of events), the law of averages is generally not a helpful way to think about chance or probabilities.

    Understanding these misconceptions can help us make better decisions and avoid false expectations based on flawed reasoning.

    Psychological Reasons Behind the Belief

    Human decision-making suffers from a range of tendencies and biases.

Earlier, we discussed the tendency to find patterns, even where none exist.  Next, consider cognitive biases.  In our coin-flipping example, it is the representativeness heuristic that makes us assume that small samples should resemble the larger population they come from.

    Emotional factors also play a role.  The desire for control in uncertain situations can make us latch onto the law of averages as a comforting notion.  Believing that things will "even out" gives us a sense of predictability and fairness in an otherwise random world.

    Additionally, social influences can reinforce these beliefs.  Stories and anecdotes about streaks ending or luck changing often circulate among friends and family, further embedding the misconception into our collective consciousness.

    Understanding these psychological reasons helps explain why the law of averages persists despite its flaws.  Recognizing these biases can empower us to think more critically about probability and chance events.

    Improving Decision-Making in Gambling and Investing

    Recognizing the fallacy of the law of averages can significantly enhance decision-making, particularly in gambling and investing.  Understanding that each event is independent can help participants make more rational choices.  Instead of chasing losses with the hope that a win is "due," savvy speculators understand their odds remain constant and may choose to walk away or set strict limits on their betting behavior.

    In investing, this knowledge is equally crucial.  Many factors influence markets.  Nonetheless, believing that a stock "must" rebound after a series of declines too often leads to poor investment decisions.  Investors who grasp that past performance does not dictate future results are better equipped to evaluate investments based on fundamentals rather than emotions or flawed expectations.

    By dispelling these misconceptions, you can approach gambling or investing with a clearer mindset, reducing the risk of substantial losses driven by erroneous beliefs about probability and chance.

You can also eliminate fear, greed, and discretionary mistakes by relying on algorithms to calculate real-time expectancy scores and take the road less stupid.  Take a different kind of chance.  Just ask our AI Overlords; they'll tell you what to expect!
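I won't describe our actual models here, but the underlying idea of an expectancy score is textbook math: weigh how often you win against how much you win and lose on average.  A minimal sketch, with made-up numbers purely for illustration:

```python
def expectancy(win_rate: float, avg_win: float, avg_loss: float) -> float:
    """Textbook trade expectancy: the average amount won (or lost) per trade.

    win_rate: fraction of trades that win (0..1)
    avg_win:  average profit on winning trades (a positive number)
    avg_loss: average loss on losing trades (a positive number)
    """
    return win_rate * avg_win - (1 - win_rate) * avg_loss

# A 40% win rate can still be positive expectancy if the winners are big enough...
print(expectancy(win_rate=0.40, avg_win=300, avg_loss=100))   #  60.0 per trade
# ...and a 70% win rate can still lose money if the losers dwarf the winners.
print(expectancy(win_rate=0.70, avg_win=100, avg_loss=400))   # -50.0 per trade
```

The point isn't the formula itself; it's that a number like this replaces the "I'm due for a win" feeling with something you can actually measure.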

  • Correlation Between Market Crashes & Oreos?!

During the Robinhood & GameStop debacle in 2021, I wrote an article about r/WallStreetBets where I essentially said that most of the retail investors who frequent the site don’t know what they’re doing … Occasionally, however, there are posts that present the type of solid research or insights you might see from a respected Wall Street firm.

With GameStop and AMC both surging recently, I thought this was a topic worth revisiting.

    As an example of good research done by the subreddit, here’s a link to a post where a user (nobjos) analyzed 66,000+ buy and sell recommendations by financial analysts over the last 10 years to see if they had an edge.  Spoiler: maybe, but only if you have sufficient AUM to justify the investment in their research. 

Some posts demonstrate a clear misunderstanding of markets, and the subreddit certainly contains more jokes than quality posts.  Nevertheless, I saw a great example of a post that playfully illustrates why correlation does not equal causation.

    I’ve posted about the Super Bowl Indicator and the Big Mac Index in the past, but what about Oreos?  Read what’s next for mouth-watering market insights.

    The increasingly-depraved debuts of Oreos with more stuffing indicate unstable amounts of greed and leverage in the system, serving as an immediate indicator that the makings of a market crash are in place. Conversely, when the Oreo team reduces the amount of icing in their treats, markets tend to have great bull runs until once again society demands to push the boundaries of how much stuffing is possible.

    1974: Double Stuf Oreo released. Dow Jones crashes 45%. FTSE drops 73%.

    1987: Big Stuf Oreo released. Black Monday, a 20% single-day crash and a following bear market.

    1991: Mini Oreo introduced. Smaller icing ratios coincide with the 1991 Japanese asset price bubble, confirming the correlation works both ways and a reduction of Oreo icing may be a potential solution to preventing a future crash.

    2011: Triple Double Oreo introduced. S&P drops 21% in a 5-month bear market

    2015: Oreo Thins introduced. A complete lack of icing causes an unprecedented bull run in the S&P for years

    2019: The Most Stuf Oreo briefly introduced. Pulled off the shelf before any major market damage could occur.

    2021: The Most Stuf Oreo reintroduced. Market response: ???

     - LehmanParty via Reddit

    It’s surprisingly good due diligence, but it’s also clearly just meant to be funny.  It resonates because we crave order and look for signs that make markets seem a little bit more predictable.


    The problem with randomness is that it often appears meaningful. 

    Many people on Wall Street have ideas about how to guess what will happen with the stock market or the economy.  Unfortunately, they often confuse correlation with causation.  At least with the Oreo Indicator, we know that the idea was supposed to be thought-provoking (but silly) rather than investment advice to be taken seriously.
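If you want to see how easily "meaningful-looking" relationships fall out of pure noise, here's a small sketch (standard library only; statistics.correlation requires Python 3.10+).  It generates a random "market" series and a thousand random "indicators," then reports the best correlation it can find – which usually looks impressive despite every series being random by construction.

```python
import random
import statistics

rng = random.Random(0)

# A "market" series of 24 monthly returns that is pure noise.
market = [rng.gauss(0, 1) for _ in range(24)]

# 1,000 candidate "indicators" that are also pure noise (Oreo releases, lucky numbers, you name it).
indicators = [[rng.gauss(0, 1) for _ in range(24)] for _ in range(1000)]

best = max(indicators, key=lambda series: abs(statistics.correlation(series, market)))
best_r = statistics.correlation(best, market)
print(f"best |correlation| found in pure noise: {abs(best_r):.2f}")
# Typically prints something in the 0.6-0.7 range – search enough random series and
# one of them will always "predict" the market, right up until it doesn't.
```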

    More people than you would hope or guess attempt to forecast the market based on gut, ancient wisdom, and prayers.

    While hope and prayer are good things … they aren’t reliably good trading strategies.

    Consider this a reminder that even if you do the work, you’ll likely get a bad answer if you use the wrong inputs. 

    Garbage in, garbage out. 

    Onwards!

  • Making News Beautiful Again

My mother watches the news religiously.  To her credit, she watches a variety of sources and creates her own takeaways based on them.  Regardless, there's a common theme in all the sources she watches – they focus on fear- or shock-inducing stories with a negative bias.  As you might guess, I hear it when I talk with her.

    While I value being informed, I also value things that nourish or make you stronger (as opposed to things that make you weak or less hopeful).

    Negativity Sells. 

    Sure, news sources throw in the occasional feel-good story as a pattern interrupt … but their focus skews negative.  History shows that stories about improvement or the things that work simply don't grab eyeballs, attention, or ratings as consistently as negativity-focused stories do.

    The reality is that negativity sells.  If everything were great all the time, people wouldn't need to buy as many products, they wouldn't need to watch the news, and this cycle wouldn't continue.

    It's worth acknowledging and understanding the perils our society is facing, but it's also worth focusing on the ways humanity is expanding and improving.

    As a brief respite from the seemingly unending stream of doom and gloom, Information Is Beautiful has a section focused on "Beautiful News".  It's a collection of visualizations highlighting positive trends, uplifting statistics, and creative solutions.  It's updated daily and can be sorted by topic.  I suggest you check it out.

     


    Beautiful News via Information Is Beautiful

If you're looking for more "good news," here's a list of 10 sources focusing on good news.

    Let me know if you have a site you'd like to share.

    Have a great week!

  • Some Timeless Wisdom From Socrates

    Small distinctions separate wise men from fools … Perhaps most important among them is what the wise man deems consequential. 

    This post discusses Socrates' Triple Filter Test, which involves checking information for truth, goodness, and usefulness.  It also explores how this concept applies to decision-making in business and life by focusing on important information and filtering out the rest.  The key to making better choices and staying focused is to avoid damaging or irrelevant information.

    Socrates' Triple Filter

    In ancient Greece, Socrates was reputed to hold knowledge in high esteem.  One day an acquaintance met the great philosopher and said, "Do you know what I just heard about your friend?"

    "Hold on a minute," Socrates replied. "Before telling me anything, I'd like you to pass a little test. It's called the Triple Filter Test."

    "Triple filter?"

    "That's right," Socrates continued.  "Before you talk to me about my friend, it might be a good idea to take a moment and filter what you're going to say. That's why I call it the triple filter test.

    The first filter is Truth.  Have you made absolutely sure that what you are about to tell me is true?"

    "No," the man said, "Actually I just heard about it and…"

    "All right," said Socrates. "So you don't really know if it's true or not. Now let's try the second filter, the filter of Goodness.  Is what you are about to tell me about my friend something good?"

    "No, on the contrary…"

    "So," Socrates continued, "You want to tell me something bad about him, but you're not certain it's true.  You may still pass the test though, because there's one filter left.  The third filter is Usefulness.  Is what you want to tell me about my friend going to be useful to me?"

    "No, not really."

    "Well," concluded Socrates, "If what you want to tell me is neither true, nor good, nor even useful … then why tell it to me at all?"

With all the divisiveness in the media and in our everyday conversations with friends, family, and strangers … this is a good filter for what you say, what you post, and even how you evaluate markets, the economy, or a business opportunity.

    How Does That Apply to Me or Trading?

    The concept of Socrates' Triple Filter applies to markets as well.

When I was a technical trader, rather than looking at fundamental data and scouring the news daily, I focused on developing dynamic and adaptive systems and processes that scanned our universe of trading algorithms to identify which were in phase and likely to perform well in the current market environment.

    That focus has become more concentrated as we've transitioned to using advanced mathematics and AI to understand markets. 

    Filter Out What Isn't Good For You.

    In contrast, there are too many ways that the media (meaning the techniques, graphics, music, etc.), the people reporting it, and even the news itself appeal to the fear and greed of human nature.

    Likewise, I don't watch the news on TV anymore.  It seems like story after story is about terrible things.  For example, during a recent visit with my mother, I listened to her watch the news.  There was a constant stream of "oh no," or "oh my," and "that's terrible".  You don't even have to watch the news to know what it says.

These concepts also apply to what you feed your algorithms.  Garbage in, garbage out.  Just because you can plug in more data doesn't mean that data will add value.  Deciding what "not to do" and "what not to listen to" is just as important as deciding what to do.

    Artificial intelligence is exciting, but artificial stupidity is terrifying. 

    What's The Purpose of News for You?

    My purpose changes what I'm looking for and how much attention I pay to different types of information.  Am I reading or watching the news for entertainment, to learn something new, or to find something relevant and actionable?

     

"To move the world, we must first move ourselves." – Socrates

     

One of my favorite activities is looking for new insights and interesting articles to share with you and my team.  If you aren't getting my weekly reading list on Fridays – you're missing out.  You can sign up here.

    By the way, I recently found a site, Ground News, that makes it easy to compare news sources, read between the lines of media bias, and break free from the blinders the algorithms put on what we see.  I'd love to hear about tools or sites you think are worth sharing.

    Getting back to Socrates' three filters and business, I often ask myself: is it important, does it affect our edge, or can I use it as a catalyst for getting what we want?

    There's a lot of noise out there competing for your attention.  Stay focused. 

    Onwards!

  • Nvidia In Perspective

In June of last year, Nvidia passed a trillion-dollar market capitalization.

Here’s where it stands a year later:

via Visual Capitalist

    Did you know that Nvidia is now the third most valuable company in the world?  It sits behind only Microsoft and Apple (though it’s nearing Apple). 

    These figures are even more impressive when you consider that at the beginning of 2020, Nvidia was valued at $145 billion.

Nvidia’s growth was built largely on the back of AI hype.  Its chips have been a mainstay of AI and data science technologies, benefiting a litany of AI projects, gaming systems, crypto mining, and more.  It has successfully evolved from a product company into a platform company.

    Do you think it’s going to continue to grow?  I do.

We’ve talked about hype cycles … nevertheless, Nvidia’s offerings underpin the kind of technology that will continue to drive future progress.  So, while we’re seeing some disillusionment toward AI, it may not affect Nvidia as intensely.

This week, I saw an article in the WSJ titled “The AI Revolution Is Already Losing Steam,” claiming that the pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant.

    This is ridiculous!  We are at the beginning of something growing exponentially.  It’s hard for most people to recognize the blind spot consisting of things they can’t conceive of … and what’s coming is hard to conceive, let alone believe is possible!

  • On The Horizon: Artificial Intelligence Agents

    In last week's article on Stanford's AI Index, we broadly covered many subjects. 

There's one I felt like covering in more depth: the concept of AI Agents.

One way to improve AI is to create agentic AI systems capable of autonomous operation in specific environments.  However, agentic AI has long challenged computer scientists, and the technology is only just now starting to show promise.  Current agents can play complex games, like Minecraft, and are getting much better at tackling real-world tasks like research assistance and retail shopping.

    A common discussion point is the future of work.  The concept deals with how automation and AI will redefine the workforce, the workday, and even what we consider to be work. 

Up until now, AI has been used in very narrow applications – powerful applications, but with limited scope.  Generative AI and LLMs have increased the variety of tasks we can use AI for, but that's only the beginning.


    via Aniket Hingane

    AI agents represent a massive step toward intelligent, autonomous, and multi-modal systems working alongside skilled humans (and replacing unskilled workers) in a wide variety of scenarios. 

    Eventually, these agents will be able to understand, learn, and solve problems without human intervention.  There are a few critical improvements necessary to make that possible. 

    • Flexible goal-oriented behavior
    • Persistent memory & state tracking
    • Knowledge transfer & generalization
    • Interaction with real-world environments

As models become more flexible in understanding and accomplishing their goals and begin to apply that knowledge to new real-world domains, they will go from intelligent-seeming tools to powerful partners able to handle multiple tasks the way a human would.

    While they won't be human (or perhaps even seem human), we are on the verge of a technological shift that is a massive improvement from today's chatbots. 

    I like to think of these agents as the new assembly line.  The assembly line revolutionized the workforce and drove an industrial revolution, and I believe AI agents will do the same.

As technology evolves, improvements in efficiency, effectiveness, and certainty are inevitable.  For example, with a proverbial army of agents creating, refining, and releasing content, it is easy to imagine a process that would take multiple humans a week being completed by agents in under an hour (even with human approval steps).

To make it literal, imagine using agents to write this article.  One agent could specialize in writing outlines and crafting headlines.  Another could focus on research and verifying that research.  Then you have an agent to write, an agent to edit and proofread, and a conductor agent that makes sure the quality is up to snuff and my voice comes through.  If the goal were to make it go viral, there could be a virality agent, an SEO keyword agent, etc.

Separating the activities into multiple agents (instead of trying to craft one vertically integrated agent) reduces the chances of "hallucinations" and self-aggrandizement.  It can also, theoretically, remove the human from the process entirely.

via Aniket Hingane
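To make the division of labor concrete, here's a bare-bones sketch of that kind of pipeline.  The roles, prompts, and the `ask_llm` helper are hypothetical placeholders (not any particular framework, and not my actual setup); the point is the structure: narrow, single-purpose agents chained together, with a conductor checking the result.

```python
def ask_llm(role: str, prompt: str) -> str:
    """Hypothetical placeholder for a call to whatever model/API you actually use."""
    raise NotImplementedError("wire this up to your model of choice")

def outline_agent(topic: str) -> str:
    return ask_llm("outliner", f"Write a headline and section outline for an article about: {topic}")

def research_agent(outline: str) -> str:
    return ask_llm("researcher", f"Gather and verify supporting facts for this outline:\n{outline}")

def writing_agent(outline: str, research: str) -> str:
    return ask_llm("writer", f"Draft the article.\nOutline:\n{outline}\nVerified research:\n{research}")

def editing_agent(draft: str) -> str:
    return ask_llm("editor", f"Edit and proofread this draft:\n{draft}")

def conductor_agent(article: str, voice_sample: str) -> str:
    # The conductor checks quality and voice; in practice it might send work back to earlier agents.
    return ask_llm("conductor",
                   f"Check this article against the voice sample and fix any mismatches.\n"
                   f"Voice sample:\n{voice_sample}\n\nArticle:\n{article}")

def write_article(topic: str, voice_sample: str) -> str:
    outline = outline_agent(topic)
    research = research_agent(outline)
    draft = writing_agent(outline, research)
    edited = editing_agent(draft)
    return conductor_agent(edited, voice_sample)
```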

Now, I enjoy the writing process and am not trying to remove myself from it.  But the capability is there.

As agentification increases, I believe humans will remain a necessary part of the feedback loop.  Soon, we will start to see agent-based companies.  Nonetheless, I still believe that humans will be an important part of the workforce (at least during my lifetime).

Another reason humans matter is that they are still gatekeepers … meaning, humans have to become comfortable with a process before they will allow it.

    Trust and transparency are critical to AI adoption.  Even if AI excels at a task, people are unlikely to use it blindly.  To truly embrace AI, humans need to trust its capabilities and understand how it arrives at its results.  This means AI developers must prioritize building systems that are both effective and understandable.  By fostering a sense of ease and trust, users will be more receptive to the benefits AI or automation offers.

    Said a different way, just because AI can do something doesn't mean that you will use the tool or let AI do it.  It has to be done a "certain" way in order for you to let it get done … and that involves a lot of trust.  As a practical reality, humans don't just have to trust the technology; they also have to trust and understand the process.  That means the person building the AI or creating the automation must consider what it would take for a human to feel comfortable enough to allow the benefit.

    Especially as AI becomes more common (and as an increasingly large amount of content becomes solely created by artificial systems), the human touch will become a differentiator and a way to appear premium. 


    via Aniket Hingane

    In my business, the goal has never been to automate away the high-value, high-touch parts of our work.  I want to build authentic relationships with the people I care about — and AI and automation promise to eliminate frustration and bother to free us up to do just that.

    The goal in your business should be to identify the parts in between those high-touch periods that aren't your unique ability – and find ways to automate and outsource them. 

    Remember, the heart of AI is still human (at least until our AI Overlords tell us otherwise).

    Onwards!

  • A Few Graphs On The State of AI in 2024

Every year, Stanford puts out an AI Index¹ with a massive amount of data attempting to sum up the current state of AI.

    In 2022, it was 196 pages; last year, it was 386; now, it’s over 500 … The report details where research is going and covers current specs, ethics, policy, and more. 

    It is super nerdy … yet, it’s probably worth a skim (or ask one of the new AI services to summarize the key points, put it into an outline, and create a business strategy for your business from the items that are likely to create the best sustainable competitive advantages for you in your industry). 

    For reference, here are my highlights from 2022 and 2023.

AI (as a whole) received less private investment in 2023 than the year before – despite an 8x increase in generative AI funding over the past year.

    Even with less private investment, progress in AI accelerated in 2023.

    We saw the release of new state-of-the-art systems like GPT-4, Gemini, and Claude 3.  These systems are also much more multimodal than previous systems.  They’re fluent in dozens of languages, can process audio and video, and even explain memes. 

    So, while we’re seeing a decrease in the rate at which AI gets investment dollars and new job headcount, we’re starting to see the dam overflow.  The groundwork laid over the past few years is paying dividends.  Here are a few things that caught my eye and might help set some high-level context for you. 

     

    Technological Improvements In AI   

     

    Training Cost By Training Compute

    Number of Machine Learning Models

    via AI Index 2024

    Even since 2022, the capabilities of key models have increased exponentially.  LLMs like GPT-4 and Gemini Ultra are very impressive.  In fact, Gemini Ultra became the first LLM to reach human-level performance on the Massive Multitask Language Understanding (MMLU) benchmark.  However, there’s a direct correlation between the performance of those systems and the cost to train them. 

    The number of new LLMs has doubled in the last year.  Two-thirds of the new LLMs are open-source, but the highest-performing models are closed systems. 

While looking at the pure technical improvements is important, it’s also worth recognizing AI’s increased creativity and applications.  For example, Auto-GPT takes GPT-4 and makes it almost autonomous.  It can perform tasks with very little human intervention, it can self-prompt, and it has internet access as well as long-term and short-term memory management.

    Here is an important distinction to make … We’re not only getting better at creating models, but we’re getting better at using them.  Meanwhile, the models are getting better at improving themselves. 

• Researchers estimate that computer scientists could run out of high-quality language data for LLMs by the end of this year, exhaust low-quality language data within two decades, and use up image data by the late 2030s.  This means we’ll increasingly rely on synthetic data to train AI systems.  Relying on synthetic data can be compelling, but when it makes up the majority of a data set, it can lead to model collapse. 
• With limited large datasets, fine-tuning has grown increasingly popular.  Adding smaller but curated datasets to a model’s training regimen can boost overall model performance while also sharpening the model’s capabilities on specific tasks.  It also allows for more precise control over behavior (a minimal sketch of the mechanics follows this list). 
• Better AI means better data, which means … you guessed it, even better AI.  New tools like SegmentAnything and Skoltech are being used to generate specialized data for AI.  While self-improvement isn’t possible yet without intervention, AI has been improving at an incredible pace.
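As a rough illustration of the fine-tuning bullet above, here's a minimal PyTorch sketch that continues training a "pretrained" model on a small curated dataset.  The `load_pretrained_model` stand-in and the synthetic data are hypothetical; the report doesn't prescribe a recipe, and real fine-tuning would use an actual pretrained language model and task-specific examples.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def load_pretrained_model(n_features: int, n_classes: int) -> nn.Module:
    # Hypothetical stand-in: in practice, load a real pretrained model from your framework of choice.
    return nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_classes))

def fine_tune(model: nn.Module, curated_x: torch.Tensor, curated_y: torch.Tensor,
              epochs: int = 3, lr: float = 1e-4) -> nn.Module:
    """Continue training a pretrained model on a small, curated, task-specific dataset."""
    loader = DataLoader(TensorDataset(curated_x, curated_y), batch_size=8, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)  # small learning rate: nudge, don't overwrite
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model

# Tiny synthetic "curated" dataset, purely for illustration.
x = torch.randn(64, 16)
y = torch.randint(0, 2, (64,))
model = fine_tune(load_pretrained_model(n_features=16, n_classes=2), x, y)
```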

     

    The Proliferation of AI 

    First, let’s look at patent growth.

    Number of AI Patents

    Number of Newly Funded AI companies

    via AI Index 2024

    The adoption of AI and the claims on AI “real estate” are still increasing.  The number of AI patents has skyrocketed.  From 2021 to 2022, AI patent grants worldwide increased sharply by 62.7%.  Since 2010, the number of granted AI patents has increased more than 31 times.

    As AI has improved, it has increasingly forced its way into our lives.  We’re seeing more products, companies, and individual use cases for consumers in the general public. 

    While the number of AI jobs has decreased since 2021, job positions that leverage AI have significantly increased.  

As well, despite the decrease in private investment, massive tranches of money are moving toward key AI-powered endeavors.  For example, InstaDeep was acquired by BioNTech for $680 million to advance AI-powered drug discovery, Cohere raised $270 million to develop an AI ecosystem for enterprise use, Databricks bought MosaicML for $1.3 billion, and Thomson Reuters acquired Casetext – an AI legal assistant.

    Not to mention the investments and attention from companies like Hugging Face, Microsoft, Google, Bloomberg, Adobe, SAP, and Amazon. 

    Ethical AI

Number of AI Incidents

Number of AI Regulations

    via AI Index 2024

Unfortunately, the number of AI misuse incidents is skyrocketing.  And it’s more than just deepfakes: AI can be used for many nefarious purposes that aren’t as visible, on top of intrinsic risks like those of self-driving cars.  A global survey on responsible AI highlights that companies’ top AI-related concerns include privacy, data security, and reliability.

    When you invent the car, you also invent the potential for car crashes … when you ‘invent’ nuclear energy, you create the potential for nuclear weapons. 

    There are other potential negatives as well.  For example, many AI systems (like cryptocurrencies) use vast amounts of energy and produce carbon.  So, the ecological impact has to be taken into account as well.

Luckily, many of today’s best minds are focused on creating bumpers to rein in AI and to deter bad actors.  The number of AI-related regulations has risen significantly, both in the past year and over the last five years.  In 2023, there were 25 AI-related regulations, a stark increase from just one in 2016 – growth of 56.3% over the prior year.  Regulating AI has also become increasingly important in legislative proceedings across the globe, with mentions increasing roughly 10x since 2016.

    Not to mention, US government agencies allocated over $1.8 billion to AI research and development spending in 2023.  Our government has tripled its funding for AI since 2018 and is trying to increase its budget again this year. 

    Conclusion

    Artificial Intelligence is inevitable.  Frankly, it’s already here.  Not only that … it’s growing, and it’s becoming increasingly powerful and impressive to the point that I’m no longer amazed by how amazing it continues to become.

Despite America leading the charge in AI, Americans are also among the least optimistic about the benefits of these products and services.  China, Saudi Arabia, and India rank the highest.  Only 34% of Americans anticipate AI will boost the economy, and 32% believe it will enhance the job market.  Significant demographic differences exist in perceptions of AI’s potential to enhance livelihoods, with younger generations generally more optimistic.

    We’re at an interesting inflection point where fear of repercussions could derail and diminish innovation – slowing down our technological advance. 

Much of this fear is based on emerging models demonstrating new (and potentially unpredictable) capabilities.  Researchers showed that these emergent capabilities mostly appear when non-linear or discontinuous metrics are used … but vanish with linear and continuous metrics.  So far, even with LLMs, intrinsic self-correction has proven very difficult.  When a model is left to decide on self-correction without guidance, performance declines across all benchmarks.

    If we don’t continue to lead the charge, other countries will … you can already see it with China leading the AI patent explosion.

    We need to address the fears and culture around AI in America.  The benefits seem to outweigh the costs – but we have to account for the costs (time, resources, fees, and friction) and attempt to minimize potential risks – because those are real (and growing) as well.

    Pioneers often get arrows in their backs and blood on their shoes.  But they are also the first to reach the new world.

    Luckily, I think momentum is moving in the right direction.  Last year, it was rewarding to see my peers start to use AI apps.  Now, many of them are using AI-inspired vocabulary and thinking seriously about how best to adopt AI into the fabric of their business. 

    We are on the right path.

    Onwards!


¹ Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.  The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.

• How Much Tax Does Your State Pay?

    For most Americans, tax season is over … but I've got one more tax-related chart for you. 

In April, I posted about where our tax dollars go.  This month, let's look at what percentage of an average person's income goes to their state and local taxes.

     


via Visual Capitalist

    New York and Hawaii top the list with 12% and 11.8% respectively. Alaska ends the list with 4.9%, followed by New Hampshire with 5.6%. 

Alaskans don't pay state income tax, but neither do residents of Florida, Nevada, South Dakota, Tennessee, Texas, Washington, or Wyoming.  So, if you're trying to avoid taxes, those states all sound like better bets.

New Hampshire still has a lower overall state tax burden than any of them except Alaska, despite its 4% flat tax on interest and dividend income.

    If you don't like paying taxes (and don't mind the cold), then Alaska might be worth the winters?

    Meanwhile, we hear a lot about the exodus from California, but not from New York or Maine. Maybe it's the people … or maybe it's their Governor?

  • Can We Rewrite History?

    The problem with history is it rarely tells the whole story.

    Ideally, history would be presented objectively, recounting facts without the influence of societal bias, the perspective of the victor, or the storyteller's slant. But achieving this is harder than it seems.

    Think about your daily life – it is filled with many seemingly innocuous judgments about your perception of the economy, what's happening in the markets, who is a hero, who deserves punishment,  and whether an action is "Just" or "Wrong". 

    I'm often surprised by how frequently intelligent people violently disagree on issues that seem clear-cut to them.

    It's like a fish in water not realizing it's in water … Most people don't realize the inherent biases and filters that inform their sense of the world or reality.

    This post is an attempt to highlight the importance of diverse perspectives and information sources in building well-informed viewpoints.

    Even though most people would agree that genuinely understanding history requires a clear picture, free from bias … I think it's apparent that history (as we know it) is subjective. The narrative shifts to support the needs of the society reporting it. 

The Cold War is a great example: during the war, immediately after it, and today, the interpretation of its causes and events has changed.

    But while that's one example, to a certain degree, we can see it everywhere. We can even see it in the way events are reported today. News stations color the story based on whether they're red or blue, and the internet is quick to jump on a bandwagon even if the information is hearsay. 

    Now, what happens when you can literally rewrite history?

“Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped.” – Orwell, 1984

    That's one of the potential risks of deepfake technology. As it gets better, creating "supporting evidence" becomes easier for whatever narrative a government or other entity is trying to make real.

    On July 20th, 1969, Neil Armstrong and Buzz Aldrin landed safely on the moon. They then returned to Earth safely as well. 

    MIT recently created a deepfake of a speech Nixon's speechwriter William Safire wrote during the Apollo 11 mission in case of disaster. The whole video is worth watching, but the speech starts around 4:20. 

    MIT via In Event Of Moon Disaster

    Can you imagine the real-world ripples that would have occurred if the astronauts died on that journey (or if people genuinely believed they did)? Here is a quote from the press response the Nixon-era government prepared in case of that disaster.

    "Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace." – Nixon's Apollo 11 Disaster Speech

Today, alternative histories are becoming some people's realities.  Why?  Because media disinformation is more widespread – and more dangerous – than ever.

    Alternative history can only be called that when it's discernible from the truth, and unfortunately, we're prone to look for information that already fits our biases. 

Today, we also have to increasingly consider the impacts of technology.  Deepfakes are becoming more commonplace – with popstar Drake even using AI in a recent record.  Now, that was apparent – but scarily, research shows that most people can't tell a deepfake from reality (even if they think they can).

    As deepfakes get better, we'll also get better at detecting them, but it's a cat-and-mouse game with no end in sight.

In signalling theory, this is the idea that signallers evolve to become better at manipulating receivers, while receivers evolve to become more resistant to manipulation.  We're seeing the same dynamic in trading with algorithms.

In 1983, Stanislav Petrov saved the world.  Petrov was the duty officer at the command center for a Soviet nuclear early-warning system when the system reported that a missile had been launched from the U.S., followed by up to five more.  Petrov judged the reports to be a false alarm and didn't authorize retaliation (averting a potential nuclear WWIII in which countless people would have died).

    But messaging is now getting more convincing.  It's harder to tell real from fake.  What happens when a world leader has a convincing enough deepfake with a convincing enough threat to another country?  Will people have the wherewithal to double-check? What about when they're buffeted by these messages constantly and from every direction?

    As we increasingly use AI for writing and editing, there is a growing risk of subtle changes being made to messages and communications. This widespread opportunity to manipulate information amplifies the capacity and potential for people to use these technologies to influence people's perceptions. As a result, we must be increasingly cautious about how the data we rely on may be altered, which could ultimately affect our perceptions and decisions.

    Despite the risks, I'm excited about the promise and the possibilities of technology. But, as always, in search of the good (or better), we have to acknowledge and be prepared for the bad.

  • Talking Trading With Matthew Piepenburg

In 2020, I had a Zoom meeting with Matthew Piepenburg of Signals Matter.  Even though it was a private conversation, there was so much value in it that we decided to share parts online.

    Four years later, I still think it's a great watch. 

    While Matt evaluates markets based on Macro/Value investing, I'm much more interested in advanced AI and quantitative methods. 

    As you might expect, there are a lot of differences in how we view the world, decision-making, and the market.  Nonetheless, we share a lot of common beliefs as well.   

Our talk explores several interesting areas and concepts.  I encourage you to watch it below.
     

     
Even though this video is four years old, the lessons remain true – markets are not the economy, and normal market dynamics have been out the window for a long time.  In addition, part of why you're seeing increased volatility and noise is that there are so many interventions and artificial inputs in our market system.

    While Matt and I may approach the world with very different lenses, we both believe in "timeless wisdom". 

Ask yourself: what was true yesterday, is true today, and will stay true tomorrow?

    That is part of the reason we focus on emerging technologies and constant innovation … they remain relevant. 

    Something we can both agree on is that if you don't know what your edge is … you don't have one. 

via Gapingvoid

    Hope you enjoyed the video.

    Let me know what other topics you'd like to hear more about. 

    Onwards!