Correlation Between Market Crashes & Oreos?!
During the Robinhood & GameStop debacle in 2021, I wrote an article about r/WallStreetBets where I essentially said that most of the retail investors who frequent the site don’t know what they’re doing ... Occasionally, however, there are posts that present the type of solid research or insights you might see from a respected Wall Street firm. With GameStop and AMC both surging recently, I thought this was a topic worth revisiting.
As an example of good research done by the subreddit, here’s a link to a post where a user (nobjos) analyzed 66,000+ buy and sell recommendations by financial analysts over the last 10 years to see if they had an edge. Spoiler: maybe, but only if you have sufficient AUM to justify the investment in their research.
Some posts demonstrate a clear misunderstanding of markets, and the subreddit certainly contains more jokes than quality posts. Nevertheless, I saw a great example of a post that playfully demonstrates why correlation does not equal causation.
I’ve posted about the Super Bowl Indicator and the Big Mac Index in the past, but what about Oreos? Read what’s next for mouth-watering market insights.
The increasingly depraved debuts of Oreos with more stuffing indicate unstable amounts of greed and leverage in the system, serving as an immediate indicator that the makings of a market crash are in place. Conversely, when the Oreo team reduces the amount of icing in their treats, markets tend to have great bull runs until once again society demands to push the boundaries of how much stuffing is possible.
1987: Big Stuf Oreo released. Black Monday, a 20% single-day crash and a following bear market.
1991: Mini Oreo introduced. Smaller icing ratios coincide with the 1991 Japanese asset price bubble, confirming the correlation works both ways and a reduction of Oreo icing may be a potential solution to preventing a future crash.
2011: Triple Double Oreo introduced. S&P drops 21% in a 5-month bear market.
2015: Oreo Thins introduced. A complete lack of icing causes an unprecedented bull run in the S&P for years.
2019: The Most Stuf Oreo briefly introduced. Pulled off the shelf before any major market damage could occur.
2021: The Most Stuf Oreo reintroduced. Market response: ???
It’s surprisingly good due diligence, but it’s also clearly just meant to be funny. It resonates because we crave order and look for signs that make markets seem a little bit more predictable.
The problem with randomness is that it often appears meaningful.
Many people on Wall Street have ideas about how to guess what will happen with the stock market or the economy. Unfortunately, they often confuse correlation with causation. At least with the Oreo Indicator, we know that the idea was supposed to be thought-provoking (but silly) rather than investment advice to be taken seriously.
More people than you would hope or guess attempt to forecast the market based on gut, ancient wisdom, and prayers.
While hope and prayer are good things ... they aren’t reliably good trading strategies.
Consider this a reminder that even if you do the work, you’ll likely get a bad answer if you use the wrong inputs.
My mother watches the news religiously. To her credit, she watches a variety of sources and creates her own takeaways from them. Regardless, there's a common theme across all the sources she watches – they focus on fear- or shock-inducing stories with a negative bias. As you might guess, I hear it when I talk with her.
While I value being informed, I also value things that nourish us and make us stronger (as opposed to things that make us weaker or less hopeful).
Negativity Sells.
Sure, news sources throw in the occasional feel-good story as a pattern interrupt ... but their focus skews negative. History shows that stories about improvement or the things that work simply don't grab eyeballs, attention, or ratings as consistently as negativity-focused stories do.
The reality is that negativity sells. If everything were great all the time, people wouldn't need to buy as many products, they wouldn't need to watch the news, and this cycle wouldn't continue.
It's worth acknowledging and understanding the perils our society is facing, but it's also worth focusing on the ways humanity is expanding and improving.
As a brief respite from the seemingly unending stream of doom and gloom, Information Is Beautiful has a section focused on "Beautiful News". It's a collection of visualizations highlighting positive trends, uplifting statistics, and creative solutions. It's updated daily and can be sorted by topic. I suggest you check it out.
Small distinctions separate wise men from fools ... Perhaps most important among them is what the wise man deems consequential.
This post discusses Socrates' Triple Filter Test, which involves checking information for truth, goodness, and usefulness. It also explores how this concept applies to decision-making in business and life by focusing on important information and filtering out the rest. The key to making better choices and staying focused is to avoid damaging or irrelevant information.
Socrates' Triple Filter
In ancient Greece, Socrates was reputed to hold knowledge in high esteem. One day an acquaintance met the great philosopher and said, "Do you know what I just heard about your friend?"
"Hold on a minute," Socrates replied. "Before telling me anything, I'd like you to pass a little test. It's called the Triple Filter Test."
"Triple filter?"
"That's right," Socrates continued. "Before you talk to me about my friend, it might be a good idea to take a moment and filter what you're going to say. That's why I call it the triple filter test.
The first filter is Truth. Have you made absolutely sure that what you are about to tell me is true?"
"No," the man said, "Actually I just heard about it and…"
"All right," said Socrates. "So you don't really know if it's true or not. Now let's try the second filter, the filter of Goodness. Is what you are about to tell me about my friend something good?"
"No, on the contrary…"
"So," Socrates continued, "You want to tell me something bad about him, but you're not certain it's true. You may still pass the test though, because there's one filter left. The third filter is Usefulness. Is what you want to tell me about my friend going to be useful to me?"
"No, not really."
"Well," concluded Socrates, "If what you want to tell me is neither true, nor good, nor even useful … then why tell it to me at all?"
With all the divisiveness in the media and in our everyday conversations with friends, family, and strangers ... this is a good filter for what you say, what you post, and even how you evaluate markets, the economy, or a business opportunity.
How Does That Apply to Me or Trading?
The concept of Socrates' Triple Filter applies to markets as well.
When I was a technical trader, I didn't look at fundamental data or scour the news daily. Instead, I focused on developing dynamic and adaptive systems and processes that looked at a universe of trading algorithms to identify which were in phase and likely to perform well in the current market environment.
That focus has become more concentrated as we've transitioned to using advanced mathematics and AI to understand markets.
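To make that concrete, here's a minimal sketch of what "in phase" selection might look like. It's a hypothetical simplification (the 63-day window, the Sharpe-ratio score, and the select_in_phase helper are all illustrative, not our actual system): rank a universe of trading systems by recent risk-adjusted performance and only trade the ones that are currently working.

```python
import numpy as np

def rolling_sharpe(returns: np.ndarray, window: int = 63) -> float:
    """Annualized Sharpe ratio over the most recent `window` trading days."""
    recent = returns[-window:]
    if recent.std() == 0:
        return 0.0
    return float(np.sqrt(252) * recent.mean() / recent.std())

def select_in_phase(universe: dict, top_n: int = 5) -> list:
    """Keep only the systems whose recent returns suggest they fit the current regime."""
    scores = {name: rolling_sharpe(rets) for name, rets in universe.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Trade only systems that are both top-ranked and actually profitable lately.
    return [name for name in ranked[:top_n] if scores[name] > 0]

# Hypothetical usage: map each algorithm's name to its daily return history.
rng = np.random.default_rng(0)
universe = {"trend_follower": rng.normal(0.001, 0.01, 252),
            "mean_reverter": rng.normal(-0.001, 0.01, 252)}
print(select_in_phase(universe))
```

The design choice matters more than the math: the system decides what to pay attention to, so the news cycle doesn't have to.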
Filter Out What Isn't Good For You.
There are too many ways that the media (meaning the techniques, graphics, music, etc.), the people reporting the news, and even the stories themselves appeal to the fear and greed in human nature.
That's a big part of why I don't watch the news on TV anymore. It seems like story after story is about terrible things. For example, during a recent visit with my mother, I listened to her watch the news. There was a constant stream of "oh no," "oh my," and "that's terrible." You don't even have to watch the news to know what it says.
These concepts also apply to what you feed your algorithms. Garbage in, garbage out. Just because you can plug in more data doesn't mean that data will add value. Deciding "what not to do" and "what not to listen to" is just as important as deciding what to do.
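As a trivial illustration of that kind of filtering (a sketch with made-up quality rules; real pipelines need domain-specific checks), here's the sort of gate you might put between raw data and a model:

```python
import math

def passes_filter(row: dict) -> bool:
    """Reject rows that would teach the model the wrong lessons."""
    if row["price"] is None or not math.isfinite(row["price"]):
        return False  # missing or corrupt values
    if row["price"] <= 0:
        return False  # impossible observations
    if row["staleness_days"] > 5:
        return False  # too old to reflect the current regime
    return True

# Hypothetical raw feed: more data isn't better if it's bad data.
raw_rows = [
    {"price": 101.5, "staleness_days": 0},   # keep
    {"price": -3.0, "staleness_days": 1},    # impossible: drop
    {"price": None, "staleness_days": 2},    # missing: drop
    {"price": 99.8, "staleness_days": 30},   # stale: drop
]
clean = [row for row in raw_rows if passes_filter(row)]
print(len(clean), "of", len(raw_rows), "rows survived the filter")
```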
Artificial intelligence is exciting, but artificial stupidity is terrifying.
What's The Purpose of News for You?
My purpose changes what I'm looking for and how much attention I pay to different types of information. Am I reading or watching the news for entertainment, to learn something new, or to find something relevant and actionable?
One of my favorite activities is looking for new insights and interesting articles to share with you and my team. If you aren't getting my weekly reading list on Fridays - you're missing out. You can sign up here.
By the way, I recently found a site, Ground News, that makes it easy to compare news sources, read between the lines of media bias, and break free from the blinders the algorithms put on what we see. I'd love to hear about tools or sites you think are worth sharing.
Getting back to Socrates' three filters and business, I often ask myself: is it important, does it affect our edge, or can I use it as a catalyst for getting what we want?
There's a lot of noise out there competing for your attention. Stay focused.
Did you know that Nvidia is now the third most valuable company in the world? It sits behind only Microsoft and Apple (though it’s nearing Apple).
That figure is even more impressive when you consider that at the beginning of 2020, Nvidia was valued at $145 billion.
Nvidia’s growth was built largely on the back of AI hype. Its chips have been a mainstay of AI and data science technologies, powering a litany of AI projects, gaming systems, crypto mining, and more. It has successfully moved from being a product company to being a platform company.
Do you think it’s going to continue to grow? I do.
We’ve talked about hype cycles ... nevertheless, Nvidia’s offerings underpin the type of technology that will continue to drive future progress. So, while we’re seeing disillusionment with AI, it may not affect Nvidia as intensely.
This week, I saw an article in the WSJ titled “The AI Revolution Is Already Losing Steam,” claiming that the pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant.
This is ridiculous! We are at the beginning of something growing exponentially. It’s hard for most people to recognize the blind spot consisting of things they can’t conceive of ... and what’s coming is hard to conceive, let alone believe is possible!
In last week's article on Stanford's AI Index, we broadly covered many subjects.
There's one I felt like covering in more depth. It's the concept of AI Agents.
One way to improve AI is to create agentic AI systems capable of autonomous operation in specific environments. However, agentic AI has long challenged computer scientists, and the technology is only just now starting to show promise. Current agents can play complex games, like Minecraft, and are getting much better at tackling real-world tasks like research assistance and retail shopping.
A common discussion point is the future of work. The concept deals with how automation and AI will redefine the workforce, the workday, and even what we consider to be work.
Until now, AI has lived in very narrow applications - powerful applications, but with limited breadth of scope. Generative AI and LLMs have increased the variety of tasks we can use AI for, but that's only the beginning.
AI agents represent a massive step toward intelligent, autonomous, and multi-modal systems working alongside skilled humans (and replacing unskilled workers) in a wide variety of scenarios.
Eventually, these agents will be able to understand, learn, and solve problems without human intervention. There are a few critical improvements necessary to make that possible (sketched in code after this list):
Flexible goal-oriented behavior
Persistent memory & state tracking
Knowledge transfer & generalization
Interaction with real-world environments
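To ground those four requirements, here's a minimal, hypothetical agent loop. Everything in it (the Memory class and the plan/act stubs) is illustrative pseudostructure rather than a real framework; it just shows where goal-directed planning, persistent memory, and environment interaction plug in:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Persistent memory & state tracking: the agent remembers across steps."""
    events: list = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.events.append(event)

def plan(goal: str, memory: Memory) -> str:
    """Flexible goal-oriented behavior: choose the next action toward the goal.
    (Stub: a real agent would call an LLM or a planner here.)"""
    return f"step {len(memory.events) + 1} toward '{goal}'"

def act(action: str) -> str:
    """Interaction with real-world environments: execute the action.
    (Stub: a real agent would call tools, APIs, or a browser here.)"""
    return f"observation from: {action}"

def run_agent(goal: str, max_steps: int = 3) -> Memory:
    memory = Memory()
    for _ in range(max_steps):
        observation = act(plan(goal, memory))
        memory.remember(observation)  # accumulated knowledge enables transfer later
    return memory

print(run_agent("summarize this week's research").events)
```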
As models become more flexible in understanding and accomplishing their goals, and begin to apply that knowledge to new real-world domains, they will go from intelligent-seeming tools to powerful partners able to handle multiple tasks like a human would.
While they won't be human (or perhaps even seem human), we are on the verge of a technological shift that is a massive improvement from today's chatbots.
I like to think of these agents as the new assembly line. The assembly line revolutionized the workforce and drove an industrial revolution, and I believe AI agents will do the same.
As technology evolves, improvements in efficiency, effectiveness, and certainty are inevitable. For example, with a proverbial army of agents creating, refining, and releasing content, it is easy to imagine work that would take multiple humans a week being completed by agents in under an hour (even with human approval steps).
To make it literal, imagine using agents to write this article. One agent could be skilled in writing outlines and crafting headlines. Another could focus on research and on verifying that research. Then you'd have an agent to write, an agent to edit and proofread, and a conductor agent who makes sure the quality is up to snuff and that the piece replicates my voice. If the goal were to make it go viral, there could be a virality agent, an SEO keyword agent, etc.
Separating the activities into multiple agents (instead of trying to craft one vertically integrated agent) reduces the chances of "hallucinations" and self-aggrandizement. It could also, in theory, remove the human from the process entirely.
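For illustration only, here's what that division of labor could look like in code. The agent roles and the llm helper are hypothetical stand-ins for whatever model API you'd actually use; the point is the pipeline shape, with a conductor gating quality before anything ships:

```python
def llm(role: str, prompt: str) -> str:
    """Hypothetical stand-in for a call to your model of choice."""
    return f"[{role}] output for: {prompt[:48]}"

def outline_agent(topic: str) -> str:
    return llm("outliner", f"Outline an article on {topic}")

def research_agent(outline: str) -> str:
    return llm("researcher", f"Find and verify sources for {outline}")

def writer_agent(outline: str, research: str) -> str:
    return llm("writer", f"Draft an article from {outline} and {research}")

def editor_agent(draft: str) -> str:
    return llm("editor", f"Edit and proofread {draft}")

def conductor(topic: str, max_revisions: int = 2) -> str:
    """Orchestrates the specialists and checks quality and voice before release."""
    outline = outline_agent(topic)
    research = research_agent(outline)
    draft = writer_agent(outline, research)
    for _ in range(max_revisions):
        draft = editor_agent(draft)
        verdict = llm("critic", f"Judge quality and voice of {draft}")
        if "needs work" not in verdict:
            break  # quality is up to snuff; stop revising
    return draft

print(conductor("the Oreo market indicator"))
```

Note how each role sees only the artifact it needs, which is exactly why the split reduces compounding errors: a hallucination has to survive several independent checkpoints to make it into the final draft.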
Now, I enjoy the writing process and am not trying to remove myself from it. But the capability is still there.
As agentification increases, I believe humans will still be a necessary part of the feedback loop. Soon, we will start to see agent-based companies. Nonetheless, I still believe that humans will be an important part of the workforce (at least during my lifetime).
Another reason humans remain important is that they are still the gatekeepers ... meaning, humans have to become comfortable with a process before they'll allow it.
Trust and transparency are critical to AI adoption. Even if AI excels at a task, people are unlikely to use it blindly. To truly embrace AI, humans need to trust its capabilities and understand how it arrives at its results. This means AI developers must prioritize building systems that are both effective and understandable. By fostering a sense of ease and trust, users will be more receptive to the benefits AI or automation offers.
Said a different way, just because AI can do something doesn't mean you will use the tool or let AI do it. It has to be done a "certain" way for you to let it get done ... and that involves a lot of trust. As a practical reality, humans don't just have to trust the technology; they also have to trust and understand the process. That means the person building the AI or creating the automation must consider what it would take for a human to feel comfortable enough to allow the benefit.
Especially as AI becomes more common (and as an increasingly large amount of content becomes solely created by artificial systems), the human touch will become a differentiator and a way to appear premium.
In my business, the goal has never been to automate away the high-value, high-touch parts of our work. I want to build authentic relationships with the people I care about — and AI and automation promise to eliminate frustration and bother to free us up to do just that.
The goal in your business should be to identify the parts in between those high-touch periods that aren't your unique ability - and find ways to automate and outsource them.
Remember, the heart of AI is still human (at least until our AI Overlords tell us otherwise).
If you're interested in AI and its impact on business, life, and our world, I encourage you to check out some of my past podcast interviews.
As I work on finishing my book, "Compounding Insights: Turning Thoughts into Things in the Age of AI," I've revisited several old episodes, and some are certainly worth sharing. I've collected a few here for you to listen to. Let me know what you think.
In 2021, I recorded two interviews that I especially enjoyed. The first was done with Dan Sullivan and Steven Krein for Strategic Coach's Free Zone Frontier podcast... and the second was with Brett Kaufman on his Gravity podcast.
Please listen to them. They were pretty different, but both were well done and interesting.
Free Zone Frontier with Dan Sullivan and Steve Krein
Free Zone Frontier is a Strategic Coach program (and podcast) about creating "Free Zones." It refers to the green space where entrepreneurs collaborate and create without competition.
It's a transformative idea for entrepreneurial growth.
This episode focused on topics like building a bigger future, how decision-making frameworks and technology can extend your edge, and what it takes to get to the next level. I realize there is a lot of Strategic Coach jargon in this episode. However, it is still easy to understand, and there was great energy and an elevated conversation about worthy topics.
As an aside, Steve Krein is my cousin, and we joined Strategic Coach entirely separately before realizing we had joined the same group.
Gravity with Brett Kaufman
Usually, I talk about business, mental models, and the future of AI and technology, but Brett Kaufman brought something different out of me.
Brett's Gravity Project is about living with intention, community, consciousness, and connection. He focuses on getting people to share their life experiences ... with the intent that others can see themselves in your story.
In my talk with Brett, we do talk about the entrepreneurial journey ... but we also probe some deep insights by discussing the death of my younger brother, how my life changed almost immediately upon meeting my wife, and why love is the most powerful and base energy in the universe.
This was not a typical conversation for me (a different ratio of head-to-heart), but it was a good one (and I've had many people reach out because of this podcast). It was fun to revisit my childhood, from playing with a cash register at my grandfather's pharmacy to selling fireflies or sand-painting terrariums; it's funny how those small moments influenced my love for entrepreneurship.
Last year, I recorded two other podcasts that I'm excited to share ... It's interesting to see the change in topic and focus - but how much is still the same (timeless).
Clarity Generates Confidence With Gary Mottershead
I talked with Gary about intentionality, learning from the past, and how AI adoption is more about human nature than technology ... and more.
On the surface, this episode may seem like just another conversation about AI, but I value the diverse insights, points of emphasis, and perspectives that different hosts illuminate.
In another conversation, with host Scott, we dove deeper into emotional alchemy, self-identity, and how to move toward what you want in life - instead of away from what you don't want.
Every year, Stanford puts out an AI Index [1] with a massive amount of data attempting to sum up the current state of AI.
In 2022, it was 196 pages; last year, it was 386; now, it’s over 500 ... The report details where research is going and covers current specs, ethics, policy, and more.
It is super nerdy ... yet, it’s probably worth a skim (or ask one of the new AI services to summarize the key points, put them into an outline, and create a business strategy from the items most likely to create sustainable competitive advantages for you in your industry).
For reference, here are my highlights from 2022 and 2023.
AI (as a whole) received less private investment than last year - despite an 8X funding increase for Generative AI in the past year.
Even with less private investment, progress in AI accelerated in 2023.
We saw the release of new state-of-the-art systems like GPT-4, Gemini, and Claude 3. These systems are also much more multimodal than previous systems. They’re fluent in dozens of languages, can process audio and video, and even explain memes.
So, while we’re seeing a decrease in the rate at which AI gets investment dollars and new job headcount, we’re starting to see the dam overflow. The groundwork laid over the past few years is paying dividends. Here are a few things that caught my eye and might help set some high-level context for you.
Even since 2022, the capabilities of key models have increased exponentially. LLMs like GPT-4 and Gemini Ultra are very impressive. In fact, Gemini Ultra became the first LLM to reach human-level performance on the Massive Multitask Language Understanding (MMLU) benchmark. However, there’s a direct correlation between the performance of those systems and the cost to train them.
The number of new LLMs has doubled in the last year. Two-thirds of the new LLMs are open-source, but the highest-performing models are closed systems.
While looking at the pure technical improvements is important, it’s also worth realizing AI’s increased creativity and applications. For example, Auto-GPT takes GPT-4 and makes it almost autonomous. It can perform tasks with very little human intervention, it can self-prompt, and it has internet access & long-term and short-term memory management.
Here is an important distinction to make … We’re not only getting better at creating models, but we’re getting better at using them. Meanwhile, the models are getting better at improving themselves.
Researchers estimate that computer scientists could run out of high-quality language data by the end of this year, exhaust low-quality language data within two decades, and use up image data by the late 2030s. This means we’ll increasingly rely on synthetic data to train AI systems. The case for synthetic data can be compelling, but when it makes up the majority of a training set, it can result in model collapse.
With limited large datasets, fine-tuning has grown increasingly popular. Adding smaller but curated datasets to a model’s training regimen can boost overall model performance while also sharpening the model’s capabilities on specific tasks. It also allows for more precise control over behavior.
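As a concrete (if highly simplified) sketch of that idea, here's roughly what fine-tuning on a small, curated dataset looks like with the Hugging Face transformers library. The base model and dataset here are placeholder choices for illustration, not anything from the report:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# A small but curated slice: quality and relevance matter more than raw volume.
dataset = load_dataset("imdb", split="train[:2000]").train_test_split(test_size=0.1)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()  # sharpens the pretrained model on the curated task
```

Swapping in your own curated examples is the whole trick: a few thousand good rows can often beat millions of noisy ones for a specific task.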
Better AI means better data, which means ... you guessed it, even better AI. New tools like SegmentAnything and Skoltech are being used to generate specialized data for AI. While self-improvement isn’t possible yet without intervention, AI has been improving at an incredible pace.
The adoption of AI and the claims on AI “real estate” are still increasing. The number of AI patents has skyrocketed. From 2021 to 2022, AI patent grants worldwide increased sharply by 62.7%. Since 2010, the number of granted AI patents has increased more than 31 times.
As AI has improved, it has increasingly forced its way into our lives. We’re seeing more products, companies, and individual use cases for consumers in the general public.
While the number of AI jobs has decreased since 2021, job positions that leverage AI have significantly increased.
As well, despite the decrease in private investment, massive tranches of money are moving toward key AI-powered endeavors. For example, InstaDeep was acquired by BioNTech for $680 million to advance AI-powered drug discovery, Cohere raised $270 million to develop an AI ecosystem for enterprise use, Databricks bought MosaicML for $1.3 billion, and Thomson Reuters acquired Casetext, an AI legal assistant.
Not to mention the investments and attention from companies like Hugging Face, Microsoft, Google, Bloomberg, Adobe, SAP, and Amazon.
Unfortunately, the number of AI misuse incidents is skyrocketing. And it’s more than just deepfakes: AI can be used for many nefarious purposes that aren’t as visible, on top of intrinsic risks like those of self-driving cars. A global survey on responsible AI highlights that companies’ top AI-related concerns include privacy, data security, and reliability.
When you invent the car, you also invent the potential for car crashes ... when you ‘invent’ nuclear energy, you create the potential for nuclear weapons.
There are other potential downsides, too. For example, many AI systems, much like cryptocurrencies, use vast amounts of energy and produce carbon emissions, so the ecological impact has to be taken into account as well.
Luckily, many of today’s best minds are focused on creating bumpers to rein in AI and to prevent and discourage bad actors. The number of AI-related regulations has risen significantly: there were 25 in 2023, up 56.3% from the year before and a stark increase from just one in 2016. Regulating AI has also become increasingly prominent in legislative proceedings across the globe, with mentions up roughly 10x since 2016.
Not to mention, US government agencies allocated over $1.8 billion to AI research and development spending in 2023. Our government has tripled its funding for AI since 2018 and is trying to increase its budget again this year.
Conclusion
Artificial Intelligence is inevitable. Frankly, it’s already here. Not only that ... it’s growing, and it’s becoming increasingly powerful and impressive to the point that I’m no longer amazed by how amazing it continues to become.
Despite America leading the charge in AI, we’re also among the lowest in positivity about the benefits of these products and services. China, Saudi Arabia, and India rank the highest. Only 34% of Americans anticipate AI will boost the economy, and 32% believe it will enhance the job market. Significant demographic differences exist in perceptions of AI’s potential to enhance livelihoods, with younger generations generally more optimistic.
We’re at an interesting inflection point where fear of repercussions could derail and diminish innovation - slowing down our technological advance.
Much of this fear is based on emerging models demonstrating new (and potentially unpredictable) capabilities. Researchers showed that these emergent capabilities mostly appear when non-linear or discontinuous metrics are used ... but vanish with linear and continuous metrics. So far, even with LLMs, intrinsic self-correction has proven very difficult: when a model is left to decide on self-correction without guidance, performance declines across all benchmarks.
If we don’t continue to lead the charge, other countries will … you can already see it with China leading the AI patent explosion.
We need to address the fears and culture around AI in America. The benefits seem to outweigh the costs – but we have to account for the costs (time, resources, fees, and friction) and attempt to minimize potential risks – because those are real (and growing) as well.
Pioneers often get arrows in their backs and blood on their shoes. But they are also the first to reach the new world.
Luckily, I think momentum is moving in the right direction. Last year, it was rewarding to see my peers start to use AI apps. Now, many of them are using AI-inspired vocabulary and thinking seriously about how best to adopt AI into the fabric of their business.
We are on the right path.
Onwards!
[1] Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024. The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.