During the Robinhood & Gamestop debacle in 2021, I wrote an article about r/WallStreetBets where I essentially said that most of the retail investors who frequent the subreddit don’t know what they’re doing ... Occasionally, however, there are posts that present the type of solid research or insights you might see from a respected Wall Street firm. With Gamestop and AMC both surging recently, I thought this was a topic worth revisiting.
As an example of good research done by the subreddit, here’s a link to a post where a user (nobjos) analyzed 66,000+ buy and sell recommendations by financial analysts over the last 10 years to see if they had an edge. Spoiler: maybe, but only if you have sufficient AUM to justify the investment in their research.
Some posts demonstrate a clear misunderstanding of markets, and the subreddit certainly contains more jokes than quality posts. Nevertheless, I saw a great example of a post that pokes fun at the concept that correlation does not equal causation.
I’ve posted about the Super Bowl Indicator and the Big Mac Index in the past, but what about Oreos? Read what’s next for mouth-watering market insights.
The increasingly-depraved debuts of Oreos with more stuffing indicate unstable amounts of greed and leverage in the system, serving as an immediate indicator that the makings of a market crash are in place. Conversely, when the Oreo team reduces the amount of icing in their treats, markets tend to have great bull runs until once again society demands to push the boundaries of how much stuffing is possible.
1987: Big Stuf Oreo released. Black Monday, a 20% single-day crash and a following bear market.
1991: Mini Oreo introduced. Smaller icing ratios coincide with the 1991 Japanese asset price bubble, confirming the correlation works both ways and a reduction of Oreo icing may be a potential solution to preventing a future crash.
2011: Triple Double Oreo introduced. S&P drops 21% in a 5-month bear market.
2015: Oreo Thins introduced. A complete lack of icing causes an unprecedented bull run in the S&P for years.
2019: The Most Stuf Oreo briefly introduced. Pulled off the shelf before any major market damage could occur.
2021: The Most Stuf Oreo reintroduced. Market response: ???
It’s surprisingly good due diligence, but it’s also clearly just meant to be funny. It resonates because we crave order and look for signs that make markets seem a little bit more predictable.
The problem with randomness is that it often appears meaningful.
Many people on Wall Street have ideas about how to guess what will happen with the stock market or the economy. Unfortunately, they often confuse correlation with causation. At least with the Oreo Indicator, we know that the idea was supposed to be thought-provoking (but silly) rather than investment advice to be taken seriously.
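To see how easily randomness can look meaningful, here's a small toy sketch in Python (entirely synthetic data; the 50 series and one-year window are arbitrary choices for illustration, not anything from the Reddit post). It generates a pile of unrelated random walks and then goes hunting for the pair that looks most "connected":

```python
import numpy as np

# Fifty unrelated random walks, roughly one year of "daily" data each.
rng = np.random.default_rng(7)
n_series, n_days = 50, 252
walks = rng.normal(size=(n_series, n_days)).cumsum(axis=1)

# Hunt for the pair that happens to look the most related.
best_corr, best_pair = 0.0, None
for i in range(n_series):
    for j in range(i + 1, n_series):
        corr = np.corrcoef(walks[i], walks[j])[0, 1]
        if abs(corr) > abs(best_corr):
            best_corr, best_pair = corr, (i, j)

print(f"Most correlated pair {best_pair}: correlation = {best_corr:.2f}")
# With enough unrelated series, something will always look like a signal.
```

Run it with a different seed and a different pair will look compelling each time ... which is exactly the point. Comb through enough unrelated data and you can always find an "indicator."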
More people than you would hope or guess attempt to forecast the market based on gut, ancient wisdom, and prayers.
While hope and prayer are good things ... they aren’t reliably good trading strategies.
Consider this a reminder that even if you do the work, you’ll likely get a bad answer if you use the wrong inputs.
My mother watches the news religiously. To her credit, she watches a variety of sources and creates her own takeaways based on them. Regardless, there's a common theme in all the sources she watches – they focus on fear- or shock-inducing stories with a negative bias. As you might guess, I hear it when I talk with her.
While I value being informed, I also value things that nourish us and make us stronger (as opposed to things that make us weaker or less hopeful).
Negativity Sells.
Sure, news sources throw in the occasional feel-good story as a pattern interrupt ... but their focus skews negative. History shows that stories about improvement or the things that work simply don't grab eyeballs, attention, or ratings as consistently as negativity-focused stories do.
The reality is that negativity sells. If everything were great all the time, people wouldn't need to buy as many products, they wouldn't need to watch the news, and this cycle wouldn't continue.
It's worth acknowledging and understanding the perils our society is facing, but it's also worth focusing on the ways humanity is expanding and improving.
As a brief respite from the seemingly unending stream of doom and gloom, Information Is Beautiful has a section focused on "Beautiful News". It's a collection of visualizations highlighting positive trends, uplifting statistics, and creative solutions. It's updated daily and can be sorted by topic. I suggest you check it out.
Small distinctions separate wise men from fools ... Perhaps most important among them is what the wise man deems consequential.
This post discusses Socrates' Triple Filter Test, which involves checking information for truth, goodness, and usefulness. It also explores how this concept applies to decision-making in business and life by focusing on important information and filtering out the rest. The key to making better choices and staying focused is to avoid damaging or irrelevant information.
Socrates' Triple Filter
In ancient Greece, Socrates was reputed to hold knowledge in high esteem. One day an acquaintance met the great philosopher and said, "Do you know what I just heard about your friend?"
"Hold on a minute," Socrates replied. "Before telling me anything, I'd like you to pass a little test. It's called the Triple Filter Test."
"Triple filter?"
"That's right," Socrates continued. "Before you talk to me about my friend, it might be a good idea to take a moment and filter what you're going to say. That's why I call it the triple filter test.
The first filter is Truth. Have you made absolutely sure that what you are about to tell me is true?"
"No," the man said, "Actually I just heard about it and…"
"All right," said Socrates. "So you don't really know if it's true or not. Now let's try the second filter, the filter of Goodness. Is what you are about to tell me about my friend something good?"
"No, on the contrary…"
"So," Socrates continued, "You want to tell me something bad about him, but you're not certain it's true. You may still pass the test though, because there's one filter left. The third filter is Usefulness. Is what you want to tell me about my friend going to be useful to me?"
"No, not really."
"Well," concluded Socrates, "If what you want to tell me is neither true, nor good, nor even useful … then why tell it to me at all?"
With all the divisiveness in both media and in our everyday conversations with friends, family, and strangers ... this is a good filter for what you say, what you post, and even how you evaluate markets, the economy, or a business opportunity.
How Does That Apply to Me or Trading?
The concept of Socrates' Triple Filter applies to markets as well.
When I was a technical trader, rather than poring over fundamental data and scouring the news daily, I focused on developing dynamic and adaptive systems and processes that scanned a universe of trading algorithms to identify which were in phase and likely to perform well in the current market environment.
That focus has become more concentrated as we've transitioned to using advanced mathematics and AI to understand markets.
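For illustration only (this is a deliberately simplified sketch, not my actual system; the trailing-Sharpe score and 63-day window are assumptions I picked for the example), the core of that kind of selection logic can be as simple as scoring each algorithm on recent risk-adjusted performance and allocating to the current leaders:

```python
import numpy as np

def recent_sharpe(daily_returns, window=63):
    """Annualized Sharpe ratio over the trailing window (a stand-in for 'in phase')."""
    r = np.asarray(daily_returns)[-window:]
    return 0.0 if r.std() == 0 else np.sqrt(252) * r.mean() / r.std()

def select_in_phase(strategy_returns, top_n=3, window=63):
    """Rank a universe of algorithms by trailing performance and keep the leaders."""
    scores = {name: recent_sharpe(rets, window) for name, rets in strategy_returns.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Toy usage: random data standing in for each algorithm's simulated daily returns.
rng = np.random.default_rng(0)
universe = {f"algo_{i}": rng.normal(0.0002 * i, 0.01, size=500) for i in range(8)}
print(select_in_phase(universe))
```

The real work is in making the scoring and the universe adaptive, but even this toy version shows the idea: let the filter decide what deserves your attention.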
Filter Out What Isn't Good For You.
In contrast, there are too many ways that the media (meaning the techniques, graphics, music, etc.), the people reporting it, and even the news itself appeal to the fear and greed of human nature.
Likewise, I don't watch the news on TV anymore. It seems like story after story is about terrible things. For example, during a recent visit with my mother, I listened as she watched the news. There was a constant stream of "oh no," "oh my," and "that's terrible." You don't even have to watch the news to know what it says.
These concepts also apply to what you feed your algorithms. Garbage in, garbage out. Just because you can plug in more data doesn't mean that data will add value. Deciding "what not to do" and "what not to listen to" is just as important as deciding what to do.
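Here's a toy example of "garbage in, garbage out" (synthetic data; the single signal column and the 60 noise columns are arbitrary choices for the demo). Adding irrelevant inputs doesn't just fail to help ... it actively hurts out-of-sample results:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# One real driver of the target, plus sixty columns of pure noise.
rng = np.random.default_rng(42)
n = 300
signal = rng.normal(size=(n, 1))
y = 0.5 * signal[:, 0] + rng.normal(scale=0.5, size=n)
noise = rng.normal(size=(n, 60))

for label, X in [("signal only", signal),
                 ("signal + 60 noise features", np.hstack([signal, noise]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
    r2 = LinearRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{label}: out-of-sample R^2 = {r2:.3f}")
```

More columns, worse answers.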
Artificial intelligence is exciting, but artificial stupidity is terrifying.
What's The Purpose of News for You?
My purpose changes what I'm looking for and how much attention I pay to different types of information. Am I reading or watching the news for entertainment, to learn something new, or to find something relevant and actionable?
One of my favorite activities is looking for new insights and interesting articles to share with you and my team. If you aren't getting my weekly reading list on Fridays - you're missing out. You can sign up here.
By the way, I recently found a site, Ground News, that makes it easy to compare news sources, read between the lines of media bias, and break free from the blinders the algorithms put on what we see. I'd love to hear about tools or sites you think are worth sharing.
Getting back to Socrates' three filters and business, I often ask myself: is it important, does it affect our edge, or can I use it as a catalyst for getting what we want?
There's a lot of noise out there competing for your attention. Stay focused.
Did you know that Nvidia is now the third most valuable company in the world? It sits behind only Microsoft and Apple (though it’s nearing Apple).
These figures are even more impressive when you consider that at the beginning of 2020, Nvidia was valued at $145 billion.
Nvidia’s growth was built largely on the back of AI hype. Its chips have been a mainstay of AI and data science, powering a host of AI projects, gaming systems, crypto mining operations, and more. It has successfully moved from being a product company to being a platform.
Do you think it’s going to continue to grow? I do.
We’ve talked about hype cycles ... nevertheless, Nvidia’s offerings seem to underpin the type of technology that will continue to drive future progress. So, while we’re seeing some disillusionment toward AI, it may not affect Nvidia as intensely.
This week, I saw an article in the WSJ titled “The AI Revolution Is Already Losing Steam,” claiming that the pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant.
This is ridiculous! We are at the beginning of something growing exponentially. It’s hard for most people to recognize the blind spot consisting of things they can’t conceive of ... and what’s coming is hard to conceive, let alone believe is possible!
In last week's article on Stanford's AI Index, we broadly covered many subjects.
There's one I felt like covering in more depth. It's the concept of AI Agents.
One way to improve AI is to create agentic AI systems capable of autonomous operation in specific environments. However, agentic AI has long challenged computer scientists, and the technology is only just now starting to show promise. Current agents can play complex games like Minecraft and are getting much better at tackling real-world tasks like research assistance and retail shopping.
A common discussion point is the future of work. The concept deals with how automation and AI will redefine the workforce, the workday, and even what we consider to be work.
Up until now, AI has lived in very narrow applications. Powerful applications, but limited in scope. Generative AI and LLMs have increased the variety of tasks we can use AI for, but that's only the beginning.
AI agents represent a massive step toward intelligent, autonomous, and multi-modal systems working alongside skilled humans (and replacing unskilled workers) in a wide variety of scenarios.
Eventually, these agents will be able to understand, learn, and solve problems without human intervention. There are a few critical improvements necessary to make that possible.
Flexible goal-oriented behavior
Persistent memory & state tracking
Knowledge transfer & generalization
Interaction with real-world environments
As models become more flexible in understanding and accomplishing their goals, and begin to apply that knowledge to new real-world domains, they will go from intelligent-seeming tools to powerful partners able to handle multiple tasks the way a human would.
While they won't be human (or perhaps even seem human), we are on the verge of a technological shift that is a massive improvement from today's chatbots.
I like to think of these agents as the new assembly line. The assembly line revolutionized the workforce and drove an industrial revolution, and I believe AI agents will do the same.
As technology evolves, improvements in efficiency, effectiveness, and certainty are inevitable. For example, with a proverbial army of agents creating, refining, and releasing content, it is easy to imagine a process that would take multiple humans a week getting done by agents in under an hour (even with human approval processes).
To make it literal, imagine using agents to write this article. One agent could be skilled at writing outlines and crafting headlines. Another could focus on research and verifying that research. Then you have an agent to write, an agent to edit and proofread, and a conductor agent who makes sure the quality is up to snuff and that the piece replicates my voice. If the goal were to make it go viral, there could be a virality agent, an SEO keyword agent, etc.
Separating the activities into multiple agents (instead of trying to craft one vertically integrated agent) reduces the chances of "hallucinations" and self-aggrandizement. In theory, it could also remove the human from the process entirely.
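To make that concrete, here's a minimal sketch of what the orchestration might look like (purely hypothetical: the Agent class, the call_llm placeholder, and the role prompts are illustrations, not a real framework or API):

```python
# A rough sketch of the article-writing pipeline described above.
# call_llm is a placeholder for whatever model API you actually use.
def call_llm(role: str, prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

class Agent:
    def __init__(self, role: str, instructions: str):
        self.role = role
        self.instructions = instructions

    def run(self, task: str) -> str:
        return call_llm(self.role, f"{self.instructions}\n\nTask:\n{task}")

def write_article(topic: str) -> str:
    outliner   = Agent("outliner",   "Produce a tight outline and a few headline options.")
    researcher = Agent("researcher", "Gather supporting facts and verify them against sources.")
    writer     = Agent("writer",     "Draft the article from the outline and research notes.")
    editor     = Agent("editor",     "Edit and proofread the draft.")
    conductor  = Agent("conductor",  "Check the quality and match the author's voice.")

    outline  = outliner.run(topic)
    research = researcher.run(outline)
    draft    = writer.run(f"Outline:\n{outline}\n\nResearch:\n{research}")
    edited   = editor.run(draft)
    return conductor.run(edited)  # a human can still review the output here
```

In practice, the conductor step is also a natural place to keep a human in the loop.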
Now, I enjoy the writing process, and I'm not trying to remove myself from it. But the capability is still there.
As agentification increases, I believe humans will remain a necessary part of the feedback loop. Soon, we will start to see agent-based companies. Nonetheless, I still believe humans will be an important part of the workforce (at least during my lifetime).
Another reason humans are important is that they are still the gatekeepers ... meaning, humans have to become comfortable with a process before they'll allow it.
Trust and transparency are critical to AI adoption. Even if AI excels at a task, people are unlikely to use it blindly. To truly embrace AI, humans need to trust its capabilities and understand how it arrives at its results. This means AI developers must prioritize building systems that are both effective and understandable. By fostering a sense of ease and trust, users will be more receptive to the benefits AI or automation offers.
Said a different way, just because AI can do something doesn't mean that you will use the tool or let AI do it. It has to be done a "certain" way in order for you to let it get done ... and that involves a lot of trust. As a practical reality, humans don't just have to trust the technology; they also have to trust and understand the process. That means the person building the AI or creating the automation must consider what it would take for a human to feel comfortable enough to allow the benefit.
Especially as AI becomes more common (and as an increasingly large amount of content becomes solely created by artificial systems), the human touch will become a differentiator and a way to appear premium.
In my business, the goal has never been to automate away the high-value, high-touch parts of our work. I want to build authentic relationships with the people I care about ... and AI and automation promise to eliminate the frustration and bother, freeing us up to do just that.
The goal in your business should be to identify the parts in between those high-touch periods that aren't your unique ability - and find ways to automate and outsource them.
Remember, the heart of AI is still human (at least until our AI Overlords tell us otherwise).
If you're interested in AI and its impact on business, life, and our world, I encourage you to check out some of my past podcast interviews.
As I work on finishing my book, "Compounding Insights: Turning Thoughts into Things in the Age of AI," I've revisited several old episodes, and some are certainly worth sharing. I've collected a few here for you to listen to. Let me know what you think.
In 2021, I recorded two interviews that I especially enjoyed. The first was done with Dan Sullivan and Steven Krein for Strategic Coach's Free Zone Frontier podcast... and the second was with Brett Kaufman on his Gravity podcast.
Please listen to them. They were pretty different, but both were well done and interesting.
Free Zone Frontier with Dan Sullivan and Steve Krein
Free Zone Frontier is a Strategic Coach program (and podcast) about creating "Free Zones." It refers to the green space where entrepreneurs collaborate and create without competition.
It's a transformative idea for entrepreneurial growth.
This episode focused on topics like building a bigger future, how decision-making frameworks and technology can extend your edge, and what it takes to get to the next level. I realize there is a lot of Strategic Coach jargon in this episode. However, it is still easy to understand, and there was great energy and an elevated conversation about worthy topics.
As an aside, Steve Krein is my cousin, and we joined Strategic Coach entirely separately before realizing we had joined the same group.
Usually, I talk about business, mental models, and the future of AI and technology, but Brett Kaufman brought something different out of me.
Brett's Gravity Project is about living with intention, community, consciousness, and connection. He focuses on getting people to share their life experiences ... with the intent that others can see themselves in your story.
In my talk with Brett, we do talk about the entrepreneurial journey ... but we also probe some deep insights by discussing the death of my younger brother, how my life changed almost immediately upon meeting my wife, and why love is the most powerful and base energy in the universe.
This was not a typical conversation for me (a different ratio of head-to-heart), but it was a good one (and I've had many people reach out because of this podcast). It was fun to revisit my childhood, from playing with a cash register at my grandfather's pharmacy to selling fireflies or sand-painting terrariums; it's funny how those small moments influenced my love for entrepreneurship.
Last year, I recorded two other podcasts that I'm excited to share ... It's interesting to see the change in topic and focus - but how much is still the same (timeless).
Clarity Generates Confidence With Gary Mottershead
I talked with Gary about intentionality, learning from the past, and how AI adoption is more about human nature than technology ... and more.
On the surface, this episode may seem like just another conversation about AI, but I value the diverse insights, points of emphasis, and perspectives that different hosts illuminate.
In talking with Scott, we dove deeper into emotional alchemy, self-identity, and how to move toward what you want in life - instead of away from what you don't want.