Small distinctions separate wise men from fools ... Perhaps most important among them is what the wise man deems consequential.
This post discusses Socrates' Triple Filter Test, which involves checking information for truth, goodness, and usefulness. It also explores how this concept applies to decision-making in business and life by focusing on important information and filtering out the rest. The key to making better choices and staying focused is to avoid damaging or irrelevant information.
Socrates' Triple Filter
In ancient Greece, Socrates was reputed to hold knowledge in high esteem. One day an acquaintance met the great philosopher and said, "Do you know what I just heard about your friend?"
"Hold on a minute," Socrates replied. "Before telling me anything, I'd like you to pass a little test. It's called the Triple Filter Test."
"Triple filter?"
"That's right," Socrates continued. "Before you talk to me about my friend, it might be a good idea to take a moment and filter what you're going to say. That's why I call it the triple filter test.
"The first filter is Truth. Have you made absolutely sure that what you are about to tell me is true?"
"No," the man said, "Actually I just heard about it and…"
"All right," said Socrates. "So you don't really know if it's true or not. Now let's try the second filter, the filter of Goodness. Is what you are about to tell me about my friend something good?"
"No, on the contrary…"
"So," Socrates continued, "You want to tell me something bad about him, but you're not certain it's true. You may still pass the test though, because there's one filter left. The third filter is Usefulness. Is what you want to tell me about my friend going to be useful to me?"
"No, not really."
"Well," concluded Socrates, "If what you want to tell me is neither true, nor good, nor even useful … then why tell it to me at all?"
With all the divisiveness in both media and in our everyday conversations with friends, family, and strangers ... this is a good filter for what you say, what you post, and even how you evaluate markets, the economy, or a business opportunity.
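Applied programmatically, the three filters make a simple gate for any stream of incoming information. Here is a minimal sketch of that idea; the `Item` fields and the sample feed are my own illustration, not anything from the original story:

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A piece of incoming information, scored against the three filters."""
    content: str
    is_true: bool    # verified, not just heard secondhand
    is_good: bool    # constructive rather than damaging
    is_useful: bool  # relevant and actionable for the listener

def triple_filter(items):
    """Keep only items that pass all three of Socrates' filters."""
    return [i for i in items if i.is_true and i.is_good and i.is_useful]

feed = [
    Item("Verified earnings beat estimates", True, True, True),
    Item("Unverified rumor about a rival", False, False, False),
]
print([i.content for i in triple_filter(feed)])
```

The point of the sketch is the AND condition: an item that fails any one filter never reaches you.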
How Does That Apply to Me or Trading?
The concept of Socrates' Triple Filter applies to markets as well.
When I was a technical trader, rather than scouring fundamental data and the news daily, I focused on developing dynamic, adaptive systems and processes that scanned our universe of trading algorithms to identify which were in phase and likely to perform well in the current market environment.
That focus has become more concentrated as we've transitioned to using advanced mathematics and AI to understand markets.
Filter Out What Isn't Good For You.
In contrast, there are too many ways that the media (meaning the techniques, graphics, music, etc.), the people reporting it, and even the news itself appeal to the fear and greed of human nature.
Likewise, I don't watch the news on TV anymore. It seems like story after story is about terrible things. During a recent visit, I listened as my mother watched the news: a constant stream of "oh no," "oh my," and "that's terrible." You don't even have to watch the news to know what it says.
These concepts also apply to what you feed your algorithms. Garbage in, garbage out. Just because you can plug in more data doesn't mean that data will add value. Deciding what "not to do" and "what not to listen to" is equally as important as deciding what to do.
Artificial intelligence is exciting, but artificial stupidity is terrifying.
What's The Purpose of News for You?
My purpose changes what I'm looking for and how much attention I pay to different types of information. Am I reading or watching the news for entertainment, to learn something new, or to find something relevant and actionable?
One of my favorite activities is looking for new insights and interesting articles to share with you and my team. If you aren't getting my weekly reading list on Fridays - you're missing out. You can sign up here.
By the way, I recently found a site, Ground News, that makes it easy to compare news sources, read between the lines of media bias, and break free from the blinders the algorithms put on what we see. I'd love to hear about tools or sites you think are worth sharing.
Getting back to Socrates' three filters and business, I often ask myself: is it important, does it affect our edge, or can I use it as a catalyst for getting what we want?
There's a lot of noise out there competing for your attention. Stay focused.
Did you know that Nvidia is now the third most valuable company in the world? It sits behind only Microsoft and Apple (though it’s nearing Apple).
These figures are even more impressive when you consider that at the beginning of 2020, Nvidia was valued at $145 billion.
Nvidia’s growth was built largely on the back of AI hype. Its chips have been a mainstay of AI and data science technologies, benefiting a litany of AI projects, gaming systems, crypto mining, and more. It has successfully moved from being a product company to being a platform company.
Do you think it’s going to continue to grow? I do.
We’ve talked about hype cycles ... nevertheless, Nvidia’s offerings seem to be the type of technology that will continue to underpin future progress. So, while we’re seeing disillusionment toward AI, it may not affect Nvidia as intensely.
This week, I saw an article in the WSJ titled “The AI Revolution Is Already Losing Steam,” claiming that the pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant.
This is ridiculous! We are at the beginning of something growing exponentially. It’s hard for most people to recognize the blind spot consisting of things they can’t conceive of ... and what’s coming is hard to conceive, let alone believe is possible!
In last week's article on Stanford's AI Index, we broadly covered many subjects.
There's one I felt like covering in more depth. It's the concept of AI Agents.
One way to improve AI is to create agentic AI systems capable of autonomous operation in specific environments. However, agentic AI has long challenged computer scientists, and the technology is only just now starting to show promise. Current agents can play complex games, like Minecraft, and are getting much better at tackling real-world tasks like research assistance and retail shopping.
A common discussion point is the future of work. The concept deals with how automation and AI will redefine the workforce, the workday, and even what we consider to be work.
Until now, AI has been confined to narrow applications. Powerful applications, but limited in scope. Generative AI and LLMs have increased the variety of tasks we can use AI for, but that's only the beginning.
AI agents represent a massive step toward intelligent, autonomous, and multi-modal systems working alongside skilled humans (and replacing unskilled workers) in a wide variety of scenarios.
Eventually, these agents will be able to understand, learn, and solve problems without human intervention. There are a few critical improvements necessary to make that possible.
Flexible goal-oriented behavior
Persistent memory & state tracking
Knowledge transfer & generalization
Interaction with real-world environments
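One way to picture those four requirements is as the interface a future agent would need to expose. The sketch below is purely hypothetical (the class, its method names, and the toy "environment" are my own illustration, not any real framework's API), but it maps each capability to a concrete piece of code:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical skeleton mapping the four capabilities above to code."""
    goal: str
    memory: list = field(default_factory=list)  # persistent memory & state tracking
    skills: dict = field(default_factory=dict)  # transferable knowledge

    def plan(self, observation):
        """Flexible goal-oriented behavior: choose the next action toward the goal."""
        self.memory.append(observation)  # remember state across steps
        # Knowledge transfer: reuse a known skill if one matches the observation.
        for trigger, action in self.skills.items():
            if trigger in observation:
                return action
        return "explore"  # generalize when nothing known applies

    def act(self, environment, observation):
        """Interaction with a (here simulated) real-world environment."""
        return environment(self.plan(observation))

agent = Agent(goal="answer research questions",
              skills={"question": "search", "document": "summarize"})
result = agent.act(lambda action: f"performed {action}", "user question received")
print(result)
```

Today's agents implement pieces of this loop; the hard part is making all four capabilities work together without human hand-holding.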
As models become more flexible in understanding and accomplishing their goals, and begin to apply that knowledge to new real-world domains, they will go from intelligent-seeming tools to powerful partners able to handle multiple tasks the way a human would.
While they won't be human (or perhaps even seem human), we are on the verge of a technological shift that is a massive improvement from today's chatbots.
I like to think of these agents as the new assembly line. The assembly line revolutionized the workforce and drove an industrial revolution, and I believe AI agents will do the same.
As technology evolves, improvements in efficiency, effectiveness, and certainty are inevitable. For example, with a proverbial army of agents creating, refining, and releasing content, it is easy to imagine a process that would take multiple humans a week getting done by agents in under an hour (even with human approval processes).
To make this concrete, imagine using agents to write this article. One agent could specialize in writing outlines and crafting headlines. Another could focus on research and the verification of that research. Then you have an agent to write, an agent to edit and proofread, and a conductor agent who makes sure the quality is up to snuff and replicates my voice. If the goal were to make it go viral, there could be a virality agent, an SEO keyword agent, etc.
Separating the activities into multiple agents (instead of trying to craft one vertically integrated agent) reduces the chance of "hallucinations" and self-aggrandizement. It could also, in theory, remove the human from the process entirely.
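That division of labor can be sketched as a simple assembly line. In this toy sketch, each "agent" is just a placeholder function standing in for a model call with a role-specific prompt; the stage names and the conductor's quality gate are my own illustration:

```python
# Each stage stands in for a specialist agent; in practice each would
# call a model with its own role-specific prompt and tools.
def outline_agent(topic):
    return f"Outline for '{topic}': intro, three points, conclusion"

def research_agent(outline):
    return outline + " [facts verified]"

def writer_agent(researched):
    return f"Draft based on: {researched}"

def editor_agent(draft):
    return draft.replace("Draft", "Edited draft")

def conductor_agent(article, min_length=10):
    """Quality gate: the conductor approves the piece or sends back None."""
    return article if len(article) >= min_length else None

def pipeline(topic):
    """Run the specialist stages in order, like an assembly line for content."""
    result = topic
    for stage in (outline_agent, research_agent, writer_agent,
                  editor_agent, conductor_agent):
        result = stage(result)
    return result

print(pipeline("AI agents"))
```

Because each stage has one narrow job and a checkable output, errors are easier to catch at the hand-off points than inside one monolithic agent.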
Now, I enjoy the writing process. I'm not trying to remove myself from this process. But, the capability is still there.
As agentification increases, I believe humans will remain a necessary part of the feedback loop. Soon, we will start to see agent-based companies. Nonetheless, I still believe that humans will be an important part of the workforce (at least during my lifetime).
Humans also remain important gatekeepers ... meaning, humans have to become comfortable with a process before they will allow it.
Trust and transparency are critical to AI adoption. Even if AI excels at a task, people are unlikely to use it blindly. To truly embrace AI, humans need to trust its capabilities and understand how it arrives at its results. This means AI developers must prioritize building systems that are both effective and understandable. By fostering a sense of ease and trust, users will be more receptive to the benefits AI or automation offers.
Said a different way, just because AI can do something doesn't mean that you will use the tool or let AI do it. It has to be done a "certain" way in order for you to let it get done ... and that involves a lot of trust. As a practical reality, humans don't just have to trust the technology; they also have to trust and understand the process. That means the person building the AI or creating the automation must consider what it would take for a human to feel comfortable enough to allow the benefit.
Especially as AI becomes more common (and as an increasingly large amount of content becomes solely created by artificial systems), the human touch will become a differentiator and a way to appear premium.
In my business, the goal has never been to automate away the high-value, high-touch parts of our work. I want to build authentic relationships with the people I care about, and AI and automation promise to eliminate frustration and busywork to free us up to do just that.
The goal in your business should be to identify the parts in between those high-touch periods that aren't your unique ability - and find ways to automate and outsource them.
Remember, the heart of AI is still human (at least until our AI Overlords tell us otherwise).
If you're interested in AI and its impact on business, life, and our world, I encourage you to check out some of my past podcast interviews.
As I work on finishing my book, "Compounding Insights: Turning Thoughts into Things in the Age of AI," I've revisited several old episodes, and some are certainly worth sharing. I've collected a few here for you to listen to. Let me know what you think.
In 2021, I recorded two interviews that I especially enjoyed. The first was done with Dan Sullivan and Steven Krein for Strategic Coach's Free Zone Frontier podcast... and the second was with Brett Kaufman on his Gravity podcast.
Please listen to them. They were pretty different, but both were well done and interesting.
Free Zone Frontier with Dan Sullivan and Steve Krein
Free Zone Frontier is a Strategic Coach program (and podcast) about creating "Free Zones." It refers to the green space where entrepreneurs collaborate and create without competition.
It's a transformative idea for entrepreneurial growth.
This episode focused on topics like building a bigger future, how decision-making frameworks and technology can extend your edge, and what it takes to get to the next level. I realize there is a lot of Strategic Coach jargon in this episode. However, it is still easy to understand, and there was great energy and an elevated conversation about worthy topics.
As an aside, Steve Krein is my cousin, and we joined Strategic Coach entirely separately before realizing we had joined the same group.
Usually, I talk about business, mental models, and the future of AI and technology, but Brett Kaufman brought something different out of me.
Brett's Gravity Project is about living with intention, community, consciousness, and connection. He focuses on getting people to share their life experiences ... with the intent that others can see themselves in your story.
In my talk with Brett, we do talk about the entrepreneurial journey ... but we also probe some deep insights by discussing the death of my younger brother, how my life changed almost immediately upon meeting my wife, and why love is the most powerful and base energy in the universe.
This was not a typical conversation for me (a different ratio of head-to-heart), but it was a good one (and I've had many people reach out because of this podcast). It was fun to revisit my childhood, from playing with a cash register at my grandfather's pharmacy to selling fireflies or sand-painting terrariums; it's funny how those small moments influenced my love for entrepreneurship.
Last year, I recorded two other podcasts that I'm excited to share ... It's interesting to see the change in topic and focus - but how much is still the same (timeless).
Clarity Generates Confidence With Gary Mottershead
I talked with Gary about intentionality, learning from the past, and how AI adoption is more about human nature than technology ... and more.
On the surface, this episode may seem like just another conversation about AI, but I value the diverse insights, points of emphasis, and perspectives that different hosts illuminate.
In talking with Scott, we dove deeper into emotional alchemy, self-identity, and how to move toward what you want in life - instead of away from what you don't want.
Every year, Stanford puts out an AI Index1 with a massive amount of data attempting to sum up the current state of AI.
In 2022, it was 196 pages; last year, it was 386; now, it’s over 500 ... The report details where research is going and covers current specs, ethics, policy, and more.
It is super nerdy ... yet, it’s probably worth a skim (or ask one of the new AI services to summarize the key points, put it into an outline, and create a business strategy for your business from the items that are likely to create the best sustainable competitive advantages for you in your industry).
For reference, here are my highlights from 2022 and 2023.
AI (as a whole) received less private investment than last year - despite an 8X funding increase for Generative AI in the past year.
Even with less private investment, progress in AI accelerated in 2023.
We saw the release of new state-of-the-art systems like GPT-4, Gemini, and Claude 3. These systems are also much more multimodal than previous systems. They’re fluent in dozens of languages, can process audio and video, and even explain memes.
So, while we’re seeing a decrease in the rate at which AI gets investment dollars and new job headcount, we’re starting to see the dam overflow. The groundwork laid over the past few years is paying dividends. Here are a few things that caught my eye and might help set some high-level context for you.
Even since 2022, the capabilities of key models have increased exponentially. LLMs like GPT-4 and Gemini Ultra are very impressive. In fact, Gemini Ultra became the first LLM to reach human-level performance on the Massive Multitask Language Understanding (MMLU) benchmark. However, there’s a direct correlation between the performance of those systems and the cost to train them.
The number of new LLMs has doubled in the last year. Two-thirds of the new LLMs are open-source, but the highest-performing models are closed systems.
While looking at the pure technical improvements is important, it’s also worth realizing AI’s increased creativity and applications. For example, Auto-GPT takes GPT-4 and makes it almost autonomous. It can perform tasks with very little human intervention, it can self-prompt, and it has internet access & long-term and short-term memory management.
Here is an important distinction to make … We’re not only getting better at creating models, but we’re getting better at using them. Meanwhile, the models are getting better at improving themselves.
Researchers estimate that computer scientists could run out of high-quality language data for LLMs by the end of this year, exhaust low-quality language data within two decades, and use up image data by the late 2030s. This means we’ll increasingly rely on synthetic data to train AI systems. The case for synthetic data can be compelling, but when it makes up the majority of a data set, it can lead to model collapse.
With limited large datasets, fine-tuning has grown increasingly popular. Adding smaller but curated datasets to a model’s training regimen can boost overall model performance while also sharpening the model’s capabilities on specific tasks. It also allows for more precise control over behavior.
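A toy illustration of that curation step: deduplicate and filter a raw corpus down to a smaller, higher-quality fine-tuning set. The quality heuristic here (a minimum word count) is purely illustrative; real pipelines use much richer signals like perplexity scores, classifiers, and human review:

```python
def curate(samples, min_words=5):
    """Dedupe and filter raw text samples into a smaller fine-tuning set."""
    seen = set()
    curated = []
    for text in samples:
        normalized = " ".join(text.lower().split())
        if normalized in seen:
            continue  # drop exact duplicates
        if len(normalized.split()) < min_words:
            continue  # drop fragments too short to teach anything
        seen.add(normalized)
        curated.append(text)
    return curated

raw = [
    "The model is trained on curated examples only.",
    "The model is trained on curated examples only.",  # duplicate
    "Too short.",                                      # low quality
    "Smaller, well-chosen datasets can sharpen task-specific skills.",
]
print(len(curate(raw)))  # 2 of the 4 raw samples survive curation
```

The payoff is the one described above: fewer but better examples generally beat a larger, noisier pile.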
Better AI means better data, which means ... you guessed it, even better AI. New tools like SegmentAnything and Skoltech are being used to generate specialized data for AI. While self-improvement isn’t possible yet without intervention, AI has been improving at an incredible pace.
The adoption of AI and the claims on AI “real estate” are still increasing. The number of AI patents has skyrocketed. From 2021 to 2022, AI patent grants worldwide increased sharply by 62.7%. Since 2010, the number of granted AI patents has increased more than 31 times.
As AI has improved, it has increasingly forced its way into our lives. We’re seeing more products, companies, and individual use cases for consumers in the general public.
While the number of AI jobs has decreased since 2021, job positions that leverage AI have significantly increased.
As well, despite the decrease in private investment, massive tranches of money are moving toward key AI-powered endeavors. For example, InstaDeep was acquired by BioNTech for $680 million to advance AI-powered drug discovery, Cohere raised $270 million to develop an AI ecosystem for enterprise use, Databricks bought MosaicML for $1.3 billion, and Thomson Reuters acquired Casetext, an AI legal assistant.
Not to mention the investments and attention from companies like Hugging Face, Microsoft, Google, Bloomberg, Adobe, SAP, and Amazon.
Unfortunately, the number of AI misuse incidents is skyrocketing. And it’s more than just deepfakes: AI can be used for many nefarious purposes that aren’t as visible, on top of intrinsic risks like those of self-driving cars. A global survey on responsible AI highlights that companies’ top AI-related concerns include privacy, data security, and reliability.
When you invent the car, you also invent the potential for car crashes ... when you ‘invent’ nuclear energy, you create the potential for nuclear weapons.
There are other potential negatives as well. For example, many AI systems (like cryptocurrencies) use vast amounts of energy and produce carbon. So, the ecological impact has to be taken into account as well.
Luckily, many of today’s best minds are focused on creating bumpers to rein in AI and prevent and discourage bad actors. The number of AI-related regulations has risen significantly, both in the past year and over the last five years. In 2023, there were 25 AI-related regulations, a stark increase from just one in 2016. Last year, the total number of AI-related regulations grew by 56.3%. Regulating AI has become increasingly important in legislative proceedings across the globe, increasing 10x since 2016.
Not to mention, US government agencies allocated over $1.8 billion to AI research and development spending in 2023. Our government has tripled its funding for AI since 2018 and is trying to increase its budget again this year.
Conclusion
Artificial Intelligence is inevitable. Frankly, it’s already here. Not only that ... it’s growing, and it’s becoming increasingly powerful and impressive to the point that I’m no longer amazed by how amazing it continues to become.
Despite America leading the charge in AI, we’re also among the lowest in positivity about the benefits and drawbacks of these products and services. China, Saudi Arabia, and India rank the highest. Only 34% of Americans anticipate AI will boost the economy, and 32% believe it will enhance the job market. Significant demographic differences exist in perceptions of AI’s potential to enhance livelihoods, with younger generations generally more optimistic.
We’re at an interesting inflection point where fear of repercussions could derail and diminish innovation - slowing down our technological advance.
Much of this fear is based on emerging models demonstrating new (and potentially unpredictable) capabilities. Researchers showed that these emergent capabilities mostly appear when non-linear or discontinuous metrics are used ... but vanish with linear and continuous metrics. So far, even with LLMs, intrinsic self-correction has proven very difficult. When a model is left to decide on self-correction without guidance, performance declines across all benchmarks.
If we don’t continue to lead the charge, other countries will … you can already see it with China leading the AI patent explosion.
We need to address the fears and culture around AI in America. The benefits seem to outweigh the costs – but we have to account for the costs (time, resources, fees, and friction) and attempt to minimize potential risks – because those are real (and growing) as well.
Pioneers often get arrows in their backs and blood on their shoes. But they are also the first to reach the new world.
Luckily, I think momentum is moving in the right direction. Last year, it was rewarding to see my peers start to use AI apps. Now, many of them are using AI-inspired vocabulary and thinking seriously about how best to adopt AI into the fabric of their business.
We are on the right path.
Onwards!
1Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024. The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.
New York and Hawaii top the list of state tax burdens at 12% and 11.8%, respectively. Alaska ends the list at 4.9%, followed by New Hampshire at 5.6%.
Alaskans don't pay state income tax, but neither do Florida, Nevada, South Dakota, Tennessee, Texas, Washington, or Wyoming. So, if you're trying to avoid taxes, they all sound like better bets.
New Hampshire still has a better state tax burden than any of them despite its 4% flat tax on interest and dividend income.
If you don't like paying taxes (and don't mind the cold), then Alaska might be worth the winters?
Meanwhile, we hear a lot about the exodus from California, but not from New York or Maine. Maybe it's the people ... or maybe it's their Governor?
A few years ago, I shared a presentation called Mindset Matters that I had given to a small mastermind group.
This past week, I revisited that content in a different group.
One of my core beliefs is that energy is one of the most important things we can measure. I believe it so strongly I paid Gaping Void to put it on my wall.
It means exactly what it sounds like - but also a lot more.
Energy affects how you feel, what you do, and what you make it mean. That means it is a great way to measure your values too. Consequently, even if you don’t recognize it, energy has a lot to do with who you hire and fire. It affects where you spend your time. Ultimately, it even affects the long-term vision of our company. If something brings profit and energy, it is probably worth pursuing.
In contrast, fighting your energy is one of the quickest ways to burn out. Figuring out who and what to say “no” to is a crucial part of making sure you stay on the path and reach your goals.
I believe that words have power. Specifically, the words you use to describe your identity and your priorities change your reality.
First, some background. Your Roles and Goals are nouns. That means “a person, place, or thing.” Let's examine some sample roles (like father, entrepreneur, visionary, etc.) and goals (like amplified intelligence, autonomous platform, and sustainable edge). As expected, they are all nouns.
Next, we’ll examine your default strategies. You use these in order to create or be the things you want. The strategies you use are verbs. That means they define an action you take. Action words include: connect, communicate, contribute, collaborate, protect, serve, evaluate, curate, share … and love. On the other end of the spectrum, you could complain, retreat, blame, or block.
People have habitual strategies. I often say happy people find ways to be happy – while frustrated people find ways to be frustrated. This is true for many things.
Seen a different way, people expect and trust that you will act according to how they perceive you.
Meanwhile, you are the most important perceiver.
Another distinction worth making is that the nouns and verbs we use range from timely to timeless. Timely words relate to what you are doing now. Timeless words are chunked higher and relate to what you have done, what you are doing, and what you will do.
The trick is to chunk high enough that you are focused on words that link your timeless Roles, Goals, and Strategies. When done right, you know that these are a part of what makes you … “You”.
My favorite way to do this is through three-word strategies.
These work for your business, priorities, identity, and more.
I’ll introduce the idea to you by sharing my own to start.
Understand. Challenge. Transform.
The actual words are less important than what they mean to me.
What’s also important is that not only do these words mean something to me, but I’ve put them in a specific order, and I’ve made these words “commands” in my life. They’re specific, measurable, and actionable. They remind me what to do. They give me direction. And, together, they are a strategy (or process) that creates a reliable result.
First, I understand, because I want to make sure I consider the big picture and the possible paths from where I am to the bigger future possibility that I want. Then, I challenge situations, people, norms, and more. I don’t challenge to tear down. I challenge to find strengths … to figure out what to trust and rely upon. Finally, I transform things to make them better. Insanity is doing what you always do and expecting a different result. This is about finding where small shifts create massive consequences. It is about committing to the result rather than how we have done things till now.
If I challenged before I knew the situation, or I tried to transform something without properly doing my research, I’d risk causing more damage than good.
Likewise, imagine the life of someone who protects, serves, and loves. Compare that to the life of someone who loves, serves, and protects. The order matters!
There is an art and a science to it. But it starts by taking the first step. Try to find your three words.
I’ve set daily alarms on my phone to remind me of these words. I use them when I’m in meetings, and I use them to evaluate whether I’m showing up as my best self.
You can also create three words that are different for the different hats you wear, the products in your business, or how your team collaborates.
Like recipes, your words should have ingredients, orders, and intensities. As you use your words more, the intensities might change. For example, when my son was just getting out of college, one of his words was contented, because he was focused on all the things he missed from college instead of appreciating the things he had. Later, his words switched to grateful and then loving. Those evolutions paired with his personal journey and represented stronger actions.
Realize that we create what we want by doing. As such, choose words that inform or spark the right actions. You can see that in my son’s words. As he grew, he became more comfortable actively prompting the actions he wanted to approach life with, instead of just passively hoping for a feeling.
You can apply these simple three-word strategies almost everywhere once you learn how to create them.
The problem with history is it rarely tells the whole story.
Ideally, history would be presented objectively, recounting facts without the influence of societal bias, the perspective of the victor, or the storyteller's slant. But achieving this is harder than it seems.
Think about your daily life – it is filled with many seemingly innocuous judgments about your perception of the economy, what's happening in the markets, who is a hero, who deserves punishment, and whether an action is "Just" or "Wrong".
I'm often surprised by how frequently intelligent people vehemently disagree on issues that seem clear-cut to them.
It's like a fish in water not realizing it's in water ... Most people don't realize the inherent biases and filters that inform their sense of the world or reality.
This post is an attempt to highlight the importance of diverse perspectives and information sources in building well-informed viewpoints.
Even though most people would agree that genuinely understanding history requires a clear picture, free from bias ... I think it's apparent that history (as we know it) is subjective. The narrative shifts to support the needs of the society reporting it.
The Cold War is a great example: the interpretation of its causes and events changed during the war, immediately after the war, and again today.
But while that's one example, to a certain degree, we can see it everywhere. We can even see it in the way events are reported today. News stations color the story based on whether they're red or blue, and the internet is quick to jump on a bandwagon even if the information is hearsay.
Now, what happens when you can literally rewrite history?
“Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped.” - Orwell, 1984
That's one of the potential risks of deepfake technology. As it gets better, creating "supporting evidence" becomes easier for whatever narrative a government or other entity is trying to make real.
On July 20th, 1969, Neil Armstrong and Buzz Aldrin landed safely on the moon. They then returned to Earth safely as well.
MIT recently created a deepfake of a speech Nixon's speechwriter William Safire wrote during the Apollo 11 mission in case of disaster. The whole video is worth watching, but the speech starts around 4:20.
Can you imagine the real-world ripples that would have occurred if the astronauts died on that journey (or if people genuinely believed they did)? Here is a quote from the press response the Nixon-era government prepared in case of that disaster.
"Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace." - Nixon's Apollo 11 Disaster Speech
Today, alternative histories are becoming some people's realities. Why? Because media disinformation is more widespread, and more dangerous, than ever.
Alternative history can only be called that when it's distinguishable from the truth, and unfortunately, we're prone to look for information that already fits our biases.
Today, we also have to increasingly consider the impacts of technology. Deepfakes are becoming more commonplace - with popstar Drake even using AI in a recent record. That one was apparent - but scarily, research shows that most people can't tell a deepfake from reality (even if they think they can).
As deepfakes get better, we'll also get better at detecting them, but it's a cat-and-mouse game with no end in sight.
Signalling theory describes this dynamic: signallers evolve to become better at manipulating receivers, while receivers evolve to become more resistant to manipulation. We're seeing the same arms race play out in trading algorithms.
In 1983, Stanislav Petrov saved the world. Petrov was the duty officer at the command center for a Soviet nuclear early-warning system when the system reported that a missile had been launched from the U.S., followed by up to five more. Petrov judged the reports to be a false alarm and didn't authorize retaliation (averting a potential nuclear WWIII in which countless people would have died).
But messaging is now getting more convincing. It's harder to tell real from fake. What happens when a world leader has a convincing enough deepfake with a convincing enough threat to another country? Will people have the wherewithal to double-check? What about when they're buffeted by these messages constantly and from every direction?
As we increasingly use AI for writing and editing, there is a growing risk of subtle changes creeping into messages and communications. That widespread opportunity to manipulate information amplifies the potential for people to use these technologies to shape perceptions. As a result, we must be increasingly cautious about how the data we rely on may be altered - and how that could ultimately affect our perceptions and decisions.
Despite the risks, I'm excited about the promise and the possibilities of technology. But, as always, in search of the good (or better), we have to acknowledge and be prepared for the bad.
In 2020, I had a Zoom meeting with Matthew Piepenburg of Signals Matter. Even though it was a private conversation, there was so much value in it that we decided to share parts of it online.
Four years later, I still think it's a great watch.
While Matt evaluates markets based on Macro/Value investing, I'm much more interested in advanced AI and quantitative methods.
As you might expect, there are a lot of differences in how we view the world, decision-making, and the market. Nonetheless, we share a lot of common beliefs as well.
Our talk explores several interesting areas and concepts. I encourage you to watch it below.
Even though this video is four years old, the lessons remain true – markets are not the economy, and normal market dynamics have been out the window for a long time. In addition, part of why you're seeing increased volatility and noise is because there are so many interventions and artificial inputs to our market system.
While Matt and I may approach the world with very different lenses, we both believe in "timeless wisdom".
Ask yourself: what was true yesterday, is true today, and will stay true tomorrow?
That is part of the reason we focus on emerging technologies and constant innovation ... they remain relevant.
Something we can both agree on is that if you don't know what your edge is ... you don't have one.
Hope you enjoyed the video.
Let me know what other topics you'd like to hear more about.
It's a pretty damning video from someone who is frustrated with AI - but it makes several interesting points. The presenter discusses Amazon's recent foible, Google's decreasing search quality, the rise of poorly written AI-crafted articles, GPT's web-scraping scandals, and the overall generalization of responses we see as everyone uses AI everywhere.
Yanshin attributes the disparity between the actual results and the excitement surrounding AI stocks to the substantial investments from technology giants. But as most bubbles prove, money will be the catalyst for amazing things — and some amazing failures and disappointments too.
His final takeaway is that, regardless of its current state, AI is coming and will undoubtedly improve our lives.
If I were to add some perspective from someone in the industry, it would be this.
AI Is Overdelivering in Countless Ways
There will always be a gap between expectations and reality (because there will always be a gap between the hype and adoption cycles). AI is already seamlessly integrated into your life. It's the underpinning of your Smartphones, Roombas, Alexas, Maps, etc. It has also massively improved supply chain management, data analytics, and more.
That's not what gets media coverage ... because it's not sexy ... even if it's real.
Having built AI systems since arguably the mid-90s, I find the progress and capabilities of AI today hard to believe. They're almost good enough to seem like science fiction.
The Tool Isn't Usually The Problem
Artificial Intelligence is not a substitute for the real thing—and it certainly can't compensate for the lack of the real thing.
I sound like a broken record, but AI is a tool, not a panacea. Misusing it, like using a shovel as a hammer, leads to disappointment. And it doesn't help if you're trying to hammer nails when you should be laying bricks.
ChatGPT is very impressive, as are many other generative AI tools. However, they're still products of the data used to train them. They don't verify that they're giving you factual information; they can only generate responses based on the data they have.
If you give an AI tool a general prompt, you'll likely get a general answer. Crafting precise prompts increases their utility and can create surprising results.
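To make that concrete, here is a minimal sketch of the difference between a general and a precise prompt. The prompts (and the rough "specificity" heuristic) are my own illustration, not any particular tool's API - it only builds and inspects the strings.

```python
# A vague prompt leaves everything to the model.
general_prompt = "Write about our product."

# A precise prompt states length, audience, tone, and concrete facts.
precise_prompt = (
    "Write a 150-word product description for a noise-cancelling headset. "
    "Audience: remote workers. Tone: practical, not hyped. "
    "Include battery life (30 h) and a one-line call to action."
)

def specificity_hints(prompt: str) -> int:
    """Count rough signals of specificity: explicit constraints like
    a word count, audience, tone, and required facts."""
    signals = ["word", "Audience", "Tone", "Include"]
    return sum(1 for s in signals if s in prompt)

print(specificity_hints(general_prompt))  # no constraint signals
print(specificity_hints(precise_prompt))  # several constraint signals
```

The point isn't the counting function - it's that every constraint you add removes a decision the model would otherwise make generically on your behalf.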
Even if AI only achieves 80% of the desired outcome independently, it did so without a human, a salary, or the hours and days of time a person would have needed.
Unfortunately, if you're asking the wrong questions, the answers still won't help you.
That's why it matters not only that you use the right tool but also that you use it to solve the right problem. In addition, many businesses lose sight of the issues they're solving because they get distracted by bright and shiny new opportunities.
Conclusion
Sifting the wheat from the chaff has become more complicated — and not just in AI. Figuring out what news is real, who to trust, and what companies won't misuse your data seems like it has almost become a full-time job.
If you take the time, you will see a lot of exciting progress.
Public perception is likely to trend downward in the next news cycle, which is to be expected. After the peak of inflated expectations comes the trough of disillusionment.
Regardless, AI will continue to become more capable, ubiquitous, and autonomous. The question is only how long until it affects your business and industry.
Some Timeless Wisdom From Socrates
Small distinctions separate wise men from fools ... Perhaps most important among them is what the wise man deems consequential.
How Does That Apply to Me or Trading?
The concept of Socrates' Triple Filter applies to markets as well.
When I was a technical trader, rather than looking at fundamental data and scouring the news daily, I focused on developing dynamic and adaptive systems and processes. These looked at a universe of trading algorithms to identify which were in phase and likely to perform well in the current market environment.
That focus has become more concentrated as we've transitioned to using advanced mathematics and AI to understand markets.
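The "in phase" idea above can be sketched in a few lines. This is a hypothetical illustration, not the actual system described: it scores each algorithm in a universe by a simple rolling mean/volatility ratio and keeps only those whose recent risk-adjusted performance clears a bar.

```python
import statistics

def in_phase(returns_by_algo, window=20, min_score=0.1):
    """Return the algorithms whose recent risk-adjusted performance
    (mean return over volatility, within a rolling window) clears a bar."""
    selected = []
    for name, returns in returns_by_algo.items():
        recent = returns[-window:]
        vol = statistics.pstdev(recent)
        score = statistics.mean(recent) / vol if vol > 0 else 0.0
        if score >= min_score:
            selected.append(name)
    return selected

# Invented return streams for illustration only.
universe = {
    "trend_follower": [0.004, 0.006, 0.005, 0.007, 0.006] * 4,  # steady gains
    "mean_reverter": [0.02, -0.021, 0.019, -0.02, 0.001] * 4,   # choppy, near-zero
}
print(in_phase(universe))  # only the steady performer is "in phase"
```

A real system would use far more robust scoring, but the design choice is the same: let recent evidence, not headlines, decide which models get capital.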
Filter Out What Isn't Good For You.
There are too many ways that the media (meaning the techniques, graphics, music, etc.), the people reporting it, and even the news itself appeal to the fear and greed of human nature.
That's part of why I don't watch the news on TV anymore. It seems like story after story is about terrible things. During a recent visit with my mother, I listened as she watched the news. There was a constant stream of "oh no," "oh my," and "that's terrible." You don't even have to watch the news to know what it says.
These concepts also apply to what you feed your algorithms. Garbage in, garbage out. Just because you can plug in more data doesn't mean that data will add value. Deciding what "not to do" and "what not to listen to" is equally as important as deciding what to do.
Artificial intelligence is exciting, but artificial stupidity is terrifying.
What's The Purpose of News for You?
My purpose changes what I'm looking for and how much attention I pay to different types of information. Am I reading or watching the news for entertainment, to learn something new, or to find something relevant and actionable?
One of my favorite activities is looking for new insights and interesting articles to share with you and my team. If you aren't getting my weekly reading list on Fridays - you're missing out. You can sign up here.
By the way, I recently found a site, Ground News, that makes it easy to compare news sources, read between the lines of media bias, and break free from the blinders the algorithms put on what we see. I'd love to hear about tools or sites you think are worth sharing.
Getting back to Socrates' three filters and business, I often ask myself: is it important, does it affect our edge, or can I use it as a catalyst for getting what we want?
There's a lot of noise out there competing for your attention. Stay focused.
Onwards!
Posted at 05:54 PM in Business, Current Affairs, Healthy Lifestyle, Ideas, Market Commentary, Personal Development, Religion, Science, Television, Trading, Trading Tools, Web/Tech, Writing