June 2024

  • Some Timeless Wisdom From Socrates

    Small distinctions separate wise men from fools … Perhaps most important among them is what the wise man deems consequential. 

    This post discusses Socrates' Triple Filter Test, which involves checking information for truth, goodness, and usefulness.  It also explores how this concept applies to decision-making in business and life by focusing on important information and filtering out the rest.  The key to making better choices and staying focused is to avoid damaging or irrelevant information.

    Socrates' Triple Filter

    In ancient Greece, Socrates was reputed to hold knowledge in high esteem.  One day an acquaintance met the great philosopher and said, "Do you know what I just heard about your friend?"

    "Hold on a minute," Socrates replied. "Before telling me anything, I'd like you to pass a little test. It's called the Triple Filter Test."

    "Triple filter?"

    "That's right," Socrates continued.  "Before you talk to me about my friend, it might be a good idea to take a moment and filter what you're going to say. That's why I call it the triple filter test.

    The first filter is Truth.  Have you made absolutely sure that what you are about to tell me is true?"

    "No," the man said, "Actually I just heard about it and…"

    "All right," said Socrates. "So you don't really know if it's true or not. Now let's try the second filter, the filter of Goodness.  Is what you are about to tell me about my friend something good?"

    "No, on the contrary…"

    "So," Socrates continued, "You want to tell me something bad about him, but you're not certain it's true.  You may still pass the test though, because there's one filter left.  The third filter is Usefulness.  Is what you want to tell me about my friend going to be useful to me?"

    "No, not really."

    "Well," concluded Socrates, "If what you want to tell me is neither true, nor good, nor even useful … then why tell it to me at all?"

    With all the divisiveness in the media and in our everyday conversations with friends, family, and strangers … this is a good filter for what you say, what you post, and even how you evaluate markets, the economy, or a business opportunity.
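    To make the idea concrete, here is a minimal, playful sketch of the Triple Filter as a gate on incoming information.  The Item fields and the example data are hypothetical placeholders; the point is simply that something failing all three filters isn't worth your attention.

    from dataclasses import dataclass

    @dataclass
    class Item:
        """A piece of incoming information (a headline, a rumor, a data point)."""
        content: str
        verified: bool      # Truth: have we confirmed it's true?
        constructive: bool  # Goodness: is it something good, not just gossip?
        actionable: bool    # Usefulness: can we actually act on it?

    def passes_triple_filter(item: Item) -> bool:
        # Socrates' test: if it is neither true, nor good, nor useful, drop it.
        return item.verified or item.constructive or item.actionable

    inbox = [
        Item("Rumor about a friend", verified=False, constructive=False, actionable=False),
        Item("Confirmed report relevant to a position", verified=True, constructive=True, actionable=True),
    ]

    print([item.content for item in inbox if passes_triple_filter(item)])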

    How Does That Apply to Me or Trading?

    The concept of Socrates' Triple Filter applies to markets as well.

    When I was a technical trader, rather than looking at fundamental data and scouring the news daily, I focused on developing dynamic and adaptive systems and processes that scanned the universe of trading algorithms to identify which were in phase and likely to perform well in the current market environment.

    That focus has become more concentrated as we've transitioned to using advanced mathematics and AI to understand markets. 
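    To give a rough sense of what that kind of "in phase" selection might look like, here is a simplified, hypothetical sketch.  The scoring rule, lookback window, and threshold are illustrative assumptions, not our actual process: it ranks a universe of algorithms by recent risk-adjusted performance and keeps only those that clear a bar.

    import numpy as np

    def sharpe(returns: np.ndarray) -> float:
        """Annualized Sharpe ratio of a daily return series (risk-free rate ignored)."""
        if returns.std() == 0:
            return 0.0
        return float(np.sqrt(252) * returns.mean() / returns.std())

    def select_in_phase(algo_returns: dict[str, np.ndarray],
                        lookback: int = 63,
                        min_sharpe: float = 1.0) -> list[str]:
        """Keep only algorithms whose recent performance suggests they are in phase."""
        scores = {name: sharpe(r[-lookback:]) for name, r in algo_returns.items()}
        return [name for name, score in sorted(scores.items(), key=lambda kv: -kv[1])
                if score >= min_sharpe]

    # Hypothetical usage with simulated daily returns for three algorithms.
    rng = np.random.default_rng(0)
    universe = {
        "trend_follower": rng.normal(0.0008, 0.01, 252),
        "mean_reverter":  rng.normal(0.0001, 0.01, 252),
        "vol_breakout":   rng.normal(0.0005, 0.01, 252),
    }
    print(select_in_phase(universe))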

    Filter Out What Isn't Good For You

    In contrast, there are too many ways that the media (meaning the techniques, graphics, music, etc.), the people reporting it, and even the news itself appeal to the fear and greed of human nature.

    That's part of why I don't watch the news on TV anymore.  It seems like story after story is about terrible things.  For example, during a recent visit with my mother, I listened as she watched the news.  There was a constant stream of "oh no," "oh my," and "that's terrible."  You don't even have to watch the news to know what it says.

    These concepts also apply to what you feed your algorithms.  Garbage in, garbage out.  Just because you can plug in more data doesn't mean that data will add value.  Deciding "what not to do" and "what not to listen to" is just as important as deciding what to do.
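    As a simple illustration of "deciding what not to listen to" in a data pipeline (a crude sketch, not our actual process; the inputs and threshold are made up), you can screen candidate inputs and drop the ones that show no measurable relationship to what you are trying to predict:

    import numpy as np

    def filter_features(features: dict[str, np.ndarray],
                        target: np.ndarray,
                        min_abs_corr: float = 0.05) -> dict[str, np.ndarray]:
        """Keep only candidate inputs with at least a weak correlation to the target.
        The point: more data isn't automatically better data."""
        kept = {}
        for name, values in features.items():
            corr = np.corrcoef(values, target)[0, 1]
            if np.isfinite(corr) and abs(corr) >= min_abs_corr:
                kept[name] = values
        return kept

    # Hypothetical example: one noisy-but-related series and one pure-noise series.
    rng = np.random.default_rng(1)
    target = rng.normal(size=500)
    candidates = {
        "related_series": target + rng.normal(scale=2.0, size=500),
        "random_noise":   rng.normal(size=500),
    }
    print(list(filter_features(candidates, target)))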

    Artificial intelligence is exciting, but artificial stupidity is terrifying. 

    What's The Purpose of News for You?

    My purpose changes what I'm looking for and how much attention I pay to different types of information.  Am I reading or watching the news for entertainment, to learn something new, or to find something relevant and actionable?

     

    [Image: Socrates quote, "To move the world, we must first move ourselves."]

     

    One of my favorite activities is looking for new insights and interesting articles to share with you and my team.  If you aren't getting my weekly reading list on Fridays, you're missing out.  You can sign up here.

    By the way, I recently found a site, Ground News, that makes it easy to compare news sources, read between the lines of media bias, and break free from the blinders the algorithms put on what we see.  I'd love to hear about tools or sites you think are worth sharing.

    Getting back to Socrates' three filters and business, I often ask myself: is it important, does it affect our edge, or can I use it as a catalyst for getting what we want?

    There's a lot of noise out there competing for your attention.  Stay focused. 

    Onwards!

  • Nvidia In Perspective

    In June of last year, Nvidia passed a trillion-dollar market capitalization.

    Here’s where it stands a year later:

    [Chart: Nvidia’s market cap as of May 2024] via Visual Capitalist

    Did you know that Nvidia is now the third most valuable company in the world?  It sits behind only Microsoft and Apple (though it’s nearing Apple). 

    These figures are even more impressive when you consider that at the beginning of 2020, Nvidia was valued at $145 billion.

    Nvidia’s growth was built largely on the back of AI hype.  Its chips have been a mainstay of AI and data science technologies, benefiting a wide range of AI projects, gaming systems, crypto mining, and more.  It has successfully moved from being a product company to being a platform.

    Do you think it’s going to continue to grow?  I do.

    We’ve talked about hype cycles … nevertheless, Nvidia’s offerings seem to underpin the type of technology that will continue to drive future progress.  So, while we’re seeing some disillusionment toward AI, it may not affect Nvidia as intensely.

    This week, I saw an article in the WSJ titled “The AI Revolution Is Already Losing Steam,” claiming that the pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant.

    This is ridiculous!  We are at the beginning of something growing exponentially.  It’s hard for most people to recognize the blind spot consisting of things they can’t conceive of … and what’s coming is hard to conceive, let alone believe is possible!

  • On The Horizon: Artificial Intelligence Agents

    In last week's article on Stanford's AI Index, we broadly covered many subjects. 

    There's one I felt like covering in more depth: the concept of AI Agents.

    One way to improve AI is to create agentic AI systems capable of autonomous operation in specific environments.  However, agentic AI has long challenged computer scientists, and the technology is only just now starting to show promise.  Current agents can play complex games, like Minecraft, and are getting much better at tackling real-world tasks like research assistance and retail shopping.

    A common discussion point is the future of work.  The concept deals with how automation and AI will redefine the workforce, the workday, and even what we consider to be work. 

    Until now, AI has been confined to very narrow applications.  Powerful applications, but with limited scope.  Generative AI and LLMs have increased the variety of tasks we can use AI for, but that's only the beginning.

    [Image via Aniket Hingane]

    AI agents represent a massive step toward intelligent, autonomous, and multi-modal systems working alongside skilled humans (and replacing unskilled workers) in a wide variety of scenarios. 

    Eventually, these agents will be able to understand, learn, and solve problems without human intervention.  There are a few critical improvements necessary to make that possible (see the sketch after this list):

    • Flexible goal-oriented behavior
    • Persistent memory & state tracking
    • Knowledge transfer & generalization
    • Interaction with real-world environments
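    Here is a toy sketch of how those ingredients fit together in an agent loop.  Everything here is hypothetical and deliberately simplified; a real agentic system would wrap an LLM or planner behind the decide() step and a much richer environment and memory store.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        """A toy agent: a goal, persistent memory, and repeated observe -> decide -> act steps."""
        goal: str
        memory: list[str] = field(default_factory=list)  # persistent state across steps

        def observe(self, environment: list[str]) -> str:
            # Take in whatever the environment currently presents.
            return environment.pop(0) if environment else "nothing new"

        def decide(self, observation: str) -> str:
            # Goal-oriented behavior: relate the observation to the goal,
            # using memory so earlier steps can inform later ones.
            self.memory.append(observation)
            if self.goal.lower() in observation.lower():
                return f"act on: {observation}"
            return "keep searching"

        def run(self, environment: list[str], max_steps: int = 5) -> list[str]:
            return [self.decide(self.observe(environment)) for _ in range(max_steps)]

    agent = Agent(goal="earnings report")
    print(agent.run(["market opens", "earnings report released", "analyst call"]))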

    As models become more flexible in understanding and accomplishing their goals, and begin to apply that knowledge to new real-world domains, they will go from intelligent-seeming tools to powerful partners able to handle multiple tasks the way a human would.

    While they won't be human (or perhaps even seem human), we are on the verge of a technological shift that is a massive improvement from today's chatbots. 

    I like to think of these agents as the new assembly line.  The assembly line revolutionized the workforce and drove an industrial revolution, and I believe AI agents will do the same.

    As technology evolves, improvements in efficiency, effectiveness, and certainty are inevitable.  For example, with a proverbial army of agents creating, refining, and releasing content, it is easy to imagine work that would take multiple humans a week being completed by agents in under an hour (even with human approval steps).

    To make it literal, imagine using agents to write this article. One agent could be skilled in writing outlines and crafting headlines.  Another could focus on research and verification.  Then you have an agent to write, an agent to edit and proofread, and a conductor agent that makes sure the quality is up to snuff and the piece replicates my voice.  If the goal were to make it go viral, there could be a virality agent, an SEO keyword agent, and so on.

    Separating the activities into multiple agents (instead of trying to craft one vertically integrated agent) reduces the chances of "hallucinations" and self-aggrandizement.  It can also, in theory, remove the human from the process entirely.
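    To show what that division of labor might look like, here is a hypothetical sketch.  The agent roles are placeholders that just pass text along; in practice, each would wrap its own prompts, model calls, and tools, with the conductor enforcing a final quality gate.

    from typing import Callable

    # In this sketch, each "agent" is simply a named function from text to text.
    Agent = Callable[[str], str]

    def outline_agent(topic: str) -> str:
        return f"Outline for '{topic}': hook, three points, call to action"

    def research_agent(outline: str) -> str:
        return outline + " | sources checked"

    def writer_agent(research: str) -> str:
        return "Draft based on: " + research

    def editor_agent(draft: str) -> str:
        return draft.replace("Draft", "Edited draft")

    def conductor(pipeline: list[Agent], topic: str) -> str:
        """Run each specialist in order, then apply a final quality check."""
        result = topic
        for agent in pipeline:
            result = agent(result)
        assert "Edited" in result, "quality check failed"  # the conductor's sign-off
        return result

    print(conductor([outline_agent, research_agent, writer_agent, editor_agent],
                    "AI agents as the new assembly line"))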

    [Image via Aniket Hingane]

    Now, I enjoy the writing process, and I'm not trying to remove myself from it.  But the capability is still there.

    As agentification increases, I believe humans will still be a necessary part of the feedback loop.  Soon, we will start to see agent-based companies.  Nonetheless, I still believe that humans will be an important part of the workforce (at least during my lifetime).

    Another reason humans matter is that they are still the gatekeepers … meaning humans have to become comfortable with a process before they allow it.

    Trust and transparency are critical to AI adoption.  Even if AI excels at a task, people are unlikely to use it blindly.  To truly embrace AI, humans need to trust its capabilities and understand how it arrives at its results.  This means AI developers must prioritize building systems that are both effective and understandable.  By fostering a sense of ease and trust, users will be more receptive to the benefits AI or automation offers.

    Said a different way, just because AI can do something doesn't mean you will use the tool or let AI do it.  It has to be done a "certain" way for you to allow it … and that involves a lot of trust.  As a practical reality, humans don't just have to trust the technology; they also have to trust and understand the process.  That means the person building the AI or creating the automation must consider what it would take for a human to feel comfortable enough to allow the benefit.

    Especially as AI becomes more common (and as an increasing amount of content is created solely by artificial systems), the human touch will become a differentiator and a way to appear premium.

    [Image via Aniket Hingane]

    In my business, the goal has never been to automate away the high-value, high-touch parts of our work.  I want to build authentic relationships with the people I care about — and AI and automation promise to eliminate frustration and bother to free us up to do just that.

    The goal in your business should be to identify the parts in between those high-touch periods that aren't your unique ability – and find ways to automate and outsource them. 

    Remember, the heart of AI is still human (at least until our AI Overlords tell us otherwise).

    Onwards!