Market Commentary

  • The Power of Intellectual Property

    Industry is changing fast.  In the 1900s, the world's titans mainly produced tangible goods (or the infrastructure for them).  The turn of this century brought increasing returns on intangible assets like data, software, and even brand value … hence the rise of the influencer. 

    As technology reshapes business, jobs, and the world, intellectual property becomes an increasingly important way to set yourself apart and make your business more valuable. 

    Which Countries Are Granted the Most New Patents?

    While America is leading the charge in A.I., we're falling behind in creating and protecting intellectual property. 

    Patents can help protect your business, but they also do much more.  I.P. creates inroads for partnerships with other businesses, and it can also be a moat that makes it more difficult for others to enter your space.  On a small scale, this is standard business strategy.  What happens when the scale increases?

    The number of patents we're seeing created is a testament to the pace of innovation in the world, but it should also be a warning to protect your innovations.  Remember, however, that anything you get a patent on becomes public knowledge – so be careful with your trade secrets. 

    If you want to hear more of my thoughts on this, I recorded a podcast with Goldstein Patent Law on this subject.

  • Does Astrology Work?

    As I experiment with social media in preparation for the launch of my book "Compounding Insights: Turning Thoughts Into Things in the Age of AI," we've started producing short videos where employees ask me questions … some dumb and some smart. 

    One we just released asked the question, "Does astrology work?" Here is my response.

     

    via Howard Getson's YouTube Channel.

    The first answer is no … at least, not in the way many believers wish it would work.  Nonetheless, many people get value from astrology because it helps them think about themselves and others from a different perspective while providing comfort and structure. 

    It's like a nightlight in the dark.  It doesn't make you any safer, but it feels like it. 

    Unfortunately, like many things … some people take it too far.

    Trading is more accessible than ever before.  We've gone from scrums of traders in trading pits to armchair experts investing in real estate, cryptocurrencies, options, and more from the comfort of their couches in their underwear. 

    With accessibility often comes misuse.  And, in this specific case … astrology. 

    "Mercury Is In Retrograde … Should I Sell My Stocks?"

    A blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do just as well as one carefully selected by experts. – Burton Malkiel, “A Random Walk Down Wall Street”

    My son brought to my attention an iPhone app called Bull and Moon: "Find stocks whose stars align with yours."


    Human Mel via Twitter 

    After you create your "astrological investor profile," their "proprietary financial astrology algorithm recommends an optimal portfolio of six stocks and shows your compatibility score with thousands more." 


    Bull and Moon via Zach Getson

    It's fun to hear about things like the Big Mac Index or the Super Bowl Indicator … but this seems pretty out there.

    The picks were pedestrian: Oracle, Hasbro, American International Group, Microsoft, Yum! Brands, and FedEx. 

    The logic and commentary were entertaining.  The choices were based on "similarities in business decisions," "shared outlooks on humanity," and "strong mutual success metrics."

    Here is an excerpt: 

    Zach can usually let strong FedEx Corporation lead the relationship, but at the same time, Zach will invest many times over. This relationship will be full of success, understanding on many levels, and a lot of fun. 

    At least it's entertaining … even if it doesn't constitute an edge.  Whether it works or not, there is demand for it in the market.  Some people pay thousands of dollars for astrology-based trading advice.
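    Malkiel's dart-throwing monkey is easy to simulate. Here is a toy sketch with made-up tickers and returns (none of these figures are real data): random equal-weight portfolios drawn from a stock universe tend to land near the universe's average return, which is the whole point of the quote.

```python
import random

UNIVERSE = {  # hypothetical one-year returns by ticker (made up)
    "ORCL": 0.12, "HAS": -0.05, "AIG": 0.08, "MSFT": 0.25,
    "YUM": 0.10, "FDX": -0.02, "AAPL": 0.30, "KO": 0.06,
}

def monkey_portfolio(universe, n=6, seed=None):
    """Equal-weight return of n randomly 'darted' tickers."""
    rng = random.Random(seed)
    picks = rng.sample(sorted(universe), n)
    return sum(universe[t] for t in picks) / n

# Over many trials, monkey portfolios cluster around the universe average.
trials = [monkey_portfolio(UNIVERSE, seed=s) for s in range(500)]
average = sum(trials) / len(trials)
```

    The monkey doesn't beat the market; it *is* the market, minus fees. That's the real argument for being skeptical of any stock-picking "system" without a demonstrable edge.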

    As a reminder, in trading, life, and business … if you don't know what your edge is, you don't have one.

  • Let’s Talk AI Ethics

    It's no secret that I've been a proponent of the proliferation and adoption of AI. I've been the CEO of AI companies since the early '90s, but it wasn't until the early 2000s that I realized what the future had in store.

    A few years ago, in anticipation of where we are today, I started participating in discussions about the need for governance, ethical guidelines, and response frameworks for AI (and other exponential technologies). 


    Last week, I said that we shouldn't slow down the progress of generative AI … and I stand by that.  But that doesn't mean we shouldn't be working urgently to put bumper rails in place to keep AI in check. 

    There are countless ethical concerns we should be talking about: 

    1. Bias and Discrimination – AI systems are only as objective as the data they are trained on. If the data is biased, the AI system will be biased too. Not only does that create discrimination, it also leaves those systems more susceptible to manipulation. 
    2. Privacy and Data Protection – AI systems can collect vast amounts of personal data, and if that data is misused or mishandled, it could have serious consequences for individuals' privacy and security. We need to manage not only the security of these systems but also where and how they get their data. 
    3. Accountability, Explainability, and Transparency – As AI systems become increasingly complex and autonomous, it can be difficult to determine who is responsible when something goes wrong, not to mention the difficulty in understanding how public-facing systems arrive at their decisions. Explainability becomes more important for generative AI models as they're used to interface with anyone and everyone. 
    4. Human Agency and Control – As AI systems become more sophisticated and autonomous, there is fear about their autonomy … what amount of human control is necessary, and how do we prevent "malevolent" AI? Within human agency and control, we have two sub-topics. First, job displacement … do we prevent AI from taking certain jobs as one way to preserve jobs and the economy, or do we look at other options like universal basic income? Second, international governance … how do we ensure that ethical standards are upheld to prevent misuse or abuse of the technology by bad actors?
    5. Safety and Reliability – Ensuring the safety and reliability of AI systems is important, particularly in areas such as transportation and healthcare where the consequences of errors can be severe. Setting standards of performance is important, especially considering the outsized response when an AI system does commit an "error". Think about how many car crashes are caused by human error and negligence… and then think about the media coverage when a self-driving car causes one. If we want AI to be adopted and trusted, it's going to need to be held to much higher standards. 

    These are all real and present concerns that we should be aware of. However, it's not as simple as creating increasingly tight chains to limit AI. We have to be judicious in our applications of regulation and oversight. We intrinsically know the dangers of overregulation – of limiting freedoms. Not only will it stifle creativity and output, but it will also encourage bad actors to go further beyond what law-abiding creators can do.  

    If you want to see one potential AI risk management framework, here's a proposal from the National Institute of Standards and Technology called AI RMF 1.0. It's a nice jumping-off point for thinking about internal controls and preparing for impending regulation.  To be one step more explicit … if you are a business owner or a tech creator, you should be getting a better handle on your own internal controls, as well as anticipating external influence. 

    In conclusion, there is a clear need for AI ethics to ensure that this transformative technology is used in a responsible and ethical manner. There are many issues we need to address as AI becomes more ubiquitous and powerful. That's not an excuse to slow down, because slowing down only lets others get ahead. If you're only scared of AI, you're not paying enough attention. You should be excited.

    Hope that helps.

  • A Few Graphs On The State Of AI in 2023

    Every year, Stanford puts out an AI Index1 with a massive amount of data attempting to sum up the current state of AI. 

    Last year it was 190 pages … now it's 386 pages.  The report details where research is going and covers current specs, ethics, policy, and more. 

    It is super nerdy … yet it's probably worth a skim. I shared some of the highlights from last year's report as well. 

    Here are a few things that caught my eye and might help set some high-level context for you. 

    Growth Of AI


    via 2023 AI Index Report

    One thing that's very obvious to the world right now is that the AI space is growing rapidly. And it's happening in many different ways.

    Over the last decade, private investment in AI has increased astronomically … Now, we're seeing government investment increasing, and the frequency and complexity of discussion around AI is exploding as well. 

    A big part of this is due to the massive improvement in the quality of generative AI. 

    Technical Improvements in AI

    via 2023 AI Index Report

    This isn't the first time I've shared charts of this nature, but it's impressive to see the depth and breadth of new AI models. 

    For example, Minerva, a large language model released by Google in June 2022, used roughly 9x more training compute than GPT-3. And that doesn't even capture the improvements already arriving in 2023, like GPT-4. 

    While it's important to look at the pure technical improvements, it's also worth noting the increased creativity in applications of AI. For example, Auto-GPT takes GPT-4 and makes it almost autonomous. It can perform tasks with very little human intervention, it can self-prompt, and it has internet access plus long-term and short-term memory management. 
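    The core idea behind an Auto-GPT-style agent can be sketched in a few lines. This is an illustrative toy, not Auto-GPT's actual code: `fake_model` stands in for a real GPT-4 call, and the "memory" is just a list of completed steps.

```python
def fake_model(goal, memory):
    """Stand-in for an LLM call: propose the next step toward the goal."""
    step = f"step {len(memory) + 1} toward: {goal}"
    done = len(memory) >= 2   # a real agent decides this from its own output
    return step, done

def run_agent(goal, max_steps=10):
    memory = []               # short-term memory: results of completed steps
    for _ in range(max_steps):
        step, done = fake_model(goal, memory)
        memory.append(step)   # "execute" the step and remember the result
        if done:
            break
    return memory

history = run_agent("summarize the AI Index report")
print(len(history))  # → 3 (the toy model stops after three steps)
```

    The loop is the whole trick: the model's output becomes the next input, so the system self-prompts until it decides the goal is met (or a step cap is hit).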

    Here is an important distinction to make … We're not only getting better at creating models, but we're getting better at using them, and they are getting better at improving themselves. 

    All of that leads to one of the biggest shifts we're currently seeing in AI – the shift from academia to industry. This is the difference between thinking and doing, or promise and productive output.

    Jobs In AI

    via 2023 AI Index Report

    In 2022, there were 32 significant industry-produced machine learning models … compared to just 3 from academia. It's no surprise that private industry has more resources than nonprofits and academia. And now we're starting to see the benefits of that surge of cash flowing into artificial intelligence, automation, and innovation. 

    Not only does this result in better models, but also in more jobs. The demand for AI-related skills is skyrocketing in almost every sector, and the number of job postings has increased significantly as well. 

    Currently, the U.S. is leading the charge, but there's lots of competition. 

    The worry is that not everyone seeking AI-related skills wants to improve the world. The ethics of AI is the elephant in the room for many. 

    AI Ethics


    via 2023 AI Index Report

    The number of AI misuse incidents is skyrocketing. Since 2012, it has increased 26-fold. And it's more than just deepfakes; AI can be used for many nefarious purposes that aren't as visible.

    Unfortunately, when you invent the car, you also invent the potential for car crashes … when you 'invent' nuclear energy, you create the potential for nuclear bombs. 

    There are other potential negatives as well.  For example, like cryptocurrencies, many AI systems use vast amounts of energy and produce carbon. So, the ecological impact has to be taken into account as well.

    Luckily, many of the best minds of today are focused on how to create bumpers to rein in AI and prevent and discourage bad actors. In 2016, only 1 law was passed focused on Artificial Intelligence … 37 were passed last year. This is a focus not just in America, but around the globe. 

    Conclusion

    Artificial Intelligence is inevitable. It's here, it's growing, and it's amazing.

    Despite America leading the charge in A.I., we're also among the least positive about the benefits of these products and services. China, Saudi Arabia, and India rank the highest. 

    If we don't continue to lead the charge, other countries will … which means we need to address the fears and the culture around A.I. in America. The benefits outweigh the costs – but we have to account for the costs and attempt to minimize potential risks as well.

    Pioneers often get arrows in their backs and blood on their shoes.  But they are also the first to reach the new world.

    Luckily, I think momentum is moving in the right direction. Watching my friends start to use AI-powered apps has been rewarding for someone who has been in the space since the early '90s. 

    We are on the right path.

    Onwards!

    _____________________________________

    1Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023. The AI Index 2023 Annual Report by Stanford University is licensed under

  • Yield of Dreams: Laughing Your Way To Financial Freedom

    The movie Field of Dreams came out the year my first son was born. If you haven't seen it, it's a fantastic movie. 

    Whether you’ve seen it or not, you might want to see Charlie Epstein’s one-man play, Yield of Dreams.

    I put together a quick video on why you should watch it:

    >> Click Here to check it out

    Many of the people who read my blog, or are subscribed to my newsletter, are either entrepreneurs or in the financial space. While Charlie Epstein moonlights as an actor/comedian, his day job is in financial services. He's incredibly sharp, very knowledgeable … and yes, a little quirky. 

    But that quirkiness is what makes him funny – so much so that you'll be captivated long enough to gain some real value. Charlie does an excellent job teaching people how to do practical things to ensure they have enough money when they retire to live a good life.

    More importantly, he helps you think about your mindsets and what you truly want, so you can live the life you've always dreamed of and deserved. And even though I didn't think I needed to learn anything new, I gained a ton of practical value – and you probably will too.

    As a bonus, half of the proceeds go toward supporting vets with PTSD.

    There aren't many people (or "offers") I'd feel comfortable plugging, but this is one of them. Likewise, many of the other people I'd put in front of you (like Dan Sullivan, Peter Diamandis, and Mike Koenigs) love Charlie as much as I do. 

    via Yield of Dreams

    So, here's the part I copied from Charlie: in this one-man show, you'll discover how to:

    • Work less while making more than you ever have before
    • Make more progress towards your dreams in one year than most people do in ten
    • Step into the biggest, boldest and most confident version of yourself
    • Stop worrying about money and start living your dream life

    So, if any of that interests you, I highly recommend signing up. You only have a limited time to do so. 

    >> Just click here to learn more about Yield of Dreams

  • The Benner Cycle: Making Market Predictions

    When I first got interested in trading, I used to look at many traditional sources and old-school market wisdom.  I particularly liked the Stock Trader's Almanac.

    While there is real wisdom in some of those sources, most might as well be horoscopes or Nostradamus-level predictions.  Throw enough darts, and one of them might hit the bullseye. 

    Traders love patterns, from the simple head-and-shoulders, to Fibonacci sequences, to Elliott Wave Theory.

    Here's an example from Samuel Benner, an Ohio farmer, in 1875.  That year, he released a book titled "Benner's Prophecies of Future Ups and Downs in Prices," and in it, he shared a now relatively famous chart called the Benner Cycle.  Some claim it has accurately predicted the ups and downs of the market for over 100 years.  Let's check it out. 

     

     

    Here's what it does get right … markets go up, and then they go down … and that cycle continues.  Consequently, if you want to make money, you should buy low and sell high … It's hard to call that a competitive advantage.

    Mostly, you're looking at vague predictions with +/- 2-year error bars on a 10-year cycle. 

    However, it was close to the dotcom bust and the 2008 crash … so even if you sold a little early, you'd have been reasonably happy with your decision to follow the cycle.

    The truth is that we use cycle analysis in our live trading models.  However, it is a lot more rigorous and scientific than the Benner Cycle.  The trick is figuring out what to focus on – and what to ignore. 
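    For contrast with Benner's hand-drawn chart, here's a minimal sketch of what more rigorous cycle analysis can mean: use a Fourier transform to find the dominant cycle length in a detrended series. This is a toy illustration (real models add proper detrending, significance tests, and walk-forward validation), not our actual trading code.

```python
import numpy as np

def dominant_cycle(prices):
    """Return the dominant cycle length (in bars) of a series via the DFT."""
    x = np.asarray(prices, dtype=float)
    x = x - x.mean()                      # crude detrend: remove the mean
    spectrum = np.abs(np.fft.rfft(x))     # amplitude at each frequency
    freqs = np.fft.rfftfreq(len(x))       # frequencies in cycles per bar
    spectrum[0] = 0.0                     # ignore the zero-frequency term
    return 1.0 / freqs[np.argmax(spectrum)]

# A synthetic series with a known 20-bar cycle recovers that period.
t = np.arange(200)
print(round(dominant_cycle(np.sin(2 * np.pi * t / 20))))  # → 20
```

    On a clean sine wave this works perfectly; on real market data, the hard part is deciding whether a spectral peak is a genuine cycle or noise … which is exactly the "what to focus on and what to ignore" problem.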

    Just as humans are good at seeing patterns where there are none … they also tend to see cycles that are nothing but coincidence. 

    This is a reminder that just because an AI chat service recommends something doesn't make it a good recommendation.  Those models do some things well.  Making scientifically or mathematically rigorous market predictions probably isn't an area where you should trust ChatGPT or one of its rivals.

    Be careful out there.

  • Top U.S. Banks by Uninsured Deposits …

    It's been about a month since we discussed Silicon Valley Bank (SVB). But the impact is still lingering. I know friends whose money is still tied up, and we've continued to see increased coverage of banks' perceived failures. 

    Currently, there is $7 trillion sitting uninsured in American banks. VisualCapitalist put together a list of the 30 biggest banks by uninsured deposits. 

     

    The U.S. Banks With the Most Uninsured Deposits

    via visualcapitalist

    Many of the banks on this list are systemically important to the banking system … which means the government would be more incentivized to prevent their collapse. 

    It's important to make clear that these banks differ from SVB in several ways. To start, their user base is much more diverse … but even more importantly, their loans and held-to-maturity securities are much lower as a percentage of total deposits. Those holdings took up the vast majority of SVB's deposits, while they make up less than 50% for the systemically important banks on this list. But, according to VisualCapitalist, 11 banks on this list have ratios over 90%, just like SVB did, which puts them at much higher risk. 
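    The risk measure described above reduces to simple arithmetic: loans plus held-to-maturity securities as a share of total deposits. The balance-sheet figures below are made-up round numbers for illustration, not any actual bank's filings.

```python
def illiquidity_ratio(loans, htm_securities, deposits):
    """Loans plus held-to-maturity securities as a % of total deposits."""
    return 100.0 * (loans + htm_securities) / deposits

# A bank with an SVB-like profile vs. a more diversified one ($B, made up):
risky = illiquidity_ratio(loans=70, htm_securities=90, deposits=170)
safer = illiquidity_ratio(loans=55, htm_securities=25, deposits=200)
print(round(risky), round(safer))  # → 94 40
```

    The intuition: the higher the ratio, the less of the bank's deposit base it can pay out quickly without selling long-dated assets at a loss … which is precisely the trap SVB fell into when rates rose.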

    Regulators stepped up in the wake of the SVB collapse, and the Fed also launched the Bank Term Funding Program (BTFP), as we discussed in the last article on this subject. But, it remains to be seen what will happen in the future. 

    Does the Fed have another option besides saving the banks and backing deposits? If not, market participants will start to rely on the Fed to come to the rescue, making even riskier decisions than they already were. 

    It feels like the Fed is stuck between a rock and a hard place, but hopefully, we will start to see some movement in the right direction.

  • Should We Temporarily Halt The Progress of Generative AI?

    Several high-profile names (including Elon Musk) have penned an open letter calling for a pause on the creation of models more powerful than GPT-4.

    In March, OpenAI unveiled GPT-4, and people were rightfully impressed. Now, fears are even greater about the potential consequences of more powerful AI. 

    The letter raises a couple of questions. 

    Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? – Pause Giant AI Experiments: An Open Letter

    The crux of their message is that we shouldn't be blindly creating smarter and more robust AI until we are confident that they can be managed and controlled to maintain a positive impact. 


    During the pause the letter calls for, the suggestion is for AI labs and experts to jointly develop and implement safety protocols that would be audited by an independent agency. At the same time, the letter calls for developers to work with policymakers to increase governance and regulatory authorities. 

    My personal thoughts? Trying to stop (or even pause) the development of something as important as AI is naive and impractical. From the Industrial Revolution to the Information Age, humanity has always embraced new technologies, despite initial resistance and concerns. The AI Age is no different, and attempting to stop its progress would be akin to trying to stop the tide. On top of that, AI development is a global phenomenon, with researchers, institutions, and companies from around the world making significant contributions. Attempting to halt or slow down AI development in one country would merely cede the technological advantage to other nations. In a world of intense competition and rapid innovation, falling behind in AI capabilities could have severe economic and strategic consequences.

    It is bigger than a piece of software or a set of technological capabilities. It represents a fundamental shift in what's possible.

    The playing field changed.  We are not going back. 

    The game changed.  That means what it takes to win or lose changed as well.

    Yes, AI ethics is an important endeavor and should be worked on as diligently as the creation of new AI.  But there is no pause button for exponential technologies like this.

    Change is coming.  Growth is coming. Acceleration is coming. Trying to reject it is an exercise in futility. 

    We will both rise to the occasion and fall to the level of our readiness and preparedness.  

    Actions have consequences, but so does inaction.  In part, we can't stop because bad actors certainly won't stop to give us time to combat them or catch up. 

    When there is some incredible new "thing," there will always be some people who try to avoid it … and some who try to leverage it (for good and bad purposes).

    There will always be promise and peril.

    What you focus on and what you do remains a choice. 

    Transformation Equals Innovation Plus Purposeful Action (via Gapingvoid)

    Whether AI creates abundance or doomsday for you will be defined largely by how you perceive and act on its promise and peril. Artificial intelligence holds the potential to address some of the world's most pressing challenges, such as climate change, disease, and poverty. By leveraging AI's capabilities, we can develop innovative solutions and accelerate progress in these areas.

    It's two sides of the same coin. A 6-month hiatus won't stop what's coming. In this case, we need to pave the road as we traverse it. 

    We live in interesting times!

    What do you think?

  • Tech Over The Long Run

    Humans are wired to think locally and linearly … because that's what it took to survive in a pre-industrial age. However, that leaves most of us very bad at predicting technology and its impact on our future. 

    To put the future of technology in perspective, it's helpful to look at the history of technology to help understand what an amazing era we live in. 

    Our World In Data put together a great chart that shows the entire history of humanity in relation to innovation. 


    Max Roser via ourworldindata

    3.4 million years ago, our ancestors supposedly started using tools. 2.4 million years later they harnessed fire. 43,000 years ago (almost a million years later) we developed the first instrument, a flute. 

    That's an insane amount of time. Compare that to this:

    In 1903, the Wright Brothers first took flight … 66 years later, we were on the moon. 

    That's less than a blink in the history of humankind, and yet we're still increasing speed. 

    Technology is a snowball rolling down a mountain, gaining size and speed … and now it's an avalanche driven by AI. 

    But innovation isn't only driven by scientists. It's driven by people like you and me having a vision and making it a reality. 

    Even though I'm the CEO of an AI company, I don't build artificial intelligence myself … but I can envision a bigger future and communicate it to people who can. I can also use tools that automate and innovate, freeing me to focus on more important ways to create value. 

    The point is that you can't let the perfect get in the way of the good.  AI's impact is inevitable.  You don't have to wait to see where the train's going … you should be boarding. 

    Onwards! 

  • Can We Predict The Future?!

    New technologies fascinate me … As we approach the Singularity, I guess that is becoming human nature. 

    Second Thought has put together a video that looks at various predictions from the early 1900s. It is a fun watch – Check it out. 

    via Second Thought

    It's interesting to look at what they strategically got right compared to what was tactically different. 

    In a 1966 interview, Marshall McLuhan discussed the future of information with ideas that now resonate with AI technologies. He envisioned personalized information, where people request specific knowledge and receive tailored content. This concept has become a reality through AI-powered chatbots like ChatGPT, which can provide customized information based on user inputs.

    Although McLuhan was wary of innovation, he recognized the need to understand emerging trends to maintain control and know when to "turn off the button." 

    While not all predictions are made equal, we seem to have a better idea of what we want than how to accomplish it. 

    The farther the horizon, the more guesswork is involved. Compared to the prior video on predictions from the mid-1900s, this video on the internet from 1995 seems downright prophetic. 

    via YouTube

    There's a lesson there. It's hard to predict the future, but that doesn't mean you can't skate to where the puck is moving. Even if the path ahead is unclear, it's relatively easy to pick your next step, and then the one after that. As long as you keep moving in the right direction and keep taking steps, the result is inevitable.