Ideas

  • How Smart Is ChatGPT?

    It gets a little old talking about ChatGPT so often … but it's rightfully taking the world by storm, and the innovations, improvements, and use cases keep coming. 

    This week, I'm keeping it simple. 

    VisualCapitalist put together a chart that helps contextualize how well ChatGPT performs on several popular standardized exams. 

    How smart is ChatGPT? We examine exam scores in this infographic

    via visualcapitalist

    It also shows the comparison between versions 3.5 and 4.

    ChatGPT 4 improvements include plugins, access to the internet, and the ability to analyze visual inputs. 

    Interestingly, there were a couple of places where version 4 didn't improve … Regardless, it is already outperforming the average human in these scenarios.

    Obviously, the ability to perform well on a test isn't a direct analog to intelligence – especially general intelligence. However, it's a sign that these tools can become important partners and assets in your business. Expect that it will take developing custom systems to truly transform your business, but there are a lot of easy wins you can stack by exploring what's out there already. 

    The takeaway is that you're missing out if you aren't experimenting. 

  • The Power of Intellectual Property

    Industry is changing fast.  In the 1900s, the world's titans mainly produced tangible goods (or the infrastructure for them).  The turn of the century brought an increasing return on intangible assets like data, software, and even brand value … hence the rise of the influencer. 

    As technology increasingly changes business, jobs, and the world, intellectual property becomes an increasingly important way to set yourself apart and make your business more valuable. 

    Which Countries are Granted the Most New Patents?

    While America is leading the charge in A.I., we're falling behind in creating and protecting intellectual property. 

    Patents can help protect your business, but they also do much more.  I.P. creates inroads for partnerships with other businesses, and it can also be a moat that makes it more difficult for others to enter your space.  On a small scale, this is a standard business strategy.  What happens when the scale increases?

    The number of patents we're seeing created is a testament to the pace of innovation in the world, but it should also be a warning to protect your innovations.  Remember, however, that anything you get a patent on becomes public knowledge – so be careful with your trade secrets. 

    If you want to hear more of my thoughts on this, I recorded a podcast with Goldstein Patent Law on this subject.

  • Does Astrology Work?

    As I experiment with social media in preparation for the launch of my book "Compounding Insights: Turning Thoughts Into Things in the Age of AI," we've started producing short videos where employees ask me questions … some dumb and some smart. 

    One we just released asked the question, "Does astrology work?" Here is my response.

     

    via Howard Getson's YouTube Channel.

    The short answer is no … at least not in the way many believers wish it would.  Nonetheless, many get value from astrology because it helps them think about themselves and others from a different perspective while providing comfort and structure. 

    It's like a nightlight in the dark.  It doesn't make you any safer, but it feels like it. 

    Unfortunately, like many things … some people take it too far.

    Trading is more accessible than ever before.  We've gone from scrums of traders in trading pits to armchair experts investing in real estate, cryptocurrencies, options, and more from the comfort of their couches in their underwear. 

    With accessibility often comes misuse.  And, in this specific case … astrology. 

    "Mercury Is In Retrograde … Should I Sell My Stocks?"

    A blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do just as well as one carefully selected by experts. – Burt Malkiel, “A Random Walk Down Wall Street”

    My son brought an iPhone app to my attention – Bull and Moon: "Find stocks whose stars align with yours."


    Human Mel via Twitter 

    After you create your "astrological investor profile," their "proprietary financial astrology algorithm recommends an optimal portfolio of six stocks and shows your compatibility score with thousands more." 


    Bull and Moon via Zach Getson

    It's fun to hear about things like the Big Mac Index or the Super Bowl Indicator … but this seems pretty out there.

    The picks were pedestrian: Oracle, Hasbro, American International Group, Microsoft, Yum! Brands, and FedEx. 

    The logic and commentary were entertaining.  The choices were based on "similarities in business decisions," "shared outlooks on humanity," and "strong mutual success metrics."

    Here is an excerpt: 

    Zach can usually let strong FedEx Corporation lead the relationship, but at the same time, Zach will invest many times over. This relationship will be full of success, understanding on many levels, and a lot of fun. 

    At least it's entertaining … even if it doesn't constitute an edge.  Whether it works or not, there is a demand for it in the market.  Some people pay thousands of dollars for astrology-based trading advice. 

    As a reminder, in trading, life, and business … if you don't know what your edge is, you don't have one.

  • Let’s Talk AI Ethics

    It's no secret that I've been a proponent of the proliferation and adoption of AI. I've been the CEO of AI companies since the early 90s, but it was the early 2000s when I realized what the future had in store.

    A few years ago, in anticipation of where we are today, I started participating in discussions about the need for governance, ethical guidelines, and response frameworks for AI (and other exponential technologies). 


    Last week, I said that we shouldn't slow down the progress of generative AI … and I stand by that, but that doesn't mean we shouldn't be working with urgency to put bumper rails in place to keep AI in check. 

    There are countless ethical concerns we should be talking about: 

    1. Bias and Discrimination – AI systems are only as objective as the data they are trained on. If the data is biased, the AI system will also be biased. Not only does that create discrimination, it also leaves those systems more susceptible to error. The toy sketch after this list shows how quickly skewed training data turns into skewed decisions. 
    2. Privacy and Data Protection – AI systems are capable of collecting vast amounts of personal data, and if this data is misused or mishandled, it could have serious consequences for individuals' privacy and security. We need to manage not just the security of these systems, but also where and how they get their data. 
    3. Accountability, Explainability, and Transparency – As AI systems become increasingly complex and autonomous, it can be difficult to determine who is responsible when something goes wrong, not to mention the difficulty of understanding how public-facing systems arrive at their decisions. Explainability becomes even more important for generative AI models because they interface with anyone and everyone. 
    4. Human Agency and Control – As AI systems become more sophisticated and autonomous, there is fear about their autonomy … what amount of human control is necessary, and how do we prevent "malevolent" AI? Within human agency and control, there are two sub-topics. The first is job displacement … do we prevent AI from taking certain jobs as one way to preserve jobs and the economy, or do we look at other options like universal basic income? The second is governance … where does international governance come in, and how do we ensure that ethical standards are upheld to prevent misuse or abuse of the technology by bad actors?
    5. Safety and Reliability – Ensuring the safety and reliability of AI systems is important, particularly in areas such as transportation and healthcare where the consequences of errors can be severe. Setting standards of performance is important, especially considering the outsized response when an AI system does commit an "error". Think about how many car crashes are caused by human error and negligence… and then think about the media coverage when a self-driving car causes one. If we want AI to be adopted and trusted, it's going to need to be held to much higher standards. 
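
    To make the first concern concrete, here is a toy sketch, with entirely synthetic data and a made-up approval rule, of how a model trained on biased historical decisions simply learns and repeats that bias. It's an illustration only, not a reference to any real system.

```python
# Toy illustration: a model trained on biased historical decisions
# learns (and repeats) that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups (0 and 1) drawn with identical underlying qualification scores ...
group = rng.integers(0, 2, size=n)
score = rng.normal(loc=600, scale=50, size=n)

# ... but the historical approvals penalized group 1 by 40 points.
approved = (score - 40 * group + rng.normal(0, 10, size=n)) > 600

# Train on the biased history (score is rescaled so the solver converges easily).
X = np.column_stack([(score - 600) / 50, group])
model = LogisticRegression().fit(X, approved)

# The trained model now approves group 1 far less often at the very same score.
for g in (0, 1):
    x = np.array([[(620 - 600) / 50, g]])
    p = model.predict_proba(x)[0, 1]
    print(f"predicted approval probability for group {g} at a score of 620: {p:.0%}")
```

    Nothing in that code set out to discriminate; the model just faithfully reproduced the pattern in its training data.  That's why auditing the data matters as much as auditing the model.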

    These are all real and present concerns that we should be aware of. However, it's not as simple as creating increasingly tight chains to limit AI. We have to be judicious in our application of regulation and oversight. We intrinsically know the dangers of overregulation – of limiting freedoms. Not only will it stifle creativity and output, it will also encourage bad actors to go further beyond what law-abiding creators can do.  

    If you want to see one potential AI risk management framework, here's a proposal from the National Institute of Standards and Technology – it's called AI RMF 1.0. It's a nice jumping-off point for thinking about internal controls and preparing for impending regulation.  To be one step more explicit … if you are a business owner or a tech creator, you should be getting a better handle on your own internal controls, as well as anticipating external influence. 

    In conclusion, there is a clear need for AI ethics to ensure that this transformative technology is used in a responsible and ethical manner. There are many issues we need to address as AI becomes more ubiquitous and powerful. That's not an excuse to slow down, because slowing down only lets others get ahead. If you're only scared of AI, you're not paying enough attention. You should be excited.

    Hope that helps.

  • Meet The Jetsons: 60 Years Later

    Since my last name is Getson, I often get "Jetson" at restaurants.  As the CEO of a tech company focused on innovative technologies, it somehow feels fitting. 

    Despite only airing for one season (from 1962-1963), The Jetsons remains a cultural phenomenon.  It supposedly takes place in 2062, but in the story, the family's patriarch (George Jetson) was born on July 31, 2022.  Not too long ago. 

    Obviously, this is a whimsical representation of the future – spurred on by fears of the Soviet Union and the space race.  But it captured the imagination of multiple generations of kids.  Flying cars, talking dogs, robot maids, and food printing … what's not to love?

     

    I don't intend to dissect the show about what they got right or wrong, but I do want to briefly examine what they imagined based on where we are today. 

    For example, while flying cars aren't ubiquitous yet (like in the Jetsons), we already have driverless cars.  It's likely that by 2062, driverless cars will be pervasive, even if flying cars aren't.  But, frankly, who knows?  That is still possible.

    Meanwhile, both George and Jane work very few hours a week thanks to technology.  While that's a future we can still envision, so far we've used massive technological improvements to increase productivity (instead of working less and keeping output at 1960s levels).  Even with the expected growth of AI, I still believe that humans will choose to pursue purposeful work.

    The Jetsons also underestimated how wireless today's world would become.  George still has to go into the office, and while the family has video phones, the phone is still a piece of hardware connected to a wall rather than mobile and wireless.  Still, 2062 is far enough away that holographic displays are a very real possibility.

    Likewise, while we don't yet have complex robot maids (like Rosie), we already have Roombas… and both AI and Robotics are improving exponentially.

    Meanwhile, we are in the process of creating cheap and sustainable food printing and drone delivery services … which makes the Jetsons look oddly prescient. 

    And, remember, there are still 40 years for us to continue to make progress.  So, while I think it's doubtful cities will look like the spaceports portrayed in the cartoon … I suspect you'll be impressed by how much further along we are than even the Jetsons imagined.

    Not only is the rate of innovation increasing, but so is the rate at which that rate increases.  It's exponential. 

    We live in exciting times!

  • A Few Graphs On The State Of AI in 2023

    Every year, Stanford puts out an AI Index¹ with a massive amount of data attempting to sum up the current state of AI. 

    Last year it was 190 pages … now it's 386 pages.  The report details where research is going and covers current specs, ethics, policy, and more. 

    It is super nerdy … yet, it's probably worth a skim. Here are some of the highlights that I shared last year. 

    Here are a few things that caught my eye and might help set some high-level context for you. 

    Growth Of AI


    via 2023 AI Index Report

    One thing that's very obvious to the world right now is that the AI space is growing rapidly. And it's happening in many different ways.

    Over the last decade, private investment in AI has increased astronomically … Now, we're seeing government investment increasing, and the frequency and complexity of discussion around AI is exploding as well. 

    A big part of this is due to the massive improvement in the quality of generative AI. 

    Technical Improvements in AI

    via 2023 AI Index Report

    This isn't the first time I've shared charts of this nature, but it's impressive to see the depth and breadth of new AI models. 

    For example, Minerva, a large language and multimodal model released by Google in June of 2022, used roughly 9x more training compute than GPT-3. And that doesn't even capture the improvements already happening in 2023, like GPT-4. 

    While it's important to look at the pure technical improvements, it's also worth recognizing the increased creativity in how AI is being applied. For example, Auto-GPT takes GPT-4 and makes it almost autonomous: it can perform tasks with very little human intervention, it can self-prompt, and it has internet access as well as long-term and short-term memory management. 
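
    To give a flavor of what "almost autonomous" means, here is a deliberately simplified sketch of the kind of self-prompting loop tools like Auto-GPT are built around. To be clear, this is not Auto-GPT's actual code: call_llm is a hypothetical stand-in for whatever model API you would plug in (here it just returns a canned reply), and the memory is reduced to a plain list.

```python
# Hypothetical sketch of a self-prompting agent loop (not Auto-GPT's real code).

def call_llm(prompt: str) -> str:
    """Stand-in for a real model API call; returns a canned 'finished' reply here."""
    return "THOUGHT: nothing left to do\nACTION: finish"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = []              # crude short-term memory of prior steps
    for _ in range(max_steps):
        # The agent writes its own next prompt from the goal plus its memory.
        prompt = (
            f"Goal: {goal}\n"
            f"Previous steps: {memory[-5:]}\n"
            "Reply with a THOUGHT line and an ACTION line."
        )
        reply = call_llm(prompt)
        memory.append(reply)
        if "ACTION: finish" in reply:   # the model decides when it is done
            break
    return memory

if __name__ == "__main__":
    for entry in run_agent("Summarize this week's AI news"):
        print(entry)
```

    The real tools layer web access, plugins, and longer-term memory on top of a loop like this, but the core idea is the same: the model's output becomes part of its own next prompt.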

    Here is an important distinction to make … We're not only getting better at creating models, but we're getting better at using them, and they are getting better at improving themselves. 

    All of that leads to one of the biggest shifts we're currently seeing in AI – which is the shift from academia to industry. This is the difference between thinking and doing, or promise and productive output.

    Jobs In AI

    via 2023 AI Index Report

    In 2022, there were 32 significant industry-produced machine learning models … compared to just 3 by academia. It's no surprise that private industry has more resources than nonprofits and academia, and now we're starting to see the benefits of that surge of cash flowing into artificial intelligence, automation, and innovation. 

    Not only does this result in better models, but also in more jobs. The demand for AI-related skills is skyrocketing in almost every sector, and the number of AI-related job postings has increased significantly as well. 

    Currently, the U.S. is leading the charge, but there's lots of competition. 

    The worry is that not everyone seeking AI-related skills wants to improve the world. For many, the ethics of AI is the elephant in the room. 

    AI Ethics


    via 2023 AI Index Report

    The number of AI misuse incidents is skyrocketing. Since 2012, it has increased 26-fold. And it's more than just deepfakes; AI can be used for many nefarious purposes that aren't as visible.

    Unfortunately, when you invent the car, you also invent the potential for car crashes … when you 'invent' nuclear energy, you create the potential for nuclear bombs. 

    There are other potential negatives as well.  For example, many AI systems (like cryptocurrencies) use vast amounts of energy and produce carbon. So, the ecological impact has to be taken into account as well.

    Luckily, many of the best minds of today are focused on how to create bumpers to rein in AI and to prevent and discourage bad actors. In 2016, only one law focused on artificial intelligence was passed … last year, 37 were. This is a focus not just in America, but around the globe. 

    Conclusion

    Artificial Intelligence is inevitable. It's here, it's growing, and it's amazing.

    Despite America leading the charge in A.I., we're also among the least positive about whether the benefits of these products and services outweigh the drawbacks. China, Saudi Arabia, and India rank the highest. 

    If we don't continue to lead the charge, other countries will … which means we need to address the fears and culture around A.I. in America. The benefits outweigh the costs – but we have to account for the costs and attempt to minimize potential risks as well.

    Pioneers often get arrows in their backs and blood on their shoes.  But they are also the first to reach the new world.

    Luckily, I think momentum is moving in the right direction. As someone who has been in the space since the early '90s, watching my friends start to use AI-powered apps has been rewarding. 

    We are on the right path.

    Onwards!

    _____________________________________

    ¹Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, "The AI Index 2023 Annual Report," AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023. The AI Index 2023 Annual Report by Stanford University is licensed under

  • Yield of Dreams: Laughing Your Way To Financial Freedom

    The movie Field of Dreams came out the year my first son was born. If you haven't seen it, it's a fantastic movie. 

    Whether you've seen it (or not), you might want to see Charlie Epstein's one-man play called Yield of Dreams. 

    I put together a quick video on why you should watch it:

    >> Click Here to check it out

    Many of the people who read my blog, or are subscribed to my newsletter, are either entrepreneurs or in the financial space. While Charlie Epstein moonlights as an actor/comedian, his day job is in financial services. He's incredibly sharp, very knowledgeable … and yes, a little quirky. 

    But that quirkiness is what makes him funny – so much so that you'll be captivated long enough to gain some real value. Charlie does an excellent job teaching people how to do practical things to ensure they have enough money when they retire to live a good life.

    More importantly, he helps you think about your mindsets and what you truly want, so you can live the life you've always dreamed of and deserved. And even though I didn't think I needed to learn anything new, I gained a ton of practical value – and you probably will too.

    As a bonus, half of the proceeds go toward supporting vets with PTSD.

    There aren't many people (or "offers") I'd feel comfortable plugging, but this is one of them. Plus, many of the other people I would put in front of you (like Dan Sullivan, Peter Diamandis, and Mike Koenigs) love Charlie as much as I do. 

    via Yield of Dreams

    So, here's the part I copied from Charlie: In this one-man show you'll discover how to

    • Work less while making more than you ever have before
    • Make more progress towards your dreams in one year than most people do in ten
    • Step into the biggest, boldest and most confident version of yourself
    • Stop worrying about money and start living your dream life

    So, if any of that interests you, I highly recommend signing up. You only have a limited time to do so. 

    >> Just click here to learn more about Yield of Dreams

  • The Benner Cycle: Making Market Predictions

    When I first got interested in trading, I used to look at many traditional sources and old-school market wisdom.  I particularly liked the Stock Trader's Almanac. 

    While there is real wisdom in some of those sources, most might as well be horoscopes or Nostradamus-level predictions.  Throw enough darts, and one of them might hit the bullseye. 

    Traders love patterns, from the simple head-and-shoulders to Fibonacci sequences and Elliott Wave Theory.

    Here's an example from Samuel Benner, an Ohio farmer, in 1875.  That year, he released a book titled "Benner's Prophecies: Future Ups and Downs in Prices," and in it, he shared a now relatively famous chart called the Benner Cycle.  Some claim that it's been accurately predicting the ups and downs of the market for over 100 years.  Let's check it out. 

     

     

    Here's what it does get right … markets go up, and then they go down … and that cycle continues.  Consequently, if you want to make money, you should buy low and sell high … It's hard to call that a competitive advantage.

    Mostly, you're looking at vague predictions with +/- 2-year error bars on a 10-year cycle. 

    However, it was close to the dotcom bust and the 2008 crash … so even if you sold a little early, you'd have been reasonably happy with your decision to follow the cycle.

    The truth is that we use cycle analysis in our live trading models.  However, it is a lot more rigorous and scientific than the Benner Cycle.  The trick is figuring out what to focus on – and what to ignore. 
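
    I'm obviously not going to share our models, but as a rough illustration of the difference between eyeballing a chart and doing something more rigorous, here is a minimal sketch that estimates the dominant cycle in a series with a periodogram and then checks whether that peak stands out from random shuffles. The data, and the 16-year cycle buried in it, are made up for the example.

```python
# Minimal sketch: look for a dominant cycle with a periodogram instead of eyeballing a chart.
# The "returns" here are synthetic; this is an illustration, not a production cycle model.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1896, 2024)                # 128 annual observations

# Synthetic series: a 16-year cycle buried in a lot of noise.
series = 0.8 * np.sin(2 * np.pi * years / 16) + rng.normal(0, 1, size=years.size)

# Periodogram via the FFT: how much power sits at each candidate frequency.
detrended = series - series.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)   # cycles per year

peak = np.argmax(power[1:]) + 1              # skip the zero-frequency (mean) term
print(f"dominant period: {1 / freqs[peak]:.1f} years")

# Crude sanity check: does that peak stand out from what random shuffles produce?
null_peaks = [np.max(np.abs(np.fft.rfft(rng.permutation(detrended)))[1:]) ** 2
              for _ in range(500)]
print(f"peak exceeds {np.mean(power[peak] > np.array(null_peaks)):.0%} of shuffled series")
```

    Even in this toy version, the last step is the important one … a "cycle" only counts if it stands out from what pure noise produces. Most chart-based cycles, Benner's included, probably wouldn't survive that kind of check.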

    Just as humans are good at seeing patterns where there are none … they also tend to see cycles that are nothing but coincidence. 

    This is a reminder that just because an AI chat service recommends something doesn't make it a good recommendation.  Those models do some things well, but making scientifically or mathematically rigorous market predictions probably isn't an area where you should trust ChatGPT or one of its rivals.

    Be careful out there.

  • Should We Temporarily Halt The Progress of Generative AI?

    Several high-profile names (including Elon Musk) have penned an open letter calling for a pause on the creation of models more powerful than GPT-4. 

    In March, OpenAI unveiled GPT-4, and people were rightfully impressed. Now, fears are even greater about the potential consequences of more powerful AI. 

    The letter raises several questions. 

    Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? – Pause Giant AI Experiments: An Open Letter

    The crux of their message is that we shouldn't be blindly creating smarter and more robust AI until we are confident that they can be managed and controlled to maintain a positive impact. 


    During the pause the letter calls for, the suggestion is for AI labs and experts to jointly develop and implement safety protocols that would be audited by an independent agency. At the same time, the letter calls for developers to work with policymakers to increase governance and regulatory authorities. 

    My personal thoughts? Trying to stop (or even pause) the development of something as important as AI is naive and impractical. From the Industrial Revolution to the Information Age, humanity has always embraced new technologies, despite initial resistance and concerns. The AI Age is no different, and attempting to stop its progress would be akin to trying to stop the tide. On top of that, AI development is a global phenomenon, with researchers, institutions, and companies from around the world making significant contributions. Attempting to halt or slow down AI development in one country would merely cede the technological advantage to other nations. In a world of intense competition and rapid innovation, falling behind in AI capabilities could have severe economic and strategic consequences.

    It is bigger than a piece of software or a set of technological capabilities. It represents a fundamental shift in what's possible.

    The playing field changed.  We are not going back. 

    The game changed.  That means what it takes to win or lose changed as well.

    Yes, AI ethics is an important endeavor and should be worked on as diligently as the creation of new AI.  But there is no pause button for exponential technologies like this.

    Change is coming.  Growth is coming. Acceleration is coming. Trying to reject it is an exercise in futility. 

    We will both rise to the occasion and fall to the level of our readiness and preparedness.  

    Actions have consequences, but so does inaction.  In part, we can't stop because bad actors certainly won't stop to give us time to combat them or catch up. 

    When there is some incredible new "thing," there will always be some people who try to avoid it … and some who try to leverage it (for good and bad purposes).

    There will always be promise and peril.

    What you focus on and what you do remains a choice. 

    Transformation Equals Innovation Plus Purposeful Action

    via GapingVoid

    Whether AI creates abundance or doomsday for you will be defined largely by how you perceive and act on its promise and peril. Artificial intelligence holds the potential to address some of the world's most pressing challenges, such as climate change, disease, and poverty. By leveraging AI's capabilities, we can develop innovative solutions and accelerate progress in these areas.

    It's two sides of the same coin. A 6-month hiatus won't stop what's coming. In this case, we need to pave the road as we traverse it. 

    We live in interesting times!

    What do you think?

  • Predicting The Future With Arthur C Clarke

    Last week, I shared a couple of videos that attempted to predict the future. As a result, someone sent me a video of Arthur C. Clarke's predictions that I thought was worth sharing.

    Arthur C. Clarke was a fantastic science fiction writer and a famous futurist. You probably know him as the author of 2001: A Space Odyssey.

    Here are his predictions from 1964, nearly 60 years ago.

     

    via BBC Archive 

    Arthur C. Clarke had a profound impact on the way we imagine the future. Known for his remarkable predictions, Clarke's ideas may have seemed farfetched at times, yet his thoughts on the future and the art of making predictions were grounded in reason.

    If a prophet from the 1960s had accurately described today's technological advancements, their predictions would have sounded equally ridiculous. The only certainty about the future is that it will be fantastical beyond belief, a sentiment Clarke understood well.

    You can be a great futurist even if many of your predictions are off in execution, but correct in direction. For example, Clarke predicted that the advancements in communication would potentially make cities nonexistent. While cities still exist – in much the same way as in the 1960s – people can now work, live, and make a massive difference in their companies from anywhere on the planet, even from a van traveling around the country. Global communication is so easy that it's taken for granted. 

    Since he was a science fiction author, some of what he wrote might seem ridiculous today … for example, super-monkey servants forming trade unions.  But much of what he wrote was about what could happen, and it gave people a way to think about the consequences of their actions and inactions.  As we discussed last week, humans often recognize big changes on the horizon … but they rarely correctly anticipate the consequences. 

    In summary, even though some of Clarke's predictions were farfetched, they were rooted in a deep understanding of human potential and the transformative power of technology. His ability to envision a fantastical future was not only a testament to his imagination, but also served as an inspiration for generations of scientists, engineers, and dreamers. By embracing the unknown and acknowledging the inherent uncertainty of the future, we can continue to push the boundaries of what is possible and strive for a world that is truly beyond belief.

    You won't always be 100% correct, but you'll be much closer than if you reject what's coming.