Since my last name is Getson, I often get "Jetson" at restaurants. As the CEO of a tech company focused on innovative technologies, it somehow feels fitting.
Despite only airing for one season (from 1962 to 1963), The Jetsons remains a cultural phenomenon. It supposedly takes place in 2062, but in the story, the family's patriarch (George Jetson) was born on July 31, 2022. Not too long ago.
Obviously, this is a whimsical representation of the future - spurred on by fears of the Soviet Union and the space race. But it captured the imagination of multiple generations of kids. Flying cars, talking dogs, robot maids, and food printing ... what's not to love?
I don't intend to dissect the show about what they got right or wrong, but I do want to briefly examine what they imagined based on where we are today.
For example, while flying cars aren't ubiquitous yet (like in the Jetsons), we already have driverless cars. It's likely that by 2062, driverless cars will be pervasive, even if flying cars aren't. But, frankly, who knows? That is still possible.
Meanwhile, George works very few hours a week thanks to advances in technology. That's a future we can still envision. But so far, despite massive technological improvements, we've chosen to increase productivity rather than work less while keeping output at 1960s levels. Even with the expected growth of AI, I still believe that humans will choose to pursue purposeful work.
The Jetsons also underestimated the wireless nature of today's world. George still has to go into the office, and while the family has video phones, they're pieces of hardware wired to the wall rather than mobile and wireless. That said, 2062 is far enough away that holographic displays are still a very real possibility.
Likewise, while we don't yet have complex robot maids (like Rosie), we already have Roombas... and both AI and Robotics are improving exponentially.
Meanwhile, we are in the process of creating cheap and sustainable food printing and drone delivery services ... which makes the Jetsons look oddly prescient.
And, remember, there are still 40 years for us to continue to make progress. So, while I think it's doubtful cities will look like the spaceports portrayed in the cartoon ... I suspect that you'll be impressed by how much further we are along than even the Jetsons imagined.
Not only is the rate of innovation increasing, but so is the rate at which that rate increases. It's exponential.
Many of the people who read my blog, or are subscribed to my newsletter, are either entrepreneurs or in the financial space. While Charlie Epstein moonlights as an actor/comedian, his day job is in financial services. He's incredibly sharp, very knowledgeable ... and yes, a little quirky.
But that quirkiness is what makes him funny - so much so that you'll be captivated long enough to gain some real value. Charlie does an excellent job teaching people how to do practical things to ensure they have enough money when they retire to live a good life.
More importantly, he helps you think about your mindsets and what you truly want, so you can live the life you've always dreamed of and deserved. And even though I didn't think I needed to learn anything new, I gained a ton of practical value – and you probably will too.
As a bonus, half of the proceeds go toward supporting vets with PTSD.
There aren't many people (or "offers") I'd feel comfortable plugging, but this is one of them. What's more, many of the other people I would put in front of you (like Dan Sullivan, Peter Diamandis, and Mike Koenigs) love Charlie as much as I do.
When I first got interested in trading, I used to look at many traditional sources and old-school market wisdom. I particularly liked the Stock Trader's Almanac.
While there is real wisdom in some of those sources, most might as well be horoscopes or Nostradamus-level predictions. Throw enough darts, and one of them might hit the bullseye.
Traders love patterns, from the simple head-and-shoulders to Fibonacci sequences and Elliott Wave Theory.
Here's an example from Samuel Benner, an Ohio farmer. In 1875, he released a book titled "Benner's Prophecies of Future Ups and Downs in Prices," and in it, he shared a now relatively famous chart called the Benner Cycle. Some claim it has accurately predicted the ups and downs of the market for over 100 years. Let's check it out.
Here's what it does get right ... markets go up, and then they go down ... and that cycle continues. Consequently, if you want to make money, you should buy low and sell high ... It's hard to call that a competitive advantage.
Mostly, you're looking at vague predictions with +/- 2-year error bars on a 10-year cycle.
However, it was close to the dotcom bust and the 2008 crash ... so even if you sold a little early, you'd have been reasonably happy with your decision to follow the cycle.
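For the curious, the Benner chart is easy to reproduce, because it boils down to three repeating year-offset sequences. Here's a toy sketch; the start years and interval patterns below are the commonly cited ones (treat them as folklore, not data):

```python
from itertools import cycle, islice

def benner_years(start_year, intervals, count):
    """Generate `count` predicted years from a start year and a repeating interval pattern."""
    years, year = [start_year], start_year
    for step in islice(cycle(intervals), count - 1):
        year += step
        years.append(year)
    return years

# Commonly cited Benner-cycle patterns (approximate, and disputed):
panics = benner_years(1819, [18, 20, 16], 6)  # years of panic
highs = benner_years(1845, [8, 9, 10], 6)     # good times: sell
lows = benner_years(1841, [9, 7, 11], 6)      # hard times: buy

print(panics)  # [1819, 1837, 1857, 1873, 1891, 1911]
```

Even a quick glance shows the limits: the generated panic years drift a year or two away from the historical crises the chart is credited with calling, which is exactly the +/- 2-year fuzziness noted above.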
The truth is that we use cycle analysis in our live trading models. However, it is a lot more rigorous and scientific than the Benner Cycle. The trick is figuring out what to focus on – and what to ignore.
Just as humans are prone to seeing patterns where there are none, they also tend to see cycles that are nothing but coincidence.
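To make that concrete, here's a small illustrative sketch (not our production approach): a naive spectral peak finder applied to a pure random walk will always report a "dominant cycle," even though none exists.

```python
import numpy as np

rng = np.random.default_rng(0)

def dominant_cycle(prices):
    """Period (in samples) of the largest spectral peak in the log returns."""
    returns = np.diff(np.log(prices))        # detrend: work with log returns
    spectrum = np.abs(np.fft.rfft(returns))
    freqs = np.fft.rfftfreq(returns.size)
    k = spectrum[1:].argmax() + 1            # skip the zero-frequency (DC) bin
    return 1.0 / freqs[k]

# A random walk has no genuine cycle, yet a "dominant" one always shows up:
walk = np.exp(np.cumsum(rng.normal(0, 0.01, 2520)))  # ~10 years of daily noise
print(f"spurious dominant cycle: {dominant_cycle(walk):.1f} days")
```

Rigorous cycle analysis has to test whether a peak like this is statistically distinguishable from noise (for example, against a permutation or bootstrap baseline) before it's worth trading on.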
This is a reminder that just because an AI chat service recommends something doesn't make it a good recommendation. Those models do some things well ... but making scientifically or mathematically rigorous market predictions probably isn't an area where you should trust ChatGPT or one of its rivals.
It's been about a month since we discussed Silicon Valley Bank (SVB). But the impact is still lingering. I know friends whose money is still tied up, and we've continued to see increased coverage of banks' perceived failures.
Currently, there is $7 trillion sitting uninsured in American banks. VisualCapitalist put together a list of the 30 biggest banks by uninsured deposits.
Many of the banks on this list are systemically important to the banking system ... which means the government would be more incentivized to prevent their collapse.
It's important to make clear that these banks differ from SVB in several ways. To start, their customer base is much more diverse ... but even more importantly, their loans and held-to-maturity securities are much lower as a percentage of total deposits. Those holdings accounted for the vast majority of SVB's deposits, while they make up less than 50% for most of the systemically important banks on this list. Still, according to VisualCapitalist, 11 banks on the list have ratios over 90%, just like SVB, which puts them at much higher risk.
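To make the metric concrete, here's a minimal sketch of the ratio being described; the bank names and dollar figures below are made up for illustration, not taken from the VisualCapitalist data:

```python
def risk_ratio(loans, held_to_maturity, total_deposits):
    """Loans plus held-to-maturity securities as a share of total deposits."""
    return (loans + held_to_maturity) / total_deposits

# Hypothetical figures (in dollars) purely for illustration:
banks = {
    "Bank A": risk_ratio(120e9, 60e9, 200e9),  # 0.90 -> SVB-like risk profile
    "Bank B": risk_ratio(50e9, 30e9, 200e9),   # 0.40 -> far more cushion
}

flagged = [name for name, ratio in banks.items() if ratio >= 0.90]
print(flagged)  # ['Bank A']
```

The intuition: the higher the ratio, the less liquid the bank's assets are relative to deposits that could be withdrawn, so a deposit run forces selling held-to-maturity securities at a loss ... which is what sank SVB.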
Regulators stepped up in the wake of the SVB collapse, and the Fed also launched the Bank Term Funding Program (BTFP), as we discussed in the last article on this subject. But, it remains to be seen what will happen in the future.
Does the Fed have another option besides saving the banks and backing deposits? If not, market participants will learn to rely on the Fed to come to the rescue and will make even riskier decisions than they already do.
It feels like the Fed is stuck between a rock and a hard place, but hopefully, we will start to see some movement in the right direction.
In March, OpenAI unveiled GPT-4, and people were rightfully impressed. Now, fears about the potential consequences of more powerful AI are even greater. In response, the Future of Life Institute published an open letter calling for a six-month pause on training systems more powerful than GPT-4.
The letter raises a couple of questions.
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? - Pause Giant AI Experiments: An Open Letter
The crux of their message is that we shouldn't be blindly creating smarter and more robust AI until we are confident that they can be managed and controlled to maintain a positive impact.
During the pause the letter calls for, the suggestion is for AI labs and experts to jointly develop and implement safety protocols that would be audited by an independent agency. At the same time, the letter calls for developers to work with policymakers to increase governance and regulatory authorities.
My personal thoughts? Trying to stop (or even pause) the development of something as important as AI is naive and impractical. From the Industrial Revolution to the Information Age, humanity has always embraced new technologies, despite initial resistance and concerns. The AI Age is no different, and attempting to stop its progress would be akin to trying to stop the tide.
On top of that, AI development is a global phenomenon, with researchers, institutions, and companies from around the world making significant contributions. Attempting to halt or slow down AI development in one country would merely cede the technological advantage to other nations. In a world of intense competition and rapid innovation, falling behind in AI capabilities could have severe economic and strategic consequences.
It is bigger than a piece of software or a set of technological capabilities. It represents a fundamental shift in what's possible.
The playing field changed. We are not going back.
The game changed. That means what it takes to win or lose changed as well.
Yes, AI ethics is an important endeavor and should be worked on as diligently as the creation of new AI. But there is no pause button for exponential technologies like this.
Change is coming. Growth is coming. Acceleration is coming. Trying to reject it is an exercise in futility.
We won't simply rise to the occasion; we will fall to the level of our readiness and preparedness.
Actions have consequences, but so does inaction. In part, we can't stop because bad actors certainly won't stop to give us time to combat them or catch up.
When there is some incredible new "thing," there will always be some people who try to avoid it ... and some who try to leverage it (for good and bad purposes).
There will always be promise and peril.
What you focus on and what you do remains a choice.
Whether AI creates abundance or doomsday for you will be defined largely by how you perceive and act on the promise and peril you perceive. Artificial intelligence holds the potential to address some of the world's most pressing challenges, such as climate change, disease, and poverty. By leveraging AI's capabilities, we can develop innovative solutions and accelerate progress in these areas.
It's two sides of the same coin. A 6-month hiatus won't stop what's coming. In this case, we need to pave the road as we traverse it.
A Few Graphs On The State Of AI in 2023
Every year, Stanford puts out an AI Index1 with a massive amount of data attempting to sum up the current state of AI.
Last year it was 190 pages ... now it's 386 pages. The report details where research is going and covers current capabilities, ethics, policy, and more.
It is super nerdy ... yet it's probably worth a skim, just as it was when I shared highlights from last year's edition.
Here are a few things that caught my eye and might help set some high-level context for you.
Growth Of AI
via 2023 AI Index Report
One thing that's very obvious to the world right now is that the AI space is growing rapidly. And it's happening in many different ways.
Over the last decade, private investment in AI has increased astronomically ... Now, we're seeing government investment increasing, and the frequency and complexity of discussion around AI is exploding as well.
A big part of this is due to the massive improvement in the quality of generative AI.
Technical Improvements in AI
This isn't the first time I've shared charts of this nature, but it's impressive to see the depth and breadth of new AI models.
For example, Minerva, a large language model released by Google in June of 2022, used roughly 9x more training compute than GPT-3. And that doesn't even capture the improvements already arriving in 2023, like GPT-4.
While it's important to look at the pure technical improvements, it's also worth noting the increasing creativity of AI applications. For example, Auto-GPT takes GPT-4 and makes it almost autonomous. It can perform tasks with very little human intervention, it can self-prompt, and it has internet access plus long-term and short-term memory management.
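To illustrate the pattern (this is a toy sketch, not Auto-GPT's actual code), here's the basic self-prompting loop: the model's output is appended to a running memory and fed back in as context until the model signals it's done. The `call_llm` function below is a hypothetical stand-in for a real model API:

```python
def call_llm(prompt):
    """Hypothetical stand-in for a real LLM API call (canned responses for demo)."""
    return "DONE: summarized findings" if "step 3" in prompt else "NEXT: gather more data"

def run_agent(goal, max_steps=5):
    """Self-prompting loop: feed each result back in as context until done."""
    memory = []                                   # short-term memory: prior results
    for step in range(1, max_steps + 1):
        prompt = f"Goal: {goal}\nHistory: {memory}\nThis is step {step}. What next?"
        result = call_llm(prompt)                 # the model both plans and acts
        memory.append(result)
        if result.startswith("DONE"):             # the model decides when to stop
            return memory
    return memory

print(run_agent("research flying cars"))
```

Real agent frameworks add tool use (web search, file I/O) and a separate long-term memory store, but the loop above is the core idea.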
Here is an important distinction to make … We're not only getting better at creating models, but we're getting better at using them, and they are getting better at improving themselves.
All of that leads to one of the biggest shifts we're currently seeing in AI - which is the shift from academia to industry. This is the difference between thinking and doing, or promise and productive output.
Jobs In AI
In 2022, there were 32 significant industry-produced machine learning models ... compared to just 3 from academia. It's no surprise that private industry has more resources than nonprofits and academia, and now we're starting to see the benefits of that surge of capital flowing into artificial intelligence, automation, and innovation.
Not only does this result in better models, but also in more jobs. The demand for AI-related skills is skyrocketing in almost every sector. On top of the demand for skills, the number of job postings has increased significantly as well.
Currently, the U.S. is leading the charge, but there's lots of competition.
The worry is that not everyone pursuing AI-related skills wants to improve the world. The ethics of AI is the elephant in the room for many.
AI Ethics
via 2023 AI Index Report
The number of AI misuse incidents is skyrocketing. Since 2012, it has increased 26-fold. And it's more than just deepfakes: AI can be used for many nefarious purposes that are far less visible.
Unfortunately, when you invent the car, you also invent the potential for car crashes ... when you 'invent' nuclear energy, you create the potential for nuclear bombs.
There are other potential negatives as well. For example, many AI systems (much like cryptocurrencies) use vast amounts of energy and produce carbon emissions. So, the ecological impact has to be taken into account, too.
Luckily, many of the best minds of today are focused on creating bumpers to rein in AI and to prevent and discourage bad actors. In 2016, only 1 law focused on artificial intelligence was passed ... last year, 37 were. This is a focus not just in America but around the globe.
Conclusion
Artificial Intelligence is inevitable. It's here, it's growing, and it's amazing.
Despite America leading the charge in AI, Americans are among the least likely to believe these products and services have more benefits than drawbacks. China, Saudi Arabia, and India rank the highest.
If we don't continue to lead the charge, other countries will ... which means we need to address the fears and culture around AI in America. The benefits outweigh the costs, but we have to account for those costs and attempt to minimize the potential risks as well.
Pioneers often get arrows in their backs and blood on their shoes. But they are also the first to reach the new world.
Luckily, I think momentum is moving in the right direction. Watching my friends start to use AI-powered apps has been rewarding as someone who has been in the space since the early '90s.
We are on the right path.
Onwards!
_____________________________________
1Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023. The AI Index 2023 Annual Report by Stanford University is licensed under
Posted at 08:59 PM in Business, Current Affairs, Gadgets, Ideas, Market Commentary, Personal Development, Science, Trading Tools, Web/Tech