I was at an event last week where (unsurprisingly) we focused heavily on AI. Conversations ranged from use cases for generative AI (and the ethics of AI image creators) to the long-term effects of AI and its adoption.
Before I could chime in, the conversation had turned to comparisons from past generations. When electricity was harnessed, articles claimed it would never catch on, warned that it would hurt productivity, and fretted over the artisans it might put out of business if it gained traction.
When the radio or TV was released, the older generation was sure it would lead to the death of productivity, have horrible ramifications, and ultimately lead to the next generation's failure.
People resist change. We're wired to avoid harm more than to seek pleasure. The reason is that, evolutionarily, you have to survive to have fun.
On the other hand, my grandmother used to say: "It's easy to get used to nice things."
Here's a transcript of some of my response to the discussion:
With AI, getting from "Zero-to-One" was a surprisingly long and difficult process. Meaning, getting AI tools and capabilities to the point where normal people felt called to use it because they believed it would be useful or necessary was a long and winding road. But now that everybody thinks it's useful, AI use will no longer be "special". Instead, it will become part of the playing field. And because of that, deciding to use AI is no longer a strategy. It's now table stakes.
But that creates a potential problem and distraction. Why? Because, for most of us, our unique ability is not based on our ability to use ChatGPT or some other AI tool of the day. As more people focus on what a particular tool can do, they risk losing sight of what really matters for their real business.
Right now, ChatGPT is hot! But, if you go back to the beginning of the Internet, MySpace or Netscape (or some other tools that were first and big) aren't necessarily the things that caught on and became standards.
I'm not saying that OpenAI and ChatGPT aren't important. But what I believe is more important is that we passed a turning point where, all of a sudden, tons of people started to use something new. That means there will be an increase of focus, resources, and activity concentrated on getting to next in that space.
You don't have to predict the technology; it is often easier and better to predict human nature instead. We're going to find opportunities and challenges we wouldn't otherwise because of the concentration of energy, focus, and effort. Consequently, AI, business, and life will evolve.
For most of us, what that means is that over the next five years, our success will not be tied to how well we use a tool that exists today, but rather to how well we develop our capabilities to leverage tools like that to grow our business.
But your success will not be determined by how well you learn to use ChatGPT. It will be determined by how well you envision your future and recognize opportunities to use tools to start making progress toward the things you really want and to become more and more of who you really are. Right?
So, think about your long-term bigger future. Pick a time 5, 10, or even 20 years from now. What does your desired bigger future look like? Can you create a vivid vision where you describe in detail what you'd like to happen in your personal, professional, and business life? Once you've done that, try to imagine what a likely midpoint might look like. Then, using that as a directional compass, imagine what you can do in this coming year to align your actions with the path you've chosen.
Ultimately, as soon as you start finding ways to use emerging technologies in a way that excites you, the fear and gloom fade.
The best way to break through a wall isn't with a wide net, but rather with a sharp blow. You should be decisive and focused.
I remember being an entrepreneur in the late 90s (during the DotCom Bubble). And I remember watching people start to emulate Steve Jobs ... wearing black turtlenecks and talking about which internet company would be the "next big thing" at social gatherings. Looking back, an early sign that a crash was coming was that seemingly everybody had an opinion on what was going to be hot, and too many of them were overly confident.
The point is that almost nobody talks about the Internet that way anymore ... in part because the Internet is now part of the fabric of society. At this point, it would be weird if somebody didn't use the Internet. And you don't really even have to think about how to use the Internet anymore because there's a WHO to do almost all those HOWs.
The same is going to be true for AI. Like with any technology, it will suffer from all the same hype-cycle blues ... inflated expectations and then disillusionment. But, when we come out the other side, AI will be better for it ... just like the Internet, the silver screen, or even radio stations.
Earlier, I mentioned how long it took to get from "Zero-to-One" with AI. But we're still at like a "three" or a "five" on a 100-point scale. Meaning, you are at the beginning of one of the biggest and steepest exponential curves you can imagine. And you're not late … you're early! Even the fact that you're thinking about stuff like this now means that you are massively ahead.
The trick isn't to figure out how to use AI or some AI tool. The trick is to keep the main thing the main thing.
Investing resources into your company is one thing. Realize that there are thousands of these tools out there, and many more coming. You don't have to build something yourself; it is often faster and better to acquire a tool than to spend money developing and building it from scratch.
Think of the Medici family. They invested in people, which in turn triggered the Renaissance. A key to moving forward in the Age of AI will be to invest in the right people who seek to create the kind of world you want to see. Think of this as a strategic investment into creators and entrepreneurs with a vision who are on a path that aligns with yours.
As you get better and better at doing that, you'll see increasing opportunities to use tools to engage people to collaborate with and create joint ventures. Ultimately, you will collaborate with technology where it's your thought partner and then your business partner. We are entering exciting times where AI, automation, and innovation will make extraordinary things possible for people looking for opportunities to do extraordinary things.
It gets a little old talking about ChatGPT so often ... but it's rightfully taking the world by storm, and the innovations, improvements, and use cases keep coming.
This week, I'm keeping it simple.
Visual Capitalist put together a chart that helps contextualize how well ChatGPT performs on several popular placement tests.
It also shows the comparison between versions 3.5 and 4.
ChatGPT 4 improvements include plugins, access to the internet, and the ability to analyze visual inputs.
Interestingly, there were a couple of places where version 4 didn't improve ... Regardless, it is already outperforming the average human in these scenarios.
Obviously, the ability to perform well on a test isn't a direct analog to intelligence - especially general intelligence. However, it's a sign that these tools can become important partners and assets in your business. Expect that it will take developing custom systems to truly transform your business, but there are a lot of easy wins you can stack by exploring what's out there already.
The takeaway is that you're missing out if you aren't experimenting.
Industry is changing fast. Through most of the 20th century, the world's titans mainly produced tangible goods (or the infrastructure for them). The turn of the 21st century brought an increasing return on intangible assets like data, software, and even brand value ... hence the rise of the influencer.
As technology increasingly changes business, jobs, and the world, intellectual property becomes an increasingly important way to set yourself apart and make your business more valuable.
While America is leading the charge in A.I., we're falling behind in creating and protecting intellectual property.
Patents can help protect your business, but they also do much more. I.P. creates inroads for partnerships with other businesses, and it can also be a moat that makes it more difficult for others to enter your space. On a small scale, this is a standard business strategy. What happens when the scale increases?
The number of patents we're seeing created is a testament to the pace of innovation in the world, but it should also be a warning to protect your innovations. Remember, however, that anything you get a patent on becomes public knowledge - so be careful with your trade secrets.
As I experiment with social media in preparation for the launch of my book "Compounding Insights: Turning Thoughts Into Things in the Age of AI," we've started producing short videos where employees ask me questions ... some dumb and some smart.
One we just released asked the question, "Does astrology work?" Here is my response.
The first answer is "no" ... at least not the way many believers wish it would. Nonetheless, many get value from astrology because it helps them think about themselves and others from a different perspective while providing comfort and structure.
It's like a nightlight in the dark. It doesn't make you any safer, but it feels like it.
Unfortunately, like many things ... some people take it too far.
Trading is more accessible than ever before. We've gone from scrums of traders in trading pits to armchair experts investing in real estate, cryptocurrencies, options, and more from the comfort of their couches in their underwear.
With accessibility often comes misuse. And, in this specific case ... astrology.
"Mercury Is In Retrograde ... Should I Sell My Stocks?"
"A blindfolded monkey throwing darts at a newspaper's financial pages could select a portfolio that would do just as well as one carefully selected by experts." - Burton Malkiel, "A Random Walk Down Wall Street"
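Malkiel's monkey is easy to sanity-check with a quick simulation. The sketch below is illustrative only (randomly generated returns, not real market data): it draws thousands of random six-stock portfolios and shows their average performance converging on the market average, which is exactly the monkey's "edge."

```python
import random
import statistics

random.seed(42)

# Illustrative universe: 1,000 hypothetical stocks whose annual returns are
# random draws (8% mean, 20% stdev) -- the assumption behind the dart-throwing
# monkey is that stock selection adds no information beyond this distribution.
returns = [random.gauss(0.08, 0.20) for _ in range(1000)]

def portfolio_return(picks):
    """Average return of an equally weighted portfolio of the picked stocks."""
    return statistics.mean(returns[i] for i in picks)

# The "monkey": 10,000 random six-stock portfolios (six picks, like Bull and Moon).
monkey = [portfolio_return(random.sample(range(1000), 6)) for _ in range(10_000)]

print(f"Average random-portfolio return: {statistics.mean(monkey):.1%}")
print(f"Market average return:           {statistics.mean(returns):.1%}")
```

If expert selection carries no real information, the expert is drawing from the same distribution as the monkey, which is the point of the quote.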
My son brought to my attention an iPhone app, Bull and Moon: "Find stocks whose stars align with yours."
After you create your "astrological investor profile," their "proprietary financial astrology algorithm recommends an optimal portfolio of six stocks and shows your compatibility score with thousands more."
The picks were pedestrian: Oracle, Hasbro, American International Group, Microsoft, Yum! Brands, and FedEx.
The logic and commentary were entertaining. The choices were based on "similarities in business decisions," "shared outlooks on humanity," and "strong mutual success metrics."
Here is an excerpt:
Zach can usually let strong FedEx Corporation lead the relationship, but at the same time, Zach will invest many times over. This relationship will be full of success, understanding on many levels, and a lot of fun.
It's no secret that I've been a proponent of the proliferation and adoption of AI. I've been the CEO of AI companies since the early 90s, but it was the early 2000s when I realized what the future had in store.
A few years ago, in anticipation of where we are today, I started participating in discussions about the need for governance, ethical guidelines, and response frameworks for AI (and other exponential technologies).
Last week, I said that we shouldn't slow down the progress of generative AI ... and I stand by that. But that doesn't mean we shouldn't be working with urgency to put bumper rails in place that keep AI in check.
There are countless ethical concerns we should be talking about:
Bias and Discrimination - AI systems are only as objective as the data they are trained on. If the data is biased, the AI system will also be biased. Not only does that create discrimination, but it also leaves systems more susceptible to manipulation.
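To make the "biased data in, biased decisions out" point concrete, here's a toy sketch (every name and number is invented for illustration): a model that simply memorizes historical approval rates reproduces the skew it was trained on.

```python
# Toy illustration (all data invented): a model trained on biased historical
# decisions reproduces that bias at prediction time.
historical_decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20 +   # 80% approved
    [("group_b", True)] * 40 + [("group_b", False)] * 60     # 40% approved
)

def fit_approval_rates(records):
    """'Train' by memorizing per-group approval frequency -- the simplest
    possible model, and one that inherits whatever skew the data carries."""
    counts = {}
    for group, approved in records:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + approved)
    return {group: k / n for group, (n, k) in counts.items()}

model = fit_approval_rates(historical_decisions)
print(model)  # the learned "policy" mirrors the historical skew exactly
```

Real models are more sophisticated, but the mechanism is the same: without deliberate correction, the training data's skew becomes the system's behavior.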
Privacy and Data Protection - AI systems are capable of collecting vast amounts of personal data, and if this data is misused or mishandled, it could have serious consequences for individuals' privacy and security. The security of these systems needs to be managed, but also where and how they get their data.
Accountability, Explainability, and Transparency - As AI systems become increasingly complex and autonomous, it can be difficult to determine who is responsible when something goes wrong, not to mention the difficulty in understanding how public-facing systems arrive at their decisions. Explainability becomes more important for generative AI models as they're used to interface with anyone and everyone.
Human Agency and Control - When AI systems become more sophisticated and autonomous, there is fear about their autonomy ... what amount of human control is necessary, and how do we prevent "malevolent" AI? Within human agency and control, we have two sub-topics. First is job displacement: do we prevent AI from taking certain jobs as one way to preserve jobs and the economy, or do we look at other options like universal basic income? We also have to ask where international governance comes in and how we ensure that ethical standards are upheld to prevent misuse or abuse of the technology by bad actors.
Safety and Reliability - Ensuring the safety and reliability of AI systems is important, particularly in areas such as transportation and healthcare where the consequences of errors can be severe. Setting standards of performance is important, especially considering the outsized response when an AI system does commit an "error". Think about how many car crashes are caused by human error and negligence... and then think about the media coverage when a self-driving car causes one. If we want AI to be adopted and trusted, it's going to need to be held to much higher standards.
These are all real and present concerns that we should be aware of. However, it's not as simple as creating increasingly tight chains to limit AI. We have to be judicious in our applications of regulation and oversight. We intrinsically know the dangers of overregulation - of limiting freedoms. Not only will it stifle creativity and output, but it will only encourage bad actors to go further beyond what the law-abiding creators can do.
If you want to see one potential AI risk management framework, here's a proposal by the National Institute of Standards and Technology called AI RMF 1.0. It's a nice jumping-off point for thinking about internal controls and preparing for impending regulation. To be one step more explicit ... if you are a business owner or a tech creator, you should be getting a better handle on your own internal controls, as well as anticipating external influence.
In conclusion, there is a clear need for AI ethics to ensure that this transformative technology is used in a responsible and ethical manner. There are many issues we need to address as AI becomes more ubiquitous and powerful. That's not an excuse to slow down, because slowing down only lets others get ahead. If you're only scared of AI, you're not paying enough attention. You should be excited.
One thing that's very obvious to the world right now is that the AI space is growing rapidly. And it's happening in many different ways.
Over the last decade, private investment in AI has increased astronomically ... Now, we're seeing government investment increase, and the frequency and complexity of discussions around AI are exploding as well.
A big part of this is due to the massive improvement in the quality of generative AI.
This isn't the first time I've shared charts of this nature, but it's impressive to see the depth and breadth of new AI models.
For example, Minerva, a large language model released by Google in June of 2022, used roughly 9x more training compute than GPT-3. And we can't even see the improvements already happening in 2023, like with GPT-4.
While it's important to look at the pure technical improvements, it's also worth recognizing the increasingly creative applications of AI. For example, Auto-GPT takes GPT-4 and makes it almost autonomous. It can perform tasks with very little human intervention, it can self-prompt, and it has internet access along with long-term and short-term memory management.
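For intuition, the "self-prompting" idea behind tools like Auto-GPT boils down to a loop in which the model's output is appended to its own next prompt. Here's a minimal sketch with a stubbed model call (`fake_llm` is a stand-in invented for illustration; the real project adds tools, memory stores, and internet access):

```python
# Minimal sketch of a self-prompting agent loop in the spirit of Auto-GPT.
def fake_llm(prompt: str) -> str:
    """Pretend model: emits one numbered step per call, then finishes.
    A real agent would call an actual LLM API here."""
    step = prompt.count("DONE:") + 1
    return f"step {step}" if step <= 3 else "FINISH"

def run_agent(goal: str, max_iterations: int = 10) -> list[str]:
    history = [f"GOAL: {goal}"]
    for _ in range(max_iterations):          # hard cap keeps the loop bounded
        thought = fake_llm("\n".join(history))
        if thought == "FINISH":
            break
        history.append(f"DONE: {thought}")   # the output feeds the next prompt
    return history

print(run_agent("research AI hype cycles"))
```

The loop is trivial; the power (and the risk) comes from what the model is allowed to do between iterations.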
Here is an important distinction to make … We're not only getting better at creating models, but we're getting better at using them, and they are getting better at improving themselves.
All of that leads to one of the biggest shifts we're currently seeing in AI - which is the shift from academia to industry. This is the difference between thinking and doing, or promise and productive output.
In 2022, there were 32 significant industry-produced machine learning models ... compared to just 3 from academia. It's no surprise that private industry has more resources than nonprofits and academia, and now we're starting to see the benefits of that surge of cash flowing into artificial intelligence, automation, and innovation.
Not only does this result in better models, but also in more jobs. The demand for AI-related skills is skyrocketing in almost every sector. On top of the demand for skills, the amount of job postings has increased significantly as well.
Currently, the U.S. is leading the charge, but there's lots of competition.
The worry is that not everyone is looking to use AI-related skills to improve the world. The ethics of AI is the elephant in the room for many.
The number of AI misuse incidents is skyrocketing. Since 2012, it has increased 26-fold. And it's more than just deepfakes; AI can be used for many nefarious purposes that aren't as visible.
Unfortunately, when you invent the car, you also invent the potential for car crashes ... when you 'invent' nuclear energy, you create the potential for nuclear bombs.
There are other potential negatives as well. For example, many AI systems (like cryptocurrencies before them) consume vast amounts of energy and produce carbon emissions. So, the ecological impact has to be taken into account as well.
Luckily, many of the best minds of today are focused on how to create bumpers to rein in AI and prevent and discourage bad actors. In 2016, only 1 law was passed focused on Artificial Intelligence ... 37 were passed last year. This is a focus not just in America, but around the globe.
Conclusion
Artificial Intelligence is inevitable. It's here, it's growing, and it's amazing.
Despite America leading the charge in A.I., we're also among the lowest in positivity about the benefits and drawbacks of these products and services. China, Saudi Arabia, and India rank the highest.
If we don't continue to lead the charge, other countries will ... which means we need to address the fears and culture around A.I. in America. The benefits outweigh the costs, but we have to account for the costs and attempt to minimize potential risks as well.
Pioneers often get arrows in their backs and blood on their shoes. But they are also the first to reach the new world.
Luckily, I think momentum is moving in the right direction. Watching my friends start to use AI-powered apps has been rewarding for someone who has been in the space since the early '90s.
We are on the right path.
Onwards!
_____________________________________
[1] Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, "The AI Index 2023 Annual Report," AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023. The AI Index 2023 Annual Report by Stanford University is licensed under
Many of the people who read my blog, or are subscribed to my newsletter, are either entrepreneurs or in the financial space. While Charlie Epstein moonlights as an actor/comedian, his day job is in financial services. He's incredibly sharp, very knowledgeable ... and yes, a little quirky.
But that quirkiness is what makes him funny - so much so that you'll be captivated long enough to gain some real value. Charlie does an excellent job teaching people how to do practical things to ensure they have enough money when they retire to live a good life.
More importantly, he helps you think about your mindsets and what you truly want, so you can live the life you've always dreamed of and deserved. And even though I didn't think I needed to learn anything new, I gained a ton of practical value – and you probably will too.
As a bonus, half of the proceeds go toward supporting vets with PTSD.
There aren't many people (or "offers") I'd feel comfortable plugging, but this is one of them. As well, many of the other people I would put in front of you (like Dan Sullivan, Peter Diamandis, and Mike Koenigs) love Charlie as much as I do.
When I first got interested in trading, I used to look at many traditional sources and old-school market wisdom. I particularly liked the Stock Trader's Almanac.
While there is real wisdom in some of those sources, most might as well be horoscopes or Nostradamus-level predictions. Throw enough darts, and one of them might hit the bullseye.
Traders love patterns, from the simple head-and-shoulders to Fibonacci sequences and Elliott Wave Theory.
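Of those patterns, Fibonacci retracements are at least simple to compute: the conventional levels mark fractional pullbacks of a prior price move. Here's a quick sketch (the ratios are standard convention; whether they predict anything is another matter entirely):

```python
# Standard Fibonacci retracement levels for a move from `low` to `high`.
# The ratios are conventional; their predictive power is widely debated.
FIB_RATIOS = (0.236, 0.382, 0.500, 0.618, 0.786)

def retracement_levels(low: float, high: float) -> dict:
    """Price levels at each conventional retracement of the low-to-high move."""
    span = high - low
    return {r: round(high - r * span, 2) for r in FIB_RATIOS}

print(retracement_levels(100.0, 150.0))
```

The arithmetic is trivial, which is part of the appeal: anyone can draw the lines. That says nothing about whether prices respect them.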
Here's an example from Samuel Benner, an Ohio farmer, in 1875. That year, he released a book titled "Benner's Prophecies: Future Ups and Downs in Prices," and in it, he shared a now relatively famous chart called the Benner Cycle. Some claim that it's been accurately predicting the ups and downs of the market for over 100 years. Let's check it out.
Here's what it does get right ... markets go up, and then they go down ... and that cycle continues. Consequently, if you want to make money, you should buy low and sell high ... It's hard to call that a competitive advantage.
Mostly, you're looking at vague predictions with +/- 2-year error bars on a 10-year cycle.
However, it was close to the dotcom bust and the 2008 crash ... so even if you sold a little early, you'd have been reasonably happy with your decision to follow the cycle.
The truth is that we use cycle analysis in our live trading models. However, it is a lot more rigorous and scientific than the Benner Cycle. The trick is figuring out what to focus on – and what to ignore.
Just as humans are good at seeing patterns where there are none ... they tend to see cycles that aren't anything but coincidences.
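That's the difference a rigorous test makes. As an illustration (synthetic data, not our actual trading models), a simple autocorrelation check separates a genuine cycle from noise that merely looks cyclical:

```python
import math
import random

random.seed(7)

def autocorrelation(series, lag):
    """Pearson correlation between a series and itself shifted by `lag`."""
    n = len(series) - lag
    x, y = series[:n], series[lag:]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic "markets": a genuine 20-period cycle buried in noise vs. pure noise.
cyclical = [math.sin(2 * math.pi * t / 20) + random.gauss(0, 0.3) for t in range(400)]
noise = [random.gauss(0, 1) for _ in range(400)]

print(f"cycle @ lag 20: {autocorrelation(cyclical, 20):.2f}")  # strong peak
print(f"noise @ lag 20: {autocorrelation(noise, 20):.2f}")     # near zero
```

A real cycle produces a sharp, repeatable peak at its period; noise produces nothing you can't also find by squinting at a chart. The Benner Cycle, with its multi-year error bars, looks much more like the latter.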
This is a reminder that just because an AI chat service recommends something doesn't make it a good recommendation. Those models do some things well, but making scientifically or mathematically rigorous market predictions probably isn't an area to trust ChatGPT or one of its rivals with.
In March, OpenAI unveiled GPT-4, and people were rightfully impressed. Now, fears are even greater about the potential consequences of more powerful AI.
The letter raises a couple of questions.
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? - Pause Giant AI Experiments: An Open Letter
The crux of their message is that we shouldn't be blindly creating smarter and more robust AI until we are confident that they can be managed and controlled to maintain a positive impact.
During the pause the letter calls for, the suggestion is for AI labs and experts to jointly develop and implement safety protocols that would be audited by an independent agency. At the same time, the letter calls for developers to work with policymakers to increase governance and regulatory authorities.
My personal thoughts? Trying to stop (or even pause) the development of something as important as AI is naive and impractical. From the Industrial Revolution to the Information Age, humanity has always embraced new technologies, despite initial resistance and concerns. The AI Age is no different, and attempting to stop its progress would be akin to trying to stop the tide. On top of that, AI development is a global phenomenon, with researchers, institutions, and companies from around the world making significant contributions. Attempting to halt or slow down AI development in one country would merely cede the technological advantage to other nations. In a world of intense competition and rapid innovation, falling behind in AI capabilities could have severe economic and strategic consequences.
It is bigger than a piece of software or a set of technological capabilities. It represents a fundamental shift in what's possible.
The playing field changed. We are not going back.
The game changed. That means what it takes to win or lose changed as well.
Yes, AI ethics is an important endeavor and should be worked on as diligently as the creation of new AI. But there is no pause button for exponential technologies like this.
Change is coming. Growth is coming. Acceleration is coming. Trying to reject it is an exercise in futility.
We will both rise to the occasion and fall to the level of our readiness and preparedness.
Actions have consequences, but so does inaction. In part, we can't stop because bad actors certainly won't stop to give us time to combat them or catch up.
When there is some incredible new "thing," there will always be some people who try to avoid it ... and some who try to leverage it (for good and bad purposes).
There will always be promise and peril.
What you focus on and what you do remains a choice.
Whether AI creates abundance or doomsday for you will be defined largely by how you perceive and act on the promise and peril you perceive. Artificial intelligence holds the potential to address some of the world's most pressing challenges, such as climate change, disease, and poverty. By leveraging AI's capabilities, we can develop innovative solutions and accelerate progress in these areas.
It's two sides of the same coin. A 6-month hiatus won't stop what's coming. In this case, we need to pave the road as we traverse it.
Last week, I shared a couple of videos that attempted to predict the future. As a result, someone sent me a video of Arthur C. Clarke's predictions that I thought was worth sharing.
Arthur C. Clarke had a profound impact on the way we imagine the future. Known for his remarkable predictions, Clarke's ideas may have seemed farfetched at times, yet his thoughts on the future and the art of making predictions were grounded in reason.
If a prophet from the 1960s were to describe today's technological advancements in exaggerated terms, their predictions would sound equally ridiculous. The only certainty about the future is that it will be fantastical beyond belief, a sentiment Clarke understood well.
You can be a great futurist even if many of your predictions are off in execution, but correct in direction. For example, Clarke predicted that the advancements in communication would potentially make cities nonexistent. While cities still exist - in much the same way as in the 1960s - people can now work, live, and make a massive difference in their companies from anywhere on the planet, even from a van traveling around the country. Global communication is so easy that it's taken for granted.
As a science fiction author, some of what he wrote about might seem ridiculous today. For example, super-monkey servants creating trade unions. Much of what he wrote about was what could happen (and to provide a way for people to think about the consequences of their actions and inactions). As we discussed last week, humans often recognize big changes on the horizon ... but they rarely correctly anticipate the consequences.
In summary, even though some of Clarke's predictions were farfetched, they were rooted in a deep understanding of human potential and the transformative power of technology. His ability to envision a fantastical future was not only a testament to his imagination, but also served as an inspiration for generations of scientists, engineers, and dreamers. By embracing the unknown and acknowledging the inherent uncertainty of the future, we can continue to push the boundaries of what is possible and strive for a world that is truly beyond belief.
You won't always be 100% correct, but you'll be much closer than if you reject what's coming.
Posted at 03:38 PM in Business, Current Affairs, Gadgets, Ideas, Market Commentary, Personal Development, Science, Trading Tools, Web/Tech