If you're interested in AI and its impact on business, life, and our world, I encourage you to check out some of my past podcast interviews.
As I work on finishing my book, "Compounding Insights: Turning Thoughts into Things in the Age of AI," I've revisited several old episodes, and some are certainly worth sharing. I've collected a few here for you to listen to. Let me know what you think.
In 2021, I recorded two interviews that I especially enjoyed. The first was with Dan Sullivan and Steve Krein for Strategic Coach's Free Zone Frontier podcast ... and the second was with Brett Kaufman on his Gravity podcast.
Please listen to them. They were pretty different, but both were well done and interesting.
Free Zone Frontier with Dan Sullivan and Steve Krein
Free Zone Frontier is a Strategic Coach program (and podcast) about creating "Free Zones" - the green space where entrepreneurs collaborate and create without competition.
It's a transformative idea for entrepreneurial growth.
This episode focused on topics like building a bigger future, how decision-making frameworks and technology can extend your edge, and what it takes to get to the next level. I realize there is a lot of Strategic Coach jargon in this episode. However, it is still easy to understand, and there was great energy and an elevated conversation about worthy topics.
As an aside, Steve Krein is my cousin, and we joined Strategic Coach entirely separately before realizing we had joined the same group.
The episode is 47 minutes. I hope you enjoy it.
Or click here to listen on Spotify, Google Podcasts, or Apple Podcasts
Gravity Podcast with Brett Kaufman
Usually, I talk about business, mental models, and the future of AI and technology, but Brett Kaufman brought something different out of me.
Brett's Gravity Project is about living with intention, community, consciousness, and connection. He focuses on getting people to share their life experiences ... with the intent that others can see themselves in your story.
In my conversation with Brett, we do touch on the entrepreneurial journey ... but we also go deeper, discussing the death of my younger brother, how my life changed almost immediately upon meeting my wife, and why love is the most powerful and base energy in the universe.
This was not a typical conversation for me (a different ratio of head-to-heart), but it was a good one (and I've had many people reach out because of this podcast). It was fun to revisit my childhood, from playing with a cash register at my grandfather's pharmacy to selling fireflies or sand-painting terrariums; it's funny how those small moments influenced my love for entrepreneurship.
The episode is 65 minutes. I hope you enjoy it.
Click here to listen on Spotify, Apple Podcasts, or Listen Notes.
Last year, I recorded two other podcasts that I'm excited to share ... It's interesting to see how the topics and focus have changed - but also how much is still the same (timeless).
Clarity Generates Confidence With Gary Mottershead
I talked with Gary about intentionality, learning from the past, and how AI adoption is more about human nature than technology ... and more.
Click here to listen on Spotify or Gary's Website.
Creative On Purpose With Scott Perry
On the surface, this episode may seem like just another conversation about AI, but I value the diverse insights, points of emphasis, and perspectives that different hosts illuminate.
In talking with Scott, we dove deeper into emotional alchemy, self-identity, and how to move toward what you want in life - instead of away from what you don't want.
Click here to listen at Scott's Substack.
I'm currently planning a podcast series called "Frameworks on Frameworks," where we'll explore great ideas, how they work, and how you can use them.
Let me know your thoughts and any topics you want us to cover.
A Few Graphs On The State of AI in 2024
Every year, Stanford puts out an AI Index¹ with a massive amount of data attempting to sum up the current state of AI.
In 2022, it was 196 pages; last year, it was 386; now, it’s over 500 ... The report details where research is going and covers current specs, ethics, policy, and more.
It is super nerdy ... yet it's probably worth a skim (or ask one of the new AI services to summarize the key points, turn them into an outline, and draft a business strategy from the items most likely to create a sustainable competitive advantage in your industry).
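If you want to try the AI-summary route, here is a minimal sketch of what that could look like. It assumes the OpenAI Python SDK, an API key in your environment, and a plain-text extract of the report saved as report.txt; the model name, prompt, and file path are placeholders, and any comparable LLM service would work just as well.

```python
# Hypothetical sketch: summarize the AI Index report with an LLM API.
# Assumes the OpenAI Python SDK (`pip install openai`), an OPENAI_API_KEY
# in the environment, and a plain-text extract of the report in report.txt.
from openai import OpenAI

client = OpenAI()

with open("report.txt") as f:
    report_text = f.read()

prompt = (
    "Summarize the key points of this AI Index report as an outline, "
    "then suggest a business strategy built on the items most likely to "
    "create a sustainable competitive advantage in my industry:\n\n"
    + report_text[:100_000]  # crude truncation to stay within the context window
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```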
For reference, here are my highlights from 2022 and 2023.
AI (as a whole) received less private investment than the year before - despite an 8x increase in generative AI funding over the same period.
Even with less private investment, progress in AI accelerated in 2023.
We saw the release of new state-of-the-art systems like GPT-4, Gemini, and Claude 3. These models are also far more multimodal than their predecessors: they're fluent in dozens of languages, can process audio and video, and can even explain memes.
So, while we’re seeing a decrease in the rate at which AI gets investment dollars and new job headcount, we’re starting to see the dam overflow. The groundwork laid over the past few years is paying dividends. Here are a few things that caught my eye and might help set some high-level context for you.
Technological Improvements In AI
via AI Index 2024
Even since 2022, the capabilities of key models have increased exponentially. LLMs like GPT-4 and Gemini Ultra are very impressive. In fact, Gemini Ultra became the first LLM to reach human-level performance on the Massive Multitask Language Understanding (MMLU) benchmark. However, there’s a direct correlation between the performance of those systems and the cost to train them.
The number of new LLMs doubled in the last year. Two-thirds of them are open-source, but the highest-performing models remain closed systems.
While the pure technical improvements are important, it's also worth recognizing AI's growing creativity and range of applications. For example, Auto-GPT takes GPT-4 and makes it almost autonomous: it can perform tasks with very little human intervention, prompt itself, access the internet, and manage its own long-term and short-term memory.
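To make that concrete, here is a toy sketch of the self-prompting agent-loop pattern behind tools like Auto-GPT. It is not the project's actual code; the `call_llm` and `web_search` helpers are hypothetical stand-ins for a real LLM call and a real search tool.

```python
# Toy illustration of a self-prompting agent loop (the pattern behind Auto-GPT),
# not the project's actual implementation. `call_llm` and `web_search` are
# hypothetical placeholders for real LLM and internet-access tools.
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call."""
    raise NotImplementedError

def web_search(query: str) -> str:
    """Stand-in for a real internet-access tool."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    short_term_memory: list[str] = []   # recent thoughts and tool results
    long_term_memory: list[str] = []    # condensed summaries kept across steps

    for _ in range(max_steps):
        # The agent prompts itself: given the goal and its memory, pick the next action.
        prompt = (
            f"Goal: {goal}\n"
            f"Long-term notes: {long_term_memory}\n"
            f"Recent steps: {short_term_memory[-3:]}\n"
            "Reply with SEARCH: <query>, THINK: <thought>, or DONE: <answer>."
        )
        action = call_llm(prompt)

        if action.startswith("DONE:"):
            long_term_memory.append(action)
            break
        elif action.startswith("SEARCH:"):
            result = web_search(action.removeprefix("SEARCH:").strip())
            short_term_memory.append(f"{action} -> {result}")
        else:
            short_term_memory.append(action)

        # Periodically compress short-term memory into long-term memory.
        if len(short_term_memory) > 5:
            summary = call_llm("Summarize these notes: " + "\n".join(short_term_memory))
            long_term_memory.append(summary)
            short_term_memory.clear()

    return long_term_memory
```

Real agent frameworks layer on tool selection, persistent memory stores, and guardrails, but the self-prompting loop above is the core pattern.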
Here is an important distinction to make … We’re not only getting better at creating models, but we’re getting better at using them. Meanwhile, the models are getting better at improving themselves.
The Proliferation of AI
First, let’s look at patent growth.
via AI Index 2024
The adoption of AI and the claims on AI “real estate” are still increasing. The number of AI patents has skyrocketed. From 2021 to 2022, AI patent grants worldwide increased sharply by 62.7%. Since 2010, the number of granted AI patents has increased more than 31 times.
As AI has improved, it has increasingly forced its way into our lives. We're seeing more products, more companies, and more individual use cases reaching everyday consumers.
While the number of AI jobs has decreased since 2021, job positions that leverage AI have significantly increased.
Likewise, despite the decrease in private investment, massive tranches of money are moving toward key AI-powered endeavors. For example, InstaDeep was acquired by BioNTech for $680 million to advance AI-powered drug discovery, Cohere raised $270 million to develop an AI ecosystem for enterprise use, Databricks bought MosaicML for $1.3 billion, and Thomson Reuters acquired Casetext, an AI legal assistant.
Not to mention the investments and attention from companies like Hugging Face, Microsoft, Google, Bloomberg, Adobe, SAP, and Amazon.
Ethical AI
via AI Index 2024
Unfortunately, the number of AI misuse incidents is skyrocketing. And it’s more than just deepfakes: AI can be used for many nefarious purposes that aren’t as visible, on top of intrinsic risks like those that come with self-driving cars. A global survey on responsible AI highlights that companies’ top AI-related concerns include privacy, data security, and reliability.
When you invent the car, you also invent the potential for car crashes ... when you ‘invent’ nuclear energy, you create the potential for nuclear weapons.
There are other potential negatives as well. For example, many AI systems (like cryptocurrencies) consume vast amounts of energy and produce carbon emissions. So, the ecological impact has to be taken into account, too.
Luckily, many of today’s best minds are focused on creating bumpers to rein in AI and discourage bad actors. The number of AI-related regulations has risen significantly, both in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016 - a 56.3% jump over the prior year alone. Regulating AI has also become an increasingly prominent subject in legislative proceedings across the globe, up roughly 10x since 2016.
Not to mention, US government agencies allocated over $1.8 billion to AI research and development spending in 2023. Our government has tripled its funding for AI since 2018 and is trying to increase its budget again this year.
Conclusion
Artificial Intelligence is inevitable. Frankly, it’s already here. Not only that ... it’s growing, and it’s becoming increasingly powerful and impressive to the point that I’m no longer amazed by how amazing it continues to become.
Despite America leading the charge in AI, we rank among the lowest in optimism that the benefits of these products and services will outweigh their drawbacks. China, Saudi Arabia, and India rank the highest. Only 34% of Americans anticipate AI will boost the economy, and 32% believe it will enhance the job market. Significant demographic differences exist in perceptions of AI’s potential to enhance livelihoods, with younger generations generally more optimistic.
We’re at an interesting inflection point where fear of repercussions could derail and diminish innovation - slowing down our technological advance.
Much of this fear is based on emerging models demonstrating new (and potentially unpredictable) capabilities. Researchers have shown that these emergent capabilities mostly appear when non-linear or discontinuous metrics are used ... but vanish with linear and continuous metrics. So far, even with LLMs, intrinsic self-correction has proven very difficult: when a model is left to decide on self-correction without guidance, performance declines across all benchmarks.
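A toy example of that metric effect: suppose a model's per-digit accuracy on 5-digit addition improves smoothly with scale. A continuous metric (partial credit per digit) improves smoothly too, but a discontinuous one (all five digits must be right) stays near zero and then jumps, which reads as an "emergent" ability. The numbers below are made up purely to illustrate the point.

```python
# Made-up illustration: how a discontinuous metric can make smooth progress
# look like an emergent jump. The per-digit accuracies are invented numbers.
per_digit_accuracy = [0.30, 0.50, 0.70, 0.85, 0.95, 0.99]  # improves smoothly with scale
n_digits = 5

for p in per_digit_accuracy:
    continuous = p                  # linear metric: expected fraction of digits correct
    discontinuous = p ** n_digits   # exact-match metric: all digits must be right
    print(f"per-digit {p:.2f} -> partial credit {continuous:.2f}, exact match {discontinuous:.3f}")

# The exact-match column sits near zero and then shoots up at the end of the
# range, even though the underlying capability improved gradually.
```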
If we don’t continue to lead the charge, other countries will … you can already see it with China leading the AI patent explosion.
We need to address the fears and culture around AI in America. The benefits seem to outweigh the costs – but we have to account for the costs (time, resources, fees, and friction) and attempt to minimize potential risks – because those are real (and growing) as well.
Pioneers often get arrows in their backs and blood on their shoes. But they are also the first to reach the new world.
Luckily, I think momentum is moving in the right direction. Last year, it was rewarding to see my peers start to use AI apps. Now, many of them are using AI-inspired vocabulary and thinking seriously about how best to adopt AI into the fabric of their business.
We are on the right path.
Onwards!
¹ Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024. The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.