I was at an event last week where (unsurprisingly) we focused heavily on AI. Conversations ranged from use cases for generative AI (and the ethics of AI image creators) to the long-term effects of AI and its adoption.
Before I could chime in, the conversation had turned to comparisons from past generations. When electricity was first harnessed, articles claimed it would never catch on, that it would hurt productivity, and that it would put artisans out of business if it ever gained traction.
When the radio or TV was released, the older generation was sure it would lead to the death of productivity, have horrible ramifications, and ultimately lead to the next generation's failure.
People resist change. We're wired to avoid harm more than to seek pleasure. The reason is that, evolutionarily, you have to survive to have fun.
On the other hand, my grandmother used to say: "It's easy to get used to nice things."
Here's a transcript of some of my response to the discussion:
With AI, getting from "Zero-to-One" was a surprisingly long and difficult process. Meaning, getting AI tools and capabilities to the point where normal people felt called to use them because they believed they would be useful or necessary was a long and winding road. But now that everybody thinks it's useful, AI use will no longer be "special". Instead, it will become part of the playing field. And because of that, deciding to use AI is no longer a strategy. It's now table stakes.
But that creates a potential problem and distraction. Why? Because, for most of us, our unique ability is not based on our ability to use ChatGPT or some other AI tool of the day. As more people focus on what a particular tool can do, they risk losing sight of what really matters for their real business.
Right now, ChatGPT is hot! But, if you go back to the beginning of the Internet, MySpace or Netscape (or some other tools that were first and big) aren't necessarily the things that caught on and became standards.
I'm not saying that OpenAI and ChatGPT aren't important. But what I believe is more important is that we passed a turning point where, all of a sudden, tons of people started to use something new. That means there will be an increase of focus, resources, and activity concentrated on getting to next in that space.
You don't have to predict the technology; it is often easier and better to predict human nature instead. We're going to find opportunities and challenges we wouldn't otherwise, because of the concentration of energy, focus, and effort. Consequently, AI, business, and life will evolve.
For most of us, what that means is that over the next five years, our success will not be tied to how well we use a tool that exists today, but rather on how we develop our capabilities to leverage tools like that to grow our business.
But your success will not be determined by how well you learn to use ChatGPT. It will be determined by how well you envision your future and recognize opportunities to use tools to start making progress toward the things you really want and to become more and more of who you really are. Right?
So, think about your long-term bigger future. Pick a time 5, 10, or even 20 years from now. What does your desired bigger future look like? Can you create a vivid vision where you describe in detail what you'd like to happen in your personal, professional, and business life? Once you've done that, try to imagine what a likely midpoint might look like. And then, using that as a directional compass, imagine what you can do in this coming year to align your actions with the path you've chosen.
Ultimately, as soon as you start finding ways to use emerging technologies in a way that excites you, the fear and gloom fade.
The best way to break through a wall isn't with a wide net, but rather with a sharp blow. You should be decisive and focused.
I remember being an entrepreneur in the late 90s (during the DotCom Bubble). And I remember watching people start to emulate Steve Jobs ... wearing black turtlenecks and talking about which internet company would be the "next big thing" at social gatherings. Looking back, an early sign that a crash was coming was that seemingly everybody had an opinion on what was going to be hot, and too many of them were overly confident.
The point is that almost nobody talks about the Internet that way anymore ... in part because the Internet is now part of the fabric of society. At this point, it would be weird if somebody didn't use the Internet. And you don't really even have to think about how to use the Internet anymore because there's a WHO to do almost all those HOWs.
The same is going to be true for AI. Like with any technology, it will suffer from all the same hype-cycle blues ... inflated expectations and then disillusionment. But, when we come out the other side, AI will be better for it ... just like the Internet, the silver screen, or even radio stations.
Earlier, I mentioned how long it took to get from "Zero-to-One" with AI. But we're still at a "three" or a "five" on a 100-point scale. Meaning, you are at the beginning of one of the biggest and steepest growth curves you can imagine. And you're not late … you're early! Even the fact that you're thinking about stuff like this now means that you are massively ahead.
The trick isn't to figure out how to use AI or some AI tool. The trick is to keep the main thing the main thing.
Investing resources into your company is one thing, but realize that there are thousands of these tools out there, with many more coming. You don't have to build something yourself. It is often faster and better to acquire a tool than to spend money developing and building one.
Think of the Medici family. They invested in people, which in turn triggered the Renaissance. A key to moving forward in the Age of AI will be to invest in the right people who seek to create the kind of world you want to see. Think of this as a strategic investment into creators and entrepreneurs with a vision who are on a path that aligns with yours.
As you get better and better at doing that, you'll see increasing opportunities to use tools to engage people to collaborate with and create joint ventures. Ultimately, you will collaborate with technology where it's your thought partner and then your business partner. We are entering exciting times where AI, automation, and innovation will make extraordinary things possible for people looking for opportunities to do extraordinary things.
It gets a little old talking about ChatGPT so often ... but it's rightfully taking the world by storm, and the innovations, improvements, and use cases keep coming.
This week, I'm keeping it simple.
VisualCapitalist put together a chart that helps contextualize how well ChatGPT tests on several popular placement tests.
It also shows the comparison between versions 3.5 and 4.
ChatGPT 4 improvements include plugins, access to the internet, and the ability to analyze visual inputs.
Interestingly, there were a couple of places where version 4 didn't improve ... Regardless, it is already outperforming the average human in these scenarios.
Obviously, the ability to perform well on a test isn't a direct analog to intelligence - especially general intelligence. However, it's a sign that these tools can become important partners and assets in your business. Expect that it will take developing custom systems to truly transform your business, but there are a lot of easy wins you can stack by exploring what's out there already.
The takeaway is that you're missing out if you aren't experimenting.
Industry is changing fast. In the 1900s, the world's titans mainly produced tangible goods (or the infrastructure for them). The turn of the century brought an increasing return on intangible assets like data, software, and even brand value ... hence the rise of the influencer.
As technology increasingly changes business, jobs, and the world, intellectual property becomes an increasingly important way to set yourself apart and make your business more valuable.
While America is leading the charge in A.I., we're falling behind in creating and protecting intellectual property.
Patents can help protect your business, but they also do much more. I.P. creates inroads for partnerships with other businesses, and it can also be a moat that makes it more difficult for others to enter your space. On a small scale, this is a standard business strategy. What happens when the scale increases?
The number of patents we're seeing created is a testament to the pace of innovation in the world, but it should also be a warning to protect your innovations. Remember, however, that anything you get a patent on becomes public knowledge - so be careful with your trade secrets.
It's no secret that I've been a proponent of the proliferation and adoption of AI. I've been the CEO of AI companies since the early 90s, but it was the early 2000s when I realized what the future had in store.
A few years ago, in anticipation of where we are today, I started participating in discussions about the need for governance, ethical guidelines, and response frameworks for AI (and other exponential technologies).
Last week, I said that we shouldn't slow down the progress of generative AI ... and I stand by that, but that doesn't mean that we shouldn't be working hastily to provide bumper rails to keep AI in check.
There are countless ethical concerns we should be talking about:
Bias and Discrimination - AI systems are only as objective as the data they are trained on. If the data is biased, the AI system will also be biased. Not only does that create discrimination, but it also leaves systems more susceptible to manipulation.
Privacy and Data Protection - AI systems are capable of collecting vast amounts of personal data, and if this data is misused or mishandled, it could have serious consequences for individuals' privacy and security. We need to manage not only the security of these systems but also where and how they get their data.
Accountability, Explainability, and Transparency - As AI systems become increasingly complex and autonomous, it can be difficult to determine who is responsible when something goes wrong, not to mention the difficulty in understanding how public-facing systems arrive at their decisions. Explainability becomes more important for generative AI models as they're used to interface with anyone and everyone.
Human Agency and Control - As AI systems become more sophisticated and autonomous, there is fear about their autonomy ... how much human control is necessary, and how do we prevent "malevolent" AI? Within human agency and control, we have two sub-topics. The first is job displacement ... do we prevent AI from taking certain jobs as one way to preserve jobs and the economy, or do we look at other options like universal basic income? The second is governance ... where does international governance come in, and how do we ensure that ethical standards are upheld to prevent misuse or abuse of the technology by bad actors?
Safety and Reliability - Ensuring the safety and reliability of AI systems is important, particularly in areas such as transportation and healthcare where the consequences of errors can be severe. Setting standards of performance is important, especially considering the outsized response when an AI system does commit an "error". Think about how many car crashes are caused by human error and negligence... and then think about the media coverage when a self-driving car causes one. If we want AI to be adopted and trusted, it's going to need to be held to much higher standards.
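The bias concern above is easy to state but worth seeing concretely. Here is a toy sketch (purely illustrative, not any real system's code): a "model" that simply learns the majority historical outcome per group. If the historical data is skewed, the learned rule automates that skew.

```python
# Toy illustration of data-driven bias: the "model" just predicts the most
# common historical outcome for each group, so skewed history = skewed model.
from collections import Counter, defaultdict

def train(rows):
    """rows: list of (group, hired) pairs from 'historical' decisions."""
    outcomes = defaultdict(Counter)
    for group, hired in rows:
        outcomes[group][hired] += 1
    # Predict the most common historical outcome per group.
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

# Skewed history: group A was usually hired, group B usually was not,
# even if the underlying qualifications (not shown) were identical.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

model = train(history)
print(model)  # {'A': True, 'B': False} ... the bias is now automated
```

Nothing in the training step is malicious; the discrimination comes entirely from the data, which is exactly why data provenance matters as much as model design.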
These are all real and present concerns that we should be aware of. However, it's not as simple as creating increasingly tight chains to limit AI. We have to be judicious in our applications of regulation and oversight. We intrinsically know the dangers of overregulation - of limiting freedoms. Not only will it stifle creativity and output, but it will only encourage bad actors to go further beyond what the law-abiding creators can do.
If you want to see one potential AI risk management framework, here's a proposition by the National Institute of Standards and Technology - it's called AI RMF 1.0. It's a nice jump-off point for you to think about internal controls and preparation for impending regulation. To be one step more explicit ... if you are a business owner or a tech creator, you should be getting a better handle on your own internal controls, as well as anticipating external influence.
In conclusion, there is a clear need for AI ethics to ensure that this transformative technology is used in a responsible and ethical manner. There are many issues we need to address as AI becomes more ubiquitous and powerful. That's not an excuse to slow down, because slowing down only lets others get ahead. If you're only scared of AI, you're not paying enough attention. You should be excited.
Since my last name is Getson, I often get "Jetson" at restaurants. As the CEO of a tech company focused on innovative technologies, it somehow feels fitting.
Despite only airing for one season (from 1962-1963), The Jetsons remains a cultural phenomenon. It supposedly takes place in 2062, but in the story, the family's patriarch (George Jetson) was born on July 31, 2022. Not too long ago.
Obviously, this is a whimsical representation of the future - spurred on by fears of the Soviet Union and the space race. But it captured the imagination of multiple generations of kids. Flying cars, talking dogs, robot maids, and food printing ... what's not to love?
I don't intend to dissect the show about what they got right or wrong, but I do want to briefly examine what they imagined based on where we are today.
For example, while flying cars aren't ubiquitous yet (like in the Jetsons), we already have driverless cars. It's likely that by 2062, driverless cars will be pervasive, even if flying cars aren't. But, frankly, who knows? That is still possible.
Meanwhile, both George and Jane work very few hours a week due to the increase in technology. While that's a future we can still envision, despite massive technological improvements, we've chosen to increase productivity (instead of working less and keeping output at 1960 levels). Even with the expected growth of AI, I still believe that humans will choose to pursue purposeful work.
The Jetsons also underemphasizes the wireless nature of today's world. George still has to go into the office, and while they have video phones, the phone is still a piece of hardware connected to a wall, instead of mobile and wireless. 2062 is far enough away that holographic displays are still a very real possibility.
Likewise, while we don't yet have complex robot maids (like Rosie), we already have Roombas... and both AI and Robotics are improving exponentially.
Meanwhile, we are in the process of creating cheap and sustainable food printing and drone delivery services ... which makes the Jetsons look oddly prescient.
And, remember, there are still 40 years for us to continue to make progress. So, while I think it's doubtful cities will look like the spaceports portrayed in the cartoon ... I suspect that you'll be impressed by how much further along we are than even the Jetsons imagined.
Not only is the rate of innovation increasing, but so is the rate at which that rate increases. It's exponential.
One thing that's very obvious to the world right now is that the AI space is growing rapidly. And it's happening in many different ways.
Over the last decade, private investment in AI has increased astronomically ... Now, we're seeing government investment increasing, and the frequency and complexity of discussion around AI is exploding as well.
A big part of this is due to the massive improvement in the quality of generative AI.
This isn't the first time I've shared charts of this nature, but it's impressive to see the depth and breadth of new AI models.
For example, Minerva, a large language model released by Google in June of 2022, used roughly 9x more training compute than GPT-3. And that doesn't even count the improvements already happening in 2023, like GPT-4.
While it's important to look at the pure technical improvements, it's also worth realizing the increased creativity and applications of AI. For example, Auto-GPT takes GPT-4 and makes it almost autonomous. It can perform tasks with very little human intervention, it can self-prompt, and it has internet access & long-term and short-term memory management.
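The loop behind tools like Auto-GPT can be sketched in a few lines. This is a deliberately simplified illustration, not Auto-GPT's actual code: the model proposes the next action from the goal plus its memory of past steps, the runtime executes it, and the result feeds the next self-prompt. Here the "model" is a stub that walks a fixed plan.

```python
# A highly simplified agent loop in the spirit of Auto-GPT (illustrative only):
# self-prompt -> act -> remember -> repeat, until the model says it's done.

def stub_model(goal, memory):
    # Stand-in for an LLM call: just works through a fixed plan.
    plan = ["research topic", "draft outline", "write summary", "DONE"]
    return plan[len(memory)]

def run_agent(goal, model, max_steps=10):
    memory = []                       # short-term memory of completed steps
    for _ in range(max_steps):
        action = model(goal, memory)  # "self-prompt" with goal + history
        if action == "DONE":
            break
        memory.append(action)         # record the result for the next loop
    return memory

steps = run_agent("summarize a topic", stub_model)
print(steps)  # ['research topic', 'draft outline', 'write summary']
```

Swap the stub for a real model call (and give the runtime real tools like web search or file access) and you have the basic shape of an autonomous agent, along with all the control questions that come with it.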
Here is an important distinction to make … We're not only getting better at creating models, but we're getting better at using them, and they are getting better at improving themselves.
All of that leads to one of the biggest shifts we're currently seeing in AI - which is the shift from academia to industry. This is the difference between thinking and doing, or promise and productive output.
In 2022, there were 32 significant industry-produced machine learning models ... compared to just three from academia. It's no surprise that private industry has more resources than nonprofits and academia, and now we're starting to see the benefits of that surge of cash flowing into artificial intelligence, automation, and innovation.
Not only does this result in better models, but also in more jobs. The demand for AI-related skills is skyrocketing in almost every sector. On top of the demand for skills, the number of job postings has increased significantly as well.
Currently, the U.S. is leading the charge, but there's lots of competition.
The worry is, not everyone is looking for AI-related skills to improve the world. The ethics of AI is the elephant in the room for many.
The number of AI misuse incidents is skyrocketing. Since 2012, the number has increased 26 times. And it's more than just deepfakes: AI can be used for many nefarious purposes that aren't as visible.
Unfortunately, when you invent the car, you also invent the potential for car crashes ... when you 'invent' nuclear energy, you create the potential for nuclear bombs.
There are other potential negatives as well. For example, many AI systems consume vast amounts of energy and produce carbon emissions, much like cryptocurrencies before them. So, the ecological impact has to be taken into account as well.
Luckily, many of the best minds of today are focused on how to create bumpers to rein in AI and prevent and discourage bad actors. In 2016, only one law focused on Artificial Intelligence was passed ... 37 were passed last year. This is a focus not just in America, but around the globe.
Conclusion
Artificial Intelligence is inevitable. It's here, it's growing, and it's amazing.
Despite America leading the charge in A.I., we rank among the lowest in positivity about the benefits of these products and services. China, Saudi Arabia, and India rank the highest.
If we don't continue to lead the charge, other countries will ... which means we need to address the fears and culture around A.I. in America. The benefits outweigh the costs, but we have to account for the costs and attempt to minimize potential risks as well.
Pioneers often get arrows in their backs and blood on their shoes. But they are also the first to reach the new world.
Luckily, I think momentum is moving in the right direction. Watching my friends start to use AI-powered apps has been rewarding for someone who has been in the space since the early '90s.
We are on the right path.
Onwards!
_____________________________________
[1] Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, "The AI Index 2023 Annual Report," AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023. The AI Index 2023 Annual Report by Stanford University is licensed under
Last week, I shared a couple of videos that attempted to predict the future. As a result, someone sent me a video of Arthur C. Clarke's predictions that I thought was worth sharing.
Arthur C. Clarke had a profound impact on the way we imagine the future. Known for his remarkable predictions, Clarke's ideas may have seemed farfetched at times, yet his thoughts on the future and the art of making predictions were grounded in reason.
If a prophet from the 1960s were to describe today's technological advancements in exaggerated terms, their predictions would sound equally ridiculous. The only certainty about the future is that it will be fantastical beyond belief, a sentiment Clarke understood well.
You can be a great futurist even if many of your predictions are off in execution, but correct in direction. For example, Clarke predicted that the advancements in communication would potentially make cities nonexistent. While cities still exist - in much the same way as in the 1960s - people can now work, live, and make a massive difference in their companies from anywhere on the planet, even from a van traveling around the country. Global communication is so easy that it's taken for granted.
He was a science fiction author, so some of what he wrote might seem ridiculous today ... super-monkey servants forming trade unions, for example. But much of what he wrote was about what could happen (and provided a way for people to think about the consequences of their actions and inactions). As we discussed last week, humans often recognize big changes on the horizon ... but they rarely correctly anticipate the consequences.
In summary, even though some of Clarke's predictions were farfetched, they were rooted in a deep understanding of human potential and the transformative power of technology. His ability to envision a fantastical future was not only a testament to his imagination, but also served as an inspiration for generations of scientists, engineers, and dreamers. By embracing the unknown and acknowledging the inherent uncertainty of the future, we can continue to push the boundaries of what is possible and strive for a world that is truly beyond belief.
You won't always be 100% correct, but you'll be much closer than if you reject what's coming.
Humans are wired to think locally and linearly ... because that's what it took to survive in a pre-industrial age. However, that leaves most of us very bad at predicting technology and its impact on our future.
To put the future of technology in perspective, it's helpful to look at the history of technology to help understand what an amazing era we live in.
Our World In Data put together a great chart that shows the entire history of humanity in relation to innovation.
3.4 million years ago, our ancestors supposedly started using tools. 2.4 million years later, they harnessed fire. 43,000 years ago (almost a million years after that), we developed the first instrument, a flute.
That's an insane amount of time. Compare that to this:
In 1903, the Wright Brothers first took flight ... 66 years later, we were on the moon.
That's less than a blink in the history of humankind, and yet we're still increasing speed.
Technology is a snowball rolling down a mountain, picking up speed and size, and now it's an avalanche being driven by AI.
But innovation isn't only driven by scientists. It's driven by people like you or me having a vision and making it into a reality.
Even though I'm the CEO of an AI company, I don't build artificial intelligence myself ... but I can envision a bigger future and communicate that to people who can. I also can use tools that help me automate and innovate things that help free me to focus on more important ways to create value.
The point is that you can't let the perfect get in the way of the good. AI's impact is inevitable. You don't have to wait to see where the train's going ... you should be boarding.
It's interesting to look at what they strategically got right compared to what was tactically different.
In a 1966 interview, Marshall McLuhan discussed the future of information with ideas that now resonate with AI technologies. He envisioned personalized information, where people request specific knowledge and receive tailored content. This concept has become a reality through AI-powered chatbots like ChatGPT, which can provide customized information based on user inputs.
Although McLuhan claimed to be no fan of innovation, he recognized the need to understand emerging trends to maintain control and know when to "turn off the button."
In 1966, media futurist Marshall McLuhan envisioned a form of digital research eerily similar to the customized queries now answered by AI. Then he makes a surprising admission about why he studies technological change—with a lesson I think many need to hear. pic.twitter.com/yEBJv95GvP
While not all predictions are made equal, we seem to have a better idea of what we want than how to accomplish it.
The farther the horizon, the more guesswork is involved. Compared to the prior video on predictions from the mid-1900s, this video on the internet from 1995 seems downright prophetic.
There's a lesson there. It's hard to predict the future, but that doesn't mean you can't skate to where the puck is moving. Even if the path ahead is unsure, it's relatively easy to pick your next step, and then the step in front of that. As long as you are moving in the right direction and keep taking steps without stopping, the result is inevitable.
Basically, generative AI refers to models that generate new outputs based on the data they have been trained on. Instead of just recognizing patterns and making predictions, these models are used to create images, text, audio, and more.
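To make the "generative" idea concrete, here's a toy sketch: a character of the same species as modern models, but microscopic. It learns which word tends to follow which in a tiny corpus, then samples new sequences. Real generative AI does this at vastly larger scale with neural networks, but the core loop (learn the distribution, then sample from it) is the same shape. The corpus and function names here are my own invention for illustration.

```python
# Minimal "generative model": a word-level Markov chain.
# Learn which word follows which, then sample a new sequence.
import random
from collections import defaultdict

def build_model(words):
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)       # record every observed successor
    return follows

def generate(model, start, length, seed=0):
    rng = random.Random(seed)      # fixed seed so the sketch is repeatable
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:         # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the future is not what it used to be and the future is ours".split()
model = build_model(corpus)
print(generate(model, "the", 5))
```

Every output is "new" in the sense that it wasn't necessarily in the training text, yet every step is grounded in what the model saw during training ... which is also a small window into why these models inherit the biases and gaps of their data.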
Please let me know about any tools you think are especially worthy (or that I might have missed).
With Google and Microsoft entering the space, I think you're about to see a lot of tool churn as they push redundant tools out of the market. Short-term, that'll cause a bit of chaos. Long-term, it will mean that we'll have a better diversity of tools as innovators are forced to be more creative.