Elon Musk and Alibaba co-founder Jack Ma recently held a debate about AI at a conference in Shanghai. Their conversation was captured in a 46-minute video. Even if you don't watch all of it, it's interesting to see how these two very different archetypes of thinker frame the same issues.
World AI Conference via New China TV
My son, Zach, watched it and sent me some notes and takeaways.
Fascinating Stuff!
________
While they talk about Mars, education, and other topics, the discussion tends to revolve around AI.
In the video, Jack Ma comes off as optimistic and somewhat uninformed while Elon Musk comes off as so optimistic about AI that he's pessimistic about humans.
Musk's intelligence shines through, though he tends toward hyperbolic best- and worst-case scenarios.
With Jack Ma, I'm curious how heavily exogenous forces influenced how he expressed his opinions. The conference took place in China and was partially sponsored by the Chinese government. As a result, I'm not sure how much of what Jack Ma said represents his true beliefs or fears. Taking State censorship and "spin" into consideration, it isn't hard to imagine him being encouraged to remind people that the Communist party is (and will remain) smarter than AI.
Jack takes positions that make people feel safe, while Elon is committed to pointing out potential dangers.
Elon focused on three main points. Here they are:
People Underestimate AI
Laymen often compare AI to a smart human, but it's much more than that. He offers a comparison: to a chimpanzee, we must seem like strange aliens, and the gap between AI and us may be even wider than that.
Humans vastly underestimate the scale of time. I think Tony Robbins makes the comment that we overestimate what we can do in a year and underestimate what we can do in 10. Elon's equivalent would be that we overestimate 10 years and underestimate 1,000. On the scale of Earth's existence, humanity is a blip. On the scale of human existence, our current level of technology is a blip.
We're wired to think locally and linearly, but not only are we at the very beginning of a 20-year innovation curve, we also have a theoretical horizon of thousands of years to worry about.
The Biggest Mistake AI Researchers Make Is Thinking They're Intelligent
“I hope they’re nice … If you can’t beat them, join them. That’s what Neuralink is about.” - Elon Musk
This is one of the ideas we talk about a lot at Capitalogix: many researchers think they can predict AI, despite humanity having gotten the vast majority of its predictions wrong. We constantly underestimate technology, we constantly underestimate change, and it's naive to think that won't continue to be true.
He draws an interesting comparison between our bandwidths. We're already deeply integrated with our phones and computers, but our bandwidth is very different from a computer's - or from his proposed Neuralink's.
Our input bandwidth has skyrocketed. We're always tuned in, data downloads faster, and we have more sources of data. Yet our output bandwidth has slowed: as we spend more time on our phones instead of computers, we get less efficient at expressing ourselves.
Compare that to a computer with an exaflop of compute capability... a millisecond becomes an eternity. Our speech becomes a whale song. We're inefficient.
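To get a feel for the scale of that mismatch, here is a rough back-of-the-envelope sketch. Both figures are loose assumptions for illustration (an exaflop machine by definition, and a commonly cited estimate that spoken language conveys on the order of 39 bits per second), not measurements from the debate:

```python
# Rough illustration of the human-vs-machine bandwidth gap.
# All numbers are loose assumptions, not measurements.

human_speech_bps = 39        # ~39 bits/s: a commonly cited estimate of spoken-language information rate
exaflop_ops_per_sec = 1e18   # one exaflop: 10^18 floating-point operations per second

# Treat each operation as handling at least one bit of information
# (a generous simplification in the human's favor).
ratio = exaflop_ops_per_sec / human_speech_bps

print(f"An exaflop machine churns through roughly {ratio:.1e}x "
      f"more per second than human speech conveys.")
```

Even with the simplification favoring us, the gap is around sixteen orders of magnitude, which is the sense in which our speech looks like whale song to such a machine.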
Our Biggest Concern Should Be The Proliferation of Consciousness
Sustainability, cryogenics, Mars, Neuralink, biohacking: these are all attempts at expanding the scope and scale of human consciousness, and at hedging our bets against any number of potential bad futures.
Even if each potential doomsday has a minuscule probability of happening, why not prepare for it? The probabilities are non-zero, and the better we understand our universe, the better we can handle things here.
Jack Ma – AI Isn't A Threat
Jack's response to most of Elon's comments is that AI isn't a threat to jobs, to us, or to the world. Instead, it's an opportunity to change ourselves, an opportunity to understand people better and to better self-actualize.
He also states that it's an opportunity not to replace jobs but to work less. His ideal is a three-day work week, giving us more time to enjoy being human.
Jack also believes that while AI may be more clever, humans are smarter. His argument is that intelligence is experience-driven: humans created computers, but no computer has ever created a human. AI still isn't good at the subjectivity of the human experience. Can a computer determine whether something is delicious? Can it write a comedy? He believes AI doesn't have the heart humans do.
The only thing that Elon and Jack really agreed on was that our biggest fear over the next 20 years is population collapse.
My Thoughts
We Should Go To Mars
Elon and Jack are at odds for various reasons, but I think a big part of the issue is that Jack Ma wants us to focus on improving the earth. He views Mars as a waste of time. Musk (and I) view these exercises as improving earth. It shouldn't be an "either/or" ... it should be an "and". We do need to do a better job of taking care of each other and our planet, but that shouldn't come at the expense of moonshots.
I like the 80/20 rule. In economics, it's the Pareto Principle: 20% of the causes produce 80% of the effects. In personal finance, 80% of what you invest in should be safe and 20% can go to alternative investments. In our office, it's spending 20% of your time on "what ifs".
For Earth, 80% of our focus should be on incremental improvements, and 20% should be on moonshots. If even one of the moonshots we pursue (elongating human life, going to Mars, hybridizing humans and computers) happens, it improves the quality of life exponentially.
Just Because We Can't Predict The Future Doesn't Mean We Shouldn't Try
During the debate, Elon said the goal should always be to "try to predict the future with less error."
While it's clear we can't solve tomorrow's issues because we can't predict them all, that doesn't mean there aren't actions we can take and questions we can ask to better prepare our children for the future.
We Should Be Cautiously Optimistic About AI
I've always been taught to be cautious and to exhibit moderation. We don't know what's possible with AI. While that means there's a potential timeline where AI doesn't become Skynet, it also means there's a potential timeline where it does.
It's human hubris to not worry about AI ethics and to not keep those ideals in mind when creating a technology that will be ubiquitous.
Throughout human history, we've often weaponized tools that were intended for other uses. It's likely AI will be used the same way (if not by AI itself, then by humans).
I like Jack's focus on the human aspect of AI. While it seems clear to me that Elon understands AI better, if we can manage to balance a focus on the human while growing the artificial, I think we'll go a lot farther.
Closing Thoughts
Humans are at the top of the food chain not because of any athletic dominance, but because of our intelligence and our ability to create tools that enhance our capabilities. Our tools define us as a species.
I recently read a book by Donald Norman called Things That Make Us Smart: Defending Human Attributes In The Age of The Machine. It was written in the early '90s but holds up surprisingly well.
One key takeaway from the book is that technology should be an extension of humans. Take the calculator on my phone: I'm not great at math, but I'm great at using my calculator to perform whatever math I need. Donald Norman would argue that, for all functional purposes, that means I'm good at math.
As our tools evolve, we evolve.
To me, the speed at which technology is growing weights the outcome heavily toward a positive impact. There will be negative impacts, but they can be mitigated. I'm sure plenty of carriage operators were really bummed about the creation of the automobile.
I resonated with Musk's fears and joys because I felt he was championing that same belief: that our tools define us.
We don't know the answers, but we should be asking the questions and we should be preparing for the possibilities.