There have never been as many people alive as there are now. But have you ever wondered how many humans have lived on this planet, in total, over the long arc of history?
Arriving at that number takes a lot of estimation, but the best estimates land at approximately 109 billion people over the course of human existence.
That means almost 7% of all humans who have ever existed are alive today. It also means that for every person alive, nearly 14 people are dead. That number seems small to me as I think about how many generations came before us.
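For those who like to see the math, here's a quick back-of-the-envelope version (the population figures are rough assumptions, and the exact ratio shifts depending on which estimates you use):

```python
# Back-of-the-envelope check of the "7%" figure above.
# Both inputs are rough estimates; the results shift with the numbers you choose.
ever_lived = 109e9  # estimated humans ever born (the figure cited above)
alive_now = 7.9e9   # approximate current world population (assumption)

share_alive = alive_now / ever_lived
dead_per_living = (ever_lived - alive_now) / alive_now

print(f"Share of all humans alive today: {share_alive:.1%}")    # roughly 7%
print(f"Dead people per living person:   {dead_per_living:.1f}")  # ~13 with these inputs; "nearly 14" comes from slightly different estimates
```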
While this might be somewhat interesting, the more important question is: what will you do with the time left in your hourglass? To learn more about what I think about that, here's an article I wrote on the time value of time, and here's an article I wrote on the power of purpose and how healthy mindsets extend your life.
For most of my life, I've been a tech early adopter.
Here are some snippets from that journey. I fell in love with the Mac 128 in the 1980s. My frustration with the limitations of floppies caused me to fly across the country to get one of the earliest 20 MB hard drives (which I didn't know how I would ever fill up). Much to the consternation of those who thought only secretaries should be seen typing, I was one of the first lawyers to use a computer to do work. I waited in lines to grab Palm Pilots and cool phones before smartphones became a thing. And, somehow, I don't enjoy setting up my computer anymore (OK, I do – but not like I did before).
A lot has changed, while much stays the same.
In the late 90s, I was obsessed with the early web scene. I spoke at computer events like Comdex and MacWorld, and I was able to see and identify many of the companies that would become major players. Many of those "major players" expanded into the dot-com bubble, then disappeared.
I've watched that cycle play itself out several times as the landscape and players changed and evolved.
There is a chart that captures a lot of those changes by listing the 20 Internet Giants that have ruled the web since 1998.
Humans are very good at recognizing major turning points. However, they are often much worse than they believe at understanding the implications of the changes they so easily predicted.
Who would have guessed that AOL would become almost wholly irrelevant? Or that Yahoo would make so many horrible decisions and still last to 2022?
In the early days of the internet, most of the leaders were aggregators and search engines. Now we have a much broader set of influencers. The top 20 players in the space are also playing much larger games than their 1998 predecessors. Most of the leaders are platforms that help other products succeed as well.
I'm curious to see what names are added to the list in 5 years.
A little over a week ago, a deepfake of Ukrainian President Volodymyr Zelenskyy was used to try to convince Ukraine's soldiers to lay down their arms and surrender to Russia. On top of being shared on social media, hackers got it onto news sites and a TV ticker as well.
While it hasn't been explicitly confirmed that Russia was behind it, there's a long history of Russian cyberwarfare, including many instances of media manipulation.
Luckily, while the lip-sync was okay in this video, several cues helped us know it was fake.
Unfortunately, this is only the tip of the iceberg. Many deepfakes aren't as easy to discern. Consequently, as we fight wars (both physical and cultural), manipulated videos will increasingly alter both perceptions and reality.
Even when proven to be fake, the damage can persist. Some people might believe it anyway ... while others may begin distrusting all videos from leaders as potential misinformation.
That being said, not all deepfakes are malicious, and the technology's potential is attractive. Production companies are already using it to splice actors who have aged or died into movie scenes. Deepfake technology can also allow a celebrity to sell their likeness without spending the time on set needed to film the intended finished product.
Deepfake technology also allows us to create glimpses into potential pasts or futures. For example, on July 20th, 1969, Neil Armstrong and Buzz Aldrin landed safely on the Moon and then returned safely to Earth. What if they hadn't? MIT recently created a deepfake of the speech Nixon's speechwriter William Safire wrote during the Apollo 11 mission in case of disaster. The whole video is worth watching, but the "fake history" speech starts around the 4:20 mark.
"Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace." - Nixon's Apollo 11 Disaster Speech
“Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped.” - Orwell, 1984
In an ideal world, history would be objective: facts about what happened, unencumbered by the bias of society, the victor, or the narrator. On some level, however, history is written by the winners. Think about it ... perceived "truth" is shaped by the bias and perspectives of the chronicler.
Consequently, history (as we know it) is subjective. The narrative shifts to support the needs of the society that's reporting it.
The Cold War with the Soviet Union was a great example. During the war, immediately thereafter, and even today, the interpretation of what transpired has repeatedly changed (both here and there). The truth is that we are uncertain about what we are certain about.
And while that was just one example, we can see this type of phenomenon, to a certain degree, everywhere. Yes, we're even seeing it again with Russia.
But it runs deeper than cyber-warfare. News stations color the stories they tell based on whether they're red or blue, and the internet is quick to jump on a bandwagon even if the information is hearsay. The goal is attention rather than truth.
Media disinformation is more dangerous than ever. Alternative history can only be called that when it's discernible from the truth ... and unfortunately, we're prone to look for information that already fits our biases.
As deepfakes get better, we'll likely get better at detecting them. But it's a cat-and-mouse game with no end in sight. Signaling Theory posits that signalers evolve to become better at manipulating receivers, while receivers become more resistant to manipulation.
I'm excited about the possibilities of technology, even though new capabilities present us with both promise and peril.
Meanwhile, "Change" and "Human Nature" remain constant.
During the time in question, males had an expected lifespan of between 35 and 40 years. In stark contrast, the Founding Fathers lived more than twice that long (except for Alexander Hamilton, who made the bad decision to embrace dueling).
I don’t believe this chart shows the disparity of “Haves” and “Have Nots”. Instead, it shows the importance of purpose. The Founding Fathers understood how important their efforts and ideas were (not only to their lives ... but also to the lives of the people who relied on them – and to future generations). They truly saw a bigger future and their part in its creation.
Common wisdom posits that a lot of longevity comes down to diet and exercise.
Clearly, sleep and stress management matter too. With that said, healthy mindsets potentially have the most significant impact on your health, well-being, and longevity.
Mindset Matters.
Dan Sullivan wrote an e-book called “My Plan for Living to 156”. His message was to stop being nostalgic about the past and anxious about the future.
Most people’s notion about how long they’ll live becomes an oppressive thought. They feel confined by their expected lifespan, often based on family history and averages. But what if you could extend your lifetime? What if you could increase the quality of the years you had left? How would adding extra years impact the way you live now?
The goal of living to 156 may sound outrageous. But in reading this book, you’ll find that imagination can have a huge impact on behavior and accomplishment. And, even if you don't make it to 156, the years you have left will be better for it.
You don’t have to actually believe that you will live to 156 (or some other huge number). Simply adopting a mindset that you have extra time permits you to set longer-term goals and focus on bigger possibilities. As a result, those mindsets allow you to focus on continued learning and growth, rather than looking for an excuse or an easy off-ramp.
Purpose is a master key! It gives you direction, capabilities, and confidence.
As I think about these issues, I know that I want to be valuable and interesting to those around me as long as I’m here. That means I want to be healthy, fit, and vital as well! The reason? So I can focus on living ... rather than not dying.
I’ve heard it said many times, in many different ways, but one of the easiest ways to predict your life and lifestyle is to take the average of the five people you spend the most time with. Consequently, it’s important to surround yourself with people committed to bigger futures!
Likewise, it’s important to set goals and scorecards that keep you focused on what matters and continued progress.
Even if you don’t live until 156, I think it’s important and healthy to live now as if you will!
My wife is currently in Indonesia – and inflation is rising. What a perfect time to revisit the world’s most expensive coffee.
Indonesia is famous for coffee. For example, “Sumatra” is its biggest island – with “Java” coming in close behind (and both names are synonymous with coffee).
They also make one of the most expensive coffees in the world … Luwak Coffee.
It is a very particular coffee, created using a very peculiar process.
In traditional coffee production, the cherries are harvested and the beans extracted before being shipped to a roaster, then roasted, ground, and brewed by a barista at your local Starbucks.
In contrast, with Luwak coffee, something different happens.
The coffee cherries are harvested by wild animals.
Specifically, they’re harvested by the Asian Palm Civet, a small, cat-like animal that absolutely loves the taste of coffee cherries.
But, if the civets eat the cherries, how can they still be used to make coffee?
Here comes the gross part—the civets eat the coffee cherries, but their digestive tract can’t effectively process the beans, only the flesh surrounding them.
When the partially digested, partially fermented beans are eventually excreted, coffee producers harvest them. The beans are then cleaned, roasted, and used to make astonishingly expensive (“with retail prices reaching up to $1300 per kilogram”) coffee.
Now, is the coffee that mind-blowing?
No, not really. In fact, many critics will openly call it bad coffee, or as Tim Carman, food writer for the Washington Post put it, “It tasted just like…Folgers. Stale. Lifeless. Petrified dinosaur droppings steeped in bathtub water. I couldn’t finish it.”
To be fair, the Luwak coffee industry is not really about coffee ... it is about an experience. When I toured a plantation near Ubud, Bali, a smiling tour guide greeted me and led me on an in-depth exploration of the forested property, where I was able to immerse myself in the various spices, roots, beans, and civets used to produce this one-of-a-kind coffee.
Here is a video I shot of the process.
If you think about it, I paid a premium to drink exotic cat poop coffee. Kind of strange!
I wouldn’t drink coffee made from people’s poop (or even domestic cat poop).
It’s the story that allows this not-so-awesome coffee to fetch awesome prices. People are paying for the experience, not the commodity itself.
The same is true when you buy Starbucks. The coffee at 7-Eleven is cheaper – and Consumer Reports tells us that McDonald’s coffee is better.
When I was a child, NASA got to the Moon with computers much less sophisticated than those we now keep in our pockets.
At that time, when somebody used the term "computers," they were probably referring to people doing math. The idea that businesses or individuals would use computing devices, as we do now, was far-fetched science fiction.
Recently, I shared an article on the growing "compute" calculations used in machine learning. We showed that the amount of compute used in machine learning has doubled every six months since 2010, with today's largest models using datasets of up to 1.9 trillion data points.
This week, I want to take a look at lines of code. Think of that as a loose proxy showing how sophisticated software is becoming.
As you go through the chart, you'll see that in the early 2000s we had software with up to approximately twenty-five million lines of code. Meanwhile, today, the average car uses one hundred million, and Google uses two billion lines of code across its internet services.
For context, if you count DNA as code, the human genome has roughly 3.3 billion base pairs (loosely, its "lines of code"). So, while technology has increased massively, we're still not close to emulating the complexity of humanity.
Another thing to consider is that when computers had tighter memory constraints, coders had to be deliberate about how they used each line of code or variable. They found hacks and workarounds to make a lot out of a little.
However, with an abundance of memory and processing power, software can get bloated as lazy (or lesser) programmers get by with inefficient code. Consequently, not all the increase in size results from increasing complexity - some of it is the result of lackadaisical programming or more forgiving development platforms.
Better-managed products consider not only whether the code works as intended but also whether it uses resources reasonably.
In our internal development, we look to build modular code that allows us to re-use equations, techniques, and resources. We look at our platform as a collection of evolving components.
As the cost and practicality of bigger systems become more manageable, we can use our intellectual property assets differently than before.
For example, a "trading system" doesn't have to trade profitably to be valuable anymore. It can be used as a "sensor" that generates useful information for other parts of the system. It helps us move closer to what I like to call "digital omniscience."
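As a rough illustration of that idea (the names and interfaces below are hypothetical, not a description of our actual platform), a modular "system as sensor" design might look something like this:

```python
# A minimal, hypothetical sketch of the "system as sensor" pattern described above.
# None of these names reflect a real platform; they just show how an older trading
# system's output can be reused as information rather than as trade orders.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Signal:
    source: str        # which component produced the reading
    value: float       # e.g., a bullishness score between -1 and 1
    confidence: float  # how much weight the higher layers should give it


class Sensor(Protocol):
    """Any component that can emit a Signal about current conditions."""
    def read(self, market_data: dict) -> Signal: ...


class MomentumSystem:
    """An older trading system, repurposed as a sensor instead of a trader."""
    def read(self, market_data: dict) -> Signal:
        momentum = market_data["close"] - market_data["close_20d_ago"]
        return Signal(source="momentum",
                      value=1.0 if momentum > 0 else -1.0,
                      confidence=0.6)


class DecisionLayer:
    """A higher layer that blends many sensors into one view."""
    def __init__(self, sensors: list[Sensor]):
        self.sensors = sensors

    def consensus(self, market_data: dict) -> float:
        signals = [s.read(market_data) for s in self.sensors]
        total_weight = sum(s.confidence for s in signals) or 1.0
        return sum(s.value * s.confidence for s in signals) / total_weight
```

In this framing, even an older system that wouldn't trade profitably on its own can still improve the blended view that higher layers use to make decisions.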
As a result of increased capabilities and capacities, we can use older and less capable components to inform better decision-making.
In the past, computing constraints limited us to using only our most recent system at the highest layer of our framework.
We now have more ways to win.
But, bigger isn't always better - and applying constraints can encourage creativity.
Nonetheless, as technology continues to skyrocket, so will the applications and our expectations about what they can do for us.
When I first got out of Law School in the 1980s, "professionals" didn't type ... that was your assistant's job (or the "typing pool," which was a real thing too).
At that point, most people couldn't have imagined what computers and software are capable of now. And if you tried to tell people how pervasive computers and 'typing' would be ... they would have thought that you were delusional.
My career has spanned a series of cycles where I was able to imagine what advanced tech would enable (and how businesses would have to change to best leverage those new capabilities).
Malcolm Gladwell suggests that it takes 10,000 hours of focus and effort for someone to become an expert at something. While that is not necessarily true or accurate, it's still a helpful heuristic.
Today, we can do research that once took humans 10,000 hours in the time it took you to read this sentence. Moreover, technology doesn't forget what it's learned – as a result, technological memory is much better than yours or mine. Consequently, the type and quality of decisions, inferences, and actions are better as well. Ultimately, we will leverage the increased speed, capacity, and capabilities of autonomous platforms. While that is easy to anticipate, the consequences of these discontinuous innovations are hard to predict. Things often take longer to happen than you would think. But, when they do, the consequences are often more significant and far-reaching than anticipated.
Still, technology isn't a cure-all. Many people miss out on the benefits of A.I. and technology for the same reasons they didn't master the hobbies they picked up as an adolescent.
I shot a video discussing how to use technology to create a sustainable creative advantage. Check it out.
Many people recognize a "cool" new technology (like A.I.), but they underestimate the level of commitment and effort that mastery takes.
When using A.I. and high-performance computing, you need to ask the same questions you ask yourself about your ultimate purpose.
What's my goal?
What do I (or my systems) need to learn to accomplish my goal?
What are the best ways to achieve that goal (or something better)?
Too many companies are focused on A.I. as if that is the goal. A.I. is simply a tool. As I mentioned in the video, you must define the problem the right way in order to find an optimal solution.
Artificial Intelligence is a game-changer - so you have to approach it as such.
Know your mission and your strategy, recognize what you're committing to, set it as a compass heading and make deliberate movement in that direction.
I end the video by saying, "Wisdom comes from making finer distinctions. So, it is an iterative and recursive process... but it is also evolutionary. And frankly, that is extraordinarily exciting!"
I often talk about Machine Learning and Artificial Intelligence in broad strokes. Part of that is based on me – and part of that is a result of my audience. I tend to speak with entrepreneurs (rather than data scientists or serious techies). So talking about training FLOPs, parameters, and the actual benchmarks of ML is probably outside of their interest range.
But, every once in a while, it's worth taking a look into the real tangible progress computers have been making.
Less Wrong put together a great dataset on the growth of machine learning systems between 1952 and 2021. While many variables matter when judging the performance and intelligence of systems, their dataset focuses on parameter count because it's easy to find and is a reasonable proxy for model complexity.
One of the simplest takeaways is that ML training compute has been doubling roughly every six months since 2010. Compare that to Moore's Law, where compute power doubled every two years; we're radically eclipsing that pace, especially now that we've entered a new era of technology.
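To put that difference in perspective, here's the simple arithmetic on those two doubling times over a decade (just a sketch based on the figures quoted above):

```python
# Compare a 6-month doubling time (recent ML training compute)
# with a 24-month doubling time (the classic Moore's Law cadence).
years = 10

ml_growth = 2 ** (years * 12 / 6)      # doubles every 6 months
moore_growth = 2 ** (years * 12 / 24)  # doubles every 24 months

print(f"ML compute growth over {years} years:  {ml_growth:,.0f}x")     # ~1,048,576x
print(f"Moore's Law growth over {years} years: {moore_growth:,.0f}x")  # ~32x
```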
Now, to balance this out, we have to ask what actually makes AI intelligent. Model size is important, but you also have factors like training compute and training dataset size. You also must consider the actual results these systems produce. As well, model size isn't a one-to-one proxy for model complexity, as different architectures and domains have different inputs and needs (but can have similar sizes).
A few other brief takeaways: language models have seen the most growth, while gaming models have the fewest trainable parameters. That seems counterintuitive at first glance, but it makes sense because the complexity of games imposes constraints in other areas. If you really get into the data, there are plenty more questions and insights to be had. But you can learn more from either Giancarlo or Less Wrong.
And a question to leave with: will the scaling laws of machine learning change as deep learning becomes more prevalent? Right now, model size comparisons suggest not, but there are so many other metrics to consider.
Will Robots Take Your Job?
The fear of a robot-dominated future is mounting ... But, is there a basis for that fear?
It's a common trope in film, but as we all know, media is meant to capture attention - not emulate reality.
Michael Osborne and Carl Frey, from Oxford University, calculated how susceptible various jobs are to automation. They based their results on nine key skills:
via Michael Osborne & Carl Frey (Click For A Comprehensive Infographic)
There are various statistics about the rate of change for robots taking jobs. Many expect that ~50% of current jobs will be automated by 2035. Turns out, that statistic is from Michael and Carl, and the numbers were 47% by 2034 [1].
Realize that statistic actually refers to the risk of those jobs being automated. That number doesn't take into account the realities of cost, regulation, politics, social pressure, preference, or the actual work and progress necessary to automate something – so it's unlikely the full 47% will be realized.
via The Economist
Nonetheless, many use that quote to point toward a dystopian future of joblessness and an increasing lack of middle-class mobility.
Mr. Frey isn't a proponent of that belief … and neither am I.
Automation and innovation free us to focus on what matters most (or what can create the most value). The goal is not to have machines let us be fat, dumb, and lazy … it is to free us to focus on bigger and better things.
Industrialization created short-term strife – but vastly increased the economic pie over the long term. So did electricity and the internet. It's likely that future automation will have similar effects, but it's possible to minimize the pain and potential negative impacts if we learn from previous iterations of this cycle. The fact that we're so far along technologically, compared to previous revolutions, means we're in a better position to proactively handle the transition periods.
New tech comes with both “promise” and “peril”. We must manage the short-term consequences of new tech – because the tech itself is inevitable. With that said, by embracing innovation, we can make sure it is a boon to the middle class (and all of society) and not the bane of their existence.
Throughout history, technology has always created more jobs than it has destroyed.
Progress means the restructuring of society’s norms … not the destruction of society.
When we first started using technology, that progress allowed humans to stop acting like robots (think farming and manufacturing). As technology improved, we got "robots" that seem to act more like humans. They can play chess, shoot a basketball, etc.
The truth is that humans didn’t act like robots. They did what they had to do to survive. As technology improved, we look back and have trouble imagining a time when humans had to do those things. Technology often focuses on the most pressing “constraint” or “pain.” It isn’t getting more human; it is simply more capable … which frees us to ascend as well.
There are many aspects of humanity that robots can't yet replace. But as we move forward, technology will continue to free us to be more human (which I assume means to be more creative, more caring, more empathetic, and more original).
Doom and gloom sell. It's much easier to convince people something's going to be painful than amazing (because we're creatures of habit, and our monkey brains fear pain much more than they enjoy pleasure).
Our attitudes and actions play a pivotal role in how the world impacts us.
We are positioned not only to survive the revolution but to take advantage of it.
AI is a gold rush, but you don't have to be a miner to strike it rich. You can provide the picks and shovels, the amenities, or a map that helps people find treasures.
Onwards!
_________________
[1] Frey, Carl & Osborne, Michael. (2013). The Future of Employment: How Susceptible Are Jobs to Computerisation?