Dread of a robot-dominated future is mounting. Is there a basis for it?
Michael Osborne and Carl Frey, from Oxford University, calculated how susceptible various jobs are to automation. They based their results on nine key skills, spanning perception and manipulation, creative intelligence, and social intelligence.
There are various statistics floating around about the rate at which robots will take jobs. Many expect that ~50% of current jobs will be automated by 2035. It turns out that statistic comes from Michael and Carl, and the actual numbers were 47% by 2034.[1]
The figure actually refers to the risk of those jobs being automated. That 47% doesn't take into account cost, regulatory, political, or social pressures - so it's unlikely the full 47% will be realized.
Many use that statistic to fear-monger about future joblessness and declining middle-class mobility, but Mr. Frey isn't a proponent of that belief and neither am I.
Industrialization created short-term strife but vastly increased the economic pie over the long term. It's likely that future automation will have similar effects if managed correctly. It's possible to truncate the pain if we learn from previous iterations of this cycle. The fact that we're so much further along technologically than in previous revolutions means we're in a better position to proactively handle the transition period.
We can't afford to mismanage the short-term consequences of the new tech, because doing so will lead to unrest. If unrest and opposition to automation persist, the situation will likely be exacerbated. It's only by embracing innovation that we can make sure automation is a boon to the middle class and not the bane of its existence.
Throughout history, technology has created more jobs than it has destroyed. Even if that isn't the case at the moment, it doesn't mean it won't be. I often compare the AI revolution to the introduction of electricity. Electricity was a massive disruptor, and it put many people out of work, but it was a fantastic benefit to society.
Doom and gloom sell. It's much easier to convince people something's going to be painful than amazing because we're creatures of habit and our monkey brains fear pain much more than they enjoy pleasure.
Our attitudes and actions play a pivotal role in how the world impacts us. Pragmatically, we have various institutions in place to make the transition as painless as possible - note that I wouldn't say painless, but as painless as possible.
Onwards!
_________________
[1] Frey, Carl & Osborne, Michael. (2013). The Future of Employment: How Susceptible Are Jobs to Computerisation?
Harvard's Center for International Development put together a tool that I think is pretty cool. It's called the Atlas of Economic Complexity. Its goal is to get you to think differently about economic strategy, policy, and investment opportunities for individual countries.
Each country's profile analyzes its economic dynamics and future growth prospects, including which industries are burgeoning. They made it look pretty as well. If you're curious about specific questions, you can use their exploration function instead.
Boston Dynamics just released a video of their Atlas robot doing an impressive gymnastics routine. Comparing it to their videos from 2009 shows how insane the progress is.
We hear a lot about the fear of Skynet-esque advanced AI ... but Terminator-style robots may be a more immediate threat.
Boston Dynamics makes robotics look cute, but there's both promise and peril here. For example, Turkey is using autonomous killer drones in Syria.
Any tool can be used for good or evil; there's no inherent morality in a tool. But we're certainly good at finding ways to push the boundaries of its uses.
“Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It's really an attempt to understand human intelligence and human cognition.” —Sebastian Thrun
We often use human consciousness as the ultimate benchmark in our exploration of artificial intelligence.
The human brain is ridiculously intricate. While weighing only about three pounds, it contains roughly 100 billion neurons and 100 trillion connections between them. On top of the sheer numbers, the complexity and ordering of those connections, and the sequences of actions the brain performs naturally, make it even harder to replicate. The human brain is also constantly reorganizing and adapting. It's a beautiful piece of machinery.
We've had millions of years for this powerhouse of a computer to evolve, and now we're trying to do the same with neural networks and machines in a truncated time period. While deep learning algorithms have been around for a while, we're only just now amassing enough data and compute power to turn deep learning from a thought experiment into a real edge.
Think of it this way: when talking about the human brain, we talk about left-brain and right-brain. The theory is that left-brain activities are analytical and methodical, while right-brain activities are creative, free-form, and artistic. We're great at training AI for left-brain activities (obviously with exceptions). In fact, AI is beating us at these left-brain activities because a computer has a much higher input bandwidth than we do, it's less biased, and it can perform 10,000 hours of research by the time you finish this article.
It's tougher to train AI for right-brain tasks. That's where deep learning comes in.
Deep learning is a subset of machine learning that can learn from unstructured or unlabeled data. Instead of asking AI a question, handing it metrics, and letting it chug away, you're letting the AI build its own intuition. Deep learning is a much more faithful representation of the human brain. It uses a hierarchy of neural network layers, stacking linear and non-linear operations, so it can discover patterns on its own and problem-solve across varied data sets and unseen environments.
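To make that concrete, here's a minimal sketch of that hierarchy of linear and non-linear operations. It's NumPy only, with made-up layer sizes and a ReLU non-linearity, purely illustrative rather than any production model:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # the non-linear step between layers

def deep_net(x, weights, biases):
    """Pass the input through alternating linear layers and non-linearities."""
    activation = x
    for W, b in zip(weights[:-1], biases[:-1]):
        activation = relu(activation @ W + b)      # linear map, then non-linearity
    return activation @ weights[-1] + biases[-1]   # final linear layer

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 1]                             # hypothetical layer widths
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

out = deep_net(rng.normal(size=(4, 8)), weights, biases)
print(out.shape)  # (4, 1): four examples in, one prediction each
```

Stack enough of those layers and feed them enough data, and the network starts finding its own features instead of being handed metrics up front.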
When a baby is first learning to walk it might stand up and fall down. It might then take a small stutter step, or maybe a step that's much too far for its little baby body to handle. It will fall, fail, and learn. Fall, fail, and learn. That's very similar to the goal for deep learning or reinforcement learning.
What's missing is the intrinsic reward that keeps humans moving when the extrinsic rewards aren't coming fast enough. AI can beat humans at a lot of games but has struggled with puzzle/platformers because there's not always a clear objective outside of clearing the level.
A relatively new approach (in practice, not in theory) is to train AI around "curiosity"[1]. Curiosity helps the AI overcome that boundary. Curiosity lets humans explore and learn for vast periods of time with no reward in sight, and it looks like it can do the same for computers too!
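Here's a toy sketch of that curiosity idea: the intrinsic reward is just the prediction error of the agent's own forward model. The linear model, dimensions, and learning rate below are hypothetical placeholders (not code from any specific paper or library); the point is that transitions the agent can't yet predict pay out "surprise," and the payout fades as the model learns:

```python
import numpy as np

class CuriosityModule:
    """Rewards the agent for transitions its own forward model can't predict yet."""

    def __init__(self, state_dim, action_dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(state_dim + action_dim, state_dim))
        self.lr = lr

    def intrinsic_reward(self, state, action, next_state):
        x = np.concatenate([state, action])
        predicted = x @ self.W                  # forward model's guess at the next state
        error = next_state - predicted
        self.W += self.lr * np.outer(x, error)  # learn; familiar transitions stop paying out
        return float(np.mean(error ** 2))       # "surprise" = how wrong the guess was

curiosity = CuriosityModule(state_dim=4, action_dim=2)
s, a, s_next = np.zeros(4), np.ones(2), np.full(4, 0.5)
for _ in range(3):
    print(round(curiosity.intrinsic_reward(s, a, s_next), 4))  # shrinks as the model learns
```

Pair that shrinking intrinsic reward with whatever sparse extrinsic reward the game offers, and the agent keeps exploring even when the level gives it nothing to chase.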
I enjoy looking at great disruptive companies and great examples of industries that are primed for disruption.
Think about how many companies have failed due to myopia... RadioShack couldn't understand a future where shopping was done online, and Kodak didn't think digital cameras would replace good ol' film. Blockbuster couldn't foresee a future where people would want movies in their mailboxes, because "part of the joy is seeing all your options!" They didn't even make it long enough to see "Netflix and Chill" become a thing.
The taxi industry had been ripe for disruption well before Uber came along, yet Uber may have mismanaged its opportunity. Taxis now have a chance to innovate back.
To run a taxi in New York, you need a medallion. There are approximately 13,500 medallions in NYC. In 2013, prices peaked at over $1.3 million for a single medallion.
The medallion system has been broken for a long time. The NYC market, in particular, was corrupt: medallion prices were artificially inflated under Bloomberg and de Blasio and built on a debt bubble.
Taxis offered mediocre service, high rates due to artificial caps/greed, and often didn't take credit cards.
They didn't adapt and got disrupted. It's an age-old tale, the same tale as Blockbuster or Kodak: companies thinking linearly in an age of exponential change.
Taxi agencies had the infrastructure to edge out ridesharing by adopting friendlier policies, but they were slow to adopt the app-based convenience that came to define ridesharing.
It's clear that there's increased demand for rides. That increase is likely driven by access in places that previously didn't have enough demand for a full taxi service. Ridesharing means you can have drivers in small towns, rural areas, etc. Almost all of the new demand is being monopolized by ridesharing.
Should it be, though?
Many would argue Uber's model isn't sustainable, and neither are those of many gig-based companies like DoorDash. Uber has a product almost everyone uses, no inventory, and very little staff, and yet, despite "winning" the race, it lost $370 million in 2018 and $4.5 billion in 2017.
They gained market share by offering lower prices (even at a loss). They also incentivized an army of drivers to join based on flexible hours and side-income.
The road to profitability for these companies is uncertain.
Uber's low prices got it here, but prices have slowly risen, and California's AB5 has passed. Uber is claiming an exemption, but it's likely its prices will jump again if it's forced to comply.
Rideshare companies are trying to convince workers that flexible hours are worth the non-employee status, but I don't think that trade-off holds up. Gig workers can't unionize, have little labor protection, and don't receive benefits.
The industry is in a period of massive disruption - but taxis have a chance to fight back. As the gig economy becomes regulated, the already-established system may regain an edge.
In the game of disruption, Uber was shortsighted. In the game of knowing their customers, Taxis were shortsighted.
Will taxis see a resurgence as Uber inevitably hikes its rates? Will autonomous fleets put drivers out of business, as they will for long-haul freight?
When I think about the invention of the wheel, I think about cavemen. But that isn't how it happened.
Lots of significant inventions predated the wheel by thousands of years. For example, woven cloth, rope, baskets, boats, even the flute were all invented before the wheel (and apparently not invented by cavemen).
While simple, the wheel worked well (and still does). Even now, the phrase "reinventing the wheel" is used derogatorily to depict needless or inefficient effort. But how does that compare to sliced bread (which was also a pretty significant invention)?
Despite being a hallmark of innovation, the wheel still took more than 300 years to be used for travel. With a bit more analysis, it makes sense: to use a wheel for travel, it needs an axle, and it needs to be durable and load-bearing, which requires relatively advanced woodworking and engineering.
All the aforementioned products created before the wheel (except for the flute) were necessary for survival. That's why they came first. As new problems arose, so did new solutions.
Necessity is the mother of invention.
Unpacking that phrase is a good reminder that inventions (and innovation) are often solution-centric.
Too many entrepreneurs are attracted to an idea because it sounds cool. They fall in love with their ideas and neglect their ideal customer's actual needs.
If you want to be disruptive, cool isn't enough. Your invention has to be functional, and it has to fix a problem people have (even if they don't know they have it.) The more central the complaint is to their daily lives the better.
Henry Ford famously said: “If I had asked people what they wanted, they would have said faster horses.”
Innovation means thinking about and anticipating wants and future needs.
Your customers may not even need something radically new. Your innovation may be a better application of existing technology or a reframe of best practices.
Uber didn't create a new car; it created a new way to get where you want to go using existing infrastructure, with less friction.
Football season is officially underway! In honor of that, here's a look at each position's composite player!
As you might expect, different sports have a different ratio of ethnicities. For example, you might expect more Pacific Islanders in Rugby or Asians in Badminton.
The same is true for different positions on a football team. Apparently, offensive linemen are more likely to be white while running backs are more likely to be black.
Here is a visualization that shows what happens when you average the top players' faces at various positions.
While you may be thinking "this player must be unstoppable" ... statistically, he's average.
The "composite" NFL player would be the 848th best player in the league. He's not a starter, and he plays on an average team.
We found the same thing with our trading bots. The ones that made it through most filters weren't star performers. They were the average bots that did enough not to fail (but failed to make the list as top performers in any of the categories). The survivors were generalists, not specialists.
In an ideal world, with no roster limits, you'd want the perfect lineup for each granular situation. You'd want to evaluate players on how they perform under pressure, on different downs, against other players, and with different schemes.
That's what technology lets you do with algorithms. You can have a library of systems that communicate with each other ... and you don't even have to pay their salary (but you will need data scientists, researchers, machines, data, alternative data, electricity, disaster recovery, and a testing platform).
You won't find exceptional specialists if your focus is on generalized safety. Generalists are great, but you also have to be able to respond to specific conditions.
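As a toy illustration of that "library of systems" idea (the condition names and responses below are hypothetical, not our actual strategies), you can register specialists for specific conditions and fall back to a generalist when nothing matches:

```python
from typing import Callable, Dict

# Specialist systems, each tuned to one specific condition (names are made up).
specialists: Dict[str, Callable[[dict], str]] = {
    "high_volatility": lambda ctx: "cut position size",
    "trending":        lambda ctx: "ride the momentum",
    "range_bound":     lambda ctx: "fade the extremes",
}

def generalist(ctx: dict) -> str:
    # The "composite player": does enough not to fail, excels at nothing.
    return "hold and wait"

def dispatch(ctx: dict) -> str:
    """Route current conditions to a matching specialist, else use the generalist."""
    for condition, strategy in specialists.items():
        if ctx.get(condition):
            return strategy(ctx)
    return generalist(ctx)

print(dispatch({"trending": True}))  # ride the momentum
print(dispatch({}))                  # hold and wait
```

The generalist keeps you safe when nothing matches; the specialists are what let you respond to the granular situations described above.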