Technology is a massive differentiator in today's competitive landscape.
Sorting through predictions of which new technologies are going to impact the world and which are going to fizzle out can be an overwhelming task. I look forward to Gartner's report each year as a benchmark to compare reality against.
Last year, Gartner reported Deep Learning and Biochips were at the top of the hype cycle - in the "Peak of Inflated Expectations." While I'm excited about both technologies, there was certainly more buzz than actual progress in those spaces last year. Excitement almost always outruns reality when technologies gain mainstream appeal.
What's a "Hype Cycle"?
As technology advances, it's human nature to get excited about the possibilities - and to get disappointed when those expectations aren't met.
At its core, the Hype Cycle tells us where a technology sits in its lifecycle and how long it will likely take to reach maturity. It attempts to tell us which technologies will survive the hype and have the potential to become a part of our daily life.
Gartner's Hype Cycle Report is a considered analysis of market excitement, maturity, and the benefit of various technologies. It aggregates data and distills more than 2,000 technologies into a succinct and contextually understandable snapshot of where various emerging technologies sit in their hype cycle.
The cycle itself has five phases:
Innovation Trigger (a technology breakthrough generates early proof-of-concept stories and media interest),
Peak of Inflated Expectations (success stories through early publicity),
Trough of Disillusionment (waning interest),
Slope of Enlightenment (2nd- and 3rd-generation products appear), and
Plateau of Productivity (mainstream adoption starts).
Understanding this hype cycle framework enables you to ask important questions like "How will these technologies impact my business?" and "Which technologies can I trust to stay relevant in 5 years?"
That being said - it's worth acknowledging that the hype cycle can't predict which technologies will survive the trough of disillusionment and which ones will fade into obscurity.
What's exciting this year?
It's worth noting that in this edition of the hype cycle, Gartner shifted towards introducing new technologies at the expense of technologies that would normally persist through multiple iterations of the cycle; 21 new technologies were added to the list. For comparison, here's my article from last year, and here's my article from 2015. Click on the chart below to see a larger version of this year's Hype Cycle.
This year's ~30 key technologies were selected from over 2,000 technologies and bucketed into five major trends:
Sensing and Mobility represents technologies that are gaining a more detailed awareness of the world around them - like 3D sensing cameras, the next iteration of autonomous driving, and drones. Improvements in sensor technology, and their communication through the IoT, are leading to more data and more insight.
Augmented Human builds on the "Do It Yourself Biohacking" trend from last year. It represents technologies that improve both the cognitive and physical abilities of humans - technologies like biochips, augmented intelligence, and robotic skin. The future is bringing implants that extend humans past their perceived limits and increase our understanding of our bodies: biochips with the potential to detect diseases, synthetic muscles, and neural implants. Many of my friends believe this realm will extend human lifespans.
Postclassical Compute and Comms represents new architectures for classical computing and communications technologies, like 5G or nanotech - resulting in faster CPUs, denser memory, and increased throughput. Innovation is commonly thought of as new technologies, but better versions of existing technologies can provide just as much value - and disrupt industries in much the same way.
Digital Ecosystems are platforms that connect various types of "actors," creating seamless communication between companies, people, and APIs. This enables more efficient decentralized organizations and allows constant adoption of new evolutions in technology. Examples include the decentralized web, synthetic data, and decentralized autonomous organizations.
My wheelhouse, Advanced AI and Analytics, is an acknowledgment of new classes of algorithms and data science that are leading to new capabilities, deeper insights, and adaptive AI. The future of this space involves more accurate predictions and recommendations from smaller data sets. More signal. Less noise.
Looking past the overarching trends of this year, it's also fun to look at what technologies are just starting their hype cycle.
Flying Autonomous Vehicles can be used as taxis, but also to transport other things - medical supplies, food deliveries, etc. Amazon and Uber are likely excited about this development - expect movement here in the next couple of years.
Decentralized Web builds on the same arguments blockchain makes against traditional currencies. Because the mainstream web is dominated by massive, corporate-controlled platforms like Facebook and Google, the decentralized web movement strives to enable free speech and broaden access for users whose internet access is strictly regulated.
Transfer Learning refers to the ability of an AI to solve one problem and apply that "lesson" to a different but tangential problem. When AI becomes able to generalize knowledge more abstractly, you will see a massive spike in utilization.
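As a concrete (and hedged) illustration, here's a minimal sketch of the most common form of transfer learning in practice today - reusing a network pretrained on one problem as the starting point for a tangential one. The model choice and the 10-class target task are illustrative assumptions of mine, not anything from Gartner's report:

```python
import torch.nn as nn
import torchvision.models as models

# Load a network whose weights already encode general visual "lessons"
# learned from ImageNet classification.
model = models.resnet18(pretrained=True)

# Freeze the pretrained layers so that knowledge is preserved.
for param in model.parameters():
    param.requires_grad = False

# Swap in a new final layer for a different but tangential problem -
# say, 10 categories of medical scans instead of 1,000 ImageNet classes.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new layer gets trained; everything else transfers for free.
# optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

The pretrained layers carry over the general "lessons"; only the small new layer has to learn the new problem - which is why transfer learning can work with far less data.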
Augmented Intelligence complements humans instead of replacing them with robots. To be clear - augmented intelligence is a subset of AI, but a different perspective on (and approach to) its adoption.
AI has been around since the '60s, but technological advancement and increased data mean we are now in an AI spring after decades of stagnation.
Many of these technologies have been hyped for years - but the hype cycle is different from the adoption cycle. We often overestimate what we can do in a year and underestimate what we can do in ten.
Which technologies do you think will survive the hype?
It's interesting to look at what these predictions strategically got right compared to where they were tactically off.
While not all predictions are created equal, it seems we have a better idea of what we want than of how to accomplish it.
The farther the horizon, the more guesswork is involved. Compared to the prior video on predictions from the mid-1900s, this video on the internet from 1995 seems downright prophetic.
Dread of a robot-dominated future is mounting. Is there a basis for it?
Michael Osborne and Carl Frey, of Oxford University, calculated how susceptible various jobs are to automation, basing their results on nine key skills.
There are various statistics about the rate at which robots are taking jobs. Many expect that ~50% of current jobs will be automated by 2035. It turns out that statistic comes from Michael and Carl - and the actual numbers were 47% by 2034 [1].
The statistic actually refers to the risk of those jobs being automated. That 47% number doesn't take into account cost, regulatory, political, or social pressures - so it's unlikely the full 47% will be realized.
Many use that statistic to fear-monger about future joblessness and shrinking middle-class mobility, but Mr. Frey isn't a proponent of that belief, and neither am I.
Industrialization created short-term strife but vastly increased the economic pie over the long term. It's likely that future automation will have similar effects if managed correctly. It's possible to truncate the pain if we learn from previous iterations of this cycle. The fact that we're so far along technologically compared to previous revolutions means we're in a better position to proactively handle the transition period.
If we fail to manage the short-term consequences of new tech, it will lead to unrest. And if unrest and opposition to automation persist, the situation will likely be exacerbated. Only by embracing innovation can we make sure automation is a boon to the middle class and not the bane of its existence.
Throughout history, technology has created more jobs than it has destroyed - and even if that doesn't feel true at the moment, it doesn't mean it won't be. I often compare the AI revolution to the introduction of electricity. Electricity was a massive disruptor and put many people out of work, but it was a fantastic benefit to society.
Doom and gloom sell. It's much easier to convince people something's going to be painful than amazing because we're creatures of habit and our monkey brains fear pain much more than they enjoy pleasure.
Our attitudes and actions play a pivotal role in how the world impacts us. Pragmatically, we have various institutions in place to make the transition as painless as possible - not painless, mind you, but as painless as possible.
Onwards!
_________________
[1] Frey, Carl Benedikt, & Osborne, Michael A. (2013). The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford Martin School, University of Oxford.
Boston Dynamics just released a video of their Atlas robot doing an impressive gymnastics routine. Comparing it to their videos from 2009 shows how insane the progress is.
We see plenty of fear about Skynet-esque advanced AI ... but Terminator-style robots may be a more immediate threat.
Boston Dynamics makes robotics look cute, but there's both promise and peril here. For example, Turkey is reportedly deploying autonomous killer drones for use in Syria.
Any tool can be used for good or evil - there's no inherent morality in a tool - but we're certainly good at finding ways to push the boundaries of its use.
“Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It's really an attempt to understand human intelligence and human cognition.” —Sebastian Thrun
We often use human consciousness as the ultimate benchmark for our explorations in artificial intelligence.
The human brain is ridiculously intricate. While weighing only three pounds, it contains about 100 billion neurons and 100 trillion connections between them. On top of that sheer numerical complexity, the order of the connections and the sequences of actions the brain performs naturally make it even harder to replicate. The human brain is also constantly reorganizing and adapting. It's a beautiful piece of machinery.
Evolution had millions of years to create this powerhouse of a computer, and now we're trying to do the same with neural networks and machines in a truncated time period. While deep learning algorithms have been around for a while, we're only now developing enough data and enough compute power to turn deep learning from a thought experiment into a real edge.
Think of it this way: when talking about the human brain, we talk about left-brain and right-brain. The theory is that left-brain activities are analytical and methodical, and right-brain activities are creative, free-form, and artistic. We're great at training AI for left-brain activities (obviously with exceptions). In fact, AI is beating us at these left-brain activities because computers have a much higher input bandwidth than we do, they're less biased, and they can perform 10,000 hours of research by the time you finish this article.
It's tougher to train AI for right-brain tasks. That's where deep learning comes in.
Deep learning is a subset of machine learning that can learn from unstructured and unlabeled data, often without explicit supervision. Instead of asking the AI a question, giving it metrics, and letting it chug away, you're letting the AI be intuitive. Deep learning is a much more faithful representation of the human brain: it utilizes a hierarchy of neural network layers, combining linear and non-linear operations, so it can think creatively and problem-solve across varied data sets and unseen environments.
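To make that "hierarchy of layers" idea concrete, here's a minimal sketch of a small deep network in PyTorch - just stacked linear operations with non-linear activations between them. The layer sizes and input data are made-up assumptions purely for illustration:

```python
import torch
import torch.nn as nn

# A "deep" model is just a stack of layers: each Linear step is a
# linear operation, and each ReLU adds the non-linearity that lets
# the network model relationships a straight line can't.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # layer 1: low-level features
    nn.Linear(128, 128), nn.ReLU(),  # layer 2: combinations of features
    nn.Linear(128, 10),              # output: 10 scores/classes
)

# Fabricated input: a batch of 32 examples with 64 features each.
x = torch.randn(32, 64)
print(model(x).shape)  # torch.Size([32, 10])
```

Each added layer lets the network represent patterns the previous layers couldn't - that depth is the "deep" in deep learning.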
When a baby is first learning to walk, it might stand up and fall down. It might then take a small stutter step, or maybe a step that's much too far for its little baby body to handle. It will fall, fail, and learn. Fall, fail, and learn. That's very similar to the goal of deep learning or reinforcement learning.
What's missing is the intrinsic reward that keeps humans moving when the extrinsic rewards aren't coming fast enough. AI can beat humans at a lot of games but has struggled with puzzle/platformers because there's not always a clear objective outside of clearing the level.
A relatively new (in practice, not in theory) approach is to train AI around "curiosity" [1]. Curiosity helps it overcome that boundary. Curiosity lets humans explore and learn for vast periods of time with no reward in sight - and it looks like it can do the same for computers, too!
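For the curious, here's a minimal sketch (in plain Python/NumPy, with a toy four-dimensional world of my own invention - an assumption, not the cited researchers' actual setup) of the core trick in that line of research: give the agent an intrinsic reward equal to its own prediction error, so surprise itself becomes the payoff:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy environment with fixed, learnable dynamics: next_state = A @ state.
# A is orthogonal so states stay bounded.
A, _ = np.linalg.qr(rng.normal(size=(4, 4)))

# The agent's forward model: its own guess at those dynamics.
W = np.zeros((4, 4))

def curiosity_reward(state, next_state, lr=0.5):
    """Intrinsic reward = how badly the agent predicted what happened next."""
    global W
    error = next_state - W @ state
    W += lr * np.outer(error, state)   # learn, so the surprise fades
    return float(np.mean(error ** 2))  # big surprise -> big reward

# As the dynamics become predictable, the curiosity reward decays.
state = rng.normal(size=4)
state /= np.linalg.norm(state)
for step in range(50):
    next_state = A @ state
    r = curiosity_reward(state, next_state)
    if step % 10 == 0:
        print(f"step {step:2d}  curiosity reward = {r:.4f}")
    state = next_state
```

As the agent's forward model learns the dynamics, the reward fades - which is exactly what pushes a curiosity-driven agent to keep seeking out states it can't yet predict.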
Here Are Some Links For Your Weekly Reading - November 17th, 2019
The holiday season is coming fast.
Here are some of the posts that caught my eye recently. Hope you find something interesting.
Lighter Links:
Trading Links: