Web/Tech

  • Top 10 Most Overhyped Technologies (From 2008)

    Just because something is overhyped doesn't mean it's bad. Gartner's hype cycle is a great example of this. Every technology goes through a peak of inflated expectations and a trough of disillusionment, regardless of whether it ends up a success or a failure. Sometimes a fad is more than a fad.


    Humans are pretty bad at exponential thinking. We're decent at recognizing inflection points, but we're very bad at picking the winners and losers of these regime changes.

     


     

    There are countless examples. Here's a funny one from Maximum PC Magazine in 2008 that shows hype isn't always a sign of mistaken excess.  The list purported to show things that were getting too much attention at the time.  Instead of being a roster of has-beens and failures, many of those things rightfully deserved the attention.

     

    [Image: Maximum PC's 2008 list of the Top 10 Most Overhyped Technologies]

    It's been 14 years since this came out. How did the predictions hold up?

    Facebook has become Meta and is one of the Big Five. Apple has sold more than 2.2 billion iPhones, and the iPhone accounts for more than half of Apple's total revenue. And the list keeps going: multi-GPU video cards, HD, 64-bit computing, downloading movies from the internet …

    It's hard to believe how poorly this image aged. 

    The trend is your friend while it continues. Just because something is overhyped doesn't mean you shouldn't be excited about it.

    Onwards!

  • How Tech Giants Make Their Money

    In 2021, the Big Five – Alphabet (Google), Amazon, Apple, Meta (Facebook), and Microsoft – generated over $1.4 trillion in revenue.

    How did they generate that revenue? We know they sell products … but we also know that we're often the product they sell. 

    Google and Facebook each make a lot of money selling you (or data about you) to advertisers. 

    The image below shows how Alphabet generated its revenue.   The full infographic shows that breakdown for each of the Big Five.

    Click to view the other companies via visualcapitalist

    Apple, Amazon, and Microsoft primarily sell products (like more traditional businesses). On the other hand, almost 98% of Meta's revenue (and 81% of Google's revenue) comes from advertising.

    Unsurprisingly, all five companies saw significant growth during the pandemic. 

    Though the broader economy struggled over the past two years, societal changes continued to push demand for big tech's products and services.

    Will growth continue or slow down? 

    I'm curious what you think.

  • How I Got Started In Artificial Intelligence

    Recently, I've had several people ask about how I got into AI. 

    There are a couple of different answers, but I shot a video to go through the main points. 

     

    Click here for a transcript

    You could argue that I got my start in AI with my most recent company – Capitalogix – which started almost 20 years ago. You could also say that my previous company – IntellAgent Control – was an early AI company, and that's where I got my start.  By today's standards, the technology we used back then was too simple to call AI … but at the time, we were on the cutting edge.

    You could go further back and say it started when I became the first lawyer in my firm to use a computer, and I fell in love with technology. 

    As I look back, I've spent my whole life on this path.  My fascination with making better decisions, taking smarter actions, and a commitment to getting better results probably started when I was two years old (because of the incident discussed in the video).

    Ultimately, the starting point is irrelevant. Looking back, it seems inevitable. The decisions I made, the people I met, and my experiences … they all led me here.

    However, at any point in the journey, if you asked, "Is this where you thought you'd end up?" I doubt that I'd have said yes. 

    I've always been fascinated by what makes people successful and how to become more efficient and effective. In a sense, that's what AI does. It's a capability amplifier. 

    When I switched from being a corporate securities lawyer to an entrepreneur, I intended to go down that path. 

    Artificial Intelligence happened to be the best vehicle I found to do that. It made sense then, and it makes sense now.

    I wouldn't have it any other way. 

    Onwards!

  • A Few Graphs On The State Of AI

    Every year, Stanford puts out an AI Index with a massive amount of data attempting to sum up the current state of AI. 

    It's 190 pages detailing where research is going, covering current specs, ethics, policy, and more.

    It is super nerdy … yet, it's probably worth a skim. 

    Here are a few things that caught my eye and might help set some high-level context for you. 

    Investments In AI 

    A bar chart of global corporate investment in AI by investment activity, 2013–2021

    via AI Index 2022

    In 2021, private investments in AI totaled over $93 billion – roughly double the amount invested in 2020. However, fewer companies received that money. The number of companies receiving funding dropped from 1,051 in 2019 to 746 in 2021.

    At extremes, putting greater resources in fewer hands increases the danger of monopolies.  But we are early in the game, and it's reasonable to interpret this consolidation as separating the wheat from the chaff. As the industry matures, we're seeing a drop-off similar to the one that followed the web's early burst of exponential growth.

    With investment increasing, and the number of companies consolidating, you can expect to see massive improvements in the state of AI over the next few years.

    We knew that already – but following the money is a great way to identify a trend. 

    Increased regulation is another trend you should expect as AI matures and proliferates.

    Ethical AI 

    Charts: the number of AI-related bills passed into law in 25 select countries (2016–2021), and the number of AI-related policy papers by U.S.-based organizations, by topic (2021)

    via AI Index 2022

    Research on the ethics of AI is becoming much more widespread – and beyond influencing other research, it is also a catalyst for new laws.

    AI's academic and philosophical implications are being taken much more seriously across the board. Many people recognize that AI has the potential to impact the world in unprecedented ways.  As a result, its promise and peril are under constant scrutiny.

    The adoption of AI might seem slow … but like electricity (or the internet), it only seems slow until it's suddenly ubiquitous.

    As you find AI in more domains, the ethics of its use becomes a more pressing concern. There is a lot of potential for abuse of technologies like facial recognition and deepfakes.  Likewise, people worry about mistakes, judgment, and who's liable for errors in technologies like self-driving cars.

    Luckily, you have many of the world's greatest minds working on the subject – including the Hastings Center.  

    Many factors contribute to the speed of AI's maturation and adoption.  Here are three of the obvious reasons. First, hardware and software are getting better.  Second, we have access to more and better data than ever before.  And third, more people are actively seeking to leverage these capabilities for their benefit.

    Technical Improvements

    via AI Index 2022

    Top-performing hardware systems can reach baseline levels of performance in task categories like recommendation, lightweight object detection, image classification, and language processing in under a minute.

    Not only that, but the cost to train systems is also decreasing. By one measure, training costs for image classification systems have dropped by a factor of 223 since 2017. 
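
    To put that "factor of 223" in perspective, here's a quick back-of-the-envelope conversion to an implied annual rate. The four-year window (2017–2021) is my assumption, purely for illustration.

    ```python
    # Implied average annual decline if training cost fell ~223x over ~4 years.
    factor, years = 223, 4
    annual = factor ** (1 / years)
    print(f"~{annual:.1f}x cheaper per year (~{1 - 1 / annual:.0%} annual cost drop)")
    # -> ~3.9x cheaper per year (~74% annual cost drop)
    ```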

    When people think of advancements in AI, they often think of the humanization of technology. While that may eventually happen, most of the progress in AI comes from more practical improvements and applications. Think of these as discrete capabilities (like individual Lego blocks) that help you do something better than before.  These capabilities are easily stacked to create prototypes that do more.  Prototypes mature into products when the capabilities are robust and reliable enough to allow new users to achieve desired results.  The next stage happens when the capabilities mature to the point that people use them as the foundation or platform to do a whole new class of things.
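
    To make the "Lego block" idea concrete, here is a minimal, purely illustrative Python sketch of stacking discrete capabilities into a larger pipeline. The capability names and outputs are hypothetical placeholders, not real models.

    ```python
    from typing import Callable, List

    # A "capability" is a discrete Lego block: a function that enriches some data.
    Capability = Callable[[dict], dict]

    def detect_objects(frame: dict) -> dict:
        # Hypothetical stand-in for an object-detection capability.
        frame["objects"] = ["car", "pedestrian"]
        return frame

    def classify_scene(frame: dict) -> dict:
        # Hypothetical stand-in for an image-classification capability.
        frame["scene"] = "intersection"
        return frame

    def summarize(frame: dict) -> dict:
        # Hypothetical stand-in for a language capability that reports the result.
        frame["summary"] = f"{frame['scene']}: {', '.join(frame['objects'])}"
        return frame

    def stack(capabilities: List[Capability]) -> Capability:
        """Compose individual capabilities into a prototype pipeline."""
        def pipeline(data: dict) -> dict:
            for capability in capabilities:
                data = capability(data)
            return data
        return pipeline

    prototype = stack([detect_objects, classify_scene, summarize])
    print(prototype({"frame_id": 1})["summary"])  # -> "intersection: car, pedestrian"
    ```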

    We're past the trough of disillusionment and are on the slope of enlightenment.

    Practical use cases abound – meaning these technologies aren't only for giant companies anymore.

    AI is ready for you to use.

    If I think of a seasonal metaphor, it is "springtime" for AI (a time of rapid growth).  But not for you unless you plant the seeds, water them, and start to build your capabilities to understand and use what sprouts.

    As a reminder, it isn't really about the AI … it is about understanding the results you want, the competitive advantages you need, and the data you're feeding it (or getting from it) so that you know whether something is working.

    You've probably heard the phrase "garbage in, garbage out."  It is especially true with AI. Top results across technical benchmarks have increasingly relied on extra training data (for combinatorial and dimensional reasons). That extra data also compounds insights, allowing systems to continue learning and growing.  As of 2021, the state-of-the-art systems on 9 of the 10 benchmarks in the report were trained with extra data.

    To read more of my thoughts about these topics, you can check out this article on data and this article on alternative datasets.

    Conclusion

    Artificial Intelligence capabilities are becoming much more robust and more able to transfer their learnings to new domains. They're taking in broader data sets and producing better results (while taking less investment to do so). 

    It isn't a question of "If" … it is a question of "when." 

    AI is exciting and inevitable!

    Let me know if you have questions or comments.

  • Will Robots Take Your Job?

    The fear of a robot-dominated future is mounting … but is there a basis for that fear?

    It's a common trope in film, but as we all know, media is meant to capture attention – not emulate reality. 

    Michael Osborne and Carl Frey, from Oxford University, calculated how susceptible various jobs are to automation. They based their results on nine key skills:

    • social perceptiveness
    • negotiation
    • persuasion
    • assisting and caring for others
    • originality
    • fine arts
    • finger dexterity
    • manual dexterity
    • and the need to work in a cramped area


    via Michael Osborne & Carl Frey (Click For A Comprehensive Infographic)

    There are various statistics about the rate at which robots will take jobs. Many expect that ~50% of current jobs will be automated by 2035.  It turns out that statistic comes from Michael and Carl, and the actual numbers were 47% by 2034.[1]

    Realize that the statistic actually refers to the risk of those jobs being automated. That number doesn't take into account the realities of cost, regulation, politics, social pressure, preference, or the actual work and progress necessary to automate something – so it's unlikely the full 47% will be realized.

     


    via The Economist
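
    For intuition only, here is a hypothetical Python sketch of how a job's automation susceptibility might be scored against the nine bottleneck skills listed above. This is not Osborne and Frey's actual model; the scoring rule and example profiles are invented purely for illustration.

    ```python
    # Toy heuristic: the more a job relies on the nine "bottleneck" skills,
    # the lower its automation risk. Skills, scores, and profiles are invented.

    BOTTLENECK_SKILLS = [
        "social_perceptiveness", "negotiation", "persuasion",
        "assisting_and_caring", "originality", "fine_arts",
        "finger_dexterity", "manual_dexterity", "cramped_work_space",
    ]

    def automation_risk(skill_levels: dict) -> float:
        """Return a 0..1 risk score; skill_levels maps skill -> 0..1 reliance."""
        reliance = sum(skill_levels.get(s, 0.0) for s in BOTTLENECK_SKILLS)
        return round(1.0 - reliance / len(BOTTLENECK_SKILLS), 2)

    # Invented example profiles (not real occupational data):
    telemarketer = {s: 0.1 for s in BOTTLENECK_SKILLS}
    telemarketer["persuasion"] = 0.6
    therapist = {s: 0.7 for s in BOTTLENECK_SKILLS}

    print(automation_risk(telemarketer))  # ~0.84 -> higher risk
    print(automation_risk(therapist))     # ~0.30 -> lower risk
    ```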

    Nonetheless, many use that quote to point toward a dystopian future of joblessness and an increasing lack of middle-class mobility.  

    Mr. Frey isn't a proponent of that belief … and neither am I.  

    Automation and innovation free us to focus on what matters most (or what can create the most value). The goal is not to have machines let us be fat, dumb, and lazy … it is to free us to focus on bigger and better things.

    Industrialization created short-term strife – but vastly increased the economic pie over the long term. So did electricity and the internet. It's likely that future automation will have similar effects, but it's possible to minimize the pain and potential negative impacts if we learn from previous iterations of this cycle. The fact that we're so far along technologically in comparison to previous revolutions means we're in a better position to proactively handle the transition periods.

    New tech comes with both “promise” and “peril.” We must manage the short-term consequences of new tech – because the tech itself is inevitable. With that said, by embracing innovation, we can make sure it is a boon to the middle class (and to all of society) rather than the bane of its existence.

    Throughout history, technology has always created more jobs than it has destroyed.

    Progress means the restructuring of society’s norms … not the destruction of society.

    When we first started using technology, progress allowed humans to stop acting like robots (think farming and manufacturing). As technology has improved, we now have "robots" that seem to act more like humans. They can play chess, shoot a basketball, etc.

    The truth is that humans didn’t act like robots. They did what they had to do to survive. As technology improves, we look back and have trouble imagining a time when humans had to do those things. Technology often focuses on the most pressing “constraint” or “pain.” It isn’t getting more human; it is simply getting more capable … which frees us to ascend as well.

    There are many aspects of humanity that robots can't yet replace. But as we move forward, technology will continue to free us to be more human (which I assume means to be more creative, more caring, more empathetic, and more original).

    Doom and gloom sell. It's much easier to convince people something's going to be painful than amazing (because we're creatures of habit, and our monkey brains fear pain much more than they enjoy pleasure).

    Our attitudes and actions play a pivotal role in how the world impacts us.

    We are positioned not only to survive the revolution but to take advantage of it.

    AI is a gold rush, but you don't have to be a miner to strike it rich. You can provide the picks and shovels, the amenities, or a map that helps people find treasures.

    Onwards!

    _________________

    [1] Frey, Carl & Osborne, Michael. (2013). The Future of Employment: How Susceptible Are Jobs to Computerisation?

  • Batteries Not Included

    Mercedes-Benz might have a sense of humor.

    They lent Saturday Night Live a C-Class for a parody TV commercial that makes fun of battery-powered cars.

    The 2-minute video, starring Julia Louis-Dreyfus, shows a car that runs on AA batteries (which is why the car is called the Mercedes AA Class). 

    Here is the video.


    via SNL

    One of the highlighted features, the "Auto-Dump," helps you discharge worn-out batteries … all 9,648 of them.

    The video ends with the disclaimer: "Batteries not included."

    I hope you had a fun April Fool's Day.

  • Top 20 Internet Giants

    For most of my life, I've been a tech early adopter. 

    Here are some snippets from that journey. I fell in love with the Mac 128 in the 1980s. My frustration with the limitations of floppies caused me to fly across the country to get one of the earliest 20 MB hard drives (which I didn't know how I would ever fill up). Much to the consternation of those who thought only secretaries should be seen typing, I was one of the first lawyers to use a computer to do work. I waited in lines to grab Palm Pilots and cool phones before smartphones became a thing. And, somehow, I don't enjoy setting up my computer anymore (OK, I do – but not like I did before). 

    A lot has changed, while much stays the same.

    In the late 90s, I was obsessed with the early web scene. I spoke at computer events like Comdex and MacWorld, and I was able to see and identify many of the companies that would become major players. Many of those "major players" ballooned during the dot-com bubble, then disappeared.

    I've watched that cycle play itself out several times as the landscape and players changed and evolved.

    There is a chart that captures a lot of those changes by listing the 20 Internet Giants that have ruled the web since 1998.

    Take a look. 

    The 20 Internet Giants That Rule the Web (via visualcapitalist)

    Humans are very good at recognizing major turning points. However, they're often much worse than they would believe at understanding the implications of the changes they so easily predicted.

    Who would have guessed that AOL would become almost wholly irrelevant? Or that Yahoo would make so many horrible decisions and still last to 2022?

    In the early days of the internet, most of the leaders were aggregators and search engines. Now we have a much broader set of influencers. The top 20 players in the space are also playing much larger games than their 1998 predecessors. Most of the leaders are platforms that help other products succeed as well. 

    I'm curious to see what names are added to the list in 5 years. 

    Who do you believe we will see there in 2027?

  • Changing the Course of History

    A little over a week ago, a deepfake of Ukrainian President Volodymyr Zelenskyy was used to try to convince Ukraine's soldiers to lay down their arms and surrender to Russia.  On top of being shared on social media, the video was placed by hackers onto news sites and a TV ticker as well.

    While it isn't definitively known that Russia did this – there's a long history of Russian cyberwarfare, including many instances of media manipulation.

    Luckily, while the lip-sync was okay in this video, several cues helped us know it was fake. 

    Unfortunately, this is only the tip of the iceberg.  Many deepfakes aren't as easy to discern.  Consequently, as we fight wars (both physical and cultural), manipulated videos will increasingly alter both perceptions and reality. 

    Even when proven to be fake, the damage can persist.  Some people might believe it anyway … while others may begin distrusting all videos from leaders as potential misinformation. 

    That being said, not all deepfakes are malicious, and the potential for the technology is attractive.  Production companies are already using it to splice actors who have aged (or died) into movie scenes.  Deepfake technology can also let a celebrity license their likeness without spending the time filming the intended finished product.

    Deepfake technology also allows us to create glimpses into potential pasts or futures.  For example, on July 20th, 1969, Neil Armstrong and Buzz Aldrin landed safely on the moon, and then returned safely to Earth.  What if they hadn't?  MIT recently created a deepfake of a speech Nixon's speechwriter William Safire wrote during the Apollo 11 mission in case of disaster.  The whole video is worth watching, but the "fake history" speech starts around the 4:20 mark.

    "Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace." – Nixon's Apollo 11 Disaster Speech

     

    MIT via In Event Of Moon Disaster

    Conclusion

    “Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped.” – Orwell, 1984

    In an ideal world, history would be objective: facts about what happened, unencumbered by the bias of society, the victor, the narrator, etc.  On some level, however, history is written by the winners.  Think about it … perceived "truth" is shaped by the biases and perspectives of the chronicler.

    Consequently, history (as we know it) is subjective.  The narrative shifts to support the needs of the society that's reporting it. 

    The Cold War with the Soviet Union was a great example.  During the war, immediately thereafter, and even today, the interpretation of what transpired has repeatedly changed (both here and there).  The truth is that we are uncertain about what we are certain about.

    That is just one example – to a certain degree, we can see this type of phenomenon everywhere.  Yes, we're even seeing it again with Russia.

    But it runs deeper than cyberwarfare.  News stations color the stories they tell based on whether they're red or blue, and the internet is quick to jump on a bandwagon even if the information is hearsay.  The goal is attention rather than truth.

    Media disinformation is more dangerous than ever.  Alternative history can only be called that when it's discernible from the truth … and unfortunately, we're prone to look for information that already fits our biases. 

    As deepfakes get better, we'll likely get better at detecting them.  But it's a cat-and-mouse game with no end in sight.  Signaling Theory posits that signalers evolve to become better at manipulating receivers, while receivers become more resistant to manipulation. 

    I'm excited about the possibilities of technology, even though new capabilities present us with both promise and peril. 

    Meanwhile, "Change" and "Human Nature" remain constant.

    And so we go.

  • A Look at Codebases

    When I was a child, NASA got to the Moon with computers much less sophisticated than those we now keep in our pockets.

    At that time, when somebody used the term "computers," they were probably referring to people doing math.  The idea that businesses or individuals would use computing devices, as we do now, was far-fetched science fiction.

    Recently, I shared an article on the growing amount of "compute" used in machine learning.  We showed that the amount of compute used in machine learning has doubled every six months since 2010, with today's largest models using datasets of up to 1,900,000,000,000 (1.9 trillion) data points.
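
    As a quick back-of-the-envelope illustration of what "doubling every six months" compounds to, here is a tiny Python calculation. The 2010 baseline is normalized to 1, so the outputs are growth multiples rather than real compute figures.

    ```python
    # Doubling every 6 months means compute grows by a factor of 2**(2 * years).
    for years in (2, 6, 12):
        factor = 2 ** (2 * years)
        print(f"After {years:>2} years: ~{factor:,.0f}x the 2010 baseline")

    # After  2 years: ~16x the 2010 baseline
    # After  6 years: ~4,096x the 2010 baseline
    # After 12 years: ~16,777,216x the 2010 baseline
    ```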

    This week, I want to take a look at lines of code.  Think of that as a loose proxy showing how sophisticated software is becoming.

     

    via informationisbeautiful

    As you go through the chart, you'll see that in the early 2000s, software topped out at approximately twenty-five million lines of code.  Meanwhile, today, the average car uses one hundred million lines, and Google uses two billion lines of code across its internet services.

    For context, if you count DNA as code, the human genome has 3.3 billion lines of code.  So, while technology has increased massively – we're still not close to emulating the complexity of humanity. 

    Another thing to consider is that when computers had tighter memory constraints, coders had to be deliberate about how they used each line of code or variable.  They found hacks and workarounds to make a lot out of a little.

    However, with an abundance of memory and processing power, software can get bloated as lazy (or lesser) programmers get by with inefficient code.  Consequently, not all the increase in size results from increasing complexity – some of it is the result of lackadaisical programming or more forgiving development platforms.

    Better-managed products consider not only whether the code works as intended, but also whether it uses resources reasonably.

    In our internal development, we look to build modular code that allows us to re-use equations, techniques, and resources.  We look at our platform as a collection of evolving components. 

    As the cost and practicality of bigger systems become more manageable, we can use our intellectual property assets differently than before. 

    For example, a "trading system" doesn't have to trade profitably to be valuable anymore.  It can be used as a "sensor" that generates useful information for other parts of the system. It helps us move closer to what I like to call digital omniscience.
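
    Here is a simplified, hypothetical Python sketch of that idea: a trading system treated as one modular component whose output is consumed as a signal by a higher-level layer. The component names and decision logic are invented for illustration – this is not our actual platform.

    ```python
    from dataclasses import dataclass
    from typing import List, Protocol

    @dataclass
    class Signal:
        name: str
        value: float  # e.g., -1 (bearish) to +1 (bullish), or a raw measurement

    class Component(Protocol):
        def emit(self, prices: List[float]) -> Signal: ...

    class MomentumSystem:
        """A 'trading system' reused purely as a sensor: it doesn't place trades,
        it just reports what it sees."""
        def emit(self, prices: List[float]) -> Signal:
            direction = 1.0 if prices[-1] > prices[0] else -1.0
            return Signal("momentum", direction)

    class VolatilitySystem:
        def emit(self, prices: List[float]) -> Signal:
            moves = [abs(b - a) for a, b in zip(prices, prices[1:])]
            return Signal("volatility", sum(moves) / len(moves))

    def decide(components: List[Component], prices: List[float]) -> str:
        """A higher layer of the framework consumes the sensors' signals."""
        signals = {s.name: s.value for s in (c.emit(prices) for c in components)}
        if signals["momentum"] > 0 and signals["volatility"] < 2.0:
            return "risk-on"
        return "risk-off"

    print(decide([MomentumSystem(), VolatilitySystem()], [100, 101, 103, 102, 105]))
    ```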

    As a result of increased capabilities and capacities, we can use older and less capable components to inform better decision-making.

    In the past, computing constraints limited us to using only our most recent system at the highest layer of our framework.

    We now have more ways to win.

    But, bigger isn't always better – and applying constraints can encourage creativity.

    Nonetheless, as technology continues to skyrocket, so will the applications and our expectations about what they can do for us.

    We live in exciting times … Onwards!