Film

  • Understanding the Shapes of Stories

    Seemingly complex things are often simpler when understood.

    This applies to many things.

    For example, great writing is diverse and nuanced … but its underlying structure often isn't.

Kurt Vonnegut wrote several "Classics", including Cat's Cradle, Slaughterhouse-Five, and The Sirens of Titan.

    Despite his great writing and its complexities, he was able to simplify his stories into a few basic narrative shapes.

    Here is a graphic that explains the concept.

Kurt Vonnegut: The Shapes of Stories (graphic)

    Here is a 17-minute video of Vonnegut discussing his theory of the Shape of Stories. You can grasp the basic concepts within the first 7 minutes, but he is witty, and the whole video is worth watching. 

     

You can explore a slightly more elaborate version of his "Shapes of Stories" idea in Vonnegut's rejected master's thesis from the University of Chicago.

    Researchers recently extended Vonnegut's idea by using AI to extract the emotional trajectories of 1,327 stories and discover six core emotional arcs. In case you are curious, here they are.

    • Rags to riches (a rise)
    • Tragedy (a fall)
    • Man in a hole (fall, then a rise)
    • Icarus (rise, then a fall)
    • Cinderella (rise, then a fall, then a rise)
    • Oedipus (fall, then a rise, then a fall)
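For the curious, here is a minimal sketch of how such an emotional arc might be extracted, assuming NLTK with its VADER sentiment scorer and Punkt tokenizer; the chunking choices are mine for illustration, not the researchers' actual pipeline.

```python
# Minimal sketch: trace a story's "emotional arc" with a sentence-level sentiment scorer.
# The windowing below is an illustrative choice, not the method from the 1,327-story study.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("punkt", quiet=True)
nltk.download("vader_lexicon", quiet=True)

def emotional_arc(text: str, n_windows: int = 20) -> list[float]:
    """Score consecutive chunks of a story; the resulting curve is its rough 'shape'."""
    sentences = nltk.sent_tokenize(text)
    scorer = SentimentIntensityAnalyzer()
    window = max(1, len(sentences) // n_windows)
    arc = []
    for i in range(0, len(sentences), window):
        chunk = " ".join(sentences[i:i + window])
        arc.append(scorer.polarity_scores(chunk)["compound"])  # -1 (fall) .. +1 (rise)
    return arc
```

Plot the returned list for a novel and you get a rough curve you can eyeball against "man in a hole," "Cinderella," and the rest.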

     For more on writing from Kurt Vonnegut:

My friend John Raymonds also has a Substack. He just released a great article on the power of storytelling that dives deep into the nature of stories and narrative transportation. Check it out.

    Have a nice week.

  • Digesting a Bigger Future

    We live in a world where technology changes quickly and often, while human nature remains relatively unchanged.

    For most of us, human nature is the key variable.

I suspect Henry Ford focused on that when he said, "Whether you think you can or you think you can't, you're right."

    Henry Ford Quote - Whether You Think You Can

Processing the possibilities of tomorrow is often difficult for humans. Part of the problem is that we're wired to think locally and linearly. It's a monumental task for us to comprehend exponential growth, let alone its implications. For example, consider what happened to seemingly smart and forward-looking companies like Kodak, Blockbuster, and RadioShack.
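To make the linear-versus-exponential gap concrete, here is a trivial back-of-envelope comparison (the numbers are purely illustrative): thirty one-unit steps versus thirty doublings.

```python
# Thirty linear steps vs. thirty doublings -- purely illustrative numbers.
linear = 30 * 1        # take 30 steps of one unit: you end up 30 units away
exponential = 2 ** 30  # double 30 times: you end up over a billion units away

print(f"linear: {linear:,}   exponential: {exponential:,}")
# linear: 30   exponential: 1,073,741,824
```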

     

    The world changes quickly.

    Change is constant. The wheels of innovation and commerce spin ever-faster (whether you're ready for it or not). 

    As a practical matter, it means that you get to choose between the shorter-term pain of trying to keep up … or the longer-term pain of being left behind. Said another way, you have to choose between chaos and nothing. 

    It's hard to keep up – and even harder to stay ahead.

    Personally, I went from being one of the youngest and most tech-savvy people in the room to a not-so-young person close to losing their early-adopter beanie. Sometimes it almost seems like my kids expect me to ask them to set my VCR so it stops flashing 12:00 AM all day.


    My company may not be doing "rocket science", but it's pretty close. We utilize exponential technologies, such as high-performance computing, AI, and machine learning, to amplify intelligence and make data-driven, evidence-based decisions in real-time, all the time. 

    But, as we get "techier," I get less so … and my role gets less technical, over time, too.

    Due to my age, experience, and tendency to be a pioneer, I've been battling technology for decades. 

    Don't get me wrong, technology has always been my friend, and I still love it. But my relationship with it is different now.

    I recognize that there are things that change and things that stay the same. And for me, the things that "stay the same" tend to be more important.

    Paradoxically, the part of me that stays the same can still change and grow – that is how you become more (and a more evolved version) of that thing.

     

    The Bigger Picture

    My father said that not worrying about all the little details helped him see the bigger picture and focus on what was possible.

You don't have to focus on the technical details to predict technology's progress. Anticipating what people will need is a great predictor of what will get built. That means predicting "what" is often easier than predicting "how".

    Why is that often the case? Because technology that solves a problem is more profitable and popular than technology searching for a problem to solve.

    Here's a video from 1974 of Arthur C. Clarke making some remarkably accurate predictions about the future of technology. 

     

    via Australian Broadcasting Corporation

    Artificial Intelligence, quantum computing, augmented reality, neuro-interfaces, and a host of exponential technologies are going to change the face and nature of our lives (and perhaps life itself). Some of these technologies have become inevitabilities … but what they enable is virtually limitless.

    Where do you see this going?

    Onwards.

  • When Worlds Collide: Timeless Wisdom & Evolutionary Technology in Trading with Matthew Piepenburg

Back in 2020, at the height of the pandemic, I had a Zoom meeting with Matthew Piepenburg of Signals Matter. Even though it was a private conversation, there was so much value in it that we decided to share parts of it here.

    While Matt's understanding of markets is based on Macro/Value investing, we use advanced AI and quantitative methods for our approach. 

    As you might expect, there are a lot of differences in how we view the world, decision-making, and the current market environment. Nonetheless, we share a lot of common beliefs as well.   

Our talk explores several interesting areas and concepts. I encourage you to watch it below.

     

    Via YouTube.

    To summarize a couple of the key points, markets are not the economy, and normal market dynamics have been out the window for a long time. In addition, part of why you're seeing increased volatility and noise is that there are so many interventions and artificial inputs to our market system.

    While Matt and I may approach the world with very different lenses, we both believe in "timeless wisdom". 

Ask yourself: what was true yesterday, is true today, and will stay true tomorrow?

    That is part of the reason we focus on emerging technologies and constant innovation … they remain relevant. 

    Something we can both agree on is that if you don't know what your edge is … you don't have one. 

     

If You Don't Know What Your Edge Is, You Don't Have One (via GapingVoid)

    Hope you enjoyed the video.

    Let me know what other topics you'd like to hear more about. 

    Onwards!

  • How Long You Have Left

    We only have a limited time on this earth … and a lot of it is spent on frivolous activities. 

How much time do you think the average millennial spends on their phone … or the average baby boomer spends in front of the TV?

    The answer is a lot.

    Although this chart hasn't been updated recently, it still provides a helpful glimpse of the bigger picture. 

     

How much time we have (infographic) via Anna Vital

    Nine years in front of entertainment devices – another 10.5 years spent working. You get the idea.

    If you have goals you want to accomplish, places you want to go, and lifestyle aspirations to experience, this puts the idea of finding and living your passion into perspective. 

Do you have time to waste?

    VisualCapitalist put together a chart projecting longevity based on 2020 mortality rates.

     

Life Expectancy by Age, via Visual Capitalist
     

According to this calculator, since I'm over 60, I only have about 20 years left. I expect more!

    There are some interesting statistical facts in this; for example, an average American baby boy can expect to live until 74 … but if that boy turns 21, his life expectancy jumps to over 75. 
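That "jump" is just conditional probability at work: expectancy at birth averages in the people who never reach 21, while expectancy at 21 does not. Here is a minimal sketch with made-up one-year death probabilities (not real 2020 life-table data) to show the effect.

```python
# Why life expectancy "jumps" once you survive to 21 -- toy numbers only.
def remaining_years(q: list[float], age: int) -> float:
    """Expected additional years at `age`, given one-year death probabilities q[x]."""
    alive, expected = 1.0, 0.0
    for x in range(age, len(q)):
        alive *= 1.0 - q[x]   # chance of still being alive after year x
        expected += alive     # each survived year adds roughly one expected year
    return expected

# Illustrative (fake) table: some early mortality, rising risk with age.
q = [0.005] + [0.001] * 20 + [0.002] * 40 + [0.05] * 40

at_birth = remaining_years(q, 0)
from_21 = 21 + remaining_years(q, 21)
print(round(at_birth, 1), round(from_21, 1))  # the second number is always a bit higher
```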

While these numbers may seem high, two key considerations suggest they are actually conservative. First, COVID-19 pulled these numbers down because mortality rates spiked in 2020.

Also, remember that these numbers are based on 2020 national averages, which may not match your own circumstances (race, income, location, etc.). They also don't account for expected advances in medicine and technology.

    Ultimately, I believe Purpose is one of the most significant catalysts of longevity. People often die when they retire … not because they're done working, but because they're done striving. 

    If you're not growing, you're dying!

  • Relics Of A Bygone Era …

    The U.S. Treasury is ceasing production of pennies – as they cost more to make than they’re worth.

    According to a 2024 report from the U.S. Mint, we lose $85M a year minting pennies, as they cost 3.69 cents to make. 
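As a quick sanity check on those figures (the cost and loss numbers are from the article; the volume is my back-of-envelope inference), the math implies roughly three billion pennies minted per year.

```python
# Back-of-envelope check on the Mint figures quoted above.
cost_per_penny = 0.0369     # dollars to mint one penny (2024 Mint report)
face_value = 0.01           # dollars
annual_loss = 85_000_000    # dollars lost per year, per the report

loss_per_penny = cost_per_penny - face_value     # about $0.0269 lost per coin
implied_pennies = annual_loss / loss_per_penny   # about 3.2 billion pennies a year
print(f"{implied_pennies:,.0f}")
```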

    That makes the phrase “penny wise and pound foolish” officially passé – at least in America. 

     


     

    Many phrases like this still exist. It’s an interesting example of the power of language. Words take on meaning beyond their original usage … and often remain relevant long after their origin has become irrelevant. 

    For example:

    • Burning the midnight oil means working hard, but it comes from a time before electricity, when you had to use candles and lamps to light a room after dark. 
    • Time to face the music refers to dealing with the consequences of one’s actions, but originates from a time when disgraced military officers had to face a drumline upon discharge.
• More recently, hanging up was what you did when you placed a phone receiver back in its cradle. Now, you can only really find a desk phone in an office, and even then, you don’t need to place the receiver in a cradle to end the call.
    • Put a sock in it comes from the act of putting a sock into the trumpet of a gramophone.
    • And stereotype comes from a type of printing plate commonly used in old-school newspaper publishing. While it still refers to impressions … the origin is lost on the average user of the word.
    • Filming is rarely done on film; footage dates from when film was measured in feet and frames; and you don’t need to stay tuned, because your television no longer needs to be tuned to receive the channels you like.

    Until recently, technologies (and the phrases they spawned) lasted for decades, if not longer. As technology evolves at an ever-accelerating pace, new tools, platforms, and ways of communicating emerge almost daily. With these innovations come fresh slang, buzzwords, and cultural references that often catch on quickly—think “DM me,” “ghosting,” or “cloud computing.” Yet just as rapidly as they rise, many of these terms fade into obscurity, replaced by the next wave of trends. What was once cutting-edge can become outdated in a matter of years, if not months. This cycle of innovation and obsolescence is a hallmark of the modern digital era.

    However, much like these old idioms, the fleeting nature of these technologies and jobs doesn’t mean they lack value or impact. Some expressions endure because they capture something universally human—emotion, conflict, humor—even if the context changes. Similarly, technologies may evolve, but their core functions or purposes often remain. The fax machine gives way to email, and email to instant messaging—but the need for communication is constant.

    This principle also applies to work and tools. While job titles and methods may change, the underlying skills — such as critical thinking, collaboration, and creativity — remain timeless. A carpenter today might use laser-guided saws instead of hand tools, just as a marketer might use data analytics instead of intuition alone, but the essence of their work persists. Innovation reshapes how we do things, not always what we do.

    Just as enduring phrases carry forward old meanings in new settings, so too will jobs, tools, and skills adapt and survive.

    Onwards!

  • The Rise of AI Art and Its Implications

The last time I talked about AI art specifically was in 2022, when DALL-E was just gaining steam. Before that, it was 2019, when AI self-portraits were going viral.

    On both occasions, it still felt like the relative infancy of the technology. I compared it to VR getting another 15 minutes of fame. 

    The images at the time weren’t fantastic, but it was a massive step in AI’s ability to understand and translate text into a coherent graphic response. The algorithms still didn’t really “understand” the meaning of images the way we do, and they were guessing based on what they had seen before – which was much less than today’s algorithms have seen. They were also much worse at interpreting images. As such, when you tried to use AI to recreate an image, there were a lot of hallucinations. The algorithms were essentially a brute-force application of math masquerading as intelligence. 

An Elegant Use of Brute Force (via GapingVoid)

     

    Fortunately, AI imagery has come a long way since then. However, with that improvement comes more ethical concerns. 

    The rise of AI-generated art has sparked a complex and ongoing ethical debate, with compelling arguments on both sides. At the heart of the discussion lies the question of authorship, originality, and the impact of automation on human creativity and labor.

Proponents of AI art argue that it represents a powerful extension of human imagination. Just as past innovations such as photography, digital editing, and sampling in music were initially met with skepticism, advocates argue that AI-generated art is simply the next evolution of the artistic toolkit, and that it democratizes access to artmaking. As a result, those with less skill – or time – can explore new styles, generate concepts, and be creative in a new form. To this end, they see AI not as a threat but as a collaborator: another brush or chisel in an artist’s hand.

    However, critics raise concerns about the ethical implications of AI art, particularly in how these models are trained. Many AI systems are built on vast datasets scraped from the internet, including artwork by human creators who were neither consulted nor compensated, leading to accusations of IP theft. Moreover, they argue it sets a dangerous precedent where creative works can be replicated and commodified without consent or attribution. Lastly, on the idea of democratization, they would argue that art is already accessible to all and that people should be willing to explore skills not only to be good at them but to enjoy them. 

     


     

The most recent trend has been a great example of this argument. The launch of OpenAI’s new image generator, powered by GPT-4o, lets users transform their photos into famous media styles – like Renaissance paintings or Studio Ghibli anime images – which ironically goes against the ethos of Studio Ghibli and Hayao Miyazaki. The studio is known for its commitment to the craft, with carefully animated, hand-drawn scenes. Its films are known for glorifying nature and living in harmony with it. Miyazaki also believes that AI art is disrespectful to the “life” found in human-created art.

     

    “I feel like we are nearing the end times. We humans are losing faith in ourselves.” – Hayao Miyazaki 

     

    I’m a massive fan of AI – and even AI art … but as the technology continues to evolve, society must grapple with how to integrate these tools in ways that honor both progress and the rights of the artists (and people) whose work—and livelihoods—may be at stake.

    What do you think?

  • AI: We’re Not Just Prompts!

    AI’s trajectory isn’t just upward—it’s curving ever steeper. From DeepMind’s groundbreaking models to Flow’s democratization of filmmaking, people are becoming used to how quickly AI technology improves.
     
    Breakneck doesn’t even seem adequate to explain the scale of the movements. Because it isn’t just about the rate of change – even the rate of change of the rate of change is accelerating … and the result is exponential progress.
     
Here is a simple example. Remember when you mocked AI-generated videos on social media for obvious flaws (e.g., six fingers, unnatural blinking or movement)? Over the past few months, AI media quality has improved so much that spotting fakes is now difficult, even for tech-savvy people.
     
    Well, we just took another giant leap.
     
This week, Google’s DeepMind unit released three new core AI models: Imagen for image generation, Lyria for music generation, and Veo 3 for video generation.

    It only takes a quick look at Veo 3 to realize it represents a significant breakthrough in delivering astonishingly realistic videos.

    I’m only including two examples here … but I went down the rabbit hole and came away very impressed.

Take a look. Everything in the clip below may be fake, but the AI is real.

     

    via Jerrod Lew

    The era of effortless, hyper-real content has arrived.
     
    One of the big takeaways from tools like this is that you no longer need content creation talent other than your ideas.
     
    An example of this comes from Google’s new AI filmmaking tool, Flow. 

    What Is Flow?

    What if creating professional-grade videos required no cameras, no crew, and no weeks of editing?
     
    Flow can imagine and create videos just from your ideas. Kind of like telling a friend a story and having them draw or act it out instantly.

    How Does It Work?

    Think of Flow as a giant box of movie Legos. You can bring your own pieces (like pictures or clips) or ask Flow to make new pieces for you. Then, you snap them together to build scenes and clips that look like real movies.

    Why Is This Cool?

    It is becoming easier for almost anyone to create the type of content that only a specialist could produce before. The tool makes it easy in these three ways.

    1. Consistent: The videos stick together well, so your story doesn’t jump around confusingly.
    2. Seamless: It’s easy to add or change things without breaking the flow.
    3. Cinematic: The videos look high-quality — like something you’d see on TV or in theaters.

If you want to play with it, it’s available to Google Ultra subscribers through the Gemini app and Google Labs.

    Ok, but what can it do?

    Redefining “Real”

Don’t skip this next part. It’s what gave me the idea for the post.

To set the stage, imagine you’re watching a video of a person talking. Typically, you think, “This is real — someone actually stood in front of a camera and spoke.” But now computers can make a video that looks and sounds so real, you can’t tell it’s fake.
     
    Anyway, this week, I saw a cool video on social media. At first, I thought it was cool simply because of the idea it expressed. But the video gets even more interesting when you realize how it was created.
     
“Prompt Theory” is a mind-bending exploration of artificial intelligence brought to life. The premise examines what happens when AI-generated characters refuse to believe they’re not real. From stunning visuals to synced audio, this video showcases AI’s new immersive storytelling power while examining some pretty trippy concepts.
     

    Hashem Al-Ghaili via X

    I predict you will see a massive influx of AI-generated content flooding social media using tools like this. 

    Meanwhile, digital “people” with likenesses and internal objectives are increasingly going to become persistent and gain the ability to influence our world. This is inevitable. Yet, it’s still a little disorienting to think about.
     
    As digital agents gain persistence and purpose, we face profound questions about reality, ethics, and human creativity.
     
    And that is only the beginning!
     
Perhaps we are living in a simulation?

  • What Do You Do When AI is Better Than You?

When Beethoven was at the peak of his career, several of his contemporaries struggled with the realization that they might never create anything that lived up to his work. Brahms, for example, took 21 years to complete his first symphony. Schubert is quoted as saying, “Who can ever do anything after Beethoven?”

    We’re seeing the same effect as a result of Artificial Intelligence. 

     

    A line chart showing AI vs human performance in various technical tasks

    via visualcapitalist

    The gap between human and machine reasoning is narrowing fast. I remember when AlphaGo, an AI program created by Google’s DeepMind, finally got better than humanity at Go. It was a big deal, and it prompted us to think seriously about competition in a post-AI world. If you can’t be the best, is it still worth competing? To one former Go champion, it wasn’t. He retired after “declaring AI invincible.” 

    Over the past few years, AI systems have advanced rapidly, surpassing humans in many more tasks. Much like Beethoven, AI is discouraging competition. 

    Was Lee Sedol, the former Go champ, wrong to quit? It’s hard to say … but as AI gets better at more activities, it’s an issue we’ll encounter more often.

    There’s always someone (or something) better. Taking a purely utilitarian approach isn’t always necessary or productive. It often helps to take a longer view of the issue.

    Sometimes, it's okay to just do something because you enjoy doing it.

    Sometimes you have to “embrace the suck” and be willing to put in the work to learn, grow, and progress.

    Sometimes, you need to invest effort in understanding a process better to determine whether others (or automation) are achieving the right results.

    The most successful people I know don’t try to avoid things with powerful potential. Instead, they leverage those things to achieve more and become better.

    I advocate intelligently adopting AI, in part, because I expect the scale of AI’s “wins” will skyrocket. That means I know AI will soon be better than I am at things I do now.

    It doesn’t mean I should give up. It means I have to raise the bar to stay competitive.

    I have another belief that helps here. What if you believed, “The game isn’t over until I win …”? With that belief in place, I won’t let a 2nd place ceiling stop me if something gives me energy. AI may change how I play the game … or even what game I choose to play … but I will still choose to play.

     


     

    What Happens to Human Work When Machines Get Smarter?

    AI is changing the playing field at work, too. 

    As a result, some say that AI-driven job displacement is not a future threat but a present reality.

This past week, several prominent CEOs publicly mandated AI use, marking a shift to an “AI-first” work culture, which prioritizes and integrates AI into the core of an organization’s strategy, operations, and overall culture.

    Here is what I think (and you've probably heard me say this before): 

    At this point, AI won't likely replace you … but someone who uses AI better might.

    Let’s face it, doing more with less is a core goal and strategy in business.

    But that doesn’t mean humans are doomed. There are lots of historical parallels between AI integration and past technological revolutions. If you think about AI as a transformative force, you can hear the echoes of historical shifts that redefined work practices and intellectual labor (like the printing press, the calculator, or the internet).

    We’re seeing significant changes in how we work. Instead of just having a mix of people working from home or the office (a hybrid workplace), we’re moving to a situation where people are working alongside smart computer programs, called AI agents (a hybrid workforce).
     
    The rise of the hybrid workforce signifies a transformative shift in workplace dynamics. Gartner predicts that one-third of generative AI use cases will involve AI agents by 2028.

    In the age of AI, success doesn’t come from battling technology — it comes from embracing our uniquely human powers and building systems that let those powers shine.

    AI is coming – but it doesn’t have to be joy-sucking. Ideally, it should free you up to do MORE of the things that bring you joy, energy, and satisfaction. 

    Onwards!

  • Are Your AI Fears Valid? What Experts Say

    It's no surprise that there is often a disparity between what experts believe and what the average adult feels. It's even more pronounced in industries like AI that have been lambasted by science fiction and popular media.

    Even just a few years ago, many of my advisors and friends told me to avoid using the term "AI" in our materials because they thought people would respond negatively to it. Back then, people expected AI to be artificial and clunky … yet, somehow, it also reminded them of dystopian stories about AI Overlords and Terminators. An incompetent superpower is scary … so is a competent superpower you can't trust!

    As AI integrates more heavily into our everyday lives, people's hopes and concerns are intensifying… but should they be? 

    Pew Research Center surveyed over 5,000 adults and 1,000 experts about their concerns related to AI. The infographic shows the difference in concern those groups had regarding specific issues.

     

    This graphic by Statista shows the biggest concerns of experts and regular adult users about AI.

    Statista via VisualCapitalist

Nearly half of experts (47%) report being more excited than concerned about AI’s future. Among U.S. adults, just 11% say the same. Instead, 51% of adults say they’re more concerned than excited — more than triple the rate of experts (15%).

     

    The most common—and well-founded—fears center on misinformation and the misappropriation of information. Experts and the average adult are in alignment here. 

    I am consistently surprised by the lack of media literacy and skepticism demonstrated by otherwise intelligent people. Images and articles that scream "fake" or "AI" to me are shared virally and used to not only take advantage of the most susceptible but also to create dangerous echo chambers. 

    Remember how bad phishing e-mails used to be, and how many of our elderly or disabled ended up giving money to a fake Prince from various random countries? Even my mother, an Ivy League-educated lawyer, couldn't help but click on some of these e-mails. Meanwhile, the quality of these attacks has risen exponentially.

    And we're seeing the same thing now with AI. Not only are people falling for images, videos, and audio, but you also have the potential for custom apps and AI avatars that are fully focused on exploitation. 

    AI Adoption Implications

    Experts and the average adult have a significant disparity in beliefs about the long-term ramifications of AI adoption, such as potential isolation or job displacement. 

    I'm curious, how concerned are you that AI will lead to fewer connections between people or job loss? 

    I often say that technology adoption has very little to do with technology and much more to do with human nature.

    That obviously includes AI adoption as well. 

    Career growth often means abandoning an old role to take on something new and better. It's about delegating, outsourcing, or automating tasks so you can free up time to work on things that matter more.

    It may sound like a joke, but I don't believe most people will lose jobs to AI. Instead, they'll lose jobs to people who use AI better. The future of work will be about amplifying human intelligence … making better decisions, and taking smarter actions. If your job is about doing those things – and you don't use AI to do them – you will fall behind, and there will be consequences.

    It's the same way that technology overtook farming. Technology didn't put people out of work, but it did force people to work differently.

    Innovation has always created opportunity and prosperity in the long term. Jobs may look different, and some roles may be phased out, but new jobs will take their place. Think of it as tasks being automated, not jobs. 

    Likewise, COVID is not why people have resisted returning to the office. COVID might have allowed them to work remotely in the first place, but their decision to resist going back to the office is a natural part of human nature.

    When people found that technology enabled them to meet expectations without a commute, opportunities and possibilities expanded.

    Some used the extra time to learn and grow, raising their expectations. Others used that time to rest or focus on other things. They're both choices, just with different consequences.

    Choosing to Contract or Expand in the Age of AI 

    AI presents us with a similar inflection point. I could have easily used AI to write this article much faster, and it certainly would have been easier in the short term. But what are the consequences of that choice?

    While outreach and engagement are important, the primary benefit of writing a piece like this, for me, is to take the time and to go through the exercise of thinking about these issues … what they mean, what they make possible, and how that impacts my sense of the future. That wouldn't happen if I didn't do it.  

    I often say, "First bring order to chaos … then wisdom comes from making finer distinctions." Doing work often entails embracing the chaos and making finer distinctions over time as you gain experience. With repetition, the quality of those results improves. As we increasingly rely on technology to do the work, to learn, and to grow, the technology learns and grows. If you fail to also learn and grow, it's not the technology's fault. It is a missed opportunity.

    The same is true for connection. AI can help you connect better with yourself and others… or it can be another excuse to avoid connection.

    You can now use an AI transcription service to record every word of an interaction, take notes, create a summary, and even highlight key insights. That sounds amazing! But far too many people become accustomed to the quality of that output and fail to think critically, make connections, or even read and process the information.

It could be argued that our society already has a connection problem (or an isolation epidemic), regardless of AI. Whether you blame it on social media, remote work, or COVID-19, how we connect (and what we consider "connection") has been changing for a long time. Many people still have fulfilling lives despite the technology … again, it's a choice. Do you use these vehicles to amplify your life, or are they a substitute and an excuse to justify failing to pursue connection in the real world?

    As said, actions have consequences … and so do inactions.

    I'm curious to hear your thoughts on these issues. Are you focused on the promise or the perils of AI?

  • Why Don’t We See Aliens?

    So, if the math says it's likely that there are aliens … why don't we see them?

    In 2020, I mentioned Israeli officials who claimed they had been contacted by Aliens from a Galactic Federation – and that not only is our government aware of this, but they are working together.

    There are many stories (or theories) about how we have encountered aliens before and just kept them secret. Here are some links to things you might find interesting if you want to learn more about this.

    So, while some may still believe aliens don't exist – I think it's a more helpful thought experiment to wonder why we haven't seen them. 

    For example, the Fermi Paradox considers the apparent contradiction between the lack of evidence for extraterrestrial civilizations and the various high-probability estimates for their existence. 

    To simplify the issue, billions of stars in the Milky Way galaxy (which is only one of many galaxies) are similar to our Sun. Consequently, there must be some probability that some of them will have Earth-like planets. It isn't hard to conceive that some of those planets should be older than ours, and thus some fraction should be more technologically advanced than us. Even if you assume they're only looking at evolutions of our current technologies, interstellar travel isn't absurd. 

Thus, given the law of truly large numbers (both in terms of the number of planets and the length of time involved), the silence is all the more deafening and curious.
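The classic way to formalize that intuition is a Drake-equation-style product of factors. Every factor in the sketch below is an illustrative assumption, not a measured value; the point is only that plausible-sounding inputs still yield a number well above zero, which is exactly what makes the silence curious.

```python
# Drake-equation-style back-of-envelope; every factor here is an assumption.
stars_in_milky_way = 2e11             # rough order of magnitude
frac_sun_like = 0.1                   # assumed fraction of Sun-like stars
frac_with_earth_like_planet = 0.2     # assumed
frac_where_life_arises = 0.01         # assumed
frac_reaching_technology = 0.01       # assumed
frac_still_around_today = 0.001       # assumed (civilizations can go extinct)

expected_civilizations = (stars_in_milky_way * frac_sun_like
                          * frac_with_earth_like_planet * frac_where_life_arises
                          * frac_reaching_technology * frac_still_around_today)
print(expected_civilizations)  # ~400 with these made-up inputs
```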

If you are interested in the question "Where are all the aliens?", Stephen Webb, a particle physicist, tackles it in his book and in this TED Talk.

    via TED

In the TED Talk, Stephen Webb covers several key factors necessary for communicative, space-faring life.

    1. Habitability and stability of their planet
    2. Building blocks of life 
    3. Technological advancement
    4. Socialness/Communication technologies

    But he also acknowledges the numerous confounding variables, including things like imperialism, war, bioterrorism, fear, the moon's effect on climate, etc. 

    Essentially, his thesis is that there are numerous roadblocks to intelligent life, and it's entirely possible we are the only planet that has gotten past those roadblocks. Even if there were others, it's entirely possible that they're extinct by now. 


    What do you think?

    Here are some other links I liked on this topic. There is some interesting stuff you don't have to be a rocket scientist to understand or enjoy. 

    To Infinity and Beyond!