Web/Tech

  • Life After Death … Will A.I. Help You Live Forever?

    My Aunt recently passed away. She was my Dad's sister … and she was a fantastic person. She was loving and kind. She was a natural-born caregiver, and she was as sharp as a tack. What wouldn't we give for another moment with her? My response to her death reminded me of my feelings when my Dad passed away.

    This time, the conversation was a little different. People asked me if I thought that A.I. would enable us to live on after our bodies started to give out on us. I recorded some of my thoughts. 


    I don't think A.I. will give us life after death.

    I do believe technology will get good enough to create a replica of you – that talks like you, responds like you, and even comforts people who miss you. 

    I don't believe technology can capture whatever part of us doesn't live in our bodies. Whether you call it our soul (or something else), I don't think it will ever get uploaded to the matrix so that you can live on. 

    And, I think that's okay. Part of the beauty of existence is the transience, the joy, the suffering, and the range of human experience. That is a big part of what we call life.

    When my Dad was dying, every moment took on new meaning. Not only did time seem to slow down, but there was a weight and intimacy that's often taken for granted. 

    What do you think?

  • A Brief Look At Quantum Computing

    I am not an expert on quantum computing … but I saw an impressive photo of Google's new quantum computer, and thought it was worth diving a bit deeper. 

    Quantum Computer

    Google's computer stands at the forefront of computing technologies. This extraordinary device boasts 70 qubits, a significant improvement over the previous 2019 model, which had 53 qubits. A qubit is the quantum world's answer to the classical bit. Not to dive too deep, but as you increase the number of qubits, the number of states a quantum computer can hold simultaneously grows exponentially (due to quantum superposition and entanglement), allowing it to perform certain calculations much faster.

    So, while 70 qubits may not sound like much, such a system can perform certain calculations exponentially faster than classical computers. For some context, Google's team used a synthetic benchmark called random circuit sampling to test the system's speed, and the results showed it could perform calculations in seconds that would take the world's most powerful supercomputer, Frontier, 47 years. 
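    To see why the jump from 53 to 70 qubits matters, here's a quick back-of-the-envelope calculation: the state space doubles with every qubit added, so 17 extra qubits multiply it by 2^17.

    ```python
    # Each extra qubit doubles the number of basis states a quantum computer
    # can hold in superposition, so the state space grows as 2^n.

    def state_space(qubits: int) -> int:
        """Number of classical basis states spanned by `qubits` qubits."""
        return 2 ** qubits

    sycamore_2019 = state_space(53)  # Google's 2019, 53-qubit processor
    current = state_space(70)        # the 70-qubit successor

    print(f"53 qubits: {sycamore_2019:.2e} states")
    print(f"70 qubits: {current:.2e} states")
    print(f"Growth factor: {current // sycamore_2019}")  # 2**17 = 131072
    ```

    That's roughly 9 quadrillion states at 53 qubits versus over a sextillion at 70 — a 131,072-fold larger state space from just 17 more qubits.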

    Four years ago, Google announced that they'd reached quantum supremacy, a benchmark demonstrating that a programmable quantum device could solve a problem that classical computers cannot solve within a practical timeframe. It took less than five years to establish the technological feasibility of quantum computers. 

    The progress made in quantum computing enhances our capacity to tackle complex problems that previously posed a challenge (or seemed impossible). The ripple effects will extend to other domains and industries (improving artificial intelligence, logistics, medicine, and almost anything you can imagine). As with the space race or AI, the benefits will not be limited to the realm in which they were created … but will also have a significant impact on broader industries, the world, and our lives.
     
    It's important to temper your expectations and recognize that quantum technology is still in its infancy. It comes with significant limitations, such as the need for extremely low temperatures and precise magnetic fields. Even if these specific conditions are satisfied, there will be stability issues. Additionally, the current cost to develop and operate this technology is quite high.

    But, it's an exciting horizon for us to walk towards. 

    Onwards!

  • The Surreal World of Deepfakes And Deep AI

    Deep Learning excels at analyzing pictures and videos, creating facsimiles, and combining styles.  People are using generative AI tools like ChatGPT and Midjourney more and more frequently.  And there is an explosion of simple tools (like the Deep Dream Generator or DeepAI) that use Convolutional Neural Networks to combine your photo with an art style (if you want to do it on your phone, check out Prisma).  Here are some example photos.


    via SubSubRoutine

    The same foundation that allows us to create these cool art amalgamations can also create deepfakes.  A deepfake is precisely what it sounds like … using "Deep Learning" to "Fake" a recording.  For example, a machine learning technique called a Generative Adversarial Network can be used to superimpose images onto a source video.  That is how they made this fun (and disturbing) deepfake of Jennifer Lawrence and Steve Buscemi.
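    For the curious, here's a minimal sketch of the adversarial idea behind GANs, in a toy 1-D setting (NumPy only, with hypothetical learning rates and sizes chosen for illustration). Real deepfake pipelines use deep convolutional networks on video frames; this only shows the generator-versus-discriminator training loop.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sig(s):
        """Logistic sigmoid, used as the discriminator's output."""
        return 1.0 / (1.0 + np.exp(-s))

    # Generator G(z) = a*z + b tries to turn noise into samples that mimic
    # the "real" data (here N(4, 1)); discriminator D(x) = sig(w*x + c)
    # tries to tell real samples from generated ones.
    a, b = 1.0, 0.0   # generator parameters
    w, c = 0.1, 0.0   # discriminator parameters
    lr, batch = 0.05, 64

    for _ in range(2000):
        z = rng.standard_normal(batch)
        real = rng.normal(4.0, 1.0, batch)
        fake = a * z + b

        # Discriminator step: ascend log D(real) + log(1 - D(fake)).
        d_real = sig(w * real + c)
        d_fake = sig(w * fake + c)
        w += lr * np.mean((1 - d_real) * real - d_fake * fake)
        c += lr * np.mean((1 - d_real) - d_fake)

        # Generator step: ascend log D(G(z)) to fool the discriminator.
        d_fake = sig(w * (a * z + b) + c)
        a += lr * np.mean((1 - d_fake) * w * z)
        b += lr * np.mean((1 - d_fake) * w)

    # After training, generated samples should drift toward the real data.
    samples = a * rng.standard_normal(10_000) + b
    print(round(samples.mean(), 2))
    ```

    The generator step uses the "non-saturating" objective (maximize log D(G(z)) rather than minimize log(1 − D(G(z)))), a common choice because it gives the generator stronger gradients early in training.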


    Another interesting technology can create AI-powered replicas of someone that don't just look and sound like them – they can respond like them too.  Examples of this are seen in tools like Replica Studios or Replika.  One of the artistic uses people have been exploring recently is getting unlikely characters to sing famous songs.  These chatbots have also been used by lonely men and women to create virtual paramours. 

    The three basic uses of deep learning (described above) are being combined to create a lot of real mainstream applications … and the potential to create convincing fakes.

    Deepfakes can be fun and funny … but they also create real concerns.  They're frequently used for more "nefarious" purposes (e.g., to create fake celebrity or revenge porn and to make important figures say things they never said).  You've likely seen videos of Trump or Biden created with this technology.   But it is easy to imagine someone faking evidence used at trial, trying to influence business transactions, or using this to support or slander causes in the media.

    As fakes get better and easier to produce, they will likely be used more often.

    On a more functional note, you can use these technologies to create convincing replicas of yourself.  You could use that replica to record videos, send voicemails, or participate in virtual meetings for you.  While I don't encourage you to use it without telling people you are doing so, even just using the technology puts you a step ahead. 

  • Rewriting The Past, Present, and Future

    "Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace." – Nixon's Apollo 11 Disaster Speech

    In an ideal world, history would be objective: facts about what happened, unencumbered by the bias of society, the victor, or the narrator.

    I think it's apparent that history as we know it is subjective.  The narrative shifts to support the needs of the society that's reporting it.  History books are written by the victors. 

    The Cold War is a great example: the interpretation of its causes and events changed during the war, immediately after it, and again today.  

    But while that's one example, to a certain degree, we can see it everywhere.  We can even see it in the way events are reported today.  News stations color the story based on whether they're red or blue, and the internet is quick to jump on a bandwagon even if the information is hearsay. 

    Now, what happens when you can literally rewrite history?

    “Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped.” – Orwell, 1984

    That's one of the potential risks of generative AI and deepfake technology.  As it gets better, creating "supporting evidence" becomes easier for whatever narrative a government or other entity is trying to make real.

    On July 20th, 1969, Neil Armstrong and Buzz Aldrin landed safely on the moon.  They then returned to Earth safely as well. 

    MIT recently created a deepfake of a speech Nixon's speechwriter William Safire wrote during the Apollo 11 mission in case of disaster.  The whole video is worth watching, but the speech starts around 4:20. 


    MIT via In Event Of Moon Disaster

    Media disinformation is more dangerous than ever.  Alternative narratives and histories can only be recognized as such when they are discernible from the truth.  In addition, people often aren't looking for the "truth" – instead, they are prone to look for information that already fits their biases. 

    As deepfakes get better, we'll also get better at detecting them.  But it's a cat-and-mouse game with no end in sight.  Signaling theory describes this dynamic: signalers evolve to become better at manipulating receivers, while receivers evolve to become more resistant to manipulation.  We're seeing the same thing in algorithmic trading. 

    In 1983, Stanislav Petrov saved the world. Petrov was the duty officer at the command center for a Soviet nuclear early-warning system when the system reported that a missile had been launched from the U.S., followed by up to five more.  Petrov judged the reports to be a false alarm and didn't authorize retaliation (averting a potential nuclear WWIII in which countless people would have died). 

    But messaging is now getting more convincing.  It's harder to tell real from fake.  What happens when a world leader has a convincing enough deepfake with a convincing enough threat to another country?  Will people have the wherewithal to double-check?

    Lots to think about. 

    I'm excited about the possibilities of technology, and I believe they're predominantly good.  But, as always, in search of the good, we must acknowledge and be prepared for the bad. 

  • Economic Allies and Economic Enemies

    Last week, I brought up the concept of Economic Freedom. It reminded me of an idea I last shared in 2008, during the housing crisis. 

    I noticed how correlated and coordinated worldwide actions were during the housing crisis. During the pandemic, while there was a lot of dissent, there was also a remarkable amount of coordination. 


    The concept of economic allies presupposes that we also have economic enemies. It’s easy to construct a theory that countries like Russia and China use financial markets to exert leverage in a nascent form of economic warfare.

    It's easy to come up with a theory that suggests we are our own worst enemies. Our innate fear and greed instincts (and how we react to them) tend to lead us down a path of horrifying consequences. This has been evident in recent years, not just in society, but also in the world of business. I am confident that this pattern will persist in the context of Artificial Intelligence, with both its potential benefits and risks.

    The butterfly effect theorizes that a butterfly flapping its wings in Beijing can create or impact a rainstorm over Chicago a few days later. Similarly, in a world with extensive global communication, where automated trading programs (and even toasters) can interact with each other from anywhere across the globe, it is not surprising that market movements are becoming larger, faster, and more volatile.

    Perhaps governments cooperate and collaborate because they collectively recognize the need for a new form of protection to mitigate the increasing speed, size, and leverage behind market movements.

    And we can also extend this idea to other entities beyond governments. It doesn’t have to be limited to traditional markets either; it can include cryptocurrencies or other emerging technologies as well.

    It’s worth understanding the currents, but we must also consider the undercurrents and countercurrents. 

    Conspiracy theories are rarely healthy or helpful, but maintaining a healthy skepticism is a great survival mechanism.

    Hope that helps.

  • The AI Hacking Paradox

    Fear is a natural response to change or the unknown, serving as an evolutionary mechanism designed to safeguard us. However, it’s also worth noting that many of our fears turn out to be unjustified.

    Sometimes, however, fear is a much-needed early warning system. 

    In the context of AI hacking, you should be afraid. Given the exponential growth in technology and artificial intelligence, concerns about security breaches and intentional misinformation campaigns have become common.


    In 2016, DARPA created the Cyber Grand Challenge to illustrate the need for automated, scalable, machine-speed vulnerability detection as more and more systems—from household appliances to major military platforms—got connected to each other and the internet. During this event, AI systems competed against each other to autonomously hack and exploit vulnerabilities in computer programs. The competition revealed the unprecedented speed, scope, scale, and sophistication with which AI systems can find and exploit vulnerabilities.

    And that was seven years ago. 

    AI hackers operate at superhuman speeds and can analyze massive amounts of data, enabling them to uncover vulnerabilities that might elude human hackers. Their ability to think differently, free from human constraints, allows AI systems to devise novel hacks that humans would never consider. This creates an asymmetrical advantage for AI hackers, making them formidable at infiltrating and compromising systems.

    We expect people to use AI for malicious purposes intentionally, but unintentional AI hacking arises when an AI autonomously discovers a solution or workaround that its creators did not intend. This type of hack can remain undetected for extended periods, amplifying the potential damage caused. 

    So, how do we stop it?

    Ironically, or perhaps exactly as you would expect, AI itself holds the key to defending against future attacks. Just as hacking can drive progress by exposing vulnerabilities and prompting improvements, AI hackers could potentially identify and rectify weaknesses in software, regulations, and other systems. By proactively searching for vulnerabilities, they can contribute to making these systems more hack-resistant. This is the paradox of AI hacking. 

    It’s the same concept as I mentioned in the article on potentially halting the creation of generative AI.  

    Unfortunately, when you invent the car, you also invent the potential for car crashes … when you ‘invent’ nuclear energy, you create the potential for atomic bombs. That’s not a reason to stop innovation – it’s a call to action for innovators to respond faster and counteract the bad actors. 

    We can’t stop bad actors from existing – but we can get better at preventing the harm they cause. This is a helpful framework for innovation. If you want to stop bad actors from misusing a technology, the good actors "simply" have to get better at using the technology faster. 

    The best way to stop negative motion is with positive motion. But, we can also make moves in the background to counteract bad actors and bad actions.

    For example: 

    1. Regulation and Transparency: Regulatory frameworks can be established for AI technologies that demand transparency regarding how they function and how they’re secured.
    2. Ethical Guidelines: Implementing ethical guidelines for AI development can help prevent misuse.
    3. Cybersecurity Measures: Enhancing cybersecurity protocols and utilizing state-of-the-art encryption methods could make AI systems more resilient against hacking attempts.
    4. Education: Increasing public understanding of AI technologies would spread awareness of their benefits alongside potential risks.

    While these measures won’t eliminate the potential risk of AI hacking, they could significantly mitigate it and provide reassurances about employing such technologies.

  • Nvidia Joins The Trillionaire Club

    Believe it or not, Nvidia is now worth nearly as much as Amazon. America’s largest semiconductor company has skyrocketed past the $1 trillion market cap mark and joined the likes of Apple, Amazon, and Microsoft. 

    via visualcapitalist

    My Thoughts

    Nvidia’s growth is largely built on the back of the AI hype. It is also a mainstay of technology, benefitting a litany of AI projects, gaming systems, crypto mining, and more. 

    But, the question is whether it will continue to rise in popularity – or see a “correction” to pre-hype levels. I think the reality is you’ll see both happen.

    Despite my obvious bullishness on AI as a market mover and industry transformer, I expect that after this hype cycle comes a trough of disillusionment. The media attention on AI will diminish again. Meanwhile, tech giants like Google and Apple rely on the technology, and Nvidia has also launched new products spanning from robotics to gaming. So, as the hype dies down, its mainstream uses will increase. 

    These chips will only become more important. We saw the company’s stock rise and fall during the peak of inflated expectations around cryptocurrency, but AI’s staying power – I believe – is inevitable. 

    So, while it may not be a good investment in the short term, it’s a technology you can count on to be essential for decades. 

  • Musk vs. Zuckerberg: Fight Of The Century

    In today’s “Truth is Stranger than Fiction” episode, Elon Musk and Mark Zuckerberg seem to be discussing a "cage match." But, for those of us who have been around for a while, we remember the first real billionaire fight, when Herb Kelleher, co-founder of Southwest Airlines, settled a business dispute with a rival by arm wrestling in front of an audience at an arena, in an event dubbed “Malice in Dallas.” 

    This supposed cage fight started because Elon responded to someone on Twitter saying, “I’m up for a cage match if he is lol” to which Zuckerberg posted an Instagram story saying, “Send Me Location.”


    Supposedly, there’s a real chance they’ll do it, and there’s talk it may happen in Vegas.  

    Now, their beef isn’t new. Back in 2016, Musk’s SpaceX was contracted to shuttle a satellite into orbit for Facebook. During a routine test, an explosion on the ground caused the satellite to be destroyed, and Zuck to say, “I’m deeply disappointed to hear that SpaceX’s launch failure destroyed our satellite that would have provided connectivity to so many entrepreneurs and everyone else across the continent.”

    Ever since, they’ve been going at it. They take different stances on AI. They’ve gotten off each other’s platforms, etc. 

    So … who do you think will win?
