"Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace." - Nixon's Apollo 11 Disaster Speech
In an ideal world, history would be objective: facts about what happened, unencumbered by the biases of society, the victor, or the narrator.
I think it's apparent that history as we know it is subjective. The narrative shifts to support the needs of the society that's reporting it. History books are written by the victors.
The Cold War is a great example: interpretations of its causes and events shifted during the war, immediately after it, and again today.
But while that's one example, to a certain degree, we can see it everywhere. We can even see it in the way events are reported today. News stations color the story based on whether they're red or blue, and the internet is quick to jump on a bandwagon even if the information is hearsay.
Now, what happens when you can literally rewrite history?
"Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped." - Orwell, 1984
That's one of the potential risks of generative AI and deepfake technology. As it gets better, creating "supporting evidence" becomes easier for whatever narrative a government or other entity is trying to make real.
On July 20th, 1969, Neil Armstrong and Buzz Aldrin landed safely on the moon. They then returned to Earth safely as well.
MIT recently created a deepfake of the contingency speech William Safire wrote for Nixon during the Apollo 11 mission, in case of disaster. The whole video is worth watching, but the speech starts around 4:20.
MIT via In Event Of Moon Disaster
Media disinformation is more dangerous than ever. Alternative narratives and histories can only be called that when they are distinguishable from the truth. In addition, people often aren't looking for the "truth" – instead, they are prone to look for information that already fits their biases.
As deepfakes get better, we'll also get better at detecting them. But it's a cat-and-mouse game with no end in sight. Signaling theory describes this dynamic: signalers evolve to become better at manipulating receivers, while receivers evolve to become more resistant to manipulation. We're seeing the same arms race play out in algorithmic trading.
In 1983, Stanislav Petrov saved the world. Petrov was the duty officer at the command center for a Soviet nuclear early-warning system when the system reported that a missile had been launched from the U.S., followed by up to five more. Petrov judged the reports to be a false alarm and didn't authorize retaliation (and a potential nuclear WWIII in which countless people would have died).
But messaging is now getting more convincing. It's harder to tell real from fake. What happens when a world leader has a convincing enough deepfake with a convincing enough threat to another country? Will people have the wherewithal to double-check?
Lots to think about.
I'm excited about the possibilities of technology, and I believe they're predominantly good. But, as always, in search of the good, we must acknowledge and be prepared for the bad.
The Surreal World of Deepfakes And Deep AI
Deep Learning excels at analyzing pictures and videos, creating facsimiles, and combining styles. More and more people are using generative AI tools like ChatGPT or Midjourney. And there is an explosion of simple tools (like the Deep Dream Generator or DeepAI) that use Convolutional Neural Networks to combine your photo with an art style (if you want to do it on your phone, check out Prisma). Here are some example photos.
via SubSubRoutine
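To make the mechanics a little more concrete: the "style" these tools transfer is commonly summarized as the correlations between a CNN layer's feature channels, known as a Gram matrix. Here is a minimal NumPy sketch of that core computation. The random arrays stand in for real CNN activations, and all names and shapes are illustrative, not any particular tool's API.

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels; this captures 'style'.

    features: array of shape (channels, height, width), standing in
    for the activations of one convolutional layer.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (c * h * w)     # normalized channel correlations

def style_loss(features_a, features_b):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(features_a) - gram_matrix(features_b)
    return float(np.mean(diff ** 2))

# Toy feature maps standing in for real CNN activations.
rng = np.random.default_rng(0)
photo_feats = rng.standard_normal((8, 16, 16))
art_feats = rng.standard_normal((8, 16, 16))

print(style_loss(photo_feats, photo_feats))  # 0.0 — identical styles
print(style_loss(photo_feats, art_feats))    # positive — styles differ
```

A style-transfer tool optimizes an output image so its Gram matrices match the artwork's while its raw activations still match your photo's content.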
The same foundation that allows us to create these cool art amalgamations can also create deepfakes. A deepfake is precisely what it sounds like: "Deep Learning" used to "Fake" a recording. For example, a machine learning technique called a Generative Adversarial Network can be used to superimpose images onto a source video. That is how they made this fun (and disturbing) deepfake of Jennifer Lawrence and Steve Buscemi.
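The adversarial idea behind GANs is simple enough to show in miniature: a generator tries to produce fakes, a discriminator tries to tell them from real data, and each improves by training against the other. Below is a toy one-dimensional sketch in NumPy — a linear generator versus a logistic discriminator, with hand-derived gradients. Real video deepfakes use deep networks and far more machinery; every number and name here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 0.5). The generator must learn to mimic it.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: a linear map of noise, x = a*z + b (starts far from the data).
a, b = 1.0, 0.0
# Discriminator: logistic classifier, d(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = rng.standard_normal(32)
    x_real = real_batch(32)
    x_fake = a * z + b

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * (np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator update: push d(fake) toward 1 (fool the discriminator).
    d_fake = sigmoid(w * x_fake + c)
    dx = (d_fake - 1) * w          # dL/dx_fake for L_G = -log d(fake)
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

samples = a * rng.standard_normal(1000) + b
print(f"generated mean: {samples.mean():.2f} (real data mean: 4.0)")
```

After training, the generator's samples drift toward the real distribution — the same tug-of-war, scaled up to faces and voices, is what makes deepfakes convincing.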
Another interesting technology can create AI-powered replicas of someone that don't just look and sound like them – they can respond like them too. Examples of this are seen in tools like Replica Studios or Replika. One of the artistic uses people have been exploring recently is getting unlikely characters to sing famous songs. These chatbots have also been used by lonely men and women to create virtual paramours.
The three basic uses of deep learning (described above) are being combined to create a lot of real mainstream applications ... and the potential to create convincing fakes.
Deepfakes can be fun and funny ... but they also create real concerns. They're frequently used for more "nefarious" purposes (e.g., to create fake celebrity or revenge porn and to make important figures say things they never said). You've likely seen videos of Trump or Biden created with this technology. But it is easy to imagine someone faking evidence used at trial, trying to influence business transactions, or using this to support or slander causes in the media.
As fakes get better and easier to produce, they will likely be used more often.
On a more functional note, you can use these technologies to create convincing replicas of yourself. You could use that replica to record videos, send voicemails, or participate in virtual meetings for you. While I don't encourage you to use it without telling people you are doing so, even just experimenting with the technology puts you a step ahead.