When I was a child, NASA got to the Moon with computers much less sophisticated than those we now keep in our pockets.
At that time, when somebody used the term "computers," they were probably referring to people doing math. The idea that businesses or individuals would use computing devices, as we do now, was far-fetched science fiction.
Recently, I shared an article on the growing amount of compute used in machine learning. It showed that the compute used in machine learning has doubled every six months since 2010, with today's largest models training on datasets of up to 1.9 trillion data points.
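A six-month doubling time compounds faster than intuition suggests. As a quick back-of-the-envelope illustration (my own arithmetic, not a figure from the article):

```python
# Compute doubling every 6 months means 2 doublings per year.
# From 2010 to 2022, that's 24 doublings -- roughly a 16-million-fold increase.
years = 2022 - 2010
doublings = years * 2
growth_factor = 2 ** doublings
print(f"{doublings} doublings -> ~{growth_factor:,}x more compute")
```

That is the difference between a trend line and an explosion: the same rule, applied for one more decade, multiplies the total by another factor of a million.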
This week, I want to take a look at lines of code. Think of that as a loose proxy showing how sophisticated software is becoming.
As you go through the chart, you'll see that in the early 2000s, software topped out at roughly twenty-five million lines of code. Today, the average car runs on one hundred million lines, and Google uses two billion lines of code across its internet services.
For context, if you count DNA as code, the human genome contains about 3.3 billion base pairs. So, while technology has advanced massively, we're still not close to emulating the complexity of humanity.
Another thing to consider is that when computers had tighter memory constraints, coders had to be deliberate about how they used each line of code or variable. They found hacks and workarounds to make a lot out of a little.
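A classic example of that frugality (a hypothetical illustration, not something from the article) is packing several boolean flags into a single integer instead of spending a whole variable on each:

```python
# Pack multiple boolean flags into one integer -- a common trick
# from the days when every byte of memory counted.
FLAG_POWER = 1 << 0  # bit 0
FLAG_ERROR = 1 << 1  # bit 1
FLAG_READY = 1 << 2  # bit 2

state = 0
state |= FLAG_POWER   # set a flag
state |= FLAG_READY
state &= ~FLAG_ERROR  # clear a flag

print(bool(state & FLAG_READY))  # check a flag -> True
```

One byte could carry eight independent switches; today, most programmers would happily use eight separate variables without a second thought.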
However, with an abundance of memory and processing power, software can get bloated as lazy (or less skilled) programmers get by with inefficient code. Consequently, not all of the increase in size results from increasing complexity; some of it is the result of lackadaisical programming or more forgiving development platforms.
Better-managed products weigh whether the code works as intended alongside whether it uses resources reasonably.
In our internal development, we look to build modular code that allows us to re-use equations, techniques, and resources. We look at our platform as a collection of evolving components.
As the cost and practicality of bigger systems become more manageable, we can use our intellectual property assets differently than before.
For example, a "trading system" doesn't have to trade profitably to be valuable anymore. It can be used as a "sensor" that generates useful information for other parts of the system. It helps us move closer to what I like to call digital omniscience.
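The "system as sensor" idea can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names and logic are mine, not the firm's actual code): an older model no longer places trades, but its output still feeds a downstream decision layer.

```python
# A sketch of "system as sensor": a legacy trading model is repurposed
# to emit a signal that a downstream layer combines with others.

def legacy_momentum_signal(prices):
    """Older model: emits a crude up/down reading instead of trading."""
    return 1 if prices[-1] > prices[0] else -1

def decide(signals):
    """Downstream layer: combines many sensor readings into one decision."""
    score = sum(signals)
    return "buy" if score > 0 else "sell" if score < 0 else "hold"

readings = [legacy_momentum_signal([100, 104]),  # +1: prices rose
            legacy_momentum_signal([50, 48])]    # -1: prices fell
print(decide(readings))  # -> "hold"
```

The point isn't the toy logic; it's the architecture. Each component only has to produce information, and the layer above decides what that information is worth.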
As a result of increased capabilities and capacities, we can use older and less capable components to inform better decision-making.
In the past, computing constraints limited us to using only our most recent system at the highest layer of our framework.
We now have more ways to win.
But bigger isn't always better, and applying constraints can encourage creativity.
Nonetheless, as technology continues to skyrocket, so will the applications and our expectations about what they can do for us.
We live in exciting times ... Onwards!
Changing the Course of History
A little over a week ago, a deepfake of Ukrainian President Volodymyr Zelenskyy was used to try to convince Ukraine's soldiers to lay down their arms and surrender to Russia. On top of being shared on social media, hackers pushed it onto news sites and a TV ticker.
While it hasn't been confirmed that Russia was behind this, there's a long history of Russian cyberwarfare, including many instances of media manipulation.
Luckily, while the lip-sync in this video was passable, several cues gave it away as fake.
Unfortunately, this is only the tip of the iceberg. Many deepfakes aren't as easy to discern. Consequently, as we fight wars (both physical and cultural), manipulated videos will increasingly alter both perceptions and reality.
Even when proven to be fake, the damage can persist. Some people might believe it anyway ... while others may begin distrusting all videos from leaders as potential misinformation.
That being said, not all deepfakes are malicious, and the technology's potential is attractive. Production companies are already using it to splice actors who have aged or died into movie scenes. Deepfake technology can also let a celebrity license their likeness without spending the time on set needed to film the finished product.
Deepfake technology also allows us to create glimpses into potential pasts or futures. For example, on July 20th, 1969, Neil Armstrong and Buzz Aldrin landed safely on the Moon and then returned safely to Earth. What if they hadn't? MIT recently created a deepfake of the speech Nixon's speechwriter William Safire wrote during the Apollo 11 mission in case of disaster. The whole video is worth watching, but the "fake history" speech starts around the 4:20 mark.
MIT via In Event Of Moon Disaster
Conclusion
In an ideal world, history would be objective: facts about what happened, unencumbered by the biases of society, the victor, or the narrator. On some level, however, history is written by the winners. Think about it ... perceived "truth" is shaped by the biases and perspectives of the chronicler.
Consequently, history (as we know it) is subjective. The narrative shifts to support the needs of the society that's reporting it.
The Cold War with the Soviet Union is a great example. During the war, immediately thereafter, and even today, the interpretation of what transpired has repeatedly changed (on both sides). The truth is that we are uncertain about what we are certain about.
But while that was one example, to a certain degree, we can see this type of phenomenon everywhere. Yes, we're even seeing it again with Russia.
But it runs deeper than cyberwarfare. News stations color the stories they tell based on whether they're red or blue, and the internet is quick to jump on a bandwagon even if the information is hearsay. The goal is attention rather than truth.
Media disinformation is more dangerous than ever. Alternative history can only be called that when it's distinguishable from the truth ... and unfortunately, we're prone to seek out information that already fits our biases.
As deepfakes get better, we'll likely get better at detecting them. But it's a cat-and-mouse game with no end in sight. Signaling Theory posits that signalers evolve to become better at manipulating receivers, while receivers become more resistant to manipulation.
I'm excited about the possibilities of technology, even though new capabilities present us with both promise and peril.
Meanwhile, "Change" and "Human Nature" remain constant.
And so we go.
Posted at 07:13 PM in Business, Current Affairs, Film, Gadgets, Ideas, Market Commentary, Science, Television, Trading, Trading Tools, Web/Tech