When I first got out of Law School in the 1980s, "professionals" didn't type ... that was your assistant's job (or the "typing pool," which was a real thing too).
At that point, most people couldn't have imagined what computers and software are capable of now. And if you tried to tell people how pervasive computers and 'typing' would be ... they would have thought that you were delusional.
My career has spanned a series of cycles where I was able to imagine what advanced tech would enable (and how businesses would have to change to best leverage those new capabilities).
Malcolm Gladwell suggests that it takes 10,000 hours of focus and effort to become an expert at something. While the number isn't exact, it's still a helpful heuristic.
Today, we can do research that once took humans 10,000 hours in the time it took you to read this sentence. Moreover, technology doesn't forget what it has learned; as a result, its memory is much better than yours or mine, and the type and quality of its decisions, inferences, and actions are better as well.

Ultimately, we will leverage the increased speed, capacity, and capabilities of autonomous platforms. That much is easy to anticipate, but the consequences of these discontinuous innovations are hard to predict. Things often take longer to happen than you would think. When they do, though, the consequences are often more significant and far-reaching than anticipated.
Still, technology isn't a cure-all. Many people miss out on the benefits of A.I. and technology for the same reasons they didn't master the hobbies they picked up as adolescents.
I shot a video discussing how to use technology to create a sustainable creative advantage. Check it out.
Many people recognize a "cool" new technology (like A.I.), but they underestimate the level of commitment and effort that mastery takes.
When using A.I. and high-performance computing, you need to ask yourself the same questions you would ask about your ultimate purpose:
- What's my goal?
- What do I (or my systems) need to learn to accomplish my goal?
- What are the best ways to achieve that goal (or something better)?
Too many companies focus on A.I. as if it were the goal. A.I. is simply a tool. As I mentioned in the video, you must define the problem the right way to find an optimal solution.
Artificial Intelligence is a game-changer - so you have to approach it as such.
Know your mission and your strategy, recognize what you're committing to, set it as a compass heading, and move deliberately in that direction.
I end the video by saying, "Wisdom comes from making finer distinctions. So, it is an iterative and recursive process... but it is also evolutionary. And frankly, that is extraordinarily exciting!"
I hope you agree.
Onwards!
A Look at Codebases
When I was a child, NASA got to the Moon with computers much less sophisticated than those we now keep in our pockets.
At that time, when somebody used the term "computers," they were probably referring to people doing math. The idea that businesses or individuals would use computing devices, as we do now, was far-fetched science fiction.
Recently, I shared an article on the growing amount of "compute" used in machine learning. It showed that the amount of compute used in machine learning has doubled every six months since 2010, with today's largest models trained on datasets of up to 1,900,000,000,000 points.
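That doubling rate compounds quickly. Here is a back-of-the-envelope check (my arithmetic, not the article's, and it assumes roughly a dozen years of doubling from 2010):

```python
# Rough arithmetic on the article's claim: doubling every six months.
# The twelve-year span is an assumption about when the post was written.

years = 12                              # roughly 2010 through 2022
doublings = years * 2                   # one doubling every six months
growth_factor = 2 ** doublings
print(doublings, f"{growth_factor:,}")  # 24 doublings -> 16,777,216x (~16.8 million)
```

In other words, under those assumptions, the compute behind the largest models grew by a factor of nearly seventeen million.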
This week, I want to take a look at lines of code. Think of that as a loose proxy for how sophisticated software is becoming.
As you go through the chart, you'll see that in the early 2000s we had software with up to approximately twenty-five million lines of code. Meanwhile, today, the average car runs on one hundred million lines, and Google uses two billion lines of code across its internet services.
For context, if you count each DNA base pair as a line of code, the human genome runs to roughly 3.3 billion lines. So, while technology has advanced massively, we're still not close to emulating the complexity of humanity.
Another thing to consider is that when computers had tighter memory constraints, coders had to be deliberate about how they used each line of code or variable. They found hacks and workarounds to make a lot out of a little.
However, with an abundance of memory and processing power, software can get bloated as lazy (or lesser) programmers get by with inefficient code. Consequently, not all of the increase in size reflects increasing complexity; some of it is the result of lackadaisical programming or more forgiving development platforms.
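To make the contrast concrete, here is a toy illustration (mine, not from the post): the same eight on/off settings stored the memory-tight way, packed into the bits of a single integer, versus the convenient way, as a list of booleans.

```python
import sys

# Memory-tight style: eight on/off flags packed into the bits of one integer.
flags = 0b10110001

def flag_is_set(flags: int, position: int) -> bool:
    """Return True if the flag at bit `position` (0-7) is on."""
    return bool(flags & (1 << position))

# Abundant-memory style: the same eight flags as one Python bool per entry.
flag_list = [True, False, False, False, True, True, False, True]

print(flag_is_set(flags, 0))     # True (bit 0 of 0b10110001 is set)
print(sys.getsizeof(flags))      # ~28 bytes for the packed integer (64-bit CPython)
print(sys.getsizeof(flag_list))  # ~120 bytes for the list container alone
```

Same information, very different footprint. Nothing forces the second style to be wrong, but nothing pushes back on it either.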
Better-managed products weigh not only whether the code works as intended, but also whether it uses resources reasonably.
In our internal development, we look to build modular code that allows us to re-use equations, techniques, and resources. We look at our platform as a collection of evolving components.
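A minimal sketch of that component idea (illustrative names only, not the actual platform): every piece, old or new, honors the same small interface, so a downstream layer can run any mix of them and collect their outputs.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Signal:
    name: str
    value: float          # e.g., a score, forecast, or confidence level

class Component(Protocol):
    """The shared contract every component honors."""
    def run(self, market_data: dict) -> Signal: ...

class MomentumComponent:
    def run(self, market_data: dict) -> Signal:
        return Signal("momentum", market_data["close"] - market_data["open"])

class RangeComponent:
    def run(self, market_data: dict) -> Signal:
        return Signal("range", market_data["high"] - market_data["low"])

def evaluate(components: list, market_data: dict) -> list:
    """Run any mix of components and collect their outputs for downstream layers."""
    return [c.run(market_data) for c in components]

bar = {"open": 100.0, "high": 102.0, "low": 99.0, "close": 101.5}
print(evaluate([MomentumComponent(), RangeComponent()], bar))
```

Because everything speaks the same contract, an older component never has to be thrown away; it just gets a new job.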
As bigger systems become more affordable and practical, we can use our intellectual property assets differently than before.
For example, a "trading system" no longer has to trade profitably to be valuable. It can serve as a "sensor" that generates useful information for other parts of the system. It helps us move closer to what I like to call digital omniscience.
As a result of increased capabilities and capacities, we can use older and less capable components to inform better decision-making.
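A hypothetical sketch of what that can look like (invented names and weights, purely for illustration): an older strategy no longer places trades, but its read on the market becomes one more input that a newer decision layer weighs.

```python
def legacy_trend_signal(prices: list) -> float:
    """An older system repurposed as a sensor: +1 if bullish, -1 if bearish."""
    return 1.0 if prices[-1] > prices[0] else -1.0

def current_model_signal(prices: list) -> float:
    """Stand-in for a newer, more capable model's output."""
    recent = prices[-3:]
    return (recent[-1] - recent[0]) / recent[0]

def decision_layer(prices: list) -> str:
    """Blend the sensor with the primary model instead of discarding it."""
    score = 0.8 * current_model_signal(prices) + 0.2 * legacy_trend_signal(prices)
    return "buy" if score > 0 else "stand aside"

print(decision_layer([100, 101, 99, 102, 103]))   # "buy"
```

The old system's signal never has to be right on its own; it just has to add information the newer layer wouldn't otherwise have.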
In the past, computing constraints limited us to using only our most recent system at the highest layer of our framework.
We now have more ways to win.
But, bigger isn't always better - and applying constraints can encourage creativity.
Nonetheless, as technology continues to skyrocket, so will the applications and our expectations about what they can do for us.
We live in exciting times ... Onwards!