I recently participated in several panel discussions about AI and trading. This picture was taken at The Trading Show in New York.
I speak at a number of events every year because I enjoy meeting people pushing the envelope and shaking things up. It is also a great opportunity to feel the pulse of the industry (by paying attention to the titles of the sessions, the types of sponsors and vendors attracted, and of course, the makeup of the audiences).
Big changes are coming! Technical innovations and data science insights continue to impress, but the use of alternative data and advanced AI is at a tipping point. I describe these shifts in the book I’m finishing up, called “Next On Wall Street – Understanding AI’s Inevitable Impact on Trading”. Let me know if you want to know more about the book.
In the 90s, when I’d go to conferences, I would pay attention to speakers. Now, when I go to conferences, I'm paying attention to the audience. The players are changing so fast, the game itself is changing.
There have been various generations of trading built on different innovations. When computerized data became available, simply understanding how to download and use it generated Alpha. The same could be said for each later evolution: the adoption of complex algorithms, access to massive amounts of clean data, and the adoption of AI strategies.
Each time a new shift happens, traders pivot or fail. The scale of innovation increases, but the pattern remains.
At this most recent conference, I was excited to see people recognizing the pivot toward AI, Big Data and high-speed computing.
Change happens slowly, and then all at once, and we’re getting close to that inflection point.
Dread of a robot-dominated future is mounting. Is there a basis for it?
Michael Osborne and Carl Frey, from Oxford University, calculated how susceptible various jobs are to automation. They based their results on nine key skills: social perceptiveness, negotiation, persuasion, assisting and caring for others, originality, fine arts, finger dexterity, manual dexterity, and the ability to work in cramped spaces or awkward positions.
There are various statistics about the rate at which robots are taking jobs. Many expect that ~50% of current jobs will be automated by 2035. It turns out that statistic is from Michael and Carl, and the actual numbers were 47% by 2034[1].
The statistic actually refers to the risk of those jobs being automated. That 47% number doesn't take into account cost, regulatory, political, or social pressures - so it's unlikely the full 47% will be realized.
Many use that number to fear-monger about future joblessness and an increasing lack of middle-class mobility, but Mr. Frey isn't a proponent of that belief, and neither am I.
Industrialization created short-term strife but vastly increased the economic pie over the long-term. It's likely that future automation will have similar effects if managed correctly. It's possible to truncate the pain if we learn from previous iterations of this cycle. The fact that we're so far along technologically in comparison to previous revolutions means we're in a better position to proactively handle the transitory period.
If we fail to manage the short-term consequences of the new tech, it will lead to unrest. And if unrest and opposition to automation persist, the situation will likely be exacerbated. It's only by embracing innovation that we can make sure automation is a boon to the middle class and not the bane of its existence.
Throughout history, technology has created more jobs than it has destroyed - and while that isn't currently the case, it doesn't mean it won't be. I often compare the AI revolution to the introduction of electricity. Electricity was a massive disruptor that put many people out of work, but it was a fantastic benefit to society.
Doom and gloom sell. It's much easier to convince people something's going to be painful than amazing because we're creatures of habit and our monkey brains fear pain much more than they enjoy pleasure.
Our attitudes and actions play a pivotal role in how the world impacts us. Pragmatically, we have various institutions in place to make the transition as painless as possible - note that I wouldn't say painless, but as painless as possible.
Onwards!
_________________
[1] Frey, Carl & Osborne, Michael. (2013). The Future of Employment: How Susceptible Are Jobs to Computerisation?
Harvard's Center for International Development put together a tool that I think is pretty cool. It's called the Atlas of Economic Complexity. Its goal is to get you to think differently about the economic strategy, policy, and investment opportunities for individual countries.
Each country's profile analyzes its economic dynamics and future growth prospects, including which industries are burgeoning. They made it look pretty as well. If you're curious about specific questions, you can use their exploration function instead.
Boston Dynamics just released a video of their Atlas robot doing an impressive gymnastics routine. Comparing it to their videos from 2009 shows how insane the progress is.
We see plenty of fear about Skynet-esque advanced AI ... but Terminator-style robots may be a more immediate threat.
Boston Dynamics makes robotics look almost cute, but there's both promise and peril here. For example, Turkey is using autonomous killer drones in Syria.
Any tool can be used for good or evil, there's no inherent morality in a tool, but we're certainly good at finding ways to push the boundaries of their uses.
“Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It's really an attempt to understand human intelligence and human cognition.” —Sebastian Thrun
We often use human consciousness as the ultimate benchmark for artificial exploration.
The human brain is ridiculously intricate. While weighing only three pounds, it contains about 100 billion neurons and 100 trillion connections between them. On top of the sheer numbers, the organization of those connections and the sequences of actions the brain performs naturally make it even harder to replicate. The human brain is also constantly reorganizing and adapting. It's a beautiful piece of machinery.
It took millions of years for this powerhouse of a computer to evolve, and now we're trying to do the same with neural networks and machines in a truncated time period. While deep learning algorithms have been around for a while, we're only just now developing enough data and enough compute power to turn deep learning from a thought experiment into a real edge.
Think of it this way: when talking about the human brain, we talk about left-brain and right-brain. The theory is that left-brain activities are analytical and methodical, and right-brain activities are creative, free-form, and artistic. We're great at training AI for left-brain activities (obviously with exceptions). In fact, AI is beating us at these left-brain activities because computers have much higher input bandwidth than we do, they're less biased, and they can perform 10,000 hours of research by the time you finish this article.
It's tougher to train AI for right-brain tasks. That's where deep learning comes in.
Deep learning is a subset of machine learning that can learn from unstructured or unlabeled data, often without supervision. Instead of asking AI a question, giving it metrics, and letting it chug away, you're letting AI be intuitive. Deep learning is a more faithful representation of the human brain. It utilizes a hierarchy of neural network layers that combine linear and non-linear operations, so it can think more creatively and problem-solve across varied data sets and unseen environments.
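To make that concrete, here is a minimal sketch (using PyTorch, purely for illustration) of what "a hierarchy of layers mixing linear and non-linear operations" looks like in code. The layer sizes, toy data, and training loop are placeholders, not a recommendation.

```python
# A minimal deep network: linear layers interleaved with non-linear
# activations, trained on toy data. All sizes and values are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(              # a small "deep" stack of layers
    nn.Linear(16, 64), nn.ReLU(),   # linear operation + non-linear activation
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),               # output: a single prediction
)

X = torch.randn(256, 16)            # toy inputs (stand-in for real features)
y = torch.randn(256, 1)             # toy targets

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):             # learn by repeatedly reducing the error
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                 # backpropagate the error through the layers
    optimizer.step()
```

The point isn't the specific architecture; it's that the network discovers its own internal representations instead of being handed hand-crafted rules.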
When a baby is first learning to walk it might stand up and fall down. It might then take a small stutter step, or maybe a step that's much too far for its little baby body to handle. It will fall, fail, and learn. Fall, fail, and learn. That's very similar to the goal for deep learning or reinforcement learning.
What's missing is the intrinsic reward that keeps humans moving when the extrinsic rewards aren't coming fast enough. AI can beat humans at a lot of games but has struggled with puzzle/platformers because there's not always a clear objective outside of clearing the level.
A relatively new (in practice, not in theory) approach is to train AI around "curiosity"[1]. Curiosity helps it overcome that boundary. Curiosity lets humans explore and learn for vast periods of time with no reward in sight, and it looks like it can do the same for computers too!
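Here is a toy sketch of that idea: an agent that earns a small intrinsic bonus for visiting unfamiliar states, layered on top of a sparse extrinsic reward. The environment, the novelty bonus, and the update rule are simplified stand-ins, not how any particular curiosity-driven system is actually implemented.

```python
# Curiosity as an intrinsic reward: the agent gets a bonus for visiting
# states it hasn't seen often, on top of a sparse "clear the level" reward.
import random
from collections import defaultdict

visit_counts = defaultdict(int)     # how often each state has been seen
q_values = defaultdict(float)       # crude value estimate per (state, action)

def intrinsic_reward(state):
    """Novelty bonus that decays as a state becomes familiar."""
    visit_counts[state] += 1
    return 1.0 / (visit_counts[state] ** 0.5)

state = 0
for step in range(1000):
    action = random.choice([-1, 1])                     # explore left or right
    next_state = state + action
    extrinsic = 1.0 if next_state == 10 else 0.0        # sparse external reward
    reward = extrinsic + intrinsic_reward(next_state)   # curiosity keeps it moving
    # simple value update: fall, fail, and learn
    q_values[(state, action)] += 0.1 * (reward - q_values[(state, action)])
    state = next_state
```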
When I think about the invention of the wheel, I think about cavemen. But that isn't how it happened.
Lots of significant inventions predated the wheel by thousands of years. For example, woven cloth, rope, baskets, boats, even the flute were all invented before the wheel (and apparently not invented by cavemen).
While simple, the wheel worked well (and still does). Even now, the phrase "reinventing the wheel" is used derogatorily to depict needless or inefficient effort. But how does that compare to sliced bread (which was also a pretty significant invention)?
Despite being a hallmark of innovation, the wheel still took more than 300 years to be used for travel. With a bit more analysis, that makes sense. To use a wheel for travel, it needs an axle, and it needs to be durable and load-bearing - which requires relatively advanced woodworking and engineering.
All the aforementioned products created before the wheel (except for the flute) were necessary for survival. That's why they came first. As new problems arose, so did new solutions.
Necessity is the mother of invention.
Unpacking that phrase is a good reminder that inventions (and innovation) are often solution-centric.
Too many entrepreneurs are attracted to an idea because it sounds cool. They fall in love with their ideas and neglect their ideal customers' actual needs.
If you want to be disruptive, cool isn't enough. Your invention has to be functional, and it has to fix a problem people have (even if they don't know they have it.) The more central the complaint is to their daily lives the better.
Henry Ford famously said: “If I had asked people what they wanted, they would have said faster horses.”
Innovation means thinking about and anticipating wants and future needs.
Your customers may not even need something radically new. Your innovation may be a better application of existing technology or a reframe of best practices.
Uber didn't create a new car; they created a new way to get where you want to go, using existing infrastructure with less friction.
Football season is officially underway! In honor of that, here's a look at each position's composite player!
As you might expect, different sports draw different mixes of ethnicities - more Pacific Islanders in rugby, for example, or more Asians in badminton.
The same is true for different positions on a football team. Apparently, offensive linemen are more likely to be white while running backs are more likely to be black.
Here is a visualization that shows what happens when you average the faces of the top players at each position.
While you may be thinking "this player must be unstoppable" ... statistically, he's average.
The "composite" NFL player would be the 848th best player in the league. He's not a starter, and he plays on an average team.
We found the same thing with our trading bots. The ones that made it through most filters weren't star performers. They were the average bots that did enough not to fail (but failed to make the list as top performers in any of the categories). The survivors were generalists, not specialists.
In an ideal world, with no roster limits, you'd want the perfect lineup for each granular situation. You'd want to evaluate players on how they perform under pressure, on different downs, against other players, and with different schemes.
That's what technology lets you do with algorithms. You can have a library of systems that communicate with each other ... and you don't even have to pay their salary (but you will need data scientists, researchers, machines, data, alternative data, electricity, disaster recovery, and a testing platform).
You won't find exceptional specialists if your focus is on generalized safety. Generalists are great, but you also have to be able to respond to specific conditions.
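To illustrate, here is a minimal sketch of what a "library of systems" could look like: specialist strategies registered by the market condition they're built for, with a simple router choosing among them. The strategy names, regimes, and signals are hypothetical placeholders, not anything we actually run.

```python
# A library of specialist systems: each strategy handles one market regime,
# and a router picks the right specialist for current conditions.
from typing import Callable, Dict

StrategyFn = Callable[[dict], str]   # takes market data, returns an order signal

library: Dict[str, StrategyFn] = {}

def register(regime: str):
    """Register a specialist strategy for a given market regime."""
    def wrapper(fn: StrategyFn) -> StrategyFn:
        library[regime] = fn
        return fn
    return wrapper

@register("trending")
def momentum_bot(data: dict) -> str:
    return "BUY" if data["return_20d"] > 0 else "SELL"

@register("choppy")
def mean_reversion_bot(data: dict) -> str:
    return "SELL" if data["zscore"] > 1 else ("BUY" if data["zscore"] < -1 else "HOLD")

def route(regime: str, data: dict) -> str:
    """Pick the specialist for the current conditions; otherwise do nothing."""
    strategy = library.get(regime)
    return strategy(data) if strategy else "HOLD"

print(route("trending", {"return_20d": 0.04}))   # -> BUY
print(route("choppy", {"zscore": 1.4}))          # -> SELL
```

The design choice is the point: rather than one generalist that survives everything, you keep a roster of specialists and decide who takes the field based on conditions.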
In Part 2, I talked about normalizing your habits and picking consistent, normalized metrics. This doesn't just work at the gym; it applies to life and business as well.
Today, I want to explain how and why this helps. To do so, we will talk about controlling your arousal states.
Chemically, most arousal states are the same - meaning the same hormones and neurotransmitters that make you feel fear can also make you feel excited. They affect your heart rate, respiration, etc. ... though the outside stimuli you experience likely determine how you interpret what is happening.
In most situations, a heart rate of 170 beats per minute is an indicator of extreme danger (or an impending toe-tag). If I felt my heart racing like that in a meeting, it might trigger a fight or flight instinct. I prefer conscious and controlled responses. So, I train myself to recognize what I can control and to respond accordingly.
One way I do that is by being mindful of heart rate zones during exercise.
My goal is to get as close to 170 bpm as I can, then stay in that peak zone for as long as possible.
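For context, here is a rough sketch of how fitness apps typically bucket effort into zones by percentage of maximum heart rate. The 220-minus-age estimate and the zone cutoffs are common rules of thumb, not medical or training advice, and the exact boundaries vary by app.

```python
# Rough heart rate zone bucketing by percent of estimated maximum heart rate.
# The formula and cutoffs are common rules of thumb, used here for illustration.
def max_heart_rate(age: int) -> int:
    return 220 - age                      # rough estimate of max HR

def zone(bpm: int, max_hr: int) -> str:
    pct = bpm / max_hr
    if pct < 0.60:
        return "rest / warm-up"
    if pct < 0.70:
        return "fat burn"
    if pct < 0.85:
        return "cardio"
    return "peak"

max_hr = max_heart_rate(50)               # example age; adjust for yourself
for bpm in (95, 120, 150, 170):
    print(bpm, f"{bpm / max_hr:.0%}", zone(bpm, max_hr))
```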
Here is a chart showing a Fitbit readout of an exercise session.
As you can see, every time I reach my limit ... I get my heart rate back down. It becomes a conscious and controlled learned behavior.
Here is a different look that shows effort based on my maximum heart rate. It is from an app called Heart Analyzer.
Recognizing what this feels like is a form of biofeedback; it's not only gotten me better at controlling what happens after my heart rate reaches 170, but also at identifying when I'm close – even without a monitor.
Now, when my heart rate is at 170 bpm (regardless of the situation), I don't feel anxious ... I think about what I want to do.
I currently use an Apple watch with the HeartWatch app to measure heart rate during the day. The Oura Ring is what I use to measure sleep and readiness.
These are useful tools.
It's the same with trading ... Does a loss or error harsh your mellow – or is it a trigger to do what you are supposed to do?
Getting used to normalized risk creates opportunity.
When you are comfortable operating at a pace, or in an environment, that others find difficult – you have a profound advantage and edge.