Recently, however, it seems that we are increasingly presented with issues divided into polar opposite points of view, with little to no tolerance for disagreement.
Nonetheless, not all topics need to be debated or negotiated.
Sometimes, a fact is a fact.
Hopefully, this video won't step on any toes – but if you're a "flat earther," you might not want to watch.
Here's a clip from Behind The Curve (a documentary on flat earthers) that I think perfectly shows confirmation bias.
Start with the evidence and then form a conclusion. Doing that in reverse doesn't tend to work out as well.
As a polite reminder, if a conspiracy relies on millions of people (as well as different countries and organizations) to all commit to the disinformation campaign … it's not likely true.
As Occam's razor states, the simplest explanation is often the correct one.
While the Hindenburg Omen calculation is based on five factors, its primary conditions boil down to detecting a deep disagreement about the direction of the market.
For example, two of the conditions are that a substantial number of stocks must be hitting new yearly highs at the same time that a substantial number of stocks are hitting new annual lows. It is hard for both of those conditions to be met in a short period of time unless there's real uncertainty in the market. Moreover, after a rally, uncertainty is often a precursor to a decline.
In addition, for the pattern to be technically complete, a second sighting of all five elements must occur within 36 days. Logically, lingering uncertainty is a momentum killer.
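To make the mechanics concrete, here's a minimal Python sketch of the two breadth conditions described above, plus the 36-day confirmation window. The threshold, field names, and data structure are illustrative assumptions – published definitions of the Omen vary from source to source.

```python
from dataclasses import dataclass

@dataclass
class BreadthDay:
    """One day's market-breadth numbers (a hypothetical structure)."""
    issues_traded: int
    new_52wk_highs: int
    new_52wk_lows: int

def omen_conditions_met(day: BreadthDay, threshold: float = 0.022) -> bool:
    """True when a 'substantial number' of issues hit new yearly highs
    AND new yearly lows on the same day. The ~2.2% threshold is one
    commonly quoted figure; definitions vary."""
    highs_pct = day.new_52wk_highs / day.issues_traded
    lows_pct = day.new_52wk_lows / day.issues_traded
    return highs_pct >= threshold and lows_pct >= threshold

def confirmed_omen(signal_days: list[int], window: int = 36) -> bool:
    """The pattern only 'completes' if a second sighting occurs
    within `window` days of an earlier one."""
    return any(later - earlier <= window
               for earlier, later in zip(signal_days, signal_days[1:]))
```

The exact threshold matters less than the core idea: both extremes have to fire at once, and that only happens when the market can't agree on a direction.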
Should I Be Worried?
This week, Cumberland Advisors shared the following from Art Cashin, Director of Floor Operations for UBS Financial Services at the New York Stock Exchange.
Art had this to say:
“I had told Carl Quintanilla on CNBC’s Squawk on the Street in an interview about 10:20 that I thought the chatroom bears were turning a bit more aggressive. Several were trying to point out that we had had two Hindenburg Omens in a row. In case you had forgotten, a Hindenburg Omen is a rather arcane indicator that takes as a measurement the ratio or relationship between the new 52-week highs and the new 52-week lows. It is quite unusual to have two days back-to-back with new Hindenburg Omens.
Now, you have to be a little bit careful about the Hindenburg Omen because, over the last 35 or 40 years, we haven’t had a market ‘crash’ without the presence of the Hindenburg Omen, and that is what chatroom bears were pushing. You have to remember the other part of that, which is while there has always been a Hindenburg Omen before a crash, there has not been a crash after every Hindenburg Omen. To use a rather poor analogy, it is almost like saying, we have never had a flood without rain. But, then again, every time it rains, it doesn’t mean it is going to flood.
So, it was, nevertheless, an effective tool among the chatroom types just to make people nervous. I am not sure how many have bought into the Hindenburg aspect, but it was one of those ‘Wait a minute – should I be aggressive on the buy-side or should I wait and hold back here?’ I think it had some of that effect.” – Art Cashin
From my perspective, while this pattern may have correctly predicted every big stock market swoon of the past two decades (including the October 2008 decline), not every Hindenburg Omen has been followed by a crash. To borrow a geometry analogy: all squares are rectangles, but not all rectangles are squares.
Times are strange – and there's reason to be wary of the markets – but indicators like this are a reason to be cautious, not a basis for trading decisions.
GPT-3 was released by OpenAI in 2020 – and was considered by many to be a huge jump in natural language processing.
GPT stands for Generative Pre-trained Transformer. It uses deep learning to generate text responses based on an input text. Even more simply, it's a bot that creates text of such high quality that it can be difficult to tell whether it was written by a human or an AI.
GPT-3 is roughly 100x bigger than its predecessor, GPT-2, and comes pre-trained on 45TB of text (499 billion tokens). It cost at least $4.6 million (with some estimates as high as $12 million) to train on GPUs. The resulting model has 175 billion parameters. On top of that, it can be fine-tuned to your specific use case after the fact.
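For context, here's a rough sketch of what querying GPT-3 looked like through OpenAI's Python client around its release. The engine name, prompt, and parameter values here are illustrative, not a recommendation.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # access was granted via OpenAI's private beta

response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 engine at the time
    prompt="Explain confirmation bias in one sentence:",
    max_tokens=60,     # cap the length of the generated completion
    temperature=0.7,   # higher values produce more varied text
)

print(response.choices[0].text.strip())
```

The striking part was that the same call, with nothing changed but the prompt, could summarize, translate, answer questions, or write code – no retraining required.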
Practically, GPT-3 was a huge milestone. It represents a big jump in NLP's capabilities and a massive increase in scale. That being said, the frenzy in the community may have outpaced the results. To the general public, it felt like a discontinuity – like a big jump toward general intelligence.
To me, and to others I know in the space, GPT-3 represents a preview of what's to come. It's a reminder that Artificial General Intelligence (AGI) is coming and that we need to be thinking about the rules of engagement and ethics of AI before we get there.
Especially with Musk unveiling his intention to build 'friendly' robots this week.
On the scale of AI's potential, GPT-3 was a relatively small step. It's profoundly intelligent in many ways – but it's also inconsistent, and its grasp of concepts isn't concrete enough.
Take it from me: the fact that an algorithm can do something amazing isn't surprising anymore … but neither is the fact that an amazing algorithm can do stupid things more often than you'd suspect. It is all part of the promise and the peril of exponential technologies.
It's hard to measure the intelligence of tools like this because metrics like IQ don't apply. Really, it comes down to utility: does it help you do things more efficiently, more effectively, or with more certainty?
For the most part, these tools are early. They show great promise, and they do a small subset of things surprisingly well. If I think about them simply as a tool, a backstop, or a catalyst to get me moving when I'm stuck … the current set of tools is exciting. On the other hand, if you compare current tools to your fantasy of artificial general intelligence, there is a lot of room for improvement.
Clearly, we are making progress. Soon, GPT-4 will take us further. In the meantime, enjoy the progress and imagine what you will do with the capabilities, prototypes, products, and platforms you predict will exist for you soon.
Each year, I look forward to Camp Kotok – or, as I like to call it, "Economists in Nature." It's basically five days of canoeing, fishing, and dining with economists, wealth managers, traders, investors, and more.
It's one of the few chances for people from these backgrounds to come together and talk about the world, big trends, investing, economics, politics, and more … in an open and safe forum. The event operates under the Chatham House Rule – which basically means you can share the information you receive, but not who said it.
This year, we talked about everything from China and digital currencies to the pandemic and the state of the markets.
Interestingly, of all the things I could focus on, the main takeaway was uncertainty.
For all the intelligent and "in-the-know" people in the room, very few had clear opinions about what was going to happen. There were too many variables at play, and while they posited a lot of potential paths, the general consensus was that we're at a crossroads with many potential futures in front of us.
Despite the general uncertainty in the room, the mood wasn't fear-laden. It was optimistic, and, for the most part, everyone saw paths toward economic success post-COVID.
With that said, when and what "post-COVID" means is another issue.
One of the other key discussions was the new generation of workers and their changing relationship with work. It's plain to see that the rate of quitting is higher, that wages are rising, and that it's getting hard to fill minimum-wage jobs. It's also hard to get employees back into an office, and many are willing to take pay cuts or switch companies to keep working from home.
The long-term impact on our economy (and our culture) is yet to be seen.
As the world gets faster, automation becomes even more important. How else can you keep up with all the data you need, the tests you need to run, etc.?
But, like with all things, moderation is key. Knowing what to automate and when is vital to your success. Especially in data science, sometimes less is more.
I shot a short video with some of my main ideas on the subject. Check it out.
Automation should be one of the last steps you attempt in any business process. It comes once you're confident in an ability and ready to move on to bigger problems. And the secret to knowing when to automate is data – specifically, the metadata created by your processes. That metadata can help you assess whether a process can be automated and, if so, how long it can run before you must re-evaluate and update it.
Often, as you're getting started, less is really more, because while artificial intelligence is cool, artificial stupidity (or automated inefficiencies) is scary.
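To illustrate the metadata idea, here's a toy Python sketch: log each manual run of a process, then use simple statistics to decide whether it's stable enough to automate. The class, thresholds, and fields are hypothetical – a sketch of the concept, not an actual production system.

```python
import statistics

class ProcessLog:
    """Tracks the metadata a business process generates as it runs."""

    def __init__(self):
        self.runs = []  # list of (succeeded: bool, duration_sec: float)

    def record(self, succeeded: bool, duration_sec: float) -> None:
        self.runs.append((succeeded, duration_sec))

    def ready_to_automate(self, min_runs: int = 30,
                          max_error_rate: float = 0.02,
                          max_duration_cv: float = 0.25) -> bool:
        """Automate only once the process is well-exercised, rarely
        fails, and behaves predictably (low variation in runtime)."""
        if len(self.runs) < min_runs:
            return False
        error_rate = sum(not ok for ok, _ in self.runs) / len(self.runs)
        durations = [d for _, d in self.runs]
        variation = statistics.stdev(durations) / statistics.mean(durations)
        return error_rate <= max_error_rate and variation <= max_duration_cv
```

The same log keeps earning its keep after you automate: if the error rate or the runtime variance starts to drift, that's your signal to re-evaluate and update.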
At a mastermind meeting last week, Landon Downs from 1Qbit spoke on the state of technology. Landon and I agree on a lot of things – and he emphasized one of them heavily: AI is in a period of massive innovation. It's a renaissance, or springtime, or whatever metaphor you want to use. But it's only springtime for AI if you can take advantage of it.
Adding to that, he explained that a current constraint might significantly limit how widely AI can spread in the short term: a global chip shortage (which could remain an issue until 2023).
The chip shortage is probably a bigger problem than you imagine because microchips are in everything from refrigerators to toothbrushes – not just high-tech computers. This has the potential to be a massive disruptor, especially in the tech industry.
Building and running smart AI systems takes a lot of computing power, and as more competitors enter the scene, not only will the cost to play increase, but so will the chance that you get turned away at the door.
To a certain extent, the AI arms race becomes a chip arms race.
As I thought about the chip shortage and its impact on the next few years, I also found myself brainstorming the other shifts most likely to influence me, my business, and potentially the world.
Here are my top five – and I'd love to hear yours.
1. Compute power is going to increase, and the ability to brute-force problems will create new possibilities. Quantum computing will become more important and likely available for commercial use.
2. New and better AI platforms will transition AI from a tool for specialists to a commodity for everyday people – it won't just be Artificial Intelligence, it will be Amplified Intelligence (helping people make better decisions, take smarter actions, and continually measure and improve performance).
3. Blockchain and authenticated provenance are going to become more important as the world becomes increasingly digital. Trust and transparency will matter as indelible logs become necessary in finance, medicine, the military, and beyond.
4. IoT will become more pervasive, enabling near-digital omniscience as everything becomes a sensor that transmits data up the chain.
5. Mass customization will become the norm instead of simple mass production as hardware, data, and AI continue to improve products, medicine, custom supplements, and just about everything else.
While self-driving cars seem like a relatively new invention, the reality is that the earliest autonomous vehicles existed in the early 1980s (non-autonomous versions and semi-workable experiments have existed since the 1920s).
Luckily, the standards and approaches have gotten much better since then, and we continue to make massive strides. Recently, Elon Musk stated that he was confident that level 5 self-driving cars would exist by the end of this year. That would mean the need for a steering wheel or a driver's seat would be next to zero – a luxury, even.
According to many AI experts, this claim is exciting precisely because level 5 autonomy is not just difficult – it's near impossible.
Think of it from a human perspective. When we're driving, many minute decisions happen instantaneously and without much trouble. But some of those decisions are "subjective" and seemingly novel. We know the answer because we intuit the answer – not because we're following any specific rule.
For a car to reach level 5 autonomy, it would have to be pre-trained for essentially every possible situation it could encounter – no matter how rare.
Elon Musk is famous for his polarizing beliefs and predilection for extreme statements … but will Tesla somehow solve these problems?
Is AI about to pass another hurdle already?
It's exciting stuff! As someone who hates long drives, I'm certainly ready for it. I can also envision a future where autonomous driving is the norm, and individuals who want the right to drive their cars themselves will have to pass extra tests, pay extra fees, and warn the autonomous cars that a human is at the wheel.