Capitalogix started in my home. The first employee sat at a tiny desk behind me. Their job was to exit the trades I entered. This was an early attempt to avoid the fear, greed, and discretionary mistakes that humans bring to the business of trading.
We started to grow … and somehow got to 23 people working in my home. The team literally overtook my office, dining room, and the entire upstairs. Neighbors noticed (and expressed their displeasure).
Looking back, it seems crazy (and my wife seems saintly). But somehow, at the time, it felt natural.
Incubating the company in my home, and growing it the way we did, resulted in a closeness (a feeling much like family) that pays dividends, even today.
There is a concept in business expressed by the phrase "measure twice and cut once." It's much easier to do something the right way from the beginning than to try to fix it after you mess it up.
It saves time and creates a better end result.
Beginning with the end in mind is powerful. I often spend what looks like "too much" time imagining the bigger future. What will things look like when we are ten times bigger? Who will we serve? What dangers will keep me up at night? What opportunities will we be trying to attract or capture? What strengths will give us confidence? Who will we be collaborating with … and about what? It helps build a roadmap that makes it easier to understand whether particular activities are aligned with our future (or just something we are doing now).
I prefer to optimize on the longer term rather than the shorter term. That isn't always possible or practical, but that is my preference when it is.
Pace is important, and a focus on "what's the best next step" is an important driver at Capitalogix. But sometimes, in order to go fast, you have to go slow. You may miss out on something, but the ultimate payoff is often worth it.
It's a good lesson for personal growth as well. There is no right timeline. No one size fits all. Take your time. Find your path.
Industry is changing fast. In the 1900s, the world's titans mainly produced tangible goods (or the infrastructure for them). The turn of the century brought an increasing return on intangible assets like data, software, and even brand value … hence the rise of the influencer.
As technology increasingly changes business, jobs, and the world, intellectual property becomes an increasingly important way to set yourself apart and make your business more valuable.
While America is leading the charge in AI, we're falling behind in creating and protecting intellectual property.
Patents can help protect your business, but they also do much more. IP creates inroads for partnerships with other businesses, and it can also be a moat that makes it more difficult for others to enter your space. On a small scale, this is a standard business strategy. What happens when the scale increases?
The number of patents we're seeing created is a testament to the pace of innovation in the world, but it should also be a warning to protect your innovations. Remember, however, that anything you get a patent on becomes public knowledge – so be careful with your trade secrets.
As I experiment with social media in preparation for the launch of my book "Compounding Insights: Turning Thoughts Into Things in the Age of AI," we've started producing short videos where employees ask me questions … some dumb and some smart.
One we just released asked the question, "Does astrology work?" Here is my response.
The first answer is no … at least not in the way many believers wish it did. Nonetheless, many get value from astrology because it helps them think about themselves and others from a different perspective while providing comfort and structure.
It's like a nightlight in the dark. It doesn't make you any safer, but it feels like it.
Unfortunately, like many things … some people take it too far.
Trading is more accessible than ever before. We've gone from scrums of traders in trading pits to armchair experts investing in real estate, cryptocurrencies, options, and more from the comfort of their couches in their underwear.
With accessibility often comes misuse. And, in this specific case … astrology.
"Mercury Is In Retrograde … Should I Sell My Stocks?"
A blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do just as well as one carefully selected by experts. – Burton Malkiel, “A Random Walk Down Wall Street”
My son brought to my attention an iPhone app – Bull and Moon: "Find stocks whose stars align with yours."
After you create your "astrological investor profile," their "proprietary financial astrology algorithm recommends an optimal portfolio of six stocks and shows your compatibility score with thousands more."
The picks were pedestrian: Oracle, Hasbro, American International Group, Microsoft, Yum! Brands, and FedEx.
The logic and commentary were entertaining. The choices were based on "similarities in business decisions," "shared outlooks on humanity," and "strong mutual success metrics."
Here is an excerpt:
Zach can usually let strong FedEx Corporation lead the relationship, but at the same time, Zach will invest many times over. This relationship will be full of success, understanding on many levels, and a lot of fun.
It's no secret that I've been a proponent of the proliferation and adoption of AI. I've been the CEO of AI companies since the early 90s, but it was the early 2000s when I realized what the future had in store.
A few years ago, in anticipation of where we are today, I started participating in discussions about the need for governance, ethical guidelines, and response frameworks for AI (and other exponential technologies).
Last week, I said that we shouldn't slow down the progress of generative AI … and I stand by that, but that doesn't mean we shouldn't be working with urgency to provide bumper rails that keep AI in check.
There are countless ethical concerns we should be talking about:
Bias and Discrimination – AI systems are only as objective as the data they are trained on. If the data is biased, the AI system will also be biased. Not only does that create discrimination, but it also makes those systems more susceptible to error and manipulation.
Privacy and Data Protection – AI systems are capable of collecting vast amounts of personal data, and if this data is misused or mishandled, it could have serious consequences for individuals' privacy and security. We need to manage not only the security of these systems, but also where and how they get their data.
Accountability, Explainability, and Transparency – As AI systems become increasingly complex and autonomous, it can be difficult to determine who is responsible when something goes wrong, not to mention the difficulty in understanding how public-facing systems arrive at their decisions. Explainability becomes more important for generative AI models as they're used to interface with anyone and everyone.
Human Agency and Control – When AI systems become more sophisticated and autonomous, there is fear about their autonomy … what amount of human control is necessary, and how do we prevent "malevolent" AI? Within human agency and control, we have two sub-topics. The first is job displacement: do we prevent AI from taking certain jobs as one way to preserve jobs and the economy, or do we look at other options, like universal basic income? The second is international governance: how do we ensure that ethical standards are upheld to prevent misuse or abuse of the technology by bad actors?
Safety and Reliability – Ensuring the safety and reliability of AI systems is important, particularly in areas such as transportation and healthcare where the consequences of errors can be severe. Setting standards of performance is important, especially considering the outsized response when an AI system does commit an "error". Think about how many car crashes are caused by human error and negligence… and then think about the media coverage when a self-driving car causes one. If we want AI to be adopted and trusted, it's going to need to be held to much higher standards.
These are all real and present concerns that we should be aware of. However, it's not as simple as creating increasingly tight chains to limit AI. We have to be judicious in our applications of regulation and oversight. We intrinsically know the dangers of overregulation – of limiting freedoms. Not only would it stifle creativity and output, but it would also encourage bad actors to push further beyond what law-abiding creators can do.
If you want to see one potential AI risk management framework, here's a proposal from the National Institute of Standards and Technology called AI RMF 1.0. It's a nice jumping-off point for thinking about internal controls and preparing for impending regulation. To be one step more explicit … if you are a business owner or a tech creator, you should be getting a better handle on your own internal controls, as well as anticipating external influence.
In conclusion, there is a clear need for AI ethics to ensure that this transformative technology is used in a responsible and ethical manner. There are many issues we need to address as AI becomes more ubiquitous and powerful. That's not an excuse to slow down, because slowing down only lets others get ahead. If you're only scared of AI, you're not paying enough attention. You should be excited.
Many of the people who read my blog or subscribe to my newsletter are either entrepreneurs or in the financial space. While Charlie Epstein moonlights as an actor/comedian, his day job is in financial services. He's incredibly sharp, very knowledgeable … and yes, a little quirky.
But that quirkiness is what makes him funny – so much so that you'll be captivated long enough to gain some real value. Charlie does an excellent job teaching people how to do practical things to ensure they have enough money when they retire to live a good life.
More importantly, he helps you think about your mindsets and what you truly want, so you can live the life you've always dreamed of and deserved. And even though I didn't think I needed to learn anything new, I gained a ton of practical value – and you probably will too.
As a bonus, half of the proceeds go toward supporting vets with PTSD.
There aren't many people (or "offers") I'd feel comfortable plugging, but this is one of them. Also, many of the other people I would put in front of you (like Dan Sullivan, Peter Diamandis, and Mike Koenigs) love Charlie as much as I do.
When I first got interested in trading, I used to look at many traditional sources and old-school market wisdom. I particularly liked the Stock Trader's Almanac.
While there is real wisdom in some of those sources, most might as well be horoscopes or Nostradamus-level predictions. Throw enough darts, and one of them might hit the bullseye.
Traders love patterns, from the simple head-and-shoulders to Fibonacci sequences and Elliott Wave Theory.
Here's an example from Samuel Benner, an Ohio farmer, in 1875. That year, he released a book titled "Benner's Prophecies: Future Ups and Downs in Prices," and in it, he shared a now relatively famous chart called the Benner Cycle. Some claim that it's been accurately predicting the ups and downs of the market for over 100 years. Let's check it out.
Here's what it does get right … markets go up, and then they go down … and that cycle continues. Consequently, if you want to make money, you should buy low and sell high … It's hard to call that a competitive advantage.
Mostly, you're looking at vague predictions with +/- 2-year error bars on a 10-year cycle.
However, it was close to the dotcom bust and the 2008 crash … so even if you sold a little early, you'd have been reasonably happy with your decision to follow the cycle.
The truth is that we use cycle analysis in our live trading models. However, it is a lot more rigorous and scientific than the Benner Cycle. The trick is figuring out what to focus on – and what to ignore.
Just as humans are good at seeing patterns where there are none … they tend to see cycles that aren't anything but coincidences.
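To show what I mean by "more rigorous," here is a minimal sketch in Python (using synthetic data and standard tools like NumPy and SciPy – purely illustrative, not our live models). It estimates the dominant cycle in a return series with a periodogram, then checks whether that peak survives comparison against shuffled noise.

```python
# Illustrative cycle check on synthetic data (not a real trading model):
# find the strongest spectral peak in a return series, then ask whether
# pure noise with the same values could have produced a peak that big.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(42)

def dominant_cycle(returns):
    """Return (period_in_bars, peak_power) of the strongest spectral peak."""
    freqs, power = periodogram(returns, detrend="linear")
    i = np.argmax(power[1:]) + 1          # skip the zero-frequency bin
    return 1.0 / freqs[i], power[i]

# Synthetic daily returns: white noise plus a modest ~250-bar cycle.
n = 2500
t = np.arange(n)
returns = rng.normal(0, 0.01, n) + 0.003 * np.sin(2 * np.pi * t / 250)

period, peak = dominant_cycle(returns)

# Null distribution: shuffle the same returns many times; shuffling destroys
# any real cycle, so the shuffled peaks show what "noise" alone can do.
null_peaks = [dominant_cycle(rng.permutation(returns))[1] for _ in range(500)]
p_value = np.mean([p >= peak for p in null_peaks])

print(f"Strongest cycle ~ {period:.0f} bars (p ~ {p_value:.3f} vs. shuffled noise)")
# An eyeballed cycle like Benner's would rarely survive a test like this.
```

The point isn't this particular test – it's that any cycle worth trading should beat a noise benchmark before you trust it.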
This is a reminder that just because an AI chat service recommends something doesn't make it a good recommendation. Those models do some things well, but making scientifically or mathematically rigorous market predictions isn't an area where you should trust ChatGPT or one of its rivals.
It's been about a month since we discussed Silicon Valley Bank (SVB). But the impact is still lingering. I know friends whose money is still tied up, and we've continued to see increased coverage of banks' perceived failures.
Currently, there is $7 trillion sitting uninsured in American banks. VisualCapitalist put together a list of the 30 biggest banks by uninsured deposits.
Many of the banks on this list are systemically important to the banking system … which means the government would be more incentivized to prevent their collapse.
It's important to make clear that these banks differ from SVB in several ways. To start, their customer base is much more diverse … but even more importantly, their loans and held-to-maturity securities are much lower as a percentage of total deposits. Those assets tied up the vast majority of SVB's deposits, while they make up less than 50% of deposits at the systemically important banks on this list. Still, according to VisualCapitalist, 11 banks on the list have ratios over 90%, just like SVB, which puts them at a much higher risk level.
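As a back-of-the-envelope illustration of the ratio being described (placeholder numbers, not any real bank's balance sheet), the math looks like this:

```python
# (loans + held-to-maturity securities) / total deposits
# Placeholder figures in $ billions – not real bank data.
def illiquidity_ratio(loans, htm_securities, total_deposits):
    """Share of deposits tied up in assets that are costly to sell quickly."""
    return (loans + htm_securities) / total_deposits

banks = {
    "Hypothetical Bank A": illiquidity_ratio(loans=125, htm_securities=60, total_deposits=200),
    "Hypothetical Bank B": illiquidity_ratio(loans=80, htm_securities=15, total_deposits=200),
}

for name, ratio in banks.items():
    flag = "SVB-like risk profile" if ratio > 0.9 else "more breathing room"
    print(f"{name}: {ratio:.0%} of deposits tied up ({flag})")
```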
Regulators stepped up in the wake of the SVB collapse, and the Fed also launched the Bank Term Funding Program (BTFP), as we discussed in the last article on this subject. But, it remains to be seen what will happen in the future.
Does the Fed have another option besides saving the banks and backing deposits? If not, market participants will start to rely on the Fed to come to the rescue and will make even riskier decisions than they already were.
It feels like the Fed is stuck between a rock and a hard place, but hopefully, we will start to see some movement in the right direction.
In March, OpenAI unveiled GPT-4, and people were rightfully impressed. Now, fears are even greater about the potential consequences of more powerful AI.
In response, an open letter – "Pause Giant AI Experiments" – called for a temporary halt to training ever more powerful systems. The letter raises a couple of questions.
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? – Pause Giant AI Experiments: An Open Letter
The crux of their message is that we shouldn't be blindly creating smarter and more robust AI until we are confident that they can be managed and controlled to maintain a positive impact.
During the pause the letter calls for, the suggestion is for AI labs and experts to jointly develop and implement safety protocols that would be audited by an independent agency. At the same time, the letter calls for developers to work with policymakers to accelerate the development of robust AI governance systems and regulatory authorities.
My personal thoughts? Trying to stop (or even pause) the development of something as important as AI is naive and impractical. From the Industrial Revolution to the Information Age, humanity has always embraced new technologies, despite initial resistance and concerns. The AI Age is no different, and attempting to stop its progress would be akin to trying to stop the tide. On top of that, AI development is a global phenomenon, with researchers, institutions, and companies from around the world making significant contributions. Attempting to halt or slow down AI development in one country would merely cede the technological advantage to other nations. In a world of intense competition and rapid innovation, falling behind in AI capabilities could have severe economic and strategic consequences.
It is bigger than a piece of software or a set of technological capabilities. It represents a fundamental shift in what's possible.
The playing field changed. We are not going back.
The game changed. That means what it takes to win or lose changed as well.
Yes, AI ethics is an important endeavor and should be worked on as diligently as the creation of new AI. But there is no pause button for exponential technologies like this.
Change is coming. Growth is coming. Acceleration is coming. Trying to reject it is an exercise in futility.
We will both rise to the occasion and fall to the level of our readiness and preparedness.
Actions have consequences, but so does inaction. In part, we can't stop because bad actors certainly won't stop to give us time to combat them or catch up.
When there is some incredible new "thing," there will always be some people who try to avoid it … and some who try to leverage it (for good and bad purposes).
There will always be promise and peril.
What you focus on and what you do remains a choice.
Whether AI creates abundance or doomsday for you will be defined largely by how you perceive and act on its promise and peril. Artificial intelligence holds the potential to address some of the world's most pressing challenges, such as climate change, disease, and poverty. By leveraging AI's capabilities, we can develop innovative solutions and accelerate progress in these areas.
It's two sides of the same coin. A 6-month hiatus won't stop what's coming. In this case, we need to pave the road as we traverse it.
This week, the rapid collapse of Silicon Valley Bank (“SVB”) stunned the venture capital and startup community. SVB customers initiated withdrawals of $42bn in a single day (a quarter of the bank’s total deposits), and it could not meet the requests. By Friday, the Federal Deposit Insurance Corporation (the “FDIC”), the US bank regulator that guarantees deposits of up to $250,000, declared SVB insolvent and took control. The run was so swift SVB’s coffers were drained in full, and the bank carried a “negative cash balance” of nearly $1bn.
Silicon Valley Bank’s death spiral started on Wednesday when it told investors that it needed to raise over $2 billion … in large part due to unforced errors. To start, its balance sheet took a massive hit because of inflation and the subsequent rise in interest rates. Deposits in the bank grew massively from 2019 to 2021, and interest rates were low, so the bank heavily invested in treasury bonds. Those bonds were yielding an average of only 1.79% at the time. When the Fed jacked up rates, the approximately $80 billion SVB had in bonds cratered in value. Suddenly, SVB customers began a frantic bank run, ultimately withdrawing $42 billion worth of deposits by the end of Thursday. By Friday, the FDIC had seized the bank in the most significant bank failure since the Great Recession. To make matters worse, 97% of deposits in the bank were above the FDIC insurance threshold and thus uninsured.
When I started writing this article, it was unclear what would happen to the thousands of VCs, PE funds, and startups heavily reliant on SVB. Over 65,000 startups were worried about missing payroll, and it was all dependent on the whim of the FDIC. Luckily for them, regulators took aggressive action and agreed to backstop all depositors – hoping to prevent runs on any other financial institutions.
Meanwhile, the Dow posted its worst week since June on the back of the big banks being hit with big losses.
The FDIC stepping in is part of a broader effort by regulators to reassure customers that their money is safe. For example, the US central bank added it was “prepared to address any liquidity pressures that may arise.”
The Fed’s new facility, the Bank Term Funding Program (BTFP), will offer loans of up to one year to lenders who pledge as collateral US Treasuries, agency debt, mortgage-backed securities, and other “qualifying assets.”
Those assets will be valued at par, and the BTFP will eliminate an institution’s need to quickly sell those securities in times of stress. The Fed said the facility would be big enough to cover all US uninsured deposits. The discount window, where banks can access funding at a slight penalty, remains “open and available,” the central bank added.
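To see why valuing collateral “at par” matters, here's a rough sketch with made-up round numbers (illustrative only): a bank holding underwater bonds can borrow against their face value through the BTFP instead of dumping them at today’s depressed price.

```python
# Illustrative only – round numbers, not any bank's actual position.
face_value = 1_000_000_000    # what the bonds pay back at maturity ("par")
market_value = 850_000_000    # what a forced sale would fetch after rates rose

fire_sale_loss = face_value - market_value     # loss realized if sold today
btfp_loan = face_value                         # BTFP lends against par value
extra_liquidity = btfp_loan - market_value     # cash raised vs. a fire sale

print(f"Forced sale locks in a ${fire_sale_loss / 1e6:.0f}M loss")
print(f"A BTFP loan raises ${extra_liquidity / 1e6:.0f}M more than selling, "
      f"and the paper loss stays unrealized while the loan (up to one year) is outstanding")
```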
Officials on Sunday said that the taxpayer would bear no losses stemming from the resolution of deposits. A levy on the rest of the banking system would fund any shortfall. They added that shareholders and certain unsecured debtholders would not be protected.
A Look at How This Happened
We’ve already touched on the bank run and what caused it … but let’s dive deeper.
One of the biggest risks to SVB’s business model was catering to a very tightly-knit group of investors who exhibit herd-like mentalities. The problem with a business model like that is that when capital dries up, the deposits flee. Unfortunately, that sounds like a bank run waiting to happen … and it did.
The situation created a prisoner’s dilemma for depositors: I’m fine if they don’t draw their money, and they’re fine if I don’t draw mine. But once some started withdrawing, others followed suit.
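Here’s a stylized version of that game with arbitrary payoffs (strictly speaking it’s closer to a coordination game than a classic prisoner’s dilemma, but the dynamic is the same): staying is great as long as everyone else stays, yet the moment you expect others to run, running becomes your best response.

```python
# Two-depositor withdrawal game with arbitrary, illustrative payoffs.
# Each depositor chooses to "stay" or "run"; if others run and you stay,
# you're last in line at a failing bank.
payoffs = {
    # (my move, their move): my payoff
    ("stay", "stay"): 100,   # bank survives; everything works as normal
    ("run", "stay"): 95,     # I pull out early; minor hassle, no loss
    ("run", "run"): 60,      # everyone runs; bank fails; partial recovery
    ("stay", "run"): 20,     # I stayed while others ran: worst outcome
}

for their_move in ("stay", "run"):
    best = max(("stay", "run"), key=lambda my_move: payoffs[(my_move, their_move)])
    print(f"If other depositors {their_move}, my best response is to {best}")
# The equilibrium flips on expectations alone – which is why confidence,
# not just solvency, decides whether a run happens.
```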
Part of what started the run was SVB’s decision to search for yield in an era of ultra-low interest rates. SVB ramped up investment in a portfolio of highly rated government-backed securities, a significant portion of them fixed-rate mortgage bonds carrying an average interest rate of just 1.64 percent. While slightly higher than the meager returns it could earn from short-term government debt, the investments locked the cash away for more than a decade and exposed it to losses if interest rates rose quickly.
When rates rose sharply last year, the portfolio’s value fell by $15bn, almost equal to SVB’s total capital. If SVB were forced to sell any of the bonds, it would risk becoming technically insolvent.
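If you want the mechanics, here’s a simple present-value sketch (round, illustrative numbers – not SVB’s actual book) showing how a low-coupon, long-dated bond gets marked down when market yields jump.

```python
# How rising yields crater a low-coupon bond's market value.
# Illustrative round numbers only – not SVB's actual holdings.
def bond_price(face, coupon_rate, market_yield, years):
    """Present value of annual coupons plus principal at the current market yield."""
    coupons = sum(face * coupon_rate / (1 + market_yield) ** t for t in range(1, years + 1))
    principal = face / (1 + market_yield) ** years
    return coupons + principal

face = 100.0
bought_near_par = bond_price(face, coupon_rate=0.0164, market_yield=0.0164, years=10)
marked_after_hikes = bond_price(face, coupon_rate=0.0164, market_yield=0.045, years=10)

print(f"Bought near par: {bought_near_par:.1f}")
print(f"Marked at ~4.5% yields: {marked_after_hikes:.1f} "
      f"({marked_after_hikes / bought_near_par - 1:.1%})")
# Spread a drop like that across tens of billions in long-dated bonds and you
# get unrealized losses on the scale of SVB's reported $15bn hit.
```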
Although SVB’s deposits had been dropping for four straight quarters as tech valuations crashed from their pandemic-era highs, they plunged faster than expected in February and March. As a result, SVB decided to liquidate almost all of the bank’s “available for sale” securities portfolio and reinvest the proceeds in shorter-term assets to earn higher interest rates and relieve the pressure on its profitability.
The sale meant taking a $1.8bn hit, as the value of the securities had fallen since SVB had purchased them due to surging interest rates.
To compensate for this, SVB arranged for a public offering of the bank’s shares, led by Goldman Sachs. It included a large investment from General Atlantic, which committed to buying $500mn of the stock. Although that deal was announced on Wednesday night, by Thursday morning, the deal was failing. SVB’s decision to sell the securities had surprised some investors and signaled to them that it had exhausted other avenues to raise cash. Some “smart” VC clients directed their portfolio companies to withdraw their deposits en masse to avoid losing it all.
What happened was the “perfect storm.” Many say it was predictable, especially after a decrease in regulation (which the bank’s management successfully lobbied for in 2015).
For now, SVB seems like an outlier, with its unusual (and specific) clientele. Still, there’s already nervousness for other small/regional banks … and there’s bubbling fear about the system as a whole.
Where Do We Go From Here?
My first question is, should the FDIC raise the insurance limit above $250K? While Giannis Antetokounmpo might have his money in 50 banks to keep it insured, that doesn’t seem like a reasonable expectation for small companies that need liquidity for payroll and other monthly expenses. While some might be happy to see a bank potentially penalized for perceived recklessness, you also have to consider the clientele of this bank – many of the innovators that are driving the future of technology (or at least, hoping to).
My second question is, where were the regulators? The issues that led to this disaster were pointed out publicly months before this happened. Are more regulations required to ensure trust in the American financial system? Or is this a free market where pain and pleasure point out the evolutionary path?
What happens when another bank fails the same way? Do we continue to find a way to bail them out?
Trust in the Fed – and the government as a whole – is low. It’s one of the reasons why people are so interested in cryptocurrency and the blockchain. As a result, we’re at a bit of a crossroads. Various governmental agencies want to assure you your money is safe, but there’s no belief that will always be the case.
SVB failed, in part, due to their own mistakes … but they also failed due to herd mentality and negative sentiment. Had people felt confident in this 40+ year-old bank, business might have continued as usual.
And, what does this mean for banks and regulation as a whole? Perception is often more important than reality in the case of markets, pricing, and a host of other supposedly logic- and data-based decisions. Clearly, markets are not rational … that’s why you shouldn’t try to predict them. Even scarier is the potential lack of trust in banks’ ability to meet the needs of their stakeholders. There are countless banks with more than 50% of their money in uninsured deposits … will companies want to bank with them if there aren’t safeguards protecting them?
A big crisis was averted this time … but this won’t be the last crisis for banks.
As news continues to shake out, I’ll give more of my thoughts, but for now, I want to watch more and see what changes.
For a bonus laugh, here’s Jim Cramer calling Silicon Valley Bank a buy a month ago.
For more context, several massive companies have large portions of their money with SVB, including:
Circle – $3.3 billion
Roku – $487 million
BlockFi – $227 million
Roblox – $150 million
In 2008, Washington Mutual was taken over by the FDIC, filed for bankruptcy, and then was bought by JP Morgan.
Some of the other significant failures of the Great Recession, like Lehman Brothers, aren't in the chart because they were financial services firms – not banks.