The three things investors should know this week:
The macroeconomy is no longer driving the equity market. After last week’s US CPI announcement, rate cut expectations are now back near October levels, and much closer to reality.
While bond markets adjusted, equity markets are still rallying, mostly on the back of technology.
Tech valuations imply extreme earnings growth, which may not materialise because:
The tech may take more time than anticipated to develop commercially
The tech may fail to live up to its hyped potential
Microchips as we know them today could become obsolete if a technology like quantum computing is achieved and commercialised
Last week, I penned an article saying “This equity rally has no (visible) legs”. Earnings are OK, but nothing to write home about. Rates will likely end the year higher than what markets were pricing in back in December, a theme that is now playing out.
Inflation numbers from the US confirmed that macroeconomics is not driving the market. With both consumer and producer prices coming in hotter than expected, US interest rate cut expectations fell even further.
The bond market is now pricing in roughly three and a half 25bp cuts by the end of the year, a far cry from the seven cuts priced six weeks ago and much closer to the Fed’s own projection of three.
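For the curious, this is roughly how such expectations are backed out of fed funds futures. A minimal sketch with illustrative rates, not live market data:

```python
# Minimal sketch: inferring the number of 25bp cuts priced in by the market
# from a year-end policy rate implied by fed funds futures.
# Both rates below are illustrative assumptions, not live data.

current_rate = 5.375          # midpoint of a hypothetical 5.25-5.50% target range
implied_year_end_rate = 4.50  # hypothetical rate implied by December futures

cuts_priced_in = (current_rate - implied_year_end_rate) / 0.25
print(f"25bp cuts priced in: {cuts_priced_in:.1f}")  # 3.5
```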
The equity market corrected a bit, but this didn’t last long. A 2% intraday drop was pared to -1.44% by the close, and bullishness resumed thereafter. The S&P 500 is roughly where it was before the CPI announcement.
Having said that, bond markets never really recaptured December’s bullishness and are now down more than 2% for the year, despite credit spreads remaining relatively tight.
There’s no way to read this other than as a market in a bullish mood. If anyone has any doubts, Bitcoin (a bellwether of optimism if ever there was one) is trading back above $50k.
Let’s focus on tech equities, however, as they are clearly the key drivers of the rally. Nvidia, the global chip maker at the centre of it, now boasts a market capitalisation higher than the whole of the S&P 500 Energy sector. The company trades at 93 times last year’s earnings, 58 times this year’s expected earnings and 34 times next year’s. For those multiples to hold, profits would have to rise by roughly 60-70% a year for the next couple of years, for a company that has been listed for over two decades.
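To make that arithmetic explicit, here is a back-of-the-envelope sketch using only the multiples quoted above (for a fixed share price, earnings must grow by the ratio of one multiple to the next):

```python
# Back-of-the-envelope: earnings growth implied by the gap between the
# price/earnings multiples quoted above. For a fixed price P, earnings
# E = P / PE, so implied growth between two periods is PE_now / PE_later - 1.

pe_trailing = 93    # price / last year's earnings
pe_this_year = 58   # price / this year's expected earnings
pe_next_year = 34   # price / next year's expected earnings

growth_this_year = pe_trailing / pe_this_year - 1    # ~60%
growth_next_year = pe_this_year / pe_next_year - 1   # ~71%

print(f"Implied profit growth this year: {growth_this_year:.0%}")
print(f"Implied profit growth next year: {growth_next_year:.0%}")
```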
It’s not just Nvidia: virtually every company in the global chip-making supply chain has made significant gains since October.
Much as I hate exorbitant valuations, there’s some reasoning behind the exuberance. As electric vehicles take off (an EV requires roughly double the number of microchips of a conventional car), countries hoard chips and companies compete for the next AI-based app, demand for microchips is soaring while supply remains limited. Taiwan, the world’s premier manufacturer of microchips, is under the microscope of both East and West, much as the oil-producing Arab states were in the twentieth century.
Still, it’s difficult to account for a 50% rally in six weeks.
Humanity talks about the potential and perils of AI as if it were something tangible and understood. It is neither. Bar large language models, which are in essence very clever chatbots that can now pass the Turing Test and communicate in plain language, we haven’t seen a tangible breakthrough, only the promise of one.
While a lot of organisations claim to be using AI, there’s little evidence that they have transformed, or that they are doing so in a way that gains them a meaningful competitive advantage. We have yet to see evidence that companies are embedding a culture of constant learning and transformation that would enable them to adapt to big changes.
At best, we stand on the cusp of another technological revolution, of which we still understand very little.
A 1994 article in Time Magazine introduced readers to the internet. Its advice was mostly about using the internet to send emails. Yet a few years later, by 1999, tech valuations had become exorbitant. “No company can exist unless it’s online,” tech gurus proclaimed. They proved right, but it took another decade and a half from that point.
So the first risk is simply that we don’t know much about what lies ahead. For tech to become ubiquitous enough to spur meaningful change may take more time than anticipated. Or the change may take a wildly different direction than we currently anticipate. We don’t understand much more about AI than we did about the internet in 1994. Many of the high valuations we see may simply reflect a wish to invest early rather than late. Investing in microchips makes some sense, in that they are the key building blocks. But at current valuations, the assumption is that demand will take off.
The second risk is that the tech may simply fail to live up to its potential. People would rather talk to people than to robots. Most importantly, they trust people more than robots. If a century’s worth of sci-fi, from H.G. Wells’s “The War of the Worlds” and Frank Herbert’s epic “Dune” to Asimov’s “I, Robot” and “Foundation” or Hollywood’s “Terminator”, won’t convince you of the innate human technophobia, reality might.
The wealth management industry learned this lesson with robo-advisers. A few years ago, the future of wealth management was all about financial technology, Fintech. Developed around 2008, robo-advisers were going to change the wealth management world by interacting with affluent individuals in a cost-effective way. High Net Worth and Ultra High Net Worth clients didn’t show much interest in the technology, as they could afford human advisers. So the burden fell on the mass-affluent market: clients with enough money to invest but not enough to afford a financial planner or a private banker. The theory was sound enough, but it never really caught on. Robo-advisers were sold to big banks between 2020 and 2022 for lower-than-expected valuations, and precious little has been heard of them since.
In 2018, I attended many a presentation on the imminent future of immersive technology: reality-altering glasses and VR headsets. Once again, the technology was all the rage. Google had made a start with Google Glass, a mix of ordinary glasses and augmented reality, in 2013. By 2015 it had stopped production, and by 2023 it had abandoned the project altogether. Mark Zuckerberg then tried to make the Metaverse a reality: Facebook even changed its name to Meta and built the first version of its metaverse. Yet the graphics were poor, and the immersive experience not nearly as cool as promised. And already there are reports of buyers returning their $3,499 Apple Vision Pro headsets.
The third key risk is that microchips themselves could become an obsolete technology. Exotic as this claim may sound, the future of computing, and I suspect of AI, may lie away from traditional computing. Modern computing is largely built on “Moore’s Law”: Gordon Moore, an engineer and co-founder of Intel, posited in 1965 (and revised in 1975) that the number of transistors in an integrated circuit would double every two years, and with it computing power.
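As a simple illustration of what that doubling implies when compounded over decades (a sketch; the starting count is that of Intel’s first microprocessor, the 1971 Intel 4004):

```python
# Moore's Law as stated above: transistor counts double roughly every two years.

def transistors(years_elapsed: float, initial_count: int = 2_300) -> float:
    """Projected transistor count after `years_elapsed` years, doubling
    every two years. 2,300 is the count of Intel's 4004 (1971)."""
    return initial_count * 2 ** (years_elapsed / 2)

# Fifty years of doubling compounds to a ~33-million-fold increase,
# from 2,300 transistors to roughly 77 billion.
print(f"{transistors(50):,.0f}")
```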
While the doubling is still broadly the case, for various technical reasons clock speeds, and with them single-chip computing power, are levelling out even as transistors keep being added. We can keep stringing processors together to increase overall power, of course, but it is becoming clear that the technology is approaching its limits: chips are now being designed at an atomic scale. Moore himself warned of these limitations in the early 2000s.
While many interesting approaches to engineering ever more efficient microchips are in development, it may be that quantum computing, an altogether different form of computing, upends conventional chips. A quantum computer can, in theory, perform certain calculations much faster than any conventional computer; comparing the two would be like comparing Turing’s machine to your smartphone. Efforts to build a commercially useful one have not yet succeeded (that we know of), and we are probably at least a good decade away from commercial use, but quantum computers could eventually become a reality. Such a technology would be the natural home of high-performing artificial intelligence, which could, in theory, exist without Nvidia and the other chip makers doubling their long-term profits each year.
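The scale of the theoretical speedup is easiest to see with a textbook case: searching an unstructured list of N items takes on the order of N lookups classically, while Grover’s quantum algorithm needs on the order of √N. A stylised comparison, not a claim about any existing machine:

```python
# Stylised query-count comparison for unstructured search over N items:
# ~N lookups classically versus ~sqrt(N) with Grover's quantum algorithm.

import math

for n in (10**6, 10**12):
    print(f"N = {n:>15,}: classical ~{n:,} queries, quantum ~{math.isqrt(n):,}")
```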
While there is a case for this wild tech rally, the buzz around the potential of AI far outstrips the visible evidence. Yet it persists, unabated. After winning a boardroom war, Sam Altman, the CEO of OpenAI (the company behind ChatGPT), is now asking for a $7tn investment, more than the annual US federal budget and more than twice the UK’s annual GDP, to consolidate the tech industry under himself. The plan is grand. But asking for roughly 7% of global GDP to move things forward may seem a bit excessive to some rational investors.
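To put that number in context, a rough scale check using approximate 2023 figures (ballpark assumptions, not precise data):

```python
# Ballpark scale of a $7tn ask, in trillions of US dollars.
# All reference figures are approximate 2023 values.

ask = 7.0
us_federal_budget = 6.1  # approx. FY2023 US federal outlays
uk_gdp = 3.1             # approx. 2023 UK GDP
world_gdp = 105.0        # approx. 2023 global GDP

print(f"vs US federal budget: {ask / us_federal_budget:.1f}x")  # ~1.1x
print(f"vs UK GDP:            {ask / uk_gdp:.1f}x")             # ~2.3x
print(f"vs world GDP:         {ask / world_gdp:.0%}")           # ~7%
```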
As long-term portfolio managers, we can’t ignore bubbles and bonanzas. History is littered with them, and early investors have made fortunes. But history also demonstrates clearly that over-excitement is not a good guide for long-term investment. The basis for long-term success rests on fundamentals, sensible investing and diversification.