The recent turmoil across global financial markets, marked by abrupt trading reversals and deepening doubt, poses a critical question for investors: are we witnessing the unwinding of the exuberant artificial intelligence (AI) trade, or just another episode in the cycle of market psychology? The answer is as elusive as it is consequential. Navigating a sea of doubt is uncomfortable, with each headline, earnings report, or central bank decision swinging sentiment wildly. Viewed through this lens, the recent weakness is a complex, collective reckoning with risk – one where perceptions of AI’s transformative power clash with fears about its sustainability and the broader vulnerabilities of the financial system.
A Sea of Doubt: The Investor’s Plight
Recent weeks have forced a reckoning with the nature of risk in today’s interconnected, technology-driven market. The abrupt reversals seen in major equity indices expose the nervous underbelly and provoke fears of being the last to exit.
The AI narrative that drove breathtaking valuations is now shadowed by doubt. Whispers of questionable accounting practices have deepened suspicions, among them that the exponential promises of AI may, in practice, be built atop far more fragile foundations than the sector’s evangelists will admit.
But technological doubt is only one strand in a complex web of unease. Weaknesses in the opaque world of private credit are beginning to surface, with illiquidity coming under scrutiny. In public debt markets, credit default swaps on leading corporations reveal an environment increasingly attuned to risk rather than opportunity. Meanwhile, robust employment numbers have all but quashed hopes of imminent monetary easing and the calming shot in the arm that a liquidity injection would provide.
The narrative driving markets is not new, but its stickiness has changed at a time when collective psychological fragility is rising.
Decoding Microsoft’s Vision: The Satya Nadella Interview
A short while ago, Nadella announced that he was stepping away from some of his duties as CEO of Microsoft and donning his technologist hat. It is easy to underestimate the importance of that announcement, but under the hood it suggested that the company’s proprietary AI strategy had been slow and lacked focus. Sergey Brin made a similar move at Alphabet when it became apparent that ChatGPT was stealing a march on Google’s search business and its execution was all at sea.
Nadella said the move allowed him “to be laser focused on our highest ambition technical work – across our data centre buildout, systems architecture, AI Science and product innovation – to lead with intensity and pace in this generational platform shift.”
Since Microsoft’s somewhat acrimonious divorce from OpenAI settled, Nadella’s insights provide rare clarity. His approach to Microsoft’s AI infrastructure build-out is both nuanced and pragmatic, an antidote to the simplistic “bigger is better” logic that so often dominates. Speaking recently, Nadella articulated an infrastructure strategy hinging on three imperatives: fungibility, hyperscale versus bare-metal, and the requirement for geopolitical flexibility.
Fungibility – the ability of any infrastructure to support broad and unpredictable shifts in model architecture – stands at the core of this vision. Nadella argues that over-optimisation for any single AI model or chip risks rendering massive investments obsolete overnight, especially as innovation at firms like Nvidia accelerates. Silicon updates such as the GB200, GB300, and the forthcoming Vera Rubin and Rubin Ultra promise better performance and lower power consumption, so the build-out must be paced to retain flexibility amid the furious rush to scale.
This pragmatism extends to Microsoft’s market positioning: Azure’s future does not lie in hosting a handful of vast, bare-metal contracts for AI model training. Instead, Microsoft aims to capture the long tail of enterprise workloads, building resilience through diversification. Indeed, Nadella appears delighted that Oracle has gone all in with OpenAI, and he is intent on insulating Microsoft from the potential collapse of any single high-profile AI project. This diversification is also consistent with compliance with the geopolitics of data, where customers demand that assets and data remain sovereign in-region or in-country. The answer, as Nadella suggests, is a globally distributed, regulatory-compliant footprint – a non-negotiable for the multinationals and government clients that will define the cloud’s future. It also means that raw computational power is not the sole yardstick: the data centre itself, the power per rack, the power per row, and the cooling requirements are going to look very different. Pacing, fungibility, and location all matter.
Moreover, in a future where workloads may become increasingly autonomous, how does a company built on empowering users adjust to a reality where the users are not people but code? All of this has raised new questions – not about AI itself, but about the pace of the infrastructure build and its sustainability. The current rough rule of thumb is that the build-out needs to generate around US$2 trillion of revenue by 2030, and that is not a slam dunk. Indeed, the ‘hey, this is cool’ narrative has firmly retreated for now.
Does Satya Nadella have an agenda? Certainly, but his comments have added fuel to the negative narrative: “What do you expect an independent lab trying to raise money to do? They have to put some numbers out there such that they can actually go raise money so that they can pay their bills for compute!” That is not quite an emperor-has-no-clothes statement, but it is definitely uncomfortable.
AI Supply Chain Correction: The Anatomy of a Sell-Off
The recent market spasm, which saw leading AI supply chain names battered, has sparked debate over whether this is a genuine turning point or merely the market’s attempt to pre-empt the dreaded “glut after shortage”. At the centre of the drama was a delay at a third-party data centre being built for CoreWeave, caused by a shortage of parts. That narrative, in and of itself, should not surprise anyone, but it opened the door to the idea that companies are not in charge of their own destiny and, more worryingly, to the prospect of double or triple ordering.
As Nassim Nicholas Taleb famously put it, “I have seen plenty of gluts not followed by a shortage, but never a shortage not followed by a glut!” The market, ever focused on the second derivative, is already trading not today’s imbalance but the reversal it suspects will one day arrive.
The COVID-era “double-ordering” dynamic remains fresh in the memory, and the divergence in performance between the suppliers and their customers is marked. After all, the hyperscaler – with its robust balance sheet and immense operational agility – remains the safest haven in an uncertain build-cycle: these giants can rapidly reallocate capex, manage asset load, and weather the inevitable cycle correction.
Nvidia: The White Knight?
In this climate of high anxiety, Nvidia’s latest earnings report was positioned as more than a bellwether; it was, for many, the hoped-for knight in shining armour. With sky-high valuations attached to the AI narrative, the challenge was to reassure with both numbers and vision. Nvidia delivered on both counts.
The earnings summary delivered good news, yet the real achievement lay elsewhere. In an atypically measured tone from a CEO who is known for his expressive and energetic presentation style, Jensen Huang came straight out of the gate to say, “There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different.” Huang’s vision was panoramic; he reframed the AI narrative from the perspective of single trees to one of an expansive forest. In other words, Nvidia contends that global compute infrastructure is fundamentally being rebuilt for a post-Moore’s Law era and is making the case that the real story is already several chapters ahead of where most attention is focused.
Concerns over saturation or depreciation were dispelled by highlighting Nvidia’s software moat CUDA, which extends its GPU stacks’ relevance and turns what would otherwise be an obsolescence timeline into a narrative of perpetual renewal. Huang’s claim that A100 GPUs shipped six years ago still run at full utilisation, thanks to continual software updates, underscores that success in this market is not simply about hardware but about controlling an entire ecosystem.
Networking, too, continues to eclipse expectations, with over 100% year-on-year growth across product lines. Importantly, the sovereign customer base is materialising: Saudi Arabia’s commitment to procure up to 600,000 more GPUs shows that, even as US export controls pinch, demand remains global and robust.
Does this calm the market? Did we learn anything new? No and no – and one earnings report, no matter how good, is unlikely to dissolve entrenched scepticism about an infrastructure bubble.
Nvidia is not going to be the place where weakness shows first. Its order book, and those of every other supplier, will be full before there is any clear sign; the supply chain is always the last to know. Softness will first creep into actual compute demand – from the startups and labs that either fail to raise funding or make no progress, which means no funding – and only then work backwards into the ‘build chain’ as overall compute demand slows.
Conclusion: Rethinking Risk and Opportunity
In past musings we have argued that OpenAI is too big to fail, that circular funding exists, that cashflow is being eaten by capital investment, and that there is no ROI yet. The backlash has been harsh, and we are by no means out of the woods. Good news is shrugged off, and bad news is seized upon. A famous investor who made his name with a call on sub-prime debt has thrown in the towel, and the questioning has moved to the Jerry Maguire ‘show me the money’ moment.
If the recent sell-off has demonstrated anything, it is that narratives are the true currency of markets, and in this fourth industrial revolution we may be over- or underestimating the opportunity. For now, the selling appears orderly, and despite a spike in both the Volatility Index and the AAII Bear Index, both remain below the levels seen in April this year, when tariffs occupied the headlines.
The persistent demand for foundational technology is cause for hope, but punctuating this optimism is the voice of Satya Nadella reminding us that numbers are often aspirational and crafted for fundraising as much as for substance. The challenge today is not simply to divine which company or segment will triumph, but to grasp the shifting terrain of risk itself.
Investors are in various stages of heading for the exit in a risk-off environment. Whether this is the beginning of a stampede is not yet known, but if the de-risking becomes something more, and something disorderly, the windows and doors will not be wide enough.
Tim Chesterfield is CIO of the Perpetual Guardian Group and the founding CIO and Director of its investment management business, PG Investments. With $2.8 billion in funds under management and $8 billion in total assets under management, Perpetual Guardian Group is a leading financial services provider to New Zealanders.
Disclaimer
Information provided in this publication is not personalised and does not take into account the particular financial situation, needs or goals of any person. Professional investment advice should be taken before making an investment. The information provided in this article is not a recommendation to buy, sell, or hold any of the companies mentioned. PG Investments is not responsible for, and expressly disclaims all liability for, damages of any kind arising out of use, reference to, or reliance on any information contained within this article, and no guarantee is given that the information provided in this article is correct, complete, and up to date.