
When NVIDIA Corp. reports its quarterly results on Wednesday, DeepSeek may play a role as large as data center expansion and capital spending by Big Tech.
Investors will be listening closely to NVIDIA CEO Jensen Huang's commentary about broad demand for artificial intelligence (AI) in the wake of the emergence of the Chinese AI startup's low-cost chatbot, as well as to his thoughts on capacity constraints that may have slowed sales of higher-end Blackwell server racks.
The impact of DeepSeek may be especially revelatory: After its chatbot debuted in late January, NVIDIA's market value cratered by $593 billion, the largest single-day loss of market value in stock market history.
Wall Street jitters were temporarily assuaged Monday following investment bank Wedbush Securities' prediction that the chip maker would post "clear beat-and-raise" results despite lingering fears over DeepSeek.
“Enterprise driven demand is accelerating as more companies and governments…head down the AI yellow brick road,” Wedbush analysts wrote, adding demand for the company’s Blackwell chip “is far outstripping supply.”
Data center sales, which include NVIDIA's AI chips and processors such as the new Blackwell line and the legacy Hopper offerings, are expected to grow 82% year over year to $33.6 billion, according to analysts.
“All eyes will be on Blackwell commentary, where we see management reaffirming Blackwell shipments beginning in [Q4] and demand continuing to exceed supply through [fiscal 2026],” Rosenblatt analyst Hans Mosesmann said in a note. “We see shipments of Blackwell accelerating as we progress through the year, with a stronger [second half].”
Daniel Newman, CEO of The Futurum Group, added in a post on LinkedIn: “NVIDIA securing 70% of TSMC CoWoS-L for 2025 is hard not to read as an extremely bullish sign for AI infrastructure demand. Furthermore, the verbatim $300 Billion+ CAPEX commitments from Google, Amazon, Meta, Microsoft, and Tesla and then the Stargate plus multiple $100 Billion commits from nations beyond the U.S. also indicate stability in the buildout of AI.”
Demand remains strong for NVIDIA's AI chips as big customers such as Meta Platforms Inc. and Microsoft Corp. plan to press ahead with heavy AI spending. Amazon.com Inc., Microsoft, Alphabet Inc.'s Google, and Meta intend to spend $320 billion on AI-related projects this year, up from $246 billion in 2024.
“Despite the emergence of large language models which are cheaper to run, other signs, including huge infrastructure investment plans from tech giants like Meta, indicate that Nvidia’s high-end chips will remain in demand,” Susannah Streeter, head of money and markets at Hargreaves Lansdown, said in a note.
There are some troubling signs, however. Microsoft canceled some leases for U.S. data center capacity totaling “a couple hundred megawatts,” or the equivalent of about two data centers, TD Cowen wrote Friday. The investment bank said the reduction reflects concern over whether the software giant is building more AI computing capacity than it will need over the long term. [Microsoft promptly disputed the note.]
In a second report Monday, TD Cowen speculated that Microsoft-backed OpenAI is moving workloads from Microsoft to Oracle Corp. as part of the new $500 billion Stargate project to build out AI infrastructure in the U.S. over the next four years. Microsoft, which is also a large owner and operator of data centers, may be reallocating some of those operations to the U.S. from abroad, the firm said.