
China is moving to sharply expand its supply of artificial intelligence processors, setting up 2026 as a key year in its effort to reduce reliance on NVIDIA and negate the impact of U.S. export controls.
According to multiple reports, three fabrication plants aligned with Huawei’s AI portfolio are scheduled to come online, one as early as late 2025 and two more in 2026. Semiconductor Manufacturing International Corporation (SMIC), China’s largest and most advanced contract chip maker, is also contributing to the effort: it plans to double capacity on its most advanced 7nm lines next year.
“China’s push for self-sufficiency, led by Huawei and SMIC, has been an ongoing process,” Ray Wang, Research Director, Semiconductors, Supply Chain and Emerging Tech at Futurum, told Techstrong.ai. “One key issue that impedes China on this front is the ability to produce large amounts of advanced AI chips to fit domestic demand.”
China’s effort to produce more AI chips coincides with a focus on compute standards meant to wring more performance from homegrown chips. DeepSeek has been advocating a next-generation data format (FP8-class) that sacrifices some precision for higher hardware efficiency. Chips produced by Cambricon, a Chinese firm known for specialized AI chips, appear to be geared for that approach, and several smaller chip designers are said to be working to this standard. The near-term result could be a more cohesive software-hardware stack inside China’s AI ecosystem, though performance isn’t expected to be at NVIDIA’s level.
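The precision-for-efficiency tradeoff behind FP8-class formats can be illustrated with a toy rounder for the E4M3 layout (1 sign bit, 4 exponent bits, 3 mantissa bits), one common 8-bit floating-point variant. This is a minimal sketch of why the format is lossy, not DeepSeek's or any chip vendor's actual implementation; subnormals and NaN handling are omitted.

```python
import math

E4M3_MAX = 448.0  # largest finite value representable in E4M3

def quantize_e4m3(x: float) -> float:
    """Round a float to the nearest E4M3-representable value (toy model)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    if mag > E4M3_MAX:            # saturate instead of overflowing
        return sign * E4M3_MAX
    m, e = math.frexp(mag)        # mag == m * 2**e, with m in [0.5, 1)
    # Keep 1 implicit + 3 stored significant bits: round m to a
    # multiple of 1/16. Fewer mantissa bits means cheaper hardware
    # multipliers and half the memory traffic of FP16, at the cost
    # of rounding error like the one below.
    q = round(m * 16) / 16.0
    return sign * math.ldexp(q, e)

print(quantize_e4m3(0.1))    # 0.1015625 -- roughly 1.6% rounding error
print(quantize_e4m3(3.3))    # 3.25
print(quantize_e4m3(1000.0)) # 448.0 (saturated at the format's maximum)
```

The coarse mantissa is the whole point: each stored value fits in one byte, doubling effective memory bandwidth and compute throughput versus FP16, which is why a shared FP8-style standard helps "good-enough" domestic accelerators close part of the gap.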
SMIC’s planned 7nm expansion is supporting this push. Huawei is the foundry’s largest customer for those chip lines, but SMIC’s scale-up would free capacity for smaller players like Cambricon, Biren, and MetaX. That matters because U.S. rules limit shipments of NVIDIA’s highest-end parts into China, creating domestic demand for “good-enough” accelerators. A larger pool of chip suppliers could intensify price and performance competition in China’s fast-growing AI market.
Similarly, CXMT, a Chinese DRAM maker, is evaluating HBM3-class high-bandwidth memory with a 2026 production target: a generation behind best in class, but close enough to support AI training and inference workloads.
Clearly the Chinese government is driving these efforts. Beijing has called for broader AI adoption and tighter coordination between research, engineering, and commercialization. Investment is flowing: Cambricon received approval to raise roughly $600 million this year, while several smaller AI chip firms like Biren and MetaX are advancing toward public listings after massive pre-IPO rounds.
The challenge for China is that all of this takes time. Aligning the various vendors for compute, memory, networking, and software, then scaling up this concerted effort, could take years. Meanwhile, NVIDIA and other U.S.-based chip vendors continue to advance at a rapid pace.
Whether China will actually catch up remains an open question. DeepSeek has demonstrated early viability on domestic accelerators in small batches, but large-scale training still requires NVIDIA clusters. Reports note that Chinese vendors are targeting parity with NVIDIA’s China-market H20, even as regulators scrutinize foreign chips over security concerns. China’s next step is proving that domestic silicon, paired with the FP8-style standard, can deliver competitive throughput on AI model training and high-volume inference.
“Given the increasing complexity and limitation of NVIDIA’s ability to sell downgraded chips to China, Huawei and SMIC, as well as other domestic AI chips makers, could benefit even though their offerings are still inferior to NVIDIA in terms of hardware specs, software, and networking technologies,” Wang said. “If China can execute the reported capacity next year, it does pose even more threat to NVIDIA if their chips are restricted to go to China, either by Beijing or Washington, as domestic firms can take advantage of such strategic timing to capture more domestic market.”