If the Huawei training story is real, the policy debate is over
The headline claim from The Decoder's April reporting is that DeepSeek v4, the next big model from the Hangzhou shop that has spent two years embarrassing better-funded US labs, was trained end-to-end on Huawei Ascend chips. No H100s. No H200s. No quiet pipeline through a Singapore reseller. The full training run and the full inference deployment, all on Chinese silicon. I want to be careful here, because we have not seen an independent benchmark and we have not seen the training logs. But the supply-chain signals around the model release line up, the Chinese-language posts from people inside the project line up, and the absence of an angry Nvidia rebuttal lines up.
If you accept the premise, the entire policy frame from 2023 and 2024 is dead. Export controls on advanced GPUs were sold as a way to slow Chinese frontier AI by years. The argument inside Washington was that without H100s and H200s, you could not get a frontier model out the door. DeepSeek v4 says otherwise. It says you can get there on 910C-class hardware if you are willing to do harder distributed systems work, and that is exactly the work DeepSeek has been doing in public since v2.
I am not arguing the export controls were stupid. I am arguing they are now retroactively a 24-month delay, not a structural barrier. Those are very different policies.
DeepSeek v4 is reportedly trained and served entirely on Huawei Ascend silicon, no Nvidia in the stack

The chip itself, and why this is not a one-off
The Ascend line has been quietly maturing for the last three years. The 910B was the first chip people in the West took seriously, mostly because Huawei was willing to ship it in volume after Nvidia got squeezed. The 910C, which is the part most people think DeepSeek is on, closes a meaningful gap on FP16 throughput and adds the kind of fabric improvements that matter when you are training across thousands of accelerators. There is a newer part rumored, a 910D or whatever they end up calling it, but the public DeepSeek v4 work appears to be done on the C generation.
The thing the chip discourse keeps missing is the software story. CANN, Huawei's CUDA equivalent, was a punch line as recently as 2024. It is not a punch line in 2026. The DeepSeek team has been an extremely loud customer, and they have effectively been doing free QA on the entire Huawei AI stack for two years. That investment shows up in v4 the same way Google's TPU work showed up in PaLM. You build the model on the silicon you actually have, and over time the silicon and the software co-evolve until the gap to Nvidia stops mattering.
Efficiency is the open question. I have seen credible numbers suggesting a frontier Nvidia setup delivers roughly 1.5x to 2x more training compute per watt than DeepSeek v4's Ascend cluster, while the two come out roughly comparable on a per-dollar basis once you factor in Chinese pricing on Ascend. That gap is not zero. It is also not the kind of gap that stops a country from training models.
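To see why a 1.5x to 2x per-watt gap can still wash out on a per-dollar basis, here is a back-of-envelope sketch. Every figure in it, chip prices, per-chip power, cluster size, run length, electricity rate, is a made-up illustration; only the 1.5x to 2x efficiency gap comes from the reporting above.

```python
# Rough capex-plus-energy comparison of two training clusters.
# All inputs are hypothetical placeholders, not reported numbers.

def training_cost(chip_price_usd, power_kw, n_chips, run_days,
                  electricity_usd_per_kwh=0.08):
    """Capex plus energy cost for one training run, in USD."""
    capex = chip_price_usd * n_chips
    energy_kwh = power_kw * n_chips * 24 * run_days
    return capex + energy_kwh * electricity_usd_per_kwh

# Assume the domestic part costs half as much per chip but needs
# 1.75x the chips (midpoint of the 1.5x-2x per-watt gap) to finish
# the same run in the same wall-clock time.
nvidia = training_cost(chip_price_usd=30_000, power_kw=0.7,
                       n_chips=10_000, run_days=60)
ascend = training_cost(chip_price_usd=15_000, power_kw=0.7,
                       n_chips=17_500, run_days=60)

print(f"Nvidia-class run: ${nvidia / 1e6:.0f}M")
print(f"Ascend-class run: ${ascend / 1e6:.0f}M")
```

Under these toy numbers the Ascend-style cluster burns 1.75x the electricity but still comes in cheaper overall, because capex dominates energy at these scales and the chips are discounted. The direction of that result, not the specific totals, is the point.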


What this does to Nvidia's China story
Nvidia's China revenue has been slowly draining for two years. The H20, the export-compliant part Nvidia built specifically for the Chinese market, has been the floor of the business. The pitch to Chinese customers was always: yes, the policy is annoying, but the software is still better here, the ecosystem is still stickier, and your engineers already know CUDA. That pitch worked because there was no real top-to-bottom alternative.
DeepSeek v4 is the alternative. Once a frontier-quality Chinese model is publicly known to run on a fully domestic stack, every Chinese hyperscaler has cover to make the same move on inference. They were going to do it anyway, since the policy environment basically required it; the timeline just compressed. I would expect Nvidia's China data center revenue to step down meaningfully in the second half of 2026, and I would expect Nvidia to stop pretending the H20 is a strategic product.
The interesting wrinkle, and the one I think is going to drive the actual market story for the rest of the year, is what happens with US labs. There is no rule that says an American AI company cannot run inference workloads on Huawei silicon if they do it inside a Chinese subsidiary. There are several reasons they might want to. I would not be shocked to learn, six months from now, that one or two name-brand US labs have quietly been benchmarking Ascend for inference cost reasons. The economics are starting to demand it.
The benchmark question, and why I am still 80 percent confident
I want to flag the unknowns honestly. We do not have an independent reproduction of DeepSeek v4's training run. We do not have a third-party benchmark on Huawei-only inference. We have a model, we have credible reporting, and we have a long pattern of DeepSeek being unusually transparent about its infrastructure. I am at roughly 80 percent confidence that the Huawei-only training claim is substantially true, and roughly 60 percent that it generalizes cleanly to whatever DeepSeek ships next.
The thing that would update me down is a leak showing some critical training step happened on smuggled or grey-market H100s. That is the standard Western analyst counter, and it is not a crazy counter. The thing that would update me up is a clean reproduction by a second Chinese lab, which I expect within the year because that is how these things go.
Either way, the strategic conclusion does not really move. Even if DeepSeek v4 used some Nvidia somewhere, the trend line is clear. China is going to be training frontier models on domestic silicon by 2027 with high probability, and that timeline used to be 2030.
What I'd actually watch from here
Three things to watch over the next three months. First, whether DeepSeek v4 gets independently benchmarked at frontier quality on standard evals like MMLU-Pro and GPQA Diamond. If it lands within five points of GPT-5 or Claude 4 Opus on those, the conversation is functionally over. Second, whether Alibaba's Qwen team or Moonshot AI announce their own Ascend-only training runs. That would confirm the playbook is reproducible and not just a DeepSeek-specific trick.
Third, watch Nvidia's earnings language. If Jensen starts talking about China as a smaller share of the data center business and hedges on H20 demand, that is the financial confirmation that the Huawei story has teeth. If he doubles down on the H20 pipeline, he is either bluffing or seeing real demand we are not.
Verdict: this is a watch story for most readers, and a buy story if you are the kind of investor who plays semis. I would not be selling Nvidia; frankly, the AI capex tailwind in the US is so strong it can absorb the China loss for years. But I would be buying any name with credible Huawei or Chinese-fab exposure on weakness. The geography of AI compute is reorganizing in real time.
Related coverage
More from saavage on the chip and lab side: the GDDRHammer attack and what it says about Nvidia memory exposure (/tech/geforge-gddrhammer-attacks-exploit-nvidia-gpu-memory), the OpenAI chip ambitions piece on what happens when labs go vertical (/ai/openais-chip-ambitions-signal-the-death-of-the-app-economy), and my read on AMD's MFG play and whether Radeon survives the AI upscaling race (/ai/amds-mfg-play-can-radeon-survive-the-ai-upscaling-race).
If you want the geopolitical frame, the China-blocks-Meta-buyout piece (/ai/china-blocks-metas-2b-ai-buyout-warning-signal) is the obvious companion read.

