DeepSeek Unveils V4 LLM Optimized for Huawei Ascend Chips, Undercutting U.S. Rivals on Price

On 24 April 2026, DeepSeek released preview versions of its open-source V4 model that run natively on Huawei Ascend NPUs, matching top U.S. systems in benchmark tests while charging roughly one-sixth their inference prices.

By Priya Castellano

Focusing Facts

  1. V4-Pro packs 1.6 trillion parameters, handles 1 million-token context windows, and is priced at $1.74 per million input tokens and $3.48 per million output tokens—versus $5 and $30 for OpenAI’s GPT-5.5.
  2. Huawei’s Ascend A2/A3/950 processors were used in portions of V4’s training and are fully validated for ‘day-zero’ inference, making V4 the first leading Chinese LLM launched without primary reliance on Nvidia GPUs.
  3. Huahong Semiconductor and SMIC shares rose 15% and 10%, respectively, after the launch on expectations of wider domestic chip adoption.
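The "roughly one-sixth" price claim depends on the mix of input and output tokens, since DeepSeek's discount is much steeper on output ($3.48 vs $30) than on input ($1.74 vs $5). A quick sketch of the blended cost using the prices above; the 3:1 input-to-output token mix is an illustrative assumption, not a figure from the article:

```python
def blended_price(input_price: float, output_price: float, input_share: float = 0.75) -> float:
    """Average cost per million tokens for a given input/output token mix."""
    return input_price * input_share + output_price * (1 - input_share)

# Prices per million tokens, as reported above.
deepseek = blended_price(1.74, 3.48)   # V4-Pro
gpt = blended_price(5.00, 30.00)       # GPT-5.5
ratio = gpt / deepseek

print(f"V4-Pro blended: ${deepseek:.3f}/M tokens")   # $2.175/M tokens
print(f"GPT-5.5 blended: ${gpt:.2f}/M tokens")       # $11.25/M tokens
print(f"Price advantage: ~{ratio:.1f}x")             # ~5.2x at this mix
```

At this assumed 75/25 mix the gap is about 5x; workloads that generate proportionally more output tokens push the ratio higher, toward the headline one-sixth figure.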
