China’s DeepSeek Releases New AI Model. It’s Surpassing U.S. Models

Written by Mike Kaput | Dec 5, 2025 1:30:00 PM

Like a heavyweight fight, the large AI research labs continue to duke it out over improvements to their models. The latest jab comes from DeepSeek, the Chinese research lab that has repeatedly shaken up the industry with high-performance, low-cost models: the launch of DeepSeek-V3.2.

The new release introduces a novel architecture designed to radically improve efficiency while maintaining top-tier reasoning capabilities, according to the company’s technical report. On certain benchmarks, it surpasses GPT-5.

To understand if DeepSeek is poised to disrupt the market yet again, I discussed the new release with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 183 of The Artificial Intelligence Show.

Competition from China

DeepSeek-V3.2 introduces a new mechanism called "DeepSeek Sparse Attention" (DSA).

This allows the model to process long stretches of context at significantly lower computational cost than traditional full-attention models. The result is a system that balances high efficiency with deep reasoning capabilities, particularly in "long-context" scenarios where other models might struggle or become prohibitively expensive.
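To get an intuitive feel for why sparse attention saves compute, here is a minimal, illustrative sketch of the general idea: instead of weighting every token in the context, the model scores only a small subset of positions. This is not DeepSeek's actual DSA implementation (those details are in the company's technical report); the simple top-k selection and function names below are assumptions made purely for illustration.

```python
import numpy as np

def dense_attention(q, K, V):
    """Standard attention: score the query against every key in the context."""
    scores = K @ q                       # one score per position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                   # weighted sum over ALL values

def topk_sparse_attention(q, K, V, k=64):
    """Illustrative sparse attention: keep only the k best-scoring positions."""
    scores = K @ q
    top_idx = np.argpartition(scores, -k)[-k:]   # indices of the k largest scores
    top_scores = scores[top_idx]
    weights = np.exp(top_scores - top_scores.max())
    weights /= weights.sum()
    return weights @ V[top_idx]          # weighted sum over only k values

# Toy usage: a 10,000-token context with 128-dimensional vectors
rng = np.random.default_rng(0)
n, d = 10_000, 128
q = rng.standard_normal(d)
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
print(dense_attention(q, K, V).shape)        # attends to all 10,000 positions
print(topk_sparse_attention(q, K, V).shape)  # attends to only 64 positions
```

In this toy version, the sparse path touches only 64 of the 10,000 positions when mixing values, which is where the cost savings come from as contexts get long. Production systems like DeepSeek's use learned selection mechanisms and optimized kernels rather than a naive top-k, but the underlying efficiency argument is the same.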

For Roetzer, the release confirms that the battle for AI dominance is global.

“Obviously DeepSeek is a major player in this and can be a disruptive force to what the models in the U.S. AI labs are doing,” Roetzer says. 

Gold Medal Performance

The release includes two distinct variants: the standard DeepSeek-V3.2 and a high-compute version called DeepSeek-V3.2-Speciale.

The capabilities outlined in the technical paper are eye-opening:

  • Agentic Thinking: The model integrates a "thinking process" directly into tool use, allowing it to reason through complex tasks that involve using external software or code.
  • GPT-5 Level Reasoning: The standard V3.2 model performs comparably to "GPT-5-High" on several reasoning benchmarks.
  • Gold Medal Performance: The V3.2-Speciale surpasses GPT-5 and Google’s Gemini-3.0-Pro on multiple benchmarks and achieved gold-medal performance in both the 2025 International Mathematical Olympiad and the International Olympiad in Informatics.

"One-Upping" Meta?

DeepSeek’s aggressive open-source strategy is also casting a shadow over other major players, most notably Meta.

Meta CEO Mark Zuckerberg has spent billions positioning the company as the champion of open-source AI, aiming to commoditize the model market. But DeepSeek continues to release models that rival or beat the best Western open-source options, often with greater efficiency.

“That's probably the biggest threat: DeepSeek is one-upping Zuckerberg at what he intended to do, which was commoditize the model market with open source models,” says Roetzer. “And they've sort of beat them to it multiple times now.”

The Bottom Line

The gap between open models and proprietary frontier models may be closing faster than expected.

“Definitely noteworthy,” says Roetzer. “Usually when DeepSeek does something, it has an immediate trickle-down effect.”