
DeepSeek's new V3.2-Exp model is designed to handle longer text sequences and is more efficient to train than previous versions. The company calls the release an “intermediate step” toward its next-generation AI architecture, hinting that a major update is on the way.
Along with this launch, the company has cut API prices by more than 50%.
DeepSeek, based in Hangzhou, says that next-generation architecture will be its most significant since the popular V3 and R1 models, which shook Silicon Valley and the global tech scene.
The experimental model uses a technique called DeepSeek Sparse Attention, which lowers computing costs while boosting performance on specific tasks.
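The announcement does not detail how DeepSeek Sparse Attention works, but the general idea behind sparse attention is that each token attends to only a small subset of the sequence rather than to every earlier token, cutting the quadratic cost of full attention. The Python sketch below illustrates that idea with a simple local-window variant; it is not DeepSeek's implementation, and the function name and window size are assumptions for illustration only.

```python
# Illustrative sketch of local-window sparse attention in NumPy.
# NOT DeepSeek's DSA implementation; names and the window size are assumptions.
import numpy as np

def local_sparse_attention(q, k, v, window=128):
    """Each query position attends only to the previous `window` positions
    (including itself) instead of the full sequence, reducing compute from
    roughly O(n^2) to O(n * window)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        start = max(0, i - window + 1)
        scores = q[i] @ k[start:i + 1].T / np.sqrt(d)  # scaled dot-product scores
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                       # softmax over the local window
        out[i] = weights @ v[start:i + 1]
    return out

# Example: a 1,024-token sequence with 64-dimensional heads
rng = np.random.default_rng(0)
q = rng.standard_normal((1024, 64))
k = rng.standard_normal((1024, 64))
v = rng.standard_normal((1024, 64))
y = local_sparse_attention(q, k, v, window=128)
print(y.shape)  # (1024, 64)
```

In this toy setup, each of the 1,024 positions scores at most 128 keys instead of up to 1,024, which is where the computing savings for long sequences come from.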
While this update may not cause an immediate market shake-up like DeepSeek’s earlier breakthroughs, analysts believe it will pressure rivals like Alibaba’s Qwen and OpenAI, especially if the company maintains high performance at lower costs.
DeepSeek’s strategy focuses on delivering powerful AI capabilities without the high costs usually linked to model training, keeping it a key player in the global AI race.
DeepSeek’s launch of the V3.2-Exp model marks a significant step toward cheaper AI development, accompanied by API price cuts of more than 50%. The model’s improved efficiency and ability to handle longer text make it competitive with major players like OpenAI and Alibaba. Even if it does not cause immediate market disruption, its cost-effective approach puts pressure on rivals, reinforcing DeepSeek’s role in the global AI race and setting the stage for future breakthroughs.



