New Open Source Qwen3-Next Models Preview Hybrid MoE Architecture Delivering Improved Accuracy and Accelerated Parallel Processing Across the NVIDIA Platform
As AI models grow larger and process longer sequences of text, efficiency becomes just as important as scale. To showcase what’s next, Alibaba released two new open models, Qwen3-Next-80B-A3B-Thinking and Qwen3-Next-80B-A3B-Instruct, to preview a new hybrid Mixture of Experts (MoE) architecture with the research and developer community. Qwen3-Next-80B-A3B-Thinking is now live on build.
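
Since the announcement notes the Thinking model is already hosted, a minimal sketch of calling it through an OpenAI-compatible chat completions endpoint might look like the following. The base URL, model ID, and key handling are assumptions for illustration, not details confirmed by the post.

```python
# Minimal sketch: calling a hosted Qwen3-Next model through an
# OpenAI-compatible endpoint. The endpoint and model ID below are
# assumed for illustration, not confirmed by the announcement.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # key supplied via env var
)

completion = client.chat.completions.create(
    model="qwen/qwen3-next-80b-a3b-thinking",  # hypothetical model ID
    messages=[
        {
            "role": "user",
            "content": "Explain why a sparse MoE model can be cheaper to serve than a dense one.",
        }
    ],
    temperature=0.6,
    max_tokens=512,
)

print(completion.choices[0].message.content)
```

Because the sketch assumes an OpenAI-compatible API shape, the same client code would also work against any other server exposing that interface, such as a local deployment of the open weights.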
