Launched today, NVIDIA Nemotron 3 Super is a 120‑billion‑parameter open model with 12 billion active parameters, designed to run complex agentic AI systems at scale.
Available now, the model combines advanced reasoning with efficient inference, enabling autonomous agents to complete tasks quickly and with high accuracy.
AI-Native Companies: Perplexity offers its users access to Nemotron 3 Super for search and as one of 20 orchestrated models in Computer. Companies offering software development agents, such as CodeRabbit, Factory and Greptile, are integrating the model into their AI agents alongside proprietary models to achieve higher accuracy at lower cost. Life sciences and frontier AI organizations like Edison Scientific and Lila Sciences will use the model to power agents for deep literature search, data science and molecular understanding.
Enterprise Software Platforms: Industry leaders such as Amdocs, Palantir, Cadence, Dassault Systèmes and Siemens are deploying and customizing the model to automate workflows in telecom, cybersecurity, semiconductor design and manufacturing.
As companies move beyond chatbots and into multi‑agent applications, they encounter two constraints.
The first is context explosion. Multi‑agent workflows generate up to 15x more tokens than standard chat because each interaction requires resending full histories, including tool outputs and intermediate reasoning.
Over long tasks, this volume of context increases costs and can lead to goal drift, where agents lose alignment with the original objective.
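The amplification is easy to sketch: if every agent turn appends new tokens and the full history is resent with each call, the total tokens processed grow quadratically with the number of turns. A minimal illustration, using made-up per-turn numbers rather than measurements from any real workload:

```python
# Sketch: why resending full history inflates token counts.
# Assumes each agent turn adds ~500 new tokens (tool output plus
# intermediate reasoning) -- illustrative numbers only.

def cumulative_tokens(turns: int, tokens_per_turn: int = 500) -> int:
    """Total tokens processed across all calls when history is resent each turn."""
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn   # new content appended this turn
        total += history             # the entire history is sent with the call
    return total

new_content = 40 * 500               # 20,000 tokens actually generated
resent = cumulative_tokens(40)       # 410,000 tokens processed
print(resent / new_content)          # 20.5x amplification over 40 turns
```

The quadratic growth, not the per-turn numbers, is the point: the longer the workflow runs, the worse the ratio gets.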
The second is the thinking tax. Complex agents must reason at every step, but running a large model for every subtask makes multi-agent applications too expensive and too slow to be practical.
Nemotron 3 Super has a 1‑million‑token context window, allowing agents to retain full workflow state in memory and preventing goal drift.
Nemotron 3 Super sets a new standard, claiming the top spot on Artificial Analysis for efficiency and openness, with leading accuracy among models of its size.
The model also powers the NVIDIA AI-Q research agent to the No. 1 position on DeepResearch Bench and DeepResearch Bench II leaderboards, benchmarks that measure an AI system’s ability to conduct thorough, multistep research across large document sets while maintaining reasoning coherence.
Hybrid Architecture
Nemotron 3 Super uses a hybrid mixture‑of‑experts (MoE) architecture that combines three major innovations to deliver up to 5x higher throughput and up to 2x higher accuracy than the previous Nemotron Super model.
Hybrid Architecture: Mamba layers deliver 4x higher memory and compute efficiency, while transformer layers drive advanced reasoning.
MoE: Only 12 billion of its 120 billion parameters are active at inference.
Latent MoE: A new technique that improves accuracy by activating four specialist experts for the cost of one when generating each token at inference.
Multi-Token Prediction: Predicts multiple future tokens simultaneously, resulting in 3x faster inference.
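The MoE idea above can be illustrated with a toy top-k routing layer: only k of E experts run per token, so roughly k/E of the expert parameters do work on each forward pass — the same principle behind activating 12 billion of 120 billion parameters. The sizes, random weights and router below are illustrative, not the production architecture:

```python
import math
import random

random.seed(0)
E, k, d = 8, 2, 16   # experts, active experts per token, hidden size

def rand_matrix(rows, cols):
    return [[random.gauss(0.0, 1.0) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    """Multiply a (len(v) x out) row-major matrix by vector v."""
    return [sum(v[i] * m[i][j] for i in range(len(v))) for j in range(len(m[0]))]

router_w = rand_matrix(d, E)                 # scores a token against each expert
experts = [rand_matrix(d, d) for _ in range(E)]

def moe_forward(x):
    logits = matvec(router_w, x)                          # one score per expert
    top = sorted(range(E), key=lambda e: logits[e])[-k:]  # the k best experts
    weights = [math.exp(logits[e]) for e in top]
    total = sum(weights)
    gates = [w / total for w in weights]                  # softmax over the k chosen
    # Only the k selected experts compute; the other E - k stay idle.
    out = [0.0] * d
    for g, e in zip(gates, top):
        h = matvec(experts[e], x)
        out = [o + g * hi for o, hi in zip(out, h)]
    return out

x = [random.gauss(0.0, 1.0) for _ in range(d)]
print(len(moe_forward(x)))   # 16 -- full-width output from a sparse compute path
```

Here 2 of 8 experts fire per token, a 4x reduction in active expert parameters; Nemotron 3 Super applies the same sparsity at 120B scale.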
On the NVIDIA Blackwell platform, the model runs in NVFP4 precision. That cuts memory requirements and pushes inference up to 4x faster than FP8 on NVIDIA Hopper, with no loss in accuracy.
Open Weights, Data and Recipes
NVIDIA is releasing Nemotron 3 Super with open weights under a permissive license. Developers can deploy and customize it on workstations, in data centers or in the cloud.
The model was trained on synthetic data generated using frontier reasoning models. NVIDIA is publishing the complete methodology, including over 10 trillion tokens of pre- and post-training datasets, 15 training environments for reinforcement learning and evaluation recipes. Researchers can also use the NVIDIA NeMo platform to fine-tune the model or build their own.
Use in Agentic Systems
Nemotron 3 Super is designed to handle complex subtasks inside a multi-agent system.
A software development agent can load an entire codebase into context at once, enabling end-to-end code generation and debugging without splitting files into chunks.
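As a rough sketch of how an agent might pack a repository into a large context window — using a crude four-characters-per-token heuristic in place of the model's real tokenizer, with all names and the budget illustrative:

```python
import os

def pack_files(files: dict, token_budget: int = 1_000_000) -> str:
    """Concatenate {path: text} entries until the rough token budget is hit."""
    chunks, used = [], 0
    for path in sorted(files):
        cost = len(files[path]) // 4 + 1      # ~4 characters per token (heuristic)
        if used + cost > token_budget:
            break                             # budget exhausted; stop packing
        chunks.append(f"### {path}\n{files[path]}")
        used += cost
    return "\n".join(chunks)

def pack_repo(root: str, token_budget: int = 1_000_000) -> str:
    """Walk a repository and pack its source files into one prompt string."""
    files = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            if name.endswith((".py", ".md")):  # illustrative filter
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    files[path] = f.read()
    return pack_files(files, token_budget)

print(pack_files({"a.py": "x = 1\n"}))
```

With a 1-million-token window, a mid-sized repository fits in a single prompt, so the agent reasons over the whole codebase instead of retrieved fragments.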
In financial analysis, it can hold thousands of pages of reports in context, eliminating the need to repeatedly re-reason over long conversations and improving efficiency.
Nemotron 3 Super's high-accuracy tool calling lets autonomous agents reliably navigate large function libraries, preventing execution errors in high-stakes environments such as autonomous security orchestration in cybersecurity.
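A minimal sketch of how such tool calling typically works through an OpenAI-compatible chat completions API. The endpoint URL follows the pattern NVIDIA uses for hosted models, but the model ID "nvidia/nemotron-3-super" and the quarantine_host tool are hypothetical placeholders — check the model card for the real identifiers:

```python
import json

API_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

tools = [{
    "type": "function",
    "function": {
        "name": "quarantine_host",   # hypothetical incident-response tool
        "description": "Isolate a host from the network during incident response.",
        "parameters": {
            "type": "object",
            "properties": {"hostname": {"type": "string"}},
            "required": ["hostname"],
        },
    },
}]

payload = {
    "model": "nvidia/nemotron-3-super",   # hypothetical model ID
    "messages": [{"role": "user", "content": "Contain the breach on web-07."}],
    "tools": tools,
    "tool_choice": "auto",
}

def dispatch(tool_call: dict) -> str:
    """Execute whichever tool call the model returns in its response."""
    args = json.loads(tool_call["function"]["arguments"])
    if tool_call["function"]["name"] == "quarantine_host":
        return f"quarantined {args['hostname']}"
    raise ValueError("unknown tool")

# Simulated model response (the real one comes back from POSTing `payload`):
print(dispatch({"function": {"name": "quarantine_host",
                             "arguments": '{"hostname": "web-07"}'}}))
```

In a high-stakes setting, the accuracy that matters is the model consistently choosing the right function and emitting arguments that parse and validate against the schema.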
Availability
NVIDIA Nemotron 3 Super, part of the Nemotron 3 family, can be accessed at build.nvidia.com, Perplexity, OpenRouter and Hugging Face. Dell Technologies is bringing the model to the Dell Enterprise Hub on Hugging Face, optimized for on-premise deployment on the Dell AI Factory, advancing multi-agent AI workflows. HPE is also bringing NVIDIA Nemotron to its agents hub to help ensure scalable enterprise adoption of agentic AI.
Enterprises and developers can deploy the model through several partners:
Cloud Service Providers: Google Cloud’s Vertex AI and Oracle Cloud Infrastructure, with availability coming soon on Amazon Web Services through Amazon Bedrock and on Microsoft Azure.
NVIDIA Cloud Partners: CoreWeave, Crusoe, Nebius and Together AI.
Inference Service Providers: Baseten, Cloudflare, DeepInfra, Fireworks AI, Inference.net, Lightning AI, Modal and FriendliAI.
Data Platforms and Services: Distyl, Dataiku, DataRobot, Deloitte, EY and Tata Consultancy Services.
The model is packaged as an NVIDIA NIM microservice, allowing deployment from on-premises systems to the cloud.
Stay up to date on agentic AI, NVIDIA Nemotron and more by subscribing to NVIDIA AI news, joining the community, and following NVIDIA AI on LinkedIn, Instagram, X and Facebook.
Explore self-paced video tutorials and livestreams.
