Nvidia's Nemotron 3 Super: A 120B Parameter Hybrid Model Outperforms GPT-OSS and Qwen in Throughput for Enterprise Multi-Agent Systems
Nvidia has introduced Nemotron 3 Super, a 120-billion-parameter hybrid model with open weights on Hugging Face, designed to make multi-agent systems cost-effective for enterprise tasks. Such systems, which handle long-horizon work like software engineering, can generate far higher token volumes than standard chat. Nemotron 3 Super combines state-space models, transformers, and a novel "Latent" mixture-of-experts design to provide specialized depth for agentic workflows without the typical bloat of dense reasoning models: a hybrid Mamba-Transformer backbone handles long sequences efficiently while interleaved attention layers preserve precise factual retrieval, and Latent Mixture-of-Experts (LatentMoE) cuts compute by routing tokens in a lower-dimensional latent space. The architecture targets a 1-million-token context window while remaining commercially usable.
Multi-agent systems, which are engineered for long-horizon tasks such as software engineering or cybersecurity triaging, can generate up to 15 times the token volume of standard chat applications, a load that can threaten their economic viability for enterprise deployments.
Nemotron 3 Super integrates three distinct architectural philosophies: state-space models, transformers, and an innovative "Latent" mixture-of-experts design. This fusion is intended to deliver the specialized depth necessary for agentic workflows while avoiding the computational overhead typically associated with dense reasoning models. The weights are largely open and licensed for commercial use.
The core of Nemotron 3 Super is an architectural triad that seeks to balance memory efficiency with precise reasoning. It employs a hybrid Mamba-Transformer backbone that strategically interleaves Mamba-2 layers with Transformer attention layers. The Mamba-2 layers function as a "fast-travel" highway system, processing the majority of sequence data in linear time. This design lets the model maintain a 1-million-token context window without the KV-cache memory footprint ballooning as it would in a pure Transformer.
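To make the interleaving concrete, here is a minimal numpy sketch of a hybrid backbone. The layer shapes, decay constant, and the 3:1 SSM-to-attention pattern are illustrative assumptions, not Nemotron's actual configuration: the point is that the SSM layer carries only a fixed-size running state (no KV cache), while the occasional attention layer compares every position against every other.

```python
import numpy as np

def ssm_layer(x, decay=0.9):
    # Linear-time state-space scan: each position sees a decayed running
    # sum of all earlier inputs. O(T) time, O(1) state per channel --
    # no KV cache grows with sequence length.
    state = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        state = decay * state + x[t]
        out[t] = state
    return out

def attention_layer(x):
    # Full self-attention: O(T^2) compute, but exact global recall --
    # the "global anchor" that can retrieve a fact from anywhere in context.
    scores = x @ x.T / np.sqrt(x.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x

def hybrid_backbone(x, pattern=("ssm", "ssm", "ssm", "attn")):
    # Interleave many cheap SSM layers with occasional attention layers,
    # with residual connections around each.
    for kind in pattern:
        x = x + (ssm_layer(x) if kind == "ssm" else attention_layer(x))
    return x

tokens = np.random.default_rng(0).normal(size=(16, 8))  # (seq_len, d_model)
out = hybrid_backbone(tokens)
print(out.shape)  # (16, 8)
```

Because the SSM layers dominate the stack, total memory stays nearly flat as the sequence grows; only the sparse attention layers pay the quadratic cost.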
However, pure state-space models often struggle with associative recall. To mitigate this, Nvidia has strategically embedded Transformer attention layers as "global anchors." These layers ensure the model can accurately retrieve specific facts buried deep within large inputs, such as a codebase or a collection of financial reports. Beyond the backbone, the model introduces Latent Mixture-of-Experts (LatentMoE). Traditional Mixture-of-Experts (MoE) designs typically route tokens to experts in their full hidden dimension, which becomes a computational bottleneck as models scale. LatentMoE addresses this by projecting token representations into a lower-dimensional latent space before routing them to experts, shrinking the per-token expert compute.
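The latent-projection idea can be sketched in a few lines of numpy. All dimensions, the shared down/up projections, and the top-1 routing here are illustrative assumptions (Nvidia has not published this exact layout): the essential move is that routing and the expert computation both happen in the smaller latent dimension, and the result is projected back up afterward.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_latent, n_experts = 64, 16, 4  # hypothetical sizes

# Shared down/up projections bracket the expert block (assumed design).
W_down = rng.normal(scale=d_model ** -0.5, size=(d_model, d_latent))
W_up = rng.normal(scale=d_latent ** -0.5, size=(d_latent, d_model))
router = rng.normal(scale=d_latent ** -0.5, size=(d_latent, n_experts))
experts = [rng.normal(scale=d_latent ** -0.5, size=(d_latent, d_latent))
           for _ in range(n_experts)]

def latent_moe(x):
    z = x @ W_down                # project tokens into the latent space
    best = (z @ router).argmax(axis=1)  # top-1 expert per token
    out = np.empty_like(z)
    for e in range(n_experts):
        mask = best == e
        if mask.any():
            out[mask] = z[mask] @ experts[e]  # expert runs at latent width
    return out @ W_up             # project back to the model dimension

x = rng.normal(size=(8, d_model))   # a batch of 8 token vectors
y = latent_moe(x)
print(y.shape)  # (8, 64)
```

In this toy setup each expert matmul costs d_latent² rather than d_model² per token, which is the scaling argument the LatentMoE design makes: expert capacity grows without the routing and expert compute growing with the full hidden width.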