Technology · AI · Innovation · Enterprise AI

Nvidia's Nemotron 3 Super: A 120B Parameter Hybrid Model Outperforms GPT-OSS and Qwen in Throughput for Enterprise Multi-Agent Systems

Nvidia has introduced Nemotron 3 Super, a 120-billion-parameter hybrid model with open weights on Hugging Face, designed to address the cost-effectiveness challenges of multi-agent systems in enterprise tasks. These systems, which handle long-horizon tasks like software engineering, can generate significantly higher token volumes than standard chats. Nemotron 3 Super combines state-space models, transformers, and a novel "Latent" mixture-of-experts design to provide specialized depth for agentic workflows without the typical bloat of dense reasoning models. Its core features include a Hybrid Mamba-Transformer backbone for efficient sequence processing and precise factual retrieval, along with Latent Mixture-of-Experts (LatentMoE), which improves computational efficiency by routing tokens through a lower-dimensional latent space. This architecture aims to sustain a 1-million-token context window while remaining commercially usable.

VentureBeat

Nvidia has released Nemotron 3 Super, a 120-billion-parameter hybrid model with its weights available on Hugging Face, aiming to tackle the cost-effectiveness issues associated with multi-agent systems in enterprise environments. Multi-agent systems, which are engineered for long-horizon tasks such as software engineering or cybersecurity triaging, can generate up to 15 times the token volume of standard chat applications. This high token volume can threaten their economic viability for enterprise applications.
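To make the economics concrete, the sketch below works through what a 15x token multiplier does to per-task inference cost. The price and token counts are illustrative assumptions, not figures from Nvidia or VentureBeat; only the 15x multiplier comes from the article.

```python
# Rough cost sketch for the ~15x token-volume claim.
# Price and per-task token counts are hypothetical; only the
# 15x agent-vs-chat multiplier comes from the article.

PRICE_PER_M_OUTPUT_TOKENS = 10.00  # USD, assumed hosted-model rate
CHAT_TOKENS_PER_TASK = 2_000       # assumed output tokens for one chat turn
AGENT_MULTIPLIER = 15              # multi-agent vs. standard chat

def task_cost(tokens: int, price_per_million: float) -> float:
    """Cost in USD for generating `tokens` output tokens."""
    return tokens / 1_000_000 * price_per_million

chat_cost = task_cost(CHAT_TOKENS_PER_TASK, PRICE_PER_M_OUTPUT_TOKENS)
agent_cost = task_cost(CHAT_TOKENS_PER_TASK * AGENT_MULTIPLIER,
                       PRICE_PER_M_OUTPUT_TOKENS)

print(f"chat task:  ${chat_cost:.3f}")   # $0.020
print(f"agent task: ${agent_cost:.3f}")  # $0.300
```

Under these assumed numbers, the same unit price turns a two-cent chat interaction into a thirty-cent agentic task, which is why per-token efficiency dominates the viability question at enterprise scale.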

Nemotron 3 Super integrates three distinct architectural philosophies: state-space models, transformers, and an innovative "Latent" mixture-of-experts design. This fusion is intended to deliver the specialized depth necessary for agentic workflows while circumventing the computational overhead typically associated with dense reasoning models. The model is released with largely open weights under terms that permit commercial use.

The core of Nemotron 3 Super features a sophisticated architectural triad that seeks to balance memory efficiency with precise reasoning capabilities. It employs a Hybrid Mamba-Transformer backbone, which strategically interleaves Mamba-2 layers with Transformer attention layers. The Mamba-2 layers function as a "fast-travel" highway system, processing the majority of sequence data with linear-time complexity. This design enables the model to maintain a substantial 1-million-token context window without the KV-cache memory footprint ballooning as it would in a pure-attention design.
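A back-of-the-envelope calculation shows why interleaving matters: the KV cache of an attention layer grows linearly with context length, while a Mamba-2 layer carries a fixed-size state regardless of context. The layer counts and head dimensions below are illustrative assumptions, not Nemotron 3 Super's actual configuration.

```python
# KV-cache sizing sketch: attention layers pay per token of context,
# Mamba-2 layers do not. All dimensions here are assumed for illustration.

def kv_cache_bytes(attn_layers: int, context_len: int,
                   n_kv_heads: int = 8, head_dim: int = 128,
                   bytes_per_elem: int = 2) -> int:
    """KV cache = 2 (K and V) * layers * tokens * heads * head_dim * dtype size."""
    return 2 * attn_layers * context_len * n_kv_heads * head_dim * bytes_per_elem

CTX = 1_000_000  # the 1M-token window discussed above

dense = kv_cache_bytes(attn_layers=48, context_len=CTX)   # hypothetical all-attention stack
hybrid = kv_cache_bytes(attn_layers=6, context_len=CTX)   # a few attention "anchor" layers
# The Mamba-2 layers replacing the other 42 add only a fixed-size state,
# which is negligible next to a million-token KV cache.

print(f"dense : {dense / 2**30:.1f} GiB")
print(f"hybrid: {hybrid / 2**30:.1f} GiB")
```

In this toy configuration, keeping one attention layer in eight cuts cache memory by 8x at the same context length, which is the basic trade the hybrid backbone exploits.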

However, pure state-space models often encounter difficulties with associative recall. To mitigate this, Nvidia has strategically embedded Transformer attention layers as "global anchors." These layers ensure the model's ability to accurately retrieve specific facts that might be deeply embedded within large datasets, such as a codebase or a collection of financial reports. Beyond the backbone, the model introduces Latent Mixture-of-Experts (LatentMoE). Traditional Mixture-of-Experts (MoE) designs typically route tokens to experts in their full hidden dimension, which can create a computational bottleneck as models scale. LatentMoE addresses this by projecting tokens into a lower-dimensional latent space for expert computation, then projecting back to the full hidden dimension.
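The LatentMoE idea described above can be sketched as follows: project tokens down with a shared matrix, run top-k expert mixing in the smaller latent space, and project back up. The dimensions, expert count, and top-k choice are illustrative assumptions, not the model's published configuration.

```python
import numpy as np

# Minimal LatentMoE sketch: experts operate in a latent space smaller than
# the model's hidden dimension. All sizes below are assumed for illustration.

rng = np.random.default_rng(0)

D_MODEL, D_LATENT = 1024, 256   # full hidden dim vs. latent expert dim
N_EXPERTS, TOP_K = 8, 2

W_down = rng.normal(0, 0.02, (D_MODEL, D_LATENT))   # shared down-projection
W_up   = rng.normal(0, 0.02, (D_LATENT, D_MODEL))   # shared up-projection
experts = rng.normal(0, 0.02, (N_EXPERTS, D_LATENT, D_LATENT))
W_gate = rng.normal(0, 0.02, (D_MODEL, N_EXPERTS))  # router

def latent_moe(x: np.ndarray) -> np.ndarray:
    """x: (tokens, D_MODEL) -> (tokens, D_MODEL)."""
    z = x @ W_down                                   # compute in latent space
    logits = x @ W_gate
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:]   # top-k experts per token
    out = np.zeros_like(z)
    for t in range(x.shape[0]):
        # softmax over only the selected experts' logits
        w = np.exp(logits[t, topk[t]] - logits[t, topk[t]].max())
        w /= w.sum()
        for weight, e in zip(w, topk[t]):
            out[t] += weight * (z[t] @ experts[e])
    return out @ W_up                                # back to full hidden dim

y = latent_moe(rng.normal(size=(4, D_MODEL)))
print(y.shape)  # (4, 1024)
```

The efficiency claim falls out of the dimensions: per-expert compute scales with the square of the expert width, so shrinking it from 1024 to 256 in this toy setup cuts each expert's matmul cost by roughly 16x.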

Related News

Project N.O.M.A.D: A Self-Sufficient Offline Survival Computer with AI and Essential Tools for Anytime, Anywhere Access
Technology

Project N.O.M.A.D (N.O.M.A.D project) is introduced as a self-sufficient, offline survival computer designed to provide users with critical tools, knowledge, and AI capabilities. This system aims to ensure users can access information and maintain an advantage regardless of their location or connectivity status. The project emphasizes self-reliance and preparedness through its integrated features.

MiroFish: A Concise and Universal Swarm Intelligence Engine for Predicting Everything
Technology

MiroFish, an innovative project by 666ghj, has emerged as a trending repository on GitHub. Described as a concise and universal swarm intelligence engine, MiroFish aims to predict a wide array of phenomena. The project's core concept revolves around leveraging collective intelligence to offer predictive capabilities across various domains. Further details regarding its specific applications or underlying technology are not provided in the initial description.

GitNexus: Zero-Server Code Smart Engine Transforms GitHub Repos and ZIP Files into Interactive Knowledge Graphs with Built-in Graph RAG Agent for Enhanced Code Exploration
Technology

GitNexus is a client-side knowledge graph creator that operates entirely within the browser, requiring no server-side code. Users can input GitHub repositories or ZIP files to generate an interactive knowledge graph, which includes a built-in Graph RAG agent. This tool is designed to significantly enhance code exploration by providing a visual and interactive way to understand codebases.