Technology · AI · Innovation · Enterprise AI

Nvidia's Nemotron 3 Super: A 120B Parameter Hybrid Model Outperforms GPT-OSS and Qwen in Throughput for Enterprise Multi-Agent Systems

Nvidia has introduced Nemotron 3 Super, a 120-billion-parameter hybrid model with open weights on Hugging Face, designed to address the cost-effectiveness challenges of multi-agent systems in enterprise tasks. These systems, which handle long-horizon tasks like software engineering, can generate significantly higher token volumes than standard chats. Nemotron 3 Super combines state-space models, transformers, and a novel "Latent" mixture-of-experts design to provide specialized depth for agentic workflows without the typical bloat of dense reasoning models. Its core features include a Hybrid Mamba-Transformer backbone for efficient sequence processing and precise factual retrieval, along with Latent Mixture-of-Experts (LatentMoE), which improves computational efficiency by projecting token representations to a lower dimension for expert routing. This architecture aims to maintain a massive 1-million-token context window while remaining usable commercially.

VentureBeat

Nvidia has released Nemotron 3 Super, a 120-billion-parameter hybrid model with its weights available on Hugging Face, aiming to tackle the cost-effectiveness issues associated with multi-agent systems in enterprise environments. Multi-agent systems, which are engineered for long-horizon tasks such as software engineering or cybersecurity triaging, can generate up to 15 times the token volume of standard chat applications. This high token volume can threaten their economic viability for enterprise applications.
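The economics here come down to simple multiplication: at a fixed price per token, a 15x increase in token volume means a 15x increase in serving cost per session. A back-of-the-envelope sketch makes this concrete; the 15x multiplier comes from the article, while the token counts and per-million-token price below are purely illustrative assumptions.

```python
# Illustrative cost comparison. Only the 15x token-volume multiplier is from
# the article; chat_tokens and price are hypothetical placeholder numbers.
def session_cost(tokens: int, price_per_million_tokens: float) -> float:
    """Cost of generating `tokens` tokens at a given $/1M-token price."""
    return tokens / 1_000_000 * price_per_million_tokens

chat_tokens = 2_000               # assumed tokens in a typical chat exchange
agent_tokens = chat_tokens * 15   # long-horizon agent run, per the 15x figure
price = 5.0                       # assumed $ per 1M generated tokens

print(session_cost(chat_tokens, price))   # 0.01
print(session_cost(agent_tokens, price))  # 0.15
```

Per-session costs that are negligible for chat thus become a line item at fleet scale, which is the viability problem throughput-optimized architectures like this one target.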

Nemotron 3 Super integrates three distinct architectural philosophies: state-space models, transformers, and an innovative "Latent" mixture-of-experts design. This fusion is intended to deliver the specialized depth necessary for agentic workflows while circumventing the computational overhead typically associated with dense reasoning models. The model's weights are largely open, and it is licensed for commercial use.

The core of Nemotron 3 Super features a sophisticated architectural triad that seeks to balance memory efficiency with precise reasoning capabilities. It employs a Hybrid Mamba-Transformer backbone, which strategically interleaves Mamba-2 layers with Transformer attention layers. The Mamba-2 layers function as a "fast-travel" highway system, processing the majority of sequence data with linear-time complexity. This design enables the model to maintain a substantial 1-million-token context window without the KV cache's memory footprint ballooning at long context lengths.
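The key memory saving is that only the attention layers accumulate a KV cache; Mamba-2 layers carry a fixed-size state regardless of sequence length. A minimal sketch of the idea, assuming a hypothetical interleaving ratio and hypothetical head dimensions (the article specifies neither):

```python
# Sketch of an interleaved hybrid backbone. The 1-attention-per-6-layers
# ratio and the attention-head shapes below are illustrative assumptions,
# not figures from the article.
def hybrid_schedule(n_layers: int, attn_every: int = 6) -> list[str]:
    """Place a Transformer attention 'anchor' every `attn_every` layers;
    all other layers are linear-time Mamba-2 layers."""
    return ["attention" if (i + 1) % attn_every == 0 else "mamba2"
            for i in range(n_layers)]

def kv_cache_bytes(seq_len: int, n_attn_layers: int,
                   n_heads: int = 8, head_dim: int = 128,
                   bytes_per_el: int = 2) -> int:
    """KV cache grows with seq_len only for attention layers; Mamba-2
    layers keep a constant-size recurrent state (ignored here)."""
    return 2 * n_attn_layers * seq_len * n_heads * head_dim * bytes_per_el

layers = hybrid_schedule(12, attn_every=6)
print(layers.count("mamba2"), layers.count("attention"))  # 10 2

# At a 1M-token context, caching KV for 2 attention layers vs. all 12:
hybrid = kv_cache_bytes(1_000_000, n_attn_layers=2)
dense = kv_cache_bytes(1_000_000, n_attn_layers=12)
print(dense // hybrid)  # 6
```

The ratio in the last line is the point: the fewer attention layers you interleave, the smaller the growing part of the memory footprint at long context.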

However, pure state-space models often encounter difficulties with associative recall. To mitigate this, Nvidia has strategically embedded Transformer attention layers as "global anchors." These layers ensure the model's ability to accurately retrieve specific facts that might be deeply embedded within large datasets, such as a codebase or a collection of financial reports. Beyond the backbone, the model introduces Latent Mixture-of-Experts (LatentMoE). Traditional Mixture-of-Experts (MoE) designs typically route tokens to experts in their full hidden dimension, which can create a computational bottleneck as models scale. LatentMoE addresses this by projecting token representations into a lower-dimensional latent space before expert routing, reducing the per-token compute.
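The latent-projection idea can be sketched in a few lines. This is a minimal toy, assuming experts operate entirely in the down-projected space with top-k softmax routing; the dimensions, expert count, routing scheme, and weight shapes below are all illustrative assumptions, not Nemotron 3 Super's actual design.

```python
import numpy as np

# Toy latent-MoE layer: project down, route and run experts in the cheap
# latent dimension, then project back up. All hyperparameters are assumed.
rng = np.random.default_rng(0)
d_model, d_latent, n_experts, top_k = 512, 128, 8, 2

W_down = rng.standard_normal((d_model, d_latent)) * 0.02
W_up = rng.standard_normal((d_latent, d_model)) * 0.02
W_router = rng.standard_normal((d_latent, n_experts)) * 0.02
experts = rng.standard_normal((n_experts, d_latent, d_latent)) * 0.02

def latent_moe(x: np.ndarray) -> np.ndarray:
    """x: (tokens, d_model) -> (tokens, d_model)."""
    z = x @ W_down                         # work in the smaller latent dim
    logits = z @ W_router                  # router scores per expert
    idx = np.argsort(logits, axis=-1)[:, -top_k:]      # top-k experts/token
    scores = np.take_along_axis(logits, idx, axis=-1)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)          # softmax over top-k
    out = np.zeros_like(z)
    for t in range(x.shape[0]):            # naive per-token expert mixing
        for e, w in zip(idx[t], weights[t]):
            out[t] += w * (z[t] @ experts[e])
    return out @ W_up                      # project back to model dim

y = latent_moe(rng.standard_normal((4, d_model)))
print(y.shape)  # (4, 512)
```

With each expert matmul costing O(d_latent^2) instead of O(d_model^2), the example's 512-to-128 projection cuts per-expert compute by roughly 16x, which is the scaling bottleneck the design targets.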

Related News

Technology

AstrBot: An Agent-Based Instant Messaging Chatbot Infrastructure Integrating LLMs, Plugins, and AI Features as an OpenClaw Alternative

AstrBot is an agent-based instant messaging chatbot infrastructure designed to integrate a wide array of instant messaging platforms, Large Language Models (LLMs), plugins, and various AI functionalities. Positioned as a potential alternative to OpenClaw, AstrBot aims to provide a comprehensive and versatile solution for automated communication and AI-driven interactions across multiple platforms. The project is developed by AstrBotDevs and was featured on GitHub Trending on March 15, 2026.

Technology

Google Unveils A2UI: An Open-Source Agent-to-User Interface for Dynamic UI Generation and Rendering

Google has launched A2UI, an open-source project designed to facilitate the creation and rendering of agent-generated user interfaces. A2UI introduces an optimized format for representing updatable, agent-generated UIs and includes an initial set of renderers. This allows agents to generate or populate rich user interfaces, enhancing the dynamic interaction between AI agents and users. The project is currently trending on GitHub.

Technology

OpenRAG: A Unified Retrieval-Augmented Generation Platform Built with Langflow, Docling, and Opensearch

OpenRAG is introduced as a comprehensive, single-platform solution for Retrieval-Augmented Generation (RAG). It is built upon a powerful stack comprising Langflow, Docling, and Opensearch. This platform aims to streamline the RAG process by integrating these key technologies into a unified system, offering a complete solution for developers and researchers working with advanced AI models.