Technology · AI · Open Source · Language Models

Alibaba's New Open-Source Qwen3.5-9B Model Outperforms OpenAI's GPT-OSS-120B and Runs on Standard Laptops

Alibaba's Qwen Team has unveiled its new Qwen3.5 Small Model Series, a set of open-source language and multimodal AI models. Notably, the Qwen3.5-9B, a compact reasoning model, has outperformed OpenAI's gpt-oss-120B, a model 13.5 times its size, on key third-party benchmarks, including multilingual knowledge and graduate-level reasoning. The series also includes Qwen3.5-0.8B & 2B, optimized for edge devices, and Qwen3.5-4B, a multimodal base for lightweight agents with a 262,144-token context window. These models are significantly smaller than flagship models from OpenAI, Anthropic, and Google, placing them in the same class as MIT offshoot LiquidAI's LFM2 series. The models are available worldwide under the Apache 2.0 license on Hugging Face and ModelScope, making them suitable for enterprise and commercial use. Technically, the series adopts an Efficient Hybrid Architecture that combines Gated Delta Networks with sparse Mixture-of-Experts (MoE) to overcome memory limitations.

VentureBeat

While the U.S. AI sector contends with political turmoil, AI development in China is advancing rapidly. Alibaba's Qwen Team, known for developing and releasing a growing family of powerful Qwen open-source language and multimodal AI models, has introduced its newest batch: the Qwen3.5 Small Model Series.

This series includes several models tailored for different applications:

  • Qwen3.5-0.8B & 2B: These two models are optimized for "tiny" and "fast" performance, designed for prototyping and deployment on edge devices where battery life is a critical factor.
  • Qwen3.5-4B: This model serves as a strong multimodal base for lightweight agents and natively supports a substantial 262,144 token context window.
  • Qwen3.5-9B: A compact reasoning model that has been shown to outperform OpenAI's open-source gpt-oss-120B, a model 13.5 times its size, on crucial third-party benchmarks, including multilingual knowledge and graduate-level reasoning.

To provide context, these models are among the smallest general-purpose models recently released by any major lab. They are more comparable to MIT offshoot LiquidAI's LFM2 series, whose parameter counts range from several hundred million to a few billion, than to the flagship models from OpenAI, Anthropic, and Google's Gemini series, which are reportedly estimated at a trillion or more parameters.

The weights for these models are currently available worldwide under Apache 2.0 licenses on Hugging Face and ModelScope. This licensing makes them ideal for enterprise and commercial use, allowing for customization as needed.

The underlying technology of the Qwen3.5 small series represents a departure from standard Transformer architectures. Alibaba has adopted an Efficient Hybrid Architecture that integrates Gated Delta Networks (a form of linear attention) with sparse Mixture-of-Experts (MoE). This hybrid approach is designed to address the "memory wall" that typically restricts the performance of smaller models.
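The article does not spell out the exact layer equations Alibaba uses, but the gated delta rule at the heart of Gated Delta Networks can be illustrated in a few lines: instead of a key-value cache that grows with sequence length, the model maintains a fixed-size state matrix that is decayed by a forget gate and updated by erasing and rewriting one key-value association per token. The following is a minimal single-head NumPy sketch; the dimensions, gate values, and function name are illustrative assumptions, not Alibaba's implementation.

```python
import numpy as np

def gated_delta_step(S, q, k, v, alpha, beta):
    """One recurrent step of a gated delta rule (toy, single head).

    S     : (d_v, d_k) fixed-size state matrix (the "memory")
    q, k  : (d_k,) query / key vectors (k assumed unit-norm)
    v     : (d_v,) value vector
    alpha : scalar in (0, 1), forget gate decaying the whole state
    beta  : scalar in (0, 1), write strength for the new association
    """
    d_k = k.shape[0]
    # Decay the state and erase whatever was stored under key k...
    S = alpha * (S @ (np.eye(d_k) - beta * np.outer(k, k)))
    # ...then write the new key -> value association.
    S = S + beta * np.outer(v, k)
    # Read-out: retrieve what the state holds for the query.
    o = S @ q
    return S, o

# Toy run: after 100 tokens the state is still (d_v, d_k), unlike a
# softmax-attention KV cache, which would have grown with every token.
rng = np.random.default_rng(0)
d_k, d_v, T = 8, 8, 100
S = np.zeros((d_v, d_k))
for _ in range(T):
    k = rng.normal(size=d_k); k /= np.linalg.norm(k)
    q = rng.normal(size=d_k)
    v = rng.normal(size=d_v)
    S, o = gated_delta_step(S, q, k, v, alpha=0.95, beta=0.5)
print(S.shape)  # (8, 8) -- constant-size state, regardless of sequence length
```

The constant-size state is what lets this family of linear-attention layers sidestep the "memory wall": per-token compute and memory stay O(1) in sequence length, which matters most for the small, edge-oriented models in this series.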

Related News

Project N.O.M.A.D: A Self-Sufficient Offline Survival Computer with AI and Essential Tools for Anytime, Anywhere Access
Technology

Project N.O.M.A.D is introduced as a self-sufficient, offline survival computer designed to provide users with critical tools, knowledge, and AI capabilities. This system aims to ensure users can access information and maintain an advantage regardless of their location or connectivity status. The project emphasizes self-reliance and preparedness through its integrated features.

MiroFish: A Concise and Universal Swarm Intelligence Engine for Predicting Everything
Technology

MiroFish, an innovative project by 666ghj, has emerged as a trending repository on GitHub. Described as a concise and universal swarm intelligence engine, MiroFish aims to predict a wide array of phenomena. The project's core concept revolves around leveraging collective intelligence to offer predictive capabilities across various domains. Further details regarding its specific applications or underlying technology are not provided in the initial description.

GitNexus: Zero-Server Code Smart Engine Transforms GitHub Repos and ZIP Files into Interactive Knowledge Graphs with Built-in Graph RAG Agent for Enhanced Code Exploration
Technology

GitNexus is a client-side knowledge graph creator that operates entirely within the browser, requiring no server-side code. Users can input GitHub repositories or ZIP files to generate an interactive knowledge graph, which includes a built-in Graph RAG agent. This tool is designed to significantly enhance code exploration by providing a visual and interactive way to understand codebases.