Technology · AI · Innovation · DeepMind

Google DeepMind Unveils SIMA 2: A General-Purpose AI Agent Powered by Gemini, Achieving Near-Human Performance in Complex 3D Virtual Worlds with Enhanced Reasoning and Self-Improvement

Google DeepMind has launched SIMA 2, an upgraded general-purpose AI agent designed to navigate and perform tasks in complex 3D game environments. Building on its predecessor, SIMA 1 (released in 2024), SIMA 2 integrates the Gemini 2.5 Flash Lite model as its core reasoning engine, enabling it to better understand goals, interpret plans, and continuously improve itself. Whereas SIMA 1 completed roughly 31% of tasks drawn from a set of over 600 language instructions, SIMA 2 doubles that rate to 62%, approaching the 71% completion rate of human players. SIMA 2 retains the same interface but evolves from a simple instruction executor into an interactive game partner, capable of explaining its intentions and answering questions about its goals. It also expands its instruction channels to include voice, graphics, and emojis, and demonstrates advanced reasoning by interpreting abstract requests. Furthermore, SIMA 2 features a self-improvement mechanism in which it learns from its own experience in new games, with the Gemini model generating and scoring new tasks, allowing later versions to succeed in previously failed scenarios without additional human demonstrations. DeepMind also showcased SIMA 2's integration with Genie 3, which generates interactive 3D environments from a single image or text prompt, marking a significant step toward general-purpose agents for real-world robotics.

AI News - AI Base

Google DeepMind has recently unveiled SIMA 2, an advanced general-purpose AI agent engineered to excel in complex 3D game worlds. SIMA 2, which stands for Scalable, Instructable, Multiworld Agent, represents a significant upgrade from its predecessor, SIMA 1, introduced in 2024. The new iteration leverages the powerful Gemini model, specifically Gemini 2.5 Flash Lite, as its core reasoning engine, enabling enhanced goal comprehension, plan interpretation, and continuous self-improvement across diverse virtual environments.

SIMA 1, upon its release, operated by interpreting over 600 language instructions, observing rendered game images, and acting through virtual keyboard and mouse inputs. It achieved a task completion rate of approximately 31%, notably lower than the 71% completion rate observed in human players. SIMA 2, while retaining the same interface, has dramatically improved on this: DeepMind's evaluations show its task completion rate rising to 62%, remarkably close to human player levels.

A key architectural enhancement in SIMA 2 is the deep integration of the Gemini model. This allows the agent to receive visual observations and user instructions, deduce high-level objectives, and generate corresponding actions. This novel training paradigm transforms SIMA 2 from a simple instruction executor into an interactive game partner. It can now explain its intentions, respond to queries about its current goals, and articulate its reasoning process within the environment.
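The observe, deduce, act loop described above can be sketched in a few lines of Python. Everything here is an illustrative assumption: `StubReasoner` is a toy stand-in for the Gemini-based core, and its methods are invented for this sketch, not DeepMind's actual API.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    frame: str            # stand-in for a rendered game image
    instruction: str      # user instruction (text, voice transcript, emoji, ...)

class StubReasoner:
    """Toy stand-in for the Gemini-based reasoning core (an assumption,
    not DeepMind's real interface)."""
    def infer_goal(self, frame, instruction):
        return f"goal: {instruction}"

    def plan_actions(self, goal, frame):
        # In SIMA 2 these would be virtual keyboard/mouse actions.
        return ["move_forward", "interact"]

    def explain(self, goal):
        # SIMA 2 can articulate its reasoning on request.
        return f"I am pursuing '{goal}' because you asked me to."

def agent_step(model, obs: Observation):
    """One cycle: deduce a high-level objective, act, and explain."""
    goal = model.infer_goal(obs.frame, obs.instruction)
    actions = model.plan_actions(goal, obs.frame)
    return goal, actions, model.explain(goal)
```

The point of the sketch is the division of labor: the reasoning model deduces the objective and the action plan from the same observation stream the agent sees, which is what lets it also explain itself.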

SIMA 2 also boasts expanded instruction channels, moving beyond mere text commands to process voice, graphics, and even emojis. A compelling demonstration involved a user asking SIMA 2 to locate a "house the color of a ripe tomato." The agent successfully reasoned that "a ripe tomato is red" and subsequently identified the target, showcasing its advanced inferential capabilities.
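The "ripe tomato" demonstration amounts to grounding an abstract description into a concrete attribute before searching for the target. The toy sketch below imitates that inference step with a hard-coded lookup table; in SIMA 2 the Gemini model performs this reasoning, so the table and function names here are purely illustrative assumptions.

```python
# Toy stand-in for the inference SIMA 2 performs with Gemini:
# map an abstract description ("a ripe tomato") to a concrete color.
ATTRIBUTE_FACTS = {"a ripe tomato": "red", "a clear sky": "blue"}

def ground_instruction(instruction: str) -> str:
    """Rewrite 'the color of X' phrases into concrete colors."""
    for phrase, color in ATTRIBUTE_FACTS.items():
        marker = f"the color of {phrase}"
        if marker in instruction:
            return instruction.replace(marker, f"that is {color}")
    return instruction  # nothing abstract to resolve
```

For example, `ground_instruction("locate a house the color of a ripe tomato")` yields `"locate a house that is red"`, after which an ordinary object search can take over.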

Self-improvement is another standout feature of SIMA 2. After an initial phase of learning from human game demonstrations, the agent transitions into new games, relying entirely on its own experience for learning. The Gemini model plays a crucial role here, generating new tasks for the agent and scoring its performance. This mechanism has led to subsequent versions of SIMA 2 successfully completing many tasks that it previously failed, all without the need for additional human demonstrations.
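One round of the self-improvement scheme described above can be sketched as a generate, attempt, score, filter loop. All names and the 0.5 score threshold are illustrative assumptions; in the article's description, Gemini plays both the task-generator and scorer roles.

```python
def self_improvement_round(agent_policy, task_generator, scorer, n_tasks=5):
    """One hypothetical self-improvement round: propose tasks, attempt
    them, score each attempt, and keep successful trajectories as new
    training data (no human demonstrations involved)."""
    experience = []
    for _ in range(n_tasks):
        task = task_generator()              # Gemini proposes a new task
        trajectory = agent_policy(task)      # the agent attempts it
        score = scorer(task, trajectory)     # Gemini rates the attempt
        if score >= 0.5:                     # keep only successful attempts
            experience.append((task, trajectory, score))
    return experience
```

Retraining on the kept trajectories is what lets later versions of the agent succeed at tasks earlier versions failed, since each round expands the pool of self-generated experience.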

Finally, DeepMind showcased the synergy between SIMA 2 and Genie 3. This integration allows for the generation of interactive 3D environments from a single image or text prompt. Within these newly generated environments, SIMA 2 demonstrated its ability to identify objects and accomplish specified tasks, marking a pivotal step towards the development of general-purpose agents for more advanced real-world robotic applications.
