Technology · AI · API · Innovation

OpenAI Enhances Responses API with Server-side Compaction, Hosted Shell, and Agent Skills for Long-Term AI Worker Reliability

OpenAI has announced significant upgrades to its Responses API, aiming to overcome the limitations of AI agents, particularly their tendency to lose context over extended interactions. The updates, revealed today, include Server-side Compaction, Hosted Shell Containers, and the implementation of a new 'Skills' standard for agents. These advancements are designed to provide agents with persistent memory and a complete terminal shell, transforming them into more reliable, long-term digital workers. Server-side Compaction addresses the critical issue of 'context amnesia' by allowing agents to summarize past actions into a compressed state, maintaining essential context without hitting token limits. Early results from e-commerce platform Triple Whale demonstrate this breakthrough, with their agent Moby successfully handling a 5 million token session over 150 tool calls without accuracy degradation. These upgrades signal a shift away from limited AI agents towards more capable and stable autonomous systems.

VentureBeat

OpenAI has rolled out substantial enhancements to its Responses API, the interface enabling developers to access various agentic tools like web and file search with a single call. These updates signify a move beyond the era of limited AI agents, which previously struggled with maintaining context over prolonged interactions, often leading to 'hallucinations' after a few dozen exchanges. The newly announced features — Server-side Compaction, Hosted Shell Containers, and the adoption of the new 'Skills' standard for agents — are set to revolutionize how AI agents function.

These three major updates are designed to equip agents with a permanent operational environment, a complete terminal, and a memory that endures, fostering their evolution into dependable, long-term digital workers. A primary technical challenge for autonomous agents has been managing the 'clutter' generated during long-running tasks. Each tool call or script execution expands the conversation history, eventually causing the model to reach its token limit. This often forced developers to truncate the history, inadvertently removing crucial 'reasoning' necessary for the agent to complete its task.
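To make the truncation problem concrete, here is a minimal illustration (not OpenAI's implementation; the history entries and character-based token proxy are invented for the example) of how dropping the oldest messages to fit a token budget can discard exactly the reasoning the agent still needs:

```python
# Illustration only: naive oldest-first truncation of an agent's history.
# Token counting is crudely approximated by character length.

def naive_truncate(history, max_tokens, count_tokens=len):
    """Drop the oldest messages until the history fits the token budget."""
    kept = list(history)
    while kept and sum(count_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # the original task and early reasoning are dropped first
    return kept

history = [
    "TASK: reconcile Q3 invoices against the ledger",   # the goal itself
    "TOOL CALL 1: fetched 40 invoices",
    "REASONING: invoice #17 is a duplicate, skip it",
    "TOOL CALL 2: fetched ledger entries",
]

trimmed = naive_truncate(history, max_tokens=60)
# The surviving window no longer contains the task statement or the
# duplicate-invoice decision, so the agent may repeat work it already did.
```

Under this toy budget, only the most recent tool call survives; the task and the key reasoning step are gone.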

OpenAI's solution to this problem is Server-side Compaction. Unlike simple truncation, compaction lets the model 'summarize' its own past actions into a compressed state, preserving vital context while discarding irrelevant noise, so agents can operate effectively for hours or even days. Initial data from the e-commerce platform Triple Whale highlights the stability breakthrough: their agent, Moby, completed a session spanning 5 million tokens and over 150 tool calls while maintaining accuracy throughout. This transforms the model from one prone to forgetting into a robust, persistent entity.
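The compaction idea can be sketched client-side (this is a conceptual illustration, not OpenAI's server-side mechanism; `summarize` here is a placeholder for what would really be a model-generated summary, and the history entries are invented): old turns are folded into a single compact summary while recent turns are kept verbatim.

```python
# Sketch of compaction: fold older turns into one summary message so the
# history fits a budget without losing essential context. Illustrative only.

def compact(history, keep_recent, summarize):
    """Replace all but the last `keep_recent` turns with one summary turn."""
    if len(history) <= keep_recent:
        return list(history)
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [f"SUMMARY OF {len(old)} EARLIER TURNS: {summarize(old)}"] + recent

def summarize(turns):
    # Placeholder: a real system would have the model write this summary.
    return "; ".join(t.split(":", 1)[0] for t in turns)

history = [
    "TASK: reconcile Q3 invoices against the ledger",
    "TOOL CALL 1: fetched 40 invoices",
    "REASONING: invoice #17 is a duplicate, skip it",
    "TOOL CALL 2: fetched ledger entries",
]

compacted = compact(history, keep_recent=2, summarize=summarize)
# The compacted history is one summary line plus the two most recent turns:
# the token footprint shrinks, but a trace of the earlier work remains.
```

The contrast with truncation is the point: instead of deleting old turns outright, the agent retains a condensed record of them, which is what allows very long sessions to keep their bearings.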

Related News

Project N.O.M.A.D: A Self-Sufficient Offline Survival Computer with AI and Essential Tools for Anytime, Anywhere Access
Technology


Project N.O.M.A.D is introduced as a self-sufficient, offline survival computer designed to provide users with critical tools, knowledge, and AI capabilities. This system aims to ensure users can access information and maintain an advantage regardless of their location or connectivity status. The project emphasizes self-reliance and preparedness through its integrated features.

MiroFish: A Concise and Universal Swarm Intelligence Engine for Predicting Everything
Technology


MiroFish, an innovative project by 666ghj, has emerged as a trending repository on GitHub. Described as a concise and universal swarm intelligence engine, MiroFish aims to predict a wide array of phenomena. The project's core concept revolves around leveraging collective intelligence to offer predictive capabilities across various domains. Further details regarding its specific applications or underlying technology are not provided in the initial description.

GitNexus: Zero-Server Code Smart Engine Transforms GitHub Repos and ZIP Files into Interactive Knowledge Graphs with Built-in Graph RAG Agent for Enhanced Code Exploration
Technology


GitNexus is a client-side knowledge graph creator that operates entirely within the browser, requiring no server-side code. Users can input GitHub repositories or ZIP files to generate an interactive knowledge graph, which includes a built-in Graph RAG agent. This tool is designed to significantly enhance code exploration by providing a visual and interactive way to understand codebases.