OpenAI’s New GPT-5.5 Powers Codex on NVIDIA Infrastructure as AI Agents Revolutionize Knowledge Work
Industry News · OpenAI · NVIDIA · GPT-5.5

OpenAI has officially integrated its latest frontier model, GPT-5.5, into Codex, its specialized agentic coding application. This technological leap is supported by NVIDIA's high-performance GB200 NVL72 rack-scale systems, marking a significant milestone in the evolution of AI agents. While AI agents have already transformed developer workflows, the focus is now shifting toward broader knowledge work, including complex problem-solving and innovation. The collaboration highlights the synergy between OpenAI's advanced modeling and NVIDIA's infrastructure, aiming to drive a new frontier of productivity. With over 10,000 users already part of the ecosystem, this deployment signifies a major step in scaling agentic AI capabilities for professional environments.

NVIDIA Newsroom

Key Takeaways

  • GPT-5.5 Integration: OpenAI’s latest frontier model, GPT-5.5, now powers the Codex agentic coding application.
  • NVIDIA Infrastructure: The system is optimized to run on NVIDIA GB200 NVL72 rack-scale systems for high-performance computing.
  • Evolution of AI Agents: The focus of AI agents is expanding from developer workflows to complex knowledge work and innovation.
  • Scalable Impact: The platform is already supporting a large user base, with over 10,000 participants involved in the ecosystem.

In-Depth Analysis

The Shift to Knowledge Work

AI agents have traditionally been recognized for their ability to streamline developer workflows. However, the introduction of GPT-5.5 into Codex signals a transition toward a new frontier: knowledge work. This involves processing vast amounts of information, solving intricate problems, and generating new ideas to drive innovation. By leveraging the reasoning capabilities of GPT-5.5, Codex is positioned to move beyond simple code generation into the realm of comprehensive cognitive assistance.

Hardware-Software Synergy with NVIDIA

The deployment of GPT-5.5 on NVIDIA GB200 NVL72 rack-scale systems underscores the critical role of specialized infrastructure in modern AI. These systems provide the necessary computational power to handle the demands of OpenAI’s latest frontier model. This collaboration ensures that agentic applications like Codex can operate with the efficiency and scale required for professional environments, allowing for real-time problem-solving and complex data processing.

Industry Impact

The integration of GPT-5.5 into Codex on NVIDIA hardware represents a pivotal moment for the AI industry. It demonstrates the maturation of agentic AI, moving from experimental tools to robust systems capable of handling high-level professional tasks. By focusing on knowledge work and innovation, this development sets a new standard for how enterprises might utilize AI to solve problems that were previously reserved for human experts. Furthermore, the use of NVIDIA's GB200 systems highlights the ongoing dependency of cutting-edge software on massive hardware scaling to achieve "frontier" performance.

Frequently Asked Questions

Question: What is the primary model powering the new version of Codex?

Codex is now powered by GPT-5.5, which is OpenAI’s latest frontier model designed for advanced reasoning and agentic tasks.

Question: What hardware infrastructure is being used to run GPT-5.5?

GPT-5.5 runs on NVIDIA GB200 NVL72 rack-scale systems, which are designed to provide the high-performance computing necessary for frontier AI models.

Question: How is the role of AI agents changing according to this announcement?

AI agents are moving beyond just developer workflows to tackle knowledge work, which includes processing information, solving complex problems, and driving innovation.

Related News

50 Rising AI Startups in Asia: Identifying the Next Generation of Industry Leaders
Industry News

Tech in Asia has released a curated list of 50 rising AI startups across the Asian continent, highlighting companies that are positioned to become the next major players in the global artificial intelligence landscape. The report identifies these specific entities as having the potential to achieve significant scale and influence, marking them as the 'next big thing' in the industry. This selection underscores the rapid growth and increasing importance of the Asian AI ecosystem as it produces a new wave of innovative companies ready to disrupt the market.

Intercom Rebrands Corporate Entity to Fin: A Strategic Pivot Toward AI Customer Agents
Industry News

Intercom has officially announced a major corporate rebranding, changing its company name to Fin. While the well-known customer service software platform will retain the Intercom name—supported by the recent launch of Intercom 2—the parent company will now align its identity with its flagship customer agent platform, Fin. This move marks the culmination of a multi-year transition involving shifts in culture, pricing, and product strategy. CEO Eoghan McCabe emphasizes that the change is necessary to move beyond past successes and embrace the future of the service agent category. All 1,400 employees are now officially part of Fin, signaling a total commitment to the company's AI-driven technological direction.

Claude Design Users Warn of Project Data Loss and Credit Expiration Following Subscription Cancellation
Industry News

A recent report on Hacker News has raised significant concerns about data retention and credit management in Anthropic's Claude ecosystem. A user identified as 'pycassa' described immediately losing access to Claude Design projects after unsubscribing from a five-month Claude Code Max subscription. The report also notes that promotional credits—granted for earlier service instabilities—reportedly vanished upon plan termination and remained inaccessible even after the user resubscribed. The incident has sparked a broader discussion in the developer community about the 'fast and loose' nature of bleeding-edge AI tools and the risks of billing systems that prioritize growth over robust, user-centric data persistence.