Industry News, AI, Cloud Services, Developer Tools

Google Restricts Antigravity Access for OpenClaw Users Citing 'Malicious Usage' and Overwhelmed Systems, Highlighting Rivalry with OpenAI

Google has sparked controversy by restricting access to its Antigravity 'vibe coding' platform, particularly for users who integrated it with the open-source AI agent OpenClaw. Google alleges 'malicious usage,' stating that these users were accessing an excessive number of Gemini tokens through third-party platforms like OpenClaw, leading to service degradation for other Antigravity customers. Some affected users reported losing access to their Google accounts. The move is seen as a strategic response, especially given that OpenClaw's creator, Peter Steinberger, recently joined OpenAI, Google's primary rival. While OpenClaw remains open-source, it is now financially backed and strategically guided by OpenAI. Google DeepMind engineer Varun Mohan confirmed the crackdown, noting the need to address service degradation caused by users not adhering to the Terms of Service, and indicated a path for some unaware users to regain access.

VentureBeat

Google has initiated a significant enforcement action against certain users of its Antigravity 'vibe coding' platform, citing 'malicious usage' and causing considerable controversy among developers. The restrictions, which began this weekend and continued into Monday, February 23rd, primarily affected users who had integrated the open-source autonomous AI agent OpenClaw with Antigravity-built agents, or those who had connected OpenClaw agents to their Gmail accounts. These users subsequently reported losing access to their Google accounts.

According to Google, the affected users were leveraging Antigravity to obtain a larger volume of Gemini tokens via third-party platforms such as OpenClaw. This activity, Google claims, overwhelmed its system and degraded the quality of service for other Antigravity customers. The company's action has effectively cut off several users, bringing to light potential architectural and trust issues associated with OpenClaw's integration with Google's services.

The timing of Google's crackdown is particularly noteworthy. Just a week prior, on February 15th, OpenAI CEO Sam Altman announced that Peter Steinberger, the creator of OpenClaw, had joined OpenAI to lead its 'next generation of personal agents.' Although OpenClaw continues to operate as an open-source project under an independent foundation, it now receives financial backing and strategic guidance from OpenAI, Google's main competitor in the AI space. By severing OpenClaw's access to Antigravity, Google is not merely safeguarding its server infrastructure; it is also effectively disrupting a channel that allowed an OpenAI-affiliated tool to utilize Google's advanced Gemini models.

Varun Mohan, a Google DeepMind engineer and former CEO and founder of Windsurf, addressed the situation in an X post. He stated that the company had observed a 'massive increase in malicious usage' of the Antigravity backend, which had severely impacted the quality of service for legitimate users. Mohan emphasized the necessity of quickly restricting access for users who were not using the product as intended. He also acknowledged that a subset of these users might have been unaware that their actions violated Google's Terms of Service (ToS) and indicated that a pathway would be provided for them to regain access.

Related News

Anthropic to Restrict Claude Code Usage with Third-Party Tools Due to Subscription Design Constraints

Anthropic has announced plans to restrict the use of Claude Code when integrated with third-party tools and harnesses. The decision was communicated by Boris Cherny, the head of Claude Code, via a statement on X (formerly Twitter). According to Cherny, the current subscription models for Claude Code were not originally designed to accommodate the specific usage patterns generated by external third-party harnesses. This move highlights a strategic shift in how Anthropic manages its developer tools and subscription structures, ensuring that usage remains aligned with the intended design of their service tiers. The restriction aims to address discrepancies between user behavior on third-party platforms and the underlying subscription framework provided by Anthropic.

India’s Gujarat High Court Implements Strict Restrictions on AI Usage Within Judicial Decision-Making Processes

The Gujarat High Court in India has officially established new boundaries regarding the integration of Artificial Intelligence within the judicial system. According to recent reports, the court has restricted the use of AI in formal judicial decisions, while still permitting its application for specific supportive roles. Under the new guidelines, AI technologies can be utilized for administrative tasks, legal research, and IT automation. However, a critical caveat remains: all AI-generated outputs must undergo a mandatory review by a human officer to ensure accuracy and accountability. This move highlights a cautious approach to legal tech, prioritizing human oversight in the delivery of justice while leveraging automation for operational efficiency.

The Microsoft Copilot Naming Paradox: Mapping Over 75 Different Products Under One Brand Name

A recent investigation into Microsoft's branding strategy reveals a complex ecosystem where the name 'Copilot' now represents at least 75 distinct entities. The research, compiled from various product pages, launch announcements, and marketing materials, highlights that 'Copilot' is no longer just a single AI assistant. Instead, it encompasses a vast array of applications, features, platforms, physical hardware like keyboard keys, and even an entire category of laptops. The study found that no single official source, including Microsoft’s own documentation, provides a comprehensive list of these products. This fragmentation has led to significant confusion, as the brand now simultaneously refers to end-user tools and the infrastructure used to build additional AI assistants.