Industry News · AI · Cloud Services · Developer Tools

Google Restricts Antigravity Access for OpenClaw Users Citing 'Malicious Usage' and Overwhelmed Systems, Highlighting Rivalry with OpenAI

Google has sparked controversy by restricting access to its Antigravity 'vibe coding' platform for users who integrated it with the open-source AI agent OpenClaw. Google alleges 'malicious usage,' stating that these users accessed an excessive volume of Gemini tokens through third-party platforms such as OpenClaw, degrading service for other Antigravity customers. Some affected users reported losing access to their Google accounts entirely. The move is widely read as a strategic response: OpenClaw's creator, Peter Steinberger, recently joined OpenAI, Google's primary rival, and while OpenClaw remains open source, it is now financially backed and strategically guided by OpenAI. Google DeepMind engineer Varun Mohan confirmed the crackdown, citing the need to address service degradation caused by users violating the Terms of Service, and indicated a path for some unaware users to regain access.

VentureBeat

Google has initiated a significant enforcement action against certain users of its Antigravity 'vibe coding' platform, citing 'malicious usage' and causing considerable controversy among developers. The restrictions, which began this weekend and continued into Monday, February 23rd, primarily affected users who had integrated the open-source autonomous AI agent OpenClaw with Antigravity-built agents, or those who had connected OpenClaw agents to their Gmail accounts. These users subsequently reported losing access to their Google accounts.

According to Google, the affected users were leveraging Antigravity to obtain a larger volume of Gemini tokens via third-party platforms such as OpenClaw. This activity, Google claims, overwhelmed its systems and degraded the quality of service for other Antigravity customers. The enforcement action has effectively cut off several users, bringing to light potential architectural and trust issues in OpenClaw's integration with Google's services.

The timing of Google's crackdown is particularly noteworthy. Just a week prior, on February 15th, OpenAI CEO Sam Altman announced that Peter Steinberger, the creator of OpenClaw, had joined OpenAI to lead its 'next generation of personal agents.' Although OpenClaw continues to operate as an open-source project under an independent foundation, it now receives financial backing and strategic guidance from OpenAI, Google's main competitor in the AI space. By severing OpenClaw's access to Antigravity, Google is not merely safeguarding its server infrastructure; it is also effectively disrupting a channel that allowed an OpenAI-affiliated tool to utilize Google's advanced Gemini models.

Varun Mohan, a Google DeepMind engineer and former CEO and founder of Windsurf, addressed the situation in an X post. He stated that the company had observed a 'massive increase in malicious usage' of the Antigravity backend, which had severely impacted the quality of service for legitimate users. Mohan emphasized the necessity of quickly restricting access for users who were not using the product as intended. He also acknowledged that a subset of these users might have been unaware that their actions violated Google's Terms of Service (ToS) and indicated that a pathway would be provided for them to regain access.

Related News

Industry News

NZ Health App Breach: Alive Patients Falsely Marked Deceased, Names Changed to 'Charlie Kirk'

A significant breach in a New Zealand health app has led to alarming data inaccuracies, with living patients being incorrectly marked as deceased and their names altered to 'Charlie Kirk'. The extent and implications of this breach are currently under investigation, raising serious concerns about patient data integrity and the security of health information systems in New Zealand. Further details regarding the cause of the breach and the number of affected individuals are yet to be released.

Industry News

Goldman Sachs Report: AI Contributed 'Basically Zero' to US Economic Growth Last Year

According to a report by Goldman Sachs, Artificial Intelligence (AI) had a negligible impact on US economic growth last year, contributing 'basically zero'. This assessment suggests that despite widespread discussion and investment in AI technologies, its tangible effects on the broader economy have yet to materialize significantly. The report's findings indicate that the anticipated economic boost from AI has not been observed in the recent past, prompting a re-evaluation of the immediate economic benefits of AI integration.

Industry News

Anthropic Accuses DeepSeek, Moonshot AI, and MiniMax of Industrial-Scale Claude Model Theft Using 24,000 Fake Accounts

Anthropic has publicly accused three prominent Chinese AI laboratories—DeepSeek, Moonshot AI, and MiniMax—of orchestrating large-scale campaigns to extract capabilities from its Claude models. The San Francisco-based AI company alleges that these labs collectively generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, violating Anthropic's terms of service and regional access restrictions. Anthropic describes these campaigns as the most concrete public evidence of foreign competitors systematically using 'distillation' to bypass years of research and significant investment. The company warned that these campaigns are increasing in intensity and sophistication, requiring urgent, coordinated action from industry, policymakers, and the global AI community. This disclosure escalates tensions between American and Chinese AI developers and is linked to the ongoing debate in Washington regarding export controls on advanced AI chips, a policy Anthropic has actively supported.