
OpenAI Restricts Access to “Cyber” Tool Amid Rising Competition in AI-Driven Cybersecurity

Saturday 2 May 2026 14:33

In a move reflecting intensifying competition in the market for AI-powered cybersecurity tools, OpenAI has begun restricting access to its new “Cyber” tool, powered by the GPT-5.5 model, making it available only to selected cybersecurity experts and accredited institutions.

The decision comes shortly after OpenAI CEO Sam Altman publicly criticized rival Anthropic, which had previously limited access to its own security tools, such as “Mythos,” to a narrow user base. That approach drew scrutiny at the time, with critics describing it as “fear-based marketing” that emphasized technological risks to justify restricted availability.

Despite the earlier criticism, OpenAI appears to be adopting a similar strategy. The company confirmed that the Cyber tool will be rolled out gradually under a “Trusted Access for Cyber” framework—an initiative designed to grant advanced capabilities only to entities that can demonstrate verified expertise in the sensitive field of cybersecurity.

The new tool is considered one of OpenAI’s most advanced offerings in digital security, featuring capabilities such as penetration testing, vulnerability analysis, cyberattack simulation, and advanced malware reverse engineering. While these features position Cyber as a powerful asset for protecting digital infrastructure, they also raise concerns about potential misuse if accessed by malicious actors.

Under the new policy, OpenAI has implemented a multi-layered verification system that assesses applicants and assigns tiered levels of access based on their credentials and operational roles. The company noted that the program has already expanded to include thousands of verified users and hundreds of specialized teams focused on safeguarding critical software systems.

OpenAI also indicated that it is providing more flexible versions of the model tailored specifically to defensive cybersecurity tasks, while easing technical constraints for qualified users.

This shift is likely to reignite broader debate about the future of advanced AI tools—particularly those linked to cybersecurity and offensive capabilities. As concerns grow over the global risks posed by unrestricted deployment, the industry faces mounting pressure to balance accessibility for experts with safeguards against misuse.

In this delicate equilibrium between openness and control, major AI companies appear to be moving toward more selective and restricted models for sensitive technologies—even if that direction challenges the openness narratives promoted in previous years.