OpenAI Introduces ‘Lockdown Mode’ and ‘Elevated Risk’ Features to Strengthen ChatGPT Security
OpenAI has announced the launch of two new security features for ChatGPT, aimed at enhancing cyber protection and countering growing threats linked to the misuse of artificial intelligence technologies.
The newly introduced features—“Lockdown Mode” and “Elevated Risk”—are designed to address advanced prompt injection attacks, a technique used by malicious actors to insert deceptive instructions into AI systems in order to extract sensitive data or trigger unauthorized actions.
Lockdown Mode is an advanced, optional security setting tailored for high-risk user groups, including corporate executives and cybersecurity teams within large organizations. The feature imposes stricter limitations on ChatGPT’s interaction with external systems and integrated tools, reducing the likelihood of data leakage stemming from exploited vulnerabilities.
Initially, Lockdown Mode will be available to subscribers of ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and ChatGPT for Teachers, with broader rollout planned in the coming months.
The “Elevated Risk” feature, meanwhile, introduces a classification system across certain capabilities within ChatGPT and related products such as Atlas and Codex. Clear visual indicators will notify users when specific tools may involve additional security risks—particularly those requiring network connectivity or extended permissions beyond the core chat environment. The goal is to enhance transparency and enable users to assess potential risks before activating certain features.
Risk indicators will appear alongside functionalities that may represent exposure points, especially integrations with external systems or tools that have not yet fully mitigated vulnerabilities according to industry standards.
The move comes amid increasing warnings about the exploitation of AI tools in cyberattacks. Intelligent conversational systems have become direct targets for attempts to bypass safeguards or manipulate outputs through carefully crafted interactions.
By introducing layered security controls, OpenAI signals a broader industry shift toward embedding advanced protection mechanisms within AI systems—particularly as adoption expands across sensitive sectors such as healthcare, education, and financial services.
As enterprise reliance on AI accelerates, concepts such as advanced protection modes and risk classification frameworks are expected to become foundational elements of AI governance, balancing innovation with cybersecurity and data protection requirements.
