Empowering legitimate defenders with scaled access, ecosystem support, and a purpose-built AI model.
- Scaling Trusted Access: The Trusted Access for Cyber (TAC) program is expanding to thousands of verified individual defenders and hundreds of enterprise teams, replacing arbitrary gatekeeping with objective identity verification.
- Introducing GPT-5.4-Cyber: A new, specialized model trained to be cyber-permissive has launched, offering advanced defensive capabilities like binary reverse engineering for vetted security professionals.
- Scaling Defenses in Lockstep: Through continuous ecosystem investments, including the $10M Cybersecurity Grant Program and Codex Security, which has already contributed fixes for over 3,000 vulnerabilities, defenses are scaling in lockstep with advancing AI models.
Advances in artificial intelligence are rewriting the rules of digital security. AI has proven to be a powerful accelerant for cyber defenders, the professionals tasked with keeping our critical systems, data, and users safe, enabling them to find and remediate vulnerabilities in the digital infrastructure we all rely on faster than ever before. However, the same technology is actively being used by threat actors seeking to cause harm. We have been preparing for this reality.
Since 2023, the focus has been on supporting defenders through comprehensive grant programs and strengthened safeguards. Now, as threat actors experiment with novel, AI-driven approaches and sophisticated harnesses to elicit stronger capabilities from existing models, the cybersecurity community can no longer wait for a single, future threshold to act. To meet this moment, we are scaling up our Trusted Access for Cyber (TAC) program and introducing advanced, specialized tools designed to keep defenders several steps ahead.
A Strategy Grounded in Resilience and Capability
Our approach to continuous capability advancement rests on the conviction that cyber risk is already here and accelerating. Long before advanced AI arrived, digital infrastructure was vulnerable. Today, AI models can reason across complex codebases and support meaningful parts of the cyber workflow. Because cyber capabilities are inherently dual-use, risk cannot be defined by the model alone; it depends heavily on the user, their intent, and the level of access they are granted.
To navigate this, our cybersecurity strategy is driven by three core principles:
- Democratized Access: Advanced tools must be made as widely available to legitimate users as possible, without arbitrary gatekeeping. By utilizing clear, objective criteria—such as strong KYC (Know Your Customer) and identity verification protocols—we can confidently grant access to critical infrastructure protectors, public service defenders, and small-scale legitimate actors alike.
- Iterative Deployment: We learn best by carefully putting systems into the real world. By studying their differentiated benefits and risks, we can iteratively update models and safety systems, improving resilience against jailbreaks and adversarial attacks while bolstering defensive strengths.
- Investing in Ecosystem Resilience: We actively support the defender community through trusted access pathways, targeted grants, and contributions to open-source security initiatives. A prime example is our deployment of Codex for Open Source, which provides free security scanning to over 1,000 open-source projects.
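As an illustration of what "objective criteria rather than arbitrary gatekeeping" can mean in practice, an access decision can be expressed as a pure function over verified attributes. Everything below is hypothetical: the attribute names, tiers, and thresholds are invented for this sketch and do not describe the actual TAC policy.

```python
from dataclasses import dataclass

# Hypothetical verified attributes; the real TAC criteria are not public.
@dataclass(frozen=True)
class VerifiedProfile:
    identity_verified: bool  # passed KYC / identity verification
    org_vetted: bool         # employer or organization passed vetting
    abuse_flags: int         # count of prior policy-violation findings

def access_tier(p: VerifiedProfile) -> str:
    """Map objective, verifiable criteria to an access tier."""
    if not p.identity_verified or p.abuse_flags > 0:
        return "standard"        # default safeguards apply
    if p.org_vetted:
        return "tac-enterprise"  # highest trusted-access tier
    return "tac-individual"      # verified individual defender

print(access_tier(VerifiedProfile(True, False, 0)))  # tac-individual
```

The point of the sketch is that every input is independently verifiable, so the same profile always yields the same decision, which is what makes the gatekeeping non-arbitrary.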
Defenses Scaled in Lockstep: The Impact of Codex Security
As AI models become more capable, our defensive mechanisms must scale alongside them. We have steadily advanced our cyber-specific safety training from GPT-5.2 to GPT-5.3-Codex, and now to GPT-5.4, which is classified as having “high” cyber capabilities under our Preparedness Framework.
A cornerstone of this scaling effort is Codex Security. Launched initially in private beta and later as a research preview, Codex Security automatically monitors codebases, validates potential issues, and actively proposes fixes. The results are measurable: the system's precision has improved substantially, contributing to the remediation of over 3,000 critical- and high-severity vulnerabilities across the ecosystem, alongside countless lower-severity findings.
Software development itself must become more secure. By integrating agentic coding models directly into developer workflows, security shifts from an episodic, reactive audit to a proactive, ongoing process of risk reduction.
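The monitor, validate, and fix loop described above can be sketched abstractly. This is a simplified illustration only: Codex Security's actual interfaces are not public, and the `Finding`, `validate`, and triage steps here are stand-ins for whatever detection and confirmation machinery a real pipeline uses.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    file: str
    description: str
    severity: str  # "critical", "high", "medium", or "low"

def triage(findings: list[Finding],
           validate: Callable[[Finding], bool]) -> list[Finding]:
    """Keep only findings confirmed by an independent validation step,
    ordered most-severe first, so proposed fixes target real issues."""
    rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    confirmed = [f for f in findings if validate(f)]
    return sorted(confirmed, key=lambda f: rank[f.severity])

# Toy validation: confirm anything whose description mentions "injection".
findings = [
    Finding("util.py", "stylistic nit", "low"),
    Finding("auth.py", "possible SQL injection", "critical"),
]
print([f.file for f in triage(findings,
                              lambda f: "injection" in f.description)])
```

The design point is the separation of detection from validation: a second, independent confirmation pass is what raises precision, since only validated findings ever reach the fix-proposal stage.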
Scaling Trusted Access and the Launch of GPT-5.4-Cyber
To further empower the security community, we are expanding the Trusted Access for Cyber (TAC) program, originally introduced to reduce friction for authenticated cybersecurity professionals. Today, we are adding new tiers of access for users willing to authenticate themselves as legitimate defenders.
For customers in the highest tiers, we are introducing GPT-5.4-Cyber. This purpose-built, fine-tuned variant of GPT-5.4 features fewer capability restrictions and a refusal boundary calibrated specifically for legitimate cybersecurity work. It unlocks advanced defensive workflows, including binary reverse engineering, which lets security professionals analyze compiled software for malware, vulnerabilities, and robustness without access to the original source code.
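To give a flavor of the kind of task this unlocks: one basic building block of binary reverse engineering is extracting printable strings from a compiled artifact, often an analyst's first pass over a sample before deeper disassembly. The stdlib-only sketch below illustrates that step on a toy byte blob; it is not tied to any model API, and the example bytes are invented.

```python
import re

def extract_strings(blob: bytes, min_len: int = 4) -> list[str]:
    """Return runs of printable ASCII at least min_len bytes long,
    similar in spirit to the Unix `strings` utility."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, blob)]

# Toy "compiled" blob: opaque bytes with embedded string literals,
# the sort of artifact an analyst inspects without source access.
blob = (b"\x7fELF\x02\x01\x01\x00" + b"\x00" * 8
        + b"/tmp/dropper.sh\x00\x90\x90"
        + b"curl -s http://example.test\x00")
print(extract_strings(blob))
# ['/tmp/dropper.sh', 'curl -s http://example.test']
```

Embedded paths and URLs like these are often the fastest triage signal in malware analysis; a capable model can carry the same workflow much further, into control-flow and vulnerability analysis of the binary itself.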
Given the permissive nature of GPT-5.4-Cyber, it is being rolled out through a limited, iterative deployment to vetted security vendors, researchers, and organizations. We recognize that access to such models requires stringent controls, especially for low-visibility deployments like Zero-Data Retention (ZDR), where our ability to observe the user environment or the purpose of a request is limited.
Once approved, all TAC customers benefit from reduced friction around safeguards that typically trigger on dual-use activities. This allows them to seamlessly execute security education, defensive programming, and responsible vulnerability research.
Our current cybersecurity defenses are the culmination of months of iterative improvement. Today’s safeguards successfully reduce cyber risk enough to support the broad deployment of our existing models, and we expect versions of these safeguards to hold strong for upcoming releases.
Models explicitly trained for permissive cybersecurity work will continue to require restrictive deployments and strict access controls. Looking further into the future, as AI capabilities rapidly exceed even the best purpose-built models of today, we anticipate the need for dramatically more expansive defenses. By scaling access, iterating on safeguards, and deeply integrating AI into the fabric of software development, we are building a resilient ecosystem ready for the next era of cyber defense.