CompTIA SecAI+ is the first certification in our expansion series, designed to help you secure, govern and responsibly integrate artificial intelligence into your cybersecurity operations. You’ll build the skills to defend AI systems, meet global compliance expectations and use AI to enhance threat detection, automation and innovation—so you can strengthen your expertise and help keep your organization’s systems and data secure.
Skills you'll learn
Apply AI concepts to strengthen your organization’s cybersecurity posture.
Secure AI systems using advanced controls and protections to safeguard data, models, and infrastructure.
Leverage AI technologies to automate workflows, accelerate incident response, and scale security operations.
Navigate global GRC frameworks to ensure ethical and compliant AI adoption across industries.
Defend against AI-driven threats like adversarial attacks, automated malware, and malicious use of generative AI.
Integrate AI securely into DevSecOps pipelines and enterprise security strategies.
Exam details
Exam version: V1
Exam series code: CY0-001
Launch date: February 17, 2026
Number of questions: Maximum of 60, multiple-choice and performance-based
Duration: 60 minutes
Passing score: 600 (on a scale of 100–900)
Languages: English
Recommended experience: 3–4 years in IT, including 2+ years of hands-on cybersecurity experience; Security+, CySA+, PenTest+, or equivalent certification recommended.
Retirement: Estimated 3 years after launch
SecAI+ (V1) exam objectives summary
Basic AI concepts related to cybersecurity (17%)
- Explain core AI principles and terminology: Machine learning, deep learning, natural language processing, and automation.
- Identify AI applications in security: Use cases for AI in threat detection, defense, and security operations.
- Recognize AI-driven threats: Automated phishing, polymorphic malware, adversarial machine learning, and malicious use of generative AI.
Securing AI systems (40%)
- Implement security controls: Protect AI systems, data, and models using robust technical safeguards.
- Secure AI deployment environments: Apply best practices across on-premises, cloud, and hybrid infrastructures.
- Mitigate adversarial risks: Defend against attacks targeting AI models, data pipelines, and inference layers.
AI-assisted security (24%)
- Enhance detection and response: Use AI-driven tools to identify anomalies, detect threats, and accelerate incident remediation.
- Automate security workflows: Integrate AI for event triage, alert correlation, and response orchestration.
- Apply AI techniques in operations: Incorporate AI into threat modeling, behavior analysis, and continuous monitoring.
AI governance, risk, and compliance (19%)
- Understand regulatory frameworks: Identify global governance requirements and their implications for AI adoption.
- Integrate GRC into AI projects: Incorporate governance, risk management, and compliance practices throughout the AI lifecycle.
- Ensure responsible AI use: Apply ethical guidelines, legal standards, and industry frameworks such as GDPR and the NIST AI RMF.