
RSA 2026: AI Cybersecurity Maturity - Why AI Can’t Fix Broken Systems

Dr. James Stanger

You can’t drop AI into chaos and expect wisdom

It’s always exciting – and a bit unsettling – to witness the organized chaos of the showroom floors at RSA San Francisco and to experience the wisdom of the presenters. It’s exciting to see the incredible progress in products and best practices: RSA remains a creative nexus where enterprises and governments engage in learning, and a natural place for us at CompTIA. Yet it’s also unsettling because, as hard as we’ve tried, most organizations haven’t moved the cybersecurity needle as much as they should. Sure, there are some paragons of maturity and efficiency. But they remain in the minority.

Countering “toxic combinations”

In our presentation at RSA 2026, my colleague, Patrick Johnson, and I deliberately avoided another “AI buzzword” talk. Instead, we focused on something we keep seeing repeatedly in real organizations: Patterns where otherwise promising AI initiatives collide with immature processes, cultural blind spots, and long‑standing technical debt. We call these patterns toxic combinations—and they help explain why AI so often fails to deliver the outcomes leaders expect.

Let me digress for a second, because it’s important at this point in the blog to insert some sort of science fiction reference. In our case, I’ll use a classic Original Series Star Trek reference from the episode “The Ultimate Computer.” At the end of the episode, after humans have (who knew?) saved the day from a run-amok AI system, Mr. Spock and Dr. McCoy get into an inevitable argument about the merits of human / AI interaction. Spock retorts, “It would be most interesting to impress your memory engrams on a computer, Doctor. The resulting torrential flood of illogic would be most entertaining."

You see, even back in the 1960s, television audiences understood the issues that can occur if you integrate, automate, and iterate the wrong things: Chaos. Yet, the entire tech industry is saddled with the following demands:

  • Deliver instant value to the organization by implementing resilient technologies.
  • Implement governance at speed – no one can afford to spend too long building compliance frameworks in the face of constant change.
  • Duplicate yourself into a bot or agent to create efficiencies.

These are worthy – and necessary – goals. They’re table stakes, really. But make sure you duplicate the right things.

Inserting AI into bad situations

Organizations feel enormous pressure to “do something” with AI, whether in operations, cybersecurity, software development, or customer engagement. But our research and field experience suggest that AI can’t thrive in bad situations or in theoretical workflows. It operates inside real, tangible workflows, alongside people, connected to systems that already exist. And if those workflows are broken or toxic, AI doesn’t fix them—it scales them.

One of the most dangerous assumptions we see is the belief that AI can simply be dropped into a situation to make it better. We have all seen situations where old processes are reused, unchanged, even when new technologies ask us to rethink them. Broken incentives remain intact. Shadow IT and undocumented workflows quietly persist and exert serious gravity. Some of these shadow workflows are actually useful and deliver real value, but because they remain undocumented, they often get eliminated when AI is put in place the wrong way.

In these environments, AI often substitutes judgment rather than improving it, automates ambiguity instead of clarity, and accelerates poor decisions rather than eliminating them. That’s not transformation—that’s faster failure. This might explain why almost 80% of AI implementations experience what we call “backtracking”: a project begins, then gets dialed back to a prior, usually analog, process. We’ve seen it in all areas where we operate at CompTIA: in enterprises, governments, and learning institutions.

Many of the toxic combinations we highlighted live in the “in‑between” spaces: Where humans hand off work to systems, where APIs connect services, where agents are granted permissions without sufficient oversight, or where workflows sprawl without a clear owner. These interstitial spaces are where risk concentrates—and where AI now exerts the most force. When an AI assistant is corrupted, mis-scoped, or over‑trusted, the consequences can cascade across systems far faster than traditional tooling ever allowed.

It’s not all toxic!

Yet despite these risks, our message was not pessimistic. In fact, it was cautiously optimistic. Preparing for AI—really preparing—forces organizations to confront issues they have often ignored – or deferred – for years. To deploy AI responsibly, organizations must inventory workflows, document decision paths, rationalize processes, and clarify ownership. They must define what an AI system can see, what it can do, and what rules govern it. Those are not new ideas in cybersecurity or IT governance—but AI has turned them into immediate, unavoidable requirements.
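To make that inventory concrete, here is a minimal sketch of what “define what an AI system can see, what it can do, and what rules govern it” might look like as an explicit policy object. The Python below is illustrative only; the assistant name, data sources, and actions (e.g., `soc-triage-assistant`, `siem_alerts`, `close_ticket`) are hypothetical, not part of any specific product or framework:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Declares, in one place, what an AI assistant may see and do."""
    name: str
    readable_sources: set[str] = field(default_factory=set)        # what it can see
    allowed_actions: set[str] = field(default_factory=set)         # what it can do
    requires_human_approval: set[str] = field(default_factory=set) # the rules that govern it

    def can_read(self, source: str) -> bool:
        # Anything not explicitly listed is invisible to the assistant.
        return source in self.readable_sources

    def check_action(self, action: str) -> str:
        # Default-deny: unknown actions are refused, sensitive ones escalate.
        if action not in self.allowed_actions:
            return "deny"
        if action in self.requires_human_approval:
            return "escalate"
        return "allow"

# Hypothetical triage assistant: scoped to read alerts, never to close tickets alone.
policy = AgentPolicy(
    name="soc-triage-assistant",
    readable_sources={"siem_alerts", "asset_inventory"},
    allowed_actions={"summarize_alert", "open_ticket", "close_ticket"},
    requires_human_approval={"close_ticket"},
)

print(policy.can_read("hr_records"))        # out-of-scope sources stay invisible
print(policy.check_action("close_ticket"))  # destructive actions escalate to a human
```

The point is not the code itself but the discipline it forces: writing the policy down means someone has inventoried the workflow, named an owner, and decided where a human stays in the loop.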

This is where things become genuinely interesting. As organizations implement AI, many are inadvertently increasing their overall maturity. Nowhere is this more evident than in cybersecurity. For decades, cybersecurity has struggled to move the needle decisively. Progress has been real, but often slow, reactive, and fragmented. AI, however, demands better data hygiene, clearer access controls, stronger identity practices, and tighter workflow governance simply to function safely. Immaturity becomes a blocker to value, not just a source of risk. This way, organizations can avoid their own versions of the UK Post Office scandal and various supply chain attacks, and implement AI in a way that truly builds cybersecurity maturity.

Making work environments safe for AI

As a result, organizations are rationalizing workflows. For example, we’re seeing serious efforts where organizations are overhauling SOC operations and rethinking incident response, not because a framework told them to, but because AI made inefficiencies and blind spots impossible to ignore. In some cases, we may be seeing more cybersecurity maturity emerge in a couple of years of AI-driven change than the field achieved in the previous twenty or thirty years through policy and tooling alone.

For example, when it comes to incident response, we’re seeing organizations move beyond the clichéd “3 am phone call” type of incident and into investigating chronic, long-term micro-incidents. We’ve found that organizations that have implemented AI properly have been able to identify key areas in workflows where “cybersecurity micro-aggressions” occur. Micro-aggressions can include anything in the OWASP Top 10 (both the traditional and the AI versions), as well as:

  • Substituting theoretical or ideal workflows for real workflows happening in your organization.

  • Faulty communication with the C-suite about how to map technology to business needs.

  • Failing to document and follow up on a cybersecurity issue during the implementation process.

  • Bypassing cybersecurity guardrails when code and platforms are integrated during the DevOps process.

  • Lax governance structures that don’t take AI into account.

  • Ignoring the need for tech workers to upskill.

This last point is, in many ways, the most critical. Over the past year, enterprise CISOs, military leaders, and private equity investors alike have told me that upskilling is the most important element in preparing organizations to use AI maturely. Nothing is more important. Yet these same leaders have also noted that organizations don’t quite realize the negative impact of this particular micro-aggression.

So, the takeaway from our presentation was simple but demanding: AI can do remarkable things—but only if you prepare the way. If you do AI right, you experience the serious leverage that any exponential technology can provide. If you don’t, then you’re going to have even bigger problems. Organizations that clean up workflows, reduce cruft, and mature their thinking will find AI amplifying their best qualities. Those that don’t may find that AI simply holds up a mirror to problems they can no longer afford to ignore.

In that sense, AI may not just be a technological shift. It may be the strongest incentive we’ve ever had to finally detoxify and grow up – I mean, mature. Then, we’ll be able to enjoy the wisdom that a proper AI / human dialog can bring in a healthy environment.

If you would like to discuss how CompTIA can help your organization tackle AI and cybersecurity challenges, reach out here.