Shadow AI and Silent Risks: Why Culture Is the Missing Link in Cybersecurity Governance
The meteoric rise of artificial intelligence in business has pushed cybersecurity professionals to confront more than just technical challenges. The conversation is shifting rapidly toward the cultural and governance aspects of AI adoption — areas that are proving to be just as critical as configuring firewalls or patching vulnerabilities. Organizations that fail to integrate AI governance into their security frameworks are increasingly at risk, not only from external adversaries but from internal blind spots that emerge when technology outpaces policy.
Joe Sullivan, CEO of Ukraine Friends and an industry veteran with decades of security leadership at tech giants like eBay, PayPal, Facebook, and Uber, offered sharp insights on this very topic during a recent FedNinjas podcast episode. Sullivan’s perspective is rooted in firsthand experience watching artificial intelligence evolve from niche curiosity to operational cornerstone.
“When we were first using AI,” Sullivan recalled, “it was about moving from a rules-based approach to a model-based approach for detecting fraud and abuse.” This early phase of AI adoption laid the groundwork for a transformation that’s now reshaping not only security operations but entire organizational cultures.

The Early Adoption Lessons
Sullivan’s reflections from his time at PayPal and Facebook underline how central AI became in the detection of fraud and abuse. But it was his experience at Uber that highlighted the cultural integration of AI. Uber’s commitment to embedding small data science teams inside every department set the stage for proactive problem-solving, whether it was using telematics to ensure driver safety or applying natural language processing (NLP) to identify emergencies from customer support tickets in real-time.
This operational model fostered a deep respect for data-informed decision-making across teams. “If AI was core to the company, the company invested in it,” Sullivan explained. “And when that’s the case, the people, the processes, and the policies naturally start to adapt.”
But as Sullivan highlighted, the challenge now lies beyond the technical: “If security teams don’t embrace AI themselves, how can they possibly develop the rules for responsible AI use?” This core question reveals the cultural tension emerging across industries, where fear, ignorance, or corporate inertia often stand in the way of meaningful AI governance.
Shadow AI: The Quiet Risk
The pace of AI adoption inside organizations has created a new class of risks — risks born not from attackers, but from employees. Whether it’s a well-meaning worker using an AI note-taker in a confidential meeting or relying on an AI assistant to draft client communications, the line between efficiency and exposure has blurred.
“Employees are so empowered to add these plugins, to add little tools to their workflows,” Sullivan observed. “They’re motivated to do it because instead of having to take notes during a full meeting, the AI does it for them.” The result is an ungoverned ecosystem of tools, each potentially exfiltrating sensitive data to third parties without the organization’s knowledge or consent.
This phenomenon, often referred to as “Shadow IT,” has found a new expression in AI tools — or as some now call it, “Shadow AI.” Sullivan likened it to the early days of browser plugins like Grammarly, which, while helpful, raise immediate red flags for security professionals. “At a corporation, do you really want every single email that’s being written shipped off to some other company that you don’t know anything about their security?” he asked.
When employees adopt AI tools informally, without scrutiny or governance, organizations risk not only data loss but regulatory non-compliance. The question is no longer whether employees will use AI — they already are — but whether leadership is prepared to manage it.
The Cultural Stalemate: Fear vs. Familiarity
One of the greatest cultural challenges of AI adoption isn’t the technology itself, but the human response to it. Companies, especially those with deep security concerns, often resort to outright bans rather than strategic enablement. Sullivan highlighted the fallacy of this approach: “If the bad guys are going to use AI, then we have to use it too. It’s an arms race.”
The adversarial nature of cybersecurity requires defenders to maintain parity with attackers, who are undoubtedly adopting AI at a breakneck pace. From generating malicious payloads to automating social engineering campaigns, AI tools are making sophisticated attacks more accessible than ever.
Yet even as organizations acknowledge the offensive potential of AI, internal skepticism remains high. Some of this resistance stems from historical struggles to balance innovation with risk management, while other hesitations emerge from a lack of institutional AI literacy. “You can’t ask people who’ve never driven a car to come up with safety features for cars,” Sullivan quipped. “The more you use the product, the better you understand the risks.”
In other words, security teams must not only permit AI experimentation but lead it. Familiarity breeds understanding — and only with understanding can true governance be implemented.
Guardrails: A Modern Necessity
The conversation about AI governance often lands on the topic of guardrails — clear boundaries and best practices for safe AI usage. Sullivan pointed out that many of the threats AI poses are actually extensions of long-standing security principles. “At its simplest, the risks are the same as the risks we’ve been dealing with forever,” he said.
From software development lifecycle (SDLC) standards to data access control policies, the foundations of AI governance are not entirely new. What AI introduces, however, is scale and complexity. For instance, with large language models (LLMs) and generative AI, prompt injection attacks — where malicious input manipulates the AI’s output — present challenges strikingly similar to classic web vulnerabilities like SQL injection or cross-site scripting.
The difference lies in the reach. When AI is embedded into automated business processes, small security oversights can rapidly escalate into enterprise-wide incidents. “If the company is training AI models on data meant for internal decision-making, and that model inadvertently exposes executive-level discussions to lower-level employees, that’s not just a data leak — it’s a governance failure,” Sullivan warned.
The Regulatory Gap
Another pressure point in AI adoption is the regulatory environment, which is currently playing catch-up. Sullivan noted the flurry of legislative interest around AI guardrails, including bipartisan initiatives, but cautioned against waiting for formal regulation to drive action. “We don’t need to convene massive events to get started,” he stressed. “The basics of AI safety are already rooted in cybersecurity best practices.”
Organizations that treat AI governance as a compliance-only task miss the opportunity to create a cultural shift that embeds responsible AI use into daily operations. From ethical sourcing of training data to strict access controls and transparent algorithm audits, security leaders should take proactive ownership of AI risk, rather than waiting for regulators to mandate it.
Human Factors and the Future
Despite the technical marvels AI offers, Sullivan’s emphasis remained grounded in human factors. He warned against overlooking the impact of AI adoption on vulnerable populations and highlighted the parallels to prior tech booms, such as the internet’s early years.
“As a society, we have obligations bigger than just running really fast inside one corporation,” Sullivan explained. “The people least able to protect themselves tend to get hurt when corporations don’t take responsibility and governments don’t have the ability to act quickly enough.”
For cybersecurity professionals, this reinforces a growing reality: responsible AI adoption isn’t just about securing systems; it’s about protecting people. Whether through technical guardrails, clear cultural expectations, or governance structures designed to evolve with the technology, AI’s future in cybersecurity will depend heavily on how seriously organizations take this social contract.
Looking Ahead
AI is accelerating business productivity, but it’s also reshaping the landscape of cybersecurity risks. As Sullivan and his anonymous co-discussants underscored, the gap between innovation and governance can no longer be ignored. The challenge for security leaders isn’t simply to understand AI’s capabilities, but to embed responsible AI use deep into the organizational DNA — before adversaries or compliance auditors force their hand.
References Cited:
