The FedNinjas


Shadow AI and Silent Risks: Why Culture Is the Missing Link in Cybersecurity Governance

FedNinjas Team · April 22, 2025 · 7 minute read


The meteoric rise of artificial intelligence in business has pushed cybersecurity professionals to confront more than just technical challenges. The conversation is shifting rapidly toward the cultural and governance aspects of AI adoption — areas that are proving to be just as critical as configuring firewalls or patching vulnerabilities. Organizations that fail to integrate AI governance into their security frameworks are increasingly at risk, not only from external adversaries but from internal blind spots that emerge when technology outpaces policy.

Joe Sullivan, CEO of Ukraine Friends and an industry veteran with decades of security leadership at tech giants like eBay, PayPal, Facebook, and Uber, offered sharp insights on this very topic during a recent FedNinjas podcast episode. Sullivan’s perspective is rooted in firsthand experience watching artificial intelligence evolve from niche curiosity to operational cornerstone.

“When we were first using AI,” Sullivan recalled, “it was about moving from a rules-based approach to a model-based approach for detecting fraud and abuse.” This early phase of AI adoption laid the groundwork for a transformation that’s now reshaping not only security operations but entire organizational cultures.

The Early Adoption Lessons

Sullivan’s reflections from his time at PayPal and Facebook underline how central AI became to the detection of fraud and abuse. But it was his experience at Uber that highlighted the cultural integration of AI. Uber’s commitment to embedding small data science teams inside every department set the stage for proactive problem-solving, whether it was using telematics to ensure driver safety or applying natural language processing (NLP) to identify emergencies from customer support tickets in real time.

This operational model fostered a deep respect for data-informed decision-making across teams. “If AI was core to the company, the company invested in it,” Sullivan explained. “And when that’s the case, the people, the processes, and the policies naturally start to adapt.”

But as Sullivan highlighted, the challenge now lies beyond the technical: “If security teams don’t embrace AI themselves, how can they possibly develop the rules for responsible AI use?” This core question reveals the cultural tension emerging across industries, where fear, ignorance, or corporate inertia often stand in the way of meaningful AI governance.

Shadow AI: The Quiet Risk

The pace of AI adoption inside organizations has created a new class of risks — risks born not from attackers, but from employees. Whether it’s a well-meaning worker using an AI note-taker in a confidential meeting or relying on an AI assistant to draft client communications, the line between efficiency and exposure has blurred.

“Employees are so empowered to add these plugins, to add little tools to their workflows,” Sullivan observed. “They’re motivated to do it because instead of having to take notes during a full meeting, the AI does it for them.” The result is an ungoverned ecosystem of tools, each potentially exfiltrating sensitive data to third parties without the organization’s knowledge or consent.

This phenomenon, often referred to as “Shadow IT,” has found a new expression in AI tools — or as some now call it, “Shadow AI.” Sullivan likened it to the early days of browser plugins like Grammarly, which, while helpful, raise immediate red flags for security professionals. “At a corporation, do you really want every single email that’s being written shipped off to some other company that you don’t know anything about their security?” he asked.

When employees adopt AI tools informally, without scrutiny or governance, organizations risk not only data loss but regulatory non-compliance. The question is no longer whether employees will use AI — they already are — but whether leadership is prepared to manage it.
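The governance gap described here can be made concrete. The sketch below shows one way a security team might surface Shadow AI from egress traffic: compare observed destinations against a tracked list of AI-tool endpoints and an approved subset. The domain names and the simplified "user domain" log format are illustrative assumptions, not any specific product's output.

```python
# Approved AI services (hypothetical) vs. the broader set of AI-tool
# endpoints an organization might track. All domains are placeholders.
APPROVED_AI_DOMAINS = {"api.approved-ai.example"}
KNOWN_AI_DOMAINS = {
    "api.approved-ai.example",
    "notetaker.example",
    "writing-assistant.example",
}

def flag_shadow_ai(proxy_log_lines):
    """Return (user, domain) pairs for AI traffic outside the approved list."""
    findings = []
    for line in proxy_log_lines:
        user, domain = line.split()[:2]  # assumed "user domain" log format
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            findings.append((user, domain))
    return findings

sample_log = [
    "alice api.approved-ai.example",
    "bob notetaker.example",
    "carol writing-assistant.example",
]
print(flag_shadow_ai(sample_log))
# → [('bob', 'notetaker.example'), ('carol', 'writing-assistant.example')]
```

In practice the "known AI domains" list would come from threat-intelligence or CASB feeds rather than a hand-maintained set, but the governance decision — which tools are sanctioned — remains a policy question, not a technical one.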

The Cultural Stalemate: Fear vs. Familiarity

One of the greatest cultural challenges of AI adoption isn’t the technology itself, but the human response to it. Companies, especially those with deep security concerns, often resort to outright bans rather than strategic enablement. Sullivan highlighted the fallacy of this approach: “If the bad guys are going to use AI, then we have to use it too. It’s an arms race.”

The adversarial nature of cybersecurity requires defenders to maintain parity with attackers, who are undoubtedly adopting AI at a breakneck pace. From generating malicious payloads to automating social engineering campaigns, AI tools are making sophisticated attacks more accessible than ever.

Yet even as organizations acknowledge the offensive potential of AI, internal skepticism remains high. Some of this resistance stems from historical struggles to balance innovation with risk management, while other hesitations emerge from a lack of institutional AI literacy. “You can’t ask people who’ve never driven a car to come up with safety features for cars,” Sullivan quipped. “The more you use the product, the better you understand the risks.”

In other words, security teams must not only permit AI experimentation but lead it. Familiarity breeds understanding — and only with understanding can true governance be implemented.

Guardrails: A Modern Necessity

The conversation about AI governance often lands on the topic of guardrails — clear boundaries and best practices for safe AI usage. Sullivan pointed out that many of the threats AI poses are actually extensions of long-standing security principles. “At its simplest, the risks are the same as the risks we’ve been dealing with forever,” he said.

From software development lifecycle (SDLC) standards to data access control policies, the foundations of AI governance are not entirely new. What AI introduces, however, is scale and complexity. For instance, with large language models (LLMs) and generative AI, prompt injection attacks — where malicious input manipulates the AI’s output — present challenges strikingly similar to classic web vulnerabilities like SQL injection or cross-site scripting.
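The SQL-injection parallel can be shown directly. The snippet below (standard-library `sqlite3`, illustrative data) demonstrates the classic pattern: concatenating untrusted input into a query lets the input rewrite the query's logic, much as untrusted text concatenated into a prompt can override an LLM's instructions. Parameterization keeps instructions and data separate — the same principle behind isolating system prompts from untrusted content.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Malicious input crafted to escape the string literal.
user_input = "alice' OR '1'='1"

# Vulnerable: input concatenated straight into the query becomes
# part of the query's logic and matches every row.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(len(conn.execute(unsafe).fetchall()))  # → 1 (the whole table leaks)

# Safer: a bound parameter is treated as data, never as logic,
# so the literal string matches no user.
safe = "SELECT * FROM users WHERE name = ?"
print(len(conn.execute(safe, (user_input,)).fetchall()))  # → 0
```

The analogy is not perfect — there is as yet no equivalent of parameterized queries that fully neutralizes prompt injection — but the defensive instinct is the same: never let untrusted input share a channel with trusted instructions.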

The difference lies in the reach. When AI is embedded into automated business processes, small security oversights can rapidly escalate into enterprise-wide incidents. “If the company is training AI models on data meant for internal decision-making, and that model inadvertently exposes executive-level discussions to lower-level employees, that’s not just a data leak — it’s a governance failure,” Sullivan warned.
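One minimal control for the governance failure described above is filtering by sensitivity label before data ever reaches training. The records and labels below are illustrative assumptions, but the pattern — an explicit allowlist of labels eligible for a broadly served model — is the point:

```python
# Only these sensitivity labels may feed a model served to all employees.
ALLOWED_FOR_TRAINING = {"public", "internal"}

records = [
    {"text": "Quarterly all-hands summary", "label": "internal"},
    {"text": "Board compensation discussion", "label": "executive"},
    {"text": "Product FAQ", "label": "public"},
]

# Executive-only material never enters the training set,
# so the model cannot later surface it to lower-level employees.
training_set = [r["text"] for r in records if r["label"] in ALLOWED_FOR_TRAINING]
print(training_set)  # → ['Quarterly all-hands summary', 'Product FAQ']
```

The filter itself is trivial; the hard part is the upstream discipline of labeling data at all — which is exactly the kind of long-standing practice Sullivan argues AI governance inherits rather than invents.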

The Regulatory Gap

Another pressure point in AI adoption is the regulatory environment, which is currently playing catch-up. Sullivan noted the flurry of legislative interest around AI guardrails, including bipartisan initiatives, but cautioned against waiting for formal regulation to drive action. “We don’t need to convene massive events to get started,” he stressed. “The basics of AI safety are already rooted in cybersecurity best practices.”

Organizations that treat AI governance as a compliance-only task miss the opportunity to create a cultural shift that embeds responsible AI use into daily operations. From ethical sourcing of training data to strict access controls and transparent algorithm audits, security leaders should take proactive ownership of AI risk, rather than waiting for regulators to mandate it.

Human Factors and the Future

Despite the technical marvels AI offers, Sullivan’s emphasis remained grounded in human factors. He warned against overlooking the impact of AI adoption on vulnerable populations and highlighted the parallels to prior tech booms, such as the internet’s early years.

“As a society, we have obligations bigger than just running really fast inside one corporation,” Sullivan explained. “The people least able to protect themselves tend to get hurt when corporations don’t take responsibility and governments don’t have the ability to act quickly enough.”

For cybersecurity professionals, this reinforces a growing reality: responsible AI adoption isn’t just about securing systems; it’s about protecting people. Whether through technical guardrails, clear cultural expectations, or governance structures designed to evolve with the technology, AI’s future in cybersecurity will depend heavily on how seriously organizations take this social contract.

Looking Ahead

AI is accelerating business productivity, but it’s also reshaping the landscape of cybersecurity risks. As Sullivan and his co-discussants underscored, the gap between innovation and governance can no longer be ignored. The challenge for security leaders isn’t simply to understand AI’s capabilities, but to embed responsible AI use deep into the organizational DNA — before adversaries or compliance auditors force their hand.

References Cited:

  1. FedNinjas Podcast. Episode 9: A Discussion on Artificial Intelligence with Joe Sullivan. 2024.

