How to tackle AI sprawl
According to McKinsey’s latest research, organizations now use AI in an average of three business functions. Your marketing team may have a content generation tool, product development could be taking advantage of code assistants, and your own IT team might have just deployed that AI-powered ticketing system. Sounds easy enough to manage, right? You’ve done this before, more or less.
There’s much more going on under the hood, though.
Each of those deployments can quietly spawn its own ecosystem. Marketing’s content tool could be connecting to your CRM, brand guidelines, and customer data. It might talk to your content management system, feed your analytics platform, and link up with social media schedulers.
The product development team’s code assistant could be pulling from multiple repositories, touching CI/CD pipelines, and reaching into production systems. IT’s ticketing system probably connects to Active Directory, monitoring tools, asset management databases, and vendor portals.
These three AI deployments have created dozens of integration points, each with its own data flows, permissions, and failure modes. Twenty-one percent of organizations are already redesigning workflows around these tools, which means those integrations are likely becoming more consequential to the business before IT has even had a chance to map them.
This isn’t just shadow AI. It’s something different.
You’re already dealing with shadow AI, in which unauthorized tools run without your awareness or involvement. Just like the first wave of SaaS deployments, people where you work are adopting tools that IT never approved. This is a real challenge, and it deserves attention.
AI sprawl can happen even when everyone does everything right. Everyone follows the process, everyone checks the boxes, and everyone feels good about the decision. But the tool you sanctioned six months ago could now be integrating with systems that didn’t exist in your original security review. Users discovered new use cases, connected it to different data sources, and built dependencies you never saw coming.
This problem calls for a different approach. Instead of asking, “What unauthorized AI is running?” you need to ask, “What is this AI, approved or otherwise, connecting to?”
Traditional governance can’t keep pace with AI sprawl
Traditional governance frameworks weren’t designed for this speed and scale of change. McKinsey’s survey shows the strain: 47% of organizations have already hit at least one negative consequence from genAI use, such as inaccuracy issues, cybersecurity incidents, intellectual property concerns, or privacy violations. And these are sanctioned deployments causing problems, not shadow AI.
The integration multiplication effect explains much of this challenge. Each AI deployment creates multiple new integration requirements. Some are obvious from day one: the tool authenticates users, accesses relevant data, and delivers outputs somewhere useful. Others emerge over time as users discover new workflows and your business processes adapt to them.
Even businesses that are bullish on AI are struggling to meet their targets as they encounter unexpected difficulties. MIT research found that 95% of generative AI pilots fail to achieve rapid revenue acceleration. The AI itself usually works fine; organizations just underestimate the complexity involved in moving from pilot to production. That proof-of-concept running on test data with manual data entry doesn’t necessarily prepare you for production systems, data quality issues across multiple sources, and the governance requirements that come with automated decision-making.
In addition, each business function could be making deployment decisions on its own, and new integrations might appear without triggering your change management processes. Fragmentation worsens the problem even further. Whether governance responsibilities split across multiple teams or pile onto a single IT leader, no one sees the full picture of how these tools interact, what data they access, or where the dependencies lie.
Legacy systems compound the problem
Your legacy systems weren’t built for this pace of integration. Core business applications were likely designed around relatively stable architectures with carefully planned integration points. AI tools expect to connect to everything, pull from multiple data sources simultaneously, and adapt their behavior based on real-time inputs.
Meanwhile, AI vendors are rapidly adding new integrations and capabilities—sometimes weekly. The tool you reviewed last quarter now has features that didn’t exist during your security assessment. Suddenly, you’re running something different than what you approved.
This creates a moving target for security and compliance. The authentication methods, data access patterns, and integration points you documented during your initial review may no longer reflect what’s running in production. Your incident response playbooks reference capabilities that have been deprecated, while new attack surfaces emerge unmonitored. Managing AI risk properly under these conditions is very difficult.
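One lightweight way to catch that drift is to snapshot each tool’s externally visible profile at review time and diff it on a schedule. Here’s a minimal sketch; the fields and values below (auth method, scopes, features) are illustrative, not a prescribed schema:

```python
# A minimal drift check. Assumes you snapshot each AI tool's externally
# visible profile at review time; the fields and values are hypothetical.
reviewed = {
    "auth": "SAML",
    "scopes": ["crm:read"],
    "features": ["drafting"],
}
current = {
    "auth": "SAML",
    "scopes": ["crm:read", "crm:write"],        # vendor quietly expanded access
    "features": ["drafting", "auto-publish"],   # new capability since review
}

# Report any field that no longer matches what was approved
drift = {key: (reviewed[key], current[key])
         for key in reviewed if reviewed[key] != current[key]}
for field_name, (was, now) in drift.items():
    print(f"DRIFT in {field_name}: reviewed={was!r} -> current={now!r}")
```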
See the full picture first
IT teams often respond to sprawl by implementing stricter approval processes, mandating centralized deployment, and piling on documentation requirements. That approach treats sprawl as a discipline problem rather than an architecture problem.
You can’t wrangle what you can’t see, though. The following approaches will help you establish visibility into what your AI tools actually do—whether you’re a one-person IT department (been there) or leading a team. Start with what’s manageable given your resources, because even basic documentation beats flying blind.
Map the integration footprint. For each AI deployment, document not just the initial integration points but how they’ve evolved. Which systems does the tool access? What data flows through it? Where do its outputs go? These connections change as new features come out and usage patterns evolve, so plan to update this analysis on a regular basis.
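Even a lightweight, structured inventory beats tribal knowledge here. Here’s one way it might look in Python; the record fields and example systems are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Integration:
    """One connection between an AI tool and another system."""
    target_system: str     # e.g. "CRM", "CI/CD pipeline"
    data_flows: list[str]  # what data moves across the connection
    permissions: str       # access level the tool holds
    added: date            # when the connection first appeared

@dataclass
class AIDeployment:
    """The evolving integration footprint of one approved AI tool."""
    tool: str
    owner: str             # business function accountable for the tool
    last_reviewed: date
    integrations: list[Integration] = field(default_factory=list)

# Hypothetical entry for the marketing content tool described earlier
content_tool = AIDeployment(
    tool="marketing-content-assistant",
    owner="Marketing",
    last_reviewed=date(2025, 1, 15),
    integrations=[
        Integration("CRM", ["customer segments"], "read-only", date(2024, 9, 1)),
        Integration("CMS", ["draft articles"], "read-write", date(2024, 11, 20)),
    ],
)
```

Putting `last_reviewed` next to each integration’s `added` date makes it obvious when a footprint has outgrown its last security review.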
Create integration triggers. Build processes that flag when approved AI tools connect to new systems or data sources. These triggers don’t necessarily have to block the integrations, but they should make them visible so you can assess the implications.
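What a trigger looks like depends entirely on your telemetry; firewall logs, API gateway records, and OAuth grant audits are all candidates. As a sketch of the core comparison (the example systems are hypothetical):

```python
def flag_new_integrations(documented: set[str], observed: set[str]) -> set[str]:
    """Return systems an AI tool now touches that aren't in its documented footprint."""
    return observed - documented

# `documented` would come from your integration inventory; `observed` from
# whatever connection telemetry you actually have.
documented = {"CRM", "CMS"}
observed = {"CRM", "CMS", "analytics-platform"}

for system in sorted(flag_new_integrations(documented, observed)):
    # Surface, don't block: route to review rather than failing the connection
    print(f"REVIEW: marketing-content-assistant now connects to {system}")
```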
Track the multiplication. When you approve an AI deployment in one function, anticipate the integration requirements as best you can and build them into your capacity planning. That single approval will probably spawn three to five integration projects over the next six months.
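That rule of thumb is easy to bake into capacity planning. A back-of-envelope sketch, treating the three-to-five figure as a heuristic rather than a law:

```python
def projected_integration_projects(approvals: int,
                                   low: int = 3, high: int = 5) -> tuple[int, int]:
    """Rough bounds on follow-on integration work over ~6 months,
    using the three-to-five-projects-per-approval rule of thumb."""
    return approvals * low, approvals * high

lo, hi = projected_integration_projects(approvals=3)
print(f"3 AI approvals -> plan for roughly {lo}-{hi} integration projects")  # 9-15
```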
These approaches give you a foundation to work from, but tracking is only the first step to addressing AI expansion in your environment.
Track AI’s expanding footprint, then manage it
Better governance policies help, but they won’t solve the AI sprawl problem on their own, and stricter controls won’t singlehandedly fix it either. To get out in front of this issue, you need to rethink how IT establishes and maintains visibility across a system architecture that grows more complex every quarter. Organizations that figure out how to track AI’s expanding footprint govern it more effectively. The work isn’t trivial, but it’ll be more manageable if you start with visibility and build from there.