Ready, fire, aim: Corporate AI strategy ignores the risks
This article is part of Spiceworks’ Recalibrating Risk Tolerance series investigating the contemporary landscape of cybersecurity risk. You can follow along on our landing page, where we’ll be adding new stories all week.
It’s been nearly three years since the introduction of ChatGPT, generally acknowledged as the “big bang” event for the unfolding Age of AI. Never mind that the technical roots of traditional artificial intelligence (AI) and machine learning (ML) go back decades, or that AI/ML had already been incorporated into leading cybersecurity solutions, among other products, for several years.
More than two years ago, I wrote that “it seems everyone is scrambling to come to grips with the possibilities — both good and bad — of yet another ‘What hath God wrought‘ moment in high tech history.”
Today, Spiceworks’ corporate partner Aberdeen Strategy & Research has found, through its research into current implementations of AI infrastructure, that organizations are only modestly concerned about selected AI-related risks. Our eyes are mostly on the prize.
Our research described nine risks related to AI implementations in a corporate context: data leaks, data breaches, IP compromise, non-compliance, data poisoning, non-availability, hallucinations, explainability, and bias.
For a somewhat simpler perspective, these nine issues can be abstracted into three higher-level categories (sketched as a simple data structure after this list):
- Indiscriminate sharing of valuable or sensitive data with AI tools (data leaks, data breaches, IP compromise, non-compliance)
- Garbage in (data poisoning, bias), garbage out (hallucinations)
- Dependence on AI (non-availability), to the point of blind faith (explainability)
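To make the mapping concrete, here is that taxonomy as a minimal data structure; the short category keys are my own illustrative shorthand, not labels taken from the research:

```python
# The nine AI-related risks from the research, grouped into the three
# higher-level categories above. The category key names are illustrative
# shorthand, not labels from the Aberdeen study.
RISK_CATEGORIES: dict[str, list[str]] = {
    "indiscriminate_data_sharing": [
        "data leaks", "data breaches", "IP compromise", "non-compliance",
    ],
    "garbage_in_garbage_out": [
        "data poisoning", "bias", "hallucinations",
    ],
    "dependence_on_ai": [
        "non-availability", "explainability",
    ],
}

# Sanity check: the three categories cover all nine risks.
assert sum(len(risks) for risks in RISK_CATEGORIES.values()) == 9
```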
This data also includes a “net perception index” for these nine issues, based on how respondents perceived the associated risks. A net perception index of +100% means that all respondents perceived the issue as a high risk, while -100% means that all perceived it as a low risk. Generally, a net perception index of +50% or higher would be considered strongly positive, and -50% or lower would be strongly negative.
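The report doesn’t spell out the arithmetic, but an index with these endpoints is typically computed net-promoter style: the share of respondents rating an issue high risk minus the share rating it low risk. Here’s a minimal sketch under that assumption, with made-up response counts for illustration:

```python
def net_perception_index(high: int, low: int, total: int) -> float:
    """Share of respondents rating an issue high-risk minus the share
    rating it low-risk, as a percentage in [-100, +100]. Respondents
    who answered neither count toward the total but not the numerator."""
    return 100.0 * (high - low) / total

# Hypothetical survey of 200 respondents on a single issue:
# 90 rate it high risk, 40 rate it low risk, 70 are somewhere in between.
print(net_perception_index(high=90, low=40, total=200))  # 25.0

# An index of exactly +100 or -100 requires unanimity:
print(net_perception_index(high=200, low=0, total=200))  # 100.0
```

Under that reading, the +50% “strongly positive” threshold requires at least half of all respondents, on net, to rate the issue high risk.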
Our data shows net perception indexes for these nine issues spanning the full spectrum. Generally speaking, these nine issues don’t seem to be keeping too many people up at night. The four data-sharing risks are split evenly across both sides of the 0% line, with modest concern about risks to IP or to data subject to regulatory compliance. The strongest concerns, non-availability and explainability, indicate at least some recognition that we may be rushing to become highly dependent on systems we don’t fully understand.
Déjà vu all over again?
In the Spiceworks Ziff Davis State of IT 2026 dataset, “establishing governance and accountability frameworks for AI systems” and “mitigating bias in AI models and data” ranked 23rd and 24th, respectively, on a list of 24 AI-related initiatives organizations are undertaking over the next 12 months. That puts governance and bias mitigation at the very bottom of the list.
Note: the full SWZD State of IT 2026 report is scheduled for release on November 11, 2025, the first day of SpiceWorld 2025 in Austin, Texas. Be sure to check it out — or even better, come and join us!
What’s going on here?
As organizations rush to harness AI’s transformative power, capturing operational efficiencies and enabling new business opportunities rank high on their agendas. They’re betting big on the upside of AI-driven innovation, and rightly so. Let’s get some quick wins, and we’ll figure it out as we go.
Even so, the “Ready, Fire, Aim” fervor that casts AI governance aside, effectively accepting risks from explainability or bias without making thoughtful, deliberate business decisions, sets the stage for inevitable reckonings.
We definitely don’t want to return to the bad old days, when cybersecurity teams had a reputation as “the Department of No” and IT teams lost visibility and control to shadow IT. Can we already see the proliferation of shadow AI?
We must not let these become the days when risk-oriented professionals are seen as obstacles and hindrances to the organization’s strategic goals regarding AI. At the same time, we do need to continue evolving towards a more sophisticated view of risk, and governance is key to that.
It’s okay to make a business decision to accept certain risks, for now. But it should not be okay to burden the organization with material risks to AI-related initiatives without identifying, acknowledging, and making a deliberate business decision about them.
As AI-related initiatives roll forward, has your organization already applied the hard-won lessons, cross-functional relationships, and collaboration techniques that eliminated the need for shadow IT and recast the cybersecurity team as a side-by-side partner to the business?
If not, I suspect some quick wins can also be had here. Make it so!
