Ask your CFO what your IT spend was last year. They'll give you a number. Then ask them what that money bought. Watch the pause. They'll mention something about Microsoft licenses, maybe the managed service provider contract, possibly a hardware refresh. But the full picture? The line-by-line accounting of what you're paying for and whether you're getting value? That almost never exists.
IT spending in most organizations grows at somewhere between eight and fifteen percent per year. Revenue rarely grows at the same rate. Nobody signed off on this increase. It just happened, compounding quietly in the background while the board focused on commercial priorities.
The frustrating part isn't the cost itself. Organizations need technology, and technology costs money. The frustrating part is that nobody can explain the increase. Not the IT manager. Not the MSP. Not the finance team. Each of them sees a piece of the picture, but nobody owns the whole thing. And in that gap between partial views, money disappears.
This isn't a technology problem. It's a governance problem. And it persists because most organizations don't have anyone whose job is to look at IT spending as a whole, someone who understands both the technology and the commercial reality, who can tell the difference between necessary investment and accumulated waste.
License creep: the silent budget killer
This is the most common source of waste, and the easiest to understand. Your organization buys software licenses. People join, and they get assigned licenses. People leave, and their licenses stay active. Departments upgrade to premium tiers for a feature they used once. Annual renewals happen automatically, and nobody checks whether the seat count still reflects reality.
Consider the typical Microsoft 365 estate. An organization of 200 people might be paying for 240 licenses because nobody deprovisioned the accounts for former employees. Thirty of those active users might be on E5 licenses (the most expensive tier) because someone in IT assigned E5 as the default during a migration two years ago. Most of those users need E3 at most. Some could function perfectly well on E1 or Business Basic.
The math on this specific example is illustrative. The gap between an E5 and an E3 license is meaningful on a per-user, per-month basis. Across 30 unnecessarily upgraded users, that adds up to a significant annual overspend. Add the 40 unused licenses, and you're looking at thousands going to waste each year on a single platform.
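To make that arithmetic concrete, here's a sketch with assumed per-user list prices. The figures are placeholders, not quoted Microsoft rates; the real numbers depend on your agreement and current pricing.

```python
# Illustrative only: list prices below are assumptions, not quoted rates.
E5_MONTHLY = 57       # assumed per-user monthly price for E5
E3_MONTHLY = 36       # assumed per-user monthly price for E3
UNUSED_MONTHLY = 36   # assume unused seats were provisioned at E3

overtiered_users = 30
unused_seats = 40

# Annual cost of the tier gap on over-provisioned users
tier_gap = (E5_MONTHLY - E3_MONTHLY) * overtiered_users * 12

# Annual cost of seats still assigned to people who have left
dead_seats = UNUSED_MONTHLY * unused_seats * 12

print(f"Tier downgrade savings: ${tier_gap:,}/year")
print(f"Unused seat savings:    ${dead_seats:,}/year")
print(f"Total:                  ${tier_gap + dead_seats:,}/year")
```

Even with conservative placeholder prices, a single platform yields five figures of annual waste. That's the point of the exercise: the numbers are small per seat and large in aggregate.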
Now multiply this pattern across every SaaS tool in the organization: your CRM (Salesforce charges per user, and inactive users still count against your license allocation until someone explicitly deactivates them), your project management platform (Monday.com, Asana, and Jira all bill per seat regardless of activity level), your design tools (Adobe Creative Cloud licenses sitting idle on the machines of people who used Photoshop once for a presentation), your video conferencing, your cloud storage, your HR system, your accounting software. Each one has its own seat count, its own tier structure, and its own automatic renewal. Each one is probably carrying dead weight.
The waste isn't dramatic on any single line item. It's ten percent here, fifteen percent there. But across the full software estate, it adds up to a meaningful fraction of the total IT budget. License rationalization alone routinely recovers enough to fund a new initiative. Without cutting a single tool that anyone was actually using.
Shadow IT: the spend you don't even know about
Here's a scenario that plays out in every organization above about 50 people. The marketing team needs a social media scheduling tool. They find one, sign up with a corporate credit card, and start using it. Finance sees the charge and codes it to "marketing software." The IT team has no idea it exists.
Meanwhile, the sales team has bought their own analytics tool. The operations team is paying for a workflow automation platform. The HR team subscribed to two different survey tools because the person who set up the first one left and nobody knew the login. The finance team itself is paying for a reporting tool that duplicates functionality already available in the accounting platform. Each department solved its own problem independently, which is understandable. But the cumulative effect is an unmanaged sprawl of applications, each with its own security posture, its own data silo, and its own recurring cost.
Shadow IT isn't just a cost problem. It's a security problem and a data governance problem. When an employee signs up for a SaaS tool using their corporate email and a password they reuse across five other services, they've created an attack vector that the IT department can't see, can't manage, and can't protect. When business data ends up in a SaaS platform that nobody in IT knows about, it's outside the backup regime, outside the access control framework, and outside the data retention policy. If that vendor suffers a breach, you won't even know your data was affected until the vendor notifies you. If they notify you.
From a pure spend perspective, the issue is that nobody has visibility over the total. The CFO sees individual line items scattered across department budgets. The IT team manages the systems they know about. The gap between those two views is where shadow IT lives, and it's often larger than anyone expects.
The first time most organizations get a complete picture of their SaaS estate, the reaction is the same: "We're paying for how many tools?"
A thorough audit typically uncovers between 30 and 60 percent more applications than IT is aware of. Not all of them are wasteful. Some are genuinely useful tools that should be formally adopted and managed. But many are duplicates, abandoned trials with active billing, or tools that overlap with capabilities the organization already pays for through its core platforms. The worst case we've encountered: an organization paying for four different project management tools across four departments, none of which shared data with any of the others.
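Discovery can start from nothing more exotic than an export of corporate card transactions. A rough sketch of the matching step, with a hypothetical vendor list and sample rows; a real pass would match against a much larger SaaS vendor catalogue, or use a dedicated SaaS management tool.

```python
# Sketch: flag likely SaaS charges in exported card/expense data.
# The vendor list and sample rows are hypothetical.
KNOWN_SAAS = {"notion", "canva", "monday.com", "typeform", "airtable"}

expense_rows = [
    {"dept": "Marketing", "description": "CANVA* SUBSCRIPTION", "amount": 119.99},
    {"dept": "Sales",     "description": "MONDAY.COM LTD",      "amount": 432.00},
    {"dept": "HR",        "description": "TYPEFORM SL",         "amount": 83.00},
    {"dept": "HR",        "description": "OFFICE CATERING CO",  "amount": 250.00},
]

def find_shadow_it(rows, vendors):
    """Return expense rows whose description matches a known SaaS vendor."""
    hits = []
    for row in rows:
        desc = row["description"].lower()
        if any(vendor in desc for vendor in vendors):
            hits.append(row)
    return hits

shadow = find_shadow_it(expense_rows, KNOWN_SAAS)
for row in shadow:
    print(f"{row['dept']:<10} {row['description']:<22} ${row['amount']:.2f}")
```

The output of a pass like this is a starting list for conversations with department heads, not a final answer: some matches will be legitimate, formally adopted tools, and some shadow spend hides behind vendor names no list will catch.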
Over-provisioning: paying enterprise prices for standard needs
Vendors sell tiers. Basic, Professional, Enterprise. The naming varies, but the structure is universal. And there's a persistent tendency in IT purchasing to buy the tier above what you need, "just in case."
This happens for predictable reasons. The IT manager doesn't want to be the person who chose the cheaper tier and then hit a limitation six months later. The vendor's sales rep steered the conversation toward the premium tier by demonstrating features that sounded compelling in a demo but will never be used. Or the organization simply didn't do a proper requirements analysis before purchasing, so they defaulted to the safest option, which is always the most expensive one.
Over-provisioning shows up everywhere:
- Cloud infrastructure. Virtual machines sized for peak load that actually run at 15 percent utilization most of the time. Storage tiers designed for instant access on data that hasn't been touched in months and should be in cold storage. Reserved instances for workloads that no longer exist. AWS makes it remarkably easy to provision resources and remarkably difficult to identify which ones are no longer needed. Azure is no better. The cloud providers have no incentive to help you spend less. Their revenue model depends on over-provisioning.
- Network bandwidth. Internet circuits sized based on what the organization might need in three years, not what it needs today. A 1Gbps dedicated internet connection when actual peak utilization never exceeds 200Mbps. Backup connections that duplicate capacity rather than providing genuine resilience because they share the same last-mile infrastructure.
- Software tiers. Enterprise licenses with advanced analytics, API access, unlimited automation, and custom integrations, for teams that use the basic features and nothing else. Salesforce Enterprise when Professional would suffice. HubSpot Enterprise when Professional covers 95% of the use case. The gap between tiers is often substantial, and the incremental features rarely justify it for the average user.
- Support contracts. Premium 24/7 support with four-hour response SLAs on systems that are only used during business hours in a single timezone. Next-business-day hardware replacement warranties on devices that have local spares on the shelf. Gold-level vendor support that nobody has called in two years because the internal team handles everything.
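A first pass at finding these candidates can be scripted from utilization exports. A sketch, with hypothetical resources and an assumed flagging threshold; the right threshold depends on your workloads and headroom policy.

```python
# Sketch: flag over-provisioned resources from utilization exports.
# Threshold and sample data are assumptions; tune to your environment.
PEAK_UTILIZATION_FLOOR = 0.40  # flag anything peaking below 40% of capacity

resources = [
    {"name": "app-vm-01",   "kind": "vm",      "peak_utilization": 0.15},
    {"name": "db-vm-02",    "kind": "vm",      "peak_utilization": 0.78},
    {"name": "hq-internet", "kind": "circuit", "peak_utilization": 0.20},  # ~200Mbps on 1Gbps
]

def downsize_candidates(items, floor=PEAK_UTILIZATION_FLOOR):
    """Resources whose observed peak never approaches provisioned capacity."""
    return [r for r in items if r["peak_utilization"] < floor]

for r in downsize_candidates(resources):
    print(f"{r['name']}: peaks at {r['peak_utilization']:.0%} -> review sizing")
```

Note the comparison is against *peak*, not average, utilization. A resource averaging 15 percent but spiking to 90 percent at month-end is sized correctly; one that never peaks above 20 percent is not.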
The pattern is the same in each case: the organization is paying for capacity or capability it doesn't use. Not because anyone made a bad decision at the time, but because nobody went back to check whether the original sizing still made sense. Requirements change. Usage patterns shift. But the contracts and provisioning don't adjust to match unless someone actively reviews them.
Vendor inertia: the cost of not renegotiating
Most technology contracts auto-renew. This is by design. Vendors know that the easiest revenue to retain is revenue that renews without a conversation. And most organizations let it happen because renegotiating a contract requires time, knowledge, and negotiating power that the IT team doesn't have and the finance team doesn't prioritize.
Think about what happens when a three-year contract comes up for renewal. The vendor sends a renewal notice, usually 90 days before expiry, sometimes less because shorter notice windows reduce the customer's time to evaluate alternatives. The notice includes a price increase, typically linked to CPI or some other index, sometimes just a flat percentage, sometimes buried in a clause that says "at vendor's standard rates at time of renewal." The IT manager sees it, confirms the service is still needed, and signs. No competitive tender. No negotiation. No review of whether the original scope still matches the current requirement.
Now multiply that across every vendor relationship in the organization. Internet circuits, phone systems, managed services, cloud hosting, cybersecurity tools, backup solutions, print contracts, hardware maintenance agreements, CCTV monitoring, access control systems. Each one auto-renews on its own schedule. Each one creeps up in cost. Nobody looks at the full picture because nobody's job is to look at the full picture.
The vendors know this. They build their revenue models around it. A well-run vendor will increase prices just enough that it's not worth the customer's effort to challenge it. Five percent a year doesn't feel like much. But compounded over a five-year relationship, that's a 28 percent increase with no corresponding increase in value. Over ten years, it's 63 percent.
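The compounding is worth making explicit, because "five percent a year" consistently sounds smaller than it is. A two-line check:

```python
# The compounding arithmetic behind "modest" annual increases.
def cumulative_increase(annual_rate, years):
    """Total price increase after compounding annual_rate for `years`."""
    return (1 + annual_rate) ** years - 1

print(f"5% over 5 years:  {cumulative_increase(0.05, 5):.0%}")   # ~28%
print(f"5% over 10 years: {cumulative_increase(0.05, 10):.0%}")  # ~63%
```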
Telecom contracts are the worst offenders. Broadband, MPLS, SIP trunking, mobile. These contracts frequently contain price escalation mechanisms that are opaque and difficult to benchmark. The original circuit was competitively priced when it was installed five years ago. Since then, the market price for equivalent bandwidth has dropped by 40%, but your contract price has increased by 25%. You're now paying more than twice the going market rate, and you won't know until someone benchmarks your circuits against current pricing.
Vendors price for inertia. They know that switching costs are high and that most IT teams don't have the bandwidth to run a competitive procurement every renewal cycle. That asymmetry is where margin lives.
Technical debt: the interest you pay on past shortcuts
Every organization carries technical debt. Systems that were deployed as temporary fixes and became permanent. Migrations that were planned but never completed. Old platforms running alongside their replacements because the cutover was too risky, too expensive, or too disruptive to schedule.
Technical debt has a real cost, and it compounds. An old email server that should have been decommissioned two years ago still needs patching, monitoring, and backup. The legacy ERP system that three people still use requires its own maintenance contract, its own server infrastructure, and someone who knows how to administer it. The on-premises file server that was supposed to be migrated to SharePoint still sits in the server room, consuming power, cooling, rack space, and a support contract. While the organization also pays for cloud storage.
The worst technical debt is the invisible kind: systems that work well enough that nobody questions them, but that quietly consume resources that could be redirected. An organization running two separate identity systems (on-premises Active Directory and a separate cloud identity provider) because they were never consolidated after a migration. A VPN infrastructure maintained alongside a modern zero-trust solution because nobody has confirmed that every use case has been migrated. Backup systems covering the same data through two different platforms because the old one was never properly retired. Each of these costs money directly through licensing and maintenance, and indirectly through the complexity they add to the environment.
Technical debt doesn't show up as a single line item. It shows up as higher-than-necessary support costs, slower-than-necessary change delivery, and a persistent inability to simplify the estate. It's the reason why an IT team of five is fully occupied keeping things running when the actual workload should require two or three dedicated people. The rest of their time is consumed by the debt: maintaining systems that should have been decommissioned, managing complexity that shouldn't exist, and working around problems that should have been fixed properly the first time.
The auto-renewal trap
Auto-renewal clauses deserve their own section because they represent one of the most systematic mechanisms by which IT costs increase without oversight.
A typical auto-renewal clause reads something like: "This agreement will automatically renew for successive one-year periods unless either party provides written notice of non-renewal at least 60 days prior to the end of the current term." Some contracts require 90 days. Some require 120. The notice must typically be in writing, sent to a specific address or email, and reference the contract number.
The practical effect is that you need a contract management system (or at minimum a calendar) that tracks every renewal date and every notice period across your entire vendor estate. Most organizations don't have this. Most IT managers have a spreadsheet that was last updated when they started in the role. Most CFOs have no visibility into technology contract renewal dates at all.
The result is predictable. Contracts renew by default. Prices increase. Scope stays the same even when requirements have changed. And the opportunity to renegotiate, recompete, or simply cancel a service you no longer need passes quietly, noticed by nobody.
Building a contract renewal calendar is one of the first things we do in any IT governance engagement. It's not complicated work, but it's remarkably high-value. Simply knowing when you have negotiating power (because a renewal is coming and the notice period hasn't expired) changes the dynamic of vendor relationships fundamentally.
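The core of that calendar is nothing more than date arithmetic: renewal date minus notice period gives the last day you can act. A minimal sketch with hypothetical contracts; in practice this lives in a shared system with reminders well ahead of each deadline.

```python
# Sketch of the core of a renewal calendar: compute the last safe date
# to serve notice for each contract. Contract data is hypothetical.
from datetime import date, timedelta

contracts = [
    {"vendor": "ISP Ltd",   "renewal": date(2025, 9, 1),  "notice_days": 90},
    {"vendor": "MSP Co",    "renewal": date(2025, 7, 15), "notice_days": 60},
    {"vendor": "CCTV Corp", "renewal": date(2026, 1, 1),  "notice_days": 120},
]

def notice_deadline(contract):
    """Last date on which written notice of non-renewal can be served."""
    return contract["renewal"] - timedelta(days=contract["notice_days"])

# List contracts in order of urgency: nearest notice deadline first.
for c in sorted(contracts, key=notice_deadline):
    print(f"{c['vendor']:<10} notice by {notice_deadline(c)} "
          f"(renews {c['renewal']})")
```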
The cloud cost spiral
Cloud computing was supposed to reduce costs. For some organizations, it has. For many, it's done the opposite. Not because the cloud is inherently more expensive, but because the consumption model makes it easy to spend without realizing it.
On-premises infrastructure has a natural cost control mechanism: when you run out of capacity, you have to buy more hardware, which requires a capital expenditure request, which gets scrutinized. Cloud infrastructure has no such mechanism. Any developer, any administrator, any power user with the right permissions can spin up resources with a few clicks. The bill arrives at the end of the month.
AWS, Azure, and Google Cloud all have genuinely useful cost management tools. The problem is that most organizations don't use them, or don't use them effectively. Tagging policies that would let you attribute costs to departments and projects go unenforced. Budget alerts get set up and then ignored because nobody owns the response. Reserved instance recommendations from the cloud provider go unacted on because the finance team doesn't understand them and the IT team doesn't have authority to commit to one-year or three-year reservations.
The pattern we see most often: an organization migrated to the cloud two or three years ago. The initial architecture was designed by whoever was available at the time. Instances were sized generously "to be safe." Development environments run 24/7 even though developers only work during business hours. Snapshots and backups accumulate without a lifecycle policy. Data transfer costs (which are notoriously opaque and difficult to predict in cloud environments) are higher than anyone expected. The cloud bill has grown 20% year-over-year while the workload has stayed roughly flat.
Cloud cost optimization is a discipline, not a one-time exercise. It requires continuous monitoring, regular right-sizing reviews, and someone who understands both the technical options (reserved instances, savings plans, spot instances, storage tiering, auto-scaling) and the financial implications of each. Most organizations lack that combination of skills internally, which is why cloud spend trends upward until someone external takes a hard look at it.
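The monthly review doesn't need sophisticated tooling to start. A month-over-month spike check on the billing export catches the worst surprises. A sketch, with an assumed 20% threshold and hypothetical service-level figures:

```python
# Sketch: month-over-month spike detection on a cloud bill export.
# Threshold and figures are assumptions; adjust to your estate.
SPIKE_THRESHOLD = 0.20  # flag any service growing >20% month-over-month

last_month = {"compute": 8200.0, "storage": 1900.0, "egress": 600.0}
this_month = {"compute": 8400.0, "storage": 1950.0, "egress": 1100.0}

def cost_spikes(prev, curr, threshold=SPIKE_THRESHOLD):
    """Services whose spend grew more than `threshold` since last month."""
    spikes = {}
    for service, cost in curr.items():
        baseline = prev.get(service)
        if baseline and (cost - baseline) / baseline > threshold:
            spikes[service] = (cost - baseline) / baseline
    return spikes

for service, growth in cost_spikes(last_month, this_month).items():
    print(f"{service}: +{growth:.0%} month-over-month -> investigate")
```

The point isn't the script; it's the ownership. Someone has to run the check, see the egress line jump, and chase the answer before three more bills arrive.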
The MSP markup problem
Managed service providers serve an important role, particularly for organizations that don't have the scale to justify a large internal IT team. But there's a structural misalignment in most MSP relationships that drives costs upward.
An MSP that bills by the hour or by the device has no financial incentive to simplify your environment. A complex environment with more devices, more platforms, and more integration points generates more billable work. A clean, well-architected environment with standardized platforms and automated processes generates less. The MSP's revenue model rewards complexity, even if the MSP's people would prefer to work on a clean estate.
This doesn't mean your MSP is deliberately making things worse. Most aren't. But they're also not incentivized to proactively recommend the consolidation project that would reduce their monthly invoice by a third. They'll do it if you ask, but they won't suggest it. And if nobody in your organization has the technical knowledge to know that the consolidation is possible, it simply doesn't happen.
The same dynamic applies to vendor selection. An MSP that has a reseller relationship with a particular vendor (and most MSPs earn meaningful margin on software resale) will naturally recommend that vendor's products. Not out of malice, but because it's what they know, it's what they can support efficiently, and they earn margin on the sale. Whether it's the best solution for your specific needs is a secondary consideration. One that requires an independent voice to properly evaluate.
Watch for these signals: your MSP has never recommended removing a system or downgrading a service. Your monthly invoices have grown steadily but the scope of what's managed hasn't changed proportionally. You're paying for "proactive monitoring" but the only communications you receive are about problems and renewals, never about optimization opportunities. Your MSP resells every product in your stack and can't provide evidence that alternatives were evaluated. These patterns don't indicate bad faith. They indicate a structural misalignment that you need to manage actively.
The absence of technology governance: the root cause
Every category of waste described above has the same root cause: the absence of someone whose job is technology governance. Not IT operations (keeping things running). Not IT support (fixing things when they break). Governance: ensuring that technology spending is intentional, justified, regularly reviewed, and aligned with business needs.
In most mid-market organizations, technology governance falls into a gap. The IT manager is focused on operations and support. Keeping the lights on. The CFO sees the headline numbers but lacks the technical knowledge to evaluate them. The CEO has opinions about technology but no framework for making technology investment decisions. The MSP manages what they're contracted to manage but nobody asks them to look at the full picture.
The result is that technology spending operates on autopilot. It increases each year through the accumulated effect of license creep, auto-renewals, shadow IT, over-provisioning, and technical debt. Nobody is responsible for the trend because nobody has been given responsibility for the trend. The budget is everyone's problem and nobody's job.
What a proper IT spend audit looks like
The solution to all of this is visibility. Not a spreadsheet of invoices. An actual audit of what you're spending, what you're getting, and where the gaps are.
A thorough IT spend audit covers five areas:
1. Complete software inventory
Every application the organization pays for, including the ones that don't flow through IT. This means reviewing corporate card statements, department budgets, expense claims, and procurement records, not just the IT cost center. For each application: how many licenses are paid for, how many are in active use, what tier is provisioned, and what tier is actually needed. Tools like Zylo, Productiv, or even a manual credit card review can surface the full picture.
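The reconciliation itself is simple once the data is exported. A minimal sketch with hypothetical figures; real inputs come from each platform's admin export (seat counts, last-sign-in reports, tier assignments).

```python
# Sketch: reconcile provisioned licenses against active use per app.
# All figures below are hypothetical.
apps = [
    {"name": "Microsoft 365", "paid_seats": 240, "active_users": 200},
    {"name": "Salesforce",    "paid_seats": 60,  "active_users": 41},
    {"name": "Adobe CC",      "paid_seats": 25,  "active_users": 9},
]

def reclaimable_seats(app):
    """Seats paid for but not in active use."""
    return max(app["paid_seats"] - app["active_users"], 0)

total = sum(reclaimable_seats(a) for a in apps)
for a in apps:
    print(f"{a['name']:<14} {reclaimable_seats(a):>3} reclaimable seats")
print(f"Total reclaimable: {total}")
```

Define "active" carefully before acting on the numbers: a user who signs in once a quarter may still need the seat, and some platforms count API or service accounts as users.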
2. Contract and renewal analysis
Every vendor contract mapped to its renewal date, auto-renewal terms, price escalation clauses, notice periods, and change-of-control provisions. This creates a forward-looking calendar that tells you exactly when you have power to renegotiate and exactly how much notice you need to give. It also reveals contracts that have been running month-to-month, contracts with above-market pricing, and contracts for services that are no longer needed.
3. Infrastructure utilization
What are you paying for in terms of compute, storage, and bandwidth, and what are you actually using? Cloud environments are particularly prone to waste here; it's trivially easy to spin up resources and forget about them. On-premises infrastructure should be assessed for consolidation and decommissioning opportunities. Network circuits should be benchmarked against current market rates, not against what was competitive when the contract was signed.
4. Overlap and duplication
Which tools duplicate functionality? How many different ways does the organization share files, manage projects, communicate, or store data? Where are there integration opportunities that would eliminate manual processes? The most common duplications: file sharing (SharePoint and Dropbox and Google Drive), project management (Monday.com and Asana and spreadsheets), communication (Teams and Slack and email), and notes/documentation (Notion and Confluence and shared drives).
5. MSP and outsource contract review
What are you paying your managed service providers for, and does the scope still match the need? Are you paying per-device rates for devices that have been decommissioned? Is the SLA appropriate for the criticality of the service? Could any of the outsourced functions be brought in-house now that the organization has grown? Is the MSP providing value beyond basic maintenance, or have they become an expensive helpdesk?
What organizations typically find
The results of a first-time IT spend audit are remarkably consistent across organizations of different sizes and sectors. The details vary, but the patterns don't.
Software licensing waste runs between 15 and 30 percent of the total software spend. Most of this is unused seats and over-provisioned tiers. It's the quickest win because reducing licenses doesn't require any operational change. You're simply stopping payments for things nobody uses.
Shadow IT typically accounts for 20 to 40 percent of applications in use across the organization. Not all of this represents waste, but the process of discovering, cataloging, and rationalizing it always identifies meaningful savings and significant security risks.
Vendor contract renegotiation, when armed with competitive alternatives and actual usage data, typically achieves 10 to 25 percent reduction on the next renewal. Vendors have room to move on pricing; they just don't offer it unless you ask with evidence in hand and a credible alternative ready.
Infrastructure right-sizing (matching provisioned capacity to actual utilization) recovers 20 to 40 percent of cloud spend in organizations that have never done it. On-premises, the savings come from decommissioning hardware and the support contracts, power, and cooling costs that go with it.
Telecom contract benchmarking routinely reveals that organizations are paying 30 to 60 percent above current market rates for equivalent circuits. The telecom market has changed dramatically in the past five years, and contracts signed before that shift are almost certainly above market.
The total? It's not unusual for a comprehensive IT spend audit to identify savings of 20 to 35 percent of the total IT budget. Not by cutting services. Not by reducing capability. Just by eliminating waste that accumulated because nobody was watching.
How to actually benchmark and control IT spend
Identifying waste is the first step. Controlling spend on an ongoing basis requires structural change.
Centralize procurement. All technology purchases, not just the ones that go through IT, should require approval through a single process. This doesn't mean creating bureaucracy. It means ensuring that before anyone signs up for a new SaaS tool, someone checks whether the organization already has something that does the same thing. A simple approval workflow adds a day to the procurement process and prevents thousands in duplicate spending.
Implement a contract management system. This can be as simple as a shared spreadsheet with renewal dates and calendar reminders. The point is that someone owns the renewal calendar and takes action before notice periods expire. Every contract should be reviewed at least 90 days before renewal, with a decision documented: renew as-is, renegotiate, recompete, or cancel.
Review cloud costs monthly. Not quarterly, not annually. Monthly. Cloud costs can change dramatically from month to month, and catching a spike early is much cheaper than discovering it three months later. Assign a specific person to review the cloud bill each month, investigate anomalies, and implement right-sizing recommendations.
Audit software licenses quarterly. Run a report of active users versus provisioned licenses for every SaaS platform. Deactivate unused accounts. Review tier assignments. This takes a few hours per quarter and typically pays for itself many times over.
Benchmark externally. Your internal team can tell you what you're spending. They can't always tell you whether that spending is reasonable. External benchmarking (comparing your per-user costs, your cost-per-circuit, your MSP rates against market norms) gives you the context to evaluate whether you're getting fair value.
IT costs don't rise because technology gets more expensive. They rise because nobody with the right combination of technical knowledge and commercial awareness is minding the store.
The board doesn't need to understand the technical details. But they do need someone who understands both the technology and the commercial reality. Someone who can translate between the two and hold vendors, MSPs, and internal teams accountable for value. Without that, the budget will keep rising. And nobody will be able to explain why.