A Fortune 500 company spent $5M on an "AI-powered intelligent document processing platform." The system extracts numbers from PDFs and puts them in columns. An analyst with Excel could do this in 20 minutes. But "Excel" does not appear in transformation roadmaps. "AI" does.
Most enterprise AI initiatives are solving spreadsheet problems at 50x the cost.
The Recurring Pattern
The business problem "show me which products are selling" has been solved identically since 1985. Only the price changes.
1985: =VLOOKUP(). Cost: $200K. Time to production: six months. 2025: "AI-powered demand forecasting platform." Cost: $5M. Time to production: 18 months and counting.
Same answer. Same columns. Same numbers. The underlying computation is a filter and an aggregation. It was a spreadsheet operation in 1985, and it remains one in 2025.
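That filter-and-aggregation can be written out in a few lines. The sketch below uses hypothetical sales records (the product names and numbers are illustrative, not from any real dataset) to show that "which products are selling" is one filter and one sum:

```python
from collections import defaultdict

# Hypothetical sales records -- illustrative data only.
sales = [
    {"product": "Widget A", "region": "East", "units": 120},
    {"product": "Widget B", "region": "East", "units": 30},
    {"product": "Widget A", "region": "West", "units": 80},
    {"product": "Widget B", "region": "West", "units": 10},
]

# Filter: keep one region. Aggregate: sum units per product.
totals = defaultdict(int)
for row in sales:
    if row["region"] == "East":                 # the filter
        totals[row["product"]] += row["units"]  # the aggregation

top_sellers = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(top_sellers)  # [('Widget A', 120), ('Widget B', 30)]
```

The same result falls out of a pivot table or a one-line SQL GROUP BY; the computation has not changed since 1985, only the packaging.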
The Incentive Misalignment
The technical explanation for this pattern is straightforward: these problems do not require machine learning. The actual explanation is organizational.
Nobody gets promoted for maintaining spreadsheets. Everyone gets promoted for "leading AI transformation." The executive incentive is to frame every data problem as an AI problem, regardless of whether AI adds value over simpler alternatives.
Vendors reinforce this. The AI platform that replaces a spreadsheet costs $500K per year, with a three-year contract and migration costs that make departure prohibitive. The vendor is not selling a better solution. They are selling lock-in with an AI marketing wrapper.
Consultants complete the cycle. The engagement to "evaluate AI opportunities" will always find AI opportunities. The incentive structure guarantees it.
The Decision Framework
Three questions separate AI problems from spreadsheet problems.
Can a formula express the logic? If the transformation from input to output can be described as a deterministic rule -- lookups, aggregations, conditional logic, arithmetic -- it is a spreadsheet problem. AI adds no value to deterministic transformations.
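To make the first question concrete, here is a typical spreadsheet row expressed as deterministic code. The price table, SKU names, and discount rule are hypothetical, chosen to mirror a VLOOKUP plus an IF:

```python
# A lookup, a conditional, and arithmetic -- the whole "business logic."
price_table = {"SKU-1": 9.99, "SKU-2": 24.50}

def line_total(sku: str, qty: int) -> float:
    """Deterministic rule: same inputs always give the same output."""
    unit_price = price_table[sku]           # =VLOOKUP(sku, table, 2)
    discount = 0.10 if qty >= 100 else 0.0  # =IF(qty>=100, 0.1, 0)
    return round(qty * unit_price * (1 - discount), 2)

print(line_total("SKU-2", 100))  # 2205.0
```

If the transformation can be written this way, a model that approximates it can only add error and cost.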
Where is the complexity? Complex logic at simple scale: spreadsheet. Simple logic at massive scale: database with standard ETL. Both complex: potentially a machine learning problem. The overwhelming majority of enterprise data problems are simple logic at simple scale.
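The two axes above can be sketched as a triage function. The labels ("simple", "complex", "massive") are hypothetical shorthand for the framework, not a formal taxonomy:

```python
def triage(logic: str, scale: str) -> str:
    """Map the framework's two axes to a default tool.

    logic: "simple" or "complex"; scale: "simple" or "massive".
    """
    if scale == "simple":
        # Complex logic at simple scale is still a spreadsheet problem.
        return "spreadsheet"
    if logic == "simple":
        # Simple logic at massive scale: a database job, not a model.
        return "database with standard ETL"
    # Complex logic at massive scale is the only ML candidate.
    return "potentially machine learning"

print(triage("complex", "simple"))  # spreadsheet
```

Note how small the machine-learning branch is relative to the input space; that asymmetry is the framework's point.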
What is the cost of errors? A spreadsheet error is debuggable. Open the cell, read the formula, find the mistake. An AI error is opaque. The model produced the wrong output. Why? The debugging cost scales with the opacity of the system, and LLMs are maximally opaque.
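The debuggability asymmetry can be illustrated with a deterministic rule: every intermediate value is inspectable, so a wrong output traces back to a specific input or formula. The figures here are hypothetical:

```python
def margin(revenue: float, cost: float) -> float:
    """One readable rule -- the equivalent of opening the cell."""
    return (revenue - cost) / revenue

rev, cost = 1000.0, 1200.0
m = margin(rev, cost)

# The wrong-looking output is traceable in one step: cost exceeds
# revenue, so the margin is negative. Read the inputs, read the rule,
# find the mistake. No equivalent inspection exists for an LLM's output.
assert m < 0
```
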
Where AI Genuinely Applies
AI adds value when the task cannot be expressed as rules: semantic analysis of unstructured text at scale, pattern recognition in high-dimensional data (medical imaging, fraud detection in complex transaction networks), and generative tasks where the output space is too large for enumeration.
The pattern is clear. If the data fits in rows and columns and the logic is expressible as deterministic rules, it is a spreadsheet problem. If the input is unstructured, the patterns are latent, or the output requires generation -- then AI may justify its cost.
Most enterprise data problems are the former. The $100 spreadsheet and the $5M platform produce the same output. The difference is which one gets budget approval.