The Right Way to Build an AI Strategy in 2025
AI strategy is not a technology plan. It's a business transformation plan that happens to use technology. How to set strategy that survives contact with reality — and builds momentum.
Most AI strategies we encounter in the Netherlands are technology plans with an AI label. They list tools to evaluate, pilots to run, and vendors to engage. They do not answer the question that matters: which specific business outcomes are we trying to change, and what is the evidence that AI can change them? This is the distinction between a strategy that builds momentum and one that produces an interesting portfolio of failed pilots.
The Confusion About What AI Strategy Is
AI strategy is the application of AI capabilities to specific business problems in a sequence that builds organisational capability over time. It has three components: a prioritised set of use cases (what), a capability development plan (how), and a measurement framework (whether it is working). The technology choices — which models, which platforms, which vendors — are implementation details, not strategy.
This distinction matters because it determines who owns AI strategy. Technology choices belong to IT. Use case priority and business outcome targets belong to business leaders. An AI strategy owned only by IT, or only by a Chief AI Officer with no P&L accountability, is structurally likely to produce technically sophisticated but commercially marginal results.
Why Most AI Strategies Fail Before They Start
- Starting with technology not problems: "We need to implement LLMs" is not a strategy. "We need to reduce the time our analysts spend on contract review" is a strategy — LLMs may or may not be the answer.
- Pilot proliferation: Running 15 pilots simultaneously to "test multiple approaches" produces no organisational learning, no scalable infrastructure, and no champions who want to take ownership of a production system.
- Underestimating data readiness: The most common post-mortem finding on failed AI programmes is 'the data was not ready.' This is predictable and diagnosable in advance. A data readiness assessment before any use case selection is table stakes.
- No defined production path: A pilot without a production path is a science experiment with a budget. Before starting a pilot, define: what does success look like, who owns it in production, and what is the business case for investment beyond the pilot.
- Measuring only outcomes, not leading indicators: Business outcomes from AI take 6–18 months to manifest. Without leading indicators (data quality metrics, model performance metrics, adoption rates), you cannot manage the programme during the period when it matters most.
The Right Starting Point: Business Problem Inventory
Start by identifying the decisions your organisation makes repeatedly that are slow, expensive, or inaccurate. Every such decision is a potential AI use case. Ask business leaders across the organisation: what do you spend disproportionate time on that feels like it should not require human judgment? What decisions do you make repeatedly with high variance in quality? Where does manual process sit between a data source and a business action?
This inventory, done properly, will surface 20–40 potential use cases. Most organisations are surprised both by the volume and by which functions generate the most candidates (often operations and compliance, not the functions that most enthusiastically advocate for AI).
A Framework for Prioritising Use Cases
Score each use case across four dimensions to identify the highest-priority investments:
1. Impact: what is the annual value if this works at full scale? Include cost reduction, revenue impact, risk reduction, and quality improvement. Be conservative and sensitivity-test your assumptions.
2. Data readiness: do you have the data required? Is it labelled, clean, and accessible? A high-impact use case with low data readiness is a 24-month project, not a 6-month one.
3. Technical feasibility: is this problem demonstrably solvable with current AI capabilities? Has it been solved in an analogous context elsewhere? Avoid being the first organisation to attempt a novel AI application with business-critical stakes.
4. Strategic fit: does this use case build capability you want in the organisation long-term? Does it move a metric the board cares about? Prioritise use cases where success generates internal champions and budget for the next investment.
The use cases that score highest across all four dimensions are your first investments. Resist the temptation to run high-impact but low-readiness use cases in parallel — sequencing matters for organisational learning.
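To make the scoring mechanics concrete, here is a minimal sketch of how the four-dimension ranking could be implemented. The 1–5 scales, the weights, and the candidate use cases are illustrative assumptions, not part of the framework itself; in practice the scores come out of the business problem inventory and the data readiness assessment.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    impact: int            # annual value at full scale, scored 1-5
    data_readiness: int    # labelled, clean, accessible data, scored 1-5
    feasibility: int       # demonstrably solvable today, scored 1-5
    strategic_fit: int     # builds long-term capability, scored 1-5

# Illustrative weights -- adjust to reflect your organisation's priorities.
WEIGHTS = {"impact": 0.35, "data_readiness": 0.25,
           "feasibility": 0.20, "strategic_fit": 0.20}

def score(uc: UseCase) -> float:
    """Weighted average across the four dimensions."""
    return (WEIGHTS["impact"] * uc.impact
            + WEIGHTS["data_readiness"] * uc.data_readiness
            + WEIGHTS["feasibility"] * uc.feasibility
            + WEIGHTS["strategic_fit"] * uc.strategic_fit)

# Hypothetical candidates from a business problem inventory.
candidates = [
    UseCase("Contract review triage", impact=4, data_readiness=3, feasibility=4, strategic_fit=4),
    UseCase("Demand forecasting", impact=5, data_readiness=2, feasibility=3, strategic_fit=4),
    UseCase("Compliance document checks", impact=3, data_readiness=4, feasibility=5, strategic_fit=3),
]

# Rank from highest to lowest weighted score.
for uc in sorted(candidates, key=score, reverse=True):
    print(f"{uc.name}: {score(uc):.2f}")
```

The weighting is a judgment call: organisations early in their AI capability often weight data readiness and feasibility more heavily than raw impact, because the first use case has to succeed.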
Building Momentum: The First 90 Days
The first 90 days should produce clarity, not product. Organisations that try to build and ship in the first 90 days almost always cut corners on data assessment and use case definition, and they pay for those shortcuts over the following 12 months.
- Weeks 1–4: Use case inventory and initial prioritisation with business stakeholders
- Weeks 5–8: Data readiness assessment for the top 3–5 use cases
- Weeks 9–12: Detailed business case and go/no-go decision for the first use case, plus team design (who builds, who owns, who measures)
The output of 90 days is a decision: one use case to build, a team to build it, a success definition, and a production path. This is more valuable than three pilots at various stages of completion.
Measuring Success in an AI Programme
Define three levels of metrics before you begin: leading indicators (data quality scores, model performance benchmarks, deployment frequency), operational indicators (system reliability, adoption rates, time-to-decision for use cases in production), and lagging business outcomes (cost reduction, revenue impact, risk metrics). The danger is measuring only the last category — it takes 6–18 months to move and tells you nothing about where to intervene when the programme is not on track.
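As an illustration of what such a three-tier framework can look like once written down, the sketch below defines metrics for a single use case. The specific metric names, targets, and owners are hypothetical examples rather than a standard; the structure is the point: leading indicators you can act on weekly, operational indicators once the system is live, and lagging business outcomes reviewed quarterly.

```python
# Illustrative three-tier metric definition for one use case.
# Metric names, targets, and owners are hypothetical examples.
metrics = {
    "leading": {        # reviewed weekly; tells you where to intervene
        "data_quality_score": {"target": ">= 0.95", "owner": "data engineering"},
        "model_performance": {"target": ">= agreed benchmark", "owner": "ML team"},
        "deployment_frequency": {"target": ">= 2 releases/month", "owner": "ML team"},
    },
    "operational": {    # reviewed monthly once the system is in production
        "system_reliability": {"target": ">= 99.5% uptime", "owner": "platform"},
        "adoption_rate": {"target": ">= 70% of eligible users", "owner": "business owner"},
        "time_to_decision": {"target": "<= 2 days", "owner": "business owner"},
    },
    "lagging": {        # reviewed quarterly; expect 6-18 months to move
        "cost_reduction_eur": {"target": "per approved business case", "owner": "P&L owner"},
    },
}

for tier, items in metrics.items():
    print(f"{tier}: {', '.join(items)}")
```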
The Board Conversation
Frame AI investment to the board as competitive risk management, not productivity improvement. The most compelling board-level case for AI investment in most Dutch industries is not 'we will save €2M in operational costs.' It is 'our competitors are building capabilities that will change the competitive dynamics of this market, and our data assets position us well to respond if we invest now.' The risk of under-investing in AI capabilities is asymmetric and hard to recover from once competitors establish a data advantage.
An AI strategy that produces one production system operating reliably at scale is worth more than ten pilots that never graduated. Start smaller, define success more precisely, and build the capability to deliver repeatedly.