
AI Got Good Fast! Organizations Didn’t… (Review of HAI Reports)
I’ve been following the Stanford HAI AI Index over the past few years. Based on their latest AI Index report, AI is now entering a new phase where managing it is becoming more complex than building it:
- 2024: Should we adopt AI?
- 2025: How do we use AI effectively?
- 2026: Why can’t our organization keep up?
What’s Actually Changing:
1. Using AI is no longer the question.
AI usage has moved incredibly fast:
- From ~55% of organizations using AI in 2023 to 88% in 2025-2026.
- Generative AI reached ~50% population adoption within just a few years.
At this point, most companies are already using AI in some way, and their focus is now on how to make it work at scale.
2. Model differences are narrowing.
Top models are converging in performance, and open models are catching up quickly. Picking the “best model” is no longer the main focus, which means your advantage will come not from the model itself but from:
- Your data,
- How AI fits into workflows,
- Whether people actually use it,
- How widely it’s deployed across the company.
In short: Execution > Model Choices. But execution is getting harder, not easier.
3. AI is getting better but harder to evaluate.
AI systems now meet or exceed human baselines on some PhD-level tasks and complex benchmarks. However:
- Benchmarks (the tests used to measure how good a model is) are hitting their limits. That’s why some people say, “We don’t actually know how good these models are anymore.”
- Real-world performance is inconsistent.
- Models can be great at one thing and fail at another.
This creates a growing gap between what AI can do and what you can reliably use: your AI may pass the test and still fail in production. More companies will need their own evaluation setups tied to their actual use cases.
4. The system around AI can’t keep up.
This is the central theme of the 2026 report. AI is scaling faster than:
- Internal processes,
- Governance,
- Evaluation methods,
- Training and education systems.
This is not just a technology problem anymore; it’s a coordination problem across systems. That’s why so many teams feel like they’re constantly behind.
5. Constraints are now structural.
AI is starting to look less like software and more like infrastructure. We’re hitting limits in:
- Access to high-quality data.
- Compute and infrastructure (concentrated supply chains).
- Transparency from model providers (increasing black-box models).
So this turns AI into a supply chain problem. The next competitive moat is more about access to data, compute, and infrastructure than just the best models.
6. Capability ≠ Reliability
One of the most important patterns in the reports is that AI can outperform humans in some complex tasks… but still fail at simple ones.
This “jagged frontier” means performance is uneven, failures are unpredictable, and systems are hard to trust in production. That makes deployment harder than expected: the bottleneck is no longer capability but consistency and reliability.
7. Workforce impact is starting to show
While AI agents are still under-deployed across most functions, we’re starting to see real effects of AI in:
- Productivity gains (~14–26% reported in some roles).
- Declines in some entry-level roles.
This isn’t just about “AI helping people” anymore. It’s starting to reshape how work itself is structured.
8. The economic impacts are becoming visible.
AI is moving beyond experimentation to generating measurable consumer value and helping companies scale revenue faster. For example, AI has crossed ~$172B in annual consumer value in the U.S. alone, with revenue growing faster than in past technology waves.
That brings pressure to move from pilots to production, show ROI, and capture value early.
9. Regulation and geopolitics are getting more complex.
Different regions are taking different paths, resulting in a more fragmented world than ever; for example:
- EU: aggressive regulation,
- U.S.: mixed/partial deregulation,
- Others: building national strategies.
At the same time, debates over AI sovereignty are intensifying: who controls the infrastructure, the data, and the models. For global organizations, this adds a new layer of complexity.
10. Trust is still a gap.
One of the most understated changes is a clear disconnect between expert and public perception:
- ~73% of experts are mostly optimistic about AI,
- Only ~23% of the public agrees.
That gap is widening, and it will shape adoption, regulatory complexity, and brand and reputational risk going forward.
The Bigger Shift
If you zoom out, comparing the HAI Index reports shows that AI went from:
early adoption & AI capability → now managing organizational ability to use AI at scale.
It means the core challenge is no longer access to AI, but building the systems around it so it actually works and delivers value.
What This Means for Leaders:
- Execution matters more than access.
- Workflows matter more than models.
- Reliability matters more than benchmarks.
- ROI is now expected, not optional.
- AI strategy increasingly includes infrastructure and policy.
Question: Most organizations are still optimizing for how to use AI. But as a leader, what do you think breaks first when AI becomes embedded everywhere in your organization, and how do you design for that now?
Source: Stanford AI Index Reports (2024, 2025, 2026)