Internal Companion Page

Claim-to-Reference Map

This page maps the book's main chapter claims to the references listed in the published edition of The CEO's Guide to AI Transformation. It is intentionally omitted from the main site navigation and sitemap.

Scope

A representative mapping of the book's core claims by chapter, not a sentence-by-sentence concordance.

Source Basis

Built from the published book's references section and chapter summaries in the March 2026 edition.

Source Date Note

The printed references section states that key web sources and regulatory dates were last checked on 27 March 2026.

Use this page as a traceability aid: each chapter entry paraphrases a major claim from the book and points to the primary references named in the printed edition that support that line of argument. Regulatory timelines and company facts should still be rechecked against official sources before reuse.
Introduction

Why Is Transformation Different This Time?

Core book claim: AI is a general transformative technology whose effect is determined by leadership readiness, operating model readiness, and board-level commitment rather than by model novelty alone.

AI behaves more like a general-purpose infrastructure shift than a contained software upgrade

The introduction frames AI as a transformation comparable to electricity or the internet, which changes the scale of management, control, and integration problems.

Leadership and board alignment determine whether AI becomes enterprise value or noise

The book's opening claim is that CEOs and boards, not technology teams alone, determine whether AI becomes measurable business change.

Chapter 1

Declare AI a Business Redesign, Not a Technology Program

Core book claim: AI transformation must be owned by the CEO as a business redesign tied directly to profit, loss, growth, and risk posture.

Scaled value comes from redesigning the business around AI, not from running AI as a side program

The chapter argues that AI should be treated like a business model issue with executive accountability, not as another IT workstream.

The CEO, not a delegated innovation team, is the real owner of enterprise AI outcomes

The chapter's governance position is that AI ownership belongs where capital allocation, operating model choices, and enterprise risk already sit.

Chapter 2

Redesign How Decisions Are Made

Core book claim: AI creates value through decision rights, escalation paths, and operating judgment, so leaders must define where AI advises, acts, and can be overridden.

Decision quality and decision architecture matter more than deploying models in isolation

The chapter treats AI as a redesign of how choices are made inside management and operating systems.

Human oversight, reversibility, and kill authority must be explicit

The chapter argues that decision maps must define who can override AI, when escalation triggers fire, and what remains non-delegable.

Chapter 3

Embed AI Into Core Workflows or Stop Funding It

Core book claim: AI spending should concentrate on workflow redesign with measurable business output, not on accumulating loosely related pilots.

Pilot-heavy AI programs do not automatically compound into enterprise value

The chapter's anti-pilot argument is that local productivity wins are not enough unless they change revenue, cost, throughput, or control in core workflows.

Live workflow embedding is what turns AI into operating leverage

The chapter uses real deployments to show that AI matters when it is inside the path of work rather than outside it.

Chapter 4

Fix the Data and Operating Foundations AI Depends On

Core book claim: weak data ownership, low integration, and slow operating mechanisms become hard limits as AI scales.

Data access and cross-functional integration are structural constraints, not implementation details

The chapter argues that hidden operating friction is often the real reason AI transformations stall.

AI depends on operational plumbing, ownership, and cadence, not on isolated model performance

The chapter's operating point is that AI scale depends on usable systems, named owners, and repeatable process control.

Chapter 5

Recontract With Your Workforce

Core book claim: AI changes task composition, authority, incentives, and career structure, so leaders must move from reassurance to explicit redesign and reskilling.

AI affects a broad share of work, but actual exposure depends on what gets embedded into workflows

The chapter treats labor impact as real but uneven, so observed adoption and workflow redesign are more informative signals than abstract capability estimates alone.

Adoption depends on role clarity, incentives, skills, and trusted leadership behavior

The chapter argues that workforce transition is a management system problem, not a communications exercise.

Chapter 6

Govern AI Like a Material Enterprise Risk

Core book claim: AI needs structured controls, documentation, oversight, and intervention authority comparable to other material enterprise risks.

AI governance must move from aspiration to lifecycle control

The chapter positions AI governance as an operating discipline spanning design, deployment, monitoring, and intervention.

Chapter 7

Measure What Actually Matters

Core book claim: AI should be judged by business outcomes, decision quality, speed, cost, and risk reduction rather than by counts of pilots, users, or tools.

Outcome and EBIT impact are stronger measures than tool counts or pilot counts

The chapter argues that AI measurement must look like capital allocation and operating performance measurement, not adoption theater.

Boards should review AI through operating KPIs and business outcomes

The chapter reinforces that measurement should tie to real service, risk, throughput, and financial effects.

Chapter 8

Manage AI Dependency

Core book claim: AI creates new dependency on vendors, infrastructure, regulation, and geopolitics, and leaders must make explicit control choices before lock-in forms.

AI dependency should be managed like other strategic exposures

The chapter treats dependency as a board-level issue, not a technical procurement detail.

Dependency accumulates faster than governance unless control choices are made deliberately

The chapter's strategic point is that firms must decide what to commoditize, what to own, and what to keep reversible.

  • Mustafa Suleyman, The Coming Wave (2023) is cited for the containment problem and for dependency outpacing governance.
  • EU AI Act and Digital Omnibus on AI are cited for the regulatory dimension of dependency and control.

Chapter 9

Shaping the Boardroom Discussion

Core book claim: boards must govern AI as a durable enterprise redesign issue that cuts across capital allocation, risk, operating model, talent, and control.

Board conversations on AI should be about business redesign, risk, and accountability

The chapter pushes boards away from passive technology updates toward decision-making on how the company will compete and remain governable.

Capital discipline and operating assumptions belong inside the AI board discussion

The chapter treats the boardroom as the place where AI investment logic, risk appetite, and operating model changes must be made coherent.

Chapter 10

Innovate with Expectation of Continued Uncertainty

Core book claim: leadership should assume continued uncertainty in models, costs, regulation, and competitive structure, and should optimize for optionality and reversibility.

Optionality and reversibility are better strategic defaults than long-range certainty claims

The chapter adapts classic decision framing to AI conditions where the environment is moving faster than corporate planning cycles.

Fast innovation does not remove the need for boundaries and survivability

The chapter's posture is that speed without control is not a strategic advantage.

  • Stuart Russell, Human Compatible (2019) supports the control and corrigibility argument.
  • NIST AI RMF 1.0 supports a lifecycle approach to risk under uncertainty.

Chapter 11

Competing When AI Collapses Differentiation

Core book claim: as foundation-model access broadens, differentiation shifts toward workflow control, trust, execution speed, integration quality, and cost structure.

Durable advantage comes from execution and workflow ownership once model access becomes widespread

The chapter argues that value capture will accrue to firms that change the operating system of the business, not just the toolset.

Chapter 12

When AI Fails, Who Is Actually in Control?

Core book claim: AI failures are unavoidable, and the real test of leadership is whether accountability, intervention authority, and operational controls still function under pressure.

Public failures expose whether AI oversight is operationally real

The chapter uses prominent incidents to show that reputational and control failures quickly become leadership failures.

Control requires override rights, documentation, monitoring, and people empowered to act

The chapter's control framework is that systems are only governable if the organization can stop, explain, and contain them under live conditions.

  • Stuart Russell, Human Compatible (2019) is cited for switch-off and corrigibility logic.
  • Timnit Gebru (2022) is cited for meaningful institutional authority to surface and stop harm.
  • NIST AI RMF 1.0 supports the monitoring and intervention discipline referenced in the chapter.

Conclusion

Retaining Agency

Core book claim: the CEO's enduring task is to retain agency by making explicit choices about control, dependency, and consequence before AI choices become locked in by default.

Agency comes from explicit strategic choices, not passive adoption

The conclusion consolidates the book's argument that leadership must decide what will be controlled, outsourced, regulated around, and measured.

Appendix

Is Your AI Transformation Failing?

Core book claim: early red flags show up in value capture, data access, governance readiness, and failure response long before a transformation is officially declared unsuccessful.

Red flags appear when AI remains a pilot count, a tool count, or a presentation layer

The appendix frames stalled transformation as visible through operating symptoms rather than through model performance alone.