AI's Missing Economic Impact: What Goldman Sachs Got Right and What It Missed

Goldman Sachs chief economist Jan Hatzius said AI contributed “basically zero” to US economic growth in 2025. The import-adjusted accounting is correct. But the full picture is more complex — and more important — than the headline suggests.

Straithead · March 2026 · 12 min read
  • ~$0: AI contribution to US GDP growth in 2025 (Goldman Sachs / Atlantic Council, 2026)
  • 75%: data center cost from imported components (TechRadar, 2026)
  • 80%+: companies reporting no productivity gains from AI (Tom's Hardware survey, Feb 2026)
  • 2027: the year Goldman forecasts AI will start showing measurable GDP impact (Goldman Sachs Research, 2023)

For the past two years, a powerful narrative has circulated in Washington, on Wall Street, and in the boardrooms of every major technology company: AI investment is the engine driving the US economy forward. Goldman Sachs just put a number on that claim. The number is zero.

In February 2026, Jan Hatzius — Goldman Sachs’s chief economist and one of the most closely watched macroeconomists on Wall Street — sat down with the Atlantic Council and made a statement that cut directly against the prevailing consensus. Speaking about AI’s contribution to US GDP growth in 2025, he said: “It’s much smaller than is often perceived. Basically, zero.”

This was not a fringe position. It was a carefully considered claim from the head economist of one of the world’s most influential financial institutions, backed by a specific and intellectually rigorous argument about how GDP is actually measured. Understanding that argument — and its limits — is essential for any enterprise leader making AI investment decisions in 2026.

The Core Argument

Why Imports Make the Difference

Hatzius’s argument is rooted in the mechanics of GDP accounting, not a sceptical view of AI’s potential. The US measures GDP by calculating the value of goods and services produced domestically. When a US company buys imported equipment, the spending shows up as a positive entry in the investment line — but it is simultaneously offset by a negative entry in the net exports line. The two cancel out. Net contribution to GDP: zero.
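
A minimal worked example of that cancellation, using the standard expenditure identity (the amount s is purely illustrative, not a figure from the article):

```latex
% GDP by expenditure: consumption + investment + government + net exports
Y = C + I + G + (X - M)

% A US firm spends an amount s on imported AI servers:
%   investment:   \Delta I = +s
%   imports:      \Delta M = +s   (net exports fall by s)
%   GDP change:   \Delta Y = \Delta I + (\Delta X - \Delta M) = s - s = 0
```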

The problem for AI is that roughly 75% of the cost of building an AI data center comes from imported components. The most critical of these — high-bandwidth memory, advanced logic chips, server components — are manufactured primarily in Taiwan (TSMC) and South Korea (SK Hynix, Samsung). NVIDIA designs its chips in California but fabricates them in Taiwan. The money flows out of the US economy before it can contribute to domestic GDP growth.

“A lot of the AI investment that we’re seeing in the US adds to Taiwanese GDP, and it adds to Korean GDP — but not really that much to US GDP.”

Jan Hatzius, Chief Economist, Goldman Sachs — Atlantic Council interview, 2026

This means that the hyperscalers — Meta, Amazon, Google, Microsoft — in spending a combined $480 billion on AI infrastructure in 2025, were, from a GDP accounting perspective, primarily enriching the semiconductor supply chains of East Asia. That is not a criticism of their strategy. It is a structural consequence of where the global AI hardware supply chain currently sits.

The Disputed Narrative

How the Misreporting Happened

The story of how AI came to be credited with driving US economic growth is worth understanding, because it reveals how easily high-level economic narratives can mislead.

In mid-2025, Harvard economics professor Jason Furman posted on X that investment in information processing equipment and software had been responsible for 92% of US GDP growth in the first half of the year. The Federal Reserve Bank of St. Louis separately estimated that AI-related investments accounted for 39% of GDP growth in Q3 2025. Both figures were widely cited — by analysts, by journalists, and notably by President Trump, who cited AI investment as a reason to resist state-level AI regulations.

The problem, as Goldman’s Joseph Briggs noted, is that these figures captured a broader category of information processing investment rather than AI-specific capital spending — and crucially, they did not adjust for imports. Once imports are properly accounted for, the picture changes dramatically. The contribution of AI investment to US GDP shrinks from a headline-grabbing 39-92% range down to, in Hatzius’s words, basically zero.

Straithead Analysis

This is a structural measurement problem, not evidence that AI is failing. GDP measures domestic production, not the strategic value of infrastructure investment. The same dynamic played out with early internet infrastructure buildout — spending on imported hardware showed minimal near-term GDP impact, while the long-term productivity gains were transformative. The lesson is not to stop investing. It is to stop using GDP contribution as the primary metric for evaluating AI investment in its infrastructure phase.

AI’s Reported vs. Actual GDP Contribution — United States, 2025 (Straithead Analysis)

How different economists measured AI’s contribution to US GDP growth in 2025 — and why the figures diverge so dramatically depending on methodology.

Source | Measure | Figure
Furman / Harvard | Info processing investment share of GDP growth, H1 2025 | 92% of GDP growth
Federal Reserve Bank of St. Louis | AI-related investment share, Q3 2025 | 39% of GDP growth
Goldman Sachs | Import-adjusted AI investment contribution, full year 2025 | ~0%
Goldman Sachs forecast | Expected measurable AI GDP impact | From 2027 onwards

The divergence between the 92% and ~0% figures is explained by methodology: Furman and the St. Louis Fed counted gross investment in information processing equipment, which includes a broad range of tech spending and does not subtract imports. Goldman Sachs applied import adjustment — subtracting the value of hardware manufactured overseas — which reflects how US GDP is actually calculated. Both methodologies are internally consistent. They are measuring different things.

Sources: Jason Furman / Harvard (X post, 2025) · Federal Reserve Bank of St. Louis (Q3 2025) · Goldman Sachs / Atlantic Council (Jan Hatzius, Feb 2026) · Straithead analysis
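
To make the methodological difference concrete, here is a minimal sketch of the two conventions side by side. The dollar figures are hypothetical placeholders, not the actual BEA, Furman, or Goldman inputs; the only number taken from this article is the roughly 75% import share.

```python
# Illustrative comparison: gross vs import-adjusted contribution of AI capex
# to GDP growth. All dollar figures below are hypothetical placeholders; the
# ~75% import share is the figure cited in the article (TechRadar, 2026).

total_gdp_growth = 400.0   # hypothetical annualised GDP growth, $bn
ai_capex_growth = 150.0    # hypothetical increase in AI-related investment, $bn
import_share = 0.75        # share of that capex spent on imported hardware

# Gross convention (Furman / St. Louis Fed style): count the full rise in
# investment, with no deduction for the imported component.
gross_contribution = ai_capex_growth / total_gdp_growth

# Import-adjusted convention (Goldman style): the imported share enters GDP
# with an offsetting negative entry under net exports, so only the
# domestically produced portion counts toward growth.
domestic_capex_growth = ai_capex_growth * (1 - import_share)
adjusted_contribution = domestic_capex_growth / total_gdp_growth

print(f"Gross share of GDP growth:           {gross_contribution:.0%}")
print(f"Import-adjusted share of GDP growth: {adjusted_contribution:.0%}")
# With these placeholder numbers: ~38% gross vs ~9% import-adjusted. The
# direction of the collapse is the point; the real adjusted figure, which
# nets out further offsets, lands close to zero.
```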

The Productivity Gap

Beyond GDP: The Productivity Problem Is Real

Hatzius’s GDP argument is technically correct but somewhat narrow. A more troubling signal sits alongside it: the productivity data. A survey of over 6,000 executives published in early 2026 found that more than 80% of companies report no measurable productivity gains from AI, despite billions in investment. McKinsey’s 2025 State of AI survey found that meaningful enterprise-wide EBIT impact from AI remains rare, with only around 6% of respondents qualifying as “AI high performers” achieving 5%+ EBIT impact.

These numbers sit in direct tension with the vendor-reported figures. OpenAI’s enterprise report claims 75% of surveyed workers say AI has improved the speed or quality of their output. NVIDIA’s State of AI survey reports 88% of respondents say AI has increased annual revenue. The contradiction is partly a sampling issue — enterprise AI leaders surveyed by the vendors who sold them AI are not a representative sample of the broader economy — and partly a genuine gap between where AI is working and where it is not.

Source | Metric | Finding
Goldman Sachs / Atlantic Council | AI contribution to US GDP, 2025 | ~Zero (import-adjusted)
Tom’s Hardware / 6,000 executives | Companies with no productivity gains | 80%+
McKinsey State of AI 2025 | Enterprises with 5%+ EBIT impact | ~6% (“high performers”)
EY Work Reimagined Survey | Employees using AI only for basic tasks | 83% (vs 5% advanced use)
Federal Reserve Bank of Chicago | AI as driver of broad economy | “Not as big as portrayed”
Goldman Sachs Research (2023) | Measurable AI GDP impact expected | 2027 onwards

Why the Gap Exists

Hardware Alone Does Not Produce Productivity

The pattern here is structurally familiar. Every major technology cycle — mainframes, PCs, the internet — showed a similar gap between infrastructure investment and measurable productivity returns. Economists call it the productivity paradox, after Robert Solow’s 1987 observation that you could see the computer age everywhere but in the productivity statistics. It took until the mid-1990s for IT-driven productivity gains to show up in the data.

The reasons for the current gap are not mysterious. As the EY 2025 Work Reimagined Survey found, 88% of employees are using AI at work — but 83% use it only for basic tasks like search and document summarisation. Only 5% are using AI in advanced ways that genuinely transform their work. Buying a GPU cluster does not automatically produce business value. Redesigning workflows around AI capability does — but that requires time, talent, change management, and governance infrastructure that most organisations have not yet built.

“The near-term returns are more likely to come from how workers actually use the technology than from the capital expenditure itself.”

Goldman Sachs Research, 2026
The Sovereign Dimension

The Import Dependency Is a Strategic Problem, Not Just an Accounting One

Hatzius frames the import dependency primarily as an accounting issue. But it is also a strategic and geopolitical one. The fact that 75% of the cost of an AI data center flows overseas to Taiwan and South Korea means that the US AI infrastructure buildout is structurally dependent on a semiconductor supply chain that sits in one of the world’s most geopolitically sensitive regions.

TSMC’s leading-edge fabs in Taiwan fabricate the chips that power virtually every major AI system in the world. Samsung and SK Hynix in South Korea supply the high-bandwidth memory without which modern AI training cannot function. Any sustained disruption to that supply chain — whether from geopolitical tension, natural disaster, or export control escalation — would halt the AI infrastructure expansion that technology companies are betting their market capitalisations on.

This is precisely why the CHIPS Act, domestic semiconductor investment, and the push for TSMC and Samsung manufacturing capacity in the US and Europe have become national policy priorities. The GDP accounting anomaly Hatzius identifies is, at a deeper level, a manifestation of a structural vulnerability in the global AI supply chain.

Enterprise Implication

For enterprise technology leaders, the import dependency question has a direct procurement dimension. Hardware supply chain risk — particularly around advanced GPU availability, lead times, and geographic concentration — should now be a first-order consideration in AI infrastructure planning. The shortage cycles of 2023-2024, when GPU lead times stretched to 52 weeks, are a preview of what supply disruption could look like at scale.

What Changes in 2027

Goldman’s Own Forecast Points to a Real Inflection

It is worth noting that Hatzius’s “basically zero” statement is explicitly about 2025. Goldman Sachs’s own 2023 research forecast that AI would begin to show measurable impact on US GDP and labour productivity starting in 2027. The Chicago Fed’s Austan Goolsbee similarly noted that while AI has “not been as big a driver of the economy as some have portrayed,” he did not dismiss its long-term potential.

The mechanism for the expected 2027 inflection is not more hardware spending. It is the maturation of the software, workflow, and human capability layers that sit on top of the infrastructure. Once organisations move beyond basic summarisation tasks and genuinely redesign workflows around AI capability — automating decision processes, compressing product development cycles, transforming customer-facing operations — the productivity gains should begin to appear in aggregate economic data.

McKinsey’s data already shows this pattern at the firm level: AI high performers — the 6% of companies achieving 5%+ EBIT impact — are three times more likely to have fundamentally redesigned individual workflows. They invest more than 20% of their digital budgets in AI. They scale across multiple business functions. The infrastructure investment was necessary but not sufficient. The transformation of how that infrastructure is actually used is where the economic value will ultimately accrue.

Technical Perspective

Why the GPU Import Problem Cannot Be Solved Quickly

To understand why the import dependency Hatzius identifies is structural rather than incidental, it helps to understand what actually goes inside an AI data center — and why almost none of it can currently be sourced domestically.

The critical path for any AI training or inference workload runs through three hardware layers. First, the GPU or AI accelerator — NVIDIA’s H100/H200 and B200 Blackwell chips, which are designed in California but fabricated exclusively at TSMC’s fabs in Taiwan using 4nm and 3nm process nodes that no US foundry currently operates. Second, high-bandwidth memory (HBM) — the specialised stacked memory that feeds data to the GPU fast enough to keep it busy — manufactured almost entirely by SK Hynix and Samsung in South Korea, with Micron (US) holding a distant third position. Third, the networking fabric — InfiniBand switches, Ethernet ASICs, and interconnect hardware that links thousands of GPUs together into a coherent training cluster — supplied primarily by NVIDIA (Mellanox), Broadcom, and Marvell, with manufacturing again concentrated in Asia.

The result is that roughly 75% of the capital cost of a large-scale AI data center flows directly overseas before a single training job runs. The CHIPS Act was designed precisely to address this — but semiconductor fab construction operates on 3-5 year timelines. TSMC’s Arizona fab began producing chips in late 2024, but at 4nm rather than the leading-edge 2nm nodes that will power next-generation AI accelerators. Closing the fabrication gap is a decade-long project, not a policy-cycle fix.

The Sovereignty Mathematics

A single NVIDIA H100 server costs approximately $200,000. Of that, roughly $30,000 is the GPU die fabricated at TSMC in Taiwan. The HBM3 memory stacked on it adds another $20,000, sourced from SK Hynix in South Korea. The networking cards, power infrastructure, and cooling hardware add further import exposure. Before a US data center operator powers on a new AI server, the majority of its capital cost has already left the US economy — permanently, from a GDP accounting perspective.
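
A back-of-the-envelope version of that calculation, using the component figures quoted above. The remaining line items and their import exposure are illustrative assumptions, not sourced data.

```python
# Rough import-exposure calculation for a single AI server. The GPU die and
# HBM figures come from the article; every other line item and its import
# flag is an illustrative assumption.

server_cost = 200_000  # approximate cost of one H100 server (article figure, USD)

components = {
    # name: (cost_usd, imported_from_overseas)
    "GPU die (TSMC, Taiwan)":          (30_000, True),   # article figure
    "HBM3 memory (SK Hynix, Korea)":   (20_000, True),   # article figure
    "Networking / interconnect cards": (25_000, True),   # assumed
    "Power and cooling hardware":      (40_000, True),   # assumed, largely imported
    "Chassis, integration, labour":    (85_000, False),  # assumed domestic remainder
}

imported = sum(cost for cost, is_imported in components.values() if is_imported)
print(f"Import-exposed share of server cost: {imported / server_cost:.0%}")
# With these assumptions, well over half the capital cost of the server has
# left the US economy before it runs its first training job; at the level of
# a whole data center the article puts the imported share at roughly 75%.
```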

AI Hardware Supply Chain: GPU vs Quantum — Sovereignty Risk Comparison (Straithead Analysis)

Classical GPU stack
Component | Supplier / Location | Sovereignty Risk
GPU Die (H100/B200) | TSMC — Taiwan (4nm / 3nm) | Very High
HBM3/HBM3e Memory | SK Hynix / Samsung — South Korea | Very High
InfiniBand / Networking ASICs | TSMC / Samsung — Asia | High
Power / Cooling Hardware | Mixed — significant Asia exposure | Medium

Quantum (QPU) stack
Component | Supplier / Location | Sovereignty Risk
IBM Quantum Chips | IBM Research — New York, USA | Low
IQM Superconducting QPUs | IQM — Espoo, Finland (EU) | Low
Dilution Refrigerators | Bluefors — Helsinki, Finland (EU) | Low
ORCA Photonic QPUs | ORCA Computing — London, UK | Low

Sources: TSMC · SK Hynix · Samsung · IBM Research · IQM Quantum Computers · Bluefors · ORCA Computing · Straithead analysis, March 2026

The Quantum Alternative

Quantum Computing as the Long-Term Sovereignty Answer

Here is where the analysis becomes genuinely forward-looking, and where Straithead’s coverage of quantum computing connects directly to the Goldman Sachs observation. The import dependency problem is fundamentally a consequence of the physical architecture of classical AI hardware — specifically, the way transformer-based neural networks require enormous parallel floating-point computation that maps naturally onto GPU silicon. Quantum computing does not share this architectural dependency.

Quantum processors — QPUs — operate on fundamentally different physics. Superconducting qubits, the dominant hardware modality from IBM, Google, and IQM, require only dilution refrigerators operating near absolute zero and microwave control electronics. The supply chain for superconducting quantum hardware is substantially more distributed than the GPU supply chain. IBM fabricates its quantum chips at its own facilities in New York. IQM, the Finnish quantum hardware company whose AaltoQ20 system we covered earlier this year, manufactures its processors in Europe. The cryogenic systems — supplied by Bluefors in Finland — are European-sourced. A sovereign quantum computing supply chain is not merely theoretically possible — it is already being built.

This matters because quantum hardware is specifically well-suited to the class of problems where classical AI infrastructure is most economically inefficient. Drug discovery, materials simulation, financial optimisation, logistics planning, cryptography — these are all domains where quantum algorithms offer theoretical speedups, in some cases exponential, over classical approaches. They are also domains where the current AI infrastructure spend is concentrated and where the productivity gap identified by McKinsey and others is most acute.

“GPUs will remain king, but ASIC-based accelerators, chiplet designs, analog inference and even quantum-assisted optimizers will mature. Maybe a new class of chips for agentic workloads will emerge.”

Kaoutar El Maghraoui, Principal Research Scientist, IBM — IBM Think, March 2026
Hybrid Architecture

The Hybrid Quantum-Classical Data Centre Is Already Emerging

The transition from GPU-only AI infrastructure to hybrid quantum-classical architecture is no longer speculative. It is happening now, driven by three converging developments.

First, NVIDIA’s NVQLink. At GTC 2025, NVIDIA launched NVQLink — a direct communication interface between quantum processors (QPUs) and NVIDIA GPUs. This enables quantum co-processors to be embedded within existing GPU-based data centre architectures, handling optimisation and sampling workloads while classical GPUs manage training and inference. Seventeen quantum computing companies and eight US Department of Energy national laboratories have already joined the NVIDIA quantum ecosystem built around this architecture.

Second, European sovereign quantum infrastructure. On March 17, 2026 — one week ago — Scaleway, the French cloud provider, achieved full compatibility between its Quantum-as-a-Service platform and NVIDIA CUDA-Q, allowing developers to run hybrid quantum-classical code across NVIDIA Blackwell Ultra GPUs and physical quantum processors from IQM and AQT. Scaleway is explicitly positioning itself as a sovereign cloud hub for European quantum researchers — a direct response to the import dependency problem Hatzius identified, but approached from the quantum side rather than the classical silicon side.

Third, ORCA Computing’s photonic approach. UK-based ORCA Computing has deployed a hybrid quantum-classical platform linking its photonic quantum processors with NVIDIA CUDA-Q units at the Poznan Supercomputing and Networking Center in Poland. Photonic quantum systems operate at room temperature — eliminating the dilution refrigerator requirement — and use telecom-compatible hardware that integrates naturally into existing data centre environments. This further reduces the supply chain concentration risk that defines classical GPU infrastructure.
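
For readers who want a sense of what hybrid quantum-classical code looks like in practice, below is a minimal sketch using CUDA-Q's Python interface. It assumes the cudaq package is installed; the target shown is a local simulator placeholder, since the backend identifiers for specific providers such as Scaleway, IQM, or AQT depend on their deployments.

```python
# Minimal hybrid quantum-classical sketch with NVIDIA CUDA-Q (Python API).
# The classical host program defines a quantum kernel, dispatches it to a
# backend, and consumes the measurement statistics it returns.
import cudaq

# Placeholder target: a local CPU simulator. Swap in a GPU simulator or a
# provider-specific hardware backend where available.
cudaq.set_target("qpp-cpu")

@cudaq.kernel
def bell():
    # Two-qubit Bell state: the canonical smoke test for any QPU backend.
    q = cudaq.qvector(2)
    h(q[0])
    x.ctrl(q[0], q[1])
    mz(q)

# Classical side: run the kernel for 1,000 shots and inspect the counts.
counts = cudaq.sample(bell, shots_count=1000)
print(counts)  # expect roughly equal populations of '00' and '11'
```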

Hardware Type | Primary Fabrication Location | Sovereignty Risk
NVIDIA H/B-series GPUs | TSMC, Taiwan | Very High
HBM3/HBM3e Memory | SK Hynix / Samsung, South Korea | Very High
IBM Superconducting QPUs | IBM Research, New York, USA | Low
IQM Superconducting QPUs | IQM, Espoo, Finland | Low (EU)
ORCA Photonic QPUs | ORCA Computing, UK | Low (UK)
Bluefors Dilution Refrigerators | Bluefors, Helsinki, Finland | Low (EU)

Timeline Reality Check

What Quantum Can and Cannot Do in 2026

Intellectual honesty requires a clear-eyed assessment of where quantum hardware actually is in 2026, not where the most optimistic roadmaps project it will be.

Quantum computers today operate in what researchers call the NISQ era — Noisy Intermediate-Scale Quantum. Current systems from IBM, Google, IQM, and others operate with 50–1,000 physical qubits, but qubit error rates remain high enough that complex computations require significant error-correction overhead. IBM’s fault-tolerant quantum roadmap targets late 2029. Google’s Willow chip demonstrated scalable error correction with 105 qubits in late 2024, but fault-tolerant general-purpose quantum computation remains years away.
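
A rough sense of why that overhead matters for timelines, with numbers that are assumptions for illustration only. The physical-to-logical ratio below is a commonly cited order-of-magnitude figure for surface-code-style error correction at today's error rates, not a number from the article or from any vendor roadmap.

```python
# Order-of-magnitude illustration of error-correction overhead in the NISQ era.
# Both inputs are assumptions chosen for illustration, not sourced figures.

physical_per_logical = 1_000   # assumed physical qubits per error-corrected logical qubit
logical_qubits_needed = 100    # hypothetical size of a useful fault-tolerant application

physical_needed = physical_per_logical * logical_qubits_needed
largest_today = 1_000          # upper end of the 50-1,000 qubit range cited above

print(f"Physical qubits required: {physical_needed:,}")   # 100,000
print(f"Largest current systems:  ~{largest_today:,}")    # ~1,000
# A two-order-of-magnitude gap: hence roadmaps that target fault tolerance
# toward the end of the decade rather than in the next product cycle.
```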

In 2026, quantum hardware is genuinely useful for a specific class of problems — optimisation, quantum chemistry simulation, cryptography research, and quantum machine learning sampling — but it is not a near-term replacement for the GPU clusters powering large language models. The honest framing is not “quantum will replace GPUs.” It is “quantum will join GPUs as a co-processor for specific workloads — and crucially, quantum hardware has a sovereignty profile that GPU hardware does not.”

Bain’s analysis describes the future compute environment as a “mosaic” — CPUs, GPUs, and QPUs each handling the workload classes they are best suited for. McKinsey’s 2025 quantum report emphasises that quantum computing addresses AI’s core constraints in specific domains: algorithmic efficiency for optimisation problems, memory walls in quantum chemistry, and compute bottlenecks in high-dimensional data processing. For the UK government — which pledged up to £1 billion for quantum computing procurement as part of a £2.5 billion AI and quantum sovereignty initiative — this mosaic architecture is precisely what it is building toward.

The Sovereignty Conclusion

Goldman Sachs identified the import dependency as an accounting problem. It is also a strategic one. The long-term answer to compute sovereignty is not simply building more classical semiconductor fabs — though that matters — it is developing a hybrid compute stack where quantum hardware handles the problem classes it is architecturally suited for, using supply chains that can be domesticated. Europe and the UK are already building this. The US is funding it. The question for enterprise technology leaders is not whether this transition will happen, but whether their organisations will be ready for it when it does.

From GPU Dependency to Quantum-Classical Sovereignty — The Roadmap to 2032 (Straithead Analysis)

  • 2024 – NISQ Era: 50–1,000 qubit systems. High error rates. Research and pilots only.
  • 2026 (now) – Hybrid Emergence: NVQLink live. Scaleway CUDA-Q sovereign cloud. Enterprise pilots at scale.
  • 2027 – AI GDP Impact: Goldman Sachs forecasts measurable AI contribution to GDP. Quantum co-processors enter data centres.
  • 2029 – Fault Tolerance: IBM targets a fault-tolerant quantum computer. Google Willow successor. First commercial quantum advantage.
  • 2032 – Mosaic Compute: CPU + GPU + QPU standard architecture. Sovereign quantum supply chains operational.

Phase 1 (2024–2026): Infrastructure investment. Import dependency exposed. Hybrid architectures emerge.
Phase 2 (2027–2029): AI GDP impact materialises. Fault-tolerant quantum arrives. Supply chains diversify.
Phase 3 (2030–2032): Mosaic compute standard. Quantum sovereignty achieved in EU, UK, USA.

Sources: Goldman Sachs Research · IBM quantum roadmap · Google Willow · NVIDIA NVQLink · Scaleway / CUDA-Q (March 2026) · IDTechEx · Straithead analysis

The Honest Takeaway for Enterprise Leaders

Goldman Sachs is right on the accounting. AI infrastructure investment in the US in 2025 added almost nothing to GDP in the import-adjusted calculation that actually matters for measuring domestic economic output. The narrative that AI was driving US growth was largely built on figures that ignored where the money actually went.

But “basically zero” GDP contribution in 2025 is not evidence that the investment was wrong. It reflects the structural reality of the global semiconductor supply chain, and the time lag between infrastructure investment and productivity-driven economic output that has characterised every major technology transition.

The strategic implication is not to pause AI investment. It is to redirect the focus. The organisations that will capture economic value from AI are not the ones that bought the most GPUs in 2025. They are the ones that redesigned their workflows, retrained their workforce, and built the governance infrastructure to turn AI capability into measurable business outcomes — before 2027, when Goldman Sachs expects the macroeconomic data to begin catching up with the investment thesis.

The $480 billion has been spent. The question now is whether the organisations that spent it know how to use it.

Sources & References

  • Jan Hatzius, Goldman Sachs Chief Economist — Atlantic Council interview, February 2026: Gizmodo coverage
  • Tom’s Hardware — “Over 80% of companies report no productivity gains from AI”, February 2026
  • McKinsey Global Institute — State of AI 2025, November 2025: mckinsey.com
  • EY Work Reimagined Survey 2025 — 15,000 employees, 1,500 employers, 29 countries: EY Global
  • TechRadar — “75% of data center cost from imported components”, 2026: techradar.com
  • Goldman Sachs Research — AI GDP forecast from 2027 onwards, 2023
  • Austan Goolsbee, Federal Reserve Bank of Chicago — AI economic impact comments, 2026
  • Jason Furman, Harvard Economics — AI investment as % of GDP growth, X post, 2025
  • Federal Reserve Bank of St. Louis — AI investment as 39% of Q3 2025 GDP growth
  • Prism News — Goldman Sachs AI GDP analysis, March 2026: prismnews.com
