Tyler Cowen shared his reasons why he thinks AI take-off (really, LLM dissemination through society) will be slow:
I’ve touched on this before in AI/LLM Diminishing Returns, but this has more reasons and more detail.
Lots of economic concepts I don’t grok.
I asked for a summary via DeepSeek:
The author discusses the potential economic impact of AI, emphasizing that while AI has significant capabilities, its integration into the economy will face numerous challenges. These include slow adoption in inefficient sectors like government, human bottlenecks (e.g., regulatory constraints), and the O-Ring model, where human limitations may hinder AI’s effectiveness. Historically, new technologies take time to diffuse, and GDP growth tends to remain stable around 2%, suggesting AI’s impact will be gradual. The author estimates AI might boost growth by 0.5% annually, leading to significant long-term changes but not immediate, noticeable shifts. Market prices also do not indicate rapid transformation. Overall, the author remains optimistic about AI’s potential but cautious about its near-term economic effects.
Nod.
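To make the 0.5% figure concrete, here's a quick compounding check (my own back-of-the-envelope: the ~2% baseline and 0.5% boost come from the summary above; the 30-year horizon is an assumption):

```python
# Compound an extra 0.5% of annual GDP growth on top of a ~2% baseline.
# Both rates come from the summary above; the 30-year horizon is mine.
years = 30
baseline, boosted = 1.02, 1.025

base_level = baseline ** years  # ~1.81x today's GDP
ai_level = boosted ** years     # ~2.10x today's GDP

print(f"after {years}y: baseline {base_level:.2f}x, with AI {ai_level:.2f}x, "
      f"~{(ai_level / base_level - 1) * 100:.0f}% larger economy")
```

A big deal over a generation, but nothing you'd notice year to year, which seems to be the author's point.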
I think I get it. I asked for a one-sentence summary I can share with a non-expert (e.g. my wife, if she asks):
AI has huge potential to boost the economy, but its impact will likely be gradual due to slow adoption in inefficient sectors, human bottlenecks, and historical trends showing new technologies take time to spread—expect steady growth, not an overnight revolution.
We adopt things slowly for “reasons”.
Nod.
How could he be wrong?
The author’s cautious outlook on AI’s economic impact might be wrong for several reasons:
Underestimating AI Adoption Speed: The author assumes slow adoption in inefficient sectors, but breakthroughs in AI usability, cost reduction, or regulatory changes could accelerate adoption faster than historical precedents (e.g., electricity). For example, if AI tools become extremely user-friendly or governments incentivize rapid integration, sectors like healthcare or education might adopt AI more quickly than expected.
Overlooking AI-Driven Regulatory Reform: The author highlights regulatory bottlenecks (e.g., FDA approvals), but AI itself could streamline regulatory processes by improving data analysis, risk assessment, and decision-making. This could reduce human bottlenecks and accelerate innovation in fields like pharmaceuticals.
O-Ring Model Limitations: The O-Ring model assumes human limitations will constrain AI’s productivity gains. However, AI could increasingly automate entire workflows, reducing reliance on human collaboration. In some domains, AI might operate independently, bypassing human bottlenecks altogether. (A toy numeric sketch of this follows the list.)
Exponential vs. Linear Thinking: The author’s growth estimates (e.g., 0.5% annual boost) assume a linear progression, but AI advancements could be exponential. If AI achieves artificial general intelligence (AGI) or recursive self-improvement, its economic impact might be far more transformative and rapid than anticipated.
Spillover Effects and New Industries: The author focuses on existing sectors, but AI could create entirely new industries or business models (e.g., personalized medicine, AI-driven entertainment) that drive growth in unpredictable ways. These spillover effects could amplify AI’s economic impact beyond current projections.
Market Misjudgment: While the author notes that market prices don’t forecast rapid transformation, markets can be slow to price in disruptive technologies until their impact becomes undeniable. AI’s transformative potential might be undervalued in current market assessments.
Human Adaptation: The author assumes human limitations will persist, but humans might adapt more quickly to AI than expected. For example, education and training systems could evolve to better integrate AI, enhancing human-AI collaboration and productivity.
Energy and Infrastructure Advances: The author mentions energy constraints, but breakthroughs in energy production (e.g., fusion, advanced renewables) or infrastructure could remove these barriers, enabling faster AI deployment and scaling.
Global Competition: The author’s analysis seems U.S.-centric, but global competition (e.g., China’s aggressive AI adoption) could force faster integration of AI technologies worldwide, accelerating economic growth beyond the author’s estimates.
In summary, the author’s cautious stance might underestimate the pace of AI adoption, its potential to reshape industries, and the possibility of unforeseen breakthroughs that could dramatically accelerate its economic impact.
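The O-Ring point deserves a toy sketch (my illustration, with made-up task-quality numbers; the underlying model is Kremer's multiplicative O-Ring production function):

```python
# Toy O-Ring production: output scales with the *product* of per-task
# success rates, so one weak link drags the whole process down.
# All task values below are invented for illustration.
from math import prod

def o_ring_output(task_qualities):
    return prod(task_qualities)

print(f"all human, 5 tasks at 0.90:  {o_ring_output([0.90] * 5):.2f}")          # ~0.59
print(f"AI on 4 tasks, 1 human gate: {o_ring_output([0.99] * 4 + [0.90]):.2f}")  # ~0.86
print(f"AI on all 5 tasks, no gate:  {o_ring_output([0.99] * 5):.2f}")           # ~0.95
```

The middle line is Tyler's world: one human-gated task caps the gains no matter how good the AI tasks get. The last line is the rebuttal above: automate the whole workflow and the cap goes away.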
Okay.
And one sentence that captures the most likely reason the author might be wrong:
The author might be wrong because breakthroughs in AI usability, regulatory reforms, or unexpected exponential advancements could accelerate adoption and integration far faster than historical trends suggest, leading to a more rapid and transformative economic impact.
So, we could adopt LLMs faster, because of “reasons”.
I think this whole area is over my head.
It is clear adoption is way too slow. The way we use GPT nightly in our family for homework is miles away from the way our kids’ primary and high schools are using it (i.e. not at all). A gulf. And it’s insane.
Tyler’s probably right.
Anyway.
Back to work.