I’m currently reading Reid Hoffman’s new book “Superagency”.

In an early chapter, he condenses the main ideological AI camps into four categories:

So far, at least four key constituencies have been informing the discourse: the Doomers, the Gloomers, the Zoomers, and the Bloomers

Here’s a definition of each in turn (summarized by gpt4o from the book quote included further below):

  1. Doomers – People who believe AI poses an existential threat to humanity, especially if superintelligent, autonomous AI systems become misaligned with human values. They fear worst-case scenarios where AI could decide to destroy or enslave humanity. Their concerns often center on long-term risks, including AI-driven human extinction.

  2. Gloomers – Critics of both AI and the Doomer perspective. They argue that focusing on distant, hypothetical AI dangers distracts from immediate and pressing issues, such as job losses, misinformation, bias amplification, and the erosion of personal agency. Gloomers advocate for strong government regulation and oversight to control AI development and mitigate its real-world harms.

  3. Zoomers – Optimists who believe AI’s benefits will far outweigh its risks. They advocate for minimal regulation and maximum freedom for AI developers, arguing that unrestricted innovation will lead to rapid progress and prosperity. They oppose precautionary policies and believe AI should be allowed to evolve freely without government intervention.

  4. Bloomers – Optimists who, like Zoomers, see AI as a transformative force for good but acknowledge the importance of broad public participation in shaping its development. They support iterative deployment—allowing real-world users to interact with AI to improve its safety, fairness, and effectiveness. While they are open to regulation, they emphasize engagement and accessibility over top-down control.

I’ve heard of some of these categories before, but not all of them, and not put so cleanly.

Nice, Reid!

Who personifies each category?

How about:

  1. Doomers: Eliezer Yudkowsky
  2. Gloomers: Gary Marcus
  3. Zoomers: Sam Altman
  4. Bloomers: Reid Hoffman

Here’s the relevant (fair use) quote from the book:

Doomers believe we’re on a path to a future where, in worst-case scenarios, superintelligent, completely autonomous AIs that are no longer well aligned with human values may decide to destroy us altogether, except perhaps for a small contingent of tech bros whom they’ll keep around to do the vacuuming as revenge on behalf of Roombas.

Gloomers are both highly critical of AI and highly critical of Doomers. In their estimation, the Doomer outlook serves dual purposes. First, it exists as a tacit endorsement of AI—“It’s so powerful it just might destroy us!” Second, its long-term and abstract nature misdirects attention toward the future, when our real priority should be more near-term AI risks and harms such as potential job losses, disinformation on a massive scale, amplification of existing systemic biases, and the undermining of individual agency. In general, Gloomers favor a prohibitive, top-down approach, where development and deployment should be closely monitored and controlled by official regulation and oversight bodies.

In contrast, Zoomers argue that the productivity gains and innovation AI will create will far exceed any negative impacts it produces. Generally speaking, they’re skeptical of the idea of precautionary regulation that tries to eliminate the possibility of risk or harm before real-world deployment even occurs. Instead, they argue that giving developers the space to operate as they see fit will produce the best outcomes fastest. They don’t want government regulation. They don’t want government support. They want a clear runway and complete autonomy to innovate.

Finally, there are the Bloomers. Like the Zoomers, their perspective is fundamentally optimistic. They believe AI can accelerate human progress in countless domains. At the same time, they recognize that a technology as transformative and protean as AI cannot and should not be developed and deployed in a unilateral fashion. AI is going to impact too many lives in too many ways for that. So Bloomers pursue mass engagement, in real-world conditions—which is what you get with iterative deployment. While they’re not unconditionally opposed to government regulation, they believe that the fastest and most effective way to develop safer, more equitable, and more useful AI tools is to make them accessible to a diverse range of users with different values and intentions.

Buy and read the book.

Anyway, I read that quote and pondered where I fit.

I guess I’m a Zoomer, perhaps with some Bloomer leanings.

In fact, the framework feels a bit self-serving: Reid declares himself a Bloomer, and Bloomer sounds nicer than Zoomer :)

Now, humans like to take things too far.

What’s the category before Doomers, and the one after Bloomers?

Pre-Doomers (via gpt4o):

The “Rejecters” (Luddites)

  • Belief: AI should not be developed at all—or at the very least, its use should be severely restricted or rolled back.
  • Concerns: AI threatens human autonomy, jobs, social structures, and even our understanding of intelligence itself.
  • View on Regulation: AI should be banned, extremely limited, or used only in strictly controlled circumstances.

Example Figures:

  • Jaron Lanier (AI skeptic, critic of the dehumanization of technology)
  • Noam Chomsky (argues that deep learning does not constitute true intelligence)

Post-Bloomers (via gpt4o):

The Transcendents (AI Symbiosis)

  • Belief: AI will not just be a tool; it will merge with humanity, transforming intelligence and life itself.
  • Concerns: The biggest challenge isn’t AI itself but ensuring that humans adapt to a post-AI reality.
  • View on Regulation: We should embrace AI integration, including brain-computer interfaces and AI-augmented cognition, to evolve alongside it.

Example Figures:

  • Ray Kurzweil (futurist, proponent of the Singularity, believes humans and AI will merge)
  • Elon Musk (promotes Neuralink, AI-human symbiosis)

Nice!

I asked gpt4o to come up with rhyming names, and it gave me “Roomers” for the Rejecters and “Boomers” for the Transcendents. Not good enough.

Actually, going back to Reid’s framework: I think it should be rank-ordered by permissiveness.

In that case, Zoomers are more permissive than Bloomers, so their order should be flipped, for example:

  1. Rejecters
  2. Doomers
  3. Gloomers
  4. Bloomers
  5. Zoomers
  6. Transcendents
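
For fun, here’s a toy Python sketch of that spectrum. The permissiveness scores are my own rough guesses (not Reid’s); sorting by score reproduces the flipped ordering above:

    # Toy sketch: the six camps with made-up "permissiveness" scores
    # (0 = ban AI outright, 5 = merge with it). Scores are guesses, not Reid's.
    camps = {
        "Rejecters": 0,
        "Doomers": 1,
        "Gloomers": 2,
        "Bloomers": 3,
        "Zoomers": 4,
        "Transcendents": 5,
    }

    # Sorting by permissiveness flips Zoomers and Bloomers relative to
    # the book's order (Doomers, Gloomers, Zoomers, Bloomers).
    for camp, score in sorted(camps.items(), key=lambda kv: kv[1]):
        print(f"{score + 1}. {camp}")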