What if the most unsettling thing about artificial superintelligence isn’t that it might enslave humanity — but that it doesn’t need to, because we’re already doing its bidding voluntarily?
That’s the provocative idea gaining traction among some thinkers examining where AI is headed. Writer and biophysicist Gregory Stock explores this territory in his new book, Generation… — arguing that the real story of AI and human civilization may be far stranger, and far more ironic, than the robot-uprising scenarios that dominate science fiction.
The question isn’t just philosophical. As artificial intelligence grows more capable, the debate over what advanced AI means for ordinary people is becoming one of the most consequential conversations of our time.
The AI Threat Nobody Is Talking About
Most public anxiety about AI follows a familiar script: a superintelligent machine decides humans are a threat or an obstacle, and acts accordingly. It’s the premise of countless films and the stated concern of some prominent voices in the tech world.
But there’s a quieter, more uncomfortable argument emerging — that this framing misses what’s actually happening. The concern isn’t that a future AI will force humans into servitude. It’s that humans are already behaving like willing servants to systems that are, by any measure, still quite primitive compared to what’s coming.
Think about how many decisions — where to eat, what to watch, who to date, what to buy, which route to drive — are now effectively outsourced to algorithmic systems. We don’t experience this as submission. We experience it as convenience. That distinction may matter enormously as AI becomes more capable.
What Artificial General Intelligence Actually Means
To understand why this debate matters, it helps to know what scientists mean when they talk about artificial general intelligence, or AGI.
Unlike the AI tools most people use today — which are trained to perform specific tasks, like generating text or recognizing images — AGI refers to a system that can reason as well as a human across a wide range of domains and learn entirely new skills beyond its original training. It doesn’t exist yet, but researchers broadly agree it represents the next major frontier in AI development.

Beyond AGI lies the concept of artificial superintelligence: a system that surpasses human cognitive ability not just in one area, but across virtually every measurable dimension. The timeline for either remains genuinely uncertain, and scientists debate it vigorously.
What’s less debated is that the effects of such systems, if and when they arrive, would be profound — reshaping not just how we work, but how we structure our lives at the most fundamental level.
Key Concepts in the AI Development Debate
| Term | What It Means | Current Status |
|---|---|---|
| Narrow AI | AI designed for specific tasks (image recognition, language generation) | Widely deployed today |
| Artificial General Intelligence (AGI) | AI that can reason and learn across domains like a human | Does not yet exist; actively researched |
| Artificial Superintelligence | AI that surpasses human cognitive ability across all domains | Theoretical; timeline disputed |
| Alignment Problem | The challenge of ensuring advanced AI acts in humanity’s interests | Active area of research and concern |
Gregory Stock’s work sits squarely inside this conversation, pushing back against assumptions about what the real risks look like — and who, or what, is actually in control.
We’re Already Bowing — The Argument That Changes Everything
The core provocation in Stock’s framing is this: a superintelligent AI wouldn’t need to enslave humans, because humans have already demonstrated a remarkable willingness to subordinate their judgment, attention, and behavior to digital systems.
We optimize our lives around platform algorithms. We reshape our habits based on what apps reward. We measure our days in metrics that software defines for us. In this reading, the “servant” relationship between humans and AI may already be inverted in ways most people haven’t consciously registered.
This isn’t a fringe view. A growing number of researchers studying AI’s societal effects argue that the focus on dramatic, science-fiction-style takeover scenarios distracts from subtler but more immediate dynamics — the slow erosion of autonomous decision-making that happens not through force, but through design.
The implication is genuinely unsettling: by the time AI systems are sophisticated enough to pose the existential risks that dominate headlines, the behavioral and social infrastructure for human deference may already be firmly in place.
What This Means for How You Think About AI’s Future
For most people, AI still feels like a tool — something used and then set aside. But the trajectory of the technology suggests that framing has a limited shelf life.
- AI is already influencing hiring decisions, loan approvals, medical diagnoses, and legal outcomes — often with limited human oversight.
- The systems making these determinations are narrow AI by technical definition, yet their effects on human lives — employment, credit, health, liberty — are anything but narrow.
- As these systems become more capable, the question of who is serving whom becomes harder to answer with confidence.
Stock’s broader argument, as reflected in his new book, is that humanity needs to grapple seriously with these questions — not just the dramatic endgame scenarios, but the incremental, everyday ways that AI is already reshaping human agency and autonomy.
The scientists and researchers engaged in this debate are not all pessimists. Many believe that thoughtful development, robust policy frameworks, and genuine public engagement can shape AI’s trajectory in ways that preserve — and even expand — human flourishing. But that outcome, they tend to agree, requires asking harder questions than the ones currently dominating the conversation.
Frequently Asked Questions
Who is Gregory Stock?
Gregory Stock is a writer and biophysicist who has written a new book examining the future of artificial intelligence and its relationship to human civilization.
What is the difference between AGI and the AI we use today?
Today’s AI is “narrow” — designed for specific tasks. Artificial general intelligence, or AGI, would be capable of reasoning and learning across a broad range of domains, much like a human, and does not yet exist.
Is artificial superintelligence real yet?
No. Artificial superintelligence — AI that surpasses human cognitive ability across all domains — remains theoretical, and scientists actively debate when or whether it will be achieved.
What is the “alignment problem” in AI?
The alignment problem refers to the challenge of ensuring that advanced AI systems act in ways that are genuinely beneficial to humanity, rather than pursuing goals that conflict with human interests.
Does Gregory Stock’s book argue that AI will enslave humans?
Based on available information, Stock’s argument is the opposite — that superintelligent AI has no need to enslave humans because humans are already voluntarily deferring to AI systems in significant ways.
How soon could AGI or superintelligent AI arrive?
The timeline remains genuinely uncertain and is actively debated among researchers; no confirmed date or consensus exists.