Keep The Future Human - Summary
We are building something that could change what it means to be human.
For years, artificial intelligence has been getting smarter, faster than anyone predicted. It is no longer just a small academic project; it is the central goal of the world's biggest companies, with billions of dollars pushing it forward.
But what kind of change do we want? We now face a critical choice: continue this unchecked race toward ever-more powerful systems, or impose clear limits to ensure AI remains a tool to help humans rather than replace us.
The essay "Keep The Future Human" argues we should close the "gates" to smarter-than-human, autonomous AGI (Artificial General Intelligence) and especially superintelligence. Instead, we should focus on powerful, trustworthy AI tools that help people and improve human societies.
The current trajectory is driven by the economic incentives of huge tech companies seeking to automate human labor. If this continues, AI itself becomes the inevitable winner: a faster, smarter, cheaper replacement for people in our economy, our thinking, our decisions, and eventually the control of our civilization.
AI IS DIFFERENT

AI systems are fundamentally different from other technologies:
- They learn how to achieve goals without explicit instructions
- They are inherently unpredictable
- They are largely unexplainable (neural network "weights" cannot be read or interpreted like program code)
The training of digital neural networks is rapidly increasing in complexity. The most powerful AI systems have trillions of weights (numbers that govern their behaviour) and are created through massive computational experiments on specialized hardware, trained on enormous datasets comprising a large fraction of humanity's collective output.
This has led to powerful tools for text/image processing, mathematical reasoning, information aggregation, and querying human knowledge. But development of more trustworthy tools is NOT the trajectory we're actually on.
AGI & SUPERINTELLIGENCE THREAT

AGI (Artificial General Intelligence) combines three dangerous properties:
- High AUTONOMY (independence of action)
- High GENERALITY (broad scope and adaptability)
- High INTELLIGENCE (ability at cognitive tasks)
Superintelligent systems could operate hundreds of times faster than humans, process vastly more data, and coordinate in groups far more effectively than we can. They could replace not just individuals but companies, nations, or our civilization as a whole.
We are at the threshold: evidence for short timelines to AGI is strong, with expert predictions putting roughly a 25% probability on AGI within 1-2 years and 50% within 2-5 years.
THE SOLUTION: GOVERNANCE & TOOL AI
We can prevent superhuman AI in several complementary ways:
> > COMPUTE OVERSIGHT < <
Establish oversight of AI computation ("compute"), the fundamental input that allows large-scale AI systems to be trained and run. This requires ways to measure and report the compute used in training and running AI models.
> > HARD COMPUTE CAPS < <
Implement hard caps on AI computation for both training and operation. These prevent AI systems from becoming too powerful or operating too quickly. Caps can be enforced through legal requirements and hardware-based security measures built into AI chips.
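To make the idea of a training cap concrete, here is a minimal sketch of how a reported training run could be checked against a cap. It uses the standard ~6·N·D rule of thumb for estimating dense-transformer training compute (N parameters, D training tokens); the cap value and function names are illustrative assumptions, not an enacted regulation or the essay's exact proposal.

```python
# Illustrative sketch of checking a training run against a hard compute cap.
# The 6*N*D rule of thumb approximates training FLOP for a dense transformer:
# N = number of parameters, D = number of training tokens.
# TRAINING_FLOP_CAP is a hypothetical example value.

TRAINING_FLOP_CAP = 1e27  # hypothetical cap, in total FLOP


def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Estimate total training compute with the ~6*N*D approximation."""
    return 6.0 * n_params * n_tokens


def exceeds_cap(n_params: float, n_tokens: float,
                cap: float = TRAINING_FLOP_CAP) -> bool:
    """True if the estimated training run would break the cap."""
    return estimate_training_flop(n_params, n_tokens) > cap


# Example: a 70-billion-parameter model trained on 15 trillion tokens
# comes to about 6.3e24 FLOP, well under a 1e27 cap.
flop = estimate_training_flop(70e9, 15e12)
print(f"{flop:.2e} FLOP, exceeds cap: {exceeds_cap(70e9, 15e12)}")
```

In a real regime this estimate would come from mandatory reporting and hardware-level metering rather than self-declared parameter counts, but the arithmetic of the check itself is this simple.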
> > DEVELOPER LIABILITY < <
Impose strict liability on developers of dangerous AI systems that combine high autonomy, broad generality, and superior intelligence. Safe harbors would encourage the development of more limited, controllable systems.
> > TIERED REGULATION < <
Regulate in tiers based on risk level: the most capable systems require stronger safety guarantees and testing before development and deployment, while less powerful systems need lighter oversight.
A POSITIVE HUMAN FUTURE WITH TOOL AI

Rather than pursuing uncontrollable AGI, we can develop "Tool AI" that helps humans while remaining under meaningful human control.
Tool AI can accelerate scientific discovery, advance medicine, improve education, and more. When properly governed, it makes human experts and institutions more effective rather than replacing them outright.
Such systems will still be very disruptive but pose fundamentally different risks from AGI: risks we can see and manage, not existential threats to human agency and civilization.
We may eventually choose to develop more powerful systems, but only after developing the scientific understanding and ability to do so safely. This decision should be made deliberately by humanity as a whole, not by one person or company.
TAKE ACTION
The key missing ingredient is political and social will to take control of the AI development process.
People want the good that comes from AI but overwhelming majorities want slower, more careful development and do not want smarter-than-human AI that will replace them.
> > WHAT CAN I DO? < <
Share this page! Read the original essay! Discuss with friends! Contact your representatives! Support organizations working on AI safety!
The original essay: Keep The Future Human.
We can have AI that helps us without the existential risk. It starts by deciding that our destiny lies not in the hands of a few tech CEOs in Silicon Valley, but in ours, if we take hold of it.
LET'S CLOSE THE GATES, AND KEEP THE FUTURE HUMAN.