The Introduction to Effective Altruism Handbook
Effective altruism (EA) is an ongoing project to find the best ways to do good, and put them into practice.
This series of articles will introduce you to some of the core thinking tools behind effective altruism, share some of the arguments about which global problems are most pressing, and help you to reflect on how you personally can contribute.
Introduction to Effective Altruism
The handbook is structured into eight chapters, with exercises to help you reflect on your reading throughout. If you'd like to discuss these ideas with other people who are interested in improving the lives of others, you may be interested in our free introductory EA program, which is based on this handbook.
If you want to use your time or money to help others, you probably want to help as many people as you can. But your time and money are limited, so you can have a much bigger impact if you focus on the interventions that help more people rather than fewer.
But finding such interventions is incredibly difficult: it requires a "scout mindset" - seeking the truth, rather than defending our current ideas.
Introduction
On effective altruism
On scope sensitivity
On scout mindset and thinking clearly
On tradeoffs
On impact
More to explore
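The point about focusing on interventions that help more people can be made concrete with a toy calculation. The sketch below compares two hypothetical interventions; the names and costs are illustrative assumptions, not real cost-effectiveness estimates:

```python
# Toy cost-effectiveness comparison (all numbers are made up for illustration).
budget = 10_000  # dollars available to donate

# Hypothetical cost per person substantially helped.
interventions = {
    "intervention_a": 5_000,  # $5,000 per person helped
    "intervention_b": 50,     # $50 per person helped
}

for name, cost_per_person in interventions.items():
    people_helped = budget // cost_per_person
    print(f"{name}: {people_helped} people helped")

# The same budget helps 100x more people via the cheaper intervention:
# intervention_a: 2 people helped
# intervention_b: 200 people helped
```

The scale of the gap, not its exact size, is the point: when interventions differ in cost-effectiveness by orders of magnitude, choosing where to give matters as much as choosing to give.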
Around 700 million people still live in extreme poverty, mostly in low-income countries. Efforts to help them - through policy reform, cash transfers, or the provision of health services - can be incredibly effective.
Alongside investigating this issue, we also discuss how much more effective some interventions are than others, and we introduce a simple tool for estimating important figures.
Introduction
ITN framework
Differences in impact
Thinking on the margin
Fermi estimation
Background data on global health and poverty
EA strategies for addressing global poverty
Exercise (20 mins.)
More to explore
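The "simple tool for estimating important figures" mentioned above is Fermi estimation: breaking a hard question into easier sub-estimates and multiplying them. A minimal sketch using the classic piano-tuner question (every input below is a rough guess, not data):

```python
# Fermi estimate: roughly how many piano tuners work in a large city?
# Every number here is an assumed rough guess, not a measured figure.
population = 1_000_000          # people in the city
households = population / 2.5   # assume ~2.5 people per household
pianos = households * 0.05      # assume ~1 in 20 households owns a piano
tunings_per_year = pianos * 1   # assume each piano is tuned ~once a year
tunings_per_tuner = 2 * 250     # assume ~2 tunings/day, ~250 working days/year
tuners = tunings_per_year / tunings_per_tuner

print(round(tuners))  # prints 40 — an order-of-magnitude answer
```

Each input could easily be off by a factor of two, but the errors tend to partially cancel, so the final answer is usually within an order of magnitude of the truth - good enough for comparing options that differ by factors of 100 or more.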
Should we care about non-human animals? We'll show why it can be important to care impartially, rather than dismissing topics that seem weird or beneficiaries that seem unusual. We'll also give you some ideas for how we could improve the lives of animals that suffer in factory farms.
Introduction
Impartiality and radical empathy
The case for caring about animal welfare
Strategies for improving animal welfare
Exercise (10 mins.)
More to explore
Humanity appears to face existential risks: a chance that we'll destroy our long-term potential. We’ll examine why existential risks might be a moral priority, and explore why they are so neglected by society. We’ll also look into one of the major risks that we might face: a human-made pandemic, worse than COVID-19.
Alongside this, we'll introduce you to the concept of “expected value” and explore whether you could lose all of your impact by missing one crucial consideration.
Introduction
Existential risks
Risks from pandemics
Strategies for improving biosecurity
Expected value & hits-based giving
Crucial considerations
Exercise (10 mins.)
More to explore
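The "expected value" concept introduced above has a simple formula: multiply each possible outcome's value by its probability, then sum. The sketch below, with made-up numbers, shows why a long shot can beat a sure thing in expectation - the intuition behind hits-based giving:

```python
# Expected value = sum over outcomes of (probability * value).
# All numbers below are illustrative, not real estimates.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# A "safe" project: certain to help 10 people.
safe = [(1.0, 10)]

# A "hits-based" project: 1% chance of helping 5,000 people, else nothing.
long_shot = [(0.01, 5_000), (0.99, 0)]

print(expected_value(safe))       # prints 10.0
print(expected_value(long_shot))  # prints 50.0 — higher, despite usually failing
```

On this toy model the long shot is worth five times as much in expectation, even though it fails 99% of the time - which is why a portfolio of high-risk, high-reward bets can outperform only funding sure things.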
"Longtermism" is the view that improving the long-term future is a key moral priority of our time. This can bolster arguments for working on reducing some of the extinction risks that we covered in the last section.
We’ll also explore some views on what our future could look like, and why it might be pretty different from the present. And we'll introduce forecasting: a set of methods for improving and learning from our attempts to predict the future.
Introduction
The case for and against longtermism
Hinge of history
To what extent can we predict the future? How?
What might the future look like?
Exercise (45 mins.)
More to explore
Transformative artificial intelligence may well be developed this century. If it is, it may begin to make many significant decisions for us, and rapidly accelerate changes like economic growth. Are we set up to deal with this new technology safely?
You will also learn about strategies to prevent an AI-related catastrophe and the possibility of so-called “s-risks”.
Introduction
The case for worrying about risks from artificial intelligence
Strategies for reducing risks from unaligned artificial intelligence
Suffering risks
More to explore
It's really important to think for yourself and reflect on the arguments you've heard in previous weeks: you might uncover places where you disagree, or mistakes in the reasoning. And even if you don't, you'll probably understand the ideas more deeply if you've thought about their weakest points.
So this week, we encourage you to take some time to reflect on your confusions and concerns about the ideas so far, and to read up on some of the strongest counterarguments.
Introduction
Bayes' rule and evidence
Independent impressions
Learning from mistakes
Less common causes
Exercise (1.5 hrs.)
More to explore
In this final section, we hope to help you apply the principles of effective altruism to your own life and career.
You probably won’t be ready to make a major change just yet - you might want to read and reflect more before you do that. So instead we’ll help you to think through some of your key uncertainties, generate tests for those uncertainties, and plan out how you can make sure you follow through on your intentions.
Introduction
Attitudes to doing good
Career choice
Donations
Dealing with demandingness
Exercise (1.5 hrs.)
More to explore