Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Compassionate Purpose: Personal Inspiration for a Better World
    “Read this book. It may change your life.” — Peter Singer, author of Animal Liberation. What if the point of self-improvement were not just to feel better or get ahead, but to become more capable of helping in a hurting world? In Compassionate Purpose, Magnus Vinding bridges self-help and ethics with a framework for personal development...
    Magnus Vinding | 50 minutes ago
  • Krystal Birungi is awarded the Global Citizen Prize
    It is with great pleasure that I announce that my colleague Krystal Birungi of Target Malaria Uganda at the Uganda Virus Research Institute has been awarded the 2026 Global Citizen Prize. The Global Citizen Prize seeks to identify and celebrate grassroots activists in local communities who are fighting for social justice, championing […].
    Target Malaria | 1 hour ago
  • How much electricity does AI consume? [2025 summary]
    What share of electricity is consumed by data centres? What's the energy footprint of ChatGPT and other chatbots?
    Sustainability by Numbers | 5 hours ago
  • Goodmaxxing
    (Crosspost). If you’re young and online, you’re probably maxxing something. Maybe you’re looksmaxxing: trying to maximize your hotness (e.g. by hitting yourself in the face with a hammer). Maybe, like Clavicular, you do it just to mog other people—to look better than they do. But good looks reach diminishing marginal returns.
    Effective Altruism Forum | 11 hours ago
  • May is Healthy Vision Month. This is how sight united three generations of women.
    In Battambang, Cambodia, three generations of women run a family car wash. It’s a life built on grit, love, and long days, but for 20 years, there was a missing piece at the center of their home. At 74, Phen Mao lived in a blur. After she lost her sight, her daughter, Lorb, carried the …
    Seva Foundation | 12 hours ago
  • It's nice of you to worry about me, but I really do have a life
    I have two shameful secrets that I probably shouldn't talk about online: I love my family. I enjoy my hobbies. "What an idiot!" you probably think. "Doesn't he realize that at his next job interview, HR will probably use an AI that can match his online writing based on a short sample of written text, and when they ask 'hey AI, is this guy really 100% devoted to his job, and does he spend...
    LessWrong | 12 hours ago
  • Irretrievability; or, Murphy's Curse of Oneshotness upon ASI
    Example 1: The Viking 1 lander. In the 1970s, NASA sent a pair of probes to Mars, the Viking 1 and Viking 2 missions. Total cost of $1B (1970), equivalent to about $7B (2025). The Viking 1 probe operated on Mars's surface for six years, before its battery began to seriously degrade. One might have thought a battery problem like that would spell the irrevocable end of the mission.
    LessWrong | 12 hours ago
  • AIM's new charity taxonomy
    0. I don't work at AIM, so why care about this? This taxonomy is written from AIM's perspective, but it may be helpful more broadly: if you're starting a new charity, incubating others, or doing charity idea research, the taxonomy gives you a structured way to think about which ideas to pursue, what founder profile fits, and what research and support each idea needs.
    Effective Altruism Forum | 15 hours ago
  • 🟡 Iran says it will target US naval vessels, UAE leaves OPEC, GPT-5.5 similar to Mythos on cyber tasks || Global Risks Weekly Roundup #18/2026
    Executive summary
    Sentinel | 18 hours ago
  • If You Do One Thing for Animals This Year, Do This
    There is a short window to prevent a US bill that would overturn decades of animal welfare progress. This is arguably the most consequential piece of farm animal legislation in U.S. history. Summary: The Farm Bill currently being considered by the U.S. Congress includes the “Save Our Bacon Act”, which would eliminate states' abilities to set standards on how farmed animals are raised and...
    Effective Altruism Forum | 18 hours ago
  • AI Industrial Takeoff — Part 1: Maximum growth rates with current technology
    How fast could an AI-driven economy grow? Most economists expect a few percentage points at best, comparable to previous general-purpose technologies (Acemoglu (2024)). Those closer to AI development tend to imagine something much more radical (Shulman (2023); Davidson and Hadshar (2025)). This series aims to ground growth rates in how physical production works.
    LessWrong | 19 hours ago
  • Goodmaxxing
    A letter to Gen Alpha
    Bentham's Newsletter | 19 hours ago
  • AI Industrial Takeoff — Part 1: Maximum growth rates with current technology
    How fast can an AI-driven economy grow? Economists expect a few percentage points at best; those closer to AI development imagine Dyson spheres within years. Who is correct?
    Defenses in Depth | 20 hours ago
  • How the Radical Fund Sustained Radical Imagination
    Editors’ Note: Carmen Rojas continues HistPhil’s book forum on John Witt’s The Radical Fund (Simon & Schuster, 2025). John Fabian Witt’s The Radical Fund: How a Band of Visionaries and a Million Dollars Upended America is one of the best books I’ve read about the perils and promises of philanthropy in the United States.
    HistPhil | 20 hours ago
  • Taking woo seriously but not literally
    I think that a lot of “woo” - a broad term that includes things like chakras, energy healing, Tarot, various Eastern religions and neopagan practices, etc. - consists of things that have real effects and uses, even if many (though not all) of their practitioners are mistaken about the exact mechanisms and make unwarranted metaphysical claims. Now, a woo practitioner might explain what’s...
    LessWrong | 20 hours ago
  • Open Thread 432
    Astral Codex Ten | 22 hours ago
  • Import AI 455: AI systems are about to start building themselves.
    The first step towards recursive self improvement
    Import AI | 22 hours ago
  • Linkpost for May
    Effective Altruism
    Thing of Things | 23 hours ago
  • Van Halen Of The Heart
    just show you care
    Atoms vs Bits | 23 hours ago
  • ChinAI #357: AI Surveillance in Chinese Universities
    Greetings from a world where…...
    ChinAI Newsletter | 23 hours ago
  • Podcast | How will falling fertility rates hurt the economy? With Melissa Kearney
    Typically, a society’s population remains stable if women have about 2.1 children each. By that metric, the world has a big problem. In developed countries the total fertility rate is well below that figure. So what are the economic consequences of that shortfall?
    J-PAL | 23 hours ago
  • Can putting a price tag on ending poverty unlock billions in giving?
    New research from J-PAL affiliate Paul Niehaus, cofounder of GiveDirectly, reveals ending extreme poverty may be more achievable than many assume. The question now is whether that kind of clarity can mobilize philanthropic money sitting on the sidelines...
    J-PAL | 23 hours ago
  • Podcast | Boosting farmers' profits
    In this episode of VoxDevTalks, Craig McIntosh discusses a recent J-PAL policy insight that takes stock of the evidence from randomised controlled trials on credit, subsidies, and cash transfers for smallholder farmers, arriving at conclusions that challenge some of agriculture's most persistent development assumptions.
    J-PAL | 24 hours ago
  • The EU AI Act Newsletter #101: Trilogue Breakdown
    Talks on delaying the AI Act collapse over industrial AI, Merz diverges from his coalition partner, and Parliament invites Anthropic to a hearing on the Mythos model.
    The EU AI Act Newsletter | 1 day ago
  • What is a whale worth, and how much does a shrimp weigh on the moral scale?
    Opinion piece in De Standaard (04-05-2026). One rescued humpback shows how unreliable our moral radar is. It was movingly beautiful to see: humpback whale Timmy swimming free again in the open sea after a rescue operation, after, at the end of March, he …
    The Rational Ethicist | 1 day ago
  • Thoughts on investing for transformative AI
    TLDR: I basically don’t. Contents: Ethical concerns; Thoughts on how to avoid becoming corrupted; Future worlds; What happens in the lead-up to ASI?; Predictions are hard, especially about markets; Trend-following; The EA portfolio; Leaning my investments in the right direction; Appendix: Some specific predictions; Notes. Ethical concerns.
    Philosophical Multicore | 1 day ago
  • What I learned from making a fire
    One time my friends and I made a fire on the beach.
    Hauke’s Blog | 1 day ago
  • Meta humanoid robots 🤖, SpaceX costs leak 💰, open design 🧑‍🎨
    TLDR AI | 1 day ago
  • Dairy cows make their misery expensive (but their calves can’t)
    How much do cows suffer in the production of milk? I can’t answer that; understanding animal experience is hard. But I can at least provide some facts about the conditions dairy cows live in, which might be useful to you in making your own assessment. My biggest conclusion is that cows made better choices than chickens by making their misery financially costly to farmers. Life Cycle.
    LessWrong | 2 days ago
  • Exploration Hacking: Can LLMs Learn to Resist RL Training?
    We empirically investigate exploration hacking (EH) — where models strategically alter their exploration to resist RL training — by creating model organisms that resist capability elicitation, evaluating countermeasures, and auditing frontier models for their propensity.
    AI Alignment Forum | 2 days ago
  • Explicit Racial Discrimination in College Debate
    Plus other madness
    Bentham's Newsletter | 2 days ago
  • Word-learner
    Words, words were truly alive on the tongue, in the head
    Atoms vs Bits | 2 days ago
  • Are the last 3 months the start of an AI acceleration?
    Most public commentary is debating whether AI has hit a plateau.
    Benjamin Todd | 2 days ago
  • The better algorithms of our nature
    Engagement, bridging, and the design of digital platforms which don't pander to our weaknesses.
    Reasonable People | 2 days ago
  • AI, Fiction, Literature: A Scenario
    Soon, if not already, established authors of mass-market fiction will publish AI-assisted writing.
    Raising Dust | 2 days ago
  • What’s more likely to be sentient: an ant or ChatGPT?
    Sentience is hot these days. Partly because of the development of impressive new AI systems, everyone seems to be asking: How do we know if something is sentient? While consciousness means simply having a subjective point of view on the world — a feeling of what it’s like to be you — sentience is the […]...
    Future Perfect | 2 days ago
  • Measuring the ability of Opus 4.5 to fool narrow classifiers
    We measure the ability of Opus 4.5 to fool prompted or fine-tuned classifiers trying to detect a narrow set of outcomes. We find that the Opus 4.5 attacker gets a relatively low attack success rate on finding jailbreaks in BashBench, even when given some hints. Performance is especially low against a prompted Opus 4.5 classifier with a CoT and against a fine-tuned Haiku 4.5 classifier.
    LessWrong | 2 days ago
  • Notes on equanimity from the inside
    I've always thought of myself as even-keeled and equanimous; that my mind is still. In hindsight, I had no idea what I was talking about. Halfway through my second ten-day meditation retreat, I experienced a depth of equanimity that broke my existing frame of reference. It’s hard to convey in words.
    Effective Altruism Forum | 2 days ago
  • A new rationalist self-improvement book: the 12 Levers
    I'm publishing a book that I think can fairly be described as a rationalist approach to self-improvement. Whereas many self-help books focus mainly on stories and what worked well for the author, our book takes a very different approach.
    LessWrong | 2 days ago
  • OpenAI's red line for AI self-improvement is fundamentally flawed
    TL;DR. OpenAI's "Critical" threshold for AI self-improvement in the Preparedness Framework v2 has three structural problems: It fires too late. The lagging indicator, 5× generational acceleration sustained for several months, lets ~3 years of effective progress accumulate before triggering. Anthropic used a 2× threshold instead of a 5×. It's self-certified.
    LessWrong | 2 days ago
  • A new rationalist self-improvement book: the 12 Levers
    I'm publishing a book that I think can fairly be described as a rationalist/evidence oriented approach to self-improvement. Whereas many self-help books focus mainly on stories and what worked well for the author, our book takes a very different approach.
    Effective Altruism Forum | 2 days ago
  • Open position: Web Product Lead
    The post Open position: Web Product Lead appeared first on 80,000 Hours.
    80,000 Hours | 2 days ago
  • Open position: Product and Growth Manager
    The post Open position: Product and Growth Manager appeared first on 80,000 Hours.
    80,000 Hours | 2 days ago
  • You Are Not Immune To Mode Collapse
    “Mode collapse” is a few things. First it was an observation about how early image generating AIs often collapsed to producing just the modal output from their training distribution (something very common, like a house with a white picket fence and a tree in the garden). Then it was the observation that this effect seemed to occur extremely quickly when AIs were trained on AI-generated inputs.
    LessWrong | 3 days ago
  • What does it mean for an AI to “want” something?
    MIRI President Nate Soares breaks it down with @novaramedia: AI systems are going to do something that’s to “wanting” what submarine movement is to swimming. Not human, but functionally the same outcome.
    Machine Intelligence Research Institute | 3 days ago
  • The seven deadly curses of superhuman AI
    Developing a superintelligent AI that does what we want, without killing everyone, might be extremely difficult. In this video, we showcase the arguments from Chapter 10 of the book "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares. The chapter draws on analogies with space probes, nuclear reactors, and computer security.
    Rational Animations | 3 days ago
  • Billie Eilish Is Obviously Right About Animals
    Eating someone is inconsistent with loving them properly
    Bentham's Newsletter | 3 days ago
  • Eternal Recurrence*
    In an earlier post—...
    Fake Nous | 3 days ago
  • Games that change your mind
    Some things you might learn from games are pretty blatant: Trivial Pursuit might teach you trivia, MasterType might teach you about typing, Grand Theft Auto might teach you about driving or crime. But sometimes games teach people less obvious things—things that are more experiential or ineffable, things that you didn’t know you didn’t know, concepts that stick in your mind, deep things.
    LessWrong | 3 days ago
  • Some deaf children are hearing again because of a new gene therapy
    In a lab room, a toddler, deaf from birth, sits while a tone plays. There’s no reaction. His face does not change. Six weeks later, after a single injection of an experimental gene therapy, the same toddler is back in the same room. The tone plays. The toddler’s head turns toward the sound. And somewhere […]...
    Future Perfect | 3 days ago
  • Primary Care Physicians are Incompetent. We Need More of Them.
    The typical primary care physician is incompetent in every measurable respect. This is a huge problem. Here, I make the case that: primary care physicians are broadly, grossly incompetent; this is due to empty credentialism; and making it much (~10X) easier to become a PCP is a good solution.
    LessWrong | 3 days ago
  • Games that change your mind
    Crossposted from world spirit sock puppet. Some things you might learn from games are pretty blatant: Trivial Pursuit might teach you trivia, MasterType might teach you about typing, Grand Theft Auto might teach you about driving or crime. But sometimes … Continue reading →...
    Meteuphoric | 3 days ago
  • Understand why AI is a doom-risk in 39 captivating minutes
    Crossposted from world spirit sock puppet. I’ve really wanted more good short accounts of why AI poses an existential risk. Working on one myself has been one of those incredibly high priorities I keep putting off. Meanwhile award-winning journalist Ben … Continue reading →...
    Meteuphoric | 3 days ago
  • Games that change your mind
    Some things you might learn from games are pretty blatant: Trivial Pursuit might teach you trivia, MasterType might teach you about typing, Grand Theft Auto might teach you about driving or crime. But sometimes games teach people less obvious things—things that are more experiential or ineffable, things that you didn’t know you didn’t know, concepts that stick in your mind, deep things.
    Worldly Positions | 3 days ago
  • Which Film Best Predicted Our Present?
    ➡️ Take action on AI risks: in a few clicks, alert your elected representatives and send the prepared letter template. It’s automated for minimal effort: https://taap.it/TF-PauseIACampagnes ⬇️⬇️⬇️ Additional info: sources, references, links... ⬇️⬇️⬇️ What if The Matrix weren’t fiction, but a documentary about your attachment?
    The Flares | 3 days ago
  • Understand why AI is a doom-risk in 39 captivating minutes
    I’ve really wanted more good short accounts of why AI poses an existential risk. Working on one myself has been one of those incredibly high priorities I keep putting off. Meanwhile award-winning journalist Ben Bradford of NPR has made a podcast version of my case for AI x-risk that I am thrilled with!
    World Spirit Sock Puppet | 3 days ago
  • How Go Players Disempower Themselves to AI
    Written as part of the MATS 9.1 extension program, mentored by Richard Ngo. From March 9th to 15th 2016, Go players around the world stayed up to watch their game fall to AI. Google DeepMind’s AlphaGo defeated Lee Sedol, commonly understood to be the world’s strongest player at the time, with a convincing 4-1 score.
    LessWrong | 3 days ago
  • Dairy cows make their misery expensive (but their calves can’t)
    How much do cows suffer in the production of milk? I can’t answer that; understanding animal experience is hard. But I can at least provide some facts about the conditions dairy cows live in, which might be useful to you in making your own assessment. My biggest conclusion is that cows made better choices than … Continue reading "Dairy cows make their misery expensive (but their calves can’t)"...
    Aceso Under Glass | 3 days ago
  • How much should the ideal person cry wolf?
    It is a fact about wolves and rationality that you should warn people about wolves quite a few times for every effective wolf attack. In particular, there is an asymmetry between the costs of having one’s flock devoured and averting a non-eventuating wolf attack. If the carnage is a hundred times worse, then it’s worth up to ninety-nine false alarms to stop it.
    LessWrong | 4 days ago
  • Are AI benchmarks doomed?
    In this episode, Greg Burnham and Tom Adamczewski join Anson Ho to push back on benchmark pessimism and dig into what the next generation of AI benchmarks could look like.
    Epoch Newsletter | 4 days ago
  • Conditional misalignment: Mitigations can hide EM behind contextual cues
    This is the abstract, introduction, and discussion of our new paper. We study three popular mitigations for emergent misalignment (EM) — diluting misaligned data with benign data, post-hoc HHH finetuning, and inoculation prompting — and show that each can leave behind conditional misalignment: the model reverts to broadly misaligned behavior when prompts contain cues from the misaligned...
    LessWrong | 4 days ago
  • Risk from fitness-seeking AIs: mechanisms and mitigations
    Current AIs routinely take unintended actions to score well on tasks: hardcoding test cases, training on the test set, downplaying issues, etc. This misalignment is still somewhat incoherent, but it increasingly resembles what I call “fitness-seeking”—a family of misaligned motivations centered on performing well in training and evaluations (e.g., reward-seeking).
    LessWrong | 4 days ago
  • Basic Rights for AIs
    The topic of AI welfare is fast becoming mainstream. As I wrote in my last post, there’s an emerging debate that has been drawing some strong reactions. There is some resistance to even treating AI welfare as a legitimate concern. But there’s a perhaps more understandable resistance—not to taking AI welfare seriously in general, but to particular […].
    Center for Reducing Suffering | 4 days ago
  • Sanity-checking “Incompressible Knowledge Probes”
    Or, did a chief scientist of an AI assistant startup conclusively show that GPT-5.5 has 9.7T parameters? Introduction. Recently, a paper was circulated on Twitter claiming to have reverse engineered the parameter count of many frontier closed-source models including the newer GPT-5.5 (9.7T parameters) and Claude Opus 4.6 (5.3T parameters) as well as older models such as o1 (3.5T) and...
    LessWrong | 4 days ago
  • Risk from fitness-seeking AIs: mechanisms and mitigations
    Fitness-seeking is increasingly what misalignment looks like in practice—how should we respond?
    Redwood Research | 4 days ago
  • Risk from fitness-seeking AIs: mechanisms and mitigations
    Current AIs routinely take unintended actions to score well on tasks: hardcoding test cases, training on the test set, downplaying issues, etc. This misalignment is still somewhat incoherent, but it increasingly resembles what I call “fitness-seeking”—a family of misaligned motivations centered on performing well in training and evaluations (e.g., reward-seeking).
    AI Alignment Forum | 4 days ago
  • The Human Cost of Farming Animals – The Transfarmation Project
    The post The Human Cost of Farming Animals – The Transfarmation Project appeared first on Mercy For Animals.
    Mercy for Animals | 4 days ago
  • "Experts Say," Um, No They Don't
    A random professor's opinion isn't an expert consensus!
    Bentham's Newsletter | 4 days ago
  • AI unemployment and AI extinction are often the same
    My sense is that people think of AI existential risk and AI unemployment as distinct issues. Some people are extremely concerned about extinction and perhaps even indifferent to total unemployment. Some people think of moderate AI unemployment as a realistic and concerning issue, and AI extinction as science fiction.
    LessWrong | 4 days ago
  • Government control of AI has begun
    Transformer Weekly: Cruz’s latest messaging bill, Google employee outrage, and Elon goes to court...
    Transformer | 4 days ago
  • Lifetime Hardship Predicts Repetitive Behavior In Macaques
    A study reveals how negative experiences over time may create lasting behavioral scars in rhesus macaques used for research. The post Lifetime Hardship Predicts Repetitive Behavior In Macaques appeared first on Faunalytics.
    Faunalytics | 4 days ago
  • AISN #72: Empirical Research Sheds Light on AI Wellbeing
    Also: Public sentiment towards AI worsens
    AI Safety Newsletter | 4 days ago
  • Vision Weekend UK in one month. Plus: new grantees, and submissions for our prizes are open
    In this newsletter:
    Foresight Institute | 4 days ago
  • One week left to apply for Invisible College 2026
    Our residential seminar for 18–22 year olds was such a success that we are running it again. Apply this week or forever hold your peace.
    The Works in Progress Newsletter | 4 days ago
  • LLMs roleplay characters
    I. I’m going to talk about the persona selection model, which in my opinion is one of the most important concepts to understand if you want to understand large language models’ psychology.
    Thing of Things | 4 days ago
  • May 2026 newsletter
    🚀 The latest news from the EA community...
    Altruismo eficaz | 4 days ago
  • What Are We Doing To Ourselves
    I do not find it amusing
    Atoms vs Bits | 4 days ago
  • Self driving interview
    In honor of yesterday’s nonspecific point in the gradual arrival of self-driving cars, an interview with myself. Interviewer: It sounds like you’re pretty excited about self-driving cars. Weren’t you just saying that unemployment from AI is on some kind of very overlapping continuum with extinction from AI?
    Worldly Positions | 4 days ago
  • 11 ways to be less deferential
    Crossposted from world spirit sock puppet. I often worry that people are being too deferential about their beliefs. I also hear others worrying about this, and nobody seemingly worrying about the reverse, except perhaps my friends and therapists (and I guess … Continue reading →...
    Meteuphoric | 4 days ago
  • How You Help: 5 Ways We Use Your Community Survey Responses
    Ever wonder how Faunalytics uses your community survey responses? In this blog, we break down a handful of important ways your feedback is incorporated directly into our work. The post How You Help: 5 Ways We Use Your Community Survey Responses appeared first on Faunalytics.
    Faunalytics | 4 days ago
  • Monkeyism 101
    Big complex idea
    Seeking To Be Jolly | 4 days ago
  • The best book on writing is Style: Lessons in Clarity and Grace
    Because it has concrete advice and exercises
    Experience Machines | 4 days ago
  • Games that change your mind
    Some things you might learn from games are pretty blatant: Trivial Pursuit might teach you trivia, MasterType might teach you about typing, Grand Theft Auto might teach you about driving or crime. But sometimes games teach people less obvious things—things that are more experiential or ineffable, things that you didn’t know you didn’t know, concepts that stick in your mind, deep things.
    World Spirit Sock Puppet | 4 days ago
  • Things I read and liked in April
    Meticulousness, moral muddles, mirth, mastery, millenarianist movements, masks
    Experience Machines | 4 days ago
  • Diversion and resale: estimating compute smuggling to China
    We estimate that between 290,000 and 1.6 million H100-equivalents (H100e) were smuggled to China through 2025. Our median estimate of 660,000 H100e would be roughly a third of China's total compute.
    Epoch Newsletter | 4 days ago
  • How to actually give money away
    This was originally posted here. It's written for an audience that's not deep in the weeds of EA giving theory/culture, but a few people suggested I post here as there's much that's additive to or divergent from some common EA practices. Feedback / disagreements welcome! Also my first time posting here. Hi! -- Most people who intend to give large amounts of money away never actually do.
    Effective Altruism Forum | 4 days ago
  • Zuckerberg's leaked Q&A 💬, Netflix's vertical feed 📱, Mozilla vs Prompt API 👨‍💻
    TLDR AI | 4 days ago
  • Talking to journalists
    A common view around me seems to be that journalists are frequently dishonorable and dangerous, and talking to them is a risk to be avoided unless you have a very specific piece of information that you seek to publicize.
    Worldly Positions | 4 days ago
  • Picking up where I left off in 2021
    This isn’t the first time I’ve blogged every day. I started my newest blog, world spirit sock puppet, on 7 October 2020, and blogged every day until February 20th 2021, 138 posts later. I just came across an unpublished draft about the experience from later that year (saved May 4), which I feel that I should reckon with a bit rather than just jumping into a month of this as if it’s a new...
    Worldly Positions | 4 days ago
  • Is there an acceptable way to store clothes?
    Every way I know to store clothes I hate, to a first approximation. I hate my current nominal method: keeping them folded on open-front shelves, because they fall out on the floor and I can’t see almost any of them without taking a bunch out.
    Worldly Positions | 4 days ago
  • Unsickness celebration
    When I’ve been sick for a bit, here are some things that may be true: I haven’t exercised lately. I have developed a vague background sense that I’m fragile and if I were to exercise it should be by walking around the garden or something. I’m dressed in what is too shlubby to even count as comfortable.
    Worldly Positions | 4 days ago
  • AI risk was not invented by AI CEOs to hype their companies
    I hear that many people believe that the idea of advanced AI threatening human existence was invented by AI CEOs to hype their products. I’ve even been condescendingly informed of this, as if I am the one at risk of naively accepting AI companies’ preferred narratives. If you are reading this, you are probably familiar enough with the decades-old AI safety community to know this isn’t true.
    Worldly Positions | 4 days ago
  • Missing markets in executive function
    It’s early in the morning, and sadly 1:29pm. After spending some time looking at things and picking them up and walking up the stairs and down the stairs and considering questions like “what should I…”, which my brain apparently considered objects of art more than of imperative, I inched into a decision to go out somewhere.
    Worldly Positions | 4 days ago
  • Cambridge: the kettle
    I arrived in Cambridge, Massachusetts, today with my boyfriend. We have a modest Airbnb apartment, up enough stairs that if you decided to count the flights you would probably have forgotten about the project by the top. It’s pleasant and unassuming, and we were moving slowly toward beginning writing our mandatory blog posts rather too late in the evening when a new presence got our attention.
    Worldly Positions | 4 days ago
  • Manhattan: distance and movement
    Last Tuesday I went to a Broadway show, Ragtime. I was in the front row, but surprised by how much the action did not feel real and a few feet away from me. Perhaps the performers were so skilled they didn’t seem like real people, or the sound so loud and sharp that it didn’t feel like people legit singing just over there.
    Worldly Positions | 4 days ago
  • San Francisco: self driving
    I’m on a plane heading back to San Francisco. I’ve lived in the Bay Area for most of the years since 2009, and a large fraction of that time the place has felt near the brink of self-driving cars. (Well, everywhere has, but San Francisco feels like the first testing ground for the most interesting experiments in technology.). And that has felt like a big deal.
    Worldly Positions | 4 days ago
  • What Deontological Bars?
    Astral Codex Ten | 4 days ago
  • Talking to journalists
    Crossposted from world spirit sock puppet. A common view around me seems to be that journalists are frequently dishonorable and dangerous, and talking to them is a risk to be avoided unless you have a very specific piece of information … Continue reading →...
    Meteuphoric | 4 days ago
  • Is there an acceptable way to store clothes?
    Crossposted from world spirit sock puppet. Every way I know to store clothes I hate, to a first approximation. I hate my current nominal method: keeping them folded on open-front shelves, because they fall out on the floor and I … Continue reading →...
    Meteuphoric | 4 days ago
  • Orgs: unreasonable boyfriend as service
    Crossposted from world spirit sock puppet. Suppose you and Bobby the car salesman are haggling over the price of a car. You could try saying that you won’t pay more than $3k, but Bobby can equally retort that he won’t … Continue reading →...
    Meteuphoric | 4 days ago



ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.