Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Bad Problems Don't Stop Being Bad Because Somebody's Wrong About Fault Analysis
    Here's a dynamic I’ve seen at least a dozen times: Alice: Man, that article has a very inaccurate/misleading/horrifying headline. Bob: Did you know, *actually* article writers don't write their own headlines? … But what I care about is the misleading headline, not your org chart. Another example I’ve encountered recently is (anonymizing) when a friend complained about a prosaic safety...
    LessWrong | 8 hours ago
  • The Epoch Brief - May 8, 2026
    AI chip supply chain bottlenecks, smuggling to China, benchmark saturation, revenue efficiency at AI companies, and more
    Epoch Newsletter | 9 hours ago
  • This is why AI is scary and dangerous.
    Drop 10,000 humans naked in the savannah and we'll bootstrap our way to nuclear weapons. That's the capability AI labs are racing to automate, with no idea what they're building. MIRI President Nate Soares at Harvard on why we only get one shot at this. Comment "danger" to get access to the full video.
    Machine Intelligence Research Institute | 11 hours ago
  • Yoshua Bengio thinks he knows how to build safe superintelligence
    By Robert Wiblin | Watch on Youtube | Listen on Spotify | Read transcript. Episode summary. I want my children to live in a world where they will have a future and there will be a democracy for them to live in. Even a 1% chance of something going really, really bad is not acceptable to me.
    Effective Altruism Forum | 12 hours ago
  • Write Cause You Have Something to Say
    The ones who are most successful at writeathons (Inkhaven, NaNoWriMo) are those with an overhang of things to say, usually in the form of: draft posts. daydreams. When Scott Alexander said: Whenever I see a new person who blogs every day, it's very rare that that never goes anywhere or they don't get good.
    LessWrong | 13 hours ago
  • AI is Breaking Two Vulnerability Cultures
    A week ago the Copy Fail vulnerability came out, and Hyunwoo Kim immediately realized that the fixes were insufficient, sharing a patch the same day. In doing this he followed standard procedure for Linux, especially within networking: share the security impact with a closed list of Linux security engineers, while fixing the bug quietly and efficiently in the open.
    LessWrong | 14 hours ago
  • Coefficient Giving is hiring grantmakers and senior generalists across our Global Catastrophic Risks teams
    TL;DR: Coefficient Giving is running a major hiring round for 10+ grantmakers and senior generalists across five Global Catastrophic Risks (GCR) teams. We're allocating around $1 billion in 2026 across AI safety and catastrophic biorisk, and we’re acutely capacity-constrained. Apply here by May 17. Why we’re hiring.
    Effective Altruism Forum | 14 hours ago
  • Changelog 5/8: Shop Improvements, Silicon Rewards & More
    Check out our recent site updates!
    Manifold Markets | 14 hours ago
  • Suburban Apartment Bans May Be Making Poorer Neighborhoods’ Rents Increase
    When suburbs block apartments, rents in nearby poor neighborhoods may rise by about $27 a month, according to a new national study. Most research on exclusionary zoning has focused on costs within the communities that adopt it; this study finds…. The post Suburban Apartment Bans May Be Making Poorer Neighborhoods’ Rents Increase appeared first on California YIMBY.
    California YIMBY | 15 hours ago
  • How the Northwest’s Wildfire Crisis is a Sprawl Crisis
    Wildfire hazard zones across the Pacific Northwest are expanding — and according to Sightline Institute, so is the public cost. Nearly 1.6 million residents lived in high-risk areas in 2023, up 8 percent since 2018, with population growing fastest in…. The post How the Northwest’s Wildfire Crisis is a Sprawl Crisis appeared first on California YIMBY.
    California YIMBY | 15 hours ago
  • Objections to effective altruism
    A discussion with Bentham's Bulldog
    Good Thoughts | 15 hours ago
  • Is ProgramBench Impossible?
    ProgramBench is a new coding benchmark that all frontier models spectacularly fail. We’ve been on a quest for “hard benchmarks” for a while so it’s refreshing to see a benchmark where top models do badly. Unfortunately, ProgramBench has one big problem: it’s impossible!. What is ProgramBench?. ProgramBench tests if a model can recreate a program from a “clean room” environment.
    LessWrong | 15 hours ago
  • 80,000 Hours is hiring a lot right now — come join us!
    This forum post was first drafted using an LLM to summarise information from human-written job postings and was then edited/adjusted by hiring managers. The primary author/coordinator is Arden Koehler. Overview. 80,000 Hours has eight open positions across our advising, operations, video, and web teams, plus three expressions of interest open for video and operations roles. We're trying to...
    Effective Altruism Forum | 16 hours ago
  • Richard Yetter Chappell and I Discuss Effective Altruism
    And explain why the main objections don't work
    Bentham's Newsletter | 17 hours ago
  • David Reich – Why the Bronze Age was an inflection point in human evolution
    "Instead of being quiescent, natural selection is everywhere."
    The Lunar Society | 18 hours ago
  • Dan Hendrycks' Moral Theory Is Very Implausible
    Does the supreme principle of morality say that you matter 360 billion times more than foreign strangers?
    Bentham's Newsletter | 19 hours ago
  • The Four Curses of Nuclear Reactors (and AI)
    Rational Animations | 19 hours ago
  • How Silicon Valley sold Washington an AI race
    “Who and what agendas does rivalry serve?”...
    Transformer | 19 hours ago
  • Cage-Free Hotel Pledges Mean Little Without Strong Regulation
    Global hotel chains are falling short on cage-free egg sourcing, suggesting that regulation, not corporate promises, may be the real driver of progress for hens. The post Cage-Free Hotel Pledges Mean Little Without Strong Regulation appeared first on Faunalytics.
    Faunalytics | 19 hours ago
  • Bringing More Expertise to Bear on Alignment
    Preamble. The preamble is less useful for the typical AlignmentForum/LessWrong reader, who may want to skip to Adversaria vs Basinland section. On 28th of October 2025, Geoffrey Irving, Chief Scientist of the UK AI Security Institute, gave a keynote talk (slides) at the Alignment Conference.
    LessWrong | 19 hours ago
  • What is local government good for?
    Episode 16 is about building data centers, school districts and redistribution
    The Works in Progress Newsletter | 21 hours ago
  • Enhancing Discoverability: Recent Updates to the OSF
    Lifecycle Open Science (LOS) is an approach to research that promotes transparency, openness, and accessibility across the entire research lifecycle—from planning and data collection through analysis, publication, and reuse—by making research outputs and processes more interoperable, machine-readable, and actionable across systems.
    Center for Open Science | 21 hours ago
  • More articles we would like to commission
    Write for Works in Progress.
    The Works in Progress Newsletter | 21 hours ago
  • AI Worker Power is Near Its Peak. They’re Finally Starting To Use It.
    Google DeepMind UK employees voted to unionize, but not for higher pay.
    Garrison's Substack | 21 hours ago
  • Anders Sandberg | AI & Leviathan @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 23 hours ago
  • Some Newsletter
    thresholds of goodness
    Atoms vs Bits | 23 hours ago
  • The old tech that could help stop the next airborne pandemic
    It’s hard to imagine modern life without glycols. They are used in cosmetics, fog machines, and food. As you read this, you’re almost certainly wearing or drinking from something they were used to produce — polyester fabric or plastic bottles, for example. If you brush your teeth with toothpaste or top your salad with bottled […]...
    Future Perfect | 24 hours ago
  • Elon Musk could lose his case against OpenAI — and still get what he wants
    So, what’s a guy got to do to become a billionaire around here? Greg Brockman scribbled the question in his diary, recently unsealed as trial evidence, just two years after co-founding OpenAI as a charity in 2015: “Financially, what will take me to $1B?” For Brockman, now OpenAI’s president, the answer was a yearslong restructuring […]...
    Future Perfect | 1 days ago
  • Three Model Organisms For Taste
    Astral Codex Ten | 1 days ago
  • Strengthening County Financing for Sustainable Community Health Systems in Kenya
    The post Strengthening County Financing for Sustainable Community Health Systems in Kenya appeared first on Living Goods.
    Living Goods | 1 days ago
  • LGT Venture Philanthropy renews partnership with Living Goods to strengthen community health systems
    The post LGT Venture Philanthropy renews partnership with Living Goods to strengthen community health systems appeared first on Living Goods.
    Living Goods | 1 days ago
  • How we sent $235 via mobile money to families hit by a Philippines earthquake – a first for the country
    A powerful 6.9 magnitude earthquake devastated Northern Cebu. On September 30, 2025, a magnitude 6.9 earthquake struck off the coast of Bogo City in Cebu Province, displacing approximately 90,000 people, damaging or destroying more than 195,000 homes, and impacting ~753,000 people. It was followed by 12,000 aftershocks. For families already living in damaged homes, the […]...
    GiveDirectly | 1 days ago
  • The tables have turned on AI sceptics
    Could we have human-level AI within the next few decades? For a long time, many people have dismissed this idea as armchair speculation. In their view, we shouldn’t ground our beliefs about transformative technologies in vague hunches and fragile multi-step arguments. We need more solid evidence, like clear empirical trends. We need to be epistemically conservative.
    Effective Altruism Forum | 1 days ago
  • AirPod cameras 🎧, GPT-Realtime-2 🤖, Cloudflare's AI layoffs 💼
    TLDR AI | 1 days ago
  • Mechanistic estimation for wide random MLPs
    This post covers joint work with Wilson Wu, George Robinson, Mike Winer, Victor Lecomte and Paul Christiano. Thanks to Geoffrey Irving and Jess Riedel for comments on the post. In ARC's latest paper, we study the following problem: given a randomly initialized multilayer perceptron (MLP), produce an estimate for the expected output of the model under Gaussian input.
    LessWrong | 2 days ago
  • Why Light-Touch AI Safety Rules Can Matter
    "it does sometimes feel like very light touch requirements, like SB-53, or like you're spitting into a wildfire or something." "But like, it's a start, right?" "Maybe now you have to have an outside organization verify that you followed your safety and security policy."
    Future of Life Institute | 2 days ago
  • Why Minimal AI Rules Still Face Industry Opposition
    "I think it's like very hard to pass AI legislation in the US right now at the federal level, but also even at the state level." "Leave the document that's currently on your website on your website." "And even this kind of has companies like, you know, screaming, wailing, and gnashing their teeth and running their clothes about how oppressed they are by overregulation, right?"
    Future of Life Institute | 2 days ago
  • How AI Could Centralize Presidential Control of Bureaucracy
    "This is like the deep state problem, right?" "But if you have like loyal AI subordinates in every agency that kind of solves that problem, it's just like, oh, align it to whatever the president wants." "That's like a kind of scary prospect."
    Future of Life Institute | 2 days ago
  • Americans: call your senators today to stop the Save Our Bacon Act
    The Farm Bill currently under consideration by the U.S.
    Thing of Things | 2 days ago
  • Natural Language Autoencoders Produce Unsupervised Explanations of LLM Activations
    Abstract. We introduce Natural Language Autoencoders (NLAs), an unsupervised method for generating natural language explanations of LLM activations. An NLA consists of two LLM modules: an activation verbalizer (AV) that maps an activation to a text description and an activation reconstructor (AR) that maps the description back to an activation.
    LessWrong | 2 days ago
  • Natural Language Autoencoders Produce Unsupervised Explanations of LLM Activations
    Abstract. We introduce Natural Language Autoencoders (NLAs), an unsupervised method for generating natural language explanations of LLM activations. An NLA consists of two LLM modules: an activation verbalizer (AV) that maps an activation to a text description and an activation reconstructor (AR) that maps the description back to an activation.
    AI Alignment Forum | 2 days ago
  • Try, even if they have you cold
    I think smart people try things less often than they should, because of a cached mental pattern where you think of what might go wrong, and you find a foolproof countermeasure on the part of some antagonist, and so we call it off. Stockfish, playing itself, might as well resign from the first move if you force it to give knight odds. Sensei (the Go AI) should do the same when it has to give 6 stones.
    LessWrong | 2 days ago
  • It May Be Possible to Improvise A High Grade Bioshelter
    Surviving an environment-to-human pathogen would require widespread protection from airborne exposure, indoors and out. We think this may be achievable using improvised bioshelters and PPE made from household materials, though this hypothesis still needs more testing.
    Defenses in Depth | 2 days ago
  • Prioritizing Environment-to-Human Biological Threats
    Pathogens that replicate in the environment and transmit to humans pose a uniquely direct existential risk, far more so than those that spread person-to-person or can't grow outside a host. Of the possible exposure routes, airborne transmission is by far the hardest to defend against.
    Defenses in Depth | 2 days ago
  • Save Our Pigs!
    Note: This post was crossposted from the Coefficient Giving Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. Subtitle: The pork lobby is one farm bill away from gutting our strongest farm animal welfare laws.
    Effective Altruism Forum | 2 days ago
  • A review of “Investigating the consequences of accidentally grading CoT during RL”
    Last week, OpenAI staff shared an early draft of Investigating the consequences of accidentally grading CoT during RL with Redwood Research staff. To start with, I appreciate them publishing this post. I think it is valuable for AI companies to be transparent about problems like these when they arise.
    LessWrong | 2 days ago
  • A review of “Investigating the consequences of accidentally grading CoT during RL”
    Last week, OpenAI staff shared an early draft of Investigating the consequences of accidentally grading CoT during RL with Redwood Research staff.
    Redwood Research | 2 days ago
  • How to Govern AI When You Can't Predict the Future (with Charlie Bullock)
    Charlie Bullock is a Senior Research Fellow at the Institute for Law and AI. He joins the podcast to discuss radical optionality: how governments can prepare for very advanced AI without locking in premature rules. The conversation covers why law often trails technology, and how transparency, reporting, evaluations, cybersecurity standards, and expanded technical hiring could help.
    Future of Life Institute | 2 days ago
  • The New Bio Frontier
    Center for Security and Emerging Technology | 2 days ago
  • Yoshua Bengio thinks he knows how to build safe superintelligence
    80,000 Hours | 2 days ago
  • Save Our Pigs!
    The pork lobby is one farm bill away from gutting our strongest farm animal welfare laws
    Farm Animal Welfare Newsletter | 2 days ago
  • Expression of Interest: Chief of Staff (Operations Team)
    80,000 Hours | 2 days ago
  • Mechanistic estimation for wide random MLPs
    This post covers joint work with Wilson Wu, George Robinson, Mike Winer, Victor Lecomte and Paul Christiano. Thanks to Geoffrey Irving and Jess Riedel for comments on the post. In ARC's latest paper, we study the following problem: given a randomly initialized multilayer perceptron (MLP), produce an estimate for the expected output of the model under Gaussian input.
    AI Alignment Forum | 2 days ago
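    The estimation problem in the ARC item above (the expected output of a randomly initialized MLP under Gaussian input) can be sanity-checked with a naive Monte Carlo baseline. This is a sketch only, not ARC's mechanistic estimator; the layer widths and He-style initialization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    # Random (He-style) initialization: weight std sqrt(2 / fan_in), zero biases.
    return [(rng.normal(0.0, np.sqrt(2.0 / m), size=(n, m)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    # Forward pass: ReLU on hidden layers, linear output layer.
    for i, (W, b) in enumerate(params):
        x = W @ x + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

sizes = [64, 256, 256, 1]  # illustrative widths, not from the paper
params = init_mlp(sizes, rng)

# Monte Carlo estimate of E[f(x)] for x ~ N(0, I): average over random inputs.
xs = rng.normal(size=(10_000, sizes[0]))
est = float(np.mean([mlp(params, x) for x in xs]))
print(est)
```

    With 10,000 samples the estimate is noisy (the standard error shrinks only as 1/sqrt(N)); as the post describes, ARC studies producing such estimates mechanistically rather than by sampling.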
  • The tables have turned on AI sceptics
    Epistemic conservatism no longer favours long timelines
    The Update | 2 days ago
  • The Lives of British Animals
    The conditions for British farm animals are nightmarishly bad
    Bentham's Newsletter | 2 days ago
  • New Statistical Method Reveals Flaws In Shelter Length-Of-Stay Calculations
    Traditional shelter length-of-stay calculations are misleading; a corrected statistical approach more accurately captures operational changes and resource needs.
    Faunalytics | 2 days ago
  • EU AI Act meets AI Agents
    Highlights from the Tech Policy Press article “The EU AI Act is Not Ready for Agents,” examining how the EU AI Act applies to AI agents and the governance challenges this raises.
    The Future Society | 2 days ago
  • Study Report: Is Personality 4, 5, or 6-Dimensional?
    Note: This is a longer and more technical report of our study into personality traits. If you want to see the shorter, more layperson-friendly version, click here. There's a debate that has raged in academic journals and among personality researchers about the nature of humans: how many dimensions does it take to best represent a person's personality?
    Clearer Thinking | 2 days ago
  • How Many Traits Make Up Your Personality?
    Short of time? Read the key takeaways. Some personality models are empirically derived. The Big Five personality model analyzes personality in terms of: Openness (to Experience), Conscientiousness, Extraversion, Agreeableness, and Neuroticism (also known as ‘emotional instability’). This model emerged from the lexical hypothesis, which claimed that if a personality difference matters, languages will...
    Clearer Thinking | 2 days ago
  • Why AI Unemployment Could Resist Worker Adaptation
    "We're talking about potential AI systems that don't just like substitute for some forms of work, but actually substitute for all forms of work, such that like a human couldn't necessarily find a different job because the AI would be able to do that job too." "And so this could potentially yield just incredibly high unemployment rates, like unsustainably high."...
    Future of Life Institute | 2 days ago
  • Why AI Is Not Like a Toaster
    "I think the biggest thing for me is just the agency." "But when it comes to these AI systems, like they're not being built like typical software." "It's kind of like if you built a bigger toaster and then all of a sudden your toaster could like hack the internet in addition to making toast."
    Future of Life Institute | 2 days ago
  • Claude Mythos and Superhuman Vulnerability Discovery
    "these trends are already so ginormously fast." "Like I think maybe I wasn't expecting that the current trend would result in like superhuman vulnerability discovery happening this early on." "And I think there's just like very clear and compelling evidence that this AI system is like indeed exceeding human professionals in vulnerability discovery."
    Future of Life Institute | 2 days ago
  • Happier Lives Fund: second round of disbursements (Q1 2026)
    Thanks to the generosity of our donors, the Happier Lives Fund (HLF) is growing, and so is its impact. We have now completed our second round of disbursements to our recommended charities, covering donations received in Q1 2026. Here is what that looks like in practice. How much did the HLF raise in Q1 […]
    Happier Lives Institute | 2 days ago
  • Anastasia Gamick | FROs for Fundamental Capabilities @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 2 days ago
  • Kearney Capuano: From studying neuroscience to helping others at scale | Effective Altruism Stories
    Kearney Capuano always wanted to help others, but nothing ever felt good enough. She would volunteer and work at nonprofits, but there were always more people to be helped, more suffering to address. In university, she joined a neuroscience lab studying people who donate one of their kidneys to a complete stranger — trying to understand what drives that kind of selflessness.
    Centre for Effective Altruism | 2 days ago
  • What is Unjust Discrimination?
    A systematic account
    Good Thoughts | 2 days ago
  • New EA Forum LLM-use policy
    This policy does not apply to anything posted before this post's time of publication. New policy: You are welcome to use AI to help you write posts, but we ask that you disclose it when you do. Not disclosing that your post is AI-assisted could mean a rate-limit or a ban. We won’t enforce this policy for comments and quick takes, though we’d appreciate a norm of disclosure there as well.
    Effective Altruism Forum | 2 days ago
  • There is no evidence you should reapply sunscreen every 2 hours.
    It’s incredible how many consensus guidelines dissolve when you look closely at them. If you listen to any authority on the subject of sunscreen, you will hear it endlessly repeated that you absolutely must reapply sunscreen every 2 hours while you are in the sun, and immediately after swimming, sweating, or exercising.
    LessWrong | 2 days ago
  • The EA case for an EA Group House + how to start one (it's easy!)
    I've started two EA(ish) group houses now, so I figured there's an opportunity to share my experience and how you too can start one! There's a whole Substack dedicated to community living, so I'll stick to the EA lens of it. Note: My experiences are based in NYC and SF, which have a nice flow of travelers and a concentration of like-minded folks.
    Effective Altruism Forum | 2 days ago
  • The Work Undone
    please explain me
    Atoms vs Bits | 2 days ago
  • AMA: Svetha Janumpalli, CEO and Founder of New Incentives
    I'm Svetha Janumpalli, founder and CEO of New Incentives. We run a conditional cash transfer program in northern Nigeria that provides small incentives to caregivers to complete routine infant vaccination schedules. Today, we operate across more than 7,000 clinics and have enrolled 6.8 million infants.
    Effective Altruism Forum | 2 days ago
  • Target Malaria Uganda Joins World Malaria Day Commemoration in Iganga
    Target Malaria Uganda took part in the national commemoration of World Malaria Day in Iganga, held at Bulamagi Subcounty grounds under the theme “Driven to End Malaria: Now We Can, Now We Must.” The event also included the graduation of over 100 Community Health Extension Workers (CHEWs). The function, held on April 24, 2026, was […].
    Target Malaria | 2 days ago
  • Ireland's AI Research Gap
    Why are we absent from frontier research?
    The Fitzwilliam | 2 days ago
  • Contra Everyone On Taste
    Astral Codex Ten | 2 days ago
  • Hidden Open Thread 432.5
    Astral Codex Ten | 2 days ago
  • Effective Altruism focused on bednets while a malaria vaccine was stuck for 35 years. The case for Abundance.
    This post was cross-posted from Positive Sum by the Forum team. The author notes: I'm not saying every abundance goal meets this bar; e.g., high-speed rail in America would not. This post is intended to clarify abundance's relation to EA rather than to criticize EA prioritization. Subtitle: Functional governance and democracy help many EA cause areas.
    Effective Altruism Forum | 2 days ago
  • Anthropic + xAI Colossus 🤝, Google Expert Advice 💬, design from the inside 🧑‍🎨
    TLDR AI | 2 days ago
  • Many individual CEVs are probably quite bad
    I was thinking about Habryka's article on Putin's CEV, but I am posting my response here, because the original article is already 3 weeks old. I am not sure how exactly a person's CEV is defined.
    LessWrong | 3 days ago
  • Considerations for PPE Strategy
    A misaligned AI or human-AI group could attempt takeover by releasing a highly transmissible engineered pathogen. I discuss what a PPE strategy aimed at this threat model needs to get right.
    Defenses in Depth | 3 days ago
  • US Farm Bill alert, SE Asia incubator, and new global slaughter data
    Your farmed animal advocacy update for early May 2026
    Hive | 3 days ago
  • I made a graphic of the expanding moral circle - free to use
    The "expanding moral circle" -- the idea that moral concern has (or at least should have) widened over time from family, to community, to nation, to all humanity, and (arguably) outward to all sentient beings -- was developed by W.E.H. Lecky (1869) and popularized by Peter Singer in The Expanding Circle (1981).
    Effective Altruism Forum | 3 days ago
  • x-risk-themed
    Sometimes, a friend who works around here, at an x-risk-themed organisation, will think about leaving their job. They’ll ask a group of people “what should I do instead?”. And everyone will chime in with ideas for other x-risk-themed orgs that they could join.
    LessWrong | 3 days ago
  • Why I Find Woke Criticism of Veganism and Effective Altruism So Outrageous
    Using the language of the oppressed to justify ignoring their interests
    Bentham's Newsletter | 3 days ago
  • Faunalytics Index – May 2026
    This month’s Faunalytics Index provides facts and stats about the welfare of egg-laying ducks in Indonesia, a program to help unhoused people and their companion animals, misperceptions about honeybees, and more.
    Faunalytics | 3 days ago
  • Palantir’s controversy is the product
    Palantir’s fiery rhetoric helps mystify its mostly mundane tech — propping up its share price and preserving its national security contracts...
    Transformer | 3 days ago
  • We might only get one real attempt at superintelligence
    Rational Animations | 3 days ago
  • Future-Proofing EU AI Gigafactories: Four Design Imperatives
    The EU's AI Gigafactory initiative is its largest planned compute investment to date. Our new memo identifies four imperatives that the initiative must address to deliver on Europe's frontier AI ambitions.
    The Future Society | 3 days ago
  • Surviving Mirror Life: A Manual for Resilience in Buildings: Introduction to the threat, concepts and scenario parameters
    Epistemic certainty: Obviously loads of uncertainty on mirror life risks and the degree to which we'd have to pressurize buildings or filter outdoor air. Moderately high certainty for the best hasty pathways for doing this in North American and a narrow subset of European buildings. Lower certainty as we move towards international buildings.
    Effective Altruism Forum | 3 days ago
  • What if LLMs are mostly crystallized intelligence?
    Summary: LLMs are better at developing crystallized intelligence than fluid intelligence. That is: LLM training is good at building crystallized intelligence by learning patterns from training data, and this is sufficient to make them surprisingly skillful at lots of tasks.
    LessWrong | 3 days ago
  • EA Forum Digest #290
    EA Forum Digest #290: Strategic AI debates, everyday impact, and what’s happening across EA. Hello! No news this week; enjoy the digest. — Toby (for the Forum team). We recommend: Open strategic questions for digital minds (Lucius Caviola, 15 min); AIM's new charity taxonomy (Aidan Alexander, Morgan Fairless, Ambitious Impact, 13 min).
    EA Forum Digest | 3 days ago
  • AI Now is Hiring!
    We are at a pivotal moment in the fight to shape the future of AI and its role in society. AI Now is scaling up our team to meet the moment, looking to make three hires to help us grow the organization as we enter our next phase. More information on each role can be […].
    AI Now Institute | 3 days ago
  • AI Now Is Hiring a Comms Associate
    We are looking for a high-touch, digitally savvy communications professional to support the organization’s external presence across a range of channels. The Communications Associate will be a primary point of contact for engagement with the public and press, working in close partnership with our Senior Director and wider team to execute our comms strategy. We […].
    AI Now Institute | 3 days ago
  • AI Now Is Hiring a Senior Operations Director
    We’re looking for a senior leader to support the organization through this next phase of growth. Experienced and results-driven, this individual will have a finger on the pulse of the organization, working in close partnership with our Senior Director to build the systems and processes necessary for our team to thrive. This role requires a […].
    AI Now Institute | 3 days ago
  • AI Now Is Hiring a Program Associate
    We’re looking for a Program Associate to help execute our programs so they can be maximally impactful. With a bias to action and high degree of attention to detail, this individual will work at the frontline of executing AI Now’s flagship reports and events, providing support to the Senior Director across the range of projects […].
    AI Now Institute | 3 days ago
  • We grew ~10x last year, and are now planning for the next 10x
    Hey folks! We’ve recently done an internal impact assessment and thought it would be helpful to share its highlights. (Due to capacity constraints, we opted to share the current post rather than wait for a longer and more polished one, but we’re happy to answer questions.) For context, our goal at Probably Good is to help people build careers that are good for them and for the world.
    Effective Altruism Forum | 3 days ago
  • Useful conversations & resources from our Slack community
    Hive Slack Threads: April
    Hive | 3 days ago
  • The backlash to Billie Eilish’s vegan comments explains a lot about the American left (and everyone else)
    Last week, in a video interview with Elle magazine, the pop star Billie Eilish was asked the following question: “What’s one hill you’d die on?”  “Y’all not gonna like me for this one,” Eilish said. “Eating meat is inherently wrong.”  She then added that it’s hypocritical to say you love all animals but also eat […]...
    Future Perfect | 3 days ago
  • With Me For My Looks
    what's the solution?
    Atoms vs Bits | 3 days ago

  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.