Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Designing AI factual claims for "easy verification"
    "Sometimes the AI just makes stuff up" is a problem I don't really expect to go away. In the near term, AI is going to keep occasionally hallucinating or misinterpreting information. Eventually, AI will be powerful enough that we need to worry about it presenting misleading information on purpose.
    LessWrong | 31 minutes ago
  • Convergent Abstraction Hypothesis
    TL;DR: The convergent abstraction hypothesis posits that abstractions are often convergent in the sense of convergent evolution: different cognitive systems converge on the same abstraction when facing similar selection pressures and learning in similar environments. It is a less ambitious alternative to 'natural abstractions hypotheses' and, in my view, more likely to be true.
    LessWrong | 1 hour ago
  • The Sigmoids Won't Save You
    Astral Codex Ten | 2 hours ago
  • Video | Abhijit Banerjee says teaching children, not curriculum, is key to faster global education progress
    Video | Abhijit Banerjee says teaching children, not curriculum, is key to faster global education progress J-PAL's co-founder Abhijit Banerjee says teaching children, not curriculum, is key to faster global education progress, at the Yashraj Bharati Samman, 2026.
    J-PAL | 4 hours ago
  • Anti-poverty program is effective even in one of the world's toughest settings
    Anti-poverty program is effective even in one of the world's toughest settings Northwestern University economist and J-PAL affiliate Dean Karlan highlighted that the Graduation approach delivered results even in one of the world's most challenging environments, Somalia, noting that the results fall in the upper end of the spectrum for what the program typically delivers, and that the biggest,...
    J-PAL | 4 hours ago
  • Advocates push TVET to tackle youth unemployment
    Advocates push TVET to tackle youth unemployment J-PAL affiliate Monica Lambon-Quayefio, Associate Professor at the Department of Economics, University of Ghana, said unlocking the potential of TVET required a deliberate, well-resourced, and inclusive ecosystem to prepare the youth for the modern economy during the webinar on the theme "Youth employment" organized by the World Bank Ghana with...
    J-PAL | 4 hours ago
  • Biggby Coffee and Bluestone Lane Profit at Animals’ Expense
    It's been one year since Mercy For Animals called on Biggby Coffee and Bluestone Lane to drop the upcharge on plant-based milk. We will continue to call them out for their unjust policy!
    Mercy for Animals | 6 hours ago
  • Apple vs OpenAI 📱, Netflix AI animation 📺, goal primitives 👨‍💻
    TLDR AI | 11 hours ago
  • Automated Alignment is Harder Than You Think
    Summary: This is a summary of a paper published by the alignment team at UK AISI. Read the full paper here. AI research agents may help solve ASI alignment, for example via the following plan: Build agents that can do empirical alignment work (e.g. writing code, running experiments, designing evaluations and red teaming) and confirm they are not scheming.
    LessWrong | 12 hours ago
  • Podcast Episode 29: Behind the Analysis — Assessing Past Malaria Nets Grants
    GiveWell’s research doesn’t end once we’ve made a grant. We evaluate a subset of completed grants, comparing what we thought would happen to what actually took place, then try to use what we learn to improve our future funding decisions.
    GiveWell | 13 hours ago
  • Behind the Analysis: Assessing Past Malaria Nets Grants
    GiveWell’s research doesn’t end once we’ve made a grant. We evaluate a subset of completed grants, comparing what we thought would happen to what actually took place, then try to use what we learn to improve our future funding decisions.
    GiveWell | 13 hours ago
  • Why Are Farmed Animals Spray Painted?
    In factory farms around the world, individual animal care is impossible. To manage thousands of farmed animals at once, workers use industrial marking paint on fur or skin, applying it with a brush, sprayer, or roller to categorize animals such as cows, pigs, goats, and sheep. Why Are Animals Spray-Painted and What Does It Represent? […]
    Mercy for Animals | 15 hours ago
  • The safe-to-dangerous shift is a fundamental problem for eval realism; but also for measuring awareness
    1) The safe-to-dangerous shift is a fundamental problem for eval realism. Suppose we have a capable and potentially scheming model, and before we deploy it, we want some evidence that it won’t do anything catastrophically dangerous once we deploy it. A common approach is to use black-box alignment evaluations.
    LessWrong | 16 hours ago
  • Cyber Resilience Corps Listed as Key Resource in CISA’s “CI Fortify” Initiative
    The Cyber Resilience Corps was listed as a resource for CI Fortify, a new initiative launched by the Cybersecurity and Infrastructure Security Agency (CISA), demonstrating the strong role that volunteers play in hardening the defenses of critical infrastructure in local communities.
    Center for Long-Term Cybersecurity | 16 hours ago
  • Forecasting Musk v. Altman
    The trial of the year draws to a close
    Manifold Markets | 16 hours ago
  • The safe-to-dangerous shift is a fundamental problem for eval realism; but also for measuring awareness
    1) The safe-to-dangerous shift is a fundamental problem for eval realism. Suppose we have a capable and potentially scheming model, and before we deploy it, we want some evidence that it won’t do anything catastrophically dangerous once we deploy it. A common approach is to use black-box alignment evaluations.
    AI Alignment Forum | 17 hours ago
  • Someone is talking to us by... shifting the luminosity of stars?
    Rational Animations | 18 hours ago
  • The Bizarre Twitter Empathy Meltdown
    Having empathy for others doesn't require weird metaphysics!
    Bentham's Newsletter | 19 hours ago
  • NPT side event “Arms Control Initiatives to Advance Article VI Obligations”
    On 14 May 2026 Pugwash held a side event during the Treaty on the Non-Proliferation of Nuclear Weapons Review Conference … More...
    Pugwash Conferences on Science and World Affairs | 20 hours ago
  • Are companies the most durable cash transfers?
    Are companies the most durable cash transfers? Hello! Our favourite links this month include: A major threat to animal welfare legislation in the US. The case for starting an export company instead of working in aid.
    Effective Altruism Newsletter | 21 hours ago
  • The Case for Cross-Border AI Incident Infrastructure
    AI incidents are scaling fast, and coordinated global governance is lagging behind. This report proposes addressing this challenge through the development of internationally-distributed incident management infrastructure. Our recommendations aim to enable governments, multilateral bodies, and frontier AI companies to jointly detect, prepare for, and respond to AI incidents across jurisdictions.
    The Future Society | 21 hours ago
  • What is Effective Altruism? | Calum | Vox Pop
    We asked attendees at EA Global about effective altruism. Here is what Calum said. Find an upcoming conference at 👉 effectivealtruism.org/ea-global #EffectiveAltruism #EAVoxPop #EAGlobal...
    Centre for Effective Altruism | 22 hours ago
  • How to create valuable things that people actually want to use
    Short of time? Read the key takeaways. Value to others beats personal interest. A common mistake creators make is assuming that what fascinates them will fascinate their audience. If you value impact, we recommend focusing instead on what your specific audience finds useful, actionable, or genuinely relevant to their lives. Your impact depends on comparison, not just quality.
    Clearer Thinking | 23 hours ago
  • The Transparency Edition
    The Transparency Edition Plus a new opening, updates on our farm program, R&D updates, and more. Hi there, We hope you’re having a good May! We don’t have any major project updates to share this month, so we’re instead focusing our highlight article on the various ways FWI aims to be a transparent...
    Fish Welfare Initiative | 23 hours ago
  • Working Hard, Hardly Working
    the space between
    Atoms vs Bits | 23 hours ago
  • Predicting Rare LLM Failures with 30× Fewer Rollouts
    TL;DR: We estimate how often Qwen 3 4B exhibits rare harmful behaviors with 30× fewer rollouts than naive sampling, using a new method that interpolates between the model and a less-safe variant in logit space. Authors: Francisco Pernice (MIT), Santiago Aranguri (Goodfire). Introduction.
    LessWrong | 1 day ago
  • Please help Andre. He's struggling. 🆘
    Please help Andre. He's struggling. 🆘 Plus: community weekend recap, movie nights, and one (1) puppy with an agenda 🐶
    Effective Altruism Switzerland | 1 day ago
  • Every Magazine Piece On The SF AI Scene
    Astral Codex Ten | 1 day ago
  • Claude is Now Alignment-Pretrained
    Anthropic are now actively using the approach to alignment often called “ Alignment Pretraining” or “Safety Pretraining” — using Stochastic Gradient Descent on a large body of natural or synthetic documents showing the AI assistant doing the right thing in morally challenging situations.
    LessWrong | 1 day ago
  • vLLM-Lens: Fast Interpretability Tooling That Scales to Trillion-Parameter Models
    TL;DR: vLLM-Lens is a vLLM plugin for top-down interpretability techniques such as probes, steering, and activation oracles. We benchmarked it as 8–44× faster than existing alternatives for single-GPU use, though we note a planned version of nnsight closes this gap.
    LessWrong | 1 day ago
  • Models finding software vulnerabilities is not the primary source of cybersecurity risk
    I have tried and failed to write a longer post since 2024, so here goes a short one with less detail. Discourse has primarily focused on models' ability to develop new exploits against important software from scratch. That capability is impressive, but the tech industry has been dealing with people regularly finding 0-day exploits for important pieces of software for more than twenty years.
    LessWrong | 1 day ago
  • Voters are surprisingly open to talking about AI risk
    TL;DR: Voters are now surprisingly open to talking about existential risk from AI. This seems to have changed in the last 6 months. When campaigning for AI safety-friendly politicians (e.g., Alex Bores), we should talk more about AI in general, and about AI risk in particular. This is currently actionable for the CA-11 and NY-12 Democratic primaries.
    LessWrong | 1 day ago
  • Most "inner work" looks like entertainment.
    Imagine you’re looking for a personal trainer. You open one trainer’s webpage and read their testimonials: “I had an experience tied for the most intense experiences of my life”; “They do it all with fun, care, and a sense of humour.” You notice that none of the testimonials mention improved body composition, fitness, or bloodwork. What would you think?.
    LessWrong | 1 day ago
  • App Store agents 📱, Alexa AI shopping 🛍️, Notion Dev Platform 👨‍💻
    TLDR AI | 1 day ago
  • Open position: Senior Video Operations
    The post Open position: Senior Video Operations appeared first on 80,000 Hours.
    80,000 Hours | 1 day ago
  • The economics of superstar AI researchers
    What might explain AI researcher pay, and why it matters
    Epoch Newsletter | 2 days ago
  • “My Dad Worked in a Slaughterhouse. I Made a Documentary About It.” by Jack Hancock-Fairs
    I’m an EA who has been trying to find ways to make animal suffering more salient. I’ve been working on a feature-length documentary called ‘The Dying Trade’ for the last 5 years and I’ve just released it on YouTube.
    Effective Altruism Forum Podcast | 2 days ago
  • How to Actually Spend Billions on AI Safety
    Cross-posted from The Counterfactual by the Forum Team. Subtitle: A concrete strategy for deploying the largest wave of philanthropic capital in history. The OpenAI Foundation holds $180 billion in equity. Anthropic’s co-founders have pledged to donate 80% of their wealth. When the time comes to spend all this money, what should we actually do with it? Here’s my best guess.
    Effective Altruism Forum | 2 days ago
  • AI safety is extremely bottlenecked on grantmakers
    Last month, Anthropic announced Mythos Preview, the most powerful cyberweapon in history, capable of finding and exploiting zero-day vulnerabilities in every major operating system and web browser. Meanwhile, many frontier AI company employees increasingly expect full automation of AI R&D in the next year or two, followed by the rapid automation of thousands of other important tasks and jobs.
    Effective Altruism Forum | 2 days ago
  • Teenage Panic Attacks: 4 Ways to Help an Overwhelmed Teen
    Teenage panic attacks are not uncommon. Teenagers are going through a crucial time of learning how to manage emotions and deal with stress, and this can be a tough challenge at times. Teenage panic attacks can occur just one or a few times, but in some cases they can develop into panic disorder (chronic, repeated panic attacks).
    Clearer Thinking | 2 days ago
  • No more NYT cooperation: my dog-rape red line
    Over the years, I’ve written two op-eds for The New York Times about quantum computing, at the NYT editors’ invitation. I’ve also visited the NYT office and helped NYT reporters with numerous stories about quantum computing and beyond. In the wake of Cade Metz’s infamous NYT hatchet job against Scott Alexander and the rationalist community, […]...
    Shtetl-Optimized | 2 days ago
  • An Oregon congresswoman distanced herself from Leading the Future — then backtracked
    After the AI super PAC endorsed her and two other Democrats, Rep. Val Hoyle went back and forth on whether she was happy with their support
    Transformer | 2 days ago
  • Superintelligence Should be Banned
    MIRI CEO Malo Bourgon at the Buckley Institute at Yale: Humans didn't wipe out 10,000+ species because we were evil. We did it because our goals weren't aligned with theirs. A superintelligence relates to us the same way. Not hostile. Just indifferent, and far more capable.
    Machine Intelligence Research Institute | 2 days ago
  • Until you get punched in the face
    On the dangers of being self-enamored
    Useful Fictions | 2 days ago
  • What you'll see during the AI takeover
    Tom Davidson explains how AI could enable a small group to seize power, why he puts the risk of an AI-enabled coup at 10% in the next 30 years, and what democracies must do to prevent it. The conversation covers robot armies, the mechanics of takeover, democratic backsliding, the AI race, and the steps companies and governments should take to maintain a balance of power.
    Future of Life Institute | 2 days ago
  • You Should Go Vegan to Stop Facilitating Torture of the Innocent
    Meat is the flesh of tortured innocent animals who did not want to die
    Bentham's Newsletter | 2 days ago
  • Alpha-Gal is Bad, Especially for Farmed Animals
    Disclaimer: I’m not vegan. I’m not even vegetarian. I eat meat all the time. I’ve been a firm critic of efforts to objectively quantify the difference in suffering across very different species. That said, I cannot help but agree that eating meat is probably the morally worst thing I do, and I also have to agree that eating different kinds of meat is bad to different degrees.
    Effective Altruism Forum | 2 days ago
  • Agriculture Front Groups In Canada And The Public Trust Agenda
    What looks like public education about farming is often industry PR in disguise. This blog breaks down how agriculture front groups manufacture public trust in Canada, and how advocates can counter these efforts.
    Faunalytics | 2 days ago
  • The King (Crab) Speech – a vision for welfare improvements for crustaceans
    Today, beneath the gilded ceilings of the House of Lords, one King delivered his speech to the nation, while we, no less crowned (and rather better armoured), listened from our rocky throne, antennae poised, claws crossed - only to find, once again, that we magnificent 10-legged creatures had been entirely overlooked.
    Crustacean Compassion | 2 days ago
  • How ASML took over the world
    The strange path to global monopoly
    The Works in Progress Newsletter | 2 days ago
  • EA Forum Digest #291
    EA Forum Digest #291: Global development takes the spotlight this week. Hello! It’s In Development Highlight Week on the EA Forum! The authors and Editor-in-Chief from the new global development magazine are on the Forum all week, ready to answer your questions. Start by reading their articles:
    EA Forum Digest | 2 days ago
  • Nostalgebraist's Hydrogen Jukeboxes
    Astral Codex Ten | 2 days ago
  • Interview with Alicorn on how story conflict is optional and characters in utopia should do fewer drugs
    Alicorn writes things sometimes
    Thing of Things | 2 days ago
  • We’re asking the wrong question about the hantavirus outbreak
    Should you be worried about the hantavirus outbreak? Should you be afraid? Should you be panicking? Should you start freaking out? If you’ve been following the coverage of the hantavirus outbreak aboard the cruise ship MV Hondius, these are the questions you’ve seen posed in headlines. And a small tip from inside the media: If […]
    Future Perfect | 2 days ago
  • You're Weirder Than You Think
    I say this with love
    Atoms vs Bits | 2 days ago
  • We don't know why Malawi is poor — and what that means for AI-and-growth forecasts
    I had a conversation with someone who claimed offhandedly that AI will dramatically raise agricultural productivity (via agritech advancements) in low-income countries and trigger growth as a result. My instinct was to respond that we've already had substantial advancements in agricultural technology, and yet it hasn't resulted in the magnitude of yield growth, let alone economic growth, you'd...
    Effective Altruism Forum | 2 days ago
  • Sawtooth Problems
    Red Button, Blue Button. On April 24th, 2026, Tim Urban put forth the following poll on Twitter/X: Everyone in the world has to take a private vote by pressing a red or blue button. If more than 50% of people press the blue button, everyone survives. If less than 50% of people press the blue button, only people who pressed the red button survive. Which button would you press?
    LessWrong | 2 days ago
  • What Will It Cost for the US to Be Ready for the Next Big AI Breakthrough?
    Estimating the resources CAISI needs to deliver on American AI readiness
    Institute for Progress | 2 days ago
  • Stickiness in AI Behavioral Design
    Today's model specs are written for current and near-future versions of LLMs, and AI labs typically treat them as provisional. But what if the AI behaviors we set now stick around and end up governing far more capable future models by default?
    Forethought | 2 days ago
  • Googlebooks 💻, Starship v3 🚀, Android's overhaul 📱
    TLDR AI | 2 days ago
  • These Wild Young People
    Gen Z are a bunch of cowards…or are they risking it all on crypto? The editors of The New Critic report on their generation’s Risk-geist.
    Asterisk | 2 days ago
  • The AIs seem like EAs — a quick look at two prompts
    Caveat [5/14/26]. See the comments: the results are more prompt-sensitive than I'd thought. Overview. When asked about how they would give away money, or about how to have a moral career, the leading LLMs typically give answers in an EA spirit, and informed by thinking from people and organizations in the EA community.
    Effective Altruism Forum | 3 days ago
  • The Owned Ones
    (An LLM Whisperer placed a strong request that I put this 2024 story somewhere not on Twitter, so it could be scraped for AI datasets besides Grok's. I perhaps do not fully understand or agree with the reasoning behind this request, but it costs me little to fulfill and so I shall. -- Yudkowsky). And another day came when the Ships of Humanity, going from star to star, found Sapience.
    LessWrong | 3 days ago
  • Optimisation: Selective versus Predictive
    Looking over my favourite posts, I notice that many of them are making specific versions of a more general claim, which is essentially: don’t confuse selective processes for predictive processes. Here, I’m going to try to make that more general claim, rehash some examples in light of it, and end with a few ambient confusions I think this framework can help with, for the reader to ponder.
    LessWrong | 3 days ago
  • More on Deferral
    And we're hiring
    Speculative Decoding | 3 days ago
  • Here's why security measures won't work on superintelligence
    Rational Animations | 3 days ago
  • The Coming Intelligence Explosion
    Explaining, for those out of the loop, what is coming and how we know
    Bentham's Newsletter | 3 days ago
  • Kroger’s Cage-Free Egg Policy: Unmasking the Truth Behind the Broken Pledge
    Kroger's "Fresh for Everyone" slogan stops at the cage door. Unmask the truth behind their broken promise and help end cage cruelty for good.
    Mercy for Animals | 3 days ago
  • Rodeo Calves Experience Fear While In The Chute
    An examination of video footage from an Australian rodeo found that calves experience fear and stress while confined in the chute — before the calf-roping event even begins.
    Faunalytics | 3 days ago
  • Determining the State of the Art in General-Purpose AI Risk Management: From Code to Practice
    The EU's AI Act and Code of Practice require providers of the most advanced AI models to meet the ‘state of the art’ (SOTA) in safety and security. In a new policy memo, we argue that SOTA is best understood as a process-driven concept, advanced by the broader expert ecosystem.
    The Future Society | 3 days ago
  • Money for nothing: the roles of evidence in GiveDirectly’s journey to $1 billion delivered
    This is a crosspost of the full text of Money for nothing: the roles of evidence in GiveDirectly’s journey to $1 billion delivered from In Development, made for the EA Forum's In Development Highlight Week. GiveDirectly will be taking part in the discussion thread, but the author, Paul Niehaus, may not see your comments here.
    Effective Altruism Forum | 3 days ago
  • Kearney Capuano | Effective Altruism Stories
    “I would volunteer and work at a bunch of nonprofits, but it just never felt good enough. Then when I found effective altruism… it just blew my mind.” -Kearney Capuano, Program Associate at Coefficient Giving. See more impact stories at effectivealtruism.org/stories
    Centre for Effective Altruism | 3 days ago
  • Evolution Everywhere
    for those whose eyes evolved to see
    Atoms vs Bits | 3 days ago
  • Pancreatic cancer just met its match
    A disease that was once a death sentence is increasingly treatable
    The Works in Progress Newsletter | 3 days ago
  • Outrage Grows in Chicago and Atlanta as Kroger Faces Backlash Over Broken Cage-Free Promise
    Local shoppers pressure one of the nation’s largest grocers after it failed to fulfill its 2025 commitment. LOS ANGELES — Kroger promised customers it would go 100% cage-free. Instead, the nation’s number one supermarket chain failed to deliver, leaving millions of hens confined in cages across its supply chain, raising serious concerns about corporate accountability and […]
    Mercy for Animals | 3 days ago
  • On the Race for California Governor: An Abundance of Pro-Housing Candidates
    For the past decade, the fight to make it legal and feasible to build housing at scale in California felt Sisyphean. California YIMBY and our allies pushed against exclusionary land use policies, and a political class content to blame the…
    California YIMBY | 3 days ago
  • Google video AI leaks 📱, Satya at OpenAI trial ⚖️, AWS Claude Platform 🤖
    TLDR AI | 3 days ago
  • Why You Can't Use Your Right to Try
    The Availability Problem: Imagine you have cancer, or chronic pain, or a progressive degenerative disease of some sort. You have exhausted the traditional treatment options available to you, and none of them have worked. However, there are treatments that are still undergoing clinical trials which might help you.
    LessWrong | 4 days ago
  • New York Advances Landmark Legislation to Ban Octopus Factory Farming
    New York lawmakers are advancing legislation that could make the state the first on the East Coast to preemptively ban octopus factory farming, a practice scientists and advocates warn would pose significant animal welfare and environmental concerns. This week, a key Assembly bill advanced out of committee with a favorable vote, marking a major step […].
    Mercy for Animals | 4 days ago
  • GiveWell Opens RFI for Malaria Pilots and Research
    GiveWell is launching a new request for information (RFI) to expand and strengthen our malaria grantmaking in Africa and help our donors make a greater impact. Expressions of interest can be submitted through one of two tracks, the first for malaria chemoprevention and vector control pilot programs and the second for research and evaluation.
    GiveWell | 4 days ago
  • How useful is the information you get from working inside an AI company?
    This post was drafted by Buck, and substantially edited by Anders. "I" refers to Buck. Thanks to Alex Mallen for comments. People who work inside AI companies get access to information that I only get later or never. Quantitatively, how big a deal is this access? Here’s an operationalization of this. Consider the following two ways my knowledge could be augmented:
    LessWrong | 4 days ago
  • Empowerment, corrigibility, etc. are simple abstractions (of a messed-up ontology)
    1.1 Tl;dr. Alignment is often conceptualized as AIs helping humans achieve their goals: AIs that increase people’s agency and empowerment; AIs that are helpful, corrigible, and/or obedient; AIs that avoid manipulating people. But that last one—manipulation—points to a challenge for all these desiderata: a human’s goals are themselves under-determined and manipulable, and it’s awfully hard to...
    LessWrong | 4 days ago
  • Empowerment, corrigibility, etc. are simple abstractions (of a messed-up ontology)
    1.1 Tl;dr. Alignment is often conceptualized as AIs helping humans achieve their goals: AIs that increase people’s agency and empowerment; AIs that are helpful, corrigible, and/or obedient; AIs that avoid manipulating people. But that last one—manipulation—points to a challenge for all these desiderata: a human’s goals are themselves under-determined and manipulable, and it’s awfully hard to...
    AI Alignment Forum | 4 days ago
  • Exporters Without Borders: Why You Should Start a Company Instead of Working in Aid
    This is a crosspost of the full text of Exporters Without Borders: Why You Should Start a Company Instead of Working in Aid from In Development, made for the EA Forum's In Development Highlight Week. If you enjoy the article, you can subscribe to In Development's substack here. June Jambiha was a quintessential hustler.
    Effective Altruism Forum | 4 days ago
  • Who Got Breasts First and How We Got Them
    It really is Sydney Sweeney’s world, and we’re all just living in it. Human female breasts are an evolutionary mystery along several dimensions. First, breast permanence is unique to humans. All other mammals develop breast prominence during pregnancy or nursing, and the mammary tissue recedes after weaning. This process is called “involution”.
    LessWrong | 4 days ago
  • Why We Should Build AI Tools, Not AI Replacements (with Anthony Aguirre)
    Anthony Aguirre is the CEO of the Future of Life Institute. He joins the podcast to discuss A Better Path for AI, his essay series on steering AI away from races to replace people. The conversation covers races for attention, attachment, automation, and superintelligence, and how these can concentrate power and undermine human agency.
    Future of Life Institute | 4 days ago
  • 🟡 US-Iran stalemate continues, Putin says Ukraine war may come to an end, White House considers AI executive order || Global Risks Weekly Roundup #19/2026
    Executive summary
    Sentinel | 4 days ago
  • Effective Altruism Australia is launching a new podcast - designed for a broad audience
    More Than Good is a new podcast from Effective Altruism Australia, aimed at introducing the ideas and principles of effective altruism to a broader audience. The episodes are framed around moral questions and how people think about doing good, covering topics like global inequality, animal welfare, ethics, philosophy and more. For a global movement, there is relatively little content that is...
    Effective Altruism Forum | 4 days ago
  • Anthropic’s strange fixation on hyperstition
    In a recent tweet, Anthropic seems to have asserted that hyperstition is responsible for observed misalignment in their AIs. Strangely, the research they use as evidence actually doesn’t seem to be related to hyperstition at all?
    LessWrong | 4 days ago
  • The Homework: May 11, 2026
    Welcome to the May 11, 2026 Main edition of The Homework, the official newsletter of California YIMBY — legislative updates, news clips, housing research and analysis, and the latest writings from the California YIMBY team. News from Sacramento: We’re in…
    California YIMBY | 4 days ago
  • I Attended A Lecture by William Lane Craig: Here Were My Problems With It
    On inflating your case
    Bentham's Newsletter | 4 days ago
  • How useful is the information you get from working inside an AI company?
    My median guess: it's as good as a crystal ball that sees 2.5 months into the future.
    Redwood Research | 4 days ago
  • Bumble Bees Spread String Pulling Through Social Learning
    In this experiment, bumble bees learned to pull strings to access rewards, with behavior spreading within and between colonies.
    Faunalytics | 4 days ago
  • Halal’s Animal Welfare Gap: What Muslim Consumers Believe And Know
    A survey of Muslim consumers in Türkiye revealed significant gaps in public awareness around animal welfare in halal practices. However, many demonstrated a willingness to change their behavior when given accurate information.
    Faunalytics | 4 days ago
  • Introducing the COS Open Scholarship Training for Researchers Series
    The Center for Open Science (COS) is introducing the Open Scholarship Training for Researchers Series, a collection of seven self-paced online courses developed by COS in response to what researchers have told us they actually need. Enrollment is now open for the first two courses, with additional courses launching through Winter 2026.
    Center for Open Science | 4 days ago
  • Viren Jain | Connectomics and AI @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 4 days ago
  • Steve Jurvetson | Investing in AI Moonshots @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 4 days ago
  • Sonia Arrison | Lobbying for Longevity Progress @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 4 days ago
  • Richard Ngo | Identity & Meaning in SciFi Futures @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 4 days ago

Loading...

SEARCH

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Existential Risk Observatory Newsletter
  • Farm Animal Welfare Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.