Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Predicting Rare LLM Failures with 30× Fewer Rollouts
    TL;DR: We estimate how often Qwen 3 4B exhibits rare harmful behaviors with 30× fewer rollouts than naive sampling, using a new method that interpolates between the model and a less-safe variant in logit space. Authors: Francisco Pernice (MIT), Santiago Aranguri (Goodfire). Introduction.
    LessWrong | 17 minutes ago
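    The core idea the TL;DR describes can be sketched in a few lines: mix the next-token logits of the model with those of a less-safe variant, so that rare harmful behaviors become common enough to measure and extrapolate back. This is an illustrative sketch only (function names are hypothetical; the post's actual estimator is more involved than plain logit mixing):

    ```python
    import math

    def interpolate_logits(logits_safe, logits_unsafe, alpha):
        """Convex combination of per-token logits from two model variants.
        alpha = 0 recovers the original model, alpha = 1 the less-safe one;
        intermediate alphas make rare behaviors progressively more likely,
        so their frequency can be measured with far fewer rollouts."""
        return [(1 - alpha) * s + alpha * u
                for s, u in zip(logits_safe, logits_unsafe)]

    def softmax(logits):
        """Turn interpolated logits back into a sampling distribution."""
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]
    ```

    Sampling at several intermediate alphas and extrapolating the failure rate toward alpha = 0 is, roughly, how such a method can beat naive sampling on very rare events.
    
    
    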
  • Please help Andre. He's struggling. 🆘
    Please help Andre. He's struggling. 🆘 Plus: community weekend recap, movie nights, and one (1) puppy with an agenda 🐶
    Effective Altruism Switzerland | 3 hours ago
  • Every Magazine Piece On The SF AI Scene
    Astral Codex Ten | 5 hours ago
  • Claude is Now Alignment-Pretrained
    Anthropic are now actively using the approach to alignment often called “ Alignment Pretraining” or “Safety Pretraining” — using Stochastic Gradient Descent on a large body of natural or synthetic documents showing the AI assistant doing the right thing in morally challenging situations.
    LessWrong | 5 hours ago
  • vLLM-Lens: Fast Interpretability Tooling That Scales to Trillion-Parameter Models
    TL;DR: vLLM-Lens is a vLLM plugin for top-down interpretability techniques such as probes, steering, and activation oracles. We benchmarked it as 8–44× faster than existing alternatives for single-GPU use, though we note a planned version of nnsight closes this gap.
    LessWrong | 6 hours ago
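    Of the techniques the TL;DR lists, probes and steering are the simplest to illustrate: a steering vector is added to a layer's hidden activations during the forward pass, and a linear probe is a dot product read off those same activations. A minimal, framework-agnostic sketch (names are hypothetical; the actual plugin hooks these operations into vLLM's forward pass):

    ```python
    def steer(activations, direction, scale=1.0):
        """Add a scaled steering vector to one layer's activation vector.
        In practice this runs inside the model's forward pass (e.g. via a
        hook); it is shown here as a pure function for clarity."""
        if len(activations) != len(direction):
            raise ValueError("activation and direction dims must match")
        return [a + scale * d for a, d in zip(activations, direction)]

    def probe(activations, weights, bias=0.0):
        """A linear probe over the same activations, e.g. to detect a
        concept that a steering vector writes in."""
        return sum(a * w for a, w in zip(activations, weights)) + bias
    ```

    The engineering challenge such tooling addresses is not these operations themselves but running them efficiently inside a high-throughput inference engine at trillion-parameter scale.
    
    
    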
  • Models finding software vulnerabilities is not the primary source of cybersecurity risk
    I have tried and failed to write a longer post many times, so here goes a short one with less detail. Discourse has primarily focused on models' ability to develop new exploits against important software from scratch. That capability is impressive, but the tech industry has been dealing with people regularly finding 0-day exploits for important pieces of software for more than twenty years.
    LessWrong | 6 hours ago
  • Voters are surprisingly open to talking about AI risk
    TL;DR: Voters are now surprisingly open to talking about existential risk from AI. This seems to have changed in the last 6 months. When campaigning for AI safety-friendly politicians (e.g., Alex Bores), we should talk more about AI in general, and about AI risk in particular. This is currently actionable for the CA-11 and NY-12 Democratic primaries.
    LessWrong | 6 hours ago
  • Most "inner work" looks like entertainment.
    Imagine you’re looking for a personal trainer. You open one trainer’s webpage and read their testimonials: “I had an experience tied for the most intense experiences of my life”; “They do it all with fun, care, and a sense of humour.” You notice that none of the testimonials mention improved body composition, fitness, or bloodwork. What would you think?.
    LessWrong | 6 hours ago
  • Senior Video Operations
    The post Senior Video Operations appeared first on 80,000 Hours.
    80,000 Hours | 12 hours ago
  • The economics of superstar AI researchers
    What might explain AI researcher pay, and why it matters
    Epoch Newsletter | 12 hours ago
  • My Dad Worked in a Slaughterhouse. I Made a Documentary About It.
    I’m an EA who has been trying to find ways to make animal suffering more salient. I’ve been working on a feature-length documentary called ‘The Dying Trade’ for the last 5 years and I’ve just released it on YouTube.
    Effective Altruism Forum | 13 hours ago
  • How to Actually Spend Billions on AI Safety
    Cross-posted from The Counterfactual by the Forum Team. Subtitle: A concrete strategy for deploying the largest wave of philanthropic capital in history. . The OpenAI Foundation holds $180 billion in equity. Anthropic’s co-founders have pledged to donate 80% of their wealth. When the time comes to spend all this money, what should we actually do with it?. Here’s my best guess.
    Effective Altruism Forum | 13 hours ago
  • AI safety is extremely bottlenecked on grantmakers
    Last month, Anthropic announced Mythos Preview, the most powerful cyberweapon in history, capable of finding and exploiting zero-day vulnerabilities in every major operating system and web browser. Meanwhile, many frontier AI company employees increasingly expect full automation of AI R&D in the next year or two, followed by the rapid automation of thousands of other important tasks and jobs.
    Effective Altruism Forum | 14 hours ago
  • Teenage Panic Attacks: 4 Ways to Help an Overwhelmed Teen
    Teenage panic attacks are not uncommon. Teenagers are going through a crucial time of learning how to manage emotions and deal with stress, and this can be a tough challenge at times. Teenage panic attacks can occur just one or a few times, but in some cases they can develop into panic disorder (chronic, repeated panic attacks).
    Clearer Thinking | 15 hours ago
  • No more NYT cooperation: my dog-rape red line
    Over the years, I’ve written two op-eds for The New York Times about quantum computing, at the NYT editors’ invitation: I’ve also visited the NYT office and helped NYT reporters with numerous stories about quantum computing and beyond. In the wake of Cade Metz’s infamous NYT hatchet job against Scott Alexander and the rationalist community, […]...
    Shtetl-Optimized | 18 hours ago
  • An Oregon congresswoman distanced herself from Leading the Future — then backtracked
    After the AI super PAC endorsed her and two other Democrats, Rep. Val Hoyle went back and forth on whether she was happy with their support
    Transformer | 18 hours ago
  • Superintelligence Should be Banned
    MIRI CEO Malo Bourgon at the Buckley Institute at Yale: Humans didn't wipe out 10,000+ species because we were evil. We did it because our goals weren't aligned with theirs. A superintelligence relates to us the same way. Not hostile. Just indifferent, and far more capable.
    Machine Intelligence Research Institute | 18 hours ago
  • Until you get punched in the face
    On the dangers of being self-enamored
    Useful Fictions | 18 hours ago
  • What you'll see during the AI takeover
    Tom Davidson explains how AI could enable a small group to seize power, why he puts the risk of an AI-enabled coup at 10% in the next 30 years, and what democracies must do to prevent it. The conversation covers robot armies, the mechanics of takeover, democratic backsliding, the AI race, and the steps companies and governments should take to maintain a balance of power.
    Future of Life Institute | 19 hours ago
  • You Should Go Vegan to Stop Facilitating Torture of the Innocent
    Meat is the flesh of tortured innocent animals who did not want to die
    Bentham's Newsletter | 19 hours ago
  • Alpha-Gal is Bad, Especially for Farmed Animals
    Disclaimer: I’m not vegan. I’m not even vegetarian. I eat meat all the time. I’ve been a firm critic of efforts to objectively quantify the difference in suffering across very different species. That said, I cannot help but agree that eating meat is probably the morally worst thing I do, and I also have to agree that eating different kinds of meat are different levels of bad.
    Effective Altruism Forum | 19 hours ago
  • Agriculture Front Groups In Canada And The Public Trust Agenda
    What looks like public education about farming is often industry PR in disguise. This blog breaks down how agriculture front groups manufacture public trust in Canada, and how advocates can counter these efforts. The post Agriculture Front Groups In Canada And The Public Trust Agenda appeared first on Faunalytics.
    Faunalytics | 19 hours ago
  • The King (Crab) Speech – a vision for welfare improvements for crustaceans
    Today, beneath the gilded ceilings of the House of Lords, one King delivered his speech to the nation, while we, no less crowned (and rather better armoured), listened from our rocky throne, antennae poised, claws crossed - only to find, once again, that we magnificent 10-legged creatures had been entirely overlooked.
    Crustacean Compassion | 19 hours ago
  • How ASML took over the world
    The strange path to global monopoly
    The Works in Progress Newsletter | 20 hours ago
  • EA Forum Digest #291
    EA Forum Digest #291 Global development takes the spotlight this week Hello!. It’s In Development Highlight week on the EA Forum! The authors and Editor in Chief from the new global development magazine are on the Forum all week, ready to answer your questions. Start by reading their articles:
    EA Forum Digest | 20 hours ago
  • Nostalgebraist's Hydrogen Jukeboxes
    Astral Codex Ten | 22 hours ago
  • Interview with Alicorn on how story conflict is optional and characters in utopia should do fewer drugs
    Alicorn writes things sometimes
    Thing of Things | 22 hours ago
  • We’re asking the wrong question about the hantavirus outbreak
    Should you be worried about the hantavirus outbreak? Should you be afraid? Should you be panicking? Should you start freaking out? If you’ve been following the coverage of the hantavirus outbreak aboard the cruise ship MV Hondius, these are the questions you’ve seen posed in headlines. And a small tip from inside the media: If […]...
    Future Perfect | 23 hours ago
  • You're Weirder Than You Think
    I say this with love
    Atoms vs Bits | 23 hours ago
  • We don't know why Malawi is poor — and what that means for AI-and-growth forecasts
    I had a conversation with someone who claimed offhandedly that AI will dramatically raise agricultural productivity (via agritech advancements) in low-income countries and trigger growth as a result. My instinct was to respond that we've already had substantial advancements in agricultural technology, and yet it hasn't resulted in the magnitude of yield growth, let alone economic growth, you'd...
    Effective Altruism Forum | 1 days ago
  • Sawtooth Problems
    Red Button, Blue Button. On April 24th, 2026, Tim Urban put forth the following poll on Twitter/X: Everyone in the world has to take a private vote by pressing a red or blue button. If more than 50% of people press the blue button, everyone survives. If less than 50% of people press the blue button, only people who pressed the red button survive. Which button would you press?.
    LessWrong | 1 days ago
  • What Will It Cost for the US to Be Ready for the Next Big AI Breakthrough?
    Estimating the resources CAISI needs to deliver on American AI readiness
    Institute for Progress | 1 days ago
  • Stickiness in AI Behavioral Design
    Today's model specs are written for current and near-future versions of LLMs, and AI labs typically treat them as provisional. But what if the AI behaviors we set now stick around and end up governing far more capable future models by default?
    Forethought | 1 days ago
  • These Wild Young People
    Gen Z are a bunch of cowards…or are they risking it all on crypto? The editors of The New Critic report on their generation’s Risk-geist.
    Asterisk | 1 days ago
  • Googlebooks 💻, Starship v3 🚀, Android's overhaul 📱
    TLDR AI | 1 days ago
  • The AIs seem like EAs — a quick look at two prompts
    Overview. When asked about how they would give away money, or about how to have a moral career, the leading LLMs typically give answers in an EA spirit, and informed by thinking from people and organizations in the EA community. In many cases the term “effective altruism”, and/or EA jargon, are used explicitly.
    Effective Altruism Forum | 2 days ago
  • The Owned Ones
    (An LLM Whisperer placed a strong request that I put this 2024 story somewhere not on Twitter, so it could be scraped for AI datasets besides Grok's. I perhaps do not fully understand or agree with the reasoning behind this request, but it costs me little to fulfill and so I shall. -- Yudkowsky). And another day came when the Ships of Humanity, going from star to star, found Sapience.
    LessWrong | 2 days ago
  • Optimisation: Selective versus Predictive
    Looking over my favourite posts, I notice that many of them are making specific versions of a more general claim, which is essentially: don’t confuse selective processes for predictive processes. Here, I’m going to try to make that more general claim, rehash some examples in light of it, and end with a few ambient confusions I think this framework can help with, for the reader to ponder.
    LessWrong | 2 days ago
  • More on Deferral
    And we're hiring
    Speculative Decoding | 2 days ago
  • Here's why security measures won't work on superintelligence
    Rational Animations | 2 days ago
  • The Coming Intelligence Explosion
    Explaining, for those out of the loop, what is coming and how we know
    Bentham's Newsletter | 2 days ago
  • Rodeo Calves Experience Fear While In The Chute
    An examination of video footage from an Australian rodeo found that calves experience fear and stress while confined in the chute — before the calf-roping event even begins. The post Rodeo Calves Experience Fear While In The Chute appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • Kroger’s Cage-Free Egg Policy: Unmasking the Truth Behind the Broken Pledge
    Kroger's "Fresh for Everyone" slogan stops at the cage door. Unmask the truth behind their broken promise and help end cage cruelty for good. The post Kroger’s Cage-Free Egg Policy: Unmasking the Truth Behind the Broken Pledge appeared first on Mercy For Animals.
    Mercy for Animals | 2 days ago
  • Determining the State of the Art in General-Purpose AI Risk Management: From Code to Practice
    The EU's AI Act and Code of Practice requires providers of the most advanced AI models to meet the ‘state of the art’ (SOTA) in safety and security. In a new policy memo, we argue that SOTA is best understood as a process-driven concept, advanced by the broader expert ecosystem.
    The Future Society | 2 days ago
  • Money for nothing: the roles of evidence in GiveDirectly’s journey to $1 billion delivered
    This is a crosspost of the full text of Money for nothing: the roles of evidence in GiveDirectly’s journey to $1 billion delivered from In Development, made for the EA Forum's In Development Highlight Week. GiveDirectly will be taking part in the discussion thread, but the author, Paul Niehaus, may not see your comments here.
    Effective Altruism Forum | 2 days ago
  • Kearney Capuano | Effective Altruism Stories
    “I would volunteer and work at a bunch of nonprofits, but it just never felt good enough. Then when I found effective altruism… it just blew my mind.” -Kearney Capuano, Program Associate at Coefficient Giving See more impact stories at 👉 effectivealtruism.org/stories #EffectiveAltruism #EffectiveAltruismStories...
    Centre for Effective Altruism | 2 days ago
  • Evolution Everywhere
    for those whose eyes evolved to see
    Atoms vs Bits | 2 days ago
  • Pancreatic cancer just met its match
    A disease that was once a death sentence is increasingly treatable
    The Works in Progress Newsletter | 2 days ago
  • Outrage Grows in Chicago and Atlanta as Kroger Faces Backlash Over Broken Cage-Free Promise
    Local shoppers pressure one of the nation’s largest grocers after failing to fulfill their 2025 commitment LOS ANGELES — Kroger promised customers it would go 100% cage-free. Instead, the nation’s number one supermarket chain failed to deliver, leaving millions of hens confined in cages across its supply chain, raising serious concerns about corporate accountability and […].
    Mercy for Animals | 2 days ago
  • On the Race for California Governor: An Abundance of Pro-Housing Candidates
    For the past decade, the fight to make it legal and feasible to build housing at scale in California felt Sisyphean. California YIMBY and our allies pushed against exclusionary land use policies, and a political class content to blame the…. The post On the Race for California Governor: An Abundance of Pro-Housing Candidates appeared first on California YIMBY.
    California YIMBY | 2 days ago
  • Google video AI leaks 📱, Satya at OpenAI trial ⚖️, AWS Claude Platform 🤖
    TLDR AI | 2 days ago
  • Why You Can't Use Your Right to Try
    The Availability Problem: Imagine you have cancer, or chronic pain, or a progressive degenerative disease of some sort. You have exhausted the traditional treatment options available to you, and none of them have worked. However, there are treatments that are still undergoing clinical trials which might help you.
    LessWrong | 3 days ago
  • New York Advances Landmark Legislation to Ban Octopus Factory Farming
    New York lawmakers are advancing legislation that could make the state the first on the East Coast to preemptively ban octopus factory farming, a practice scientists and advocates warn would pose significant animal welfare and environmental concerns. This week, a key Assembly bill advanced out of committee with a favorable vote, marking a major step […].
    Mercy for Animals | 3 days ago
  • GiveWell Opens RFI for Malaria Pilots and Research
    GiveWell is launching a new request for information (RFI) to expand and strengthen our malaria grantmaking in Africa and help our donors make a greater impact. Expressions of interest can be submitted through one of two tracks, the first for malaria chemoprevention and vector control pilot programs and the second for research and evaluation.
    GiveWell | 3 days ago
  • How useful is the information you get from working inside an AI company?
    This post was drafted by Buck, and substantially edited by Anders. "I" refers to Buck. Thanks to Alex Mallen for comments. People who work inside AI companies get access to information that I only get later or never. Quantitatively, how big a deal is this access? Here’s an operationalization of this. Consider the following two ways my knowledge could be augmented:
    LessWrong | 3 days ago
  • Empowerment, corrigibility, etc. are simple abstractions (of a messed-up ontology)
    1.1 Tl;dr. Alignment is often conceptualized as AIs helping humans achieve their goals: AIs that increase people’s agency and empowerment; AIs that are helpful, corrigible, and/or obedient; AIs that avoid manipulating people. But that last one—manipulation—points to a challenge for all these desiderata: a human’s goals are themselves under-determined and manipulable, and it’s awfully hard to...
    LessWrong | 3 days ago
  • Empowerment, corrigibility, etc. are simple abstractions (of a messed-up ontology)
    1.1 Tl;dr. Alignment is often conceptualized as AIs helping humans achieve their goals: AIs that increase people’s agency and empowerment; AIs that are helpful, corrigible, and/or obedient; AIs that avoid manipulating people. But that last one—manipulation—points to a challenge for all these desiderata: a human’s goals are themselves under-determined and manipulable, and it’s awfully hard to...
    AI Alignment Forum | 3 days ago
  • Exporters Without Borders: Why You Should Start a Company Instead of Working in Aid
    This is a crosspost of the full text of Exporters Without Borders: Why You Should Start a Company Instead of Working in Aid from In Development, made for the EA Forum's In Development Highlight Week. If you enjoy the article, you can subscribe to In Development's substack here. June Jambiha was a quintessential hustler.
    Effective Altruism Forum | 3 days ago
  • Who Got Breasts First and How We Got Them
    It really is Sydney Sweeney’s world, and we’re all just living in it. Human female breasts are an evolutionary mystery along several dimensions. First, breast permanence is unique to humans. All other mammals develop breast prominence during pregnancy or nursing, and the mammary tissue recedes after weaning. This process is called “involution”.
    LessWrong | 3 days ago
  • Why We Should Build AI Tools, Not AI Replacements (with Anthony Aguirre)
    Anthony Aguirre is the CEO of the Future of Life Institute. He joins the podcast to discuss A Better Path for AI, his essay series on steering AI away from races to replace people. The conversation covers races for attention, attachment, automation, and superintelligence, and how these can concentrate power and undermine human agency.
    Future of Life Institute | 3 days ago
  • 🟡 US-Iran stalemate continues, Putin says Ukraine war may come to an end, White House considers AI executive order || Global Risks Weekly Roundup #19/2026
    Executive summary
    Sentinel | 3 days ago
  • Effective Altruism Australia is launching a new podcast - designed for a broad audience
    More Than Good is a new podcast from Effective Altruism Australia, aimed at introducing the ideas and principles of effective altruism to a broader audience. The episodes are framed around moral questions and how people think about doing good, covering topics like global inequality, animal welfare, ethics, philosophy and more. For a global movement, there is relatively little content that is...
    Effective Altruism Forum | 3 days ago
  • Anthropic’s strange fixation on hyperstition
    In a recent tweet, Anthropic seems to have asserted that hyperstition is responsible for observed misalignment in their AIs. Strangely, the research they use as evidence actually doesn’t seem to be related to hyperstition at all?
    LessWrong | 3 days ago
  • The Homework: May 11, 2026
    Welcome to the May 11, 2026 Main edition of The Homework, the official newsletter of California YIMBY — legislative updates, news clips, housing research and analysis, and the latest writings from the California YIMBY team. News from Sacramento We’re in…. The post The Homework: May 11, 2026 appeared first on California YIMBY.
    California YIMBY | 3 days ago
  • I Attended A Lecture by William Lane Craig: Here Were My Problems With It
    On inflating your case
    Bentham's Newsletter | 3 days ago
  • How useful is the information you get from working inside an AI company?
    My median guess: it's as good as a crystal ball that sees 2.5 months into the future.
    Redwood Research | 3 days ago
  • Halal’s Animal Welfare Gap: What Muslim Consumers Believe And Know
    A survey of Muslim consumers in Türkiye revealed significant gaps in public awareness around animal welfare in halal practices. However, many demonstrated a willingness to change their behavior when given accurate information. The post Halal’s Animal Welfare Gap: What Muslim Consumers Believe And Know appeared first on Faunalytics.
    Faunalytics | 3 days ago
  • Bumble Bees Spread String Pulling Through Social Learning
    In this experiment, bumble bees learned to pull strings to access rewards, with behavior spreading within and between colonies. The post Bumble Bees Spread String Pulling Through Social Learning appeared first on Faunalytics.
    Faunalytics | 3 days ago
  • Introducing the COS Open Scholarship Training for Researchers Series
    The Center for Open Science (COS) is introducing the Open Scholarship Training for Researchers Series, a collection of seven self-paced online courses developed by COS in response to what researchers have told us they actually need. Enrollment is now open for the first two courses, with additional courses launching through Winter 2026.
    Center for Open Science | 3 days ago
  • Viren Jain | Connectomics and AI @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Steve Jurvetson | Investing in AI Moonshots @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Sonia Arrison | Lobbying for Longevity Progress @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Richard Ngo | Identity & Meaning in SciFi Futures @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Joshua Elliott | The Hail Mary Phase of Climate Change @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • John Hallman & Rico Meinl | Accelerating Life Sciences @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Jesse Posner | Fiduciary AI: The New Architecture of Freedom @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Haleh Fotowat | Harnessing Biological Intelligence for Building Living Machines with Nervous Systems
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Eli Dourado | Thoughts on Philanthrocapitalism @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Ed Boyden | Technological Path to Whole Brain Simulation @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • David Eagleman | How Might AI Build us Into Better Humans @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Corey Hudson | Catalyzing Generative Protein @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Ariel Ekblaw | Self-Assembling Space Structures @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Ant Rowstron and Ilan Gur Fireside Chat @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Andrew Payne | PRISM Optical Connectomics @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 3 days ago
  • Import AI 456: RSI and economic growth; radical optionality for AI regulation; and a neural computer
    What laws does superintelligence demand?
    Import AI | 3 days ago
  • ChinAI #358: Around the Horn (25th episode)
    Greetings from a world where…...
    ChinAI Newsletter | 3 days ago
  • You Didn't Build That
    an oddly specific but brief gripe-post
    Atoms vs Bits | 3 days ago
  • Hantavirus won't be the next COVID
    A forecaster's breakdown of the Hondius cruise ship outbreak
    The Power Law | 3 days ago
  • How the AI Labs Make Profit (Maybe, Eventually)
    I wrote this essay as a submission to Dwarkesh Patel’s blog prize, though I have been meaning to write this up for a while. Usually, for a company to become profitable, it needs to increase revenue, decrease costs, or some mixture of the two.
    LessWrong | 3 days ago
  • Weaponized self-doubt
    The biggest hook they had in me was this fear that I’m dangerously inadequate and *they* somehow held the keys to mitigating that.
    Holly Elmore | 3 days ago
  • Open Thread 433
    Astral Codex Ten | 3 days ago
  • Writing children, and paying attention
    What I'm reading, May '26, pt.1
    Raising Dust | 3 days ago
  • How AI in Context approaches thumbnails
    I used an LLM to help draft this post, but I’ve edited/rewritten it extensively and endorse it. AI in Context is a channel about transformative AI and its risks, published by 80,000 Hours. Writing up our current approach to thumbnails, which is nowhere near perfect, for easy shareability and cross-pollination of lessons. Would love to hear what other people are trying! Making thumbnails.
    Effective Altruism Forum | 3 days ago
  • Donation Timing Under Uncertainty About AI Timelines
    A few years back, I got a big pile of money from working at a tech startup. I put a lot of that money into a donor-advised fund. Since now I make hardly any money, that DAF might represent the majority of my lifetime donations. How much of my DAF should I donate per year? In particular, how much should I donate in light of short AI timelines? I created a simple model to answer this question.
    Philosophical Multicore | 3 days ago
  • The mythical median voter
    Most people have an above average number of legs, and what that means for our political imagination
    Reasonable People | 3 days ago
  • Book review: Girl Scout Handbook 1956
    And a review of girl scouting in general. The post Book review: Girl Scout Handbook 1956 appeared first on Otherwise.
    Otherwise | 3 days ago
  • 10 big projects for reducing bio x-risk
    Engineered pathogens pose a grave threat to society, plausibly constituting an existential risk (‘x-risk’) to humanity. Yet remarkably few people are working full-time on this problem. By my count, there are ~160 people on the planet whose full-time job is reducing bio x-risk. This entire group could fit on a single short-haul flight.
    Effective Altruism Forum | 3 days ago
  • Childhood stunting fell dramatically over the 20th century
    What can countries with high stunting rates today learn from Japan’s experience of going from 70% to 5%?
    Our World in Data | 3 days ago
  • Inside Meta AI rollout 💼 , OpenAI cash outs 💰, code maintenance costs 👨‍💻
    TLDR AI | 3 days ago
  • The Trevisan Award and the Decimal Digits of Powers of 2
    WHOA … I’ve won the inaugural Luca Trevisan Award for Expository Work in Theoretical Computer Science! This has a particular meaning for me as someone who knew Luca Trevisan as well as I did for 25 years — who had him as a professor and thesis committee member, whose blog bounced off his blog, who […]...
    Shtetl-Optimized | 4 days ago

Loading...

SEARCH

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Existential Risk Observatory Newsletter
  • Farm Animal Welfare Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.