~www_lesswrong_com | Bookmarks (669)
-
[Intuitive self-models] 8. Rooting Out Free Will Intuitions — LessWrong
Published on November 4, 2024 6:16 PM GMT. 8.1 Post summary / Table of contents. This is the...
-
Option control — LessWrong
Published on November 4, 2024 5:54 PM GMT. Introduction and summary. (This is the third in a series of...
-
The current state of RSPs — LessWrong
Published on November 4, 2024 4:00 PM GMT. This is a reference post. It contains no novel...
-
Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? — LessWrong
Published on November 4, 2024 3:20 PM GMT. Proponents of spirituality and alternative medicine often use the...
-
A brief history of the automated corporation — LessWrong
Published on November 4, 2024 2:35 PM GMT. Looking back from 2041. When people in the early 21st...
-
Abstractions are not Natural — LessWrong
Published on November 4, 2024 11:10 AM GMT. (This was inspired by a conversation with Alex Altair...
-
[Linkpost] Building Altruistic and Moral AI Agent with Brain-inspired Affective Empathy Mechanisms — LessWrong
Published on November 4, 2024 10:15 AM GMT. Abstract: As AI closely interacts with human society, it is...
-
Context-dependent consequentialism — LessWrong
Published on November 4, 2024 9:29 AM GMT. This dialogue is still in progress, but due to...
-
Survival without dignity — LessWrong
Published on November 4, 2024 2:29 AM GMT. I open my eyes and find myself lying on...
-
Drug development costs can range over two orders of magnitude — LessWrong
Published on November 3, 2024 11:13 PM GMT. This is a cross-post from my new newsletter, where...
-
Feedback request: what am I missing? — LessWrong
Published on November 2, 2024 5:38 PM GMT. In the past 3 and a bit years since...
-
Fragile, Robust, and Antifragile Preference Satisfaction — LessWrong
Published on November 2, 2024 5:25 PM GMT. What do I want to do? This sounds like a...
-
Is OpenAI net negative for AI Safety? — LessWrong
Published on November 2, 2024 4:18 PM GMT. I recently saw a post arguing that top AI...
-
Educational CAI: Aligning a Language Model with Pedagogical Theories — LessWrong
Published on November 1, 2024 6:55 PM GMT. Bharath Puranam (bharath225525@gmail.com). This research blog represents my final project...
-
Two arguments against longtermist thought experiments — LessWrong
Published on November 2, 2024 10:22 AM GMT. Epistemic status: shower thoughts. I am currently going through the...
-
Both-Sidesism—When Fair & Balanced Goes Wrong — LessWrong
Published on November 2, 2024 3:04 AM GMT. In a few days' time, voting will close for...
-
What can we learn from insecure domains? — LessWrong
Published on November 1, 2024 11:53 PM GMT. Cryptocurrency is terrible. With a single click of a...
-
Science advances one funeral at a time — LessWrong
Published on November 1, 2024 11:06 PM GMT. Major scientific institutions talk a big game about innovation,...
-
Set Theory Multiverse vs. Mathematical Truth - Philosophical Discussion — LessWrong
Published on November 1, 2024 6:56 PM GMT. I've been thinking about the set theory multiverse and...
-
SAE Probing: What is it good for? Absolutely something! — LessWrong
Published on November 1, 2024 7:23 PM GMT. Subhash and Josh are co-first authors. Work done as...
-
'Meta', 'mesa', and mountains — LessWrong
Published on October 31, 2024 5:25 PM GMT. Recently, in a conversation with a coworker, I was...
-
Toward Safety Cases For AI Scheming — LessWrong
Published on October 31, 2024 5:20 PM GMT. Developers of frontier AI systems will face increasingly challenging...
-
AI #88: Thanks for the Memos — LessWrong
Published on October 31, 2024 3:00 PM GMT. Following up on the Biden Executive Order on AI,...
-
The Compendium: A Full Argument About Extinction Risk from AGI — LessWrong
Published on October 31, 2024 12:01 PM GMT. We (Connor Leahy, Gabriel Alfour, Chris Scammell, Andrea Miotti,...