~www_lesswrong_com | Bookmarks (657)
-
Funding for work that builds capacity to address risks from transformative AI — LessWrong
Published on August 14, 2024 11:52 PM GMT [cross-posted from the EA Forum] Post authors: Eli Rose, Asya...
-
Adverse Selection by Life-Saving Charities — LessWrong
Published on August 14, 2024 8:46 PM GMT GiveWell, and the EA community at large, often emphasize...
-
The great Enigma in the sky: The universe as an encryption machine — LessWrong
Published on August 14, 2024 1:21 PM GMT Epistemic status: Fun speculation. I'm a dilettante in physics...
-
An anti-inductive sequence — LessWrong
Published on August 14, 2024 12:28 PM GMT I was thinking about what it would mean for...
-
Rabin's Paradox — LessWrong
Published on August 14, 2024 5:40 AM GMT Quick psychology experiment: Right now, if I offered you a...
-
Announcing the $200k EA Community Choice — LessWrong
Published on August 14, 2024 12:39 AM GMT
-
Debate: Is it ethical to work at AI capabilities companies? — LessWrong
Published on August 14, 2024 12:18 AM GMT Epistemic status: Soldier mindset. These are not (necessarily) our...
-
Fields that I reference when thinking about AI takeover prevention — LessWrong
Published on August 13, 2024 11:08 PM GMT Is AI takeover like a nuclear meltdown? A coup?...
-
Ten counter-arguments that AI is (not) an existential risk (for now) — LessWrong
Published on August 13, 2024 10:35 PM GMT This is a polemic to the ten arguments post....
-
[LDSL#6] When is quantification needed, and when is it hard? — LessWrong
Published on August 13, 2024 8:39 PM GMT This post is also available on my Substack. In the...
-
A computational complexity argument for many worlds — LessWrong
Published on August 13, 2024 7:35 PM GMT The following is an argument for a weak form...
-
Ten arguments that AI is an existential risk — LessWrong
Published on August 13, 2024 5:00 PM GMT
-
In Defense of Open-Minded UDT — LessWrong
Published on August 12, 2024 6:27 PM GMT A Defense of Open-Minded Updatelessness. This work owes a great...
-
Humanity isn't remotely longtermist, so arguments for AGI x-risk should focus on the near term — LessWrong
Published on August 12, 2024 6:10 PM GMT Toby Ord recently published a nice piece On the...
-
Shifting Headspaces - Transitional Beast-Mode — LessWrong
Published on August 12, 2024 1:02 PM GMT I was sitting in a tiny rental lodge, feeling...
-
Simultaneous Footbass and Footdrums II — LessWrong
Published on August 11, 2024 11:50 PM GMT Getting ready for this Friday's Spark in the...
-
CultFrisbee — LessWrong
Published on August 11, 2024 9:36 PM GMT Tom and I have been pondering how to make...
-
[Interim progress] Decrypting hidden chain of thought — LessWrong
Published on August 11, 2024 7:43 PM GMT In the "Let's Think Dot by Dot" paper (https://arxiv.org/abs/2404.15758),...
-
Pleasure and suffering are not conceptual opposites — LessWrong
Published on August 11, 2024 6:32 PM GMT
-
[LDSL#4] Root cause analysis versus effect size estimation — LessWrong
Published on August 11, 2024 4:12 PM GMT Followup to: Information-orientation is in tension with magnitude-orientation. This...
-
Unnatural abstractions — LessWrong
Published on August 10, 2024 10:31 PM GMT "Good news, everyone, professor couldn't make it today! I...
-
[LDSL#3] Information-orientation is in tension with magnitude-orientation — LessWrong
Published on August 10, 2024 9:58 PM GMT Followup to: Latent variable models, network models, and linear...
-
Tall tales and long odds — LessWrong
Published on August 10, 2024 3:22 PM GMT IgorV: Volchar, with me?! Bro, srsly? LMAO KatyaL: Volchar's back?...
-
The Great Organism Theory of Evolution — LessWrong
Published on August 10, 2024 12:26 PM GMT From Becoming Animal (David Abram, 2010): Many months earlier, in...