www.lesswrong.com | Bookmarks (695)
-
Several Arguments Against the Mathematical Universe Hypothesis — LessWrong
Published on February 19, 2025 10:13 PM GMT. The legendary Scott Alexander recently posted an article promoting...
-
AI #104: American State Capacity on the Brink — LessWrong
Published on February 20, 2025 2:50 PM GMT. The Trump Administration is on the verge of firing...
-
US AI Safety Institute will be 'gutted,' Axios reports — LessWrong
Published on February 20, 2025 2:40 PM GMT.
-
Eliezer's Lost Alignment Articles / The Arbital Sequence — LessWrong
Published on February 20, 2025 12:48 AM GMT. Note: this is a static copy of this wiki...
-
Arbital has been imported to LessWrong — LessWrong
Published on February 20, 2025 12:47 AM GMT. Arbital was envisioned as a successor to Wikipedia. The...
-
The Dilemma’s Dilemma — LessWrong
Published on February 19, 2025 11:50 PM GMT. How We Frame Negotiations Matters: This is a follow-up...
-
Metaculus Q4 AI Benchmarking: Bots Are Closing The Gap — LessWrong
Published on February 19, 2025 10:42 PM GMT. In Q4 we ran the second tournament in the...
-
Literature Review of Text AutoEncoders — LessWrong
Published on February 19, 2025 9:54 PM GMT. This is a brief literature review of Text AutoEncoders,...
-
DeepSeek Made it Even Harder for US AI Companies to Ever Reach Profitability — LessWrong
Published on February 19, 2025 9:02 PM GMT.
-
Won't vs. Can't: Sandbagging-like Behavior from Claude Models — LessWrong
Published on February 19, 2025 8:47 PM GMT. In a recent Anthropic Alignment Science blog post, we...
-
Abstract Mathematical Concepts vs. Abstractions Over Real-World Systems — LessWrong
Published on February 18, 2025 6:04 PM GMT. Consider concepts such as "a vector", "a game-theoretic agent",...
-
How accurate was my "Altered Traits" book review? — LessWrong
Published on February 18, 2025 5:00 PM GMT. 4.4 years ago, I posted a review of Altered...
-
Medical Roundup #4 — LessWrong
Published on February 18, 2025 1:40 PM GMT. It seems like as other things drew our attention...
-
Dear AGI, — LessWrong
Published on February 18, 2025 10:48 AM GMT. Dear AGI, This is written specifically for artificial agents. There something...
-
There are a lot of upcoming retreats/conferences between March and July (2025) — LessWrong
Published on February 18, 2025 9:30 AM GMT. I have written this list up for ENAIS's Fieldbuilders...
-
Misaligned actions and what to do with them? - A proposed framework and open problems — LessWrong
Published on February 18, 2025 12:06 AM GMT. Abstract: There are numerous examples of AI models acting in...
-
Do models know when they are being evaluated? — LessWrong
Published on February 17, 2025 11:13 PM GMT. Interim research report from the first 4 weeks of...
-
AGI Safety & Alignment @ Google DeepMind is hiring — LessWrong
Published on February 17, 2025 9:11 PM GMT. The AGI Safety & Alignment Team (ASAT) at Google...
-
The Peeperi (unfinished) - By Katja Grace — LessWrong
Published on February 17, 2025 7:33 PM GMT. An AI vignette written by Katja in 2021, posted...
-
Progress links and short notes, 2025-02-17 — LessWrong
Published on February 17, 2025 7:18 PM GMT. Much of this content originated on social media. To follow...
-
Ascetic hedonism — LessWrong
Published on February 17, 2025 3:56 PM GMT. In being ascetic, you abandon the usual sources of...
-
AIS Berlin, events, opportunities and the flipped gameboard - Fieldbuilders Newsletter, February 2025 — LessWrong
Published on February 17, 2025 2:16 PM GMT. Crossposted on Substack and the EA forum. Gergő from ENAIS here...
-
Monthly Roundup #27: February 2025 — LessWrong
Published on February 17, 2025 2:10 PM GMT. I have been debating how to cover the non-AI...
-
What new x- or s-risk fieldbuilding organisations would you like to see? An EOI form. (FBB #3) — LessWrong
Published on February 17, 2025 12:39 PM GMT. Crossposted on The Field Building Blog and the EA...