In October 2020, I started a “Wisdom List.” It’s a doc where I collect pithy quotes, clever analogies, and insightful aphorisms from books, podcasts, and blogs.

Recently, I mentioned it on LessWrong and a bunch of people asked if I could share it. I guess calling it a “Wisdom List” (with caps) makes it sound pretty profound and important. I make no promises that it will live up to that.

I decided I’d just share some of the contents of the list in a blog post.

Disclaimers:

  • I may or may not update this going forward.
  • This should not be considered a normal post for this blog. It’s just a bunch of ideas from around the internet that I liked and jotted down.
  • I make no guarantees that everything in quotes is verbatim or that I cited/linked the correct sources.

With that said, here is my “Wisdom List” as of 10 November 2021, in reverse chronological order.


“The cure for boredom is curiosity. There is no cure for curiosity.” ― Dorothy Parker

“I generally think to myself, ‘I only know about the topics I can put some real time into understanding, and that’s not a lot of topics, and I’d better pick them carefully for my life goals.’” — https://www.cold-takes.com/gell-mann-earworms/

“I think this is how autoimmune diseases work, more or less. The immune system can be either more or less active. This is a tradeoff: the more active it is, the better it is at dealing with infection, but the more likely it is to accidentally attack the body and cause autoimmune symptoms. And then the immune system can either fail or not fail. Some people have very active immune systems, but the systems are very good at what they do and only attack genuine pathogens. Other people have very weak immune systems, but the immune system is so confused that the one thing it can muster up the energy to attack is your own body. Like the accidental nuclear war, the autoimmune disease is the combined result of a tradeoff plus a failure.

And I think this is how anxiety disorders work too. People can be more or less anxious. Being anxious all the time is no fun. But having no anxiety at all isn’t fun either; maybe you poke a lion with a stick to see what happens, and then your genes don’t make it into the next generation. Having higher or lower baseline anxiety is a tradeoff, just like having your country’s military on high alert vs. low alert. But if you also have a failure - your anxiety flares up at things that aren’t really that bad, and doesn’t go away - then you have an anxiety disorder.

Either end of the tradeoff is defensible. But either end of the tradeoff is vulnerable to failure. If you have high anxiety (but no failures), then you’re a very careful person and it probably works out well for you, unless you’re absurdly careful. If you have low anxiety (but no failures), then you’re a very brave person and that probably works out well too, unless you’re suicidally brave. If you have high anxiety (and many failures), then you’re probably naturally protected against failures of too-low anxiety, but especially vulnerable to failures of too-high anxiety, and you panic constantly over little things that don’t matter. If you have low anxiety (and many failures), then you’re probably naturally protected against failures of too-high anxiety, but especially vulnerable to failures of too-low anxiety, and you poke lions with sticks for no reason.

So problems with anxiety - constant disproportionate flares of anxiety that serve no legitimate goal - are a combination of tradeoff and failure.” — https://astralcodexten.substack.com/p/ontology-of-psychiatric-conditions-34e

“Alas, all too often we don’t even think about what we don’t know. There’s a beautiful little experiment about our incuriosity, conducted by the psychologists Leonid Rozenblit and Frank Keil. They gave their experimental subjects a simple task: to look through a list of everyday objects such as a flush lavatory, a zipper, and a bicycle, and to rate their understanding of each object on a scale of one to seven. After people had written down their ratings, the researchers would gently launch a devastating ambush. They asked the subjects to elaborate. Here’s a pen and paper, they would say; please write out your explanation of a flush lavatory in as much detail as you can. By all means include diagrams.

It turns out that this task wasn’t as easy as people had thought. People stumbled, struggling to explain the details of everyday mechanisms. They had assumed that those details would readily spring to mind, and they did not. And to their credit, most experimental subjects realized that they’d been lying to themselves. They had felt they understood zippers and lavatories, but when invited to elaborate, they realized they didn’t understand at all. When people were asked to reconsider their previous one-to-seven rating, they marked themselves down, acknowledging that their knowledge had been shallower than they’d realized. Rozenblit and Keil called this “the illusion of explanatory depth.”

The illusion of explanatory depth is a curiosity killer and a trap. If we think we already understand, why go deeper? Why ask questions? It is striking that it was so easy to get people to pull back from their earlier confidence: all it took was to get them to reflect on the gaps in their knowledge. And as Loewenstein argued, gaps in knowledge can fuel curiosity. […] in a world where so many people seem to hold extreme views with strident certainty, you can deflate somebody’s overconfidence and moderate their politics simply by asking them to explain the details.

Next time you’re in a politically heated argument, try asking your interlocutor not to justify herself, but simply to explain the policy in question. She wants to introduce a universal basic income, or a flat tax, or a points-based immigration system, or Medicare for all. OK, that’s interesting. So what exactly does she mean by that? She may learn something as she tries to explain. So may you.” — Excerpt From: Tim Harford. “The Data Detective”

“The Coordination Frontier is my term for “the cutting edge of coordination techniques, which are not obvious to most people.” I think it’s a useful concept for us to collectively have as we navigate complex new domains in the coming years.” — https://www.lesswrong.com/posts/LtsJLfnP4YwhGdaCf/the-coordination-frontier-sequence-intro

“There’s this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they’re not really sure if it’s true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole. Eventually, they give up and never actually publish the piece. This post is about how to avoid that, without sacrificing good epistemics. There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it. It’s been pointed out before that most high-schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong.” — https://www.lesswrong.com/posts/Psr9tnQFuEXiuqGcR/how-to-write-quickly-while-maintaining-epistemic-rigor

“I’ve read probably around a hundred books on writing (they’re addictive); and they all treated formality as entropy to be fought - a state of disorder into which writing slides. It is easier to use big words than small words, easier to be abstract than concrete, easier to use passive -ation words than their active counterparts. Perhaps formality first became associated with Authority, back in the dawn of time, because Authorities put in less effort and forced their audience to read anyway. Formality became associated with Wisdom by being hard to understand. Why suppose that scientific formality was ever about preventing propaganda?” — https://www.lesswrong.com/posts/njb9cyyzqLTHewups/informers-and-persuaders

“This puts the rationalists in a uniquely prosocial position. They’re a sort of distributed, mostly open-source monastic order, spending a lot of time contemplating the world and passing down important observations, but less time directly interacting with it. The influence of people who read rationalist blogs, but don’t self-identify as rationalists, is quite wide—the blogs are very widely followed in technology circles, and anecdotally have a large audience in the more quantitative branches of finance. Identifying as a rationalist is a losing move, because non-rationalists will think it’s weird (objectively true!) and rationalists will be relatively indifferent to a personal label” — https://diff.substack.com/p/why-did-one-internet-subculture-spot

“Synthesize new ideas constantly. Never read passively. Annotate, model, think, and synthesize while you read, even when you’re reading what you conceive to be introductory stuff. That way, you will always aim towards understanding things at a resolution fine enough for you to be creative.” — Ed Boyden in “How To Think”

“a lot of good ideas are actually bad ideas because, since they sound good, everybody’s already doing them. So I often try to think about, What sounds like a bad idea, but if you find the right plan of attack, it’s actually a really good idea? I spend a lot of time really trying to systematically tackle problems from different angles.” — Ed Boyden

At 35:32 in this episode (https://overcast.fm/+U9ZGtVS9g/35:32), Tim Harford likens Shannon’s source coding theorem (compression) to how Shannon lived a meaningful and memorable life. By varying things up and pursuing his interests, he made his days incompressible. By knowing when to win and then quit, he made his memories. Related to novelty search and being interesting. Similar to Feynman. — https://www.pushkin.fm/episode/fritterin-away-genius/

“One way to evaluate whether a lab is doing antifragile work is by asking whether their results would still be interesting if they’d failed to disprove the null hypothesis. In scientific research, where failure is to be expected, it’s useful to design antifragile experiments.” — https://www.lesswrong.com/posts/D4fB4zugW6YTJHKYS/antifragility-in-games-of-chance-research-and-debate

“doing the wrong thing rather than doing the thing wrong” — https://drmaciver.substack.com/p/how-to-do-everything

“Professor Quirrell had remarked over their lunch that Harry really needed to conceal his state of mind better than putting on a blank face when someone discussed a dangerous topic, and had explained about one-level deceptions, two-level deceptions, and so on. So either Severus was in fact modeling Harry as a one-level player, which made Severus himself two-level, and Harry’s three-level move had been successful; or Severus was a four-level player and wanted Harry to think the deception had been successful. Harry, smiling, had asked Professor Quirrell what level he played at, and Professor Quirrell, also smiling, had responded, One level higher than you.” — Harry Potter and the Methods of Rationality, Chapter 27

“It’s almost impossible to predict the future. But it’s also unnecessary, because most people are living in the past. All you have to do is see the present before everyone else does.” — Jason Crawford, https://www.lesswrong.com/posts/yyDrMYBfvYtKbmPmm/precognition

“I’ve come up with a set of rules that describe our reactions to technologies:

  1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
  2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
  3. Anything invented after you’re thirty-five is against the natural order of things.”

― Douglas Adams, The Salmon of Doubt

“That which can be destroyed by the truth should be.” — P.C. Hodgell

“Harry was wondering if he could even get a Bayesian calculation out of this. Of course, the point of a subjective Bayesian calculation wasn’t that, after you made up a bunch of numbers, multiplying them out would give you an exactly right answer. The real point was that the process of making up numbers would force you to tally all the relevant facts and weigh all the relative probabilities. Like realizing, as soon as you actually thought about the probability of the Dark Mark not-fading if You-Know-Who was dead, that the probability wasn’t low enough for the observation to count as strong evidence. One version of the process was to tally hypotheses and list out evidence, make up all the numbers, do the calculation, and then throw out the final answer and go with your brain’s gut feeling after you’d forced it to really weigh everything. The trouble was that the items of evidence weren’t conditionally independent, and there were multiple interacting background facts of interest…well, one thing at least was certain. If the calculation could be done at all, it was going to take a piece of paper and a pencil.” — Harry Potter in HPMOR by Eliezer Yudkowsky
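
A rough sketch of what that “make up a bunch of numbers, multiply them out, then throw out the final answer” process might look like in code. The hypotheses and every number below are ones I made up for illustration, and the sketch multiplies likelihoods as if the pieces of evidence were conditionally independent, which, as Harry notes, they usually aren’t:

```python
import math

# Toy "subjective Bayesian calculation": invent priors and likelihoods,
# multiply them out, normalize, then treat the result as a prompt for your
# gut feeling rather than as the final answer. Every number is made up.

# Made-up hypotheses with made-up prior probabilities.
priors = {
    "Dark Lord is dead": 0.6,
    "Dark Lord survives somehow": 0.4,
}

# Made-up likelihoods: P(evidence | hypothesis) for each piece of evidence.
# Multiplying these treats the evidence as conditionally independent,
# which the quote points out is usually false.
likelihoods = {
    "Dark Mark has not faded": {
        "Dark Lord is dead": 0.3,
        "Dark Lord survives somehow": 0.9,
    },
    "No attacks in ten years": {
        "Dark Lord is dead": 0.95,
        "Dark Lord survives somehow": 0.4,
    },
}

# Unnormalized posterior for each hypothesis: prior times product of likelihoods.
unnormalized = {
    h: prior * math.prod(e[h] for e in likelihoods.values())
    for h, prior in priors.items()
}

# Normalize so the posteriors sum to one, look at them, and then, per the
# quote, throw out the numbers and go with the gut feeling they forced.
total = sum(unnormalized.values())
for h, weight in unnormalized.items():
    print(f"P({h} | evidence) ~= {weight / total:.2f}")
```

The value is in being forced to tally the hypotheses and weigh the evidence, not in the final number.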

“I spent 10% of my time trying to answer the question ‘what are the important problems in my field?’ Ten percent! Friday afternoon, straight through. […] I recommend it! You find a regular time to stop and think: What are the important things? What is going on? What is the nature of what you’re doing? What is the characteristic of the job? What are the fundamentals behind it? So you’ll have some idea of where you’re headed so you can march in a uniform direction and get far instead of being a drunken sailor and getting nowhere.” — Hamming, https://youtu.be/a1zDuOPkMSw?&t=1750

“Now most great scientists have 10-20 problems in their minds […] which, when they get a clue how to attack, they drop all the things and rush at that problem and finish it off first. Something between 10 and 20 problems which they think are important but they don’t know what to do. […] Importance is not the consequences. […] The importance of a problem, to a great extent, depends upon ‘have you got a way of attacking the problem?’” — Hamming, https://youtu.be/a1zDuOPkMSw?t=1939

“A life saved is a statistic. A person hurt is an anecdote. Statistics are invisible. Anecdotes are salient.” — Taleb

“Be suspicious of ‘because’” — Taleb

“In science we do reductionism. All of our science courses tell us the way to solve a problem is to break it into smaller parts and analyse the parts. And that has been phenomenally successful for every branch of science. But the great frontier in science today is what happens when you try to go back — to put the parts together to understand the whole. That’s the field of complex systems. That’s why we don’t understand the immune system very well. We don’t understand consciousness very well, or the economy. [The whole is more than the sum of its parts is] the cliché that has entranced me for my whole research career. I want to understand how can you figure out the properties of the whole, given the properties of the parts.” — Steven Strogatz, https://youtu.be/t-_VPRCtiUg?t=1129

Taleb’s Generator: “We favour the visible, the embedded, the personal, the narrated, and the tangible. We scorn the abstract. Everything good (aesthetics, ethics) and wrong (Fooled by Randomness) with us seems to flow from it.”

Don’t just decide whether you’re “bullish” or “bearish.” That only considers the distribution of the odds; also consider the distribution of payoffs. If you’re 75% sure the market will go up by 1% next week, most would call you somewhat bullish. But if you also think there’s a 25% chance it’ll fall by 10%, then shorting is the far better side of the bet to take. — Taleb, Fooled by Randomness
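
To make the arithmetic in that paraphrase explicit (these are just the made-up numbers from above, not a claim about any real market):

```python
# Probability-weighted return for the scenario above:
# 75% chance the market rises 1%, 25% chance it falls 10%.
p_up, gain = 0.75, 0.01
p_down, loss = 0.25, -0.10

expected_return = p_up * gain + p_down * loss
print(f"{expected_return:.2%}")  # -1.75%
```

The odds favour the long side, but the expectation is negative, so the distribution of payoffs matters at least as much as the distribution of the odds.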

Taleb’s Silver Rule: “Do not do unto others as you would not have them do unto you.”

“The Post-Modernists improved Western philosophy by throwing out the map. They damaged Western society by throwing out the territory too.” — lsusr in this LessWrong post

“Defiance [is about] looking at the world and having the same mental reflexes as the defiant child. It’s about the reflexive impulse to say “screw this” and choose self-reliance over hopelessness in the face of problems that are crushingly large. It’s about a deep-seated inability to go gently into that good night. It’s about being able to look at the terrible social equilibria we’re all trapped in and get pissed off — not because any individual is evil, but because almost nobody is evil and everything is broken anyway.” — Nate Soares, Defiance

“You think there’s a government? […] What makes you think that the people in those two offices have ever coordinated?” — Eric Weinstein on Lex Fridman’s podcast, paraphrasing someone else.

“If the worst person in the world proves a mathematical theorem […] we can’t undo the theorem.” — Eric Weinstein on Lex Fridman’s podcast

“The smart people need to take the greedy people behind the woodshed and explain to them what science is.” — Eric Weinstein on Lex Fridman’s podcast

An adult is “somebody who weighs things, speaks carefully, thinks about the future beyond their own lifespan. Somebody who has a pretty good idea of how to get things done, isn’t wildly caught up in punitive actions, is more focussed on breaking new ground than playing rent-seeking games.” — Eric Weinstein on Lex Fridman’s podcast

“We should be doing diversity and inclusion out of greed rather than guilt.” — Eric Weinstein on Lex Fridman’s podcast (1:50:33)

“I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.” — Charlie Munger

“Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: ‘How did he do it? He must be a genius!’” ― Gian-Carlo Rota, Indiscrete Thoughts

“a 1971 paper from Robert Wilson showed almost no games have an even or infinite number of equilibria. However, some quirky games do not follow this odd rule of thumb, and our old friend weak dominance frequently claims responsibility.” — Excerpt From: William Spaniel. “Game Theory 101: The Complete Textbook”.
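
As a toy illustration of the kind of quirky game I take that to be describing (my own example, not one from the book): in the 2×2 game below, D and R are weakly dominated, and the game has exactly two Nash equilibria, which is an even number. (This particular game has no additional mixed equilibria, so the total count really is even.)

```python
from itertools import product

# Toy 2x2 game (my own example, not from the book). The row player picks
# U or D, the column player picks L or R. D and R are weakly dominated,
# yet the game has exactly two pure-strategy Nash equilibria.
payoffs = {
    ("U", "L"): (1, 1),
    ("U", "R"): (0, 0),
    ("D", "L"): (0, 0),
    ("D", "R"): (0, 0),
}
rows, cols = ["U", "D"], ["L", "R"]

def is_pure_nash(r, c):
    """A profile is a pure Nash equilibrium if neither player can strictly
    gain by unilaterally switching to another strategy."""
    row_payoff, col_payoff = payoffs[(r, c)]
    row_ok = all(payoffs[(alt, c)][0] <= row_payoff for alt in rows)
    col_ok = all(payoffs[(r, alt)][1] <= col_payoff for alt in cols)
    return row_ok and col_ok

equilibria = [profile for profile in product(rows, cols) if is_pure_nash(*profile)]
print(equilibria)  # [('U', 'L'), ('D', 'R')] -- two equilibria
```

The second equilibrium, (D, R), survives only because each player is indifferent about deviating, which is exactly the kind of indifference that weak dominance creates.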

“‘Different things work for different people.’ That sentence may give you a squicky feeling; I know it gives me one. Because this sentence is a tool wielded by Dark Side Epistemology to shield from criticism, used in a way closely akin to ‘Different things are true for different people’ (which is simply false). But until you grasp the laws that are near-universal generalizations, sometimes you end up messing around with surface tricks that work for one person and not another, without your understanding why, because you don’t know the general laws that would dictate what works for who. And the best you can do is remember that, and be willing to take “No” for an answer.” — http://lesswrong.com/lw/9v/beware_of_otheroptimizing/

“Nature uses only the longest threads to weave her patterns, so each small piece of her fabric reveals the organization of the entire tapestry.” ― Richard P. Feynman

“Be alert! The world needs more lerts!” — Yudkowsky’s dad

“Mindfulness and Stoicism are curiously complementary. In other words, learning about Stoicism while practising Mindfulness is like accompanying your cup of coffee with a bit of chocolate. Enjoy.” — William B. Irvine

“You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird… So let’s look at the bird and see what it’s doing — that’s what counts. I learned very early the difference between knowing the name of something and knowing something.” ― Richard P. Feynman, “What Do You Care What Other People Think?”: Further Adventures of a Curious Character

“As they’re telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball)—disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn’t true for my hairy green ball thing, so I say, ‘False!’” — Richard P. Feynman (Excerpt From: Scott Young. “Ultralearning”)

“I had a scheme, which I still use today when somebody is explaining something that I’m trying to understand: I keep making up examples. For instance, the mathematicians would come in with a terrific theorem, and they’re all excited. As they’re telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball) - disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn’t true for my hairy green ball thing, so I say, “False!”. If it’s true, they get all excited, and I let them go on for a while. Then I point out my counterexample. “Oh. We forgot to tell you that it’s Class 2 Hausdorff homomorphic.” “Well, then,” I say, “It’s trivial! It’s trivial!” I guessed right most of the time because although the mathematicians thought their topology theorems were counterintuitive, they weren’t really as difficult as they looked. You can get used to the funny properties of this [..] and do a pretty good job of guessing how it will turn out.” — Feynman (source)

“These are the good old days.” — Carly Simon, “Anticipation”