Notes on The Elephant in the Brain

For the past several months, I’ve been on a sociology and (evolutionary) biology reading binge. I recently finished my favorite book on the topic: Robin Hanson and Kevin Simler’s The Elephant in the Brain. I found its ideas so concise, useful, and explanatory, even for someone who’s done previous reading in the area, that I want to share a little background as well as my notes from the book.

You may have heard the common X isn’t about Y examples:

Food isn’t about Nutrition
Clothes aren’t about Comfort
Bedrooms aren’t about Sleep
Marriage isn’t about Romance
Talk isn’t about Info
Laughter isn’t about Jokes
Charity isn’t about Helping
Church isn’t about God
Art isn’t about Insight
Medicine isn’t about Health
Consulting isn’t about Advice
School isn’t about Learning
Research isn’t about Progress
Politics isn’t about Policy

The point here is directionally right, and this thinking serves as a useful heuristic for understanding systems. But it’s the mechanisms and fine-grained details that provide a useful mental model and worldview. Hopefully the highlights will intrigue you enough to read the book yourself.

There are some interesting corollaries and implications to the Hansonian worldview that I think are horrendously underrated and not really explored in the book. Namely:

  • Everything is inherently political or status-oriented because sexual selection, not natural selection, is what drove most of human evolution (this was Darwin’s real discovery, which wasn’t fully accepted and modeled until the 1980s).
  • Virtue signaling must be good, not bad. How else would we set norms on which (arbitrary) games we compete in? Some games are better than others from an external perspective. E.g. consumerist signaling is harmful to the environment.
  • We live in our own little dream worlds. But I don’t know if it matters from personal and philosophical standpoints.
  • Certain political philosophies (Marxism) assume a natural selection-based Darwinian view of human nature, which captures only a small fraction of how we actually evolved.

I know those bullets have a lot of big ideas packed into them — I could talk about each one of them for hours. Hit me up for more. I’d rather not attempt to explain everything in an overly-abbreviated post.

Here are my highlights. Update: here is another great list of highlights someone compiled!


Here is the thesis we’ll be exploring in this book: We, human beings, are a species that’s not only capable of acting on hidden motives—we’re designed to do it. Our brains are built to act in our self-interest while at the same time trying hard not to appear selfish in front of other people. And in order to throw them off the trail, our brains often keep “us,” our conscious minds, in the dark. The less we know of our own ugly motives, the easier it is to hide them from others. Self-deception is therefore strategic, a ploy our brains use to look good while behaving badly.

The point is, we act on hidden motives together, in public, just as often as we do by ourselves, in private. And when enough of our hidden motives harmonize, we end up constructing stable, long-lived institutions—like schools, hospitals, churches, and democracies—that are designed, at least partially, to accommodate such motives. This was Robin’s conclusion about medicine, and similar reasoning applies to many other areas of life.

The alpha male, for example, almost never tries to replace the gamma male from guard duty; instead the alpha directs all of his competitive energies toward the beta. If the goal were to help weaker members, the alpha should be more eager to take over from the gamma than from the beta.

Knowledge suppression is useful only when two conditions are met: (1) when others have partial visibility into your mind; and (2) when they’re judging you, and meting out rewards or punishments, based on what they “see” in your mind.

Now consider the human being. Like the redwood, our species has a distinctive feature: a huge brain. But if we think of Homo sapiens like the lone redwood in the open meadow, towering in intelligence over an otherwise brain-dead field, then we’re liable to be puzzled.

Now, our competitions for prestige often produce positive side effects such as art, science, and technological innovation. But the prestige-seeking itself is more nearly a zero-sum game, which helps explain why we sometimes feel pangs of envy at even a close friend’s success.

Coalitions are what makes politics so political.

The problem with competitive struggles, however, is that they’re enormously wasteful. The redwoods are so much taller than they need to be. If only they could coordinate not to all grow so tall—if they could institute a “height cap” at 100 feet (30 meters), say—the whole species would be better off. All the energy that they currently waste racing upward, they could instead invest in other pursuits, like making more pinecones in order to spread further, perhaps into new territory. Competition, in this case, holds the entire species back. Unfortunately, the redwoods aren’t capable of coordinating to enforce a height cap, and natural selection can’t help them either.

But our species is different. Unlike other natural processes, we can look ahead. And we’ve developed ways to avoid wasteful competition, by coordinating our actions using norms and norm enforcement—a topic we turn to in the next chapter.

Collective enforcement, then, is the essence of norms. This is what enables the egalitarian political order so characteristic of the forager lifestyle.

right, it was learning to use deadly weapons that was the inflection point in the trajectory of our species’ political behavior. Once our ancestors learned how to kill and punish each other collectively, nothing would be the same. Coalition size would balloon almost overnight. Politics would then become exponentially more complicated and require more intelligence to navigate,

Typically, these are crimes of intent. If you just happen to be friendly with someone else’s spouse, no big deal. But if you’re friendly with romantic or sexual intentions, that’s inappropriate. By targeting intentions rather than actions, norms can more precisely regulate the behavior patterns that cause problems within communities. (It would be ham-fisted and unduly cumbersome to ban friendliness, for example.) But regulating intentions also opens the door to various kinds of cheating, which we’ll explore in Chapter 4.

But there are acceptable and unacceptable ways to do this. It’s perfectly acceptable just to “be yourself,” for example. If you’re naturally impressive or likable, then it seems right and proper for others to like and respect you as well. What’s not acceptable is sycophancy: brown-nosing, bootlicking, groveling, toadying, and sucking up. Nor is it acceptable to “buy” high-status associates via cash, flattery, or sexual favors. These tactics are frowned on or otherwise considered illegitimate, in part because they ruin the association signal for everyone else.

When abstract logic puzzles are framed as cheating scenarios, for example, we’re a lot better at solving them. This is one of the more robust findings in evolutionary psychology, popularized by the wife-and-husband team Leda Cosmides and John Tooby.

Here’s another way to think about it. We typically treat discretion or secret-keeping as an activity that has only one important dimension: how widely a piece of information is known. But actually there are two dimensions to keeping a secret: how widely it’s known and how openly or commonly it’s known. And a secret can be widely known without being openly known—the closeted lesbian’s sexuality, for example, or the fact that the emperor is naked.

Scalping—the unauthorized reselling of tickets, typically at the entrance to concerts and sporting events—is illegal in roughly half of the states in the United States. That’s why you’ll often hear scalpers hawking their goods with the counterintuitive (yet perfectly legal) request to buy tickets. Like wrapping alcohol in a paper bag, this practice doesn’t fool the people who are charged with stopping it; the police and venue security personnel know exactly what’s going on. And yet scalpers find it overwhelmingly in their interests to keep up the charade. This is another illustration of how even modest acts of discretion can thwart attempts at enforcing norms and laws. Note that professional norm enforcers, such as police, teachers, and human resource managers, have a strong incentive to enforce norms: it’s their job. Even so, they’re often overworked or subject to lax oversight, and therefore tempted to cut corners. Sometimes the threat of mere paperwork can be enough to keep police from enforcing minor infractions.

In 1527, King Henry VIII’s marriage to Queen Catherine of Aragon seemed unlikely to give him the son he desperately needed, and at 38 years old, he was running out of options. Everyone at court knew that Henry wanted a younger woman—Anne Boleyn—as his wife. Unfortunately, his marriage to Catherine had been blessed by the previous pope, and the current pope was in no mood to grant an annulment. What the king needed was a pretext, a false but plausible justification to distract from his real reason. So, nearly 20 years into his marriage to Catherine, the king suddenly “discovered” that she hadn’t been a virgin on their wedding night, and that therefore their marriage was illegitimate. As pretexts go, this was pretty ham-handed. But kings don’t need their excuses to be particularly subtle or airtight; their power is enough of an incentive for most people to go along. In Henry’s case, his pretext was enough to let him break from Roman Catholicism (thereby launching the English Reformation) and secure his annulment from the head of the new Anglican Church. Pretexts are a broad and useful tool for getting away with norm violations. They provide a ready explanation for your innocence, which makes it harder for others to accuse and prosecute you. And as we’ve seen, a pretext doesn’t need to fool everyone—it simply needs to be plausible enough to make people worry that other people might believe it.

Another domain is personal health. You might suppose, given how important health is to our happiness (not to mention our longevity), it would be a domain to which we’d bring our cognitive A-game. Unfortunately, study after study shows that we often distort or ignore critical information about our own health in order to seem healthier than we really are. One study, for example, gave patients a cholesterol test, then followed up to see what they remembered months later. Patients with the worst test results—who were judged the most at-risk of cholesterol-related health problems—were most likely to misremember their test results, and they remembered their results as better (i.e., healthier) than they actually were.

In recent years, psychologists—especially those who focus on evolutionary reasoning—have developed a more satisfying explanation for why we deceive ourselves. Where the Old School saw self-deception as primarily inward-facing, defensive, and (like the general editing the map) largely self-defeating, the New School sees it as primarily outward-facing, manipulative, and ultimately self-serving. Two recent New School books have been Trivers’ The Folly of Fools (2011) and Robert Kurzban’s Why Everyone (Else) Is a Hypocrite (2010). But the roots of the New School go back to Thomas Schelling, a Nobel Prize–winning economist best known for his work on the game theory of cooperation and conflict. In his 1960 book The Strategy of Conflict, Schelling studied what he called mixed-motive games. These are scenarios involving two or more players whose interests overlap but also partially diverge.

  • Ignoring information, also known as strategic ignorance. If you’re kidnapped, for example, you might prefer not to see your kidnapper’s face or learn his name. Why? Because if he knows you can identify him later (to the police), he’ll be less likely to let you go. In some cases, knowledge can be a serious liability.
  • Purposely believing something that’s false. If you’re a general who firmly believes your army can win, even though the odds are against it, you might nevertheless intimidate your opponent into backing down.

In other words, mixed-motive games contain the kind of incentives that reward self-deception. There’s a tension in all of this. In simple applications of decision theory, it’s better to have more options and more knowledge. Yet Schelling has argued that, in a variety of scenarios, limiting or sabotaging yourself is the winning move. What gives?

What’s the benefit of self-deception over a simple, deliberate lie? There are many ways to answer this question, but they mostly boil down to the fact that lying is hard to pull off. For one thing, it’s cognitively demanding.

Beyond the cognitive demands, lying is also difficult because we have to overcome our fear of getting caught.

When asked to raise both hands, one man raised his right hand high into the air and said, when he detected my gaze locked onto his motionless left hand, “Um, as you can see, I’m steadying myself with my left hand in order to raise my right.” Apart from their bizarre denials, these patients are otherwise mentally healthy and intelligent human beings. But no amount of cross-examination can persuade them of what’s plainly true—that their left arms are paralyzed. They will confabulate and rationalize and forge counterfeit reasons until they’re blue in the face.

Meanwhile, the rest of us—healthy, whole-brained people—are confronted every day with questions that ask us to explain our behavior. Why did you storm out of the meeting? Why did you break up with your boyfriend? Why haven’t you done the dishes? Why did you vote for Barack Obama? Why are you a Christian? Each of these questions demands a reason, and in most cases we dutifully oblige. But how many of our explanations are legitimate, and how many are counterfeit? Just how pervasive is our tendency to rationalize?

Kings and popes, for example, would often “invite” their subjects to line up for public kiss-the-ring ceremonies, putting everyone’s loyalty and submission on conspicuous display and thereby creating common knowledge of the leader’s dominance.

the psychology of humor—a topic fruitfully explored in the book Inside Jokes,

And many animals, in addition to using specific gestures, will also move slowly or engage in exaggerated or unnecessary movement, as if to convey playful intent by conspicuously wasted effort that no animal would undertake if it were in serious danger.

In another interview he says, “I don’t think it’s fair to get offended by comedians.” And yet what fans say they love about Burr is that he’s honest—“refreshingly,” “brutally,” “devastatingly” honest. So which is it? Is he just joking or telling the truth? The beauty of laughter is that it gets to be both. The safe harbor of plausible deniability is what allows Burr and other comedians to get away with being honest about taboo topics. As Oscar Wilde said, “If you want to tell people the truth, make them laugh; otherwise they’ll kill you.”

Conversation, therefore, looks on the surface like an exercise in sharing information, but subtextually, it’s a way for speakers to show off their wit, perception, status, and intelligence, and (at the same time) for listeners to find speakers they want to team up with. These are two of our biggest hidden motives in conversation.

But why do speakers need to be relevant in conversation? If speakers deliver high-quality information, why should listeners care whether the information is related to the current topic? A plausible answer is that it’s simply too easy to rattle off memorized trivia. You can recite random facts from the encyclopedia until you’re blue in the face, but that does little to advertise your generic facility with information. Similarly, when you meet someone for the first time, you’re more eager to sniff each other out for this generic skill, rather than to exchange the most important information each of you has gathered to this point in your lives. In other words, listeners generally prefer speakers who can impress them wherever a conversation happens to lead, rather than speakers who steer conversations to specific topics where they already know what to say.

Now, it did make some sense for our ancestors to track news as a way to get practical information, such as we do today for movies, stocks, and the weather. After all, they couldn’t just go easily search for such things on Google like we can. But notice that our access to Google hasn’t made much of a dent in our hunger for news; if anything we read more news now that we have social media feeds, even though we can find a practical use for only a tiny fraction of the news we consume.

But when researchers Jesse Prinz and Angelika Seidel asked subjects to consider a hypothetical scenario in which the Mona Lisa burned to a crisp, 80 percent of them said they’d prefer to see the ashes of the original rather than an indistinguishable replica. This should give us pause.

Consider the lobster—as David Foster Wallace invites us to do in an essay of the same name. “Up until sometime in the 1800s,” writes Wallace, lobster was literally low-class food, eaten only by the poor and institutionalized. Even in the harsh penal environment of early America, some colonies had laws against feeding lobsters to inmates more than once a week because it was thought to be cruel and unusual, like making people eat rats. One reason for their low status was how plentiful lobsters were in old New England. “Unbelievable abundance” is how one source describes the situation. Today, of course, lobster is far less plentiful and much more expensive, and now it’s considered a delicacy, “only a step or two down from caviar.”

A similar aesthetic shift occurred with skin color in Europe. When most people worked outdoors, suntanned skin was disdained as the mark of a low-status laborer. Light skin, in contrast, was prized as a mark of wealth; only the rich could afford to protect their skin by remaining indoors or else carrying parasols. Later, when jobs migrated to factories and offices, lighter skin became common and vulgar, and only the wealthy could afford to lay around soaking in the sun.

asked participants how much they would agree to pay for nets that prevent migratory bird deaths. Some participants were told that the nets would save 2,000 birds annually, others were told 20,000 birds, and a final group was told 200,000 birds. But despite the 10- and 100-fold differences in projected impact, people in all three groups were willing to contribute the same amount. This effect, known as scope neglect or scope insensitivity, has been demonstrated for many other problems, including cleaning polluted lakes, protecting wilderness areas, decreasing road injuries, and even preventing deaths. People are willing to help, but the amount they’re willing to help doesn’t scale in proportion to how much impact their contributions will make.

Patrick West calls it “conspicuous compassion.” The idea is that we’re motivated to appear generous, not simply to be generous, because we get social rewards only for what others notice.

Consequently, even the most celebrated studies are often statistical flukes. For example, one study looked at the 49 most-cited articles published in the three most prestigious medical journals. Of the 34 studies that were later tested by other researchers, only 20 were confirmed.

In fact, patients show surprisingly little interest in private information on medical quality. For example, patients who would soon undergo a dangerous surgery (with a few percent chance of death) were offered private information on the (risk-adjusted) rates at which patients died from that surgery with individual surgeons and hospitals in their area. These rates were large and varied by a factor of three. However, only 8 percent of these patients were willing to spend even $50 to learn these death rates. Similarly, when the government published risk-adjusted hospital death rates between 1986 and 1992, hospitals with twice the risk-adjusted death rates saw their admissions fall by only 0.8 percent. In contrast, a single high-profile news story about an untoward death at a hospital resulted in a 9 percent drop in patient admissions at that hospital.

And yet medicine deserves its share of public scrutiny—as much, if not more so, than any other area of life. One of the simplest reasons is the prevalence and high cost of medical errors, which are estimated to cause between 44,000 and 98,000 deaths in the United States every year. As Alex Tabarrok puts it, “More people die from medical mistakes each year than from highway accidents, breast cancer, or AIDS and yet physicians still resist and the public does not demand even simple reforms.”

found that death rates plummet when doctors are required to consistently follow a simple five-step checklist.

  • Requiring autopsies. Around 40 percent of autopsies reveal the original cause-of-death diagnosis to have been incorrect. But autopsy rates are way down, from a high of 50 percent in the 1950s to a current rate of about 5 percent.
  • Getting doctors to wash their hands consistently. Compliance for best handwashing practices hovers around 40 percent.

Some of these problems are downright scandalous, and yet, as Tabarrok points out, they’re largely ignored by the general public. We’d rather not look our medical gift horse in the mouth. Another way we avoid questioning medical quality is by not getting second opinions. Doctors frequently make mistakes, as we’ve seen, and second opinions are often useful—for example, for diagnosing cancer, determining cancer treatment plans, and avoiding unnecessary surgery. And yet we rarely seek them out.

If we’re using medicine as a signal of support, however, then we’ll provide and consume more of it during a patient’s times of crisis, when they are more grateful for support. And this is exactly what we find. The public is eager for medical interventions that help people when they’re sick, but far less eager for routine lifestyle interventions. Everyone wants to be the hero offering an emergency cure, but few people want to be the nag telling us to change our diets, sleep and exercise more, and fix the air quality in our big cities—even though these nagging interventions promise much larger (and more cost-effective) health improvements. One study, for example, tracked 3,600 adults over seven and a half years. Investigators reported that people who reside in rural areas lived an average of 6 years longer than city dwellers, nonsmokers lived 3 years longer than smokers, and those who exercised a lot lived 15 years longer than those who exercised only a little. In contrast, most studies that look similarly at how much medicine people consume fail to find any significant effects. Yet it is medicine, and not these other effects, that gets the lion’s share of public attention regarding health.

Imagine a preacher addressing a congregation about the virtue of compassion. What’s the value of attending such a sermon? It’s not just that you’re getting personal advice, as an individual, about how to behave (perhaps to raise your chance of getting into Heaven). If that were the main point of a sermon, you could just as well listen from home, for example, on a podcast. The real benefit, instead, comes from listening together with the entire congregation. Not only are you learning that compassion is a good Christian virtue, but everyone else is learning it too—and you know that they’re learning it, and they know that you’re learning it, and so forth. (And if anyone happens to miss this particular sermon, don’t worry: the message will be repeated again and again in future sermons.) In other words, sermons generate common knowledge of the community’s norms. And everyone who attends the sermon is tacitly agreeing to be held to those standards in their future behavior. If an individual congregant later fails to show compassion, ignorance won’t be an excuse, and everyone else will hold that person accountable. This mutual accountability is what keeps religious communities so cohesive and cooperative.

In the end, our motives were less important than what we managed to achieve by them. We may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.

When Insiders Were Outsiders

When you become successful, there are two stories to tell: the triumph of that success, and the struggle of getting there. There are plenty of resources on the former — winning tactical advice, motivational think-pieces, etc. are easy to find. But I think people benefit most from the latter category.

Here’s a list of great hustle stories. Use these to remember that everyone had to start somewhere, and we’re all making it up as we go along!

If you can think of similar high-quality pieces, let me know! Contact methods on my homepage.

Worldview as a Competitive Advantage

Recently I’ve been thinking about what makes a strong competitive advantage in a world where individuals and firms are increasingly leveraged and specialized. The top several dozen venture investors basically monopolize the industry. (About 20 firms — 3% of the venture universe — earn 95% of the returns.)

The common value-add or differentiating dimensions are a great network, positive signaling effects, insightful operating experience, or the willingness to invest at the highest price. But it’s not clear that the majority of investors move the needle.

These are all useful from the perspective of founders. The tough part as an investor is that there are few unfilled niches. You’ll run into tough competition on all of the above features. So what’s left? From your perspective, does value-add even matter that much when VCs still pass on many amazing deals?

I think that worldview is the most underrated and robust advantage you can have over other investors.

An interesting thread in politics and psychology these past few years has explored how people change their minds. For example, ideas from Jonathan Haidt’s The Righteous Mind and Arnold Kling’s The Three Languages of Politics have made their way into the mainstream. The core insight is that people rarely change their minds. Opinions on different issues aren’t individual beliefs based on separate analyses — they’re products of a single underlying worldview. If an idea doesn’t fit your worldview, you’re unlikely to accept it on its merits, because accepting it would break up an otherwise consistent and unified understanding of how things work.

This makes VC hard because the best deals are, almost by definition, outliers that don’t easily fit into any worldview. So having a worldview that’s more accommodating to a wide variety of ideas should let investors make the right picks, which is historically very hard (see the Airbnb and Robinhood examples linked above).

Perhaps we should focus less on building a network or thought leadership or industry experience, and more on learning worldview-expanding (or, ideally, -shattering) ideas. Less of entrepreneur Eric Ries or How To Win Friends and Influence People, more of polymath Robin Hanson or The Sovereign Individual.

That’s the core point I want to make. As a quick and currently-relevant example, different investors seem to view Tesla through different lenses. Many tech investors I follow hold a big-picture worldview more in tune with what works for VCs: if the macro-trend of electric vehicles is there and the team is right, it’s worth betting on. Other investors, often public equities and bonds people, take a more grounded approach: Tesla is likely structurally unprofitable compared to other manufacturers that are quietly waiting for the right timing, and its financials are comparable to GM’s right before it went bankrupt in 2009. One portfolio manager I know called the Tesla bond issue one of the worst he’s seen in recent memory. Luckily for Tesla, more optimistic investors controlled enough capital to purchase the issue anyway.

Of course these are total oversimplifications of each stance, but the point I’m making remains. I think investors make decisions on a case-by-case basis, but with significant bias from worldview. Or maybe worldview-veto is a more accurate descriptor of what’s going on.

There are a number of other interesting threads here, such as the problem of demonstrating that you have the right subjective worldview in order to actually have an edge. Networks, operating experience, etc., are inherently more provable. (This relates to the Thiel contrarian philosophy: if others buy in to your perspective, it’s not an edge.)

Another question is whether all this matters. You can’t go to an LP when fundraising and just say “I make the best decisions because I have a better holistic model for how the world works.” The other value-add metrics also act as a positive signal. I named this blog Heuristically Speaking for a reason!

Thanks to Nathan Ju for reading a draft. Subscribe to not miss any future posts!

Degrees of Freedom

I often talk to companies that come up with complex plans involving a technical challenge, going to market, then scaling and fending off competitors. They want to do X to leverage Y, then finally become Z when the time is just right. I suspect this stems from (incorrectly) feeling like the problem they’re solving isn’t innovative enough. Perhaps we glorify different strengths of successful businesses (the product obsession of Apple, the scale of FB, the community of Airbnb, etc) and forget that no company can possibly combine all of those virtues into one perfect startup. Or maybe investors build thought leadership by constantly talking about bleeding-edge buzzwords and founders accidentally make that their default set-point.

Think back to the best startups of the past decade. They all followed fairly simple plans which essentially bet on one core thesis: taxis that you can hail with an app. A site where people can rent out a room in their house. Let teens send pictures that disappear. None of these companies had a multi-phase plan from the start. They didn’t solve particularly novel engineering challenges, at least until they were already super successful and hitting scale issues. There was only one singular problem to focus all attention on. And it often sounded like a somewhat lame problem, even though it ultimately wasn’t.

When evaluating startups, I often use a mental model I call degrees-of-freedom. Count how many things need to go right for the company to survive. Each additional degree of freedom makes execution exponentially harder and more complex. (I use that word exponential in the literal mathematical sense, not in the metaphorical sense!)
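To make the literal-exponential claim concrete, here’s a minimal sketch (my own toy model, not anything from the startups above): treat each degree of freedom as an independent bet that must go right with probability p, so the chance everything works is p**n.

```python
# Toy model: each degree of freedom is an independent bet that must go
# right. Survival probability then decays exponentially with the count.
# The per-bet success rate p = 0.7 is an assumption for illustration.

def survival_probability(p: float, n: int) -> float:
    """Chance a company survives n independent make-or-break bets,
    each succeeding with probability p."""
    return p ** n

p = 0.7  # assumed odds that any single bet works out
for n in range(1, 6):
    print(f"{n} degree(s) of freedom -> {survival_probability(p, n):.1%} survival")
# 1 -> 70.0%, 2 -> 49.0%, 3 -> 34.3%, 4 -> 24.0%, 5 -> 16.8%
```

Even with generous 70% odds per bet, stacking three or four bets drops survival below a coin flip, which is the sense in which each added degree of freedom hurts exponentially.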

If you’re dealing with regulatory uncertainty, and building on a new decentralized protocol, and introducing a new business model, sure, you’re doing something innovative. But too many things have to go right. Too many things are simply out of your control.

My advice to these ambitious founders is: don’t feel like you need to do something impressive on every front. Pick one idea to bet the company on, and don’t lose sight of it. Use simple foolproof execution strategies for everything else. Reduce the number of degrees of freedom.

Let’s all get more excited about “simple” startups!

Buying IBM

IBM is one of my favorite sales and marketing case studies. As the saying goes: nobody ever got fired for buying IBM. I had always appreciated the sales hustle, but thinking more about the “Buying IBM Effect,” I began seeing the same dynamic elsewhere:

Huge fund sizes, for example, can hurt VC returns. Deploying large amounts of capital is difficult if you’re only seeing a limited number of worthy startups: you have to choose between investing more capital per company and investing in a larger number of startups. Many investors choose the latter. This explains some quirks of highly-saturated funding ecosystems like Stanford’s. I’ve seen companies there raise ~$1mm with nothing more than an idea and a decent-but-nothing-special team. That’s not to say this strategy is impossible to pull off, but in every case I’ve seen, it’d be crazy to invest on team alone. Nobody ever lost an LP because they invested in Stanford startups, so excess capital is deployed there.

As a manager, it can be too risky to hire a brilliant-yet-under-credentialed or not-well-rounded candidate. The upside is capped: you make the hire, and your boss may never realize you made a tough yet successful decision. The downside is that the candidate, while exceptional in one dimension, can’t cut it in some other respect and drops the ball. Then you’re on the hook for bringing them onto the team. Unless you’re fortunate enough to work in a managerial environment that understands the risks and rewards of such a hire, it’s not worth taking the risk. Nobody ever got fired for hiring a mediocre candidate with the right background on paper. But exceptional is often what the organization needs.

Wealth managers want to provide clients with reasonable returns. Although active management and picking individual assets let you attempt to beat the market, you’ll lose more clients by underperforming than you’ll gain by outperforming. So wealth managers usually default to index funds, which stick to market-average returns. (In my opinion this is the best strategy anyway, but the point I’m making is that wealth managers are forced into the passive/conservative strategy.) Nobody ever withdrew from a retirement fund because their returns were just fine.

The Buying IBM Effect is just a symptom: asymmetry between the upside and downside is the real problem.

In environments you can control, install a system that judges decisions on the swing instead of the hit. (My deepest apologies for the platitude.)

In environments you can’t control, design your product or pitch to cap the downside. Risk aversion is often an ulterior motive that you won’t uncover directly through conversation. It’s important to anticipate the structural biases of anybody you’re interfacing with and account for their internal decision making considerations.


Go-To Essays and Books

This is a list of essays and authors I re-read often — the type of writing where you get something new out of it each time. Inspired by Slava Akhmechet’s post.

These are the books that I re-read and recommend the most. Some are pragmatic, some are simply pleasure reading.

Tell me your go-to essays and books! Contact info on homepage.

Yes, I genuinely do like The Old Man and the Sea. Everyone yells at me for that one. It’s the one “school made me read this” book I didn’t hate. I usually re-read it once a year or so.


Getting Positive Feedback

I’ve noticed that founders often optimize for finding the most positive feedback possible. That’s directionally a good move, and it aligns well with Paul Graham’s idea that it’s “better to make a few users love you than a lot ambivalent.” The benefits of positive feedback in conversations with potential customers are clear: you validate your product, build up a network of fans, and get to iterate toward the specific dimension of your product that drives the most value.

But once in a while I run into early-stage founders who report nothing but positive feedback from all parties, or who have a hard time naming objections and complaints. You’d think that’s great news, but it always makes me a little uncomfortable as an investor. It usually indicates that the founder is exaggerating demand or not executing their validation correctly. If literally everything you hear is positive, users probably aren’t being totally honest in their feedback, or you’re not pushing hard enough to establish an initial cult following. Here are two things to keep in mind as you talk to users:

Remove bias

Users won’t want to hurt your feelings. Make sure you present your startup as someone else’s idea, or as a product already on the market which you happen to be researching. While users will have no problem criticizing a fintech product, they’ll be too nice to healthcare products or other “socially positive” businesses. Take every precaution to make sure the feedback you’re getting will match up with users’ revealed preferences (true needs) when the product comes to market.

Take what you can get

If people say they love your product, then make your ask larger and larger until you’re out of slack. In the early days of Stripe, the Collison brothers would ask people whether or not they would use Stripe. If the user said they would, the Collison brothers didn’t stop there. They’d respond with “great, give me your laptop and I’ll get you up and running.”

If you’re simply interviewing people, ask them to sign up for the beta waitlist. If that works, ask them to refer their friends to the waitlist as well. If that still works, ask them to pre-pay in exchange for a discount when the product launches. This will not only tell you how badly people actually want your product, it’ll build up your customer base! It’s also super useful evidence for investors.


Concise Ideas are One-Dimensional

Distill an idea to the most concise and clear form you can to make it memorable. 280 characters if possible.

Luckily, some of the tweets, headlines, and soundbites we come across carry wisdom, or at least nudge your headspace toward a new idea. But this makes it too easy to forget that most things we talk about fall on a spectrum or have extra dimensions, especially when maximizing viewership is so valuable for content creators.

Some of these are pretty straightforward. The value of “deep work” has been ingrained into our heads by the latest trends in business writing. On the other hand, several people I know online and in person have said that the most effective people they know are all super responsive over email, text, and the phone. So clearly you can be successful in both modes. What gives?

Taking a moment to think about it, you’ll realize that you don’t have to choose one. Block out an afternoon to dive into your work, then be obsessed with the outside world for the other hours in the day. Both of these techniques are complementary parts in a toolkit, not separate virtues you should aim for.

But you understand that already. The real argument I’m making is that it’s critically important to constantly reconsider the implications of our proverbs. Here’s an example (of many) showing why this can matter so much:

Humbleness and modesty usually come up when someone is being complimented. They’re great traits to have, and everyone clearly benefits when we all treat each other as equally capable and deserving peers. On the dark side of modesty, however, is imposter syndrome. (Which, by the way, disproportionately affects those from underrepresented groups!) I think that by asserting the ultimate value of modesty in our bite-size thoughts, we impose a big mental and emotional barrier for people who shouldn’t act that way all of the time.

It can be incredibly useful to feel like you’re bad at something and have to improve ASAP. It motivates you to dive into nitty gritty details and be a sponge at the cost of self-esteem. Likewise, a sense of overconfidence can help you overcome risk-aversion, lead people, and sell, but at the cost of having an open mind.

Most worrying to me is that people from atypical backgrounds have a stronger need to recognize and act on that duality.

As someone who’s never had trouble fitting right into the tech startup world, I have the luxury of not having to project any sort of confidence; I can just default to whatever mood fits the situation best (usually a feeling of being humbled by the many brilliant people out there!). But anyone who’s part of an out-group faces a difficult tradeoff: using brazen confidence as a tool to validate themselves with the in-group can leave them feeling guilty over their own immodesty.

Marketing “humbleness” or “confidence” as objectively desirable qualities misses the point. You can have moods where nobody can stop you, and moods where you’re still pulling yourself up by your bootstraps. They’re both horrifically useful tools at your disposal and you don’t need to stick with one or the other. Everything has a flip side that can be useful, as long as you can keep the balance.

In summary: most things are spectra, not poles, and most traits are complementary dimensions, not mutually exclusive choices.

Thanks to Niraj for feedback. Subscribe to not miss any future posts!

Psychology in Product and Sales

I’m experimenting with a new blog post format. Oftentimes I’ll read a multi-paragraph essay and feel frustrated because it could have been condensed into a series of bullet points. So that’s what I’ve made here. Let me know what you think; hopefully the concepts will be intuitive, and this bullet-style list will enumerate relevant ideas and examples. This is a list of principles of psychology in product and sales. (I’ve been reading Robert Cialdini and Daniel Kahneman recently!)


  • Signaling
    • Doubling the price on jewelry signals quality, so people will buy more of the same good if it’s priced higher. This is the opposite of what you’d expect.
  • Reciprocation 
    • “Take this thing, no-strings-attached” creates a feeling of debt and favor.
    • Hare Krishnas greatly increased their fundraising returns by handing out roses for free at airports.
    • Putting a sticky note in a mailed survey request will greatly increase response volume/quality. Response is even better if the note is handwritten.
  • Concession
    • Related to anchoring, people often feel bad or indebted for not being able to fulfill a request.
    • Salespeople start with a big ask for making a purchase but plan on it failing, then say something like “okay, would you at least be able to give me referrals to three friends who would find this product useful?”
  • Commitment 
    • Having people say they’re in support of something ahead of time (even days or longer) makes a future ask much more successful.
    • Canonical example: political campaigns asking people days before an election “will you vote?” People tend to overcommit and say yes. Then when election time comes, they’ll actually vote to stay true to their word.
    • Once someone goes to the bathroom in a new house or says they’ll buy a car, they’ve already made a decision in their head.
      • Salespeople know this, and will look for signs of mental commitment before jacking up prices.
  • Group initiation 
    • Soldiers go through bootcamp, frat boys haze, and Catholics baptize. Initiation builds critical bonds, and the more intensive/costly the initiation is, the stronger the effect.
    • Products like Stack Exchange make you take steps (earn some amount of reputation, in this case) before becoming a part of the community and having full access to the product.
  • Publicity effect
    • If somebody makes a statement publicly, they’ll think the statement is true even if they’d otherwise rationally find it to be false. A sales tactic: get someone to say out loud that they have a need for the product.
    • Corollary: be reluctant to publicly share works in progress, since doing so can create biases for yourself.
    • If you can get a user to somehow indicate that they use your product (to other people, online, or by having some sort of public profile), they’re much less likely to churn.
  • Internal vs external beliefs
    • Canonical example: experiment where kids were left in a room with a bunch of lame toys, and one cool robot toy. They are told not to play with the robot, then the experimenter leaves the room.
      • Kids played with the robot if they were told it was wrong and they’d be punished (even though they couldn’t be caught since they were alone in the room)
      • Kids didn’t play with the robot if they were simply told it was wrong
      • People can blame bad external rules for their behavior, but if there’s no punishment, they’d be doing something only a Bad Person™ would do.
    • This backs the socially positive slant that companies like Patagonia or Lyft build their value props on.
  • Inner circles
    • This is related to the group initiation topic. Being in an Inner Circle makes the product much more sticky and drives engagement from users within it.
      • This is particularly important in products where a small group of power users greatly influence the direction and quality of the product.
    • Examples: Reddit’s gilded club, Quora’s Top Writers
    • Inner Circles can come in many layers.
      • Some startups have tried to create multi-functional social platforms (meeting new people, messaging friends, etc.)
      • But people use these layers to clearly define the relationship: coworkers use LinkedIn, friends/acquaintances use FB Messenger or GroupMe, and close friends use phone numbers/iMessage. This removes ambiguity and says “we’re friends because we use this medium reserved for friends of only this type”
  • Risk aversion
    • People hate losses more than they like gains.
    • “This offer is only open for a limited time!”
    • “The special edition only has 100 copies”
    • “Thanks for joining, here are 50 in-game coins to get started!” (you’d give up this arbitrary freebie if you stopped playing the game)
  • Moral-threat vs consequence-threat
    • People don’t mind taking risks if the expected cost of the consequence is low.
    • But not imposing any punishment shifts the act to a social-signaling/moral burden (rather than a financial one), which has much higher intangible costs and an unlimited downside.
    • Canonical example: a daycare had lots of late child pickups, so they started charging $5 each time it happened. Parents were late more often, since they now had an easy out for their lateness: simply paying the five bucks.
  • Having an excuse 
    • 6-8% of Gerber baby food is consumed by people who aren’t babies. Gerber actually tried marketing a product specifically for seniors but it failed. People didn’t want to admit they needed that sort of food, so they stuck with the baby product (plausible deniability — lots of seniors have grandkids!)
    • Most hookup apps market themselves as dating apps. While many users are actually focused on dating, nobody wants to tell others they’re only looking for hookups.
  • Anchoring
    • This effect is pretty well known.
    • I was chatting with a guy in SF who was asking for donations for a hip-hop related community org. He challenged me to donate $100, which was crazy, and I ended up donating $10, which in hindsight was twice what I’d otherwise have chosen to donate.
  • Self consistency
    • People have a need to be self-consistent in their beliefs and actions.
    • The question “why do you want this job?” is also a sales tactic. The candidate will be forced to articulate good reasons out of politeness – and the desire for internal consistency will make them believe these reasons. (source)
    • Unethical example: if you conduct a fake survey about lifestyle, people will hype up and inflate their lifestyle to create a compelling narrative about themself. If you follow that with an expensive ask that would validate that lifestyle, they’ll often go along to not sound self-contradictory.
      • Wouldn’t make sense to say “yeah, I travel all the time, but this packaged travel money-saving deal isn’t something I want.”
  • Social proof and social pressure
    • Tip jars are “seeded” to give the appearance that many other people tip.
    • Some products with FB login will show you that your friends use it too.
    • Google Glass became associated with “glasshole” nerds, but Snap marketed Spectacles with attractive and well-rounded models from the start.
    • “Endless chain”: you make a sale, then go to the customer’s friend and say “your friend John recommended this for you.” Now they’re turning down a friend instead of turning down the salesman.
  • Liking
    • Being attractive, personal and cultural similarity, giving compliments, contact & co-operation, conditioning, and association with positive ideas all make people much more open to trying a product or buying something.
    • GitHub’s Octocat is a friendly and fun mascot which users like and build an attachment to
  • Authority
    • This one is obvious. Companies plug high-profile clients whenever possible.
    • Twitter has the blue checkmark to make users feel like they’re getting higher quality information from those people through the platform.
  • Scarcity
    • Robinhood’s famous growth hack where you needed to refer people to move up a spot in the waiting list. Access to the early product was scarce.
    • New Coke vs. old Coke
      • In the ’80s, Coca-Cola tried changing the Coke recipe because the new formula had done better in blind taste tests with consumers. But people rejected New Coke because old Coke was suddenly scarce, and they wanted to keep what they knew.
    • Much stronger to say “you’re losing X per month” instead of “you can save X per month.”
  • FOMO and security
    • Uber guaranteeing people an arrival time increases the number of rides, since people feel the security associated with having an upper bound.
    • GroupMe SMS’d people who didn’t have the app. This made them feel like their friends were on the app but they weren’t. (Houseparty makes it easy to inspire FOMO with SMS too).

Decision Making and Mental Models

As I’ve spent more and more time reading Slate Star Codex, Less Wrong, Julia Galef, Farnam Street, and Charlie Munger, I’ve realized how useful it is to write down my mental models and decision making tools. Explicitly outlining them makes it easy to remember each perspective of a problem. Given something in particular I’m thinking about, I can just go down the list and see how each lens shapes my thoughts.

This is a general list of guiding principles and questions to ask myself when making a tricky decision or doing something new. I’d love to hear any comments and suggestions!


Is this something I’d regret doing (or not) in the future?

Jeff Bezos has cited his regret minimization framework as one of the reasons he started Amazon:

“I wanted to project myself forward to age 80 and say, ‘Okay, now I’m looking back on my life. I want to have minimized the number of regrets I have,’” explains Bezos. “I knew that when I was 80 I was not going to regret having tried this. I was not going to regret trying to participate in this thing called the Internet that I thought was going to be a really big deal. I knew that if I failed I wouldn’t regret that, but I knew the one thing I might regret is not ever having tried. I knew that that would haunt me every day, and so, when I thought about it that way it was an incredibly easy decision.”

This is so useful because it applies to so many different types of decisions and it’s particularly powerful with qualitative and personal problems.

Is this the right time to do this?

We naturally think about the ‘what’ and ‘how’ of a decision, but the ‘when’ and ‘why’ are equally important. If you realize that you should do something, it’s easy to think that you need to do it now, even if some other time would be better.

What routine am I forming?

The Power of Habit is one of my favorite books. Think of habit forming as analogous to compound interest for investors.
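The analogy is quantitative, not just poetic: like interest, small daily habits compound multiplicatively. A back-of-the-envelope illustration (the 1%-per-day figure is my own, not from the book):

```python
# Compounding a tiny 1% daily gain vs. a 1% daily slip over one year.

days = 365
print(f"1% better every day: {1.01 ** days:.1f}x after a year")  # ~37.8x
print(f"1% worse every day:  {0.99 ** days:.2f}x after a year")  # ~0.03x
```

The asymmetry is the point: a barely noticeable daily routine compounds into an enormous year-end gap.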

Beliefs and opinions are agents of the mind (job-based thinking)

I’m a fan of Clay Christensen’s milkshake story, which suggests thinking about “jobs to be done” to understand why people buy products and services. This mental model is also useful for inspecting your own beliefs and opinions. Given an arbitrary feeling, why is that how you feel? I’ll oftentimes think some public speaking task isn’t worth it — even when it clearly is — just because I still get nervous sometimes when talking in front of a crowd. Asking myself what job that reluctance fulfills for my mind (avoiding something uncomfortable) makes it obvious that I really should just go speak.

Value of people you spend time with >>> what you do

This one’s important, fairly obvious, and has been well-covered before. I leave it here as a constant reminder, though.

Normative vs descriptive is a difficult yet critical distinction

When discussing anything subtle or controversial it’s easy to get caught up in language traps that fail to distinguish what is from what ought to be. For a rather extreme example, you might say “drugs are natural” as a matter of fact, which is technically true. But everyone assumes you’re asserting that because drugs are natural, they should be used. Clearly separating normative and descriptive statements reduces misunderstanding and clarifies your own thinking.

Hell yes, or no

Econ or game theory nerds would be reminded of the Pareto Principle. My favorite example of this is Warren Buffett’s story about focus. It’s too easy to rationalize distractions as still being productive. But those distractions are not the most long-term productive thing to do.

The evolution of everything

The cognitive biases are all byproducts of our evolution. You’re probably familiar with the sunk cost fallacy, anchoring, the fundamental attribution error, or zero-sum bias. Some rationalists spend a lot of time studying the cognitive biases, but I think it’s extremely difficult to actually put them to practical use. I prefer to frame the cognitive biases in terms of our evolutionary history, which always invokes concrete and relatable examples (our hunter-gatherer ancestors always had to worry about where they’d get their next meal, so risk aversion makes sense in that society, for instance). Thinking about Darwinian dynamics has probably been my #1 most useful tool for understanding everything — politics, economics, people, morality, etc. Matt Ridley’s book The Evolution of Everything covers this more.

The billboard question

If you had to put a single message on a billboard, what would it say?

This exercise forces you to distill your thoughts to their most concise, elemental forms. Once you’ve simplified your idea to a billboard-sized chunk, it becomes easy to act on and communicate it to others.

As an example: if you could only send one text message to your friends, what would it say? What about a one line email to your employees? Find that thing and act in support of that singular idea.

What would you need to know to make you change your viewpoint?

I believe many people only hold views because they’re stubborn, hyper-partisan, or irrational. This applies to much more than just politics.

So how do you distinguish between an ideologue and someone who just has a strong, reasoned opinion?

Asking somebody about what information would change their mind is an incredibly powerful tool to detect this. If they can’t come up with a reasonable example of opinion-altering data, they almost certainly came to their opinion for non-rigorous reasons. Look for people with a thoughtful answer to that question and learn from them.

Goal setting: trivially easy or impossibly hard

A common piece of productivity and life advice goes something like “set goals you can hit.” It makes sense that you’d be most motivated if your goals are challenging and exciting, but still within reach.

But I think that reasoning is wrong. Goals should be trivially easy or moonshot challenging. In the first case, you’ll have no problem getting tasks done, building momentum, and clearing the path needed to focus on the bigger picture. In the second case, impossible goals remove the stress and pressure to perform. You’re okay taking risks (we’re naturally too risk averse) and more flexible in your approach to the problem.

K-step thinking

This NYT article (also: academic paper) about k-step thinking really changed the game for me when it comes to understanding crowd behavior, games, and the “average user” of a product. In situations where the best course of action involves a series of steps or depends on other people’s actions, you’ll have a hard time systematizing or rationalizing what’s going on. But most people only think a few steps ahead. There’s no need to overthink the problem, and a theoretically-correct model is probably wrong in practice.
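For a concrete feel, here’s a minimal sketch of level-k reasoning in the “guess 2/3 of the average” game that coverage like this centers on (the setup and parameters here are illustrative assumptions, not taken from the article):

```python
# Level-k sketch of the "guess 2/3 of the average" game: level-0 players
# guess randomly with mean 50; a level-k player multiplies the average
# of a level-(k-1) crowd by 2/3.

def level_k_guess(k: int, level0_mean: float = 50.0, factor: float = 2 / 3) -> float:
    """Best response of a level-k thinker against level-(k-1) opponents."""
    return level0_mean * factor ** k

for k in range(5):
    print(f"level-{k} guess: {level_k_guess(k):.1f}")
# 50.0, 33.3, 22.2, 14.8, 9.9. Iterating forever reaches the Nash
# equilibrium of 0, but real crowds mostly stop after a step or two,
# so a level-1 or level-2 guess tends to win.
```

The takeaway matches the paragraph above: the theoretically-correct answer (0) loses to a model that assumes most players stop thinking after one or two steps.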

Is this hard work or something I don’t like? Conversely, is this enjoyable or just easy?

Recently there’s been a lot of discussion surrounding “grit,” success, education, and how you achieve goals. Starting early and working hard is important at the micro-level, but I think that whole mindset loses perspective on the macro-level. Case in point: a significant fraction of college students change majors (estimates vary, but 25%–50% seems about right) and waste time figuring out what they want to do (the how is well known). I believe the what problem is bigger and less acknowledged than the how problem.

Part of what makes discovering what you want to do such a challenge is that exploration is often at odds with rigor (and hence success). When you slowly learn things purely out of curiosity, you lose the pace you need to compete. This adds pressure to do interest-exploration and rigorous skill-building at the same time. Some things are obviously hard and miserable, and you can rule those out. Some are enjoyable, in which case you need to dig deep and make sure you’re in it for the right reasons.

This thinking applies to prioritization too. Is your startup’s current task actually impactful, or do you just want to do it because you’ll feel productive?

Revealed Preferences as a tool for self-reflection

Related to the hard work or something I don’t like question, revealed preferences are a useful tool for understanding the true nature of yourself and others. The theory was originally developed by the economist Paul Samuelson to solve the problem that “while utility maximization was not a controversial assumption, the underlying utility functions could not be measured with great certainty. Revealed preference theory was a means to reconcile demand theory by defining utility functions by observing behavior.” The idea is that what people say they want is often not at all what they actually want. This matters a lot for understanding your internal utility function (which defines what you care about and should prioritize).

Thinking empirically about how you spend your time and what has historically made you laugh/love/learn will get you much farther than reasoning from first principles about what you say you care about. The non-empirical approach makes it easier for the fundamental attribution error to kick in and lets you project what you think you should be rather than what you are.
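Here’s a toy sketch of what that empirical approach might look like (the activities and hours below are entirely hypothetical, just for illustration): rank what you say you value against what your time log says you value, and look for mismatches.

```python
# Compare stated priorities with the priorities revealed by a week of
# (hypothetical) time-tracking data.

stated_priorities = ["writing", "exercise", "reading", "social media"]

logged_hours = {  # hypothetical data
    "social media": 14,
    "exercise": 3,
    "reading": 2,
    "writing": 1,
}

# Revealed ranking: sort activities by how much time actually went in.
revealed_priorities = sorted(logged_hours, key=logged_hours.get, reverse=True)

for rank, (stated, revealed) in enumerate(zip(stated_priorities, revealed_priorities), 1):
    flag = "" if stated == revealed else "  <-- mismatch"
    print(f"#{rank}: stated={stated:<12} revealed={revealed:<12}{flag}")
```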

Punctuated equilibrium

Have you noticed how things seem to stay the same for a long time, only to change very suddenly? This is another idea from the world of evolutionary biology. Wikipedia describes it: “most social systems exist in an extended period of stasis, which are later punctuated by sudden shifts in radical change.” Most people understand this idea in terms of technological/scientific revolutions and innovation: somebody builds a new tool that rapidly changes how people operate. But it can be applied more generally to anything operating within a larger environment or dealing with independent agents or incentive structures (politics, management, social group preferences, etc.). Phenomena like changes in political dialogue are often described as trends, when I think they’re better conceptualized as punctuated equilibria. That framing makes it easier to systematize them and predict second-order consequences.

Meta-competition as a cause for punctuated equilibrium

There’s an interesting game-theory problem behind each example of punctuated equilibrium in society. In EvBio terms, organisms naturally fill competitive niches that are often shifted by outside factors, almost like the gas from a popped balloon dissipating to fill its container. But in all the situations relevant to real life, the players are people with biases, unique objectives, and an awareness of what other people are thinking.

My best mental model for understanding this is meta-competition. In many cases, performance in some game matters less than choosing which game you compete in. I found a random blog post that used political conflict as an example: “the solidarity folks want a rivalry with the rivalry folks because they (the solidarity folks) think they can win, but the rivalry folks don’t want a rivalry with the solidarity folks because they (the rivalry folks) think they would lose.”

Remember that structural or environmental changes lead to punctuated equilibria as actors quickly adapt to fit the new landscape or incentive structure. I think that in a lot of cases (deciding who gets the promotion, the highest-status date, or the most cultural recognition), the result under a given set of rules and boundaries is largely known in advance. So the most effective way to compete is to change the game you’re playing. Since people know what they can win or lose at, they compete over which game is played, and when the game or rules change, the equilibrium shifts. A noteworthy corollary relevant to career planning: changing a system can have far more impact on the world than doing anything within a system (sounds a lot like the Silicon Valley ethos!)
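To make that concrete, here’s a toy model of the solidarity-vs-rivalry example (the game names and win probabilities are made up for illustration): when each side already knows its odds under each set of rules, nobody fights inside a game they’d lose; they fight over the rules instead.

```python
# A toy model of meta-competition between two rivals, A and B.
# All numbers are hypothetical.

win_prob = {                 # P(A wins) under each candidate set of rules
    "solidarity game": 0.7,  # A is better at coalition-building
    "rivalry game": 0.3,     # B is better at open conflict
}

for game, p_a in win_prob.items():
    lobbyist = "A" if p_a > 0.5 else "B"
    print(f"{game}: P(A wins) = {p_a:.1f} -> player {lobbyist} lobbies for it")

# Since outcomes within each game are known in advance, the contest
# happens one level up, over which game gets played. When the rules
# finally change, the equilibrium shifts all at once: punctuated, not
# gradual.
```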

XY Problem

Taken from a Stack Exchange post: “The XY problem is asking about your attempted solution rather than your actual problem. That is, you are trying to solve problem X, and you think solution Y would work, but instead of asking about X when you run into trouble, you ask about Y.”

I catch myself doing this all the time. It doesn’t help that we naturally want to show off the progress we’ve made on something (even if it’s a dead end) and to patch the attempted solution, whether for gratification or to close the learning feedback loop.
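A classic programming illustration (my own hypothetical version, not taken from the post itself): the real problem X is getting a file’s extension, the attempted solution Y is grabbing the last three characters, and the question you end up asking strangers is about Y.

```python
import os

filename = "archive.tar.gz"

# The question asked ("how do I get the last 3 characters of a string?"):
print(filename[-3:])                   # ".gz" here, but "peg" for "file.jpeg"

# The question that should have been asked ("how do I get a file's extension?"):
print(os.path.splitext(filename)[1])   # ".gz", robust to any extension length
```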

Optimize for serendipity

Several of the most valuable opportunities and friendships throughout my life have happened out of pure chance (read more about this in my post here). Notice that this principle is seemingly at odds with the “hell yes, or no” idea. It’s important to make the distinction: maximizing serendipity creates opportunities, and “hell yes, or no” picks the most meaningful ones. Those are two separate, independently necessary steps in the process.

We stop learning and performing when we can’t tell action/decision quality from outcome

VCs often point out that the feedback loops on investments are 10+ years, so it’s hard to learn from your decisions. Less extreme cases pop up in real life all the time. Being more aware of this helps you 1) put feedback loops in place, and 2) put less weight on what you learn from outcomes only loosely connected to your actions and decisions.

Training behavior: idiosyncrasies and preferences as a social defense

I read a fascinating EvBio article theorizing that we have preferences and idiosyncrasies as a sort of social defense mechanism. Clearly we trust and build relationships with people who spend energy/resources affirming the relationship — like how your closest friends remember to call you on your birthday or reward you by playing your favorite song at a party. The fact that everyone has their own unique and seemingly random preferences ensures that people can only gain your trust by spending the time and energy to learn and then remember those preferences. A social-trust proof-of-work, if you will (deep apologies for the blockchain reference). This framing helps me consciously contextualize my social priorities and be more deliberate in building relationships with the people I care about.

Decision making: reversal and double-reversal

If you haven’t learned by now, Wikipedia articles and papers are usually more articulate than I am: “Reversal Test: When a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias”

“Double Reversal Test: Suppose it is thought that increasing a certain parameter and decreasing it would both have bad overall consequences. Consider a scenario in which a natural factor threatens to move the parameter in one direction and ask whether it would be good to counterbalance this change by an intervention to preserve the status quo. If so, consider a later time when the naturally occurring factor is about to vanish and ask whether it would be a good idea to intervene to reverse the first intervention. If not, then there is a strong prima facie case for thinking that it would be good to make the first intervention even in the absence of the natural countervailing factor.”

This is really, really effective in debates/discussions. A concrete (somewhat straw man) example: many people are strongly against any sort of gene enhancement whether through embryo selection or something like CRISPR (I personally see many unanswered questions on the topic). The argument is usually that it’s unfair to make one person unnaturally smarter than another. The reversal is asking if we should then ban private tutoring or even schools, because a select few with access to those resources are “unnaturally smarter” in all consequential ways. This is clearly at odds with the premise of the default argument against gene enhancement. There are many adjacent and orthogonal reasons to hold a position against enhancement, but the reversal is pretty widely applicable and powerful.

Good-story bias: we’re naturally biased toward less-likely scenarios that form a story

This one is useful in two ways. First, it’s the base rate fallacy restated in more natural words: when learning through pattern recognition and empiricism, try not to be swayed by good stories or outlier data points. Second, storytelling is an incredibly powerful way to influence thinking, so tell a story rather than recite facts.
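The textbook base-rate example, with numbers chosen purely for illustration: a positive result from a pretty good test for a rare condition still tells a much weaker story than intuition suggests.

```python
# Base rate fallacy, worked out. A condition affects 1% of people; the
# test catches 99% of true cases but also flags 5% of healthy people.
# The "good story" answer to a positive test is ~99%; Bayes disagrees.

prevalence = 0.01      # base rate: P(condition)
sensitivity = 0.99     # P(positive | condition)
false_positive = 0.05  # P(positive | healthy)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_condition_given_positive = prevalence * sensitivity / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")  # ~16.7%
```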

Chesterton’s Fence

When trying to change a system or policy, it’s easy to find flaws and use those flaws to justify your proposed change. But almost everything was designed intentionally; there is probably a good reason why something is the way it is. Before working to change something, spend the time to understand how it was designed in the first place. That process will either uncover issues you hadn’t previously considered or give you further validation for altering the system or policy. This idea is referred to as Chesterton’s Fence. See the Wikipedia article for its history and a quick example.

Additive or Ecological?

Any useful technology or policy development will change user behavior. Drawing an explicit dichotomy between additive changes (first-order effects only) and ecological changes (higher-order effects present) makes it easier to choose your decision-making toolkit and weigh factors appropriately.


That’s it for now. Please tell me about your mental models (seriously!). My email and Twitter are on the homepage.