As I’ve spent more and more time reading Slate Star Codex, Less Wrong, Julia Galef, Farnam Street, and Charlie Munger, I’ve realized how useful it is to write down my mental models and decision-making tools. Explicitly outlining them makes it easier to remember each perspective on a problem. Given something in particular I’m thinking about, I can just go down the list and see how each lens shapes my thoughts.
This is a general list of guiding principles and questions to ask myself when making a tricky decision or doing something new. I’d love to hear any comments or suggestions.
Is this something I’d regret doing (or not) in the future?
Jeff Bezos has cited regret minimization as one of the reasons he started Amazon:
“I wanted to project myself forward to age 80 and say, ‘Okay, now I’m looking back on my life. I want to have minimized the number of regrets I have,’” explains Bezos. “I knew that when I was 80 I was not going to regret having tried this. I was not going to regret trying to participate in this thing called the Internet that I thought was going to be a really big deal. I knew that if I failed I wouldn’t regret that, but I knew the one thing I might regret is not ever having tried. I knew that that would haunt me every day, and so, when I thought about it that way it was an incredibly easy decision.”
This framing is useful because it applies to so many different types of decisions, and it’s particularly powerful for qualitative, personal problems.
Is this the right time to do this?
We naturally think about the ‘what’ and ‘how’ of a decision, but the ‘when’ and ‘why’ are equally important. Once you realize you should do something, it’s easy to assume you need to do it right now, even when some other time would be better.
What routine am I forming?
The Power of Habit is one of my favorite books. Think of habit formation as analogous to compound interest for investors.
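To make the analogy concrete, here’s a toy calculation (my own illustration, not from the book): tiny daily changes compound into enormous differences over a year.

```python
# Toy arithmetic, not from The Power of Habit: how a 1% daily change compounds.
better = 1.01 ** 365  # improving 1% every day for a year
worse = 0.99 ** 365   # slipping 1% every day for a year
print(f"1% better every day: {better:.1f}x after a year")  # ~37.8x
print(f"1% worse every day:  {worse:.2f}x after a year")   # ~0.03x
```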
Beliefs/opinions are agents of the mind: job-based thinking
I’m a fan of Clay Christensen’s milkshake story, which suggests thinking about “jobs to be done” to understand why people buy products and services. This mental model is also useful for inspecting your own beliefs and opinions. Given an arbitrary feeling, ask why you feel that way. I’ll often think some public speaking task isn’t worth it, even when it clearly is, just because I still get nervous sometimes when talking in front of a crowd. Asking myself what job that reluctance fulfills for my mind (avoiding something uncomfortable) makes it obvious that I really should just go speak.
Value of people you spend time with >>> what you do
This one’s important, fairly obvious, and has been well-covered before. I leave it here as a constant reminder, though.
Normative vs descriptive is a difficult yet critical distinction
When discussing anything subtle or controversial it’s easy to get caught up in language traps that fail to distinguish what is from what ought to be. For a rather extreme example, you might say “drugs are natural” as a matter of fact, which is technically true. But everyone assumes you’re asserting that because drugs are natural, they should be used. Clearly separating normative and descriptive statements reduces misunderstanding and clarifies your own thinking.
Hell yes, or no
Econ and game theory nerds will be reminded of the Pareto principle. My favorite example of this is Warren Buffett’s story about focus. It’s too easy to rationalize distractions as still being productive, but they’re rarely the most productive thing you could be doing over the long term.
The evolution of everything
The cognitive biases are all byproducts of our evolution. You’re probably familiar with the sunk cost fallacy, anchoring, the fundamental attribution error, or zero-sum bias. Some rationalists spend a lot of time studying the cognitive biases, but I think it’s extremely difficult to actually put that catalog to practical use. I prefer to frame the biases in terms of our evolutionary history, which usually suggests concrete, relatable examples (our hunter-gatherer ancestors constantly had to worry about where they’d get their next meal, so risk aversion makes sense in that environment, for instance). Thinking about Darwinian dynamics has probably been my #1 most useful tool for understanding everything: politics, economics, people, morality, etc. Matt Ridley’s book The Evolution of Everything covers this in more depth.
The billboard question
If you had to put a single message on a billboard, what would it say?
This exercise forces you to distill your thoughts to their most concise, elemental form. Once you’ve simplified your idea into a billboard-sized chunk, it becomes much easier to act on and to communicate to others.
As an example: if you could only send one text message to your friends, what would it say? What about a one line email to your employees? Find that thing and act in support of that singular idea.
What would you need to know to make you change your viewpoint?
I believe many people only hold views because they’re stubborn, hyper-partisan, or irrational. This applies to much more than just politics.
So how do you distinguish between an ideologue and someone who just has a strong, reasoned opinion?
Asking somebody what information would change their mind is an incredibly powerful way to detect this. If they can’t come up with a reasonable example of opinion-altering evidence, they almost certainly reached their opinion for non-rigorous reasons. Look for people with a thoughtful answer to that question and learn from them.
Goal setting: trivially easy or impossibly hard
A common piece of productivity and life advice goes something like “set goals you can hit.” It makes sense that you’d be most motivated if your goals are challenging and exciting, but still within reach.
But I think that reasoning is wrong. Goals should be either trivially easy or moonshot hard. In the first case, you’ll have no problem getting tasks done, building momentum, and clearing the path you need to focus on the bigger picture. In the second case, impossible goals remove the stress and pressure to perform: you’re okay taking risks (we’re naturally too risk averse) and you stay more flexible in your approach to the problem.
K-step thinking
This NYT article (also: academic paper) about k-step thinking really changed the game for me when it comes to understanding crowd behavior, games, and the “average user” of a product. In situations where the best course of action is a series of steps or depends on other people’s actions, you’ll have a hard time fully systematizing or rationalizing what’s going on. But most people only think a few steps ahead, so there’s no need to overthink the problem; a theoretically correct model is probably wrong in practice.
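Here’s a minimal sketch of level-k reasoning using the classic “guess 2/3 of the average” game (my assumption about the game in question; the numbers are purely illustrative):

```python
# Level-k ("k-step") thinking in the guess-2/3-of-the-average game.
# A level-0 player guesses naively (say 50); a level-k player best-responds
# to a crowd of level-(k-1) players by guessing 2/3 of their guess.

def level_k_guess(k, naive_guess=50.0):
    guess = naive_guess
    for _ in range(k):
        guess *= 2 / 3
    return guess

for k in range(6):
    print(f"level-{k} guess: {level_k_guess(k):.1f}")

# The fully rational, infinitely iterated answer is 0, but real crowds tend
# to land near the level-1 to level-3 guesses (roughly 33, 22, and 15).
```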
Is this hard work or something I don’t like? Conversely, is this enjoyable or just easy?
Recently there’s been a lot of discussion surrounding “grit,” success, education, and how you achieve goals. Starting early and working hard is important at the micro level, but I think that whole mindset loses sight of the macro level. Case in point: a significant fraction of college students change majors (estimates vary, but 25-50% seems about right) and waste time figuring out what they want to do (the how is well known). I believe the what problem is bigger and less acknowledged than the how problem.
Part of what makes discovering what you want to do such a challenge is that exploration is often at odds with rigor (success). When you learn things slowly, purely out of curiosity, you lose the pace you need to compete. This adds pressure to do the interest exploration and the rigorous skill building at the same time. Some things are obviously hard and miserable, and you can rule those out. Some are enjoyable, in which case you need to dig deep and make sure you’re in it for the right reasons.
This thinking applies to prioritization too. Is your startup’s current task actually impactful, or do you just want to do it because you’ll feel productive?
Revealed Preferences as a tool for self-reflection
Related to the “hard work or something I don’t like” question, revealed preferences are a useful tool for understanding the true nature of yourself and others. The theory was originally developed by the economist Paul Samuelson to solve the problem that “while utility maximization was not a controversial assumption, the underlying utility functions could not be measured with great certainty. Revealed preference theory was a means to reconcile demand theory by defining utility functions by observing behavior.” The idea is that what people say they want is often not at all what they actually want. This matters a lot for understanding your internal utility function (which defines what you care about and should prioritize).
Thinking empirically about how you spend your time and what has historically made you laugh, love, and learn will get you much farther than taking a first-principles approach based on what you say you care about. The non-empirical approach makes it easier for the fundamental attribution error to kick in and lets you project what you think you should be rather than what you are.
Punctuated equilibrium
Have you noticed how things seem to stay the same for a long time only to change very suddenly? This is another idea from the world of evolutionary biology. Wikipedia describes it: “most social systems exist in an extended period of stasis, which are later punctuated by sudden shifts in radical change.” Most people understand this idea in terms of technological and scientific revolutions: somebody builds a new tool that rapidly changes how people operate. But it can be applied more generally to anything operating within a larger environment or dealing with independent agents or incentive structures (politics, management, social group preferences, etc.). Phenomena like changes in political dialogue are often described as trends, but I think they’re better conceptualized as punctuated equilibria; that framing makes it easier to systematize them and predict second-order consequences.
Meta-competition as a cause for punctuated equilibrium
There’s an interesting game-theory problem behind each example of punctuated equilibrium in society. In EvBio terms, organisms naturally fill competitive niches, and those niches are often shifted by outside factors, almost like the gas from a popped balloon dissipating to fill its container. But in the situations relevant to real life, the players are people with biases, unique objectives, and an awareness of what other people are thinking.
My best mental model for understanding this is meta-competition. In many cases, performance in some game matters less than choosing which game you compete in. I found a random blog post that used political conflict as an example: “the solidarity folks want a rivalry with the rivalry folks because they (the solidarity folks) think they can win, but the rivalry folks don’t want a rivalry with the solidarity folks because they (the rivalry folks) think they would lose.”
Remember that structural or environmental changes lead to punctuated equilibria as actors quickly adapt to the new landscape or incentive structure. I think that in a lot of cases (deciding who gets the promotion, the highest-status date, or the most cultural recognition), the result under a given set of rules and boundaries is largely known in advance. So the most effective way to compete is to change the game being played: since people know what they can win or lose at, they compete over which game is played, and when the game or its rules change, the equilibrium shifts. A noteworthy corollary for career planning: changing a system can have far more impact on the world than doing anything within that system (which sounds a lot like the Silicon Valley ethos!).
XY Problem
Taken from a Stack Exchange post: “The XY problem is asking about your attempted solution rather than your actual problem. That is, you are trying to solve problem X, and you think solution Y would work, but instead of asking about X when you run into trouble, you ask about Y.”
I catch myself doing this all the time. It doesn’t help that we naturally want to show off the progress we’ve made on something (even if it’s a dead end) and to fix the attempted solution, whether for the gratification or to close the learning feedback loop.
Optimize for serendipity
Several of the most valuable opportunities and friendships throughout my life have happened out of pure chance (read more about this in my post here). Notice that this principle is seemingly at odds with the “hell yes, or no” idea. It’s important to make the distinction: maximizing serendipity creates opportunities, and “hell yes, or no” picks the most meaningful ones. Those are two separate, independently necessary steps in the process.
We stop learning and performing when we can’t tell action/decision quality from outcome
VCs often point out that the feedback loops on investments are 10+ years long, so it’s hard to learn from your decisions. Less extreme cases pop up in real life all the time. Being more aware of this helps you 1) put feedback loops in place, and 2) put less weight on what you learn from outcomes that are only loosely connected to your actions and decisions.
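As a toy illustration (mine, with made-up numbers): when the feedback sample is small, noisy outcomes routinely hide decision quality.

```python
import random

# A "good" decision succeeds 60% of the time, a "bad" one 40%, but we only
# get to observe a handful of outcomes from each before judging them.
def observed_success_rate(true_p, n_outcomes):
    return sum(random.random() < true_p for _ in range(n_outcomes)) / n_outcomes

random.seed(0)
trials, n_outcomes = 10_000, 5
misleading = sum(
    observed_success_rate(0.4, n_outcomes) >= observed_success_rate(0.6, n_outcomes)
    for _ in range(trials)
)

# With only five observed outcomes apiece, the bad decision looks at least
# as good as the good one a surprisingly large fraction of the time.
print(f"outcomes rank the decisions wrong {misleading / trials:.0%} of the time")
```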
Training behavior: idiosyncrasies and preferences as a defense mechanism
I read a fascinating EvBio article theorizing that we have preferences and idiosyncrasies as a sort of social defense mechanism. Clearly we trust and build relationships with people who spend energy and resources affirming the relationship, like the closest friends who remember to call you on your birthday or reward you by playing your favorite song at a party. The fact that everyone has their own unique and seemingly random preferences ensures that people can only gain your trust by spending the time and energy to learn and then remember those preferences. A social-trust proof-of-work, if you will (deep apologies for the blockchain reference). This framing helps me consciously contextualize my social priorities and be more deliberate about building relationships with the people I care about.
Decision making: reversal and double-reversal
If you haven’t learned by now, Wikipedia articles and papers are usually more articulate than I am: “Reversal Test: When a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias”
“Double Reversal Test: Suppose it is thought that increasing a certain parameter and decreasing it would both have bad overall consequences. Consider a scenario in which a natural factor threatens to move the parameter in one direction and ask whether it would be good to counterbalance this change by an intervention to preserve the status quo. If so, consider a later time when the naturally occurring factor is about to vanish and ask whether it would be a good idea to intervene to reverse the first intervention. If not, then there is a strong prima facie case for thinking that it would be good to make the first intervention even in the absence of the natural countervailing factor.”
This is really, really effective in debates and discussions. A concrete (somewhat straw-man) example: many people are strongly against any sort of gene enhancement, whether through embryo selection or something like CRISPR (I personally see many unanswered questions on the topic). The usual argument is that it’s unfair to make one person unnaturally smarter than another. The reversal is to ask whether we should then ban private tutoring or even schools, because the select few with access to those resources become “unnaturally smarter” in all consequential ways. That conclusion is clearly at odds with the premise of the default argument against gene enhancement. There are many adjacent and orthogonal reasons to hold a position against enhancement, but the reversal test is widely applicable and powerful.
Good-story bias: we’re naturally biased toward less-likely scenarios that form a good story
This one is useful in two ways. First, it’s the base rate fallacy restated in more natural language: when learning through pattern recognition and empiricism, try not to be swayed by good stories or outlier data points. Second, storytelling is an incredibly powerful way to influence thinking, so when you want to persuade, tell a story rather than just listing facts.
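A standard worked example of the base rate fallacy (the numbers here are hypothetical, not from the post): a vivid positive test result makes a good story, but the boring base rate dominates.

```python
# Bayes' rule with a rare condition and a fairly accurate test.
base_rate = 0.001            # P(condition) = 1 in 1,000
sensitivity = 0.99           # P(positive | condition)
false_positive_rate = 0.01   # P(positive | no condition)

p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive
print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")  # ~9%
```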
Chesterton’s Fence
When trying to change a system or policy, it’s easy to find flaws and use those flaws to justify your proposed change. But almost everything was designed intentionally, and there is probably a good reason why something is the way it is. Before working to change something, spend the time to understand how it was designed in the first place. That process will either uncover issues you hadn’t previously considered or give you further validation for altering the system or policy. This idea is referred to as Chesterton’s Fence; see the Wikipedia article for its history and a quick example.
Additive or Ecological?
Any useful technology or policy development will change user behavior. Drawing an explicit dichotomy between additive changes (first-order effects only) and ecological changes (higher-order effects present) makes it easier to choose your decision-making toolkit and weigh factors appropriately.
That’s it for now. Please tell me about your mental models (seriously!). My email and Twitter are on the homepage.