Worldview as a Competitive Advantage

Recently I’ve been thinking about what makes a strong competitive advantage in a world where individuals and firms are increasingly leveraged and specialized. The top several dozen venture investors basically monopolize the industry. (About 20 firms — 3% of the venture universe — earn 95% of the returns.)

The common value-add or differentiating dimensions are a great network, positive signaling effects, insightful operating experience, or the willingness to invest at the highest price. But it’s not clear that the majority of investors move the needle.

These are all useful from the perspective of founders. The tough part as an investor is that there are few unfilled niches. You’ll run into tough competition on all of the above features. So what’s left? From your perspective, does value-add even matter that much when VCs still pass on many amazing deals?

I think that worldview is the most underrated and robust advantage you can have over other investors.

An interesting thread in politics and psychology these past few years has explored how people change their minds. For example, ideas from Jonathan Haidt’s The Righteous Mind and Arnold Kling’s The Three Languages of Politics have made their way into the mainstream. The core insight is that people rarely change their minds. Opinions on different issues aren’t individual beliefs based on separate analyses — they’re products of a single underlying worldview. If an idea doesn’t fit your worldview, you’re unlikely to accept it on its merits, because doing so would break up an otherwise consistent and unified understanding of how things work.

This makes VC hard because the best deals are, almost by definition, outliers that don’t fit neatly into any worldview. So having a worldview that accommodates a wider variety of ideas should let investors make the right picks, which is historically very hard (see the Airbnb and Robinhood examples linked above).

Perhaps we should focus less on building a network or thought leadership or industry experience, and more on learning worldview-expanding (or, ideally, -shattering) ideas. Less of entrepreneur Eric Ries or How To Win Friends and Influence People, more of polymath Robin Hanson or The Sovereign Individual.

That’s the core point I want to make. As a quick and currently-relevant example, different investors seem to view Tesla through different lenses. Many tech investors I follow hold a big-picture worldview more in tune with what works for VCs: if the macro-trend of electric vehicles is there and the team is right, it’s worth betting on. Other investors, often public equities and bonds people, take a more grounded approach: Tesla is likely structurally unprofitable compared to other manufacturers that are quietly waiting for the right timing, and its financials are comparable to GM’s right before its 2009 bankruptcy. One portfolio manager I know called the Tesla bond issue one of the worst he’s seen in recent memory. Luckily for Tesla, more optimistic investors controlled enough capital to purchase the issue anyway.

Of course these are total oversimplifications of each stance, but the point I’m making remains. I think investors make decisions on a case-by-case basis, but with significant bias from worldview. Or maybe worldview-veto is a more accurate descriptor of what’s going on.

There are a number of other interesting threads here, such as the problem of demonstrating that you have the right subjective worldview to actually have an edge. Networks, operating experience, etc., are inherently more provable. (This relates to the Thiel contrarian philosophy: if others buy in to your perspective, it’s not an edge.)

Another question is whether all this matters. You can’t go to an LP when fundraising and just say “I make the best decisions because I have a better holistic model for how the world works.” The other value-add metrics also act as a positive signal. I named this blog Heuristically Speaking for a reason!

Thanks to Nathan Ju for reading a draft. Subscribe to not miss any future posts!

Degrees of Freedom

I often talk to companies that come up with complex plans involving a technical challenge, going to market, then scaling and fending off competitors. They want to do X to leverage Y, then finally become Z when the time is just right. I suspect this stems from (incorrectly) feeling like the problem they’re solving isn’t innovative enough. Perhaps we glorify different strengths of successful businesses (the product obsession of Apple, the scale of FB, the community of Airbnb, etc) and forget that no company can possibly combine all of those virtues into one perfect startup. Or maybe investors build thought leadership by constantly talking about bleeding-edge buzzwords and founders accidentally make that their default set-point.

Think back to the best startups of the past decade. They all follow fairly simple plans which essentially bet on one core thesis: taxis that you can hail with an app. A site where people can rent out a room in their house. Let teens send pictures that disappear. None of these companies had a multi-phase plan from the start. They didn’t solve particularly novel engineering challenges, at least until they were already super successful and hitting scale issues. There was only one singular problem to focus all attention on. And it often sounded like a somewhat lame problem, even though it ultimately wasn’t.

When evaluating startups, I often use a mental model I call degrees-of-freedom. Count how many things need to go right for the company to survive. Each additional degree of freedom makes execution exponentially harder and more complex. (I use that word exponential in the literal mathematical sense, not in the metaphorical sense!)
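The “literally exponential” claim can be made concrete with a toy probability model. Here’s a minimal sketch, assuming each degree of freedom is an independent requirement with the same success probability p (my simplification for illustration, not the author’s actual model):

```python
# Back-of-the-envelope degrees-of-freedom model: treat each thing
# that must go right as an independent event with success
# probability p; the company survives only if all n succeed.
def survival_probability(p: float, n: int) -> float:
    """Chance of survival with n independent requirements,
    each succeeding with probability p."""
    return p ** n

print(survival_probability(0.7, 1))  # 0.7
print(survival_probability(0.7, 3))  # ~0.343
print(survival_probability(0.7, 5))  # ~0.168
```

Even when each individual bet looks favorable, stacking five of them cuts the odds of survival to roughly a sixth. That decay in p^n is the literal exponential.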

If you’re dealing with regulatory uncertainty, and building on a new decentralized protocol, and introducing a new business model, sure, you’re doing something innovative. But too many things have to go right. Too many things are simply out of your control.

My advice to these ambitious founders is: don’t feel like you need to do something impressive on every front. Pick one idea to bet the company on, and don’t lose sight of it. Use simple foolproof execution strategies for everything else. Reduce the number of degrees of freedom.

Let’s all get more excited about “simple” startups!

Buying IBM

IBM is one of my favorite sales and marketing case studies. As the saying goes: nobody ever got fired for buying IBM. I had always appreciated the sales hustle, but thinking more about the “Buying IBM Effect,” I began seeing the same dynamic elsewhere:

Huge fund sizes, for example, can hurt VC returns. Deploying large amounts of capital is difficult if you’re only seeing a limited number of worthy startups. You have to choose between investing more capital per company and investing in a larger number of startups. Many investors choose the latter option. This explains some quirks of highly-saturated funding ecosystems like Stanford’s. I’ve seen companies there raise ~$1mm with nothing more than an idea and a decent-but-nothing-special team. That’s not to say this strategy is impossible to pull off, but in every case I’ve seen, it’d be crazy to invest on team alone. Nobody ever lost an LP because they invested in Stanford startups, so excess capital is deployed there.

As a manager, it can be too risky to hire a brilliant-yet-under-credentialed or not-well-rounded candidate. The upside is simply making the hire and your boss not realizing that you made a tough yet successful decision. The downside is that the candidate, while exceptional in one dimension, can’t cut it in some other respect and drops the ball. Then you’re on the hook for bringing them onto the team. Unless you’re fortunate enough to work in a managerial environment that understands the risks and rewards of such a hire, it’s not worth the risk. Nobody ever got fired for hiring a mediocre candidate with the right background on paper. But exceptional is often what the organization needs.

Wealth managers want to provide clients with reasonable returns. Although active management and picking individual assets lets you attempt to beat the market, you’ll lose more clients by underperforming than you’ll gain by over-performing. So wealth managers usually default to index funds, which stick to average returns. (In my opinion this is the best strategy anyway, but the point is that wealth managers are forced into the passive/conservative strategy.) Nobody ever withdrew from a retirement fund because their returns were just fine.

The Buying IBM Effect is just a symptom: asymmetry between the upside and downside is the real problem.
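To make the asymmetry concrete, here’s a toy expected-value calculation (all numbers invented purely for illustration): a risky hire can be positive-EV for the organization yet negative-EV for the individual manager whose blame outweighs their credit.

```python
# Toy model of the Buying IBM asymmetry: the same decision,
# scored from two perspectives.
p_success = 0.6

org_upside, org_downside = 10, -10  # value created/destroyed for the org
mgr_upside, mgr_downside = 1, -10   # modest credit vs. full blame for the manager

ev_org = p_success * org_upside + (1 - p_success) * org_downside
ev_mgr = p_success * mgr_upside + (1 - p_success) * mgr_downside

print(ev_org)  # 2.0  -> the org should want the risky hire
print(ev_mgr)  # -3.4 -> the manager rationally plays it safe
```

Judging on the swing rather than the hit amounts to raising `mgr_upside` and capping `mgr_downside` until the manager’s expected value has the same sign as the organization’s.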

In environments you can control, install a system that judges decisions on the swing instead of the hit. (My deepest apologies for the platitude.)

In environments you can’t control, design your product or pitch to cap the downside. Risk aversion is often an ulterior motive that you won’t uncover directly through conversation. It’s important to anticipate the structural biases of anybody you’re interfacing with and account for their internal decision making considerations.

 

Go-To Essays and Books

Essays:

These are the books that I re-read and recommend the most.

Getting Positive Feedback

I’ve noticed that founders often optimize for finding the most positive feedback possible. That’s directionally a good move and it aligns well with Paul Graham’s idea that it’s “better to make a few users love you than a lot ambivalent.” The benefits of positive feedback in conversations with potential customers are clear: you validate your product, build up a network of fans, and get to iterate towards what specific dimension of your product drives the most value.

But once in a while I run into early-stage founders who have nothing but positive feedback from all parties, or who have a hard time naming objections and complaints. You’d think that’s great news, but it always makes me a little uncomfortable as an investor. It usually means the founder is exaggerating demand or isn’t executing their validation correctly. If literally everything you hear is positive, users probably aren’t being totally honest in their feedback, or you’re not pushing hard enough to establish an initial cult following. Here are two things to keep in mind as you talk to users:

Remove bias

Users won’t want to hurt your feelings. Make sure you present your startup as someone else’s idea, or as a product already on the market which you happen to be researching. While users will have no problem criticizing a fintech product, they’ll be too nice to healthcare products or other “socially positive” businesses. Take every precaution to make sure the feedback you’re getting will match up with users’ revealed preferences (true needs) when the product comes to market.

Take what you can get

If people say they love your product, then make your ask larger and larger until you’re out of slack. In the early days of Stripe, the Collison brothers would ask people whether or not they would use Stripe. If the user said they would, the Collison brothers didn’t stop there. They’d respond with “great, give me your laptop and I’ll get you up and running.”

If you’re simply interviewing people, ask them to sign up for the beta waitlist. If that works, ask them to refer their friends to your waitlist as well. If that still works, ask them to pre-pay for a discount when the product launches. This will not only tell you how badly people actually want your product, it’ll build up your customer base! This is also super useful for investors.

 

Concise Ideas are One-Dimensional

Distill an idea to the most concise and clear form you can to make it memorable. 280 characters if possible.

Luckily, some of the tweets, headlines, and soundbites we come across carry wisdom, or at least nudge our headspace toward a new idea. But that makes it too easy to forget that most things we talk about fall on a spectrum or have extra dimensions, especially when maximizing viewership is so valuable for content creators.

Some of these are pretty straightforward. The value of “deep work” has been ingrained into our heads by the latest trends in business writing. On the other hand, several people I know online and in-person have said that the most effective people they know are all super responsive through email, text, and over the phone. So clearly you can be successful in both modes. What gives?

Taking a moment to think about it, you’ll realize that you don’t have to choose one. Block out an afternoon to dive into your work, then be obsessed with the outside world for the other hours in the day. Both of these techniques are complementary parts in a toolkit, not separate virtues you should aim for.

But you understand that already. The real argument I’m making is that it’s critically important to constantly reconsider the implications of our proverbs. Here’s an example (of many) showing why this can matter so much:

“Humble” and “modest” are adjectives that usually show up when someone is being complimented. They’re great traits to have, and everyone clearly benefits when we all treat each other as equally capable and deserving peers. On the dark side of modesty, however, is imposter syndrome. (Which, by the way, disproportionately affects those from underrepresented groups!) I think that by asserting the ultimate value of modesty in our bite-size thoughts, we impose a big mental and emotional barrier for people who shouldn’t act that way all of the time.

It can be incredibly useful to feel like you’re bad at something and have to improve ASAP. It motivates you to dive into nitty gritty details and be a sponge at the cost of self-esteem. Likewise, a sense of overconfidence can help you overcome risk-aversion, lead people, and sell, but at the cost of having an open mind.

Most worrying to me is that people from atypical backgrounds have a stronger need to recognize and act on that duality.

As someone who’s never had trouble fitting right into the tech startup world, I have the luxury of not having to project any sort of confidence and can just default to whatever mood fits the situation best (usually a feeling of being humbled by the many brilliant people out there!) But anyone who’s part of an out-group faces a difficult tradeoff: using brazen confidence as a tool to gain validation from the in-group can leave them feeling guilty over their immodesty.

Marketing “humbleness” or “confidence” as objectively desirable qualities misses the point. You can have moods where nobody can stop you, and moods where you’re still pulling yourself up by your bootstraps. They’re both horrifically useful tools at your disposal and you don’t need to stick with one or the other. Everything has a flip side that can be useful, as long as you can keep the balance.

In summary: most things are spectra, not polar, and most things are dimensional, not mutually-exclusive.

Thanks to Niraj for feedback. Subscribe to not miss any future posts!

Psychology in Product and Sales

I’m experimenting with a new blog post format. Oftentimes I’ll read a multi-paragraph essay and feel frustrated because it could have been condensed into a series of bullet points. So that’s what I’ve made here. Let me know what you think; hopefully the concepts will be intuitive and this bullet-style list will enumerate relevant ideas and examples. This is a list of principles of psychology in product and sales. (I’ve been reading Robert Cialdini and Daniel Kahneman recently!)


  • Signaling
    • Doubling the price on jewelry signals quality, so people will buy more of the same good if it’s priced higher. This is the opposite of what you’d expect.
  • Reciprocation 
    • “Take this thing, no-strings-attached” creates a feeling of debt and favor.
    • Hare Krishnas greatly increased their fundraising by handing out roses for free at airports.
    • Putting a sticky note in a mailed survey request will greatly increase response volume/quality. Response is even better if the note is handwritten.
  • Concession
    • Related to anchoring, people often feel bad or indebted for not being able to fulfill a request.
    • Salespeople start with a big ask for making a purchase but plan on it failing, then say something like “okay, would you at least be able to give me referrals to three friends who would find this product useful?”
  • Commitment 
    • Having people say they’re in support of something ahead of time (even days or longer) makes a future ask much more successful.
    • Canonical example is political campaigns asking people days before an election “will you vote?” and people tend to overcommit and say yes. Then when election time comes, they’ll actually vote to stay true to their word.
    • Once someone goes to the bathroom in a new house or says they’ll buy a car, they’ve already made a decision in their head.
      • Salespeople know this, and will look for signs of mental commitment before jacking up prices.
  • Group initiation 
    • Soldiers go through bootcamp, frat boys haze, and Catholics baptize. Initiation builds critical bonds, and the more intensive/costly the initiation is, the stronger the effect.
    • Products like Stack Exchange make you take steps (earn some amount of reputation, in this case) before becoming a part of the community and having full access to the product.
  • Publicity effect
    • If somebody makes a statement publicly, they’ll think the statement is true even if they’d otherwise rationally find it to be false. Sales tactic would be to get someone to say they have a need for the product out loud.
    • Corollary: be reluctant to publicly share works in progress which would create biases for yourself.
    • If you can get a user to somehow indicate that they use your product (to other people, online, or by having some sort of public profile), they’re much less likely to churn.
  • Internal vs external beliefs
    • Canonical example: an experiment where kids were left in a room with a bunch of lame toys and one cool robot toy. They were told not to play with the robot, and then the experimenter left the room.
      • Kids played with the robot if they were told it was wrong and they’d be punished (even though they couldn’t be caught since they were alone in the room)
      • Kids didn’t play with the robot if they were simply told it was wrong
      • People can blame bad external rules for behavior, but if there’s no punishment they would have to do something only a Bad Person™ would do.
    • This backs the socially positive slant that companies like Patagonia or Lyft build their value props on.
  • Inner circles
    • This is related to the group initiation topic. Being in an Inner Circle makes the product much more sticky and drives engagement from users within it.
      • This is particularly important in products where a small group of power users greatly influence the direction and quality of the product.
    • Examples: Reddit’s gilded club, Quora’s Top Writers
    • Inner Circles can come in many layers.
      • Some startups have tried to create multi-functional social platforms (meeting new people, messaging friends, etc.)
      • But people use these layers to clearly define the relationship: coworkers use LinkedIn, friends/acquaintances use FB Messenger or GroupMe, and close friends use phone numbers/iMessage. This removes ambiguity and says “we’re friends because we use this medium reserved for friends of only this type”
  • Risk aversion
    • People hate losses more than they like gains.
    • “This offer is only open for a limited time!”
    • “The special edition only has 100 copies”
    • “Thanks for joining, here are 50 in-game coins to get started!” (you’d give up this arbitrary freebie if you stopped playing the game)
  • Moral-threat vs consequence-threat
    • People don’t mind taking risks if the expected cost of the consequence is low.
    • But not imposing any punishment shifts the act to a social-signalling/moral burden (rather than a financial one) which has much higher intangible costs and an unlimited downside.
    • Canonical example: a daycare had lots of late child pickups, so they started charging $5 each time. Parents were then late more often, since they now had an easy out: simply paying the five bucks.
  • Having an excuse 
    • 6-8% of Gerber baby food is consumed by people who aren’t babies. Gerber actually tried marketing a product specifically for seniors but it failed. People didn’t want to admit they needed that sort of food, so they stuck with the baby product (plausible deniability — lots of seniors have grandkids!)
    • Most hookup apps market themselves as dating apps. While many users are actually focused on dating, nobody wants to tell others they’re only looking for hookups.
  • Anchoring
    • This effect is pretty well known.
    • I was chatting with a guy in SF who was asking for donations for a hip-hop-related community org. He challenged me to donate $100, which was crazy, and I ended up donating $10, which in hindsight was twice what I’d otherwise have chosen to donate.
  • Self consistency
    • People have a need to be self-consistent in their beliefs and actions.
    • The question “why do you want this job?” is also a sales tactic. The candidate will be forced to articulate good reasons out of politeness – and the desire for internal consistency will make them believe these reasons. (source)
    • Unethical example: if you conduct a fake survey about lifestyle, people will hype up and inflate their lifestyle to create a compelling narrative about themself. If you follow that with an expensive ask that would validate that lifestyle, they’ll often go along to not sound self-contradictory.
      • Wouldn’t make sense to say “yeah, I travel all the time, but this packaged travel money-saving deal isn’t something I want.”
  • Social proof and social pressure
    • Tip jars are “seeded” to give the appearance that many other people tip.
    • Some products with FB login will show you that your friends use it too.
    • Google Glass became associated with “glasshole” nerds, but Snap Spectacles marketed with attractive and well-rounded models from the start.
    • “Endless chain” where you make a sale, then go to the customer’s friend and say “your friend John recommended this for you.” Refusing then feels like turning down your friend instead of turning down the salesman.
  • Liking
    • Being attractive, personal and cultural similarity, giving compliments, contact & co-operation, conditioning, and association with positive ideas all make people much more open to trying a product or buying something.
    • GitHub’s Octocat is a friendly and fun mascot which users like and build an attachment to
  • Authority
    • This one is obvious. Companies plug high-profile clients whenever possible.
    • Twitter has the blue checkmark to make users feel like they’re getting higher quality information from those people through the platform.
  • Scarcity
    • Robinhood’s famous growth hack where you needed to refer people to move up a spot in the waiting list. Access to the early product was scarce.
    • New coke vs old coke
      • In the 80s, Coca-Cola tried changing the Coke recipe because the new formula had done better in blind taste tests with consumers. But people rejected New Coke because Old Coke was suddenly scarce, and people wanted to keep what they knew.
    • Much stronger to say “you’re losing X per month” instead of “you can save X per month”
  • FOMO and security
    • Uber guaranteeing people an arrival time increases the number of rides, since people feel the security associated with having an upper bound.
    • GroupMe SMS’d people who didn’t have the app. This made them feel like their friends were on the app but they weren’t. (Houseparty makes it easy to inspire FOMO with SMS too).

Decision Making and Mental Models

As I’ve spent more and more time reading Slate Star Codex, Less Wrong, Julia Galef, Farnam Street, and Charlie Munger, I’ve realized how useful it is to write down my mental models and decision making tools. Explicitly outlining them makes it easy to remember each perspective of a problem. Given something in particular I’m thinking about, I can just go down the list and see how each lens shapes my thoughts.

This is a general list of guiding principles and questions to ask myself when making a tricky decision or doing something new. I’d love to hear any comments and suggestions!


Is this something I’d regret doing (or not) in the future?

Jeff Bezos has cited regret minimization as one of the reasons he started Amazon:

“I wanted to project myself forward to age 80 and say, ‘Okay, now I’m looking back on my life. I want to have minimized the number of regrets I have,’” explains Bezos. “I knew that when I was 80 I was not going to regret having tried this. I was not going to regret trying to participate in this thing called the Internet that I thought was going to be a really big deal. I knew that if I failed I wouldn’t regret that, but I knew the one thing I might regret is not ever having tried. I knew that that would haunt me every day, and so, when I thought about it that way it was an incredibly easy decision.”

This is so useful because it applies to so many different types of decisions and it’s particularly powerful with qualitative and personal problems.

Is this the right time to do this?

We naturally think about the ‘what’ and ‘how’ of a decision, but the ‘when’ and ‘why’ are equally important. If you realize that you should do something, it’s easy to think that you need to do it now, even if some other time would be better.

What routine am I forming?

The Power of Habit is one of my favorite books. Think of habit forming as analogous to compound interest for investors.

Beliefs/opinions are agents of the mind (job-based thinking)

I’m a fan of Clay Christensen’s milkshake story, which suggests thinking about “jobs to be done” to understand why people buy products and services. This mental model is useful for inspecting your own beliefs and opinions: given an arbitrary feeling, why is that how you feel? I’ll often think some public speaking task isn’t worth it — even when it clearly is — just because I still get nervous sometimes when talking in front of a crowd. Asking myself what job that reluctance fulfills for my mind (avoiding something uncomfortable) makes it obvious that I really should just go speak.

Value of people you spend time with >>> what you do

This one’s important, fairly obvious, and has been well-covered before. I leave it here as a constant reminder, though.

Normative vs descriptive is a difficult yet critical distinction

When discussing anything subtle or controversial it’s easy to get caught up in language traps that fail to distinguish what is from what ought to be. For a rather extreme example, you might say “drugs are natural” as a matter of fact, which is technically true. But everyone assumes you’re asserting that because drugs are natural, they should be used. Clearly separating normative and descriptive statements reduces misunderstanding and clarifies your own thinking.

Hell yes, or no

Econ or game theory nerds would be reminded of the Pareto Principle. My favorite example of this is Warren Buffett’s story about focus. It’s too easy to rationalize distractions as still being productive. But those distractions are not the most long-term productive thing to do.

The evolution of everything

Our cognitive biases are all byproducts of our evolution. You’re probably familiar with the sunk cost fallacy, anchoring, the fundamental attribution error, or zero-sum bias. Some rationalists spend a lot of time studying these biases, but I think it’s extremely difficult to actually put that study to practical use. I prefer to frame the biases in terms of our evolutionary history, which always invokes concrete and relatable examples (our hunter-gatherer ancestors always had to worry about where they’d get their next meal, so risk aversion makes sense in that environment, for instance). Thinking about Darwinian dynamics has probably been my #1 most useful tool for understanding everything — politics, economics, people, morality, etc. Matt Ridley’s book The Evolution of Everything covers this more.

The billboard question

If you had to put a single message on a billboard, what would it say?

This exercise forces you to distill your thoughts to their most concise, elemental forms. Once you’ve simplified your idea to a billboard-sized chunk, it becomes easy to act on and communicate it to others.

As an example: if you could only send one text message to your friends, what would it say? What about a one line email to your employees? Find that thing and act in support of that singular idea.

What would you need to know to make you change your viewpoint?

I believe many people only hold views because they’re stubborn, hyper-partisan, or irrational. This applies to much more than just politics.

So how do you distinguish between an ideologue and someone who just has a strong, reasoned opinion?

Asking somebody about what information would change their mind is an incredibly powerful tool to detect this. If they can’t come up with a reasonable example of opinion-altering data, they almost certainly came to their opinion for non-rigorous reasons. Look for people with a thoughtful answer to that question and learn from them.

Goal setting: trivially easy or impossibly hard

A common piece of productivity and life advice goes something like “set goals you can hit.” It makes sense that you’d be most motivated if your goals are challenging and exciting, but still within reach.

But I think that reasoning is wrong. Goals should be trivially easy or moonshot challenging. In the first case, you’ll have no problem getting tasks done, building momentum, and clearing the path needed to focus on the bigger picture. In the second case, impossible goals remove the stress and pressure to perform. You’re okay taking risks (we’re naturally too risk averse) and more flexible in your approach to the problem.

K-step thinking

This NYT article (also: academic paper) about k-step thinking really changed the game for me when it comes to understanding crowd behavior, games, and the “average user” of a product. In situations where the best course of action is a series of steps or depends on other people’s actions, you’ll have a hard time systematizing what’s going on. But most people only think a few steps ahead. There’s no need to overthink the problem, and a theoretically-correct model is probably wrong in practice.
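K-step (level-k) reasoning is easiest to see in the classic “guess 2/3 of the average” game those pieces discuss. Here’s a minimal sketch, assuming level-0 players guess the midpoint of a 0–100 range and each level-k player best-responds to a crowd of level-(k-1) players (a standard modeling convention, not something the article prescribes):

```python
# Level-k reasoning in the "guess 2/3 of the average" game.
# Level 0 guesses 50; each deeper level guesses 2/3 of what it
# expects the crowd (one level shallower) to guess.
def level_k_guess(k: int, level0: float = 50.0) -> float:
    guess = level0
    for _ in range(k):
        guess *= 2 / 3  # best response to the previous level
    return guess

for k in range(5):
    print(k, round(level_k_guess(k), 1))
```

The guesses shrink toward the game-theoretic equilibrium of 0 (50, then ~33, ~22, ~15, ...), but experiments suggest most people stop after only a few levels, so the fully rational answer loses in practice.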

Is this hard work or something I don’t like? Conversely, is this enjoyable or just easy?

Recently there’s been a lot of discussion surrounding “grit,” success, education, and how you achieve goals. Starting early and working hard is important at the micro-level, but I think that whole mindset loses perspective of the macro-level. Case in point: a significant fraction of college students change majors (estimates vary, but 25%–50% seems to be the right range) and waste time figuring out what they want to do (the how is well known). I believe the what problem is bigger and less acknowledged than the how problem.

Part of what makes discovering what you want to do such a challenge is that exploration is often at odds with rigor (success). When slowly learning things purely out of curiosity, you lose the pace you need to compete. This adds pressure to do both the interest-exploration and rigorous-skill building at the same time. Some things are obviously hard and miserable and you can rule those out. Some are enjoyable, in which case you need to dig deep and make sure you’re in it for the right reasons.

This thinking applies to prioritization too. Is your startup’s current task actually impactful, or do you just want to do it because you’ll feel productive?

Revealed Preferences as a tool for self-reflection

Related to the hard work or something I don’t like question, revealed preferences are a useful tool for understanding the true nature of yourself and others. The theory was originally created by the economist Paul Samuelson to solve the problem that “while utility maximization was not a controversial assumption, the underlying utility functions could not be measured with great certainty. Revealed preference theory was a means to reconcile demand theory by defining utility functions by observing behavior.” The idea is that what people say they want is often not at all what they actually want. This matters a lot for understanding your internal utility function (which defines what you care about and should prioritize).

Thinking empirically about how you spend your time and what historically makes you laugh/love/learn will get you much farther than trying to take a first principles approach to what sorts of things we say we care about. The non-empirical approach makes it easier for the fundamental attribution error to kick in and lets you project what you think you should be rather than what you are.

Punctuated equilibrium

Have you noticed how things seem to stay the same for a long time only to change very suddenly? This is another idea from the world of evolutionary biology. Wikipedia describes it: “most social systems exist in an extended period of stasis, which are later punctuated by sudden shifts in radical change.” Most people understand this idea in terms of technological/scientific revolutions and innovation — somebody builds a new tool that rapidly changes how people operate. But it can be applied more generally to anything operating within a larger environment or dealing with independent agents or incentive structures (politics, management, social group preferences, etc.) Phenomena like changes in political dialogue are often described as trends when I think they’re better conceptualized as punctuated equilibria. It makes it easier to systematize and predict second-order consequences.

Meta-competition as a cause for punctuated equilibrium

There’s an interesting game-theory problem behind each example of punctuated equilibrium in society. In EvBio terms, organisms naturally fit competitive niches, which are often shifted by outside factors, almost like the gas from a popped balloon dissipating to fill its container. But in all the situations relevant to real life, the players are people with biases, unique objectives, and an awareness of what other people are thinking.

My best mental model for understanding this is meta-competition. In many cases, performance in some game matters less than choosing which game you compete in. I found a random blog post that used political conflict as an example: “the solidarity folks want a rivalry with the rivalry folks because they (the solidarity folks) think they can win, but the rivalry folks don’t want a rivalry with the solidarity folks because they (the rivalry folks) think they would lose.”

Remember that structural or environmental changes lead to punctuated equilibrium as actors quickly adapt to fit the new landscape or incentive structure. I think that in a lot of cases (deciding who gets the promotion, or the highest-status date, or the most cultural recognition) the result, given a fixed set of rules and boundaries, would largely be known. So the most effective way to compete is to change the game you’re playing. Since people know what they can win or lose at, they compete over which game is being played, and when the game or its rules change, the equilibrium shifts. A noteworthy corollary relevant to career planning: changing a system can have far more impact on the world than doing anything within a system (sounds a lot like the Silicon Valley ethos!)

XY Problem

Taken from a Stack Exchange post: “The XY problem is asking about your attempted solution rather than your actual problem. That is, you are trying to solve problem X, and you think solution Y would work, but instead of asking about X when you run into trouble, you ask about Y.”

I catch myself doing this all the time. It doesn’t help that we naturally want to show off the progress that we made on something (even if it’s a dead-end) and fix the attempted solution for gratification or to close the learning feedback loop.

Optimize for serendipity

Several of the most valuable opportunities and friendships throughout my life have happened out of pure chance (read more about this in my post here). Notice that this principle is seemingly at odds with the “hell yes, or no” idea. It’s important to make the distinction: maximizing serendipity creates opportunities, and “hell yes, or no” picks the most meaningful ones. Those are two separate, independently necessary steps in the process.

We stop learning and performing when we can’t tell action/decision quality from outcome

VCs often point out that the feedback loops for investments are 10+ years, so it’s hard to learn from your decisions. Less extreme cases pop up in real life all the time. Being more aware of this helps you 1) put feedback loops in place, and 2) put less weight on what you learn from outcomes loosely connected to actions/decisions.

Training behavior: idiosyncrasies and preferences as a social defense

I read a fascinating EvBio article theorizing that we have preferences and idiosyncrasies as a sort of social defense mechanism. Clearly we trust and build relationships with people who spend energy/resources affirming the relationship — like how your closest friends remember to call you on your birthday or reward you by playing your favorite song at a party. The fact that everyone has their own unique and seemingly random preferences ensures that people can only gain your trust by spending the time and energy to learn and then remember your preferences. A social-trust proof-of-work, if you will (deep apologies for the blockchain reference). This helps us consciously contextualize and understand our social priorities and be more deliberate in building relationships with people we care about.

Decision making: reversal and double-reversal

If you haven’t learned by now, Wikipedia articles and papers are usually more articulate than I am: “Reversal Test: When a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias”

“Double Reversal Test: Suppose it is thought that increasing a certain parameter and decreasing it would both have bad overall consequences. Consider a scenario in which a natural factor threatens to move the parameter in one direction and ask whether it would be good to counterbalance this change by an intervention to preserve the status quo. If so, consider a later time when the naturally occurring factor is about to vanish and ask whether it would be a good idea to intervene to reverse the first intervention. If not, then there is a strong prima facie case for thinking that it would be good to make the first intervention even in the absence of the natural countervailing factor.”

This is really, really effective in debates/discussions. A concrete (somewhat straw man) example: many people are strongly against any sort of gene enhancement whether through embryo selection or something like CRISPR (I personally see many unanswered questions on the topic). The argument is usually that it’s unfair to make one person unnaturally smarter than another. The reversal is asking if we should then ban private tutoring or even schools, because a select few with access to those resources are “unnaturally smarter” in all consequential ways. This is clearly at odds with the premise of the default argument against gene enhancement. There are many adjacent and orthogonal reasons to hold a position against enhancement, but the reversal is pretty widely applicable and powerful.

Good-story bias: we’re naturally biased toward thinking of less-likely scenarios that form a story

This one is useful in two ways. First, it’s the base rate fallacy restated in more natural words. When learning through pattern recognition and empiricism, we should try not to be biased by the good stories or outlier data points. Second, storytelling is an incredibly powerful way to influence thinking. Try to tell a story rather than give facts.

Chesterton’s Fence

When trying to change a system or policy it’s easy to find flaws and use those flaws to justify your proposed change. But almost everything was designed intentionally. There is probably a good reason for why something is the way that it is. Before working to change something, spend the time to understand how it was designed in the first place. That process will uncover issues you hadn’t previously considered or will give you further validation for altering the system or policy. This idea is referred to as Chesterton’s Fence. See the Wikipedia article for a history and quick example.

Additive or Ecological?

Any useful technology or policy developments will change user behavior. Making an explicit dichotomy between additive changes (first-order effects only) and ecological changes (higher-order effects are present) makes it easier to choose your decision-making toolkit and weigh factors appropriately.


That’s it for now. Please tell me about your mental models (seriously!) My email and Twitter are on the homepage.

Simpson’s Paradox and Thinking Rationally in Venture Capital

Decision making in venture capital relies heavily on probabilistic thinking and difficult-to-compare historical data. The heuristics are too rough and the feedback loops are too long. Most of the time correlation does not imply causation. You can’t distinguish “A causes B,” “B causes A,” and “C causes both A and B.”

You can get around the correlation vs. causation problem by treating startup success as a function of independent variables (see Leo Polovets’ great post on this). But since most investors assess risk through empirical data and qualitative measures learned through pattern recognition, human biases can easily influence decision making.

Here’s my favorite example, pulled from Michael Nielsen’s excellent post:

Suppose you’re suffering from kidney stones and go to see your doctor. The doctor tells you two treatments are available, treatment A and treatment B. You ask which treatment works better, and the doctor says “Well, a study found that treatment A has a higher probability of success than treatment B.”

You start to say “I’ll take treatment A, thanks!”, when your doctor interrupts: “But the same study also looked to see which treatment worked better, depending on whether patients had large kidney stones or small kidney stones.” You say “Well, do I have large kidney stones or small kidney stones”? As you speak the doctor interrupts again, looking sheepish, and says “Actually, it doesn’t matter. You see, they found that treatment B has a higher probability of success than treatment A, regardless of whether you have large or small kidney stones.”

Take a second to wonder: how is that possible? I was initially stumped, and a couple of brilliant friends of mine couldn’t think of a concrete explanation off the top of their heads. It turns out that this result came from a legitimate real-life study in which the sample sizes of the different groups were not controlled.
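
The arithmetic is worth working through. Here’s a short sketch using the success counts from the 1986 kidney-stone study commonly used to illustrate the paradox, with the treatment labels assigned to match the doctor’s framing above (A looks better overall, yet B wins within both subgroups):

```python
# Success counts (successes, patients) per treatment and stone size,
# from the classic 1986 kidney-stone study, relabeled to match the
# story: A appears better in aggregate, B is better in each subgroup.
data = {
    "A": {"small": (234, 270), "large": (55, 80)},
    "B": {"small": (81, 87), "large": (192, 263)},
}

def success_rate(successes, patients):
    return successes / patients

for treatment, groups in data.items():
    total_successes = sum(s for s, _ in groups.values())
    total_patients = sum(p for _, p in groups.values())
    print(
        f"Treatment {treatment}: "
        f"overall {total_successes / total_patients:.0%}, "
        f"small stones {success_rate(*groups['small']):.0%}, "
        f"large stones {success_rate(*groups['large']):.0%}"
    )
# Treatment A: overall 83%, small stones 87%, large stones 69%
# Treatment B: overall 78%, small stones 93%, large stones 73%
# B beats A for small stones AND for large stones, but A was applied
# mostly to the easier (small-stone) cases, so its aggregate rate wins.
```

The uneven group sizes (270 vs. 87 small-stone patients) are doing all the work: the aggregate comparison silently mixes in who got assigned to which treatment.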

Okay, that makes sense. But the point is that empiricism can easily fail when you treat complex problems as a set of independent variables.

VC is pretty famous for fitting power law distributions and having skewed sample sizes. Replace large/small kidney stones with a startup-relevant category and Treatment A/B with something a startup is doing, and you’ll have a massively uneven set of data points to draw on — this is precisely what opens the door to Simpson’s Paradox.

The question then becomes: what are the most important cases of Simpson’s Paradox in VC? Perhaps large founding teams, or “distracted teams” consisting of university professors fit the bill. There are few examples of this, especially compared to the number of standard 2-3 cofounders we’re used to, so the statistical waters are muddied.

Tomasz Tunguz wrote that this type of thinking can also be applied to finding market opportunities (in 2013 no less — ahead of the game!):

The Berkeley example reminds me of the SpaceX’s formation story Elon Musk shared at the D conference this year. Musk implicitly knew launching satellites into space would be expensive. After all, NASA’s annual budget is about $19B. But when Musk and his team analyzed each cost component of a space launch, they found that less than 10% of the costs were the rocket and the fuel and the launch equipment. This meant Musk could conceivably reduce the costs of space shipping by 80%.

While it’s not a true statistical example of Simpson’s Paradox, the point is the same. The market held a worldview based on aggregate data. But Musk recognized the aggregate space costs didn’t tell the true story. By digging deeper, he and his team found a lurking explanatory variable and an opportunity to disrupt the industry.

I think everyone should read about the common statistical paradoxes and fallacies. An obvious followup post would cover something like Bayes’ Rule in VC. Only one in five doctors correctly answers the linked Probability 101 question related to cancer rates (!!!), and I bet a similar share of investors falls into similar traps.
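
For reference, here’s a sketch of that style of calculation. The inputs are the numbers most published versions of the screening question use (1% prevalence, 80% sensitivity, 9.6% false-positive rate); I’m assuming them for illustration, and they may differ from the linked quiz:

```python
# Bayes' rule for P(disease | positive test), with commonly quoted
# mammography-screening numbers assumed as illustrative inputs.
def posterior(prior, sensitivity, false_positive_rate):
    """Probability of disease given a positive test result."""
    true_positives = prior * sensitivity
    false_positives = (1 - prior) * false_positive_rate
    return true_positives / (true_positives + false_positives)

p = posterior(prior=0.01, sensitivity=0.80, false_positive_rate=0.096)
print(f"P(cancer | positive test) = {p:.1%}")  # ≈ 7.8%, not ~80%
```

The intuition-breaking part is the base rate: true positives are drawn from the 1% who are sick, while false positives are drawn from the 99% who aren’t, so the latter dominate.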

What do you optimize for?

Advice depends on context, assumptions, and what you’re trying to optimize for. Much of it boils down to “this is what worked for me, so take it with a grain of salt and try to calibrate it for you.” Useful advice either tries to account for differences between people or includes generalizable principles for how you should think about something.

People will try to make these adjustments by saying something like “focus on what you like the most and are best at.” Although I think that spirit is right, it frames the problem in a counterproductive way. “Focus” is often understood as “follow a set plan towards this goal and do what you think you should be doing to succeed.”

I think this is the wrong type of optimization.

There are two things I optimize for which I’d like to explore here: interestingness and serendipity.

First, “interesting.” You may have noticed that what’s interesting to you has changed over time. Why is that? Keep in mind that interests are distinct from talents. Do interests change for the same reason that mathematically-inclined minds tend to be interested in formal logic but not painting, while artistically-inclined minds tend to be interested in Broadway but not software engineering? Is what we’re generally curious about related to things that give us happy, fulfilling lives?

My take on this is that “interesting” is a heuristic for all of the things we need: usefulness, novelty, personal fulfillment, etc. Not only do interests change over time, we also seem to jump between intense fixations, bingeing on something until we suddenly stop caring about it. Our brains need some way to choose what to learn about. This idea has been studied in a more in-depth and rigorous way than I’ll argue here — check out this paper for instance.

You can probably relate to this real-life anecdote: in high-pressure situations, I’m intensely interested in specific problem-related information and career-focused things like how to deploy code with Docker to save me time. It’s critical to note that I’m genuinely interested in that sort of stuff and don’t explore it for external reasons. I’m just inexplicably more curious in those moments. When I have more free time and no responsibilities, however, I find myself thinking much more about food, politics, my next workout, philosophy, music, or stand-up comedy. All things that don’t accomplish any specific goal but still enrich me as a person (a fact my humanities professors are always so ready to remind me of!)

The point is, interests aren’t just a luxury. They serve a useful purpose that you should consciously consider when organizing your life.

Second, “serendipity.” This is a major theme of Reid Hoffman’s The Startup of You  and Marc Andreessen’s career guide. The thinking is that breakthrough opportunities usually present themselves through random chance. Maybe you happen to stumble across the right problem at the right time and think “hmm, why hasn’t anyone solved it this other way?” or your friends decide to go to Denny’s at 3am to discuss a business idea. Seemingly small and innocuous moments lead to truly exciting opportunities.

I’ve already directly observed this in my own limited experience. I attribute pretty much every major success of mine to pure luck (with the prerequisite of working hard to pounce on an opportunity when I see it):

  • 100k-download Android app: I spent months toying around with different programming tools and just happened to stumble upon a great tutorial and a project idea I liked. I also randomly played around with a bunch of different marketing techniques for fun. Only one of many happened to stick, and things naturally snowballed from there. It was a low-quality app, but I managed to kill it because I had failed at plenty of other projects before it.
  • College: originally I could only see myself at some “elite coastal school” (staying in the Midwest, I’ve since been telling myself it’s more like “coastal-elite school.” Hah!), but luckily I decided to apply to a bunch of schools I didn’t really care about. If I hadn’t gone out of my way to stir up some random luck, I wouldn’t have gotten the offer from UIUC that put me in a top CS program and saved me a quarter-million in tuition over my next-choice option.
  • Contrary: I randomly saw a Facebook post and decided to cold email Eric. If I hadn’t been on my phone that night, or if I hadn’t decided to spontaneously write a note, I wouldn’t have gotten the amazing and humbling chance to help build a venture fund.
  • Friends: One year in college I went on a trip to Silicon Valley organized by my school. I wasn’t super excited and had actually turned down the chance to go the year before, but I decided I could use a little more serendipity in my life. There I made a great friend. Through her, I made some more friends. One of them became my roommate for the next few years. He introduced me to many other cool people. Again, the original trip was just serendipity at work — there was no goal or process involved, but super valuable relationships grew out of it.

I’m sure everyone has similar stories of pure chance turning into something incredibly meaningful. Yet most people would probably have taken the above examples and focused on some sort of process or execution that made the most of the opportunities.

Think of it this way: we spend most of our lives doing things. Working towards goals. Learning. Talking. We do a great job of carrying out whatever it is that we’re trying to optimize for. We really don’t give ourselves enough credit. These two heuristics help you broaden the opportunities you come across and choose which ones matter most. That’s at least half the challenge — the rest comes naturally.