Concise Ideas are One-Dimensional

Distill an idea to the most concise and clear form you can to make it memorable. 280 characters if possible.

Luckily, some of the tweets, headlines, and soundbites we come across carry wisdom, or at least nudge our headspace toward a new idea. This makes it too easy to forget that most things we talk about fall on a spectrum or have extra dimensions, especially when maximizing viewership is so valuable for content creators.

Some of these are pretty straightforward. The value of “deep work” has been ingrained into our heads by the latest trends in business writing. On the other hand, several people I know, online and in person, have said that the most effective people they know are all super responsive through email, text, and over the phone. So clearly you can be successful in both modes. What gives?

Taking a moment to think about it, you’ll realize that you don’t have to choose one. Block out an afternoon to dive into your work, then be obsessed with the outside world for the other hours in the day. Both of these techniques are complementary parts of a toolkit, not separate virtues you should aim for.

But you understand that already. The real argument I’m making is that it’s critically important to constantly reconsider the implications of our proverbs. Here’s one example (of many) showing why this can matter so much:

Humbleness and modesty are traits that usually come up when someone is being complimented. They’re great traits to have, and everyone clearly benefits when we all treat each other as equally capable and deserving peers. On the dark side of modesty, however, is imposter syndrome. (Which, by the way, disproportionately affects those from underrepresented groups!) I think that by asserting the ultimate value of modesty in our bite-size thoughts, we impose a big mental and emotional barrier on people who shouldn’t act that way all of the time.

It can be incredibly useful to feel like you’re bad at something and have to improve ASAP. It motivates you to dive into the nitty-gritty details and be a sponge, at the cost of self-esteem. Likewise, a sense of overconfidence can help you overcome risk-aversion, lead people, and sell, but at the cost of having an open mind.

Most worrying to me is that people from atypical backgrounds have a stronger need to recognize and act on that duality.

As someone who’s never had trouble fitting right into the tech startup world, I have the luxury of not needing to project any sort of confidence; I can just default to whatever mood fits the situation best (usually a feeling of being humbled by the many brilliant people out there!). But anyone who’s part of an out-group faces a difficult tradeoff: using brazen confidence as a tool to validate themselves with the in-group will make them feel guilty over their immodesty.

Marketing “humbleness” or “confidence” as objectively desirable qualities misses the point. You can have moods where nobody can stop you, and moods where you’re still pulling yourself up by your bootstraps. They’re both tremendously useful tools at your disposal, and you don’t need to stick with one or the other. Everything has a flip side that can be useful, as long as you can keep the balance.

In summary: most things are spectra, not poles, and most traits are complementary dimensions, not mutually exclusive choices.

Thanks to Niraj for feedback. Subscribe to not miss any future posts!

Psychology in Product and Sales

I’m experimenting with a new blog post format. Oftentimes I’ll read a multi-paragraph essay and feel frustrated because it could have been condensed into a series of bullet points. So that’s what I’ve made here. Let me know what you think; hopefully the concepts will be intuitive and this bullet-style list will enumerate relevant ideas and examples. This is a list of principles of psychology in product and sales. (I’ve been reading Robert Cialdini and Daniel Kahneman recently!)


  • Signaling
    • Doubling the price on jewelry signals quality, so people will buy more of the same good if it’s priced higher. This is the opposite of what you’d expect.
  • Reciprocation 
    • “Take this thing, no-strings-attached” creates a feeling of debt and favor.
    • Hare Krishnas greatly increased their fundraising returns by handing out roses for free at airports.
    • Putting a sticky note in a mailed survey request will greatly increase response volume/quality. Response is even better if the note is handwritten.
  • Concession
    • Related to anchoring: after turning down a big request, people often feel bad or indebted, which makes them more likely to grant a smaller follow-up ask.
    • Salespeople start with a big ask for making a purchase but plan on it failing, then say something like “Okay, would you at least be able to give me referrals to three friends who would find this product useful?”
  • Commitment 
    • Having people say they’re in support of something ahead of time (even days or longer) makes a future ask much more successful.
    • The canonical example is political campaigns asking people days before an election, “Will you vote?” People tend to overcommit and say yes. Then when election time comes, they’ll actually vote to stay true to their word.
    • Once someone goes to the bathroom in a new house or says they’ll buy a car, they’ve already made a decision in their head.
      • Salespeople know this, and will look for signs of mental commitment before jacking up prices.
  • Group initiation 
    • Soldiers go through bootcamp, frat boys haze, and Catholics baptize. Initiation builds critical bonds, and the more intensive/costly the initiation is, the stronger the effect.
    • Products like Stack Exchange make you take steps (earn some amount of reputation, in this case) before becoming a part of the community and having full access to the product.
  • Publicity effect
    • If somebody makes a statement publicly, they’ll think the statement is true even if they’d otherwise rationally find it to be false. A sales tactic would be to get someone to say out loud that they have a need for the product.
    • Corollary: be reluctant to publicly share works in progress, since doing so creates biases for yourself.
    • If you can get a user to somehow indicate that they use your product (to other people, online, or by having some sort of public profile), they’re much less likely to churn.
  • Internal vs external beliefs
    • Canonical example: an experiment where kids were left alone in a room with a bunch of lame toys and one cool robot toy. They were told not to play with the robot, and then the experimenter left the room.
      • Kids played with the robot if they were told it was wrong and they’d be punished (even though they couldn’t be caught since they were alone in the room)
      • Kids didn’t play with the robot if they were simply told it was wrong
      • People can blame bad external rules for their behavior, but if there’s no punishment, breaking the rule means doing something only a Bad Person™ would do.
    • This backs the socially positive slant that companies like Patagonia or Lyft build their value props on.
  • Inner circles
    • This is related to the group initiation topic. Being in an Inner Circle makes the product much more sticky and drives engagement from users within it.
      • This is particularly important in products where a small group of power users greatly influence the direction and quality of the product.
    • Examples: Reddit’s gilded club, Quora’s Top Writers
    • Inner Circles can come in many layers.
      • Some startups have tried to create multi-functional social platforms (meeting new people, messaging friends, etc.)
      • But people use these layers to clearly define the relationship: coworkers use LinkedIn, friends/acquaintances use FB Messenger or GroupMe, and close friends use phone numbers/iMessage. This removes ambiguity and says “we’re friends because we use this medium reserved for friends of only this type”
  • Risk aversion
    • People hate losses more than they like gains.
    • “This offer is only open for a limited time!”
    • “The special edition only has 100 copies”
    • “Thanks for joining, here are 50 in-game coins to get started!” (you’d give up this arbitrary freebie if you stopped playing the game)
  • Moral-threat vs consequence-threat
    • People don’t mind taking risks if the expected cost of the consequence is low.
    • But not imposing any punishment shifts the act to a social-signalling/moral burden (rather than a financial one) which has much higher intangible costs and an unlimited downside.
    • Canonical example: a daycare had lots of late child pickups so they started charging $5 each time that happened. Parents were late more often since they had an easy out to their lateness which was simply paying the five bucks.
  • Having an excuse 
    • 6-8% of Gerber baby food is consumed by people who aren’t babies. Gerber actually tried marketing a product specifically for seniors but it failed. People didn’t want to admit they needed that sort of food, so they stuck with the baby product (plausible deniability — lots of seniors have grandkids!)
    • Most hookup apps market themselves as dating apps. While many users are actually focused on dating, nobody wants to tell others they’re only looking for hookups.
  • Anchoring
    • This effect is pretty well known.
    • I was chatting with a guy in SF who was asking for donations for a hip-hop related community org. He challenged me to donate $100 which was crazy, and I ended up donating $10 which in hindsight was twice what I’d otherwise choose to donate.
  • Self consistency
    • People have a need to be self-consistent in their beliefs and actions.
    • The question “why do you want this job?” is also a sales tactic. The candidate will be forced to articulate good reasons out of politeness – and the desire for internal consistency will make them believe these reasons. (source)
    • Unethical example: if you conduct a fake survey about lifestyle, people will hype up and inflate their lifestyle to create a compelling narrative about themself. If you follow that with an expensive ask that would validate that lifestyle, they’ll often go along to not sound self-contradictory.
      • It wouldn’t make sense to say, “Yeah, I travel all the time, but this packaged travel money-saving deal isn’t something I want.”
  • Social proof and social pressure
    • Tip jars are “seeded” to give the appearance that many other people tip.
    • Some products with FB login will show you that your friends use it too.
    • Google Glass became associated with “glasshole” nerds, but Snap Spectacles marketed with attractive and well-rounded models from the start.
    • The “endless chain”: you make a sale, then go to the customer’s friend and say, “Your friend John recommended this for you.” Now refusing means turning down a friend instead of a salesman.
  • Liking
    • Being attractive, personal and cultural similarity, giving compliments, contact & co-operation, conditioning, and association with positive ideas all make people much more open to trying a product or buying something.
    • GitHub’s Octocat is a friendly and fun mascot which users like and build an attachment to.
  • Authority
    • This one is obvious. Companies plug high-profile clients whenever possible.
    • Twitter has the blue checkmark to make users feel like they’re getting higher quality information from those people through the platform.
  • Scarcity
    • Robinhood’s famous growth hack where you needed to refer people to move up a spot in the waiting list. Access to the early product was scarce.
    • New coke vs old coke
      • In the 80s, Coca-Cola tried changing the Coke recipe because the new formula had done better in blind taste tests with consumers. But people rejected New Coke because Old Coke was suddenly scarce, and people wanted to keep what they knew.
    • It’s much stronger to say “you’re losing X per month” than “you can save X per month.”
  • FOMO and security
    • Uber guaranteeing people an arrival time increases the number of rides, since people feel the security associated with having an upper bound.
    • GroupMe SMS’d people who didn’t have the app. This made them feel like their friends were on the app but they weren’t. (Houseparty makes it easy to inspire FOMO with SMS too).

Decision Making and Mental Models

As I’ve spent more and more time reading Slate Star Codex, Less Wrong, Julia Galef, Farnam Street, and Charlie Munger, I’ve realized how useful it is to write down my mental models and decision making tools. Explicitly outlining them makes it easy to remember each perspective of a problem. Given something in particular I’m thinking about, I can just go down the list and see how each lens shapes my thoughts.

This is a general list of guiding principles and questions to ask myself when making a tricky decision or doing something new. I’d love to hear any comments and suggestions!


Is this something I’d regret doing (or not) in the future?

Jeff Bezos has cited regret minimization as one of the reasons he started Amazon:

”I wanted to project myself forward to age 80 and say, ‘Okay, now I’m looking back on my life. I want to have minimized the number of regrets I have,’” explains Bezos. “I knew that when I was 80 I was not going to regret having tried this. I was not going to regret trying to participate in this thing called the Internet that I thought was going to be a really big deal. I knew that if I failed I wouldn’t regret that, but I knew the one thing I might regret is not ever having tried. I knew that that would haunt me every day, and so, when I thought about it that way it was an incredibly easy decision.”

This is so useful because it applies to so many different types of decisions and it’s particularly powerful with qualitative and personal problems.

Is this the right time to do this?

We naturally think about the ‘what’ and ‘how’ of a decision, but the ‘when’ and ‘why’ are equally important. If you realize that you should do something, it’s easy to think that you need to do it now, even if some other time would be better.

What routine am I forming?

The Power of Habit is one of my favorite books. Think of habit forming as analogous to compound interest for investors.
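As a rough sketch of the analogy (the numbers here are my own illustration, not anything from the book), even a tiny daily change compounds dramatically over a year:

```python
# Illustrative arithmetic: small daily changes compound like interest.
def compound(daily_rate: float, days: int = 365) -> float:
    """Multiplier after `days` of compounding at `daily_rate` per day."""
    return (1 + daily_rate) ** days

# Getting 1% better every day for a year:
print(round(compound(0.01), 1))   # roughly 37.8x
# Getting 1% worse every day for a year:
print(round(compound(-0.01), 2))  # roughly 0.03x
```

The asymmetry is the point: a habit is a rate, not an amount, so its long-run effect is wildly disproportionate to how it feels on any single day.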

Beliefs and opinions are agents of the mind: job-based thinking

I’m a fan of Clay Christensen’s milkshake story, which suggests thinking about “jobs to be done” to understand why people buy products and services. This mental model is useful for inspecting your own beliefs and opinions. Given an arbitrary feeling, why is that how you feel? I’ll often think some public speaking task isn’t worth it — even when it clearly is — just because I still get nervous sometimes when talking in front of a crowd. Asking myself what job that reluctance fulfills for my mind (avoiding something uncomfortable) makes it obvious that I really should just go speak.

Value of people you spend time with >>> what you do

This one’s important, fairly obvious, and has been well-covered before. I leave it here as a constant reminder, though.

Normative vs descriptive is a difficult yet critical distinction

When discussing anything subtle or controversial it’s easy to get caught up in language traps that fail to distinguish what is from what ought to be. For a rather extreme example, you might say “drugs are natural” as a matter of fact, which is technically true. But everyone assumes you’re asserting that because drugs are natural, they should be used. Clearly separating normative and descriptive statements reduces misunderstanding and clarifies your own thinking.

Hell yes, or no

Econ or game theory nerds would be reminded of the Pareto Principle. My favorite example of this is Warren Buffett’s story about focus. It’s too easy to rationalize distractions as still being productive. But those distractions are not the most long-term productive thing to do.

The evolution of everything

The cognitive biases are all byproducts of our evolution. You’re probably familiar with the sunk cost fallacy, anchoring, the fundamental attribution error, or zero-sum bias. Some rationalists spend a lot of time studying the cognitive biases, but I think it’s extremely difficult to actually put them to practical use. I prefer to frame the biases in terms of our evolutionary history, which invokes concrete and relatable examples (our hunter-gatherer ancestors always had to worry about where they’d get their next meal, so a bias toward risk aversion makes sense in that society, for instance). Thinking about Darwinian dynamics has probably been my #1 most useful tool for understanding everything — politics, economics, people, morality, etc. Matt Ridley’s book The Evolution of Everything covers this more.

The billboard question

If you had to put a single message on a billboard, what would it say?

This exercise forces you to distill your thoughts to their most concise, elemental forms. Once you’ve simplified your idea to a billboard-sized chunk, it becomes easy to act on and communicate it to others.

As an example: if you could only send one text message to your friends, what would it say? What about a one line email to your employees? Find that thing and act in support of that singular idea.

What would you need to know to make you change your viewpoint?

I believe many people only hold views because they’re stubborn, hyper-partisan, or irrational. This applies to much more than just politics.

So how do you distinguish between an ideologue and someone who just has a strong, reasoned opinion?

Asking somebody about what information would change their mind is an incredibly powerful tool to detect this. If they can’t come up with a reasonable example of opinion-altering data, they almost certainly came to their opinion for non-rigorous reasons. Look for people with a thoughtful answer to that question and learn from them.

Goal setting: trivially easy or impossibly hard

A common piece of productivity and life advice goes something like “set goals you can hit.” It makes sense that you’d be most motivated if your goals are challenging and exciting, but still within reach.

But I think that reasoning is wrong. Goals should be trivially easy or moonshot challenging. In the first case, you’ll have no problem getting tasks done, building momentum, and clearing the path needed to focus on the bigger picture. In the second case, impossible goals remove the stress and pressure to perform. You’re okay taking risks (we’re naturally too risk averse) and more flexible in your approach to the problem.

K-step thinking

This NYT article (also: academic paper) about k-step thinking really changed the game for me when it comes to understanding crowd behavior, games, and the “average user” of a product. In situations where the best course of action is a series of steps or depends on other people’s actions, you’ll have a hard time systematizing what’s going on. But most people only think a few steps ahead. There’s no need to overthink the problem, and a theoretically-correct model is probably wrong in practice.
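A minimal sketch of the idea, using the “guess 2/3 of the average” game that the k-step literature studies (the assumption that level-0 players guess the midpoint, 50, is the standard modeling convention, not something from the article):

```python
# Level-k reasoning in the "guess 2/3 of the average" game.
# A level-0 player guesses the midpoint of [0, 100]; each level-k
# player best-responds to a crowd of level-(k-1) players, i.e.
# guesses 2/3 of their guess.
def level_k_guess(k: int, level0: float = 50.0) -> float:
    guess = level0
    for _ in range(k):
        guess *= 2 / 3
    return guess

for k in range(5):
    print(k, round(level_k_guess(k), 1))  # 50.0, 33.3, 22.2, 14.8, 9.9
```

The guesses shrink toward the Nash equilibrium of 0, but real crowds average only one to two steps of reasoning, so the theoretically “correct” answer of 0 loses in practice — exactly the gap between a rational model and the average user.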

Is this hard work or something I don’t like? Conversely, is this enjoyable or just easy?

Recently there’s been a lot of discussion surrounding “grit,” success, education, and how you achieve goals. Starting early and working hard is important at the micro-level, but I think that whole mindset loses perspective of the macro-level. Case in point: a significant fraction of college students change majors (estimates vary, but 25%–50% seems to be the right range) and waste time figuring out what they want to do (the how is well known). I believe the what problem is bigger and less acknowledged than the how problem.

Part of what makes discovering what you want to do such a challenge is that exploration is often at odds with rigor (success). When slowly learning things purely out of curiosity, you lose the pace you need to compete. This adds pressure to do both the interest-exploration and rigorous-skill building at the same time. Some things are obviously hard and miserable and you can rule those out. Some are enjoyable, in which case you need to dig deep and make sure you’re in it for the right reasons.

This thinking applies to prioritization too. Is your startup’s current task actually impactful, or do you just want to do it because you’ll feel productive?

Revealed Preferences as a tool for self-reflection

Related to the hard work or something I don’t like question, revealed preferences are a useful tool for understanding the true nature of yourself and others. The theory was originally created by an economist to solve the problem that “while utility maximization was not a controversial assumption, the underlying utility functions could not be measured with great certainty. Revealed preference theory was a means to reconcile demand theory by defining utility functions by observing behavior.” The idea is that what people say they want is often not at all what they actually want. This matters a lot for understanding your internal utility function (which defines what you care about and should prioritize).

Thinking empirically about how you spend your time and what historically makes you laugh/love/learn will get you much farther than trying to take a first principles approach to what sorts of things we say we care about. The non-empirical approach makes it easier for the fundamental attribution error to kick in and lets you project what you think you should be rather than what you are.

Punctuated equilibrium

Have you noticed how things seem to stay the same for a long time only to change very suddenly? This is another idea from the world of evolutionary biology. Wikipedia describes it: “most social systems exist in an extended period of stasis, which are later punctuated by sudden shifts in radical change.” Most people understand this idea in terms of technological/scientific revolutions and innovation — somebody builds a new tool that rapidly changes how people operate. But it can be applied more generally to anything operating within a larger environment or dealing with independent agents or incentive structures (politics, management, social group preferences, etc.) Phenomena like changes in political dialogue are often described as trends when I think they’re better conceptualized as punctuated equilibria. It makes it easier to systematize and predict second-order consequences.

Meta-competition as a cause for punctuated equilibrium

There’s an interesting game-theory problem for each example of punctuated equilibrium in society. In EvBio terms it can be explained that organisms naturally fit competitive niches which are often shifted by outside factors, almost like the gas from a popped balloon dissipating to fill its container. But in all the situations relevant to real life, the players are people with biases, unique objectives, and an awareness of what other people are thinking.

My best mental model for understanding this is meta-competition. In many cases, performance in some game matters less than choosing which game you compete in. I found a random blog post that used political conflict as an example: “the solidarity folks want a rivalry with the rivalry folks because they (the solidarity folks) think they can win, but the rivalry folks don’t want a rivalry with the solidarity folks because they (the rivalry folks) think they would lose.”

Remember that structural or environmental changes lead to punctuated equilibrium as actors quickly adapt to fit the new landscape or incentive structure. I think that in a lot of cases (deciding who gets the promotion, or the highest-status date, or the most cultural recognition) the result under a given set of rules and boundaries is largely known. So the most effective way to compete is to change the game you’re playing. Since people know what they can win or lose at, they compete over which game is being played, and when the game or rules change, the equilibrium shifts. A noteworthy corollary relevant to career planning: changing a system can have far more impact on the world than doing anything within a system (sounds a lot like Silicon Valley ethos!)

XY Problem

Taken from a Stack Exchange post: “The XY problem is asking about your attempted solution rather than your actual problem. That is, you are trying to solve problem X, and you think solution Y would work, but instead of asking about X when you run into trouble, you ask about Y.”

I catch myself doing this all the time. It doesn’t help that we naturally want to show off the progress that we made on something (even if it’s a dead-end) and fix the attempted solution for gratification or to close the learning feedback loop.

Optimize for serendipity

Several of the most valuable opportunities and friendships throughout my life have happened out of pure chance (read more about this in my post here). Notice that this principle is seemingly at odds with the “hell yes, or no” idea. It’s important to make the distinction: maximizing serendipity creates opportunities, and “hell yes, or no” picks the most meaningful ones. Those are two separate, independently necessary steps in the process.

We stop learning and performing when we can’t tell action/decision quality from outcome

VCs often point out that the feedback loops for investments are 10+ years so it’s hard to learn from your decisions. Less extreme cases pop up in real-life all the time. Being more aware of this helps you 1) put feedback loops in place, and 2) put less weight on what you learn from outcomes loosely connected to actions/decisions.

Training behavior: idiosyncrasies and preferences as a defense

I read a fascinating EvBio article theorizing that we have preferences and idiosyncrasies as a sort of social defense mechanism. Clearly we trust and build relationships with people who spend energy and resources affirming the relationship — like how your closest friends remember to call you on your birthday or reward you by playing your favorite song at a party. The fact that everyone has their own unique and seemingly random preferences ensures that people can only gain your trust by spending the time and energy to learn and then remember those preferences. A social-trust proof-of-work, if you will (deep apologies for the blockchain reference). This helps us consciously contextualize our social priorities and be more deliberate in building relationships with people we care about.

Decision making: reversal and double-reversal

If you haven’t learned by now, Wikipedia articles and papers are usually more articulate than I am: “Reversal Test: When a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias.”

“Double Reversal Test: Suppose it is thought that increasing a certain parameter and decreasing it would both have bad overall consequences. Consider a scenario in which a natural factor threatens to move the parameter in one direction and ask whether it would be good to counterbalance this change by an intervention to preserve the status quo. If so, consider a later time when the naturally occurring factor is about to vanish and ask whether it would be a good idea to intervene to reverse the first intervention. If not, then there is a strong prima facie case for thinking that it would be good to make the first intervention even in the absence of the natural countervailing factor.”

This is really, really effective in debates/discussions. A concrete (somewhat straw man) example: many people are strongly against any sort of gene enhancement whether through embryo selection or something like CRISPR (I personally see many unanswered questions on the topic). The argument is usually that it’s unfair to make one person unnaturally smarter than another. The reversal is asking if we should then ban private tutoring or even schools, because a select few with access to those resources are “unnaturally smarter” in all consequential ways. This is clearly at odds with the premise of the default argument against gene enhancement. There are many adjacent and orthogonal reasons to hold a position against enhancement, but the reversal is pretty widely applicable and powerful.

Good-story bias. We naturally bias towards thinking of less-likely scenarios that form a story

This one is useful in two ways. First, it’s the base rate fallacy restated in more natural words. When learning through pattern recognition and empiricism, we should try not to be biased by the good stories or outlier data points. Second, storytelling is an incredibly powerful way to influence thinking. Try to tell a story rather than give facts.

Chesterton’s Fence

When trying to change a system or policy it’s easy to find flaws and use those flaws to justify your proposed change. But almost everything was designed intentionally. There is probably a good reason for why something is the way that it is. Before working to change something, spend the time to understand how it was designed in the first place. That process will uncover issues you hadn’t previously considered or will give you further validation for altering the system or policy. This idea is referred to as Chesterton’s Fence. See the Wikipedia article for a history and quick example.

Additive or Ecological?

Any useful technology or policy developments will change user behavior. Making an explicit dichotomy between additive changes (first-order effects only) and ecological changes (higher-order effects are present) makes it easier to choose your decision-making toolkit and weigh factors appropriately.


That’s it for now. Please tell me about your mental models (seriously!) My email and Twitter are on the homepage.

Simpson’s Paradox and Thinking Rationally in Venture Capital

Decision making in venture capital relies heavily on probabilistic thinking and difficult-to-compare historical data. The heuristics are too rough and the feedback loops are too long. Most of the time correlation does not imply causation. You can’t distinguish “A causes B,” “B causes A,” and “C causes both A and B.”

You can get around the correlation vs. causation problem by treating startup success as a function of independent variables (see Leo Polovets’ great post on this). Since most investors assess risk through empirical data and qualitative measures learned through pattern recognition, human biases can easily influence decision making.

Here’s my favorite example which is pulled from Michael Nielsen’s excellent post:

Suppose you’re suffering from kidney stones and go to see your doctor. The doctor tells you two treatments are available, treatment A and treatment B. You ask which treatment works better, and the doctor says “Well, a study found that treatment A has a higher probability of success than treatment B.”

You start to say “I’ll take treatment A, thanks!”, when your doctor interrupts: “But the same study also looked to see which treatment worked better, depending on whether patients had large kidney stones or small kidney stones.” You say “Well, do I have large kidney stones or small kidney stones?” As you speak the doctor interrupts again, looking sheepish, and says “Actually, it doesn’t matter. You see, they found that treatment B has a higher probability of success than treatment A, regardless of whether you have large or small kidney stones.”

Take a second to wonder: how is that possible? I was initially stumped, and a couple of brilliant friends of mine couldn’t think of a concrete explanation off the top of their heads. It turns out that this result came from a legitimate real-life study in which the sample sizes of the different groups were not controlled.
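To make the arithmetic concrete, here’s a sketch with illustrative numbers. They mirror the uneven group sizes of the commonly cited kidney stone study, relabeled to match the story above, so treat them as assumptions rather than the study’s exact figures:

```python
# (successes, trials) per treatment and stone size -- illustrative
# numbers with deliberately uneven group sizes.
data = {
    "A": {"small": (234, 270), "large": (55, 80)},
    "B": {"small": (81, 87),   "large": (192, 263)},
}

for treatment, groups in data.items():
    wins = sum(w for w, _ in groups.values())
    total = sum(t for _, t in groups.values())
    rates = {g: f"{w / t:.1%}" for g, (w, t) in groups.items()}
    print(treatment, rates, "overall:", f"{wins / total:.1%}")

# B wins within *both* subgroups (93.1% > 86.7% and 73.0% > 68.8%),
# yet A wins overall (82.6% > 78.0%): A was mostly given the easy
# small-stone cases, so its aggregate rate is inflated.
```

The paradox lives entirely in the lopsided denominators: which patients each treatment was assigned matters more than how well it performed.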

Okay, that makes sense. But the point is that empiricism can easily fail when you treat complex problems as a set of independent variables.

VC is pretty famous for fitting power law distributions and having skewed sample sizes. Replace large/small kidney stones with a startup-relevant category and Treatment A/B with something a startup is doing, and you’ll have a massively uneven set of data points to draw from — this is precisely what opens the door to Simpson’s Paradox.

The question then becomes: what are the most important cases of Simpson’s Paradox in VC? Perhaps large founding teams, or “distracted teams” consisting of university professors, fit the bill. There are few examples of these, especially compared to the standard 2-3 cofounder teams we’re used to, so the statistical waters are muddied.

Tomasz Tunguz wrote that this type of thinking can also be applied to finding market opportunities (in 2013 no less — ahead of the game!):

The Berkeley example reminds me of the SpaceX formation story Elon Musk shared at the D conference this year. Musk implicitly knew launching satellites into space would be expensive. After all, NASA’s annual budget is about $19B. But when Musk and his team analyzed each cost component of a space launch, they found that less than 10% of the costs were the rocket and the fuel and the launch equipment. This meant Musk could conceivably reduce the costs of space shipping by 80%.

While it’s not a true statistical example of Simpson’s Paradox, the point is the same. The market held a worldview based on aggregate data. But Musk recognized the aggregate space costs didn’t tell the true story. By digging deeper, he and his team found a lurking explanatory variable and an opportunity to disrupt the industry.

I think everyone should read about the common statistical paradoxes and fallacies. An obvious followup post would cover something like Bayes’ Rule in VC. Only one in five doctors correctly answers the linked Probability 101 question related to cancer rates (!!!) and I bet just as many investors fall into similar traps.
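As a teaser, the cancer-rate question is a base-rate problem, and Bayes’ Rule settles it in a few lines. Here’s a sketch using the standard teaching numbers for a screening test (my illustrative assumptions, not figures from the linked question):

```python
# Assumed numbers for illustration: 1% of patients have the disease,
# the test catches 80% of true cases, and it falsely flags 9.6% of
# healthy patients. Given a positive test, what's P(disease)?
prevalence = 0.01
sensitivity = 0.80       # P(positive | disease)
false_positive = 0.096   # P(positive | no disease)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"{p_disease_given_positive:.1%}")  # roughly 7.8%, not ~80%
```

The intuitive (and wrong) answer anchors on the 80% sensitivity; the correct answer is dragged down by the 99% of patients who don’t have the disease but occasionally test positive anyway.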

What do you optimize for?

Advice depends on context, assumptions, and what you’re trying to optimize for. Much of it boils down to “this is what worked for me so take it with a grain of salt and try to calibrate it for you.” Useful advice either tries to account for differences between people or include generalizable principles for how you should think about something.

People try to make these adjustments by saying something like “focus on what you like the most and are best at.” Although I think that spirit is right, it frames the problem in a counterproductive way. “Focus” is often understood as “follow a set plan towards this goal and do what you think you should be doing to succeed.”

I think this is the wrong type of optimization.

There are two things I optimize for which I’d like to explore here: interestingness and serendipity.

First, “interesting.” You may have noticed that what’s interesting to you has changed over time. Why is that? Keep in mind that interests are distinct from talents. Do interests change for the same reason that mathematically-inclined minds tend to be interested in formal logic but not painting, while artistically-inclined minds tend to be interested in Broadway but not software engineering? Is what we’re generally curious about related to things that give us happy fulfilling lives?

My take on this is that “interesting” is a heuristic for all of the things we need: usefulness, novelty, personal fulfillment, etc. Not only do interests change over time, we seem to jump between intense focuses and binge something until we suddenly stop caring about it. Our brains need some way to choose what to learn about. This idea has been studied in a more in-depth and rigorous way than I’ll argue here — check out this paper for instance.

You can probably relate to this real-life anecdote: in high-pressure situations, I’m intensely interested in specific problem-related information and career-focused things like how to deploy code with Docker to save me time. It’s critical to note that I’m genuinely interested in that sort of stuff and I don’t explore it for external reasons. I’m just inexplicably more curious in that moment. When I have more free time and no responsibilities, however, I find myself thinking much more about food, politics, my next workout, philosophy, music, or stand-up comedy. All things that don’t accomplish any specific goal but still enrich me as a person (a fact my humanities professors are always so ready to remind me of!).

The point is, interests aren’t just a luxury. They serve a useful purpose that you should consciously consider when organizing your life.

Second, “serendipity.” This is a major theme of Reid Hoffman’s The Startup of You and Marc Andreessen’s career guide. The thinking is that breakthrough opportunities usually present themselves through random chance. Maybe you happen to stumble across the right problem at the right time and think “hmm, why hasn’t anyone solved it this other way?” or your friends decide to go to Denny’s at 3am to discuss a business idea. Seemingly small and innocuous moments lead to truly exciting opportunities.

I’ve already directly observed this in my own limited experience. I attribute pretty much every major success of mine to pure luck (with the prerequisite of working hard to pounce on an opportunity when I see it):

  • 100k-download Android app: I spent months toying around with different programming tools and just happened to stumble upon a great tutorial and project idea I liked. I also randomly played around with a bunch of different marketing techniques for fun. Only one of many happened to stick and things just naturally snowballed from there. It was a low-quality app, and I only happened to succeed with it because I had failed at plenty of other projects before it.
  • College: originally I could only see myself at some “elite coastal school” (staying in the Midwest, I’ve since been telling myself it’s more like “coastal-elite school.” Hah!) but luckily I decided to serendipitously apply to a bunch of schools I didn’t really care about. If I hadn’t gone out of my way to stir up some random luck, I wouldn’t have gotten an offer from UIUC that put me in a top CS program and saved me a quarter-million in tuition over my next-choice option.
  • Contrary Capital: I randomly saw a Facebook post and decided to send a cold email. If I hadn’t been on my phone that night or if I hadn’t decided to spontaneously email the founder, I wouldn’t have gotten the amazing and humbling chance to help lead a venture fund.
  • Friends: My sophomore year I went on a trip to Silicon Valley organized by my school. I wasn’t super excited and had actually turned down the chance to go the year before, but I decided I could use a little more serendipity in my life. There I made a great friend. Through her, I made some more friends. One of them became my roommate. He introduced me to many other cool people. Again, the original trip was just serendipity at work — there was no goal or process involved, but super valuable relationships grew out of it.

I’m sure everyone has similar stories of pure chance turning into something incredibly meaningful. Yet most people would probably have taken the above examples and focused on some sort of process or execution that made the most of the opportunities.

Think of it this way: we spend most of our lives doing things. Working towards goals. Learning. Talking. We do a great job of carrying out whatever it is that we’re trying to optimize for. We really don’t give ourselves enough credit. These two heuristics help you broaden the opportunities you come across and choose which ones matter most. That’s at least half the challenge — the rest comes naturally.

Existential Risk and Effective Altruism

Effective Altruism is a philosophy and social movement that applies evidence and reason to determine the most effective ways to benefit others. In recent years, organizations like GiveWell and the Bill & Melinda Gates Foundation have helped to popularize its core concepts.

They support the idea that charity should be done with a strictly analytical mindset. Under the assumption that all living creatures have some level of sentience, Effective Altruism tries to minimize the sum of all conscious suffering in the long-run. Pretty straightforward.

This problem usually reduces to some basic number crunching on the ways in which people suffer and the cost necessary to mitigate that suffering. For example, it costs about $40,000 to train a seeing eye dog to help a blind person live their life. It also costs about $100 to fund a simple surgery that would prevent somebody from going blind. It should be self-evident that resources are limited and that all people’s suffering should be weighted equally. So choosing to spend limited resources on a seeing eye dog is considered immoral because it would come at the cost of ~400 people not getting eye surgery and losing their vision.

This sort of utilitarian thought is fairly intuitive. To help quantify reduction of suffering across a diverse set of unique actions, health economists and bioethicists defined the Quality-Adjusted Life Year (QALY), a unit measuring longevity, discounted for disease. A perfectly healthy infant may expect to have 80 QALYs ahead of them, but if that child were born blind, they may have, say, 60 QALYs ahead of them (in this made-up example, blindness causes life to be 75% as pleasant as a perfectly healthy life).
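In code, the seeing-eye-dog comparison is a two-line division. The 20-QALY gain per surgery below is my own assumption, reusing the made-up 80-vs-60 blindness example above:

```python
dog_cost = 40_000               # train one seeing eye dog
surgery_cost = 100              # one blindness-preventing surgery
qaly_gain_per_surgery = 80 - 60  # assumed, per the made-up example above

# How many surgeries one dog's budget buys, and the QALYs at stake.
surgeries_forgone = dog_cost // surgery_cost
print(surgeries_forgone)                           # 400
print(surgeries_forgone * qaly_gain_per_surgery)   # 8000
```

One $40,000 dog comes at the opportunity cost of roughly 8,000 QALYs of prevented blindness, which is the kind of arithmetic Effective Altruists use to rank giving opportunities.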

Traditional charity tends to be locally focused—you’d deliver meals for elderly people in your town or support a soup kitchen for the homeless. Considering Effective Altruism principles, however, you’d probably come to the conclusion that you can almost always save more QALYs from disease by funding health problems in impoverished African or Asian areas. In general, the more analytical you are in your giving, the more you choose to spend on this sort of giving opportunity.

As philosophers become more and more rigorous in their approach to Effective Altruism, you’d expect them to continue tending towards provably high-impact spending opportunities. But many moral philosophers actually argue that we should instead focus our attention towards mitigating existential risk, dangers that could potentially end human civilization (think doomsday asteroid collisions, adversarial AI, bio-weapons, etc.).

Here’s the basic argument: when trying to maximize the sum of positive sentient experiences in the long run, we need to consider what “long run” could actually mean. There are two cases. 1) humanity reaches a level of technological advancement that removes scarcity, eradicates most diseases, and allows us to colonize other planets and solar systems over the course of millions/billions of years and 2) humanity becomes extinct due to some sort of catastrophic failure or slow resource depletion.

In the first case, humans would live for millions/billions of years across thousands of planets, presumably with an excellent quality of life because of the advanced technology allowing this expansion. Call this 10²³ QALYs (1bn years * 1k planets * 1bn people per planet * 100 years of life per person). Of course this scenario is unlikely—a lot of things need to go right in the next several thousand years for this to happen. But no matter how small the odds, it’s clear that the potential for positive sentient experience is unfathomably large.

It’s worth noting that in the second case, the upper limit is likely on the scale of thousands of years. Philosophers argue that by that time we’ll have colonized other planets which significantly decreases the risk of any given disaster affecting the entire human race. So our second case future-QALY estimate is about 10¹⁶ (10bn human lives * 10k years before extinction * 100 years per life).

Given these rough estimates, we can do some quick algebra to find the probability threshold that would make it worthwhile to spend money mitigating existential risk: 10¹⁶ / 10²³ = 0.0000001. So if the chance of some catastrophic disaster is more than one in ten million, it’s more cost-effective to mitigate that risk than support the lives of people currently suffering.
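The algebra is worth making explicit, using the two rough QALY estimates from above:

```python
# QALY estimates from the two cases above.
qalys_flourishing = 10**23  # case 1: long-run interplanetary civilization
qalys_extinction = 10**16   # case 2: extinction within ~10k years

# Existential-risk spending breaks even when the probability of a
# catastrophe (that the spending could avert) exceeds this ratio.
threshold = qalys_extinction / qalys_flourishing
print(threshold)  # 1e-07, i.e. one in ten million
```

Everything hinges on the two exponents, which is exactly why the estimates’ uncertainty matters so much.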

So how do the best-estimate numbers actually work out? The Oxford Future of Humanity Institute guessed there’s a 19% chance of extinction before 2100. This is a totally non-scientific analysis of the issue, but interesting nonetheless. The risks of non-anthropogenic (not human-caused) extinction events are a little easier to quantify—based on the historical record of asteroid collisions and observed near-misses, we can expect mass (not necessarily total) extinction-causing collisions to happen once every ~50 million years.

A compelling argument supporting a non-negligible chance of extinction is the Fermi Paradox. If intelligent life developed somewhere else in the galaxy, it would only take a few million years to travel across the entire galaxy and colonize each livable solar system. That’s not much time on cosmic and evolutionary scales, so where are the aliens? Either we’re the first life form to develop a civilization, or all the others died out. Many astronomers studying this topic think the latter case is more likely and we have no reason to say we’re any different.

Regardless, there’s an uncomfortable amount of uncertainty surrounding the likelihood of existential global catastrophes. Although the philosophical and mathematical underpinnings of this idea are well understood, nobody knows how to pick the right numbers. Since it’s so hard to imagine what the right probabilities are, it can be argued that we should hedge against the worst-case downside. Traditional charity focuses on eliminating poverty and health problems which only accelerate the course of human development. This choice can be visualized:

Spending on Existential Risk has a very small chance of avoiding a huge downside

Traditional charity spending only shifts the human development curve

These pictures are good at explaining the magnitude of risk involved and the sentiment of those that argue for funding Existential Risk research over traditional charity.

So how should you choose to effectively allocate your resources to do good? That’s still a tough question. I’d highly recommend reading The Most Good You Can Do. Most folks involved in the Effective Altruism movement (myself included) would suggest supporting GiveWell or The Centre for Effective Altruism. But if the idea explained in this essay is powerful enough, consider the Centre for the Study of Existential Risk.


Thoughts? Tweet me at @whrobbins or find my email at willrobbins.org!

Impact of Blockchain: Smart Contract Based Incentive Compensation

Most examples of potential blockchain applications focus on making something more efficient. OpenBazaar is like eBay but without fees. Edgeless is an online casino with no edge (duh). That’s all great, but I’m always on the lookout for ways in which blockchain tech can make a more structural impact on the way we do things. Here’s an idea that I’d love to hear some thoughts on:

Problem: incentive plans in finance are hard

Performance-based compensation for traders and portfolio managers is, in theory, a great way to align everyone’s incentives and reduce the overall system’s risk. But the problem with bonuses and incentive schemes is that they are often short-term focused and cap traders’ downside while leaving their upside unlimited.

Say you’re a trader who has a $100k base salary with an annual performance bonus that increases by some amount for every percentage point by which you beat the market index. This is a common scheme and it generally works alright.

But unethical and careless traders can game the system by taking on excessive risk. If you invest $25mm in risky assets (like investing in cryptocurrencies, ironically) that have potential for huge upside and huge downside, there are two possible outcomes:

  • The investment is a huge success. You make your $100k salary and a $2mm bonus. The company you work for makes $20mm+.
  • The investment is a huge failure. You still make your $100k salary (still not bad!). The company you work for loses $20mm+.

Say the asset you invest in will plummet in value with 0.9 probability and skyrocket with 0.1 probability. Then the expected value of making the trade is about $300k from your perspective, but about -$16mm from the company’s perspective.
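Running the numbers (the firm’s side is computed exactly from the stated probabilities, treating the $20mm+ swing as exactly $20mm):

```python
p_win = 0.1               # asset skyrockets
salary = 100_000
bonus_if_win = 2_000_000  # bonus only pays out on a win
firm_swing = 20_000_000   # firm gains or loses roughly this much

trader_ev = salary + p_win * bonus_if_win
firm_ev = p_win * firm_swing - (1 - p_win) * firm_swing

print(round(trader_ev))  # 300000
print(round(firm_ev))    # -16000000
```

The bonus structure hands the trader a solidly positive expected value on a trade that is deeply negative for the firm.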

Clearly this is massively unbalanced and will incentivize risky behavior that could have rippling effects in the firm as well as the overall economy.

Potential solution: smart contracts that track true long term performance

The ideal incentive compensation plan would track a trader or portfolio manager across their entire career and between employers and pay based on the true long-term performance of their trades/portfolio. There are a few obstacles to this:

  1. There’s no mechanism for a firm to effectively compensate people who no longer work for them
  2. Employees don’t want to wait until the end of employment to get a bonus
  3. It’s hard to track the performance of a portfolio over long time periods and compare it to an index (there’s too much noise from changing interest rates, inflation rates, economic cycles, etc.)

Making investments through a smart contract or DAO token would enable companies and employees to pay out based on some arbitrary function of investment performance. Instead of just cashing out on an investment’s performance with an annual bonus, the value of a smart contract / DAO token could accurately, easily, and securely be pegged to the investor’s performance across an entire career. This could help solve problems 1 and 3 listed above.
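As a toy illustration of the payout logic such a contract could encode (written in Python rather than an actual smart-contract language; the function name, $50k-per-point bonus rate, and return series are all hypothetical):

```python
def career_bonus(portfolio_returns, index_returns, per_point=50_000):
    """Bonus pegged to cumulative outperformance across a whole career,
    so an early lucky win that later blows up pays nothing."""
    cum_portfolio = cum_index = 1.0
    for r in portfolio_returns:
        cum_portfolio *= 1 + r
    for r in index_returns:
        cum_index *= 1 + r
    # Percentage points of outperformance over the full period.
    outperformance_points = (cum_portfolio - cum_index) * 100
    return max(outperformance_points, 0.0) * per_point

# A risky home run followed by a blow-up nets zero...
print(career_bonus([0.80, -0.60], [0.08, 0.08]))  # 0.0
# ...while steady modest outperformance actually pays out.
print(career_bonus([0.12, 0.12], [0.08, 0.08]))   # about 440,000
```

Compare that to the annual scheme above, where the first return series would have already paid out a multi-million-dollar bonus after year one with no way to claw it back.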

Traders could work knowing that a series of short-term risky plays would absolutely not be in their interest. Incentives of the firm and individual would be aligned much more effectively and this could help mitigate financial crises.

Problem 2 is trickier to address. One potential solution would be to create a market of these bonus contracts/tokens. Users could look at a portfolio, assess its risk spread instead of its value, and make bets on whether the bonus plan is likely to be stable. Of course this market would be effectively facilitated by a smart contract or DAO! I don’t know if this idea has been explored before by existing financial institutions—please let me know if you’re familiar with it!

What’s next?

I think it’s way too early to begin building something similar to what I’ve described. There’s no way to effectively track the performance of generic assets over time because most trades take place on private/opaque platforms. This would become feasible if DAOs and blockchain marketplaces become far more common. Hopefully that’ll be the case and someone will pursue this concept—it’s important to think more about financial risk and stability as economies and (crypto)currencies become more globalized.


Thoughts? Tweet me at @whrobbins or find my email at willrobbins.org!

Analyzing Venture Opportunities Part 1: The Product and Market

I spend a lot of time talking about business opportunities through my work with Contrary. I’ve noticed that many first-time founders forget to cover certain topics in meetings and pitches. If you’ve been thinking about a startup for a long time, non-obvious ideas can become so ingrained in your head that it’s hard to articulate the assumptions you’re making. This is a list of things that VCs consider when analyzing a venture opportunity’s product and market—make sure to touch on each when talking with a VC.

  • Why now? Think about where this product/market is on the S-Curve. The company should have recently become possible (but not prevalent) because of a new market trend or tech innovation. Unfilled niches are short-lived, but being too early is very costly.
  • What’s the initial niche? No valuable market is entirely unfilled. There must be some specific niche that can be won over. It’s important that the company provides something 10x better than existing products/services. (Side-note: the degree to which the startup has to be better is directly related to how much they have to change existing customer behavior). Example: Amazon originally focused just on books and made the customer experience convenient and low-price in a way that bookstores fundamentally couldn’t match.
  • Can you grow from that initial niche? Remember that the initial niche is just part of the plan to solve a bigger problem in a bigger market. Example: Amazon used its bookstore cash, workforce, technology, and processes to expand into other retail markets.
  • Is the product/service defensible? There should be something that prevents competitors from changing their product or using their resources to create a new one. This moat is often legal (patents), social (network effects), economic (economies of scale), informational (data that’s valuable across different products), or strategic (example: Facebook struggles to take on Snapchat partly because everything FB does contradicts Snap’s core privacy values).
  • What metrics and KPIs will show that you’re growing? Metrics are necessary to make sure growth is on track, and execution should be focused on improving the most important metrics. (How did the company decide which metrics to focus on?)
  • What de-risks your assumptions and bets? Assumptions are pretty much the entire foundation of an early-stage startup’s game plan. Being able to quickly prove or disprove assumptions will give founders a more clear picture of reality.
  • How are you going to make money? There should be some sort of exit strategy or long-term profitability goals. Is the revenue stream recurring, network/data based, etc?
  • Why are competitors doing X and not Y? There should be some analysis of how competitors’ strategy and execution interacts with available market opportunities (related to Peter Thiel’s Secrets — things you know but no one else does).
  • How is the market growing? Both growth rate and change in growth rate are important for a founder to know.
  • What are your current bottlenecks / resource constraints? This ties in to your roadmap and execution strategy. Have you thought deeply about what’s important to get done and what can wait?
  • What have you learned from users and how has that informed your decisions? It’s important to understand what users need and how you can better serve them. Do you look at new users by cohort? Have you segmented users based on any patterns?

Note that every answer to these questions does NOT have to be perfect. Part of analyzing a business is finding the flaws (there’s always at least one) and thinking about how they can be overcome or compensated for. Don’t sweat it too much if you can’t find a great answer to some of these questions.

Part 2: Thinking About People is now available!

The Flawed Economics of Robinhood: Why Users Are Better Off Without It

Robinhood has been getting more traction and press coverage recently. It’s catching on with some of my friends from school and I’ve gotten into interesting conversations over Robinhood’s value as a business. The purpose of this post is to explain why I think Robinhood will hurt its own users despite its well-intentioned mission to “democratize access to the financial markets.”

Note: this post ballooned into a 2000 word essay. Skip to the TL;DR at the bottom if you don’t want to spend 4–8 minutes going more in depth.

A Random Walk Down Your News Feed

Retail investors (non-professionals) can’t beat the market in the long run. This phenomenon has been well documented and I am not aware of any compelling evidence contradicting it. Active retail investors also tend to perform worse than passive investors in the long run. This doesn’t mean that everyone will lose money—it means that if an investor’s portfolio grows 5% in a year, it’s highly likely that they could have made more (say 8%) just by buying a simple index fund and holding it.
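To make that 5%-vs-8% gap concrete, here’s what it compounds to over 30 years on a hypothetical $10,000 portfolio (both rates are the illustrative numbers from the paragraph above, not predictions):

```python
principal = 10_000
years = 30

active = principal * 1.05 ** years   # 5% per year, actively traded
passive = principal * 1.08 ** years  # 8% per year in an index fund

print(round(active))   # about 43,000
print(round(passive))  # about 100,000
```

A seemingly small three-point annual gap more than doubles the final balance, which is why “slightly underperforming the index” is far more expensive than it sounds.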

Most traders use some mix of a few common trading strategies:

Fundamental Analysis

This refers to the idea that traders should focus on the intrinsic value of a security when making decisions. If a company is selling stock at $5 per share, you should only buy if you can expect to earn $5 in dividends over the entire course of the company’s lifetime.

Famous investors like Warren Buffett don’t buy anything that isn’t priced cheaper than the underlying asset’s value. This is the only rule you as a consumer need to follow unless you really know what you’re doing. The hard part is determining what the true value of an asset is.

Technical Analysis

This refers to the idea that quantitative indicators, historical market data, and social/psychological/political analysis can help you predict where the price of a stock is going. If you can tell when the right time to buy and sell is, the actual price and valuations of an asset don’t matter.

In practice, this is extremely difficult to do well. The vast majority of day traders lose money trying to predict how other people will make trades. But for certain (highly advanced) firms, this strategy is amazingly profitable. This philosophy also plays a part in how bubbles form—if speculators think that they can make money with an investment, they’re often willing to overlook prices that are way above the true value of whatever it is that they’re buying.

Throwing Darts

Alternatively entitled “buying Apple, Berkshire Hathaway, and whatever company I like seeing on my Facebook News Feed.” Needless to say, this is a losing strategy. But a non-negligible number of people still run their portfolio this way.

Throwing darts has been especially tempting the past several years because markets have been doing so well overall. It’s easy to be encouraged by modest returns but equally easy to forget that putting money into an index fund would be at least as profitable and less work.

Which of These Strategies Works Best With Robinhood?

Two of these strategies (not the third) are valid investment theories. There is a lot of debate over which is more viable, and real-world professional investors sit somewhere on the spectrum between fundamental and technical analysis.

But all three strategies are doomed to underperform using Robinhood. Remember the fact that retail investors already can’t beat market indexes. Robinhood doesn’t provide any information or systematic advantage to reverse users’ predisposition to poor performance. I’d guess that it’s even harder to make informed decisions because there will always be lower quality information available to users on a mobile-only platform.

The lack of quality financial information will lead users to rely more on irrelevant news seen on social media, their friends, and their guts when making decisions. That’s not good.

Robinhood’s Product and Strategy

“Democratize access to the financial markets.” What does that mean? Are markets not already accessible to the masses? There are plenty of brokers who let you set up an account for free with low minimum balances and small trade fees. Does Robinhood’s beautifully designed mobile app and free trades policy really democratize things? There are two groups who seem to think that it does:

Retail Investors: Millennials and Generation Z

Robinhood is one of the most elegant and aesthetic apps on the market right now. It has smartwatch companion apps, a fun intro video, and creating an account takes less than 4 minutes.

Robinhood as a company is clearly in touch with modern product expectations. People my age want to go through the full user experience on mobile, start to finish, with as few exceptions as possible.

The actual features are similarly streamlined. The premium account option, Robinhood Gold, gives users access to more advanced trading options and margin lending (loans from the broker that amplify the profits or losses you’ll make).

So do these features help people access financial markets? Sure they do. But that’s not necessarily a good thing knowing that retail investors underperform averages.

Robinhood markets their margin lending as a way to “get up to 2x your buying power.” There’s no mention of risk or the financial mechanics of margin lending. It just sounds like a great way to make more money—“buying power” is such a positive and harmless descriptor. People without the proper experience will get burned by this unless the loans are better explained.

I’ll ignore the lack of tools available to users (Quicken integration, export to Excel, ability to easily manage many diversified holdings) because they can be easily implemented in the future. But even that wouldn’t solve the underlying issue with mobile-first stock trading: it’s too hard to fit all the relevant information into a 5″ screen. The charts are overly simplistic and making an informed investment decision requires more detailed research. Of course users could do research on a computer and just execute the trade on their phones, but that defeats part of Robinhood’s value proposition. So it’s in Robinhood’s interest to convince users that they can get by with mobile alone (again, this will make it easy for users to under-educate themselves and speculate).

Low Table Stakes Investors

A quick Google search found data showing that the average Millennial saves less than 8% of their income and has a net worth between $-20k (debt) and $20k.

Zero commission on trades and no minimum account balance is clearly an advantage for these users. It removes the biggest barrier to entry. Robinhood markets itself as a way for low stakes consumers to get started in investing.

As a quick aside, there’s even doubt that the free trades are a net benefit for users. Slippage is the difference between the price of a trade as it’s ordered and the true price at which it’s executed. Paid trades with larger firms are generally thought to be executed more efficiently and are more likely to trade at the best price. So over time, depending on trading volume and portfolio size, users could theoretically be better off just paying for each trade with a different broker. But Robinhood could definitely improve this over time if it is a real problem now, so I don’t hold it against them.

It’s hard to get more into this topic without making hand-wavy judgements about what people should or should not be able to do. I know for sure that users attracted by the low fees and lack of minimum balances are likely to have a weaker financial safety net. This is why the SEC requires that investors be accredited before investing in risky unregulated securities like startups. Since part of Robinhood’s success depends on people taking out loans (more on this later), I feel that appealing to low-stakes consumers approaches a grey area, especially when the product is designed to be as easy as possible (you only need to tap your phone 3 times to make a trade!).

Anyone who can’t afford fees or minimum account balances simply should not take the risk of trading stocks. There are cheaper and safer investment opportunities out there. Again, people should be able to do whatever they want. But I think it’s worth speaking out to prevent Robinhood from convincing these potential users to actively trade.

Even the name “Robinhood” makes users feel like they’re empowered to take control of their own financial future and able to beat the pros at their own game. At risk of sounding like a broken record, this is impossible (well, highly unlikely on average, to be more accurate.) Of course Robinhood makes all of the appropriate disclaimers crystal clear but the brand seems to signal that active trading is a good idea.

The Fundamental Flaw

As I mentioned above, Robinhood doesn’t earn revenue by executing trades. It makes money through interest on users’ uninvested funds, “Robinhood Gold” which includes access to margin lending and advanced trading features, and interest on margin loans.

Putting the advanced features aside (I imagine that a subscription to after-hours trading access and instant deposit of funds is relatively cheap and only scales with respect to the number of users), Robinhood’s success is dependent on maximizing (a) the amount of money left in accounts as cash, and (b) margin loans.

[Update: I’ve also learned that Robinhood sells order flow to hedge funds that then make money off the spread. This is fine, but it supports the idea that Robinhood is incentivized to encourage active trading.]

Knowing that Robinhood users are highly likely to underperform the market or even lose money, Robinhood’s success metrics are inversely related to users’ success metrics. The more cash users leave sitting in their accounts, the more interest they forgo. Worse, users will lose more and more money in aggregate as they increase leverage on their investments through margin lending.
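To see concretely how margin amplifies losses, here’s a minimal sketch. The 2x leverage ratio and 5% margin interest rate are hypothetical assumptions, not Robinhood’s actual terms:

```python
# Minimal sketch of how leverage amplifies returns on an investor's
# own equity. The leverage and interest figures are hypothetical.

def levered_return(market_return, leverage, margin_rate):
    """Return on the investor's own equity when borrowing
    (leverage - 1) dollars per dollar of equity at margin_rate."""
    borrowed = leverage - 1
    return leverage * market_return - borrowed * margin_rate

# Unlevered: a 10% market drop is a 10% loss.
print(round(levered_return(-0.10, leverage=1.0, margin_rate=0.05), 4))  # -0.1

# 2x leveraged: the same 10% drop becomes a 25% loss on equity,
# after paying 5% interest on the borrowed half of the position.
print(round(levered_return(-0.10, leverage=2.0, margin_rate=0.05), 4))  # -0.25
```

Gains are amplified symmetrically, but since the average active trader underperforms, the expected aggregate outcome is bigger losses plus interest paid to the broker.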

I consider this to be a fundamental flaw in Robinhood. I just don’t see a way for both Robinhood and its users to be financially successful under this business model.

Robinhood’s Long Term Vision

A world where Robinhood succeeds in fully “democratizing access to the financial markets” is a world that’s less stable than the one we live in now.

Frankly, I’m surprised that most Robinhood users aren’t more wary of participating in the stock market. Millennials (Robinhood’s core audience) were hit hard by the recession. I suppose that several years of recovery have erased memories of previous bubbles; markets have maintained strong, steady growth between Robinhood’s 2013 launch and now.

If playing the stock market from the comfort and convenience of your iPhone became common, markets would become more volatile and susceptible to bubbles. I haven’t met any Robinhood users who express this concern, which is even more worrisome, in a way.

Most people think that bubbles are caused by banks and the government. In some cases this may be true—consumers weren’t the ones giving out subprime mortgages and building complex financial instruments in the 2000s. But bubbles are certainly possible in the broader economy. Look at the Japanese bubble of the ’80s, for example. The Japanese real estate market was valued at just over $20 trillion, about one fifth of all the world’s wealth at the time. Clearly an island that’s 5% the area of the U.S. could not possibly have that much intrinsic value. Yet the bubble continued to inflate.

A more recent and fitting example is Bitcoin. Cryptocurrencies are interesting to me because they seem to be used and understood most by consumers. Banks and governments weren’t particularly interested or involved in blockchain tech until recently. It was mostly speculative consumers who drove the price of BTC over $1000 in 2014. About a year later, after a peak and crash cycle, BTC was priced around $350. Luckily Bitcoin was (and still is) too small to affect the overall economy.

The point is that people en masse aren’t always rational. Only a small fraction of the population can spend the time to read up on finance, economics, and current events. So an economy where every college kid, lawyer, salesperson, Uber driver, and stay-at-home parent is encouraged to actively invest is bound to experience the unreal highs of a bubble and, of course, the disastrous crash of the pop.


TL;DR

  • It’s widely accepted that the average investor cannot beat market averages in the long term.
  • Many studies have shown that index funds and passive investing are the most successful strategies for individual investors. This is the opposite of what Robinhood encourages.
  • Robinhood markets itself to consumers with the least financial experience and risk tolerance.
  • Robinhood’s success is largely dependent on users taking out margin loans that amplify the profits or losses of a trade.
  • Because we know that the average retail investor is not likely to succeed actively trading, Robinhood’s margin lending will hurt users in aggregate.
  • This means that Robinhood’s value proposition and incentive structure are fundamentally misaligned with the best interests of users.
  • Robinhood’s vision is to “democratize access to the financial markets.”
  • But a world where everyone uses Robinhood to make their own investment decisions would be less stable and more prone to speculation/bubbles. History has shown that the masses are unable to see when prices are too disconnected from the intrinsic value of an asset.

A Clever Malware Tactic and Why There’s Nothing You Can Do About It

As the owner of a mildly successful Android app, I sometimes get emailed about advertising, marketing, or acquisition opportunities. The messages usually propose some sketchy advertising partnership or pitch me some SEO work, and they’re pretty easy to weed out and ignore.

How I found a scam

I recently had an interesting encounter. It started off with another cold email. For context, I’m trying to cash out on my app by selling it.

Hi Will

I would like to purchase your brick breaker app listed on https://play.google.com/store/apps/details?id=com.RobbinsDev.Brick_Breaker

http://www.selltheapps.com/source/app/2614.php

My offer would be USD$500

Please let me know if this acceptable

Thank You,

Gabriel

Interesting. Not too many red flags popping up yet. I responded quickly. School’s about to start, and if there’s an opportunity for a deal, I want to get it done ASAP (so don’t judge my utter lack of negotiation!).

Yes, I can accept that offer.

What information would you like from me?

A couple hours later:

Hi Will,

Great!

Do you have screenshots for

1. Total lifetime installs

2. Current installs by user by country breakdown

3. Total current installs

So I send the screenshots and get this back:

Hi,

Thanks for the screenshots.

Let’s proceed with the purchasing with the agreed price of USD500

I have the following payment methods available

1. Bank transfer/wire

2. Credit card

3. Skrill

4. Paypal

Let me know which method is comfortable for you and we can proceed with payment and app transfer

Thanks

Gabriel

Hmm, it feels like we’re jumping the gun. Any competent businessperson would ask about IP rights or obligations. I forwarded the email chain to a friend with some comments:

But their website is a shell and was registered on Aug 7 [actually it was registered 2.5 yrs ago, I misread the record] through DomainProxy according to the whois

Can’t find any info on the leadership of this company

The time zone places them in Asia. But the names on the website/emails are gabriel, calvin, and tony which aren’t Asian

The wire transfer requires my acct numbers which is a bit sketchy

The other payment options can be reversed super easily

I think that finding apps then offering to buy them is an uncommon scam strategy

I’m not sure what their desired endgame is. Steal the app by reversing payments? Get my acct. number for the wire, then print checks with it?

I start doing more in-depth research on this guy. Not much comes up when I scour the web for his personal information and business records. I manage to convince him to chat over Skype, and we talk about his background and what he plans to do with the app.

I can’t get a single substantive answer to my questions. As far as I can tell, everything he told me was a lie. Clearly this guy’s not legit. But at this point I’m too curious. What’s he up to?

A few more inquisitive back-and-forth emails accomplished nothing. I finally responded:

Hi Gabriel,

I’ve decided to not move forward with the deal.

You said that your company has been around for 10 years [on the Skype call] when it’s only been around for about a month. I’m not sure what exactly is going on (swapping the app out for malware?), but I can’t be a part of it.

Will

He sent back a few emails weakly defending himself and offering a different shell company to try to back up his reputation. Here’s the smoking gun (emphasis mine, of course):

Could you enlighten me as well what is the real concern about? As the app purchase does not reveal any of your personal information and it is alright if you don’t wish to provide the original source code.

What’s going on, and what this means for broader security risk

Here’s the scammer’s game-plan:

  1. Find an Android app with a lot of users
  2. Purchase that Android app
  3. “Update” the app with malware (you don’t even need to buy the original source code!)
  4. ???
  5. Profit

This concerns me for two reasons.

First, I’ve had 3 “online advertisers” with non-existent reputations contact me in the past couple of months looking to buy Brick Breaker Free. I never had a problem with that previously, so it looks like this strategy is catching on. This also implies that it’s profitable.

Second, users can’t do anything to fight this scam. One day, you’re playing a fun game on your phone. The next day, you update to the latest version (I’m sure it’ll mention “bug fixes” or something similarly innocuous) and BAM! Malware.

From the development side, I know how tempting it is to just sell an app without due diligence. It’s not hard to see through these people’s shenanigans, but what if someone doesn’t know what to look out for, or what if they just don’t care? What if the scammers become more sophisticated and well-versed in business etiquette?

I just don’t see any way to easily prevent this from occurring.


Thoughts? Tweet me at @whrobbins or find my email at willrobbins.org!