The Great Mental Models Volume 1

  • Developing a system for finding the right solution to the right problem in a way that examines multiple angles while still being simple is powerful. Why do we need this?
    • We have a hard time removing ourselves from the problem
    • We have too much invested in our ego - we need to have objective constants we can use to evaluate problems.
  • Until we are willing to adapt our behavior and actions and update our models when evidence proves us wrong, learning and understanding are meaningless.
    • “We compound the problem of flawed models when we fail to update our models when evidence indicates they are wrong. Only by repeated testing of our models against reality and being open to feedback can we update our understanding of the world and change our thinking.”

The Map is Not the Territory

  • Maps represent a single point in time and thus, by definition, represent something that no longer exists.
    • Important to remember in business with any presentation, pro forma, financials, etc…
    • Much like the news, we’re consuming stories created by other people. People who have consumed a lot of information, reflected on it, and drawn the conclusions they want to share with us.
    • Human nature is to consume these stories as the single point of truth without doing the work ourselves.
  • When evaluating the map, we must be aware of 3 things:
    • We cannot assume that early matches between the map and the territory (reality) imply a match across the entire map.
    • We must avoid the pull to adhere to the map, and instead be willing to take in new information. They are guides, not laws.
    • We must consider the “cartographer” of the map and the context in which it was created.

Circle of Competence

  • Failure is necessary but not sufficient for competence
  • How do we judge our competence?
    • We make decisions quickly and accurately.
    • We know the additional information needed to make a decision
    • We know the difference between what we can know and what we can’t
  • To gain competence:
    • Be willing to learn - experience + reflection
      • Experience comes from yourself, others, books, etc…
      • Learning from just your own experience is too slow
    • Measure your track record honestly and in real-time
      • We don’t do this because we don’t really want to know our weak or blind spots.
    • Solicit external feedback to create loops
      • We must remove our bias to accurately measure our own progress.
    • To improve processes, we must keep an accurate record of thoughts and opinions in real time.

First Principles

  • Find the elements of a problem that cannot be further reduced to get to the foundation of a solution.
    • “If your “whys” result in a statement of falsifiable fact, you have hit a first principle. If they end up with a “because I said so” or ”it just is”, you know you have landed on an assumption that may be based on popular opinion, cultural myth, or dogma. These are not first principles.”
  • Use Socratic questioning to uncover first principles of a problem:
    • What do I think and why?
    • How do I know this is true?
    • What is the opposite of my opinion?
    • How can I support my conclusion and from where?
      • Consider the source of the where
    • What if I am wrong?
  • We cannot get frustrated by the time it takes to answer the above questions or stop when we cannot answer them. If we give up due to our own ignorance, we will never get to the first principles from which to work.
  • First principles move us away from randomness while opening up more possibilities by taking off our blinders. As a result, we move from randomness to choices that have more defined probabilities of success.

Thought Experiments

  • Using our imagination to investigate outcomes
  • The most popular examples are counterfactuals, but they are also the ones where we must exercise the most caution due to the complex nature of the world.
  • The more scenarios you can imagine where X happens without Y, the weaker the case for Y being a critical cause of the situation.
  • If we run thought experiments in a systematic way, we can test our natural intuition.
  • We often know the difference between necessary and sufficient conditions for success, but we struggle with the gap between the two.
    • “But the sufficient set itself is far larger than the necessary set. Without that distinction, it’s too easy for us to be misled by the wrong stories.”

Second Order Thinking

  • The more connections, the more important second order connections are. (And everything is connected)
  • We only need to evaluate the most likely effects and consequences to check our understanding of outcomes - we still need to avoid paralysis by analysis.

Probabilistic Thinking

  • While still maintaining our “base rate”, we should process information in a way that allows us to change the probabilities of existing assumptions being true.
    • “For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You’re assigning it a probability of being true. Therefore, you can’t let your priors get in the way of processing new knowledge. In Bayesian terms, this is called the likelihood ratio or the Bayes factor. Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced.”
    • “Successfully thinking in shades of probability means roughly identifying what matters, coming up with a sense of the odds, doing a check on our assumptions, and then making a decision. We can act with a higher level of certainty in complex, unpredictable situations.”
  • The probability of accurate probabilities
    • “Finally, you need to think about something we might call metaprobability—the probability that your probability estimates themselves are any good.”
  • Risk magnitude
    • “any small error in measuring the risk of an extreme event can mean we’re not just slightly off, but way off—off by orders of magnitude, in fact. In other words, not just 10% wrong but ten times wrong, or 100 times wrong, or 1,000 times wrong. Something we thought could only happen every 1,000 years might be likely to happen in any given year!”
  • Never take risks that will take you out of the game completely, learn from bad bets.
  • Create situations where randomness and uncertainty are your allies
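The Bayesian updating described above can be sketched numerically. This is a toy illustration of the likelihood ratio (Bayes factor) idea from the quote; the hypothesis, prior, and likelihood numbers are invented for the example, not taken from the book:

```python
# Toy Bayesian update: a prior belief is held as a probability, not a
# binary true/false, and new evidence shifts that probability.
# All numbers below are invented for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a belief after new evidence.

    p_evidence_if_true / p_evidence_if_false is the likelihood ratio
    (Bayes factor) mentioned in the notes: evidence that is more likely
    under the alternative reduces, but does not erase, the prior.
    """
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Base rate: start 70% confident an assumption is true.
belief = 0.70

# Evidence twice as likely if the assumption is false (ratio 0.3/0.6 = 0.5)
# lowers our confidence rather than flipping it to "false".
belief = bayes_update(belief, p_evidence_if_true=0.3, p_evidence_if_false=0.6)
print(round(belief, 3))  # prints 0.538
```

Note that challenging evidence moved the prior from 0.70 to about 0.54 - the belief was weakened, not discarded, which is the point of thinking in shades of probability.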

Inversion

  • The process of inversion:
    • Identify the problem
    • Define your objective
    • Identify the forces that support change toward your objective
    • Find the forces that impede change toward the objective
    • Strategize solutions (either augmenting the supporting forces or removing the impeding ones)
      • “Once we figure out our objective, we focus on the things we need to put in place to make it happen, the new training or education, the messaging and marketing. But Lewin theorized that it can be just as powerful to remove obstacles to change.”
  • It’s easier to avoid mistakes than to create solutions.
    • “Avoiding stupidity is easier than seeking brilliance. Combining the ability to think forward and backward allows you to see reality from multiple angles.”

Occam’s Razor

  • Instead of working to prove or disprove complex explanations, favor the explanation that has the fewest moving parts.
    • “If all else is equal, that is if two competing models both have equal explanatory power, it’s more likely that the simple solution suffices.”
    • “Sometimes unnecessary complexity just papers over the systemic flaws that will eventually choke us. Opting for the simple helps us make decisions based on how things really are.”
  • The obvious counter to this model: some things simply are not simple.

Hanlon’s Razor

  • As a general rule, people don’t operate from malice but from stupidity.
    • “When we see something we don’t like happen and which seems wrong, we assume it’s intentional. But it’s more likely that it’s completely unintentional. Assuming someone is doing wrong and doing it purposefully is like assuming Linda is more likely to be a bank teller and a feminist. Most people doing wrong are not bad people trying to be malicious.”


Tags: Bookshelf

Reference: The Great Mental Models Vol 1
