Lessons from: The Great Mental Models Vol. 3: Systems and Mathematics
Name: The Great Mental Models Vol. 3: Systems and Mathematics
Author(s): Beaubien, Rhiannon; Leizrowice, Rosie
Synopsis
The third and final book in the series covers systems and mathematics. Wikipedia defines a system as "… a group of interacting or interrelated elements that act according to a set of rules to form a unified whole." The book gives us models for understanding how systems work, and since we are surrounded by systems (in many ways, life itself is a system), this is valuable knowledge.
Core ideas
- Feedback loops are everywhere; systems are full of them: Generally, we have trouble taking in feedback that does not align with our world view, and on top of that, not all feedback is equally true or valuable. It is a meta-skill to sift through feedback and pick out what is genuinely useful, even when it makes you uncomfortable in the moment. This is hard partly because the feedback from our actions is rarely immediate. A feedback loop is when the output of a system feeds back into the system itself. Consciously or unconsciously, we create feedback that impacts others and then comes back to impact us, so we need to be careful about the actions we take because they may well come back to us. What goes around comes around. There are two basic types of feedback loops: balancing and reinforcing, also called negative and positive. Balancing loops tend toward equilibrium, while reinforcing loops amplify a particular process. A system with an excess of reinforcing loops will self-destruct; balancing loops make a system more sustainable. When making decisions, consider for a moment what feedback loops your actions will trigger (especially in the long term), and if they will lead to harmful reinforcing loops, then rules or laws must be put in place to keep them in check (a minimal sketch of the two loop types follows this list).
- Systems need feedback loops to maintain equilibrium: Systems of all sorts tend to move towards equilibrium, but they do not stay in that state for long; rather, it takes strong feedback loops to keep a system in equilibrium. If you become dependent on a particular equilibrium to perform well, you become vulnerable, so it is useful to know which actions will lead you back into homeostasis.
- Every system has a bottleneck, which is the weakest link on the way to its objective: Devising band-aid solutions might work in the short term, but the long-term fix is to rethink and innovate on that part of the system. If you think you have identified a bottleneck, do what you can to validate that this factor really is the limiting one; otherwise, you might end up solving the wrong problem. And as you resolve one bottleneck, another part of the system will become the new bottleneck.
- Systems behave differently at different scales: Systems tend to scale non-linearly, and different parts might struggle to keep up. As growth occurs, resilience can be increased by keeping a measure of independence between parts of a system. Provided links remain between the parts (possibly as shared goals), systems can scale safely in this fashion: by dividing into largely independent, yet connected, parts. Dependencies tend to age poorly because they rely on every one of their own dependencies aging well. Scaling up is not always advantageous; be content to find the balance that is ideal for you and stick with it. Understanding that systems can scale non-linearly is useful because it helps us appreciate how much a system can change as it grows.
- A system without backups (a margin of safety) will not function for very long: Every system MUST be built with a margin of safety (redundancies) in mind, and that includes the system closest to you: your life. When calculating the ideal margin of safety, we always need to consider how high the stakes are; the greater the cost of failure, the bigger the buffer should be. At the same time, do not make the mistake of treating a margin of safety as a license to indulge in risky behaviour. The margin of safety should be optimal: too much margin can be a drag on resources, limit creativity and slow down processes.
- Systems are never static, and churn is a fundamental feature: Churn is not always bad; in fact, churn is often exactly what keeps a system adaptive to its external environment. Heavy-handed attempts to reduce churn can only end up harming the system. No system can go on indefinitely, and studying churn can help system managers rethink and reinvent the system for the times to come.
- Systems are made stronger by strong algorithms: Systems adjust and respond based on the information provided by their algorithms. A well-designed algorithm needs to be directionally correct (not perfect) and produce consistent results, and those results, when used by the system, need to move the system in the right direction. It is therefore important to review and refine your algorithms; if an algorithm has started giving inconsistent results, it might be that a previously unknown (and inactive) variable has emerged. It is worth taking time to identify the algorithms you use in your own life to make the system "work". This list of algorithms can be immensely useful, as you can fall back on it as a "standard operating procedure" to save energy, and also when the larger system around you is in flux.
- We can learn from the mental model of critical mass that changing a system doesn’t require changing everything about it. Changing a small percentage of its parts can shift the whole thing into a new state.
- When a system as a whole develops properties, behaviours and outcomes that cannot be predicted by looking at its individual components, that is called emergence. Emergence does not need centralized control or coordination; the individual parts organize themselves automatically through their interactions with the parts around them. It is hard to predict when a system will develop emergent properties, and the complexity of a system does not always guarantee emergence.
- Irreducibility is about finding the point beyond which further reduction changes the fundamentals, so that you can recognize when you are turning the system into something different: Understanding the irreducible components of a system means you won’t waste your time trying to change what is unchangeable. The irreducible elements of a system are not fixed; they depend on the context and goals of that system. Complex systems that work invariably evolve from simple systems. Attempting to build a complex system from scratch tends to be ineffective; it takes consistent, incremental progress from something basic that works.
- The law of diminishing returns teaches us that outcomes are not linear and not all inputs to a system are equal: A way of running a system that initially produces good results can become less effective as time goes on. When your results consistently take a nosedive and your past successes seem like distant memories, diminishing returns have likely set in.
- Compounding is a crucial, versatile mental model to understand because it shows us that we can realize enormous gains through incremental efforts over time (a small numeric sketch follows this list).
- Randomness as a model reminds us that sometimes our pattern-seeking, narrative-building tendencies can be unproductive. You must get out into the world and experience the serendipity of stumbling into new things.
- Zeroes are those parts of the system that negate the whole thing, without fixing them it is no use working on other parts of the system: Any system is only as strong as its weakest link, and if that link is a zero then the system will not work. For example, if a logistics company relies on gig workers for last mile delivery, then the unreliable availability of suitable gig workers can be its zero.
- Different inputs can produce identical results, and there is more than one way to solve most problems. Using equivalence as a lens helps us appreciate the richness of the solution space.
- Using the model of global and local maxima helps us remember that we often cannot reach our full potential if we aren’t willing to stretch ourselves, take risks, and fail once in a while (a hill-climbing sketch follows this list).
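To make the two loop types from the feedback-loops idea concrete, here is a minimal Python sketch; it is my own illustration rather than anything from the book, and the function names, gain and target values are all assumptions chosen for clarity. A reinforcing loop feeds a fraction of its output back into itself and grows without bound, while a balancing loop closes a fraction of the gap to a target and settles into equilibrium.

```python
# Minimal sketch (illustrative, not from the book): two toy feedback loops.

def reinforcing(state: float, gain: float = 0.1) -> float:
    """Reinforcing loop: each step adds a fraction of the current state back in."""
    return state + gain * state

def balancing(state: float, target: float = 100.0, gain: float = 0.1) -> float:
    """Balancing loop: each step closes a fraction of the gap to the target."""
    return state + gain * (target - state)

r = b = 10.0
for _ in range(50):
    r, b = reinforcing(r), balancing(b)

print(f"reinforcing loop after 50 steps: {r:,.1f}")  # grows without bound (~1,173.9)
print(f"balancing loop after 50 steps:   {b:,.1f}")  # settles near the target of 100
```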
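The compounding idea is easiest to appreciate with numbers. The sketch below uses an assumed 1% gain per period over 365 periods; the figures are illustrative, not taken from the book. Reinvesting each gain ends up roughly eight times better than adding the same flat gain every period.

```python
# Minimal sketch (illustrative, not from the book): compound vs. linear growth.

principal = 100.0
rate = 0.01    # assumed 1% gain per period
periods = 365  # e.g. a small daily improvement sustained for a year

compounded = principal * (1 + rate) ** periods   # gains are reinvested each period
linear = principal + principal * rate * periods  # the same gain, never reinvested

print(f"compounded: {compounded:,.0f}")  # ~3,778
print(f"linear:     {linear:,.0f}")      # 465
```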
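Finally, the global and local maxima model maps neatly onto hill climbing. In the sketch below (my own toy landscape f, not the book's), a purely greedy climber settles on the nearest peak, while restarting from varied points, at the cost of many wasted climbs, is what reaches the higher one; this is the "stretch yourself, take risks, fail once in a while" point in miniature.

```python
# Minimal sketch (illustrative, not from the book): greedy hill climbing
# stops at a local maximum; random restarts can reach the global one.
import math
import random

def f(x: float) -> float:
    """Toy landscape: a local peak near x = -2 and a higher, global peak near x = 3."""
    return math.exp(-(x + 2) ** 2) + 2 * math.exp(-(x - 3) ** 2)

def hill_climb(x: float, step: float = 0.1, iters: int = 1000) -> float:
    """Greedy ascent: only ever accept a move that improves f."""
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if f(candidate) > f(x):
            x = candidate
    return x

random.seed(0)
stuck = hill_climb(-3.0)                        # climbs the nearby (local) peak
best = max((hill_climb(random.uniform(-5, 5))   # restarts begin from varied points
            for _ in range(20)), key=f)

print(f"single climb from -3: x = {stuck:.2f}, f = {f(stuck):.2f}")  # ~(-2.0, 1.0)
print(f"best of 20 restarts:  x = {best:.2f}, f = {f(best):.2f}")    # ~( 3.0, 2.0)
```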
Notable quotes
- Any system with an unchecked reinforcing feedback loop is ultimately unsustainable and destructive.
- In trying to improve the flow of your system, focusing on anything besides the bottleneck is a waste of time.
- Some bottlenecks are better to have than others because they are easier to organize the rest of our system around.
- One critical point about bottlenecks is they can move around systems. You fix one only to introduce another.
- If you do not look at things on a large scale, it will be difficult to master strategy.
- A system without backups is unlikely to function for long.
- Early success is a terrible teacher.
- Replacing components of a system is an inevitable part of keeping a system healthy.
- Increasingly, algorithms are designed to be directionally correct over perfect.
- We cannot understand systems with emergent properties by reducing them to their components.
- Innovation does not take a genius or a village; it takes a big network of freely interacting minds.
- Attempting to build a complex system from scratch tends to be ineffective. It takes consistent, incremental progress from something basic that works.
- Therefore, becoming accustomed to simple, not extravagant, ways of life makes one completely healthy, makes man unhesitant in the face of life’s necessary duties, puts us in a better condition for the times of extravagance which occasionally come along, and makes us fearless in the face of chance.
- Compounding is an immensely powerful and often misunderstood force.
- Play iterated games. All the returns in life, whether in wealth, relationships, or knowledge, come from compound interest.
- Individual risks are uncertain; aggregate risks are predictable.
- Randomness is a fundamental part of the universe, and embracing it instead of trying to fit order where it doesn’t belong can help us do two things: be less predictable and be more creative.
- Regression to the mean is beneficial for differentiating between luck and skill.
- We may need to temporarily worsen our solution if we want to continue searching for improvements.
- You make your plans looking upward toward your goal, only to reach it and find that what you thought was the pinnacle, the ceiling of your endeavour, was in fact only the floor of your next level.
- We need to make the big changes first, before we try to optimize the details.
In closing
Books like The Great Mental Models are not one-time reads, not because of their complexity or any rapturous style of writing, but simply because we tend to forget these timeless models as life goes on. I think I'll give these books (or at least my reviews of them) a re-read once every few years. And then there is the more important matter of putting these lessons into practice.
The authors advise:
"Pick a model, maybe one per week, and start looking at your life through that lens. What do you notice? What looks different? Write down or record your observations. Take the time to reflect on your experiences using each model, because it is through reflection that the most valuable knowledge builds. Note where you make a different choice based on the insight provided by the model. Pay attention to what worked and improved your outcomes. Learn from your mistakes. Over time you will build knowledge of where each model is most useful and most likely to help you.
As you practice using more models, you will begin to build a latticework. You will see connections and notice that some models give the best insight when paired with certain others. Eventually your latticework will be comprehensive enough that you will be able to use it in every situation, reducing your blind spots and preventing problems."
I think this sums it all up quite well. These models are more or less universal as far as our lives go, so investing in them in the manner described above should be worth the time. Of course, it takes dedication to put this into practice, and it will be better achieved as a deliberate, programmed effort on the individual's part.