Friday, 22 March 2024
------------------------

Hello. Today I feel more refreshed after a good night's sleep. The fourth chapter covers events vs. behavior, linearity vs. nonlinearity, model boundaries, limiting factors, and bounded rationality.

To understand the structure of a system from its inputs and outputs, it's better to view it at the behavior level than at the event level. The event level shows single instances of outputs. The behavior level shows many instances over a period of time, like a time graph. Multiple instances let us see patterns in the long-term behavior of a system, which can give us clues about its structure. It's hard to make sense of a single event or data point, and due to the limits of our brains, simply observing events day to day will not make us understand the behavior. We must capture events in a meaningful model to do this.

Likewise, we are prone to thinking about the world linearly, but many feedback loops are governed by nonlinearity. A plant may need water to live, but with too much water the plant dies. The more complex the system, the more confusing this can be. In systems with both reinforcing and stabilizing feedback loops, a small change can become a big change if it shifts which loop is dominant.

The book gives the example of a forest with budworms eating the trees. Over hundreds of years the forest was resilient, with oscillating levels of budworms. The first stabilising constraint on the budworms was their predators, which multiplied as the budworms multiplied. Every few decades or so, however, several warm springs would let the budworms replicate faster than their predators. The forest had pine, spruce, and fir trees, and the budworms mostly ate the spruce. Spruce would normally dominate the other species, but because the budworms ate the spruce more, outbreaks made it less dominant, and as the spruce thinned out, the budworm population collapsed naturally.
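The plant-and-water point above is the simplest kind of nonlinearity: the response first rises with the input, then falls. A minimal sketch, where the inverted-parabola shape and the numbers are my own illustrative assumptions, not anything from the book:

```python
def growth_rate(water: float, optimum: float = 10.0) -> float:
    """Hypothetical nonlinear dose-response: growth peaks at `optimum`
    litres/day and turns negative far above or below it (the plant
    dies from drought or from drowning either way)."""
    return 1.0 - ((water - optimum) / optimum) ** 2

for w in [0, 5, 10, 15, 20, 25]:
    print(f"{w:>2} L/day -> growth rate {growth_rate(w):+.2f}")
```

A linear mental model ("more water is better") only matches the left half of this curve, which is exactly why extrapolating it past the optimum goes wrong.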
You would not understand how this structure makes the system resilient and stable over the long run if you did not view the system's long-term behavior. In this example, the managers of the forest wanted consistent, stable returns because the local economy relied heavily on it. So when a natural budworm explosion happened, they sprayed pesticides to reduce the budworms. But in doing so they also reduced the budworms' natural predators and allowed the spruce trees to dominate even more. This structural change, fewer predators and more food for the budworms, meant they had created an environment where the risk of a budworm explosion was constant. That forced them to keep spraying ever more pesticide to prevent an outbreak, resulting in unsustainable costs and other unwanted effects. This really happened in North America from the 1950s until the pesticide, DDT, was banned in the 1970s.

In the models we make, we have to draw boundaries around what to include. If our system is behaving unexpectedly, the answer may lie at the boundaries of the model. Reality is one big continuum. To draw the best boundaries we must be clear about what we are trying to understand: boundaries that are too wide or too narrow can obscure it. As we discussed with the previous book, we must be aware of how our bias for categorical thinking influences this, so that we can stay flexible from model to model. We may limit our models to academic or political boundaries. A model confined to the borders of a nation may be appropriate for some questions but not for others, such as understanding ozone depletion. For a forest, pollution originating outside its borders may have to be included if it causes acid rain inside the forest. And as discussed in a previous entry, reinforcing loops exist in an environment of constraints, usually many different factors at once, meaning growth has limits.
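That idea of a reinforcing loop operating inside constraints can be sketched with a minimal growth model of my own (the logistic form and all numbers are assumptions, not the book's): growth is proportional to the stock (the reinforcing loop), with a brake that strengthens as the stock approaches a carrying capacity (the balancing loop).

```python
def simulate(pop: float, rate: float, capacity: float, steps: int) -> list[float]:
    """Discrete logistic growth: the reinforcing term (rate * pop)
    dominates while pop is small; the balancing factor (1 - pop/capacity)
    takes over as the constraint is approached."""
    history = [pop]
    for _ in range(steps):
        pop += rate * pop * (1 - pop / capacity)
        history.append(pop)
    return history

h = simulate(pop=10, rate=0.5, capacity=1000, steps=40)
print(f"step 5: {h[5]:.0f}, step 40: {h[40]:.0f}")  # early exponential rise, then levelling off near the capacity
```

Viewed event by event, the early steps look like pure exponential growth; only the behavior-level view over all forty steps reveals the shift in loop dominance.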
At any given time, of all these factors, the most significant one is the 'limiting factor': the one currently constraining growth. An example from the book is a corporate growth model showing how the limiting factor for a company shifts as it grows. To keep growing, the company must be able to foresee the next limiting factors before they bind.

Delays in feedback and response can cause under- and over-investment. Delays are ubiquitous in systems, and they can be a matter of seconds, months, decades, or centuries. To respond effectively, or before it's too late, foresight may be necessary: acting before the feedback arrives. We may not receive any feedback for years, but the moment a shift in loop dominance occurs, we may face an exponential force we can no longer respond to. Likewise, policy changes can affect things such as social norms that take a generation to reverse even after the policy itself is reversed.

This reminds me of my time managing Minecraft servers. I was mainly concerned with marketing, as players had usually been the limiting factor for growth. At one point I found a big gap in the market that led me to create a new server, a couple of months before the covid lockdowns. With those two forces together, within half a year the new server went from 0 players to 50k daily players, with 5k concurrent players at daily peak times (a lot, relative to this market). The limiting factor was no longer players but poor server performance and a lack of moderation. Anyone who has tried running a Minecraft Java server will know how poor the performance is (most game logic runs on a single thread). On my other server, a stable moderation team had been built up over years of running; it proved very difficult to build that up quickly for the new server after the player base had already exploded. Ultimately, growth stalled because of these two factors.
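The under- and over-investment dynamic from delayed feedback, mentioned above, can be sketched with a toy stock-management loop of my own invention (not the book's model): an actor keeps ordering to close the gap between a target and the current stock, but orders arrive a few steps late, and the actor ignores what is already in transit.

```python
def restock(target: float, delay: int, steps: int, gain: float = 0.5) -> list[float]:
    """Each step the actor orders gain * (target - stock), ignoring
    orders already in the pipeline; a negative order stands in for
    cutting back. Orders arrive `delay` steps later."""
    stock = 0.0
    pipeline = [0.0] * delay  # orders in transit, index 0 arrives next
    history = []
    for _ in range(steps):
        order = gain * (target - stock)  # naive rule: pipeline is invisible
        pipeline.append(order)
        stock += pipeline.pop(0)         # the oldest order arrives
        history.append(stock)
    return history

h_fast = restock(target=100, delay=0, steps=40)
h_slow = restock(target=100, delay=2, steps=40)
print(max(h_fast), max(h_slow))  # with the delay, stock overshoots far past the target
```

With no delay the stock approaches the target smoothly; with even a two-step delay it overshoots well past it and then oscillates, which is the over-investment the chapter warns about.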
I found myself in a position I didn't really want to be in. I preferred slow, stable growth, and with my trust issues I was unable to delegate work efficiently. Something I had started so I could get by without a job, with relative ease, had unexpectedly turned into something bigger, because along the way I got caught up in growing the player count and forgot what I had initially set out to achieve.

Just as we must be aware of a model's boundaries, we must be aware of what information flows to each subunit or actor in the system, since that affects their capacity for self-organization and whether their behavior aligns with the system's intended purpose. This is being aware of the 'bounded rationality' of each actor in the system. The book gives the example of the invisible hand, the economic idea that each actor can act in their own interest and their actions will still benefit the overall system collectively. This is the foundational idea behind free markets, and in many respects it works very well, but for the model to work it must assume that each actor has all the information and acts rationally on it. In reality, each actor sees the system only from their own angle, with imperfect data (limits, delays). We also understand that the brain must use shortcuts to function in practice, so humans as actors cannot act fully rationally. If we do not account for this, a system will keep behaving unexpectedly and may suddenly collapse when a shift in loop dominance occurs.

The book gives an interesting real-world example: in one neighborhood, for some reason, the electric meter had been installed in the basement of some houses and in the hall of others. During a period of pressure on the electricity supply, there was more awareness of consumption, and it was noticed that some houses in the area were using one third less electricity despite the households being otherwise similar.
The reason was that when the electric meter was in the hall, the residents were more aware of how their behavior affected their consumption: an example of how a difference in feedback information can change behavior. This is also why changing the structure is more effective than replacing the actors. If a new actor has the same information and incentives as the old one, the outcomes will simply repeat.