“The Purpose of a System Is What It Does.”
Derek & Laura Cabrera
Derek Cabrera (Ph.D., Cornell) is an internationally known systems scientist and serves on the faculty of Cornell University, where he teaches systems thinking, systems leadership, and systems mapping and is Program Director for the Graduate Certification Program in Systems Thinking, Modeling, and Leadership (STML). He is a senior scientist at Cabrera Research Lab. Laura Cabrera (B.S., M.P.A., and Ph.D., Cornell) currently teaches Systems Thinking and Modeling and Systems Leadership at Cornell University at the Institute for Policy Affairs. She is also a senior researcher at Cabrera Research Lab. Over the past decade, she has applied her expertise in research methods and translational research to increase public understanding, practical application, and dissemination of sophisticated systems science and systems thinking models.
More posts by Derek & Laura Cabrera
This post is adapted from Chapter 4 of Flock Not Clock.
We must sound a cautionary note on evaluating whether capacity and mission align. Systems and management scientist Stafford Beer developed an important and popular systems thinking heuristic known by the acronym POSIWID: “The purpose of a system is what it does.” Beer regarded POSIWID as “bald fact” and a better starting point for understanding a system than a focus on designers’ or users’ intention or expectations. When assessing alignment, we need to focus on what the system actually does rather than its ostensible, original, or ideal purpose (since these often do not match).
POSIWID has other important implications. It means that we need to think differently about how much control we really have over complex systems. Where capacity systems consist of many subsystems integrated into a system of systems, we want to look at them individually and collectively and ask:
- What is the system’s stated (ostensible) purpose?
- What is the system’s behavior?
- What does that behavior say about what the system’s purpose is?
- Is there alignment between the actual and ostensible purpose?
- If not, what is the system’s structure?
- How can we alter the structure to drive new behavior?
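The question sequence above can be sketched as a simple audit record. This is a hypothetical illustration (the class, field names, and the string-matching test for alignment are ours, not the authors'): each subsystem gets a stated purpose, an observed behavior, and the purpose that behavior implies, and misalignment is the signal to examine structure.

```python
from dataclasses import dataclass

@dataclass
class PosiwidAudit:
    """Hypothetical POSIWID audit of a single subsystem."""
    name: str
    stated_purpose: str      # the ostensible purpose
    observed_behavior: str   # what the system actually does
    inferred_purpose: str    # the purpose that behavior implies

    @property
    def aligned(self) -> bool:
        # Crude proxy: alignment means the inferred purpose
        # matches the stated one. A real audit is a judgment call.
        return (self.inferred_purpose.strip().lower()
                == self.stated_purpose.strip().lower())

audit = PosiwidAudit(
    name="sales pipeline",
    stated_purpose="convert leads into revenue",
    observed_behavior="reps spend most hours on internal reporting",
    inferred_purpose="produce internal reports",
)
print(audit.aligned)  # False -> look at structure to drive new behavior
```

The point of the structure is the last two questions: when `aligned` comes back false, the next move is not blame but redesign.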
In Activity 4.2, you can see the value of POSIWID thinking. It flips the system and its purpose on their head. Instead of looking at the results of a system as problematic, you look at the results of the system as designed, or by design. The worse the result, the clearer the value of POSIWID thinking. Take, for example, a company that is bleeding cash: you might look at this as a problem (and of course it is), but for a moment consider that everything about that company—all of its internal systems—is actually really good at spending money. This flips the problem on its head. We can now look for processes, cultural mores, and other parts of the system that are good at burning cash. Recasting the system's purpose as POSIWID recasts the problem you are trying to solve.
Activity 4.2: POSIWID activity
Instead of the system being badly designed to serve a good outcome, it is brilliantly designed to bring about a bad outcome. One way to determine whether a system's stated purpose is also its actual purpose (which should be its mission) is to measure or assess its effectiveness.
Remember the saying often attributed to Einstein: "Not everything that counts can be counted, and not everything that can be counted counts." Don't measure everything about a system; measure what matters. What matters in capacity systems? This is simple: how much capacity do they produce to do your mission? That's it. That's all that matters in these systems. Do they contribute to making your mission happen? As such, capacity assessment is a mission-critical system within all organizations.
Capturing, measuring, interpreting, and using data is a critical part of organizational life and a necessary focus of leadership. A dashboard is a metaphorical (or actual as realized through one of many cloud-based platforms) snapshot of the important data or metrics of your organization. Some of the more important metrics on your dashboard should pertain to mission. You need measures—indicators—of your capability to do your mission.
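As one minimal sketch of that idea (the indicator names, values, and thresholds here are invented for illustration, not taken from the book or any particular platform), a dashboard can be reduced to a handful of mission-linked indicators, each with a threshold below which mission capacity is at risk:

```python
# Hypothetical mission dashboard: each indicator maps to its current
# value and the threshold below which mission capacity is at risk.
indicators = {
    "volunteer_hours_per_month": {"value": 1200, "threshold": 1000},
    "cash_runway_months":        {"value": 4,    "threshold": 6},
    "client_renewal_rate":       {"value": 0.82, "threshold": 0.75},
}

def flags(board: dict) -> list[str]:
    """Return the indicators whose value has fallen below threshold."""
    return [k for k, v in board.items() if v["value"] < v["threshold"]]

print(flags(indicators))  # ['cash_runway_months']
```

The design choice worth noticing is what is absent: the dashboard carries only a few indicators, each chosen because it says something about capacity to do the mission, not everything that could be counted.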
As an uncommitted student in his early 20s at the University of Oregon, Derek was swept up in a regional conflict symbolized, either endearingly or pejoratively (depending on which side you were on), by the spotted owl. Loggers and the logging industry were pitted against environmentalists and (literally) tree huggers. One side wanted the industry jobs and natural resources that resulted from clear cutting the old growth of Oregon's lush forests, while the other side wanted the ecological and recreational conservation of those same lush forests. The spotted owl really wasn't the issue, but it was an indicator species. Scientists had determined that the size of the spotted owl population was an indicator of the larger health of the forest ecosystem. By maintaining an ongoing count of owls in the forest, scientists were measuring its health. Thus was born the "poster owl" for a movement to protect the forests.
There are countless "indicator species" in nature and in business and society—variables that tell us something about the big picture. These indicators are feedback from the real world. They're easier to count than, say, the qualitative health of a forest ecosystem. Lichen, for example, grows only in areas where the air quality is high. You won't often find lichen in cities. If lichen is in decline, the real world is giving you feedback not merely about lichen but about the air quality of the city itself.
One of the keys to monitoring capacity is to find the right “indicator species” to help you quickly measure the health of your systems, which is determined by a single outcome: whether systems are providing capacity for you to do your mission. Of course, there are often different indicators for each system, as well as the system of systems as a whole. But before you go on a hunt for low-hanging indicators, remember that you must find a way to measure what matters. We are awash in information. But one of the keenest insights from the field of research methodology is “just because you can [collect data] doesn’t mean you should.” Despite the current fixation with big data, having a lot of data isn’t in itself important. Having the right data is important. Measure what matters.
Here’s another way in which measurement is deceptively tricky. Often the ability to quantitatively measure something comes from the capacity to qualitatively understand it. This means that simply counting the things you can count isn’t enough.
Better to understand the things you want to measure (in science this is called construct validity) and learn, over time, to measure them.
Let's examine, for example, customer satisfaction. What is customer satisfaction? You might say that the degree to which folks continue to buy your product is a good indicator of customer satisfaction. Yet consider our own case: we've continued to subscribe to the cable company's service for our home for years. Every time we see that there's an emerging product on the horizon that might replace our dependence on cable, we get giddy. The second that product is viable, we'll toss cable like a used Kleenex. Until then, we'll continue our subscription, but are we really satisfied? A better but somewhat more difficult measurement to assess is how much we rave about a particular company, service, or product.
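One widely used metric built on exactly this "rave" intuition is the Net Promoter Score, which counts enthusiastic promoters against detractors rather than mere continued subscribers. The authors don't name NPS themselves, and the survey data below is invented, so treat this as one possible sketch of the idea:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """NPS: percent of promoters (ratings 9-10) minus percent of
    detractors (ratings 0-6), on a standard 0-10 survey scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Ten invented responses to "How likely are you to recommend us?"
print(net_promoter_score([10, 9, 8, 7, 6, 9, 10, 5, 8, 9]))  # 30.0
```

Notice that a passive 7 or 8 counts for nothing: a customer who keeps paying but would drop you the moment an alternative appears (like us and our cable subscription) does not move the score.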
The point is to think of metrics as measurements that are ever evolving to capture reality. And, most important, remember that the ability to understand a thing qualitatively must precede attempts to measure it quantitatively. You have to figure out which indicator you need to assess and how you might go about measuring it.