An Interlude: Forum for the Future’s System Innovation Animation

In my first entry, I wrote that the main motivation to start this blog, which I had been planning for a long time, came from Forum for the Future’s new strategy on system innovation. Last week they released a short animation announcing this new strategic move and explaining what system innovation is. I found the animation concise, clear and to the point. Of course, whenever a complex topic is simplified for the sake of making it easy to understand, certain aspects of the topic which make it complex (and which generally constitute the essence of the topic) are selectively eliminated. But for this very reason, I think the animation is very effective. The main messages given in the animation are completely aligned with those we ranty academics have been trying to get through to the public, businesses and governments for a long time. The difference is, we write pages and pages of reports, journal articles, books, etc., which none of these stakeholders can be bothered to read (even if they would bother, and even if they had the time, more often than not they don’t have access to the material), and these guys come up with a short animation which is fun to watch and free of perplexing scientific jargon. Another important determinant of effective communication is, of course, the timeliness of the message. Governments, organisations and individuals are starting to realise that sustainability issues are far more complex than we once thought them to be, and that there is a definite need to move beyond single-issue-focused optimisation approaches. I’d like to share this animation here for two reasons. First, simply because I’d like to spread it so that more people will hear about system innovation. Second, using it as an inspiring spark, I’d like to ask some questions I have in my mind about how best to approach system innovation (which I’ll leave to my upcoming entries).

So, here’s the animation:

To recap, the main messages in this animation are:

1. Although there have been efforts to achieve sustainability for a long time, unsustainability prevails (my addition: in fact, the indicators tell us that it’s worsening at great pace);

2. In order to achieve sustainability, instead of focusing on individual elements (products, services, companies, etc) we need to focus on systems (my addition: well, sustainability is a property of systems and not of individual system elements, so please, no more “this is a sustainable product”, “we are a sustainable company” nonsense);

3. Focusing on systems requires collaboration of all involved stakeholders (my addition: well, practically every single one of us);

4. To achieve system innovation, measurement and analysis tools, futures thinking and futures inquiry tools, and creativity and innovation tools are needed (my addition: this corresponds to the three types of knowledge needed for systemic interventions: 1. systems knowledge; 2. target knowledge; and 3. transformation knowledge (Wiek, Binder & Scholz, 2006)).

5. FFF proposes to start their system innovation adventure by focusing on three sectors: food, energy and finance.

References used in this post:

Wiek, A., Binder, C., & Scholz, R. W. (2006). Functions of scenarios in transition processes. Futures, 38(7), 740-766.

Complexity and co-evolution

Socio-technical systems are complex adaptive systems. Therefore, in order to attempt to initiate and steer system innovations, we must understand what complex adaptive systems are and how they behave.

Defining complex systems is not an easy task. As a starting point, complex systems are what simple systems are not. The major distinguishing characteristics of simple systems are predictable behaviour, a small number of components with few interactions among them, centralised decision-making and decomposability (Casti, 1986). Therefore, through negation of these characteristics, the major characteristics of complex systems can be identified as unpredictable behaviour, a large number of components with many interactions among them, decentralised decision-making and limited or no decomposability. A distinction between complicated and complex systems is also useful here. Cilliers (1998) argues that if a system has a very large number of components but can still be fully analysed, the system is complicated rather than complex. A complex system, in contrast to a complicated one, has intricate sets of non-linear feedback loops, so that it can only ever be partially analysed at a time. In this sense a machine of any kind with a large number of parts is complicated, whereas a human being or an ecosystem is complex.

Funtowicz and Ravetz (1994) classify complex systems as ordinary or emergent. They argue that ordinary complex systems tend to remain in dynamic stability until the system is overwhelmed by perturbations, such as direct assaults like fire or invaders. Conversely, in emergent complex systems there is continuous novelty, and these systems cannot be fully explained mechanistically or functionally, since some of their elements possess individuality, intention, purpose, foresight and values. Any system involving society is thus an emergent complex system.

Hjorth and Bagheri (2006) state that complex systems cannot be fragmented without losing their identities and purposefulness. Similarly, Linstone (1999) refers to the general illusion, or misassumption, that we can break complex systems into parts and study these parts in isolation. He calls this ‘a crucial assumption of reductionism’ (p. 15) and points out that the linearity it implies is not a characteristic of complex systems. Indeed, in complex systems, complexity is determined not by the characteristics of the components of the system but rather by the relationships and interactions between the components (Manson, 2001). The interaction between the components is not necessarily physical but can also take the form of information exchange (Cilliers, 1998). Mant (1997) gives an illustrative example of the irreducibility of complex systems in his frog and bicycle analogy. One can dismantle a bicycle, carry out maintenance and reassemble it; the bicycle is still a bicycle and works perfectly. If, however, you separate a part of a frog for any reason and keep breaking it apart, the frog will make unpredictable adjustments to survive until, at some point, the system (i.e. the frog) tips over into collapse. Therefore, it is not possible to study complex systems meaningfully by breaking them into their components. When there is a need to define system boundaries, this should be done while acknowledging how the part under study relates to the rest of the system.

In addition to irreducibility and emergent behaviour, the other characteristics of complex systems are self-organisation, continuous change, sensitivity to initial conditions, learning, irreducible uncertainty, and contextuality (Cilliers, 1998; Gallopín, Funtowicz, O’Connor & Ravetz, 2001; Manson, 2001; Cooke-Davies, Cicmil, Crawford & Richardson, 2007). Complex systems are in general hierarchic, or multi-levelled: each element is a subsystem, and each system is part of a bigger system (Casti, 1986; Gallopín et al., 2001; Holling, 2001; Gallopín, 2004). Hierarchical structures have adaptive significance (Simon, 1974). This adaptive significance is due not to top-down authoritative control but rather to the formation of semi-autonomous levels which interact with each other and pass on material and/or information to the higher and slower levels (Holling, 2001).
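Sensitivity to initial conditions is easy to demonstrate with a toy model. The sketch below is my own illustration, not drawn from any of the sources cited above: it iterates the logistic map, a one-line non-linear feedback rule. Two trajectories that start almost identically soon follow completely different paths, which is why the behaviour of even very simple non-linear systems is unpredictable in practice.

```python
def logistic_trajectory(x0, r=4.0, steps=100):
    """Iterate the logistic map x -> r*x*(1-x), a minimal non-linear feedback loop."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points differing by one part in ten million
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2000001)

# Early on the trajectories are practically indistinguishable...
print(abs(a[1] - b[1]))
# ...but within a few dozen steps they diverge completely
print(max(abs(x - y) for x, y in zip(a, b)))
```

The point is not the map itself but the pattern: no measurement of the starting state, however precise, buys long-term predictability once non-linear feedback is at work.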

It is impossible for an analyst to understand a complex system totally and correctly. However, some requirements for analysis can be derived from the characteristics listed above. First, emergent behaviour, sensitivity to initial conditions and the learning undertaken by system components imply that complex systems are time-dependent. This time-dependency is two-fold: both the history of the system and the particular moment at which the analysis is undertaken will affect the outcome. Since context is important for understanding adaptive systems, and since there are multiple levels in a system, an analysis should include more than one level as well as the different perspectives present in the system (Gallopín et al., 2001; Gallopín, 2004). For an effective analysis, the analyst needs to observe the (sub)system being analysed from a vantage point. This vantage point should be at a higher, or preferably meta-, level, so as to identify a context-specific perspective while still acknowledging the interconnections between the (sub)system being analysed and the rest (Espinosa, Harnden & Walker, 2008).

The three major subsystems of the meta-system (i.e. ecology, economy, society) and most of their sub-systems (e.g. evolutionary processes, market operations, individual animals, companies, etc.) fall under a special category of complex systems known as complex adaptive systems (CAS). The distinguishing feature of CAS is that ‘they interact with their environment and change in response to a change’ (Clayton & Radcliffe, 1996, p. 23). They are resilient; therefore, they ‘can tolerate certain levels of stress or degradation’ (p. 31). As a result, the sustainability of a CAS can be achieved as long as its adaptive capacity is not destroyed.

The sustainability of a single entity depends on, and is determined by, the sustainability of the other components with which it interacts. Together all these components form a system; therefore, sustainability can only be achieved through non-reductionist, dynamic systems thinking. The subsystems of a system should be adaptable to changes which occur both in the other subsystems and, as a result, in the entire system. The subsystems must co-evolve to render sustainability possible.

The term co-evolution was coined by Ehrlich and Raven in 1964 to explain the mutual evolutionary processes of plants and butterflies (Ehrlich & Raven, 1964). Even though the term first emerged in evolutionary biology, it spread to other, especially interdisciplinary, domains studying interactions between natural and human-made systems (Norgaard, 1984, 1995; Winder, McIntosh, & Jeffrey, 2005; Rammel, Stagl, & Wilfing, 2007). Some of the domains which use the co-evolutionary approach to explain, analyse and manage interacting natural and social systems include technology studies, organisational science, environmental and resource management, ecological economics and policy studies (Rammel et al., 2007; Kallis, 2007a).

It is important to note here that, despite the many similarities between biological evolution and social, cultural, technological and economic change, there are differences as well (Rammel & Van Den Bergh, 2003; Kallis, 2007b). In the wider context of sustainable development, co-evolutionary change does not necessarily happen on a reactive basis, as it generally does in ecosystems. Rather, at the socio-economic or socio-technical level, it can also be deliberately pursued by system components, at both the individual and collective levels, in accordance with changing system conditions (Holling, 2001; Cairns Jr, 2007; Kemp, Loorbach, & Rotmans, 2007). Co-evolution is reflexive and refers to the mutual change of all system components. During this mutual change, one component may or may not dictate a change over the other(s).
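The mutual-change idea can be caricatured in a few lines of code. The following is my own toy illustration, not a model from the literature cited above: two components, say a technology and a user practice, each adjust towards the other’s current state, with made-up adjustment rates k_tech and k_practice. When both rates are positive, the components converge on a jointly produced state that neither dictated; setting one rate to zero makes the other component do all the adapting, i.e. one side dictates the change.

```python
def coevolve(tech=0.0, practice=1.0, k_tech=0.1, k_practice=0.1, steps=200):
    """Each component adjusts towards the other's current state at every step."""
    for _ in range(steps):
        tech = tech + k_tech * (practice - tech)              # technology adapts to practice
        practice = practice + k_practice * (tech - practice)  # practice adapts to technology
    return tech, practice

# Mutual adaptation: both components move, and they converge on each other
tech, practice = coevolve()
print(tech, practice)
```

Crude as it is, the sketch captures the reflexivity in the definition above: the end state is a product of the interaction itself, not a property of either component in isolation.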

References used in this post:

Cairns Jr, J. (2007). Sustainable co-evolution. International Journal of Sustainable Development and World Ecology, 14(1), 103-108.

Casti, J. L. (1986). On system complexity: identification, measurement and management. In J. L. Casti & A. Karlquist (Eds.), Complexity, Language and Life: Mathematical Approaches (pp. 146-173). Berlin: Springer-Verlag.

Cilliers, P. (1998). Complexity and postmodernism: understanding complex systems. London; New York: Routledge.

Clayton, A. M. H., & Radcliffe, N. J. (1996). Sustainability: a systems approach. London: Earthscan.

Cooke-Davies, T., Cicmil, S., Crawford, L., & Richardson, K. (2007). We’re not in Kansas Anymore, Toto: Mapping the Strange Landscape of Complexity Theory, and Its Relationship to Project Management. Project Management Journal, 38(2), 50-61.

Ehrlich, P. R., & Raven, P. H. (1964). Butterflies and Plants: A Study in Coevolution. Evolution, 18(4), 586-608.

Espinosa, A., Harnden, R., & Walker, J. (2008). A complexity approach to sustainability – Stafford Beer revisited. European Journal of Operational Research, 187(2), 636-651.

Funtowicz, S., & Ravetz, J. R. (1994). Emergent complex systems. Futures, 26(6), 568-582.

Gallopín, G. C., Funtowicz, S., O’Connor, M., & Ravetz, J. (2001). Science for the twenty-first century: From social contract to the scientific core. International Social Science Journal, 53(168), 219-229.

Gallopín, G. (2004). Sustainable Development: Epistemological Challenges to Science and Technology. presented at the meeting of the Workshop on Sustainable Development: Epistemological Challenges to Science and Technology, Santiago, Chile.

Hjorth, P., & Bagheri, A. (2006). Navigating towards sustainable development: A system dynamics approach. Futures, 38(1), 74-92.

Holling, C. S. (2001). Understanding the complexity of economic, ecological, and social systems. Ecosystems, 4(5), 390-405.

Kallis, G. (2007a). Socio-environmental co-evolution: some ideas for an analytical approach. International Journal of Sustainable Development and World Ecology, 14, 4-13. 

Kallis, G. (2007b). When is it coevolution? Ecological Economics, 62(1), 1-6.

Kemp, R., Loorbach, D., & Rotmans, J. (2007). Transition management as a model for managing processes of co-evolution towards sustainable development. International Journal of Sustainable Development and World Ecology, 14(1), 78-91.

Linstone, H. A. (1999). Decision Making for Technology Executives: Using Multiple Perspectives to Improve Performance. Norwood, Mass.: Artech House.

Manson, S. M. (2001). Simplifying complexity: A review of complexity theory. Geoforum, 32(3), 405-414.

Mant, A. (1997). Intelligent leadership. St. Leonards, N.S.W.: Allen & Unwin.

Norgaard, R. B. (1984). Coevolutionary Development Potential. Land Economics, 60(2), 160-173.

Norgaard, R. B. (1995). Development Betrayed: The End of Progress and a Coevolutionary Revisioning of the Future. London; New York: Routledge.

Rammel, C., Stagl, S., & Wilfing, H. (2007). Managing complex adaptive systems — A co-evolutionary perspective on natural resource management. Ecological Economics, 63(1), 9-21.

Rammel, C., & Van Den Bergh, J. C. J. M. (2003). Evolutionary policies for sustainable development: Adaptive flexibility and risk minimising. Ecological Economics, 47(2-3), 121-133.

Simon, H. A. (1974). The organization of complex systems. In H. H. Pattee (Ed.), Hierarchy theory: the challenge of complex systems (pp. 3-27). New York: Braziller.

Winder, N., McIntosh, B. S., & Jeffrey, P. (2005). The origin, diagnostic attributes and practical application of co-evolutionary theory. Ecological Economics, 54(4), 347-361.


What is system innovation for sustainability?

System innovation is defined as “a transition from one socio-technical system to another” (Geels, 2005, p. 2). Some historical examples of system innovation are the transition from sailing ships to steamships, the transition from horse-and-carriage to automobiles, and the transition from piston-engine aircraft to jetliners in American aviation (Geels, 2002a, 2002b, 2005). Much more profound examples of system innovation are the agricultural and industrial revolutions, both of which fundamentally changed how society operates. Society is currently experiencing another profound system innovation, driven by the rapid development and diffusion of information and communication technologies. Since system innovation is a transformation which takes place in the wider societal context, it covers not only product and process innovations but also changes in user practices, markets, policy, regulations, culture, infrastructure, lifestyles, and the management of firms (see, for example, Berkhout, 2002; Geels, 2006; Kemp and Rotmans, 2005; Sartorius, 2006). In other words, system innovation occurs when the societal system starts to function differently, requiring fundamental structural change (Frantzeskaki and De Haan, 2009).

Historical examples of system innovation differ from system innovation for sustainability simply in not having had a predefined, desired outcome. Unlike the historical examples, endeavours to achieve system innovation for sustainability have a desired outcome: sustainable socio-technical systems. This raises questions about what sustainability means, how the sustainability of a system can be achieved, what characteristics socio-technical systems have, and how we can change socio-technical systems. Answers to these will be investigated in my upcoming musings. But next, I’ll write about the history of system innovation: how it all started and where it is now.

References used in this post:

Berkhout, F. (2002). Technological regimes, path dependency and the environment. Global Environmental Change, 12(1), 1-4.

Frantzeskaki, N., & De Haan, H. (2009). Transitions: Two steps from theory to policy. Futures, 41(9), 593-606.

Geels, F. W. (2002a). Technological transitions as evolutionary reconfiguration processes: a multi-level perspective and a case-study. Research Policy, 31(8-9), 1257-1274.

Geels, F. W. (2002b). Understanding the Dynamics of Technological Transitions: a co-evolutionary and socio-technical analysis. Unpublished doctoral dissertation, University of Twente.

Geels, F. W. (2005). Technological transitions and system innovations: a co-evolutionary and socio-technical analysis. Cheltenham, UK; Northampton, Mass.: Edward Elgar.

Geels, F. W. (2006). System innovations and transitions to sustainability: challenges for innovation theory. Paper presented at the SPRU 40th Anniversary Conference, 11-13 September 2006.

Kemp, R., & Rotmans, J. (2005). The management of the co-evolution of technical, environmental and social systems. In M. Weber & J. Hemmelskamp (Eds.), Towards environmental innovation systems (pp. 33-55). Berlin; New York: Springer.

Sartorius, C. (2006). Second-order sustainability – conditions for the development of sustainable innovations in a dynamic environment. Ecological Economics, 58(2), 268-286.