“Man thinks he is the master of language, but language is the master of man.”
Language affects how we perceive the world - and ourselves. It shapes our thinking and our actions at every scale. Our language has, for the last few hundred years, been inspired by scientific progress and the inner workings of machines. Just stop and think about it for a minute: We are happy when our projects run like well-oiled machines. The market mechanism keeps the wheels turning. Our brains are thought of as hardware and consciousness as software. We use the analogy of a blind watchmaker to explain evolution. We drive change programmes; we upgrade, gear up, debug, reboot, streamline and ramp up left and right. The language of machines is everywhere.
Most of the time, however, this is not an accurate use of language.
Different systems require different languages
There are at least three types of systems that are ontologically distinct: ordered systems, complex adaptive systems and chaotic systems.
Machines are ordered. You can make a blueprint of a machine. If you break it, then with the right skills and tools, you can put it back together: Problem. Solved. Within the realm of engineering, the language of machines is useful.
Living systems, on the other hand, are complex. They consist of nested hierarchies of organisation: cells make up organs, which make up humans, which make up societies, and so on. All of these levels of organisation exert dynamic upward and downward causation on one another. In addition, complex adaptive systems have a unique potential for emergence, which allows completely new features to arise from diverse interactions between parts.
You cannot fix complex systems. There are no distinct problems or solutions in complex systems - there are only responses. Some are wiser than others. You may have learned this in your relationship with your family.
Language influences what is possible
Changing language does not automatically change everything, but it does change the possibility space.
The language of machines made possible the belief that we can take anything apart, fix its parts in isolation, put it back together, and expect the reconstructed whole to work just fine.
The language of machines made possible the belief that we can have linear materials economies that use natural resources as input and create pollution as output.
The language of machines made possible the belief that we can name 17 sustainable development goals, address them in isolation, and expect our cumulative effort to “solve” the whole.
The language of machines made possible the belief that we can solve any narrowly defined goal without simultaneously causing psychological, social or environmental externalities.
The language of machines made possible the belief in reductionism, materialism, objectivism and positivism, atomism and individualism (i.e., social atomism).
The hubris of modernity speaks the language of machines.
A “new” language to navigate complexity?
Think of a meadow. There is no one aspect of a meadow that is more important than another: the plants, the pollinators, the topsoil, the micronutrients, the fungi, the sunlight and the oxygen are all equally important for the meadow to flourish. And so are gravity and electromagnetism. Similarly, there are no root causes or individual problems to be solved. In a meadow, nothing is single-purpose. Each part of the meadow has multiple functions. Further, there are no fully separate components or clear boundaries. Everything is in an interdependent and co-evolving flux. At the risk of sounding a bit new age, the idea of there being something like separate components is, in this sense, not ontologically accurate.
Now, how would this new proposed ‘Language of Meadows’ affect our thinking and our actions? What would it make possible?
Might we ask different kinds of questions? Might we focus on relationships, instead of the things themselves? Might our attention shift from optimisation to learning? Might we start seeing ‘underlying dynamics’ that give rise to ‘ecologies of symptoms’? Might we start thinking about ourselves in relation to others, in relation to machines, in relation to the natural world, in relation to the future, in relation to the past and, of course, in relation to ourselves?
At the VTT iBEX programme, the object of our work is shifting from technological innovation to systems innovation. During this process, it is becoming apparent that contradictions and tensions arise in different parts of the organisational system. These tensions show up in the adequacy of the tools we use, in incentive structures, in skills and competencies, in work communities - and not least in the language we use. We would argue that only when all of the above start shifting can we begin to navigate complexity in any meaningful way.
This blog post draws inspiration from Nora Bateson's analogy of a meadow as a metaphor for understanding complexity, and from Dave Snowden's Cynefin framework, which differentiates between ordered, complex, and chaotic systems.
Katri Kallio, VTT
Thomas Holm, Leapfrog
Maija Ojanen-Saloranta, VTT
Szymon Wiktorowicz, VTT