

‘Concerning the challenge we just faced about how to describe things in numbers and definitions, what is the reason for a unity/oneness? For however many things have a plurality of parts and are not merely a complete aggregate but instead some kind of a whole beyond its parts, there is some cause of it, since even in bodies, for some the fact that there is contact is the cause of a unity/oneness, while for others there is viscosity or some other characteristic of this sort. But a definition [which is an] explanation is one [thing] not because it is bound-together, like the Iliad, but because it is a definition of a single thing’

Often paraphrased as ‘The whole is greater than the sum of the parts’

Περὶ δὲ τῆς ἀπορίας τῆς εἰρημένης περί τε τοὺς ὁρισμοὺς καὶ περὶ τοὺς ἀριθμούς, τί αἴτιον τοῦ ἓν εἶναι; πάντων γὰρ ὅσα πλείω μέρη ἔχει καὶ μή ἐστιν οἷον σωρὸς τὸ πᾶν ἀλλ᾿ ἔστι τι τὸ ὅλον παρὰ τὰ μόρια, ἔστι τι αἴτιον, ἐπεὶ καὶ ἐν τοῖς σώμασι τοῖς μὲν ἁφὴ αἰτία τοῦ ἓν εἶναι, τοῖς δὲ γλισχρότης ἤ τι πάθος ἕτερον τοιοῦτον. ὁ δ᾿ ὁρισμὸς λόγος ἐστὶν εἷς οὐ συνδέσμῳ καθάπερ ἡ Ἰλιάς, ἀλλὰ τῷ ἑνὸς εἶναι.

Aristotle, Metaphysics 8.6 [=1045a]

The first law of thermodynamics states that the total amount of energy in a closed system cannot be created or destroyed (though it can be changed from one form to another). In spite of the rider (given in parentheses) there is in this law a sense of the eternal and changeless nature of the ‘fundamental stuff of the universe’, the energy out of which all else is built.

In contrast, we know that the world was once a formless plasma and that out of this, in the course of the history of the universe, emerged something as complex as the human being. In spite of the lasting presence of energy this massive historical transformation from plasma to human being entailed an explosion of novelty – of changes in form, properties, relations, and substances. And included in the novelty of things that emerged from plasmic sameness were human language and ideas as representations of the world.

These two mental pictures of the world challenge us with a dilemma, an antinomy of outlook, a conundrum that dates back to the ancients. They represent two outlooks, two perspectives or ways of looking at the world – even expectations about the way it ‘really’ is. These are the worlds of eternal and changeless ‘being’ and the constant flux of the world of ‘becoming’, presenting us with the contradiction of permanence in change, and with conflicting metaphysical intuitions. I mention this here because it arises again and again when considering the question of ‘reduction’.

If all the world is one (perhaps energy, or space-time, or physico-chemical processes) then how do we account for variety? On the one hand we want to account for ‘everything’ in all its complexity, but at the same time we want to reduce all this complexity to its simplest possible irreducible components.

This metaphysical question will be addressed in more detail elsewhere but for the time-being it will be taken as self-evident that with increasing complexity has come all manner of novelty. From the plasma of the Big Bang emerged not only elements, compounds, rocks, plants, fungi, and animals, but their associated properties and relations, including human thoughts and ideas. The origination of novelty has been referred to as ‘emergence’.


But how do we account for emergence? Is it possible to predict the emergence of new objects, properties, and relations?

Aristotle’s quote (above) expresses a deep philosophical issue that has persisted to this day, a question about wholes, parts, and their relations. [11]


Interest in the emergent properties[12] of structured wholes gathered popularity in modern times with the notion of holism when South African statesman Jan Smuts, in his book Holism and Evolution (1926), coined the word ‘holism’ in reference to explanations that invoke larger or wider scales.

Holism (and its later variants organicism, organismic biology, emergence) placed emphasis not on the material components of systems, but their relations, drawing attention to the interdependence of parts, homeostasis, the operation of networks and communication systems, self-regulation, and the properties of complex systems in general.

British emergentists developed a theory of the hierarchical structure of levels of matter aggregated into irreducible wholes with emergent properties. However, debate between proponents and opponents of this view about emergent properties became bogged down in poor definition with claimants talking at cross-purposes.

American philosopher Ernest Nagel in the 1960s tried to express the apparently contrasting views[6] by defining reductionism as the claim that ‘all the events in nature are simply the spatial rearrangement of a set of ‘ultimate’ items whose total number, properties and laws of behaviour remain unchanged regardless of any rearrangement’ (sometimes called generative atomism[9]) and, further, that ‘We can account for novelty simply through the playing out of physical laws in time as matter combines in various ways’. There was an implication that, if we had a complete physics, then we could account for all events, both past and future. In contrast, emergentism claimed that new kinds of behaviour conforming to novel modes of dependence arise when hitherto non-existent combinations and integrations of matter occur. This gave rise to new qualities, structures, properties, and processes. Biology was central to this view: for example, it was claimed that an organism is an operational whole that has qualities or properties that are not possessed by the molecules or elements that make it up; life has properties beyond those of the inanimate matter of which it is comprised, and these are associated with new causal powers. Since the causal powers relate to the entity itself and not to its components, the entity is irreducible. The emergent properties and causality cannot be predicted from the base properties while nevertheless depending on them.

Reductionism proceeded on an assumption that examining objects at ever finer resolutions provided more rigorously scientific explanations.

Are emergence and reductionism incompatible?

Emergence seems to assume that particular characteristics or properties are either scale-free or inappropriate for reduction: this is expressed as three objections to reductionism.

1. Hierarchically organised systems exhibit properties and/or processes at higher levels that are in some sense autonomous (and possible sources of ‘downward causation’) and cannot be predicted from those at lower levels (irreducible hierarchical organisation – this denies that reduction brings increasing explanatory power, clarity, and predictive value). Prime examples include the emergence of life and consciousness. Emergence is also associated with increasing unpredictability.

2. Simple structures give rise to or evolve novel traits and structures that cannot be accounted for in a reductionist framework – the problem of complexification (a kind of evolutionary cosmogony)

3. Organic wholes exhibit function in a way that inanimate matter does not

In general terms these objections relate to matters concerning predictability, the origin of novelty and complexity, and functional explanation. We need clear examples; here is a selection to give you a flavour of the debate:

Consider the following: the synergistic behaviour of large flocks of birds and shoals of fish; the formation of a snowflake; the integrated activity in an ant colony; tidal ripples in sand on the beach; ants resolving the ‘travelling salesman’s dilemma’ by finding the shortest route between about eleven locations – a massive computer calculation achieved by mindless ants following simple innate rules of behaviour and pheromone trails; the ‘wisdom of the crowd’, whereby guessing the number of jelly beans in a jar is difficult for an individual, yet averaging the estimates of numerous individuals closely approximates the correct answer; the way Alzheimer’s disease does not target information held in individual neurons but weakens the capacity of a neural network as a whole; how ‘swarm intelligence’ of insect colonies gives rise to communally-directed activity; the slime mould that, oozing along the ground without conscious deliberation but using chemical feedback, traces out the shortest route through a maze connecting two sources of nutrient; music coming from an orchestra of 50 musicians; the economic or social patterns resulting from individual choices.
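The ‘wisdom of the crowd’ example is easy to simulate. The sketch below is a minimal illustration only, assuming a hypothetical jar of 500 beans and guessers whose errors are symmetrically distributed; the point is that the averaged estimate lands far closer to the truth than a typical individual guess does.

```python
import random

random.seed(42)
true_count = 500  # hypothetical number of jelly beans in the jar

# each individual's guess is noisy: off by up to 40% either way
guesses = [true_count * random.uniform(0.6, 1.4) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
worst_error = max(abs(g - true_count) for g in guesses)

# averaging cancels the individual errors: the crowd's collective
# estimate is far more accurate than the worst (and most) individuals
assert abs(crowd_estimate - true_count) < 20
assert worst_error > 100
```

The accuracy of the average is not a property of any one guesser; it emerges only at the level of the collective, which is precisely the kind of claim at issue in the debate.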

Attention is drawn to the way that characters of one part might seem random and undirected (disorganised) but, when seen as a member of a collective, there is pattern (organised), like individuals in human society. In this sense we can see reducibility as a matter of degree depending on the system. Even so, emergent properties need not be mysterious and unexplainable in terms of system components. There is a gradation of wholes where the interdependence of parts varies from negligible (as in the sugar crystals of a sugar lump) to critical (as with the organs of a human body, where a change in any single part can cause a change in all the others). Living systems are regarded as integrated wholes whose properties cannot be reduced to those of smaller parts. Their essential, or ‘systemic’, properties are properties of the whole, which none of the parts share. They arise from the organizing relations of the parts, i.e. from the configuration of ordered relationships that is characteristic of that particular class of organisms or systems. Systemic properties are destroyed when a system is dissected into isolated elements.

The study of emergence has become part of systems and complexity theory which is, to all intents and purposes, the study of order in the universe. If we regard order as ‘the aggregation of correlated phenomena into composite patterns that allow us to make sense of the world’, then it is immediately apparent that the composite patterns of everyday life look well beyond the concepts and vocabulary of physics. The study includes pattern and pattern formation and transformation, open and closed systems, synergies, symmetries, and complexity.


For clarity, we must begin with a definition. Emergence can be defined generally as ‘the coming into existence of a novelty that could not have been predicted’ or, more specifically, ‘non-linear pattern formation where synergies between parts give rise to new patterns of organization’.

A major question is whether wholes (living organisms of cells, societies of individual people) exist in any sense independently of the elements out of which they are made since new properties, functions and patterns form as more parts are added in various arrangements.

Novelties exhibit degrees of strength, which has given rise to two schools of thought: weak emergence, and strong emergence.

Weak emergence

Though constituted of simpler parts, the unexpected novelty could, at least in principle and given sufficient computational power, be explained in terms of those parts. Though the features of the whole are ontologically and causally derived from its constituent parts, irreducibility results from the complexity of the attempt. For example, the structure of a flower could, in theory, be described in terms of its molecular composition, but this would be impractically complex. The parts are not constrained by the nature of the novelty (whole) – macro does not influence micro.
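What ‘derivable in principle by computation’ means can be sketched with a toy case. In the Python sketch below (Conway’s Game of Life standing in for the flower example, an assumption for illustration only), nothing in the local update rule mentions movement, yet brute simulation of the parts reproduces the ‘glider’, a macro-pattern that drifts diagonally across the grid.

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (row, col) cells."""
    # count how many live neighbours each candidate cell has
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # birth on exactly 3 neighbours, survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)

# after 4 generations the glider has moved one cell diagonally:
# a whole-level behaviour nowhere stated in the cell-level rule
assert gen == {(r + 1, c + 1) for (r, c) in glider}
```

The drift is weakly emergent in exactly the sense above: derivable from the parts, but only by grinding through the computation.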

Strong emergence

Though constituted of simpler parts, the unexpected novelty is not derivable, even in theory, from the features of its constituent parts and their interactions. For example, the liquid solvent properties of water arise from the synergies of hydrogen and oxygen and cannot be explained in terms of the properties of the elements alone. An account of water requires new descriptive categories, concepts, and terms. This new set of categories can be referred to as a new (higher or lower) ‘integrative level’ (aspect). The novelty follows irreducible regularities (rules) that are not evident in the parts. The characteristics of the parts are constrained by the nature of the novelty (whole). Macro influences micro. Examples include quantum entanglement, water, life, and human consciousness.

Philosophers distinguish between epistemic and ontological emergence.


There is a lack of clarity over what exactly we mean, firstly by ‘reduction’ but also by Aristotle’s claim to the effect that ‘the whole is more than the sum of its parts’. Both statements are ambiguous because they can be interpreted in several ways – they have various meanings (polysemy).[2] Aristotle’s statement is one of unhelpful generality: we can defend or deny its claims truthfully depending on what we choose to mean by ‘whole’, ‘sum’ and ‘part’. The number three, for example, might be considered a combination of the numbers one and two. The claim that three is more than the sum of one and two can be made and defended, but it would stretch our everyday understanding of ‘whole’ and ‘part’. Each claim, then, is best examined on its own particular merit.

But let’s begin by thinking in general about parts and wholes. Immediately we face a philosophical paradox, dilemma, contradiction, or antinomy – the problem of the one and the many. We can, it seems, claim with equal validity either a variety of existing things, or that this variety can be explained in terms of a single thing (reality or substance) – that only one thing is ontologically basic or prior to everything else. Perhaps the universe is a single thing that can only be artificially and arbitrarily divided into many things, a problem already noted.

Parts & wholes

Perhaps something can be learned about parts and wholes from the way that we think, as this can influence our intuitions about the world?

The mental process

To think at all we need something to think about. As a matter of psychological necessity we need units of thought – let’s call them concepts. Concepts express both generality and particularity: for example, the general concept of dog, and the particular concept of my dog Rover. We might call this the detail or grain of our thought. Examples of very general concepts would be matter, space, time, or music. Then, to establish particularity, it becomes psychologically helpful to break up the generalities into units that act as building blocks out of which we can then construct a framework of thought. These units act as ‘axioms’ that we take for granted: they are standards or yardsticks against which we measure and construct other things. Sometimes there seems to be a single foundational unit, like the atom of Democritus, the brick of a house, or the notoriously difficult-to-define biological species. Biologically, at the microscopic scale, we have cells. Sometimes we just use a range of convenient units without placing emphasis on one as being fundamental to all the others. In music the crotchet is perhaps a foundational note that can be added to, or subdivided. The unit of time is perhaps the second or minute, while ‘now’ is contentious. Number systems rest on the building block of a single unit, the number one. Units of space, like centimetres, metres, and miles, seem to lack a single foundational unit.

Principle 1 – in order to achieve cognitive focus the mind must create units of thought (concepts) as representations of the external world. General concepts are often broken up into units, often with a particular kind of unit taken as being fundamental. Science tries to make the relationship between our representations and the external world as intelligible as possible


Taking the universe to be the largest physical whole, everything in it is in some way, no matter how obscurely, related to everything else. It follows that if we are to understand and explain the universe in its entirety then the most effective way of doing this is to provide some kind of total circumscription – otherwise we will be leaving some parts out and the explanation will be incomplete. But describing the entire universe scientifically in all its complexity seems an impossible task so, according to cosmologist Stephen Hawking: ‘Instead, we break up the problem into bits and invent a number of partial theories. Each of these partial theories describes and predicts a certain limited class of observations, neglecting the effects of other quantities …’ … but … ‘If everything in the universe depends on everything else in a fundamental way, it might be impossible to get close to a full solution by investigating parts of the problem in isolation’.[6]

Open & closed systems

Perhaps another way of expressing this difficulty is to regard it as a matter of context: what exactly is under consideration and what is not? Are we dealing, explicitly or implicitly, with an open or closed system?

This relates to the analytic and synthetic approaches to explanation. To be objective we can study an object or event within its total context, or we can isolate it from this context in order to manipulate or consider a limited range of variables.

Let’s try to tease out some basic distinctions and ideas.

Part – a part is generally related to and explained in terms of the whole of which it is a part. This simply follows the semantics: if it is a part then it is a part of something else; that is, our understanding of a part depends on its role within a wider context. To understand and explain an ant as a part, rather than a whole, we need to know how it interacts with other ants and its environment

Whole – again, following the semantics, a whole is generally (though perhaps not so strongly as the part) related to and explained or analysed in terms of its constituent parts. To understand and explain an ant as an individual organism we need to know about its parts and the way they interact

Paradoxically every[3] object in the universe (apart from the universe itself) can be both a whole and a part. An ant is a whole individual but it is also a part of a colony … and so on. This creates a cognitive dissonance since we feel intuitively that something cannot be two things at the same time, both a whole and a part. Like a visual illusion, this cognitive illusion relates to our tendency to contemplate a situation from only one viewpoint at a time, not both at once. If you break a rock in two, do you then have two rocks – or one rock in two parts? This paradox is not invariable since we can consider an object in terms of both its wider context and its constituent parts, although we laugh uneasily at the cognitive dissonance when we ask ‘Which came first, the chicken or the egg?‘ as we waver between considering the wider context by ‘looking forward’ to a destination and whole (becoming a chicken), or ‘looking backward’ to an origin and part (the egg) that gave rise to the chicken. Similarly we might think either of an acorn as having the potential to produce a tree (see purpose) or of a tree as having the potential to produce acorns … even though both apply. Mentally we continuously flip between past, present, and future – between history and potentiality.

There is a famous logical paradox concerning the whole as greater than the sum of the parts. Wholes are of two different kinds: those which contain themselves as members and those which do not. The set of all of the people in a room does not contain itself because all the people together do not make another person. However, all the piles of sand in the world put together would constitute an additional collective pile of sand. A proper part of an object is a part that is not identical to the whole. This kind of problem has led to various errors and ambiguities, notoriously Bertrand Russell’s 1901 paradox in set theory concerning the set of all sets that do not contain themselves as members, such that the condition for a set to contain itself is that it should not contain itself.
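Russell’s condition can be stated in one line of set-builder notation, and the contradiction then follows immediately: asking whether the set belongs to itself yields both answers at once.

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```

It was this biconditional, unprovable either way without contradiction, that forced set theory to restrict which wholes may count as members of other wholes.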

A closed system can be closely defined and explained in terms of the operation of its parts using the methods of analysis. However, every whole exists within a context or environment and is thus defined and explained in terms of its synthesis or inclusion within a greater whole.

Analysis, synthesis, & their hierarchical metaphors

If our mode of explanation can be either by analysis or synthesis, then are there circumstances in which we would prefer one over the other? Physics purports to deal with matter at its largest scale (the universe) and smallest scale (fundamental particles), but emergentism denies the explanatory completeness of physics.

Explanation by reference to parts we call analysis, while explanation by placing the object of investigation in a wider context can be referred to as synthesis. Just as we can analyze an object into progressively smaller and smaller parts in an infinite analytical regress (or until an assumed ‘rock bottom’ is reached), so we can also synthesize objects into ever more inclusive wholes in an infinite synthetic regress (or progress, as its antonym), or until an all-inclusive ‘rock top’ is reached. This raises interesting questions and possibilities concerning the metaphorical structure and symmetry of explanation.

The methods of analysis and synthesis attract hierarchical metaphorical language. The smaller vs larger and less inclusive vs more inclusive dimensions suggest directions in space: we look in opposite ‘directions’, either ‘up’ (progressive inclusivity) or ‘down’ (progressive division), or maybe ‘forward’ (to greater inclusivity) or ‘back’ (to greater reduction). With the metaphor of hierarchy we take an analytic view of something by looking at its parts, by ‘looking downwards towards the bottom’, but when we think synthetically, understanding and explaining how something fits into a wider context, we say we are ‘looking upwards towards the top’. We can steadily and systematically ‘build up’ the universe from its fundamental particles and their relations, or we can ‘break down’ the universe from its totality into its parts.

The holon

The word ‘holon’ was coined by Arthur Koestler in The Ghost in the Machine (1967, p. 48) to designate the part-whole hybrid – something that is simultaneously both a whole and a part: he was clearly thinking mostly of organic systems. ‘Holons are autonomous, self-reliant units that possess a degree of independence and handle contingencies without asking higher authorities for instructions. These holons are also simultaneously subject to control from one or more of these higher authorities. The first property ensures that holons are stable forms that are able to withstand disturbances, while the latter property signifies that they are intermediate forms, providing a context for the proper functionality for the larger whole‘.

For our purposes ‘holon’ is a term expressing a duality of potential explanation of every object as simultaneously a whole that can be subdivided and analyzed in terms of its parts, and a part that can be synthesized by placing it within a wider whole.

Principle 2 – Every object is a holon – it is simultaneously both a whole and a part: we can understand and explain it analytically, in terms of its constituent parts, or synthetically in terms of its place within a wider context

Now let’s look more closely at the general question ‘In what possible sense can a whole be more than the sum of its parts and their relations?‘ For example, how can a toy building made out of Lego blocks be more than the blocks out of which it has been constructed – or for that matter, how can an organism be more than the molecules out of which it is constructed? If we remove blocks and molecules, what is left? Nothing is left. The claim is therefore either false or the ‘more’ that is being asserted is something abstract, something immaterial and therefore some kind of illusion – something that is not ‘real’.

The ‘more’, it is then claimed, is the particular relationship between the molecules and their organization or structure: their particular combination, spatial arrangement, and mode of dynamic interaction. It is this organization that produces novelty, new relations, something additional and new as something ‘more’ that makes the molecules an organism and the Lego blocks a building. This now seems a strong claim … except that among the abstract ‘more’ are properties. A living organism has properties not exhibited by inanimate matter: it can reproduce, metabolize, grow, and so on.

Principle 3 – Only by studying parts and their dynamic relations can we really come to grips with physical reality: parts alone are not sufficient

Principle 4 – A collection of physical objects is not something in physical addition to the objects themselves; what is extra is something abstract (yet real) – it can be regarded as a power or property that is real but not physical

Principle 5 – Though all physical objects consist of matter, the abstract (immaterial) organization of this matter generates properties that can have causal influence on physical structure

Now, from Principle 5 we can see that although, in principle, we can indeed describe social and biological phenomena in terms of their physical components, there is additional information, as new properties, that must be accounted for. Further, even given full physico-chemical knowledge it may not be possible (or would be nigh impossible) to build up from scratch or anticipate these new properties.

Most scientists would agree that a chair and the arrangement of molecules out of which it is made are one and the same. Any problems concerning difference and novelty become questions concerning ways of understanding and explaining and the connections between different ways of doing this.

Epistemic & ontological emergence

Epistemic emergence concerns matters of knowledge: for example, that we cannot predict the future because of the complexity of a particular system. Ontological emergence concerns the origin of novel and autonomous structures, functions, and properties: the probabilities of particular outcomes based on the units and those of the composite do not correlate, and states of the system are not determined by the states of the basic units (as in quantum entanglement). These examples of ontological emergence show that generative atomism cannot be a universal method for representing the world. Fusion emergence is a form of ontological emergence in which entities combine to change their identity, as occurs when uniting a flame with gunpowder. Or, if a $2 note is exchanged for two $1 notes, the original physical units or components no longer exist in any meaningful way.

One form of epistemic emergence is inferential emergence: when something cannot be predicted as an outcome within a system it is emergent, so there is a correlation between emergence and unpredictability. An example of inferential emergence is weak emergence, where a future state can only be predicted by knowing the intermediary states. This is compatible with generative atomism, but computers are essential: unlike a solar eclipse, which an astronomer can predict hundreds of years ahead with precision, weakly emergent processes are computationally incompressible. The existence of weakly emergent systems shows that some aspects of the world cannot be known by the unaided human intellect – computers are absolutely necessary for us to know some aspects of the world. Conceptual emergence occurs when we need a new conceptual framework (language and ideas) that does not belong to the base entities from which the object emerged: a state or property is conceptually emergent in a particular theoretical framework (domain) when another framework must be developed as an explanation. An economic recession cannot be explained in the terms or vocabulary of fundamental physics, even if everything is grounded in physics. Ontological emergence occurs when, rather than existing only in models and conceptual frameworks, emergent things are actually in the world. Each form occurs as either synchronic, that is, the emergent features and the thing from which they emerge exist at the same time, or diachronic, when the features emerge over time.
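Computational incompressibility can be illustrated with a sketch (Wolfram’s Rule 30 cellular automaton, chosen here only as a standard illustration). For an eclipse, a closed-form orbital calculation jumps straight to the answer; here no such shortcut is known, so generation n can only be reached by computing every generation before it.

```python
def rule30_step(cells):
    """One generation of Rule 30: new cell = left XOR (centre OR right).
    `cells` is the set of positions of live cells on an infinite row."""
    lo, hi = min(cells) - 1, max(cells) + 1
    return {i for i in range(lo, hi + 1)
            if ((i - 1) in cells) ^ ((i in cells) or ((i + 1) in cells))}

# no shortcut: each generation must be computed from the previous one
row = {0}                      # a single live cell
history = [row]
for _ in range(2):
    row = rule30_step(row)
    history.append(row)

# first generations, worked by hand from the rule:
assert history[1] == {-1, 0, 1}        # pattern 1 1 1
assert history[2] == {-2, -1, 2}       # pattern 1 1 0 0 1
```

Everything here is derivable from the base rule, so the emergence is weak in the sense defined earlier; the epistemic point is that derivation is the only route available.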

Capturing a whole system in terms of a few simple principles is not possible in such cases because the vocabulary is inadequate (affects axiomatics).

Metaphysically, generative atomism is not universally true: we need a new metaphysics.
Methodologically, the human intellect is too weak to predict many states of the world: we must recognise that we are not at the centre of the knowledge universe.
Epistemologically, the axiomatic method has limitations: human conceptual frameworks do not set the limits to knowledge – we need ways of understanding how machines represent the world.

Emergence – Paul Humphreys – the book

Principle of emergence – emergence occurs when there is a change in scale, inclusiveness, or complexity where explanations in one domain do not transpose to those of another: the change may be a consequence of aspect (viewing the situation from a different perspective or using different criteria), or a change in the object under investigation (like an increase in the complexity or relations of its parts)

Compositional hierarchy: strong forces act first, followed by weaker and slower forces; levels are areas of order with greater stability and resilience (Paul Humphreys).

Reductionism vs Holism

Reductionism and holism represent two ways of doing science that are based on different assumptions and procedures. This is rarely expressed in explicit terms so, for the sake of exposition, the two approaches are characterized crudely here in their extreme forms as the two Kuhnian-like (see science) paradigms of ‘analytical reductionism’ and ‘holism’. Stated crudely, analytical reductionism is the preferred methodology of physics and mathematics, while holism is more applicable to biology and the social sciences.

Analytical reductionism (analysis)

The analytical reductionist sees the world as comprising a few universal laws that govern the physical world which consists of simple and fundamental building blocks organized in various degrees of complexity. In broad terms investigation proceeds on the assumption of a universe that is matter in motion studied by deductive reasoning like a mathematical axiomatic system of explanation. Human experience and social systems are complexities that must, ultimately, be embraced by this paradigm.

The world is thus reduced to basic building blocks and universal rules from which the material world is constructed. The more complex, inclusive, and larger-scale objects of the world are aggregates of these basic building blocks.

Scientific knowledge is thus obtained by analysis to elucidate the nature of the world’s elementary constituents from which the material world is causally derived. Causality is mostly linear insofar as the relations between the parts of a system cannot add or subtract anything to or from the system.

The world is often treated as existing independently of consciousness and transparent to scientific investigation – seeking to find and create an explanatory map that corresponds to the actual or real world.

The reductionist program tends towards a unitary science in which one overall theory, like a unified field theory, can adequately explain everything.

Holism-emergence (synthesis)

Emergence challenges analytical reductionism by claiming that its novelties are irreducible – or rather, that explanations based on universal laws and fundamental material units can never provide a complete or adequately informative explanation of the world. We would do better to understand the characteristics and properties of the many ‘levels’ of existence, looking for patterns that differentiate and integrate. In other words, the holistic approach investigates objects, functions, and relations from the perspective of the wider system (the environment or context). The whole and its properties are given priority since they cannot be adequately explained or derived from the properties of the parts. The whole has causal efficacy over the parts, and concerns often focus on dynamic process.

Causality is non-linear (there is interdependence: networks, relations, integration, context, connectivity). The world is treated more as relations in process than as a collection of objects. There is sympathy for the idea of increasing subjectivity with increasing material complexity. Reductionist scientists and philosophers doubt the possibility of the whole influencing the parts as a kind of ‘downward causation’: ‘How is it possible for the whole to causally affect the constituent parts on which its very existence and nature depend?’ (Jaegwon Kim).

Objections to analytical reductionism:

1. Ontology. A molecule is no more real than an elephant. Why should size or simplicity be grounds for ontological precedence? Simplicity has value only in its explanatory role.

2. Representation. Reductionism assumes that there is a foundational description of any property that is complete and unique: only one kind of stuff from which all else is built. But all representations are only partial. The biologist finds unified field theory, M-theory, or string theory devoid of information concerning biological objects. Reductionism ignores the reality of difference. ‘Every thing is a physical thing, but not every thing is a living thing’. Being a living thing is part of the reality of the world, not an adjunct to something more fundamental. And if ontological emergence (strong emergence) is genuinely unpredictable, then there can be no universal way of representing the world.

3. Structure. Reductionism ignores, or makes invisible, the reality of structure. A Mexican Wave is more than just many individuals waving.
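The Mexican Wave objection can be made concrete with a toy simulation (a minimal sketch; the rule and names are invented for illustration). Each spectator follows a single local rule, yet a travelling wave, a structure possessed by no individual, sweeps around the stadium:

```python
# Toy 'Mexican wave': each spectator stands exactly when the
# neighbour on their left was standing on the previous time step.
# The travelling wave is a property of the arrangement, not of
# any individual spectator.
N = 20                      # number of spectators (arbitrary)
state = [0] * N
state[0] = 1                # one spectator starts the wave

def step(state):
    """Apply the local rule: copy your left neighbour's last state."""
    return [state[i - 1] for i in range(len(state))]

for t in range(5):
    print(''.join('|' if s else '.' for s in state))
    state = step(state)
```

Run it and the single ‘|’ marches rightwards: the structure is real and patterned, yet it appears nowhere in the rule any individual follows.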

Since weak emergence is concerned with the properties, processes, and functions of parts, where the ‘whole’ has no influence on the parts, it is best studied using methods of analysis that deal with micro-rules, structures, processes, and objects. Such an approach sees the world as consisting of simple basic building blocks and laws. It is therefore possible to have a theory of everything that captures all ‘levels’ or ‘scales’ of existence under a single explanatory system, the ‘unity of science’.

But if ‘aspects’ exist independently with their own rules and regularities, processes, and objects then our best account of ‘everything’ will identify not building blocks but patterns and processes of organization. It will provide abstract generic models applied at all scales without reduction that capture the features of all.

Epistemological emergence relates to our inability to account for some phenomena because of our computational limitations, e.g. explaining an ant colony in terms of the interactions of its component molecules. Ontological emergence concerns not just our knowledge of the world but how the world actually is, irrespective of our understanding of it. Scientifically, emergence is often related to non-linearity, self-organization, pattern-formation, and synergies. The novelty is not ‘additive’ but a consequence of the integrated interactions of the parts; the novelty that arises is referred to as a synergy.

Synergies are non-linear and context-dependent, while linear relations are context-independent. In a synergy the parts have specific roles in relation to one another, thus providing a context (e.g. a sports team). There is thus both differentiation and integration. The human body has highly specialized, differentiated parts that are miraculously integrated towards achieving many functions. This is not the case with linear systems.

Separation of properties is differentiation; coordination is integration. Emergence is the integration of parts to produce new properties (synergies) over and above those of the parts, and it often results in a combined functionality. Each ‘level’ has its own internal properties, features, and dynamics that depend on the integrity of the synergies between the constituent parts. The ‘higher’ level depends on the lower for its existence, but it creates its own conditions that feed back to influence the ‘lower’ levels and their context of operation as ‘downward causation’, like the relationship between individual humans and their social institutions. Scale (level) is important because each demands its own terminology: micro- and macro-economics, quantum mechanics and general relativity, psychology for micro-level individual explanation and sociology for the macro-level patterns that arise out of the synchronized activity of many individuals in social systems.

Do we use emergent levels or scales out of expediency and lack of knowledge (we cannot predict outcomes because of the number and complexity of objects involved), or because it is just not possible to give an account of the macro from the micro? This depends on whether the emergence is weak (derivable at least in theory or principle from the component parts) or strong (where there is extremely limited reference to the micro-level). In strong emergence something novel and autonomous arises through the particular combination of constituents (as in the covalent bonding of two hydrogen atoms into H2).


Pattern is any set of correlations between states of elements within a system (a structured relation or correlation between variables) in space and/or time. If there is no correlation then there is no organization, pattern, or order, and the system is said to be random. Pattern may be generated either externally (a house constructed by its builder) or internally (a living organism is, say, a fish and not a dog because of its genetic code). Characteristically, the novelty (new whole) of internally generated pattern is a consequence not of central control but of local dynamics. Ant colonies achieve coherence (self-organization) not through a hierarchical chain of command but through the integration of patterns of local interaction. Sometimes random events amplify certain features as a form of positive feedback, and this can result in some things persisting rather than others.
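A minimal sketch of that last point (the urn model is an added illustration, not from the text): in a Pólya urn each draw of a colour adds another ball of the same colour, so a chance early run is amplified by positive feedback and one colour comes to dominate and persist.

```python
import random

random.seed(2)                    # fix the randomness for repeatability
urn = ['red', 'blue']             # start perfectly balanced
for _ in range(1000):
    ball = random.choice(urn)     # a random event...
    urn.append(ball)              # ...amplified: that colour is now likelier

# The final proportion of red is typically far from the balanced 0.50
print(round(urn.count('red') / len(urn), 2))
```

No central controller decides the outcome; the final proportions are fixed by the accumulated history of local, random interactions.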

In this sense pattern formation is a form of adaptation. In human terms social organization achieves more than individuals can achieve independently.

Patterns have an important connection to energy and mathematics.

All patterns have an underlying mathematical structure; indeed, mathematics may be understood as the science of pattern. When variables change in relation to one another the world becomes more intelligible. The relation may be positive or negative (the variables move together, as with money in the bank and interest earned, or apart, as with distance travelled and fuel remaining), of varying strength (age and health are only weakly correlated, so of limited predictive use), and linear (giving a straight-line graph, as when plotting distance travelled against petrol used) or non-linear (the proportionality of the change may itself change and does not give a straight line, as with the cost of a new house against its size). The expression ‘correlation does not imply causation’ indicates that connections between variables may not be direct but mediated by many other variables.
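These distinctions can be illustrated with a short sketch (the data are invented for the purpose): Pearson’s r measures the strength of *linear* co-variation, so a perfectly lawful but non-linear relation scores lower than a perfectly linear one.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

distance = list(range(1, 11))
petrol = [0.08 * d for d in distance]   # linear relation
cost = [d ** 3 for d in distance]       # lawful but non-linear

print(pearson(distance, petrol))  # exactly linear: r is 1 (up to rounding)
print(pearson(distance, cost))    # strongly related but non-linear: r falls below 1
```

Both relations are perfectly deterministic; the second scores lower only because the measure is tuned to straight-line proportionality.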

The robustness of a pattern is a function of the number of relations and the strength of the correlations between them (the correlation within a marching army is strong; that between the price of fruit and the health of the community it serves is weak).


Symmetry is an organizing principle of patterns. It captures the idea of permanence in change, of sameness and difference: how things can remain the same under transformation (a characteristic of music and architecture). In mathematics it is part of group theory. In physics it is now recognised that almost all ‘laws of nature’ arise from symmetries. A symmetry can be abstractly defined as ‘a rule that will map or transform one element in relation to another’; a snowflake, for example, exhibits a reflection transformation. Asymmetry describes how things are different within some frame of reference, e.g. the canopy of a tree. The frame of reference is important because a lack of symmetry can become symmetrical when viewed from another aspect (at a ‘higher level of abstraction’). Symmetry can also be used to define the degree of order within a system, with order ‘the arrangement or disposition of objects in relation to each other in accordance with a particular sequence, pattern, or method’, understood as a transformation or symmetry between states over time. Symmetries compress information, which can be expressed symbolically: the infinite sequence {2, 4, 8, 16, 32, …} can be expressed as the simple mathematical rule f(n) = 2^n.
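The compression point can be shown directly (the sequence comes from the text; the reflection check is an added toy example):

```python
# The sequence {2, 4, 8, 16, 32, ...} compresses to the rule f(n) = 2**n:
# infinitely many numbers are captured by one short symbolic rule.
def f(n):
    return 2 ** n

assert [f(n) for n in range(1, 6)] == [2, 4, 8, 16, 32]

# A symmetry as 'a rule that maps one element in relation to another':
# reflection maps a pattern onto its mirror image, and the pattern is
# symmetric when the transformation leaves it unchanged.
def reflect(pattern):
    return pattern[::-1]

print(reflect("XOX") == "XOX")   # True: symmetric under reflection
print(reflect("XOO") == "XOO")   # False: asymmetric
```

The rule f and the reflection test each replace an unbounded set of particular cases with a single transformation, which is what is meant by symmetry compressing information.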

Broken symmetries require extra rules. Symmetries describe simple systems, where a small set of rules governs the differences between a small number of parts.


Complexity is an interaction between symmetry and asymmetry, creating pattern that has order but is also partly random and chaotic; this interplay is a defining feature of many complex patterns.

Aspect theory

The metaphorical language of ‘hierarchy’, ‘integrative levels’, ‘up’ and ‘down’, ‘high’ and ‘low’ is treated here as both unnecessary and confusing. Hierarchy implies rank-value, and since the objects under consideration are ontologically equal (something exists or it does not; existence does not imply value; a molecule exists just as surely as a human being), there is no ‘higher’ or ‘lower’, nothing ‘more’ or ‘less’ fundamental in this sense. Altitudinal metaphors (‘levels’, ‘higher’, ‘lower’, etc.) thus become redundant. Explaining the world in terms of ‘levels’ such as elementary particles, atoms, molecules, macromolecules, cells, tissues, organs, bodies, communities, societies, and ecosystems (or different academic disciplines like physics, biology, and social science) becomes unnecessary. These are simply different ‘aspects’ of the world: the same world explained in different ways. There is no ontological hierarchy, only different ‘aspects’; similarly there is no ‘up’ and ‘down’, simply more or less complexity, inclusiveness, or scale (singly or in combination). Since existence itself is not privileged or ranked in any way, neither are these ‘aspects’.

Translating one aspect into another
How do different aspects relate to one another? Aspects are not ‘reduced’ to other aspects but expressed in a different way, or translated. Translating one aspect into another will depend in part on whether the criteria of distinction between aspects are based primarily on inclusiveness, scale, or complexity.

Each aspect may have its own unique properties and rules of interaction and connection that are difficult to predict or translate from those of another aspect. Since there are degrees of complexity, scale, and inclusiveness, those aspects with the least difference in these factors will be easiest to translate: physics has more in common with chemistry than with social science.

As a new subject finds its feet it is important to be clear about the use of potentially confusing terms. The following is an aide-mémoire for the terms used in this article.

Aspects are, however, established using different criteria (mostly complexity, inclusiveness, and scale) and this must be noted where possible.

Emergence – in general ‘the coming into existence of a novelty that could not have been predicted’. More specifically, ‘non-linear pattern formation where synergies between parts give rise to new patterns of organization’
Novelty – the features arising by emergence – including structures, properties, functions, and patterns
Synergy – a special relationship between the parts of a whole giving rise to a novelty; the effectiveness of subsystems acting in coordination; an interaction or coordination between two or more elements or organizations that produces a combined effect greater or less than the sum of their separate effects (adding or subtracting value); a non-linear interaction. Examples: an ant colony, two merged companies, or the elements combined to form a human body.

Wholes are not derived from but constitutive of

Emergentists point out that in purely mechanical (reductionist) systems, once we understand how the parts are integrated to form a whole, the future becomes predictable. Given state X, state Y will follow in an orderly and predictable way.

In the 1950s and 1960s reductionism was often associated with semantic reduction (translating one theory into another), that is, the reduction of properties of one knowledge domain into those of another. This was most evident in the philosophy of mind, where mental states, like pain, were frequently equated with particular physical or neural states. Mental properties just are properties of the brain; there is a type-identity. This eliminated the mysterious causal connection between the mental and physical to reveal the physical reality and its uncontroversial causation. For every mental property there is a physical property that realizes it; for every physical effect there are sufficient physical causes; and mostly particular effects are brought about by single causes (they are not overdetermined). The mental then becomes causally inert, an epiphenomenon, and an eliminativist position can be taken: the view that talk of mental states is devoid of scientific content. By the same token, life itself was simply complex biochemical processes.

But how can we be responsible for our actions if all causation is microdetermined? Philosophers subsequently pointed out that ‘pain’, as understood in physical terms, would be very different between different animals and even between different humans, a thesis that became known as multiple realization. In other words, in practice there is no one-to-one correspondence between mental states and physical states; it is just not possible to define mental states in physical terms. To overcome this difficulty mental states were then equated with functional states, which allowed for their multiple realization. Functional properties are understood in terms of their causal role, so there is no problem in equating the pain of a tadpole, dog, and human. Of course these functional properties are realized by the material constituents of the organisms, but they are manifest at a different scale, the scale of the organism. On this understanding the mental and its causation are real and entirely dependent on, but irreducible to, the physical.

Today physicalists (most scientists) are either reductive physicalists who still maintain that mental properties are reducible in this way, or non-reductive physicalists who think that this kind of reduction is incomplete.

There are scientific kinds (like hearts, legs, and eyes) that are functionally defined and multiply realizable. That is, there are many different kinds of hearts, even among humans, so they are not all physically equivalent even though they all have the function of pumping blood. Each is physically different. Functional vocabulary is not directly structural or physical. A type-identity reductionist holds that the relation between terms in two domains (the ‘heart’ of biology and the ‘heart’ of physics) is one-to-one. The functionalist holds that it is one-to-many: functional objects are multiply realizable.

If kinds are multiply realizable, then their causal pathways and laws are also in some way irreducible. A special science cannot be reconstructed from the vocabulary used to describe its realizing systems (the same system described at a more micro or reduced scale). But do the small-scale generalities of the special sciences warrant their being called ‘science’? Are they autonomous, or do they require further grounding in physics? Universal laws of physics circumscribe general order, while the regularities and patterns of the special sciences capture the order of particular instances. Both have their place.

Biologists look to function and the future in a way that does not occur in physics and chemistry. Certainly biological systems can be explained in non-functional terms, but in doing so they appear to lose explanatory value. Hearts serve the function or purpose of pumping blood around the body. This is clearly not a conscious purpose or causation directed at the future (teleology); it is a consequence of events that occurred in the past and can be explained without reference to ‘ends’. The way organic systems seem to be directed towards ends simply reflects the way our minds perceive the world, not the way the world necessarily is.

Nevertheless, apparently directed activity occurs in objects that have been subjected to the moulding influence of natural selection (teleonomy). The sentence ‘The function of an eye is to see’ has meaning in a way that the sentence ‘The function of a falling stone is to hit the ground’ does not.

Matter does not behave in a random way. We can see how the laws of physics result in ‘directed’ change, guiding, as it were, the matter of the universe along a certain path. We can then see how this directed change grades into the semi-purposive (but still deterministic) teleonomic change that we see in organisms and their parts, and then into the purposive (though perhaps still deterministic) character of human conscious purpose.

Most scientific explanations fall into two categories: they question either structure or function. Structure has a timeless quality (it is ‘just the way things are’), while function looks to the future. Biologists explore what particular structures do in relation to ends: how the genetic code regulates development, how the heart circulates blood, and so on. Many scientists would claim that the teleonomic, end-directed, or functional character of biological systems is illusory. Regardless, it is a mode of thinking that seems indispensable to biological research and is unlikely to pass away. The foundation of all biology is natural selection, which poses the simple question ‘What did evolution select it for?’

Properties & relations
It is not matter itself that is at issue here but the properties that arise as a consequence of the relations that exist within certain structures. As smaller units combine into larger ones, in some systems new properties (substances, processes, patterns, etc.) arise that could not be predicted from the units themselves. As in the sociological example, it is the ties, connections, or relations of the elements that create these new properties, not the parts themselves.

New or emergent properties are said to arise out of less complex fundamental entities and are novel or irreducible with respect to them. Life and consciousness are the two most obvious instances of emergence, with self-consciousness an emergent property of the complex organisation of neurons in our brains. But life itself as ‘adaptive complexity’ (Richard Dawkins) is closer to our concern.

Ontology of properties and relations … Properties or relations between parts can seem a rather suspicious and abstract idea, quite unlike the brute reality of matter itself. But it seems fair to say that science has, over time, become more concerned with properties, relations, and organisation and less with objects or matter itself. One good example is the inextricable link between space and time.

Properties & supervenience
Claims that one sort of thing is reducible to another (or that one supervenes on another) make most sense if we take them to involve properties. For example the claim that the psychological realm supervenes on the physical realm involves mental and physical properties.

It may also be claimed that such reduction is simply not possible. In living systems especially new properties are said to emerge as biological systems become more complex – irreducible properties that are not shared by component parts or prior states. Moreover the new properties are frequently unpredictable from prior states. For example individual neurons do not have the properties of mental states.

The word ‘supervenience’ (made popular in the philosophy of mind by Donald Davidson) is used for situations where less inclusive or smaller-scale properties determine more inclusive or larger-scale properties. That is, there is a dependency relationship such that any change in one state (e.g. a mental state) implies a change in another (physico-chemical) state. The more inclusive is said to supervene on (be determined by) the less inclusive. Thus the social supervenes on the psychological, and the psychological on the biological. In our special-interest case, biological properties supervene on physical properties. It is the nature of this connection that is critical: do organisms have a causal influence on their physical constituents? Does their form, pattern, or configuration constrain the properties of the constituents?

A characteristic of supervenience is that there can be no change at the large scale without a change at the small scale. Whether the two are identical (the biological process identical to the physico-chemical process) is a matter for philosophical debate.

Though scaling is often a factor in supervenience and reduction, it is not strictly necessary: area can supervene on length and breadth, and space can supervene on time in an equivalent way, while scale reductions and supervenience share an asymmetry.
Our task is to explain how apparently emergent properties arise rather than deny their existence. If the matter is the same, how can its rearrangement be so important? And does the same apply to phenomena, explanations, theories, and meanings as well as to objects?

Perhaps emergence arises as part of describing or analysing systems but is not part of the systems themselves (philosophically the problem is then merely epistemological, a view known as ontological reductionism). When all is said and done, all we have is the matter itself. This reduction acknowledges emergent properties but regards them as completely explicable in terms of the processes from which they are composed.

In token reduction (token ontological reductionism) every instantiated object is the sum of objects at a smaller scale: on this view biology is simply physics and chemistry. In type reduction (type ontological reductionism) every descriptive concept or type is a sum of types at a smaller scale; type or concept reduction of the biological to the physicochemical is often rejected. Epistemological reduction holds that all phenomena can be explained in terms of component parts. If all three forms of reduction hold then we have strong reduction. (Stanford Encyclopedia of Philosophy)

Explanatory reduction
Explanatory reduction allows the relations between items other than theories (mechanisms, fragments of theories, even facts) to be examined, and assumes a causal relation between large and small scales.

Principle 2 – in considering the relationship between wholes and parts each particular case must be examined on its own merit.

Scientific knowledge of the world can be viewed as the attempt to understand why one thing happens and not another – it is the study of patterns of causation.

One way of explaining what a molecule is likely to do (its behaviour) is to recognise two major sources of influence (causation). Firstly, there is the interaction of the component atoms because this determines many of the properties of the molecule. Secondly, since molecules do not exist in isolation they can be influenced by surrounding factors like temperature, pressure and the presence of other atoms and molecules. There are various ways of representing these two kinds of causal influence – as object and context, internal and external, top-down and bottom-up, intrinsic and extrinsic. For our purposes we will note that effectively nothing in the universe exists in isolation, so everything is subject to both kinds of causation. The language of hierarchy intuitively places value on its ranks, so we shall use the terms extrinsic and intrinsic, noting that existing circumstances are the consequence of the interaction of these two modes of causal influence. This pairing of causal influence comes to us in many forms: mind and body, organism and environment, and so on.

A remarkable change took place when variable self-replicating molecules were subject to the ‘selective’ or constraining influence of extrinsic factors (the environment). This marked the beginnings of life and evolution by natural selection.

Mechanistic analysis
Much of biological explanation might be viewed as mechanical analysis – the observation and explanation of organic matter through its component parts in much the same way that we study the parts of a watch to reveal the way that it works. An organism has features not possessed by its individual parts (emergent properties) and mechanistic analysis can demonstrate how these features arise.

Sometimes the parts in the part-whole relationship are critical (change a lung for a kidney!) and sometimes they are not (change one molecule in a kidney for another similar molecule). In biology there seem to be wholes that are ‘more’ or ‘less’ organised on a continuum – from highly organised (the parts strongly integrated and interdependent as in an organism) to more aggregative or sum-like (like populations of organisms), also depending on scale.

In this way emergent properties can depend to a greater or lesser extent on the degree of organisation of the parts – their arrangement, not just their individual properties – consider the music made by a band, the sequence of bases in DNA, or a sugar lump and its individual sugar crystals. In most cases emergent properties can be adequately explained in terms of component interaction – consciousness seems to be an exception.
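The role of arrangement can be made vivid with the genetic code itself (a sketch; the lookup table is truncated to the six codons used here): the same three DNA bases A, T and G spell quite different biological instructions depending purely on their order.

```python
from itertools import permutations

# Standard genetic-code meanings (DNA sense strand) for the six
# arrangements of the bases A, T and G: same parts, different wholes.
CODON_MEANING = {
    'ATG': 'methionine (start)',
    'AGT': 'serine',
    'TAG': 'stop',
    'TGA': 'stop',
    'GAT': 'aspartate',
    'GTA': 'valine',
}

for codon in sorted(''.join(p) for p in permutations('ATG')):
    print(codon, '->', CODON_MEANING[codon])
```

The parts contribute nothing new between one codon and the next; the difference in meaning lies entirely in their organisation.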

Top-down or bottom-up
The Nobel prize-winning brain scientist Roger Sperry (cf. Sperry, 1983) introduced the concept of ‘top-down causality’ as an explanation of causal interaction between mind (consciousness, qualia) and brain (physicochemical processes). The idea of top-down causation has subsequently been taken up by a number of other writers. Can the whole shape the behaviour of the parts in ways the pieces alone could never find by themselves?

Bottom-up causation says that, given an inventory of the smallest stuff and the rules for their interactions, you can explain everything from crystals to cells to your own sweet sense of consciousness. Bottom-up causation is, at this moment in the history of physics, the dominant view.

Computers exemplify the emergence of new kinds of causation out of the underlying physics, implied not by the physics but by the logic of higher-level possibilities. … A combination of bottom-up causation and contextual effects (top-down influences) enables their complex functioning. Hardware is only causally effective because of the software that animates it: by itself hardware can do nothing. Both hardware and software are hierarchically structured, with the higher-level logic driving the lower-level events.

Ellis ‘Bottom-up emergence by itself is strictly limited in terms of the complexity it can give rise to. Emergence of genuine complexity is characterized by a reversal of information flow from bottom-up to top-down’.
Culture, as a set of dynamic, contingent and unpredictable social interactions (including not only our interactions with each other but also with the material world and our environment), affects the lower levels: “the top is always exerting an influence on the bottom.” This way of thinking does not deny the influence of genes, but it does challenge a genes-in-primary-control sort of formulation.

Selection acting on genes seems exactly a top-down process.

I agree it’s artificial, since all scales are acting all at once in the really real reality. But in order to focus on a given process, one needs to compartmentalize, and to get it right one needs to understand the boundaries of the compartment, its top boundary for top-down influences as well as its bottom boundary for bottom-up influences. Both influences can be important.


Paul Davies: what makes life different from non-life is that in life “information manipulates the matter it is instantiated in.”

Reduction in biology itself – units of selection (gene fundamentalism)
Adaptations serve the interests of genes, not organisms. Does selection operate primarily or exclusively at the scale of the gene? Can developmental biology be reduced to developmental genetics and molecular biology?

We tend to think of evolutionary change as occurring in the organism. But on an evolutionary timescale our individual lives are brief; the sole factor that persists through time, that is ‘immortal’ (Dawkins’s word), is the genome. This is the central point of Richard Dawkins’s book The Selfish Gene. Genes literally embody a program that produces development.

The central point of critics of the gene concept is that functional decomposition identifies multiple overlapping and crosscutting parts of genomes. The “open reading frames” to which biologists refer when they count the genes in the human genome not only can overlap but are sometimes read in both directions. Subsequent to transcription they are broken into different lengths, edited, recombined and so on, so that one “gene” may be the ancestor of hundreds or even thousands of final protein products.

We might call this gene reductionism – but we have noted that reduction can proceed further. Why don’t gene reductionists use smaller units still?

To this it is responded that social interaction involves much more than just genes.

In isolation, DNA is no more than a very complex chemical.

Life is spatiotemporally bounded to Earth, and each species is itself spatiotemporally bounded. The mid twentieth century saw disciplines claiming to be sciences (sociology, psychoanalysis, Marxism, political and economic science, astrology) when they were not.

To this list is sometimes added temporal reduction: where scales are embedded in explanations, the smaller-scale explanation must be prior to the larger-scale one (e.g. gene expression prior to tissues in ontogeny).

Key points

Consider what would be needed to unify three disparate subjects: physics, biology and economics. This would appear theoretically possible since all of these subjects ultimately relate to physical objects and their interaction. Now consider how you would unify the following scientific explanations into a common physical language: in physics the conduct of electricity through copper wire, in biology the hibernation of Brown bears in winter, and in economics the relationship between interest rates and inflation.

We must answer this question in the light of Principles 1 and 2: firstly, that all matter is of equal status regardless of its size (although we find certain units of special explanatory value); secondly, that there are no factors that clearly distinguish science from non-science or pseudo-science. In other words, we cannot claim that molecules are more real or fundamental than animals, or that economics is not a science.

Intuitively we might attribute the difficulty of translating these three disciplines into a unified physical theory to one or all of the following: scale, complexity, language.

Is there a reducing gradation of predictive power with increasing complexity – physics, biology, economics?

If the unit scales are generally more complex, do causal pathways and patterns also become more complex? Perhaps scale cannot be extrapolated across domains because the results are non-linear: as we pass between domains, quantitative change becomes qualitative change?

Our emphasis until recent times has been mostly on the analytic breaking up of things into components to see how they work. Part of this history has been the creation of literally hundreds of disciplines out of what was once the single study of biology. We are now passing through a phase of re-synthesis as biology merges at one extreme with the physical sciences and at the other with the social sciences. One extreme is represented by the new insights of molecular biology, genomics and biotechnology, while at the other we see the integration of ideas from sociology, anthropology, linguistics and, especially, new developments in psychology and moral philosophy.

If in causal terms, the whole can be completely explained in terms of its component interactions then the whole, having no causal agency, is referred to as an epiphenomenon.

The epiphenomena are then said to be “nothing but” the outcome of the workings of the fundamental phenomena. In this way, for example, religion can be deemed “nothing but” an evolutionary adaptation, and beliefs “nothing but” the outcome of neurobiological processes. There is a tendency to avoid taking epiphenomena as important in their own right.

On this view, social and behavioural systems – the subject matter of political science and sociology – can be explained in terms of neurochemistry, genes and brain structure. At the highest, sociocultural level, explanations focus on the influence of where and how we live on behaviour. Between these extremes lie behavioural, cognitive and social explanations.

Reality is a multi-layered unity. Another person is at once an aggregation of atoms, an open biochemical system in interaction with the environment, a specimen of Homo sapiens, an object of beauty, and someone whose needs deserve my respect and compassion.

But it is hard to avoid the conclusion that we either pass into a knowledge regress or deny that studying human behaviour is science.

In philosophy, thought about emergence often turns on whether we can distinguish what you might call mere epistemological emergence from genuinely ontological emergence. Where epistemological emergence is in play, we grant that the lower-level facts do in fact determine everything at the upper level, even if, as it happens, we have no way of predicting the upper level from the lower, and even if our ways of comprehending the lower level are shaped by what we know about the higher. Think chaotic systems, traffic patterns, and so on. Full-blown ontological emergence makes a much stronger claim: facts at the lower level do not fix the facts at the upper level. There are, on such a view, genuinely emergent phenomena. The trick has always been to explain how that could be and whether it even makes sense. Can one give an example of genuine ontological emergence?

The microscope
Working with different domains of knowledge is like zooming in and out of regions of the world in space and time, seeing different patterns and regularities in nature as we do so. As we focus on one domain the laws, principles, and categories of the others become part of a blurred and much less relevant background. When working in the world of biology the world of physics is mostly irrelevant, not because it is unimportant but because it is taken for granted in our selective cognition.

It appears to be a function of our minds that we must apprehend the world through categories of scale, and the greater the difference in scale the more difficult this becomes. It simply makes no sense to explain monetary and fiscal policy in physicochemical terms. In theory this is not absurd, since such matters are a consequence of interacting physical objects, but it would need an almost infinitely complex computer, because our minds cannot cope with the problems of scale and the attendant difficulties of vocabulary, categories, properties and relations.

When the microscope was invented it was necessary to create a whole new language of terms to describe the structures observed in animal and plant cells. The same is true at the molecular scale. The new terms were needed to deal with a new scale of comprehension.

Looking at life on Earth over its full time span of 3.5 billion years, and on the scale of all life, we might imagine a three-dimensional branching tree-like structure, different life forms differentiating along the branches up to the present day, with many dead ends. To assist our perception we fix on particular categories or units of scale depending on utility and our interests. We recognise various aggregates of organisms, with the individual species as the ill-defined fundamental unit, arranged into progressively more inclusive units such as genera, families and so on. Within an organism like ourselves we select operational units like organs, tissues, cells, molecules and atoms. When thinking about evolution we choose units such as gene, organism and population.

Are some aspects of biological science autonomous, in that they do not benefit from or utilise the knowledge of molecular biology?

The television
We know a television is made up of tiny units called pixels and that these pixels can flash different colours in a predetermined and coordinated way that allows us to produce images of people and other objects. This representation of people and other objects by means of a pixel matrix is interpreted by our eyes and brains in the same way that we interpret the representations created by actual objects in the world. This metaphor illustrates several important aspects of our perception and cognition.

Firstly, the images that are so meaningful to us are made up of simple basic constituents, flashing pixels, that individually lack meaning.

Secondly, the activity of the pixels acting together is meaningful because the pixels have been organised to distribute colour across the TV screen in a highly coordinated way.

Thirdly, the meaningful images we see are interpreted by our eyes and brains as objects in the real world: they are categories created in our minds since the TV screen is just composed of pixels, not people and other objects.

Fourthly, the fact that TV screens (which are just a layer of flashing pixels) can create visual representations that are highly convincing to our eyes and brains makes us aware that our brains can impose on the world structure that does not exist there. It is the task of science to establish as close a correlation as possible between our perceptions and reality, knowing that our minds can be deceived.

Problems with reduction:
The effects of molecular processes often depend on the context in which they occur. So one molecular kind can correspond to many kinds at a larger scale (one to many) while at the same time large-scale structures and processes can arise from different kinds of molecular processes, so that many molecular kinds can also correspond to a single larger-scale kind (many to one or multiple realization).
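The one-to-many and many-to-one mappings just described can be sketched schematically. A minimal sketch, assuming entirely hypothetical molecular and higher-level kinds (these are illustrative placeholders, not real biochemistry):

```python
# Hypothetical illustration of context-dependence and multiple realization.

# One molecular kind -> many larger-scale kinds (one to many):
# what a molecule 'is' at the larger scale depends on context.
roles_of_molecule = {
    "molecule_A": ["signal", "structural_support"],
}

# Many molecular kinds -> one larger-scale kind (multiple realization):
# different molecules can realise the same higher-level kind.
realisers_of_kind = {
    "pump": ["molecule_B", "molecule_C", "molecule_D"],
}

def is_multiply_realised(kind, table):
    """A higher-level kind is multiply realised if more than one
    distinct lower-level kind can produce it."""
    return len(set(table.get(kind, []))) > 1

print(is_multiply_realised("pump", realisers_of_kind))  # True
```

The many-to-many structure is the point: no simple dictionary lookup in either direction recovers a one-to-one translation between the two scales.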

Structure and function relate to spatial and temporal (spatiotemporal) factors respectively. Each represents a mode or type of organisation important in reduction. This is why development is an important aspect here.

Scientific explanation often involves units from different scales of reference.

In a reductive explanation the intrinsic can be important (what is internal and what is external), reduction favouring internal causality. Protein folding, however, can have external causality. Temporal and intrinsic factors thus play a part in reduction, as well as simply the relations of parts and wholes. Perhaps there are different kinds of reduction?

Perhaps we should move away from the idea of reduction and instead characterise science as proceeding by unification – integration and synthesis – rather than reduction. The theories and disciplinary approaches to be used depend on the nature of the problem being discussed.

Complexity – consider the unconscious collective behaviour of social insects as an emergent property. Does complexity arise from dynamics rather than constitution? Chaos theory, for example, demonstrates how some systems are acutely sensitive to the minutest changes, which can totally change their behaviour. Such systems are widespread and difficult to analyse in a reductionist way, yet they seemingly generate remarkable patterns of behaviour spontaneously and in a holistic manner. Highly complex systems seem to contain vast amounts of information, ‘active information’ being a new arena for theorising.
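Sensitivity to initial conditions can be shown with a minimal sketch of the logistic map, a standard textbook example from chaos theory (the parameter r = 4.0 puts the map in its chaotic regime):

```python
# Logistic map x -> r*x*(1-x): in the chaotic regime (r = 4.0),
# two trajectories that start almost identically diverge rapidly.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # differs by one part in a million

# Early on the two trajectories are indistinguishable; later the
# tiny initial difference is amplified until they bear no
# resemblance to one another.
print(abs(a[1] - b[1]))    # still minute
print(abs(a[50] - b[50]))  # comparable to the whole interval [0, 1]
```

A system this simple is fully deterministic and fully described by its lower-level rule, yet its long-run behaviour is practically unpredictable – epistemological, not ontological, emergence.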

The problem is not whether explanations are reductionist or not, but whether the particular degree of reduction is sufficient to answer the question being posed.

The task of science is to describe, as accurately as possible, the structure and workings of the objects that exist outside the human brain. But to do that we must use the brain itself, an object that has been moulded and limited by its evolution. To describe the universe we must first understand as much as we can about the limitations of the tool we use to comprehend it.

As we pass from physics and chemistry to biology and sociology, the cognitive units or categories of nature that we use as tools to do our science tend to increase in abstraction, complexity of material organisation, and causal intricacy. We sense a graded change in the character of the subject matter that is a difference in degree rather than in kind, more a matter of trends. Physics and chemistry appear to proceed mostly by analysis, while much of biology is about synthesis as it attempts to explain organisational complexity and the role of phenomena within functional systems, its teleonomic character tending to look to the future. Biology's compositional or holistic concern is with organisational factors and adaptive function.

Reduction is complicated by our metaphysics – the way we assume the natural world is structured – the nature of reality. Science is now providing us with an improved picture of this reality.

The confusing aspects of language include metaphor, anaphora and polysemy.

We can now combine the principles and findings of this section as follows:

Reduction is only useful and appropriate when it improves our understanding. In considering the relationship between wholes and parts, each particular case must be examined on its own merits. We regard scientific categories as important because we believe they map the natural world as best we can, ultimately providing us with compelling explanations that help us manage the world.
Science uses categories that map the natural world as accurately as possible, but some of these may be mental categories with no instantiation in the physical world, and others may be relational in character. The scientific need for explanation, like the philosophical requirement for rational justification or causal origin, leads to an explanatory regress seeking ever more ‘fundamental’ answers. However, a satisfactory answer does not depend on the size of the unit but on the plausibility, effectiveness or utility of the answer (Principle 1). Hierarchical language applied to biological organisation implies value and is best replaced with the language of scale. The greater the difference in the scale of units used by different domains, the greater the difficulties of reduction – of communication and translation. Provided scientific units are credible, the scale we use for explanation is simply a matter of utility.

We analyse a problem to obtain a broader understanding, a better synthesis. Science progresses by a process of both analysis and synthesis with emphasis alternating between the two in a kind of dialectic.

With decreasing levels of complexity compensatory activity or ‘self-regulation’ decreases in likelihood.

Key points
Reductionism, the translation of ideas from one domain of knowledge into those of another

Scientifically credible units of matter have no precedence over one another based on size alone

The idea of something being more ‘fundamental’ probably derives from our tendency to explain by a process of analysis, by breaking down into smaller parts. It is also probably related to internalised hierarchical thinking in terms of ‘levels’ to which we unconsciously apply value

Science examines matter at various scales which correspond loosely to disciplines as domains of knowledge, language and theory

Though there are clear links between domains of knowledge, each domain seeks optimal explanatory results using its own language, principles and procedures. Linking or even uniting (reducing one domain to another) may have benefits but presently appears to pose insurmountable difficulties.

If we regard science as the matching of our mental categories to the reality of the external world then there can be no ‘fundamental’ science and also no clear distinction between what is science and what is not. There will simply be better and worse explanations of the world of matter and energy. For a whole variety of reasons it is evident that astronomy is more scientific than astrology.

Are some scientific explanations ‘better’ than others?
Is physics more ‘fundamental’ than biology?
Does the physical world exist in ‘levels’ of organisation?
Do new properties emerge as things get more complex or are the ‘fundamental’ properties always the same – is the whole greater than the sum of the parts?
What is reductionism and why is it often treated as an error in thinking?

We draw scales of convenience which we believe reflect reality. Why should gene selectionism not reduce further to physics and chemistry?

And scientific categories, we believe, relate closely to objects in the external world. Even so, the scientific information considered valuable in the modern world might be inconsequential to someone living in the New Guinea jungle, for whom what matters most about an organism is whether it is edible or poisonous. And for an ecologist the actual species in a particular environment may not matter – more important is their role within an ecosystem, say whether an organism is a predator or a herbivore.

We can imagine scientists investigating nature as a watchmaker investigates a watch: if we want to know how the watch works then we examine the parts, how they fit together, and how together they operate effectively. By a process of analysis we then see how the parts interact to produce an operational whole. In terms of our classification of categories this is a sum or additive category.

Reductionism reflects a particular perspective on causality: supervening (more inclusive) phenomena that are completely explained by smaller-scale (less inclusive) phenomena can be termed epiphenomena. It is often assumed that epiphenomena have no causal effect on the phenomena that explain them.

Only statements can be deduced, not properties: properties require a theory.

Though we do not know why the laws of physics are as they are
Part of the teleonomic view of the world is that there are many paths to the same end: in embryology, for example, the development of cells is determined by their environment.

Species exist because they perform their functions (Aristotle).

Mental categories
Concepts provide the meaning that language expresses and they comprise the blocks of information on which reason can work. If we regard categories as concepts (units of thought or mental representations) then they can be of two kinds, either universals (types) which are general categories like ‘chair’ and ‘tree’, or particulars (tokens) like my chair or the oak tree outside my window. Though categories may sometimes be clearly defined as having necessary and sufficient conditions, most simply share a family resemblance – a set of characteristics that overlap with those of other categories.

We can also regard universals as sets and particulars as sums; for simplicity, mind-objects are called types and physical objects tokens.

Sets are abstract, consisting of objects that are ‘members’ of that set even when their members are physical objects. So, for example, English Oak and Chinese Elm are members of the set ‘tree’. Sums consist of parts rather than members: so a leg is a part of our body and a body can be physically moved. A forest is a collection of trees, so it is a sum not a set. The distinction between a sum and set may not always be crystal-clear but it helps to be aware of the idea – that is, it helps to be wary of the use of abstract and concrete nouns.
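The distinction can be loosely mimicked in code – a rough analogy only, not a formal mereology: set membership is an abstract relation, while a sum is a concrete assembly of parts that can be manipulated as a whole.

```python
# 'Tree' as a set: English Oak and Chinese Elm are *members* of it.
trees = {"English Oak", "Chinese Elm"}
print("English Oak" in trees)  # membership, an abstract relation

# A body as a sum: a leg is a *part* of it, and the whole is a
# single concrete object whose parts can be removed or rearranged.
body = ["head", "torso", "left_leg", "right_leg"]
body.remove("left_leg")        # parts can be physically detached
print(body)

# A forest is a sum of particular trees, not the set 'tree':
forest = ["oak_1", "oak_2", "elm_1"]  # tokens as parts, not types
```

The analogy also shows why the sum/set distinction matters: removing a member from the set ‘tree’ changes a classification, while removing a part from a body changes a physical object.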

Categories can also consist of properties or relations. Plants share the physical property of being photosynthetic (they instantiate photosynthesis). When dealing with properties it is useful to distinguish intrinsic properties (inner properties that are independent of external influences) from extrinsic (relational) properties, which do depend on them. Categories like this are easiest to understand when the properties are intrinsic, but when relations between properties are important we get the language of parts and wholes.

Scientific properties like specific colours, weights, densities and temperatures, or the ability to photosynthesise, are regarded as contingent factors (tokens that instantiate the types colour, weight and process) that are part of the scientific world of empirical investigation (what might be called a naturalistic ontology).

The particular kind of category that we use will depend on the particular situation, and mixtures are possible. We must be aware of difficulties relating to the precision and clarity of our categories. The word ‘goldfish’ can refer to a specific physical object or token, while the word ‘society’ refers to something physically undefined – rather like the distinction between concrete and abstract nouns.

Principle 5 – Science uses categories (names, explanations, definitions, theories etc.) that reflect as accurately as possible the natural world: these categories consist of either sets (universals), sums (particulars) or properties. Properties may be either intrinsic (internal) or extrinsic (relational).

Principle 6 – Sets (universals), being abstract, can add complexity to the analysis of whole and parts

There is an expectation that biology should produce universal laws like those of physics but as biology is only concerned with living organisms this is an unreasonable expectation.

We must ask what could possibly be the point of converting the language of one into another: it is not only unnecessary but also unimaginably complex.

‘Predispositions’ and ‘propensities’ are proximate mechanisms.

Cognitive focus and cognitive illusion
We have all experienced visual illusions: a stick in water appears to bend, and some ambiguous images seem to be one thing one moment and another the next, but never both at the same time. In a similar way we struggle with cognitive illusions that create cognitive dissonance: something cannot be simultaneously similar and different, a whole and a part … it must be either one or the other. And yet we know that an ant is a whole individual while at the same time being part of a colony.

Our brains organise knowledge by classifying it into categories or groups. The method we use to classify or organise knowledge can influence the way we perceive the world and our ability to discover and create new knowledge.

Our scientific map of reality, like all our cognition, removes unwanted noise, acting like a camera lens by filtering out inconsequential information as we ‘zoom in’ and ‘zoom out’ of different regions of categorisation. What we must ask ourselves as scientists is the extent to which the categories we create are accurate representations of objects in the external world, the extent to which the groupings we create are accurate representations of relations in the external world, and the extent to which the way we rank these categories and groups is an accurate representation of what is going on in the external world.

We can immediately comment on these questions. Categories are tricky: the dog in front of me seems a real and concrete object in the world, but the general category ‘dog’ is abstract, rather like the non-existent category ‘unicorn’. Groupings are similar, although perhaps not so clear: ‘primate’ seems OK, and ‘London trams’ alright, but I’m not so sure about ‘institutions’ and ‘society’ – they seem fuzzier categories. Both categories and groups, we might say, need scientific investigation – we need sound evidence for their existence. Ranking itself is different. The external world does not rank its contents; that is what we do. All we can do is try to determine as accurately as possible what there is in the world, what exists. When we draw up a biological classification we are ranking organisms according to their similarities and differences, which we assume has something to do with the way they evolved, with the nature of their existence in the external world. Ranking plants according to edibility is clearly more subjective.

Complexity – move to emergence
(Scale) We understand all objects within a context which depends on their relationship to other objects. Some wholes, like billiard balls on a table or sugar crystals in a sugar lump, we can understand fully by examining the properties of the individual parts in a process of analysis. With a complex whole like the human body we can only understand the parts by seeing how they are related to one another in relation to the function of the whole, and this process we call synthesis.

We can imagine a continuum of groups whose parts have varying degrees of connectivity. Increasing complexity is generally associated with other factors: an increasing number of elements; an increasing degree of connectivity, often into a network, where it is relationships that define the system, not the components themselves; and usually an increase in the diversity of the parts. Adaptive complex systems like organisms are also capable of self-regulation (teleonomy). Analytic systems have simple and predictable linear causal relationships where input and output are equal, while complex systems have complicated, non-linear and often unpredictable causation that is not amenable to modelling.

The ‘possibility space’ allows us to think about random and complex situations without thinking about causes and effects. It assumes that each time a situation of that kind arises, the set of possible outcomes is the same and the probabilities are also the same. So, considering the likelihood of life on other planets would entail a sample space (the set of all possible outcomes), a set of events, and the assignment of probabilities to those events.
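As a toy sketch of such a possibility space – the outcomes and probabilities below are invented purely for illustration, not real astrobiology estimates:

```python
# A possibility space: a sample space of outcomes, events as
# subsets of it, and probabilities assigned to the outcomes.

sample_space = {"no_life", "microbial_life", "complex_life"}

# Invented illustrative probabilities; they must sum to 1.
prob = {"no_life": 0.90, "microbial_life": 0.09, "complex_life": 0.01}

def p(event):
    """Probability of an event (a subset of the sample space)."""
    assert event <= sample_space  # an event must be a subset
    return sum(prob[outcome] for outcome in event)

some_life = {"microbial_life", "complex_life"}  # an event
print(p(some_life))  # 0.09 + 0.01
```

Note that nothing here mentions causes: the framework only requires that the same outcomes and probabilities recur each time the situation arises.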

Are the laws of physics (which may be strict or probabilistic) descriptive or prescriptive: that is, do they simply describe the way things are, or do they actually exert an influence on things?

The discovery of laws was long regarded as central to science, and from a theistic perspective this made sense – natural laws were God’s laws, part of his divine plan for the universe.
In everyday parlance we say that the laws of nature ‘determine outcomes’, that they ‘govern behaviour’ and so on. Taken literally this suggests that laws are rules that are in some way prior to activity, that they exert an extraneous influence or constraining force on things, they are something outside the system or circumstance itself, like a physical barrier, a programmer, or system of governance.

For many scientists this is unacceptable: laws do not exist in some transcendental realm acting on matter in the world. Law-language simply describes the way the world is, the decree-like lawfulness implied in language is metaphor and best treated as such.

Nevertheless, laws do explain or account for why the world is as it is, while descriptions simply state facts. Uniformities in classes of objects and activities can be described and given mathematical expression, and this is critical to the predictive power of science. So how do we account for laws? We simply replace the idea of law with that of succinct descriptions of patterns or regular behaviour, whose strength lies in their simplicity and generality.

One characteristic of the diverse range of scientific generalities (laws) is that they exhibit varying specificity: there is a trade-off between simplicity and generality.

One descriptive account of a scientific law is given by the late American philosopher David Lewis. Consider the set of all truths and select some of these as axioms, thus permitting the construction of a deductive system, the logical consequences of which become its theorems. These deductive systems compete with one another along (at least) two dimensions: the simplicity of the axioms, and the strength or information content of the system as a whole. We prefer to be well informed, but to achieve this we sacrifice simplicity. So, for example, a system comprising the entire set of truths about the world would be maximally strong, but also maximally complex. Conversely, a generality like ‘events occur’ is maximally simple but uninformative. What we need is the most useful balance between the two, and that, perhaps, is what the ‘laws’ of nature provide. This is not a precise formula but a suggestion or heuristic for a way of thinking about scientific laws: we look for the simplest generalisations possible from which we can draw the most information. Thus laws of varying degrees of resilience are scattered among the various scientific disciplines. On this view the laws of nature are condensed descriptions of the collection of particular facts about the world.
Laws can also be regarded in the same way but relative to particular vocabularies, so that we could have different laws for different vocabularies. An economist wouldn’t be interested (at least not qua economist) in deductive systems that talk about quarks and leptons: her language would be along the lines of inflation and interest rates. The best system for this coarser-grained vocabulary will give us the laws of economics, distinct from the laws of physics.
On this descriptive account laws are part of our map, not the territory: they are convenient ways of abbreviating reality. Regularities assist the organisation of knowledge, and which regularities we single out depends on facts about us. Nature does not make these regularities laws; we do.
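A loose computational analogy (my own illustration, not Lewis’s formulation): a regular pattern compresses far better than an irregular one, just as a lawful world admits short, strong descriptions of its facts.

```python
import random
import zlib

# A highly regular 'world history' versus an irregular one,
# each 10,000 bytes long.
regular = b"AB" * 5000
random.seed(0)
irregular = bytes(random.randrange(256) for _ in range(10000))

# A regularity is a condensed description of the facts: the
# compressed size stands in for description length.
print(len(zlib.compress(regular)))    # a very short description
print(len(zlib.compress(irregular)))  # barely shorter than the data
```

The regular sequence collapses to a few dozen bytes while the random one resists compression; on the best-system picture, ‘laws’ play the role of the short description of the compressible world.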

Mereological reductionism is the claim that the stuff in the universe is built of things described by fundamental physics, even though physicists may be unsure what these are. Nomic reductionism, by contrast, holds that the fundamental laws of physics are the only laws that really exist, and that the laws of other disciplines are just convenient abbreviations necessitated by our computational limitations.
Nomic reductionism appeals through the apparent redundancy of non-fundamental laws: macroscopic systems are entirely built out of parts whose behaviour is determined by the laws of physics, so the laws of other disciplines are superfluous. This argument relies on the prescriptive conception of laws: it assumes that real laws do things, that they physically influence matter and energy – otherwise there seems to be overdetermination. But if we regard laws as descriptive, all we have are different best systems, geared towards vocabularies at different scales, and therefore different regularities described in different condensed ways. There is nothing problematic about having different ways to compress information about a system, and we need not claim one method of condensation as more real than another.
Accepting the descriptive conception of laws removes the ontological divide between fundamental and non-fundamental laws; privileging the laws of physics is the result of a confused metaphysical picture.
However, even if we accept that laws of physics don’t possess a different ontological status, we can still believe that they have a prized position in the explanatory hierarchy. This leads to explanatory reductionism, the view that explanations couched in the vocabulary of fundamental physics are always better because fundamental physics provides us with more accurate models than the non-fundamental sciences. Also, even if one denies that the laws of physics themselves are pushing matter around, one can still believe that all the actual pushing and pulling there is, all the causal action, is described by the laws of physics, and that the non-fundamental laws do not describe genuine causal relations. We could call this kind of view causal reductionism.
Unfortunately for the reductionist, explanatory and causal reductionism don’t fare much better than nomic reductionism. Stay tuned for the reasons why!
In complex systems, small initial differences can have massive consequences.

A heart has little meaning except in relation to the body of which it is a part. Hydrogen and oxygen combined as water have properties that are different from, and unpredictable from, those of the individual atoms.

Top-down causation
Emergence generally entails the language of scale, but causal language may be helpful where ‘higher’ levels of behaviour arise from ‘lower’ level causes. Emergence is conveniently illustrated through the world of computers. Computer hardware enables but does not control – it is software, as information, that makes a computer work: software tells the hardware what to do. The information software carries is abstract, not physical, but it has causal agency; it sets the constraints for ‘lower-level’ action whose goals can be achieved in many ways (multiple realization).

Epiphenomena are by-products of things, not the things themselves: brain and mind; brain and consciousness.


Generalisations in biology are often not strict, admitting various exceptions; they are often not uniquely biological, since organisms, for example, follow the rules of hydraulics and aerodynamics; and they rarely have the law-like character of physical laws. Biology accepts the generalisations of physics and then proceeds within its own domain.

Classical reductionism – either the laws or generalities of biology, psychology and social science are the deductive consequences of the laws of physics, or they are not true.

Multiple realization – if genes have many effects on many phenotypic characteristics, and phenotypic characteristics are affected by many genes, then the relationship between genetic and phenotypic facts is many-to-many and therefore cannot be a deductive consequence. There is a complex two-way relationship between the genome and its molecular environment.

Paley was the author of the watchmaker analogy.

Synthesis & analysis
We can see in analysis and synthesis several opposing ideas. Analysis is breaking down, synthesis is building up; analysis looks down while synthesis looks up. In a literary sense analysis is associated with Classicism, which ‘looks back’ to old traditions, certainties, universal characteristics and conservatism, while synthesis, like Romanticism, ‘looks forward’ to novelty, creativity, experiment and imagination – perhaps even to the great intellectual traditions of the permanence of Being and the change of Becoming.

Order & constraint
The ancients wondered why there was order in the world rather than randomness and chaos. Today we can point out that this order comes about by means of constraints on possible outcomes. Not everything is possible. The most obvious constraints on activity in the universe are a consequence of physical constants, what we call ‘physical laws’. This means, for example, that given the initial conditions of the universe the possible outcomes are already limited. The best science at present indicates a heat death. Though the universe is mindless, owing to the constraining action of physical constants, at any given time it has potential that will become actualized in a more or less predictable way. That is, there are ‘ends’ to inexorable physical processes, and in this sense these processes are teleological. When life emerged the nature of teleology underwent a radical change. Some organisms would survive and others perish. Though morality is supplied by the human mind, the reasons for organismal survival exist in nature. Situations become ‘good for’ and ‘bad for’; there is rudimentary normativity and functional design. There is also the passage of historical information from one generation of organism to the next, the historical organism-environment interaction being ‘represented’ as information contained in genes or gene-like chemicals.

Emergence as new biological order is more a consequence of constraining boundary conditions than of the existence of biological laws or law-like behaviour.

The fallacies of composition and division
The fallacy of composition arises when it is inferred that something is true of the whole from the fact that it is true of some (or even all) of its individual parts. For example: ‘fermions and bosons are not living, therefore nothing made of fermions and bosons is living’. Rejecting this fallacy is akin to an assertion of emergence – that the whole may have properties or qualities not apparent in the parts. This is in contradistinction to the fallacy of division, when it is inferred that something true of the whole must also be true of its parts. For example: ‘my living brain exhibits consciousness, therefore its constituent atoms display consciousness’.

To explain something is to subsume it under a law.

The adaptations of living organisms are treated as ‘forward-looking’.

This imposition of order (function) by natural selection, the process of adaptation, is a holistic feature that has been called ‘downward causation’ (the whole may alter its parts), and it cannot be improved upon by reductionist explanation (Campbell, 1974).

Hierarchy implies that levels are ontologically distinct but they may be only epistemologically so. We must communicate in a linear way but the ideas being communicated may not be related in a linear way.

Memes are informational not physical.
We do not have to comprehend reasons; we can simply act on them. Though our minds understand, our neurons do not. Though an ants’ nest is a highly integrated and purposeful unit, the individual ants do not understand this. Meaning can emerge from non-meaning.
Computers have taught us about the importance of software and system behaviour rather than physics and chemistry.
Abstract notions can be causal – habits, words, shapes, songs, techniques, learning by watching, long-division: none of these is in the genome.

So, to help understand the many issues at stake here we can imagine a continuum in the structure of ‘wholes’ ranging from those where there is minimum interaction between the parts (aggregations) to those in which there is a highly integrated interdependence of the parts – many variables and complex causation (systems). As an example of the former we might think of a rock made out of grains of sand and of the latter a living organism. But there are many different kinds of wholes, so consider the following: a car engine, an ant colony, a flock of birds, a shoal of fish, 4 as a sum of 2 and 2, a symphony as organised sound, economic patterns that are a consequence of mass markets.

The situation is complicated by the nature of the whole which might be: a mechanism or process, a behaviour (movement of flocks of birds or shoals of fish), a property, a system with parts in a state of dynamic interdependence, a concrete object.
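A standard computational illustration of such wholes – global order arising from purely local rules, with no part ‘aware’ of the pattern – is a cellular automaton. The sketch below is a minimal step function for Conway’s Game of Life, offered only as an illustration (it is not drawn from the text above):

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (row, col) live cells.
    Each cell obeys only local rules about its eight neighbours."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbours,
    # or is currently live with exactly 2.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'blinker': three cells in a row oscillate between horizontal and
# vertical - an ordered whole that no individual cell 'knows about'.
blinker = {(1, 0), (1, 1), (1, 2)}
print(sorted(step(blinker)))  # [(0, 1), (1, 1), (2, 1)]
```

The point of the example is that the oscillating pattern is a property of the whole configuration, not of any single cell – the computational analogue of the flock or the ant colony.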

In very general terms it seems that physics and chemistry tend to take a reductionist or atomistic approach to their disciplines while biology and the social sciences adopt a more holistic or organismic stance. This raises a major metaphysical question about the nature of reality. What is the structure of the physical world? Is the best scientific representation of the world expressed through the relations of fundamental particles acting under the influence of physical constants, or is it better represented in some other way?

Macro-causation supervenes on or is determined by micro-emergence (strong emergence)

(The humanistic view of human nature (Roger Scruton). Perhaps it is impossible to ‘reduce’ the language and ideas of the human realm to those of science, which cannot capture the sense of self and other, I and thou. So, for example, music, colour, and art can be subjected to meticulous scientific scrutiny but still lack a dimension of uniquely human understanding.)

Media Gallery

Emergence – How Stupid Things Become Smart Together

Kurzgesagt – In a Nutshell – 2017 – 7:30

What’s Strong Emergence?

Closer to Truth – 2020 – 26:47

First published on the internet – 1 March 2019
