
Reductionism

What are we to make of an assertion like this?
Is it correct . . . or is it just playing with words? Is there something special and unique about biology that cannot be expressed in a simple physicochemical way? If it is true, then will we, in the future, find ourselves replacing biological language with physicochemical language? If you think this is unlikely, then what are the problems or misconceptions that will prevent it from happening?
The questions outlined above relate to one of the most vexed questions in science today – the relationship between the various scientific disciplines: what characteristics do they share, and what makes each distinct? Are some disciplines more scientific than others? What are the differences in principles and methodology between, on the one hand, the so-called hard sciences like physics and chemistry and, on the other, the soft or special sciences like biology, sociology, politics, and economics? Is there a foundational subject or subjects? If biological explanations are ‘improved’ by being ‘broken down’ (explained or analyzed) in terms of physics and chemistry (reductionism), then can explanations be equally improved by being ‘built up’ into greater wholes?

Oxytocin is the cuddle hormone – it is what makes us feel affectionate.
So where is love in all this?
Is this chemical all that there is to this powerful emotion?
Adapted from an image of the oxytocin molecule in Wikimedia Commons
Edgar181 – Accessed 8 Sept. 2015
The problem
The problem of reductionism is notoriously complicated because it touches on so many concerns in the philosophy of science. Among the major topics are: the relationship between physics and the rest of science, including debates concerning the difference between ‘hard’ and ‘soft’ science and between physics and the special sciences; the difference (if any) between physics and biology; and the relationship between the brain as physico-chemical process and the mind or consciousness.
A brief scanning of the literature on reductionism quickly reveals its complexity as it hits up against a whole lexicon of daunting specialist terms, all warning you of the minefield ahead: constructivism, emergence, foundationalism, holism, organicism, vitalism, eliminativism, supervenience, multiple realization, epiphenomena, teleology, qualia, degeneracy, consilience, granularity, realism and anti-realism, perspectivism, and much more . . . all seemingly mixed up into a scientific and philosophical soup of ideas.
Reductionism lies at the heart of two competing notions or paradigms concerning the way we should be doing science based on different metaphysical systems . . . different assumptions about the nature of reality itself. The two paradigms are not distinct, being related in complex ways. However, for ease of exposition they can be contrasted as, on the one hand, reductionism (foundationalism) and, on the other hand, holism (emergentism).
As understood here:
Reductionism emphasizes: the unity of science; the ideal of mathematics; the foundation of science in the laws, theories and concepts of physics; and the primacy of analysis as a mode of explanation – the understanding of scientific entities in terms of the operation of their parts.
Holism challenges the notion of a unified science, advocates anti-foundationalism by asserting the validity of independent domains of discourse, and the equivalent use and validity of synthesis as a means of scientific explanation – the understanding of scientific entities in terms of their relationship to more encompassing wholes.
The question being posed is ‘whether biology . . . differs in its subject-matter, conceptual framework and methodology from the physical sciences’.[12]
We may be suspicious of reductionist claims but rarely are they subject to close scrutiny by practicing biologists. In this article I shall try to draw some of the threads of this vexed problem together. For simplicity, and to challenge the reader, the article will develop an overall claim based on a series of challengeable principles.
The reductionist challenge
One way of loosely circumscribing reductionism is to regard it as the translation of ideas from one domain of knowledge to another. In this form it is often used as a way of simplifying, debunking, or explaining away.
Reductionist claims usually take the form ‘A is really just B’ or ‘A is nothing but B’. A well-known example would be the statement ‘humans are really just DNA’s way of making more DNA’. Is this a serious, valid, and useful scientific claim?
After being confronted by a reductionist claim of this sort we are left wondering whether we have been cheated – thinking that something important, even critical, has been ignored or passed over – but not knowing what that is.
There is the implication that some subjects, language, or ideas are superfluous, that they can be eliminated altogether or explained and understood in a scientifically more respectable way.
Examples
Actual examples of reduction in the history of science are quite rare but there is the move from classical thermodynamics to statistical mechanics; the transition from physical optics to Maxwell’s electromagnetic theory whose equations have resulted in smartphones and TVs; and the transition from Newtonian mechanics to Einsteinian relativistic mechanics. Maybe in biology there is the translation of Mendel’s gene theory into the biochemistry of DNA. But when do we know that such a task has been completed successfully?
One famous attempt at reduction was that of British philosophers Alfred North Whitehead and Bertrand Russell who, in Principia Mathematica (1910, 1912, 1913, & 2nd edn 1927), examined the foundations of mathematics. By using axioms and inference rules they tried, unsuccessfully, to explain mathematics purely in terms of logic and set theory.
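To give a flavour of what such a reduction looks like in practice, here is the now-standard set-theoretic construction of the natural numbers (the von Neumann construction – a later simplification, not Whitehead and Russell’s own definitions):

```latex
% Each natural number is identified with the set of all smaller
% numbers; the successor operation S generates the rest.
\begin{align*}
0 &:= \varnothing \\
1 &:= \{0\} = \{\varnothing\} \\
2 &:= \{0, 1\} = \{\varnothing, \{\varnothing\}\} \\
S(n) &:= n \cup \{n\}
\end{align*}
```

On this construction talk of numbers is, in principle, dispensable: every arithmetical statement can be restated as a statement about sets.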
To understand the provocative flavour of reductionism here are some further examples . . .
It might be claimed, for instance, that history is just a fancy name for what is really only biology – the study of human behaviour; that morality is just a set of functional biological adaptations; that concepts and mental images in our minds are just physicochemical processes in our brains; or that sociology is a fiction because there is no such thing as ‘society’ just groups of individuals.
Various forms of reduction have become ‘-isms’: like psychologism – the claim that non-psychological domains, such as logic, can be explained in purely psychological terms; biological determinism – that human behaviour can be explained in purely biological terms; environmental determinism – that social development is largely a consequence of environmental factors; genetic determinism – that genes determine our behaviour more than culture.
In the humanities similar cases, perhaps slightly different in character, might be for example a historical or literary analysis from the perspective of psychoanalysis or Marxist theory.
Of special current relevance is the way that the mind reduces to the brain, the mental to the neural, and the neural to the physico-chemical. One articulate contemporary statement of scientific reductionism in general is that of Alex Rosenberg, Professor of Philosophy at Duke University in America. In The Atheist’s Guide to Reality: Enjoying Life without Illusions (2011) Rosenberg poses his thesis with an uncompromising directness. He is an advocate of scientism[1] and the claim that ‘science alone gives us genuine knowledge of reality’, that ‘What ultimately exist are just fermions and bosons and the physical laws that describe the way these particles, and the larger objects made up of them, behave’, that ‘the physical facts fix all the facts’ in a ‘purposeless and meaningless world’. Further, since ‘morality is illusory’, the consistent atheist must be a nihilist, ‘albeit a nice one’.
Rosenberg’s view may be characterized as an extreme variant of materialism[5] or physicalism[2] within the host of philosophical positions that can be adopted on such matters.[3][4][5][6][7][8][9] However, this is a particularly hard-nosed approach[10] when compared to naturalism[3] and scientific realism[4] which have a more relaxed attitude to the claims of science.
Rosenberg’s stance is that of physicalist reductionism[12] and it provides a useful target for those with different views.
Unpacking ‘reduction’
Already you may be thinking that these examples are just a matter of woolly thinking, oversimplifications, unreasoned comparisons, or semantic confusions . . . so let’s be more specific.
To get started we need to establish a common understanding, some ground rules. What exactly do we mean by ‘reduction’? What is being reduced when we suggest reducing X to Y . . . are we talking about one, several, or all of the following: properties, terms, concepts, physical objects or phenomena, explanations, meanings, theories, principles, and laws?
Philosophers have found a way of simplifying this problem by distinguishing three kinds of reduction as it relates to: existence, explanation and methodology. This provides us with a broad classification of, on the one hand, the objects of reduction and, on the other, what is being claimed for a reduction.
EXISTENCE – what exists
EXPLANATION – how we can claim that reduction has been achieved
METHODOLOGY – how the reduction is carried out
This distinction will be needed in the discussions to come as it helps to clarify any disagreements about reductionist claims.
Principle 1 – when encountering a reductionist claim it helps to distinguish whether the claim is about existence, explanation, or methodology
Existence
These are reductionist (metaphysical) claims about what ‘really’ exists, made by reducing entities or phenomena to others – often relating to modes of representation and the distinction between ‘appearance’ and ‘reality’. For example, it may be claimed that a table or chair, though seeming to be a solid unitary object, is ‘really’ made up of molecules that consist mostly of space (ontological reduction).
Explanation
These are reductionist claims about our ways of knowing, understanding, and explaining – such as the reduction of one theory to another. For example, that it is best to explain genes in terms of molecular biology (epistemological reduction).
Methodology
This relates to the problems of translating one form of knowledge into another: chemistry into physics, biology into chemistry and so on. For example, what exactly are the factors complicating the translation of words like ‘mitosis’, ‘predator’, ‘hibernation’, and ‘interest rates’ into physicochemical language (methodological reduction)?
Five faces of reduction
I shall now ask you, the reader, to examine a set of challengeable principles that can be used to assess reductionist claims. The principles are organized around five topics that are key ingredients in the confusions and problems relating to reductionism:
1. The representation of reality in perception, cognition, and language
2. Explanation & causation
3. Scientific fundamentalism – that there is a unity of science based on a foundation of mathematics, physics, universal physical laws and constants, and fundamental particles
4. Emergentism (holism) – wholes, parts, and emergent properties
5. Domains of knowledge and the translation of ideas from one domain of knowledge into those of another
1. Reality & representation
The first topic looks at the ambiguities and confusions that can arise from the way we intuitively structure reality – the way our perception and cognition filter all our experience to give us a uniquely human outlook – and at the way scientific language can conceal errors and ambiguities in how we describe and represent the world.
2. Explanation & causation
Causation is the glue that we use to bind our scientific explanations and it is, we believe, the bedrock on which science builds its observations and predictions. And yet causation is problematic to scientists and philosophers alike.
3. Foundationalism
This article examines the claim that there is a unity of science based on the foundations of physics and mathematics, that ‘physics fixes all the facts’. That, at least in principle, everything in the universe can be explained in terms of the fundamental constituents of matter and their relations – including normativity, function, purpose, mind, meaning, thoughts and representations. The actual foundations may be treated in terms of matter (the foundational physics of fundamental particles out of which all matter is made) or explanatory axioms (the physical laws that underpin the order of the cosmos).
4. Wholes and parts
This article examines the challenge to foundationalism, that wholes are in some sense more than an aggregation of parts, and that novelty has emerged in the universe in an unpredictable way by giving rise to new and unexpected features and properties – like the emergence of life from inanimate matter, and consciousness from brains.
5. Domains of knowledge
Science tends to arrange its subdisciplines in a sequence that runs from mathematics and logic to physics, chemistry, biology, behaviour and psychology, then the economic, social and political sciences. Is the segregation of scientific knowledge into these domains just a matter of convenience or does it relate in some way to the structure of the world? Related to this question is the way that each scientific discipline has developed its own particular language, principles, practices, and academic empires. How are these domains of knowledge to communicate with one another? Is it possible to translate one discipline into another?
A preliminary thought to ponder: the challenge for science and philosophy in the 21st century is not just to devise a physical account of the material universe in the form of some kind of unified field theory of space-time, M-theory, or string theory, but to provide an intellectually coherent account of the world that encompasses, among (many) other things, matter, life, the mind, consciousness, normativity, function and purpose, information, meaning, and representation.
1. Reality & representation
Whether something can be ‘reduced’ to something else depends largely on our intuitions about what there ‘is’ . . . about the nature of the objects in existence or, at least, the way we represent them in scientific theories, laws etc. This is a complex topic addressed in the article on representation.
2. Explanation & causation
Causation underlies the workings of the universe and our discourse about it. Anyone who is curious about the natural world must at some time or another in their lives have wondered about the true nature of causation, especially those people with a scientific curiosity. This series of articles on causation became necessary, not only for these reasons, but because causation is so frequently called on to do work in the philosophical debate about reductionism and today’s competing scientific world views.
It is dubious whether the reduction of causal relations to non-causal features has been achieved; scientific accounts are strong alternatives, with revisionary non-eliminative accounts finding favour. Can emergent entities play a causal role in the world? And is causation confined to the physical realm?
The issue to be addressed here is, firstly, whether causation itself can be reduced to something simpler and, secondly, the role that causation plays in causal interactions that operate within and between domains of knowledge. The outline of this article follows the account given by Humphreys in the Oxford Handbook of Causation of 2009.[2]
At the outset it is important to distinguish between reduction between the objects of investigation themselves (ontological reduction) and linguistic or conceptual reduction as the reduction of our representations of those objects.
Reduction of causation itself
Eliminative reduction of causation
We must decide whether causation is itself amenable to reductive treatment. Reduction may be eliminative reduction, in which the reduced entity is considered dispensable because inaccessible (Hume’s claim that we do not experience causal connection), so we can eliminate it from our theoretical discourse and/or our ontology (the Mill–Ramsey–Lewis model), substituting phenomena that are more amenable to direct empirical inspection. The most popular theory of this kind is Humean lawlike regularity, but in this group would also be the logical positivists, the logical empiricists (e.g. Ernest Nagel, Carl Hempel), Bertrand Russell, and many contemporary physicalists with an empiricist epistemology. Hume’s view was that we arrive at cause through the habit of association, and in this way he removed causal necessity from the world by giving it a psychological foundation. A benign expression of this view would be that ‘C caused E when, from initial conditions A described using law-like statements, it can be deduced that E’.
Non-eliminative reduction of causation
Causation is so central to everyday explanation, scientific experiment, and action that many have adopted a non-eliminative position: X is reduced to Y but not eliminated, simply expressed in different concepts like probabilities, interventions, or lawlike regularities. Non-eliminativists like the late Australian philosopher David Armstrong hold that causation is essentially a primitive concept that we can at least sometimes access epistemically as contingent relations of nomic necessity among universals, and thus amenable to multiple realization.
Revisionary reduction of causation
Here the reduced concept is modified somewhat, as when folk causation is replaced by scientific causation. Most philosophical and self-conscious accounts of causation are revisionary to a greater or lesser degree.
Circularity
Many accounts of causation include reference to causation-like factors as occurs with natural necessity, counterfactual conditionals, and dispositions in what has become known as the modal circle. The fact that no fully satisfactory account of causation can totally eliminate the notion of cause itself is support for a primitivist case.
Domains of reduction
Discussions in both science and philosophy refer to ‘levels’ or ‘scales’ or ‘domains’ of both objects and discourse. So physics is overlain by progressively more complex or inclusive layers of reality such as chemistry, biochemistry, biology, sociology etc. This hierarchically stratified characterization of reality is discussed elsewhere. Here the task is to examine the way causation might operate within and between these different objects and domains of discourse.
The attempt at reducing one domain to another is not a straightforward translation, as an account must be given of the different objects, terms, theories, laws, and properties and their role in causal processes. The preferred theory of causation (whether, say, a singularist or regularity theory) will be pertinent to what kind of causal reduction may be possible.
Relations between domains
Suppose we are engaged in the reduction of a biological process to one in physics and chemistry, say the reduction of Mendelian genetics to biochemistry; what kinds of causal interactions might we invoke? The causal relation might be: a relation of identity; an explicit definition; an implicit definition via a theory; a contingent statement of a lawlike connection; a relation of natural or metaphysical necessitation, as in supervenience; an explanatory relation; a relation of emergence; a realization relation; a relation of constitution; even causation itself. If indeed the causation were different in different domains then this might render reduction restricted or impossible. Accounts like counterfactual analysis are domain-independent (p. 636).
However, there are domain-specific claims such as physicalism’s Humean supervenience. Under some theories causation is restricted to physical causation as the transfer of conserved physical quantities and this is difficult to apply to the social sciences.
Domain-specific causation & physicalism
Could it be that causation in biology is different from that in physics or sociology, or is causation of the same general kind – is there ‘social cause’ and ‘biological cause’ or just ‘cause’? The most contentious area here is mental causation, where intentionality is often treated as ‘agency’ rather than ‘event’ causation.
Supervenience
In the 1960s domain reduction was promoted through the reduction of theories via bridging laws (Ernest Nagel). One major challenge for such an approach has been multiple realization, whereby something like ‘pain’ can be physically expressed in so many ways that this renders its further reduction unlikely, although this has been countered by supervenience accounts. For example, Humean supervenience regards the world as the spatio-temporal distribution of localized physical particulars, with everything else, including laws of nature and causal relations, supervening on this (p. 639). Supervenience is generally regarded as a non-reductive relation.
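The core idea can be stated more precisely. On the standard schematic formulation, a family of properties A supervenes on a family B when there can be no difference in A-properties without a difference in B-properties:

```latex
% Supervenience of A-properties on B-properties: objects
% indiscernible at the base level are indiscernible at the
% supervening level.
\forall x\, \forall y\,
  \big[\, \forall B\, (Bx \leftrightarrow By)
    \;\rightarrow\; \forall A\, (Ax \leftrightarrow Ay) \,\big]
```

Note that the conditional runs only one way, which is why supervenience by itself delivers dependence without reduction.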
Functionalism
Functionalism characterizes properties in terms of their causal roles. Money is causally realized by coins, cheques, promissory notes etc. The role of ‘doorstop’ can be functionally and reducibly defined, so not all cases of multiple realization are irreducible; irreducibility needs to be taken case by case. For Kim (1997; 1999) ‘Functionalization of a property is both necessary and sufficient for reduction …. it explains why reducible properties are predictable and explainable’. Since almost all properties can be functionalized, few need to be candidates for emergent properties (p. 644).
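An analogy from programming may help (a minimal sketch for illustration only, not drawn from the philosophical literature): an interface specifies a causal role, and any number of physically different implementations can realize it.

```python
from typing import Protocol

class Doorstop(Protocol):
    """The functional role: anything that holds a door open counts."""
    def hold_door(self) -> bool: ...

class Brick:
    def hold_door(self) -> bool:
        return True  # role realized by mass and friction

class RubberWedge:
    def hold_door(self) -> bool:
        return True  # role realized by elasticity and friction

def keep_open(door_prop: Doorstop) -> bool:
    # Explanation proceeds at the level of the role; the physical
    # realizer can vary freely (multiple realization).
    return door_prop.hold_door()

print(keep_open(Brick()), keep_open(RubberWedge()))  # True True
```

The code that uses the role neither knows nor cares which realizer it has been given, just as talk of ‘money’ abstracts from coins and cheques.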
Upward & downward causation
The restriction of cause to physical domains is supported by the downward causation and exclusion argument.
Causal exclusion principle & non-reductive physicalism
The causal exclusion principle states that there cannot be more than one sufficient cause for an effect. If we accept this then how are we to account for the causes we allocate at large scales, say the cause of a rise in interest rates? What is the causal relevance of multiply realizable or functional properties (redness, pain, and mental properties)? Does this principle automatically devolve into smallism – that we ultimately explain everything all the way down to fermions and bosons, or smaller and more basic entities when we find them, because they are the ones doing the causal work? How can a macro situation have causal relevance if it can be fully accounted for at the micro scale? These properties then become epiphenomena – by-products with no causal powers of their own.
The argument can be laid out as premises: if C is causally sufficient for E, then any other distinct event D is causally irrelevant to E; every physical event E has a physical event C causally sufficient for it; and if event D supervenes on C then D is distinct from C. It follows that D is causally irrelevant to E.
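Laid out schematically, with Suff(C, E) reading ‘C is causally sufficient for E’, Rel(D, E) reading ‘D is causally relevant to E’, and Sup(D, C) reading ‘D supervenes on C’ (notation introduced here purely for illustration), the argument runs:

```latex
\begin{align*}
&\text{(Exclusion)}    && \mathrm{Suff}(C, E) \wedge (D \neq C)
                          \;\rightarrow\; \neg\mathrm{Rel}(D, E) \\
&\text{(Closure)}      && \text{every physical event } E
                          \text{ has some physical } C
                          \text{ with } \mathrm{Suff}(C, E) \\
&\text{(Distinctness)} && \mathrm{Sup}(D, C) \;\rightarrow\; D \neq C \\
&\therefore            && \mathrm{Sup}(D, C) \;\rightarrow\;
                          \neg\mathrm{Rel}(D, E)
\end{align*}
```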
There is increasing evidence supporting the causal autonomy of disciplinary discourse, or non-reductive physicalism. Properties in the special sciences are not identical to physical properties, since they are multiply realized, although they do supervene on (instances of) physical properties, since changes in the special properties entail changes in the physical properties; further, the special properties are causes and effects of other special properties.
A large-scale cause can exclude a small-scale cause. Pain might cause screaming while there is no equivalent neural property. This occurs when the trigger is extrinsic to the system. The pain resulting from a pin prick is initiated by the pin; it cannot possibly be initiated at the neural scale.
The exclusion principle can be applied to any kind of event that supervenes on physical events and shows that there is no clear causal role for supervening events.
The main questions to be addressed in relation to causation and reduction are: can causation itself be reduced; is there a base-level physicochemical causation underlying all other forms of causation; and how does causation operate (a) within non-physicochemical domains of discourse and scales and (b) between non-physicochemical domains of discourse and scales?
In posing these questions it should be noted that it is customary to discuss different academic disciplines as different domains of knowledge that use their own specific terminology, theories, and principles. So, for example, we have physics, chemistry, biology, and sociology being referred to as ‘domains of discourse’ and stratified into ‘levels’ or ‘scales’ of existence. From the outset a careful distinction must be made between ontological reduction, the reductive relations between objects themselves, and linguistic or conceptual reduction, which deals with our representations of these objects.
Cause & reductionism
So far in discussing reductionism it has been noted that at present we explain the world scientifically using several scales or perspectives. These scales correspond approximately to particular specialised academic disciplines with their own objects of study, terminologies, theories, and principles. One possible way of expressing this would be: matter, energy, motion, and force (physics), living organisms (biology), behaviour (psychology), and society (sociology, politics, economics). Each discipline has its own specialist objects of study like quarks (physics), lungs (biology), desires (psychology), and interest rates (economics). Since it has been argued that each discipline is addressing the same physical reality from different perspectives or scales, the question arises as to the causal relationships between these various objects of study. This raises the question of the relationship between causes at different scales, perspectives, or, in the old terminology, ‘levels of organisation’ when they deal with different entities. How do we reconcile causation at the fundamental-particle scale with causation at the political scale, assuming the physical reality they are dealing with is the same?
To answer this question we need to do some groundwork … our modest philosophical program is to ask: What is causation and in what sense does it exist? Is it something that exists independently of us and, if not, in what way does it depend on us? Is causation part of the human-centred Manifest Image? What role does causation play in our reasoning? In other words we need to demonstrate that causation is either a fundamental fact of the universe, or some kind of mental construct, or that it can be explained in different and simpler terms.
If we assume that explanation proceeds by analysis or synthesis, and we regard fermions and bosons as the smallest units of matter, then causation must act primarily from the wider context. A rise in interest rates, or the pumping of a heart, cannot be initiated by fermions and bosons themselves. To make sense of the fermions and bosons that exist in a heart we must consider their wider context.
Does causation occur at all scales depending on its initiators, or is there a privileged foundational scale, with macroscales explained by microscales: genes (in humans about 25,000 genes and 100,000 proteins) coding for proteins, then cells, tissues, organs, and the organism? That is, a causal chain that leads to progressively larger, more inclusive, and more complex structures. This is the central dogma of genetic determinism. But does causation also occur between cells, organs, and tissues? Are genes not triggered by transcription factors that turn them on and off? Is the environment not causal from outside the organism, along with other constraining factors at all scales, as in homeostasis? Evolution occurs through changes in the genotype that are produced by selection of the phenotype, as natural selection expresses the organism-environment continuum.
If ‘levels’ or ‘scales’ do not exist as separate physical objects then there is only one fundamental mode of being: a single physical reality that can be interpreted or explained in different ways; it has no foundational scale or level.
Weak emergence holds that descriptions at scale X are shorthand for those at scale Y; strong emergence holds that descriptions at scale X cannot be derived from those at scale Y.
Universal laws apply to biology – an unsupported elephant will fall to the ground – but biology has its own causal regularities that are, of their very nature, restricted to living organisms.
A cause can be sufficient for its effect but not necessary (a piece of glass C starting a fire E) – we can infer E from C but not vice versa; it may be necessary but not sufficient (the presence of oxygen C for a fire E in a fire-prone region) – we can infer C from E but not vice versa. Under this characterization cause can be defined as either sufficient conditions or even necessary and sufficient conditions.
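Expressed as conditionals, the two cases and their permissible directions of inference are:

```latex
% Sufficiency licenses inference from cause to effect;
% necessity licenses inference from effect to cause.
\begin{align*}
C \text{ sufficient for } E &: \quad C \rightarrow E
  \qquad \text{(infer } E \text{ from } C\text{, not } C \text{ from } E\text{)} \\
C \text{ necessary for } E  &: \quad E \rightarrow C
  \qquad \text{(infer } C \text{ from } E\text{, not } E \text{ from } C\text{)}
\end{align*}
```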
Some scales of explanation or causal description are more appropriate than others. It is possible to provide an explanation that is either overly general or overly detailed. What is appropriate depends on the causal structure – what would provide the most effective terms and structures for empirical investigation. This contrasts with the view that there is a fundamental or foundational scale at which explanation is most complete (Woodward 2009). Causes need to be appropriate to their effects: think of bosons influencing interest rates, or interest rates affecting the configuration of sub-atomic particles. Fine-grained explanations may be more stable, but not always (Woodward 2009).
One area where this tension expresses itself is in the argument over the mechanism of biological selection in evolution. Should we regard natural selection as ultimately and inevitably a consequence of what is going on in the genes (see Richard Dawkins’s book The Selfish Gene), or are there causal influences that operate between cells, between tissues, between individuals, between populations, and in relation to causes generated by the environment?
Noble, D. 2012. A Theory of Biological Relativity. Interface Focus 2: 55-64.
It is widely assumed that large-scale causes can be reduced to small-scale causes, the macro to the micro: that macro causation frequently (but not always) falls under micro laws of nature. This presupposes a means of correlating the relata at the different scales. This might be interpreted as microdeterminism, the claim that the macro world is a consequence of the micro world: the causal order of the macro world emerges out of the causal order of the micro world. A strict interpretation would be that a macro causal relation exists between two events when there are micro descriptions of the events instantiating a physical law of nature; a more relaxed version requires only that there are causal relations between events that supervene on the micro. It might also be the case that even if lawful microdeterminism holds, this does not entail causal completeness at the micro scale. Perhaps in some cases there is counterfactual dependence at the macro but not the micro scale.
Granularity & reductionism
We are tempted to think that we can improve on the precision of causal explanations. Could or should we try to improve the precision of causal explanations by giving more detail or being more scientific? For example, I might explain how driving over a dog was related to my personal psychology, the biochemical activity going on in my brain, the politics of the suburb where the accident occurred, and so on. That is, the explanation could be given using language and concepts taken from different domains of knowledge: psychology, politics, sociology, biochemistry and so on. The same situation can be described using different domains of knowledge, scales of existence, and so on. What is of special interest is that the cause will be different depending on the perspective chosen. For simplicity, the level of detail chosen for the explanation is referred to as its granularity. This raises the problems of reduction discussed elsewhere. Is there a foundational or more informative scale or terminology that can be used? Is an explanation taken to the smallest possible physical scale the best explanation? Are the causal relations dependent on more metaphysically basic facts like fundamental laws? Do facts about organisms beneficially reduce to biochemical facts … and so on. Is fine grain best?
Principle 2 – Any description of causation presents the metaphysical challenge of selecting the grain of the terms and conditions to be employed
We can appear to express the same cause using different terms that seem to alter the meaning and therefore the causal relations under consideration, for example: we might replace ‘The match caused the fire’ with ‘Friction acting on phosphorus produced a flame that caused the fire’. This raises the question ‘But what was really the cause?’, with the potential for seemingly different answers when we want only one. The depth of detail in terminology is sometimes referred to as granularity and it raises the question of whether some explanations are more basic or fundamental than others, that some statements can be beneficially reduced to others (reductionism).
This gives us an extended definition of science: science studies the order of the world by investigating causal processes, and causal processes are of many kinds. Though contentious, we might add that we must resist the temptation to reduce causes of one kind to causes of another kind. Causally it makes no sense to reduce biology to physics by saying that fermions and bosons cause the heart to beat: a heart might consist of fermions and bosons but these do not have causal efficacy in this sense. This takes us away from the traditional method of attempting to define science, which has been in terms of its methodology (the hypothetico-deductive or deductive-nomological method).
Multiple realization
Physicalists can be divided into two camps: those who think everything can be reduced to physics (reductive physicalists) and those who do not (nonreductive physicalists). The reductive physicalist claims a type-identity thesis such that, for example, mental properties like feelings are identical with physical properties, not merely caused by them. To assume we have two entities, one acting causally on the other, is mistaken, the two being, in fact, one and the same. Similarly with the non-causal connection between temperature and mean molecular kinetic energy, and perhaps life and complex biochemistry. The question arises, though, as to the identity of objects. Is pain physically identical in a human and a herring? Here it seems that pain can be expressed in many different physical ways, known as ‘multiple realization’. This attack on the type-identity thesis led to the modified claim that mental states are identifiable with functional states, which then allows multiple realization, a functional property being understood in terms of the causal role it plays. However, we can think of pain as being either coarse-grained (a single property) or fine-grained (a mix of properties hardly warranting aggregation under a single category).
Emergence
Reduction is generally contrasted with emergence. Accounts of emergence are rarely causal in form. Why cannot ‘horizontal’ causation give rise to emergent features within the same domain?
3. Scientific fundamentalism
It might be assumed that science provides us with the most secure form of knowledge and that, within science, the most secure forms of knowledge are mathematics and physics. But why is this so?
The explanatory regress
We explain one ‘thing’ in terms of another – we do not explain it in terms of itself. Reductionism, like all science, is a form of explanation: it gives a clarification, simplification, reasons, or justification. And it does so by explaining the whole in terms of its parts.
Justification
Aristotle observed that explanations, to be logically consistent, require further explanation. Like a child, we can continue to ask ‘but why?’, demanding yet more explanations as justification.
In practice, at some stage in the explanatory process we accept one particular explanation as sufficient for our purposes – but that does not mean that, logically, the demand for further explanation cannot continue.
Fundamentalism
Explanations, like philosophical justification, can enter an infinite regress or lapse into circularity. The only way out of this dilemma is to draw a line in the sand, to accept one particular explanation as sufficient for purpose, and then use this as a point of security or foundation for further inferences.
This fundamentalism can then serve as an unquestioned bedrock of self-evident or unjustified truth or axiom (sometimes called a primitive or brute fact). A good example of a scientific brute fact is a law of physics.
We feel a compulsion to be as fundamental as possible in our explanations: if further questions can be posed then the problem has not been adequately addressed. Scientific explanation seems to stop at physical constants and laws – even though we cannot explain why these laws are as they are or, indeed, why there are any laws at all. Mathematics is the cardinal case of theories and explanations built on axioms.
Coherentism
An alternative to fundamentalism is coherentism whereby beliefs must hang together, forming a coherent web of interlocked ideas.
Semantics, metaphor, definition
Much turns on what we assume is meant by ‘foundational’ or ‘fundamental’, and on our mental characterization of ‘reduction’.
Fundamental
We have seen that one way of bringing a regress to a halt is to find an explanation that does not need justification – one that is beyond question, primitive, self-evident, or a brute fact. It then becomes futile looking for further definitions, explanations or proofs because such foundational concepts presuppose the things they are meant to be explaining.
In mathematics these basic assumptions are known as axioms and they form the foundational logical structure on which all mathematics rests: if the axioms are unreliable, then the entire edifice comes crashing down.
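For example (a standard modern illustration rather than a historical one), elementary arithmetic rests on the Peano axioms:

```latex
% The Peano axioms: 0 is a number, every number has a successor,
% successors are injective and never 0, and induction holds.
\begin{align*}
&0 \in \mathbb{N} \\
&n \in \mathbb{N} \;\rightarrow\; S(n) \in \mathbb{N} \\
&S(n) \neq 0 \\
&S(m) = S(n) \;\rightarrow\; m = n \\
&\big[ P(0) \wedge \forall n\, \big( P(n) \rightarrow P(S(n)) \big) \big]
  \;\rightarrow\; \forall n\, P(n) \quad \text{(induction)}
\end{align*}
```

Everything from addition to prime factorization is ultimately justified by derivation from these few unproven starting points.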
This is the mode of thinking that we can call scientific fundamentalism. Aristotle used this principle to underpin his logic of scientific demonstration – the famous deductive syllogism. This is a form of argument which first states a universally secure foundational principle, then declares a particular instance, such that the premises necessarily entail the conclusion (e.g. all swans are white (foundational or universal principle); this is a swan (particular instance); therefore this swan is white).
Contrast this with the inductive argument, which reasons from particular observations to a probable conclusion (e.g. this is a swan; all observed swans have been white; therefore this swan is probably white). The conclusion of a deductive argument appears certain while that of an inductive argument has degrees of probability that depend on the quality of evidence.
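The contrast can be put formally:

```latex
% Deduction: the conclusion follows necessarily from the premises.
\forall x\, \big( \mathrm{Swan}(x) \rightarrow \mathrm{White}(x) \big),\;
  \mathrm{Swan}(a) \;\vdash\; \mathrm{White}(a)

% Induction: the premises make the conclusion only probable.
\mathrm{White}(s_1), \ldots, \mathrm{White}(s_n)
  \;\Rightarrow\; \mathrm{White}(s_{n+1}) \text{ is probable}
```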
Principle 3 – Foundationalism is the search for secure assertions that can be taken as the underpinning for other statements and assertions
The overwhelming character of foundationalism or fundamentalism is that of ranked dependency: some entities only exist, have authenticity, or can be explained because of others. They are diminished in relation to something else of greater significance.
Principle 4 – Foundationalism or fundamentalism are relations of dependency – where one object depends on, or is subordinate to, the existence, explanation, or method of investigation of another
Reduction
How do we represent ‘reduction’ in our mind’s eye? There are two objects: the reducee (that which is reduced) and reducer (that to which it is reduced). The word is a metaphor derived from the Latin reducere to bring back, to be assimilated by, or to diminish. We imagine the reducer as in some way prior to, or more basic than the reducee. Sometimes this is treated as a process of elimination (eliminativism) as when we regard the description of mental illness (reducer) as eliminating or substituting for possession by demons (reducee), or the idea of oxygen (reducer) replacing that of phlogiston (reducee).
Whether the reducee is eliminated, subsumed, or replaced by the reducer, a prioritization or ranking has taken place: the reducer has been prioritized over the reducee (for whatever reason). Ranking and prioritization are characteristics of our minds, not of nature so whenever we perform a ‘reduction’ we need to determine whether we are assuming that the reduction occurs in nature or in our minds.
Principle 5 – ‘Reduction’ is metaphorical language used for the prioritization or ranking of something in relation to something else. It occurs in our minds, not in nature
Maths & physics
‘All science is either physics or stamp collecting’
Ernest Rutherford, New Zealand-born British physicist, c. 1900
Many reasons can be found for placing mathematics and physics at the forefront of the sciences. Since at least the time of the classical philosophers of Ancient Greece, mathematics has been treated as a model or template for all knowledge, including physics, as the mode of thinking towards which all other thinking should aspire. A sign above the entrance to Plato’s Academy in ancient Athens read: ‘Let no-one ignorant of geometry enter here‘.

Artist’s impression of Gravity Probe B orbiting the Earth to measure space-time – a four-dimensional description of the universe (height, width, length, and time) expressed in differential geometry
Differential geometry is the language in which Einstein’s General Theory of Relativity expresses the smooth manifold that is the curvature of space-time – which allows us to position satellites in orbit around the Earth – and it is also used to study gravitational lensing and black holes. The Riemannian geometry of relativity is a non-Euclidean geometry of curved space
Courtesy Wikimedia Commons; image sourced from NASA at http://www.nasa.gov/mission_pages/gpb/gpb_012.html
Mathematics had practical application beyond astronomy: it provided the precision needed to engineer the magnificent monumental architecture we associate with classical civilization. Numerologists like Pythagoras (c. 570–495 BCE) became cult figures for thinking men. The pre-Socratic philosophers had examined the nature of substance, looking for universal properties and fundamental elements, bequeathing to their successors the idea of four foundational elements – Earth, Air, Fire, and Water – in a tradition that continued into the Medieval world, along with Democritus’s idea of matter being composed of tiny indivisible particles called atoms. The study of living organisms, we believe, did not really get started until the time of Aristotle (zoology) and Theophrastus (botany). Only then do we see the emergence of a critical analytic curiosity about organisms themselves rather than just their utilitarian value as food, medicines, and materials. So biology, it seems, arrived as an afterthought in scientific enquiry, as expressed so eloquently in Aristotle’s ‘Invitation to Biology’.
‘It is not good enough to study the stars no matter how perfect they may be. Rather we must also study the humblest creatures even if they seem repugnant to us. And that is because all animals have something of the good, something of the divine, something of the beautiful’ … ‘inherent in each of them there is something natural and beautiful. Nothing is accidental in the works of nature: everything is, absolutely, for the sake of something else. The purpose for which each has come together, or come into being, deserves its place among what is beautiful.’
Aristotle – De Partibus Animalium (The Parts of Animals) – 645a15
The universality of mathematics
One feature of the 17th century Scientific Revolution was the unification by Kepler, Newton, and others of subjects like optics and astronomy with physics to yield what are sometimes referred to as the ‘mathematical’ or ‘exact’ sciences. These approximate the exactness and precision of mathematics. Philosophers from Descartes, Leibniz, and Kant to Bertrand Russell and the logical positivists have regarded these subjects as paradigms of rational and objective knowledge because they are quantitative investigations of the physical causes of natural phenomena using rigorous hypothesis testing to yield precisely quantifiable predictions.
Mathematical knowledge has a unique and appealing beauty: it gives us knowledge that is certain; incorrigible (it does not undergo revision in the way that empirical facts do); timeless or eternal (we are inclined to think that 2 + 2 = 4 must always be true: it was true before humans occupied the world and it would even be true if no universe existed); and necessary (its truths seem to lie outside our world of space and time and yet they can be grasped by our reason; they could not be otherwise). In addition, numbers are not causally interactive.
All this makes mathematical knowledge highly abstract, since we are not really sure what it is actually about. The simple answer is of course ‘it is about numbers’, but the concept of number has baffled philosophers from the earliest times. If numbers do not actually exist in space and time and are causally inert (they are abstract objects), then how can we have any knowledge of them? There is no universally agreed answer to this question, but there are three broad approaches: either numbers are independent abstract objects, or they are in the world, or they are mental constructs. The details need not concern us, but if numbers do not depend on experience then perhaps we have some special faculty of numerical perception (say, the intuitions of Kant), or we can relate them to logic and set theory (logicism) or to objects in the world, or they are simply mental constructs. Each of these positions has major difficulties. The fact that mathematics is so abstract might seem reason to dismiss it as some kind of mental construct, a phantom of our minds. But maths has been applied directly to the world in a practical and economic way that has had an immeasurable impact on human life (see, for example, Gravity Probe B illustrated above). And there are the many facts about the material world that were first suggested by mathematics before being empirically confirmed – for example, the Higgs boson, gravitational waves, the existence of Neptune, and the speed of light.
Because numbers seem to have a special kind of reality (and probably under the influence of the charismatic Pythagoras), Plato postulated his world of Forms – a world of timeless truths and generalities, not to be thought of as a separate place from Earth, like a heaven, but a realm of ideas that could be accessed and applied by reason. This was a special kind of objective knowledge, superior to empirical knowledge which, being derived from experience and sensation, was contingent and corrigible.
But how can we possibly believe in the objectivity of such an abstract realm and, anyway, how could we possibly connect with it?
Aristotle did not believe in Plato’s world of Forms, considering number to exist in the world as a property of objects. But, as philosophers later pointed out, how can number exist in a pair of shoes (one pair or two shoes)? Is the property in such a case 1 or 2? The philosopher Kant believed mathematics to be a form of innate intuition, an expression of our human sense of space and time: arithmetic expressed, through number, our linear and sequential experience of time, while geometry was a way of representing our sense of space. For Kant, then, mathematics was an abstraction that came from our heads; it did not exist objectively in the world.
The subjectivity or objectivity of number (whether numbers are real) remains a matter for intense intellectual debate. The impact of mathematics on the world cannot be questioned, and the security we feel as a consequence of its necessity, universality, and certainty has given it a special place in the scientific vision of reality . . . so it is hardly surprising that it has been emulated by other disciplines. In physics we see its universality reflected in the laws of physics.
Modernity has maintained its reverence for the application of mathematics to scientific theories and concepts, but with the recognition that maths, at its core, is logical not empirical: it is founded on conditional statements, if X (this may be an axiom) then Y. As the philosopher David Hume expressed it, maths is about ‘relations of ideas’ not ‘matters of fact’ … it is not empirical.
Principle 6 – Mathematics was inherited from the ancient world as the most secure form of knowledge. Since mathematics provided certain, necessary, timeless and universal truth it was regarded as the form of knowledge against which the statements of all science could be measured, and to which all science should aspire
Smallism
Physics, in investigating the nature of matter, proceeds analytically by breaking it up into ever smaller parts, a process that, over the years, has repeatedly found (albeit changing) candidates for the world’s ‘rock bottom’ material constituents. In 1947 these physical building blocks were electrons, protons, and neutrons; later it became quarks and other sub-atomic particles; today we have fermions and bosons. In this way all our explanations of matter have been brought to an end point, what we might indeed call the ‘fundamental reality’ of matter and existence . . . the smallest scientifically acceptable units as described by physics.
Principle 7 – Smallism – physics explains matter by proceeding analytically and experimentally to discover its smallest indivisible constituents, its fundamental particles. These are sometimes regarded as the foundational ingredients of ‘reality’
Fundamentalism
We can refer to our intuition that the small units of physics and chemistry are fundamental to both matter and material explanation as ‘scientific fundamentalism’. From this flows the sense of what has also been called ‘generative atomism’: the belief that, like a child’s Lego set, any whole can be built out of its fundamental building blocks. To understand the whole we must start with the parts. Small units, it might seem, somehow have greater scientific credibility; they are more authoritative and reliable; they provide better explanations; they are less complicated and therefore more easily understood; and they are the objects studied by physics.
Principle 8 – Fundamentalism is the assumption that all scientific explanation of matter must ultimately reduce to explanation of the smallest known particles of matter and their interactions
In arriving at the smallest or fundamental constituents of matter we have a feeling of finality: being fundamental, we might feel that these constituents are in some sense more real than the wholes of which they are a part. But this is clearly some kind of mental trickery, a cognitive illusion. There is nothing more ‘real’ about a fundamental particle than an elephant. Indeed, because we can see, touch, and hear an elephant we might argue that the elephant is more empirically real than an invisible fermion or boson (smaller than the wavelength of visible light). We regard small things as special not because of their mere existence (their ontology or being) but because of their role in analysis and explanation (their significance is epistemological). They are part of our habitual explanation of wholes in terms of their components and the relations between those components. Following Aristotle’s explanatory regress, our explanations must therefore bottom out at the smallest particles we know at any point in history.
Sometimes referred to as ‘ontological reduction’ this principle asserts that no physical object ‘exists’ more or less than any other. Smaller units of matter are no more ‘real’ than larger units of matter, nor are more inclusive or less inclusive units, or even more or less complex units. In terms of existence or reality atoms, rocks, bacteria, and humans are equals.
Principle 9 – All matter exists equally: no physical object ‘exists’ more or less than any other. Smaller units of matter are no more ‘real’ than larger units of matter, nor are more inclusive or less inclusive units, or more complex or less complex units (principle of flat ontology)
Reduction, organization, explanatory power
What is controversial in reductionism and science today is not the matter itself (ontological reduction) but the nature of its organisation, the relations between its parts (epistemological reduction) – especially the parts of living organisms. We must therefore look for other reasons for our prioritization of one domain of knowledge over another, for the intuition that explanations in one domain are in some way superior to (have greater explanatory power than) those in another: why, for example, we might consider it useful to think of biology in terms of physico-chemical processes. Why does scientific fundamentalism have such persuasive power over our general attitude to science? If all matter is ontologically equivalent then it is our cognitive focus that is making a distinction between different domains or scales of existence (the physicochemical, biological, social, psychological and so on). Analysis has explanatory power but this does not make the parts under consideration, whatever their size or inclusiveness, more ‘real’. On reflection we realise that no sort of matter is more real or fundamental in itself. Matter is just matter: small matter is just smaller than big matter; it does not have properties that make it existentially privileged in any way. So, in terms of material reality or existence (ontology), a bison is just as real as a boson.
When we take an overview of all the sciences is it true that ‘Particle physics is the foundational subject underlying – and in some sense explaining – all the others‘?[1] Could this be simply a comment on the way analysis is a habituated mode of explanation? To investigate the regress of scientific explanation to foundational particle physics we need to look at different kinds of explanation.
Explanatory rock bottom and adequate explanation
We might assume that, of necessity, the explanatory regress passes to ever smaller and ‘more fundamental’ material objects. But this is not inevitable: sometimes one particular kind of explanation is sufficient. Sometimes we feel no need to enter an explanatory regress. One particular answer is adequate.
Here are a couple of everyday examples of explanation. First, if asked ‘Why did the chicken cross the road?’ we could call on answers from scientific specialists such as a chicken biochemist, a neurologist, an endocrinologist, and an animal psychologist. But what if we were told that the chicken was being chased by a fox? This, surely, for most of us, is a satisfying and sufficient answer to our question. We do not need or desire to be told anything else. Does this mean that in this case the scientific answers were incorrect or inferior in some way? No, only that their explanations were not the most appropriate for the circumstances under consideration. Statements like ‘polar bears hibernate in winter’, ‘inflation can be managed by adjusting interest rates’, ‘evolution is replication with variation under selection’, or even ‘E = mc²’ appear sufficient in themselves: their veracity may be challenged but we do not think they need reformulating or reducing to improve or clarify what is being expressed.
Practical incoherence
There is a practical absurdity in trying to explain all phenomena in terms of the smallest workable scientific particles. What is to be achieved by explaining many biological facts in this way, like the fact that polar bears hibernate in winter? Examples become more ludicrous as we consider wider scientific contexts. How could we possibly explain a rise in interest rates in terms of fundamental physical particles and the laws of physics? What would such an explanation possibly look like? It is not that such a situation is logically impossible – we can imagine a supercomputer of the future that could enumerate the many causal factors at play in such a situation – but we simply do not think this way, and nor do we need to. Explaining the causes of an interest rate rise in physicochemical terms would not simplify matters and give greater clarity; it would entail an explanation so complex as to be barely imaginable.
What then constitutes a satisfactory scientific answer to a scientific question?
Principle 10 – The principle of sufficient explanation: explanations are fit for purpose; they do not need to be circular, foundational, or part of an infinite regress
This example demonstrates the multi-causal nature of many occurrences – like car and plane crashes. Questions about cause(s) in such situations are not abandoned because of their complexity since they must achieve a resolution in a court of law. In many instances, in spite of the apparent complexity, rulings are readily made.
Our intuitive desire for foundational explanations creates several difficulties.
The primacy of analysis – generative atomism
If someone asks ‘What is a heart and how does it work?’ we might answer analytically by treating the heart as a whole and explaining its parts and how they interact. Alternatively we might answer synthetically by treating the heart as a part and explaining how it interacts with other organs to contribute to the functioning of the body as a whole.
Much of science proceeds by explanatory analysis, breaking down physical entities into their constituent parts. But here too Aristotle’s dictum applies as we are inclined to proceed in a regress to ever smaller parts until we feel we have reached rock bottom, the world’s fundamental particles. There has, in the course of history, been a variable rock bottom. If the future continues as the past then there is nothing absolute, necessary, or certain about the particles that make up rock bottom. Democritus defined atoms as indivisible particles but physics has split the atom again and again, with today perhaps fermions and bosons approximating the foundational bricks out of which the universe is constructed.
Scope – universality of physical constants
Physics approaches mathematics in the (near) universality of its physical constants. Since it has a universal scope it also has an all-embracing character that is not shared by other sciences: its principles, theories, and laws are of such generality that they encompass all matter except under the most extreme situations. A falling stone and a falling monkey both conform to Newton’s law of gravitational attraction. Physics tries to explain the world not only at the smallest scale, as the behaviour of fundamental particles, but also at the widest scale, as constants or constraints that apply universally to all matter.
The foundations of science are generally taken to lie in mathematics and physics because their basic assumptions have universal application in two important ways: firstly, physics works with the stuff of the universe at its extremes – from the smallest particles to the cosmos in its entirety; secondly, its constants, laws, and theories apply to all matter everywhere.
Principle 7 – Physics combines with mathematics to formulate constants and constraints that apply not only to the smallest known particles but to the universe as a whole, and therefore its scope is wider than that of other sciences
The challenge to scientific fundamentalism
So what have we decided makes one discipline more scientific or less scientific than another?
Arguing that one discipline is ‘more scientific’ than another requires an extended justification. So far we might claim, for example, that physics encompasses all matter, while biology deals only with living matter. Physics deals with generalities and regularities that apply throughout the universe while biology deals only with the subset of generalities that relate to living organisms. Whatever principles and generalities we can establish in relation to life appear to lack the scope and reliability that we see in physical laws.
The fact that both a rock and an elephant are subject to the same effects of gravity does not automatically mean that physics is more fundamental.
Adding value
We might intuitively feel that what does the explaining (the explanans) is more fundamental than what is being explained (the explanandum).
True science, special science, hard and soft science
Has this account so far established a clear distinction between fundamental or foundational science and other science? Can we distinguish between hard and soft sciences, or indeed between science and non-science – or are such distinctions just a matter of semantics? The term ‘special sciences’ is generally used to denote those sciences dealing with a restricted class of objects as, say, biology (living organisms), and psychology (minds) while physics, in contrast, is known as ‘general science’. Reductionism would maintain that the special sciences are, in principle, reducible to physics or entities that may be described by physics.
Can we establish a clear benchmark using criteria of certainty, necessity, universality, corrigibility (falsifiability), or predictive capacity by which to rank in order the following areas of study: mathematics, physics, astrology, genetics, biology, psychoanalysis, psychology, history, political science, sociology, and economics? Would this establish a reliable table of scientific merit? Are such ranking criteria appropriate or should other factors be considered and, if so, what would they be?
In spite of many historical attempts, the philosophy of science has failed to establish uncontroversial necessary and sufficient conditions that would satisfy a definition of ‘science’ (see Science and reason). At present it appears that what we call science is, more or less, our most rigorous application of reason to an assemblage of theories, principles, and practices that share a family resemblance as a means of enquiry. It is this that has proved our most effective way of organising the knowledge we use to understand, explain, and manage the natural world.
In at least a practical and intellectual sense the special sciences are autonomous: their explanations, methodologies, terms, and objects of study are perceived as self-sufficient, without any requirement for, or benefit flowing from, translation to another scale or ‘lower level’ – in spite of assumptions about successful reductions in the past and the causal completeness of physics.
‘Fundamental’ can be ontic (that out of which everything is made – microphysics) or epistemic (that to which everything conforms).
When we reduce, are we suggesting a relation of identity between the reduced and reducing entities that justifies the elimination of the reduced entity, or are we merely referring to different modes of describing the same thing?
Method & subject-matter
Abstraction-reduction
It is a characteristic of explanation that it abstracts: it considers one particular aspect of the natural world to the exclusion of a more general context. In general our focus is on the explanation, not the context, the context being assumed or taken for granted. When a biologist gives an explanation of the way a heart pumps blood, it is assumed that the laws of physics are in operation – this does not have to be stated. Thus all the explanations we provide have two key characteristics: firstly, abstraction – they abstract from a greater whole, focusing on a particular situation or object while ignoring the context; secondly, they enter a potential analytic or synthetic regress – the explanation can proceed by progressive reduction and simplification (analysis) or by considering an ever-widening context (synthesis). Explanations thus resemble our perceptive and cognitive focus in paying attention to a particular set of circumstances (foreground) while ignoring the wider context (background). In providing an explanation there is a kind of unspoken rider, something along the lines of ‘assuming the uniformity of nature, and other things being equal (ceteris paribus)’.
Principle 8 – Explanations abstract information from a wider context
Proximate & ultimate explanation
Is sex for recreation or procreation?
A proximate explanation is the explanation that is closest to the event that is to be explained while an ultimate explanation is a more distant reason. In behaviour a proximate cause is the immediate trigger for that behaviour: the proximate cause of running might be a gun shot, the ultimate cause being survival. Biology itself divides in its approach to proximate and ultimate causes. Ultimate causes usually relate to evolution and adaptation and therefore function, answering the question of why selection favoured that trait – and the answers tend to be teleological. Proximate causes deal with day-to-day situations and immediate causation. Proximate and ultimate explanations are complementary: they are not in opposition, with one being better or more explanatory than the other; both have their place. This is a trap for the unwary since proximate answers can be mistakenly given to ultimate questions.
So, one possibility is that there is no privileged perspective that entails all others, each is equally valid and the explanation that is most appropriate will depend on the particular circumstances. In all this we are abstracting and studying certain factors while ignoring others. When we study the genetic code we do not consider it appropriate to think about electrons and quantum mechanics: when we study the heart we do not worry about gravity or consult the periodic table.
Principle 9 – Satisfactory explanations generally depend, not on the size of the units under consideration or the inclusiveness of the frame of reference, but on the plausibility, effectiveness, or utility of the answer in relation to the question posed.
So, sex is for both procreation and pleasure.
(Is the explanation contingent on our human interests and limitations or is it a full causal account?)
4. Unity of science, spatiotemporal boundaries, scope & scale
As science progressed it provided increasingly elegant summations of knowledge about the physical world. Apparently disparate phenomena were united under common laws that could be expressed as mathematical equations: the motion of the planets, the behaviour of fluids, electricity, and light. The integration of physics and mathematics had such explanatory and predictive power in relation to so many phenomena that there seemed no end to what they might achieve. Gravity was a universal force that treated falling rocks and falling monkeys with absolute equality. Physics embraced space and time, matter and energy – and that was mighty close to everything. Its explanatory breadth and predictive power were, and still are, thoroughly demonstrated through its spin-off technology. Today our GPS systems integrate space flight and complex electronics with relativity theory and quantum physics to provide maps on our car navigation systems. There was a vision of physics as a fundamental discipline incorporating all other knowledge. Physics was universal in scope and scale while other scientific disciplines dealt with only sub-sets of the physics enterprise. So, for example, physics encompassed all matter; biology only living matter; animal behaviour all sentient living matter; sociology humans as they interact in groups; anthropology human beings; human psychology human brains and behaviour. This characterization of science presents us with a metaphysical monism: there is one scientific truth for one reality based on one set of underlying principles (scientific laws). This vision is generally referred to as the ‘unity of science’.
Principle 10 – Scientific fundamentalism is a metaphysical monism: there is one scientific truth for one reality based on one set of underlying principles (scientific laws). This monistic vision is generally referred to as the ‘Unity of Science’
The convoluted complication of complexity – the multiplicity of objects, their properties, relations, and aggregations – can be simplified and reduced by analysis: a philosophy approximating monism that describes the many in terms of the few. Scientifically we do this by means of the elementary particle, generalization to principles and laws, and systematization.
For some physicists there is a goal like a ‘unified field theory’: when quantum mechanics is reconciled with relativity then our account of the physical world will be complete.
Principle 11 – The unity of science (metaphysical monism) – there is one scientific truth for one reality based on one set of underlying principles (scientific laws)
Does this universal character of physics give some kind of precedence to physics: does it make physics more ‘fundamental’?
Principle 12 – Because physics is broad in scope it seems to encompass or absorb other disciplines of more limited scope.
Citations
[1] Ellis 2005
[2] See Naomi Thompson, ‘Fictionalism about grounding’: https://www.youtube.com/watch?v=yMO64-21aik
[3] Fictionalism can apply across many domains. So, for example, we can be fictionalist about numbers (numbers have no referents, but they are useful) and about morality (there is no objective right or wrong, but the notions of right and wrong, good and bad, serve an important role in human life)
References
Ellis, G.F.R. 2005. Physics, complexity and causality. Nature 435: 743
Physical reductionism is possible but explanatory reductionism is not.
Supervenience of the mental on the neurological was an idea introduced by Donald Davidson as a dependence relationship.
The article on reality and representation also discussed the way our minds – that is, our cognition based on the objects of our perception – attempt to put order into the confusing complexity of the mental categories that make up reality. Working on the scientific image can improve the categories we use to describe the nature of reality but it does not give us an overall structure. We give structure to reality by applying metaphors that generally work well for us in daily life – by distinguishing between: what is bigger and what is smaller; what is contained in or is a part of something else; what is simple and what is tied to other factors in a complex relationship; and what can be ranked or valued in relation to something else.
It was also noted that when we describe the physical world we do so from different perspectives: we can give different accounts and explanations of the same physical state of affairs. So, for example, we can give physical, chemical, biological, psychological, sociological accounts of what is the same physical situation.
The question then arises as to whether any one particular mode of explanation and description should have priority over others and, if so, for what reason? That is the topic of this article.
The problem of reduction in science brings together a web of ideas, beliefs and assumptions about the world. To help connect some of the threads of this story I have organized the discussion into a set of principles that can be used for easy reference.
So far we have considered cognitive segregation, the way our minds divide the world into meaningful categories of understanding, our percepts and concepts, and the way that our cognition allows us to, as it were, look beyond the world of our biologically-given human perception (the manifest image) to a less anthropocentric world that allows us to not only investigate the way other sentient organisms perceive the world but to investigate the composition and operation of the external world itself.
What about the world of solid objects around us? Our curiosity about substance stretches back at least to Democritus and his world of fundamental indivisible particles called atoms. This was not an observed world but a postulated, metaphorical world. By the 1940s it was thought that we had reached the truly fundamental constituents of matter when atoms were split into protons, electrons and neutrons. The metaphor was still one of ‘particles’ – like billiard balls rotating in a solar-system-like way around a nucleus. Since the 1940s the metaphor has changed again: the world of particles has been transformed into one of forces made up of fields. Particles have been replaced by waves, so the world outside our minds is perhaps best characterized as space consisting of interacting, vibrating fields. The Higgs field explains where ‘particles’ get their masses.
Humans are, nevertheless, special. Our unique mode of representation and comprehension (our reasoning faculty and the capacity to communicate and store information using symbolic languages) allows us to look beyond the world of direct experience (the manifest image) towards the way the world actually is (the scientific image).
This is amply demonstrated by the time-honoured deference to ‘hard’ sciences like maths, physics and chemistry when compared to a ‘soft’ science like biology.
A force is due to a field, and a field is something that has energy and a value at each point in space and time – like a magnetic field, temperature, or wind speed.
With increasing complexity comes greater difficulty in predicting outcomes. As a consequence biological principles and patterns seem to lack the precision and universality that we see in the laws of physics. Biological principles are derived from highly complex organisational and causal networks and open systems with a vast number of variables in which no two organisms are structurally identical. We might think that physics in accounting for the behaviour of the planets in the solar system has achieved much but the impressive and universal predictive laws of celestial mechanics can be derived relatively simply from the positions and momenta of planetary bodies in a relatively closed system. The number of variables is few.
Because the physical world ‘contains’ living organisms as a part, it is tempting to assume that the universal laws of physics ‘contain’ those of biology – that the scope of physics is universal and other realms of knowledge are simply sub-sets of it. But biology is spatiotemporally bounded:[8] it takes the laws of physics as given; it is answering different questions in a different realm of thought. We could conceive of replacing biology with the physical sciences, thus making biology part of a system of strict universal laws, but even if that were possible ‘we would not have explained the phenomena of biology. We would have rendered them invisible’.[9] Some laws apply over the whole range of scales.
Perhaps an explanation at one level does not require an explanation at another – or, at least, not at a level that is distant from it? We might explain chemistry in terms of physics but biology is conceptually more distant. We can feel cognitive focus at work here … atomic numbers, Maxwell’s equations, or the theory of relativity are not directly relevant when we work within the biological domain, or at least they are taken for granted as background. Hence the absurdity of explaining sociological phenomena in terms of physics and chemistry.
For example, since the large is explained analytically in terms of the small, we intuitively place greater value on the small, giving it ontological precedence simply by virtue of size (but see Principle 3). Biologists no longer claim, as they once did, that living matter is quite different in kind from inanimate matter, but this is a matter of perspective (all matter is physical matter but not all matter is biological matter). Many people once believed that the mind was inhabited by a spirit or soul and that, in a similar way, bodies were also inhabited by some special kind of spirit or vital force (élan vital, entelechy). This general view, known as vitalism, is now discredited. Not only is the existence of such forces implausible but, since they cannot be detected and studied, they are of no explanatory value. They are best ignored.
Physics has its own problems with scale as it wrestles to reconcile the behaviour of matter at the small distances of quantum physics and the vast scale of cosmology, or the break-down of laws at the Big Bang and the singularities of black holes. Whether we look at the patterns in nature described by Newton, Einstein, Joule, Faraday, Maxwell or the various laws of thermodynamics, the link to biology frequently seems tenuous. Of course the physics of matter is important to know about when studying nerves and macromolecules, but much of this is incidental to many biological questions.
In its most basic form foundationalism regards matter as the only reality, but even the mechanistic philosophers of the Scientific Revolution recognised that this matter was in motion, and today we realize the significance of not just matter but its mode of organization.
A flat ontology removes the necessity for the grounding of an object in something other than itself. There is no need for the Principle of Sufficient Reason. Explanations and reasons do not provide underlying truth or get closer to reality; they simply express, or ‘reduce’, one scale or mode of existence in terms of another.
Commentary
Scientific fundamentalism & the unity of science
We can define scientific fundamentalism as the view that the smallest particles of matter and the principles and theories of physics and chemistry underpin all other science. There are at least six reasons why this view has appeal.
1. The analytic process of explaining wholes in terms of their constituent parts has explanatory weight that suggests parts are in some way more real or fundamental than wholes.
2. Analytic explanation, like the philosophical requirement for rational justification or causal origin, leads to an explanatory regress seeking ever more ‘fundamental’ solutions, suggesting that there must be a rock bottom or ultimate explanation that can only lie within physics
3. The explanation of the complex in terms of the simple reduces causal complexity
4. The scope of physics (the universe, space, time, and matter) suggests that it must incorporate or subsume all other scientific disciplines
5. Both philosophers and scientists, when explaining natural phenomena, employ the metaphorical hierarchical language and imagery of levels of organisation. Though a convenient mental device, hierarchical thinking suggests that the natural world is itself ranked from high to low (with physics as a foundation). Talk of hierarchical organisation is better replaced by the language of scale.
6. It is a consequence of the historical tradition, coming to us from antiquity, whereby the physics of astronomy and mathematics both preceded and received greater attention than biology – although subjects that today we might call political and social science were regarded as very important.
Aristotle gave science its foundation in reason through deductive logic, while scientists of the early modern period emphasized inductive logic and the importance of an emphasis on the world itself, on experiment and observation. Up to the 1960s there was a hope and belief that science could be defined and unified under a common set of principles. Today this ambition meets strong opposition because it seems that we have no conclusive criterion clearly demarcating science from non-science. That does not mean that astrology is science: robust scientific explanation entails many demanding criteria that astrology fails to meet. But the distinction between the sciences of physics, chemistry, biology, the social sciences, history, and everyday reasoning is one of family resemblance or degree, not necessary and sufficient demarcation. Foundationalism, with its insistence on science as a unique and special form of knowledge grounded in physics, has been replaced by coherentism or pragmatism – the view of science as a coherent system of justified belief, a system of shared ideas that work.
The view that physics somehow expresses ‘reality’ more effectively than other disciplines (scientific fundamentalism) comes from the general impression given by these factors. However, parts do not have some special quality (ontological privilege), nor are they more ‘real’ than wholes. Scientifically credible units of matter have no intrinsic (ontological) precedence over one another based on size or inclusiveness alone. Smaller units of matter (molecules) are no more ‘real’ than larger units of matter (dogs and cats), though they may have utility in explanation, and larger units may be more complex in terms of their causation and our conceptual understanding of them.
All explanation abstracts from a wider context and, in this sense, it is reduction. Though it is in the nature of explanation to ‘reduce’ by looking at constituent parts, the adequacy of the explanation does not depend on the size of the units under consideration but on the plausibility, effectiveness, or utility of the answer in relation to the question posed. Using parts to explain wholes gives parts explanatory value but does not make them more ‘fundamental’ in any meaningful physical sense.
Though the objects of physics are no more real or fundamental than those of biology, it is evident that adaptive complexity (life) involves intricate systems of causality that increase the difficulty of prediction from smaller scales. The greater the complexity (causal relations) of the units of a domain under consideration, the greater the difficulties in prediction, communication, and translation into other domains.
Aristotle’s Objection
Pre-Socratic natural philosophers were materialists who regarded nature as consisting only of matter (Earth, Air, Fire, and Water in some combination). Aristotle criticised this view because matter is always changing. Any functional structure such as an organism can have all or some of its matter replaced by different matter and yet retain its identity as a particular organism. That is, continuity is maintained not through the matter of an organism but through its arrangement or functional structure. Although we have a concept of ‘dog’, each individual and kind of dog consists of different matter; matter is just ‘stuff’. When an organism grows it grows in a particular structured way – it does not simply add to what is already there by getting larger.
For Aristotle, an organism’s form rather than its matter is its nature. To understand an animal or plant we need to know not only its constituent matter but the way it is structured and why it is structured in a particular way. Matter is necessary to create form but it is subordinate to it.
Biology is not just molecules, it is molecules of certain kinds integrated in ways that give rise to unique properties. A living organism (life) is very different from a rock (inanimate matter). Every physical thing is physical, but not every physical thing is biological. There is no privileged bottom level or a universe consisting of one stuff: all representations are partial.
In science ‘black box’ refers to a system whose inputs and outputs are known but not the inner workings. We really need a corresponding term ‘white box’ to indicate the explanation of the inner workings of a system that ignores or takes for granted the context outside the system which, in much of science, might be expressed as ‘the uniformity of nature’.
Laplace’s demon
Scientific explanation is steeped in the culture of causation and hence determinism. Lurking in the background there is always the figure of Laplace’s demon, the claim that someone (the demon) who knows the precise location and momentum of every atom in the universe (to infinite precision) at a given time should, in principle, be able to calculate all past and future states of the universe.
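The demon’s determinism can be sketched in a few lines of Python (a toy, invented one-particle ‘universe’, not anything from the text): knowing position and velocity exactly at one moment fixes, in principle, every past and future state, and reversing the time step recovers the initial conditions.

    # Laplace's demon in miniature: one body falling under gravity.
    g = -9.8            # acceleration (m/s^2)
    x, v = 100.0, 0.0   # exact initial height and velocity
    dt = 0.001          # time step (s)

    def step(x, v, dt):
        """Advance the state by dt (velocity Verlet; exact for constant g)."""
        v_half = v + 0.5 * g * dt
        x_new = x + v_half * dt
        return x_new, v_half + 0.5 * g * dt

    for _ in range(2000):           # predict the 'future' at t = 2 s
        x, v = step(x, v, dt)
    print(round(x, 3))              # ~80.4 m, as x = 100 - 4.9*t**2 predicts

    for _ in range(2000):           # retrodict the 'past': run time backwards
        x, v = step(x, v, -dt)
    print(round(x, 3))              # ~100.0 m: the initial state is recovered

In practice, of course, rounding error and (for most systems) sensitivity to initial conditions erode this idealized reversibility.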
Explanation by analysis & synthesis
All explanations abstract certain features from a wider circumstance and in this sense they are reductionist.
When we wish to explain the structure and/or function of a particular physical object we do not, as we have seen, explain it in terms of itself but either in terms of the structures out of which it is composed or the role that it plays within a greater whole (or both). Which option we choose (analysis or synthesis) depends to some extent on the particular object that we choose to explain and understand. If, say, the object is gold, Au, then I tend to proceed by analysis, looking for the atomic number, density, boiling point and so on. It is true that I gain a better understanding of gold if I see where it fits in the periodic table in relation to other elements, but my focus of interest is on the element itself and the method of analysis. In contrast, if I want to understand and explain the heart then, although I can describe its division into auricles, ventricles, valves and so on, it is difficult to rely on such factors alone without explaining the role that the heart plays within a body – its relation to the other organs within a greater whole. In this case we proceed by both analysis and synthesis.
This is the methodology of explanation but, also as already considered, the success of the outcome depends on the purpose for which the explanation was given.
Science has always fought over what appear to be these alternative or opposing methodologies. On the one hand knowledge and understanding is to be gained by placing an object in its full and natural context (synthesis). On the other hand we try to understand the same object by isolating it from its natural context in order to better understand its unique features (analysis).
Scientific utility
We may simply choose the explanations, terms, definitions, laws, and assumptions (categories) that provide answers to the particular questions that concern us.
Principle 14 – There is no unequivocal criterion that distinguishes science from non-science
Science proceeds by the constant critical scrutiny and refinement of our scientific categories (which include theories and generalisations, principles, names, definitions, laws, phenomena, and so on) as we map our concepts onto the natural world itself (reality). The better we can explain and understand the world, the better we can manage it. And of course science has extended our senses through technology like microscopes and telescopes, which have allowed us to experience the world that lies beyond our natural biology and sensory input.
Principle 15 – Scientific categories help us to organise the knowledge we use to understand, explain and manage the natural world.
Hierarchy operates like the ‘stacking’ of subroutines in computer programming: as nested subroutines are completed, control returns to the primary routine. It also resembles the nesting and trees that occur in generative grammar (the Chomsky hierarchy).
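A toy sketch of this stacking (Python; the level names are invented for illustration): each call pushes a deeper frame onto the call stack, and as each nested subroutine completes, control pops back to the routine that invoked it.

    # Nested subroutines: control descends the stack and returns level by level.
    def molecule():
        print("  enter molecule")
        print("  leave molecule")   # the innermost frame completes first

    def cell():
        print(" enter cell")
        molecule()                  # push a deeper frame
        print(" leave cell")        # resumes here once molecule() returns

    def organism():
        print("enter organism")
        cell()
        print("leave organism")     # the primary routine resumes last

    organism()
    # enter organism / enter cell / enter molecule
    # leave molecule / leave cell / leave organism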
As we establish new cognitive frames of reference with the macro-microscope, so causation appears to occur between the different cognitive categories. A molecule causes W, a leg causes X, a body causes Y, a colony causes Z. Molecules do not cause legs, legs do not cause colonies, colonies do not cause biomes. We have to ask whether causation can occur in this way according to each frame (do causal ‘levels’ make sense?). Is there a nested hierarchy of causation? Does any particular kind of causation take priority? And how does this relate to Aristotle’s material, formal, efficient, and final causes? Adaptive significance deals with ultimate or teleological causes while mechanistic and developmental analysis deals with proximate causes; explaining bird song is a standard example.
Principle 16 – The analytic process of explaining the large and complex in terms of the small and simple persuades us that parts have some ontological privilege (are more real) than wholes – but parts can also be explained synthetically by considering their role within a greater whole
The use of the word ‘reduction’ emphasises the size of the units under consideration rather than the actual source of the process, which lies in the abstractive process of cognitive focus on scale.
A cognitive dissonance arises when we realize that we can think of such a grouping in two ways – either as progressive division (analysis) or progressive addition (synthesis) depending on whether we begin our thinking with the most-inclusive or least-inclusive category. The dissonance seems to arise in part because we think of groups as ranks and it is then difficult to think of ranks as being of equal status, we find it very difficult to resist our impulse to create rank-value: we also find it difficult to think of a particular system in terms of analysis and synthesis at the same time, and for similar reasons.
Principle 17 – Nested hierarchies can be understood in two ways, as being either progressively inclusive or progressively divisive – to understand and describe the objects within the hierarchy we can proceed either by analysis or by synthesis (or both)
Explore top-down and bottom-up. Is the world nested?
Life is not just stuff but the dynamic constraints operational within dynamic structural relations that are inherited by the work of negentropy.
Senses in which all science is grounded in physics:
1. It aspires to the certain, necessary, timeless and universal truth of mathematics
On the nature of explanation, see [2]
We might regard metaphysics as the study of ‘what there is’ and/or the study of ‘what depends on what’. The latter refers to the way the human mind struggles to find order and the slippery relation between our mental ordering processes and the order of the world. Explanations proceed by ‘grounding’, by providing reasons. One ‘thing’ can be grounded in many ways and we can express grounding in many ways – as a means of justification, a reason, a cause, a foundational axiom, ‘because’ etc. So, for example, we explain wholes in terms of their parts.
The position argued here is that this grounding is illusory, but it cannot simply be removed (eliminativism) because it serves a valuable role (fictionalism).[3] The grounding relation may be between, say, facts and material objects.
Grounding also relates to our intuitions about the structure of reality – say, for example, that facts about biology depend on facts about chemistry, which depend on facts about physics, and so on. Dependence relations thus present us with structure which facilitates thought and further explanation. ‘Grounders’ (people who support the idea of grounding) support their views in several ways: grounding is asymmetric (higher-level scientific facts depend on lower-level scientific facts: if biology is explained by chemistry, then chemistry cannot be explained by biology); it is irreflexive (a fact cannot ground itself); it is transitive (if biological facts depend on chemical facts and chemical facts depend on physical facts then biological facts depend on physical facts); and there is a fundamental or foundational ‘level’ of explanation where the process of grounding must stop, a foundation that explains everything else. Philosophers use the idea of supervenience to try to come to grips with grounding.
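These formal properties can be modelled directly. A minimal sketch (Python; the three-domain chain is the text’s own example): grounding as a directed relation whose transitive closure is computed and then checked for asymmetry and irreflexivity.

    # Grounding modelled as a directed relation between domains.
    grounds = {("biology", "chemistry"), ("chemistry", "physics")}

    def closure(pairs):
        """Transitive closure: if a grounds b and b grounds c, then a grounds c."""
        pairs = set(pairs)
        while True:
            extra = {(a, d) for (a, b) in pairs for (c, d) in pairs if b == c}
            if extra <= pairs:
                return pairs
            pairs |= extra

    g = closure(grounds)
    print(("biology", "physics") in g)           # True  - transitivity
    print(all((b, a) not in g for (a, b) in g))  # True  - asymmetry
    print(all(a != b for (a, b) in g))           # True  - irreflexivity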
Examples of possible ‘grounds’ might be: for morality – non-moral properties like happiness, pleasure or pain; for material objects – the smallest possible particles; for logic – the way true propositions are based in the world (that the proposition ‘snow is white’ is true if in fact snow is white); the logically complex is grounded in the logically simple.
Grounding talk expresses our intuitions about dependence relations in reality – that some things are less ‘real’, or less significant than others.
Realists and eliminativists
Realists hold that grounding relations hold independently of what people may think or say; they are independent of conceptual and linguistic schemes. They are discovered, not created. Eliminativists hold that grounding talk is incoherent or unintelligible and should be abandoned: perhaps there are composition relations between a table and its parts, but that is all – there is no role for grounding talk.
Reductionism is not just a thesis about the way the world is; it is also a thesis about what the mind is like.
Ultimate reality
The question of ultimate reality is a metaphysical question. Consider the following: any understanding of the universe can only be established from a particular point of view. There must be, as it were, an independent interpreter of whatever there is – a ‘point of view of the universe’. No such point of view exists, with the unlikely exception of God who, arguably, must have a God’s-eye view. Secondly, it is reasonable to claim that the only worthwhile, relatively reliable, or non-controversial answer to such a question posed in human terms must ultimately rest on empirical evidence. With this tacitly agreed, ultimate reality then translates into the best that science has to tell us. For some reason many people then interpret the question as one about the nature of matter and its relations. Falling back on our predilection for analytic explanation and the mistaken conviction that the smallest is the most real, the discussion of ultimate reality falls into a debate about fundamental particles, waves, fields, and the like. But a boson is no more real than a bison, or a human being.
Concern with the emergent properties relating to structure gathered interest from the late nineteenth century, and in 1926 the South African statesman Jan Smuts, in his book Holism and Evolution, coined the word ‘holism’ in reference to explanations that invoke larger or wider scales. Holism (and its later variants organicism, organismic biology, emergence) placed emphasis not on the material components but on their relations, drawing attention to the interdependence of parts, homeostasis, the operation of networks and communication systems, self-regulation, and the properties of complex systems in general. For example, in accounting for the presence of thorns on plants an analytic approach might offer proximate explanations that discuss morphological developmental pathways and thorn structure. But we are also satisfied by the ultimate explanation that thorns arose through the selection pressure of browsing animals. The British emergentists postulated the hierarchical structure of matter and its aggregation into irreducible wholes with emergent properties.
If our mode of explanation can be either by analysis or synthesis then are there circumstances in which we would prefer one over the other? Physics purports to deal with matter at its largest scale (the universe) and smallest scale (fundamental particles) but emergentism denies the explanatory completeness of physics.
As already implied, opposing claims about reductionism and emergence have bogged down in poor definition and people talking at cross-purposes. The American philosopher Ernest Nagel in the 1960s tried to express the apparently contrasting views as follows,[6] defining reductionism as the claim that ‘all the events in nature are simply the spatial rearrangement of a set of “ultimate” items whose total number, properties and laws of behaviour remain unchanged regardless of any rearrangement’ (sometimes called generative atomism[9]) and that ‘We can account for novelty simply through the playing out of physical laws in time as matter combines in various ways’. In other words, if we had a complete physics then we could account for all events, both past and future. Emergentism, meanwhile, claims that new kinds of behaviour conforming to novel modes of dependence arise when hitherto non-existent combinations and integrations of matter occur. This gives rise to new qualities, structures, properties, and processes. Biology was central to this view: it is claimed, for example, that an organism is an operational whole that has qualities or properties not possessed by the molecules or elements that make it up – life has properties in addition to those of the inanimate matter of which it is comprised, and this identity has new causal powers. Since the causal powers relate to the entity itself and not to those of its components, the entity is irreducible. The emergent properties and causality cannot be predicted from the base properties while nevertheless depending on them.
It seems that reductionism proceeds on an assumption that examining objects at ever finer resolutions increases not only general knowledge about the structure of the world but also the precision and predictability of our scientific explanations.
Are emergence and reductionism incompatible? Emergence seems to assume that particular characteristics or properties are either scale-free or inappropriate for reduction: this is expressed as three objections to reductionism.
1. Hierarchically organised systems exhibit properties and/or processes at higher levels that are in some sense autonomous (and possible sources of ‘downward causation’); they cannot be predicted from those at lower levels (irreducible hierarchical organisation – this denies that there is increasing explanatory power, clarity, and predictive value with reduction). Prime examples include the emergence of life and consciousness. Emergence is also associated with increasing unpredictability.
2. Simple structures give rise to or evolve novel traits and structures that cannot be accounted for in a reductionist framework – the problem of complexification (a kind of evolutionary cosmogony)
3. Organic wholes exhibit function in a way that inanimate matter does not
In general terms these objections relate to matters concerning predictability, the origin of novelty and complexity, and functional explanation. We need clear examples. Here is a selection of examples to give you a flavour of the debate:
Consider the following: the synergistic behaviour of large flocks of birds and shoals of fish; the formation of a snowflake; the integrated activity in an ant colony; tidal ripples in sand on the beach; ants resolving the ‘travelling salesman problem’ by finding the shortest route between about eleven locations – a massive computer calculation achieved by mindless ants following simple innate rules of behaviour and pheromone trails; the ‘wisdom of the crowd’, whereby guessing the number of jelly beans in a jar is difficult for an individual but averaging the estimates of numerous individuals can closely approximate the correct answer; the way Alzheimer’s disease does not target information held in individual neurons but weakens the capacity of a neural network as a whole; how the ‘swarm intelligence’ of insect colonies gives rise to communally-directed activity; the slime mould that grows by oozing along the ground and, without conscious deliberation but by using chemical feedback, traces out the shortest route through a maze connecting two sources of nutrient; music coming from an orchestra of 50 musicians; the economic or social patterns resulting from individual choices.
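The jelly-bean case is easy to simulate. A minimal sketch (Python; the jar size and the spread of guesses are invented for illustration): individual estimates scatter widely, yet their average homes in on the true count.

    import random

    random.seed(1)
    true_count = 850          # jelly beans actually in the jar (invented figure)

    # Each individual guesses badly: unbiased, but with a wide spread.
    guesses = [random.gauss(true_count, 300) for _ in range(1000)]

    crowd_estimate = sum(guesses) / len(guesses)
    print(round(crowd_estimate))   # close to 850, though most single
                                   # guesses miss by hundreds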
Attention is drawn to the way that the characters of one part might seem random and undirected (disorganised) but, when the part is seen as a member of a collective, there is pattern (organised) – like individuals in human society. In this sense we can see reducibility as a matter of degree depending on the system. Even so, emergent properties need not be mysterious and unexplainable in terms of system components. There is a gradation of wholes where the interdependence of parts varies from negligible (as in the sugar crystals of a sugar lump) to critical (as with the organs of a human body, where a change in any single part can cause a change in all the others). Biologically, living systems are regarded as integrated wholes whose properties cannot be reduced to those of smaller parts. Their essential, or ‘systemic’, properties are properties of the whole, which none of the parts share. They arise from the organizing relations of the parts, i.e. from the configuration of ordered relationships that is characteristic of that particular class of organisms or systems. Systemic properties are destroyed when a system is dissected into isolated elements.
Philosophers distinguish between epistemic and ontological emergence.
Epistemic & ontological emergence
Epistemic emergence concerns matters of knowledge: for example, that we cannot predict the future because of the complexity of a particular system. Ontological emergence concerns the origin of novel and autonomous structures, functions, and properties: the probabilities of particular outcomes based on the units and those of the composite do not correlate, and states of the system are not determined by the states of the basic units (as in quantum entanglement). Such examples of ontological emergence show that generative atomism cannot be a universal method for representing the world. Fusion emergence is a form of ontological emergence in which entities combine to change their identity, as occurs when uniting a flame with gunpowder; or, if a $2 note is exchanged for two $1 notes, the original physical units or components no longer exist in any meaningful way.
One form of epistemic emergence is inferential emergence: when something cannot be predicted as an outcome within a system then it is emergent – there is a correlation between emergence and unpredictability. An example of inferential emergence is weak emergence, where a future state can only be predicted by knowing the intermediary states. This is compatible with generative atomism, but computers are essential: an astronomer can predict a solar eclipse hundreds of years ahead with precision, whereas a weakly emergent system is computationally incompressible. The existence of weakly emergent systems shows that some aspects of the world cannot be known by the unaided human intellect – computers are absolutely necessary for us to know some aspects of the world. Conceptual emergence occurs when we need a new conceptual framework (language and ideas) that does not belong to the base entities from which the object emerged: a state or property is conceptually emergent in a particular theoretical framework (domain) when another framework must be developed as an explanation. An economic recession cannot be explained in the terms or vocabulary of fundamental physics, even if everything is grounded in physics. Ontological emergence occurs when, rather than existing in models and conceptual frameworks, emergent things are actually in the world. Each kind occurs as either synchronic, where the emergent features and the thing from which they emerge exist at the same time, or diachronic, where the features emerge over time.
Capturing a whole system in terms of a few simple principles is not possible in such cases because the vocabulary is inadequate (affects axiomatics).
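The weak-emergence claim above can be made concrete with a one-dimensional cellular automaton (a sketch in Python using Wolfram’s Rule 30 – a standard example of computational incompressibility, not one drawn from the text): the row at step 100 follows deterministically from a trivial local rule, but, so far as is known, there is no shortcut – every intermediate row must be computed.

    # Rule 30: each cell's next state depends only on itself and its two
    # neighbours. Simple and deterministic, yet the pattern at step n can
    # (as far as is known) only be reached by computing steps 1..n-1.
    RULE, WIDTH, STEPS = 30, 64, 100

    row = [0] * WIDTH
    row[WIDTH // 2] = 1                      # a single 'on' cell in the middle

    for _ in range(STEPS):                   # no shortcut: every row is needed
        row = [(RULE >> (row[(i - 1) % WIDTH] * 4 +
                         row[i] * 2 +
                         row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]

    print("".join("#" if c else "." for c in row))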
Metaphysically, generative atomism is not universally true. We need a new metaphysics.
Methodologically, the human intellect is too weak to predict many states of the world. We must recognise that we are not at the centre of the knowledge universe.
Epistemologically, the axiomatic method has limitations. Human conceptual frameworks do not set the limits to knowledge – we need ways of understanding how machines represent the world.
Emergence – Paul Humphreys – the book
Principle of emergence – emergence occurs when there is a change in scale, inclusiveness, or complexity where explanations in one domain do not transpose to those of another: the change may be a consequence of aspect (viewing the situation from a different perspective or using different criteria), or a change in the object under investigation (like an increase in the complexity or relations of its parts)
Compositional hierarchy. Strong forces act first followed by weaker and slower forces. Levels are areas of order with greater stability and resilience.
Emergence (notes after Paul Humphreys)
The study of emergence has become part of systems and complexity theory which is, to all intents and purposes, the study of order in the universe. If we regard order as ‘the aggregation of correlated phenomena into composite patterns that allow us to make sense of the world’, then it is immediately apparent that the composite patterns of everyday life look well beyond the concepts and vocabulary of physics. The study includes pattern and pattern formation and transformation, open and closed systems, synergies, symmetries, and complexity.
Definition
Emergence can be defined generally as ‘the coming into existence of a novelty that could not have been predicted’ or, more specifically, ‘Non-linear pattern formation where synergies between parts give rise to new patterns of organization’.
A major question is whether wholes (living organisms made of cells, societies of individual people) exist in any sense independently of the elements out of which they are made, since new properties, functions and patterns form as more parts are added in various arrangements.
Strong & weak emergence
The novelties produced in this way exhibit degrees of strength.
Weak emergence – though constituted of simpler parts, the unexpected novelty could, at least in principle and given sufficient computational power, be explained in terms of those parts. Though the features of the whole are ontologically and causally derived from its constituent parts, irreducibility results from the complexity of the attempt. For example, the structure of a flower could, in theory, be described in terms of its molecular composition, but this would be impractically complex. The parts are not constrained by the nature of the novelty (whole) – macro does not influence micro.
Strong emergence – though constituted of simpler parts, the unexpected novelty is not derivable, even in theory, from the features of its constituent parts and their interactions. For example, the liquid solvent properties of water arise from the synergies of hydrogen and oxygen and cannot be explained in terms of the properties of the elements alone. An account of water requires new descriptive categories, concepts, and terms; this new set of categories can be referred to as a new (higher or lower) ‘integrative level’ (aspect). The novelty follows irreducible regularities (rules) that are not evident in the parts. The characteristics of the parts are constrained by the nature of the novelty (whole): macro influences micro. Examples include quantum entanglement, water, life, and human consciousness.
Reductionism & Holism
Reductionism (analysis)
The reductionist approach has the general character of deduction and the axiomatic systems of mathematics with physics as its best exemplar. The world is reduced to basic building blocks and universal rules from which the material world is then constructed. The more complex, inclusive, and larger-scale objects of the world are aggregates of these basic building blocks. Ultimate scientific knowledge is thus obtained by analysis to elucidate the nature of the world’s elementary constituents from which the material world is causally derived. Causality is linear insofar as the relations between the parts of a system cannot add or subtract anything to or from the system. The world is often treated as existing independently of consciousness and transparent to scientific investigation – such that science is attempting to find and create an explanatory map that corresponds to the actual or real world. The reductionist program tends towards a unitary science in which one theory can explain everything.
Holism (synthesis)
The holistic approach investigates objects, functions, and relations from the perspective of the wider system (the environment or context). The whole and its own properties is given priority since they cannot be adequately explained or derived from the properties of the parts. The whole has causal efficacy over the parts and concerns often focus on dynamic process. Causality is non-linear (there is interdependence, networks, relations, integration, context, connectivity). The world is treated more as relations in process than a collection of objects. There is sympathy to a degree of subjectivity with no objective truth but individual and scientist in a reciprocal participatory relationship with the world. This allows for multiple interpretations of the world.
A growing number of scientists are becoming dissatisfied with the reductionist paradigm. Reductionist scientists and philosophers doubt the possibility of the whole influencing the parts: ‘How is it possible for the whole to causally affect the constituent parts on which its very existence and nature depend?’ (Jaegwon Kim). On this view, explanations that incorporate complex objects are shorthand conveniences for explanations that, given sufficient computing power, could be given in terms of fundamental constituents.
There are various objections:
1. Ontology. A molecule is no more real than an elephant. Why should size or simplicity be a reason for ontological precedence? There is only value in its explanatory role.
2. Representation. Reductionism assumes that there is always a foundational description of any property that is complete and unique – only one kind of stuff from which all else is built. But all representations are only partial. The biologist finds unified field theory, M-theory, or string theory totally devoid of information concerning biological objects. Reductionism ignores the reality of difference. ‘Every thing is a physical thing, but not every thing is a living thing’: being a living thing is part of the reality of the world, not an adjunct to something more fundamental. And if ontological emergence (strong emergence) is really unpredictable then there can be no universal way of representing the world
3. Structure. Reductionism ignores, or makes invisible, the reality of structure. A Mexican Wave is more than just many individuals waving.
Since weak emergence is concerned with the properties, processes, and functions of parts, where the ‘whole’ has no influence on the parts, it is best studied using the methods of analysis, which deal with micro-rules, structures, processes, and objects. Such an approach sees the world as consisting of simple basic building blocks and laws. It is therefore possible to have a theory of everything that captures all ‘levels’ or ‘scales’ of existence under a single explanatory system – the ‘unity of science’.
But if ‘aspects’ exist independently with their own rules and regularities, processes, and objects then our best account of ‘everything’ will identify not building blocks but patterns and processes of organization. It will provide abstract generic models applied at all scales without reduction that capture the features of all.
Epistemological emergence relates to our inability to account for some phenomena due to our computational limitations, e.g. explaining an ant colony in terms of the interactions of its component molecules. Ontological emergence concerns not just statements about our knowledge of the world but how the world actually is, irrespective of our understanding of it. Scientifically, emergence is often related to non-linearity, self-organization, pattern-formation, and synergies. The novelty is not ‘additive’ but a consequence of the integrated interactions of the parts, the novelty that arises being referred to as a synergy.
Synergies are non-linear and context-dependent, while linear relations are context independent. In a synergy the parts have specific roles in relation to one-another thus providing a context (e.g. a sports team). There is thus both differentiation and integration. The human body has highly specialized differentiated parts that are miraculously integrated towards achieving many functions. This is not the case with linear systems.
Separation of properties is differentiation while coordination is integration. Emergence is the integration of parts to produce new properties (synergies) over and above those of the parts, and it often results in a combined functionality. Each ‘level’ has its own internal properties, features, and dynamics that depend on the integrity of the synergies between the constituent parts. The ‘higher’ level depends on the lower for its existence but it creates its own conditions that feed back to influence the ‘lower’ levels and their context of operation as ‘downward causation’ – like the relationship between individual humans and their social institutions. The scale (level) is important because of the need for new terminologies, as with micro- and macro-economics, or quantum mechanics and general relativity, or psychology for micro- (individual) explanation and sociology for the macro-patterns that arise out of the synchronized activity of many individuals in social systems.
Do we use emergent levels or scales out of expediency and lack of knowledge (we cannot predict outcomes because of the number and complexity of the objects involved) or because it is just not possible to give an account of the macro from the micro? This depends on whether the emergence is weak (derivable, at least in theory or principle, from the component parts) or strong (where there is extremely limited reference to the micro-level). In strong emergence something novel and autonomous arises through the particular combination of constituents (the covalent bonding of two hydrogen atoms into H2).
Analytical reductionism
Emergence emphasizes the possibility of two different outlooks on the nature of science. The analytical reductionist sees the world as comprising a few basic laws governing the fundamental building blocks of nature, which are arranged in different ways: an outlook that leads to the study of ever simpler and more ‘basic’ constituents. The structure is that of a mathematical axiomatic system of explanation, a development of the thesis of the universe as matter in motion. It also places human experience and social systems as peripheral to the main business of science.
Emergence challenges the possibility of such an enterprise because its novelties are irreducible. We would do better to understand the characteristics and properties of the many ‘levels’ of existence, looking for patterns that differentiate and integrate. It seeks the integration of synthesis not the fragmentation of analysis.
Pattern
Pattern is any set of correlations between states of elements within a system (a structured relation or correlation between variables) in space and/or time. If there is no correlation then there is no organization, pattern, or order, and the system is said to be random. Pattern may be generated either externally (a house constructed by its builder) or internally (a living organism – why it is, say, a fish and not a dog – by its genetic code). Characteristically the novelty (new whole) of internally generated pattern is a consequence not of central control but of local dynamics. Ant colonies achieve coherence (self-organization) not through a hierarchical chain of command but by the integration of patterns of local interaction. Sometimes random events can amplify certain features, as a form of positive feedback, and this can result in some things persisting rather than others.
In this sense pattern formation is a form of adaptation. In human terms social organization achieves more than individuals can achieve independently.
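The idea of order arising from local interaction and positive feedback, with no central control, can be made concrete in a toy simulation. The sketch below is illustrative only – the ant count and reinforcement value are invented, and it is a drastic simplification of classic pheromone-trail models rather than any particular published one:

```python
import random

# Two paths start with equal 'pheromone'. Each simulated ant chooses a path
# with probability proportional to its pheromone, then reinforces its choice.
pheromone = [1.0, 1.0]
for _ in range(1000):
    p_first = pheromone[0] / (pheromone[0] + pheromone[1])
    choice = 0 if random.random() < p_first else 1
    pheromone[choice] += 0.1  # positive feedback: the chosen path becomes more attractive

# Typically one path ends up strongly dominating, though no ant 'decided' this.
print(pheromone)
```

Early random fluctuations are amplified until one path dominates: pattern persists not because it was commanded but because feedback selected it.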
Patterns have an important connection to energy and mathematics.
All patterns have an underlying mathematical structure – indeed mathematics may be understood as the science of pattern. When variables change in relation to one another the world becomes more intelligible. The relation may be positive or negative (the variables move either together – money in the bank vs interest earned – or apart – distance travelled vs fuel remaining), of varying strength (age and health are only weakly correlated, so of limited predictive use), and linear (giving a straight-line graph, as when plotting distance travelled against petrol used) or non-linear (where the proportionality of the change may itself change, so the graph is not a straight line, as with the cost of a new house vs its size). The expression ‘correlation does not imply causation’ indicates that connections between variables may not be direct but mediated by other variables.
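As a concrete illustration (all numbers are invented), the sketch below computes the Pearson coefficient, the standard measure of the strength of a linear correlation. Note that a perfect but non-linear relation such as y = x² scores zero, which is one reason the linear/non-linear distinction matters:

```python
import math

def pearson(xs, ys):
    # Pearson correlation: covariance divided by the product of standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

distance = [10, 20, 30, 40, 50]    # km travelled
fuel = [1.1, 2.0, 3.1, 3.9, 5.0]   # litres used: a roughly linear relation
print(pearson(distance, fuel))     # close to +1: strong positive correlation

x = [-2, -1, 0, 1, 2]
y = [4, 1, 0, 1, 4]                # y = x**2: a perfect non-linear relation
print(pearson(x, y))               # 0.0: the linear measure misses the pattern
```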
The robustness of a pattern will be a function of the number of relations and the strength of the correlation between them (the correlations within a marching army are strong; those between the price of fruit and the health of the community it serves are weak).
Symmetry
Symmetry is an organizing principle of patterns. It captures the idea of permanence in change, of sameness and difference – how things can remain the same under transformation (a characteristic of music and architecture). In mathematics it is part of group theory. In physics it is now recognised that almost all ‘laws of nature’ arise from symmetries. A symmetry can be abstractly defined as ‘a rule that will map or transform one element in relation to another’, e.g. a snowflake exhibits a reflection transformation. Asymmetry describes how things are different within some frame of reference, e.g. the canopy of a tree. The frame of reference is important because a lack of symmetry can become symmetrical when viewed from another aspect (at a ‘higher level of abstraction’). Symmetry can also be used to define the degree of order within a system, with order ‘the arrangement or disposition of objects in relation to each other in accordance with a particular sequence, pattern, or method’ – a transformation or symmetry between states over time. Symmetries compress information, which can be expressed symbolically: the series {2, 4, 8, 16, 32, …} can be expressed as the simple mathematical rule f(n) = 2^n.
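A minimal sketch of this compression in code: the generating rule stands in for the whole (unbounded) sequence, so we store the rule rather than the terms.

```python
# The rule replaces the list: any term of {2, 4, 8, 16, 32, ...} can be
# regenerated on demand instead of being stored explicitly.
def f(n):
    return 2 ** n

print([f(n) for n in range(1, 6)])  # [2, 4, 8, 16, 32]
```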
Broken symmetries require extra rules. Symmetries describe simple systems, where a small set of rules governs the relations between a small number of parts.
Complexity
Complexity is an interaction between symmetry and asymmetry, creating pattern that has order but also elements of randomness and chaos; this interplay is a defining feature of many complex patterns.
Open & closed systems
A closed system can be closely defined and explained in terms of the operation of its parts using the methods of analysis. However, every real whole exists within a context or environment, and an open system must therefore also be defined and explained in terms of synthesis – its inclusion within a greater whole.
Aspect (integrative levels)
The metaphorical language of ‘hierarchy’, ‘levels’, ‘up’ and ‘down’, ‘high’ and ‘low’ is treated here as both unnecessary and confusing. Hierarchy implies rank-value, and since the objects under consideration here are ontologically equal (something exists or it does not; existence does not imply value; a molecule exists just as fully as a human being), there is no ‘higher’ or ‘lower’, nothing that is ‘more’ or ‘less’ fundamental in this sense. Altitudinal metaphorical reference to ‘levels’, ‘higher’, ‘lower’ etc. then becomes redundant. Explaining the world in terms of ‘levels’ – say, elementary particles, atoms, molecules, macromolecules, cells, tissues, organs, bodies, communities, societies, ecosystems, or different academic disciplines like physics, biology, and social science – becomes unnecessary. These are simply different ‘aspects’ of the world: the same world explained in different ways. There is no ontological hierarchy, only different ‘aspects’; similarly there is no ‘up’ and ‘down’, simply more or less complexity, inclusiveness, or scale (singly or in combination). Since existence itself is not privileged or ranked in any way, neither are these ‘aspects’.
Translating one aspect into another
How do different aspects relate to one another? Aspects are not ‘reduced’ into other aspects but expressed in a different way, or translated. Translating one aspect into another will depend in part on whether the criteria of distinction between aspects are based primarily on inclusiveness, scale, or complexity.
Each aspect may have its own unique properties and rules of interaction and connection that are difficult to predict or translate from those of another aspect. Since there are degrees of complexity, scale, and inclusiveness, those aspects with the least difference in these factors will be easiest to translate: physics has more in common with chemistry than with social science.
Glossary
As a new subject finds its feet it is important to be clear about the use of potentially confusing terms. The following is an aide-mémoire for the terms used in this article.
Note, however, that aspects are established using different criteria (mostly complexity, inclusiveness, and scale), and this should be made explicit where possible.
Emergence – in general, ‘the coming into existence of a novelty that could not have been predicted’; more specifically, ‘non-linear pattern formation where synergies between parts give rise to new patterns of organization’.
Novelty – the features arising by emergence – including structures, properties, functions, and patterns
Synergy – a special relationship between the parts of a whole giving rise to a novelty; the effectiveness of subsystems acting in coordination; an interaction or coordination between two or more elements or organizations that produces a combined effect greater or less than the sum of their separate effects (adding or subtracting value); or a non-linear interaction – e.g. an ant colony, two merged companies, or elements combined to form a human body.
Wholes – not derived from their parts but constitutive of them.
System = whole = object.
Predictability
Emergentists point out that in purely mechanical (reductionist) systems, once we understand how the parts are integrated to form a whole, the future becomes predictable. Given state X, state Y will follow in an orderly and predictable way.
Type-identity
In the 1950s and 1960s reductionism was often associated with semantic reduction (translating one theory into another), that is, the reduction of properties of one knowledge domain into those of another. This was most evident in the philosophy of mind, where mental states, like pain, were frequently equated with particular physical or neural states. Mental properties just are properties of the brain: there is a type-identity. This eliminated the mysterious causal connection between the mental and the physical, revealing physical reality and its uncontroversial causation. For every mental property there is a physical property that realizes it; for every physical effect there are sufficient physical causes; and mostly particular effects are brought about by single causes (they are not overdetermined). The mental then becomes causally inert, an epiphenomenon, and an eliminativist position can be taken – the view that talk of mental states is devoid of scientific content. By the same token life itself was simply complex biochemical processes.
But how can we be responsible for our actions if all causation is microdetermined? Philosophers subsequently pointed out that ‘pain’, as understood in physical terms, would be very different between different animals and even between different humans – a thesis that became known as multiple realization. In other words, in practice there is no one-to-one correspondence between mental states and physical states; it is just not possible to define mental states in physical terms. To overcome this difficulty mental states were then equated with functional states, which allowed for their multiple realization. Functional properties are understood in terms of their causal role, so there is no problem in equating the pain of a tadpole, a dog, and a human. Of course these functional properties are realized by the material constituents of the organisms, but they are manifest at a different scale, the scale of the organism. On this understanding the mental and its causation are real and entirely dependent on, but irreducible to, the physical.
(The causal exclusion argument presses this point: if every physical effect already has a sufficient physical cause, and effects are not systematically overdetermined, then a distinct mental cause appears to have no causal work left to do – it is ‘excluded’ by the physical cause.)
Today physicalists (most scientists) are either reductive physicalists who still maintain that mental properties are reducible in this way, or non-reductive physicalists who think that this kind of reduction is incomplete.
There are scientific kinds (like hearts, legs, and eyes) that are functionally defined and multiply realizable. That is, there are many different kinds of hearts, even among humans, so they are not all physically equivalent even though they all have the function of pumping blood: each is physically different. Functional vocabulary is not directly structural or physical. A type-identity reductionist holds that the relation between terms in two domains – the ‘heart’ of biology and the ‘heart’ of physics – is one-to-one. The functionalist holds that it is one-to-many: functional objects are multiply realizable.
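The one-to-many relation has a familiar analogue in programming, sketched below purely as an illustration (the class names are invented): a functional type is like an interface specified by its causal role, which physically different implementations can all satisfy.

```python
from abc import ABC, abstractmethod

# A functional kind is defined by its causal role (what it does),
# not by its physical make-up.
class Heart(ABC):
    @abstractmethod
    def pump(self) -> str:
        ...

class MuscularHeart(Heart):
    def pump(self) -> str:
        return "cardiac muscle contracts"

class ArtificialHeart(Heart):
    def pump(self) -> str:
        return "a mechanical rotor spins"

def circulate(heart: Heart) -> None:
    # The functional generalisation cares only about the role being filled.
    print("blood moved:", heart.pump())

for realizer in (MuscularHeart(), ArtificialHeart()):
    circulate(realizer)  # one functional type, many physical realizations
```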
If kinds are multiply realizable, then their causal pathways and laws are also in some way irreducible. A special science cannot be reconstructed from the vocabulary used to describe its realizing systems (the same system described at a more micro or reduced scale). But do the small-scale generalities of the special sciences warrant their being called ‘science’? Are they autonomous, or do they require further grounding in physics? Universal laws of physics circumscribe general order, while the regularities and patterns of the special sciences capture the order of particular instances. Both have their place.
Function
Biologists look to function and the future in a way that does not occur in physics and chemistry. Certainly biological systems can be explained in non-functional terms, but in so doing they appear to lose explanatory value. Hearts serve the function or purpose of pumping blood around the body. This is clearly not a conscious purpose or causation directed at the future (teleology); it is a consequence of events that occurred in the past and can be explained without reference to ‘ends’. The way that organic systems seem to be directed towards ends simply reflects the way our minds perceive the world, not the way the world necessarily is.
Nevertheless, apparently directed activity does occur in objects that have been subjected to the moulding influence of natural selection (teleonomy). The sentence ‘The function of an eye is to see’ has meaning in a way that the sentence ‘The function of a falling stone is to hit the ground’ does not.
Matter does not behave in a random way. We can see how the laws of physics result in ‘directed’ change, guiding, as it were, the matter of the universe along a certain path. We can then see how this directed change grades into the semi-purposive (but still deterministic) teleonomic change that we see in organisms and their parts, and then into the purposive (but still deterministic?) character of human conscious purpose.
Most scientific explanations fall into two categories: they question either structure or function. Structure has a timeless quality – it is ‘just the way things are’ – but function looks to the future. Biologists explore what particular structures do in relation to ends: how the genetic code regulates development, how the heart circulates blood, and so on. Many scientists would claim that the teleonomic, end-directed, or functional character of biological systems is illusory. Regardless, it is a mode of thinking that seems indispensable to biological research and it is unlikely to pass away. The foundation of all biology is natural selection, which poses the simple question ‘What did evolution select it for?’
Properties & relations
It is not matter itself that is at issue here but the properties that arise as a consequence of the relations that exist within certain structures. As smaller units combine into larger ones, in some systems new properties (substances, processes, patterns etc.) arise that could not be predicted from the units themselves. As in the sociological example, it is the ties, connections, or relations between the elements that create these new properties, not the parts themselves.
New or emergent properties are said to arise out of less complex fundamental entities and are novel or irreducible with respect to them. Life and consciousness are the two most obvious instances of emergence, with self-consciousness an emergent property of the complex organisation of neurons in our brains. But life itself as ‘adaptive complexity’ (Richard Dawkins) is closer to our concern.
Ontology of properties and relations: properties or relations between parts can seem a rather suspicious and abstract idea, quite unlike the brute reality of matter itself. But it seems fair to say that science has, over time, become more concerned with properties, relations, and organisation and less with objects or matter itself. One good example is the inextricable relation between space and time.
Properties & supervenience
Claims that one sort of thing is reducible to another (or that one supervenes on another) make most sense if we take them to involve properties. For example the claim that the psychological realm supervenes on the physical realm involves mental and physical properties.
It may also be claimed that such reduction is simply not possible. In living systems especially, new properties are said to emerge as biological systems become more complex – irreducible properties that are not shared by component parts or prior states. Moreover, the new properties are frequently unpredictable from prior states. For example, individual neurons do not have the properties of mental states.
Supervenience
The word ‘supervenience’ (given currency in ethics by R.M. Hare and made popular in the philosophy of mind by Donald Davidson) is used for situations where less inclusive or smaller-scale properties determine more inclusive or larger-scale properties. That is, there is a dependency relationship such that any change in one state (e.g. a mental state) implies a change in another (physico-chemical) state. The more inclusive is said to supervene on the less inclusive. Thus the social supervenes on the psychological, and the psychological on the biological. In our special-interest case, biological properties supervene on physical properties. It is the nature of this connection that is critical. Do organisms have a causal influence on their physical constituents – does their form, pattern, or configuration feed back onto the properties of the constituents?
A characteristic of supervenience is that there can be no change at the large scale without a change at the small scale. Whether the two are identical (the biological process identical to the physico-chemical process) is a matter for philosophical debate.
Though scaling is often a factor in supervenience and reduction, it is not absolutely necessary: area can supervene on length and breadth, and spatial properties on temporal ones, without any difference of scale. Note too that both scale reduction and supervenience are asymmetric relations.
Our task is to explain how apparently emergent properties arise rather than deny their existence. If the matter is the same, how can its rearrangement be so important? And does the same apply to phenomena, explanations, theories, and meanings as well as to objects?
Perhaps emergence arises as part of describing or analysing systems but is not part of the systems themselves (philosophically the problem is then epistemological, a view compatible with ontological reductionism). When all is said and done, all we have is the matter itself. This kind of reduction acknowledges emergent properties but regards them as completely explicable in terms of the processes from which they are composed.
In token reduction (token ontological reductionism) every instantiated object is the sum of objects at a smaller scale: on this view biology is simply physics and chemistry. In type reduction (type ontological reductionism) every descriptive concept or type is a sum of types at a smaller scale; type or concept reduction of the biological to the physicochemical is often rejected. Epistemological reduction holds that all phenomena can be explained in terms of component parts. If all three forms of reduction hold then we have strong reduction (Stanford Encyclopedia of Philosophy).
Explanatory reduction
Explanatory reduction allows the relation between objects other than theories to be examined (such as mechanisms, fragments of theories, even facts) and assumes a causal relation between large and small scales.
Principle 2 – in considering the relationship between wholes and parts each particular case must be examined on its own merit.
Causation
Scientific knowledge of the world can be viewed as the attempt to understand why one thing happens and not another – it is the study of patterns of causation.
One way of explaining what a molecule is likely to do (its behaviour) is to recognise two major sources of influence (causation). Firstly, there is the interaction of the component atoms because this determines many of the properties of the molecule. Secondly, since molecules do not exist in isolation they can be influenced by surrounding factors like temperature, pressure and the presence of other atoms and molecules. There are various ways of representing these two kinds of causal influence – as object and context, internal and external, top-down and bottom-up, intrinsic and extrinsic. For our purposes we will note that effectively nothing in the universe exists in isolation, so everything is subject to both kinds of causation. The language of hierarchy intuitively places value on its ranks, so we shall use the terms extrinsic and intrinsic, noting that existing circumstances are the consequence of the interaction of these two modes of causal influence. This pairing of causal influence comes to us in many forms: mind and body, organism and environment, and so on.
A remarkable change took place when variable self-replicating molecules were subject to the ‘selective’ or constraining influence of extrinsic factors (the environment). This marked the beginnings of life and evolution by natural selection.
Mechanistic analysis
Much of biological explanation might be viewed as mechanical analysis – the observation and explanation of organic matter through its component parts in much the same way that we study the parts of a watch to reveal the way that it works. An organism has features not possessed by its individual parts (emergent properties) and mechanistic analysis can demonstrate how these features arise.
Sometimes the parts in the part-whole relationship are critical (change a lung for a kidney!) and sometimes they are not (change one molecule in a kidney for another similar molecule). In biology there seem to be wholes that are ‘more’ or ‘less’ organised on a continuum – from highly organised (the parts strongly integrated and interdependent as in an organism) to more aggregative or sum-like (like populations of organisms), also depending on scale.
In this way emergent properties can depend to a greater or lesser extent on the degree of organisation of the parts – their arrangement, not just their individual properties – consider the music made by a band, the sequence of bases in DNA, or a sugar lump and its individual sugar crystals. In most cases emergent properties can be adequately explained in terms of component interaction – consciousness seems to be an exception.
Top-down or bottom-up
The Nobel prize-winning brain scientist Roger Sperry (cf. Sperry, 1983) introduced the concept of ‘top-down causality’ as an explanation of causal interaction between mind (consciousness, qualia) and brain (physicochemical processes). The idea of top-down causation has subsequently been taken up by a number of other writers. Can the whole shape the behaviour of the parts in ways the pieces alone could never find by themselves?
Bottom-up causation says that, given an inventory of the smallest stuff and the rules for their interactions, you can explain everything from crystals to cells to your own sweet sense of consciousness. Bottom-up causation is, at this moment in the history of physics, the dominant view.
Computers exemplify the emergence of new kinds of causation out of the underlying physics, implied not by the physics but by the logic of higher-level possibilities. … A combination of bottom-up causation and contextual effects (top-down influences) enables their complex functioning. Hardware is only causally effective because of the software which animates it: by itself hardware can do nothing. Both hardware and software are hierarchically structured, with the higher-level logic driving the lower-level events.
Ellis: ‘Bottom-up emergence by itself is strictly limited in terms of the complexity it can give rise to. Emergence of genuine complexity is characterized by a reversal of information flow from bottom-up to top-down’.
Culture – as a set of dynamic, contingent, and unpredictable social interactions, including not only our interactions with each other but also with the material world and our environment – affects the lower levels: “the top is always exerting an influence on the bottom.” This way of thinking does not deny the influence of genes, but it does challenge a genes-in-primary-control sort of formulation.
Selection acting on genes seems exactly a top-down approach.
I agree it’s artificial, since all scales are acting all at once in reality. But in order to focus on a given process one needs to compartmentalize, and to get it right one needs to understand the boundaries of the compartment: its top boundary for top-down influences as well as its bottom boundary for bottom-up influences. Both influences can be important.
Paul Davies: what makes life different from non-life is that in life “information manipulates the matter it is instantiated in.”
Reduction in biology itself – units of selection (gene fundamentalism)
Adaptations serve the interests of genes, not organisms. Does selection operate primarily or exclusively at the scale of the gene? Can developmental biology be reduced to developmental genetics and molecular biology?
We tend to think of evolutionary change as occurring in the organism. But on an evolutionary scale our individual lives are brief; the sole factor that persists through time, which is ‘immortal’ (Dawkins’s word), is the gene. This is the central point of Richard Dawkins’s book The Selfish Gene. Genes literally embody a program that produces development.
The central point of critics of the gene concept is that functional decomposition identifies multiple overlapping and crosscutting parts of genomes. The “open reading frames” to which biologists refer when they count the genes in the human genome not only can overlap but are sometimes read in both directions. Subsequent to transcription they are broken into different lengths, edited, recombined and so on, so that one “gene” may be the ancestor of hundreds or even thousands of final protein products.
We might call this gene reductionism – but we have noted that reduction can proceed further. Why don’t gene reductionists use smaller units still?
To which it is responded that social interaction involves much more than just genes.
In isolation, DNA is no more than a very complex chemical.
Life is spatiotemporally bounded to Earth, and each species is itself spatiotemporally bounded. The mid twentieth century saw disciplines claiming to be sciences (sociology, psychoanalysis, Marxism, political and economic science, astrology) when arguably they were not.
To this list is sometimes added temporal reduction: where scales are embedded in explanations, the smaller-scale explanation must be prior to the larger-scale one (e.g. gene expression before tissues in ontogeny).
COMMENTARY
Consider what would be needed to unify three disparate subjects: physics, biology and economics. This would appear theoretically possible since all of these subjects ultimately relate to physical objects and their interaction. Now consider how you would unify the following scientific explanations into a common physical language: in physics the conduct of electricity through copper wire, in biology the hibernation of Brown bears in winter, and in economics the relationship between interest rates and inflation.
We must answer this question in the light of Principles 1 and 2: firstly, that all matter is of equal status regardless of its size (although we find certain units of special explanatory value); secondly, that there are no factors that clearly distinguish science from non-science or pseudo-science. In other words we cannot claim that molecules are more real or fundamental than animals, or that economics is not a science.
Intuitively we might attribute the difficulty of translating these three disciplines into a unified physical theory to one or all of the following: scale, complexity, language.
Is there a declining gradient of predictive power with increasing complexity – physics, biology, economics?
If the unit scales are generally more complex, then do causal pathways and patterns also become more complex? Perhaps scale cannot be extrapolated across domains because the results are non-linear – as we pass between domains, quantitative change becomes qualitative change?
Our emphasis until recent times has been mostly on the analytic breaking up of things into components to see how they work. Part of this history has been the creation of literally hundreds of disciplines out of what was once the single study of biology. We are now passing through a phase of re-synthesis as biology merges at one extreme with the physical sciences and at the other with the social sciences. The former is represented by the new insights of molecular biology, genomics, and biotechnology, while in the latter we see the integration of ideas from sociology, anthropology, linguistics, and especially new developments in psychology and moral philosophy.
If in causal terms, the whole can be completely explained in terms of its component interactions then the whole, having no causal agency, is referred to as an epiphenomenon.
The epiphenomena are then said to be “nothing but” the outcome of the workings of the fundamental phenomena. In this way, for example, religion can be deemed to be “nothing but” an evolutionary adaptation, and beliefs can be considered “nothing but” the outcome of neurobiological processes. There is then a tendency to avoid taking the epiphenomena as important in their own right.
Social and behavioural systems, the subject matter of political science and sociology, can be explained in terms of neurochemistry, genes, and brain structure. At the highest sociocultural level, explanations focus on the influence on behaviour of where and how we live. Between these extremes there are behavioural, cognitive, and social explanations.
Reality is a multi-layered unity. Another person is at once an aggregation of atoms, an open biochemical system in interaction with the environment, a specimen of Homo sapiens, an object of beauty, and someone whose needs deserve my respect and compassion.
But it is hard to avoid the conclusion that we either pass into a knowledge regress or deny that studying human behaviour is science.
In philosophy, thought about emergence often turns on whether we can distinguish what you might call mere epistemological emergence from genuinely ontological emergence. Where epistemological emergence is in play, we grant that the low-level facts do in fact determine everything at the upper level, even if, as it happens, we have no way of predicting the upper level from the lower, and even if our ways of comprehending the lower level are shaped by what we know about the higher level. Think chaotic systems, traffic patterns, etc. Full-blown ontological emergence makes a much stronger claim: facts at the lower level do not fix the facts at the upper level. There are, then, on such a view, genuinely emergent phenomena. The trick has always been to explain how that could be and whether it even makes sense. Can one give an example of genuine ontological emergence?
Metaphors
The microscope
Working with different domains of knowledge is like zooming in and out of regions of the world in space and time, seeing different patterns and regularities in nature as we do so. As we focus on one domain the laws, principles, and categories of the others become part of a blurred and much less relevant background. When working in the world of biology the world of physics is mostly irrelevant, not because it is unimportant but because it is taken for granted in our selective cognition.
It appears to be a function of our minds that we must apprehend the world through categories of scale, and the greater the difference in scale the more difficult this becomes. It makes no practical sense to explain monetary and fiscal policy in physicochemical terms. In theory this is not absurd, since these matters are a consequence of interacting physical objects, but it would require almost limitless computing power, and our minds could not cope with the problems of scale and the associated difficulties of vocabulary, categories, properties, and relations.
When the microscope was invented it was necessary to create a whole new language of terms for the structures observed in animal and plant cells. The same is true at the molecular scale. The new terms were needed to deal with a new scale of comprehension.
Looking at life on Earth over its full time period of about 3.5 billion years, and at the scale of all life, we might imagine a three-dimensional branching tree-like structure as different life forms differentiate along the branches up to the present day, with many dead ends. To assist our perception we fix on particular categories or units of scale depending on utility and our interests. We recognise various aggregates of organisms, with the individual species as the ill-defined fundamental unit, arranged into progressively more inclusive units as genera, families, and so on. Within an organism like ourselves we select operational units like organs, tissues, cells, molecules, and atoms. When thinking about evolution we choose units such as gene, organism, and population.
Are some aspects of biological science autonomous in that they do not benefit or utilise the knowledge of molecular biology?
The television
We know a television is made up of tiny units called pixels and that these pixels can flash different colours in a predetermined and coordinated way that allows us to produce images of people and other objects. This representation of people and other objects by means of a pixel matrix is interpreted by our eyes and brains in the same way that we interpret the representations created by actual objects in the world. This metaphor illustrates several important aspects of our perception and cognition.
Firstly, the images that are so meaningful to us are made up of simple basic constituents, flashing pixels, that individually lack meaning.
Secondly, the activity of the pixels acting together is meaningful because the pixels have been organised to distribute colour across the TV screen in a highly coordinated way.
Thirdly, the meaningful images we see are interpreted by our eyes and brains as objects in the real world: they are categories created in our minds since the TV screen is just composed of pixels, not people and other objects.
Fourthly, the fact that TV screens (which are just a layer of flashing pixels) can create visual representations that are highly convincing to our eyes and brains makes us aware that our brains can add structure to the world that is not there. It is the task of science to establish as close a correlation as possible between our perceptions and reality, knowing that our minds can be deceived.
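These points can be condensed into a toy demonstration (purely illustrative): the same set of ‘pixels’ carries a recognisable image only when coordinated.

```python
import random

# The same 25 'pixels', coordinated, form a crude letter T; shuffled, they
# form nothing recognisable. The parts are identical - only the organisation
# differs.
image = [
    "#####",
    "..#..",
    "..#..",
    "..#..",
    "..#..",
]
pixels = [ch for row in image for ch in row]
random.shuffle(pixels)  # same parts, coordination destroyed
scrambled = ["".join(pixels[i:i + 5]) for i in range(0, 25, 5)]

print("\n".join(image))
print()
print("\n".join(scrambled))
```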
Problems with reduction:
The effects of molecular processes often depend on the context in which they occur. So one molecular kind can correspond to many kinds at a larger scale (one to many) while at the same time large-scale structures and processes can arise from different kinds of molecular processes, so that many molecular kinds can also correspond to a single larger-scale kind (many to one or multiple realization).
Structure and function relate to spatial and temporal (spatiotemporal) factors respectively. Each represents a mode or type of organisation important in reduction. This is why development is an important aspect here.
Scientific explanation often involves units from different scales of reference.
In a reductive explanation the intrinsic can be important (what is internal and what is external), reduction favouring internal causality; protein folding, for example, can have external causes. Temporal and intrinsic factors thus play a part in reduction, as well as simply the relations of parts and wholes. Perhaps there are different kinds of reduction?
Perhaps we should move away from the idea of reduction towards science as best characterised as proceeding by unification as integration and synthesis rather than reduction. The theories and disciplinary approaches to be used depend on the nature of the problem being discussed.
Complexity – the unconscious collective behaviour of social insects is an emergent property. Complexity arises from dynamics, not constitution? Chaos theory, for example, demonstrates how some systems are acutely sensitive to the minutest changes, which can totally change their behaviour. Such systems are widespread and difficult to analyse in a reductionist way, yet they seemingly generate remarkable patterns of behaviour spontaneously, in a holistic manner. Highly complex systems seem to contain vast amounts of information, ‘active information’ being a new arena for theorising.
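The sensitivity chaos theory describes is easy to exhibit. The sketch below (the parameter choices are just for illustration) iterates the logistic map, a standard textbook example of a chaotic system:

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4). Two trajectories
# starting one part in a billion apart diverge completely within ~50 steps.
r = 4.0
x1, x2 = 0.2, 0.2 + 1e-9
for _ in range(50):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
print(abs(x1 - x2))  # of order 1: the tiny initial difference has been amplified
```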
The problem is not whether explanations are reductionist or not, but whether the particular degree of reduction is sufficient to answer the question being posed.
CONCLUSION
The task of science is to describe, as accurately as possible, the structure and workings of the objects that exist outside the human brain. But to do that we must use the brain itself, an object that has been moulded and limited by its evolution. To describe the universe we must first understand as much as we can about the limitations of the tool we use to comprehend it.
As we pass from physics and chemistry to biology and sociology, the cognitive units or categories of nature that we use as tools to do our science tend to increase in abstraction, complexity of material organisation, and causal intricacy. We sense a graded change in the character of the subject matter that is a difference in degree, not in kind – more a matter of trends. Physics and chemistry appear to proceed mostly by analysis, while much of biology is about synthesis as it attempts to explain organisational complexity and the role of phenomena within functional systems, its teleonomic character tending to look to the future. Its compositional or holistic concern is with organisational factors and adaptive function.
Reduction is complicated by our metaphysics – the way we assume the natural world is structured – the nature of reality. Science is now providing us with an improved picture of this reality.
The confusing aspects of language include metaphor, anaphora, and polysemy.
We can now combine the principles and findings of this section as follows:
Reduction is only useful and appropriate when it improves our understanding. In considering the relationship between wholes and parts, each particular case must be examined on its own merits. We regard scientific categories as important because we believe they map the natural world as best we can, ultimately providing us with compelling explanations that help us manage the world.
Science uses categories that map the natural world as accurately as possible, but some of these may be mental categories with no instantiation in the physical world, and others may be relational in character. The scientific need for explanation, like the philosophical requirement for rational justification or causal origin, leads to an explanatory regress seeking ever more ‘fundamental’ answers. However, a satisfactory answer does not depend on the size of the unit but on the plausibility, effectiveness, or utility of the answer (Principle 1). Hierarchical language applied to biological organisation implies value and is best replaced with the language of scale. The greater the difference in the scale of units used by different domains, the greater the difficulties of reduction, communication, and translation. Provided scientific units are credible, the scale we use for explanation is simply a matter of utility.
We analyse a problem to obtain a broader understanding, a better synthesis. Science progresses by a process of both analysis and synthesis with emphasis alternating between the two in a kind of dialectic.
With decreasing levels of complexity compensatory activity or ‘self-regulation’ decreases in likelihood.
Key points
Reductionism is the translation of ideas from one domain of knowledge into those of another
Scientifically credible units of matter have no precedence over one-another based on size alone
The idea of something being more ‘fundamental’ probably derives from our tendency to explain by a process of analysis, by breaking down into smaller parts. It is also probably related to internalised hierarchical thinking in terms of ‘levels’ to which we unconsciously apply value
Science examines matter at various scales which correspond loosely to disciplines as domains of knowledge, language and theory
Though there are clear links between domains of knowledge, each domain seeks optimal explanatory results using its own language, principles and procedures. Linking or even uniting (reducing one domain to another) may have benefits but presently appears to pose insurmountable difficulties.
If we regard science as the matching of our mental categories to the reality of the external world then there can be no ‘fundamental’ science and also no clear distinction between what is science and what is not. There will simply be better and worse explanations of the world of matter and energy. Even so, for a whole variety of reasons it is evident that astronomy is more scientific than astrology.
Are some scientific explanations ‘better’ than others?
Is physics more ‘fundamental’ than biology?
Does the physical world exist in ‘levels’ of organisation?
Do new properties emerge as things get more complex or are the ‘fundamental’ properties always the same – is the whole greater than the sum of the parts?
What is reductionism and why is it often treated as an error in thinking?
We draw scales of convenience which we believe reflect reality. Why should gene selectionism not reduce further to physics and chemistry?
And scientific categories, we believe, relate closely to objects in the external world. Even so, the scientific information considered valuable in the modern world might be inconsequential to a person living in the New Guinea jungle, for whom what matters most is whether something is edible or poisonous. And for an ecologist the actual species in a particular environment may not matter – more important is their role within an ecosystem, say whether an organism is a predator or a herbivore.
We can imagine scientists investigating nature as a watchmaker investigates a watch: if we want to know how the watch works then we examine the parts, how they fit together, and how together they operate effectively. By a process of analysis we then see how the parts interact to produce an operational whole. In terms of our classification of categories this is a sum or additive category.
Reductionism reflects a particular perspective on causality: supervening (more inclusive) phenomena that are completely explained by smaller-scale (less inclusive) phenomena can be termed epiphenomena. It is often assumed that epiphenomena have no causal effect on the phenomena that explain them.
Only statements can be deduced, not properties: properties require a theory.
Though we do not know why the laws of physics are as they are
Part of the teleonomic view of the world is that there are many paths to the same end: in embryology, for example, the developmental fate of cells is determined by their environment.
Species exist because they perform their functions (Aristotle).
Mental categories
Concepts provide the meaning that language expresses and they comprise the blocks of information on which reason can work. If we regard categories as concepts (units of thought or mental representations) then they can be of two kinds, either universals (types) which are general categories like ‘chair’ and ‘tree’, or particulars (tokens) like my chair or the oak tree outside my window. Though categories may sometimes be clearly defined as having necessary and sufficient conditions, most simply share a family resemblance – a set of characteristics that overlap with those of other categories.
We can also regard universals as sets and particulars as sums; for simplicity, mind-objects are called types and physical objects tokens.
Sets are abstract, consisting of objects that are ‘members’ of that set even when their members are physical objects. So, for example, English Oak and Chinese Elm are members of the set ‘tree’. Sums consist of parts rather than members: so a leg is a part of our body and a body can be physically moved. A forest is a collection of trees, so it is a sum not a set. The distinction between a sum and set may not always be crystal-clear but it helps to be aware of the idea – that is, it helps to be wary of the use of abstract and concrete nouns.
Categories can also consist of properties or relations. Plants share the physical property of being photosynthetic (they instantiate photosynthesis). When dealing with properties it is useful to distinguish intrinsic properties (inner properties that are independent of external influences) from extrinsic (relational) properties, which are not. Categories like this are easiest to understand when the properties are intrinsic, but when relations between properties are important we get the language of parts and wholes.
Scientific properties like specific colours, weights, densities, and temperatures, or the ability to photosynthesise, are regarded as contingent factors (tokens that instantiate the types colour, weight, and process) that are part of the scientific world of empirical investigation (what might be called a naturalistic ontology).
The particular kind of category that we use will depend on the particular situation, and mixtures are possible. We must be aware of difficulties relating to the precision and clarity of our categories. The word ‘goldfish’ can refer to a specific physical object or token, while the word ‘society’ refers to something that is physically undefined – like the distinction between concrete and abstract nouns.
Principle 5 – Science uses categories (names, explanations, definitions, theories etc.) that reflect as accurately as possible the natural world: these categories consist of either sets (universals), sums (particulars) or properties. Properties may be either intrinsic (internal) or extrinsic (relational).
Principle 6 – Sets (universals), being abstract, can add complexity to the analysis of wholes and parts.
There is an expectation that biology should produce universal laws like those of physics but as biology is only concerned with living organisms this is an unreasonable expectation.
We must ask what could possibly be the point of converting the language of one into another: it is not only unnecessary but also unimaginably complex.
‘Predispositions’ and ‘propensities’ are proximate mechanisms.
Cognitive focus and cognitive illusion
We have all experienced visual illusions, as when a stick in water appears to bend, or when we focus on ambiguous images that seem to be one thing one moment and another the next, but never both at the same time. In a similar way we struggle with cognitive illusions that create cognitive dissonance: something cannot be simultaneously similar and different, a whole and a part … it must be either one or the other. And yet we know that an ant is a whole individual while at the same time being part of a colony.
Our brains organise knowledge by classifying it into categories. The method we use to classify or organise knowledge can influence the way we perceive the world and our ability to discover and create new knowledge.
Our scientific map of reality, like all our cognition, removes unwanted noise, acting like a camera lens by filtering out inconsequential information as we ‘zoom in’ and ‘zoom out’ of different regions of categorisation. What we must ask ourselves as scientists is the extent to which the categories and groupings we create are accurate representations of objects in the external world, and the extent to which the way we rank these categories and groups accurately represents what is going on in the external world.
We can immediately comment on these questions. Categories are tricky: the dog in front of me seems a real and concrete object in the world, but the general category ‘dog’ is abstract, rather like the non-existent category ‘unicorn’. Groupings are similar although perhaps not so clear: ‘primate’ seems OK, and ‘London trams’ alright, but I am not so sure about ‘institutions’ and ‘society’ – they seem fuzzier categories. Both categories and groups, we might say, need scientific investigation – we need sound evidence for their existence. Ranking itself is different. The external world does not rank its contents; that is what we do. All we can do is try to determine as accurately as possible what there is in the world, what exists. When we draw up a biological classification we are ranking organisms according to their similarities and differences, which we assume have something to do with the way they evolved, with the nature of their existence in the external world. Ranking plants according to edibility is clearly more subjective.
Complexity – move to emergence
(Scale) We understand all objects within a context, which depends on their relationship to other objects. Some wholes, like billiard balls on a table or sugar crystals in a sugar lump, we can understand fully by examining the properties of the individual parts in a process of analysis. With a complex whole like the human body we can only understand the parts by seeing how they are related to one another in relation to the function of the whole, a process we call synthesis.
We can imagine a continuum of groups whose parts have varying degrees of connectivity. Increasing complexity is generally associated with other factors: an increasing number of elements; an increasing degree of connectivity, often into a network, where it is the relationships that define the system, not the components themselves; and usually an increase in the diversity of the parts. Adaptive complex systems like organisms are also capable of self-regulation (teleonomy). Analytic systems have simple and predictable linear causal relationships where input and output are equal, while complex systems have complicated, non-linear, and often unpredictable causation that is not readily amenable to modelling: small differences in initial conditions can have massive consequences.
The ‘possibility space’ allows us to think about random and complex situations without tracing causes and effects. It assumes that each time a situation of that kind arises, the set of possible outcomes is the same and the probabilities are also the same. So, considering the likelihood of life on other planets would entail a sample space (the set of all possible outcomes), a set of events, and an assignment of probabilities to events.
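In code, such a possibility space is just three ingredients: outcomes, events, and probabilities. The sketch below uses invented numbers purely to show the structure:

```python
# A sample space of outcomes, events as subsets of outcomes, and an
# assignment of probabilities. All numbers are invented for illustration.
sample_space = {"no life": 0.90, "microbial life": 0.08, "complex life": 0.02}
assert abs(sum(sample_space.values()) - 1.0) < 1e-9  # probabilities must sum to 1

event_any_life = {"microbial life", "complex life"}  # an event is a set of outcomes
p_any_life = sum(sample_space[outcome] for outcome in event_any_life)
print(f"P(any life) = {p_any_life:.2f}")  # 0.10
```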
Law-language
Are the laws of physics (which may be strict or probabilistic) descriptive or prescriptive: that is, do they simply describe the way things are, or do they actually exert an influence on things?
The discovery of laws was long regarded as central to science, and from a theistic perspective this made sense – natural laws were God’s laws, part of his divine plan for the universe.
In everyday parlance we say that the laws of nature ‘determine outcomes’, that they ‘govern behaviour’ and so on. Taken literally this suggests that laws are rules that are in some way prior to activity, that they exert an extraneous influence or constraining force on things, they are something outside the system or circumstance itself, like a physical barrier, a programmer, or system of governance.
For many scientists this is unacceptable: laws do not exist in some transcendental realm acting on matter in the world. Law-language simply describes the way the world is, the decree-like lawfulness implied in language is metaphor and best treated as such.
Nevertheless, laws do explain or account for why the world is as it is, while descriptions simply state facts. Uniformities in classes of objects and activities can be described and given mathematical expression, and this is critical to the predictive power of science. So how do we account for laws? We simply replace the idea of law with that of succinct descriptions of patterns or regular behaviour, whose strength lies in their simplicity and generality.
One characteristic of the diverse range of scientific generalities (laws) is that they exhibit varying specificity: there is a trade-off between simplicity and generality.
One descriptive account of a scientific law is given by the late philosopher David Lewis. Consider the set of all truths and select some of these as axioms, thus permitting the construction of a deductive system whose logical consequences become its theorems. These deductive systems compete with one another along (at least) two dimensions: the simplicity of the axioms, and the strength or information content of the system as a whole. We prefer to be well-informed, but to achieve this we sacrifice simplicity. So, for example, a system comprising the entire set of truths about the world would be maximally strong, but also maximally complex; conversely, a generality like ‘events occur’ is simple but uninformative. What we need is the most useful balance between the two, and that, perhaps, is what the ‘laws’ of nature provide. This is not a precise formula but a heuristic for thinking about scientific laws: we look for the simplest generalizations from which we can draw the most information. Thus there are laws of widely varying resilience scattered among the various scientific disciplines. On this view the laws of nature just are condensed descriptions of the particular facts about the world.
Laws can also be regarded as above, but relativized to the best systems within particular vocabularies, so that we could have different laws for different vocabularies. An economist wouldn’t be interested (at least not qua economist) in deductive systems that talk about quarks and leptons: her language would be along the lines of inflation and interest rates. The best system for this coarser-grained vocabulary will give us the laws of economics, distinct from the laws of physics.
On this descriptive account, laws are part of our map rather than the territory: convenient ways of abbreviating reality. Regularities assist the organization of knowledge, and which regularities we single out depends on facts about us. Nature does not make these regularities laws; we do.
Mereological reductionism is the claim that the stuff in the universe is built of things described by fundamental physics, even though physicists may still be unsure what these are. Nomic reductionism holds that the fundamental laws of physics are the only really existent laws, and that the laws of other disciplines are just convenient abbreviations necessitated by our computational limitations.
Nomic reductionism appeals through the apparent redundancy of non-fundamental laws: macroscopic systems are entirely built out of parts whose behaviour is determined by the laws of physics, so the laws of other disciplines are superfluous. But this argument relies on the prescriptive conception of laws: it assumes that real laws do things, that they physically influence matter and energy, which would indeed look like overdetermination. If instead we regard laws as descriptive, all we have are different best systems, geared towards vocabularies at different scales, and therefore different regularities described in different condensed ways. There is nothing problematic about having different ways to compress information about a system, and we need not claim that one method of condensation is more real than another.
Accepting the descriptive conception of laws severs the ontological divide between fundamental and non-fundamental laws; privileging the laws of physics is then the result of a confused metaphysical picture.
However, even if we accept that laws of physics don’t possess a different ontological status, we can still believe that they have a prized position in the explanatory hierarchy. This leads to explanatory reductionism, the view that explanations couched in the vocabulary of fundamental physics are always better because fundamental physics provides us with more accurate models than the non-fundamental sciences. Also, even if one denies that the laws of physics themselves are pushing matter around, one can still believe that all the actual pushing and pulling there is, all the causal action, is described by the laws of physics, and that the non-fundamental laws do not describe genuine causal relations. We could call this kind of view causal reductionism.
Unfortunately for the reductionist, explanatory and causal reductionism don’t fare much better than nomic reductionism. Stay tuned for the reasons why!
A heart has little meaning except in relation to the body of which it is a part. Hydrogen and oxygen combined as water have properties that are different from, and not predictable from, those of the individual atoms.
Top-down causation
Emergence is generally couched in the language of scale, but the language of causation may be more helpful, since ‘higher’-level behaviour arises from ‘lower’-level causes. Emergence is conveniently illustrated through the world of computers. Computer hardware enables but does not control – it is software, as information, that makes a computer work: software tells the hardware what to do. The information software carries is abstract, not physical, yet it has causal agency; it sets the constraints for ‘lower-level’ action whose goals can be achieved in many ways (multiple realization).
Epiphenomena are by-products of things, not the things themselves: brain and mind; brain and consciousness.
We understand objects within a context which depends on their relationship to other objects. Some wholes, like billiard balls on a table or sugar crystals in a sugar lump, we can understand fully by examining the properties of the individual parts in a process of analysis. With a complex whole like the human body we can only understand the parts by seeing how they relate to one another and to the function of the whole: this process we call synthesis.
http://lesswrong.com/about/
http://lesswrong.com/lw/ct3/natural_laws_are_descriptions_not_rules/
Generalisations in biology are often not strict, having various exceptions; they are often not uniquely biological, since organisms, for example, follow the rules of hydraulics and aerodynamics; and they rarely have the law-like character of physical laws. Biology accepts the generalisations of physics and then proceeds within its own domain.
Classical reductionism – either the laws or generalities of biology, psychology, and social science are the deductive consequence of the laws of physics, or they are not true.
Multiple realization – genes have effects on many phenotypic characteristics, and phenotypic characteristics are affected by many genes. The relationship between genetic and phenotypic facts is therefore many-to-many and cannot be a simple deductive consequence. There is a complex two-way relationship between the genome and its molecular environment.
Paley’s watchmaker analogy argued from apparent design to a designer.
Synthesis & analysis
We can see in analysis and synthesis several opposing ideas. Analysis is breaking down; synthesis is building up. Analysis looks down while synthesis looks up. In a literary sense, analysis is associated with Classicism, which ‘looks back’ to old traditions, certainties, universal characteristics and conservatism, while synthesis, like Romanticism, ‘looks forward’ to novelty, creativity, experiment and imagination – perhaps even echoing the great intellectual traditions of the permanence of Being and the change of Becoming.
Order & constraint
The ancients wondered why there was order in the world rather than randomness and chaos. Today we can point out that this order comes about by means of constraints on possible outcomes. Not everything is possible. The most obvious constraints on activity in the universe are a consequence of physical constants, what we call ‘physical laws’. This means, for example, that given the initial conditions of the universe the possible outcomes are already limited. The best science at present indicates a heat death. Though the universe is mindless, owing to the constraining action of physical constants, at any given time it has potential that will become actualized in a more or less predictable way. That is, there are ‘ends’ to inexorable physical processes and in this sense these processes are teleological. When life emerged the nature of teleology underwent a radical change: some organisms would survive and others perish. Though morality is supplied by the human mind, the reasons for organismal survival exist in nature. Situations become ‘good for’ and ‘bad for’; there is rudimentary normativity and functional design. There is also the passage of historical information from one generation of organisms to the next, the historical organism–environment interaction being ‘represented’ as information contained in genes or gene-like chemicals.
Emergence as new biological order is more a consequence of constraining boundary conditions than of the existence of biological laws or law-like behaviour.
The fallacies of composition and division
The fallacy of composition arises when it is inferred that something is true of the whole from the fact that it is true of some (or even all) of its individual parts. For example: ‘fermions and bosons are not living, therefore nothing made of fermions and bosons is living’. Denying this inference is akin to an assertion of emergence – that the whole may have properties or qualities not apparent in the parts. This stands in contradistinction to the fallacy of division, in which it is inferred that something true of the whole must also be true of its parts. For example: ‘my living brain exhibits consciousness, therefore its constituent atoms display consciousness’.
To explain something, on the covering-law model, is to subsume it under a law.
The adaptations of living organisms are treated as ‘forward-looking’.
This imposition of order (function) by natural selection, the process of adaptation, is a holistic feature that has been called ‘downward causation’ (the whole may alter its parts), and it cannot be improved upon by reductionist explanation (Campbell, 1974).
Hierarchy implies that levels are ontologically distinct but they may be only epistemologically so. We must communicate in a linear way but the ideas being communicated may not be related in a linear way.
Memes are informational not physical.
We do not have to comprehend reasons; we just act on them. Though our minds understand, neurons do not. Though an ants’ nest is a highly integrated and purposeful unit, the individual ants do not understand this. Meaning can emerge from non-meaning.
Computers have taught us about the importance of software and system behaviour rather than physics and chemistry.
Abstract notions can be causal – habits, words, shapes, songs, techniques, learning by watching, long-division: none of these is in the genome.
So, to help understand the many issues at stake here we can imagine a continuum in the structure of ‘wholes’ ranging from those where there is minimum interaction between the parts (aggregations) to those in which there is a highly integrated interdependence of the parts – many variables and complex causation (systems). As an example of the former we might think of a rock made out of grains of sand and of the latter a living organism. But there are many different kinds of wholes, so consider the following: a car engine, an ant colony, a flock of birds, a shoal of fish, 4 as a sum of 2 and 2, a symphony as organised sound, economic patterns that are a consequence of mass markets.
The situation is complicated by the nature of the whole which might be: a mechanism or process, a behaviour (movement of flocks of birds or shoals of fish), a property, a system with parts in a state of dynamic interdependence, a concrete object.
In very general terms it seems that physics and chemistry tend to take a reductionist or atomistic approach to their disciplines while biology and the social sciences adopt a more holistic or organismic stance. This raises a major metaphysical question about the nature of reality. What is the structure of the physical world? Is the best scientific representation of the world expressed through the relations of fundamental particles acting under the influence of physical constants, or is it better represented in some other way?
Macro-causation that supervenes on, but is not fully determined by, micro-level processes is ‘strong emergence’.
(The humanistic view of human nature (Roger Scruton): perhaps it is impossible to ‘reduce’ the language and ideas of the human realm to those of science, which cannot capture the sense of self and other, of I and thou. So, for example, music, colour, and art can be subjected to meticulous scientific scrutiny but still lack a dimension of uniquely human understanding.)
Complexity
Many scientists would answer that complexity arises as the laws of physics play out in a deterministic way. This is one aspect of reductionism, which we need to define.
Classical reductionism (determinism)
The path of cosmic destiny is determinate. Knowing the precise conditions at the origin of the universe should, in principle, be sufficient to explain the emergence of smartphones and you, here, right now. Expressed another way – it should, in theory, be possible to explain sociology in terms of particle physics (though complex in the extreme), that is, the small-scale accounts for the large-scale. Physics is based on the proven high predictive capacity of mathematics.
By this account the regularities of chemistry, biology, psychology and the social sciences are epiphenomena (by-products) because they are grounded in physical causation. All physical objects are composed of elementary particles under the influence of the four fundamental forces, and physical laws alone give a unique outcome for each set of initial data. English theoretical physicist Paul Dirac (1902-1984) expressed a first step down this path when he claimed that ‘chemistry is just an application of quantum physics’. One of the basic assumptions implicit in the way physics is usually done is that all causation flows in a bottom-up fashion, from micro to macro scales. The physical sciences of the 17th to 19th centuries were characterised by systems of constant conditions involving very few variables: this gave us simple physical laws and principles about the natural world that underpinned the production of the telephone, radio, cinema, car and plane.[1]
There are many objections to such a view and since the 1970s these objections have become the focus of studies in complexity theory. After 1900, with the development of probability theory and statistical mechanics, it became possible to take into account regularities emerging from a vast number of variables working in combination: though the movement of 10 billiard balls on a table may be difficult to predict, when there are extremely large numbers of balls it becomes possible to answer and quantify general questions that relate to the collective behaviour of the balls (how frequently will they collide, how far will each one move on average before it is hit, etc.) when we have no idea of the behaviour of any individual ball. In fact, as the number of variables increases certain calculations become more accurate: say, the average frequency of calls to a telephone exchange or the likelihood of any given number being rung by more than one person. This allows, for example, insurance companies and casinos to calculate odds and ensure that the odds are in their favour. It applies even when the individual events (say, the age of death) are unpredictable, unlike the predictable way a billiard ball behaves. Much of our knowledge of the universe and natural systems depends on calculations of such probabilities.
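The sharpening of aggregate regularity with increasing numbers is easy to demonstrate. Here is a minimal simulation (a Python sketch; the coin-flip model and the particular numbers are illustrative assumptions, not anything from the sources cited here): individual events remain unpredictable, yet the spread of the collective average shrinks roughly as one over the square root of the number of events.

```python
# Individual random events are unpredictable; their aggregate average is not.
import random

def spread_of_average(n_events: int, n_trials: int = 100) -> float:
    """Standard deviation, across repeated trials, of the average of n_events coin flips."""
    means = []
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(n_events))
        means.append(heads / n_events)
    centre = sum(means) / n_trials
    return (sum((m - centre) ** 2 for m in means) / n_trials) ** 0.5

for n in (10, 1_000, 100_000):
    print(f"{n:>7} events: spread of the average = {spread_of_average(n):.4f}")
```

Each individual flip stays a 50:50 gamble, but the printed spread falls by roughly a factor of ten for every hundredfold increase in events – the statistical regularity the insurer and the casino rely on.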
Science in the 21st century is tackling complex systems. People wish to know what the weather will be like in a fortnight’s time; to what extent climate change is anthropogenic; what the probability is that they might die of some heritable disease; how the brain works; what degree of risk attaches to the use of a particular genetically modified organism; whether interest rates will be higher in six months’ time and, if so, by how much. This is the world of organic complexity, neural networks, chaos theory, fractals, and complex networks like the internet. Historically, the processes going on in biological systems seemed to involve many subtly interconnected variables that were difficult to measure and whose behaviour was not amenable to the formulation of law-like patterns similar to those of the physical sciences. Up to about 1900, then, much of biological science was essentially descriptive, with meagre analytical, mathematical or quantitative foundations. But there are systems that are organised into functioning wholes: labour unions, ant colonies, the world-wide-web, the biosphere. Such a system consists of many simple components interconnected, often as a network, through a complex non-linear architecture of causation, with no central control, producing emergent behaviour. Emergent behaviour, as scaling laws, can entail hierarchical structure (nested, scalar), coordinated information-processing, dynamic change and adaptive behaviour (complex adaptive systems – ecosystems, the biosphere, the stock market – are self-organising, showing non-conscious evolution, ‘learning’, or feedback). Examples are an ant colony, an economic system, a brain.
However, when a living organism is split up into its component molecules there is no remaining ingredient such as ‘the spark of life’ or the ‘emergence gene’ so emergent properties do not have some form of separate existence. And yet emergent properties are not identical to, reducible to, predictable from, or deducible from their constituent parts – which do not have the properties of the whole. The brain consists of molecules but individual molecules do not think and feel, and we could not predict the emergence of a brain by simply looking at an organic molecule.
Levels of reality
Many scientists and philosophers find it useful to grasp complexity in the world through the metaphor of hierarchy as ‘levels of organisation’, with its associations of ‘higher’ and ‘lower’ and a world that is in some way layered. But ‘as if’ (metaphorical) language can be mistaken for reality and is best minimised unless it serves a clear purpose or is unavoidable.
What exactly do we mean by ‘levels’ in nature and can these ideas be expressed more clearly? Hierarchies rank their objects as ‘higher’ or ‘lower’ with their ‘level’ based on some ranking criterion.
Scale
‘Level’ is used in various senses. Firstly, it expresses scale or size as we move from small to large in a sequence like molecules – cells – tissues – organs – organisms, and from large to small as we pass along the sequence universe – galaxy – solar system – rock – molecule – quark.
Complexity
But it cannot be just a matter of physical size, because organisms are generally treated as ‘higher’ in such hierarchies than, say, a large lump of rock. So secondly, or perhaps in addition, we are referring to complexity: the fact that an organism has parts that are closely integrated in a complex network of causation in a way that does not occur in a rock. There are difficulties of definition here too, e.g. how do we rank against one another a society, a human, an ecosystem? A microorganism may well be considered more complex than the universe.
Context
But then, thirdly, ‘levels’ also suggest frames of reference: one set of things can be related to another set of things, so that we view one set from a ‘higher’ or ‘lower’ vantage point, and here context or scope becomes a key factor.
There are then three major criteria on which scientific hierarchies are built: scale (inclusiveness or scope), causal complexity, and context. When considering any particular scientific hierarchy it helps to consider these factors (separately or in combination).
There are a few complications. Sometimes the layering is expressed in terms of disciplines or domains of knowledge as physics, chemistry, biology, psychology, and sociology which is different from the phenomena that the disciplines study. In this case each discipline or domain constitutes a contextual ‘level’ with its own language, vocabulary, and mathematical equations that are valid for the restricted conditions and variables of that level. There is increased decoupling with increased separation of domains.
Sometimes the hierarchy is given as a loose characterisation of what these subjects deal with – laws of the universe, molecules, organisms, minds, humans in groups, or some-such. Sometimes the nature of the physical matter is given emphasis – whether it is organic or inorganic.
At the core of the scientific enterprise is the idea of causation and for eminent physicist George Ellis it is causal relations acting within hierarchical systems that are the source of complexity in the universe. His views are summarised in his paper On the Nature of Causation in Complex Systems in which he outlines the controversial idea of ‘top-down’ causation.[1] I briefly outline these ideas below.
My preference would be to regard causation as acting between the cognitive categories we use to denote phenomena in the world. We take these categories to vary in both their inclusiveness and complexity. This, for me, is a more satisfactory mental representation of ‘reality’ than a layered hierarchy of objects arranged above and below one another. I would use the expression ‘more inclusive and complex to less inclusive and complex’ in preference to the expression ‘top-down’, but the convenience of the shorthand is obvious.
Degree of causal complexity in parts & wholes
Although we can speak of parts and wholes in general terms, actual instances demonstrate varying degrees of causal interdependence. At one end of the spectrum are aggregates with minimal interdependence of constituents (a holon is a category that can be treated as either a whole or a part), and at the other are living organisms, where the significance of a constituent, like a heart, depends strongly on its relation to the body.
As we shift cognitive focus, the relationship between wholes and parts can display varying degrees of interdependence: removing a molecule from an ant’s body is unlikely to be problematic, and similarly removing one ant from its colony; but removing an organ from a body could be.
Wholes sometimes only exist because of the precise relations of the parts – in other wholes this does not matter. Sometimes characteristics appear ‘emergent’ (irreducible, as in highly organised wholes) and sometimes they appear ‘reducible’ (as in aggregative wholes): we need to consider each instance in context. Some holons are straightforwardly additive (sugar grains aggregated into a sugar lump) but others grade into kinds that are not so amenable to reduction – consider the music produced by an orchestra, carbon dioxide, a language, a painting, an economic system, the human body, consciousness, and the sum 2 + 2 = 4.
Neither of these factors need affect the following account which is highly abbreviated from Ellis:
Causation
We can define causation simply as ‘a change in X resulting in a reliable and demonstrable change in Y in a given context’. Particular causes are substantiated by experimental verification, in which they are isolated from surrounding noise.
Causation, or causality, is the capacity of one variable to influence another. The first variable may bring the second into existence or may cause the incidence of the second variable to fluctuate. A distinction is sometimes made between causation and correlation, the latter being a statistical association between variables that need not reflect any causal relation between them.
The philosopher Hume saw even the laws of physics as a matter of constant conjunction rather than the kind of causation we generally refer to. One event (effect) can have multiple causes; one cause can have multiple effects.
The claim is that causation is not restricted to physics and physical chemistry as is frequently maintained. Examples of ‘bottom-up’ causation would be the brain as a neural network of firing neurons or the view that there is a direct causal chain from DNA to phenotype.
Complexity emerges through whole–part two-way causation as cause-initiating wholes (scientific categories) become progressively more inclusive and complex, and the entities at a particular scale are precisely defined (with properties associated with hierarchies: information hiding, abstraction, inheritance, encapsulation, transitivity, etc.).
Top-down causation
Ellis claims that bottom-up causation is limited in the complexity it can produce: genuine complexity requires a reversal of information flow from ‘bottom-up’ to ‘top-down’, and a coordination of its effects. To understand what a neuron does we must explain not only its structure or parts (analysis) but how it fits into the function of the brain as a whole (synthesis). Fractal structure in nature is three-dimensional, and fractal geometry is an important mathematical tool for this two-way relation across scales.
‘Higher’ levels are causally real because they have causal influence over ‘lower’ levels. It is ‘top-down’ causation that gives rise to complexity – such as computers and human beings. As we pass from ‘lower’ to ‘higher’ categories there is a loss of detailed information but an increase in generality (coarse-graining) which is why car mechanics do not need to understand particle physics. As wholes and parts become more widely separated (‘higher’ from ‘lower’) so the equivalence of language and concepts generally becomes more obscure although sometimes translation (coarse-graining) is possible.
Top-down causation is demonstrated when a change in high-level variables results in a demonstrable change in low-level variables in a reliable (non-random) way, this being repeatable and testable. The change must depend on the higher level alone, in that the contextual variables cannot be described in terms of a lower-level state. It is common in physics, chemistry and biology: for example, the influence of the moon on tides and the subsequent effect of tides on organisms. Biological ‘function’ derives from higher-order constructs in downward causation. Cultural neuroscience is an excellent example of a synthetic discipline dominated by top-down causation.
Equivalence classes relate higher level behaviour to (often many different) lower level states. The same top-level state must lead to the same top-level outcome regardless of which lower level state produces it.
Top-down causation occurs when higher level variables set the context for lower-level action.
Ellis recognises five kinds of top-down causation:
Algorithmic – high-level variables have causal power over lower-level dynamics so that the outcome depends uniquely on the higher-level structural, boundary and initial conditions, e.g. algorithmic computation, where a program determines the machine code which determines the switching of transistors; nucleosynthesis in the early universe, determined by pressure and temperature; developmental biology, a consequence of reading the DNA while also responding to the environment at all stages (developmental plasticity).
Non-adaptive information control – non-adaptive higher-level entities influence lower-level entities towards particular ends through feedback control loops. The outcome is not determined by the initial or boundary conditions but by the ‘goals’, e.g. a thermostat and homeostatic systems (see the thermostat sketch after this list).
Adaptive selection (teleonomy) – variation in entities with subsequent selection of specific kinds suited to the environment and survival, e.g. the products of Darwinian evolution. Selection discards unimportant information. Genes do not determine outcomes alone but in concert with the environment. This is like non-adaptive information control but with particular kinds of outcome selected rather than just one. In evolution we see convergence from different starting points. Novel complex outcomes can be achieved from a simple algorithm or underlying set of rules. Adaptive selection allows local resistance to entropy with a corresponding build-up of useful information (see the selection sketch after this list).
Adaptive information control – adaptive selection of goals in feedback control system such as Darwinian evolution that results in homeostatic systems; associative learning.
Intelligence – the selection of goals involves the symbolic representation of objects, states and relationships to investigate the possible outcome of goal choices. The implementation of thoughts and plans using language including the quantitative and geometric representations of mathematics. These are all abstract and irreducible higher-level variables. They can be represented in spoken and written form. Causally effective are the imagined realities (trust) of ideologies, money, laws, rules of sport, values, all abstract but causal. Ultimately our understanding of meaning and purpose.
So it is the goals that determine outcomes and the initial conditions are irrelevant.
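To make non-adaptive information control concrete, here is a minimal thermostat sketch (in Python; the set-point, heating and cooling rates are arbitrary illustrative values, not taken from Ellis): the final state is fixed by the goal, not by where the system starts.

```python
# A feedback loop steers the system towards a goal (the set-point),
# whatever the initial conditions.
def run_thermostat(initial_temp: float, setpoint: float = 20.0,
                   steps: int = 200) -> float:
    temp = initial_temp
    for _ in range(steps):
        heater_on = temp < setpoint          # higher-level 'goal' comparison
        temp += 0.5 if heater_on else -0.3   # crude lower-level dynamics
    return temp

# Widely different initial conditions settle near the same goal state.
for start in (-10.0, 15.0, 35.0):
    print(f"start {start:6.1f} -> final {run_thermostat(start):.1f}")
```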
In living systems the best example of downward causation is adaptation in which it is the environment that is a major determinant of the evolution in the structure of the DNA.
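Adaptive selection can be sketched just as simply (again a hypothetical illustration, not Ellis’s own example): a population of random strings is winnowed by an ‘environment’ (the fitness function), so it is the higher-level selection criterion, not the initial strings, that shapes the outcome.

```python
# Variation plus selection: the environment (fitness) winnows random variants.
import random

TARGET = "TOPDOWN"   # a stand-in for the environmental criterion
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate: str) -> int:
    """How well a variant fits the environment; selection sees only this."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.1) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:20]                       # selection discards variants
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

print(f"reached '{population[0]}' after {generation} generations")
```

From widely divergent random starting populations the same outcome is reached, echoing the convergence noted above.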
For higher levels to be causally effective there must be some causal access (causal slack) at lower levels, and this comes from: the way the system is structured, constraining lower-level dynamics; openness, allowing new information across the boundary and affecting local conditions by changing the nature of the lower elements (as in cell differentiation, and humans in society); and micro-indeterminism combined with adaptive selection.
Top-down causation in computers
What happens in this hierarchy? Top-down causation occurs when the boundary conditions (the extremes of an independent variable) and initial conditions (the lowest values of the variable) determine the consequences.
Top-down causation is especially prevalent in biology but occurs also in digital computers – the paradigm of mechanistic algorithmic causation – and it operates without contradicting the causal powers of the underlying microphysics. Understanding the emergence of genuine complexity out of the underlying physics depends on recognising this kind of causation.
Computer systems illustrate downward causation, where the software tells the hardware what to do – and what the hardware does will depend on the software. What drives the machine is the abstract informational logic in the system, not the physical states on the USB stick. The context matters.
Abstract causation
Non-physical entities can have causal efficacy. High levels drive the low levels in the computer. Bottom levels enable but do not cause. Program is not the same as its instantiations. Which of the following are abstract? Which are real? Which exist? Which can have causal influence: values, moral precepts, social laws, scientific laws, numbers, computer programs, thoughts, equations. Can something have causal influence and not exist? In what sense?
A software program is abstract logic: it is not stored electronic states in computer memory, but their precise pattern (a higher level relation) not evident in the electrons themselves.
Logical relations
High level algorithms determine what computations occur in an abstract logic that cannot be deduced from physics.
Universal physics
The physics of a computer does not restrict the logic, data, and computation that can be used (except the processing speed). It facilitates higher-level actions rather than constraining them.
Multiple realization
The same high level logic can be implemented in many ways (electronic transistors and relays, hydraulic valves, biological molecules) demonstrating that lower level physics is not driving the causation. Higher level logic can be instantiated in many ways by equivalence classes of lower level states. For example, our bodies are still the same, they are still us, even though the cells are different from those we had 10 years ago. The letter ‘p’ on a computer may be bold, italic, capital, red, 12 pt, light pixels or printed ink … but still the letter ‘p’. The higher level function drives the lower level interactions which can happen in many different ways (information hiding) so a computer demonstrates the emergence of a new kind of causation, not out of the underlying physics but out of the logic of higher level possibilities. Complex computer functioning is a mix of bottom-up causation and contextual effects.
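The point can be made in code (a small hypothetical sketch; the class names are invented for illustration): one high-level operation, several interchangeable low-level realizations forming an equivalence class.

```python
# Three 'lower-level' realizations of one 'higher-level' operation: addition.
class ArithmeticAdder:
    def add(self, a: int, b: int) -> int:
        return a + b                          # the processor's arithmetic unit

class BitwiseAdder:
    def add(self, a: int, b: int) -> int:
        while b:                              # ripple-carry, transistor-style
            a, b = a ^ b, (a & b) << 1
        return a

class CountingAdder:
    def add(self, a: int, b: int) -> int:
        return len([None] * a + [None] * b)   # adding by physical aggregation

# The higher-level outcome is identical whichever realization runs.
for implementation in (ArithmeticAdder(), BitwiseAdder(), CountingAdder()):
    assert implementation.add(2, 2) == 4
print("all three realizations agree")
```

Which implementation executes makes no difference at the higher level: exactly the sense in which the lower-level physics is not doing the explanatory work.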
Thoughts, like computer programs and data, are not physical entities?
How can there be top-down causation when the lower-level physics determines what can happen given the initial conditions? Well, simply by placing constraints on what is possible at the lower level; by changing properties when in combination as when an independent hydrogen molecule combines with oxygen to form water; where low level entities cannot exist outside their higher-level context, like a heart without a body; when selection creates order by deleting or constraining lower-level possibilities; when random fluctuation and quantum indeterminacy affect low level physics.
Supervenience
Supervenience is an ontological relation in which upper-level properties are determined by, or depend on (supervene on), lower-level properties: say, social on psychological, psychological on biological, biological on chemical, and so on. Do mental properties supervene on neural properties? Properties of one kind are dependent on (but not determined by, in a causal sense) those of another kind. For example, can the same mental state be supported by different brain states? (Yes.) Why, then, do we need statements about mental states if we know the underlying brain states?
A set of properties A supervenes upon another set B when no two things can differ with respect to A-properties without also differing with respect to their B-properties: there cannot be an A-difference without a B-difference. (Stanford Encyclopaedia of Philosophy)
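Stated formally (a minimal rendering of the definition just quoted, with x and y ranging over the relevant objects):

$$\forall x\,\forall y\;\Big[\big(\forall P\in B:\; P(x)\leftrightarrow P(y)\big)\;\rightarrow\;\big(\forall Q\in A:\; Q(x)\leftrightarrow Q(y)\big)\Big]$$

That is, B-indiscernibility entails A-indiscernibility: fix all the B-properties and the A-properties are fixed too.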
((Everyone agrees that reduction requires supervenience. This is particularly obvious for those who think that reduction requires property identity, because supervenience is reflexive. But on any reasonable view of reduction, if some set of A-properties reduces to a set of B-properties, there cannot be an A-difference without a B-difference. This is true both of ontological reductions and what might be called “conceptual reductions”—i.e., conceptual analyses.
The more interesting issue is whether supervenience suffices for reduction (see Kim 1984, 1990). This depends upon what reduction is taken to require. If it is taken to require property identity or entailment, then, as we have just seen (Section 3.2), even supervenience with logical necessity is not sufficient for reduction. Further, if reduction requires that certain epistemic conditions be met, then, once again, supervenience with logical necessity is not sufficient for reduction. That A supervenes on B as a matter of logical necessity need not be knowable a priori.))
SUMMARY
Reductionism regards a partial cause as a whole cause, with analysis passing down to the smallest scales in physics and causation then proceeding up from these levels, this being taken as a representation both of reality and of the process of science. Recent work has muddied the simplicity of this approach in many ways. For example, current inflationary cosmology suggests that the origin of the galaxies is a consequence of random or uncertain quantum fluctuations in the early universe. If this is the case then prediction becomes a false hope even at this early stage, quite apart from any other chaotic factors arising from complexity. Reductionism does not deny emergent phenomena but claims the ability to understand phenomena completely in terms of constituent processes.
Biology presents us with many fascinating examples of how organic complexity arises: for example, how the iteration of simple rules can give rise to complexity, as with the fractal genes that produce the scale-free bifurcating systems of the pulmonary, nervous and blood circulatory systems. In complex systems there is often strength in quantity, randomness, local interactions with simple iterated rules, and gradients, and generalists operate more effectively than specialists. Such systems are directed towards optimal adaptation: multiple individuals acting randomly take on some kind of spontaneous ordering or structuring.
When some elements combine they take on a completely new and independent character.
These are presented as systems operating ‘bottom-up’, the ‘parts’ being unaware of the ‘whole’ that has emerged, much as Wikipedia emerges from a grass-roots base ‘bottom-up’ rather than scholarly entries ‘top-down’.
We cannot predict the future structure of DNA coding given its own structure – this is determined by the environment.
In language we have letters, words, sentences, paragraphs, exhibiting increasing complexity and inclusiveness, with meaning an emergent property. Meaning exerts a top-down constraint on the choice of words, but the words constrain the meanings that can be expressed.
Emergent properties cannot be explained at a ‘lower level’ – they are not present at ‘lower levels’. Rather than showing that higher-level activities do not exist, it is the task of mechanistic explanation to show how they arise from the parts.
Fundamentalism suggests that a partial cause is the whole cause.
In sociology although agency seems to ultimately derive from the individual we nevertheless live within the structure of social networks of varying degrees of complexity. Though a problem like obesity can be investigated by the sociologist in terms of the supply and nature of foods, marketing, sedentary lifestyles and so on, weight variation can also be strongly correlated with social networks. There appears to be a collective aspect to the problem of obesity. One way of looking at this is to realise that change is not always instigated by altering the physical composition of a whole but by changing the rules of operation: in the case of society this could be social laws or customs of various kinds.
This has long been a source of ambiguity in sociological methodology. Adam Smith claimed that common good could be achieved through the selfish activities of individuals (methodological individualism) while Karl Marx and Emile Durkheim saw outcomes as a result of collective forces like class, race, or religion (methodological holism). Modern examination of social networks can combine these approaches by regarding individuals as nodes in a web of connections.
Does emergence illegitimately get something from nothing? Are the properties and organisation subjective qualities?
We assess life-related systems in terms of both structure and function. Structure relates to parts, function mostly to wholes. Perhaps strangely, we perceive causation as being instigated by either parts (structure) or wholes (function).
The characteristics of emergent or complex systems include: ‘self-regulation’ by feedback loops; and a large number of variables that are causally related but in a ‘directed’ way, exhibiting some form of natural selection through differential survival and reproduction, or unusually constrained path-dependent outcomes, as occurs in markets. Emergence may be a particular consequence of diversity and complexity, organisation and connectivity.
Economist Jeffrey Goldstein in the journal Emergence isolates key characteristics of emergence: it involves radical novelty; coherence (sometimes as ‘self-regulation’); integrity or ‘wholeness’; it is the product of a dynamic process (it evolves); it is supervenient (lower-level properties of a system determine its higher level properties).
Chaos theory draws attention to the way that complex systems are aperiodic and unpredictable (chaotic), and to the way that variability in a complex system is not noise but the way the system works. Chaos in dynamical systems is sensitive dependence on initial conditions, and the iteration of simple patterns can produce complexity.
[Are levels fractal – is there the same noise at each level?] Simplicity involves few variables; disorganised complexity involves many variables that can be averaged, and in which the whole is greater than the sum of the parts; organised complexity is where the behaviour is not simply the sum of the parts.
Chaos
Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions—an effect which is popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for such dynamical systems, rendering long-term prediction impossible in general.[1] This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved.[2] In other words, the deterministic nature of these systems does not make them predictable.[3][4] This behavior is known as deterministic chaos, or simply chaos. This was summarised by Edward Lorenz as follows:[5]
Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
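A standard illustration is the logistic map (sketched here in Python; the parameter value r = 4 is the usual textbook choice for the fully chaotic regime, not something from the quoted sources). Two trajectories whose starting points differ by one part in a billion diverge completely within a few dozen iterations:

```python
# Logistic map x -> r*x*(1-x): fully deterministic, yet an initial
# difference of 1e-9 is amplified until the trajectories are unrelated.
r = 4.0
x, y = 0.400000000, 0.400000001

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
```

The present (x at step 0) determines the future exactly, but the approximate present does not approximately determine it – Lorenz’s point above.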
Fractals
Self-similarity at different scales is mathematically created through iteration. It is evident in biological networks – branching, veins, the nervous system, roots, lungs: a form of optimal space-filling that can also be applied to human networks. It is also a means of packing information into a small space, as in the brain.
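A minimal sketch of iteration producing self-similar structure (a bifurcating ‘tree’ printed as text; the depth and the 0.7 scaling factor are arbitrary choices for illustration):

```python
# Each branch spawns two smaller copies of itself: the same rule at every scale.
def branch(length: float, depth: int, indent: int = 0) -> None:
    if depth == 0:
        return
    print(" " * indent + f"branch of length {length:.2f}")
    for _ in range(2):                     # bifurcation into two sub-branches
        branch(length * 0.7, depth - 1, indent + 2)

branch(length=1.0, depth=4)
```

One short rule, applied recursively, generates the whole branching pattern – the sense in which ‘fractal genes’ could encode lungs or vasculature cheaply.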
Citations & notes
Complex systems
Part of the modern scientific enterprise is to examine and provide explanations for what happens in complex systems like the human body and human societies.
Chaos theory
Chaos theory noted that in complex systems there is often minute and unpredictable variability, making the system non-linear, non-additive, non-periodic and chance-prone. Though such a system is predictable over short time spans, long-term predictions are not possible. Minute differences in a particular state of a complex system can become amplified into large and unpredictable effects (a butterfly flapping its wings changing a major weather pattern: the ‘butterfly effect’). The unpredictability is scale-free, a fractal (fractional dimension).
Scale-free systems
These are often produced as a logical and most efficient solution to a biophysical problem.
Some systems and patterns are scale-free: they look, and arguably behave, the same at any scale. For example, bifurcating systems, where a system repeatedly divides into two like the branching of a tree, occur in neurons, the circulatory system (where no cell is more than about five cells from the circulatory system, although that system makes up no more than 5% of total body mass) and the pulmonary system.
Scale-free systems can be generated from simple rule(s) and one property of such systems is that the variability that occurs at any particular scale is proportionally the same: it does not decrease as the scale is reduced.
Much of the complexity of living systems can be accounted for by ‘fractal genes’, which code complex systems with simple rules; mutations in fractal genes can be recognised easily.
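Another well-known case of a simple rule generating scale-free structure is preferential attachment (a sketch under that assumption; it is not tied to the ‘fractal gene’ claim above): each new node in a growing network links to an existing node with probability proportional to that node’s connections, and a heavy-tailed, power-law-like degree distribution emerges.

```python
# Preferential attachment: 'the rich get richer' produces a scale-free network.
import random
from collections import Counter

edges = [(0, 1)]
degree = Counter({0: 1, 1: 1})

for new_node in range(2, 5000):
    # A random endpoint of a random edge is chosen with probability
    # proportional to its degree - the preferential-attachment rule.
    target = random.choice(random.choice(edges))
    edges.append((new_node, target))
    degree[new_node] += 1
    degree[target] += 1

counts = Counter(degree.values())
for k in sorted(counts)[:8]:
    print(f"degree {k:3}: {counts[k]:5} nodes")   # counts fall off in a heavy tail
```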
Simple rules can arise in nature through a variety of sources:
• Attraction-repulsion (gives rise to patterns that we see, for example, in urban planning)
• Swarm intelligence (like ants finding the shortest distance between a number of points)
• Power-law distributions (which are fractals – the neurons of the cortex follow a power-law distribution of its dendrites, making it an ideal structure as a neural network for pattern recognition)
• Wisdom of the crowd, when the crowd’s members are truly expert and unbiased (or evenly biased)
• One feature of complex systems is that you cannot predict a finishing state from a starting state of the system – but often from widely divergent starting states we see a convergence to a particular state, as in convergent evolution, where biologically different organisms take on similar forms in particular environments (or as with the independent origins of agriculture).
Emergence
Primacy of explanation
For example, in providing explanations that ‘reduce’ complexity we can place undue emphasis on particular ‘levels’ or frames of explanation. Human behaviour can, for instance, be explained in terms of its effect on other people; in terms of the hormones that drive it; in terms of the genes that trigger the production of the hormones; in terms of the processes going on in the brain when the behaviour occurs; or even in terms of evolutionary theory, long-term selective pressures and reproductive outcomes. In other words, when we ask for the reason for a particular kind of behaviour we will probably get different answers from a sociologist, evolutionary biologist, evolutionary psychologist, clinical psychologist, anatomist, molecular biologist, behavioural geneticist, endocrinologist, neuroscientist or someone trained in some other discipline. The important point is that there is no privileged perspective that entails all the others; each is equally valid, and which explanation is most appropriate will depend on the particular circumstances.
Nature and nurture form a subtly nuanced interaction, and there is a similarly subtle causal interplay between brain and body: the brain can influence behaviour, and the body can influence the brain.
Complexification & prediction
Given certain conditions in the universe, certain other consequences will follow. Though complexity is not inevitable – there is in fact a universal physical law of entropy, a tendency to randomness – it is a historical fact that:
1. As things get more complex they become less predictable.
2. Quantity can produce quality (chimps share about 98% of our DNA; half the difference is olfactory, the rest is the quantity of neurons and of genes that release the brain from genetic influence)
3. The simpler the constituent parts, the better
4. More random noise produces better solutions in networks
5. There is much to be gained from the power of gradients, attraction and repulsion
6. Generalists work better than specialists (they are more adaptive)
7. All emergent adapted systems ‘work’ from the bottom up, not the top down: they arise without a blueprint or anyone to construct them (e.g. Wikipedia)
8. There is no ‘ideal’ or optimal complex system except insofar as it is the ‘best adapted’ which is a general not a precise condition.
Hierarchies and heterarchies.
Citations & notes
[1] Ellis 2008. http://www.mth.uct.ac.za/~ellis/Top-down%20Ellis.pdf. See also http://humbleapproach.templeton.org/Top_Down_Causation/
General references
Ellis, G. 2008. On the nature of causation in complex systems. An extended version of the RSSA Centenary Transactions paper.
Ellis, G. 2012. Recognising top-down causation. FQXi essay. http://fqxi.org/community/forum/topic/1337
Gleick, J. Chaos: Making a New Science.
Sapolsky, R. Chaos and Complexity (YouTube); see also his Introduction to Human Behavioural Biology lecture series.
[1] Weaver, W. 1948. Science and Complexity. American Scientist 36:536
http://people.physics.anu.edu.au/~tas110/Teaching/Lectures/L1/Material/WEAVER1947.pdf
If we explain something by considering it as the effect of some preceding cause, then this chain of cause-and-effect either regresses ad infinitum or ends in a primordial cause which, since it cannot be related to a preceding cause, does not explain anything. Hume suggested that when billiard balls collide there is not something additional to the collision – ‘the cause’, some external factor or force acting on the billiard balls – the balls simply collide.
Free will entails purpose and meaningful choice.
Ontology is clear at all levels except the quantum level.
Origin (emergence) of complexity: specific outcomes can be achieved in many low-level implementations – that is because it is the higher level that is shaping outcomes (is causal). Higher levels constrain what lower levels can do, and this creates new possibilities; channelling allows complex development. A non-adaptive cybernetic system with a feedback loop – the use of information flow, as in a thermostat – controls the physical structure so that the goals determine outcomes and the initial conditions are irrelevant. Organisms have goals, and a goal is not a physical thing. Adaptation is a selection state: DNA sequences are determined by the context, the environment. Selection is a process of winnowing that retains important information.
Where do goals come from? Goals are adaptively selected in organisms. The adaptively selecting mind is a special case, in which symbolic representations take a role in determining how goals work. The plan of an aircraft is abstract, and the aircraft could not work without it. Money is causally effective because of its social system. Maxwell’s equations – a theory – gave us televisions and smartphones.
The brain constrains your muscles. Pluripotent cells are directed by their context. Because of randomness, selection can take place. A key analytical idea is that of functional equivalence classes: many low-level states that are all equivalent to, or correspond to, a single high-level state. It is higher-level states that get selected for when adaptation takes place. Whenever you can find many low-level states corresponding to a high-level state, this indicates that top-down causation is going on.
You must acknowledge the entire causal web. There are always multiple explanations – top-down and bottom-up can both be true at the same time. Why do aircraft fly? A bottom-up physical explanation might refer to aerodynamics; a top-down explanation might refer to the design of the plane, its pilot, and so on.
When we consider cause as relating to context we might also consider Aristotle’s categorisation of cause into four kinds:
Material – lower-level (physical) cause – ‘that out of which’
Formal – same-level (immediate) cause – ‘what it is to be’: the arrangement, shape or appearance, pattern or form which, when present, makes matter into a particular type of thing; its organisation
Efficient – immediate higher (contextual) – ‘source of change’
Final cause – the ultimate higher level cause – ‘that for the sake of which’
When does top-down causation take place? Bottom-up suffices when you don’t need to know the context: the perfect gas law, black-body radiation. But the vibrations of a drum depend on the container; similarly cars, and so on.
Randomness at the bottom level is needed for selection to occur.
Like any explanation, biology itself is contextual: it uses the world of physics as background noise and then explains its own domain as best it can.
Martin Nowak, Harvard University Professor of Mathematics and Biology.
Explanation, testing and description.
When we sense something, we are receiving a physically existent phenomenon, i.e. one that exists independently of our sensing of it.
Scope
Continua
Selective perception and cognition have the potential to create discrete categories out of objects that in nature we know to be continua (e.g. the continuous colour spectrum split into individual colours, or the continuous sound waves of spoken language broken up into words and meanings) or to make continua out of things that in nature we know to be discrete (as we do with all universals like ‘tree’, ‘table’ or ‘coat’). We can underestimate how different entities are when they occur in the same mental category and overestimate how similar they might be when placed in different categories. When using reducing categories we can lose sight of the big picture.
Causation, explanation, justification
Why did the hooligan smash the shop window?
Because in her evolutionary history violence was a useful adaptation
Because political parties are too soft-handed about law and order nowadays
Because she came from a rough neighbourhood
Because the police were out on strike
Because there was nobody around
Because of the negative influence of her peer group
Because her boyfriend told her to
Because her parents failed to teach her to respect property
Because her body produced a temporary surge in testosterone
Because neurons were exploding in the anger region of her brain
Because her genes indicate that she was predisposed to violence
Are all of these simultaneously true and relevant or can they be prioritised in some way? If prioritised – on what grounds?
What matters most in science – explanation, testing, or description?
So what we can see at the largest and smallest scales is approaching the limit of what will ever be possible, except for refining the details.
Anton Biermans
If we understand something only if we can explain it as the effect of some cause, and understand this cause only if we can explain it as the effect of a preceding cause, then this chain of cause-and-effect either goes on ad infinitum, or it ends at some primordial cause which, as it cannot be reduced to a preceding cause, cannot be understood by definition.
Causality therefore ultimately cannot explain anything. If, for example, you invent Higgs particles to explain the mass of other particles, then you’ll eventually find that you need some other particle to explain the Higgs, a particle which in turn also has to be explained etcetera.
If you press the A key on your computer keyboard, you don’t cause the letter A to appear on your computer screen but just switch that letter on with the A tab, just as when you press the door handle you don’t cause the door to open, but just open it. Similarly, if I let a glass fall out of my hand, I don’t cause it to break as it hits the floor; I just use gravity to smash the glass, so there’s nothing causal in this action.
Though chaos theory is often thought to say that the antics of a moth in one place can cause a hurricane elsewhere, if an intermediary event can cancel the hurricane then the moth’s antics can only be a cause in retrospect, if the hurricane actually does happen – so it cannot be said to cause the hurricane at all. Though events certainly are related, they cannot always be understood in terms of cause and effect.
The flaw at the heart of Big Bang cosmology is that, in the concept of cosmic time (the time passed since the mythical bang), it assumes that the universe lives in a time continuum not of its own making: it presumes the existence of an absolute clock, a clock we can use to determine what, in an absolute sense, precedes what.
This originates in our habit in physics of thinking about objects and phenomena as if looking at them from an imaginary vantage point outside the universe – as if it were scientifically legitimate to look over God’s shoulder at His creation, so to say.
However, a universe which creates itself out of nothing, without any outside interference, does not live in an externally given time continuum but contains and produces all time within itself: in such a universe there is no clock we can use to determine what precedes what in an absolute sense – what is the cause of what.
For a discussion why big bang cosmology describes a fictitious universe, see my essay ‘Einstein’s Error.’
Paul
There is no experiment which says that an act is good or bad. There are no units of good and bad, no measurements.
Cosmic context: physics cannot tell us the outcome, because quantum uncertainty means the physical outcome is indeterminate.
Tidal wave patterns in the sand or the mesmerising movement of flocks of birds, which are governed by a local rule, cannot compare with organic complexity.
The problem of prediction
At around 50 words we are struggling to understand a sentence.
Complexity
Since the 1970s these issues have been subsumed by the study of complexity theory and complex systems – everything from organic complexity and neural networks, to chaos theory, the internet and so on. At the core of the scientific enterprise is causation and are the causal relations between phenomena and it is causation acting within hierarchical systems that is, for physicist George Ellis, the source of complexity in the universe, as summarised in his paper On the Nature of Causation in Complex Systems which I outline briefly here.[1] In this paper Ellis explains the idea of ‘top-down’ causation.
http://humbleapproach.templeton.org/Top_Down_Causation/
Causation
We can define causation simply as ‘a change in X resulting in a reliable and demonstrable change in Y in a given context’.
Top-down causation
Higher levels are causally real because they have causal influence over lower levels. It is top-down causation that gives rise to complexity – such as computers and human beings.
Ellis claims that bottom-up causation is limited in the complexity it can produce, genuine complexity requiring a reversal of information flow from bottom-up to top-down, some coordination of effects. Tidal wave patterns in the sand or the mesmerising movement of flocks of birds which are governed by a local rule cannot compare with organic complexity.
Top-down causation in computers
What happens in this hierarchy. Top-down causation occurs when the boundary conditions (the extremes of an independent variable) and initial conditions (lowest values of the variable) determine consequences.
Top-down causation is especially prevalent in biology but also in digital computers – the paradigm of mechanistic algorithmic causation such that it is possible without contradicting the causal powers of the underlying micro physics. Understanding the emergence of genuine complexity out of the underlying physics depends on recognising this kind of causation.
Abstract causation
Non-physical entities can have causal efficacy. High levels drive the low levels in the computer. Bottom levels enable but do not cause. Program is not the same as its instantiations.
A software program is abstract logic: it is not stored electronic states in computer memory, but their precise pattern (a higher level relation) not evident in the electrons themselves.
Logical relations
High level algorithms determine what computations occur in an abstract logic that cannot be deduced from physics.
Universal physics
The physics of a computer does not restrict the logic, data, and computation that can be used (except the processing speed). It facilitates higher-level actions rather than constraining them.
Multiple realization
The same high-level logic can be implemented in many ways (electronic transistors and relays, hydraulic valves, biological molecules), demonstrating that the lower-level physics is not driving the causation. Higher-level logic can be instantiated in many ways by equivalence classes of lower-level states. For example, our bodies are still the same – they are still us – even though the cells are different from those we had 10 years ago. The letter ‘p’ on a computer may be bold, italic, capital, red, 12 pt, light pixels or printed ink … but it is still the letter ‘p’. The higher-level function drives the lower-level interactions, which can happen in many different ways (information hiding), so a computer demonstrates the emergence of a new kind of causation, not out of the underlying physics but out of the logic of higher-level possibilities. Complex computer functioning is a mix of bottom-up causation and contextual effects.
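To make the idea of equivalence classes concrete, here is a minimal sketch (my illustration, not Ellis’s) in Python: many distinct lower-level states – fonts, sizes, colours – all realize the same higher-level state, the letter ‘p’, and higher-level processing sees only that shared identity.

```python
# A minimal sketch (not from the source) of multiple realization:
# one high-level item - the letter 'p' - realized by many different
# lower-level states, which together form an equivalence class.

from dataclasses import dataclass

@dataclass(frozen=True)
class Glyph:
    char: str        # the higher-level identity
    font: str        # lower-level details that can vary freely
    size_pt: int
    colour: str
    bold: bool = False
    italic: bool = False

realizations = [
    Glyph('p', 'Times', 12, 'black', bold=True),
    Glyph('p', 'Helvetica', 9, 'red', italic=True),
    Glyph('p', 'Courier', 24, 'blue'),
]

# The 'information hiding' step: higher-level processing sees only
# the equivalence class (the character), not the physical details.
def higher_level_state(g: Glyph) -> str:
    return g.char

assert len({higher_level_state(g) for g in realizations}) == 1
print("All realizations collapse to one higher-level state:",
      higher_level_state(realizations[0]))
```

However the glyph is physically instantiated, the higher-level identity is unchanged: the details are hidden from the level above.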
Are thoughts, like computer programs and data, non-physical entities?
How can there be top-down causation when the lower-level physics determines what can happen given the initial conditions? Quite simply: by placing constraints on what is possible at the lower level; by changing properties in combination, as when hydrogen combines with oxygen to form water; where low-level entities cannot exist outside their higher-level context, like a heart without a body; when selection creates order by deleting or constraining lower-level possibilities; and when random fluctuation and quantum indeterminacy affect the low-level physics.
SUMMARY
Classical reductionism
Given the initial conditions and sufficient information we can predict future states: outcomes are determinate. The physicist Paul Dirac claimed that ‘chemistry is just an application of quantum physics’. This appears to be physically untrue in many ways. Current inflationary cosmology suggests that the origin of the galaxies is a consequence of random or uncertain quantum fluctuations in the early universe. If this is the case then prediction becomes a false hope even at this early stage, quite apart from any other chaotic factors arising from complexity.
The origin of novelty & complexity
Biology presents us with many fascinating examples of how organic complexity arises, for example, how the iteration of simple rules can give rise to complexity as with the fractal genes that produce the scale-free bifurcating systems of the pulmonary, nervous and blood circulatory systems.
What is not clear is why this should be considered in some way special or unaccounted for by molecular interaction.
Can a reductionist view of reality account for the origin of complexity: can examining parts explain how the whole hangs together?
Clearly material organisational novelty must arise since the universe which was once undifferentiated plasma now contains objects as structurally complex as living organisms.
When some elements combine they take on a completely new and independent character. One example of emergence is the way multiple individuals acting randomly can assume some kind of spontaneous ordering or structuring (see the sketch below).
Within the theme of emergence there is often an observation of the teleonomic character of evolutionary novelty and of the way organic systems are functional wholes: the teleonomy of an organism acting under natural selection shows how novel complex outcomes can be achieved from a simple algorithm or underlying set of rules.
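As a toy illustration of spontaneous ordering (a sketch of my own, loosely in the spirit of Schelling’s segregation model, not an example from the text): agents on a ring follow one purely local rule, and a clustered global pattern emerges that no individual aimed at.

```python
# A toy sketch (mine, in the spirit of Schelling's segregation model) of
# spontaneous ordering: agents follow one local rule, yet a global
# clustered pattern emerges that no agent intended.

import random

random.seed(0)
N = 60
ring = [random.choice('AB') for _ in range(N)]

def unhappy(i):
    """An agent is unhappy if both immediate neighbours are of the other type."""
    return ring[i] != ring[i - 1] and ring[i] != ring[(i + 1) % N]

print('before:', ''.join(ring))
for _ in range(5000):
    i, j = random.randrange(N), random.randrange(N)
    # Two unhappy agents of different types trade places; each ends up
    # surrounded by its own type, so local discontent drives global order.
    if ring[i] != ring[j] and unhappy(i) and unhappy(j):
        ring[i], ring[j] = ring[j], ring[i]
print('after: ', ''.join(ring))   # like types now sit in runs (clusters)
```

No agent has any view of the whole ring; the ordering is a property of the collective dynamics.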
Reductionism does not deny emergent phenomena but claims the ability to understand phenomena completely in terms of constituent processes.
Top-down causation
Top-down causation is more common than bottom-up.
We cannot predict the future structure of DNA coding given its own structure – this is determined by the environment.
Emergent systems are often presented as operating ‘bottom-up’, the ‘parts’ being unaware of the ‘whole’ that has emerged, much as Wikipedia emerges ‘bottom-up’ from a grass-roots base rather than ‘top-down’ from scholarly entries.
Hierarchy of causation
The idea of hierarchy can add further complication and confusion through the notions of ‘bottom-up’ and ‘top-down’ causality, and through the fact that biological systems have a special kind of history as a consequence of the teleonomic character of natural selection, which leads us to ask about function: what is the whole, or the part, for (implying the future)?
In complex systems there is often strength in quantity, randomness, local interactions with simple iterated rules, and gradients, and generalists operate more effectively than specialists. Such systems are directed towards optimal adaptation.
Are such systems predictable – is there a blueprint from the ‘start’? In evolution we see convergence from different starting points.
Hierarchy
Emergent properties cannot be explained at a ‘lower level’ – they are not present at ‘lower levels’. Rather than showing that higher-level activities do not exist, it is the task of mechanistic explanation to show how they arise from the parts.
Examples of emergence come from many disciplines.
In language we have letters, words, sentences and paragraphs exhibiting increasing complexity and inclusiveness, with meaning an emergent property. Meaning exerts a top-down constraint on the choice of words, but the words in turn constrain the meanings that can be expressed.
When a living organism is split up into its component molecules there is no remaining ingredient such as ‘the spark of life’ or the ‘emergence gene’ so emergent properties do not have some form of separate existence. And yet emergent properties are not identical to, reducible to, predictable from, or deducible from their constituent parts – which do not have the properties of the whole. The brain consists of molecules but individual molecules do not think and feel, and we could not predict the emergence of a brain by simply looking at an organic molecule.
In sociology although agency seems to ultimately derive from the individual we nevertheless live within the structure of social networks of varying degrees of complexity. Though a problem like obesity can be investigated by the sociologist in terms of the supply and nature of foods, marketing, sedentary lifestyles and so on, weight variation can also be strongly correlated with social networks. There appears to be a collective aspect to the problem of obesity. One way of looking at this is to realise that change is not always instigated by altering the physical composition of a whole but by changing the rules of operation: in the case of society this could be social laws or customs of various kinds.
This has long been a source of ambiguity in sociological methodology. Adam Smith claimed that common good could be achieved through the selfish activities of individuals (methodological individualism) while Karl Marx and Emile Durkheim saw outcomes as a result of collective forces like class, race, or religion (methodological holism). Modern examination of social networks can combine these approaches by regarding individuals as nodes in a web of connections.
Does emergence illegitimately get something from nothing? Are the properties and organisation subjective qualities?
We assess life-related systems in terms of both structure and function. Structure relates to parts, function mostly to wholes. Perhaps strangely, we perceive causation as being instigated by either parts (structure) or wholes (function).
The characteristics of emergent or complex systems include ‘self-regulation’ by feedback loops, and a large number of variables that are causally related but in a ‘directed’ way, exhibiting some form of natural selection through differential survival and reproduction, or unusual constrained path-dependent outcomes, as occurs in markets. Emergence may be a particular consequence of diversity and complexity, organisation and connectivity.
Economist Jeffrey Goldstein, in the journal Emergence, isolates key characteristics of emergence: it involves radical novelty; coherence (sometimes as ‘self-regulation’); integrity or ‘wholeness’; it is the product of a dynamic process (it evolves); and it is supervenient (the lower-level properties of a system determine its higher-level properties).
To summarise: when considering wholes and parts in general we need to consider specific instances. Some wholes are more or less straightforwardly additive (sugar grains aggregated into a sugar lump) but other wholes grade into kinds that are not so amenable to reduction – consider the music produced by an orchestra, carbon dioxide, a language, a painting, an economic system, the human body, consciousness, and the sum 2 + 2 = 4.
Part of the disagreement between reduction and emergence can be explained by regarding wholes as having parts that are more or less interdependent. At one end of the spectrum are aggregates and at the other living organisms. As we shift cognitive focus the relationship between wholes and parts can display varying degrees of interdependence: removing a molecule from an ant body is unlikely to be problematic although removing an organ could be, while removing an ant from its colony is probably unproblematic. Wholes sometimes only exist because of the precise relations of the parts – in other wholes it does not matter. Sometimes characteristics appear ‘emergent’ (irreducible as in organised wholes) and sometimes they appear ‘reducible’ (as in aggregative wholes).
Chaos
Chaos theory draws attention to the way that complex systems are aperiodic and unpredictable (chaotic), and to the way that variability in a complex system is not noise but the way the system works.
Methodological reductionism assumes a causal relationship between the elements of structure and higher-order constructs (‘function’), holding that the right way, or the only way, to understand the whole is to understand the elements that compose it. The criticism of this assumption runs deep: it claims not only that the whole cannot be understood by looking only at the parts, but that the parts themselves cannot be fully understood without understanding the whole. To understand what a neuron does, one must understand how it contributes to the organization of the brain (or, more generally, of the living entity): you cannot understand a phenomenon just by looking at its elements (at whatever scale they are defined); you must also take into account all the relationships between them.
Complex systems
What kind of scientific questions will we want to answer in the 21st century?
There appears to have been a change in the character of scientific questions towards the end of the 20th century: science in the 21st century is dealing much more with complex systems. People wish to know what the weather will be like in a fortnight’s time; to what extent climate change is anthropogenic; the probability that they might die of some heritable disease; the degree of risk related to the use of a particular genetically modified organism; and whether interest rates will be higher in six months’ time and, if so, by how much.
Fundamentalism suggests that a partial cause is the whole cause. Why does a plane fly? (Air molecules under the wing; it has a pilot; it was designed to fly; there is a timetable; the airline must make a profit.)
Historical background
Few-variable simplicity
In very general terms the physical sciences of the 17th to 19th centuries were characterised by systems of constant conditions involving very few variables: this gave us simple physical laws and principles about the natural world that underpinned the production of the telephone, radio, cinema, car and plane.[1]
In contrast, the processes going on in biological systems seemed to involve many subtly interconnected variables that were difficult to measure and whose behaviour was not amenable to the formulation of law-like patterns similar to those of the physical sciences. Up to about 1900, then, much of biological science was essentially descriptive, with meagre analytical, mathematical or quantitative foundations.
Disorganised complexity
After 1900, with the development of probability theory and statistical mechanics, it became possible to take into account regularities emerging from a vast number of variables working in combination: though the movement of 10 billiard balls on a table may be difficult to predict, when there are extremely large numbers of balls it becomes possible to answer and quantify general questions about their collective behaviour (how frequently they will collide, how far each will move on average before being hit, and so on) even when we have no idea of the behaviour of any individual ball. In fact, as the number of variables increases certain calculations become more accurate: say, the average frequency of calls to a telephone exchange, or the likelihood of any given number being rung by more than one person. This allows, for example, insurance companies and casinos to calculate odds and ensure that the odds are in their favour. It applies even when the individual events (say the age of death) are unpredictable, unlike the predictable way a billiard ball behaves. Much of our knowledge of the universe and natural systems depends on calculations of such probabilities.
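A small simulation makes the point about averages (a sketch of mine, assuming Poisson-like call arrivals at a notional exchange, rather than an example from the text): no single minute is predictable, but the long-run average is.

```python
# A small sketch of 'disorganised complexity': individual events are
# unpredictable, but averages over large numbers become sharply predictable.

import random

random.seed(1)

def mean_calls_per_minute(n_minutes, rate=3.0):
    """Crude arrival model: each minute is 1000 tiny independent chances."""
    total = 0
    for _ in range(n_minutes):
        total += sum(random.random() < rate / 1000 for _ in range(1000))
    return total / n_minutes

for n in (10, 100, 2000):
    print(f"{n:>5} minutes: mean = {mean_calls_per_minute(n):.2f}")
# The estimate converges on the true rate (3.0) as n grows, even though
# no single minute's call count can be predicted.
```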
Organised complexity
Disorganised complexity has predictive power because of the predictable randomness of the behaviour of its components – the mathematics of averages.
But there are systems that are organised into functioning wholes: labour unions, ant colonies, the world-wide-web, the biosphere.
Chaos in dynamical systems is sensitive dependence on initial conditions, and it shows how the iteration of simple patterns can produce complexity.
An organised complex system consists of many simple components interconnected, often as a network, through a complex non-linear architecture of causation, with no central control, producing emergent behaviour. Emergent behaviour such as scaling laws can entail hierarchical structure (nested, scalar), coordinated information-processing, dynamic change and adaptive behaviour (complex adaptive systems – ecosystems, the biosphere, the stock market – exhibit self-organisation, non-conscious evolution, ‘learning’, or feedback).
Examples are an ant colony, an economic system, and the brain.
Simplicity deals with few variables; disorganised complexity with many variables whose collective behaviour can be averaged; organised complexity with functioning wholes whose behaviour is not simply the sum of the parts.
Complex systems
Dynamical systems theory provides the mathematics of how systems change.
Chaos
Chaos theory studies the behaviour of dynamical systems that are highly sensitive to initial conditions – an effect popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for such dynamical systems, rendering long-term prediction impossible in general. This happens even though these systems are deterministic, meaning that their future behaviour is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behaviour is known as deterministic chaos, or simply chaos. It was summarised by Edward Lorenz as follows:
Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
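Lorenz’s point can be demonstrated in a few lines (a standard textbook illustration, not anything from the source) using the logistic map, a deterministic rule whose orbits diverge from near-identical starting points:

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) at r = 4, a well-known chaotic regime.

def logistic_orbit(x, r=4.0, steps=50):
    orbit = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)   # differs by one part in a million

for t in (0, 10, 20, 30, 40):
    print(f"t={t:2}  |a-b| = {abs(a[t] - b[t]):.6f}")
# The gap grows from 1e-6 to order 1: the system is fully deterministic,
# yet the approximate present does not approximately determine the future.
```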
Fractals
Fractals show self-similarity at different scales, mathematically created through iteration. They are evident in biological networks – branching veins, the nervous system, roots, lungs: a form of optimal space-filling that can also be applied to human networks. Fractal organisation is also a means of packing information into a small space, as in the brain.
Nature is 3-D and fractal geometry is an important mathematical application.
Citations & notes
[1] Weaver, W. 1948. Science and Complexity. American Scientist 36:536
http://people.physics.anu.edu.au/~tas110/Teaching/Lectures/L1/Material/WEAVER1947.pdf
Complex systems
Part of the modern scientific enterprise is to examine and provide explanations for what happens in complex systems like the human body and human societies.
Reductionism
Western science has for about 500 years tackled such situations by breaking them down into their component parts, concluding that if you know the state of a system at time A then it should be possible to determine its state at time B. The whole is the sum of its parts: complex systems are additive.
Chaos theory
Chaos theory noted that in complex systems there is often minute and unpredictable variability making the system non-linear, non-additive, non-periodic and chance-prone. Though such a system is predictable over short time spans, long-term predictions are not possible: minute differences at a particular state of a complex system can become amplified into large and unpredictable effects (a butterfly flapping its wings changing a major weather pattern – the ‘butterfly effect’). The unpredictability is scale-free, a fractal (fractional dimension).
Scale-free systems
Some systems and patterns are scale-free: they look, and arguably behave, the same at any scale, and they are often produced as the logical and most efficient solution to a biophysical problem. For example, bifurcating systems, in which a system repeatedly divides into two like the branching of a tree, occur in neurons, in the circulatory system (where no cell is more than about five cells away from the circulatory system although that system makes up no more than 5% of total body mass), and in the pulmonary system.
Scale-free systems can be generated from simple rules, and one property of such systems is that the variability occurring at any particular scale is proportionally the same: it does not decrease as the scale is reduced.
Much of the complexity of living systems can be accounted for by ‘fractal genes’, which encode complex systems using simple rules; mutations in fractal genes can be recognised easily.
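As a hedged sketch of the idea (my own illustration of iterated bifurcation, not an actual genetic mechanism), a single rule – ‘split into two shorter daughter branches’ – applied recursively generates a self-similar, tree-like structure of the general kind seen in bronchi, veins and neurons:

```python
# One iterated rule - 'split into two shorter branches' - generating a
# self-similar bifurcating structure like a bronchial or vascular tree.

import math

def branch(x, y, angle, length, depth, segments):
    """Recursively apply the bifurcation rule, collecting line segments."""
    if depth == 0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # The whole rule: two daughter branches, rotated +/- ~25 degrees,
    # each 70% the length of the parent.
    for turn in (+0.44, -0.44):
        branch(x2, y2, angle + turn, 0.7 * length, depth - 1, segments)

segments = []
branch(0.0, 0.0, math.pi / 2, 1.0, depth=8, segments=segments)
print(len(segments), "segments from one rule")   # 2^8 - 1 = 255
# Each level of the tree is a shrunken copy of the one above
# (self-similarity), so the variability is proportionally the same
# at every scale.
```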
Simple rules can arise in nature through a variety of sources:
• Attraction-repulsion (gives rise to patterns that we see, for example, in urban planning)
• Swarm intelligence (like ants finding the shortest distance between a number of points)
• Power-law distributions (which are fractals – the neurons of the cortex follow a power-law distribution in their dendrites, making the cortex an ideal structure as a neural network for pattern recognition)
• Wisdom of the crowd (when the members of the crowd are genuinely expert and unbiased, or evenly biased)
Emergence
One feature of complex systems is that you cannot predict a finishing state from a starting state of the system – yet often from widely divergent starting states we see convergence to a particular state, as in convergent evolution, where biologically different organisms take on similar forms in particular environments (or the independent origins of agriculture).
Cellular automata provide a minimal model of such behaviour: complex global patterns arising from simple local rules.
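Here is a minimal cellular automaton sketch (Wolfram’s elementary Rule 30, a standard example rather than one drawn from the text) in which a trivially simple local rule, applied uniformly, generates complex and effectively unpredictable global structure:

```python
# Elementary cellular automaton (Rule 30): each cell's next state depends
# only on itself and its two neighbours, yet the global pattern is complex.

RULE = 30
WIDTH, STEPS = 63, 30

def step(cells):
    out = []
    for i in range(len(cells)):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        pattern = (left << 2) | (centre << 1) | right
        out.append((RULE >> pattern) & 1)   # look up the rule-table bit
    return out

cells = [0] * WIDTH
cells[WIDTH // 2] = 1                       # a single 'on' cell
for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```

The rule itself fits in one byte; the pattern it produces is irregular enough that Rule 30 has been used as a pseudo-random generator.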
Holism
Explanatory frameworks & categories
Continua
Sometimes, for convenience, we break down things that are continuous in nature into discontinuous categories. The most obvious example is the colour spectrum, which, though physically continuous, we break up into the colours of the rainbow. Though we are aware of what we are doing, we are less aware of some of the consequences: we underestimate how different entities are when they occur in the same category; we overestimate how different they are when placed in different categories; and when using reducing categories we can lose sight of the big picture.
Primacy of explanation
For example, in providing explanations that ‘reduce’ complexity we can place undue emphasis on particular ‘levels’ or frames of explanation. Human behaviour can, for instance, be explained in terms of its effect on other people, in terms of the hormones that drive it, the genes that trigger the production of the hormones, the processes going on in the brain when the behaviour occurs, or even in terms of evolutionary theory, long-term selective pressures and reproductive outcomes. In other words, when we ask for the reason for a particular kind of behaviour we will probably get different answers from a sociologist, evolutionary biologist, evolutionary psychologist, clinical psychologist, anatomist, molecular biologist, behavioural geneticist, endocrinologist, neuroscientist or someone trained in some other discipline. The important point is that there is no privileged perspective that entails all the others: each is equally valid, and which explanation is most appropriate will depend on the particular circumstances.
Nature and nurture interact in subtly nuanced ways, and though the brain can influence behaviour, the body can also influence the brain: there is a subtle causal interplay between the two.
Complexification & prediction
Given certain conditions in the universe, certain other consequences will follow. Though complexity is not inevitable – there is in fact a universal physical law of entropy, a tendency to randomness – it is a historical fact that …
9. As things get more complex they become less predictable.
10. Quantity can produce quality (chimps share about 98% of our DNA; half the difference relates to olfaction, the rest to the sheer quantity of neurons and to genes that release the brain from direct genetic influence)
11. The simpler the constituent parts the better
12. More random noise produces better solutions in networks
13. There is much to be gained from power of gradients, attraction and repulsion
14. Generalists work better than specialists (more adaptive)
15. All emergent adapted systems ‘work’ from the bottom up, not the top down: they arise without blueprints or designers (e.g. Wikipedia)
16. There is no ‘ideal’ or optimal complex system except insofar as it is the ‘best adapted’ which is a general not a precise condition.
Hierarchies and heterarchies.
One of the basic assumptions implicit in the way physics is usually done is that all causation flows in a bottom up fashion, from micro to macro scales.
The edge of the observable universe is about 46–47 billion light-years away.
What matters most – explanation, testing, or description?
The key point about adaptive selection (one-off or repeated) is that it lets us locally go against the flow of entropy, and this lets us build up useful information.
Daniel Bernstein
Though I believe that any and all interactions can be expressed and described in terms of the fundamental aspects of reality, we lack the theory to do so. And even if we did have such a theory, one that would show all higher-scale interactions to be emerging from the fundamental interactions, the amount of data necessary to track every elementary particle and force would prohibit the description of even the simplest systems.
My understanding is that objects are structurally bound if, within a given scale of reality and under the effect of a given force associated with that scale, they behave as a single object. So the mathematical models of a particular scale of physical reality can treat composite objects as ‘virtually fundamental’, in such a way that top-down or bi-directional causalities not only make sense but become the only workable alternative to tracking the interactions between the fundamental particles composing the interacting structures.
So what we can see at the largest and smallest scales is approaching what will ever be possible, except for refining the details.
Anton Biermans
I’m afraid that you (and everybody else, for that matter) confuse causality with reason.
If we understand something only if we can explain it as the effect of some cause, and understand this cause only if we can explain it as the effect of a preceding cause, then this chain of cause-and-effect either goes on ad infinitum, or it ends at some primordial cause which, as it cannot be reduced to a preceding cause, cannot be understood by definition.
Causality therefore ultimately cannot explain anything. If, for example, you invent Higgs particles to explain the mass of other particles, then you’ll eventually find that you need some other particle to explain the Higgs, a particle which in turn also has to be explained etcetera.
If you press the A key on your computer keyboard, you don’t cause the letter A to appear on your computer screen but just switch that letter on with the A key, just as when you press down on a door handle you don’t cause the door to open, but just open it. Similarly, if I let a glass fall out of my hand, I don’t cause it to break as it hits the floor; I just use gravity to smash the glass, so there’s nothing causal in this action.
Though chaos theory is often thought to say that the antics of a moth in one place can cause a hurricane elsewhere, if an intermediary event can cancel the hurricane then the moth’s antics can only be a cause in retrospect, if the hurricane actually does happen – so it cannot cause the hurricane at all. Though events certainly are related, they cannot always be understood in terms of cause and effect.
The flaw at the heart of Big Bang Cosmology is that in the concept of cosmic time (the time passed since the mythical bang) it states that the universe lives in a time continuum not of its own making, that it presumes the existence of an absolute clock, a clock we can use to determine what in an absolute sense precedes what.
This originates in our habit in physics to think about objects and phenomena as if looking at them from an imaginary vantage point outside the universe, as if it is legitimate scientifically to look over God’s shoulders at His creation, so to say.
However, a universe which creates itself out of nothing, without any outside interference, does not live in a pre-existing time continuum but contains and produces all time within itself: in such a universe there is no clock we can use to determine what precedes what in an absolute sense – what is the cause of what.
For a discussion why big bang cosmology describes a fictitious universe, see my essay ‘Einstein’s Error.’
Ontology is clear at all levels except the quantum level.
Computer systems illustrate downward causation: the software tells the hardware what to do, and what the hardware does will depend on the software. What drives it is the abstract informational logic in the system, not the physical states on the USB stick. The context matters. There are five kinds of downward causation: algorithmic; non-adaptive information control (the thermostat, the heart beat, body temperature and other systems with feedback); adaptive selection; adaptive information control; and intelligent causation.
So it is the goals that determine outcomes and the initial conditions are irrelevant.
In living systems the best example of downward causation is adaptation in which it is the environment that is a major determinant of the structure of the DNA.
Origin (emergence) of complexity: specific outcomes can be achieved through many low-level implementations because it is the higher level that shapes outcomes (that is causal). Higher levels constrain what lower levels can do, and this creates new possibilities; such channelling allows complex development. A non-adaptive cybernetic system with a feedback loop – a thermostat – uses information flow: feedback controls the physical structure, so the goals determine outcomes and the initial conditions are irrelevant. Organisms have goals, and a goal is not a physical thing. Adaptation is a selection state: DNA sequences are determined by the context, the environment. Selection is a winnowing process that retains important information.
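The thermostat case can be sketched in a few lines (a toy model of my own, not Ellis’s): whatever the initial temperature, the feedback loop drives the system to the goal, so it is the higher-level setpoint, not the low-level starting state, that fixes the outcome.

```python
# Non-adaptive top-down causation via a feedback loop: the goal (setpoint)
# determines the outcome; the initial conditions wash out.

def run_thermostat(temp, setpoint=20.0, gain=0.3, steps=40):
    for _ in range(steps):
        error = setpoint - temp      # information flow: compare state to goal
        temp += gain * error         # heater/cooler responds to the error
    return temp

for initial in (-10.0, 15.0, 35.0):
    print(f"start {initial:6.1f} -> final {run_thermostat(initial):.2f}")
# Whatever the starting temperature, the system converges on the goal:
# the higher-level setpoint, not the low-level initial state, fixes the outcome.
```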
Where do goals come from? Goals are adaptively selected in the organism. Mind is a special case of adaptive selection in which symbolic representations take a role in determining how goals work. The plan of an aircraft is an abstract plan, and the aircraft could not be built without it. Money is causally effective because of its social system. Maxwell’s equations – a theory – gave us televisions and smartphones.
The brain constrains your muscles. Pluripotent cells are directed by their context. Because of randomness, selection can take place. A key analytical idea is that of functional equivalence classes: many low-level states that are all equivalent to, or correspond to, a single high-level state. It is higher-level states that get selected for when adaptation takes place. Whenever you can find many low-level states corresponding to one high-level state, this indicates that top-down causation is going on.
You must acknowledge the entire causal web. There are always multiple explanations – top-down and bottom-up can both be true at the same time. Why does an aircraft fly? The bottom-up physical explanation is air pressure acting on the wing; the top-down explanation is that it was designed to fly (there is a pilot, a timetable, a profit to be made). All are simultaneously true and relevant.
Aristotle was right about cause:
Material – the lower-level (physical) cause – ‘that out of which’
Formal – the same-level (immediate) cause – ‘what it is to be’: the arrangement, shape, appearance, pattern or form which, when present, makes matter into a particular type of thing
Efficient – the immediate higher (contextual) cause – ‘the source of change’
Final – the ultimate higher-level cause – ‘that for the sake of which’
When does top-down causation take place? Bottom-up explanation suffices when you don’t need to know the context – the perfect gas law, black-body radiation. But the vibrations of a drum depend on the shape of the drum, and similarly for cars and other designed objects.
Cultural neuroscience is a good example of top-down causation.
Determinism is challenged by chaos theory and by quantum uncertainty (entanglement).
Recognise we are not at the centre of the knowledge universe. Epistemologically the axiomatic method has limitations. Human conceptual frameworks do not set the limits to knowledge – we need ways of understanding how machines represent the world.
Reduction & causation
Causation underlies the workings of the universe and our discourse about it. Anyone who is curious about the natural world must at some time or another in their lives have wondered about the true nature of causation, especially those people with a scientific curiosity. This series of articles on causation became necessary, not only for these reasons, but because causation is so frequently called on to do work in the philosophical debate about reductionism and today’s competing scientific world views.
It is dubious whether the reduction of causal relations to non-causal features has been achieved; scientific accounts are strong alternatives, with revisionary non-eliminative accounts finding favour. Can emergent entities play a causal role in the world? And is causation confined to the physical realm?
The issues to be addressed here are, firstly, whether causation itself can be reduced to something simpler and, secondly, the role that causation plays in causal interactions within and between domains of knowledge. The outline of this article follows the account given by Humphreys in the Oxford Handbook of Causation (2009).[2]
At the outset it is important to distinguish between reduction between the objects of investigation themselves (ontological reduction) and linguistic or conceptual reduction as the reduction of our representations of those objects.
Reduction of causation itself
Eliminative reduction of causation
We must decide whether causation is itself amenable to reductive treatment. Reduction may be eliminative reduction, in which the reduced entity is considered dispensable because it is inaccessible (Hume’s claim that we do not experience causal connection), so that we can eliminate it from our theoretical discourse and/or our ontology (the Mill–Ramsey–Lewis model), substituting phenomena that are more amenable to direct empirical inspection. The most popular theory of this kind is Humean lawlike regularity, but in this group would also be the logical positivists, the logical empiricists (e.g. Ernest Nagel, Carl Hempel), Bertrand Russell, and many contemporary physicalists with an empiricist epistemology. Hume’s view was that we arrive at cause through the habit of association, and in this way he removed causal necessity from the world by giving it a psychological foundation. A benign expression of this view would be that ‘C caused E when, from initial conditions A described using law-like statements, it can be deduced that E’.
Non-eliminative reduction of causation
Causation is so central to everyday explanation, scientific experiment, and action that many have adopted a non-eliminative position: X is reduced to Y but not eliminated, simply expressed in different concepts like probabilities, interventions, or lawlike regularities. Non-eliminativists like the late Australian philosopher David Armstrong hold that causation is essentially a primitive concept that we can at least sometimes access epistemically as contingent relations of nomic necessity among universals, and thus amenable to multiple realization.
Revisionary reduction of causation
Here the reduced concept is modified somewhat, as when folk causation is replaced by scientific causation. Most philosophical and self-conscious accounts of causation are revisionary to a greater or lesser degree.
Circularity
Many accounts of causation include reference to causation-like factors, as occurs with natural necessity, counterfactual conditionals, and dispositions, in what has become known as the modal circle. The fact that no fully satisfactory account of causation can totally eliminate the notion of cause itself is support for a primitivist case.
Domains of reduction
Discussions in both science and philosophy refer to ‘levels’ or ‘scales’ or ‘domains’ of both objects and discourse. So physics is overtopped by progressively more complex or inclusive layers of reality such as chemistry, biochemistry, biology, sociology and so on. This hierarchically stratified characterization of reality is discussed elsewhere. Here the task is to examine the way causation might operate within and between these different objects and domains of discourse.
The attempt at reducing one domain to another is not a straightforward translation, as an account must be given of the different objects, terms, theories, laws and properties and their role in causal processes. The preferred theory of causation (whether, say, a singularist or a regularity theory) will be pertinent to what kind of causal reduction may be possible.
Relations between domains
Suppose we are engaged in the reduction of a biological process to one in physics and chemistry – say the reduction of Mendelian genetics to biochemistry – then what kinds of causal interactions might we invoke? The causal relation might be: a relation of identity; an explicit definition; an implicit definition via a theory; a contingent statement of a lawlike connection; a relation of natural or metaphysical necessitation, as in supervenience; an explanatory relation; a relation of emergence; a realization relation; a relation of constitution; or even causation itself. If the causation were different in different domains then this might render reduction restricted or impossible. Accounts like counterfactual analysis are domain-independent (p. 636).
However, there are domain-specific claims such as physicalism’s Humean supervenience. Under some theories causation is restricted to physical causation as the transfer of conserved physical quantities and this is difficult to apply to the social sciences.
Domain-specific causation & physicalism
Could it be that causation in biology is different from that in physics or sociology, or is causation of the same general kind – is there ‘social cause’ and ‘biological cause’ or just ‘cause’? The most contentious area here is mental causation, where intentionality is often treated as ‘agency’ rather than ‘event’ causation.
Supervenience
In the 1960s domain reduction was promoted through the reduction of theories via bridging laws (Ernest Nagel). One major challenge for such an approach has been multiple realization, whereby something like ‘pain’ can be expressed physically in so many ways that this renders its further reduction unlikely, although this has been countered by supervenience accounts. For example, Humean supervenience regards the world as the spatio-temporal distribution of localized physical particulars, with everything else, including laws of nature and causal relations, supervening on this (p. 639). Supervenience is generally regarded as a non-reductive relation.
Functionalism
Functionalism characterizes properties in terms of their causal roles. Money is causally realized by coins, cheques, promissory notes etc. The role of ‘doorstop’ can be functionally and reducibly defined, so not all cases of multiple realization are irreducible: irreducibility needs to be taken case by case. For Kim (1997; 1999) ‘Functionalization of a property is both necessary and sufficient for reduction … it explains why reducible properties are predictable and explainable’. Since almost all properties can be functionalized, few need be candidates for emergent properties (p. 644).
Upward & downward causation
The restriction of cause to physical domains is supported by the exclusion argument against downward causation.
Causal exclusion principle & non-reductive physicalism
The causal exclusion principle states that there cannot be more than one sufficient cause for an effect. If we accept this then how are we to account for the causes we allocate at large scales, say the cause of a rise in interest rates? What is the causal relevance of multiply realizable or functional properties (redness, pain, and mental properties)? Does this principle automatically devolve into smallism – that we ultimately explain everything all the way down to leptons and bosons, or smaller and more basic entities when we find them, because they are the ones doing the causal work? How can a macro situation have causal relevance if it can be fully accounted for at the micro scale? These properties then become epiphenomena: by-products with no causal power of their own.
If C is causally sufficient for E, then any other distinct event D is causally irrelevant to E. Every physical event E has a physical event C causally sufficient for it. If event D supervenes on C then D is distinct from C – and so, by exclusion, causally irrelevant.
There is increasing evidence supporting the causal autonomy of disciplinary discourse, or non-reductive physicalism. Properties in the special sciences are not identical to physical properties, since they are multiply realized, although they do supervene on (instances of) physical properties, since changes in the special properties entail changes in the physical properties; further, the special properties are causes and effects of other special properties.
A large-scale cause can exclude a small-scale cause. Pain might cause screaming while there is no equivalent neural property. This occurs when the trigger is extrinsic to the system. The pain resulting from a pin prick is initiated by the pin; it cannot possibly be initiated at the neural scale.
The exclusion principle can be applied to any kind of event that supervenes on physical events, and it shows that there is no clear causal role for supervening events.
The main questions to be addressed in relation to causation and reduction are: can causation itself be reduced; is there a base-level physicochemical causation underlying all other forms of causation; and how does causation operate (a) within non-physicochemical domains of discourse and scales and (b) between non-physicochemical domains of discourse and scales?
In posing these questions it should be noted that it is customary to discuss different academic disciplines as different domains of knowledge that use their own specific terminology, theories and principles. So, for example, we have physics, chemistry, biology, and sociology referred to as ‘domains of discourse’ and stratified into ‘levels’ or ‘scales’ of existence. From the outset a careful distinction must be made between ontological reduction, the reductive relations between objects themselves, and linguistic or conceptual reduction, which deals with our representations of those objects.
Cause & reductionism
So far in discussing reductionism it has been noted that at present we explain the world scientifically using several scales or perspectives. These scales correspond approximately to particular specialised academic disciplines with their own objects of study, terminologies, theories, and principles. One possible way of expressing this would be: matter, energy, motion, and force (physics); living organisms (biology); behaviour (psychology); and society (sociology, politics, economics). Each discipline has its own specialist objects of study, like quarks (physics), lungs (biology), desires (psychology), and interest rates (economics). Since it has been argued that each discipline addresses the same physical reality from different perspectives or scales, the question arises as to the causal relationships between these various objects of study – between causes at different scales, perspectives, or, in the old terminology, ‘levels of organisation’. How do we reconcile causation at the fundamental-particle scale with causation at the political scale, assuming the physical reality they are dealing with is the same?
To answer this question we need to do some groundwork. Our modest philosophical program is to ask: What is causation and in what sense does it exist? Is it something that exists independently of us and, if not, in what way does it depend on us? Is causation part of the human-centred Manifest Image? What role does causation play in our reasoning? In other words, we need to determine whether causation is a fundamental fact of the universe, some kind of mental construct, or something that can be explained in different and simpler terms.
If we assume that the process of explanation proceeds by analysis or synthesis, and we regard fermions and bosons as the smallest units of matter, then causation must act primarily from the wider context. A rise in interest rates, or the pumping of a heart, cannot be initiated by fermions and bosons themselves: to make sense of the fermions and bosons that exist in a heart we must consider their wider context.
Does causation occur at all scales, depending on its initiators, or is there a privileged foundational scale, with macroscales explained by microscales: genes (in humans about 25,000 genes and 100,000 proteins) coding for proteins, then cells, tissues, organs, and the organism? That is a causal chain leading to progressively larger, more inclusive, and more complex structures – the central dogma of genetic determinism. But does causation also occur between cells, organs, and tissues? Are genes triggered by transcription factors that turn them on and off? Is the environment causal from outside the organism, along with other constraining factors at all scales (homeostasis)? Evolution occurs through changes in the genotype that are produced by selection of the phenotype, as natural selection expresses the organism–environment continuum.
If ‘levels’ or ‘scales’ do not exist as separate physical objects then there is only one fundamental mode of being: a single physical reality that can be interpreted or explained in different ways. It has no foundational scale or level.
Weak emergence: descriptions at scale X are shorthand for those at scale Y; strong emergence: descriptions at scale X cannot be given in terms of scale Y.
Universal laws apply to biology – an unsupported elephant will fall to the ground – but biology has its own causal regularities that are, of their very nature, restricted to living organisms.
A cause can be sufficient for its effect but not necessary (a piece of glass C starting a fire E): we can infer E from C but not C from E. It may be necessary but not sufficient (the presence of oxygen C for a fire E in a fire-prone region): we can infer C from E but not E from C. Under this characterization cause can be defined as sufficient conditions (or even as necessary and sufficient conditions).
Some scales of explanation or causal description are more appropriate than others: it is possible to provide an explanation that is either overly general or overly detailed. What is appropriate depends on the causal structure – on what would provide the most effective terms and structures for empirical investigation. This contrasts with the view that there is a fundamental or foundational scale at which explanation is most complete (Woodward 2009). Causes need to be appropriate to their effects: bosons influencing interest rates, or interest rates affecting the configuration of sub-atomic particles, are mismatches of scale. Fine-grained explanations may be more stable, but not always (Woodward 2009).
One area where this tension expresses itself is in the argument over the mechanism of biological selection in evolution. Should we regard natural selection as ultimately and inevitably a consequence of what is going on in the genes (see Richard Dawkins’s book The Selfish Gene), or are there causal influences that operate between cells, between tissues, between individuals, between populations, and in relation to causes generated by the environment?
Noble, D. 2012. A Theory of Biological Relativity. Interface Focus 2: 55-64.
It is widely assumed that large-scale causes can be reduced to small-scale causes, the macro to micro: that macro causation frequently (but not always) falls under micro laws of nature. This presupposes a means of correlating the relata at the different scales. This might be interpreted as microdeterminism, the claim that the macro world is a consequence of the micro world. The causal order of the macro world emerges out of the causal order of the micro world. A strict interpretation might be that a macro causal relation exists between two events when there are micro descriptions of the events instantiating a physical law of nature and a more relaxed version that there are causal relations between events that supervene. It might also be the case that even if there is causal sufficiency and completeness the existence of necessitating lawful microdeterminism (laws) does not entail causal completeness. Perhaps in some cases there is counterfactual dependence at the macro but not the micro scale.
Granularity & reductionism
We are tempted to think that we can improve on the precision of causal explanations. Could or should we try to improve the precision of causal explanations by giving more detail or being more scientific? For example, I might explain how driving over a dog was related to my personal psychology, the biochemical activity going on in my brain, the politics of the suburb where the accident occurred, and so on. That is, the explanation could be given using language and concepts taken from different domains of knowledge: psychology, politics, sociology, biochemistry and so on. The same situation can be described using different domains of knowledge, scales of existence, and so on – and, of special interest, the cause will be different depending on the perspective chosen. The choice of detail used in an explanation is referred to as its granularity, and it raises the problems of reduction discussed elsewhere. Is there a foundational or more informative scale or terminology that can be used? Is an explanation taken to the smallest possible physical scale the best explanation? Are the causal relations dependent on more metaphysically basic facts like fundamental laws? Do facts about organisms beneficially reduce to biochemical facts … and so on. Is fine grain the best?
Principle 3 – Any description of causation presents the metaphysical challenge of selecting the grain of the terms and conditions to be employed
We can appear to express the same cause using different terms that seem to alter the meaning, and therefore the causal relations, under consideration. For example, we might replace ‘The match caused the fire’ with ‘Friction acting on phosphorus produced a flame that caused the fire’. This raises the question ‘But what was really the cause?’, with the potential for seemingly different answers when we want only one. The depth of detail in terminology is sometimes referred to as granularity, and it raises the question of whether some explanations are more basic or fundamental than others – whether some statements can be beneficially reduced to others (reductionism).
This gives us an extended definition of science: science studies the order of the world by investigating causal processes. Causal processes are of many kinds. Though contentious, we might add that we must resist the temptation to reduce causes of one kind to causes of another kind: causally it makes no sense to reduce biology to physics by saying that fermions and bosons cause the heart to beat. A heart might consist of fermions and bosons, but these do not have causal efficacy in this sense. This takes us away from the traditional method of defining science in terms of its methodology (the hypothetico-deductive or deductive-nomological method).
Multiple realization
Physicalists can be divided into two camps: those who think everything can be reduced to physics (reductive physicalists) and those who do not (non-reductive physicalists). The reductive physicalist claims a type-identity thesis such that, for example, mental properties like feelings are identical with physical properties. Assuming we have two entities, one acting causally on the other, seems mistaken, the two being in fact one and the same; similarly, the connection between temperature and mean molecular kinetic energy is one of identity, not causation – and perhaps likewise life and complex biochemistry. The question arises, though, as to the identity of objects. Is pain physically identical in a human and a herring? It seems that pain can be expressed in many different physical ways, known as ‘multiple realization’. This attack on the type-identity thesis led to the modified claim that mental states are identifiable with functional states, which allows multiple realization, a functional property being understood in terms of the causal role it plays. However, we can think of pain as being either coarse-grained (a single property) or fine-grained (a mix of properties hardly warranting aggregation under a single category).
Emergence
Reduction is generally contrasted with emergence. Accounts of emergence are rarely causal in form. Why cannot ‘horizontal’ causation give rise to emergent features within the same domain?
Commentary & Key points
Science is divided not only over what exactly we mean by ‘science’ but the relationship between the various scientific disciplines in terms of their scientific ranking and validity. This reflects, in part, differing metaphysical assumptions, contrasting views on the underlying structure of the universe, causation, and the nature of reality. It also draws on differing methodologies and modes of explanation.
If we take physicalism to be the scientific canon then non-reductive physicalism, as some version of emergentism, is becoming the new orthodoxy. Reductive physicalism assumes that the special sciences ultimately reduce to the physical sciences. Non-reductive physicalism claims that the special sciences supervene on the physical sciences: where there can be no change at one scale without a change at the other.
Domains of knowledge
The sciences – and topics like normativity, consciousness, and biological complexity – exhibit a gradation in character rather than sharp disjunctions. With increase in physical complexity comes greater imprecision.
Academic disciplines or domains of knowledge – maths, physics, biology, social science, English literature, history and so on – as taught in universities and schools have arisen largely as a matter of convenience and historical circumstance, not because they are thought to reflect the structure of reality. The disciplines of science are, however, different, because many scientists believe their subjects reflect the structure of the natural world. When we divide disciplines into, say, physics, chemistry, biology, geology, social science, history, and geography, what criteria distinguish one from the other? How useful are the criteria we use, and could we devise some system that expresses the relationship between domain categories in a coherent way?
To illustrate the way domain categories can influence the way we think, consider the following. Many philosophers of science today believe that there is no single factor demarcating science from other intellectual pursuits; if this becomes widely accepted then any clear demarcation between science and the humanities becomes untenable. ‘Geography, like biology and sociology, is a huge and loosely defined field (so loosely defined, in fact, that since the 1940s many universities have decided that it is not an academic discipline at all and have closed their geography departments)’ (Morris, Why the West Rules – For Now). A prominent historian has suggested that the word ‘history’ is a pretentious way of talking about human behaviour in the past, in which case the subject History might be better treated as a sub-discipline of a more inclusive domain like ‘animal behaviour’ or ‘sociobiology’, which could also include subjects like human psychology, political science and sociology; maybe we should include anthropology and archaeology here too. Would economics and political science be more logically considered as sub-disciplines of sociology and therefore of slightly lower rank? If biology is really about physics and chemistry then maybe it would make more sense if it were deliberately taught as a sub-discipline of these subjects? Then there are all the hybrid disciplines like astrobiology, biophysics, biogeography, biochemistry, geochemistry, social science, and the scientific humanities. Old arrangements may be archaic relics, but the question now is whether academic disciplines should simply be a matter of convenience or reflect some coherent rationale or, indeed, the world.
In practical terms a restructuring of the sciences is unlikely. Scientists today tend to work within their own fields, each with their own procedures, principles, theories and technical terms, resulting in physical and intellectual separation that resists fragmentation or combination. Academic territory, whether of theoretical knowledge or physical space, will be defended. Real or imagined disciplinary imperialism is a factor to be noted in any analysis of reductionism.
The point is that we take our existing taxonomy of domains of knowledge for granted when there are no unambiguous criteria on which our classification is based. Which are the wholes and which are the parts? Are our classifications a matter of subjective convenience and utility or are they sometimes founded in the objective nature of the world as we might suppose for science?
Principle 1 – Reductionism challenges the boundaries (criteria) that we use to distinguish one domain of knowledge from another, asking if we can devise scientifically acceptable criteria for dividing up the natural world into domains that more closely mirror the world itself.
Principle 2 – Classifications proceed (like reduction and explanation in general) by abstraction – by simplifying complexity. Classifications also establish relations between items and in so doing they contribute to theory-construction, description, explanation and, importantly, prediction.
There does appear to be a great opportunity for consilience here – a reconsideration of the way we both represent all knowledge and teach it in our schools and universities.
Domain units
If we accept Principle 1 – that no piece of matter exists in a more fundamental way than any other (all matter is ontologically equal: either it exists or it does not) – then can we make use of the scale units of the various disciplines?
The important point is that no domain of knowledge or scale is ‘in reality’ (ontologically) more fundamental than any other, but talk of ‘scales’ and ‘domains’ now gives us some useful categories to work with.
Principle 3 – Explanations in one particular domain do not take precedence over those in any other – although some explanations may carry higher degrees of confidence than others and some domains may contain more high confidence explanations than other domains
Consider the material reality of the following: molecules (chemistry), organisms in general and ant colonies in particular (biology), historical events (history), society (organisations, trading blocs, communities). There seems to be some abstraction going on as we move through this series: we are passing from the language of brute matter into worlds with some conceptual loading. We will return to this later – suffice it to say at this stage that the status of these units is more fuzzy.
THE UNITY OF SCIENCE
A universal language
At present science is divided into disciplines with effectively different languages and objects of study. Wouldn’t it be easier if we abandoned all talk of scales and levels and spoke a language in which all units were comprised of the same thing? This would be like suddenly enjoying the efficiency of a single world currency instead of many, or of a single global language instead of the confusing babble of the different and often mutually incomprehensible languages of the world’s nations. Instead of the diverse terminologies of sociology, politics, biology, and chemistry we could have one single language – that of physics.
This may be possible in theory but, in spite of some unification, it could never eventuate in practice. In theory it may be possible to describe cell division in molecular terms, but in practice the translation of structures, variables, and pathways of interaction would be phenomenally, prohibitively complex.
We can see here how our cognition copes with complexity by imposing ‘scales’ on the world. Scales close to one another, like physics and chemistry, operate in similar ways and use similar terminologies, so explaining the characteristics of one in the specialist technical terms and theories of the other may not present major problems. We can easily understand the close connections between molecular biology, biochemistry, and genetics. But as the difference in scale units increases, so too do the difficulties in translating one domain of knowledge into another – and there is a corresponding decrease in the benefits of doing so. Explaining the major concepts and theories of political science in terms of atoms and molecules appears, at face value, absurd. Predicting weather patterns is difficult enough in the terms we use today without breaking our explanations down into the causal interrelationships of every molecule within the system. This is not because such a reduction is logically impossible but because of our cognitive limitations: we do not have the computing power to do it, and so we find simpler modes of mental synthesis.
Two general points follow. On the one hand, the relationship between biochemistry, molecular genetics, and chemistry is clearly close, and so in these cases ‘reduction’ is much more credible. On the other hand, the general use of physicochemical or molecular language and ideas to describe the generalities of ecology makes little sense. The question ‘Do we only need one scale or level to explain everything?’ is thus not so much a question of feasibility as a question of utility. We need the cognitive convenience of talk at different scales.
THE METHODOLOGY OF REDUCTION
It is one thing to speak of reducing theory A to theory B but quite another to carry it out. In fact there are subtle distinctions between the ways this can be done – by translation, by derivation, by explanation, or by some combination of these.
1. Translation
One way of expressing the methodological aspect of reduction is to consider the reduction of one knowledge domain or theory (A) to another (B): maths to logic, consciousness to physics and chemistry. This may be done by translating the key concepts of A into those of B; by deriving the key ideas of A from those of B; or by showing that everything explained by A can be explained by B.
The attempt to translate the terms of one discipline into those of another has proved too problematic to be realistic. Even where terms apply to the same object they may have slightly different independent meanings.
THE REDUCTION
It has been agreed that, in terms of ‘matter’, an organism consists of molecules. It has also been suggested that molecules are not ‘fundamental’: after all, we could equally say that organisms consist of cells, tissues, or organs without threatening their physical ‘reality’. Indeed, rather than looking at ‘wholes’ that are larger than molecules, we could be more fundamental still by describing an organism in terms of its sub-atomic particles. But at this point it will probably be argued that subatomic particles are not informative – it is the way they are organised into functional parts, and the relationships between those parts, that matter.
We now need to ask how we can translate one scale, level, or knowledge domain into another. How do we translate language about molecules into language about cells, language about cells into language about tissues, and so on through organs to whole organisms? We must also ask about the ‘reality’ of the units chosen.
Theory reduction
Philosophers have asked whether the theories or principles of the biological sciences can be demonstrated to be logical consequences of the theories of the physical sciences.
In the 1960s the American philosopher Ernest Nagel (an early logical empiricist philosopher of biology, along with Carl Hempel, followed in the 1970s by David Hull) suggested an in-principle model for this logical reduction, in which a target theory is deduced from a base theory via bridging laws. This has proved difficult in practice, although it is still pursued in apparently tractable fields of study, one example being the attempted reduction of classical Mendelian genetics to molecular biology. The results have been messy and imprecise: concepts vary, there are problems over the equivalence of terms, and target theories need subsequent modification.
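Schematically, and as a simplified sketch only (the notation below is ours, not Nagel’s), the model asks that a target theory be deducible from a base theory together with bridge laws:

```latex
% A simplified sketch of Nagel's in-principle model of theory reduction.
% T_B : base theory (e.g. molecular biology)
% B   : bridge laws linking the vocabularies of the two theories
% T_T : target theory (e.g. classical Mendelian genetics)
T_B \cup B \vdash T_T
% where each bridge law has, roughly, the biconditional form
\forall x \, \bigl( P_T(x) \leftrightarrow Q_B(x) \bigr)
```

The notorious difficulty lies in finding bridge laws of this biconditional form: as the discussion of multiple realization below suggests, for many biological predicates no single physicochemical predicate seems to be available.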
If nature is indeed arranged hierarchically then we can perhaps take advantage of the principles of hierarchical classification: inclusivity, transitivity, association and distinction, and exclusivity.
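Here is a minimal sketch of how such a hierarchy might be modelled (assuming, for illustration only, that ‘inclusivity’ means every unit is contained in a unit at the next scale up, ‘transitivity’ that containment carries through intermediate levels, and ‘exclusivity’ that units at one scale do not overlap; the level names are illustrative, not a claim about how nature is actually divided):

```python
# A minimal sketch of a compositional hierarchy, assuming:
#  - inclusivity: every unit belongs to exactly one parent at the next scale up
#  - transitivity: 'part of' carries through intermediate levels
#  - exclusivity: units at one scale do not overlap
# The levels and names are illustrative only.

parent = {
    "molecule": "cell",
    "cell": "tissue",
    "tissue": "organ",
    "organ": "organism",
}

def part_of(unit: str, whole: str) -> bool:
    """Transitive containment: follow parents upward through the hierarchy."""
    while unit in parent:
        unit = parent[unit]
        if unit == whole:
            return True
    return False

print(part_of("molecule", "organism"))  # True: transitivity in action
print(part_of("organ", "cell"))         # False: containment is one-way
```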
Supposing the successful reduction of biology to physics and chemistry, why not continue to, say, sociology? Would sociologists find it of any use to address the consequences of the Protestant work ethic in physicochemical terms?
We fall back on the principle of explanatory utility: biological terminology is generally more useful than physicochemical terminology. For the most part there seems little to be gained from a statement like ‘leg = block of physicochemical processes X’. There will, of course, be times when we need to know the chemical composition of a leg, and nowadays there are many circumstances in which we would want to think of a gene in chemical terms rather than as a blob of matter like a bead on a string, so a reduction here is useful. In this sense reduction is not mistaken, misrepresentative, or inadequate – just, for most purposes, impractical.
For example, the word ‘cell’, when used in a general sense, is a useful biological concept, but it can refer to many different kinds of physical object, and no single physical description is definitive: we cannot complete a statement of the form ‘x is a cell iff x is . . .’ with any one physical expression. Different structures can produce the same outcome, as when compensatory adjustments are made after brain damage (a phenomenon referred to as multiple realization or degeneracy). Some form of physicochemical shorthand might convey the general meaning of ‘leaf’, but individual leaves will have unique molecular structures. In general, the vocabulary of biology does not map neatly onto that of physics.
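A programming analogy may help here (a minimal sketch; the interface, class names, and efficiency figures are invented for illustration, not drawn from biology or from any particular source): one functional description can be satisfied by physically quite different implementations, so the functional kind cannot be identified with any one of them.

```python
from typing import Protocol

class EnergyConverter(Protocol):
    """A functional kind: anything that converts fuel into usable energy."""
    def convert(self, fuel: float) -> float: ...

# Two physically different 'realizers' of the same functional kind.
# The class names and efficiencies are invented for illustration only.
class Mitochondrion:
    def convert(self, fuel: float) -> float:
        return fuel * 0.4

class Chloroplast:
    def convert(self, fuel: float) -> float:
        return fuel * 0.1

def metabolise(converter: EnergyConverter, fuel: float) -> float:
    # The functional description does not care which realizer it gets:
    # no single physical structure 'is' the EnergyConverter.
    return converter.convert(fuel)

print(metabolise(Mitochondrion(), 10.0))  # 4.0
print(metabolise(Chloroplast(), 10.0))    # 1.0
```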
Interaction between levels or scales
In providing explanations and ‘reducing’ complexity we can place undue emphasis on particular ‘levels’, frames of explanation, or scales. Society is not always concerned with large things, nor physics with small things.
Insofar as science is concerned with the structure of the natural world, it encourages the improved understanding of the structure and behaviour of matter. Simply stating something in different words is unproductive unless something is gained in the process.
Principle 4 – Reduction is only scientifically useful when it improves our understanding by providing a better explanation (by giving a necessary and sufficient answer to the question being posed). Provided the scientific units are credible, the scale we use for explanation is simply a matter of utility.
Probability (degree of confidence)
To some extent we measure the scientific merit of an explanation through our confidence in its predictions – the probability of a particular outcome. However, if we rate the likelihood of the sun coming up tomorrow morning as very near 100%, and the likelihood of climate change causing a rise of 2°C over the next 50 years as 70%, can we say that one statement is more scientific than the other? It would seem not, but this draws attention to the fact that there do seem to be greater degrees of certainty in (parts of) some domains than in others. In cases like this we understand a scientific explanation as the best we can achieve at present.
REDUCING BIOLOGY TO PHYSICS & CHEMISTRY
Why shouldn’t biology become a branch of the physical sciences? And in exactly what way can biological organisation be something over and above the molecules out of which an organism is composed?
In some respects it already is: as lumps of matter, organisms obey many of the laws of physics, such as those of gravitation. And we can recognise how much of genetics has moved from the domain of general biology into the world of biochemistry and molecular biology. Since the human body undoubtedly consists of physicochemical objects and interactions, perhaps we can envisage a super-computer that might one day run a vast algorithm simulating how all these molecules interact as the body goes about its daily business. But are there any problems that make the proposition theoretically impossible?
Because we can study both the brain and the gene in terms of molecules, it might appear to some biologists that we have, in the macromolecule, a more fundamental level, scale, or source of unity. The question is whether such explanations are feasible and, if they are, whether the answers they give are informative. We are yet to resolve this.
Principle 5 – What is controversial about organic wholes is not their existence but the nature of their origins, their differentiation into parts, and the interaction of those parts.
Is there something about organic organisation and the complex interaction of the parts and their properties that defies reduction to explanations in terms of constituent physicochemical processes?
The biological theory of ‘emergence’ claims that this is so.
Biology & its link to other disciplines
Living matter is variable, replicating matter that has the capacity, over many generations, to incorporate physical change in response to influences from its surroundings. Variation that facilitates persistence and replication is retained, since variation that does not permit replication simply ceases to exist. In mechanical terminology, this is fine-tuning by feedback.
In biological terminology we have environmental adaptation by natural selection – descent with modification as a result of heritable variation and differential reproduction. Natural selection is the way we account for adaptive complexity and design in nature – the complex interplay of parts serving some function – and it is the process underlying the evolution of the entire community of life (a minimal computational sketch of this feedback loop follows the list below).
The process of natural selection introduces several crucial ideas:
1. There is a clear distinction between evolving and non-evolving matter: living and inanimate matter
2. Living matter cannot exist independently of its surroundings and therefore exists in a kind of organism-environment continuum
3. Natural selection gives a naturalistic account of the self-evident design we see in nature: it is the mindless way in which functional organized organic complexity, including humans and their brains, arose
4. Natural selection is a process that discriminates (selects) and which can therefore succeed or fail. Living matter has rudimentary ‘interests’ in the sense that some changes in the environment facilitate its survival and reproduction while others do not
5. In ‘adapting’ to its environment, life introduces notions of value and reason into descriptions of the natural world, as part of the continuum between living and inanimate matter
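Here is the minimal computational sketch of selection as feedback promised above (the ‘environment’ is caricatured as a single numerical optimum and all figures are invented; real selection involves no explicit target written into nature):

```python
import random

# A minimal sketch of natural selection as a feedback loop: a population
# of numeric 'traits' is repeatedly varied (mutation) and filtered
# (differential reproduction). The environment is caricatured as a single
# optimum value; all numbers are invented for illustration.

ENV_OPTIMUM = 0.8

def fitness(trait: float) -> float:
    # Closer to the environmental optimum -> more likely to replicate.
    return max(1e-6, 1.0 - abs(trait - ENV_OPTIMUM))

population = [random.random() for _ in range(100)]

for generation in range(50):
    weights = [fitness(t) for t in population]
    # Differential reproduction: fitter variants leave more offspring.
    parents = random.choices(population, weights=weights, k=len(population))
    # Heritable variation: offspring resemble parents, with small changes.
    population = [p + random.gauss(0, 0.02) for p in parents]

mean_trait = sum(population) / len(population)
print(f"mean trait after selection: {mean_trait:.2f}")  # drifts toward 0.8
```

Changes that hinder replication are weighted out of the next generation; changes that help are retained and accumulate – feedback, with no foresight required.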
Commentary & Key points
Science is divided not only over what exactly we mean by ‘science’ but also over the relationship between the various scientific disciplines in terms of their scientific ranking and validity. This reflects, in part, differing metaphysical assumptions – contrasting views on the underlying structure of the universe, on causation, and on the nature of reality. It also draws on differing methodologies and modes of explanation.
If we take physicalism to be the scientific canon, then non-reductive physicalism, as some version of emergentism, is becoming the new orthodoxy. Reductive physicalism assumes that the special sciences ultimately reduce to the physical sciences. Non-reductive physicalism claims that the special sciences supervene on the physical sciences: there can be no change at the special-science level without some change at the physical level, even though the one cannot be translated into the other.
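The supervenience claim can be pictured as a many-to-one function from micro-states to macro-properties (a toy sketch with invented states and thresholds): the macro-property cannot change unless the micro-state does, because it is fully determined by it, yet many distinct micro-states realize the same macro-property, so the macro-vocabulary does not translate into any single micro-description.

```python
# Toy sketch of supervenience: a macro-property as a function of micro-states.
# Invented example: the 'temperature class' of a tiny gas of three particles.

def macro_property(micro_state: tuple[float, ...]) -> str:
    """Many micro-states map to one macro-property (multiple realization)."""
    mean_speed = sum(micro_state) / len(micro_state)
    return "hot" if mean_speed > 1.0 else "cold"

a = (0.9, 1.2, 1.5)   # one micro-state
b = (1.5, 1.2, 0.9)   # a different micro-state, same macro-property
print(macro_property(a), macro_property(b))  # hot hot

# No change in the macro-property is possible without some change in the
# micro-state, because the macro-property is fully determined by it.
```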
Media Gallery
Emergence – How Stupid Things Become Smart Together
Kurzgesagt – In a Nutshell – 2017 – 7:30
What’s Strong Emergence?
Closer to Truth – 2020 – 26:47
—
First published on the internet – 1 March 2019