COMPLEXITY
Reductionism reminds us of how, in apparent defiance of the Second Law of Thermodynamics, the universe has accumulated local regions of negentropy and structure. The universe began as plasma before forming hydrogen and helium and later the elements of the Periodic Table and life. How do we explain the emergence of complexity in the universe? What do we know about the causal processes intervening between the universe at the Big Bang, when it was undifferentiated plasma, and objects in the universe today that are as structurally complex as living organisms?

Many scientists would answer that complexity arises as the laws of physics play out in a deterministic way. This is one aspect of reductionism, which we need to define.

Classical reductionism (determinism)
The path of cosmic destiny is determinate: knowing the precise conditions at the origin of the universe should, in principle, be sufficient to explain the emergence of smartphones and of you, here, right now. Expressed another way, it should in theory be possible to explain sociology in terms of particle physics (however extreme the complexity); that is, the small-scale accounts for the large-scale. Physics rests on the proven predictive power of mathematics.

By this account the regularities of chemistry, biology, psychology and the social sciences are epiphenomena (by-products) because they are grounded in physical causation. All physical objects are composed of elementary particles under the influence of the four fundamental forces, and physical laws alone give a unique outcome for each set of initial data. English theoretical physicist Paul Dirac (1902-1984) took a first step down this path when he claimed that ‘chemistry is just an application of quantum physics’. One of the basic assumptions implicit in the way physics is usually done is that all causation flows in a bottom-up fashion, from micro to macro scales. The physical sciences of the 17th to 19th centuries were characterised by systems of constant conditions involving very few variables: this gave us simple physical laws and principles about the natural world that underpinned the production of the telephone, radio, cinema, car and plane.[1]

There are many objections to such a view, and since the 1970s these objections have become the focus of studies in complexity theory. After 1900, with the development of probability theory and statistical mechanics, it became possible to take into account regularities emerging from a vast number of variables working in combination. Though the movement of 10 billiard balls on a table may be difficult to predict, when there are extremely large numbers of balls it becomes possible to answer and quantify general questions about their collective behaviour (how frequently they will collide, how far each will move on average before it is hit, and so on) even when we have no idea of the behaviour of any individual ball. In fact, as the number of variables increases, certain calculations become more accurate: say, the average frequency of calls to a telephone exchange, or the likelihood of any given number being rung by more than one person. This allows insurance companies and casinos, for example, to calculate odds and ensure that the odds are in their favour. It applies even when the individual events (say, age at death) are unpredictable, unlike the predictable behaviour of a billiard ball. Much of our knowledge of the universe and natural systems depends on calculations of such probabilities.
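The stabilising effect of large numbers can be sketched in a few lines of Python; the claim probability and payout below are illustrative assumptions, not actuarial data:

```python
import random

def average_claim(n_policies, seed=0):
    """Mean payout over n simulated insurance policies.

    Each policyholder claims with probability 0.1 at a cost of 1000;
    the figures are illustrative, not actuarial data.
    """
    rng = random.Random(seed)
    total = sum(1000 if rng.random() < 0.1 else 0 for _ in range(n_policies))
    return total / n_policies

# Any single policy pays 0 or 1000 and is unpredictable, but the
# average over a large book of policies settles near the expected 100:
small = average_claim(10)
large = average_claim(1_000_000)
```

A single policy is an all-or-nothing event, yet the portfolio average becomes steadily more calculable as the number of policies grows, which is exactly the regularity insurers and casinos rely on.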

Science in the 21st century is tackling complex systems. People wish to know what the weather will be like in a fortnight’s time; to what extent climate change is anthropogenic; the probability of dying of some heritable disease; how the brain works; the degree of risk related to the use of a particular genetically modified organism; whether interest rates will be higher in six months’ time and, if so, by how much. This is the world of organic complexity, neural networks, chaos theory, fractals, and complex networks like the internet. The processes going on in biological systems seemed, in contrast, to involve many subtly interconnected variables that were difficult to measure and whose behaviour was not amenable to law-like patterns similar to those of the physical sciences. Up to about 1900, then, much of biological science was essentially descriptive, with meagre analytical, mathematical or quantitative foundations. But there are systems that are organised into functioning wholes: labour unions, ant colonies, the world-wide-web, the biosphere. A complex system consists of many simple components interconnected, often as a network, through a complex non-linear architecture of causation, with no central control, producing emergent behaviour. Emergent behaviour, as scaling laws, can entail hierarchical structure (nested, scalar), coordinated information-processing, dynamic change and adaptive behaviour (complex adaptive systems such as ecosystems, the biosphere and the stock market show self-organisation, non-conscious evolution, ‘learning’, or feedback). Examples are an ant colony, an economic system, a brain.

However, when a living organism is split up into its component molecules there is no remaining ingredient such as ‘the spark of life’ or the ‘emergence gene’ so emergent properties do not have some form of separate existence. And yet emergent properties are not identical to, reducible to, predictable from, or deducible from their constituent parts – which do not have the properties of the whole. The brain consists of molecules but individual molecules do not think and feel, and we could not predict the emergence of a brain by simply looking at an organic molecule.

Levels of reality
Many scientists and philosophers find it useful to grasp complexity in the world through the metaphor of hierarchy as ‘levels of organisation’, with its associations of ‘higher’ and ‘lower’ and a world that is in some way layered. But ‘as if’ (metaphorical) language can mistakenly be taken for reality and is best minimised unless it serves a clear purpose or is unavoidable.

What exactly do we mean by ‘levels’ in nature and can these ideas be expressed more clearly? Hierarchies rank their objects as ‘higher’ or ‘lower’ with their ‘level’ based on some ranking criterion.

Scale
‘Level’ is used in various senses. Firstly, it expresses scale/size as we move from small to large in a sequence like – molecules – cells – tissues – organs – organisms, and from large to small as we pass along the sequence – universe – galaxy – solar system – rock – molecule – quark.

Complexity
But it cannot be just a matter of physical size, because organisms are generally treated as ‘higher’ in such hierarchies than, say, a large lump of rock. So secondly, or perhaps in addition, we are referring to complexity, the fact that an organism has parts that are closely integrated in a complex network of causation in a way that does not occur in a rock. There are difficulties of definition here too: how, for example, do we rank a society, a human and an ecosystem against one another? A microorganism may well be considered more complex than the universe.

Context
But then, thirdly, ‘levels’ also suggest frames of reference, one special set of things that can be related to another set of things, so that we can view one set of things from a ‘higher’ or ‘lower’ vantage point; here context or scope becomes a key factor.

There are then three major criteria on which scientific hierarchies are built: scale (inclusiveness or scope), causal complexity, and context. When considering any particular scientific hierarchy it helps to consider these factors (separately or in combination).

There are a few complications. Sometimes the layering is expressed in terms of disciplines or domains of knowledge as physics, chemistry, biology, psychology, and sociology which is different from the phenomena that the disciplines study. In this case each discipline or domain constitutes a contextual ‘level’ with its own language, vocabulary, and mathematical equations that are valid for the restricted conditions and variables of that level. There is increased decoupling with increased separation of domains.

Sometimes the hierarchy is given as a loose characterisation of what these subjects deal with – laws of the universe, molecules, organisms, minds, humans in groups, or some-such. Sometimes the nature of the physical matter is given emphasis – whether it is organic or inorganic.

At the core of the scientific enterprise is the idea of causation and for eminent physicist George Ellis it is causal relations acting within hierarchical systems that are the source of complexity in the universe. His views are summarised in his paper On the Nature of Causation in Complex Systems in which he outlines the controversial idea of ‘top-down’ causation.[1] I briefly outline these ideas below.

My preference would be to regard causation as acting between the cognitive categories we use to denote phenomena in the world. We take these categories to vary in both their inclusiveness and complexity. This, for me, is a more satisfactory mental representation of ‘reality’ than a layered hierarchy of objects arranged above and below one another. I would use the expression ‘more inclusive and complex to less inclusive and complex’ in preference to the expression ‘top-down’, but the convenience of the shorthand is obvious.

Degree of causal complexity in parts & wholes
Although we can speak of parts and wholes in general terms, actual instances demonstrate varying degrees of causal interdependence. At one end of the spectrum are holons (a holon is a category that can be treated as either a whole or a part) that are aggregates with minimal interdependence of constituents; at the other we have living organisms, where the significance of a constituent, like a heart, depends strongly on its relation to the body.

As we shift cognitive focus, the relationship between wholes and parts can display varying degrees of interdependence: removing a molecule from an ant’s body is unlikely to be problematic, and similarly removing one ant from its colony, but removing an organ from a body could well be.

Wholes sometimes exist only because of the precise relations of the parts; in other wholes this does not matter. Sometimes characteristics appear ‘emergent’ (irreducible, as in highly organised wholes) and sometimes they appear ‘reducible’ (as in aggregative wholes): we need to consider each instance in context. Some holons are straightforwardly additive (sugar grains aggregated into a sugar lump) but others grade into kinds that are not so amenable to reduction – consider the music produced by an orchestra, carbon dioxide, a language, a painting, an economic system, the human body, consciousness, and the sum 2 + 2 = 4.

Neither of these factors need affect the following account, which is highly abbreviated from Ellis:

Causation
We can define causation simply as ‘a change in X resulting in a reliable and demonstrable change in Y in a given context’. Particular causes are substantiated by experimental verification in which they are isolated from surrounding noise.
Causation, or causality, is the capacity of one variable to influence another. The first variable may bring the second into existence or may cause the incidence of the second variable to fluctuate. A distinction is sometimes made between causation and correlation, the latter being a statistical association between variables that need not imply any causal link.

The philosopher David Hume saw even the laws of physics as a matter of constant conjunction rather than causation of the kind we generally assume. One event (effect) can have multiple causes; one cause can have multiple effects.

The claim is that causation is not restricted to physics and physical chemistry as is frequently maintained. Examples of ‘bottom-up’ causation would be the brain as a neural network of firing neurons or the view that there is a direct causal chain from DNA to phenotype.

Complexity emerges through whole-part two-way causation as cause-initiating wholes (scientific categories) become progressively more inclusive and complex, while the entities at a particular scale remain precisely defined (with properties associated with hierarchies: information hiding, abstraction, inheritance, encapsulation, transitivity and so on).

Top-down causation
Ellis claims that bottom-up causation is limited in the complexity it can produce; genuine complexity requires a reversal of information flow from ‘bottom-up’ to ‘top-down’ and a coordination of its effects. To understand what a neuron does we must explain not only its structure or parts (analysis) but how it fits into the function of the brain as a whole (synthesis). Fractal geometry, with its self-similar structure repeated across scales, is one important mathematical expression of this two-way relation between levels.

‘Higher’ levels are causally real because they have causal influence over ‘lower’ levels. It is ‘top-down’ causation that gives rise to complexity, such as computers and human beings. As we pass from ‘lower’ to ‘higher’ categories there is a loss of detailed information but an increase in generality (coarse-graining), which is why car mechanics do not need to understand particle physics. As wholes and parts become more widely separated (‘higher’ from ‘lower’), so the equivalence of language and concepts generally becomes more obscure, although sometimes translation (coarse-graining) is possible.

Top-down causation is demonstrated when a change in high-level variables results in a demonstrable change in low-level variables in a reliable (non-random) way, this being repeatable and testable. The change must depend on the higher level alone, in that the contextual variables cannot be reduced to a description of lower-level states. It is common in physics, chemistry and biology: for example, the influence of the moon on the tides and the subsequent effect of tides on organisms. Biological ‘function’ derives from higher-order constructs through downward causation. Cultural neuroscience is an excellent example of a synthetic discipline dominated by top-down causation.

Equivalence classes relate higher-level behaviour to (often many different) lower-level states. The same top-level state must lead to the same top-level outcome regardless of which lower-level state produces it.
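The idea can be sketched in Python, using bit-string parity as a toy stand-in for a higher-level variable (the choice of parity is purely illustrative):

```python
from itertools import product

def macro(state):
    """Coarse-grain a micro state (a tuple of bits) to its parity."""
    return sum(state) % 2

# Many different micro states realise the same macro state (parity 0):
even_states = [s for s in product([0, 1], repeat=4) if macro(s) == 0]

def flip_first_bit(state):
    """A low-level operation: flip bit 0 of the micro state."""
    return (1 - state[0],) + state[1:]

# The macro outcome is the same whichever member of the equivalence
# class we start from: flipping one bit always sends parity 0 to 1.
outcomes = {macro(flip_first_bit(s)) for s in even_states}
```

Eight distinct micro states form one macro equivalence class, and the macro-level rule (‘one flip toggles parity’) holds regardless of which micro state realises it.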

Top-down causation occurs when higher level variables set the context for lower-level action.

Ellis recognises five kinds of top-down causation:
Algorithmic – high-level variables have causal power over lower-level dynamics, so that the outcome depends uniquely on the higher-level structural, boundary and initial conditions. Examples: algorithmic computation, where a program determines the machine code, which determines the switching of transistors; nucleosynthesis in the early universe, determined by pressure and temperature; developmental biology, a consequence of reading the DNA but also of responding to the environment at all stages (developmental plasticity).
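A minimal sketch of this algorithmic sense of top-down causation, using an elementary cellular automaton (the rule number and grid size are illustrative choices):

```python
def step(cells, rule):
    """One update of an elementary cellular automaton.

    The rule number (the high-level specification) uniquely fixes
    every low-level cell update, given the current row and periodic
    boundary conditions.
    """
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]
    return [table[cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n]]
            for i in range(n)]

# Initial condition: a single live cell in a row of 15.
row = [0] * 7 + [1] + [0] * 7
for _ in range(5):
    row = step(row, 90)   # rule 90: each cell becomes XOR of its neighbours
```

Nothing in the cell-update line ‘knows’ about the global pattern, yet the high-level rule number uniquely determines every low-level switching event, given the initial and boundary conditions.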

Non-adaptive information control – higher-level entities influence lower-level entities towards particular ends through feedback control loops. The outcome is determined not by the initial or boundary conditions but by the ‘goals’, as in a thermostat and in homeostatic systems.
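A thermostat loop of this kind can be sketched as follows; the setpoint, heat-gain and heat-loss figures are made-up illustrative values:

```python
def thermostat(temp, setpoint=20.0, steps=50):
    """Bang-bang feedback control: the heater switches on whenever the
    temperature falls below the goal (the setpoint)."""
    history = []
    for _ in range(steps):
        heating = temp < setpoint          # compare state with the goal
        temp += 0.5 if heating else -0.3   # heat gain vs ambient loss
        history.append(temp)
    return history

# Widely different initial conditions end up at the same goal:
cold = thermostat(5.0)[-1]
hot = thermostat(35.0)[-1]
```

Start the system at 5 °C or at 35 °C and it ends up oscillating in a narrow band around the same goal: the outcome is set by the setpoint, not by the initial condition.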

Adaptive selection (teleonomy) – variation in entities with subsequent selection of the kinds suited to the environment and survival, e.g. the products of Darwinian evolution. Selection discards unimportant information. Genes do not determine outcomes alone but as conditioned by the environment. This is like non-adaptive information control, but with particular kinds of outcome selected rather than just one. In evolution we see convergence from different starting points. Novel complex outcomes can be achieved from a simple algorithm or underlying set of rules. Adaptive selection allows local resistance to entropy with a corresponding build-up of useful information.
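A toy sketch of adaptive selection, in the spirit of the familiar ‘weasel’-style mutation-and-selection demonstrations (the target string and mutation scheme are illustrative assumptions, not a model of real genetics):

```python
import random

def evolve(target, seed, generations=5000):
    """Mutation plus selection: keep a random change only if it does
    not move the candidate further from the environment's 'target'."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    candidate = [rng.choice(alphabet) for _ in target]
    score = sum(c == t for c, t in zip(candidate, target))
    for _ in range(generations):
        i = rng.randrange(len(target))
        old = candidate[i]
        candidate[i] = rng.choice(alphabet)
        new_score = sum(c == t for c, t in zip(candidate, target))
        if new_score < score:
            candidate[i] = old        # selection discards the variant
        else:
            score = new_score
    return "".join(candidate)

# Different random starting points converge on the same outcome:
a = evolve("methinks", seed=1)
b = evolve("methinks", seed=2)
```

Runs from different random starting strings converge on the same outcome, selection having discarded the uninformative variants along the way, which is the sense in which selection builds up useful information.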

Adaptive information control – adaptive selection of goals in a feedback control system, as when Darwinian evolution results in homeostatic systems; associative learning.

Intelligence – the selection of goals involves the symbolic representation of objects, states and relationships in order to investigate the possible outcomes of goal choices: the implementation of thoughts and plans using language, including the quantitative and geometric representations of mathematics. These are all abstract and irreducible higher-level variables that can be represented in spoken and written form. Causally effective, too, are the imagined realities (resting on trust) of ideologies, money, laws, the rules of sport and values, all abstract but causal; ultimately, our understanding of meaning and purpose.

So it is the goals that determine outcomes, and the initial conditions are largely irrelevant.
In living systems the best example of downward causation is adaptation, in which the environment is a major determinant of the evolving structure of the DNA.

For higher levels to be causally effective there must be some causal access (causal slack) at lower levels. This comes from: the way the system is structured, constraining lower-level dynamics; openness, allowing new information across the boundary to affect local conditions, even changing the nature of the lower-level elements, as in cell differentiation and in humans in society; and micro-indeterminism combined with adaptive selection.

Top-down causation in computers
What happens in this hierarchy? Top-down causation occurs when the boundary conditions (the constraints at the edges of the system) and initial conditions (the starting state) determine the consequences.

Top-down causation is especially prevalent in biology, but it also occurs in digital computers, the paradigm of mechanistic algorithmic causation, and it does so without contradicting the causal powers of the underlying microphysics. Understanding the emergence of genuine complexity out of the underlying physics depends on recognising this kind of causation.

Computer systems illustrate downward causation: the software tells the hardware what to do, and what the hardware does will depend on the software loaded. What drives the process is the abstract informational logic in the system, not the physical medium (the USB stick) on which the software is stored. The context matters.

Abstract causation
Non-physical entities can have causal efficacy. High levels drive the low levels in the computer. Bottom levels enable but do not cause. A program is not the same as its instantiations. Which of the following are abstract? Which are real? Which exist? Which can have causal influence: values, moral precepts, social laws, scientific laws, numbers, computer programs, thoughts, equations? Can something have causal influence and not exist? In what sense?

A software program is abstract logic: it is not the stored electronic states in computer memory but their precise pattern (a higher-level relation), not evident in the electrons themselves.

Logical relations

High level algorithms determine what computations occur in an abstract logic that cannot be deduced from physics.

Universal physics
The physics of a computer does not restrict the logic, data, and computation that can be used (except the processing speed). It facilitates higher-level actions rather than constraining them.

Multiple realization
The same high-level logic can be implemented in many ways (electronic transistors and relays, hydraulic valves, biological molecules), demonstrating that the lower-level physics is not driving the causation. Higher-level logic can be instantiated in many ways by equivalence classes of lower-level states. For example, our bodies are still the same, still us, even though our cells are different from those we had 10 years ago. The letter ‘p’ on a computer may be bold, italic, capital, red, 12 pt, light pixels or printed ink, but it is still the letter ‘p’. The higher-level function drives the lower-level interactions, which can happen in many different ways (information hiding), so a computer demonstrates the emergence of a new kind of causation, not out of the underlying physics but out of the logic of higher-level possibilities. Complex computer functioning is a mix of bottom-up causation and contextual effects.
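A small sketch of multiple realization: the same high-level logical function (exclusive-or, chosen purely for illustration) implemented on two quite different ‘substrates’:

```python
def xor_lookup(a, b):
    """XOR realised as a table of stored states."""
    table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    return table[(a, b)]

def xor_arithmetic(a, b):
    """The same high-level logic realised arithmetically."""
    return (a + b) % 2

# Two different low-level realisations, one equivalence class of
# behaviour: the high-level function is indifferent to the substrate.
same = all(xor_lookup(a, b) == xor_arithmetic(a, b)
           for a in (0, 1) for b in (0, 1))
```

From the higher level's point of view the two implementations are interchangeable members of one equivalence class, which is precisely the sense in which the lower level is hidden (information hiding).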

Are thoughts, like computer programs and data, non-physical entities?

How can there be top-down causation when the lower-level physics determines what can happen given the initial conditions? Simply: by placing constraints on what is possible at the lower level; by changing properties in combination, as when hydrogen combines with oxygen to form water; where low-level entities cannot exist outside their higher-level context, like a heart without a body; when selection creates order by deleting or constraining lower-level possibilities; and when random fluctuation and quantum indeterminacy loosen the low-level physics.

Supervenience
Supervenience is an ontological relation in which upper-level properties are determined by, or depend on (supervene on), lower-level properties: the social on the psychological, the psychological on the biological, the biological on the chemical, and so on. Do mental properties supervene on neural properties? Properties of one kind depend on (but are not determined by, in a causal sense) those of another kind. Can the same mental state, for example, be supported by different brain states? (Yes.) Why then do we need statements about mental states if we know the underlying brain states?

A set of properties A supervenes upon another set B when no two things can differ with respect to A-properties without also differing with respect to their B-properties: there cannot be an A-difference without a B-difference (Stanford Encyclopaedia of Philosophy).
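That definition translates directly into a small checking function; the example properties (total mass versus a micro-description) are illustrative assumptions:

```python
def supervenes(items, a_prop, b_prop):
    """A-properties supervene on B-properties over `items` iff no two
    items differ in A without also differing in B."""
    return all(a_prop(x) == a_prop(y)
               for x in items for y in items
               if b_prop(x) == b_prop(y))

def total_mass(state):
    """A macro (A-level) property of a two-part system."""
    return sum(state)

def micro_description(state):
    """The micro (B-level) description: the parts, order ignored."""
    return tuple(sorted(state))

# Toy micro states of a two-part system (illustrative numbers):
states = [(1, 2), (2, 1), (1, 1), (3, 0)]

# Mass supervenes on the micro description, but not conversely:
forwards = supervenes(states, total_mass, micro_description)
backwards = supervenes(states, micro_description, total_mass)
```

The asymmetry is the point: fixing the micro description fixes the mass, but states of equal mass can have different micro descriptions, so the micro level does not supervene on the macro.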

From the Stanford Encyclopaedia of Philosophy entry on supervenience: ‘Everyone agrees that reduction requires supervenience. This is particularly obvious for those who think that reduction requires property identity, because supervenience is reflexive. But on any reasonable view of reduction, if some set of A-properties reduces to a set of B-properties, there cannot be an A-difference without a B-difference. This is true both of ontological reductions and what might be called “conceptual reductions”—i.e., conceptual analyses.
The more interesting issue is whether supervenience suffices for reduction (see Kim 1984, 1990). This depends upon what reduction is taken to require. If it is taken to require property identity or entailment, then, as we have just seen (Section 3.2), even supervenience with logical necessity is not sufficient for reduction. Further, if reduction requires that certain epistemic conditions be met, then, once again, supervenience with logical necessity is not sufficient for reduction. That A supervenes on B as a matter of logical necessity need not be knowable a priori.’

SUMMARY
Reductionism regards a partial cause as the whole cause, with analysis passing down to the smallest scales in physics and causation then proceeding up from these levels; this is taken as a representation both of reality and of the process of science. Recent work has muddied the simplicity of this approach in many ways. For example, current inflationary cosmology suggests that the origin of the galaxies is a consequence of random or uncertain quantum fluctuations in the early universe. If so, prediction becomes a false hope even at this early stage, quite apart from any chaotic factors arising from complexity. Reductionism does not deny emergent phenomena but claims the ability to understand phenomena completely in terms of constituent processes.

Biology presents us with many fascinating examples of how organic complexity arises: for example, the iteration of simple rules can give rise to complexity, as with the ‘fractal genes’ that produce the scale-free bifurcating systems of the pulmonary, nervous and blood circulatory systems. In complex systems there is often strength in quantity, randomness, local interactions with simple iterated rules, and gradients, and generalists operate more effectively than specialists. Such systems are directed towards optimal adaptation: multiple individuals acting randomly can assume some kind of spontaneous ordering or structuring.

When some elements combine they take on a completely new and independent character.

These are presented as systems operating ‘bottom-up’, the ‘parts’ being unaware of the ‘whole’ that has emerged, much as Wikipedia emerges from a grass-roots base ‘bottom-up’ rather than scholarly entries ‘top-down’.

We cannot predict the future structure of DNA coding from its present structure alone: that is determined by the environment.

In language we have letters, words, sentences and paragraphs exhibiting increasing complexity and inclusiveness, with meaning an emergent property. Meaning imposes a top-down constraint on the choice of words, but the words constrain the meanings that can be expressed.

Emergent properties cannot be explained at a ‘lower level’ – they are not present at ‘lower levels’. Rather than showing that ‘higher-level’ activities do not exist, it is the task of mechanistic explanation to show how they arise from the parts.

Fundamentalism suggests that a partial cause is the whole cause.

In sociology although agency seems to ultimately derive from the individual we nevertheless live within the structure of social networks of varying degrees of complexity. Though a problem like obesity can be investigated by the sociologist in terms of the supply and nature of foods, marketing, sedentary lifestyles and so on, weight variation can also be strongly correlated with social networks. There appears to be a collective aspect to the problem of obesity. One way of looking at this is to realise that change is not always instigated by altering the physical composition of a whole but by changing the rules of operation: in the case of society this could be social laws or customs of various kinds.

This has long been a source of ambiguity in sociological methodology. Adam Smith claimed that common good could be achieved through the selfish activities of individuals (methodological individualism) while Karl Marx and Emile Durkheim saw outcomes as a result of collective forces like class, race, or religion (methodological holism). Modern examination of social networks can combine these approaches by regarding individuals as nodes in a web of connections.

Does emergence illegitimately get something from nothing? Are the properties and organisation subjective qualities?

We assess life-related systems in terms of both structure and function. Structure relates to parts, function mostly to wholes. Perhaps strangely, we perceive causation as being instigated by either parts (structure) or wholes (function).

The characteristics of emergent or complex systems include: ‘self-regulation’ by feedback loops; and a large number of causally related variables acting in a ‘directed’ way, exhibiting some form of natural selection through differential survival and reproduction, or unusually constrained path-dependent outcomes, as occurs in markets. Emergence may be a particular consequence of diversity and complexity, organisation and connectivity.

In the journal Emergence, Jeffrey Goldstein isolates key characteristics of emergence: radical novelty; coherence (sometimes as ‘self-regulation’); integrity or ‘wholeness’; being the product of a dynamic process (it evolves); and supervenience (the lower-level properties of a system determine its higher-level properties).

Chaos theory draws attention to the way complex systems are aperiodic and unpredictable (chaotic), and to the way that variability in a complex system is not noise but the way the system works. Chaos in dynamical systems means sensitive dependence on initial conditions, and shows how the iteration of simple patterns can produce complexity.

[Are levels fractal – is there the same noise at each level?]

Weaver distinguished simplicity, with few variables; disorganised complexity, with many variables that can be averaged statistically; and organised complexity, in which the behaviour of the whole is not simply the sum of the parts.

Chaos
Chaos theory studies the behaviour of dynamical systems that are highly sensitive to initial conditions, an effect popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for such dynamical systems, rendering long-term prediction impossible in general. This happens even though these systems are deterministic, meaning that their future behaviour is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behaviour is known as deterministic chaos, or simply chaos. It was summarised by Edward Lorenz as follows: ‘Chaos: when the present determines the future, but the approximate present does not approximately determine the future.’
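Sensitive dependence is easy to demonstrate with the logistic map, a standard textbook example of deterministic chaos (the starting values below are arbitrary):

```python
def logistic_orbit(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x); r = 4 is chaotic."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points differing by one part in a billion:
a = logistic_orbit(0.4)
b = logistic_orbit(0.4 + 1e-9)

# The gap between the trajectories grows until they are uncorrelated:
gap = max(abs(x - y) for x, y in zip(a, b))
```

Every step is fully deterministic, yet an initial difference of one part in a billion is amplified until the two trajectories bear no relation to each other: the approximate present does not approximately determine the future.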

Fractals
Fractals show self-similarity at different scales, mathematically created through iteration. They are evident in biological networks (branching veins, the nervous system, roots, lungs): a form of optimal space-filling that can also be applied to human networks. Fractal structure is also a means of packing information into a small space, as in the brain.
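The space-filling property of self-similar branching can be sketched by iterating a simple bifurcation rule; the scaling ratio is an illustrative assumption, not a physiological measurement:

```python
def bifurcating_tree(levels, trunk_length=1.0, ratio=0.7):
    """Branch count and total length of a tree that splits in two at
    each level, each child scaled by a fixed ratio (self-similarity)."""
    count, total, length = 0, 0.0, trunk_length
    for level in range(levels):
        branches = 2 ** level        # the branch count doubles per level
        count += branches
        total += branches * length
        length *= ratio              # each generation is a scaled copy
    return count, total

count, total = bifurcating_tree(10)
```

Because each level doubles the branch count while shrinking length only by the fixed ratio, any ratio above one half makes the total length grow geometrically with depth: a finite volume can be supplied ever more densely, as in lungs and vasculature.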

Citations & notes
Complex systems
Part of the modern scientific enterprise is to examine and provide explanations for what happens in complex systems like the human body and human societies.

Chaos theory
Chaos theory noted that in complex systems there is often minute and unpredictable variability, making the system non-linear, non-additive, non-periodic and chance-prone. Though such a system is predictive over short time spans, long-term predictions are not possible. Minute differences in a particular state of a complex system can become amplified into large and unpredictable effects (a butterfly flapping its wings changing a major weather pattern: the ‘butterfly effect’). The unpredictability is scale-free, a fractal (of fractional dimension).

Scale-free systems
These are often produced as a logical and most efficient solution to a biophysical problem.

Some systems and patterns are scale-free: they look, and arguably behave, the same at any scale. For example, bifurcating systems, in which a system repeatedly divides into two like the branching of a tree, occur in neurons, in the circulatory system (where no cell is more than about five cells from the circulatory system, although that system makes up no more than about 5% of total body mass) and in the pulmonary system.

Scale-free systems can be generated from simple rules, and one property of such systems is that the variability occurring at any particular scale is proportionally the same: it does not decrease as the scale is reduced.

Much of the complexity of living systems can be accounted for by ‘fractal genes’, which code complex systems with simple rules; mutations in fractal genes can be recognised easily.

Simple rules can arise in nature through a variety of sources:

• Attraction-repulsion (gives rise to patterns that we see, for example, in urban planning)
• Swarm intelligence (like ants finding the shortest distance between a number of points)
• Power-law distributions (which are fractal: the dendrites of cortical neurons follow a power-law distribution, making the cortex an ideal structure for a pattern-recognising neural network)
• Wisdom of the crowd, when the crowd’s members are genuinely expert and unbiased (or evenly biased)
• One feature of complex systems is that you cannot predict a finishing state from a starting state of the system; yet often, from widely divergent starting states, we see convergence to a particular state, as in convergent evolution, where biologically different organisms take on similar forms in particular environments (or as in the independent origins of agriculture).

Emergence

Primacy of explanation
For example, in providing explanations that ‘reduce’ complexity we can place undue emphasis on particular ‘levels’ or frames of explanation. Human behaviour can, for instance, be explained in terms of its effect on other people, in terms of the hormones that drive it, the genes that trigger the production of the hormones, the processes going on in the brain when the behaviour occurs, or even in terms of evolutionary theory, long-term selective pressures and reproductive outcomes. In other words, when we ask for the reason for a particular kind of behaviour we will probably get different answers from a sociologist, evolutionary biologist, evolutionary psychologist, clinical psychologist, anatomist, molecular biologist, behavioural geneticist, endocrinologist, neuroscientist or someone trained in some other discipline. The important point is that there is no privileged perspective that entails all the others; each is equally valid, and which is most appropriate will depend on the particular circumstances.

Nature and nurture interact in subtly nuanced ways – and just as the brain can influence behaviour, the body can influence the brain: there is a subtle causal interplay between the two.

Complexification & prediction
Given certain conditions in the universe, certain other consequences will follow. Though complexity is not inevitable – there is in fact a universal physical law of entropy, a tendency towards randomness – it is a historical fact that …

1. As things get more complex they become less predictable.
2. Quantity can produce quality (chimps share about 98% of our DNA; half the difference is olfactory, the rest is quantity of neurons and of genes that release the brain from genetic influence)
3. The simpler the constituent parts the better
4. More random noise produces better solutions in networks
5. There is much to be gained from power of gradients, attraction and repulsion
6. Generalists work better than specialists (more adaptive)
7. All emergent adapted systems ‘work’ from the bottom up, not the top down: they arise without a blueprint or anyone following one (e.g. Wikipedia)
8. There is no ‘ideal’ or optimal complex system except insofar as it is the ‘best adapted’ which is a general not a precise condition.
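Point 4 – that random noise can improve solutions – can be sketched with a toy search over an invented ‘landscape’ (the values, jump probability, seed and step count below are all arbitrary): a purely greedy search gets trapped in a local minimum, while the same search plus occasional random jumps escapes it.

```python
import random

# A toy 1-D 'energy landscape': a local minimum (value 3) and a global one (value 1).
LANDSCAPE = [5, 3, 4, 2, 6, 1, 7]

def greedy(i):
    """Always move downhill; halts at the first local minimum."""
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(LANDSCAPE)]
        best = min(neighbours, key=lambda j: LANDSCAPE[j])
        if LANDSCAPE[best] >= LANDSCAPE[i]:
            return i
        i = best

def noisy(i, steps=500, jump_prob=0.3, seed=1):
    """Mostly greedy, but with occasional random jumps; returns the best site seen."""
    rng = random.Random(seed)
    best = i
    for _ in range(steps):
        if rng.random() < jump_prob:
            i = rng.randrange(len(LANDSCAPE))   # noise: random relocation
        else:
            i = greedy(i)
        if LANDSCAPE[i] < LANDSCAPE[best]:
            best = i
    return best

print(LANDSCAPE[greedy(0)])   # 3: trapped in the local minimum
print(LANDSCAPE[noisy(0)])    # almost always 1: noise frees the search
```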

Hierarchies and heterarchies.

Citations & notes
[1] Ellis 2008. http://www.mth.uct.ac.za/~ellis/Top-down%20Ellis.pdf

General references
Ellis, G. 2008. On the nature of causation in complex systems. An extended version of RSSA Centenary Transactions paper
Ellis, G. 2012. Recognising top-down causation. Personal Web Site http://fqxi.org/community/forum/topic/1337
Gleick, J. 1987. Chaos: Making a New Science.
Sapolsky, R. Chaos and Complexity (YouTube lecture) & Introduction to Human Behavioural Biology (YouTube lecture series).
Weaver, W. 1948. Science and Complexity. American Scientist 36: 536.
http://people.physics.anu.edu.au/~tas110/Teaching/Lectures/L1/Material/WEAVER1947.pdf

If we explain something as the effect of some preceding cause, then the chain of cause-and-effect either regresses ad infinitum or ends in a primordial cause which, since it cannot itself be related to a preceding cause, explains nothing. Hume suggested that when billiard balls collide there is nothing additional to the collision – no external factor or force, ‘the cause’, acting on the billiard balls – the balls simply collide.

While free will entails purpose and meaningful choice.

Ontology is clear at all levels except the quantum level.

Origin (emergence) of complexity. Specific outcomes can be achieved by many low-level implementations – that is because it is the higher level that shapes outcomes (is causal). Higher levels constrain what lower levels can do, and this creates new possibilities. Channelling allows complex development. A non-adaptive cybernetic system with a feedback loop – a use of information flow – is the thermostat: feedback controls the physical structure, so the goal determines the outcome and the initial conditions are irrelevant. Organisms have goals, and goals are not physical things. Adaptation is a selection state. DNA sequences are determined by their context, the environment. Selection is a winnowing for important information.
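The thermostat makes the point in a few lines of code – a sketch in which the set-point (the goal) fixes the final state and the initial temperature is irrelevant (units and step sizes here are arbitrary):

```python
# A minimal non-adaptive cybernetic feedback loop: a thermostat.
# The goal (set-point) determines the outcome; initial conditions do not.
def thermostat(temp, setpoint=20.0, steps=100):
    for _ in range(steps):
        if temp < setpoint:
            temp += 1.0       # heater on
        elif temp > setpoint:
            temp -= 1.0       # heater off, heat loss
    return temp

# Widely different starting states converge on the same goal state.
print(thermostat(-5.0), thermostat(35.0))   # 20.0 20.0
```

The goal is not one of the physical variables being pushed around; it sits at a higher level and shapes the outcome.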

Where do goals come from? Goals are adaptively selected in the organism. The adaptively selecting mind is a special case in which symbolic representations take a role in determining how goals work. The plan of an aircraft is abstract, and the aircraft could not work without it. Money is causally effective because of its social system. Maxwell’s equations – a theory – gave us televisions and smartphones.

The brain constrains your muscles. Pluripotent cells are directed by their context. Because of randomness, selection can take place. The key analytical idea is that of functional equivalence classes: many low-level states that are all equivalent to, or correspond to, one high-level state. It is higher-level states that get selected when adaptation takes place. Wherever many low-level states correspond to a single high-level state, this indicates that top-down causation is going on.

You must acknowledge the entire causal web. There are always multiple explanations – top-down and bottom-up can both be true at the same time. Why do aircraft fly? A bottom-up physical explanation might refer to aerodynamics: a top-down explanation might refer to the design of the plane, its pilot etc.

When we consider cause as relating to context we might also consider Aristotle’s categorisation of cause into four kinds:

Material – lower-level (physical) cause – ‘that out of which’
Formal – same level (immediate) cause – ‘what it is to be’: the arrangement, shape, appearance, pattern or form which, when present, makes matter into a particular type of thing – its organisation
Efficient – immediate higher (contextual) – ‘source of change’
Final cause – the ultimate higher level cause – ‘that for the sake of which’

When does top-down causation take place? Bottom-up explanation suffices when you don’t need to know the context – the perfect gas law, black-body radiation. But the vibrations of a drum depend on its container, as do the workings of cars and other designed objects: context matters.

Randomness at the bottom level is needed for selection to occur.
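A minimal sketch of this: random copying errors at the bottom level, winnowed by a higher-level criterion, accumulate information that unaided chance would essentially never hit on. The target string, alphabet, population size and mutation rate are arbitrary illustrative choices, not any particular published model:

```python
import random

TARGET = "ORDER FROM NOISE"                   # arbitrary stand-in for 'fit to context'
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Higher-level criterion: number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(seed=0, pop=50, mut=0.05, max_gens=10000):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    for gen in range(1, max_gens + 1):
        # random variation at the bottom level ...
        kids = [parent] + ["".join(rng.choice(ALPHABET) if rng.random() < mut else c
                                   for c in parent)
                           for _ in range(pop)]
        # ... winnowed by the higher-level criterion
        parent = max(kids, key=fitness)
        if parent == TARGET:
            return parent, gen
    return parent, max_gens
```

Selection typically reaches the 16-character target within a few hundred generations; blindly sampling random strings would succeed with probability of roughly 1 in 27^16 per attempt.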

Like any explanation, biology is itself contextual: it uses the world of physics as background noise and then explains its own domain as best it can.

Martin Nowak, Harvard University Professor of Mathematics and Biology.

Explanation, testing and description.

‘Function’ derives from higher-order constructs in downward causation.

To understand what a neuron does we must know not only its structure or parts (analysis) but how it fits into the function of the brain as a whole (synthesis).

When we sense something, we are receiving a physically existent phenomenon, ie it exists independently of our sensing of it.

Scope

Continua
Selective perception and cognition have the potential to create discrete categories out of objects that in nature we know to be continua (e.g. the continuous colour spectrum split into individual colours, or the continuous sound waves of spoken language broken up into words and meanings) or to make continua out of things that in nature we know to be discrete (as we do with all universals like ‘tree’, ‘table’ or ‘coat’). We can underestimate how different entities are when they occur in the same mental category and overestimate how similar they might be when placed in different categories. When using reducing categories we can lose sight of the big picture.

Causation, explanation, justification
Why did the hooligan smash the shop window?

Because in her evolutionary history violence was a useful adaptation
Because political parties are too soft-handed about law and order nowadays
Because she came from a rough neighbourhood
Because the police were out on strike
Because there was nobody around
Because of the negative influence of her peer group
Because her boyfriend told her to
Because her parents failed to teach her to respect property
Because her body produced a temporary surge in testosterone
Because neurons were exploding in the anger region of her brain
Because her genes indicate that she was predisposed to violence

Are all of these simultaneously true and relevant or can they be prioritised in some way? If prioritised – on what grounds?

What matters most in science – explanation, testing, or description?

Adaptive selection allows local resistance to entropy with a corresponding build up of useful information.

So what we can see at the largest and smallest scales is approaching the limit of what will ever be possible, except for refining the details.

Anton Biermans

If we understand something only if we can explain it as the effect of some cause, and understand this cause only if we can explain it as the effect of a preceding cause, then this chain of cause-and-effect either goes on ad infinitum or ends at some primordial cause which, as it cannot be reduced to a preceding cause, cannot by definition be understood.

Causality therefore ultimately cannot explain anything. If, for example, you invent Higgs particles to explain the mass of other particles, then you’ll eventually find that you need some other particle to explain the Higgs, a particle which in turn also has to be explained etcetera.

If you press the A key on your computer keyboard you don’t cause the letter A to appear on the screen; you just switch that letter on with the A key – just as when you press a door handle you don’t cause the door to open, you just open it. Similarly, if I let a glass fall from my hand I don’t cause it to break as it hits the floor; I just use gravity to smash the glass, so there is nothing causal in this action.

Though chaos theory is often taken to say that the antics of a moth in one place can cause a hurricane elsewhere, if an intermediary event can cancel the hurricane then the moth’s antics can only be a cause in retrospect – if the hurricane actually does happen – so it cannot cause the hurricane at all. Though events certainly are related, they cannot always be understood in terms of cause and effect.

The flaw at the heart of Big Bang cosmology is that, in the concept of cosmic time (the time elapsed since the mythical bang), it states that the universe lives in a time continuum not of its own making – it presumes the existence of an absolute clock, a clock we can use to determine what, in an absolute sense, precedes what.

This originates in our habit in physics of thinking about objects and phenomena as if looking at them from an imaginary vantage point outside the universe – as if it were scientifically legitimate to look over God’s shoulder at His creation, so to say.

However, a universe which creates itself out of nothing, without any outside interference, does not live in a time continuum not of its own making, but contains and produces all time within: in such a universe there is no clock we can use to determine what precedes what in an absolute sense – what is the cause of what.

For a discussion why big bang cosmology describes a fictitious universe, see my essay ‘Einstein’s Error.’

Paul

There is no experiment which says that an act is good or bad. There are no units of good and bad, no measurements.

We are concerned with knowledge of reality, not reality itself. When we sense something, we are sensing something that exists independently of our sensing of it.


Cultural neuroscience is a good example.

Cosmic context: physics alone cannot tell us the outcome, because quantum uncertainty leaves the physical outcome indeterminate.

The problem of prediction

At around fifty words we begin to struggle to understand a sentence.

Which of the following are abstract? Which are real? Which exist? Which can have causal influence: values, moral precepts, social laws, scientific laws, numbers, computer programs, thoughts, equations. Can something have causal influence and not exist? In what sense?

Complexity
Since the 1970s these issues have been subsumed into the study of complexity theory and complex systems – everything from organic complexity and neural networks to chaos theory and the internet. At the core of the scientific enterprise are the causal relations between phenomena, and it is causation acting within hierarchical systems that is, for physicist George Ellis, the source of complexity in the universe, as summarised in his paper On the Nature of Causation in Complex Systems, which I outline briefly here.[1] In this paper Ellis explains the idea of ‘top-down’ causation.
http://humbleapproach.templeton.org/Top_Down_Causation/

Causation
We can define causation simply as ‘a change in X resulting in a reliable and demonstrable change in Y in a given context’.

Top-down causation
Higher levels are causally real because they have causal influence over lower levels. It is top-down causation that gives rise to complexity – such as computers and human beings.

Ellis claims that bottom-up causation is limited in the complexity it can produce; genuine complexity requires a reversal of information flow from bottom-up to top-down – some coordination of effects. Tidal wave patterns in the sand, or the mesmerising movement of flocks of birds governed by a local rule, cannot compare with organic complexity.

Top-down causation in computers
What happens in this hierarchy? Top-down causation occurs when boundary conditions (the constraints imposed by a system’s context) and initial conditions (its starting state) determine the consequences.

Top-down causation is especially prevalent in biology, but it also occurs in digital computers – the paradigm of mechanistic algorithmic causation – and it is possible without contradicting the causal powers of the underlying microphysics. Understanding the emergence of genuine complexity out of the underlying physics depends on recognising this kind of causation.

Abstract causation
Non-physical entities can have causal efficacy. High levels drive the low levels in a computer; the bottom levels enable but do not cause. A program is not the same as its instantiations.

A software program is abstract logic: it is not the stored electronic states in computer memory but their precise pattern – a higher-level relation not evident in the electrons themselves.

Logical relations
High level algorithms determine what computations occur in an abstract logic that cannot be deduced from physics.

Universal physics
The physics of a computer does not restrict the logic, data, and computation that can be used (except the processing speed). It facilitates higher-level actions rather than constraining them.

Multiple realization
The same high level logic can be implemented in many ways (electronic transistors and relays, hydraulic valves, biological molecules) demonstrating that lower level physics is not driving the causation. Higher level logic can be instantiated in many ways by equivalence classes of lower level states. For example, our bodies are still the same, they are still us, even though the cells are different from those we had 10 years ago. The letter ‘p’ on a computer may be bold, italic, capital, red, 12 pt, light pixels or printed ink … but still the letter ‘p’. The higher level function drives the lower level interactions which can happen in many different ways (information hiding) so a computer demonstrates the emergence of a new kind of causation, not out of the underlying physics but out of the logic of higher level possibilities. Complex computer functioning is a mix of bottom-up causation and contextual effects.
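The idea of an equivalence class of implementations can be illustrated with two quite different low-level procedures realising the same high-level operation (sorting is my choice of example, not Ellis’s):

```python
# Two different low-level realisations of the same high-level operation: 'sort'.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:   # walk back to x's slot
            i -= 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [5, 3, 8, 1, 2]
print(insertion_sort(data))   # [1, 2, 3, 5, 8]
print(merge_sort(data))       # [1, 2, 3, 5, 8]
```

Seen from the higher level – ‘the list ends up sorted’ – the two procedures are interchangeable members of one equivalence class; which one actually runs is hidden, just as the physical substrate of a computation is.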

Thoughts, like computer programs and data, are not physical entities?

How can there be top-down causation when the lower-level physics determines what can happen given the initial conditions? Simply by placing constraints on what is possible at the lower level; by changing properties in combination, as when hydrogen combines with oxygen to form water; where low-level entities cannot exist outside their higher-level context, like a heart without a body; when selection creates order by deleting or constraining lower-level possibilities; and when random fluctuation and quantum indeterminacy affect low-level physics.

SUMMARY
Classical reductionism
Given the initial conditions and sufficient information we can predict future states: outcomes are determinate. Physicist Dirac claimed that ‘chemistry is just an application of quantum physics’. This appears to be physically untrue in many ways. Current inflationary cosmology suggests that the origin of the galaxies is a consequence of random or uncertain quantum fluctuations in the early universe. If so, prediction becomes a false hope even at this early stage, quite apart from any other chaotic factors arising from complexity.

The origin of novelty & complexity
Biology presents us with many fascinating examples of how organic complexity arises, for example, how the iteration of simple rules can give rise to complexity as with the fractal genes that produce the scale-free bifurcating systems of the pulmonary, nervous and blood circulatory systems.

What is not clear is why this should be considered in some way special or unaccounted for by molecular interaction.

Can a reductionist view of reality account for the origin of complexity: can examining parts explain how the whole hangs together?

Clearly material organisational novelty must arise since the universe which was once undifferentiated plasma now contains objects as structurally complex as living organisms.

When some elements combine they take on a completely new and independent character. An example of emergence: multiple individuals acting randomly assume some kind of spontaneous ordering or structuring.

Within the theme of emergence there is often an observation of the teleonomic character of evolutionary novelty and of the way organic systems are functional wholes – the teleonomy of an organism acting under natural selection, and the way novel complex outcomes can be achieved by a simple algorithm or underlying set of rules.

Reductionism does not deny emergent phenomena but claims the ability to understand phenomena completely in terms of constituent processes.

Top-down causation
Top-down causation is more common than bottom-up.

We cannot predict the future structure of DNA coding from its present structure – that is determined by the environment.

These are presented as systems operating ‘bottom-up’, the ‘parts’ being unaware of the ‘whole’ that has emerged – much as Wikipedia emerges ‘bottom-up’ from a grass-roots base rather than ‘top-down’ from scholarly entries.

Hierarchy of causation
The idea of hierarchy can add further complication and confusion through the notions of ‘bottom-up’ and ‘top-down’ causality, and through the fact that biological systems have a special kind of history as a consequence of the teleonomic character of natural selection – which leads us to ask about function: what is the whole, or the part, for (implying the future)?

In complex systems there is often strength in quantity, randomness, gradients, and local interactions following simple iterated rules; generalists operate more effectively than specialists. Such systems are directed towards optimal adaptation.

Unpredictability – is there a blueprint from the ‘start’? In evolution we see convergence from different starting points.

Simile and association.

Hierarchy
Emergent properties cannot be explained at a ‘lower level’ – they are not present at ‘lower levels’. Rather than showing that higher-level activities do not exist, it is the task of mechanistic explanation to show how they arise from the parts.

Examples of emergence come from many disciplines.

In language we have letters, words, sentences, paragraphs exhibiting increasing complexity and inclusiveness with meaning an emergent property. Meaning determines a top-down constraint on the choice of words but the words constrain the meanings that can be expressed.

When a living organism is split up into its component molecules there is no remaining ingredient such as ‘the spark of life’ or the ‘emergence gene’ so emergent properties do not have some form of separate existence. And yet emergent properties are not identical to, reducible to, predictable from, or deducible from their constituent parts – which do not have the properties of the whole. The brain consists of molecules but individual molecules do not think and feel, and we could not predict the emergence of a brain by simply looking at an organic molecule.

In sociology although agency seems to ultimately derive from the individual we nevertheless live within the structure of social networks of varying degrees of complexity. Though a problem like obesity can be investigated by the sociologist in terms of the supply and nature of foods, marketing, sedentary lifestyles and so on, weight variation can also be strongly correlated with social networks. There appears to be a collective aspect to the problem of obesity. One way of looking at this is to realise that change is not always instigated by altering the physical composition of a whole but by changing the rules of operation: in the case of society this could be social laws or customs of various kinds.

This has long been a source of ambiguity in sociological methodology. Adam Smith claimed that common good could be achieved through the selfish activities of individuals (methodological individualism) while Karl Marx and Emile Durkheim saw outcomes as a result of collective forces like class, race, or religion (methodological holism). Modern examination of social networks can combine these approaches by regarding individuals as nodes in a web of connections.

Does emergence illegitimately get something from nothing? Are the properties and organisation subjective qualities?

We assess life-related systems in terms of both structure and function. Structure relates to parts, function mostly to wholes. Perhaps strangely, we perceive causation as being instigated by either parts (structure) or wholes (function).

The characteristics of emergent or complex systems include: ‘self-regulation’ by feedback loops; a large number of variables that are causally related but in a ‘directed’ way, exhibiting some form of natural selection through differential survival and reproduction, or unusually constrained path-dependent outcomes, as occur in markets. Emergence may be a particular consequence of diversity, complexity, organisation and connectivity.

Economist Jeffrey Goldstein, in the journal Emergence, isolates key characteristics of emergence: it involves radical novelty; coherence (sometimes as ‘self-regulation’); integrity or ‘wholeness’; it is the product of a dynamic process (it evolves); and it is supervenient (the lower-level properties of a system determine its higher-level properties).

To summarise: when considering wholes and parts in general we need to consider specific instances. Some wholes are more or less straightforwardly additive (sugar grains aggregated into a sugar lump) but other wholes grade into kinds that are not so amenable to reduction – consider the music produced by an orchestra, carbon dioxide, a language, a painting, an economic system, the human body, consciousness, and the sum 2 + 2 = 4.

Part of the disagreement between reduction and emergence can be explained by regarding wholes as having parts that are more or less interdependent. At one end of the spectrum are aggregates and at the other living organisms. As we shift cognitive focus the relationship between wholes and parts can display varying degrees of interdependence: removing a molecule from an ant body is unlikely to be problematic although removing an organ could be, while removing an ant from its colony is probably unproblematic. Wholes sometimes only exist because of the precise relations of the parts – in other wholes it does not matter. Sometimes characteristics appear ‘emergent’ (irreducible as in organised wholes) and sometimes they appear ‘reducible’ (as in aggregative wholes).

Chaos
Chaos theory draws attention to the way complex systems are aperiodic and unpredictable (chaotic), and to the way variability in a complex system is not noise but the way the system works.

(Methodological reductionism assumes a causal relationship between the elements of structure and higher-order constructs (‘function’). This criticism runs deep because it claims not only that the whole cannot be understood by looking at the parts alone, but also that the parts themselves cannot be fully understood without understanding the whole. That is, to understand what a neuron does one must understand how it contributes to the organisation of the brain – or, more generally, of the living entity: you cannot understand a phenomenon just by looking at its elements, at whatever scale they are defined; you must also take into account all the relationships between them. Methodological reductionism, by contrast, holds that the right way – or the only way – to understand the whole is to understand the elements that compose it.)

[Are levels fractal – is there the same noise at each level?]

Complex systems
What kind of scientific questions will we want to answer in the 21st century?

There appears to have been a change in the character of scientific questions towards the end of the 20th century: science in the 21st century is dealing much more with complex systems. People wish to know what the weather will be like in a fortnight’s time; to what extent climate change is anthropogenic; the probability that they might die of some heritable disease; the degree of risk attached to the use of a particular genetically modified organism; whether interest rates will be higher in six months’ time and, if so, by how much.

Fundamentalism suggests that a partial cause is the whole cause. Why does a plane fly? (Air molecules under the wing; it has a pilot; it was designed to fly; there is a timetable; the airline must make a profit.)

Historical background
Few-variable simplicity
In very general terms the physical sciences of the 17th to 19th centuries were characterised by systems of constant conditions involving very few variables: this gave us simple physical laws and principles about the natural world that underpinned the production of the telephone, radio, cinema, car and plane.[1 pg]

In contrast, the processes going on in biological systems seemed to involve many subtly interconnected variables that were difficult to measure and whose behaviour was not amenable to the formulation of law-like patterns similar to those of the physical sciences. Up to about 1900, then, much of biological science was essentially descriptive, with meagre analytical, mathematical or quantitative foundations.

Disorganised complexity
After 1900, with the development of probability theory and statistical mechanics, it became possible to take into account regularities emerging from a vast number of variables working in combination: though the movement of 10 billiard balls on a table may be difficult to predict, when there are extremely large numbers of balls it becomes possible to answer general, quantitative questions about their collective behaviour (how frequently they will collide, how far each will move on average before it is hit, etc.) even when we have no idea of the behaviour of any individual ball. In fact, as the number of variables increases, certain calculations become more accurate – say, the average frequency of calls to a telephone exchange, or the likelihood of any given number being rung by more than one person. This allows, for example, insurance companies and casinos to calculate odds and ensure that the odds are in their favour, even when the individual events (say, the age of death) are unpredictable, unlike the predictable way a billiard ball behaves. Much of our knowledge of the universe and of natural systems depends on calculations of such probabilities.
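The shift to statistical regularity is easy to demonstrate: each individual coin flip below is unpredictable, yet the average over many flips is sharply predictable (sample sizes and seed are arbitrary):

```python
import random

def mean_of_flips(n, seed=0):
    """Average of n fair coin flips (1 = heads); each flip is unpredictable."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n)) / n

for n in (10, 1000, 100_000):
    print(n, mean_of_flips(n))   # the mean settles near 0.5 for large n
```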

Organised complexity
Disorganised complexity has predictive power because of the predictable randomness of the behaviour of its components – the mathematics of averages.

But there are systems that are organised into functioning wholes: labour unions, ant colonies, the world-wide-web, the biosphere.

Chaos in dynamical systems is sensitive dependence on initial conditions, and the iteration of simple patterns can produce complexity.

A complex system consists of many simple components interconnected, often as a network, through a complex non-linear architecture of causation with no central control, producing emergent behaviour. Emergent behaviour such as scaling laws can entail hierarchical (nested, scalar) structure, coordinated information-processing, dynamic change and adaptive behaviour – complex adaptive systems (ecosystems, the biosphere, the stock market) showing self-organisation, non-conscious evolution, ‘learning’, or feedback.
Examples: an ant colony, an economic system, the brain.

Simplicity involves few variables; disorganised complexity involves many variables that can be averaged; organised complexity is where the whole is greater than the sum of its parts – its behaviour is not simply additive.

Complex systems
Dynamic systems theory works on the mathematics of how systems change.

Chaos
Chaos theory studies the behaviour of dynamical systems that are highly sensitive to initial conditions – an effect popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for such systems, rendering long-term prediction impossible in general. This happens even though these systems are deterministic: their future behaviour is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behaviour is known as deterministic chaos, or simply chaos. Edward Lorenz summarised it as follows: ‘Chaos: when the present determines the future, but the approximate present does not approximately determine the future.’
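Lorenz’s dictum can be demonstrated directly with the logistic map x → r·x·(1 − x), a standard chaotic system at r = 4 (my choice of illustration; the size of the perturbation is arbitrary):

```python
# Sensitive dependence on initial conditions in the logistic map at r = 4.
def logistic_orbit(x0, n, r=4.0):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-9, 50)       # disturb the ninth decimal place
print(abs(a[1] - b[1]))                  # still tiny after one step
print(max(abs(x - y) for x, y in zip(a[40:], b[40:])))  # of order 1 by step 40+
```

Both orbits are fully deterministic, yet after a few dozen iterations they bear no useful relation to one another: the approximate present has failed to approximately determine the future.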

Fractals
Self-similarity at different scales, mathematically created through iteration. It is evident in biological networks (branching veins, the nervous system, roots, lungs): a form of optimal space-filling that can also be applied to human networks. It is also a means of packing information into a small space, as in the brain.

Nature is three-dimensional, and fractal geometry is an important mathematical tool for describing it.
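
The idea of self-similarity through iteration can be made concrete with the Koch curve, a classic fractal not mentioned in the text but standard in this context: each iteration replaces every segment with four segments, each one third the length. The helper name below is illustrative.

```python
import math

# Koch curve: each iteration replaces every segment with 4 segments,
# each 1/3 the length. Segment count and total length follow directly
# from that single rule.

def koch_stats(n):
    segments = 4 ** n       # number of segments after n iterations
    length = (4 / 3) ** n   # total length, starting from a unit segment
    return segments, length

for n in range(5):
    s, l = koch_stats(n)
    print(f"iteration {n}: {s:4d} segments, total length {l:.3f}")

# Similarity dimension: N self-similar pieces at scale 1/s gives
# D = log(N) / log(s); here 4 pieces at scale 1/3.
D = math.log(4) / math.log(3)
print(f"fractal dimension ~ {D:.3f}")
```

The dimension of about 1.26 is fractional, between a line and a plane, which is what 'fractal (fractional dimension)' refers to.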

Citations & notes
[1] Weaver, W. 1948. Science and Complexity. American Scientist 36:536

http://people.physics.anu.edu.au/~tas110/Teaching/Lectures/L1/Material/WEAVER1947.pdf

Complex systems
Part of the modern scientific enterprise is to examine and provide explanations for what happens in complex systems like the human body and human societies.

Reductionism
Western science has for about 500 years tackled such situations by breaking them down into their component parts, concluding that if you know the state of a system at time A then it should be possible to determine its state at time B. The whole is the sum of its parts: complex systems are additive.

Chaos theory
Chaos theory noted that in complex systems there is often minute and unpredictable variability, making the system non-linear, non-additive, non-periodic and prone to chance. Though such a system is predictable over short time spans, long-term prediction is not possible. Minute differences in the state of a complex system can become amplified into large and unpredictable effects (a butterfly flapping its wings changing a major weather pattern, the 'butterfly effect'). The unpredictability is scale-free, a fractal (fractional dimension).

Scale-free systems
These are often produced as a logical and most efficient solution to a biophysical problem.

Some systems and patterns are scale-free: they look and behave the same at any scale. For example, bifurcating systems, where a system repeatedly divides into two like the branching of a tree, occur in neurons, the circulatory system (where no cell is more than about five cells from a blood vessel, although the circulatory system makes up no more than 5% of total body mass) and the pulmonary system.

Scale-free systems can be generated from simple rule(s) and one property of such systems is that the variability that occurs at any particular scale is proportionally the same: it does not decrease as the scale is reduced.

Much of the complexity of living systems can be accounted for by 'fractal genes', which code for complex structures using simple rules; mutations in fractal genes can be recognised easily.

Simple rules can arise in nature through a variety of sources:

• Attraction-repulsion (gives rise to patterns that we see, for example, in urban planning)
• Swarm intelligence (like ants finding the shortest route between a number of points)
• Power-law distributions (which are fractal: the neurons of the cortex follow a power-law distribution of their dendrites, making the cortex an ideal structure as a neural network for pattern recognition)
• Wisdom of the crowd, when the members of the crowd are genuinely expert and unbiased (or evenly biased)
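
One simple rule that generates a power-law-like distribution is preferential attachment ('the rich get richer'): each new node in a growing network links to an existing node with probability proportional to that node's degree. The sketch below is illustrative (the function name and parameters are my own, not from the text):

```python
import random
from collections import Counter

# Preferential attachment: each new node links to an existing node
# chosen in proportion to its current degree. No central control,
# yet a heavy-tailed (approximately power-law) degree distribution
# with a few highly connected hubs emerges.

random.seed(1)

def preferential_attachment(n_nodes):
    # Each node appears in `endpoints` once per unit of degree, so a
    # uniform draw from the list implements degree-proportional choice.
    endpoints = [0, 1]          # start with a single edge between nodes 0 and 1
    for new in range(2, n_nodes):
        target = random.choice(endpoints)
        endpoints.extend([new, target])
    return Counter(endpoints)   # node -> degree

degrees = preferential_attachment(5000)
dist = Counter(degrees.values())
for d in sorted(dist)[:6]:
    print(f"degree {d}: {dist[d]} nodes")
# Most nodes have degree 1; a handful of hubs have very high degree.
```

The same simple local rule, applied repeatedly, accounts for the scale-free structure seen in many natural and social networks.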

Emergence
One feature of complex systems is that you cannot predict a finishing state from a starting state of the system; yet often widely divergent starting states converge to a particular state, as in convergent evolution, where biologically different organisms take on similar forms in similar environments (compare the independent origins of agriculture).

(cellular automata)
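
Cellular automata are the classic demonstration of emergence. Below is a minimal sketch of an elementary cellular automaton (Wolfram's Rule 30, a standard example; the code layout is my own): an eight-entry lookup table applied locally, with no central control, produces a pattern that is complex and effectively unpredictable.

```python
# Elementary cellular automaton: each cell is updated from its
# three-cell neighbourhood using the 8-entry table encoded in the
# rule number. Rule 30 is a trivially simple rule with complex output.

RULE = 30  # Wolfram's rule number

def step(cells):
    """One synchronous update of a ring of cells."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width = 63
row = [0] * width
row[width // 2] = 1            # a single live cell in the middle

for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

From one live cell, the rule generates an irregular, never-repeating triangular pattern: emergent behaviour that is nowhere written into the rule table itself.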

Holism

Explanatory frameworks & categories

Continua
Sometimes, for convenience, we break down things which are continuous in nature into discontinuous categories. The most obvious example is the colour spectrum, which, though physically continuous, we break up into the colours of the rainbow. Though we are aware of what we are doing, we are less aware of some of the consequences: we underestimate how different entities are when they occur in the same category; we overestimate how different they are when placed in different categories; and when using reductive categories we can lose sight of the big picture.

Primacy of explanation
For example, in providing explanations that 'reduce' complexity we can place undue emphasis on particular 'levels' or frames of explanation. Human behaviour can, for instance, be explained in terms of its effect on other people, in terms of the hormones that drive it, in terms of the genes that trigger the production of the hormones, in terms of the processes going on in the brain when the behaviour occurs, or even in terms of evolutionary theory, long-term selective pressures and reproductive outcomes. In other words, when we ask for the reason for a particular kind of behaviour we will probably get different answers from a sociologist, evolutionary biologist, evolutionary psychologist, clinical psychologist, anatomist, molecular biologist, behavioural geneticist, endocrinologist, neuroscientist or someone trained in some other discipline. The important point is that there is no privileged perspective that entails all the others; each is equally valid, and which explanation is most appropriate will depend on the particular circumstances.

Nature and nurture interact in subtly nuanced ways, and the interplay runs in both directions: though the brain can influence behaviour, the body can also influence the brain.

Complexification & prediction
Given certain conditions in the universe, certain other consequences will follow. Though complexity is not inevitable (there is in fact a universal physical law of entropy, a tendency to randomness), it is a historical fact that …

1. As things get more complex they become less predictable.
2. Quantity can produce quality (chimpanzees share about 98% of our DNA; half the difference is olfactory, and the rest lies in the quantity of neurons and of genes that release the brain from genetic influence).
3. The simpler the constituent parts, the better.
4. More random noise can produce better solutions in networks.
5. There is much to be gained from the power of gradients, attraction and repulsion.
6. Generalists work better than specialists (they are more adaptive).
7. Emergent adapted systems 'work' from the bottom up, not the top down: they arise without a blueprint or a designer (e.g. Wikipedia).
8. There is no 'ideal' or optimal complex system, except insofar as it is the 'best adapted', which is a general, not a precise, condition.

Hierarchies and heterarchies.


The edge of the observable universe is about 46–47 billion light-years away.

What matters most – explanation, testing, or description?

The key point about adaptive selection (one-off or repeated) is that it lets us locally go against the flow of entropy, and this lets us build up useful information.

Daniel Bernstein
Though I believe that any and all interactions can be expressed and described in terms of the fundamental aspects of reality, we lack the theory to do so. And even if we did have such theory that would show all higher scale interactions to be emerging from the fundamental interactions, the amount of data necessary to track every elementary particle and force would prohibit the description of even the simplest systems.

My understanding is that objects are structurally bound if, within a given scale of reality and under the effect of a given force associated with that scale, they behave as a single object. So the mathematical models of a particular scale of physical reality can treat composite objects as 'virtually fundamental', in such a way that top-down or bi-directional causality not only makes sense but becomes the only workable alternative to tracking the interactions between the fundamental particles composing the interacting structures.

So what we can see at the largest and smallest scales is approaching what will ever be possible, except for refining the details.

Anton Biermans
I’m afraid that you (and everybody else, for that matter) confuse causality with reason.

If we understand something only if we can explain it as the effect of some cause, and understand this cause only if we can explain it as the effect of a preceding cause, then this chain of cause-and-effect either goes on ad infinitum, or it ends at some primordial cause which, as it cannot be reduced to a preceding cause, cannot be understood by definition.

Causality therefore ultimately cannot explain anything. If, for example, you invent Higgs particles to explain the mass of other particles, then you’ll eventually find that you need some other particle to explain the Higgs, a particle which in turn also has to be explained etcetera.

If you press the A key on your computer keyboard, you don't cause the letter A to appear on your screen but just switch that letter on with the A key, just like when you press the handle you don't cause the door to open but just open it. Similarly, if I let a glass fall out of my hand, I don't cause it to break as it hits the floor; I just use gravity to smash the glass, so there's nothing causal in this action.

Though chaos theory often is thought to say that the antics of a moth at one place can cause a hurricane elsewhere, if an intermediary event can cancel the hurricane, then the moth's antics can only be a cause in retrospect, if the hurricane actually does happen, so it cannot cause the hurricane at all. Though events certainly are related, they cannot always be understood in terms of cause and effect.

The flaw at the heart of Big Bang Cosmology is that in the concept of cosmic time (the time passed since the mythical bang) it states that the universe lives in a time continuum not of its own making, that it presumes the existence of an absolute clock, a clock we can use to determine what in an absolute sense precedes what.

This originates in our habit in physics to think about objects and phenomena as if looking at them from an imaginary vantage point outside the universe, as if it is legitimate scientifically to look over God’s shoulders at His creation, so to say.

However, a universe which creates itself out of nothing, without any outside interference does not live in a time continuum of its own making but contains and produces all time within: in such universe there is no clock we can use to determine what precedes what in an absolute sense, what is cause of what.

For a discussion why big bang cosmology describes a fictitious universe, see my essay ‘Einstein’s Error.’

Ontology is clear at all levels except the quantum level.

Computer systems illustrate downward causation: the software tells the hardware what to do, and what the hardware does depends on the software. What drives it is the abstract informational logic in the system, not the physical medium (the USB stick) that carries it. The context matters. The kinds of downward causation include: algorithmic; non-adaptive feedback control (a thermostat, the heartbeat, body temperature and other systems with feedback); adaptive; and intelligent.

So it is the goals that determine outcomes; the initial conditions are irrelevant.
In living systems the best example of downward causation is adaptation, in which the environment is a major determinant of the structure of the DNA.

Origin (emergence) of complexity: specific outcomes can be achieved through many low-level implementations because it is the higher level that shapes outcomes (is causal). Higher levels constrain what lower levels can do, and this creates new possibilities; channelling allows complex development. A non-adaptive cybernetic system with a feedback loop, such as a thermostat, uses information flow: feedback controls the physical structure, so the goals determine outcomes and the initial conditions are irrelevant. Organisms have goals, and a goal is not a physical thing. Adaptation is a selection state: DNA sequences are determined by the context, the environment, and selection is a process of winnowing out important information.
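
The thermostat point, that the goal rather than the initial conditions determines the outcome, can be sketched in a few lines. This is a deliberately idealised model (the linear room dynamics, function name and parameters are my own assumptions, not from the text):

```python
# A minimal thermostat sketch: negative feedback drives the temperature
# toward the set-point (the goal), whatever the starting temperature.
# The goal, not the initial condition, fixes the final state.

def run_thermostat(temp, setpoint=20.0, gain=0.3, steps=60):
    """Proportional feedback: adjust toward the set-point each step."""
    for _ in range(steps):
        error = setpoint - temp   # the information the controller uses
        temp += gain * error      # heat if too cold, cool if too hot
        # (hypothetical linear room model; real dynamics are messier)
    return temp

# Widely different initial temperatures converge on the same goal state.
for start in (-10.0, 5.0, 35.0):
    print(f"start {start:6.1f} C -> final {run_thermostat(start):.2f} C")
```

The physical mechanism operates bottom-up at every step, yet what the final state will be is fixed top-down by the abstract set-point, which is exactly the sense of downward causation the text describes.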

Where do goals come from? Goals are adaptively selected in the organism. The mind is a special case of adaptive selection in which symbolic representations take a role in determining how goals work. The plan of an aircraft is abstract, and the aircraft could not be built without it. Money is causally effective because of its social system. Maxwell's equations, a theory, gave us televisions and smartphones.

The brain constrains your muscles; pluripotent cells are directed by their context. Because of randomness, selection can take place. The key analytical idea is that of functional equivalence classes: many low-level states that are all equivalent to, or correspond to, a single high-level state. It is higher-level states that get selected for when adaptation takes place. Whenever you can find many low-level states corresponding to a high-level state, this indicates that top-down causation is going on.
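
The many-to-one structure of functional equivalence classes is easy to exhibit. Below, 8-bit strings stand in for low-level microstates and their bit-count for a high-level macrostate; the choice of example is mine, purely for illustration:

```python
from itertools import product
from collections import defaultdict

# Group all 256 8-bit microstates by their bit-count, a stand-in
# high-level macrostate. Selection acting on the macro value is
# indifferent to which microstate realises it: many low-level states
# correspond to each high-level state.

classes = defaultdict(list)
for micro in product((0, 1), repeat=8):
    classes[sum(micro)].append(micro)   # macrostate = number of 1-bits

for macro in sorted(classes):
    print(f"macro state {macro}: {len(classes[macro])} microstates")
```

For example, the macrostate 4 is realised by 70 distinct microstates, so selecting for that macrostate says nothing about which microstate obtains, which is the signature of top-down causation in the sense used above.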

You must acknowledge the entire causal web. There are always multiple explanations; top-down and bottom-up can both be true at the same time. Why does an aircraft fly? The bottom-up physical explanation is air pressure; the top-down explanations are that it was designed to fly, that a pilot flies it, that it follows a timetable, that it makes a profit. All are simultaneously true and relevant.

Aristotle was right about cause.

Material – the lower-level (physical) cause: 'that out of which'
Formal – the same-level (immediate) cause: 'what it is to be'; the change of arrangement, shape, appearance, pattern or form which, when present, makes matter into a particular type of thing
Efficient – the immediately higher (contextual) cause: the 'source of change'
Final – the ultimate higher-level cause: 'that for the sake of which'

When does top-down causation take place? Bottom-up explanation suffices when you don't need to know the context, as for the perfect gas law or black-body radiation; but the vibrations of a drum depend on the shape of its container, and likewise for cars and other designed objects.

Cultural neuroscience is a good example.

Determinism is challenged by chaos theory and by quantum uncertainty and entanglement.

Recognise that we are not at the centre of the knowledge universe. Epistemologically, the axiomatic method has limitations. Human conceptual frameworks do not set the limits to knowledge; we need ways of understanding how machines represent the world.
