What Goes Up Must Come Down

 

“Why do you think those adverts on TV are repeated so often?”

Many years ago this question was posed in an informal discussion, quite out of context, by scientist and TV personality Professor Heinz Wolff.  It emerged from the insatiable curiosity about everything that is a hallmark of both the engaged scientist and the innovator.  Memories of the day have long faded, except for that question.

Professor Heinz Wolff.  Photo by P. G. Champion.

The fact that such a mundane question can be posed by so eminent a scientist suggests that the answer might be important.  The same TV advertisements are indeed broadcast repeatedly.  They are expensive, and so the benefits of doing so must be appreciable.  We have now reached a point where we can sketch an outline of what an answer might look like.

Adverts communicate information that enables TV viewers to perceive the value of their associated products or services.  In the vocabulary of this website, the broadcast information creates instances of Consumer Product Interaction with a consequent consumer perception of value.  A viewer may feel a closer association with a sporting hero by wearing the brand that hero is commissioned to promote.

The raising of value perceptions has been imagined as the elevation of a huge, wobbly marquee-like structure referred to as a Value Surface, on which a single point marks an individual’s response at the time of that Consumer Product Interaction.  This value perception may subsequently go up or down depending on whatever information follows.  News of a drug scandal degrades the image of the sportsman and his sponsored products.  The Value Surface in its entirety is forever in an agitated seascape motion that is hopefully nudged upwards by each broadcast advertisement.

One mechanism of action for the advertisements is through the classical conditioning discussed in the previous post.  Just as the dogs of Pavlov could be taught to associate the sound of a bell with the arrival of food, the information and images in the advert may evoke an equivalent response in value perceived for the associated products or services.  Recognition of brand attributes has been associated with conditioned responses by Janiszewski and Warlop (2013).

The requirement to frequently rebroadcast the same information clearly indicates that the association with value needs continual reinforcement.  This could be necessary to increase the number of recipients or to enhance the value perceived, that is, to increase the overall breadth or height of a Value Surface.  TV advertisements certainly have a huge reach.  It is also possible that residual perceived value following the advertisement broadcast might naturally diminish without subsequent reinforcement of the message.  One might consider that the advert has a role in propagating ideas as a meme, as discussed in New Economics of Innovation, which is analogous to the viral propagation of genetic information.  At any time there are many other, possibly conflicting, memes competing for survival in the consciousness of a recipient population.  Advertising agencies and media companies alike have done well out of this silent struggle.

With so much content passing every second through the information environment, markets might well be naturally forgetful.  We will explore this idea in more detail later.  At this point we should note that it is often hard work to raise a Value Surface and then to keep it aloft.  This effort appears in the investments and endeavours made by an enterprise and entrepreneur in making their information valuable.  Next we will transform the Value Surface into a single point that represents perceived value for a whole population of consumers.

 

The Neural Roots of an Economic Trajectory

The Value Surface concept discussed above and elsewhere is essentially a micro-analysis of the behaviour of a population of consumers[1].  This is needed to construct and interpret the viscoelastic-derived model that has been used to simulate the commercial operations of an enterprise.  To provide a more macroscopic view of that enterprise, we can collapse the three dimensions of the Value Surface into a single point that is representative of the entire value-creating innovative activities at a particular moment in time.

Taking the average across the multiple perceptions of value for a consumer population gives a single measure of the height to which a product can be elevated by the endeavours of an enterprise[2].  Clearly some products will be more difficult to raise than others: the resistance to this elevation differs between products, and inversely reflects how attractive they are to their consumers.  We can then follow this point of average perceived value with time to draw a trajectory of a product through an economic space.  The role of the enterprise is to “fly” the product through this space, as described in the companion Foreground papers: An Economic Trajectory and Flightpaths and Forgetful Markets.
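As a minimal numerical sketch of this collapse from surface to point (the population size, the spread of perceptions and the random-number model are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical Value Surface: one perceived-value figure per consumer,
# sampled at successive moments (rows = time, columns = consumers).
n_steps, n_consumers = 50, 1000
surface = rng.normal(loc=10.0, scale=2.0, size=(n_steps, n_consumers))

# Collapsing the surface to a single point at each moment: the population
# average of perceived value.  Following this point through time traces
# the product's trajectory through an economic space.
trajectory = surface.mean(axis=1)
print(trajectory[:5])   # average perceived value at the first five moments
```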

Here, with more freedom to speculate, we will consider the neural origins of a potential field through which an economic entity might fly.

In Valoris Cognita Barcelona we have considered how a person’s brain may interpret information received through the senses to build and adapt a continuously changing model of reality.  Consciousness may be considered to arise from an elimination of the errors that separate neural cognition from external reality.  It is a descent down an “error surface” to a point that is satisfactory for survival, but which inevitably leaves some unimportant residual error with an associated subjective perception.

Whilst the electrical impulses and neuro-chemical transmission of information that arise in neural cells and synaptic connections are clearly powered by the host individual, a neural model of reality may be changed by this information flow, becoming less transient and more ordered through the memories it engenders.  This explanation needs help from an analogy with another energy dissipative system.

In Writing the Information we considered various systems through which energy flows and thereby transforms the system that is its conduit.  One such example was the flow of water down mountainous terrain.  The flow itself was driven by the potential energy the water acquired from its height.  The direction of flow is not random but clearly follows a path to descend to the lowest point as fast as possible, and in doing so it sculpts distinctive river valleys into the terrain that serve to further accumulate and direct the water flow and erosion.  In summary, whilst the flow originates from the potential energy of the water and is transient, semi-permanent features are created and remain long after their creation.  In these features is written the full history of previous torrents, through the energy-dissipative erosive events they have engendered.

Could the energy flows through neural circuitry have left similar features, which we call memories, through associated energy-dissipative events?

Water Erosion Pattern

Could the information content that is etched into a brain and into individual neural models of reality be the source of potential energy fields we have hypothesised as resisting the elevation of a Value Surface and its associated Economic Trajectory?

Could a continuous neural remodelling response to new information, together with the competition for vital attention, explain why signals that are not reinforced dissipate, leading to a natural forgetfulness of markets?

Can innovation that is making information valuable actually be a physical phenomenon or should we be content to have a suitable physical analogue of this fundamentally social process?

The latter question may not be as outrageous as it may first seem.  We have discussed in Semantics of Information that deletion of information has thermodynamic consequences.  There are similar issues in other domains for which information may have a physical manifestation.

 

Applying Landauer’s principle to the information content of the universe, M. Paul Gough (2008) calculates that information energy makes a significant contribution to the dark energy that is hypothesised to have determined the dynamics of the expanding universe throughout its entire history.
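For reference, Landauer’s principle itself can be stated in one line; this is the standard textbook form, not a reproduction of Gough’s calculation:

```latex
% Minimum energy dissipated when one bit of information is erased,
% where k_B is Boltzmann's constant and T the absolute temperature:
E_{\min} = k_B \, T \ln 2
```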

 

NASA’s Hubble telescope reveals around 10,000 galaxies in its deepest view into the history of the Universe.  Image: NASA and the European Space Agency.

So why are adverts on TV repeated so often?

Imagine a world without the repeated marketing needed to encourage product purchase.  The dynamics of an economic trajectory indicate that goods may not simply remain frozen in space.  The potential retained within the Value Surface has already been diminished by previous Sale Events.  Now the inevitable dynamics of descent will be brought into play, and information will begin to be lost at an increasing rate, in a reversal of the previous ascent phase of the trajectory.  Investment and prices will fall as companies seek to convert the residual potential in the collapsing Value Surface into income, and consumers will forget how they once valued the goods.
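A toy model makes this descent concrete: perceived value decays a little each day unless a broadcast nudges it back up.  The decay rate, boost size and broadcast schedule below are illustrative assumptions, not measured quantities.

```python
decay_rate = 0.05                # assumed fractional loss of perceived value per day
advert_boost = 1.0               # assumed uplift from one broadcast
advert_days = set(range(0, 365, 14))   # assumed fortnightly rebroadcast

value, history = 10.0, []
for day in range(365):
    value *= 1.0 - decay_rate    # the natural forgetfulness of the market
    if day in advert_days:
        value += advert_boost    # reinforcement nudges the Value Surface up
    history.append(value)

# Without the boosts the value decays towards zero; with them it settles
# where reinforcement balances forgetting.
print(f"value after one year: {history[-1]:.2f}")
```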

This dynamic mechanical analogy provides an explanation for why commercial television repeats, and repeats again, advertisers’ content.  It provides an explanation of why salespersons continually need to repeat and reinforce the value proposition of their products.  So finally we can propose a derivative of the Labour Theory of Value Creation in which labour also works to prevent the depreciation of value due to the natural forgetfulness of dynamic markets.  Innovation therefore has a role not only in making information valuable in the first instance, but also in retaining the value of that information and opposing the tendency for that value to decline with time as a product matures.

As it is with commodities and technologies, so it is with whole companies that integrate the value of their various commercial activities.  The value of the company, as it appears in its stock market valuation, depends ultimately on the effective deployment of capital to propel the company on its own economic journey.  The forgetfulness of the market appears in the fluctuating valuations of publicly quoted stock.  Memories are short, and a regular injection of good news is needed to counter a fall in value.  Volumes of press releases evidence this common practice.  The qualitative similarity of the economic trajectories of commodities, technologies and companies, whilst these may differ substantially in timing and duration, together with the voraciousness of consumers to absorb this “vital” information, points to a common cause and effect arising from the action of an economic potential field.

 

Multiple images of beautiful independence might well convince a receptive individual of the transcendental properties of a particular perfume.

Even so, such images may need to be repeated on multiple occasions to overcome the natural forgetfulness of the fragrance market.

 

Dior J’adore advertisement

Notes:

[1] It may be considered equivalent to the statistical mechanics description of the thermodynamics of a liquid or a gas, which is at the origin of the models of economics developed by Paul Samuelson et al.

[2] Some consequences of taking an average measure of value perception are considered in A Labour Theory of Value Creation.

Of Dogs, Men and Crows

About one year ago we set out with a definition of innovation, that it is making information valuable, and with an aim to use the physical and biological sciences to add to the understanding of innovation as a social and economic activity.

In the previous posting Valoris Cognita Barcelona this journey led to the sketching out of some principles whereby the information content of an innovation might manifest itself in the neural circuitry of the brains of consumers, who might then perceive an associated value.

Unfortunately, consumer brains, like all models, produce approximations of the real world.  There can be no absolute reality in this subjective world, just opinions held with varying degrees of conviction, depending on how sensory information fits the cerebral model an individual consumer uses to understand the world in which he lives.  Furthermore, this cerebral modelling is not a uniquely human capability but is shared with other organisms.

Here we will start a catalogue of examples of other species for which information may be considered valuable.

The classic example of the perception of value associated with information is the classical conditioning of animal behaviour discovered by Ivan Pavlov in the early years of the 20th century.  Famously, the salivation that normally occurs when dogs find food can be triggered by other stimuli, such as the ringing of a bell which, through repeated association, the dog has learned to treat as indicating the arrival of food.

One might interpret this conditioned response of the dog as valuing the information communicated by the ringing bell as it might value the arrival of the food itself.
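The growth and extinction of such an association is commonly captured by the Rescorla–Wagner learning rule (a standard model from the conditioning literature, not one used explicitly in this text); a minimal sketch with illustrative constants:

```python
def rescorla_wagner(v, outcome_present, alpha=0.1, lam=1.0):
    """One trial: associative strength v moves towards the outcome value.
    alpha (learning rate) and lam (outcome value) are assumed constants."""
    target = lam if outcome_present else 0.0
    return v + alpha * (target - v)

v = 0.0
for _ in range(30):                       # bell repeatedly paired with food
    v = rescorla_wagner(v, outcome_present=True)
print(f"after conditioning: {v:.2f}")     # approaches 1.0

for _ in range(30):                       # bell without food: extinction
    v = rescorla_wagner(v, outcome_present=False)
print(f"after extinction:   {v:.2f}")     # decays back towards 0.0
```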

Whilst unconditioned responses, which are innate and naturally occurring, appear to be hardwired in deeper and more primitive parts of the brain, learned conditioned responses arise in the cerebral cortex that is responsible for higher-order intelligent behaviour.  The initial neural correlates of the perception of value can thus be identified.

Ivan Pavlov

Since the discovery of the physiological basis of classical conditioning there has been widespread application of the concept in marketing and advertising[1].  It is unsurprising that the learned response of Pavlov’s dogs to the various stimuli that provoked their salivation can be associated with the desires of consumers in shopping centres as they encounter the brands that line the shelves therein.  In this case the brand communicates the necessary information for a receptive consumer to perceive value in the associated product.

Wormhole: Consumer Behaviour: There is a great deal to learn from other fields, especially when it comes to consumer motivation and behaviour.

 

A more recent study of a quite different aspect of animal behaviour, concerning reactions to inequality in the treatment of individuals, also indicates a perception of value in species that enjoy a complex social behaviour.

In 2013, Claudia Wascher and Thomas Bugnyar [2] reported on the behaviour of pairs of crows in which one individual is rewarded preferentially relative to the other.  The birds had learned to exchange a token for a reward of food.  The study revealed that the birds’ behaviour depended on the inequality the researchers introduced into the reward system:-

A view from the office window.  Crows and ravens have cognitive abilities similar to primates, especially in their social interactions, in various forms of cooperation and problem solving, and in their high selectivity in partner choice and in coalition and alliance formation.

*   If only one bird of a pair was rewarded with food for the same exchange task, the willingness of the unrewarded bird to participate in the token exchange diminished[3].

*   If one bird received lesser-quality food for the same exchange task, its willingness to participate in the token exchange also diminished.

*   A bird receiving lesser-quality food for the same exchange task might even refuse its reward, even though it had already paid the cost in the token exchange.

*   If one bird was given food as a gift whilst a second had to “work” for the food through a token exchange, the working bird’s willingness to participate in the token exchange was also reduced.

*   Different individuals respond to inequity in different and complex ways, making the above findings apparent in the statistics of the population rather than in every individual on every occasion.

This response to inequality is interesting in itself as it mirrors the human preference for fairness in reward distribution, where even a person receiving a disproportionately higher reward can be dissatisfied by an unequal distribution.  Whilst primates behave in a similar manner, the inequality response in dogs is determined solely by the presence or absence of the reward and not its quality.  Fish, on the other hand, appear completely insensitive to inequality.

Crows have a complex social behaviour and accordingly their behavioural response to inequality is highly sensitive.  Furthermore, for this reaction to occur, these birds must show attributes that are particularly relevant to the current discussion on value perception.  This sensitivity to inequality requires the birds to have:-

  • An inherent perception of the relative value of different items, upon which their response to inequality is determined.
  • An inherent perception that the cost (in terms of the token exchange) should be equivalent for the same reward, as recognised in the food received.
  • A perception that the value of a reward is inherently related to the work required to acquire it.

So it seems that the Value Surface concept and even a Labour Theory of Value have behavioural roots that may have arisen independently in species of mammals and birds that share complex social interactions within their communities[4].

Smith, Ricardo and Marx could recognise their classical articulation of economic motivation in this fascinating insight into animal behaviour that appears to be associated with the co-operative tendencies of the species concerned.  All three repeatedly emphasised that such economic behaviour emerges through social interaction, and so it seems.  However, it also appears that social relationships can condition the associated neural responses in individuals.

It is therefore significant to note that perceptions of value, whilst they must originate through cognition and brain function, are fundamentally associated with a society and the complex social relationships that exist therein.  There is such a thing as society.  Furthermore, Sale Events are dependent on more than a simple individual comparison of cost and benefit.  Environmental factors within the society play a role, one of which is the fairness and equality that underpin the transactional behaviour.

 

Notes:

[1] Limbad, Shaileshkumar J.  “The Application of Classical Conditioning Theory in Advertisements”, International Journal of Marketing and Technology 3.4 (Apr 2013): 197-207.

[2] Wascher CAF, Bugnyar T (2013)  Behavioral Responses to Inequity in Reward Distribution and Working Effort in Crows and Ravens. PLoS ONE 8(2): e56885. doi:10.1371/journal.pone.0056885

[3] The willingness to participate in the token exchange is referred to as “exchange performance” and is the likelihood that a token exchange will occur.  Crows, like humans, vary in their responses to stimuli, as is recognised in the Value Surface concept.

[4] This explanation seems more plausible than the alternative that these behavioural traits arose in the common ancestor of birds and mammals, around 320 million years ago.

Valoris Cognita Barcelona

In memory of Joe Egan, born 25th February 1916

Here we move tentatively into the terra incognita of the physiology of value perception.  Expect amendments to follow.

The previous posting on Ubiquitous Error Elimination leads us to consider that value perceptions gained through the automatic numerical meanderings of a model fitted by the Least Squares Method may have something in common with the value perceived by real consumers of actual goods.

A point of commonality is to consider one’s brain as a model that delivers consciousness of observed reality.  The observations are sensory inputs that comprise many analog signals, which an internal perception must seek to organise into a cognitive model with a minimum error value.  Although the physiological processes are largely unknown, one might expect some form of neural modelling and fitting to observed reality as part of the emergence of conscious awareness.  If so, then all such model fitting will be subject to the general principles of navigating a complex error surface.[1]

In The Grand Design (Bantam Books, New York, 2010) Stephen Hawking and Leonard Mlodinow set out to address some very big questions employing “model-dependent realism”, which assumes our brains form models of the world based on information received through the senses.  There is no definitive true reality and many such models can co-exist and may be adopted dependent on their usefulness and value.  Such individual perceptions of an external reality are likely to depend on intrinsic assumptions and will probably find acceptable limits of error that are sufficient to ensure survival.  To do more, at least in a primitive society, would be a waste of energy.

If one could link the potential field that we have hypothesised as acting to constrain the creation of value in the raising of a Value Surface to an error surface associated with cognition and perception, then this could indicate some physiological foundations for the earlier hypothesising.  Some physiological potential must be driving the descent down an individual’s cognitive error surface to achieve a reliable perception of reality.  Otherwise one’s consciousness would have no physical cost and Maxwell’s Demon might happily defy the 2nd law of thermodynamics.  Information that confers survival in a primitive society, and perhaps quality of life in a modern one, must be classified as more valuable than a random replication of useless information, conferring a higher use-value on external objects or a greater exchange value.  Hence information can be made valuable through the very processes of biological perception.

This sketching out of an association between the physics of value perception and the biological origins of consciousness is entirely speculative.  A deeper analysis of this association must await another day.  However, the general concept of an intrinsic neural model that is fitted to observed reality does lead to some interesting observations.

This concept explains individual differences of opinion that are reflected in an oscillating Value Surface.  Individual perceptions of the real world clearly differ.  The starting points for fitting a neural model to these perceptions should certainly differ.  Different acceptable local minima on the error surface may provide different individuals with different interpretations of the same reality.  Human beings probably do not process sensory data in exactly the same way, and clearly can reach different conclusions when given similar scenarios to manage.  If people behaved as automata, then each commodity Value Surface would be a rigid plane of equal valuation.

Yet people’s opinions and beliefs are remarkably stable for such a dynamic fitting of internal model to external reality.  Such stability could arise if every new minimum search begins at the most recently found minimum for a comparable reality, perhaps retrieved from memory.  In this case only the change from the previous reality needs to be reconstructed in the modified internal model, which then provides the next starting point in a continual modification of an internal neural model to reflect changing perceptions of a real world.
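This resembles what numerical analysts call a warm start: each new search begins at the previous minimum, so only the change in reality has to be re-fitted.  A minimal sketch with SciPy (the linear “model of reality” and its data are invented for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, x, y):
    a, b = params
    return y - (a * x + b)            # a simple linear 'model of reality'

x = np.linspace(0.0, 1.0, 50)
y_old = 2.0 * x + 1.0                 # yesterday's reality
y_new = 2.1 * x + 1.0                 # today's slightly changed reality

first = least_squares(residuals, x0=[0.0, 0.0], args=(x, y_old))
cold = least_squares(residuals, x0=[0.0, 0.0], args=(x, y_new))
warm = least_squares(residuals, x0=first.x, args=(x, y_new))  # warm start

# The warm start typically needs fewer function evaluations, because only
# the small change from the previous minimum must be reconstructed.
print(f"cold start: {cold.nfev} evaluations, warm start: {warm.nfev}")
```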

Barcelona Seafront

Whilst cogitating on this very subject of the fundamental origins of the conscious mind, the author was sitting on a bench on the Barcelona waterfront.  A very brief interruption was made by a smart thirty-something who mixed languages rapidly in an urgent attempt to communicate.  Within ten seconds the chap had disappeared, along with a bag containing everything that was valuable, snatched by a second person during the distraction.  Passport, wallet, travel tickets, laptop, money: all had disappeared.  Yet in disbelief I imagined I could see my familiar grey stolen rucksack where it should have been, on the bench beside me, for a good few seconds before grim reality fully allowed itself to be recognised.  Reality had changed too quickly, and it seemed the refitting of my internal model was taking long enough for the processing delay to be noticeable.  Later, at the UK Passport Office, I was informed that Barcelona is the bag-snatch capital of Europe; had I known this, the adjustment to the new reality might have been smoother.  Or maybe I would have protected my belongings with more conscious deliberation.

Several years on, the memory of that minor Barcelona trauma is fresh and easy to recall.

As considered in “Writing the Information”, can such vivid memories be the river valleys etched into the error surface of my consciousness by the cascading experience of these earlier events?  This is a subjective and even metaphysical suggestion, but such a cognitive system should certainly be an attribute in favour of survival and as such could be a selected epigenetic trait.  Important information would be considered valuable by its hosts.  I will be more careful of my luggage on any future visit to Barcelona[2].

Whatever the mental mechanisms are, and however current controversies on the nature of the mind and consciousness play out in the future, the subject is central to understanding innovation.  Not only are the intellectual processes that act on information at the very origins of innovation, but the subjective appreciation of value by the consumer, whatever the product of the imagination, can be traced back to its source in the obscure processing of the human brain and its constituent 100 billion information-processing neurons.

 

Back in Barcelona in 1887 Santiago Ramón y Cajal started to work with the new Golgi staining method, a silver preparation which, for the first time, enabled neurons to be seen clearly through a microscope.  It was the start of the modern discipline of neuroscience.  Ramón y Cajal used the Golgi method to produce many graphical illustrations of complex neuronal shapes.  On observing these cellular structures, exemplified below, it is difficult not to see similarities to the dendritic patterns considered in “Writing the Information”, and to infer that the associated metaphor might extend into the neuroscience domain.  That is, the tree-like neuronal patterns once again suggest, albeit circumstantially, that an energy transmission function is at the heart of these microscopic constituent cellular elements of the brain and central nervous system.

Santiago Ramón y Cajal shared a Nobel Prize with Camillo Golgi in recognition of their work on the structure of the nervous system, which today forms the “Neuron Doctrine” that is a basis of the current understanding of the anatomy and physiology of the central nervous system.

 

Purkinje Neuron

Drawing of Purkinje neuron by Santiago Ramón y Cajal, 1899;
Instituto Santiago Ramón y Cajal, Madrid, Spain.
Acknowledgement to Wikipedia: http://en.wikipedia.org/wiki/File:PurkinjeCell.jpg

The dendritic structure of neuron anatomy and physiology enables the cellular behaviour to be mapped onto the generic “Green Box of Innovation” template introduced earlier.  In this case, an electrical signal flows from the multiply connected and complex dendritic structures, through the central cell body and along a single axon that can reach across millimetres, to stretch out to a branched terminal region, there to connect to dendrites from neighbouring neurons.  The axon-dendrite connection is known as a synapse, in which the communicated signal is transferred by chemical means.  Information transfer through the synapse requires a transformation of electrical energy to chemical energy in neurotransmitters, and then back to electrical energy as the neurotransmitters bind to synaptic cell receptors to begin the transmission through the next dendritic link of a connected neuron.

 

Neuronal Green Box

 

A synaptic link connecting neurons can either excite or inhibit the transmission of an electrical signal, known as an action potential, in connected dendrite links.  Perhaps 10,000 such dendrite signals converge on a cell body to give rise to a single event at the axon hillock, the point where the filamentous axon connects with the cell body.  The integration of these many dendrite signals, which must together cross an energy threshold to determine whether the neuron will fire a single electrical pulse through its axon to communicate with its cellular neighbours, is a main physiological function of the brain and other parts of the central nervous system.  These pulses may last for only a millisecond, and each neuron may contribute to the information flow up to 100 times per second.  Clearly there is much information flowing through the average brain.
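A leaky integrate-and-fire model is the usual minimal caricature of this threshold behaviour; the constants below are illustrative rather than physiological.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

threshold = 0.5     # assumed firing threshold at the axon hillock
leak = 0.95         # fraction of membrane potential retained each step
v, spikes = 0.0, []

for t in range(1000):                      # 1 ms steps: one simulated second
    # Net effect of many small excitatory and inhibitory dendritic inputs.
    dendritic_input = rng.normal(loc=0.01, scale=0.05)
    v = leak * v + dendritic_input
    if v >= threshold:                     # integration crosses the threshold:
        spikes.append(t)                   # the neuron fires a single pulse...
        v = 0.0                            # ...and the potential resets

print(f"{len(spikes)} spikes in one simulated second")
```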

 

We have described an energy transfer process that needs to reach a critical threshold before a neuron will fire and propagate its signal.  Billions of such signals must converge to create a perception of value at a Consumer Product Interaction that is a precursor of a Sale Event.  Again this is an integration of received information into an “all-or-nothing” decision to purchase.  Though differing in scale, similarities appear between the energy flows of the action potentials of neural circuitry and those operating on consumer preferences in the shopping centre.

There are Artificial Intelligence (AI) models that attempt to replicate, on a very small scale, the manner in which the brain might naturally function.  Neural networks, an example of which is shown in the figure below, are brain-like numerical models of layers of connected neurons whose connection properties provide a generic set of parameters that can be specified to characterise the behaviour of the system.  These connection parameter values can be estimated using the Least Squares Method, navigating to the lowest point on an error surface between a simulated behaviour and a known “training set” of real output values.  Once the simulation with the smallest error has been found, the associated neural network parameter values should faithfully reproduce the real world, so long as this remains within the limited confines defined and exemplified by the training set.

Neural Network

A typical neural network connecting four input neurons to two output neurons
through a single intermediate layer of six neurons.

AI neural networks can be useful as they continuously learn from new data, just as humans might.  The predictions they make can be informative, as are human intuitive predictions.  They are also susceptible to weaknesses analogous to the ambiguities of human understanding.  There may be many local minima on the error surface to trap the descending Least Squares Method.[3]  Also, like the brain, a neural network model is adaptable to fit the many diverse challenges an organism might face, but this means the solution is an arbitrary fit to observable data.  There is nothing intrinsic in the model that represents the world being simulated, nor are there any overt assumptions that can intelligently be applied to simplify that real world.
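A minimal version of the network in the figure (four inputs, six hidden neurons, two outputs), trained by descending a squared-error surface with the momentum term of note [3]; the training set and learning constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Invented training set: 200 examples of 4 inputs and 2 target outputs.
X = rng.normal(size=(200, 4))
Y = np.stack([np.tanh(X[:, 0] - X[:, 1]), np.tanh(X[:, 2] * X[:, 3])], axis=1)

# The network of the figure: 4 -> 6 -> 2, tanh hidden layer, linear output.
W1 = rng.normal(scale=0.5, size=(4, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 2)); b2 = np.zeros(2)

lr, momentum = 0.01, 0.9                    # assumed learning constants
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]

for step in range(2000):
    H = np.tanh(X @ W1 + b1)                # forward pass
    err = (H @ W2 + b2) - Y                 # residuals on the error surface
    # Backward pass: gradients of the mean squared error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1 = X.T @ dH / len(X);  gb1 = dH.mean(axis=0)
    # Momentum (note [3]) carries the descent through small local undulations.
    for i, (p, g) in enumerate(zip((W1, b1, W2, b2), (gW1, gb1, gW2, gb2))):
        vel[i] = momentum * vel[i] - lr * g
        p += vel[i]

print(f"mean squared error after training: {np.mean(err**2):.4f}")
```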

In the real brain of the analyst, a real multi-billion-neuron network can be applied to explore the world using models with some conceptual simplification.  Effectively this positions human processing power at the front end of the entire modelling process.

This is the origin of An Innovative Enterprise Simulation that uses the Method of Least Squares to provide a vision that would otherwise be unavailable to the unassisted human senses.

It is a model to explore the process of innovation itself.

 

Notes:

[1] An error surface emerging from the fit of neural systems to physiological signals will certainly comprise a huge number of dimensions.

[2] The points here are discussed in considerable detail in The Believing Brain by Michael Shermer (Constable and Robinson Ltd, London, 2012) who considers that many beliefs are hard-wired into our brains and then consciously rationalised often through the selective use of information and associated mechanisms of bias.

[3] Neural network algorithms can in fact apply mechanical concepts such as momentum, whereby the speedy descending search for a minimum can overrun the lowest local point; though the search might then need to retrace its steps, this can avoid getting stuck in a local crevice on the error surface.

Ubiquitous Error Elimination

Evolution by Error Elimination:  There is one feature on the landscape of innovation that has already been recognised and which will arise again, time and time again.  It appears in all innovation management and evolutionary systems.  It is essential for the creation of new knowledge and for the perception of its value.  It is a fundamental process in the building of models and the fitting of these models to the real world.  These are some of the guises of the ubiquitous Error Elimination.

In its most fundamental form Error Elimination appears in the epistemology of the philosopher of science Sir Karl Popper.  In The Logic of Scientific Discovery (1934), Popper recognised an asymmetry in the nature of knowledge: whilst no amount of empirical evidence can prove an assertion to be true, a single case alone may prove it to be false.  It follows that no theory can definitively be proven to be true.

In later work Popper went on to explore how scientific knowledge, which originates in the subjective mind of the scientist, goes on to become an “objective” feature of the world.  In Objective Knowledge: An Evolutionary Approach (1972) Popper develops a “three-worlds” view in which all physical artefacts are “World 1” objects and subjective thoughts and ideas belong to “World 2”.  Popper’s “World 3” is populated by things originating in the human mind but which have gone on to have an existence beyond the confines of that mind.  These include abstract concepts, the content of all books, designs, theories, etc.

Popper’s Three-Worlds Relationship

Combining the approach of challenging the validity of existing theories with empirical tests designed specifically to bring about their failure, together with the creation of objective scientific knowledge from those theories that survive this ordeal of falsification, led Popper to conclude that scientific knowledge creation proceeds through an evolutionary sequence:-

Problem 1 >> Tentative Solution >> Error Elimination >> Problem 2

 

Here, the tentative solution to the initiating problem is continually refined in the light of new empirical evidence, until the new data fundamentally conflicts with existing knowledge, which gives rise to a new problem for the cycle to repeat.

 

Error Elimination Creates Value by Risk Reduction:  In earlier work we have extended the evolutionary epistemology of Karl Popper to reach technology innovations that might emerge from the scientific research upon which the original work of Popper is based (Egan et al., 2013, Williams et al., 2013).   This involves an explicit recognition of a subjective Value Appreciation stage which forms the link between subjective World 2 and objective World 3 in Popper’s evolutionary knowledge theory.

Indeed, for scientific knowledge, Popper describes such a value appreciation that is achieved through inter-subjective testing, expert peer review and publication and through which the knowledge becomes objective.

4-Point Innovation Cycle

Popper’s evolutionary epistemology cycle, including an explicit identification of Value Appreciation

Initially, there is often a high risk that a Tentative Solution will not consistently resolve its initiating problem in practice, and proof-of-concept projects are required to understand and manage this risk.  This conforms to Popper’s Error Elimination stage, the output of which may comprise accumulated information on designs, and the technical and commercial evaluations from which to conclude the potential benefits and residual risks of an innovation.  In fact, the reduction in risk through Error Elimination can be interpreted as a creation of value through innovation, as it is this value that is perceived by the consumer of the information.

In terms of the previous “Green Box of Innovation”, which provides a generalisation of an innovation process based upon enhancing the value of information, it is the parameters of the “box” that determine the operational form through which input information is transformed into outputs that have utility and value.  Maximising the value of the outputs is once again an application of Error Elimination: discovering the parameters that provide the best operational form for the Tentative Solution to resolve the real-world problem it is tentatively designed to address.

In a direct analogy with the growth of scientific knowledge, the existing Tentative Solution should be repeatedly challenged.  The empirical information will continue to provide evidence of utility and thereby continually adjust perceptions of value.  Hence, feedback loops operate through which the value of the Tentative Solution can be enhanced through the Error Elimination process.

 

Error Elimination by Least Squares:   An innovator may deploy a powerful cocktail of creativity, intuition and experience to make a Tentative Solution relevant and valuable by Error Elimination.  Computers are not gifted with such human capabilities, but on the other hand they excel in their relentless ability to crunch numbers.

The Least Squares Method is one of a number of numerical optimisation techniques whereby the outputs of a computer simulation can be ‘fitted’ to real-world data.  To do this, some starting values of the model parameters are selected, without prior knowledge, and a simulated behaviour is derived.  The simulated outputs are compared with real life, and the difference is a measure of the error of that simulation.  This initial error can indicate how to adjust the model parameters to achieve a better fit to the empirical data.  The Least Squares approach thus enables a further, better guess at the model parameters, and onward rolls an iterative process of Error Elimination: continually improving the match between the simulated and the real, minimising the error, and honing in upon parameter values that may provide new insight into the real world through the window of a best-fit model whose parameters now describe real behaviour.

Error Elimination we have seen to be part of the process of innovation.  With the Least Squares method it becomes an algorithmic procedure to navigate an error surface.  It works as follows.

It is as though a blind wanderer is placed into a mountainous terrain (for a two parameter model, where the error is a vertical third dimension) with the task of finding the point of lowest altitude, for at this point of minimum error there can be found some useful insight.  Her tool is a stick of enormously variable length through which she can perceive the elevation of the surrounding landscape.  Down steeply sloping hillsides her stick will extend to accelerate descent and avoid the confusion of small rocky undulations.  Into the valley her guide is shortened to follow a meandering contour, always descending towards her goal.  When the topography becomes tortuous, progress is restricted to very small steps, frustrating advancement as the blind wanderer must squeeze through each crevice eventually perhaps to expose wider valleys.  Finally, when all around is higher from the shortest to the longest reach, the wanderer may wonder if she is at the unique point of minimum error.  The wanderer may mark that spot and start again and then again from distant and disparate origins to confirm uniqueness[1], although this might not be necessary.  She may have acquired a valuable insight.
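In algorithmic terms the wanderer is a descent with an adaptive step length and random restarts.  A sketch on an invented two-parameter error surface (the terrain, step rules and restart count are all illustrative):

```python
import numpy as np

def error_surface(p):
    """An invented two-parameter terrain with several local valleys."""
    x, y = p
    return (x**2 + y**2) / 10.0 + np.sin(3 * x) * np.cos(3 * y)

def slope(p, h=1e-6):
    """Finite-difference gradient: the wanderer's probing stick."""
    dx, dy = np.array([h, 0.0]), np.array([0.0, h])
    return np.array([(error_surface(p + dx) - error_surface(p - dx)) / (2 * h),
                     (error_surface(p + dy) - error_surface(p - dy)) / (2 * h)])

def descend(start, step=0.5, tol=1e-8):
    p, e = np.asarray(start, float), error_surface(start)
    while step > tol:
        trial = p - step * slope(p)
        if error_surface(trial) < e:          # downhill: lengthen the stick
            p, e, step = trial, error_surface(trial), step * 1.2
        else:                                 # overshot: shorten the stick
            step *= 0.5
    return p, e

# Restarts from distant and disparate origins guard against local minima.
rng = np.random.default_rng(seed=3)
best_p, best_e = min((descend(rng.uniform(-3, 3, size=2)) for _ in range(10)),
                     key=lambda r: r[1])
print(f"lowest point found: error {best_e:.3f} at {best_p}")
```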

Watching the Least Squares algorithm operate in the virtual world of a computer, it is easy to imagine the numerical model as a blind wanderer seeking the best fit to measurements of reality.  The patterns of descent show a striking resemblance to those previously described in “Writing the Information”, although the topography of an error surface runs through n+1 dimensions, where n is the number of model parameters.  However, this complexity is not relevant for the Least Squares algorithm, as Error Elimination proceeds just as it would in our familiar three dimensions.

In an ideal world the final error could be completely eliminated.  It would be an unmistakeable match of a perfect model with perfect data.  Yet all measurements contain their own errors (noise) and in the output of all worldly processes the primary signal is polluted by artefacts which confound perfection with ambiguity.  Also, all models must necessarily be simplifications of the real world, with a judicious ignorance of secondary and tertiary influences.  A perfect model of the real world requires the real world to be the model. For the innovator, it is sufficient to be close enough for practical purposes.

So the innovator must still contribute an essential human element, to innovate upon the structure of the model to better conform to real world observations.  The investigator thus enters into a liaison with the computer to become an n+2 dimension of a hybrid man-machine error surface, which must be navigated to make the model converge towards reality.  Here the inventor is the creative agent giving the model its operational form and the innovator contributes by forging the relationship of the model with reality.  And there may be as many models as pictures hung in a gallery, for value is not in the picture itself but in the understanding gained of its subject.

It is perhaps surprising or even problematic that an automatic computer routine such as Least Squares may be suggested as a means or even a metaphor for innovation.  However, it is not a paradox if the algorithm works on new inputs, so that the path taken to descend the error surface is new and may lead to new and potentially valuable insights.  Of course, if this were repeated using the same inputs it would be repetitious and nothing of value could emerge.  Nor is there any accumulation of value as the original path descends to the point of minimum error, as it is only when this point is reached that any value is realised in the insight provided by the “best-fit” model parameters and outputs.

In all the above cases innovation is making information valuable through a process of Error Elimination.  That analogous mechanisms appear in both human and machine applications suggests that the process of innovation itself may not be an entirely social phenomenon.

 

Notes:

[1] This may be considered to be a rather trivial instance of Popper’s challenge of falsification.

Green Boxes of Innovation

 

Thank you M. Laurent Fabius and all 195 nations’ delegates for your achievements at COP21 in Paris today.

It is now time to pull together threads from earlier discussion to sketch a morphology for a generic system for innovation.  Our earlier Black Boxes of Innovation can be combined with the idea that innovation is information made valuable through bioenergetic transformations. The black boxes then become green.

Green Box of Innovation

In this generic system of innovation, inputs may include empirical information, other raw materials and energy.

These inputs are transformed in the Green Box into outputs comprising information that has been made valuable.  This information may be embedded in products and services by commercial organisations.  It may be conserved in the genetic code and epigenetics that confer survival benefits on organisms.  It may be the content of a newspaper or the legislation to govern a society.

The parameter values set the operational state of the Green Box.

The creation of valuable information through innovation and the replication of this information through production are intrinsically coupled within the Green Box[1].

The number of parameters can vary from zero to very many, each providing an extra degree of freedom or variability to the Green Box operations.  Indeed the structure of this website has been designed around the concept of a Green Box with no parameters, as it provides a qualitative commentary that connects inputs to outputs.  A more quantitative or algorithmic association between inputs and outputs may use parameters as operational variables for the Green Box – in weather forecasting for example.

The change of colour is used to indicate that the Green Box is not simply an automatic means of processing selected inputs into useful and valuable outputs.  There are additional features that together establish a process of Innovation that is Making Information Valuable:-

  1. There is a dynamic search for relevant input information
  2. There is an integration of various inputs into a coherent interpretation, which may legitimately be challenged at any time by new findings and the output information appropriately modified
  3. There is an optimisation through which the value of the outputs is maximised for the available inputs
  4. Information creation through innovation and information replication through production are to varying degrees coupled within an innovation system

The above four points specify a generic behaviour for an abstract innovation system.  In “Writing the Information” we consider how this behaviour arises from energy throughput and transformation in physical and biological systems, and we have extended this concept into the economic and societal domains.


The tree as a Green Box:  Information creation through innovation (left) and information replication through production (right) are coupled in this biological system

An analogous behaviour of innovation systems with a tree-like morphology can be observed in various real physical, biological and economic settings which are considered below.

Wolff’s law reveals that bones are continuously remodelling to enable a skeleton to adapt and be optimised to support the loads to which it is exposed.  In the ends of long bones, such as at the hip joint of the femur shown here, there are filamentous tree-like trabecular structures of cancellous bone that channel the complex forces transmitted across the joint into the stiffer and stronger cortical bone below.

  1. It is a dynamic system where input information regarding physiological loads determines whether trabeculae are reinforced or resorbed
  2. The cancellous trabeculae combine to form an integrated structural system
  3. In healthy bone the system is optimised to provide sufficient strength for normal physical activity with lowest physiological cost
  4. The trabecular geometry is broadly similar for all individuals from the same species

Remodelling also occurs in many other tissues through “mechano-transduction” mechanisms that are largely unknown, although collagen fibrils and tree-like proteoglycan structures have a role to play.  Numerical models are being used to explore this specific Green Box of innovation.

With every heartbeat blood gushes through the widest lumen of the aorta and outwards into the body, through finer intermediate arteries and down to the finest capillaries.  This tiny capillary blood flow delivers oxygen and energy to the cells of adjacent tissues.  Tissues cut off from this energy supply will atrophy and die.

  1. Cells can produce angiogenic factors that call for the creation of new capillaries to enhance blood supply when this is needed
  2. The cardiovascular anatomy is a highly integrated physiological system
  3. Bloodflow can shut down preferentially in peripheral regions at times of stress to protect the vital organs
  4. Cardiovascular anatomy is broadly similar for all individuals from the same species

After the delivery of oxygen and energy is made, the vascular system will collect the outflow, which re-emerges in a series of confluences back into the mainstream venous bloodflow, back to the heart to become re-energised and to repeat this cycle through which energy is continuously allowed to flow to sustain life.

BBC News, Google, Le Monde and Twitter logos

News items emerge from interesting events in minor crevices of society and are transmitted through social networks and regional agencies into the major conduits of national and international press.  In this way stories are selected to fill the columns of newspapers which flow out through their distribution channels to find their way onto the doorsteps of a nation the following morning, and onto the web pages of computers at any time of the day.

  1. The search for the latest scoop is a never-ending quest for the journalist profession
  2. Various information sources are used for corroboration to “stand-up” a story in a respected publication
  3. Fierce competition and a 24/7 news cycle with fast developing technologies for information dissemination imply the need for continual optimisation of reporting operations.
  4. Dissemination architectures are relatively fixed, and through them the press mixes the creation of new information with the copying and distribution of the latest news.
Since the 1950s the innovation of the shipping container has been responsible for a remodelling of transportation networks around new ports that provide docking hubs for massive container ships.

  1. The old ports such as London and New York have been displaced to Felixstowe and New Jersey
  2. Tree-like transport infrastructures grew out from these container ports to convey the goods from manufacturing centres, and in reverse to deliver the transported goods to the shops and homes of consumers
  3. Whilst the new container ports grew in size and capacity, the old ports atrophied and died through disuse in an analogous manner to bone remodelling.
  4. Whilst the daily operations of a container port are the repetitive actions of a production system, the development of the infrastructures around the production hub provides one element of innovation in this system
And so it is with politics that one can identify analogous innovative patterns.  Votes cast into the ballot boxes of democracies elect representatives, who themselves channel their authority into government by an executive, a cabinet, that exercises its right to develop policies that flow out through the tributaries of the state, back into the constituencies and on into the homes of the voters.

  1. Policies evolve with time aiming to attract the interest of an elector population.
  2. A manifesto integrates policies into a coherent package of information intended to be of sufficient value to be “sold” in return for a vote.
  3. The information content and the dissemination networks are in a continual state of flux to gain the furthest reach into an electorate.
  4. Anyone who has worked on the telephones and doorsteps at election time understands the need to continually repeat policy benefits.

Taxes flow in a parallel conduit, from each member in that same society through into the huge central fiscal channel of the Government treasury, whereupon they are redistributed outwards according to the particular ideologies and macroeconomic priorities of the exchequer, back into the many small niches of the society from whence they came.

Innovation and production are economic activities.  In his Économique et mécanique Léon Walras sought to explain that it was of little importance that physical phenomena can easily be measured whilst economic ones cannot… because with each exchange, consciously or unconsciously, a person will know deep down whether his needs are satisfied or not in proportion to the value of the goods exchanged.  Innovation and production within the generic Green Box can be explored using models and simulations of this perception and exchange of value.

 

Notes:

[1] The special cases of pure innovation and of pure production are described in “A Labour Theory of Value Creation”.

Black Boxes of Innovation

At various points in the Foreground and Background pages of this website key points arise from “models” and “simulations”.  What do we mean by these terms?

Essentially, models are constructed to explore aspects of the physical world that are not directly measurable or accessible by the senses.  In fact, one might even consider that such simulations are normally employed to interpret what the senses might actually sense – but more on this later.

A model may conveniently be considered as a “Black Box”, with inputs that are directly measurable or sensed.  This Black Box may have parameters that control its inner workings, and its outputs provide new information[1].  Of course the input information may be nonsensical or the model operation erroneous, making any new information arbitrary and useless.  Outputs need to be useful for the model to be valuable.  Take a television, for example, where the input radio signal is received through an aerial, the inner circuits are tuned to receive a particular frequency, the numerical content is decoded for the channel (parameter) that is selected, and the output is displayed to inform and entertain the viewer.  It is not necessary to understand exactly how a television works to appreciate its value.
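In code, a Black Box reduces to a parameterised mapping from inputs to outputs; the toy “television” below is a generic illustration of the idea, not a model used elsewhere on this site.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class BlackBox:
    """Measurable inputs in, new information out; parameters set the inner workings."""
    parameters: dict
    transform: Callable[[Sequence[float], dict], list]

    def run(self, inputs: Sequence[float]) -> list:
        return self.transform(inputs, self.parameters)

# A toy television: the channel parameter selects which part of the
# received signal is decoded into an output for the viewer.
tv = BlackBox(parameters={"channel": 1},
              transform=lambda signal, p: [signal[p["channel"]]])
print(tv.run([0.2, 0.7, 0.1]))    # -> [0.7]
```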

Innovation itself may be treated using a Black Box approach, by converting empirical observations and scientific research into information that has utility and value.  In this application we have connected the models that make up the tools of scientific research to the mysterious Black Box through which value created through the researcher’s endeavour is appreciated.

Black Boxes of Early-Stage Innovation

For further information see: When Science Meets Innovation: a new model of research translation

We have modelled the subject of value appreciation using a Value Surface that maps perception of value across a statistical population of consumers.  The elevation of this Value Surface in a hypothetical economic potential field provides a means to link investment and the innovative endeavours of an enterprise to value created.  Let us consider why a model might be useful to explore such a potential field.

 

A Flat-Earth Person Finds Dimension Three

Imagine that you are a flat-earth person.  Not one of a conventional, three-dimensional kind who is able to believe that at some point you may slide off the edge of our saucer-shaped planet.  Rather imagine that the force of gravity in some way acts to compress your perception of height to an infinitesimal thinness.  You will still live on Earth as you do today, but how different your view of the planet would be.  You would truly be a two-dimensional person.  Let us consider what you may see and how you will fit these observations into an understanding of your world.

If you are initially resident on a horizontal plane, then this surface will stretch out before you.  At various points in the distance the altitude of the terrain may change.  Any increase in height, even if this is just a gentle slope, will be seen as an impenetrable barrier.  Likewise, a real lower piece of ground will appear as a hole to oblivion.  You will see the edges of these barriers and holes as lines of constant height, just as contours appear on an Ordnance Survey map.

Your task today is to move to your next appointment, for which you have a map giving the details of your journey.  This map may be a length of string containing paired instructions of distance and direction.  First go 0.7km at 123 degrees, which should bring you to a hill.  Your perception of this hill as an insurmountable barrier does not change but something strange happens as you begin to climb.  The plain on which you approached instantly disappears from view. Facing you now is a solid vertical wall, behind is a limitless abyss.  You only perceive the linear contour that wraps around either side of the hill on which you climb and disappears from view.  But something is happening as you move forward.  This contour is constantly changing and, more importantly, you are using energy although this does not seem to be having any effect.  You are not afraid as the steps you are taking and their associated changes of contour are all precisely detailed on the map you have, so that you cannot possibly be lost!

Finally and suddenly, the hill you have climbed breaks into a plain and again your full two-dimensions of perception are restored.  In the distance the new plain stretches away to further barriers and holes.  Following your map, a descent into a hole is the reverse of the hill you have just climbed.  You step into the abyss whilst behind, a perceived contour gives shape to an otherwise impenetrable barrier.  But caution must be exercised here.  Some holes, the really steep ones, should be descended with care and are best avoided.  Energy is returned too fast to be easily dissipated by your bodily processes.  And then there are the ‘strange’ holes.  These are clearly identified on the string-map as areas to be avoided at all costs – few return from such a descent and, when they do, they are strangely wet.

Shadows lengthen as you move across the new plain.  The impenetrable barriers radiate darkness and as the day moves into evening, this dark-radiation grows in intensity until the effects from all barriers superimpose.  It is important for you to reach your destination before this complete darkness falls.

Before you fall asleep, satisfied with your two-dimensional endeavours of the day, you find time to unfurl another coil of string.  It is a popular book on genetics and evolution, and you learn from the string of characters how wonderfully optimised you are for two-dimensional survival by your genetic template.  The mechanism is a model of natural selection through the duplication of a molecular double zig-zag.

Your perception of the two-dimensional world is likely to be something akin to a large department store: a collection of planes containing interesting artefacts but separated by these impenetrable-appearing barriers.  Up or down you must go to gain access to new vistas, rather like the opening of an elevator door.

But then someone, an innovator, imagines that gravity is acting as a potential field perpendicular to your perception.  Such potential fields fill their space with a force that pulls or pushes on anything that enters their vicinity; magnetism is one example.  In this case, it is proposed that a constant downward force of attraction acts as you move upwards through this field.  The height you have gained is then proportional to the energy expended in the climb[2].  This is a breakthrough.  All two-dimensional surveyors now have to do is monitor the energy required to reach every point on the impenetrable barriers to calculate their height.

The method the two-dimensional cartographers use in charting their hidden territory is to move to a higher altitude and send back a signal to the starting point giving information on the height gained.  Conservation of energy provides the basic principle.  The tools of the cartographer are a ramp of fixed length L and a heavy cylinder of mass Mc[3].  The ramp is used to reach up to a point of higher altitude as shown in the figure below.  The cylinder is then released from the top and carries the signal revealing the height h to the bottom.

cylinder-ramp model

The cylinder and ramp of the two-dimensional cartographer

The ramp-cylinder system shown above is the cartographer’s Black Box within which the conservation of energy principle operates: –

Potential energy at the top  =  Kinetic energy of the rolling cylinder at the bottom

=  Translational kinetic energy  +  Rotational kinetic energy

Mc·g·h  =  ½·Mc·Vc²  +  ¼·Mc·Vc²  =  ¾·Mc·Vc²

So that h = ¾·(Vc² / g)

Here Vc is the velocity of the cylinder at the bottom of the ramp.  Unfortunately, this velocity is not easy for our two-dimensional cartographers to measure, as the cylinder appears from nowhere to clatter off the end of the ramp.  Some further analysis is needed, and this is provided by the mathematical tools of calculus, which convert the measurement into one of the time t from the moment the cylinder is released to the point it reaches the base of the ramp.  A solid cylinder rolling without slipping accelerates uniformly at a = ⅔·g·sin(α), and the ramp length satisfies L = ½·a·t², so that in this case:-

h = 3·L² / (g·t²)

and         sin(α) = h / L

All the mapmakers now need to do is time the arrival of the cylinder and apply the above equation to know the height at which it was released.  They may then move on to another point and repeat the process, to gain a full knowledge of the height and the slope α of their surrounding terrain.
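
As an illustration of the cartographers' Black Box, the following minimal sketch in Python (the numbers are our own, chosen purely for illustration) recovers the height and slope from the two quantities a flat-lander can actually measure: the ramp length L and the descent time t.

    import math

    G = 9.81  # acceleration due to gravity, m/s^2

    def height_and_slope(ramp_length, descent_time):
        """Recover the height h (m) and slope angle (degrees) from the
        ramp length L (m) and the time t (s) taken by a solid cylinder,
        rolling without slipping, to descend: h = 3*L**2 / (g*t**2)."""
        h = 3.0 * ramp_length**2 / (G * descent_time**2)
        if h > ramp_length:
            raise ValueError("descent too quick: implied height exceeds ramp length")
        alpha = math.degrees(math.asin(h / ramp_length))
        return h, alpha

    # A 2 m ramp with the cylinder clattering off after 1.5 s
    h, alpha = height_and_slope(2.0, 1.5)
    print(f"height = {h:.2f} m, slope = {alpha:.1f} degrees")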

The conversion of potential energy into the kinetic energy of the descending cylinder is used to create the information content of the two-dimensional map.  The Black Box that provides the two-dimensional cartographers with access to their third dimension contains a model converting measurements they can make, time in this case, into something meaningful that they cannot measure directly but which they need to know.

In principle models can be valuable, but only when they provide useful insights into the real world.  And their value, like that of any product, will diminish as they become superseded by alternatives with a greater acuity of vision, as the cycle of innovation rolls on.

 

Notes:

[1] Strictly speaking the output information is not new but is a new interpretation of the input information.

[2]   Potential Energy = Mass·g·Height, where g is the acceleration due to gravity, equal to 9.81 m s⁻².

[3]   The cylinder is a three-dimensional object and thus causes something of a problem for the cartographers.  It is selected as a linear object with the special property that it will roll along the ramp.

Writing the Information

For decades school classrooms have echoed with a chorus describing the water cycle: “evaporation, transpiration, condensation, precipitation, run-off”.  The sun heats the waters of the ocean.  Thermal energy beamed down onto the ocean surface agitates water molecules to an extent that some are able to cross the energy barrier at the ocean surface to begin an airborne journey.  Thermal effects cause the moist air to rise as the potential energy of the heavier, colder air displaces the warm vapours to higher altitudes, until they reach an equilibrium height in the gentle folds of clouds.  Prevailing winds blow the clouds towards land, to rise further into foothills overhung by a slate grey cloudscape in which the moisture condenses into drops of relentless drizzle.

As this rain falls, the potential energy gained when the sun earlier heated the ocean surface is transformed into the kinetic energy of the falling droplets.  In isolation this energy is minuscule.  The smallest drops coalesce into bigger ones that can then collect in tiny trickles that run for seconds down a window that faces into the rain.  These patterns are not random but are choreographed by the Principle of Least Action.

The window glass appears unaffected by the downpour.  (Despite the popular notion, the glass is not itself a liquid flowing over centuries rather than seconds: glass is an amorphous solid.)

As the raindrops collect together to form rivulets and streams, they combine their energies to become an erosive torrent.  Rain-engorged streams cascade down mountainous slopes.  Some of the potential energy in the water is diverted into creating the micro-fractures of erosion, through which steep river valleys will eventually be sculpted.  These sharp ravines may be taken as a natural consequence of energy flow through a carrier material, which in this case is water.

Water Erosion Pattern

The direction taken by streams, as well as the patterns of erosion of canyons and valleys, are clearly not random but again co-ordinated and optimised by the Principle of Least Action.  Macroscopically the descending water converts its potential energy into kinetic energy as rapidly as possible.  Microscopically the energy transferred and dissipated through the erosion shown above can be considered analogous to the accumulated effect of micro-fracture events in a viscoelastic material.

This accumulation of microscopic erosion events is the information content that describes the creation of the river valley over time.  That is, in an ideal sense, the energy passing through the water cycle forms the conduit through which the water flows and optimises its design to comply with the Principle of Least Action.  The associated information content is written by this process.  It is a feature of the environment and is unrelated to any living agent.  The information written by the water cycle has a syntax and meaning in the accumulation of the erosive events with time, but has no associated value.  Simply put, the energy and information content are linked by the physics of the system.

And so it was for millennia that rivers became terrestrial conduits for energy flow.  Then a new form of conduit emerged to process more of the vast energy resource that showered down upon the early Earth.  About four billion years ago biology was born.  Soon thereafter solar energy began to be captured by the photosynthetic activity of stromatolites, beginning the sequence through which this energy would eventually pass through the bioenergetic transformations of the food chain.  Energy is passed from vegetable to animal, from prey to predator, assuming the predator is able to catch its prey.  The energy is passed from a fallen leaf into the soil, into bacteria and on into further tributaries of the food chain.  The energy each biological organism receives enables it to sustain itself in the search for more food and to reproduce.

A tree is one such organism deserving attention as a biological equivalent of the river valley.  Trees have common features but as individuals no two are alike due to the information written into their physical structure.  To receive solar energy a tree must extend up towards the forest canopy and position the photosynthetic chemical factories in its leaves to be aligned to the sunlight.  Unfortunately, the descending sunlight may become blocked by many obstacles, notably by its own boughs or those of adjacent trees.  Growth is therefore a sequence of reactions to seek the essential sunlight.  Hence the contortion of an arboreal structure and the shedding of the deadwood that fails in its quest to reach the sunlight.

Tree Pattern

As is the case of the river valley, the realignment of the growing tree may be considered as events analogous to micro-fracture events in the physical domains.  This accumulated sequence of events then provides the information that describes the tree through its life; information also has a syntax and meaning.  One might speculate that this information has value, at least for the individual tree as it determines its survival.

Wormhole:
Do we underestimate the power of plants and trees?   bbc.com, 20th Nov 2015

As energy captured by photosynthesis in plants cascades down the food chain, it supports the development of other organisms.  Through this throughput of energy there is information content in those same living organisms, just as in the tree, which supports their survival.  For an intelligent agent this is the information that enables their participation in an economic society both as producer and consumer.  The throughput of energy over time creates the information that is the intelligence of the agent.  Furthermore, deployment of this same energy of the intelligent agent in labour creates or replicates the information in the product or service of that labour and thereby creates value from their economic activity.

Let us return to the earlier mountainous cascade, where now an intelligent agent is able to develop another system through which energy is able to flow.  The potential energy and kinetic energy of the torrents that race down the steep hillsides can be harvested and deployed elsewhere.

A Highland Electricity Company elects to obstruct a particularly fast-flowing river with a hydroelectric dam and directs the water flow through its turbines.  A reservoir of water with an enormous potential energy builds up behind the dam, but this energy is insufficient to breach the mechanical strength of the obstacle – fortunately the dam holds firm.  The energy that once propelled the water through turbulent streams is now controlled and set for harvesting.  As water pours through turbine channels, the blades are rotated and the water leaves slightly subdued, stripped of a portion of its kinetic energy.  Generators have converted some of this passing energy into an electric potential.

As the river’s remaining energy takes the water to the sea, the electrical energy now flows through a different channel, a national grid of high-tension cables to power a society.  The intelligent agent has thus developed a further energy conduit that is also filled with energy obtained from the combustion of fossil fuels.   That energy also came from the sun and fell onto primordial forests in an earlier epoch from which the chemical energy of the oil and gas and coal has now found freedom to flow again through the electrical conduits.

Foundry

As electricity is consumed the electrical energy is again transformed, in some cases to produce heat and melt the metallic ingots of a foundry and cast these into the shapes of a preformed item of economic production.  In this foundry, materials, manpower and machines combine their energies in manufacturing operations.  The effect is to enhance the information content of the fabricated goods.  In this case the information has a syntax and meaning that can be interpreted by consumers in a perception of the value of the goods.

Each value adding step can be interpreted as the movement of the fabricated goods through an economic potential field.  A further utilisation of terrestrial energy resources propels these manufactured commodities to a higher economic potential.  Millions of individual blips on the Value Surface of the goods are thus created to record their elevated value.

A road network is no less a conduit for energy flow than a river valley or a tree.  Human beings as producers depart along the minor tributaries and pour down major transport links to deliver their labour.  Manufactured goods are loaded onto juggernauts and ships, and pass through huge container ports that carry them onward through supply chains.  Through the motorways, main roads and side roads of an economy the goods are carried to the shelves of the retail outlets, to be made available to a geographically distributed consumer population.  Human beings as consumers enter the same transport networks to find the goods and services that they need.  The information content can be drawn into maps and GPS devices.

Elsewhere we have considered as equivalent the information communicated by the distribution of some fabricated goods and the information communicated through the media channels that advertise those same goods.  Precisely the same equivalence may be formed between the transport networks and the telecommunications networks that communicate information generated by intelligent agents, and which enable those agents to integrate their activity in an economic society.  The society thus brings together all the energy contributions of its component parts.  At this high level the associated information content includes the rules and regulations that enable the society to function, as well as the information that is shared between the component parts.

Whereas water is the carrier of energy in the river and has created the information content of the valley, and the passage of energy through the food chain has fashioned the design of trees and the other organisms that make up this chain, so the energy in materials, manpower and machines is the source of the information content of fabricated goods and services.  Likewise, the energy that drives an economy is the source of the information content of its transportation and telecommunications infrastructures and of the governance of the society itself.  Roads, railways, relays and regulations that fall into disuse will eventually meet the same fate as the dried river bed or the deadwood of the tree.

We have described specific examples of a generic process where the passage of energy through a system causes the system to change, and discrete events record that change in the information content of the system.  A system’s information content can therefore be considered to be a legacy of this energy throughput:-

System | Carrier | Energy | Information Content
River | Water | Potential, Kinetic | Erosion of the river valley
Tree | Carbohydrates, ATP, etc. | Chemical | Physical structure of the tree
Food Chain | Primitive Organisms and Species | Chemical, Kinetic | Behaviour to survive and reproduce
Intelligent Agent | Food, Fossil Fuels, Human*, etc. | Chemical, Kinetic, Electrical (neural) | Production and consumption
Commercial Organisation | Human*, Machines and Materials | Chemical, Electrical, Potential# | Product or Service
Power Generation | Water, Fossil Fuels, Wind, Nuclear | Electrical | Electricity distribution network
Transportation and Telecommunication | Fossil Fuels, Human*, Machines | Chemical, Electrical, Kinetic, Potential# | Transport and telecoms networks
Society | Human* | Potential# | Governance, Legal, Press, Operational Routines

* Human beings as innovator, producer and consumer of goods and services, and a carrier of the chemical energy obtained through the food chain.
# Potential energy based on the assumption of an economic potential field.

In the diagram below the connected conduits from the table above enable the energy throughput through which the information content of the system (blue boxes) is written.  Shown in yellow are the sources of energy that enter the system from external supplies and storage.

Energy Conduits

 

One final thought: in the effort to understand this article you will have consumed energy from the food chain.  Around a fifth of this energy passes through your brain, and hopefully this will have etched a memory of your cogitations into the information content of the neural circuits that form the blue box that is your brain!  It will be the information that enables you to perceive the value of the contents of this page.

 

The Value of Information

Innovation: Making Information Valuable.  In the previous posting “On Value, Capital and Energy” we sought to make sense of this short definition of innovation by considering what is meant by “Value”.  For this we needed to go back to the works of the classical economists, who concerned themselves very much with this particular subject.  Now we shall do the same for “Information”.

To state that information can be valuable is hardly controversial.  Information can be extremely valuable.  Knowing in advance information that could affect the price of a company’s shares can lead to financial gains.  Insider trading is illegal because it distorts a stock market in which all agents are intended to have equal access to the same valuable information.  High-frequency trading can expedite access to this information by milliseconds, a short interval that can also be important.

In On Value, Capital and Energy the creation of value is associated with the bioenergetics of the innovator, and specifically with the treatment of value as a form of potential energy.  To discuss information in the same terms it is necessary to form a similar association between information and energy.

Information is physically encoded in many forms.  Indeed whether it is the storage of information in DNA, in computer memory or in the neural circuits of people, one can assume that information is at least in part formed through the organisation of some physical material.

The organisation of a physical system with higher information content is associated with the system’s energy.  This is most famously demonstrated by Maxwell’s Demon.  In 1867 the Scottish physicist James Clerk Maxwell imagined an intelligent being capable of operating a trap-door in a wall inside a closed gas-filled chamber, thereby segregating fast-moving “hot” gas molecules from slow “cold” ones where normally these molecules are randomly mixed together.  By making this selection the Demon can produce a machine that does mechanical work (a Szilard Engine) – in doing this the Demon converts pure heat to mechanical energy.  This is impossible according to the 2nd law of thermodynamics – it suggests that perpetual motion is possible and that time can reverse its direction!

Maxwells Demon

Figure 1: Maxwell’s Demon[1]

It appears that to resolve the Maxwell’s Demon paradox, the intelligent Demon must identify each fast-moving molecule, operate the trap-door and then forget the information before repeating the cycle (or at least have a finite memory).  It is in the act of deleting information that energy (heat) is released and the entropy of the universe is increased, remaining compatible with the laws of thermodynamics.  In a more practical sense, this same concept appears in Landauer’s Principle, which shows that the irreversible deletion of information in a computational operation must be accompanied by a dissipation of energy as heat.  On the other hand, the replication of information does not necessarily require any additional energy input.  Such considerations lead to a physical theory of information[2].
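
Landauer’s bound is easy to put numbers on.  Below is a minimal sketch in Python; the room temperature of 300 K and the gigabyte example are our own illustrations, not drawn from the sources cited.

    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def landauer_bound(bits, temperature=300.0):
        """Minimum heat (J) that must be dissipated when `bits` of
        information are irreversibly erased at `temperature` (K),
        per Landauer's Principle: E = N * k_B * T * ln(2)."""
        return bits * K_B * temperature * math.log(2)

    print(landauer_bound(1))     # one bit: ~2.9e-21 J
    print(landauer_bound(8e9))   # a gigabyte: still only ~2.3e-11 J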

The deletion of information, in a dynamic sense, is the irreversible flow of information out of its physical medium of storage.  If such an information outflow ends with an increase of entropy, it is reasonable to hypothesise that the information content of a physical system is linked to the system’s energy.

In the biological domain, the commencement of information replication could be considered to coincide with the origins of life.  In this most primitive form, information must have been encoded in chemical sequences on molecular strands such as DNA, which must have had the essential and original property of self-replication, thereby automatically copying information and initiating a mechanism for population growth through replication.  From this simple molecular replication has arisen the development of cellular entities in which information content has increased and which now encode the physical signals that determine the survival of the organism.  Evolution of living systems can be considered as the development of the information content of an organism in response to the environmental challenges it faces.  In this case, one might consider evolution through natural selection as a means to increase the “value” of this information.

This association of innovation and evolution is further discussed in New Economics of Innovation.

The information content of living organisms has thus evolved to become a principal determinant of their survival.  Information that confers advantages on an organism, making it better adapted to its environment, as might be achieved by a mutation of genetic coding for example, could be construed to have value to the organism and indeed to the species[3].  One genetic sequence is then not the same as another, as the more valuable is associated with superior attributes.  This long process of biological evolution has led to the eventual development of an intelligence, extending the concept of value beyond that which favours simple genetic replication and into epigenetic mechanisms by which information allows organisms to be increasingly responsive to their environment.  Information such as an aversion to snakes will aid survival, and this valuable information is shared as a natural reaction by many different biological species.

From this epigenetic well-spring, the value of information can thus be traced as an energetic phenomenon, driven by a growth of the brain and the intelligence that the organ bestows, and which is a consequence of metabolic biological energy transformations.

At some point between the emergence of self-replicating biomolecules and the world as we know it today, economic activities became established, in which the notion of value is further enlarged to relate to the production and trade of goods and services.  Here information must be embedded in the traded commodities, and transactions are based on an appreciation of the value that this information conveys.

In an advanced society, the path has moved away from value as directly related to survival, towards a more extensive set of social objectives that we might list as the ingredients of daily life.

The bioenergetic transformations converting food to the physical and mental activity of the worker have been described.  The transformation of raw materials into finished goods through the expenditure of the worker’s energy is fundamentally a process of adding information, and through this, adding value.  Capital deployed in the production facility serves to improve productivity by enhancing this value creation.

As a component flows down a production line, at each stage with more labour and capital deployed, more value is added as more information content is added into the component.  Finished goods will then be shipped through global distribution channels, with the additional energies of labour and transport combining to further increase the value of the goods as they are brought into proximity with their eventual customer.  At the same time, information about the products is flowing through multiple marketing and advertising channels, with the intention of further enhancing the value of the products in the minds of the consumer.  Either by the information gained by direct contact with the goods or through the surrogate means of advertising, the value of the goods is perceived in the minds of consumers and compared with the price.  If the comparison is favourable the goods can be sold and money should flow in the opposite direction as all intermediaries in the chain of supply are reimbursed.

We have developed a Value Surface representation in which the amalgamated perception of value for a population of consumers of a particular commodity can be linked to the energy involved in the creation and replication of this value.  Now we are able to interpret the height of the Value Surface as fundamentally related to the product information that is received by the consumer.  Delete this information and the associated energy is lost, according to Landauer’s Principle.

The role of the innovator and the entrepreneur is to ensure that energy expended on the fabrication of their commodities has created and disseminated information that is perceived to be of sufficient value for the goods to be sold at a price that satisfactorily reimburses the producers.  The innovator’s job is to make the commodities easy to lift in a prevailing economic potential field.  In other words:  Innovation is Making Information Valuable.

 

Notes:

[1] Image from: https://commons.wikimedia.org/wiki/File:Maxwell%27s_demon.svg.  Please see terms and conditions for reuse.

[2]   A highly readable insight into the theory of information and its thermodynamic background is given by M. B. Plenio and V. Vitelli in The physics of forgetting: Landauer’s erasure principle and information theory, Contemporary Physics, 2001, volume 42, number 1, pages 25–60.

[3] As considered in detail in The Selfish Gene by Richard Dawkins

On Value, Capital and Energy

Innovation:  Making Information Valuable.  To begin to make sense of this boiled down definition we need to understand the nature of both Information and Value.  Here we will discuss the latter.

In fact, even the question “what is value?” can lead an individual to respond with the subject of their “values”, which are entirely different.  Dictionaries contribute definitions that are rather circular, along the lines of “value is how much something is worth”.  For something more fundamental we need to go back to the days of the first classical economists.

In the works of the 19th century political economists the understanding of the nature of value is of major significance.  In 1817 David Ricardo embarked on On the Principles of Political Economy and Taxation with an initial chapter “On Value”, in which he developed the twin concepts of value in use and value in exchange proposed forty years earlier by Adam Smith.  Fifty years further on, Karl Marx also developed these notions by virtue of his extremely detailed analysis in Capital.  What followed has been over one hundred years of controversy that continues to this day.

The subject traditionally begins by comparing the value of a captured beaver and a captured deer in some primitive society.  Both have a use-value or utility, which for the two animals are qualitative and different.  They also have an exchange value, one for the other, whereby the beaver hunter might acquire the utility of the deer and vice versa through some mutual exchange with the deer hunter.  This exchange must depend on the relative value in exchange of the beaver and the deer.

Ricardo makes a primary assumption here:-

In the early stages of society, the exchangeable value of these commodities, or the rule which determines how much of one shall be given in exchange for another, depends almost exclusively on the comparative quantity of labour expended on each.

So if it takes twice the labour to capture a beaver as a deer, then one beaver will exchange for two deer.  On such considerations Ricardo follows up with a warning:-

That this is really the foundation of the exchangeable value of all things, excepting those which cannot be increased by human industry, is a doctrine of the utmost importance in political economy; for from no source do so many errors, and so much difference of opinion in that science proceed, as from the vague ideas which are attached to the word value.

The rationale here depends on the use value.  Both hunters have need of part of the other’s catch and can choose either to hunt it themselves or to undertake a mutual exchange.  The exchange value has no meaning without an ultimate use value, and the proportions of exchange will depend on the relative effort to acquire the item that is needed.  Exchange values must be based on some quantitative relationship, and the quantity the beaver and the deer have in common is the amount of human labour devoted to their capture.

Innovation in this early state of society might improve the trap used to catch the beaver.  In this case the exchange value of the beaver might be reduced to one deer, as the same amount of human labour is then needed to capture either animal.
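
The arithmetic of this toy economy can be captured in a few lines of Python; the function and the labour figures are ours, purely for illustration.

    def exchange_ratio(labour_a, labour_b):
        """Units of commodity B given for one unit of commodity A,
        assuming exchange value is proportional to the labour embodied."""
        return labour_a / labour_b

    # Before innovation: a beaver costs 2 days of labour, a deer 1 day
    print(exchange_ratio(2.0, 1.0))  # 1 beaver exchanges for 2 deer

    # An improved trap halves the labour needed to catch a beaver
    print(exchange_ratio(1.0, 1.0))  # 1 beaver now exchanges for 1 deer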

In a more diverse and developed market, Ricardo considers capital to be the embodiment of the labour that was involved in its creation; this capital, in machinery for example, contributes in an incremental and proportional manner to the total labour required to bring the traded goods to market.  Whether innovation is applied to the product or to the efficiency of the capital in the means of production or distribution, the effect is again the same: to diminish the total labour required and thereby reduce the exchange value of the product.

If fewer men were required to cultivate the raw cotton, or if fewer sailors were employed in navigating, or shipwrights in constructing the ship, in which it was conveyed to us; if fewer hands were employed in raising the buildings and machinery, or if these, when raised, were rendered more efficient, the stockings would inevitably fall in value, and consequently command less of other things.
……. Economy in the use of labour never fails to reduce the relative value of a commodity, whether the saving be in the labour necessary to the manufacture of the commodity itself, or in that necessary to the formation of the capital, by the aid of which it is produced. 

This is the long term or equilibrium effect of innovation in what has become known as the Labour Theory of Value.  In the long term innovated commodities become more socially accessible.  However, in developed markets there are time-dependent factors that also must be taken into account.  The value derived from a measure of total labour required might be considered a natural value, whereas the market value at any time might deviate around this due to numerous factors concerning the specific properties of the market and the individual preferences it comprises.

The awareness that the labour of different professions in reality contributes value in different degrees is simply accounted for.  This difference is a relatively fixed feature of commodity production and thus different periods of labour duration might be attributed to different skills or intensities of work.  And the fact that different individuals might labour with different intensities is similarly accommodated by taking an average value for a generic labour necessary at a particular time and under specific conditions of production – which Karl Marx refers to as the socially necessary abstract labour[1] and which he considers to be the value of the commodity.

By considering use values and exchange values, Marx identified two forms of transaction.  A transaction typical of the exchange of commodities, such as occurred between the beaver and deer hunters, first requires the intermediate transfer of goods into money.  This is the commodity-money-commodity (C-M-C) transaction through which the use values of the commodities are traded.

Quite different are money-commodity-money (M-C-M) transactions.  Here the initial money as capital is invested in materials, wages of labour and capital equipment in order to make commodities for sale.  This sequence starts and ends with money and the motivation of the capitalist to pursue the transaction is to finish with more money.  This net profit from the transaction is what Marx refers to as surplus value and is possible if the labourer is paid less than the value he is able to create through the deployment of his labour.

This is where innovation can acquire its time-dependent benefits.  The natural effect of innovation in increasing productivity can be immediate, whilst the adjustments of relative value and price take time as the innovation propagates through the market.  For this period there is a relative surplus value from trading the commodities; the market value will remain above the adjusted natural value.  In cases of a monopoly advantage provided by patent protection, for example, this adjustment might take some considerable time, during which the relative surplus value created by innovation can provide for increased profits, wages or both.

In Capital, Karl Marx considers many consequences of the M-C-M transaction.  For Marx, the existence of money is essential for actual physical (concrete) labour to be converted into the abstract labour deployed in the creation of value.  Eventually, in a fully developed capitalist market system based on the circulation of capital along with individual freedom and equality, there is a symbiotic relationship between the capitalist and the labourer.

Competition imposes the need for continuous capital accumulation on the capitalist.

Competition makes the immanent laws of capitalist production to be felt by individual capitalists as external coercive laws.  It compels him to keep constantly extending his capital in order to preserve it, but extend it he cannot except by means of progressive accumulation.

This translates into a never-ending quest for relative surplus value.  Innovation is a primary tool to bring about this increase in labour productivity[2].

On the other hand, the survival of labour requires wages that are sufficient to pay for the commodities the labourers need for their reproduction – the “wage goods”.  Innovation here can increase the productivity of the labour-power required to produce these wage goods, making them relatively more accessible.  Marx also notes that wages may consequently fall, enabling further access to relative surplus value for the capitalist.

In summary, whether society pursues a simple exchange of useful commodities or whether this exchange is undertaken to satisfy the demands of a capitalist economy, it is the embedded labour in goods and services that determines their value.

But this pre-eminence of the Labour Theory of Value seems to fundamentally contradict how the modern world operates.

Is not value, like beauty, really in the eye of the beholder?  Surely it is the prerogative of the consumer to decide what is and what is not valuable.  And for those things that are considered valuable, the price is set by the laws of supply and demand, whilst the income and idiosyncrasies of the consumer will determine that value.

Are not the endeavours of labour, like any other service, subject to the same laws of supply and demand?  And this labour-power needs to be of value to its customer, which in this case is the employer who will set remuneration according to this perceived value.

Merging the prevailing consumer perception of value within a Labour Theory of Value is an essential step in providing an interpretation of value that is relevant for a modern society.  For this we have introduced the concept of a Value Surface.  This Value Surface combines with the classical Labour Theory of Value to form a Labour Theory of Value Creation, in which consumers form a perception of value on the basis of the creative endeavours of the innovator.

The classical Labour Theory of Value presents a Value Surface as a flat plane held at a value equivalent to the labour deployed in production and distribution.  Onto this may be added a residual surplus value that should slowly decay over time with the diffusion of innovation through the market.  Apart from these minor time-dependent features, the Value Surface is presented as a static and fixed statement of value for the associated commodities.  The value of the beaver is fixed at two deer!

In the construction of a more realistic Value Surface we should recognise that there will be a variable distribution of the appreciation of value across a population of potential consumers.  Furthermore, a potential energy might be considered retained in an elevated Value Surface.  The source of this energy could then be traced through the Labour Theory of Value back to the endeavours of the labourer as envisaged by the classical economists.  The Value Surface then is like a huge wobbly marquee erected using these energies, which have at their source the energy of labour as a biological phenomenon.

It is interesting to interpret the bioenergetic transformations that drive the endeavours of the labourer in relation to the socially necessary abstract labour of Marx, which could then equally be replaced by socially necessary abstract ATP, protein and carbohydrate.  It could even take the form of socially necessary abstract sunlight[3].  Of course there are energy losses to heat in the transformations that take sunlight to labour-power, as there most certainly should be in the raising of the Value Surface by human labour.  But as no strict accounting of energy conservation is attempted, this should not be of concern.

This subject of energy flow into capital and its role in value creation was actually considered 100 years before Karl Marx published Capital.

Around 1755 François Quesnay and his fellow Physiocrats stood at the origin of modern economic thinking.  Quesnay was then first consulting physician to Louis XV and, outside of his medical work at Versailles, he was leader of the Physiocrats, who opposed the repressive mercantile system with a radical idea: that wealth and value arise only from the stock of land; that rural agricultural communities were the source of value; and that the downstream artisan labour of the cities was technically unproductive, reworking the wealth earlier derived from the land and merely consuming the resources supplied by rural communities[4].  What France needed to revive its flagging economic fortunes was stated in 14 maxims and summarised by laissez faire, laissez passer – free trade.

To illustrate the economic mechanisms in play, Quesnay produced a Tableau Économique in three versions between 1758 and 1763.  Along with its 22 notes of explanation, the Tableau was a dangerous document in pre-revolutionary Paris and it stirred a variety of different opinions.  The early French economist Mirabeau considered the Tableau to rank alongside the printing press and money as one of the world’s three greatest inventions.  To the philosopher J-J Rousseau it was the product of an odious, if legal, despotism [5].

 

Quesnay_Tableau

Figure 2: 3rd Edition of the Tableau Économique of François Quesnay (1763)

In the Tableau Économique, the Third Edition of which is shown adjacent, there are three columns.  The Tableau charts how money flows through the economy of the farmer, the landowner and the industrialists, who in turn produce the goods consumed by the landowner and the farmer.  There is a continual flow of money between those working the land (left), those owning the land (centre) and those producing and distributing the commodities, lodging, clothing, etc. of industry (right).  Half the receipts of industry on the right are returned to the land as payment for raw materials and other products of the land.  The other half is sterile, being consumed unproductively.

Excess consumption by the unproductive right column was considered to deplete the capital that was needed for investment in future agricultural production.  “Hence it is seen that excess of decorative luxury may very promptly ruin by magnificence an opulent state.”

As a practising physician, Quesnay had developed the systematic analysis of expenditure, work, profit and consumption that was summarised in the Tableau Économique as an analogy to William Harvey’s principles on the circulation of blood, which had become known a century earlier.  More generally, and in harmony with the mechanical and physical advances that had recently emerged during the Enlightenment, the Physiocrats considered that an economy worked on a circular flow of money that operated on mathematical principles.

Adam Smith had cause to visit Paris in 1765, employed as personal tutor to Henry Scott, the young 3rd Duke of Buccleuch.  Quesnay met with Smith on a number of occasions, and the ideas they shared so impressed Smith that he would have dedicated to Quesnay his An Inquiry into the Nature and Causes of the Wealth of Nations had the latter not died two years before its publication[6].  In this book Smith identified industrial production as a source of national wealth and considered dangerous the view that economic phenomena are suitable for mechanistic analysis, holding rather that they are entities resolved through social relationships.  Smith thus adhered to the views of his friend and mentor, the Scottish philosopher David Hume, that the credibility of systems of philosophy should be illustrated with examples drawn from common life and history.  David Ricardo and Karl Marx later followed and strengthened the fundamentally social interpretation in their works of political economy.

It is interesting to consider how an awareness of bioenergetics, and the associated energy transformations that drive human labour, might have illuminated the early conversations and speculations between Messrs Quesnay and Smith on the origins of economic phenomena.

The Tableau Économique is a document of its time, a revolutionary document that stands at the very origin of contemporary society.  Because of this we should leave the final words of this section to François Quesnay, on the harmony between this society and the natural world in which it is founded [7].

L’ordre naturel et essentiel des Sociétés Politiques ….. l’étude et la démonstration DES LOIS DE LA NATURE relatives à la subsistance, et la multiplication du genre humain.  L’observation universelle de ces lois est l’intérêt commun et général de tous les hommes.  La connaissance universelle de ces lois est donc le préliminaire indispensable, et le moyen nécessaire du bonheur de tous.

(The natural and essential order of Political Societies ….. the study and demonstration OF THE LAWS OF NATURE relating to the subsistence and multiplication of the human race.  The universal observation of these laws is the common and general interest of all men.  Universal knowledge of these laws is therefore the indispensable preliminary, and the necessary means, of the happiness of all.)

 

Notes:

[1] That is: “in a given state of society, under certain social average conditions of production, with a given social average intensity, and average skill of the labour employed.”

[2] Unbridled innovation can lead to instabilities, and various checks to innovation – such as the need to achieve returns on existing fixed capital investment, monopoly protection of technology and chronic labour surpluses – can act to keep the changes brought by innovation reasonable for the applied capital.  The level of exploitation is also limited by the need to retain the consumer behaviour of the working class, in order to provide a market for the commodities of the capitalists.

[3] Accounting also for additional nuclear power that is not received from the sun.

[4] These ideas were first published in Diderot’s Encyclopédie in 1756 and 1757

[5] An interesting collection of many divergent opinions on the ideas of the Physiocrats can be found at the beginning of The Physiocrats by Henry Higgs, Macmillan and Co 1897

[6] As recorded on page 624 of Dictionnaire de l’économie politique by Charles Coquelin

[7] Ephémérides du citoyen, No. 11, page 13, 1769

On Metaphors and Mechanisms

We have explored in the previous posting how, in order to understand innovation, evolutionary economics loosely adopts, adapts and extends the concepts of evolutionary biology to apply them in an economic context.  To avoid confounding the cardinal assumption of the free will of the innovator and the entrepreneur, it is often emphasised that evolution here serves as a metaphor rather than a mechanism.  One should not directly transpose biological mechanisms into the social domain.  They are signposts to guide an enquiry, not rigid conduits to channel thinking.

Whilst biological metaphors have a natural resonance with innovation, it is the physical sciences that have had a more enduring and intimate contact with economic thought.  In the second half of the 18th century the two subjects found themselves in particularly close proximity.  Around this time Adam Smith met Voltaire, friend and collaborator of Émilie du Châtelet, who was instrumental in elucidating the nature of kinetic energy[1] and who later died from complications of childbirth in the same year as she completed the first French translation of Newton’s Principia Mathematica.  In this pre-revolutionary epoch the Lumières of Paris were fermenting ideas, mixing the early economic concepts of the Physiocrats, led by François Quesnay, with the science of Jean d’Alembert, who with Diderot had recently created the novel Encyclopédie.  Smith, among many others, was present and could not have remained uninfluenced by such a prestigious gathering when, ten years later, he produced his seminal An Inquiry into the Nature and Causes of the Wealth of Nations.

Quesnay_Tableau | MONIAC Prototype at Leeds

Left: The Tableau économique from 1758, in which François Quesnay and the Physiocrats considered agricultural surpluses as the source of wealth, which then flowed back and forth to the landowners and to the industrialists in the cities[2].  Right: A prototype MONIAC (Monetary National Income Analogue Computer) from 1949 that uses fluidic logic to model the workings of the UK economy[3] (exhibited at the University of Leeds).

The further development of Political Economics proceeded through the 19th century, involving such notable figures as David Ricardo, John Stuart Mill and Karl Marx.  Then around 1870 came the “Marginal Revolution”, initiated through independent publications of Léon Walras, William Stanley Jevons and Carl Menger.  Again these later developments can trace their origins to the principles of mechanics, as is particularly evident in the words and equations of Walras.  This is nowhere more clearly demonstrated than in his 1909 final summary publication Économique et mécanique, which concludes with the following translated remarks:-

In examining as carefully as one might wish the four theories given above namely, the theory of maximum satisfaction with the exchange [of commodities] and the maximum energy of a balanced beam, and also the theory of general [economic] equilibrium of the market and that of the universal equilibrium of celestial bodies, one will find between these two mechanical theories a single unique difference: the exteriority of the two mechanical phenomena and intimacy of the two economic phenomena, and thus, the ability to make everyone aware of the conditions of equilibrium of the beam and conditions of universal equilibrium of the sky, due to the existence of common measures for these physical phenomena, and the inability to demonstrate to anyone the conditions of equilibrium of the exchange and the conditions for a general equilibrium of the market, because of a lack of common measures for these psychological phenomena.  We have metres and centimetres to note the length of the lever arm of the beam and grams and kilogrammes to note the supported weights.  We also have instruments to determine the relative movement of stars.  We are not able to measure the intensity of need between those who exchange goods. But this should be of no consequence because with each exchange, consciously or unconsciously, a person will know deep down whether his needs are satisfied or not in proportion to the value of the goods exchanged.  Whether the measure be externally made or be internal, depending on whether the measurements are physical or psychological, this does not prevent the measurement itself and a comparison of quantities and quantitative relationships, and therefore as a consequence the science should be mathematical.

 

Walras goes further, asking whether the masses and forces of mechanics, and the equivalent utilities and scarcities of commodities in economics, are not all abstract factors introduced to make the mathematical equations work in their respective domains.

Following the classical mechanics of the balanced beam and the movement of stars used by Walras, there later came an association with the newer science of thermodynamics, which describes the steam engine and all other systems through which mechanical work can be drawn from the flow of heat from a hotter to a cooler temperature.  While classical mechanics is time-invariant, the mechanisms of thermodynamics work in only one direction: heat flows only from high to low temperatures, and the arrow of time has a single and specific direction.  Such properties make thermodynamics a natural metaphorical associate for economic phenomena.

Around 1875 the renowned physicist and mathematician Willard Gibbs, in his publication On the Equilibrium of Heterogeneous Substances, applied thermodynamic principles to chemical systems and their equilibria.  The complex mathematics delayed broad adoption of this work, which later became recognised as one of the greatest achievements of 19th century science.

In his 1947 treatise Foundations of Economic Analysis, the American economist Paul Samuelson, who interestingly had a direct intellectual lineage back to Willard Gibbs himself, recast the mathematics of thermodynamics to apply to equilibrium phenomena in the marginal supply and demand of neoclassical economics. This opened up a rich seam of mathematical formalism that greatly enhanced the reach and power of neoclassical economic theory over subsequent decades.  Samuelson and several other economists became Nobel laureates as a result of these endeavours.

Mathematically, thermodynamic and economic equilibria are both systems of constrained optimisation (of energy and utility) with a similar set of equations of state.  However, there has been little enthusiasm for linking the parallel mechanical and economic worlds.  Samuelson[4] in 1960 asked:

Why should there be laws like the first or second laws of thermodynamics holding in the economic realm? Why should “utility” be literally identified with entropy, energy, or anything else? Why should a failure to make such a successful identification lead anyone to overlook or deny the mathematical isomorphism that does exist between minimum systems that arise in different disciplines?

 

This scepticism sits curiously with the fact that, in 1947, Samuelson had begun his Foundations of Economic Analysis with the words:

The existence of analogies between central features of various theories implies the existence of a general theory which underlies the particular theories and unifies them with respect to those central features.   This fundamental principle of generalization by abstraction was enunciated by the eminent American mathematician E. H. Moore more than thirty years ago.

 

It is a matter of debate, therefore, whether we are dealing with metaphors or with mechanisms.

But where does innovation sit in this macroeconomic analysis?  In the mathematical development of neoclassical economic theory it is assumed that rational economic consumers maximise their utility and firms maximise their profit, to attain a stable equilibrium.  Innovation then appears as an external (exogenous) factor that accounts for increased growth through increased productivity over time.  It was not until the 1980s that technological change through innovation was brought inside the neoclassical analysis, with an Endogenous Growth Theory in which the mathematics is adjusted to accommodate the effect of technological innovation by offsetting the otherwise conventional diminishing returns of deployed capital, as the sketch below illustrates.
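
The contrast can be made concrete with a minimal sketch, assuming the textbook Cobb–Douglas production function for the neoclassical case and the stylised AK form often used to illustrate endogenous growth; the parameter values are ours, purely for illustration.

    def cobb_douglas(A, K, L_labour, alpha=0.3):
        """Neoclassical production function Y = A * K**alpha * L**(1-alpha).
        With labour held fixed, each extra unit of capital adds less
        output than the last: the conventional diminishing returns."""
        return A * K**alpha * L_labour**(1 - alpha)

    def ak(A, K):
        """Stylised endogenous-growth form Y = A * K, in which knowledge
        spillovers from innovation offset the diminishing returns to capital."""
        return A * K

    # Doubling capital repeatedly, with labour fixed at 1
    for K in (1.0, 2.0, 4.0, 8.0):
        print(f"K={K:4.1f}  Cobb-Douglas Y={cobb_douglas(1.0, K, 1.0):5.2f}  AK Y={ak(0.5, K):5.2f}")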

The weakness of mainstream neoclassical economics in dealing with innovation is explored by Nathan Rosenberg in his Exploring the Black Box.  This book from 1994 includes an exploration of the “path-dependent aspects of technological change”, in which it is clear that there is a close interweaving of science and technology development that cannot be understood except from a historical and time-dependent perspective.  As considered within evolutionary economics, technologies such as the transistor, the laser and information theory have extended the impact of innovation well beyond their original domains of application and justification.  These uncertainties weaken the hold that neoclassical economics can take on innovation, as the acquisition of knowledge is not costless but costly, and the outputs of R&D cannot be rationally selected at the point of investment.

Quite different forms of microeconomic analysis have been developed, aided by the availability of computational numerical models, in which the concepts of statistical mechanics, used to explain such physical phenomena as the phase changes between solids, liquids and gases, are applied by analogy in economics.  In these numerical simulations of the behaviour of large numbers of independent entities, whether molecules or economic agents, unexpected patterns emerge from a soup of complex interactions, patterns that resemble real-world phenomena.  This emergent behaviour of complex systems can at least explain some of the unpredictability of economic phenomena, even though the insights it provides are essentially qualitative.  These interesting ideas are brought together in Critical Mass: How One Thing Leads to Another by Philip Ball.  A minimal sketch of such a simulation follows below.
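
To give a flavour of such simulations, here is a minimal sketch in Python, entirely of our own construction rather than taken from Ball’s book.  Agents on a ring choose between two competing products and tend to copy their neighbours, with idiosyncratic noise playing the role of temperature; sweeping the noise shows a qualitative change loosely analogous to a phase change.

    import random

    def local_agreement(n_agents=400, noise=0.05, sweeps=200, seed=1):
        """Agents on a ring each prefer product +1 or -1.  At each update
        a random agent copies its two neighbours when they agree, or picks
        at random with probability `noise`.  Returns the fraction of
        neighbouring pairs left in agreement."""
        rng = random.Random(seed)
        state = [rng.choice([-1, 1]) for _ in range(n_agents)]
        for _ in range(sweeps * n_agents):
            i = rng.randrange(n_agents)
            left, right = state[i - 1], state[(i + 1) % n_agents]
            if rng.random() < noise:
                state[i] = rng.choice([-1, 1])  # idiosyncratic choice
            elif left == right:
                state[i] = left                 # follow the local majority
        pairs = sum(state[i] == state[(i + 1) % n_agents] for i in range(n_agents))
        return pairs / n_agents

    # Low noise lets large blocs of shared preference form;
    # high noise keeps the market disordered.
    for noise in (0.01, 0.1, 0.5):
        print(f"noise={noise}: agreement={local_agreement(noise=noise):.2f}")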

Notwithstanding the merits of the above economic ideas based on classical mechanics, thermodynamics and statistical mechanics, they do not say very much at all about technological innovation.  Each theoretical approach is naturally dependent on the acuity of its underlying assumptions, such as the rational behaviour and economic equilibrium that leads to an optimum market exchange.

It is clear that a theory of innovation should not be founded on an a priori assumption of economic equilibrium, as novel ideas can be inherently disruptive of such an equilibrium.  It should recognise the relevance of the biological metaphor.  And the relationship with mechanics, which has served for two centuries, must therefore be adapted.

So we will propose a hybrid metaphor combining the biological with the mechanical, and linking these to the social and economics domain to create an integrated science of innovation.

Whether we end with a metaphor or a mechanism, it is not necessary to establish a bloodline between the mechanical, biological and innovation disciplines.  Rather, we leave the last words on this point to the Nobel Prize winning economist Paul Krugman[5]:-

In economics we often use the term “neoclassical” either as a way to praise or to damn our opponents. Personally, I consider myself a proud neoclassicist. By this I clearly don’t mean that I believe in perfect competition all the way. What I mean is that I prefer, when I can, to make sense of the world using models in which individuals maximize and the interaction of these individuals can be summarized by some concept of equilibrium. The reason I like that kind of model is not that I believe it to be literally true, but that I am intensely aware of the power of maximization-and-equilibrium to organize one’s thinking – and I have seen the propensity of those who try to do economics without those organizing devices to produce sheer nonsense when they imagine they are freeing themselves from some confining orthodoxy.

 

 Notes:

[1] Establishing that kinetic energy is ½mv², where m is the mass and v the velocity of a moving body

[2] http://en.wikipedia.org/wiki/Tableau_économique

[3] http://en.wikipedia.org/wiki/MONIAC_Computer

[4] As quoted by Eric Smith and Duncan K Foley in Classical thermodynamics and economic general equilibrium theory.  Journal of Economic Dynamics & Control 32 (2008) 7–65.  This paper gives an interesting analysis of some important fundamental differences between the thermodynamic and economic analyses.

[5] For his 1996 talk to the European Association for Evolutionary Political Economy: http://web.mit.edu/krugman/www/evolute.html