What Goes Up Must Come Down

 

“Why do you think those adverts on TV are repeated so often?”

Many years ago this question was posed in an informal discussion, quite out of context, by scientist and TV personality Professor Heinz Wolff.  It emerged from the insatiable curiosity about everything that is a hallmark of both the engaged scientist and the innovator.  Memories of the day have long faded, except for that question.

Professor Heinz Wolff (photo by P. G. Champion)

The fact that such a mundane question can be posed by so eminent a scientist suggests that the answer might be important.  The same TV advertisements are indeed broadcast repeatedly.  They are expensive, and so the benefits of doing so must be appreciable.  We have now reached a point where we can sketch an outline of what an answer might look like.

Adverts communicate information that enables TV viewers to perceive the value of their associated products or services.  In the vocabulary of this website, the broadcast information creates instances of Consumer Product Interaction with a consequent consumer perception of value.  A viewer may feel a closer association with a sporting hero by wearing the brand that hero is commissioned to promote.

The raising of value perceptions has been imagined as the elevation of a huge, wobbly marquee-like structure referred to as a Value Surface, on which a single point marks an individual’s response at the time of that Consumer Product Interaction.  This value perception may subsequently go up or down depending on whatever information follows.  News of a drug scandal degrades the image of the sportsman and his sponsored products.  The Value Surface in its entirety is forever in an agitated seascape motion that is hopefully nudged upwards by each broadcast advertisement.

One mechanism of action for the advertisements is through the classical conditioning discussed in the previous post.  Just as the dogs of Pavlov could be taught to associate the sound of a bell with the arrival of food, the information and images in the advert may evoke an equivalent response in value perceived for the associated products or services.  Recognition of brand attributes has been associated with conditioned responses by Janiszewski and Warlop (2013).

The requirement to frequently rebroadcast the same information clearly indicates that the association with value needs continual reinforcement.  This could be necessary to increase the number of recipients or enhance value perceived, that is to increase the overall breadth or height of a Value Surface.  TV advertisements certainly have a huge reach.  It is also possible that residual perceived value following the advertisement broadcast might naturally diminish without the subsequent reinforcement of the message.  One might consider that the advert has a role in propagating ideas as a meme, as discussed in New Economics of Innovation, which is analogous to the viral propagation of genetic information.  At any time there are many other possibly conflicting memes competing for survival in the consciousness of a recipient population.   Advertising agencies and media companies alike have done well out of this silent struggle.

With so much content passing every second through the information environment, markets might well be naturally forgetful.  We will explore this idea in more detail later.  At this point we should note that it is often hard work to raise a Value Surface and then to keep it aloft.  This effort appears in the investments and endeavours made by an enterprise and entrepreneur in making their information valuable.  Next we will transform the Value Surface into a single point that represents perceived value for a whole population of consumers.

 

The Neural Roots of an Economic Trajectory

The Value Surface concept discussed above and elsewhere is essentially a micro-analysis of the behaviour of a population of consumers[1].  This is needed to construct and interpret the viscoelastic-derived model that has been used to simulate the commercial operations of an enterprise.  To provide a more macroscopic view of that enterprise, we can collapse the three dimensions of the Value Surface into a single point that is representative of the entire value-creating innovative activities at a particular moment in time.

Taking the average across the multiple perceptions of value for a consumer population gives a single measure of height to which a product can be elevated by the endeavours of an enterprise[2].  Clearly some products will be more difficult to raise than others. The resistance to this elevation will be different for different products, and will inversely represent how attractive they are to their consumers. We can then follow this point of average perceived value with time to draw a trajectory of a product through an economic space.  The role of the enterprise is to “fly” the product through this space, as described in the companion Foreground papers: An Economic Trajectory  and Flightpaths and Forgetful Markets.
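To make that collapse concrete, here is a minimal sketch in Python.  All names and numbers are illustrative and are not drawn from the enterprise simulation described in the Foreground papers: a Value Surface is held as an array of individual value perceptions, and collapsing each time step to its mean gives the single point whose path traces an economic trajectory.

```python
import numpy as np

# Minimal sketch (all names and numbers illustrative): a Value Surface held as
# an array of individual value perceptions across a consumer population, one
# row per time step.  Collapsing each row to its mean gives the single point
# whose path through time traces an economic trajectory.
rng = np.random.default_rng(seed=1)

n_consumers, n_steps = 1000, 50

# A shared product story (value raised, then allowed to sag) plus individual
# "wobble" around it for each consumer.
story = np.concatenate([np.linspace(0.0, 1.0, 25), np.linspace(1.0, 0.6, 25)])
surface = story[:, None] + 0.2 * rng.standard_normal((n_steps, n_consumers))

trajectory = surface.mean(axis=1)   # average perceived value at each step
agitation = surface.std(axis=1)     # how agitated the surface is

for t in (0, 24, 49):
    print(f"t={t:2d}  mean value={trajectory[t]:.2f}  spread={agitation[t]:.2f}")
```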

Here, with more freedom to speculate, we will consider the neural origins of a potential field through which an economic entity might fly.

In Valoris Cognita Barcelona we have considered how a person’s brain may interpret information received through the senses to build and adapt a continuously changing model of reality.  Consciousness may be considered to arise from an elimination of the errors that separate neural cognition from external reality.  It is a descent down an “error surface” to a point that is satisfactory for survival, but which inevitably leaves some unimportant residual error with an associated subjective perception.

Whilst the electrical impulses and neuro-chemical transmission of information that arise in neural cells and synaptic connections are clearly powered by the host individual, a neural model of reality may be changed by this information flow, becoming less transient and more ordered to some degree by the memories it engenders.  This explanation needs help from an analogy with another energy dissipative system.

In Writing the Information we considered various systems through which energy flows and thereby transforms the system that is its conduit.  One such example was the flow of water down a mountainous terrain.  The water flow itself is driven by the potential energy the water acquires due to its height.  The direction of flow is not random but clearly follows a path to descend to the lowest point as fast as possible, and in doing so it sculpts distinctive river valleys into the terrain that serve to further accumulate and direct the water flow and erosion.  In summary, whilst the flow originates from the potential energy of the water and is transient, semi-permanent features are created and remain long after their creation.  In these features is written the full history of previous torrents, through the energy dissipative erosive events they have engendered.

Could the energy flows through neural circuitry have left similar features, that we call memories, through associated energy dissipative events?

Water Erosion Pattern

Could the information content that is etched into a brain and into individual neural models of reality be the source of potential energy fields we have hypothesised as resisting the elevation of a Value Surface and its associated Economic Trajectory?

Could a continuous neural remodelling response to new information, together with the competition for vital attention, explain why signals that are not reinforced dissipate, leading to a natural forgetfulness of markets?

Can innovation that is making information valuable actually be a physical phenomenon or should we be content to have a suitable physical analogue of this fundamentally social process?

The latter question may not be as outrageous as it might first seem.  We have discussed in Semantics of Information that deletion of information has thermodynamic consequences.  There are similar issues in other domains for which information may have a physical manifestation.

 

Applying Landauer’s principle to the information content of the universe, M. Paul Gough (2008) calculates that information energy makes a significant contribution to the dark energy that is hypothesised to have determined the dynamics of the expanding universe throughout its entire history.
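For reference, Landauer’s principle sets the minimum energy that must be dissipated when one bit of information is erased at absolute temperature T:

$$E_{\min} = k_B\,T\,\ln 2$$

where $k_B$ is Boltzmann’s constant; at room temperature ($T \approx 300$ K) this is about $2.9\times10^{-21}$ joules per bit.  Gough’s calculation applies this energy accounting to an estimate of the total information content of the universe.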

 

NASA’s Hubble telescope reveals around 10,000 galaxies in its deepest view into the history of the Universe.
Image: NASA and the European Space Agency.

So why are adverts on TV repeated so often?

Imagine a world without the repeated marketing needed to encourage product purchase.  The dynamics of an economic trajectory indicate that goods may not simply remain frozen in space.  The potential retained within the Value Surface has already been diminished by previous Sale Events.  Now the inevitable dynamics of descent will be brought into play, and there will begin an increasing rate of loss of information in a reversal of the previous ascent phase of the trajectory.  Investment and prices will fall as companies seek to convert the residual potential in the collapsing Value Surface into income, and consumers will forget how they once valued the goods.
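As an illustrative sketch of this descent (the decay rate and the boost from each broadcast are assumed for the example, not estimated from any market data), average perceived value can be modelled as decaying exponentially between advertisements, with each broadcast nudging it back up:

```python
import math

# Illustrative sketch: average perceived value decays exponentially between
# advertisement broadcasts (the forgetful market) and each broadcast nudges it
# back up.  decay_rate and advert_boost are assumed, not measured.
decay_rate = 0.05      # fraction of perceived value forgotten per day
advert_boost = 0.15    # value restored by a single broadcast

def simulate(days, advert_period):
    """Run the decay/boost model; advert_period=0 means no advertising."""
    value, history = 1.0, []
    for day in range(days):
        value *= math.exp(-decay_rate)                 # natural forgetting
        if advert_period and day % advert_period == 0:
            value = min(1.0, value + advert_boost)     # reinforcement
        history.append(value)
    return history

with_adverts = simulate(90, advert_period=7)
no_adverts = simulate(90, advert_period=0)
print(f"after 90 days: with adverts {with_adverts[-1]:.2f}, "
      f"without {no_adverts[-1]:.2f}")
```

Without reinforcement the value collapses towards zero; with periodic broadcasts it settles into a sustained, wobbling plateau.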

This dynamic mechanical analogy provides an explanation for why commercial television repeats and repeats again advertisers’ content.  It provides an explanation of why salespersons continually need to repeat and reinforce the value proposition of their products.  So finally we can propose a derivative of the Labour Theory of Value Creation where labour is also working to prevent the depreciation of that value due to a natural forgetfulness of dynamic markets.  Innovation has a role therefore, not only in making information valuable in the first instance, but also in retaining the value of the information and opposing a tendency for that value to decline with time as a product matures.

As it is with commodities and technologies, so it is with whole companies that integrate the value of their various commercial activities.  The value of the company as it appears in its stock market valuation depends ultimately on the effective deployment of capital to propel the company on its own economic journey.  The forgetfulness of the market appears in the fluctuating patterns of valuation of publicly quoted stock.  Memories are short and a regular injection of good news is needed to counter a fall in value.  Volumes of press releases evidence this common practice.  The similarity in the qualitative nature of the economic trajectories of commodities, technologies and companies, whilst these may differ substantially in time and duration, together with the voraciousness of consumers to absorb this “vital” information, points to a similar cause and effect arising from the action of an economic potential field.

 

Multiple images of beautiful independence might well convince a receptive individual of the transcendental properties of a particular perfume.

Even so, such images may need to be repeated on multiple occasions to overcome a natural forgetfulness of the fragrance market.

 

Dior J’adore

Notes:

[1] It may be considered equivalent to the statistical mechanics description of the thermodynamics of a liquid or a gas, which is at the origin of models of economics developed by Paul Samuelson et al.

[2] Some consequences of taking an average measure of value perception are considered in A Labour Theory of Value Creation.

Of Dogs, Men and Crows

About one year ago we set out with a definition of innovation as making information valuable, and with an aim to use the physical and biological sciences to add to the understanding of innovation as a social and economic activity.

In the previous posting Valoris Cognita Barcelona this journey led to the sketching out of some principles whereby the information content of an innovation might manifest itself in the neural circuitry of the brains of consumers, who might then perceive an associated value.

Unfortunately, consumer brains, like all models, produce approximations of the real world.  There can be no absolute reality in this subjective world, just opinions held with varying degrees of conviction depending on how sensory information fits the cerebral model an individual consumer uses to understand the world in which he lives.  Furthermore, this cerebral modelling is not a uniquely human capability but is shared with other organisms.

Here we will start a catalogue of examples of other species for which information may be considered valuable.

The classic example of the perception of value associated with information is the classical conditioning of animal behaviour discovered by Ivan Pavlov in the early years of the 20th century.  Famously, saliva secretion in dogs, which normally occurs when they find food, can be generated by other stimuli, such as the ringing of a bell which, through repeated association, the dog has learned indicates the arrival of food.

One might interpret this conditioned response of the dog as valuing the information communicated by the ringing bell as it might value the arrival of the food itself.

Whilst unconditioned responses, which are innate and naturally occurring, appear to be hardwired in deeper and more primitive parts of the brain, learned conditioned responses arise in the cerebral cortex that is responsible for higher order intelligent behaviour.  The initial neural correlates of the perception of value can thus be identified.

Ivan Pavlov

Since the discovery of the physiological basis of classical conditioning there has been widespread application of the concept in marketing and advertising[1].  It is unsurprising to consider that the learned response of Pavlov’s dogs to the various stimuli that provoked their salivation can be associated with the desires of consumers in shopping centres as they encounter the brands that line the shelves therein.  In this case the brand communicates the necessary information for a receptive consumer to perceive value in the associated product.

Wormhole:  Consumer Behaviour: There is a great deal to learn from other fields, especially when it comes to consumer motivation and behaviour.

 

A more recent study of a quite different aspect of animal behaviour, that is to do with the reaction to inequality in the treatment of individuals, also indicates a perception of value in species that enjoy a complex social behaviour.

In 2013, Claudia Wascher and Thomas Bugnyar [2] reported on the behaviour of pairs of crows in which one individual is rewarded preferentially relative to the other.  The birds had learned to exchange a token for a reward of food.  The study revealed that a bird’s behaviour depended on the inequity the researchers introduced into the reward system:-

A view from the office window.  Crows and ravens have cognitive abilities similar to primates, especially in their social interactions: in various forms of cooperation and problem solving, and in a high selectivity in partner choice and in coalition and alliance formation.

*   If only one bird of a pair was rewarded with food for the same exchange task, this diminished the unrewarded bird’s willingness to participate in the token exchange[3].

*   If one bird received food of lesser quality for the same exchange task, this also diminished its willingness to participate in the token exchange.

*   A bird receiving food of lesser quality for the same exchange task might even choose not to accept its reward, even though it had already paid the cost in the token exchange.

*   If one bird was given food as a gift whilst a second had to “work” for the food through a token exchange, this also reduced the working bird’s willingness to participate in the token exchange.

*   Different individuals respond differently to inequity in complex ways, making the above findings apparent in the statistics of the population rather than in every individual on every occasion.

This response to inequality is interesting in itself as it mirrors human preference for fairness in reward distribution, where even a person receiving a disproportionately higher reward can be dissatisfied by an unequal distribution.  Whilst primates also behave in a similar manner, the inequality response in dogs is determined solely by the presence or absence of the reward and not its quality.  Fish on the other hand appear completely insensitive to inequality.

Crows have a complex social behaviour and accordingly their behavioural response to inequality is highly sensitive.  Furthermore, for this reaction to occur, these birds must show attributes that are particularly relevant to the current discussion on value perception.  This sensitivity to inequality requires the birds to have:-

  • An inherent perception of the relative value of different items, upon which their response to inequality is determined.
  • An inherent perception that the cost (in terms of token exchange) should be equivalent for the same reward, as recognised in the food received.
  • A perception that the value of a reward is inherently related to the work required to acquire it.

So it seems that the Value Surface concept and even a Labour Theory of Value have behavioural roots that may have arisen independently in species of mammals and birds that share complex social interactions within their communities[4].

Smith, Ricardo and Marx would recognise their classical articulations of economic motivation in this fascinating insight into animal behaviour, which appears to be associated with the co-operative tendencies of the species concerned.  All three repeatedly emphasised that such economic behaviour emerges through social interaction, and so it seems.  However, it also appears that social relationships can condition the associated neural responses in individuals.

It is therefore significant to note that perceptions of value, whilst they must originate through cognition and brain function, are fundamentally associated with a society and the complex social relationships that exist therein.  There is such a thing as society.  Furthermore, Sale Events are dependent on more than a simple individual comparison of cost and benefit.  Environmental factors within the society play a role, one of which is the fairness and equality that underpin the transactional behaviour.

 

Notes:

[1] Limbad, Shaileshkumar J.  “The Application of Classical Conditioning Theory in Advertisements”, International Journal of Marketing and Technology 3.4 (Apr 2013): 197–207.

[2] Wascher CAF, Bugnyar T (2013)  Behavioral Responses to Inequity in Reward Distribution and Working Effort in Crows and Ravens. PLoS ONE 8(2): e56885. doi:10.1371/journal.pone.0056885

[3] The willingness to participate in the token exchange is referred to as “exchange performance” and is the likelihood a token exchange will occur.  Crows like humans are varied in their responses to stimuli as is recognised in the Value Surface concept.

[4] This explanation seems more plausible than the alternative that these behavioural traits arose in the common ancestor of birds and mammals, around 320 million years ago.

Valoris Cognita Barcelona

In memory of Joe Egan, born 25th February 1916

Here we move tentatively into the terra incognita of the physiology of value perception.  Expect amendments to follow.

The previous posting on Ubiquitous Error Elimination leads us to consider that value perceptions gained through the automatic numerical meanderings of a model fitted by the Least Squares Method may have something in common with the value perceived by real consumers of some actual goods.

A point of commonality is to consider one’s brain as a model that delivers consciousness of observed reality.  The observations are sensory inputs that comprise many analog signals, which an internal perception must seek to organise into a cognitive model with a minimum error value.  Although the physiological processes are largely unknown, one might expect some form of neural modelling and fitting to observed reality as part of the emergence of conscious awareness.  If so, then all such model fitting will be subject to the general principles of navigating a complex error surface.[1]

In The Grand Design (Bantam Books, New York, 2010) Stephen Hawking and Leonard Mlodinow set out to address some very big questions employing “model-dependent realism”, which assumes our brains form models of the world based on information received through the senses.  There is no definitive true reality and many such models can co-exist and may be adopted dependent on their usefulness and value.  Such individual perceptions of an external reality are likely to depend on intrinsic assumptions and will probably find acceptable limits of error that are sufficient to ensure survival.  To do more, at least in a primitive society, would be a waste of energy.

If one could link the concept of a potential field, which we have hypothesised as acting to constrain the creation of value in the raising of a Value Surface, to an error surface associated with cognition and perception, then this could indicate some physiological foundations for the earlier hypotheses.  Some physiological potential must be driving a descent down an individual’s cognitive error surface to achieve a reliable perception of reality.  Otherwise one’s consciousness would have no physical cost and Maxwell’s Demon might happily defy the 2nd law of thermodynamics.  Information that confers survival in a primitive society, and perhaps quality of life in a modern society, must be classified as more valuable than a random replication of useless information, indicating a higher use-value of external objects or a greater exchange value.  Hence information can be made valuable through the very processes of biological perception.

This sketching out of an association between the physics of value perception and the biological origins of consciousness is entirely speculative.  A deeper analysis of this association must await another day.  However, the general concept of an intrinsic neural model that is fitted to observed reality does lead to some interesting observations.

This concept explains individual differences of opinion that are reflected in an oscillating Value Surface.  Individual perceptions of the real world clearly differ.  The start points from which a neural model is fitted to these perceptions should certainly differ.  Different acceptable local minima on the error surface may provide different individuals with different interpretations of the same reality.  Human beings probably do not process sensory data in exactly the same way and clearly can reach different conclusions when given similar scenarios to manage.  If people behaved like automata, then each commodity Value Surface would be a rigid plane of equal valuation.

Yet people’s opinions and beliefs are extremely stable for such a dynamic fitting of internal model to external reality.  Such stability could arise if every new minimum search begins at the most recent minimum for a comparable reality, perhaps retrieved from memory.  In this case only the change from the previous reality needs to be reconstructed in the modified internal model, which then provides the next start point in a continual modification of an internal neural model to reflect changing perceptions of a real world.
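A minimal sketch of that warm-start idea (the error functions and tolerances here are invented for the example): refitting to a slightly changed reality from the previously stored minimum takes noticeably fewer descent steps than refitting from scratch.

```python
# Illustrative sketch of warm starting: refitting to a slightly changed
# "reality" from the previously stored minimum takes noticeably fewer descent
# steps than refitting from scratch.  Functions and tolerances are invented.
def descend(error, start, lr=0.1, tol=1e-8, max_steps=100_000):
    x, steps = start, 0
    while error(x) > tol and steps < max_steps:
        grad = (error(x + 1e-6) - error(x - 1e-6)) / 2e-6   # numerical slope
        x -= lr * grad
        steps += 1
    return x, steps

reality_1 = lambda x: (x - 2.00) ** 2    # yesterday's world
reality_2 = lambda x: (x - 2.01) ** 2    # today's, only slightly changed

minimum_1, _ = descend(reality_1, start=10.0)        # fitted once, held in memory
_, cold_steps = descend(reality_2, start=10.0)       # refit from scratch
_, warm_steps = descend(reality_2, start=minimum_1)  # refit from the memory

print(f"cold start: {cold_steps} steps, warm start: {warm_steps} steps")
```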

Barcelona Seafront

Whilst cogitating on this very subject of the fundamental origins of the conscious mind, the author was sitting on a bench on the Barcelona waterfront.  A very brief interruption was made by a smart thirty-something who mixed languages rapidly in an urgent attempt to communicate.  Within ten seconds the chap had disappeared, along with a bag containing everything that was valuable, snatched by a second person during the distraction.  Passport, wallet, travel tickets, laptop, money: all had disappeared.  Yet in disbelief I imagined I could see my familiar grey stolen rucksack where it should have been, on the bench beside me, for a good few seconds before grim reality fully allowed itself to be recognised.  Reality had changed too quickly and it seemed the refitting of my internal model was taking long enough for the processing delay to be noticeable.  Later on at the UK Passport Office, I was informed that Barcelona is the bag-snatch capital of Europe; had I known this, the adjustment to the new reality might have been smoother.  Or maybe I would have protected my belongings with more conscious deliberation.

Several years on, the memory of that minor Barcelona trauma is fresh and easy to recall.

As considered in “Writing the Information”, can such vivid memories be the river valleys etched into the error surface of my consciousness by the cascading experience of these earlier events?  This is a subjective and even metaphysical suggestion, but such a cognitive system should certainly be an attribute in favour of survival and as such could be a selected epigenetic trait.  Important information would be considered valuable by its hosts.  I will be more careful of my luggage on any future visit to Barcelona[2].

Whatever the mental mechanisms are, and however current controversies on the nature of the mind and consciousness play out in the future, the subject is central to understanding innovation.  Not only are the intellectual processes that act on information at the very origins of innovation, but the subjective appreciation of value by the consumer, whatever the product of the imagination, can be traced back to its source in the obscure processing of the human brain and its roughly 100 billion information-processing neurons.

 

Back in Barcelona in 1887 Santiago Ramón y Cajal started to work with a new Golgi staining method that used a silver preparation which, for the first time, enabled neurons to be clearly visible through a microscope.  It was the start of the modern discipline of neuroscience.  Ramón y Cajal used the Golgi method to produce many graphical illustrations of complex neuronal shapes.  On observing these cellular structures exemplified below, it is difficult not to see similarities to the dendritic patterns considered in “Writing the Information”, and to infer that the associated metaphor might extend into this neuroscience domain.  That is, the tree-like neuronal patterns once again suggest, albeit circumstantially, that an energy transmission function is at the heart of these microscopic constituent cellular elements of the brain and central nervous system.

Santiago Ramón y Cajal shared a Nobel Prize with Camillo Golgi in recognition of their work on the structure of the nervous system, which today forms the “Neuron Doctrine” at the basis of the current understanding of the anatomy and physiology of the central nervous system.

 

Purkinje Neuron

Drawing of Purkinje neuron by Santiago Ramón y Cajal, 1899;
Instituto Santiago Ramón y Cajal, Madrid, Spain.
Acknowledgement to Wikipedia: http://en.wikipedia.org/wiki/File:PurkinjeCell.jpg

The dendritic structure of neuron anatomy and physiology enables the cellular behaviour to be mapped onto the generic “Green Box of Innovation” template introduced earlier.  In this case, an electrical signal flows from the multiply connected and complex dendritic structures, through the cell body and along a single axon that can reach across millimetres, to stretch out to a branched terminal region, there to connect to dendrites of neighbouring neurons.  The axon-dendrite connection is known as a synapse, in which the communicated signal is transferred by chemical means.  Here the information transfer through the synapse requires a transformation of electrical to chemical energy in neurotransmitters, and then back to electrical energy as the neurotransmitters bind to synaptic cell receptors to begin the transmission through the next dendritic link of a connected neuron.

 

Neuronal Green Box

 

A synaptic link connecting neurons can either excite or inhibit the transmission of an electrical signal, known as an action potential, in connected dendrite links.  Perhaps 10,000 such dendrite signals converge on a cell body to give rise to a single event at the axon hillock, the point where the filamentous axon connects with the cell body.  This integration of many dendrite signals, which must cross an energy threshold to determine whether the neuron will fire a single electrical pulse through its axon to communicate with its cellular neighbours, is a main physiological function of the brain and other parts of the central nervous system.  These pulses may last for only a millisecond and each neuron may contribute to the information flow up to 100 times per second.  Clearly there is much information flowing through the average brain.
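This threshold integration can be caricatured in a few lines of code.  The following is a minimal leaky integrate-and-fire sketch, not a model of any particular neuron; all constants are illustrative.

```python
import numpy as np

# A minimal leaky integrate-and-fire caricature of the integration described
# above, not a model of any real neuron: many incoming dendrite signals (some
# excitatory, some inhibitory) are summed at the cell body, and the neuron
# fires only when the accumulated potential crosses a threshold.
rng = np.random.default_rng(seed=7)

n_inputs = 10_000       # dendrite connections converging on the neuron
threshold = 30.0        # firing threshold at the axon hillock (arbitrary units)
leak = 0.9              # fraction of potential retained each millisecond

# Synaptic weights: mostly weakly excitatory, some inhibitory.
weights = rng.normal(0.01, 0.02, n_inputs)

potential, spikes = 0.0, 0
for ms in range(1000):                       # one second in 1 ms steps
    active = rng.random(n_inputs) < 0.05     # which dendrites fire this step
    potential = leak * potential + weights[active].sum()
    if potential >= threshold:
        spikes += 1                          # a pulse is sent down the axon
        potential = 0.0                      # reset after firing

print(f"{spikes} spikes in one second")
```

With these illustrative numbers the neuron fires on the order of a hundred times per second, in line with the rates quoted above.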

 

We have described an energy transfer process that needs to reach a critical threshold before a neuron will fire and propagate its signal.  Billions of such signals must converge to create a perception of value at a Consumer Product Interaction that is a precursor of a Sale Event.  Again this is an integration of received information into an “all-or-nothing” decision to purchase.  Though differing in scale, similarities appear between the energy flows of the action potentials of neural circuitry and those operating on consumer preferences in the shopping centre.

There are Artificial Intelligence (AI) models that attempt to replicate, on a very small scale, the manner in which the brain might naturally function.  Neural networks, an example of which is shown in the figure below, are brain-like numerical models of layers of connected neurons whose connection properties provide a generic set of parameters that can be specified to characterise the behaviour of the system.  These connection parameter values can be estimated using a Least Squares Method, navigating to the lowest point on an error surface between a simulated behaviour and a known “training set” of real output values.  Once the ideal simulation with the smallest error has been found, the associated neural network parameter values should faithfully reproduce the real world, so long as this remains within the limited confines defined and exemplified by the training set.

Neural Network

A typical neural network connecting four input neurons to two output neurons
through a single intermediate layer of six neurons.

AI neural networks can be useful as they continuously learn from new data, just as humans might.  The predictions they make can be informative, as are human intuitive predictions.  They are also susceptible to weaknesses of ambiguity in human understanding.  There may be many local minima on the error surface to trap the descending Least Squares Method.[3]  Also, like the brain, a neural network model is adaptable to fit the many diverse challenges an organism might face, but this means the solution is an arbitrary fit to observable data.  There is nothing intrinsic in the model that represents the world being simulated, nor are there any overt assumptions that can intelligently be applied to simplify this real world.
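A minimal sketch of the fitting process just described, with illustrative sizes matching the figure (four inputs, six hidden neurons, two outputs): the network’s connection weights are adjusted step by step to descend the squared-error surface against a training set.  Plain gradient descent stands in here for the least-squares navigation.

```python
import numpy as np

# Minimal sketch of fitting a tiny neural network by descending the squared-
# error surface against a training set.  Sizes match the figure (four inputs,
# six hidden neurons, two outputs); plain gradient descent stands in for the
# least-squares navigation described in the text.
rng = np.random.default_rng(seed=0)

X = rng.uniform(-1.0, 1.0, (200, 4))          # training inputs
y = np.tanh(X @ rng.normal(size=(4, 2)))      # a known "real world" to learn

W1 = rng.normal(0.0, 0.5, (4, 6))             # input -> hidden connections
W2 = rng.normal(0.0, 0.5, (6, 2))             # hidden -> output connections

lr = 0.05
for step in range(2000):
    H = np.tanh(X @ W1)                        # hidden layer activity
    out = H @ W2                               # simulated behaviour
    err = out - y                              # residual against the training set
    gW2 = H.T @ err                            # gradient of the squared error...
    gW1 = X.T @ ((err @ W2.T) * (1.0 - H**2))  # ...propagated back through the net
    W2 -= lr * gW2 / len(X)                    # step downhill on the error surface
    W1 -= lr * gW1 / len(X)

print(f"mean squared error after fitting: {np.mean(err ** 2):.4f}")
```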

In the real brain of the analyst, a real multi-billion-neuron network can be applied to explore the world using models with some conceptual simplification.  Effectively this positions the human processing power at the front end of the entire modelling process.

This is the origin of An Innovative Enterprise Simulation that uses the Method of Least Squares to provide a vision that would otherwise be unavailable to the unassisted human senses.

It is a model to explore the process of innovation itself.

 

Notes:

[1] An error surface emerging from the fit of neural systems to physiological signals will certainly comprise a huge number of dimensions.

[2] The points here are discussed in considerable detail in The Believing Brain by Michael Shermer (Constable and Robinson Ltd, London, 2012) who considers that many beliefs are hard-wired into our brains and then consciously rationalised often through the selective use of information and associated mechanisms of bias.

[3] Actually neural network algorithms can apply such mechanical concepts as momentum, whereby the speedy descending search for a minimum can overrun the lowest local point; though it might then need to retrace its search, this can avoid getting stuck in a local crevice on the error surface.

Ubiquitous Error Elimination

Evolution by Error Elimination:  There is one feature on the landscape of innovation that has already been recognised and which will arise again in the future, time and time again.  It appears in all innovation management and evolutionary systems.  It is essential for the creation of new knowledge and in the perception of its value.  It is a fundamental process in the building of models and the fitting of these models to the real world.  These are some of the guises of the ubiquitous Error Elimination.

In its most fundamental form Error Elimination appears in the epistemology of the philosopher of science Sir Karl Popper.  In The Logic of Scientific Discovery (1934), Popper recognised an asymmetry in the nature of knowledge: whilst no amount of empirical evidence can prove an assertion to be true, a single case alone may prove it to be false.  It follows that no theory can definitively be proven to be true.

In later work Popper went on to explore how scientific knowledge, which originates in the subjective mind of the scientist, goes on to become an “objective” feature of the world.  In Objective Knowledge: An Evolutionary Approach (1972) Popper develops a “three-worlds” view in which all physical artefacts are “World 1” objects and subjective thoughts and ideas belong to “World 2”.  Popper’s “World 3” is populated by things originating through the human mind but which have gone on to have an existence beyond the confines of that mind.  These include abstract concepts, the content of all books, designs, theories, etc.

Popper’s Three-Worlds Relationship

Combining the approach to challenge the validity of existing theories with empirical tests designed specifically to bring about their failure, together with the creation of objective scientific knowledge for those theories that survive this ordeal of falsification, led Popper to conclude that scientific knowledge creation proceeds through an evolutionary sequence:-

Problem 1 >> Tentative Solution >> Error Elimination >> Problem 2

 

Here, the tentative solution to the initiating problem is continually refined in the light of new empirical evidence, until the new data fundamentally conflicts with existing knowledge, which gives rise to a new problem for the cycle to repeat.

 

Error Elimination Creates Value by Risk Reduction:  In earlier work we have extended the evolutionary epistemology of Karl Popper to reach technology innovations that might emerge from the scientific research upon which the original work of Popper is based (Egan et al., 2013, Williams et al., 2013).   This involves an explicit recognition of a subjective Value Appreciation stage which forms the link between subjective World 2 and objective World 3 in Popper’s evolutionary knowledge theory.

Indeed, for scientific knowledge, Popper describes such a value appreciation that is achieved through inter-subjective testing, expert peer review and publication and through which the knowledge becomes objective.

4-Point Innovation Cycle

Popper’s evolutionary epistemology cycle, including an explicit identification of Value Appreciation

Initially, there is often a high risk that a Tentative Solution will not consistently resolve its initiating problem in practice, and proof of concept projects are required to understand and manage this risk.  This conforms to Popper’s Error Elimination stage, the output of which may comprise accumulated information on designs, and the technical and commercial evaluations from which to conclude the potential benefits and residual risks of an innovation.  In fact, the reduction in risk through Error Elimination can be interpreted as a creation of value through innovation, as it is this value that is perceived by the consumer of this information.

In terms of the previous “Green Box of Innovation” that provides a generalisation of an innovation process based upon enhancing the value of information, it is the parameters of the “box” that determine the operational form through which input information is transformed into outputs that have utility and value.   Maximising the value of the outputs is once again an application of Error Elimination to discover the parameters that provide the best operational form for the Tentative Solution to resolve the real world problem it is tentatively designed to address.

In a direct analogy with the growth of scientific knowledge, the existing Tentative Solution should be repeatedly challenged.  The empirical information will continue to provide evidence of utility and thereby continually adjust perceptions of value.  Hence, feedback loops operate through which the value of the Tentative Solution can be enhanced through the Error Elimination process.

 

Error Elimination by Least Squares:   An innovator may deploy a powerful cocktail of creativity, intuition and experience to make a Tentative Solution relevant and valuable by Error Elimination.  Computers are not gifted with such human capabilities, but on the other hand they excel in their relentless ability to crunch numbers.

The Least Squares method is one of a number of numerical optimisation techniques whereby the outputs of a computer simulation can be ‘fitted’ to real-world data.  To do this, some starting values of the model parameters are selected, essentially as guesses, and a simulated behaviour is derived.  The simulated outputs are compared with real life and the difference is a measure of the error of that simulation.  This initial error can indicate how to adjust the model parameters to achieve a better fit to the empirical data.  The Least Squares approach thus enables a further, better guess at the model parameters, and onward rolls an iterative process of Error Elimination: continually improving the match between the simulated and the real, minimising the error, and homing in upon parameter values that may provide new insight into the real world through the window of a best-fit model whose parameters now describe real behaviour.

Error Elimination we have seen to be part of the process of innovation.  With the Least Squares method it becomes an algorithmic procedure to navigate an error surface.  It works as follows.

It is as though a blind wanderer is placed into a mountainous terrain (for a two parameter model, where the error is a vertical third dimension) with the task of finding the point of lowest altitude, for at this point of minimum error there can be found some useful insight.  Her tool is a stick of enormously variable length through which she can perceive the elevation of the surrounding landscape.  Down steeply sloping hillsides her stick will extend to accelerate descent and avoid the confusion of small rocky undulations.  Into the valley her guide is shortened to follow a meandering contour, always descending towards her goal.  When the topography becomes tortuous, progress is restricted to very small steps, frustrating advancement as the blind wanderer must squeeze through each crevice eventually perhaps to expose wider valleys.  Finally, when all around is higher from the shortest to the longest reach, the wanderer may wonder if she is at the unique point of minimum error.  The wanderer may mark that spot and start again and then again from distant and disparate origins to confirm uniqueness[1], although this might not be necessary.  She may have acquired a valuable insight.
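The wanderer’s stick of variable length corresponds to an adaptive step size.  A minimal sketch, assuming a simple two-parameter error surface invented for the example: the step extends after each successful downhill move and shortens whenever a trial step would climb.

```python
import numpy as np

# The blind wanderer as code: descend a two-parameter error surface, invented
# for the example, using a step length that extends after each successful
# downhill move and shortens whenever a trial step would climb.
def error(p):
    a, b = p
    return (a - 3.0) ** 2 + 10.0 * (b + 1.0) ** 2 \
        + 2.0 * np.sin(3.0 * a) * np.cos(2.0 * b)   # small rocky undulations

def slope(p, h=1e-6):
    """The stick: feel out the local gradient numerically."""
    da, db = np.array([h, 0.0]), np.array([0.0, h])
    return np.array([(error(p + da) - error(p - da)) / (2 * h),
                     (error(p + db) - error(p - db)) / (2 * h)])

p = np.array([-5.0, 5.0])               # dropped somewhere in the terrain
step = 1.0
for _ in range(10_000):                 # safety cap on the wander
    trial = p - step * slope(p)
    if error(trial) < error(p):
        p, step = trial, step * 1.2     # downhill: extend the stick
    else:
        step *= 0.5                     # uphill: shorten and try again
    if step < 1e-9:                     # all around is higher: a minimum
        break

print(f"minimum near a={p[0]:.2f}, b={p[1]:.2f}, error={error(p):.3f}")
```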

Watching the Least Squares algorithm operate in the virtual world of a computer, it is easy to imagine the numerical model as a blind wanderer seeking the best fit to measurements of reality.  The patterns of descent show a striking resemblance to those previously described in “Writing the Information”, although the topography of an error surface runs through n+1 dimensions, where n is the number of model parameters.   However, this complexity is not relevant for the Least Squares algorithm as Error Elimination proceeds just as it would in our familiar three dimensions.

In an ideal world the final error could be completely eliminated.  It would be an unmistakeable match of a perfect model with perfect data.  Yet all measurements contain their own errors (noise) and in the output of all worldly processes the primary signal is polluted by artefacts which confound perfection with ambiguity.  Also, all models must necessarily be simplifications of the real world, with a judicious ignorance of secondary and tertiary influences.  A perfect model of the real world requires the real world to be the model. For the innovator, it is sufficient to be close enough for practical purposes.

So the innovator must still contribute an essential human element, to innovate upon the structure of the model to better conform to real world observations.  The investigator thus enters into a liaison with the computer to become an n+2 dimension of a hybrid man-machine error surface, which must be navigated to make the model converge towards reality.  Here the inventor is the creative agent giving the model its operational form and the innovator contributes by forging the relationship of the model with reality.  And there may be as many models as pictures hung in a gallery, for value is not in the picture itself but in the understanding gained of its subject.

It is perhaps surprising or even problematic that an automatic computer routine such as Least Squares may be suggested as a means or even a metaphor for innovation.  However, it is not a paradox if the algorithm works on new inputs, so that the path taken to descend the error surface is new and may lead to new and potentially valuable insights.  Of course if this is repeated using the same inputs it would be repetitious and nothing of value could emerge.  Nor is there any accumulation of value as the original path descends to the point of minimum error, as it is only when this point is reached that any value is realised in the insight provided by the “best-fit” model parameters and outputs.

In all the above cases innovation is making information valuable through a process of Error Elimination.  That analogous mechanisms appear in both human and machine applications suggests that the process of innovation itself may not be an entirely social phenomenon.

 

Notes:

[1] This may be considered to be a rather trivial instance of Popper’s challenge of falsification.

On Value, Capital and Energy

Innovation:  Making Information Valuable.  To begin to make sense of this boiled down definition we need to understand the nature of both Information and Value.  Here we will discuss the latter.

In fact, even the question “what is value?” can lead an individual to respond with the subject of their “values”, which are entirely different.  Dictionaries contribute definitions that are rather circular, along the lines of ‘value is how much something is worth’.  For something more fundamental we need to go back to the days of the first classical economists.

In the works of the 19th century political economists the understanding of the nature of value is of major significance.  In 1817 David Ricardo embarked on On the Principles of Political Economy and Taxation with an initial chapter “On Value”, in which he developed the twin concepts of value in use and value in exchange proposed forty years earlier by Adam Smith.  Fifty years further on, Karl Marx also developed these notions by virtue of his extremely detailed analysis in Capital.  What followed has been over one hundred years of controversy that continues to this day.

The subject begins traditionally by comparing the value after capture of a beaver and a deer in some primitive society.  Both have a use-value or utility which for the two animals are qualitative and different.  They also have an exchange value one for the other, whereby the beaver hunter might acquire the utility of the deer and vice versa through some mutual exchange with the deer hunter.  This exchange must depend on the relative value in exchange of the beaver and deer.

Ricardo makes a primary assumption here:-

In the early stages of society, the exchangeable value of these commodities, or the rule which determines how much of one shall be given in exchange for another, depends almost exclusively on the comparative quantity of labour expended on each.

So if it takes twice the labour to capture a beaver as a deer, then one beaver will exchange for two deer.  On such considerations Ricardo follows up with a warning:-

That this is really the foundation of the exchangeable value of all things, excepting those which cannot be increased by human industry, is a doctrine of the utmost importance in political economy; for from no source do so many errors, and so much difference of opinion in that science proceed, as from the vague ideas which are attached to the word value.

The rationale here depends on the use value.  Both hunters have need for part of the other’s catch and have a choice to hunt for it themselves or undertake a mutual exchange.  The exchange value has no meaning without an ultimate use value, and the proportions of exchange will depend on the relative effort to acquire the item that is needed.  Exchange values must be based on some quantitative relationship, and the quantity the beaver and the deer have in common is the amount of human labour devoted to their capture.

Innovation in this early state of society might improve the trap to catch the beaver.  In this case the exchange value of the beaver might be reduced to one deer as then the same amount of human labour is needed to capture either animal.
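As a worked example (the labour times are illustrative), Ricardo’s rule makes the exchange ratio simply the ratio of the labour embodied in each animal:

$$\text{exchange ratio} \;=\; \frac{L_{\text{beaver}}}{L_{\text{deer}}} \;=\; \frac{2\ \text{days}}{1\ \text{day}} \;=\; 2\ \text{deer per beaver}$$

After the improved trap halves the labour of capture, $L_{\text{beaver}} = 1$ day and the ratio falls to one deer per beaver.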

In a more diverse and developed market, Ricardo considers capital to be the embodiment of the labour involved in its creation; this capital, in machinery for example, contributes in an incremental and proportional manner to the total labour required to bring the traded goods to market.  Whether innovation is applied to the product or to the efficiency of the capital in the means of production or distribution, the effect is the same: to diminish the total labour required and thereby reduce the exchange value of the product.

If fewer men were required to cultivate the raw cotton, or if fewer sailors were employed in navigating, or shipwrights in constructing the ship, in which it was conveyed to us; if fewer hands were employed in raising the buildings and machinery, or if these, when raised, were rendered more efficient, the stockings would inevitably fall in value, and consequently command less of other things.
……. Economy in the use of labour never fails to reduce the relative value of a commodity, whether the saving be in the labour necessary to the manufacture of the commodity itself, or in that necessary to the formation of the capital, by the aid of which it is produced. 

This is the long term or equilibrium effect of innovation in what has become known as the Labour Theory of Value.  In the long term innovated commodities become more socially accessible.  However, in developed markets there are time-dependent factors that also must be taken into account.  The value derived from a measure of total labour required might be considered a natural value, whereas the market value at any time might deviate around this due to numerous factors concerning the specific properties of the market and the individual preferences it comprises.

The awareness that the labour of different professions in reality contributes value in different degrees is simply accounted for.  This difference is a relatively fixed feature of commodity production and thus different periods of labour duration might be attributed to different skills or intensities of work.  And the fact that different individuals might labour with different intensities is similarly accommodated by taking an average value for a generic labour necessary at a particular time and under specific conditions of production – which Karl Marx refers to as the socially necessary abstract labour[1] and which he considers to be the value of the commodity.

By considering use values and exchange values, Marx identified two forms of transaction.  A transaction typical of the exchange of commodities, such as occurred between the beaver and deer hunters, first requires the intermediate transfer of goods into money.  This is the commodity-money-commodity (C-M-C) transaction through which the use values of the commodities are traded.

Quite different are money-commodity-money (M-C-M) transactions.  Here the initial money as capital is invested in materials, wages of labour and capital equipment in order to make commodities for sale.  This sequence starts and ends with money and the motivation of the capitalist to pursue the transaction is to finish with more money.  This net profit from the transaction is what Marx refers to as surplus value and is possible if the labourer is paid less than the value he is able to create through the deployment of his labour.
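In Marx’s notation the circuit is written M–C–M′, where the increment ΔM is the surplus value; for example (the figures are illustrative):

$$M' = M + \Delta M, \qquad \text{e.g. } \pounds100 \rightarrow C \rightarrow \pounds110, \quad \Delta M = \pounds10$$

The transaction is rational for the capitalist only when ΔM is positive, which, on the labour theory reading, requires the labourer to be paid less than the value his labour creates.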

This is where innovation can acquire its time-dependent benefits.  The natural effect of innovation in increasing the productivity can be immediate, whilst the adjustments of relative value and price take time as the innovation propagates through the market.  For this period of time there is a relative surplus value from trading the commodities.  The market value will remain above the adjusted natural value.  In cases of a monopoly advantage provided by patent protection, for example, this adjustment might take some considerable time during which the relative surplus value effect created by innovation can provide for increased profits, wages or both.

In Capital, Karl Marx considers many consequences of the M-C-M transaction.  For Marx the existence of money is essential for the conversion of actual physical (concrete) labour to become the abstract labour deployed in the creation of value.  Eventually, in a fully developed capitalist market system based on the circulation of capital along with individual freedom and equality, there is a symbiotic relationship between the capitalist and the labourer.

Competition imposes the need for continuous capital accumulation on the capitalist.

Competition makes the immanent laws of capitalist production to be felt by individual capitalists as external coercive laws.  It compels him to keep constantly extending his capital in order to preserve it, but extend it he cannot except by means of progressive accumulation.

This translates into a never-ending quest for relative surplus value.  Innovation is a primary tool to bring about this increase in labour productivity [2].

On the other hand, survival of labour requires wages that are sufficient to pay for the commodities the labourers need for their reproduction – the “wage goods”.  Innovation here can increase the productivity of the labour-power required to produce these wage goods, making them relatively more accessible.  Marx also notes that wages may consequently fall, enabling further access to relative surplus value for the capitalist.

In summary, whether society pursues a simple exchange of useful commodities or whether this exchange is undertaken to satisfy the demands of a capitalist economy, it is the embedded labour in goods and services that determines their value.

But this pre-eminence of the Labour Theory of Value seems to fundamentally contradict how the modern world operates.

Is not value, like beauty, really in the eye of the beholder?  Surely it is the prerogative of the consumer to decide what is and what is not valuable.  And for those things that are considered valuable, the price is set by the laws of supply and demand, whilst the income and idiosyncrasies of the consumer determine that value.

Are not the endeavours of labour, like any other service, subject to the same laws of supply and demand?  And this labour-power needs to be of value to its customer, which in this case is the employer who will set remuneration according to this perceived value.

Merging the prevailing consumer perception of value within a Labour Theory of Value is an essential step to provide an interpretation of value that is relevant for a modern society.  For this we have introduced the concept of a Value Surface.  This Value Surface combines with the classical Labour Theory of Value to form a Labour Theory of Value Creation in which consumers form a perception of value on the basis of the creative endeavours of the innovator.

The classical Labour Theory of Value presents a Value Surface as a flat plane which is held at a value equivalent to the labour deployed in production and distribution.  Onto this may be added a residual surplus value that should slowly decay over time with the diffusion of innovation through the market.  Apart from these minor time-dependent features, the Value Surface is presented as a static and fixed statement of value for the associated commodities.  The value of the beaver is fixed at two deer!

In the construction of a more realistic Value Surface we should recognise that there will be a variable distribution of the appreciation of value across a population of potential consumers.  Furthermore, a potential energy might be considered retained in an elevated Value Surface.  The source of this energy could then be traced through the Labour Theory of Value back to the endeavours of the labourer as envisaged by the classical economists.  The Value Surface then is like a huge wobbly marquee erected using these energies, which have at their source the energy of labour as a biological phenomenon.

It is interesting to interpret the bioenergetic transformations that drive the endeavours of the labourer in relation to the socially necessary abstract labour of Marx, which could then equally be replaced by socially necessary abstract ATP, protein and carbohydrate, or could even take the form of socially necessary abstract sunlight [3].  Of course there are energy losses to heat in the transformations that take sunlight to labour-power, as there most certainly should be in the raising of the Value Surface by human labour.  But as there is no accounting of energy conservation here, this should not be of concern.

This subject of energy flow into capital and its role in value creation was actually considered 100 years before Karl Marx published Capital.

Around 1755 François Quesnay and his fellow Physiocrats stood at the origin of modern economic thinking.  Quesnay was then the first consulting physician to Louis XV, and outside of his medical work at Versailles he was leader of the Physiocrats, who opposed the repressive mercantile system with a radical idea: that wealth and value arise only from the stock of land; that rural agricultural communities were the source of value; and that the downstream artisan labour of the cities was technically unproductive, reworking the wealth founded earlier on the land and merely consuming the resources supplied by rural communities [4].  What France needed to revive its flagging economic fortunes was stated in 14 maxims and summarised by laissez faire, laissez passer – free trade.

To illustrate the economic mechanisms in play, Quesnay produced a Tableau Économique in three versions between 1758 and 1763.  Along with its 22 notes of explanation, the Tableau was a dangerous document in pre-revolutionary Paris and it stirred a variety of different opinions.  The early French economist Mirabeau considered the Tableau to accompany the printing press and money as one of the world’s three greatest inventions.  To the philosopher J-J Rousseau it was the product of an odious, if legal, despotism [5].

 

Figure 2.8:  3rd Edition of the Tableau Économique of François Quesnay (1763)

In the Tableau Économique, the Third Edition of which is shown adjacent, there are three columns.  The Tableau charts how money flows through the economy of the farmer, the landowner and the industrialists who produce the goods consumed by the landowner and the farmer.  There is a continual flow of money between those working the land (left), those owning the land (centre) and those producing and distributing the commodities, lodging, clothing, etc. of industry (right).  Half the receipts of industry on the right are returned to the land as payment for raw materials and other products of the land.  The other half is sterile, being consumed unproductively.

Excess consumption by the unproductive right column was considered to deplete the capital that was needed for investment in future agricultural production.  “Hence it is seen that excess of decorative luxury may very promptly ruin by magnificence an opulent state.”

As a practising physician, Quesnay had developed the systematic analysis of expenditure, work, profit and consumption summarised in the Tableau Économique in analogy with William Harvey’s principles of the circulation of the blood, which had become known a century earlier.  More generally, and in harmony with the mechanical and physical advances that had recently emerged during the Enlightenment, the Physiocrats considered that an economy worked on a circular flow of money operating on mathematical principles.
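The zig-zag arithmetic of the Tableau is simple enough to reproduce.  The following is a minimal sketch in Python, assuming, purely for illustration, a rent of 2000 and the halving rule described above; it is an illustrative toy, not a transcription of the Tableau itself.

```python
# A toy rendering of the Tableau's zig-zag circulation (illustrative only).
# The landowners spend their rent half on agriculture (the productive class)
# and half on industry (the sterile class); thereafter each class re-spends
# half of every sum it receives on the other class, so successive rounds of
# spending halve and the totals converge as a geometric series.

rent = 2000.0
received_by_land, received_by_industry = 0.0, 0.0

# The landowners' initial expenditure, split equally between the two classes.
to_land, to_industry = rent / 2, rent / 2

for _ in range(40):  # forty rounds is ample for convergence
    received_by_land += to_land
    received_by_industry += to_industry
    # Each class passes on half of what it has just received.
    to_land, to_industry = to_industry / 2, to_land / 2

print(f"Total received by agriculture: {received_by_land:.0f}")      # -> 2000
print(f"Total received by industry:    {received_by_industry:.0f}")  # -> 2000
```

Each side of the economy thus turns over the whole rent, but in Quesnay’s scheme only the agricultural side regenerates a net product out of which next year’s rent can be paid; the receipts of industry are consumed sterilely, which is why excess luxury spending in the right-hand column depletes the capital of the left.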

Adam Smith had cause to visit Paris in 1765, employed as personal tutor to Henry Scott, the young 3rd Duke of Buccleuch.  Quesnay met with Smith on a number of occasions, and the ideas they shared so impressed Smith that he would have dedicated his An Inquiry into the Nature and Causes of the Wealth of Nations to Quesnay, had the latter not died two years before its publication [6].  In this book Smith identified industrial production as a source of national wealth, and he considered dangerous the view that economic phenomena are suitable for mechanistic analysis, seeing them rather as entities resolved through social relationships.  Smith thus adhered to the view of his friend and mentor, the Scottish philosopher David Hume, that the credibility of systems of philosophy should be illustrated with examples drawn from common life and history.  David Ricardo and Karl Marx later followed and strengthened this fundamentally social interpretation in their works of political economy.

It is interesting to consider how an awareness of bioenergetics and the associated energy transformations that drive human labour might have illuminated the early conversations and speculations between MM Quesnay and Smith on the origins of economic phenomena.

The Tableau Économique is a document of its time.  It is a revolutionary document that stands at the very origin of contemporary society.  Because of this we should leave the final words of this section to François Quesnay on the harmony between this society and the natural world in which it is founded [7].

L’ordre naturel et essentiel des Sociétés Politiques ….. l’étude et la démonstration DES LOIS DE LA NATURE relatives à la subsistance, et la multiplication du genre humain.  L’observation universelle de ces lois est l’intérêt commun et général de tous les hommes.  La connaissance universelle de ces lois est donc le préliminaire indispensable, et le moyen nécessaire du bonheur de tous.

(The natural and essential order of Political Societies ….. the study and demonstration OF THE LAWS OF NATURE relating to the subsistence and the multiplication of the human race.  The universal observance of these laws is the common and general interest of all men.  Universal knowledge of these laws is therefore the indispensable preliminary, and the necessary means, of the happiness of all.)

 

Notes:

[1] That is: “in a given state of society, under certain social average conditions of production, with a given social average intensity, and average skill of the labour employed.”

[2] Unbridled innovation can lead to instabilities.  Various checks on innovation, such as the need to achieve returns on existing fixed capital investment, the monopoly protection of technology and chronic labour surpluses, can act to limit the changes brought by innovation to those that are reasonable for the applied capital.  The level of exploitation is also limited by the need to retain the consumer behaviour of the working class, in order to provide a market for the commodities of the capitalists.

[3] Accounting also for additional nuclear power that is not received from the sun.

[4] These ideas were first published in Diderot’s Encyclopédie in 1756 and 1757.

[5] An interesting collection of many divergent opinions on the ideas of the Physiocrats can be found at the beginning of The Physiocrats by Henry Higgs, Macmillan and Co., 1897.

[6] As recorded on page 624 of the Dictionnaire de l’économie politique by Charles Coquelin.

[7] Éphémérides du citoyen, No. 11, page 13, 1769.

On Metaphors and Mechanisms

We have explored in the previous posting how, in order to understand innovation, evolutionary economics loosely adopts, adapts and extends concepts of evolutionary biology, applying them in an economic context.  To avoid confounding a cardinal assumption of the free will of the innovator and the entrepreneur, it is often emphasised that evolution here serves as a metaphor rather than a mechanism: one should not directly transpose biological mechanisms into the social domain.  Such metaphors are signposts to guide an enquiry, not rigid conduits to channel thinking.

Whilst biological metaphors have a natural resonance with innovation, it is the physical sciences that have had a more enduring and intimate contact with economic thought.  In the second half of the 18th century the two subjects found themselves in particularly close proximity.  Around this time Adam Smith met Voltaire, friend and collaborator of Émilie du Châtelet, who was instrumental in elucidating the nature of kinetic energy[1] and who died from complications of childbirth in the same year that she completed the first French translation of Newton’s Principia Mathematica.  In this pre-revolutionary epoch the Lumières in Paris were fermenting ideas, mixing the early economic concepts of the Physiocrats, led by François Quesnay, with the science of Jean d’Alembert, who with Diderot had recently created the new Encyclopédie.  Smith, among many others, was present and could not have remained uninfluenced by such a prestigious gathering when ten years later he produced his seminal work, An Inquiry into the Nature and Causes of the Wealth of Nations.

The Tableau économique from 1758, in which François Quesnay and the Physiocrats considered agricultural surpluses as the source of wealth, which then flowed back and forth to the landowners and to the industrialists in the cities[2].

A prototype MONIAC (Monetary National Income Analogue Computer) from 1949 that uses fluidic logic to model the workings of the UK economy[3] (exhibited at the University of Leeds).

The further development of Political Economics proceeded through the 19th century, involving such notable figures as David Ricardo, John Stuart Mill and Karl Marx.  Then around 1870 came the “Marginal Revolution”, initiated through the independent publications of Léon Walras, William Stanley Jevons and Carl Menger.  These later developments too can trace their origins to the principles of mechanics, as is particularly evident in the words and equations of Walras.  This is nowhere more clearly demonstrated than in his final summary publication of 1909, Économique et mécanique, which concludes with the following translated remarks:-

In examining as carefully as one might wish the four theories given above namely, the theory of maximum satisfaction with the exchange [of commodities] and the maximum energy of a balanced beam, and also the theory of general [economic] equilibrium of the market and that of the universal equilibrium of celestial bodies, one will find between these two mechanical theories a single unique difference: the exteriority of the two mechanical phenomena and intimacy of the two economic phenomena, and thus, the ability to make everyone aware of the conditions of equilibrium of the beam and conditions of universal equilibrium of the sky, due to the existence of common measures for these physical phenomena, and the inability to demonstrate to anyone the conditions of equilibrium of the exchange and the conditions for a general equilibrium of the market, because of a lack of common measures for these psychological phenomena.  We have metres and centimetres to note the length of the lever arm of the beam and grams and kilogrammes to note the supported weights.  We also have instruments to determine the relative movement of stars.  We are not able to measure the intensity of need between those who exchange goods. But this should be of no consequence because with each exchange, consciously or unconsciously, a person will know deep down whether his needs are satisfied or not in proportion to the value of the goods exchanged.  Whether the measure be externally made or be internal, depending on whether the measurements are physical or psychological, this does not prevent the measurement itself and a comparison of quantities and quantitative relationships, and therefore as a consequence the science should be mathematical.

 

Walras goes on to ask whether the masses and forces of mechanics, and the equivalent utilities and scarcities of commodities in economics, are not all abstract factors introduced to make the mathematical equations work in their respective domains.

Following the classical mechanics of the balanced beam and the movement of the stars used by Walras, there later came an association with the newer science of thermodynamics, which describes the steam engine and every other system in which mechanical work is drawn from the flow of heat from a hotter to a cooler body.  While classical mechanics is time-invariant, the mechanisms of thermodynamics work only in one direction: heat flows only from high to low temperatures, and the arrow of time has a single and specific direction.  Such properties make thermodynamics a natural metaphorical associate for economic phenomena.
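In symbols, this one-way character is the second law of thermodynamics; the statement below is the standard textbook form, included here only for reference.

```latex
% Second law: the total entropy of an isolated system never decreases,
% so heat flows spontaneously only from hot to cold and the 'arrow of
% time' points in one direction only:
\Delta S_{\text{total}} \ge 0
```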

Around 1875 the renowned physicist and mathematician Willard Gibbs in his publication On the Equilibrium of Heterogeneous Substances applied thermodynamic principles to chemical systems and their equilibria.  The complex mathematics delayed a broad adoption of this work, which later became recognised as one of the greatest achievements of 19th century science.

In his 1947 treatise Foundations of Economic Analysis, the American economist Paul Samuelson, who interestingly had a direct intellectual lineage back to Willard Gibbs himself, recast the mathematics of thermodynamics to apply to equilibrium phenomena in the marginal supply and demand of neoclassical economics. This opened up a rich seam of mathematical formalism that greatly enhanced the reach and power of neoclassical economic theory over subsequent decades.  Samuelson and several other economists became Nobel laureates as a result of these endeavours.
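The correspondence Samuelson exploited can be sketched briefly.  The notation below follows standard textbook presentations (and the analysis of Smith and Foley cited in note [4]), not Samuelson’s own: intensive variables such as temperature and pressure play the role of prices, while extensive variables such as entropy and volume play the role of quantities of goods.

```latex
% Thermodynamic equilibrium: energy is extremised, with intensive variables
% (temperature T, pressure P) conjugate to extensive ones (entropy S, volume V):
dU = T\,dS - P\,dV

% Economic equilibrium: utility is maximised subject to a budget, with prices
% p_i conjugate to quantities x_i:
\max_{x_1,\dots,x_n} u(x_1,\dots,x_n) \quad \text{subject to} \quad \sum_i p_i x_i = m

% First-order condition: marginal utility proportional to price, just as
% conjugate pairs balance at thermodynamic equilibrium:
\frac{\partial u}{\partial x_i} = \lambda\, p_i
```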

Mathematically, the thermodynamic and economic equilibria are both systems of constrained optimisation (on energy and utility respectively) with a similar set of equations of state.  There has, however, been little enthusiasm to link the parallel mechanical and economic worlds.  Samuelson[4] in 1960 asked:

Why should there be laws like the first or second laws of thermodynamics holding in the economic realm? Why should “utility” be literally identified with entropy, energy, or anything else? Why should a failure to make such a successful identification lead anyone to overlook or deny the mathematical isomorphism that does exist between minimum systems that arise in different disciplines?

 

This view, however, sits in some tension with the words with which Samuelson had opened his Foundations of Economic Analysis back in 1947:

The existence of analogies between central features of various theories implies the existence of a general theory which underlies the particular theories and unifies them with respect to those central features.   This fundamental principle of generalization by abstraction was enunciated by the eminent American mathematician E. H. Moore more than thirty years ago.

 

It is a matter of debate, therefore, whether we are dealing with metaphors or with mechanisms.

But where does innovation sit in this macroeconomic analysis?  In the mathematical development of neoclassical economic theory it is assumed that rational consumers maximise their utility and firms maximise their profit, so as to attain a stable equilibrium.  Innovation then appears as an external (exogenous) factor that accounts for increased growth through increased productivity over time.  It was not until the 1980s that technological change through innovation was brought inside the neoclassical analysis, with an Endogenous Growth Theory in which the mathematics is modified to accommodate the effect of technological innovation by offsetting the otherwise conventional diminishing returns of deployed capital.
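The contrast can be made concrete with the standard textbook production functions; the forms below are assumed for illustration and are not spelled out in the source.

```latex
% Exogenous (Solow) growth: technology A(t) improves outside the model,
% and capital K exhibits diminishing returns because \alpha < 1:
Y = A(t)\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1

% Endogenous growth (the simplest 'AK' form): knowledge spillovers offset
% diminishing returns, so the marginal product of capital stays constant
% and sustained growth arises inside the model:
Y = A\,K
```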

The weakness of mainstream neoclassical economics in dealing with innovation is explored by Nathan Rosenberg in his Exploring the Black Box.  This book from 1994 includes an exploration of the “path-dependent aspects of technological change”, in which it is clear that there is a close interweaving of science and technology development that cannot be understood except from a historical and time-dependent perspective.  As considered within evolutionary economics, technologies such as the transistor, the laser and information theory have extended the impact of innovation well beyond their original domain of application and justification.  These uncertainties weaken the hold that neoclassical economics can take on innovation, as the acquisition of knowledge is not costless but costly, and the outputs of R&D cannot be rationally selected at the point of investment.

Quite different forms of microeconomic analysis have been developed, aided by the availability of computational numerical models, in which concepts of statistical mechanics, and the explanation of such physical phenomena as the phase changes between solids, liquids and gases, have seen analogous mechanisms applied in economics.  In these numerical simulations of the behaviour of large numbers of independent entities, whether molecules or economic agents, unexpected patterns that resemble real-world phenomena emerge from a soup of complex interactions.  This emergent behaviour of complex systems can at least explain some of the unpredictability of economic phenomena, even though the insights it provides are essentially qualitative.  These interesting ideas are brought together in Critical Mass: How One Thing Leads to Another by Philip Ball.
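To give a flavour of such numerical experiments, the following is a minimal sketch (in Python, using only the standard library) of one of the simplest models of this family: a noisy imitation model on a ring of agents.  The model and its parameters are illustrative choices made here, not drawn from Ball’s book.

```python
# Minimal agent-based sketch: binary 'choices' (+1/-1, e.g. buy/sell) on a
# ring of agents.  Each update, a random agent either imitates the majority
# of its two neighbours or, with small probability, switches at random --
# the economic counterpart of thermal noise.  At low noise, large aligned
# domains of agents build up (herding); at high noise the pattern dissolves
# into disorder, qualitatively like a solid melting.
import random

N, NOISE, STEPS = 200, 0.02, 50_000

agents = [random.choice((-1, 1)) for _ in range(N)]

for _ in range(STEPS):
    i = random.randrange(N)
    if random.random() < NOISE:
        agents[i] = random.choice((-1, 1))          # idiosyncratic shock
    else:
        neighbour_sum = agents[(i - 1) % N] + agents[(i + 1) % N]
        if neighbour_sum != 0:                      # a tie leaves the choice as-is
            agents[i] = 1 if neighbour_sum > 0 else -1

consensus = abs(sum(agents)) / N
print(f"degree of consensus: {consensus:.2f}")      # near 1: herding; near 0: disorder
```

Nothing in the update rule mentions herding, yet it appears as an emergent property of the interactions, which is exactly the qualitative kind of insight these models offer.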

Notwithstanding the merits of the above economic ideas based on classical mechanics, thermodynamics and statistical mechanics, they say very little about technological innovation.  Each theoretical approach is naturally dependent on the validity of its underlying assumptions, such as the rational behaviour and economic equilibrium that lead to an optimum market exchange.

It is clear that a theory of innovation should not be founded on an a priori assumption of economic equilibrium, as novel ideas can be inherently disruptive of such an equilibrium.  It should recognise the relevance of the biological metaphor.  The relationship with mechanics, which has been deployed for two centuries, must therefore be adapted rather than simply inherited.

So we will propose a hybrid metaphor, combining the biological with the mechanical and linking both to the social and economic domains, to create an integrated science of innovation.

Whether we end with a metaphor or a mechanism, it is not necessary to establish a bloodline between the mechanical, biological and innovation disciplines.  Rather, we leave the last words on this point to the Nobel Prize winning economist Paul Krugman[5]:-

In economics we often use the term “neoclassical” either as a way to praise or to damn our opponents. Personally, I consider myself a proud neoclassicist. By this I clearly don’t mean that I believe in perfect competition all the way. What I mean is that I prefer, when I can, to make sense of the world using models in which individuals maximize and the interaction of these individuals can be summarized by some concept of equilibrium. The reason I like that kind of model is not that I believe it to be literally true, but that I am intensely aware of the power of maximization-and-equilibrium to organize one’s thinking – and I have seen the propensity of those who try to do economics without those organizing devices to produce sheer nonsense when they imagine they are freeing themselves from some confining orthodoxy.

 

Notes:

[1] Establishing that kinetic energy is ½mv², where m is the mass and v the velocity of a moving body.

[2] http://en.wikipedia.org/wiki/Tableau_économique

[3] http://en.wikipedia.org/wiki/MONIAC_Computer

[4] As quoted by Eric Smith and Duncan K Foley in Classical thermodynamics and economic general equilibrium theory.  Journal of Economic Dynamics & Control 32 (2008) 7–65.  This paper gives an interesting analysis of some important fundamental differences between the thermodynamic and economic analyses.

[5] For his 1996 talk to the European Association for Evolutionary Political Economy: http://web.mit.edu/krugman/www/evolute.html