Ubiquitous Error Elimination

Evolution by Error Elimination:  There is one feature on the landscape of innovation that has already been recognised and which will arise again, time and time again.  It appears in all innovation management and evolutionary systems.  It is essential for the creation of new knowledge and for the perception of its value.  It is a fundamental process in the building of models and in the fitting of those models to the real world.  These are some of the guises of the ubiquitous Error Elimination.

In its most fundamental form Error Elimination appears in the epistemology of the philosopher of science Sir Karl Popper.  In The Logic of Scientific Discovery (1934), Popper recognised an asymmetry in the nature of knowledge: whilst no amount of empirical evidence can prove an assertion to be true, a single contrary case may prove it to be false.  It follows that no theory can ever definitively be proven to be true.

In later work Popper went on to explore how scientific knowledge, which originates in the subjective mind of the scientist, goes on to become an “objective” feature of the world.  In Objective Knowledge: An Evolutionary Approach (1972) Popper develops a “three-worlds” view in which all physical artefacts are “World 1” objects and subjective thoughts and ideas belong to “World 2”.  Popper’s “World 3” is populated by things originating in the human mind but which have gone on to have an existence beyond the confines of that mind.  These include abstract concepts, the content of all books, designs, theories, etc.

Popper’s Three-Worlds Relationship

Combining the challenge to the validity of existing theories, through empirical tests designed specifically to bring about their failure, with the creation of objective scientific knowledge for those theories that survive this ordeal of falsification, led Popper to conclude that scientific knowledge creation proceeds through an evolutionary sequence:

Problem 1 >> Tentative Solution >> Error Elimination >> Problem 2

 

Here, the tentative solution to the initiating problem is continually refined in the light of new empirical evidence, until the new data fundamentally conflicts with existing knowledge, which gives rise to a new problem for the cycle to repeat.

 

Error Elimination Creates Value by Risk Reduction:  In earlier work we have extended the evolutionary epistemology of Karl Popper to encompass the technology innovations that may emerge from the scientific research upon which Popper’s original work is based (Egan et al., 2013; Williams et al., 2013).   This involves the explicit recognition of a subjective Value Appreciation stage, which forms the link between subjective World 2 and objective World 3 in Popper’s evolutionary theory of knowledge.

Indeed, for scientific knowledge, Popper describes just such a value appreciation, achieved through inter-subjective testing, expert peer review and publication, through which the knowledge becomes objective.

Popper’s evolutionary epistemology cycle, including an explicit identification of Value Appreciation (the 4-Point Innovation Cycle)

Initially, there is often a high risk that a Tentative Solution will not consistently resolve its initiating problem in practice, and proof-of-concept projects are required to understand and manage this risk.  This corresponds to Popper’s Error Elimination stage, the output of which may comprise accumulated information on designs, together with the technical and commercial evaluations from which the potential benefits and residual risks of an innovation can be concluded.  In fact, the reduction in risk through Error Elimination can be interpreted as a creation of value through innovation, as it is this value that is perceived by the consumer of the information.

In terms of the earlier “Green Box of Innovation”, which provides a generalisation of an innovation process based upon enhancing the value of information, it is the parameters of the “box” that determine the operational form through which input information is transformed into outputs that have utility and value.   Maximising the value of the outputs is once again an application of Error Elimination: discovering the parameters that give the Tentative Solution the best operational form for resolving the real-world problem it is tentatively designed to address.

In a direct analogy with the growth of scientific knowledge, the existing Tentative Solution should be repeatedly challenged.  The empirical information will continue to provide evidence of utility and thereby continually adjust perceptions of value.  Hence, feedback loops operate through which the value of the Tentative Solution can be enhanced through the Error Elimination process.

 

Error Elimination by Least Squares:   An innovator may deploy a powerful cocktail of creativity, intuition and experience to make a Tentative Solution relevant and valuable through Error Elimination.  Computers are not gifted with such human capabilities, but on the other hand they excel in their relentless ability to crunch numbers.

The Least Squares method is one of a number of numerical optimisation techniques whereby the outputs of a computer simulation can be ‘fitted’ to real-world data.  To do this, starting values of the model parameters are selected, essentially as a guess, and a simulated behaviour is derived.  The simulated outputs are compared with real-life measurements, and the difference is a measure of the error of that simulation.  This initial error indicates how to adjust the model parameters to achieve a better fit to the empirical data.  The Least Squares approach then enables a further, better guess at the model parameters, and so rolls on an iterative process of Error Elimination: each cycle improves the match between the simulated and the real, minimising the error and homing in upon parameter values that may provide new insight into the real world through the window of a best-fit model whose parameters now describe real behaviour.
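As a concrete illustration, the sketch below fits a hypothetical two-parameter exponential-decay model to noisy synthetic data using a plain Gauss-Newton least-squares iteration.  The model form, the parameter values and the noise level are all assumptions made purely for the example; in practice a library routine such as scipy.optimize.least_squares would usually be preferred.

```python
import numpy as np

# Hypothetical two-parameter model: y = a * exp(-b * t)
def model(params, t):
    a, b = params
    return a * np.exp(-b * t)

def residuals(params, t, y_obs):
    # The error of the current simulation: simulated minus observed
    return model(params, t) - y_obs

def jacobian(params, t):
    # Partial derivatives of the model with respect to a and b
    a, b = params
    return np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

# Synthetic "real-world" measurements: an assumed true process plus noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 50)
y_obs = model([2.5, 1.3], t) + rng.normal(scale=0.05, size=t.size)

# Start from an arbitrary guess; each Gauss-Newton step uses the current
# mismatch to propose better parameter values.
params = np.array([1.0, 0.5])
for i in range(20):
    r = residuals(params, t, y_obs)
    J = jacobian(params, t)
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # linearised least-squares step
    params = params + step
    print(f"iteration {i:2d}  sum of squared errors = {np.sum(r**2):.5f}")

print("best-fit parameters:", params)
```

Each pass prints the sum of squared errors, which falls as the parameters are adjusted: Error Elimination in its most literal, numerical form.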

We have seen Error Elimination to be part of the process of innovation.  With the Least Squares method it becomes an algorithmic procedure for navigating an error surface.  It works as follows.

It is as though a blind wanderer were placed into a mountainous terrain (for a two-parameter model, where the error is a vertical third dimension) with the task of finding the point of lowest altitude, for at this point of minimum error some useful insight can be found.  Her tool is a stick of enormously variable length through which she can perceive the elevation of the surrounding landscape.  Down steeply sloping hillsides her stick will extend to accelerate descent and avoid the confusion of small rocky undulations.  Into the valley her guide is shortened to follow a meandering contour, always descending towards her goal.  When the topography becomes tortuous, progress is restricted to very small steps, frustrating advancement as the blind wanderer must squeeze through each crevice, perhaps eventually to emerge into wider valleys.  Finally, when all around is higher, from the shortest to the longest reach, the wanderer may wonder if she is at the unique point of minimum error.  She may mark that spot and start again, and then again, from distant and disparate origins to confirm uniqueness[1], although this might not be necessary.  She may have acquired a valuable insight.
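The wanderer’s strategy can be sketched directly, here against a deliberately simple, hypothetical two-parameter error surface with a single minimum: the stride lengthens while downhill steps succeed, shortens when every probed direction is uphill, and several distant starting points are tried to check that the same minimum is reached.

```python
import numpy as np

def error_surface(p):
    # Hypothetical two-parameter error surface: a smooth bowl with its
    # unique minimum at (3, -2), chosen purely for illustration
    x, y = p
    return (x - 3.0) ** 2 + 2.0 * (y + 2.0) ** 2

def descend(start, step=1.0, min_step=1e-6, max_iter=10000):
    """Blind-wanderer descent: probe nearby points, lengthen the stride while
    progress is made, shorten it when every probed direction is uphill."""
    p = np.array(start, dtype=float)
    e = error_surface(p)
    for _ in range(max_iter):
        moved = False
        for d in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            trial = p + d
            e_trial = error_surface(trial)
            if e_trial < e:           # downhill: take the step and stride out
                p, e = trial, e_trial
                step *= 1.5
                moved = True
                break
        if not moved:                 # all directions uphill: shorten the stick
            step *= 0.5
            if step < min_step:
                break
    return p, e

# Restart from distant, disparate origins to check the minimum is the same
rng = np.random.default_rng(1)
for start in rng.uniform(-10, 10, size=(4, 2)):
    p, e = descend(start)
    print(f"start {np.round(start, 2)} -> minimum at {np.round(p, 3)}, error {e:.6f}")
```

Growing the stride on clear slopes and shrinking it in tight terrain is the trade-off the metaphor describes; the random restarts play the role of the wanderer’s repeated journeys from disparate origins.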

Watching the Least Squares algorithm operate in the virtual world of a computer, it is easy to imagine the numerical model as a blind wanderer seeking the best fit to measurements of reality.  The patterns of descent show a striking resemblance to those previously described in “Writing the Information”, although the topography of an error surface runs through n+1 dimensions, where n is the number of model parameters.   However, this complexity is not relevant for the Least Squares algorithm as Error Elimination proceeds just as it would in our familiar three dimensions.

In an ideal world the final error could be completely eliminated.  It would be an unmistakeable match of a perfect model with perfect data.  Yet all measurements contain their own errors (noise) and in the output of all worldly processes the primary signal is polluted by artefacts which confound perfection with ambiguity.  Also, all models must necessarily be simplifications of the real world, with a judicious ignorance of secondary and tertiary influences.  A perfect model of the real world requires the real world to be the model. For the innovator, it is sufficient to be close enough for practical purposes.
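A small numerical illustration of this noise floor, using the same kind of synthetic data assumed above: even a model with exactly the true parameter values leaves a residual error, because the measurement noise itself cannot be eliminated.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 50)
noise = rng.normal(scale=0.05, size=t.size)

y_true = 2.5 * np.exp(-1.3 * t)   # the assumed underlying process
y_obs = y_true + noise            # what can actually be measured

# Even the "perfect" model leaves a residual: the noise itself
perfect_residual = np.sum((y_true - y_obs) ** 2)
print(f"error of the perfect model on noisy data: {perfect_residual:.4f}")
print(f"expected noise floor (n * sigma^2):       {t.size * 0.05**2:.4f}")
```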

So the innovator must still contribute an essential human element: to innovate upon the structure of the model itself so that it better conforms to real-world observations.  The investigator thus enters into a liaison with the computer, becoming as it were an (n+2)th dimension of a hybrid man-machine error surface, which must be navigated to make the model converge towards reality.  Here the inventor is the creative agent giving the model its operational form, and the innovator contributes by forging the relationship of the model with reality.  And there may be as many models as pictures hung in a gallery, for value lies not in the picture itself but in the understanding gained of its subject.

It is perhaps surprising or even problematic that an automatic computer routine such as Least Squares may be suggested as a means or even a metaphor for innovation.  However, it is not a paradox if the algorithm works on new inputs, so that the path taken to descend the error surface is new and may lead to new and potentially valuable insights.  Of course if this is repeated using the same inputs it would be repetitious and nothing of value could emerge.  Nor is there any accumulation of value as the original path descends to the point of minimum error, as it is only when this point is reached that any value is realised in the insight provided by the “best-fit” model parameters and outputs.

In all the above cases innovation is making information valuable through a process of Error Elimination.  That analogous mechanisms appear in both human and machine applications suggests that the process of innovation itself may not be an entirely social phenomenon.

 

Notes:

[1] This may be considered to be a rather trivial instance of Popper’s challenge of falsification.
