2 Von Neumann's Problem:

... This result is obviously substantial, but to express its real force we must formulate it in such a way that it cannot be trivialized...--Burks [4, p. 49]

Arthur Burks makes this comment following a 46-page recapitulation of John von Neumann's design for a self-reproducing machine (realised as a configuration embedded in a 29-state, two-dimensional cellular automaton, or CA). The comment, dating from 1970, can fairly be said to have initiated an extended debate about the significance of this work of von Neumann's, which has waxed and waned over time, but still persists to the present day.

Von Neumann's design is large and complex, and relies for its operation on exact and intricate interactions among its many relatively simple parts. In that sense, it is certainly substantial; but Burks is absolutely accurate in pointing out that this intricacy, in itself, is not necessarily interesting or significant. In particular, if the same ``results'' could be achieved, or the same ``problem'' solved, with drastically simpler machinery, then the interest of von Neumann's design would be critically undermined.

This is no idle concern on Burks' part. As he himself points out,
within the CA framework, one can easily formulate a simple rule
whereby a cell in a distinct state (labelled, say, `1`) will
cause adjacent quiescent cells (labelled, say, `0`) to
transit to state `1` also. By essentially the same
definition or criterion as was applied to von Neumann's system,
such a single-cell, state-`1` configuration would qualify as
a self-reproducing machine--and would seem to render von
Neumann's fabulously baroque design completely redundant.
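The triviality of this rule is easy to see concretely. The following is a minimal sketch (my own illustration, not code from Burks or von Neumann) of such a two-state CA, in which any quiescent cell with a state-`1` neighbour itself transits to state `1`:

```python
# Trivial "self-reproducing" CA: a quiescent cell (state 0) with at least one
# neighbour in state 1 transits to state 1 on the next synchronous update.

def step(grid):
    """Apply one synchronous update to a 2-D grid of 0/1 cells."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                # von Neumann neighbourhood: the four orthogonally adjacent cells
                neighbours = [grid[nr][nc]
                              for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                              if 0 <= nr < rows and 0 <= nc < cols]
                if 1 in neighbours:
                    new[r][c] = 1
    return new

# A single state-1 cell "reproduces": after one step, each of its four
# neighbours is a copy of it.
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1
grid = step(grid)
print(sum(sum(row) for row in grid))  # 5 cells now in state 1
```

By the naive criterion, every one of those new state-`1` cells is an ``offspring'' of the original, which is precisely what makes the criterion too weak.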

Burks concluded that ``... what is needed is a requirement that
the self-reproducing automaton have some minimal complexity.''
And at first sight, this does seem eminently reasonable.
Presumably (?) it is relatively easy to construct a ``simple''
machine, by whatever means; therefore it need not surprise us
unduly if we manage to concoct a ``simple'' machine which can
construct other ``simple'' machines, including ones ``like''
itself, and which therefore qualifies as self-reproducing.
Whereas, it is relatively difficult to construct a ``complex''
machine, by any means; and therefore it may well be a challenging
problem to exhibit a ``complex'' machine that is capable of
self-reproduction. Von Neumann's machine certainly appears
``complex'', and certainly succeeds in constructing other
machines like itself (i.e., in reproducing itself). So if we
could just more formally express the precise sense in which von
Neumann's machine *is* ``complex'', then we might indeed be
able to clarify the ``real force'' of his achievement.

However, even at this point, we should be at least somewhat
wary--because, while von Neumann himself certainly did introduce
and discuss the notion of ``complexity'' in relation to this
work, he did *not* attempt any formalisation of it. Indeed,
he described his own concept of complexity as ``vague, unscientific
and imperfect'' [23, p. 78]. It
would therefore seem unlikely that the significance of his
eventual results should actually *rely* on such a
formalisation.

Nonetheless, Burks went on to propose just such a formal
criterion of complexity--namely the ability to carry out
universal computation. And by this criterion, von Neumann's
design (or, at least, a straightforward derivative of it) would
qualify as a ``complex'' self-reproducing machine, and thus be
clearly distinguished from the single-cell, state-`1`
self-reproducer, which would remain merely a ``simple'' (and
thus *trivial*) self-reproducing machine.

This seems a reasonable enough suggestion by Burks; though I would emphasise again that, as far as I have been able to establish, such a thing was never proposed by von Neumann himself--and, indeed, it jars seriously with von Neumann's calculated refusal to formalise complexity.

In any case, it turns out that Burks' suggestion is
unsatisfactory and unsustainable. While it is true that von
Neumann's machine (suitably formulated) can satisfy Burks'
criterion for ``complex'' self-reproduction, this still
represents an interesting result only if this criterion cannot be
satisfied by very much simpler means. But in fact--and with
hindsight, this now seems unsurprising--Burks' criterion
*can* be satisfied by much simpler means than those deployed
by von Neumann. This is because universal computation, *per
se,* does not actually require particularly complex machinery
[see, for example, 14].

This fact was formally demonstrated by Herman [8] in 1973,
when he essentially showed how the basic single-cell, state-`1`
self-reproducer described earlier could be combined with
a single cell capable of universal computation. This results in
a CA system in which the individual cells are ``simpler'' than in
von Neumann's CA (i.e., have fewer states), and yet there are
single cell configurations capable of both self-reproduction and
universal computation. Granted, the universal computation ability
relies on an adjacent, indefinitely long, ``tape'' configuration;
but that was equally true of the universal computation ability
of von Neumann's design, and is not a relevant
distinguishing factor.

Herman draws the following conclusion [8, p. 62]:

...the existence of a self-reproducing universal computer-constructor in itself is not relevant to the problem of biological and machine self-reproduction. Hence, there is a need for new mathematical conditions to insure non-trivial self-reproduction.

So we see that Herman rejects Burks' specific criterion, while
still continuing to accept Burks' formulation of the *issue*
at stake--namely the identification of a suitable criterion for
distinguishing ``non-trivial'' self-reproduction, albeit in
Herman's version this is no longer explicitly tied to the notion
of ``complexity''.

The discussion was taken up again by Langton [9] in 1984 (though apparently without reference to Herman's work). He presented a rather different analysis of Burks' criterion, but with ultimately complementary results. Langton pointed out that, as a general principle, there is little evidence to suggest that living organisms contain universal computation devices embedded within them. Since the self-reproduction of living organisms is presumably to be regarded as non-trivial (by definition--at least in this context), we should not, therefore, adopt universal computation as a criterion. So in this respect, albeit for different reasons, Langton concurs with Herman.

More importantly, Langton goes on to suggest a specific alternative criterion. He points out that self-reproduction in living organisms relies on a decomposition of the organism into two parts or components playing very distinct roles in the reproductive process:

- The *genotype*, being an informational pattern stored or recorded in some sort of quasi-static or stable carrier. This information is transcribed or *copied* into a corresponding carrier in the offspring.
- The *phenotype*, being the visible, active, dynamic, interacting part of the organism. The phenotype of the offspring is created by some sort of *decoding* or *interpretation* of the genotype (rather than by a copying of the parental phenotype).
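The essential asymmetry of the two roles can be sketched schematically. The following is my own toy illustration (the names and the code table are hypothetical, and this is in no way Langton's actual CA): the genotype is *copied* verbatim into the offspring, while the offspring's phenotype is produced by *decoding* the genotype, never by copying the parental phenotype.

```python
# Toy illustration of the genotype/phenotype reproduction architecture.
# DECODE is an assumed, made-up code table, standing in for whatever
# interpretation machinery a real organism or automaton employs.
DECODE = {"A": "arm", "B": "body", "C": "constructor"}

def reproduce(parent):
    genotype = parent["genotype"]
    return {
        "genotype": genotype[:],                     # transcription: copy the carrier
        "phenotype": [DECODE[g] for g in genotype],  # interpretation: decode, don't copy
    }

parent = {"genotype": ["C", "B", "A"],
          "phenotype": ["constructor", "body", "arm"]}
child = reproduce(parent)
print(child == parent)  # True: same structure, but built by copy + decode
```

Note that the parental phenotype plays no role at all in constructing the offspring; this is the distinctive feature of the architecture.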

Langton does not explain just *why* such a decomposition may
be important, but rather seems to accept its pervasiveness among
biological organisms as reason enough to adopt it as a criterion.
And, in some sense it ``works'', because, indeed, von Neumann's
self-reproducing design does have this architecture, whereas
(say) Herman's self-declared ``trivial'' design does not. So it
seems like this may be a satisfactory or even illuminating
demarcation.

However: Langton did not stop there. With this new criterion in
hand, he went on to consider whether it could be satisfied with a
design significantly simpler than von Neumann's--and it
transpires that it *can*. In fact, Langton was able to
present a design for a CA space which itself is rather simpler
than von Neumann's (i.e., having fewer states per cell), into
which he could embed a self-reproducing automaton which, like von
Neumann's, has an explicit decomposition into genotype and
phenotype, but which is vastly smaller and simpler, occupying a
region of just 150 cells--compared to the several hundred
thousand cells required for von Neumann's device!

Langton's automaton is certainly still quite intricate, and its
design involved a subtle interplay between designing the CA
itself and designing the automaton to be embedded within it. In
this sense it is a significant and substantive achievement. But
it remains very unsatisfactory from the point of view of
evaluating von Neumann's work. If Langton's criterion for
non-trivial self-reproduction is accepted as an appropriate
measure to judge von Neumann's work by, then we must still conclude
that the latter's design is vastly more complex than necessary. While
this *may* be true, I suggest that we should be reluctant to
accept it without some much more substantive rationale for
Langton's criterion. Or to put it another way, there may still
be more ``real force'' to von Neumann's achievement than is
captured or implied by Langton's criterion.
