4 Looking Backwards

John von Neumann and

2 Von Neumann's Problem:

As to the first claim, that this--rather than
``self-reproduction'' per se--was von Neumann's problem, it is
certainly relevant that he introduced his work in essentially
these terms in both his recorded public presentations of it--at
the Hixon Symposium in September 1948
(von Neumann, 1951, p. 312) and at the University
of Illinois in December 1949
(von Neumann, 1949, pp. 78-79). Granted, in
both cases he did *also* refer to the issue of
self-reproduction. Specifically, he pointed out that
self-reproduction is an example of a constructing machine where
the output or offspring is just precisely matched in complexity to
the parent--complexity neither increasing nor decreasing. But
the critical point here--the significance of
self-reproduction--is clearly as a special or ``watershed'' case
of the more general situation: it is interesting precisely, and
only, because it may mark a transition from strictly-degenerating
to potentially-growing complexity. Conversely, if we encounter a
form of ``self-reproduction'' that is *not* associated with
such a transition, then it will not be of relevance to von
Neumann's problem at all.

Indeed, von Neumann (1949, p. 86)
made this point even more explicitly, when he actually noted
the triviality of self-reproduction in ``growing crystals''
(essentially Burks' 1-state reproducer in a CA). But
*pace* Burks, *von Neumann's* resolution of this was
not at all to impose a criterion of complexity, but rather to
stipulate that the reproductive process should be such as to
support ``inheritable mutations''. Taken in context, I think
this must be interpreted as supporting at least some
``mutations'' where the offspring is of *increased*
complexity--which again returns us squarely to the problem of
the evolutionary growth of complexity, rather than
self-reproduction.
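Von Neumann's ``growing crystal'' point can be made concrete with a toy cellular automaton in the spirit of Burks' 1-state reproducer. Under a rule where any live cell simply switches on its neighbours, copies of the single-cell ``seed'' configuration proliferate trivially, yet nothing is inherited that could support mutation. The following sketch is my own illustrative assumption, not von Neumann's construction:

```python
# A toy "growing crystal" CA: a dead cell becomes live if it has a live
# neighbour, and live cells stay live. A single live cell trivially
# "self-reproduces" -- the one-cell configuration appears everywhere --
# but the process supports no inheritable variation at all.

def step(live):
    """One synchronous update of a 1-D CA on the integers.

    `live` is the set of live cell coordinates.
    """
    nxt = set(live)
    for x in live:
        nxt.add(x - 1)
        nxt.add(x + 1)
    return nxt

live = {0}                # the "seed": a single live cell
for _ in range(3):
    live = step(live)
print(sorted(live))       # [-3, -2, -1, 0, 1, 2, 3]: the seed recurs at every site
```

Exactly because any configuration whatever spreads in this way, such ``reproduction'' marks no transition in complexity, and so falls outside von Neumann's problem.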

My second claim--that von Neumann actually solved this problem--may seem more far-fetched; but let us take it in steps.

First note that while von Neumann exhibited the design and operation of only one machine in detail--his self-reproducing machine--he consistently pointed out how his CA space could support an indefinite variety of quite arbitrary machine configurations and behaviours. In this sense, he certainly exhibited not just a single particular machine, but a whole class of machines (which he identified via the infinite set of arbitrary, finite, ``initially quiescent'' configurations in his CA). This can correspond to the class $M$ in my problem formulation above (though I will refine this somewhat below).

The next question is whether this class of machines spans a
significant range of complexity. Given that we are using only
the most vague definition of ``complexity'' here, the answer can
be at best informal and qualitative. To my knowledge, von
Neumann did not quite comment on this explicitly. He did explain
at length that in setting up this sort of ``axiomatic'' theory of
automata, there is necessarily a degree of arbitrariness in
selecting the primitive or atomic parts from which the composite
automata will be constructed. And indeed, between 1948 and 1953
he considered a number of alternative approaches before settling on
a particular CA formulation (the latter concept having been
suggested by Ulam--see Burks, 1966, p. 94). But given the
highly complicated (co-ordinated, systematic, purposeful)
behaviour of the one machine he did design in detail, it
certainly seems to me that if we allow, as von Neumann did, for
the construction of machines with indefinitely more parts, in
arbitrary configurations, then this set may well span a
sufficient range of complexity to meet our requirements.

This then leaves what seems by far the most intractable aspect of
the problem: to show that there are constructional pathways
leading from the simplest to the most complex machines. At first
sight, it seems hopeless to demand a *demonstration* of
this; firstly because the measure of complexity is so vague; but
secondly because it would seem to demand separate analysis or
demonstration of the constructional potentialities for most if
not all of the elements of the infinite set $M$. But this, it
seems to me, was where von Neumann's crucial insight occurred.

Drawing on the capability for a single universal Turing machine
to carry out the computations of any Turing machine at all, given
a suitable description of that machine, von Neumann recognised
that a single ``general constructive automaton''
(von Neumann, 1949, p. 85) might be able to
construct ``any'' machine at all (i.e., any element of his
specified $M$), given a ``description tape'' of that target
machine. This immediately suggests the possibility that certain
such special, ``programmable'', constructor (sub-)systems may
enable enormous constructive potentialities--and also,
incidentally (?) open up a powerful and *generic* route
toward self-reproducing capability.

Von Neumann exploited this general idea by making a minor, but
subtle, modification of his general constructive automaton (in a
manner incidentally having no analog or significance in pure
computation theory): as well as requiring it to *decode* a
description to produce an offspring, he required that it must
also *copy* the description, attaching this as part of the
offspring. As Langton surmised, this combination of decoding and
copying potentially has quite dramatic consequences--but the
challenge is still to make those consequences explicit.

Let me denote a basic general constructive (decoding and copying)
machine by . We require that .^{2} Let
denote a ``description of '', (relative to
). That is, letting
denote the
composition of and a tape describing an arbitrary , this
will result in the construction of an instance of , via
*decoding* of the description, itself composed with a
*copy* of the original tape, i.e., . We write
this as:

Since this applies for all it applies to itself and we have:

This is the single self-reproducing automaton, which has been commonly identified as von Neumann's ``result'' or ``achievement''. Certainly at this point we have a case of construction where complexity has been fully preserved--but the question remains whether this gives us an avenue into the case where complexity can grow.
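The decode-and-copy schema can be sketched in a few lines of Python, with machines and tapes reduced to labelled values; the representation here is an illustrative assumption of mine, not von Neumann's CA encoding:

```python
# Minimal sketch of the decode-and-copy schema. A "machine" is reduced
# to a spec string; d(spec) is its description tape; construct() models
# the general constructive automaton u0: it *decodes* the tape to build
# the offspring, and *copies* the tape, attaching it to the offspring.
# (Representation is illustrative, not von Neumann's CA encoding.)

def d(spec):
    """Description tape for the machine with the given spec."""
    return ("tape", spec)

def construct(machine, tape):
    """(u0 + d(m)) ~> (m + d(m)): decode the tape, then copy it."""
    assert machine == "u0", "only the general constructor builds offspring"
    _, spec = tape
    offspring = spec             # decoding: build the described machine
    return (offspring, d(spec))  # copying: attach a copy of the tape

# Feeding u0 its own description yields exact self-reproduction:
parent = ("u0", d("u0"))
offspring = construct(*parent)
print(offspring == parent)       # True: (u0 + d(u0)) ~> (u0 + d(u0))
```

The fixed point arises for exactly the reason given in the text: $u_0$ treats its own description no differently from any other tape.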

Now note that $u_0$ can be combined or augmented with fairly arbitrary ancillary machinery, while still retaining its general constructive ability. That is, we can say that $(u_0 \oplus m)$ is still a general constructive automaton for ``almost all'' $m \in M$. The exception would be where $m$ in some sense interferes with or disrupts the normal operation of $u_0$. Discounting those cases, we can say that the existence of a single general constructive automaton, $u_0$, actually implies the existence of a whole infinite set of related general constructive automata, $U$, where each $u \in U$ has the form $(u_0 \oplus m)$ (i.e., all the members of this set share the same ``general constructive sub-system'' $u_0$). This, in turn, implies the existence of a whole infinite set of self-reproductive automata, $S$, where each $s \in S$ has the form $((u_0 \oplus m) \oplus d(u_0 \oplus m))$. Furthermore, given that $U$ is derived from $M$ just by excluding some exceptional or pathological cases (i.e., where the operation of $u_0$ would be disrupted), we can say that $U$, and thus $S$, will still span essentially the same significant range of complexity as $M$ itself.

Now this is clearly a much stronger result than merely demonstrating the existence of a single self-reproducing machine. In particular, it indicates the possibility of arbitrarily complex machines that are still capable of self-reproduction. This in itself certainly distinguishes von Neumann's result from that of Langton (1984). Although the operation of Langton's machine can be decomposed into copying and decoding activities, it does not incorporate anything like a ``general constructive automaton''. The decoding capability is extremely impoverished so that, in effect, there is only one description that can be effectively processed--that of the one self-reproducing configuration which Langton exhibited.

But in any case, there is still a further critical step in the
analysis of von Neumann's work. The goal is not just to show a
class of arbitrarily complex constructors capable of constructing
machines of equal complexity (which is what self-reproduction
illustrates), but to demonstrate constructional pathways whereby
complexity can *grow*. And it turns out that this further
result can also be demonstrated by the set $S$.

To see this, we imagine the possibility of
perturbations--``mutations''--to the description tape of a
machine $s \in S$, of the form
$((u_0 \oplus m) \oplus d(u_0 \oplus m))$. With reasonable
assumptions about the description language, it can be arranged
that all, or almost all, tape configurations are ``legal'' (i.e.,
can be decoded to *some* target machine), and that the tape
can be decomposed into a part,
$d(u_0)$, coding for $u_0$, and a part, $d(m)$, coding for the
ancillary machinery, $m$. That being the case, a more or less
arbitrary modification of the $d(m)$ portion of the tape will
result in a legal description of some other machine, $(u_0 \oplus m')$.
The reproductive cycle will then result, not in
self-reproduction, but in a ``mutant'' offspring
$((u_0 \oplus m') \oplus d(u_0 \oplus m'))$; but this is still an
element of $S$, and thus self-reproducing in its own right.^{3}
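Continuing the same toy representation (all names here are my own illustrative assumptions), a sketch of how a mutation confined to the part of the tape coding for the ancillary machinery yields a different, but still self-reproducing, composite:

```python
# Sketch of the mutation argument. The tape of a composite (u0+m)
# splits into a part coding for u0 and a part coding for the ancillary
# machinery m. Mutating only the m-part yields a *different* composite
# (u0+m'), which still carries u0 and so still self-reproduces.
# (All names and representations here are illustrative assumptions.)

def d(spec):
    """Description tape for the machine with the given spec."""
    return ("tape", spec)

def construct(tape):
    """The reproductive cycle for a composite: decode, then copy."""
    _, spec = tape
    return (spec, d(spec))          # offspring machine plus copied tape

def mutate(tape, new_ancillary):
    """Perturb only the ancillary portion, leaving the u0 part intact."""
    _, spec = tape
    u0_part, _ = spec.split("+")    # spec has the form "u0+m"
    return d(u0_part + "+" + new_ancillary)

parent_tape = d("u0+m")
mutant_tape = mutate(parent_tape, "m2")     # a "mutant" description
child, child_tape = construct(mutant_tape)
print(child)                                 # u0+m2

# The mutant offspring reproduces itself exactly in turn:
grandchild = construct(child_tape)
print(grandchild == (child, child_tape))     # True
```

Because the constructive sub-system survives the mutation untouched, the mutant lineage breeds true, which is just the ``inheritable mutation'' requirement of the text.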

Now these mutations essentially open up additional constructional
pathways *between* the elements of $S$. In particular it is
now clear that this can allow incremental, bootstrapping,
*increases* in complexity. In effect, the density of these
mutational pathways reflects the combinatorics of the description
code, so that we can be virtually guaranteed, *without any
detailed analysis,* that there will be constructional pathways
connecting the entire set $S$. These will include, of course,
degenerating pathways, where the offspring are less complex; but
will also include vast numbers of pathways of increasing
complexity.
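The density claim is, at root, combinatorics: a tape of length $L$ over an alphabet of $k$ symbols has $L(k-1)$ single-symbol variants, and under the ``almost all tapes are legal'' assumption each of these describes some machine. An illustrative count (representation assumed, as before):

```python
# Illustrative count of the single-symbol mutants of a description tape.
# With (almost) every tape legal, each mutant describes some machine, so
# mutational pathways fan out densely from every element of the set.

def single_mutants(tape, alphabet):
    """All tapes differing from `tape` in exactly one position."""
    out = []
    for i, sym in enumerate(tape):
        for s in alphabet:
            if s != sym:
                out.append(tape[:i] + s + tape[i + 1:])
    return out

tape = "0110"
mutants = single_mutants(tape, "01")
print(mutants)        # ['1110', '0010', '0100', '0111']
print(len(mutants))   # L * (k - 1) = 4 * 1 = 4
```

Even for this tiny binary tape the neighbourhood is non-trivial; for realistic description lengths the number of one-step variants, and hence of constructional pathways, is vast without any detailed analysis of individual machines.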

In this way, finally, we see how von Neumann's detailed design of a single machine implies at least a schematic solution of the generic problem of the evolutionary growth of machine complexity.
