As far as I am aware, following Zeleny's initial explorations, little further substantive development of computational autopoiesis was published between c. 1980 and the early 1990s.
As recounted in the preamble, my personal interest began around 1991. I was then completing a detailed re-evaluation of von Neumann's original work on ``self-reproducing'' automata. My conclusion from that study was that his work could best be understood as focussed not on self-reproduction per se, but on self-reproduction as a mechanism for the evolutionary growth of complexity. Indeed, von Neumann had demonstrated how the architecture of a relatively passive genetic ``description tape'', coupled with a general purpose, programmable, ``constructor'', would give rise to a dense network of mutational pathways connecting relatively simple automata to arbitrarily complex ones. This is a very significant result; but it does not, in itself, suffice to realise the open-ended evolutionary growth of complexity in artificial systems.
The critical obstacle (one recognised clearly by von Neumann himself) is that his automata are extremely fragile. While they are logically capable of self-reproduction, they can do so only in a completely empty environment, with no disturbance to their operation. In practice, of course, Darwinian evolution could only happen within ecosystems where automata interact both with each other and with a more or less complex environment. Thus, to have a serious prospect of open-ended evolutionary growth of complexity, one requires automata which both have a von Neumann style genetic architecture and are also capable of surviving--maintaining and repairing themselves--in the face of environmental perturbation. Indeed, they should do this even while being open to turnover of their components (the latter being necessary to harvest materials for self-reproduction, if for no other reason).
Now von Neumann had exhibited--in detail--how to realize automata with his genetic architecture within a comparatively simple, two-dimensional, discrete space. Similarly, Varela et al. had exhibited how to realize automata capable of basic self-repair and turnover of components in a similar space (or, at least, one explicitly inspired by von Neumann's). The question therefore arises whether one can engineer a single artificial universe, hopefully of not much greater inherent complexity, in which these two kinds of automaton can be unified.
This then was the problem which I started to discuss with Francisco Varela in the early 1990s. My starting point was the original computational model of autopoiesis. The research programme was, initially at least, rather simple: re-implement that model, demonstrating the original, minimal, autopoietic phenomenology; and then explore elaborations (more particle types, more reactions, etc.) in the general direction of trying to unite the autopoietic functionality with something like a von Neumann genetic system.
However, this programme failed at the very first hurdle: i.e., re-creating the original, minimal, autopoietic phenomenology. It took some time, and several intermittent attempts spread over a number of years, but it gradually became clear that the difficulties were more than superficial.
Eventually, in the Autumn of 1996, I had the opportunity (with support from Chris Langton at the Santa Fe Institute) to make a concentrated study of the problem. This resulted in a detailed analysis of a number of anomalies in the original BioSystems paper. These encompassed both difficulties in interpreting the algorithm, and apparent inconsistencies between the algorithm and the presented experimental results.
While a significant number of detailed issues with the early computational model were uncovered in this process, the great majority of these were relatively minor, and could have been ignored or easily corrected. However, there was one particular phenomenon which was more serious, and which was responsible for the fundamental difficulty in re-creating the original autopoietic phenomena. This is the problem of premature bonding of the L particles. According to the original algorithm, once L particles form they can spontaneously bond together; and, further, once bonded, they become immobile. But this is a fatal flaw for the supposed autopoietic closure of the reaction network: the free L particles are effectively sequestered by the side-reaction of spontaneous bonding. This both makes them essentially unavailable for membrane repair, and also clogs up the interior of the cell, eventually blocking further production.
Figures 1 and 2 show an experimental example of this. Figure 1 shows the initial configuration of an externally created cell. Figure 2 shows the state of the cell after 110 time steps. The membrane has not yet suffered any decay. However, the interior of the membrane is now completely clogged with bonded--and thus immobile--L particles. Only two open positions remain inside the membrane, one occupied by the K particle. Since the production reaction requires two S particles adjacent to each other and to the K particle, there is no longer any available site for further production within the membrane, and further production of L particles is impossible. It follows that, whenever the membrane does eventually rupture, there will be no mobile L particles available to effect a repair.
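The failure mode just described can be sketched as a minimal toy rule. This is purely an illustrative reconstruction, not the original algorithm or its FORTRAN code: the grid representation, function names, and particle encoding are all assumptions introduced here, capturing only the two properties that matter for the argument (free L particles that touch bond spontaneously, and bonded L particles are immobile).

```python
# Toy sketch of the premature-bonding side-reaction (illustrative only;
# the grid representation and names are assumptions, not the original model).

def neighbours(pos):
    """Four-connected lattice neighbourhood of a grid position."""
    x, y = pos
    return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

def bonding_step(l_particles):
    """One pass of spontaneous bonding.

    l_particles maps grid position -> bonded flag (True = bonded, and
    hence immobile).  Any free L adjacent to another L bonds and
    freezes, so free Ls are progressively sequestered away from the
    pool available for membrane repair.
    """
    updated = dict(l_particles)
    for pos, bonded in l_particles.items():
        if not bonded and any(n in l_particles for n in neighbours(pos)):
            updated[pos] = True
    return updated

# Three free Ls produced near each other, plus one isolated free L:
cell = {(0, 0): False, (1, 0): False, (2, 0): False, (9, 9): False}
after = bonding_step(cell)
mobile = [p for p, bonded in after.items() if not bonded]
# Only the isolated L remains mobile; the adjacent ones are frozen.
```

Iterating such a step inside a closed membrane reproduces the clogging behaviour described above: production keeps adding L particles, adjacency becomes unavoidable, and the mobile repair pool goes to zero.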
This is a very robust failure. It is not sensitive to any precise details of the implementation. Indeed, the same problem was retrospectively identified in two other, independent, attempts at re-implementation [15,34]; and may well have been present in the Zeleny work also (but superficially masked there through the introduction of non-local interactions etc.).
This then provided the basis for a more extended and detailed discussion with Francisco Varela. But, of course, there had been a long lapse of time since the original experimental work had been carried out, c. 1971. Further, it should be remembered that the work had taken place against a backdrop of great turbulence in Chile. Varela had been a prominent supporter of the Allende government. Following the coup d'état and the coming to power of Pinochet in September 1973, Varela, like many others, was effectively forced into exile. These events coincided directly with the development of the BioSystems paper. Circumstances had therefore combined to make it very difficult to recall or reconstruct technical details of that work, some 25 years later.
This was the situation of the discussion in late 1996, when chance intervened. Varela was by then based in Paris, and while there he happened to receive a shipment of papers and files, apparently belonging to him, which had just been located back at the University of Chile. Among these papers, he identified several contemporaneous, albeit fragmentary, records of the early computational exploration of autopoiesis. Critically, these included an early discursive description of the model and a listing of a version of the actual computer code (in FORTRAN IV).
With the help of these documents it was finally possible to identify, with reasonable confidence, exactly the causes of the discrepancies which had been identified in the BioSystems paper. In particular, an explanation was found for the control of premature bonding in the original model. It transpired that the model had actually included an additional, undocumented, interaction: chain-based bond inhibition. The effect of this was that free L particles in the immediate vicinity of the membrane (or, indeed, of any chain of L particles) were inhibited from spontaneously forming bonds with each other. Provided the cell is not very big, this is sufficient to prevent premature bonding and to ensure a continuing supply of free, mobile, L particles to repair membrane ruptures as and when they arise. Once this interaction was re-introduced, the original autopoietic phenomenology of ongoing self-repair, as originally reported, was easily realised again. Detailed experiments documenting and demonstrating this were presented at ECAL 1997.
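The effect of the re-introduced interaction can be sketched as a simple local predicate. Again, this is an illustrative reconstruction under an assumed grid representation, not the recovered FORTRAN IV code: the function names and particle encoding are introduced here purely for exposition.

```python
# Sketch of chain-based bond inhibition (illustrative reconstruction;
# names and representation are assumptions, not the recovered code).

def neighbours(pos):
    """Four-connected lattice neighbourhood of a grid position."""
    x, y = pos
    return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

def may_bond(pos, l_particles):
    """Chain-based bond inhibition, applied to a free L at pos.

    l_particles maps position -> bonded flag.  A free L adjacent to any
    already-bonded L (i.e. to a chain segment, such as the membrane
    itself) is inhibited from spontaneously bonding; only free Ls with
    no chained neighbours may bond with each other.
    """
    return not any(l_particles.get(n) for n in neighbours(pos))

# A membrane chain of bonded Ls along y = 0, plus two free Ls:
particles = {(0, 0): True, (1, 0): True, (2, 0): True,   # chain segment
             (1, 1): False,                              # free L beside chain
             (5, 5): False}                              # free L in open space
# The free L beside the chain is inhibited; the distant one is not.
```

Note that the predicate is strictly local and time-independent: it inspects only the immediate neighbourhood of one particle at one instant, which is the property relied upon in the discussion of "acceptable" interactions below.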
The implicit limitation of the chain-based bond inhibition mechanism to comparatively small cells should be carefully noted: it reveals, in particular, that this original form of autopoietic cell will not easily support growth, or, more particularly, cell reproduction by fission. This is obviously rather important for any projected extension to realising Darwinian evolution of such cells. We will return to this limitation below.
While this eventual re-creation of the original autopoietic phenomenology ultimately hinged on a rather mundane technical refinement, I would argue that it was still a useful, if modest, contribution, for two distinct reasons:
It should be admitted that, in contrast to the other interactions in this model, chain-based bond inhibition has a distinctly ad hoc flavour. That being the case, it can reasonably be asked why this interaction should be considered as being ``acceptable''--that is, as not detracting from the significance of the reported phenomena. This is particularly moot, because certain other (conjectured) interactions in the later models of Zeleny were treated more critically above (section 4). However, it seems to me that a distinction should be drawn here between interactions that are ad hoc simply in the sense of ``unfamiliar'' or ``unphysical'', and interactions that are non-local (in some relevant ``space'') or that have pre-programmed time-variation. As already noted, the latter have the clear potential to allow both aggregate and time-structured phenomena to be effectively ``programmed in'' (either in single agents, or in the environment as a whole). For example, apparently macroscopic ``cellular'' organisation might, in effect, be an artefact constructed and co-ordinated by a single ``central control'' agent; in which case, its ``emergence'' would presumably be of little interest.
By contrast, chain-based bond inhibition is still a perfectly local, and time-independent, interaction, occurring on a smaller scale than, and un-influenced by, macroscopic cellular organisation. It is for this reason that, in my view, the identification of this additional interaction does not undermine the significance of the dynamic, macroscopic, cellular organisation originally reported for this model.
Of course, the question at stake here--of choosing appropriate abstractions or ``axiomatizations'' of primitive agents--is a difficult one, which has been at the heart of the field of Artificial Life since its very inception. Indeed, in a lecture originally dating from 1949, von Neumann captured this tension explicitly and succinctly:
... one may define parts in such numbers, and each of them so large and involved, that one has defined the whole problem away. If you choose to define as elementary objects things which are analogous to whole living organisms, then you obviously have killed the problem, because you would have to attribute to these parts just those functions of the living organism which you would like to describe or to understand. So, by choosing the parts too large, by attributing too many and too complex functions to them, you lose the problem at the moment of defining it. [47, p. 76]
This continues to be a matter of debate in the field, and I do not suggest that the particular demarcation I have suggested above--based specifically on ``locality'' and ``time-independence''--offers anything more than a very rough, heuristic, guide. Particular models, and their associated phenomenology, must still be critically evaluated on their individual merits.
In closing this discussion of the clarification of the original model of computational autopoiesis it is, perhaps, equally important to comment on something that this work did not demonstrate: namely, it certainly did not show that there was, or is, any obstacle of principle in realising autopoietic organisation in an artificial, computational, medium. This must be stated clearly, as precisely an opposite interpretation has recently been attributed to this work:
... The non-computability of Autopoietic systems, as advanced here, apparently collides with the simulation results involving tessellation automatas (sic) . But new versions of this simulation show that the original report of computational autopoiesis was flawed, as it used a non-documented feature involving chain-based bond inhibition . Thus the closure exhibited by tessellation automatas is not a consequence of the ``network'' of simulated processes, but rather an artifact of coding procedures .
--Letelier et al. [18, p. 270]
For context, the over-arching conjecture being advanced by Letelier et al. is that autopoiesis is, in some fundamental sense, a ``non-computable'' process. By this they mean that while it can (obviously?) be realised in real physico-chemical form, there is some essential aspect of this process that cannot, even in principle, be realised in a strictly computational system. It amounts to a specific denial of the Church-Turing thesis (regarding the ultimate computability of the entire physical universe). But clearly there is a logical conflict between such a conjecture and the original illustration of the very concept of autopoiesis via a perfectly computational model. The quotation above is an attempt to resolve this conflict.
However, this is a most peculiar analysis.
It is true, of course, that the original report of computational autopoiesis was ``flawed'', and, indeed, that this was primarily because ``it used a non-documented feature involving chain-based bond inhibition''. However, the citation for the last sentence in this quotation is simply wrong: there is no reference whatsoever in  to any distinction between ``simulated processes'' and ``coding procedures''. Nor can I conceive, even now, in retrospect, of any basis for assigning a different status to ``chain-based bond inhibition'' compared to any other interaction in that particular artificial chemistry (production, bonding, absorption, etc.).
Thus, there is no basis whatever for supposing that this clarification of the ``original report'' undermines the essential result. The only flaw lay in the enumeration of the processes or interactions underlying the reported phenomena: one necessary interaction was omitted from that description. The phenomena themselves were accurately reported, and are ultimately reproducible in perfectly ``computable'' form. If it is accepted that the phenomena reported in the original paper constituted ``autopoiesis''--as it apparently is by Letelier et al.--then they still constitute autopoiesis. This was stated most explicitly in the conclusion of :
It should be emphasised that the substantive point of this paper is to correct the historical record. ... However, this correction does not add to, or modify, the original conceptual foundation of autopoiesis in any significant way.
I conclude that the overall thesis of Letelier et al.--of the ``non-computability'' of autopoietic systems--should be taken as refuted, rather than corroborated, by the results of .