Paradigms and Perspectives

The question of how to approach theoretically the empirical field implied in the term cybertext is a hard one. I have suggested that cybertext is more of a perspective on textuality than a category of it; but like all perspectives, it will necessarily emphasize certain types of text and marginalize others. Fundamentally, the answer becomes a definition of textuality in addition, rather than in opposition, to previous definitions such as the philological, phenomenological, structural, semiotic, and poststructural concepts of text, to mention a few. So why not use one of these approaches, instead of concocting a new (and most likely idiosyncratic) one? Simply because none of these have expressed the perspective of the text as a material machine, a device capable of manipulating itself as well as the reader. The various effects produced by cybertextual machines are not easily described by these textological epistemes, if they can be described at all. I might achieve something by trying each one, but since all of them so obviously conceive the material, historical, and textual artifact as a syntagmatic chain of signifiers and little else, that approach would most likely prove fruitless and desultory, and it would almost certainly not illuminate the idiomatic aspects of ergodic texts.

Problems in Computer Semiotics

Even semiotics, the most materially oriented of these epistemes, does not seem to offer any readily useful perspectives in this context.
Per Aage Brandt notes that "neither the interpretative semiotics based on the Peircean tradition (such as Eco 1976), nor the structural semiotics of the Saussurean tradition (such as Greimas 1976)—though both are necessary—seem sufficient to follow up the substantial change induced by the on-going implementation of these machines in our 'life world,' probably for the very simple reason that even these often rather sophisticated semiotic elaborations fail to see what a 'symbolic machine' actually is and what it can do" (1993, 128).

Brandt's sensitive and candid critique (coming as it does from within the semiotic field) nevertheless trivializes the reason for recent semiotic theory's inability to account for cybernetic sign production, since these phenomena could not have been invisible to theoreticians such as Umberto Eco and A. J. Greimas, who surely (in Eco's case, evidently; see Eco 1994, 1-2) must have had some contact with the cybernetic ideas and experiments of contemporary individuals and groups such as Raymond Queneau (1961), Italo Calvino (1993), and Ouvroir de Littérature Potentielle (OuLiPo 1981). If these phenomena, together with computer machinery and principles in general, were indeed invisible to the semioticians of that time, then I suggest that the reason for this blind spot is to be found in the semiological paradigm (which seems inherently unable to accommodate the challenge from cybernetic sign systems) and not in the lack of historical opportunity.

Not all proponents of semiotics share Brandt's restraint. J. David Bolter (1991) claims that "the theory of semiotics becomes obvious, almost trivially true, in the computer medium" (196), but this seems to be based on a misreading of the semiotic (specifically, C. S. Peirce's) notion of sign.1 As Allen Renear (1995, 308) points out, Bolter does not support his claim with substantial analysis and argument. As we shall see, however, even much more modest claims about the relationship between computer technology and semiotics become problematic when put under closer scrutiny. Bolter's assertion must be read in light of the larger project within the hypertext community of trying to connect their technology-ideology of hypertext to various paradigms of textual theory, as "embodiments" or material incarnations—in this case, for Bolter, "the embodiment of semiotic views of language and communication" (1991, 195).

1. Compare J. David Bolter: "In a printed dictionary, we must move from page to page, looking up definitions, if we are to set in motion the play of signs" (1991a). Bolter equates the mechanical processing of a hypertext link with what "takes place in our heads" and sees both phenomena as "acts of interpretation." He also claims that "in Peirce's terms, the computer system itself becomes the interpretant of the sign" (199). In Peirce's terms, perhaps, but not in any legitimate interpretation of his concepts.
Behind all this, of course, lies the age-old dream of a technology that maps onto the workings of the mind, and here, at least, hypertext ideology and semiotics may have some common ground. These problems and issues cannot be fully addressed here, however, as our concern with semiotics must be limited to an investigation of whether it can provide a viable theoretical foundation for the study of cybernetic textuality.

For semiotics, as for linguistics, texts are chains of signs and, therefore, linear by definition (Hjelmslev 1961, 30). As Tomas Maldonado (1993, 58-66) argues in his excellent essay on virtual reality, semiotics (with particular reference to the work of A. J. Greimas) has not managed to meet the challenge from "a whole typology of iconic constructions, very different from those studied by semiotics until now."2 The new constructions consist of "interactive dynamic" elements, a fact that renders traditional semiotic models and terminology, which were developed for objects that are mostly static, useless in their present, unmodified form. Maldonado's critique concerns the analysis of visual images, but it is equally relevant in the case of ergodic textuality, where the same difference applies.

To be sure, efforts to describe cybernetic systems in terms of semiotics have been made. Jens F. Jensen (1990) calls for a "computer semiotics" as the potentially most effective paradigm for "formatting" the field of "computer culture" studies (12). It is easy to agree with Jensen that the humanistic study of information technological artifacts is characterized by a "theoretical, methodological and conceptual heterogeneity and inconsistency" (47) at the moment (although this is not necessarily a weakness at this still early stage of research), but his statement that this area of study is "basically and primarily a semiotic domain" (47) is much less self-evident.
In his effort to claim the field for semiotics, he makes a number of assertions like "the computer is a semiotic machine" (47), "programs and data are representations, signs, symbols" (46), and "the computer is a medium that is based on signs as communication" (48). We should then reasonably expect a definition of sign that will support his claim (and answer Maldonado's challenge), but this is not offered by Jensen. Instead, he offers an elaboration of Eco's discussion (1976) of the "lower threshold" between semiotics and the signals of information theory, which is interesting but ultimately disappointing, since it presupposes a dichotomy between semiosis (the process whereby signs are interpreted and translated into other signs) and information processing in which the latter must be considered as falling outside the territory of semiotics. Jensen sees computer programs as representations and models of some aspect of the real world (1990, 44) and, later, argues that "the symbols as strings of binary digits" (46) can only mean what the programmers and system designers by convention have defined them to mean. As the incarnation of the signal-semiotic threshold, Jensen posits the "interface" (47), the visible front layer of the computer, since it functions both as borderline and membrane between the two systems. As Eco acknowledges (1976, 21), the idea of this threshold is problematic, and it seems to me that it also excludes the possibility that human mental processes could ever be explained in terms of information processing, a strong hypothesis that still remains to be proven. Notwithstanding the problems of artificial intelligence and cognitive science, there are several relevant cybernetic phenomena that question the validity of Jensen's dichotomous model of information processing and semiosis.

2. The English translations of Maldonado and Jensen that follow are my own.
Fundamentally, the threshold is invalidated along two interrelated dimensions: complexity and autonomy. When a system is sufficiently complex, it will, by intention, fault, or coincidence, inevitably produce results that could not be predicted even by the system designer. A typical example is a chess program that plays better than its programer. Even if there is no reason to suspect that anything but meaningless operations of shifting zeroes and ones go on inside the programed machine, it nevertheless displays a significant behavior that is not—and in fact could not—be anticipated by its programer, even if it could be claimed that it was "intended." Furthermore, the ability to predict and counter its opponent's strategy is a form of interpretation (we could call it machine interpretation) that involves something (the signal) that stands for something else (the move), giving rise to a third something (an estimation of the opponent's strategy), to put it in Peirce's terms. A semiotician might dismiss the example on the grounds that it could be better classified as a dyadic relationship, in terms of stimulus (signal) and response (countersignal)—and so it could be!—but then the semiotician would have nothing further to say, since the phenomenon has been relegated to below the threshold. On the other hand, a theory of chess programing could then obviously not afford to be semiotic.

Another type of threshold transgression occurs whenever there is a complexity that cannot be reduced to the finite structure of a specific program or machine; in other words, where the whole is greater than the sum of the perceived parts. A typical example here is the notoriously unstable state of global trade networks, in which the buying and selling of shares and currencies are automated to such an extent that prediction and explanation of events are best left to chaos theory.
Such a transglobal system is clearly autonomous, since it cannot be controlled, shut down, or restructured by a single organization or even a country. Its machine-human borders are also unclear, since the interface could hide a human trader, a machine, or a cyborg, a combination of both. Such a system, even if it consisted purely of automatic agents, is not a model or a representation of something else; it is itself, a cybernetic entity that communicates with all and answers to no one. Again, the ongoing process might be described as semiosis, an endless reinterpretation of triadic signs (such as a share, its value, and the implied status of the corresponding company). Perhaps a semiotician watching two unknown trading entities through a stock exchange terminal would still insist that "while people participate in semiosis, machines participate in information processing" (Jensen 1990, 36), but this perspective would not make any difference to the reality of the symbolic exchange, nor would it be sufficient to specify the cybernetic nature of the participants.

Yet another example would be self-replicating computer "viruses" that spread autonomously from machine to machine and that, in some cases, are programed to mutate their own "anatomy" to avoid detection by antivirus programs. Since their chances for survival depend on their success in transforming themselves to unrecognizability, their resulting semiotic shape is not the direct result of human sign construction but a product refined by an uncontrolled process of "natural selection." However, the question of whether or not the above examples can be said to imply semiosis seems to me ultimately inconsequential, since their deep structures, accessible to us in a way mental processes (at present) are not, must be studied and catalogued if we are to make any sense of the surface signs to which they give rise. To find a name for these sign mechanisms should not be an essential issue.
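The triadic relation invoked in these examples (a signal that stands for something else, giving rise to a third something) can be made concrete with a small sketch. The following Python fragment is purely illustrative and mine, not code from any system discussed here; the class and function names, the price threshold, and the "trading" logic are all assumptions chosen to mirror the share example above.

```python
# A toy rendering of Peirce's triadic sign, using the stock exchange
# example from the text. All names and the threshold are illustrative
# assumptions, not part of any theory or system discussed above.

from dataclasses import dataclass

@dataclass
class Sign:
    representamen: str  # the signal: e.g., a quoted share price
    object: str         # what it stands for: the share itself
    interpretant: str   # the "third something" the signal gives rise to

def interpret(ticker: str, price: float) -> Sign:
    """A trading agent's 'machine interpretation' of an incoming quote."""
    # Hypothetical rule: read the price as an estimate of company status.
    status = "rising confidence" if price > 100 else "falling confidence"
    return Sign(representamen=f"{ticker}@{price}",
                object=f"share of {ticker}",
                interpretant=status)

sign = interpret("ACME", 120.0)
# Endless semiosis: the interpretant can itself become a new representamen.
next_sign = Sign(sign.interpretant, "market sentiment", "buy signal")
```

The point of the sketch is structural, not predictive: each interpretant can serve as the representamen of a further sign, which is the "endless reinterpretation" the passage describes.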
Perhaps we can follow Thomas A. Sebeok's suggestion and develop a notion of "cybersemiosis," agreeing, as he does, with C. S. Peirce that "the essential nature and fundamental varieties of possible semiosis . . . need not be a mental mode of being" (quoted in Sebeok 1991, 99).3 Jensen's decision to posit the interface as a border between human semiosis and machine processing, on the other hand, makes it hard to see what relevance a semiotic approach and the idea of semiosis can have in the study of sign-producing machines. As the examples above and the story generators discussed in chapter 6 should indicate, the quasi-autonomous nature of complex sign machines makes a behavioral study of surface sign phenomena rather inadequate and unsatisfactory. These constructs are not simply media by which a human programer communicates with human receivers; they are also comments on such communication: aesthetic or pragmatic modes for the exploration of sign production. (Of course, one could counterclaim that the programers are the media through which these structures reproduce themselves. Both claims are equally uninteresting, as they tell us nothing about the principles of the cybernetic production of signs.)

The crucial issue here is how to view systems that feature what is known as emergent behavior, systems that are complex structures evolving unpredictably from an initial set of simple elements. The science that studies such phenomena is sometimes called "artificial

3. As it happened, Peirce formulated the idea of using electrical circuits instead of mechanical ones to form the "logic gates" (AND, OR, NOT, etc.) of modern computers in 1886, almost sixty years before computers using this technology were constructed (see Burks 1986, 10-15, 42-45).

Figure 2.1.
Two Stages of a Glider Gun in John Conway's Game of Life

life" and uses computers to build artificial, autonomous "worlds" based on biological principles. The objects they focus on are mathematical constructs known as cellular automata, originally described by computer pioneer John von Neumann (see Levy 1992). The best-known example is probably John Conway's Game of Life, which is a simple two-dimensional grid of cellular automata in which each position, or cell, can be in one of two states: on (alive) or off (dead). Over time, a cell will survive if it is surrounded by two or three others, it will be born if it is an empty cell surrounded by exactly three others, or it will die if it is either overcrowded (surrounded by more than three others) or isolated (surrounded by less than two others). From a random and chaotic initial state, after a few generations the life grid will display orderly patterns and is able to produce complex, multicelled structures with interesting, dynamic behavior. In figure 2.1 we see the famous glider gun, a self-organized machine that periodically produces offspring (the "gliders" escaping upward to the left). These systems are not models or representations of something else but evolving, self-organizing entities whose behavior cannot be described as the sign production of a human programer. It would be wrong to classify them as simulations (dynamic models that mimic some aspects of a complex process), since there does not have to be any external phenomenon they can be said to simulate. The fundamental question, however, is whether a system capable of producing emergent behavior based on an initial state and a set of generative rules should be considered a semiotic system at all. Since it can exist without any semiotic output, as a closed process running inside a computer, the semiotic aspect is clearly arbitrary and secondary to the process itself. To the researcher, the semiotic aspect is indispensable as a front end, a practical means to observe and gain knowledge of the evolutionary process going on inside, but this does not imply that the process is basically a semiotic one or that the studied object should be classified as a sign, only that the activity of observation by necessity has to involve a semiotic system of some sort.

If we turn to systems designed primarily to construct a readable sign or message, such as a story generator, the problem is less easily resolved. The behavior of such a system could still be emergent, for instance if the generated story contained a totally unexpected narrative figure, but the teleology of this behavior is undoubtedly semiotic, even if its intrinsic principles are identical to those of other cellular automata.

The idea that cybernetic sign systems are basically mouthpieces for their human designers and programers can also be found in Peter Bøgh Andersen (1990, 137). Andersen's effort to examine computer communication from within a semiotic episteme is a comprehensive study of computer systems from the perspective of Hjelmslevian semiotics; only a small part of it is addressed here. Like Jensen, Andersen uses the interface as the empirical domain for his semiology. In part 2 of his book, he presents a typology of "computer-based signs" derived from his studies of various computer programs, mostly for the Macintosh computer. Chief among his examples are two graphic action games, the classic video arcade game Breakout, where the user tries to demolish a "brick wall" by hitting it with a ball steered by a paddle (see ibid., fig. 2.3), and the more advanced and impressive Dark Castle (DC), created by Jonathan Gay and Mark Stephen Pierce (1986). In DC, the player must move a "hero," or user-controlled character (