

Writing Technologies: Vol 1

Unit(s) of assessment: English Language and Literature

School: School of Arts and Humanities


Introduction: Writing

Technology has frequently seemed to be antithetical to writing. When Jack Kerouac was accused of ‘not writing, but typing’, the insult implied an inhuman quality to his prose, as though the machine on which On the Road was produced had replaced the more transcendent humanity required of the writer.1 Kerouac, it implied, had become a typewriter, and could therefore not really be considered a writer in the true sense at all. Something as quotidian, as material, as technology might feature in the world depicted by the writer but, this criticism implied, it had no place in the ethereal process of writing. Yet, the very term ‘writing’, though thoroughly naturalised as a metaphor for a particular sort of communicative mental activity, implies a relationship with technology, the pen, which is a medium that translates and directs thought as specifically as the typewriters on which Kerouac, or later William Gibson, famously tapped out their works. Before broaching these complex questions of technology, production and subjectivity, it is perhaps first worth considering the more prosaic ways in which technology is at issue in writing.

Technology does, of course, feature as a set of objects ‘in’ writing, in the sense of being invoked as part of the fabric of the world described by writers. While this may seem most obviously to be an issue in genres like science-fiction, which frequently takes technology as its subject, or procedural detective fiction, in which technologies of forensic investigation are central, it would be a mistake to assume that the most fertile ground for investigation necessarily lies in these areas. If technology is culturally significant, it is significant not only when its novelty directly impinges on our consciousness but also for the ways it is naturalised as an assumed fact of everyday life (indeed, Gibson’s search for a ‘superspecificity’ of reference in his science-fiction is in part an attempt to invest the novel technologies of the future with the mundane qualities of the everyday).

Without the technologies of shipbuilding, timekeeping, cartography, navigation, industrialisation, and civil and military administration and suppression, there could have been no European expansion into the wider world and no broader world of Empire into which to flee for all those characters of nineteenth-century realist fiction, like St. John Rivers in Jane Eyre, who leaves Britain to carry out missionary work, and Monks, in Oliver Twist, who gets his comeuppance far from home. Unassimilable at home, many of these characters can be tidily got rid of abroad, their disruptive influences lost in the margins of Empire.

Clearly, simply pointing out that technology is a necessary component in these fictional lives is a limited critical endeavour, in danger of reifying technology as something, as so much ‘stuff’, that exists above and beyond culture. True, most literature is full of technological stuff, even if we pass over most of it as so commonplace that we fail to note its existence and, true, one role for a technological criticism is to denaturalise our relationship with this stuff, to make us aware of it. However, a criticism that is serious about the role of technology must look not only for the way in which technology appears in or influences culture, nor even for how culture shapes technology, as if the two are separate territories, but must understand that they suffuse each other, with numerous, complex feedback mechanisms contributing to the ongoing development of a dynamic culture-technology.

This is, in part, a way of making the obvious point that technology is always ideologically inflected. This might mean acknowledging that technological innovation is the product of specific social and historical circumstances, and is not simply produced by individual inspiration, or communities of engineers, working within the prevailing conditions in available materials, scientific knowledge and so forth. It also means that technologies become, to use a technological metaphor, the lenses through which we see the world. The railway trains that kill the heroine in Anna Karenina and the flock of sheep at the beginning of The Octopus; the trains that take Hurstwood and Carrie out of Chicago in Sister Carrie; and the railway tracks that are a recurring image in Gravity’s Rainbow, are all riffs upon the theme of determinism, shaping its articulation in particular ways. They suggest, perhaps, a universe in which fate is no longer a matter for the Gods but is instead a meaningless product of contingent circumstances. In The Octopus the train is also what Leo Marx has called the ‘machine in the garden’, an intrusion of technology into the pastoral idyll, characteristic of United States literature. In Sister Carrie the railway tracks become the embodiment of the overwhelming forces, characteristic of the naturalist aesthetic, that inevitably sweep Carrie, a ‘fair example of the middle American class’, to her peculiarly unsatisfying success. In Gravity’s Rainbow they speak to Slothrop’s paranoia about the sinister forces shaping his life, they point to the sensitivity of post-war culture to the tracks onto which it is thrown by conditions established during World War Two (that one character is called Pointsman is not coincidental), and they conjure up associations with the death camp trains. It is the image of the train that facilitates all these meanings, although it does not, of course, mean that they could not exist without it.

As well as a world view, though, what technology gives us is a sense of self, whether through the hydraulic metaphors of the steam age that inform Freudian conceptions of repression, pressure and release, or through the information storage and processing technologies of the second half of the twentieth century that have revitalised the ‘mind as machine’ paradigm. It is here that the issue of ‘writing technologies’ becomes particularly pressing. If the specific ways in which we process information are what define us as human, then how does writing, itself wrapped up in the processes of coding, transmission and decoding, relate to this conception of self? While this question most obviously gives us a way into contemporary texts like Coupland’s Microserfs and JPod, where writing of a self into being is defined by the narrators’ relation to technology, particularly the word-processing technology that provides characteristic formal opportunities not available to someone using a typewriter or a pen (cut-and-paste; shifting font sizes; transformation of text through the application of algorithms; coding), it would be a mistake to suggest that it is only literature of the information age that is made available to us by the contemporary mind-as-machine paradigm. As well as obvious antecedents – concrete poetry and Burroughs’ ‘cut-up’ experiments spring most obviously to mind – in a sense all literature is a product of the collision between the chaos of reality and formal systems (the sonnet; genre; language itself) for making sense of, coding and transmitting that reality. Literature is itself an information-processing machine, albeit one that thrives on ambiguous communication and mistranslation.

The concepts of ‘writing’ and ‘technologies’, as well as the more singular idea of ‘writing technologies’ do, of course, raise questions that are not addressed by the above examples. What remains urgent and compelling is the need to interrogate technology’s centrality to emerging modes of representation, as well as its decisive role throughout literary and cultural history. In its efforts to question this complex relationship, Writing Technologies will ask:

  • What might a ‘technological criticism’ look like and how might it be related to other, more established, critical systems?
  • How might the ‘background’ (i.e. assumed) technologies in a text provide a way into both it and the culture from which it comes?
  • What is the relationship between technology and scientific worldviews?
  • Does technology imply the further expansion and transformation of the literary canon, and to what extent does this process blur the boundaries between literary studies and cultural & media studies?
  • What are the formal technologies and technological forms of literature?
  • How are subjectivities shaped by medical, cybernetic and other technologies? How do literary texts engage with the encounter between self and machine?
  • What impact are epublishing and other online modes of production and distribution having on patterns of reading? Is the ebook revolution, much hyped at the beginning of the century, failing to threaten printed textuality in the way that many feared?
  • Is there a literature of the new informational economy? If new networks of sociability are emerging – if new modes for the production of social life and social meaning now exist – are these reshaping the production of textual meaning?
  • What is the speed of literature?


  1. R.J. Ellis outlines the derivation of Capote’s offhand remark. R.J. Ellis, Liar! Liar!: Jack Kerouac – Novelist (London: Greenwich Exchange, 1999), p. 27.
  2. ‘Superspecificity’ implies that future technology is rendered with the same nonchalant, and brand-oriented, terminology as we might apply to the contemporary world when we talk of, for instance, a ‘hoover’ rather than a ‘vacuum cleaner’ or an ‘iPod’ rather than a ‘portable MP3-playing device’. Gibson traced his influence in this respect to hardboiled detective fiction: ‘[Dashiell] Hammett may have been the guy who turned me on to the idea of superspecificity, which is largely lacking in most SF description. SF authors tend to use generics – “Then he got into his space suit” – a refusal to specify that is almost an unspoken tradition in SF’. Larry McCaffery, ‘An Interview with William Gibson’, in McCaffery, ed., Storming the Reality Studio: A Casebook of Cyberpunk and Postmodern Science Fiction (Durham: Duke UP, 1991), p. 269.
  3. Charlotte Brontë, Jane Eyre (1847; London: Penguin, 1966); Charles Dickens, Oliver Twist (1837-38; Oxford: Oxford UP, 1999).
  4. Leo Tolstoy, Anna Karenin (1873-77; trans. Rosemary Edmonds, London: Penguin, 1978). Frank Norris, The Octopus: A Story of California (1901; London: Penguin, 1986). Theodore Dreiser, Sister Carrie (1900; London: Penguin, 1981). Thomas Pynchon, Gravity’s Rainbow (1973; London: Picador, 1975).
  5. Leo Marx, The Machine in the Garden: Technology and the Pastoral Ideal in America (Oxford: Oxford UP, 1964). Marx discusses The Octopus on pp. 343-44.
  6. Theodore Dreiser, Sister Carrie (1900; London: Penguin, 1981), p. 4.
  7. Richard Dawkins provides particularly incisive comment on the revision in popular notions of machines that is necessary if we are fully to understand ourselves, albeit that the focus of his work is largely on evolution, not mind. The following, from the revised edition of The Selfish Gene, gives a taste both of his perspective and his acerbic style: ‘We are in the golden age of electronics, and robots are no longer rigidly inflexible morons but are capable of learning, intelligence, and creativity…. People who think that robots are by definition more “deterministic” than human beings are muddled…’. Richard Dawkins, The Selfish Gene, rev. ed. (Oxford: Oxford UP, 1989), p. 270.
  8. Douglas Coupland, Microserfs (1995; London: Harper Perennial, 2004). Douglas Coupland, JPod (London: Bloomsbury, 2006).

Introduction: Technologies

Technology is frequently seen as an arriviste on the scene of writing. When Derrida argues that ‘it is not legitimate to contrast writing by hand and “mechanical” writing, like a pretechnological craft as opposed to technology’,1 he seeks to disabuse literary critics, philosophers, and social commentators of the notion that writing was once an unmediated poiesis, now corrupted by modes of articulation which have turned the creative act of writing into a technical articulation, increasingly constrained and threatened by regulatory artifices. The quill, the pen, the mechanical typewriter, the electric typewriter, as well as the computer, for him instantiate writing’s enduring history as an intrinsically prosthetic and processed expression, these instruments offering various forms of mechanical resistance while at the same time allowing the act of writing to occur. Paper Machine is not, however, solely concerned to establish writing as an essentially technologized expression or to conceive it as the encoding of thought by an external apparatus. In addition to challenging the division between creativity and technicity that has prevailed – and stubbornly remains – in literary criticism, Derrida also questions the attempt to engineer the human as an entity that is, in essence, not technological.

For him, the instruments that are necessary for writing are also central to identity formation: technologies of writing, and writing itself as a technology, prosthetically inaugurate the human, and they do so in different ways. Paper, for example, has held (and continues to hold) a ‘sacred power’, authenticating the proper name and archiving memory by giving it a seemingly incorruptible permanence in the world. The pen allows us to dream of immediacy and provides us with a particular sense of how our interiority is externalised. Today’s technologies offer not just a departure from the fantasy of physicality that other writing technologies promote, but also a different experience of time (‘These new powers delete or blur the frontiers in unprecedented conditions, and at an unprecedented pace’) and space (affecting frontiers ‘between the national and the global, and even between the earth and the extraterrestrial, the world and the universe’).

Both the ontological persistence of and the specific effects that result from the writing-technology interface have been a constant source of fascination for some of the most prominent figures in cultural theory. Deleuze and Guattari stand as perhaps the most venerated of those who find an extreme saturation of the social and the subjective by technology, claiming that machinic assemblages pass through, shape, and (sometimes critically) reshape bodies and cultures. Certainly, this sense of the machinic as an embedded and ubiquitous force has found itself vigorously embraced by cultural theory in recent years, from DeLanda’s location of ‘the virtual’ in the physical and the natural (rather than in new technologies of representation alone)4 to accounts of the invisible and unpredictable complexity of informational systems.

Often ignored in this work on the machinic qualities of social, cultural, and natural strata, however, is Deleuze and Guattari’s claim that the book too is an assemblage – ‘a literary machine’ – which plugs into other machines, all functioning and failing in the production and transmission of meaning. When connected to their concepts of the rhizome (an acentred and immeasurable structure that defies positivism’s mania for the encyclopaedic) and rhizomatic writing, this concept of the machinic becomes a powerful resource for thinking the production of – and experiments with – printed textuality, as well as the alternative modes of articulation (such as hypertext, Wiki, blogs, Writely and networked writing) that are offered by emerging media and digital technologies.

Social, cultural, and literary studies might only now be seriously confronting the issues that are raised by writing’s technological locations, but they are already offering precise ideas about how to contest anthropocentric narratives that relegate technology to the status of artifice and instrumentality. The reshaping of local, national, and even continental identity by technologies which work at the global level is one development that has become subject to intense scrutiny; no longer treated merely as tools which smooth the emergence of a global community, technologies are now seen actively to interrupt the relationship between space and the social, reconfiguring cultural power and changing the ways in which collective belonging is experienced. Research published in a recent issue of Wired magazine challenges pro-globalist proclamations that horizontal structures of knowledge and power now prevail, pointing to the persistence of a ‘digital divide’ which produces a dramatically uneven distribution of information, pharmaceutical and agronomic technologies across the nations and regions of the world. But Wired also considers how many ‘developing’ nations have responded to this political and economic asymmetry by developing technologies – including file-sharing networks, open source software, digital piracy – that work against the interests of leading nations and transnational corporations.

The particularities of such an ambivalence – of the ways in which technology acts both as the new conduit for an old imperial dynamic and as the source for a resistant recoding of transnational power – are variously examined in Hardt & Negri’s claim that post-Fordist modes of production are resulting in corresponding modes of microcultural insurgence (new guerrilla movements not only ‘employ technologies such as the Internet as organizing tools, they also begin to adopt these technologies as models for their own organizational structures’); in Prakash’s work on the rewriting of colonial modernity’s scientific and technological narratives by colonized elites; and in Young’s account of the communications technologies that were central to Gandhi’s resistance campaign (‘In Gandhi’s hands the Indian liberation struggle took the form of the first media war, the first media revolution’).

The topography of the body and the landscape of history, as well as the contours of the nation-state, are also being reassessed in technological terms. Against the empiricist appetites that have dominated scientific thought, Fox Keller argues that metaphors – including those drawn from technological discourses – mediate our understanding of the biological body; for Turkle technologies are now investing the self with a different emotional and sexual charge.

Both claims connect with a more general sense that the human is being redefined as an organic entity and is, for some, in the process of becoming posthuman and postbiological (with Stelarc’s symborganic metabody and Orlan’s Carnal Art most dramatically embodying the biotechnologized and decorporealized body). Technotopian celebrations of the freedoms that are made possible by such a reinvention stand in sharp contrast with Virilio’s fears that a human catastrophe will result from the clamorous embracing of technology in the present; new technologies of vision not only have an essentially military function that is passed over whenever the global is conceived in terms of international markets and transnational communities, they are also rebuilding consciousness and the body at a rate that is unprecedented in human history.

Underlying Virilio’s claim that today’s culture is one in which perception functions at a different speed – that it is marked by the ‘acceleration of a dromological history’ – is the sense that a sudden break in history has occurred. Castells’ The Information Age offers a precise account of this epochal shift.

No longer organized around the state or its institutions, he argues, capitalism now operates through diffused informational networks which work at the symbolic level to produce new social structures occupying different spaces and operating in different temporalities; capitalism, as a result of this redistribution of power, has become more flexible and is, therefore, more resilient and durable. That this epoch results in a rewriting of representational codes is shown by Manovich: charting the shift from a modernist industrial aesthetic to an informational economy, he considers how information society’s ‘meta-media’ (‘the remixing of interfaces of various cultural forms and of new software techniques’)15 are producing different procedures for accessing the present and the past, as well as new languages of everyday life.

The concepts of ‘writing’ and ‘technologies’, as well as the more singular idea of ‘writing technologies’ do, of course, raise questions that are not addressed by the above examples. What remains urgent and compelling is the need to interrogate technology’s centrality to emerging modes of representation, as well as its decisive role throughout literary and cultural history. In its efforts to question this complex relationship, Writing Technologies will ask:

  • What might a ‘technological criticism’ look like and how might it be related to other, more established, critical systems?
  • How might the ‘background’ (i.e. assumed) technologies in a text provide a way into both it and the culture from which it comes?
  • What is the relationship between technology and scientific worldviews?
  • Does technology imply the further expansion and transformation of the literary canon, and to what extent does this process blur the boundaries between literary studies and cultural & media studies?
  • What are the formal technologies and technological forms of literature?
  • How are subjectivities shaped by medical, cybernetic and other technologies? How do literary texts engage with the encounter between self and machine?
  • What impact are epublishing and other online modes of production and distribution having on patterns of reading? Is the ebook revolution, much hyped at the beginning of the century, failing to threaten printed textuality in the way that many feared?
  • Is there a literature of the new informational economy? If new networks of sociability are emerging – if new modes for the production of social life and social meaning now exist – are these reshaping the production of textual meaning?
  • What is the speed of literature?


  1. Jacques Derrida, Paper Machine, trans. Rachel Bowlby (Stanford: Stanford University Press, 2005), p. 20.
  2. Derrida, Paper Machine, p. 58.
  3. Derrida, Paper Machine, p. 57.
  4. Manuel DeLanda, Intensive Science and Virtual Philosophy (London: Continuum, 2002).
  5. See, for example, Theory, Culture & Society vol. 22, no. 5 (2005).
  6. Gilles Deleuze & Félix Guattari, A Thousand Plateaus: Capitalism & Schizophrenia, trans. Brian Massumi (London: Athlone, 1988), p. 4.
  7. ‘The Free and the Unfree’, Wired 146 (12.06.04), 146-55.
  8. Michael Hardt & Antonio Negri, Multitude: War and Democracy in the Age of Empire (London: Penguin, 2006), p. 83.
  9. Gyan Prakash, Another Reason: Science and the Imagination of Modern India (Princeton: Princeton University Press, 1999).
  10. Robert Young, Postcolonialism: An Historical Introduction (Oxford: Blackwell, 2001), p. 330.
  11. Evelyn Fox Keller, Making Sense of Life: Explaining Biological Development with Models, Metaphors, and Machines. (Cambridge MA: Harvard University Press, 2002).
  12. Sherry Turkle, Life on the Screen: Identity in the Age of the Internet (London: Simon & Schuster, 1997).
  13. Paul Virilio, Ground Zero, trans. Chris Turner (London: Verso, 2002), p. 15.
  14. Manuel Castells, The Information Age, vols 1-3 (Oxford: Blackwell, 1996-7).
  15. Lev Manovich, ‘Understanding Meta-Media’, accessed 10.08.06.

I wish to pick up on the issue of the writing/technology interface implicit within the editors’ introduction, and on what I see as a deeply problematic area of the relation conjured up by the parallel lines of discourse implicit in their partitioning of ‘writing’ and ‘technologies’ – a relation which is usually, of course, conceived as dialogue-across-partitions, but which also carries within it the threat of an alternative reading: parallel tracks which never meet; separation. The fact that we often close this gap, in circular fashion, by using metaphors derived from technologies of communication – we talk (as I did above) of the ‘interface’ between the technological and the human; of ‘relays’ between them; of feedback, resonances, connections and reflections – suggests the scope of the problem rather than solving it.

One version of this question is raised by Derrida in Archive Fever and elsewhere. Derrida’s suggestion is that the relation between the human and the technological is fundamentally unanswerable. Timothy Clark puts it this way in his article on deconstruction and technology: ‘Deconstruction … upsets received concepts of the human and the technological by affirming their mutual constitutive relation or, paradoxically, their constitutive disjunction. Neither term acts as the anchor in relation to which the other can be understood ... The identity of humanity is a differential relation between the human and technics, supplements and prostheses’.

But at one level, this ‘differential relation’ can readily be experienced as a mismatch by anyone with a prosthesis, no matter how minor (that is, most of us), since we are always liable to encounter the friction between technological fix and body: the wearing of a hip replacement; discomfort with dentures; spectacles misplaced because they are not attached to our bodies; frozen shoulders produced by resistance machines at the gym. At such moments we hardly feel (to borrow one of the editors’ phrases) ‘postbiological’; the body is all too evidently with us in the self-identity of its pain rather than its connectedness. Derrida insists that there is no ‘natural originary body’ to which technology has been added; that writing and technology are always bound together as technics; that the self as conceived by Freud and others is circumscribed by technological metaphors – but the question remains of our own experience of disjuncture, of the gap which persists in our experience between ourselves and our technologies.

In part this is also a question of the technological as ‘other’, and of the possible autonomy of the realm of the technological – not simply in terms of the accelerated evolution of technology considered as having a logic separate from that of human society and biology, the dislocations of which have been the focus of one potent strain of thinking on the subject, from George Beard’s Spencerian sense of overload and speed out-of-control in the late nineteenth century to Jacques Ellul’s more haunted, post-war sense of modernity as constant supersession.

Rather, that sense of disquieting autonomy is also a product of the way in which, within that evolutionary framework, we repeatedly inscribe a master-slave dialectic within the realm of technology (for Aristotle slaves are akin to machines, instruments of the master’s will): in the android or Matrix-type fantasy, the slave-machines threaten to take over; the fear is that they ultimately need us (as Hegel suggested) less than we need them.

The threatening autonomy of the technological has a long history: worries about the machine dwarfing or overwhelming the human scale of power and speed were first apparent in the early nineteenth century. Part of what is at issue is the alienation of the senses: physics has, since the late Victorian period, opened up areas of investigation which fall outside the scope of human perception; in which all that can be investigated is accounted for by the calibration of instruments against other instruments, or machines writing output for other machines. In such a science, the human observer is exiled, secondary. But more generally, that sense of exile may be related to the human subject’s being bound up in systems of feedback and exchange which have their focus in the human (and seem to require a human as point of connection), but which are logically distinct from the human. And those systems are, of course, incrementally bound up with modernity.

One mode of relating to this world of machinic interaction with humans and ‘their’ communication can be found in the systems theory of Niklas Luhmann. Luhmann’s sociology has received a relatively unenthusiastic reception in Anglophone cultural and media theory (with a few exceptions), in part because it has been associated with the static world of structuralism; in part because it has been seen as proposing a ‘colder’ and more radical version of the human sciences than most of us are willing to accept – in comparison, ‘cyborg theory’ seems (and in many ways is) a utopian romance of living dolls at play in the technosphere. To see the study of human communication as necessarily focussing on systems (rather than consciousness, feeling, intention, or even meaning) is to move beyond the Saussurian opposition of langue and parole, in which the individual speech act is privileged as creative, to a realm of formal disconnection. Human beings are peripheral to Luhmann’s analysis of the structures which humans have created: humans do not communicate, Luhmann insists; communication systems communicate – and the implication is that we cannot know if we say what we mean, or mean what we say, since meaning and saying are formally distinct.

Luhmann’s mode of thinking has often been challenged for its apparent conservatism, but it is useful in thinking about technology in at least two respects. Firstly, in its deployment of terms ultimately derived from the biological sciences (homeostasis, environment, etc.) it offers a counter to the metaphors we noted earlier (‘relay’ and the like), which in their circularity close off the question of technology before it has been properly opened. Secondly, in refusing our common-sense notion that technology (or even language) is a ‘tool’ subject to our will, Luhmann nevertheless helps us understand the experience of alienation and exclusion produced by technological systems, the ways we are subject to it, and the laws of unintended consequences seemingly written into their use.

A facile example is email, where most of us have feelings of alienation and inadequacy of response: ‘managing’ email and other software (mailboxes, addresses, templates, spam and virus filters, etc.) has become a process in which the maintenance of the system’s complexity is a major preoccupation. More fundamentally, the interface itself (in proprietary computer systems as in bureaucracies) presents us with a set of pre-formatted choices rather than real agency, leaving us uncertain about the assumptions written into the technology. (A good example is textual studies, where the uncertainties of the manuscript are necessarily rendered as a series of determinate choices made by editors, or, at best, hypertextual options.) As Lev Manovich comments, ‘While from one point of view, computerized media still displays structural organization that makes sense to its human viewers’ (images and texts), the computer’s organization of that data imposes fundamentally different ontological conditions and possibilities of operating on that data; conditions which Manovich defines as numerical representation, modularity, automation, variability, and transcoding.

This suggests, to return to our original question, that while we have always ‘written technology’ (it is in our writing), the technology may be continuing to write us in ways that we have barely begun to investigate. Implicit in the work of Luhmann, and elaborated much more specifically in that of Friedrich Kittler – neither of whom figure in the editors’ manifesto(s), but both of whom I would nevertheless see as fairly central to the project of Writing Technologies – there is an account of writing and technology which is attuned to discourse as technologically mediated, and to modes of language production and sensory storage which have become increasingly systematized, commoditized, and detached from human sources.

But I do not, here, mean to suggest that we should succumb to a determinism, to the fetishization of the technological which can creep into the work of thinkers like Kittler and Virilio. Rather we need to attend to the fragility of the written; to the discomforts and estrangements of its relation to the technological; and to the uneven flow of relations between the two. I write on a keyboard; some of its letters are effaced; I trip; or I turn to the internet; I worry about whether the hum I hear is a presage of hard disc failure. The flow of words must negotiate all this; the words must head across cyberspace where they may well be scanned for suspicious keywords by agencies I have never heard of; they must join other words in other machines and finally reside on paper and in the web (where they may again be mixed with other words, making their way, with some luck, into other writings). All this is true, and part of writing as technology.

But any phenomenology of writing must nevertheless negotiate the way that they still seem my words; that they are evidence of a mind thinking, and a body writing, in a place and time which is part of a lived experience. We are, to quote Wallace Stevens, ‘Within the very object that we seek, / Participants of its being’, which means among other things that we will never cease to struggle to articulate our difference from our technology.


  1. This is, of course, a confirmation of Henry Adams’s comments on the ‘occult’ qualities of modern forces and instrumentation in ‘The Dynamo and the Virgin’ (1900). On one aspect of this question see Joel Snyder, ‘Visualization and Visuality’, in Picturing Science, Producing Art, ed. Caroline A. Jones and Peter Galison (New York: Routledge, 1998), pp. 379-97.
  2. Exceptions include Thomas LeClair, In the Loop: Don DeLillo and the Systems Novel (Urbana: University of Illinois Press, 1987) and Mark Seltzer’s True Crime (New York: Routledge, 2006).
  3. Lev Manovich, The Language of New Media (Cambridge, MA: MIT Press, 2001), p.45.
  4. See Friedrich A. Kittler, Discourse Networks 1800/1900, trans. Michael Metteer (Stanford: Stanford University Press, 1990); Friedrich A. Kittler, Gramophone, Film, Typewriter, trans. Geoffrey Winthrop-Young and Michael Wutz (Stanford: Stanford University Press, 1999). The first text in English to show a major influence from Kittler’s work was Avital Ronell, The Telephone Book: Technology, Schizophrenia, Electric Speech (Lincoln: University of Nebraska Press, 1989); more recent examples of work inflected by his approach include Lisa Gitelman, Scripts, Grooves, and Writing Machines: Representing Technology in the Edison Era (Stanford: Stanford University Press, 1999); Sara Danius, The Senses of Modernism: Technology, Perception and Aesthetics (Ithaca: Cornell University Press, 2002); Timothy C. Campbell, Wireless Writing in the Age of Marconi (Minneapolis: University of Minnesota Press, 2006).
  5. Wallace Stevens, ‘Study of Images I’, in Collected Poetry and Prose, eds. Frank Kermode and Joan Richardson (New York: Library of America, 1997), p. 395.

One might suppose this was not a matter for speculation; that all manner of studies and theories already exemplify technological ways of reading, from books of literary criticism that examine the theme of technology in literature to approaches to literature that are themselves technological or machine-like. But technology poses a problem that few approaches to literature get to grips with. It’s a problem of radical ambivalence — or, rather, it is a problem of simultaneously constructing seemingly clear-cut distinctions of a fundamental, ontological kind, and confounding them.
It is there in the word technology itself: a fusion of techne and logos. The encounter between technology and writing is already implicit in the word.

And it points to an ambiguity about the word: does technology refer to specific artefacts, or to concepts of and discourses concerning them? Originally, technology signified the latter (the study of skills); but it has come to refer also to actual objects. However, it has never been reduced merely to things; the word still implies something large and abstract. To acquire a whole technology, rather than just particular instances of it, one needs more than just objects. Acquiring a technology implies the acquisition not only of things, but of expertise, organisation and infrastructure. There is more to this than just knowing which buttons to push. Accordingly, several theories of technology take it to comprise not just machines but machines and people integrated into systems and activities.

Mechanistic science as it emerges from the scientific revolution underlies technology’s ambivalence regarding the concrete and the abstract. It was by no means the only model for investigating nature to emerge from seventeenth-century thought; but it was the most influential. Its proponents were at pains to recommend their studies to existing social norms, political interests and religious orthodoxies. But it was impossible to disguise a radical separation on which mechanistic science depended: a separation of consciousness from reality. As the concept of the created world was systematically mechanised, the position of the conscious mind contemplating that world became problematic. Mechanistic nature was nature shorn of animating forces.

There were matter and motion, but not animation. Notoriously, for Descartes animals were simply machines. Only humanity possessed authentic animation, because only human beings possessed rational souls, and thus only human beings of all the entities in the material world possessed free will. Only free will distinguished us from machines; not life. Yet we were now embedded in a world of mechanistic determinism. And other thinkers (for example, Hobbes and, later, La Mettrie) declined to follow Descartes in safeguarding an ontologically distinct soul. Their systematic materialism promised to return consciousness to the material world. But it was a material world conceived so as to make consciousness anomalous. There remained a clear separation between subjective consciousness and the mechanistic model of the world into which it was inserted, and by which the mind was supposedly to be explained, even as the mind sought to explain that world. In other words, one runs into paradoxes of reflexivity. Yet that separation was also denied by the totality of the mechanistic world view. Hence the simultaneous drawing and confounding of ontological distinctions.

That ambivalence asserts itself in literature at the Romantic moment. Coleridge identifies mechanistic creation as the product of the limited faculty of fancy. True creation, by comparison, is the upshot of organic imagination, creating after the manner of God. Animation and wholeness become deeply vexed issues for such a literary theory, and notions of writing as craft accordingly lose out. Rhetorically informed criticism, with its sophisticated, ultimately instrumental, attitude to language, receives a blow in this period from which it has never recovered. Though Romantic writing is varied, there is a tendency for a version of idealism to emerge as a counterblast to materialism, even as imagination is elevated over reason, which, in its more limited forms, seems mechanistic. Shelley’s idealism is a kind of Platonism. Coleridge’s combines philosophy and theology. Wordsworth more commonly draws upon a kind of vitalism — though the Immortality Ode shows how far he could also invoke idealism. It is not every writer of the period — even in Britain — who exemplifies this turn to idealism. But this use of idealism and this invocation of Life, to counter mechanistic conceptions of the world and of ourselves, set the conditions for a nineteenth-century manifestation of technology’s radical ambivalence: the unstable and undecidable interplay of idealism and materialism.

The nineteenth century is a great age of materialist science, and a science increasingly manifest in its technological reshaping of the world. It is also fascinated by counterparts to this sense of the world as lifeless, integrated system. Hence the invocation of Life (by Nietzsche, for example), and the persistence of idealism. The book which most presciently grasps this dilemma is arguably Frankenstein. It repeatedly invokes and confounds the distinctions some other Romantic writers insisted upon. One way of reading the story is as an ironic commentary on Coleridge’s opposition of mechanical to organic creation.

The creature is a technological product made (after the manner of Coleridgean fancy) out of pre-existing parts, mixed and matched, instead of being conceived as an organic, living whole. Yet is the creature not alive? If Frankenstein presents the Romantic poet in the guise of a scientist, the creature is an arresting vindication of the organic imagination — and its most damning refutation.

This kind of ambivalence persists in many critical trends, playing off a totalising mechanistic anti-humanism against a correspondingly totalised life or spirit, or, latterly, trying to reject both of these totalities by resorting to a systematic anti-systematism, and a metaphysical anti-metaphysics. There is a phase in the development of film theory that describes part of this yo-yoing trajectory. Auteurism represented a late assertion of Romantic authorship. It claimed that film aspired to be an act of (self-)expression. If, instead of expressing a unique vision, a film merely reproduced existing forms and conventions, it had failed.

Film theory then abruptly flipped from this assertion of the author as the source of meaning to a structuralist denial of it, and its model of film language accordingly switched from expression to code. However, this was a denial of authorship that on the whole still declined to contemplate technology in relation to human organisation and cooperation. Yet auteurism and structuralism were not as different as they seemed, as auteur-structuralism revealed. Auteur-structuralism is crazy in principle, but in practice it proved suspiciously easy to marry these two foes to each other. Both appeal to a kind of super-Subject as the source of meaning: the auteur in one case, the code of codes in the other.

Conceptions of language modulate through correspondingly implausible, extreme positions as this yo-yoing proceeds. There is a seventeenth-century scientific war on metaphor, manifest in a nominalism that was determined to override the threat it posed to the meaningfulness of words by reforming language to attach words unshakably to things (something clear in Hobbes, and in Thomas Sprat’s account of the Royal Society’s programme; and mocked by Swift, whose natural philosophers in the Grand Academy of Lagado in Gulliver’s Travels carry bundles of objects with them to use instead of words, though this contrasts with the writing machine of another academician).

That is countered by a Romantic impulse to assert that language is essentially metaphorical. It anticipates later notions of language as a system in which signs relate primarily to each other, with the massive qualification that it reserves a privileged place for creative will, albeit a will so paradoxically conceived and presented as often to seem a function of expression rather than the source of it — at any rate, not a matter of individual will. One totalisation readily takes the place of another, and so language as the expression of spirit readily gives way to total, machine-like conceptions of language, which accordingly surface in linguistics.

The radical ambivalence of technology gives rise to queasy and questionable metaphysics — whether concerning the idea of the mechanistic as such, or various attempts to counter it. Hence various totalising and systematically anti-totalising gestures. Hence too in technology studies a shuttling between determinism and social constructionism, with will and consciousness again becoming the key problematic terms. Not that we can ever be free of such metaphysics (the belief that we could was, after all, among the seventeenth-century scientific delusions that gave rise to this radical ambivalence). But we can be more aware of metaphysical assumptions — and we can attend more to the ways in which particular material practices and specific applications of technologies create possibilities of expression and shape forms, without reducing authorship to a function of discourse or discourse to a function of authorship. Some of this work is available — yet it seldom becomes as central to relevant disciplines and curricula as it ought to be. It is hard, for example, to understand literature since the sixteenth century without understanding how print impinges upon the form and stability of knowledge and on the construction of authorship. The modern concept of technology arguably depends upon print-consciousness. According to Benedict Anderson, so too does the nation-state. And modern concepts of authorship arguably depend upon the uniformity of a printed edition in all its copies. Without that uniformity, one can have little confidence that the details of a book are the expression of the author, rather than of a copyist. If this is so, then it is ironic that authorship as such, with its elevation of one kind of individual, depends on a technology that strips away individual differences in favour of uniformity.
In fact, technologies embody particular forms of cooperative labour, even if there is a tendency to misrepresent this as the expression of a single coordinating will, reducing all others to functions of the system it runs.

It is typical of the kind of ambivalence I have been describing that the application of print technology should simultaneously produce mechanical uniformity and the figure of the Author. Yet to go beyond this in even the simplest way one needs to attend to the specific ways in which technologies — as assemblages of machines and organised and appropriately divided labour — function. One needs to attend to the work of such scholars as Eisenstein and Ong. Their work is respected, but tends to be relegated to courses on the history of the book or textual criticism rather than being seen as fundamental to any critical reading. Similarly, though there are studies of the technical processes and the technologically mediated division of labour of film production, it is rare to find a film studies programme which sees an understanding of film technology as foundational for a critical understanding of film. Though film technology figures in various film theories, many of them remain caught in the yo-yo effect I have described. So people end up analysing Citizen Kane with no notion of how, for example, an optical printer works. Yet without such knowledge it is impossible to assess what choices were made and what other choices were available.

So what would a technological criticism look like? I’d suggest it needs to be critically alive to the metaphysical notions that technology brings with it, and that it needs to attend to particular crafts and technologies of production, to understand production in terms of creative cooperation and division of labour, and in terms of skills, instruments, systems, agencies, capacities and constraints. The upshot is likely to be a return of the author, albeit as a modestly conceived figure who is one agent and factor among others. In the process the metaphysics won’t go away, but nor will they merely reproduce themselves to the exclusion of everything else in an ultimately tedious yo-yoing fashion. Perhaps ‘technological criticism’ is not the most appropriate term for such a project, since it demands a critique of technology, besides knowledge of technologies. But this kind of technologically wary and aware criticism may be one way out of an impasse that threatens otherwise to be reiterated endlessly, possibly disguising sameness with technological innovation, while dancing the same dance over and over, just in different clothes.


  1. See, for example, Paul Ginestier, The Poet and the Machine, trans. Martin B. Friedman (Chapel Hill: University of North Carolina Press, 1961); Herbert L. Sussman, Victorians and the Machine: The Literary Response to Technology (Cambridge, Mass: Harvard University Press, 1968); Hugh Kenner, The Mechanic Muse (Oxford: OUP, 1987); Bettina Liebowitz Knapp, Machine, Metaphor, and the Writer: a Jungian View (University Park: Pennsylvania State University Press, 1989); Nicholas Daly, Literature, Technology, and Modernity, 1860-2000 (Cambridge: Cambridge University Press, 2004).
  2. Cf. Bruno Latour's construction of modernity in We Have Never Been Modern, trans. Catherine Porter (New York: Harvester Wheatsheaf, 1993), pp. 10-13.
  3. On the etymology of technology see Carl Mitcham, Thinking Through Technology (Chicago: University of Chicago Press, 1994), ch. 5.
  4. On the history of the term, see Thomas P. Hughes, Human-Built World: How to Think About Technology and Culture (Chicago: University of Chicago Press, 2004), pp. 2-5.
  5. See, for example, Hughes, Human-Built World, pp. 175-6.
  6. Samuel Taylor Coleridge, Biographia Literaria (1817), ed. George Watson (London: Dent, 1975), ch. 13. See also M.H. Abrams, The Mirror and the Lamp: Romantic theory and the critical tradition (Oxford: OUP, 1953), pp. 167-77.
  7. See, for example, Toril Moi, Henrik Ibsen and the Birth of Modernism (Oxford: OUP, 2006), especially chs. 3 and 5, for an account of how idealism affected one writer.
  8. This is partly because Coleridge’s theory of organic creation and aspects of the novel draw upon a debate about vitalism and the life sciences. See, for example, Nicholas Roe, ed., Samuel Taylor Coleridge and the Sciences of Life (Oxford: OUP, 2001), and Marilyn Butler, introduction to Mary Shelley, Frankenstein (Oxford: OUP, 1994), pp. xv-xxi.
  9. Andrew Sarris's The American Cinema (New York: E.P. Dutton, 1968) remains one of the clearest instances of this.
  10. On structuralism's invocation of a super-Subject see Terry Eagleton, Literary Theory: an Introduction (Oxford: Blackwell, 1983), pp. 121-2.
  11. Thomas Hobbes, Leviathan (1651), ed. Richard Tuck (Cambridge: CUP, 1996), part 1, ch. 4; Thomas Sprat, The History of the Royal Society (1667) ed. Jackson I. Cope and Harold Whitmore Jones (London: Routledge & Kegan Paul, 1959), first part; Jonathan Swift, Gulliver's Travels, A Tale of a Tub, The Battle of the Books, etc. (London: OUP, 1919), part 3, ch. 5.
  12. See, for example, Percy Bysshe Shelley, A Defence of Poetry (1821), rptd. in Duncan Wu, ed., Romanticism (Oxford: Blackwell, 1994), especially p. 957, on the language of poets as ‘vitally metaphorical’. As the essay as a whole makes clear, for Shelley language and even reality have an ultimately metaphorical character, even if it’s only poets, in his extended sense of the term, who are capable of animating and remoulding that metaphoricity. It’s because the work of poets thus impinges upon the terms in which entire cultures think, feel and express themselves that Shelley claims that they ‘are the unacknowledged legislators of the world’ (p. 969).
  13. See Roy Harris, The Language Machine (Ithaca: Cornell University Press, 1987).
  14. Benedict Anderson, Imagined Communities, rev. edn. (London: Verso, 1991), ch. 3 and pp. 61-5.
  15. Cf. nineteenth-century critiques of industrial technology as removing the worker's specific relation with the product of the work - in Marx and Ruskin, for example. Hence the insistence on retaining craft as the model for the arts, and the way the words artisan and artist head off in different directions.
  16. See, for example, Elizabeth Eisenstein, The Printing Press as Agent of Change: communications and cultural transformations in early-modern Europe, 2 vols. (Cambridge: CUP, 1979) and Walter J. Ong, Orality and literacy: the technologizing of the word (London: Methuen, 1982).
  17. See Seán Burke's introduction to his anthology Authorship from Plato to the Postmodern: a Reader (Edinburgh: Edinburgh University Press, 1995).

One important effect of a concerted focus on ‘writing technologies’ – that is, on the material mechanics of inscription – is a dilution of the textual idealism that is endemic to much literary study. Outside the area of bio-bibliographic research, where an attention to the specifics of manuscript variants is crucial, most literary scholars tend to operate as if any given version of a text is adequate for their scholarly or pedagogical purposes. The emergent field of book studies has done much in recent years to correct this assumption, showing compellingly how such extra- or para-textual features as publication format, illustrations, and mode of distribution work to condition how individual texts are interpreted by readers. It matters intimately to an informed grasp of Dickens’ novels, for example, that most of them were released in serial form, an arrangement that had appreciable effects on such intra-textual features as plot and characterization. Every text, whether an original publication or a reprint, is materially instantiated in a specific medium, accessible through particular modes of distribution, and amenable to discrete forms of reception. Encountering a story by H.P. Lovecraft or Dashiell Hammett in a pulp magazine such as Weird Tales or Black Mask is not the same thing as reading it in a Library of America edition.

These considerations apply with particular force in the field where much of my own research is centred, science fiction. For roughly the first thirty years of its existence, science fiction (SF) was essentially a magazine culture, sustained by pulp and digest publications appearing monthly or quarterly; a specialty book market was negligible until the late 1950s and did not achieve dominance until at least a decade later. This basic set of facts has important consequences for how we read SF texts. For example, early SF’s purplish prolixity—the adjectival profusion of the classic pulp style—may in part be explained by the fact that editors needed copy to fill pages and writers were paid by the word. Moreover, possible story structures were constrained by the serial format: some of the classic works in the field, such as Isaac Asimov’s I, Robot (1950), though marketed in book form as novels, were not initially planned as such, but rather as cycles of tales published serially over a number of years—hence their episodic plots and flattened, repetitive characterization (since characters had to be introduced anew to a fresh set of readers with each instalment).

Despite the evident salience of these contextual issues to an adequate interpretation of SF works published prior to the 1960s, many SF critics and teachers seem to assume that a text as presented in a current reprint edition is not substantially different from its appearance in a pulp magazine of the 1940s. This is a misleading assumption even if the reproduction is precisely word for word (which is often not the case, since many SF authors, irritated by the persistent meddling of magazine editors, restored or revised their work when it was published in book form). A pulp story was seldom read in isolation but instead came bathed in the ambient culture of a particular magazine, with its editorial ideology, visual style, and layout—all of which hovered on the margins of the reading experience as an animating framework for interpretation. Broadly speaking, this encompassing context provided the ‘writing technology’ of the genre, and contemporary scholars who ignore it are in serious danger of generating blinkered or anachronistic readings of texts from previous eras. Disciplined attention to a work’s material instantiation is thus an essential component of literary analysis, and not just for SF critics either.

Postphonetic writing has inaugurated the future of new media. As the technology continues to evolve and morph into something we may not yet know how to characterize, one of the first things we should interrogate is the idea of the phonetic alphabet. Inasmuch as the alphabet lies at the foundation of our literacy, literary theory, linguistics, and information theory, the theoretical implications of this construct need to be rethought in light of the advent of postphonetic writing and new media. Is the alphabet necessarily phonetic? This somewhat facetious question leads us to that other enduring, but contentious, issue which had troubled the philosopher Jacques Derrida: What is writing?

Derrida’s insistence on the primacy of writing is well known but somewhat curious from this perspective, because it coincides with the development of biocybernetics and the discovery of the genetic code. On closer inspection, what seems like a coincidence is actually the philosopher’s reaction to the news of biocybernetics. Derrida evoked the ‘information within the living cell’ and ‘the cybernetic program’ to elaborate the notion of the grammè or grapheme in his essay ‘The End of the Book and the Beginning of Writing’. More interestingly, he treated the biocybernetic developments of his time as contemporary instances of a generalized ‘writing’ that would seem to suggest radical possibilities for the project of critiquing Western metaphysics. This attempt to fold biocybernetics into grammatology raises the issue of whether the so-called ‘information within the living cell’ can supply the kind of evidence Derrida was looking for, or whether it exemplifies the same rhetorical loop he was unravelling elsewhere, in particular with respect to the European metaphysical tradition. No doubt the decades-long deconstruction of logocentrism has proven extremely fruitful in clearing the way for innovative views of writing, but it is time, I believe, to reassess the critical project of grammatology and its relevance for writing technologies.

It is often said that the technology of writing has been instrumental in the making of cities, empires, civilizations, and long-distance trade and communication over the past millennia, and that it has brought about electronic global capitalism and increasingly networked societies in our own time. Nietzsche made his prescient remark in 1878 that ‘The press, the machine, the railway, the telegraph are premises whose thousand-year conclusion no one has yet dared to draw’. In this Nietzschean picture of future technologies, writing clearly dominates.

The sheer quantity of written and printed records, and of electronic information stored in data banks, libraries, museums, archival centres and global communication networks, indicates the profound degree to which writing has transformed our lives and consciousness. But apart from a general consensus concerning the power of writing as technology, everything else seems up for grabs. Contemporary theorists who continue to work under the shadow of Marshall McLuhan exhibit a tendency to take alphabetical writing for granted even as they analyse its relationship with print technology on the one hand and with electronic media on the other.

The slowness in recognizing the metamorphosis of alphabetical writing across the disciplinary divide has prevented us from knowing exactly how a given idea of writing migrates from discipline to discipline. For instance, did Claude Shannon and Roman Jakobson share the same view of the alphabet? How was postphonetic writing invented? Why was this writing deemed necessary by engineers of communication systems? Where does it stand in the making of biocybernetic systems?

If informatics and linguistics each depart from different assumptions about writing, they must arrive at rather different results in view of the ambiguous identity of alphabetical letters with respect to phonetics, visuality, and spatiality. Whereas modern linguistic theory has tended to perpetuate the phonocentrism of European comparative philology, algorithmic thinking has always revolved around the ideographic potentials of alphabetical writing thanks to the non-phonetic character of mathematical symbolism. In other words, writing persists in algorithmic thinking in spite of the linguistic sign.

In a recent study I devoted to exploring the interrelations of James Joyce, Claude Shannon, and Derrida, I tried to draw attention to one of Shannon’s theoretical constructs, called ‘Printed English’. Shannon conceived of Printed English as an ideographical alphabet with definable statistical structures, composed of twenty-seven letters: A to Z plus a ‘space’ sign. Printed English entails a symbolic correspondence between the twenty-seven letters and their numerical counterparts and has nothing to do with phonemic units in the spoken language. As a postphonetic system, this statistical English functions as a conceptual interface between natural language and machine language. One of the most significant inventions since World War II, Printed English is a direct offspring of telegraphy, for it is based on a close analysis of Morse code conducted by Shannon himself. The novelty of Printed English lies not only in its mathematical elegance for encoding messages and designing information systems beyond Morse code but also in its reinvention of the very idea of communication and of the relationship between writing and speech. Printed English functions as postphonetic writing precisely in this alphanumerical sense, with profound implications for what Walter Ong has called ‘secondary orality’, because it refigures the biomechanics of human speech in such a way that sound and speech can both be produced, rather than reproduced, as artefacts of AI engineering, the example being text-to-speech (TTS) synthesis.
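Shannon’s construct lends itself to a simple computational gloss. The sketch below is mine, not Shannon’s, and the sample text and function names are purely illustrative; it projects an arbitrary text onto the 27-symbol alphabet and computes its first-order entropy, the kind of statistical structure on which Printed English rests:

```python
from collections import Counter
from math import log2

# Shannon's 'Printed English': a 27-symbol alphabet, the letters
# A to Z plus a 'space' sign, treated as purely statistical units.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def printed_english(text: str) -> str:
    """Project arbitrary text onto the 27-symbol alphabet."""
    return "".join(ch for ch in text.lower() if ch in ALPHABET)

def entropy_per_symbol(text: str) -> float:
    """First-order entropy in bits per symbol: -sum p * log2(p)."""
    symbols = printed_english(text)
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * log2(n / total) for n in counts.values())

sample = "the quick brown fox jumps over the lazy dog"
print(f"{entropy_per_symbol(sample):.2f} bits/symbol "
      f"(equiprobable maximum: {log2(27):.2f})")
```

Equiprobable symbols would yield log2(27), about 4.75 bits per symbol; actual English falls well below that maximum, and it is precisely this statistical unevenness that Shannon’s construct makes calculable, without any reference to phonetics.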

It is worth pointing out that the ‘space’ symbol in Printed English is a conceptual figure, not a visible word divider as is commonly observed in some writing systems. The centrality of the printed symbol for technology has been well captured by Friedrich A. Kittler: ‘in contrast to the flow of handwriting, we now have discrete elements separated by spaces’. The letter ‘space’ owes its existence to the statistical, rather than visual or phonemic, parameters of symbols. It has no linguistic meaning insofar as conventional semantics is concerned, but it is functional as a meaningful ideographical notion. However, this point is difficult to grasp until we tackle the long-standing attribution of difference among non-alphabetical writing systems along the spectrum of pictography, ideography, and phonetic writing.

Ideographic writing has long been opposed to the phonetic alphabet as its non-phonetic other. This binary thinking exemplifies a metaphysical turn of the mind that Derrida tried to dismantle, although the exact relationship between the two appeared to elude his grasp for reasons I do not have the space to elaborate here. For a preliminary understanding of the subject, the first thing to do is NOT to associate ideographic inscription too quickly with the Chinese script. Despite the various claims to the contrary, the written Chinese character can no more be equated with ideography, much less pictography, than alphabetical writing can be reduced to phonocentrism. We must remember that ideographic inscription has been a European idea, like that of the hieroglyph, one that would be foreign to the Chinese scholars who have written voluminously on the subject of the zi (individual character) or the wen (text/writing) over a period of two thousand years. The equating of the Chinese script with an ideographic system has been the unfortunate result of misunderstandings and motivated translations by early Christian missionaries and linguists, who were poor intermediaries when it came to reporting on the state of Chinese writing to their home audiences and to unsuspecting philosophers. The situation has not improved much since the time of Leibniz.

But there is no reason why one should dismiss ideographical writing as a false idea. Even if this notion fails to inform us about the Chinese script, it has enjoyed a productive career in the West with a penchant for prolepsis, that is, a dream that some day alphabetical writing would be able to shed its local phonetic trappings to become a universal script. It is this Leibnizian dream of transcendence that has given ideography its aura of alterity in Western thought, so one can continue to fantasize about direct graphic inscriptions of abstract thought the way mathematical symbols or deaf reading and mute writing transcribe conceptual objects, namely, without the mediation of speech or sounds.

That aura appears to have persisted with or without the help of the Chinese script. More recently, a new course of events began to speed things up and brought the centuries-long pursuit of the universal script to a halt, if not to a sense of closure. I am referring to the cracking of the genetic code by molecular biologists in the latter half of the twentieth century. This monumental event and the subsequent mapping of the human genome have marked a turning point in how some of the basic questions about life, humanity, reproduction, social control, language, communication, and health are to be posed or debated in the public arena. These events are happening at a time when conversations between scientists and humanists are made ever more difficult by the nearly insurmountable disciplinary barriers and institutional forces that are there to shield the scientist from the critical eye while keeping the humanist away from the production of objective knowledge.

Despite the difficulty, the news of the genetic code has given rise to a number of major critical studies by humanistic scholars who took upon themselves the task of scrutinizing the discourse of coded writing as a master trope in molecular biology. Inasmuch as the discipline of molecular biology did not come into its own until the midst of the Cold War, many of these studies are devoted to examining how the vast resources of the military-industrial-academic complex of the United States were put in the service of a new vision of weapons technology and a new ontology of the enemy in the form of information theory and cybernetics. These studies demonstrate how the path-breaking discoveries made by Norbert Wiener, Claude Shannon, John von Neumann, George Gamow, and others in cybernetic warfare and cryptography inspired the first generation of molecular biologists to transcribe and translate the biochemical processes of living cells and organisms as coded messages, information transfer, communication flow, and so on. Whereas the mathematician relied on the logic of cryptological decoding to unlock the enemy's secret alphabet, the molecular biologist searched for the letters, codons (words), and punctuation marks of the nucleic acids to decode the speechless language of DNA in the Book of Life.
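
The decoding trope the molecular biologists adopted can be made concrete in a toy sketch. The fragmentary codon table and sample strand below are illustrative inventions of my own, not drawn from any of the studies discussed; they simply show what it means to read a nucleic-acid sequence as letters, three-letter 'words', and 'punctuation marks'.

```python
# A toy illustration of the decoding trope: reading a DNA strand as
# three-letter 'words' (codons) and translating them into amino acids.
# The table is a small fragment of the standard genetic code.
CODON_TABLE = {
    "ATG": "Met",   # methionine; also the 'start' signal
    "TGG": "Trp",   # tryptophan
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "TAA": "STOP",  # a 'punctuation mark': end of the message
}

def translate(dna):
    """Read the strand codon by codon until a STOP 'punctuation mark'."""
    peptide = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE.get(dna[i:i + 3], "?")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("ATGTGGGGCAAATAA"))  # ['Met', 'Trp', 'Gly', 'Lys']
```

The point of the sketch is only that the cell's chemistry can be notated, and then processed, as if it were text: exactly the transposition from cryptography to biology that the critical studies above examine.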

As the digital revolution dissolves older conceptual boundaries and introduces new ones, the spatial/temporal coordinates of a future cognitive world will emerge from the ever intensified interdependence of human and machine, or from similar kinds of prosthetic conditions enabled by digital media. Of course, the numerical function of the alphabet has been there since its invention, but we are so addicted to thinking of alphabetic writing as a phonetic system of transcription that Shannon's treatment of the English alphabet as a total ideographic system may still come as a shock. Alphabetic writing is one of the oldest technologies in world civilization and has become more thoroughly and universally digital and ideographic than it ever was. But what is happening to non-alphabetic writing systems in the meantime? An incontrovertible fact has been thrust upon our attention: digital technology is turning non-alphabetic writing systems such as Chinese into sub-codes of global English via Unicode. It is as if a new metaphysics of communication had emerged on the horizon of universal communicability through Printed English.
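
The asymmetry at issue here can be observed directly in any Unicode environment. The brief sketch below, my own illustration rather than part of the argument above, shows how UTF-8, the dominant encoding of the web, preserves the old one-byte ASCII range for Latin letters while a Chinese character such as 文 (wen, writing/text) travels as a multi-byte sequence layered on top of it.

```python
# UTF-8 keeps the 7-bit ASCII range intact: a Latin letter occupies one
# byte, while a Chinese character rides on top as a multi-byte sequence.
for ch in ["a", "文"]:
    encoded = ch.encode("utf-8")
    print(f"{ch!r}: code point U+{ord(ch):04X}, "
          f"{len(encoded)} byte(s): {encoded.hex(' ')}")
# 'a': code point U+0061, 1 byte(s): 61
# '文': code point U+6587, 3 byte(s): e6 96 87
```

Whether one reads this arrangement as neutral engineering or as the 'sub-coding' of one script beneath another is precisely the kind of question raised above.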

Commenting on the state of metaphysics, Martin Heidegger provided a number of fascinating reflections upon the implications of cybernetics for philosophy in general. In his essay 'The End of Philosophy and the Task of Thinking' (1969), Heidegger pointed out:

No prophecy is necessary to recognize that the sciences now establishing themselves will soon be determined and steered by the new fundamental science which is called cybernetics. This science corresponds to the determination of man as an acting social being. For it is the theory of the steering of the possible planning and arrangement of human labour. Cybernetics transforms language into an exchange of news. The arts become regulated-regulating instruments of information.

If cybernetics is capable of turning language into an exchange of news as it seems to be doing in our time, we must also register the fact that language and writing had enabled the invention of the cybernetic idea in the first place as is well attested by Printed English. It seems that the drive toward universal communicability (visual, verbal, and tactile) will continue to raise fundamental issues to challenge an intellectual endeavour such as Writing Technologies. I am hopeful that this new journal will enlighten us on many aspects of the ethical, political, and psychic life of technology and push us toward a better understanding of the prosthetic coexistence of humans and other lives on this very fragile planet.


  1. Jacques Derrida, Of Grammatology, trans. Gayatri Chakravorty Spivak (Baltimore: Johns Hopkins University Press, 1976), p. 9.
  2. Friedrich Nietzsche, Human, All Too Human: A Book for Free Spirits, trans. R.J. Hollingdale (Cambridge: Cambridge University Press, 1986), p. 378.
  3. Lydia H. Liu, 'iSpace: Printed English after Joyce, Shannon, and Derrida', Critical Inquiry 32 (Spring 2006): 516-550.
  4. See Walter Ong, Orality and Literacy (New York: Routledge, 1982), pp. 133-34.
  5. 'Text to speech' conversion denotes a branch of artificial intelligence that deals with the computational problem of converting written text into a linguistic representation. This is one of the areas where the relationship between writing and speech can be fruitfully investigated for both engineering and theoretical purposes. See Richard Sproat, A Computational Theory of Writing Systems (Cambridge: Cambridge University Press, 2000).
  6. Friedrich A. Kittler, Gramophone, Film, Typewriter, trans. Geoffrey Winthrop-Young & Michael Wutz (Stanford: Stanford University Press, 1999), p. 16.
  7. A report not long ago in the New York Times suggests that the world outside China is still very much in the dark about Chinese writing. See Emily Eakin, 'Writing as a Block for Asians', New York Times, May 3, 2003.
  8. For a discussion of the strained translation of the zi by the concept of the 'word' and the troubled beginnings of modern Chinese grammar, see 'The Sovereign Subject of Grammar' in my book The Clash of Empires: The Invention of China in Modern World Making (Cambridge, Mass.: Harvard University Press, 2004).
  9. I have in mind, for example, the pioneering work of Katherine Hayles, Mark Taylor, W.J.T. Mitchell, Paul N. Edwards, Mark Hansen, and Lily E. Kay.
  10. Martin Heidegger, 'The End of Philosophy and the Task of Thinking', in Heidegger, On Time and Being, trans. by Joan Stambaugh (New York: Harper & Row, 1972; Chicago: University of Chicago Press, 2002), pp. 55-73.

Is there a literature of the new information economy?

The answer to this question depends on how we interpret the word literature. For the purposes of this short piece I shall focus on writing that aims at artistic and cultural expression, rather than more general communicative writing. In the current media landscape, however, these two types of writing are often intimately connected, since they are distributed on the same electronic networks.

Just as the invention and proliferation of the technologies of the printing press enabled and promoted the development of the novel, as well as other literary forms, so we can discern examples of artistic literature that have been made possible by new media technologies. It can be asked whether these examples actually constitute new forms of literature and, if so, why.

The artistic use of new media in literary activity has been traced back to the earliest days of electronic computing in the 1950s. However, the rapid growth of the Internet in the 1990s created a proliferation of activity in this field, much of which centred on institutional hubs such as trAce in the UK and the Electronic Literature Organisation in the USA, as well as other fora and email lists.

One of many areas of interest related to the cross-fertilisation of new media technology and writing is the way these technologies affect writers and influence their practice. What is the writer when he is also a programmer? What is the role of computer code within a new media writer’s practice? 2 Is the writer still just a writer when he is also working with different modes of representation beyond the flat page, such as with audio and visual media, databases, and information networks?

As a simple example of how a new media writer approaches writing in a technological environment I will briefly look at an example of my own practice called Let Us Turn. It is significant to interpretation that this work is a response to a stanza from Walt Whitman's Leaves of Grass which begins 'I think I could turn and live with the animals', in which he praises the simple honesty of animals compared to people's desire for possessions or religious servitude.

However, the interpretation of the piece is complicated by the text being heard alongside an ever-changing image flow. This image flow is automatically constructed as the piece executes by the random editing of sections of randomly selected film clips from a database. As such, each time the piece is viewed it might be different. This is of course the case with most hypertexts, unless they are strictly linear.
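
The generative mechanism just described can be sketched in a few lines of code. The clip names, frame counts and edit logic below are invented placeholders of my own, not the actual assets or algorithm of Let Us Turn; they are meant only to show how random selection from a database guarantees that successive viewings differ:

```python
import random

# A sketch of the generative mechanism: each 'viewing' randomly selects
# clips from a database and randomly trims a section of each, so no two
# runs assemble quite the same image flow. All names are placeholders.
CLIP_DATABASE = ["cows_01.mov", "trees_02.mov", "field_03.mov", "sky_04.mov"]
CLIP_LENGTH = 120  # frames available in each source clip

def assemble_sequence(n_clips=3):
    """Return a random edit list of (clip, in-point, out-point) tuples."""
    sequence = []
    for clip in random.choices(CLIP_DATABASE, k=n_clips):
        start = random.randrange(0, CLIP_LENGTH - 24)            # pick a section...
        end = start + random.randrange(24, CLIP_LENGTH - start)  # ...at least 24 frames long
        sequence.append((clip, start, end))
    return sequence

# Two 'viewings' of the piece will almost certainly differ:
print(assemble_sequence())
print(assemble_sequence())
```

Because every clip choice, in-point and out-point is drawn afresh on each run, the two printed edit lists will almost never coincide, and this is precisely the property that distinguishes such works from a fixed film print.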

One way of understanding works such as these is by recognising the active role that technology has in creating meaning in the piece. In her article ‘I, Apparatus, You’, Jenny Weight utilises the phrase ‘text-as-apparatus’ to highlight the active role in the meaning-making process that technology plays in such works, thus pointing to the shift in the relationship of the author with the audience to one that is substantially mediated by the actual hardware and software that are utilised both to construct and view the work.

The text-as-apparatus thus becomes a crucial meaning-giving component that must be brought into the interpretative matrix. Although Weight agrees that it is a controversial move to situate the apparatus in a central position within human dialogue, as it is not conscious, she argues that it 'originates signs for someone else to interpret' and also 'reacts to the signs originated by human interlocutors. It operates within the "intersubjective motivational context" in which social interaction takes place'.

The implication for those interpreting works such as Let Us Turn is clear. We need to recognise the work of the coding beneath the surface, as it were, in partly generating the meaningful aspects of content, and gauge how much authorial control is exerted in the final outcome. The combination of text (as spoken word) and algorithmically coded image-flow calls for a new way of reading and interpreting Let Us Turn. The work overflows the interpretative power of theory that focuses on the purely textual.

With works that utilise many forms of representation, such as text, image, animation, audio, network artefact, and algorithm, as well as diverse ways of linking lexia, a revaluation is required of the way in which we read. Such a revaluation is proposed by Jessica Laccetti, who puts forward a new literary tool, which she calls multi-mimesis, that seeks to take into account the multiple forms of interpretation required for fully understanding online works:

What we aim to show with the theory of multi-mimesis is not so much that in order to understand born digital works it is not that we need a completely new form of literacy, but that we can apply existing conventions and ways of reading simultaneously . . . We need to be able to read/understand/grasp various mediums (or modes) at once. It is in this way that interpreting born digital works is different.

Laccetti asserts that for works which utilise the many forms of representation outlined above 'readers need to be able to navigate the multiple representational devices simultaneously, thus multi-mimesis refers to a way of interpreting multiple modes of representation simultaneously occurring'. Again, we must point to the fact that these kinds of works differ, to a greater or lesser degree, upon each reading. For example, a work such as Listening Post, which utilises found text objects on the network, will vary constantly and to a greater extent than a more 'closed' work like Let Us Turn. The amount of variation tends to depend on the size of the database and the kind of algorithm used to construct the piece.

Laccetti calls for a broader reading perspective than simply analysing the literary aspect of such works. For example, elements of film theory may be used to augment textual analysis. One way to read Let Us Turn is to try to interpret it in terms of floating motifs. This is an idea from film theory concerning elements that are in themselves strictly meaningless but which can, when repeated, acquire a range of expressive implications according to context. Slavoj Žižek explains this further in a discussion of Eisenstein's Ivan the Terrible:

The most interesting moments… occur when such motifs seem to explode their preordained space. Not only do they acquire a multitude of ambiguous meanings no longer covered by an encompassing thematic or ideological agenda but also, in the most excessive moments, such a motif seems even to have no meaning at all, instead just floating there as a provocation, as a challenge to find the meaning that would tame its sheer provocative power.

The random flow of images pulled from the database, coupled with the rhythmic, almost repetitive speech in Let Us Turn, creates a synaesthetic blending that resists solely textual interpretation but could be read as creating such motifs. Another aspect of the work that will affect reading is the shadowy and broken texture of the images. The use of Super 8 film was an intentional ploy to heighten the sense of the uncanny generated from a few simple images of cows and trees.

Central to understanding this piece and indeed many new media works is recognising the central role of the database. Lev Manovich has argued that we should regard the database as a new symbolic form of the computer age as it provides a new way to ‘structure our experience of ourselves and of the world’. The structuring of the world that the database and algorithm construct in such works tends towards one where narrative becomes elusive, often giving way to multiple and sometimes contradictory meanings. Often the experience of reading such a text is like exploring a topographical search space. As Weight asserts, ‘it may be better to conceive of texts in the text-as-apparatus as environments rather than traditional narratives’.

From this cursory reading of just one example of the many forms of new media involving writing (hypertext, generative poetry, and so on), questions arise as to whether we should regard such works as predominantly literary. Are there too many boundaries being blurred here? When we need to call in elements of film studies and computer science to interpret a work, should we still regard that work as literary? Answers to such questions are bound to divide opinion and are fundamental to research activity currently investigating the fledgling notion of transliteracy: 'Transliteracy is the ability to read, write and interact across a range of platforms and tools from orality through print, TV, radio and film, to networked digital media'.

If such works are to be studied as literary, indeed as a new form of literature of the information economy, then, as the theories outlined above demonstrate, we need to take account both of how writing sits alongside other representational forms and of how it is affected by the mediation of technology. It is these kinds of questions that transliteracy aims to address.

There are many issues related to those outlined above, especially regarding work distributed via a network, which point to the difference between this form of literary output and previous ones. For the author, the fact that they are unable to control the actual environment in which the work is displayed becomes pertinent, as Michael Atavar observes in an interview: 'things crash, monitors are different and so change colours, browsers don't display correctly. It's a very challenging environment in which to make art practice'.

The fact that the majority of works are distributed freely as part of the gift-economy of the network, and that the technologies utilised come and go at an alarming speed, also marks a significant variation. By pointing to just a few of the kinds of issues that the new media writer faces, I hope to have made it apparent that there is a new form of literature of the information age, one which demands new strategies of creation and reading.


  1. Trace Archive [accessed 15 February 2007]; Electronic Literature Organisation [accessed 15 February 2007].
  2. I use the phrase new media writer here to denote an author who creates literature that requires digital technology for its existence. Another phrase which perhaps brings out this relationship more strongly is author-as-programmer.
  3. Available online here [accessed 15 February 2007].
  4. Jenny Weight, 'I, Apparatus, You', Convergence 12:4 (2006), pp. 413-446.
  5. Weight, 'I, Apparatus, You', p. 415.
  6. Jessica Laccetti, 'Re: Multi-mimesis', personal email communication, [5 January 2007].
  7. Laccetti, 'Re: Multi-mimesis'.
  8. See The Ear Studio website [accessed 15 February 2007].
  9. Slavoj Žižek, Organs Without Bodies: On Deleuze and Consequences (London: Routledge, 2004), p.6.
  10. Lev Manovich, The Language of New Media (Cambridge, MA: MIT Press, 2002), p. 219.
  11. Weight, 'I, Apparatus, You', p. 434.
  12. Sue Thomas, Transliteracy homepage [accessed 2 January 2007].
  13. Michael Atavar, Interview with Simon Mills [accessed 2 January 2007].

French writers, both literary and theoretical (and who, more than the French, have so thoroughly and consistently challenged this distinction between modes, styles or genres of writing?), have long been at the forefront of reflections on technology. However central the railroad and locomotive may have been to the American cultural and geographical imagination in the nineteenth century, for instance, few literary evocations of the railways have as much resonance as Émile Zola’s La Bête humaine (1890). Zola’s novel is at once thoroughly embedded in the world of steam railways as realist setting – the result of extensive and painstaking research in the milieu – and profoundly infused with the technological imaginary of the steam locomotive. And if the trains in La Bête humaine are metaphorical – as they are, through and through – they are symbolic at once of technological progress and the development of the new, mobile business and leisure classes, and, at the same time, of that which is most archaic and unsophisticated, of the inherited instinctual desires that drive humanity blindly, belligerently forward.

The novel’s unforgettable closing image – in which a trainload of drunken soldiers on their way to war on the Prussian Front remain blithely ignorant of the fact that their locomotive is running out of control, the driver and fireman having wrestled each other from the machine in a pointless dispute – is a perfect condensation of the terrifying ambivalence of technology. When the image of a runaway train hurtling out of control was reprised recently in Steven Spielberg’s adaptation of H. G. Wells’s War of the Worlds (2005), it had lost none of its evocative power.

In the contemporary literary sphere, French writers are again in the vanguard of writing technologies, albeit writers who to some extent reject their national cultural inheritance and willingly align themselves with the more American genre of science fiction. Michel Houellebecq and Maurice G. Dantec, respectively one of the most successful and one of the most ambitious novelists working today, are exploring in writing some of the most pressing questions of our current, and future, relationships with technology.

The most compelling question of Houellebecq’s fiction is how biotechnology might enable, or perhaps enforce, an abandonment or surpassing of individual subjectivity as we know it. The future visions of Atomized (Les Particules élémentaires, 1998) and The Possibility of an Island (La Possibilité d’une île, 2005), with their cloned neo-humanities liberated from the drives for social, economic and, above all, sexual competition, exist somewhere between utopia and dystopia as they are classically conceived in science-fiction narrative and theory, exuding a kind of eerie – but not exactly sinister – calm that may ultimately question the very need or desire for narration, for representation, for writing. But how does the curiously flat tone of Houellebecq’s writing – complete with his deadpan sense of humour – relate to such technological considerations? And how does this seemingly featureless and impassive style – and, by extension, the much-maligned ‘blankness’ of a postmodern generation of writers, thinkers or actors – relate to the conceptions of post-war French theorists such as Roland Barthes, who identified a kind of ‘blank writing’ (‘écriture blanche’) or Maurice Blanchot who endlessly theorised the ‘worklessness’ or ‘unworking’ (‘désoeuvrement’) caused by the silence at the heart of literature?

The prolific and controversial novelist Maurice G. Dantec, meanwhile, has, for a decade or more, been conducting an urgent enquiry into the effects, the possibilities and the dangers of our current technological reality. On one hand, Dantec documents the social atomisation implicit in a thoroughly technologised society, through his brutal depictions of criminal networks engaged in serial murder on an industrial scale, existing as the negative counterpart to the commercial and political networks of legitimate society and making full use of information technologies at once to expand, document and market their operations, and to control and conceal them (La Sirène rouge, 1993; Les Racines du mal, 1996).

On the other hand, Dantec suggests that it is only through the accelerated, ungoverned development of technology, in direct but unprogrammable relation with the unpredictable evolution of organic life, that humanity will escape from its current amoral impasse through the emergence of its successor. Hence the range of post-human characters and concepts in Dantec’s work: from an artificial intelligence interface that evolves something like consciousness and escapes the control of its operator (Les Racines du mal); through a set of twins, mutated through contact with a virus and with their schizophrenic surrogate mother, born with a super-evolved global consciousness (Babylon Babies, 1999); to a part organic, part digital life-form that exists only in and through its connection to the global information network (Cosmos Incorporated, 2005).

At the same time, Dantec never stops asking what role literature and religion may have to play in this technological future – literature as religion and religion as literature. In his diaristic laments on the decline of western civilisation, as well as in his science-fiction prophecies, Dantec foresees the onset of a new Dark Age: as the twenty-first century succumbs to a new series of wars of religion, accompanied by accelerating environmental catastrophe, humanity's true inheritance of science and philosophy, literature and theology is to be preserved by a generation or more of guardians who will be not so much clerics as warriors in the service of a future for humanity.

The greatest weapon in the war for the human soul will be a library – Dantec’s Bibliothôgon – an ideal collection of inscribed wisdom to be used against a new breed of heathens, philistines and infidels. If writing retains such power, it is because – in a tradition drawn advisedly from the religions of the book – writing names the real possibility of creation, as opposed to the simple reproduction, or culturally enforced creativity of a technological culture in which even the most market-oriented hardware manufacturer urges us to ‘go create’.1 If writing is thus – still and again – a technology, there is a sense, in Dantec’s work, that, as the oldest, darkest, and most mysterious of technologies – as the technology of technologies – it may allow the unbinding and defusing, the deforming and rethinking of technology.

Writing Technologies, with an intentional ambiguity about both terms, proposes in its series of questions nothing less than a sustained interrogation of the very basis of ‘texts’. And this is surely something most of us welcome, faced as we are with increasingly new (and often bizarre) versions of what Anna Everett termed ‘digitextuality’ (2003).

I see Writing Technologies as a space where debates about the local and regional miscegenation of writing 'habits' with global software and technologies can be conducted. Techno-criticism, as the editors suggest, relates in different ways to more established critical systems. It is possible that techno-criticism of the new ICTs will increasingly be called upon to relate to postcolonial studies, given the circulation of these technologies in postcolonial nations and cultures. And it is this aspect of techno-criticism that is of interest to me as a cultural critic based in India.

Writing, as always, transforms what it writes about and what it writes with. It is important to ensure that any theory of digitextuality foregrounds textuality within larger cultural practices of writing and signification and does not just focus on the technologies and software alone. The ‘literature of the new informational economy’ of which the editors write is, of course, culture-specific, despite, or perhaps in part because of, the extraordinary dominance of Microsoft and American English spelling. Engagements with new global (and globalizing) technologies often result in new forms of writing that adapt techniques and practices that are local, particular and singular. Compuspeak with its ‘universal icons’ and deployment of English has, like all ‘conventional’ writing and literature in history, been appropriated and ‘morphed’ by local modes of narration. In terms of content, cyberspace might just remain determinedly local too.

An instance is 'Cybermohalla', a popular digital culture initiative by the Sarai project (based in India's Centre for the Study of Developing Societies, New Delhi) and Ankur, an NGO. Writing and signification are altered in many ways here, from the location of the physical object (the PC) in a 'Compughar' ('ghar' meaning house in Hindi) within a project area to the writings themselves.

‘Travellers’ have created stickers, scratch books and diaries, mostly dealing with local issues expressed in local slang and the aleatory mode favoured in conversations in India. The Cybermohalla diaries in the print version of the Sarai Readers document local individuals’, families’ and communities’ responses to technology – from the arrival of fluorescent light bulbs to computers and multimedia.

It is, I believe, a new mode of sociability that does not efface the face-to-face in favour of the virtual, but builds the virtual through the intimate and the corporeal. To me this cybercultural turn or twist to the local is a productive engagement with and counter to the ‘digital divide’ as it foregrounds the subjectivities of individuals in cyberspace. This form of sociability where minorities, the marginalized and often the minimally literate can record their experiences appears to be a technology that furthers democratic debate. It is a good example of what Lev Manovich terms ‘meta-media’, the ‘remixing of interfaces of various cultural forms and of new software techniques’. Cybermohalla mixes street language with English, the topos of the ‘sarai’ and the ‘mohalla’ (literally ‘locality’) with that of a-geographic cyberspace, and street conversation and the intimate diary with documentary forms.

This is ‘writing technologies’ of the sort that one always hopes for – the technology of the margins, where the products of transnational corporations and global finance are used to write the local. Cybermohalla broadly addresses some of the themes and questions the editors raise. Maybe a sustained examination of this glocal digital domain will enable us to understand the complexities of digitexts better. It is, as the editors point out in their piece, the reshaping of local and national identities by global technology. But it is also, at a very basic level, a ‘resistant recoding’ of such technologies.

  1. Anna Everett, 'Digitextuality and Click Theory: Theses on Convergence Media in the Digital Age', in Anna Everett and John T. Caldwell (eds), New Media: Theories and Practices of Digitextuality (New York and London: Routledge, 2003), pp. 3-31.
  2. See the Sarai website [accessed 18 December 2006].
  3. 'Sarai' in Hindi means an enclosed space in a city or more commonly beside a highway where travellers find shelter; it signifies a meeting place, a tavern and a place of rest in the middle of a journey.
  4. See, for instance, Ravi Vasudevan et al (Editorial Collective), eds, Sarai Reader 03: Shaping Technologies (New Delhi: the Sarai Programme, CSDS and Amsterdam, The Waag Society for Old and New Media, 2003).
  5. Lev Manovich, 'Understanding Meta-Media', CTheory [accessed 18 December 2006].

What impact are epublishing and other online modes of production and distribution having on patterns of reading? Is the eBook revolution, much hyped at the beginning of the century, failing to threaten printed textuality in the way many feared?

Jean Baudrillard, in his article ‘Violence of the Virtual and Integral Reality’, expresses his concerns about the electronically powered world of interactivity to which we now seem, more than ever, to be drawn. He states:

Machines produce only machines. The texts, images, films, speeches, and programs that come out of computers are machine products. They have the features of machine products: they are artificially expanded, facelifted by the machine; the movies are full of special effects, the texts full of lengthy passages and repetitions, which are the consequences of the malicious will of the machine to function at all costs (for that is its passion), and of the operator’s fascinations with the limitless opportunity of operating the machine.

This urgent proclamation of the evils, or of the illusory complacency, that twenty-first-century digital technologies generate is outweighed by the enthusiastic response of a number of artists, web designers, hypertext or digital narrative generators, and cultural theorists who proclaim the beneficial role that digital technologies are destined to play in the near future as regards the development and evolution of our literary habits. J. Yellowlees Douglas, one of the first female hypertext authors, states that 'while interactive narratives do not generally reward random explorations of the text… they offer readers a series of options for experiencing the plot, rather than the singular skein that connects print novels and stories'.

The conceptions and misconceptions surrounding the future of print and electronic literary production can be evaluated by looking at one of the primary studies conducted in the early 1990s by Nicole Yankelovich, Norman Meyrowitz, and Andries van Dam. As stated in their research outcomes, printed matter is regarded as disadvantageous, since 'readers can never alter its content, cannot customize information [and] cannot conform to user preferences [as they are] limited to 2-D information, static text and graphics'; electronic texts are, in contrast, considered to be more 'aesthetically appealing' and 'easy to read', allowing for browsing and exploring, annotation and underlining, as well as high-resolution print and graphics.

As is evident from the views stated above, what concerned literary critics and computer analysts in the 1990s was not so much the literary depth and aesthetic substance of electronic narratives but their technical novelty and capacity for storing data. Whether this is still considered a legitimate stance remains to be seen, since it is too early to comment on the literary strengths and weaknesses of electronic narratives as they are going through a transitional phase. However, if we were to view the print versus electronic controversy from a technological point of view, we would be overwhelmed by the latter’s technical capabilities and future web-related potential.

For example, the possibility of constructing a free-to-access HTML space, where segments of text would co-exist within a collaborative and communal e-textual environment, would be what a web designer would wish for.5 Moreover, how innocent is this kind of statement if one takes into account the subscription fees requested when one wishes to access certain specialized newspaper articles or encyclopaedia entries online? Will it ever be possible to assess the quality of the information contained in an online collaborative textual project, or will it be the plurality of opinions – not their quality – that would matter most instead?

With a World Wide Web mainly geared towards profit – itself a continuously evolving software environment, always in need of updates and of sophisticated hardware to run – it is difficult for us today to imagine a cybernetically run textual databank that would be wholeheartedly resistant to the commercialization and commodification of the ideas and products advertised and circulating there. Whether this will affect the way we relate to an e-textual environment, how we perceive it on the basis of how it is written and read, is still uncertain.

Caitlin Fisher in her article ‘Electronic Literacies’ poses the following questions:

How do digital technologies and new media tools modify the relationships between language, texts, and culture? How do we speak to one another, now? What are the benefits of reading digital text as a material mode of creating shaped by ideological concerns? What is the future of storytelling? In short, how will our encounters with new digital texts and possibilities challenge and change us?

These questions open up a whole new territory of investigation where emphasis is no longer placed on the creator and the object of creation, but on the process of the object’s making and structure. This makes us realize, among other things, that what matters here is not whether print or electronic narratives are good or bad, or whether they are complying with certain literary and aesthetic criteria, but the extent to which the unanimous and universal appeal of digital technologies today will alter the way we think and feel about ourselves and others, as well as alter the way that facts and opinions are exchanged amongst the members of an online community.

These concerns become evident in Fisher’s article when she talks about the need for the emergence of ‘a new kind of literacy’ which will rely not only on the reading and evaluation of the material to be posted on the web, but also on exhibiting the ‘invisible intellectual labour’ that has gone into it; this is what she calls ‘thought sculptures’.7 What Fisher claims here heralds the advent of a new range of reading and writing habits. These habits open up new vistas for the understanding and appreciation of electronic textualities by gradually moving beyond the ‘conventional’ or ‘progressive’ rivalries that have, until recently, dominated literary discourse about the future potential of print or electronic textual tools and practices. Nevertheless, it is still questionable whether this new kind of literacy can function within a real-time online community (not determined by certain geographical, cultural or political criteria) or be solely enjoyed by the members of academia.

When Michael Joyce published his hypertextual narrative afternoon, a story in 1987 (at a time when the World Wide Web had not yet been introduced), everyone confronted it as a technological follow-up to print culture: a technological means by which print would extend its trajectories by altering the relationship that already existed between text, author and reader. In particular, the term hypertext referred to ‘the creation of interactive literature: stories, novels, and poems that require readers to make choices as they navigate a text’ – in Joyce’s case via a program saved on CD-ROM. In his reading instructions, Joyce claimed that with this text he invited readers to take an active part in the way the story was narrated, either by following the storyline presented to them on the screen or by choosing between various story scenarios, each one bearing a different title as listed in the hypertext menu.

He also added that the story could change according to the reader’s decision either to hit the enter button or to type their responses in the dialogue box provided, thereby affecting the way the story would develop. In addition, readers were offered a few reading tools – Yes and No, Link, History and Bookmark buttons – as well as the option to save their place in the hypertext so as to resume reading at a later stage.

As for the way the story was written, readers were now presented with a paragraph or a short dialogue which appeared on the computer screen rather than on a printed page. So every time readers hit the enter button, a new paragraph or paragraphs, dialogue or dialogues would appear, as if jumping from one section of a book to another. Although hypertext narratives seem to bear the name of a particular author, it is the reader himself or herself who is placed in a prominent position, since s/he is the one who now decides how the story will develop or evolve, choosing from a repository of textual segments that can be combined in a variety of ways according to the reader’s own textual prompts.
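The navigational apparatus described above – titled segments, a default ‘enter’ path, typed responses, and a History of visited nodes – amounts to a directed graph of text fragments. As a minimal sketch (the class and field names here are illustrative, not Joyce’s or Eastgate’s, and the sample text is placeholder rather than quotation), such a structure might be modelled as follows:

```python
class Lexia:
    """One titled segment of text, with links keyed by reader response."""
    def __init__(self, title, text, links=None):
        self.title = title          # entry as listed in the hypertext menu
        self.text = text            # the paragraph displayed on screen
        self.links = links or {}    # reader response -> title of next lexia

class Hypertext:
    """A repository of lexias plus the reader's current position and history."""
    def __init__(self, lexias, start):
        self.lexias = {lx.title: lx for lx in lexias}
        self.current = start
        self.history = [start]      # analogous to the History button

    def read(self):
        return self.lexias[self.current].text

    def choose(self, response=""):
        """Follow the link matching the reader's typed response; the empty
        string stands for simply hitting enter. If no link matches, the
        reader stays where they are."""
        lx = self.lexias[self.current]
        nxt = lx.links.get(response, lx.links.get("", self.current))
        self.current = nxt
        self.history.append(nxt)
        return self.read()

story = Hypertext(
    [
        Lexia("begin", "Opening paragraph.", {"yes": "branch-a", "": "branch-b"}),
        Lexia("branch-a", "The yes branch.", {}),
        Lexia("branch-b", "The default branch.", {}),
    ],
    start="begin",
)
```

Reading the same repository twice with different responses yields different story paths, which is all the ‘interactivity’ of such a narrative amounts to at the level of data structure: selection among pre-authored links rather than generation of new text.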

However, if one removes the excitement that the intervention of technology injects into the hypertextual experience already described, one realizes that Joyce’s narrative is part author-led, part reader-led. The number of narrative storylines that this hypertext includes is limited, although variable, given its 539 textual segments and 951 links. In this sense, Joyce’s hypertext does not differ that much from a print-bound text as far as textual integrity, browsing and exploring, typography, and graphic design are concerned. What its user-friendly technology allows readers to do is to look for or choose their own story paths, as well as appreciate the significance of following or requesting their own links.

Yellowlees Douglas notes that theorists ‘examining the process of reading from disciplines outside literary criticism… have claimed that reading is driven by readers’ needs to fill in gaps or spots of indeterminacy in the text’, although in the case of reading a printed book this process is often bypassed since it is automatically performed. In the case of hypertext technology, readers are allowed an overview of the links followed so as to choose a different path the next time they read the hypertextual narrative. In this manner, the plot of the narrative as well as its outcome is constantly altered and its interactive quality enhanced as it is able to satisfy a far larger group of readers.

Joyce characteristically states:

Like any reader and writer I still love the fetish of print, the beautifully bound volume, the sensuality of text. Increasingly I also value the vibrancy of electronic text, the dynamic of it…. In time there will be beautiful, even sensual, electronic objects which are utterly portable and transmutable… in ways that we cannot yet imagine for the book even after centuries of imagination of its beauties. Perhaps at that time we will have to see books for their multiplicity rather than their authority learning from electronic media to appreciate that their lastingness was not in their supposed canonicity but rather their actual community.

Academic and scholarly circles have experienced, and still experience, the benefits of electronic literary databases and archives where lengthy documents and literary texts ‘can be downloaded and used for personal and educational purposes without constraint’. Even if we accept the fact that the appearance of such databases may in the near future lead to the ‘democratization’ of literature – due to its distribution to everyone who can access the web and afford the technological equipment required – one cannot help wondering what will happen to library manuscripts and archival collections.

The danger inherent in such an endeavour is that prioritization will be given to certain documents over others, as well as to the ability to view certain documents at the expense of others. Silvio Gaggi claims that ‘If one is in the habit of getting all the information one needs conveniently from a terminal in one’s home or office, one will be less likely to go to a library or archive in order to seek out some text that isn’t available on the network’. In the long run, this may affect the way we interact with a printed document in terms of our reading and writing habits.

No longer forced to read a text in order to locate a particular section or paragraph – ‘thanks’ to our electronic communications we are now able to copy and paste the section or paragraph we are after (notice the diversity of primary sources locatable with YouTube, Google, or Google Scholar, among others) – our critical ability will diminish. In addition, our writing ability will cease to be consistent and coherent, since less attention will be paid to how a literary document is conceptualized, planned, and organized. As for the equipment required for transforming printed matter into a byte-sized file, for transferring a printed document online, or even for designing an HTML page, this can only be achieved by employing software at a high cost. As a result, certain social and cultural groups are prevented from accessing and benefiting from such data retrieval technology.

As already stated in the case of Joyce’s narrative, it is its supplementary value, rather than its annihilating power in relation to printing practices, that renders hypertext technology or hypertext narrative practices important within the context of print literature development. In this light, some of the larger publishing companies, such as Norton, have already acknowledged its importance by including certain hypertexts in their literature anthologies.

Also, Jay Bolter, an advocate of hypertext technologies, suggests that ‘network culture could assist and not displace the culture of the book… by helping serious writers build communities of readers’. This is exactly what Joyce’s afternoon, a story had attempted to do by bringing together a variable number of readers who were willing to contribute to, or participate in, this kind of literary, but electronically assisted, experience. Whether this kind of endeavour can be realized under the auspices of online technologies is still difficult to determine at a time when the preservation of literary quality and sustainability is more pertinent than ever. It is important, though, that we do not let ourselves be carried away by our technophobia and conservatism towards electronic technologies, by our unrestrained optimism about their interactive strengths, or by proclamations about their fostering of intellectual freedom.

With the Virtual described as ‘the ultimate predator, the plunderer of reality’, Baudrillard goes on to inform us that the world we are heading towards will have nothing to do with the world we are ready to leave behind. The new digital order which is about to emerge is neither as different nor as revolutionary as it claims to be, but an artificial replica: a distant – yet technically manufactured – echo of the unrestrained and free circulation of ideas and intellectual exchange that it promised.

Gaggi observes that in a digitized world ‘interaction will be meaningless… because it will present itself as an enlargement of choices but limit those choices to the trivial, it will really be a means of control disguising itself as freedom’. However, the plethora of printed and electronic matter surrounding us nowadays keeps on reminding us that there is still a textually active world around us, ready to tackle and comment upon any social or political issue on a global scale. In particular, reading and writing are acts in their own right, keeping us close to the material essence of reality by preserving our individual integrity as well as our intellectual capacity and uniqueness. Italo Calvino writes that ‘the lesson of a myth is in the literalness of its narrative’. Whether digital technologies insidiously desire to distance us from a literally textual world altogether remains to be decided.


  1. Jean Baudrillard, 'Violence of the Virtual and Integral Reality', in Light Onwords / Light Onwards. ed. by B. W. Powe (Canada: The Coach House Press, 2004), p. 133.
  2. J. Yellowlees Douglas, The End of Books - Or Books without End?: Reading Interactive Narratives (Ann Arbor: University of Michigan Press, 2000), p. 46
  3. Nicole Yankelovich, Norman Meyrowitz, and Andries van Dam, 'Reading and Writing the Electronic Book', in Hypermedia and Literary Studies, ed. by Paul Delany and George P. Landow (Cambridge, Massachusetts: The MIT Press, 1991), p. 54.
  4. Yankelovich et al., 'Reading and Writing the Electronic Book', p. 54.
  5. George Landow in Hypertext: The Convergence of Contemporary Critical Theory and Technology writes: 'I contend that the history of information technology from writing to hypertext reveals an increasing democratization or dissemination of power'. George P. Landow, Hypertext: The Convergence of Contemporary Critical Theory and Technology (Baltimore: The Johns Hopkins University Press, 1992), p. 174.
  6. Fisher, 'Electronic Literacies', in Powe, ed., Light Onwords / Light Onwards, pp. 93-4.
  7. Fisher, 'Electronic Literacies', p. 98.
  8. Silvio Gaggi, From Text to Hypertext: Decentering the Subject in Fiction, Film, the Visual Arts, and Electronic Media (Philadelphia: University of Pennsylvania Press, 1997), p. 122.
  9. Yellowlees Douglas, The End of Books, p. 29.
  10. Shady Cosgrove, 'From an interview with Michael Joyce', afternoon, a story. CD-ROM. Watertown, MA: Eastgate, 1987.
  11. Gaggi, From Text to Hypertext, p. 116.
  12. Gaggi, From Text to Hypertext, pp. 117-18.
  13. See, for example, Paula Geyh, Fred G. Leebron, and Andrew Levy, eds, Postmodern American Fiction: A Norton Anthology (New York: W.W. Norton, 1997).
  14. Stuart Moulthrop, 'Pushing Back: Living and Writing in Broken Space', MFS Modern Fiction Studies 43:3 (1997), p. 669.
  15. Baudrillard, 'Violence of the Virtual and Integral Reality', p. 125.
  16. Gaggi, From Text to Hypertext, p. 121.
  17. Italo Calvino, cited in Baudrillard, 'Violence of the Virtual and Integral Reality', p. 140.

Wakon yōsai is a term that students of Japanese Studies can hardly fail to encounter in the course of their academic study. Often translated as ‘Japanese spirit, Western technology’, the term was coined at the dawn of Japan’s modern era in the late nineteenth century. Wakon yōsai has proven itself to be quite a handy term for describing modern Japan’s relationship to the West and to what the West represents: knowledge, science, and advanced technology. The term’s usefulness stems from its character as a pair of compounds made of two supposedly opposing factors – Japan and the West, and culture and technology. This splitting, common in conceptualizing modern Japan, will serve as a rhetorical framework for this essay’s attempt to explain some of the trends in studies of modern Japanese culture and literature in relation to technology.

Japan’s confrontation with the West is an epistemological landmark that widely conditions our understanding of Japanese identity, culture and society. It has become dangerously normative for academics and non-academics alike to argue how Japan has been influenced, if not ‘invaded’, by imported knowledge from the West. In other words, writing about technology in the framework of Japanese literature and culture cannot deviate from the question of the imagined location of Japanese culture that precedes Westernization. It is in fact challenging to envision the location of Japanese culture beyond the dichotomy of Japan and the West, or culture and technology. Is there any Japanese culture outside this dilemma of wakon yōsai? Or, is it important anyway to seek Japan beyond the dual architecture of its modern identity?

Two of the most notable US scholarly writings that elucidate the intellectual history of Japan’s ambivalence about culture and technology are Tetsuo Najita’s ‘Culture and Technology’ in Postmodernism and Japan (1989) and Andrew Feenberg’s ‘The Problem of Modernity in the Philosophy of Nishida’ in Alternative Modernity (1995). Najita states that through translation of knowledge, from China and later from the West, ‘Japanese self-consciousness expressed itself with a primary reference to continuous “culture” and not to technological “work” – the latter, in the final analysis, being like Confucian knowledge attributable to the Other’. Forced to locate a ‘self’ between being and otherness, the Japanese have found their cultural selfhood in the dynamic formation of differences – the otherness they embrace is not the liminal Other but otherness embedded within the Japanese subject. Referring to Yukio Mishima’s essay ‘In Defense of Culture’ (Bunka bōei-ron, 1969) Najita further argues that the high-growth era of Japan in the 1960-70s left the question of ‘culture’ unanswered, and Mishima’s prophetic effort to separate culture from politics failed against the emerging high-consumerism.

Feenberg introduces discussions led by wartime philosopher Kitaro Nishida and his students, collected in ‘The Standpoint of World History and Japan’ (Sekaishiteki tachiba to Nihon, 1942), by arguing that Nishida launches a ‘simultaneous defense of traditional Japanese culture and affirmation of modern scientific-technical civilization’ (170-1). This philosophical trend, Feenberg continues, shares the same pattern with German reactionary modernism that succeeded in conceptualizing technology as Germany’s cultural heritage after World War I. Nishida’s idealized synthesis of cultural tradition and hegemonic technology in Japan was, however, too distant from the political reality of Japan in World War II – the nation was striving to be the father of all ‘Asian’ races and their cultural heritages while becoming the leader of ‘Western’ science and technology. Modern Japan cannot liberate itself from the double bind of the Orient that has also become the West.

What underlies both Najita’s and Feenberg’s arguments is the idea that Japan has constructed its imagined location by differentiating itself from the West. After the high-growth period in the 1960-70s, however, Japan seems to have shifted from an era of crisis in split identity to a culture of synthesis with technology. This ideological shift first appeared in American writings that later came to be called cyberpunk. Japan, which used to serve as the exotic Other of the West through its image of anachronistic spiritualism, now became the global icon of cutting-edge high technology, ranging from robotics and fuel-efficient cars, and Nintendo and Sony, to anime and manga today.

The fast spread of Donna Haraway’s cyborg philosophy, originating from her essay ‘A Manifesto for Cyborgs’ (1985), reinforced the welcoming mood in the US for a post-human identity that problematized the Western epistemological framing of the subject. By 1995, when Mamoru Oshii’s animation film Ghost in the Shell succeeded in the North American market, building on the earlier popularity of Katsuhiro Otomo’s Akira (1988), the equation of Japanese culture with future technology was fairly complete.

Haraway’s cyborg manifesto came almost too conveniently for Japanese and US scholars alike, because what it proposed sounded similar to the identity model developed by Japanese intellectuals, whether Soseki or Mishima, who struggled to hypothesize the subject that is modern yet Japanese. The conceptual confusion stemming from the 1980s technophilic shift was the inversion of historical and geographical order – Japan’s past was presented as the world’s future. It also signified that the Japanese model of modernity re-emerged as the postmodern. The question of culture, left unanswered in postwar Japan, returned in a Western philosophy of the 1980s modelled in part, in a configuration scarcely believable to the Japanese, on Japan’s global industry and pop culture. The decade observed an ironic integration of Japanese culture and technology never imagined by Nishida or Mishima.

For scholars, this crucial shift in philosophy signified that technology had become a core component of Japanese culture. The idea of techno-Orientalism, proposed by Morley and Robins in 1995, best explicated the trend in associating Japaneseness with technology – now the West’s fear of the Orient went hand in hand with the fear of high technology. Toshiya Ueno’s ‘Japanimation and Techno-Orientalism’ (1996) further argued that anime that adopts Asian settings from American cyberpunk ‘reproduces a “Japan” imaginarily separated from both West and East’ by appropriating the Otherness of Asian automatons. The application of cyborg philosophy to Japan actually meant rephrasing the same cultural condition in a new language of technology. The cyborgian philosophy transforms humans into transgressive beings on the metaphysical level, thereby providing us with the deconstructionist illusion of decolorization and desexualization. But the gender and racial signs inscribed on the cybernetic body in Japanese science fiction can hardly be sublimated by a philosophy of technology.

Reading Japanese identity in the technological milieu of film and literature became popular in the 1990s. Some of the earliest academic writings on the subject were Chon Noriega’s ‘Godzilla and the Japanese Nightmare’ (1987) and Susan J. Napier’s ‘Panic Sites: The Japanese Imagination of Disaster from Godzilla to Akira’ (1993). Noriega points out that, whereas monsters in American science fiction films remain as the Other, those in Japanese films ‘challenge our constructions of the self and the other’ by presenting themselves as a product of cultural history with which people sympathize. Contemporaneously Takayuki Tatsumi identified various cyborgian identity models in Japanese science fiction, which are collected into the book Full Metal Apache (2006). Tatsumi genealogizes the metallocentrism of the Japanese body in science fiction from Ken Kaiko’s The Japanese Three Penny Opera (Nippon sanmon opera, 1959) and Sakyo Komatsu’s The Japanese Apache (Nippon apacchizoku, 1964), through Otomo’s Akira (1988) and Shin’ya Tsukamoto’s Tetsuo II (1992), to Korean immigrant writer Yang Sok Il’s Through the Night (Yoru o koete, 1994).

Today one can hardly miss anime or manga that present cybernetic characters, from the serious ventures of Ghost in the Shell and Serial Experiments Lain to the otaku-inducing consumerist hype of militant android angels and maids. Frankly, scholars are overwhelmed by the accelerated production and consumption of anime and manga, and deeply anxious about not being able to valorize the academic quality of this overflowing culture taking its rise from Japan. Perhaps a more disturbing problem for scholars of Japanese literature and culture is that what one says about Japan is no longer about ‘Japan’. Today cultural boundaries are increasingly eroding, as cultural differences are no longer invented through exports and imports across national borders, as in the modern era, but are produced by the acts of consumption in which one participates.

Recent scholarly publications, such as Koichi Iwabuchi’s Recentering Globalization: Popular Culture and Japanese Transnationalism (2002) and Anne Allison’s Millennial Monsters: Japanese Toys and the Global Imagination (2006), examine the globalization and localization of Japanese culture through marketing of anime, video games and so forth. Similarly, the business of configuring Japanese identity through the relationship between culture and technology will undergo changes expected from this gradual erasure of differences between culture and technology. One of the pressing questions for Japanologists will be about how the indigenousness of culture persists while the dominance of technology in defining culture advances.


  1. Wakon yōsai is a sort of travesty on wakon kansai (Japanese spirit, Chinese technology), a term proposed by the scholar of politics and Chinese poetry Michizane Sugawara (845-903). "Sai" signifies ability, aptitude, intelligence, and technology.
  2. Tetsuo Najita, 'Culture and Technology', in Postmodernism and Japan, ed. by Masao Miyoshi and H. D. Harootunian (Durham: Duke University Press, 1989), p. 9.
  3. Andrew Feenberg, Alternative Modernity: The Technical Turn in Philosophy and Social Theory (Berkeley: University of California Press, 1995), pp. 170-1.
  4. David Morley and Kevin Robins, 'Techno-Orientalism: Futures, Foreigners and Phobias', New Formations, 16 (1992), 136-56.
  5. Toshiya Ueno, 'Japanimation and Techno-Orientalism: Japan as the Sub-Empire of Signs', Yamagata International Documentary Film Festival, Documentary Box #9 [accessed 20 March 2007].
  6. Chon Noriega, 'Godzilla and the Japanese Nightmare: When "Them!" is U.S.', Cinema Journal 27.1 (1987), 63-77 (p. 64). Susan J. Napier, 'Panic Sites: The Japanese Imagination of Disaster from Godzilla to Akira', Journal of Japanese Studies 19.2 (1993), 327-51.
  7. Takayuki Tatsumi, Full Metal Apache: Transactions Between Cyberpunk Japan and Avant-Pop America (Durham: Duke University Press, 2006).
  8. Koichi Iwabuchi, Recentering Globalization: Popular Culture and Japanese Transnationalism (Durham: Duke University Press, 2002). Anne Allison, Millennial Monsters: Japanese Toys and the Global Imagination (Berkeley: University of California Press, 2006).

In the final years of the fifteenth century, following the adoption of the mechanical press, the development of re-distributable type, and the use of high-quality paper and oil-based inks, it became possible to produce and distribute texts and images in virtually identical copies. A fundamental product of the replicative capacities of the mechanical printing press was its capacity to produce ‘uniform spatio-temporal images’.

In this respect, we can think of the printed book as a form of memory appliance. It represented a means of storing and recovering complex information and ideas. Of course, printed texts could become corrupted or be reproduced from inferior originals. But print had the effect of ‘freezing’ an idea or a design at one stage of its evolution, which, in turn, made it easier to transmit technical information from one locality to another, or even, through time, from one generation to the next.

Improvement, the process by which a design or an idea could be re-worked so that it became more efficient, or re-designed entirely and applied to an entirely different task, paradoxically, rested on that quality of fixity that seemed so unique to print.

The importance of print to the growth of scientific and intellectual culture in Europe in the sixteenth century has been comprehensively explored over the past few years. Perhaps surprisingly, its importance to technological culture has been less widely appreciated. Printing was, after all, the application of mechanism to the task of generating texts, and hence disseminating ideas.

But it was also an offshoot of advances in metallurgy and the development of metal industries, particularly in southern Germany, in the early fifteenth century. We can, though, only speculate as to the extent to which the enormous growth in the circulation of printed material in both Europe and the ‘New World’ of the Americas (a printing press had been established in Mexico City as early as 1533), even within populations that were largely illiterate, fostered an interest in the mechanical culture which was both generated by, and helped in turn to foster, the spread of the mechanical presses. Walter Ong, however, following Marshall McLuhan, has indicated some of the shifts in mentalités attributable to the advent of the printing press.

For Ong, the printing press heralded the primacy of sight over hearing, the development of indexes and (later) dictionaries, the sense of a book being ‘less like an utterance and more like a thing’, the exploitation of ‘typographic space’ to generate meaning as in a poem such as George Herbert’s ‘Easter Wings’, or space as a marker of silence or absence as in the instance of the famous blank page in Laurence Sterne’s Tristram Shandy (1759 – 1767). Even the development of the idea of the ‘point of view’, personal privacy, private ownership, and the sense of closure associated with literary texts, have been attributed to the advent of printing as a mechanical undertaking.

The social effects of this new form of mechanical labour were incalculable. Certainly, writers and intellectuals now had to be aware, as they need never have been before, of the importance of mechanism, in a practical sense, to the generation and distribution of their ideas. Just as modern authors, in the digital age, have had to acquaint themselves with at least some vestigial idea of digital technology if they are to distribute their words and thoughts either in the traditional form of the book, or via the newer technologies of e-mail, the web page, or the blog, so Renaissance writers became more aware of mechanism as it impinged on their professional lives. The simple fact that, as their works passed through the press, authors were often expected to attend the print shop in order to make corrections to the proof copies of their texts as they were thrown off the machines introduced them to the inky world of mechanisms and their ever more skilful human servants. ‘Professors’, writes Eisenstein, ‘came into closer contact with metal workers and mechanics’, and this inevitable proximity of intellectual and mechanical labour helped to bring about the redefinition of certain kinds of ‘work’.

The print shop, too, represented a means of organizing work and labour that accorded in its outline with that idea of the ‘division of labour’ that can be associated with Adam Smith’s ideas in the later eighteenth century. In the world of manuscripts the many skills and tasks involved in producing a book, which included the raising, feeding, and then slaughtering of animals, the manufacture of vellum from their skins, the mixing of inks from organic and mineral sources which in turn had to be mined, collected, or harvested and then prepared, the process of copying, illuminating, binding and so on, were distributed widely through the community. Producing a manuscript book, in the pre-print era, mobilised a galaxy of seemingly unrelated skills and crafts. Printing, on the other hand, for all that it drew on an equally wide range of distributed tasks, tended to compress activity into a shop structure, which, in turn, involved workers pooling their skills under one roof. Print brought people closer together, allowing them to learn from one another not only in communities of readers, but as producers of objects.

It was not, of course, that Europeans had never before had to work in conformity with a machine. The plough, after all, is a machine of sorts, though we more commonly refer to it, in its earlier forms, as a tool. But the ploughman’s work was solitary. Printing, on the other hand, in common with weaving and spinning, was gradually evolving into a ‘shop’ structure in Europe in the course of the fifteenth and sixteenth centuries. The fifteenth- and sixteenth-century print shop was a place of bustling group activity, where the workers had to learn to adapt their bodies and their minds to labour together, with their activity governed by the rhythm of the operation of the press itself.

The turn of the mechanical screw, quite literally, dictated the pace of labour and hence the rapidity (as well as the quality) with which bibles, almanacs, pamphlets, technical treatises, as well as the more familiar literary and philosophical works of the period could be generated and distributed. As Lucien Febvre and Henri-Jean Martin have commented, those who, in the late fifteenth and early sixteenth centuries, were learning to work with moveable type had to develop an entirely new range of skills. Speed was a factor in this process: ‘to work really fast a compositor has to handle the letters without pausing or looking: he has to become an automaton, just like a modern typist at the keyboard’.

What Michel Foucault has termed ‘the automatism of habit’, by which the body is recomposed in conformity with some exterior force (whether the exigencies of military drill, the factory, or even the school conceived of as ‘a machine for learning’), had its roots in a mechanism designed to press a blank sheet of paper against an ink-covered surface, over and over again.

Authors were by no means aloof from this mechanical process. Indeed, they had to learn to accommodate themselves to the ‘timetable’ or ‘schedule’ that was a further manifestation of mechanical culture.

A common complaint of authors, in the first decades of print, was that the printers and their servants were working too quickly or ‘hedelynge [headlong] and in hast’, as one author put it in 1509, suggesting that it was only with some difficulty that authors adapted themselves to the new pace set by the mechanisms of print, if, indeed, they ever did. In the print shop, the production of books now proceeded at a faster pace, since no printer would have wished the machines to stand idle, waiting for copy or for emendations and corrections to the proofs. That bane of authors and publishers alike, the deadline (and with it the familiar litany of excuses for missing deadlines, or producing poor copy), was an aspect of mechanical culture that can be thought of as an offshoot of the development of print technology.

The idea that a book should be finished on a particular date, rather than when the author judged that the labour was at an end, was an entirely new facet of intellectual work, as was the calculation of the exact rate at which a given work could be printed. Speed, together with accuracy, would become new markers of ‘efficiency’, which would, in the course of time, become a key term in the deployment of machinery. Even one hundred years after the first appearance of the mechanical printing press, the efficiency of this device still had the capacity to astonish those who observed it in operation: ‘it would appear to be incredible if experience did not prove it to be true’, wrote an anonymous French writer some time before 1572, ‘that four or five workers can produce in one day as much excellent script as three or four thousand of the best scribes of the whole world by this most excellent art of printing’.

Of course, this was an exaggeration. As the bibliographer D. F. McKenzie has argued, the output of the early-modern print shop was certainly much lower than print historians once imagined. Nevertheless, for all that it is easy to exaggerate the volume of print production when compared to the production of texts by non-mechanical methods, a new complaint arose among authors, one unknown in the world of the manuscript: that their works had been marred or spoilt by the haste of printers anxious to keep their machines running at capacity.

A Jacobean divine, Samuel Hieron, gives us a taste of the quickened pace of intellectual labour. In the preface to his collected sermons (published in 1614) Hieron explains that he lives ‘farre from the presse, and it requireth much time, to convey sheetes to and fro, betwixt the compositors and me’ and asks the reader to excuse the errors that have crept into his work due to the ‘hast of the printer, and my remoteness from the citie’.

But it was, in the end, the output of the print shops – the printed book itself – which was the true signifier of the arrival of mechanical culture. As Marshall McLuhan has famously argued, ‘every aspect of Western mechanical culture was shaped by print technology’ and he continued:

Printing, remember, was the first mechanization of a complex handicraft; by creating an analytic sequence of step-by-step processes, it became the blue-print of all mechanization to follow. The most important quality of print is its repeatability; it is a visual statement that can be reproduced indefinitely, and repeatability is the root of the mechanical principle that has transformed the world since Gutenberg. Typography, by producing the first uniformly repeatable commodity, also created Henry Ford, the first assembly line and the first mass production. Movable type was archetype and prototype for all subsequent industrial development. Without phonetic literacy and the printing press, modern industrialism would be impossible. It is necessary to recognize literacy as typographic technology, shaping not only production and marketing procedures but all other areas of life, from education to city planning.

One might quibble with many elements of McLuhan’s analysis here. Yet there is a truth to his observations when we come to consider the idea of ‘repeatability’, which would, in time, give rise to the production lines of twentieth-century Detroit, Dagenham, or Tokyo. Print was indeed a ‘mechanism of repeatability’, as McLuhan has (elsewhere) written. By introducing repetition, both as an activity and as an output in the form of the printed book itself, print transformed work, as well as intellectual culture, through mechanical process.

Quite simply, prior to the advent of the printing press, in virtually no other aspect of life had it been possible to contemplate producing any human artefact in considerable quantities of near uniform design, appearance, size, and quality. The sole exception, perhaps, was the potter’s wheel, whose mechanical rotary motion allowed highly skilled craft workers to produce vessels of broadly similar form.


  1. This essay offers a condensed version of some of the ideas pursued at more length in Jonathan Sawday, Engines of the Imagination: Renaissance Culture and the Rise of the Machine (New York and London: Routledge, 2007) chs.1 and 3.
  2. Elizabeth L. Eisenstein, The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early Modern Europe (Cambridge: Cambridge University Press, 1979), p. 81.
  3. See Eugene S. Ferguson, Engineering and the Mind's Eye (Cambridge, Mass.: MIT Press, 1992), pp. 107-113; Thomas J. Misa, Leonardo to the Internet: Technology and Culture from the Renaissance to the Present (Baltimore and London: The Johns Hopkins University Press, 2004), pp. 26-28.
  4. As well as the work of Eisenstein and Bennett (see below), see Mark U. Edwards, Jr., Printing, Propaganda, and Martin Luther (Berkeley and Los Angeles: University of California Press, 1994); Alberto Manguel, A History of Reading (London: HarperCollins, 1997); Peter Murray Jones, 'Medicine and Science' in The Cambridge History of the Book in Britain (Vol. 3), ed. by Lotte Hellinga and J. B. Trapp (Cambridge: Cambridge University Press, 1999), pp. 433-448; Peter Burke, A Social History of Knowledge: From Gutenberg to Diderot (Cambridge: Polity, 2000), pp. 149-196; Ian Green, Print and Protestantism in Early Modern England (Oxford: Oxford University Press, 2000).
  5. Lotte Hellinga, 'Printing' in Hellinga and Trapp (eds.), The Cambridge History of the Book in Britain, p. 69.
  6. On the spread of the printing presses and the diffusion of books, see Neil Rhodes and Jonathan Sawday, 'Paperworlds: Imagining the Renaissance Computer' in Neil Rhodes and Jonathan Sawday, The Renaissance Computer: Knowledge Technology in the First Age of Print (London and New York: Routledge, 2000), p. 1.
  7. See Walter Ong, Orality and Literacy: The Technologizing of the Word (London and New York: Routledge, 1982), pp. 115-129.
  8. On authors attending (or failing to attend) the presses, see: H. S. Bennett, English Books and Readers 1475-1640, 3 vols. (Cambridge: Cambridge University Press, 1970), III, pp. 211-212.
  9. Eisenstein, Printing Press as an Agent of Change, p. 56.
  10. Eisenstein, Printing Press as an Agent of Change, p. 55.
  11. The European heavy plough, the carucca, was to transform agriculture in the middle ages. See Joel Mokyr, The Lever of Riches: Technological Creativity and Economic Progress (New York and Oxford: Oxford University Press, 1990), p. 32.
  12. On spinning and weaving in early-modern Europe, see Ann Rosalind Jones and Peter Stallybrass, Renaissance Clothing and the Materials of Memory (Cambridge: Cambridge University Press, 2000), pp. 104-133.
  13. Lucien Febvre and Henri-Jean Martin, The Coming of the Book, trans. David Gerard (London and New York: Verso, 1997), p. 62.
  14. Michel Foucault, Discipline and Punish: The Birth of the Prison, trans. Alan Sheridan (Harmondsworth: Penguin Books, 1977), pp. 135, 165.
  15. Bennett, English Books and Readers, I, p. 218.
  16. Bennett, English Books and Readers, III, p. 204.
  17. On printing rates, see Bennett, English Books and Readers, II, p. 290. But note the forms of book production introduced in Paris in the thirteenth century, which amounted to an industrialisation of the tasks of copying manuscripts. See Christopher de Hamel, A History of Illuminated Manuscripts (London: Phaidon Press, 1994), pp. 130-132.
  18. Anon, Plaidorie pour la reformation de l'imprimerie (Paris, 1572), fol. 3 r-v, quoted in Henry Heller, Labour, Science and Technology in France 1500 - 1620 (Cambridge: Cambridge University Press, 1996), p. 25.
  19. See D. F. McKenzie, Making Meaning: "Printers of the Mind" and Other Essays, ed. by Peter D. McDonald and Michael F. Suarez, S. J. (Amherst and Boston: University of Massachusetts Press, 2002), pp. 18-56.
  20. Samuel Hieron, All the sermons of Samuel Hieron minister of Gods Word, at Modbury in Devon heretofore sunderly published, now diligently reuised, and collected together into one volume (London, 1614), sig.2.
  21. Marshall McLuhan, 'The Playboy Interview' Playboy Magazine (March 1969). [accessed 12 August 2005].
  22. For a comprehensive critique of McLuhan's ideas, see Eisenstein, Printing Press as an Agent of Change, pp. 16-17.
  23. Marshall McLuhan, The Gutenberg Galaxy: The Making of Typographic Man (Toronto: The University of Toronto Press, 1962), p. 141.

Let me state an obvious yet important premise: writing is intimately connected to technology. Narrative is possible without technology, for it can exist in oral form; one can as easily imagine a story being told around a campfire today as from the beginnings of human civilisation. In contrast, writing and, by extension, the varied and difficult field we have come to know as ‘literature’ can only exist where there is technology. Without technology, writing is merely lines in the sand, glowing symbols in the air, vanishing almost as soon as they are produced.

From the first time a symbol was carved in stone or wood, or drawn on a cave wall with the first rudimentary tools, literature was the inevitable product, a sign not of permanence (what sign is ever permanent?) but at least of endurance. It was the first step along the path, the first link in the chain. From these earliest marks on stone and on parchment to illuminated manuscripts, from these to the printing press, and from there to the more recent emergence of hypertexts and, more debatably, computer-game narratives, technology has always played a major role in what ‘writing’ is.

The above paragraph of course implies a clear distinction between nature and technology; technology stands apart from nature as a means of preserving our narratives from the decay of time. To some, however, it has perhaps stifled narrative, codifying ‘literary’ forms and restricting the organic evolution of stories. To such critics, perhaps, we disappointingly no longer recount vast epics of our communal pasts, no longer bond as a ‘tribe’. I certainly see the logic of the claim that the gradual evolution of oral narratives has been lost as we set down such tales in a more enduring form, although I would disagree with such claims more generally.

One need only look at the vast amount of writing done over the internet to see the endurance of this shared story-making. From blogs to MMORPGs (Massively Multiplayer Online Role-Playing Games), from phenomena such as vlogospheres (video-blog communities) to projects such as Novel Twists, 1 technology enables us to ‘write ourselves’ into a much larger tribe, to share our writing with others on a scale hitherto unimaginable. Equally, it gives us unprecedented control over that writing, and opens up the possibility of engaging at a much deeper level with another’s writing.

This is not to suggest, however, that technology always has a positive influence on writing. This is emphatically not the case. Rather, technologies change the way we write, and the ways in which we think about writing. For instance, although there is a demonstrable development from the printing press to hypertext (they are both aspects of ‘technologizing the word’, as Ong suggests), is there a more persuasive link to be made between illuminated manuscripts and hypertexts, with images, texts, and glosses all available on the same ‘page’?

If the answer to this is ‘yes’, then our paradigm for understanding how textuality differs from hypertextuality must account for this. (See Tolva’s ‘The Heresy of Hypertext’ for an interesting contribution to these debates). 2 Furthermore, it is important to understand precisely what a ‘digital environment’ is and how it affects the production and reception of texts, as well as the way in which key concepts in textual studies must be renegotiated to account for technological changes. It is for these reasons that we must examine the relationship between technology and writing through three key areas: authors and readers, editing and editions, and narrative structure.

How technology affects the relationship between authors and readers will be a central concern in coming years. With the proliferation of internet fan fiction, addressing issues that fans felt were omitted from or elided in the original text (Austen’s ‘pornographic’ texts aside, perhaps), texts become malleable; ‘open-source’ texts, with no clearly defined author, and open to continual rewriting, also make possible new avenues of exploration for redefining the role of the author:

  • Who ‘authors’ an open-source text or computer game? Is it possible to theorise the ‘hyper-author’ as we have the ‘hypertext’?
  • Given its lauding of ‘the democratisation of knowledge’, why is the academy suspicious of Wikipedia, and free-access and online journals? What is an ‘authoritative’ source in a digital environment?

Moreover, we must take up the challenge of defining how the concept of the reader has changed:

  • Do readers truly have power to change texts and in what manner might this empower them?
  • To what extent can potentially damaging texts (that is, those that offer offensive or ideologically ‘dangerous’ perspectives) be neutralised by technological innovations? How problematic is it that this neutralisation depends upon access? How does this relate to free speech and censorship?
  • How does the issue of ‘embodiment’ affect our understanding of the textuality of digital environments? Does the ‘implied reader’ now work on a technological level?

Issues of authorship also have an impact on ‘editing and editions’, by which I mean the way in which texts are linked to each other, as well as the act of editing an author’s works. It is now much more straightforward to produce critical editions online, enabling students and researchers to cross-reference sections of a text for comparison, or to explain unusual words and intertextual relations by hyperlinking different texts together.

This clearly has an impact on writing about literature, but a much more urgent problem is related to textual drafts and revisions. One can easily imagine the loss to literature if William Wordsworth’s website were continually updated to reflect his changes to The Prelude or if James Joyce’s cramped marginalia had been overwritten by the word-processing program he used. We must ask ourselves the extent to which textual information is affected by technology:

  • How can technology improve contextual and intertextual awareness of texts? Does it improve cultural diversity by allowing access to foreign literatures or does the dominance of Anglophone writing online subvert this?
  • Will researchers and students suffer from ‘information-overload’ or loss of clarity because of massively intertextual projects (what might be called MMOEs—Massive Multiauthor Online Editions)? What is ‘context’ and what is ‘text’ if these are linked to digitisation projects of archives and museums?
  • What impact will technology have on our literary heritage and is there a way to minimise the damage? Has ‘endurance’ been replaced by ‘revision’?

Given these questions of who is writing and reading and how they are doing it, an allied area is the extent to which what is written has been affected. Narrative structure is one of the clearest examples of the way in which technology and writing interact and one of the most obvious examples of the effects that technology can have on structure is hypertext literature. In such texts, readers pick their way through the paths on offer and build a story through a hypertext’s ‘forking paths’ (see, for example, The Unknown),3 spatial construction (see Bruce Andrews’ Millennium Project),4 or stretchtext additions.

These texts are part of what Aarseth calls ‘ergodic literature,’ meaning that ‘nontrivial effort is required to allow the reader to traverse the text’ – the name derives from ‘the Greek ergon and hodos, meaning “work” and “path”’.5 This is also partly true of print-based literary forms, however (as Aarseth himself realises), so it is important to establish exactly where the boundaries lie between ‘text’ and ‘hypertext’. Hypertexts perhaps make more explicit the de-centring of the text and the paradigmatic structure of language, as well as the ‘play’ of the text, but the extent to which they innovate or advance our understanding of what constitutes a text is still in question:

  • What are the differences between ‘textuality’ and ‘hypertextuality’? To what extent does hypertext return us to older notions of textual production (text as writing and images rather than ‘just’ writing) or have the potential to update them (text as writing, images, and sound)?
  • Are the dominant themes of hypertextuality—agency, interactivity, spatialisation, intertextuality—a ‘literal reification or embodiment’ of textual practices, and do they really ‘disturb status and power relations’?6 Is hypertextuality only a gloss on these issues or is it embedded in the digitisation or ‘technologisation’ of the text?
  • To what extent are the ‘spatialised’ narratives of texts such as Salvador Plascencia’s People of Paper, Steve Erickson’s Our Ecstatic Days, John Barth’s Coming Soon!!!, or Mark Z. Danielewski’s House of Leaves and Only Revolutions hypertextual? What narratological differences exist between print-based forms using hypertextual strategies and hypertexts?

Furthermore, many of the issues of narrative structure emerge in relation to computer games. Whilst there are debates around the extent to which such games can ever be considered narratives—in many respects, their ‘ludological’ structure is at odds with the narratives we might be able to read into them—reader-players nevertheless determine their path through games. The validity and purpose of games narratology thus face questions such as:

  • What is the ‘text’ of a game—its narrative or its coding? What is ‘medium’ and what is ‘message’ in a game? To what extent do gameplay and narrative interact in a game?
  • What interpretative practices are available to games narratologists? Must different ‘genres’ of games be read in different ways?
  • How do players ‘read’ games? Is this reading the game or reading how the player played the game?

Finally, I wish to conclude on an issue that arises from the premise that ‘writing is intimately connected to technology’. As stated earlier, such an assertion implies that there is a distinction between writing and nature, for writing is inherently technological. If this is the case, what is a ‘digital environment’? Ecocriticism is a recent yet important area of textual studies, examining how nature is constructed by literature and affected by humanity, alongside exploring the significance of ‘place’ in writing. We must not forget that technology is responsible for many of the ecological problems facing our planet and so, perhaps, we must consider ecocriticism in translation to a digital environment:

  • What is ecocriticism in a digital environment? Is it the awareness that there is a real world outside the interface or merely that ‘(hyper-)sign pollution’ exists? Can there be such a thing as ‘digital environmentalism’ or does this make a mockery of environmentalism?
  • How do we ‘place’ or ‘situate’ online writing and hypertexts? Can hypertexts ever enact or perform environmental issues via (deliberately) broken links or via fictional or metafictional strategies because of their spatialisation?
  • To what extent can videogames promote ecological awareness through embodiment (saving the world from ecological catastrophe in Final Fantasy VII) or gameplay (such as pollution in the Civilization games)?

No definitive conclusions can be reached on many of these issues. The writing is not on the wall, for we do not yet know the impact new technologies will have on textual studies; rather, the ‘pixels are on the interface’ and we are trying to understand what they mean.


  1. Novel Twists [accessed 22 February 2007].
  2. John Tolva, 'The Heresy of Hypertext: Fear and Anxiety in the Late Age of Print' [accessed 22 February 2007].
  3. The Unknown [accessed 22 February 2007].
  4. Bruce Andrews, The Millennium Project [accessed 22 February 2007].
  5. Espen Aarseth, Cybertext: Perspectives on Ergodic Literature (Baltimore: Johns Hopkins University Press, 1997), p. 1.
  6. George P. Landow, Hypertext 3.0 (Baltimore: Johns Hopkins University Press, 2006), p. 99.