Accidental Machines: the impact of popular participation in computer technology
Michael Punt
This
article is about computers, electronic communications, and how the
current use of these technologies has been influenced by changes in
the popular understanding of what they mean. It is about the role
of the mass of ordinary users in defining the meaning of science and
its products. It shows how sometimes this is a source of conflict
between the narrow interests of the professional and the broader concerns
of the amateur. Extrapolating from this premise, the article rehearses
a history of the personal computer to suggest how the understandings
that motivate commercial innovators might sometimes be vastly at odds
with those of the consumer. It concludes by proposing that the type
of historical method that we use is important in the process of explaining
technology, and that this is a crucial problem to be addressed by
all those with an interest in what the future of the computer industry
and its products might be.
For most of us the term computer
has come to mean a small inscrutable personal machine no bigger than
the average television set, which sits on the desktop in the home
and office and has become an extension of our professional and recreational
lives. Typically it is used for a number of discrete operations like
word processing, spreadsheet analysis, publishing, multimedia applications
and telecommunications. This machine, commonly referred to as the
PC, but more properly called a microcomputer, is the paradigmatic engineer's 'black box'. Few of us know how the apparatus works in detail,
and, largely, there is little to be gained from the effort of understanding
the finer points of the hardware and its operating systems. This tendency
has been nurtured by the imperative of 'user friendliness' which informs
much good computer interface design. Graphical user interfaces (GUIs)
make it unnecessary for us to do more than decode the occasional acronym
(like ROM and RAM for example)[i]
to which highly complex electronic modules known as chips have been
reduced. The PC is one of the most technically complex and mysterious
pieces of technology that we have to deal with on a daily basis, and
it is not a coincidence that it provides both a metaphor for the
mind and the architecture for working models of human intelligence.
The history of the PC is the story
of the transformation of a specialised piece of scientific equipment
into a popular consumer product. After starting life as an ambitious
laboratory project, the computer is now something that we switch on
and use for either work or play, depending on circumstances. These
machines, in utilitarian plastic cases, have insinuated themselves
into private spaces and provide an interface between the particular
and esoteric concerns of professional science and the uncomprehending
generality of the population. Increasingly they eliminate personal
relations as they are placed between one human resource and another
- answering telephones, tracking accounts, and administering communities.
Since they are such incomprehensible and socially alienating things
it may seem wilfully obtuse to want to insist that non-scientists
were important in the invention of the computer and that they continue to exert influence on its current and future uses. This is all the more so when it is far from clear that computers are greeted with unconditional popular enthusiasm; they are often regarded as a dubious gift from the laboratory that reduces all human experience to bits of data.
The popular response to new and
complex technologies is a sensitive issue for the corporate concerns
in the computer and television industry. It seems inevitable that
the microcomputer will penetrate even further into professional and
domestic life and become the basis for all telecommunications in
the future. Among commercial and industrial interests there is considerable
debate as to whether, in the next few years, the standard domestic
entertainment and information platform will be cable television or
some sort of hybrid Internet link using telephone lines and microcomputers.
Some eminent commentators argue the case that television is on the
point of expiring and that the PC will take its place. Others see
the apparent financial ineptness of the computer industry, and its
inexperience at dealing with the entertainment world, as a significant
barrier to any shift away from the dominance of television as the
main information and entertainment medium. What is at stake in this
question is the perception of the developers of 'front end' products
like user interfaces and even the hardware, which will make different
kinds of information and entertainment available. If designers can
be persuaded to produce attractive products for one platform or the
other (television or the PC), then the battle between the corporate
interests will be half won, irrespective of which technology is better. It
has been understood for some time that without the so called 'killer
applications' new technologies, however good, will not achieve market
penetration. In the 1970s and 1980s, for example, the competition between the VHS and Betamax standards in the VCR (domestic video recorder) market was eventually settled not by technical specifications but by the 'killer
application' of pre-recorded movies. The machine that is now most
widespread (VHS) is technically inferior to its competitors, but
the advantages of access to the huge back catalogue of the Hollywood
studios tipped the balance in favour of JVC's system.
New technology may be the beneficial
fallout from science, which a thriving industrial economy and smart
entrepreneurs can turn into profitable products, but when those products reach the market there are opportunities for new aesthetic forms. The impact
that popular culture has historically had on the form and uses of
technologies is often in competition with economic and institutional
intentions. The domestic video recorder (VCR) may be a successful
product with high levels of market penetration, but its use as a home
cinema stands as an indictment of the failure of broadcast television
to produce new aesthetic forms in response to the opportunities that
it offered. In their absence the VCR resurrected the international
film industry, television's most direct competitor, which was thought
to be in terminal decline during the early 1970s. The interaction
between technology and society is not a top down relationship but
something much more complex in which there is a struggle for what
inventions mean and how they will be used.
Something similar to the VCR story
is already happening in popular telecommunications. The Internet is
not only the consequence of certain technological developments, but
also owes much to astute political manoeuvres by the US government.
It was expanded as an international public resource mainly by enthusiasts
outside the market system working long hours without payment. Not
surprisingly it has not fulfilled the corporate expectations of social
cohesion as a computer-based Citizens Band network, but has become a
conduit for the exchange of radical politics and socially subversive
material. Much of the traffic passing between subscribers is profoundly
anti-technology, anti- establishment and especially anti-American.
The current struggle is between those who see the network as an ideal
forum for social criticism, and those who want to integrate it into
the capitalist system. Although the odds always appear to be weighted
in favour of large institutional bodies, as the case of the VCR shows,
the users of a technology have some determining influence over its
eventual meaning, and it is by no means certain that the Internet
will indeed become the golden goose that many entrepreneurs and large
corporations hope for.
The determining influence of users on the eventual
meaning of technology is not a particularly twentieth-century phenomenon.
David Nye's account of the electrification of America has shown how
the meaning of electricity was negotiated between the various constituencies
who produced it, those who used it and those who opposed it.[ii]
I have shown elsewhere how the very invention of the cinema in the
closing years of the last century was to some extent the consequence
of a crisis in the interpretation of the function of both the professional
scientist and the amateur in the construction of new knowledge. The
thematic of this struggle has never disappeared from popular cinema
and remains in the foreground of many successful movies today. Hollywood
producers currently appear to find the problem of technological change
a compelling topic for major investments. The mainstream 'Blockbuster'
movie may be easily dismissed as aesthetically bankrupt, manipulative
and cynically exploitative of ordinary people's anxieties, but often
provides an archaeological trace of science and popular culture reaching
satisfactory compromise through a mythical reinterpretation of technology.
During the summer of 1996 cinema box office records were being broken
by Independence Day, a film that showed the terrible consequences
of letting technologists run the world. The threatened annihilation of the human race by the superior intelligence of both the aliens and the scientists, with whom they had much in common, was ultimately averted by cunning, physical prowess, and the common sense of the lay person. The success
of Independence Day was closely followed by Mission Impossible
and Twister, films that also pitched the cloistered abstractions
of high science against a practical first hand experience of the world.
In each of these stories it is, finally, an ordinary man in
touch with himself and nature who saves the human race, and it is the professional scientists who are dispatched into oblivion.[iii]
The aftermath of the carnage in each case is a less scientific and
a more humanly centred world. In these films Hollywood, as usual, manages to resolve the pressing and intransigent problem of technology, which is both life-enhancing and dehumanising, by providing us with a satisfactory imaginary solution.
Perhaps the movies are so good at articulating anxiety
about science and technology because producers and film makers share
some of the audience's ambivalence to them. During the early 1970s
it seemed that watching television had made going to the movies redundant.
The film industry fell into deep economic depression and almost bankrupted
itself, but somehow it managed to revive. As Thomas Schatz has convincingly
shown, the
ensuing pronouncements of the 'death of Hollywood' proved to be greatly exaggerated, however, the [movie] industry not only survived [television] but flourished in a media market place. Among the more remarkable developments in recent media history, in fact, is the staying power of the major studios (Paramount, MGM, Warners, et al.) and of the movie itself...[iv]
To even the most casual observer, the familiar logos
at the end of prime-time shows are evidence of the degree to which television production is now under the economic control of Hollywood.[v]
One factor in this revival of fortunes is the new technologies for
encoding and storing data. Computer techniques that reduce production
costs and enhance the product, electronic distribution of both texts
and images, and good-definition television combined with cheap videocassette recorders (VCRs) have ensured that more people than ever before in
the history of the world are watching movies. The proliferation of
television channels, video entertainment, computer games and cheap
publishing has ensured that the film industry has never been more
profitable, and yet the technologies that have made this possible,
especially television and computers, are often demonised in the movies
themselves. The films that Hollywood prefers to make are often stern
reminders to the industry that the very inventions that appear to
make life better can quite suddenly also make it worse. Inasmuch
as movie making is now inextricably tied in with electronic entertainment
media, the ambivalence and anxieties of Hollywood are also not far
below the confident exterior of the personal computer industry.
The uncertainties for the future of the entertainment
industry, precipitated by new network technologies, were highlighted
in the summer of 1996 with a number of well publicised and contradictory
predictions. Concurrent with the success of Independence Day,
for example, Microsoft announced a new software development
that would allow the home computer to become a terminal on the Internet.
This software will access files that are resident on remote machines
(servers) which will appear on the PC desktop as icons and can be
used in exactly the same way as if they were on the resident hard
drive. Apple too has been working on a system called Pippin
that will turn the home computer into a personalised item similar
in size to the Walkman.[vi] These announcements herald the beginning of a new generation of personal computers which will dispense with the keyboard as an interface and use a standard television instead of a
dedicated computer video display unit (VDU). Since this will reduce
the cost of the PC (to around $400) it is hoped that they will be
instrumental in the penetration of computer communications into a
mass market through the entertainment opportunities of the Internet.
Of course this system will incur telephone connection charges, but
increasingly the economics of cable television distribution, based
on selling consumers to advertisers, makes it attractive to offer cheap and sometimes free local calls.[vii]
There is a great deal at stake here, since if this does turn out to
be the future of personal computing, then as both Microsoft
and Apple realise, seizing the initiative could significantly
shift the balance of power between hardware and software producers
and this would have economic implications for the future of television
(and Hollywood).
These two companies are, as ever, competitively responding
to projected hardware trends with the software opportunities that
public access to the Internet offers.[viii]
Philips and The Interactive Digital Appliance Company,
who are respectable players in the hardware side of the industry,
are fighting back. They also see the future of the entertainment market
in a convergence of television and computer technologies and are
developing their products accordingly. By the end of 1996, Inteq promised a 27-inch entertainment machine which '... will be equipped with Zenith NetVision capability based on [a] broad technology platform for information appliances. NetVision's capability will support a range of services, including browsing the World Wide Web, accessing electronic mail and future JAVA terminal applications.' The NetVision set will include a fast telephone
modem, 'picture in picture' image (which allows the television to
be viewed simultaneously with the Internet), and 'Theatre Surround
Sound'. It is expected to sell for around $1000. The manufacturers
do not intend to replace the PC but instead to allow the 'home theatre enthusiast' to '... combine channel surfing with Web surfing'.[ix]
Zenith too is attempting to use the profit potential of the
entertainment market to gain control of the technology through the
convergence of consumer electronics and digital networks. 'NetVision',
they claim, 'will allow consumers for the first time to experience
the Web without the expense or complexity of a PC.'[x]
These new products are based on the assumption that television will
remain the dominant platform for home entertainment.
George Gilder, a prolific cultural commentator on
the relationship between technology and electronics, on the other
hand, forcefully suggests that, for technological, social, and economic
reasons, the corporate preference for television technologies is a
wrong turning. The combining of television and the computer is, in his view, a 'convergence of corpses'.[xi] He argues that the electronic future lies with the
PC, and supports his claim with an historical analysis of the market
which shows that computer centred technologies, which may not have
the show biz appeal of television, are nonetheless more successful
than convergences. The PC market, Gilder points out, is expanding
much faster than was expected, and certainly more than television.
The current problem in the industry, as he sees it, is that much of
the PC's power is diverted away from rapid data management to cope
with compression protocols enforced by narrow bandwidths and this
results in the poor video images that are currently tolerated in teleconferencing
and on the Internet. This, he suggests, will be unimaginably transformed
so that broadcast-quality images will be the standard when broader bandwidths are introduced. In George Gilder's view, once this happens, multimedia will be a realisable goal rather than a sorry pastiche of other modes of entertainment like television, the movies and photography.
Most critically, however, Gilder recognises that the
active participation of a broad public, interacting with the formal
properties of these new technologies is important for their economic
success. The television, he claims, cannot adequately deal with text
and is inherently a 'couch potato' device, principally because it
consumes human capital. By this he means that whereas television
(as an entertainment) is generally used as a distraction from business
and careers, personal computing technology encourages personal growth.
Even when the computer is used for recreation and entertainment,
it tends to develop skills and intellectual competence that are productive
in other domains. This asymmetry between the television and the PC
will tip the balance in favour of the computer as the dominant entertainment
platform. Gilder's further prediction is that, when this happens,
the 'PC age' will recuperate the lost cult of the amateur that preceded
television as people use it to advance particular interests. Moreover
it will stimulate an increase in book culture, which, in spite of
competition from other media, is currently enjoying an enormous economic
success.
Gilder's history and future of digital technology
is influenced by the recent return to favour of so called 'supply
side economics'. The emergence of the PC coincided with some influential
revision in the way that American, and some European, governments
attempted to control inflation. It was felt that only by stimulating
the movement of goods and services would the flagging economies of the industrialised nations be revived. Cutting taxes and government intervention,
it was argued, stimulated investment and re-tooling, which in turn
would promote growth, increase revenues and control inflation. The
fashion for this policy stimulated a restructuring in some industries,
which had been built upon the early twentieth century preference for
vertical integration, in which all parts of the processes of manufacture,
distribution and retail were in the control of a single corporation.
New horizontal structures were put in place which exploited and controlled
the potential of network technologies. The largest corporations reorganised
themselves more as financiers than manufacturers and distributors of products. In the process of this restructuring, limited opportunities for small-scale, low-investment businesses opened up to provide
goods and services at prices and volumes which were controlled by
the mechanics of a market economy. The pros and cons of supply side
economics have been widely debated, but the extent to which George
Gilder's case rests upon a quite specific idealisation of the American
economy is illustrated by Robert X. Cringely's equally compelling,
although fundamentally different, aetiology of the PC and prognosis for its development.[xii]
Since the early 1970s Robert X. Cringely, a gossip
columnist for InfoWorld, has assiduously followed the story
of the impact of individual speculators on the development of the
personal computer. His account of the PC industry is chronicled in
Accidental Empires which was first published in 1994 and revised
in 1996. It describes the growth of the personal computer from an
amateur obsession to the fourth most profitable industry in history
(after automobiles, energy production, and illegal drugs).[xiii]
Unlike George Gilder, who tells the story of computing from the point
of view of the winners, Cringely provides a more symmetrical causality
for the various changes in hardware and software technology by charting
the realisation of particular personal ambitions of some individuals
associated with the industry, as well as the near misses of others.
He shows how certain people with particular talents and similar social
inhibitions, accidentally met others and were able to temporarily
challenge the hegemony of the establishment (most notably the market
leader - IBM) by developing an alternative view of the computer as
a personal (rather than corporate) machine. With well-chosen examples
of spectacular financial misjudgements by major players in the industry,
he shows a gap between established powers in the industry and maverick
entrepreneurs (like the young Bill Gates) who were closely in touch
with an alternative view of what computers and computing 'meant'.
This interpretative group was a small, but obsessive, constituency
of amateurs who were interested in computing for semi-recreational uses. Once equipped with basic machines, many cemented their affiliation
with the community of other enthusiasts by writing inventive software.
Commercial exploitation was the obvious next step for the personally
ambitious, and companies such as IBM, who were committed to the idea
of computing as a hardware business, faced competition from unexpected
quarters.
These competing interpretative groups - the hardware
giants and the software based enthusiasts - have, according to Cringely,
reached some kind of consensus in the pattern of product development
and this accounts for the present characteristics of the industry.
For example he shows that hardware innovation is rapidly subsumed
by software applications. His prediction that the future of computing will be dominated by software solutions from Microsoft is based on his personality-based overview of the past:
The trend in information technologies is first to solve a problem with expensive, dedicated hardware, then with general purpose non dedicated hardware, and finally with software. The first digital computers, after all, weren't really computers at all: they were custom built machines for calculating artillery trajectories or simulating atomic bombs. The fact that they used digital circuits was almost immaterial, since the early machines could not be easily programmed. The next generation of computers still relied on custom hardware, but could be programmed for many types of jobs, and computers today often substitute software, in the form of emulators, for what was originally done in custom hardware.[xiv]
Cringely's approach suggests that understanding the
historical causality of technologies is vital if expensive investment
mistakes are to be avoided. As he observes, the errors of IBM are
even now being repeated on a vaster scale by Pacific Rim speculators.
'The hardware business is dying,' he asserts. 'Let it. The Japanese
and Koreans are so eager to take over the PC hardware business that
they are literally trying to buy the future. But they're only buying
the past'.[xv] By implication, Cringely's history could teach them
differently.
Hardware will, of course, change to some degree, and
like many commentators Cringely sees a convergence of television and
the computer in a set-top device with a highly efficient processing
chip to decode and decompress data. What will make this product successful
in the domestic market is computing power that is both cheap and better
than the average PC. Motorola is currently investing in this vision
of the future with the Power PC 301 processor, in the belief that the market for new personal computers has levelled. New sales, based
on a periodic replacement strategy, will be used as an opportunity
to upgrade processing power (in much the same pattern as company car
renewal). As Cringely points out: 'The Power PC 301 yields a set-top device that has the graphics performance equivalent to a Silicon Graphics Indigo workstation, and yet will only cost users £250. Who is going to want to sit at their computer when they can find more computing power (and more network services) available on their TV?'[xvi]
Motorola expects the demand for set-top devices to be one billion units in the next decade. Unlike Intel (the other leading player in chip production), which is going for ever higher specifications, they are structuring their research and development, as well as their marketing, to produce a cheap, fast chip. At the beginning of 1997, in line with Cringely's predictions, the Motorola Company announced that it intends to pull out of developing Power PC-based systems but, nodding in the direction of convergence, added that it will continue to work on Internet access devices using the chip.
Ultimately, Robert Cringely's methodology and his careful sifting of the evidence provide both a convincing explanation of the present and, as the recent announcement from Motorola illustrates,
a credible forecasting tool. What distinguishes him from George Gilder
is a belief that technology and culture do not function confrontationally,
but rather more dialectically. Cringely maintains that technological
innovation is shaped by both the possibilities of the hardware and
the imagination of those who encounter them. In Cringely's history
of the personal computer he charts a transformation from the fixed
bulky machines, accessed by the professional elite, to ephemeral,
simple and cheap software that is, above all, popular. The causality,
as far as he is concerned, is the interaction of a new kind of machine
with '... disenfranchised nerds like Bill Gates who didn't meet the
macho standards of American maleness and so looked for a way to create
their own adult world and through that creation, gain the admiration
of their peers.'[xvii]
In a seductive homology, Cringely suggests that personal computers
are the fallout from 'nerds' replacing the heavy duty muscle of the
corporate hardware giants with 'brainy' software.
Cringely is concerned with the power politics of Silicon Valley, and his conceptual premise and methodology yield some brilliant
insights, but the deficit in his account, at least for product designers
working with multimedia, is what the personal computer might mean
for 'ordinary users' now. When new technologies meet ordinary people
they are sometimes transformed beyond recognition and, moreover, they can continue to change. It is now well understood by historians of
early cinema, for example, that the basic apparatus became the foundation
of a mass cultural experience because of the interaction of technological,
economic, and social determinants.[xviii]
Histories of the invention of the cinematographe, and the economic
exploitation of the popular enthusiasm for moving pictures cannot
satisfactorily account for how the films looked. For this, we must
turn to the audience's interpretation and expectations of the technology
that were often (as they are today) confused and inconsistent. Early
producers were often also exhibitors and were able to adjust their
films in response to popular reception. They changed topics and treatments,
(and even projection speeds) to suit the audience. In a similar way,
Bill Gates and Paul Allen wrote operating systems for a constituency
of amateurs with whom they were closely in touch. However as the PC
industry took off, the distribution arrangements for software and
other products, distanced the designers from the users. The seductive
technological determinism (which suggests that culture is changed
by technology) precipitated significant wrong turnings in much of
the early design. People buying computers and software were often
expected to follow the enthusiasms and agenda of the 'disenfranchised
nerds' now turned entrepreneur. Some did, but the vast majority of
the potential market had different ideas about what playing with computers
meant.
Perhaps more remarkable than the rapid growth of the
market is that the PC survived as a consumer product given the gulf
between the consumers' and the producers' ideas about the machines.
The uncritical technological determinism in the industry has inhibited
the creative possibilities of what was possibly one of the most promising
media to ever emerge from computer research and development - the
CD-ROM. According to industry 'vapourware', interactive CD-ROM, based on hypermedia architecture, was going to transform education and popular
entertainment in unbounded ways. This medium, it was predicted, would
be used to store data in a great variety of forms (text, image, sound,
graphics, movies) which would be accessed associatively to provide
a powerful value-added learning tool. CD-ROMs would transform libraries
by eliminating the costly storage of volumes and providing different
modalities of access to data, which would ultimately affect scholarship.[xix] In short, it would be a new episteme. None of this
seems to have happened except in a number of highly specific applications,
mostly concerning industrial training programmes. Even here, according
to some analysts, the changes are small. It has been noticed, for
example, that trainees invariably make paper printouts of the contents
of a whole disc (regardless of relevance) in order to be able to consult
the material away from a machine.
The CD-ROM publishing business was thought to be worth around $5 billion in 1994, which for a world market is minute (consider, for example, US telecommunications, predicted to be worth a trillion dollars by 2000).[xx] CD-ROMs, it appears, have been used to provide another format for the existing collections of computer games, the distribution of freeware (which works haphazardly), and pornography. Invariably these do not exploit the non-linear navigation,
which is the true potential of the medium, but instead re-circulate
existing conventions for information retrieval disguised in vogue
pop graphics.[xxi]
Consequently it has proved very difficult to market, since on the
one hand it was promoted as revolutionary, whilst in reality it is
more often than not simply an expensive, unwieldy and approximate
copy of something that already exists. The products have failed to
fulfil the imagination of the consumer, and retailers see little incentive
in promoting them with large displays and shelf space. As the focus
of attention has moved on to other promising forms, notably the Internet,
the CD-ROM industry seems to have stabilised as a vanity publishing medium rather than any real alternative way of managing information.[xxii]
In the absence of any distinct advantage for using
CD-ROM as a medium, it has, consistent with Cringely's prognosis, been replaced by a software equivalent that we know as the Internet.
Access to the Internet provides an associative database, and the storage
and intelligent search engines that formed the basis of many of the
earlier promises made for the emerging hardware medium of CD-ROM have
been emulated. This is now achieved with a non-specific piece of
hardware called a modem that interfaces between the PC and the telephone
system. If this trajectory is followed through, the imminent 'convergence', using set-top devices to connect homes with cable companies, will be replaced by a programmed chip (presumably made by Motorola), in which case the expected penetration of the personal computer into
the domestic space will quite possibly be as a software product. Rather
than innovative hardware, the application that is expected to support
the popular uptake of home computing is a hypermedia graphical web
browser, of which Netscape and Microsoft's Internet Explorer are perhaps the best known.[xxiii]
Once again, however, industry 'vapourware', advertising,
newspaper copy, and particularly home computing magazines provide
a dubious history. They tend to focus on the technological aspects
of interactive hypermedia and give the impression that personal computers
were always intended to be like this; all that was necessary was the
appropriate advances in science. The Internet, however, has been around
for a very long time (some suggest as early as 1969). It was the outcome
of a rather specialised project to develop an associative database
that could be accessed, and most importantly, added to, primarily
by professionals in scientific research. It was never intended as
a public access network and, consequently, most users were content
to work with somewhat forbidding, chopped-down machine codes rather than the friendly graphical interfaces we use today. However, in the early 1990s a number of political decisions widened the constituency of the Internet to include amateurs, enthusiasts, and users without a
specialist technical background. This group saw the Internet rather
differently from the professionals, as a space for public access and
universal connectedness. As this different interpretation has prevailed,
more user friendly interfaces have emerged, not through the insights
and speculations of entrepreneurs, but from the creative resources
of the new community of users. Graphical web browsers are not so much
advances in software design as both software emulations of existing hardware and responses to a popular engagement with the Internet, which has produced new ideas about what personal computing means in the context of a broader culture than that reflected
by the industry's journalists.
Software emulations of existing hardware might mean
that the PC could easily disappear from personal computing. This may
be regarded as further evidence of a cultural slide towards the sort
of virtuality that was foreshadowed by the cinema a century ago, when
moving images were seen as substitutes for the realities that were
represented. In this analysis, resistance to an increasingly vicarious
existence is impossible, and keeping pace with a progressive de-materialisation
is almost an obligation for artists and designers. Another view of
this is that nothing has changed, and that the personal computer (like
the cinema) has always been a machine that had an imaginary dimension
for both scientists and a lay public, and that its various incarnations
since the Second World War are simply recirculations in a process
of reinterpretation to which many technologies are subject.
The PC was not invented as a whole idea, nor was there
any fixed final objective to which its developers strove. As a piece
of technical hardware it emerged from applied scientific research
and a community of enthusiastic amateurs. Each group had significantly
different ideas about what the machine was and might do in the future.
In this sense it is an imaginary apparatus. It is an arrangement of
smaller discrete machines (chips, drives, keyboards, screens, etc.), a dispositif that can be regarded as a material response to
imaginary scenarios about technology. The PC, as we now understand
it, (and this may just be a temporary interpretation) is a machine
linked to telephone networks with a storage and retrieval system.
It is, to use an engineering term, a kluge, or bricolage, of a number
of discrete components. These come together to produce an entirely
new device, greater than the sum of its parts, that we recognise at
different times as the personal computer (or perhaps more accurately
a microcomputer). A closer examination of these parts, together with
an account of how they were put together, may reveal something of
the imaginaries of both scientist and enthusiast to show a different,
less determinist causality.
One of these discrete components is the telephone.
Its linkage with a computer in the mid-1980s gave us a new word, 'telematics',
which roughly means the confluence of telephone and computer technologies.
One of its first manifestations in a consumer product was in 1981
when France Telecom introduced the Minitel. This was a small
video display unit with a keyboard and telephone handset that was
connected via the exchange to various providers ranging from street
directories to call girls. A critical mass of connected users was
established by the free distribution of Minitel machines, as a replacement
for the huge Paris telephone directory. Many subscribers found this
apparatus compelling and, like some early Internet users, ran up crippling
bills. Minitel offered a reliable, compact, domestic version of prototypical
networking devices that had been used by computer enthusiasts for
some time. In these, data exchange with a host computer was made with
a device called an acoustic coupler. This was an electronic 'black
box' with a pair of rubber cups that engulfed the hand set in a rather
sinister manner. Since these devices used sound to transmit digital
data (much like a fax) they were prone to error through 'dirty' signals
and extraneous noise. Moreover the sheer bulk and inelegance of the
apparatus made them unattractive products for the mass market. They
remained an expensive item with a small constituency, but they did
show that quite interesting things could be done with the telephone.
This was not an obvious or natural extension of the computer, but
the telephone became implicated, almost accidentally, because its
network of wires and satellite relays was conveniently in place for
professionals and amateurs exploring the possibilities of exchanging
digital data. The diffusion of these pioneering efforts among like
minded enthusiasts had the effect of extending the imaginary dimension
of the PC into the realms of social interaction. Business interests,
and then domestic users also began to take an interest in these telecommunications
experiments. The ubiquitous modern modem, nowadays as chic as Reebok
trainers, connects directly into the network, replacing a clumsy piece of hardware with something smaller, more reliable and stylish. It also now comes as a standard component built into many small computers, and the next step is that this device too is likely to be replaced
by something more technically elegant, like an emulator chip inside
the ordinary television set.
The subsequent development of the Internet as a popular
medium has transformed the business plans of some of the largest companies
in the world. The telephone companies initially became involved in
network computing because their wires were already in place. This
turned out to be very lucrative, but as the market for telematic services
has expanded there is increasing competition from other connective
systems. Microwave systems for portable phones, for example, are rapidly
replacing wires in developed economies and are becoming the start-up
standard in Third World and Pacific Rim countries. Cable television
also poses a competitive threat to the established networks, since
delivering television signals is compatible with telephone communications.
Established 'phone companies have responded with take-overs of digital
networks and diversification into entertainment. Simply to survive
in the changing market, the major telephone companies have become
directly involved in show business, which is something for which they
are barely equipped.[xxiv]
Whilst this promises increased revenues it is not without its problems
as some of the largest corporations in the world are being forced
to make vast and risky investments in unfamiliar territory. The imperative
for this arises from the enthusiastic reception of the convergence
of the telephone and the kluge we know as the PC. New products like
those announced by Zenith, Philips and The Interactive Digital Network Corporation are not technological inevitabilities
(as George Gilder suggests), but responses to the current struggle
for control of distribution networks precipitated by a new interpretation
of the small computer by consumers.
The most visible and dynamic evidence of this response
is the burgeoning Internet as a resource for recreational pleasure.
Retrieving data from this network involves searching for the appropriate
host and connecting with it to retrieve files. Early pioneers used
arcane instruction codes. Since then, associative linking software, known as browsers, combined with search engines and Internet protocol management, has provided one of the most widespread means of accessing the Internet. Most of these browsers are derivatives of hypertext programmes that use 'hot spots' (areas of the display which, when clicked, access another file in the database). Like the telephone, hypertext was not originally a computer technology but a text-based microfilm system for keeping track of scientific publications called the MEMEX.[xxv]
In 1932, Vannevar Bush began working on the problem of extending
human memory. He was particularly concerned that scientific research
might be stifled by the massive growth of professional literature.
Following a draft paper in 1939, his idea was finally published in
1945.[xxvi]
He proposed a system of 'windows' which allowed comparative analysis
of textual material from disparate sources and an input device that
enabled the user to add notes and comments. A new profession of 'trailblazers'
(as he called them) would be formed of people able to bundle links
together in a predetermined web so that specialist scientists could
follow their own threads, unencumbered by irrelevant material. Development
was slow, since it was essentially a research project and an in-house tool for scientists, and a fully working version of the MEMEX was never built. The prototypes that were made, however, did provide a number of mechanisms that could model associative indexing. Using this device as evidence, he was able to show some deficits in contemporary ideas about human intelligence and thought processes.[xxvii]
For example, the MEMEX was able to illustrate how the parallel processing
of serial data was possible with relatively small technical resources.
Much later his idea was used to form the basic architecture of hypermedia applications like Netscape, but its immediate significance for artificial intelligence research ensured the continued support of the American government for thirty years, until 1975.
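It may help to make the notion of associative indexing concrete. The short sketch below is hypothetical: it is not a reconstruction of the MEMEX, of Xanadu, or of any browser's internals, but a minimal model, written here in Python, of the principle described above: items of information are bound together by labelled links, and a 'trailblazer' can bundle those links into a trail that others replay.

# A minimal, hypothetical model of associative indexing in the spirit of
# Bush's MEMEX: documents are nodes, labelled links bind them associatively,
# and a 'trail' is a bundled sequence of links that a reader can follow.

class Document:
    def __init__(self, title, text):
        self.title = title
        self.text = text
        self.links = {}  # label -> Document (the associative 'hot spots')

    def link(self, label, other):
        """Bind this document to another under a descriptive label."""
        self.links[label] = other


def follow_trail(start, labels):
    """Replay a trailblazer's bundle of links, yielding each document visited."""
    current = start
    yield current
    for label in labels:
        current = current.links[label]
        yield current


if __name__ == "__main__":
    memex_paper = Document("As We May Think", "Bush's 1945 essay.")
    hypertext = Document("Hypertext", "Nelson's term for associative text.")
    mouse = Document("The Mouse", "Engelbart's pointing device.")

    memex_paper.link("inspired", hypertext)
    hypertext.link("implemented with", mouse)

    # A pre-built trail: the reader follows links rather than a linear index.
    for doc in follow_trail(memex_paper, ["inspired", "implemented with"]):
        print(doc.title)

The point of the toy is simply that retrieval follows associations rather than a linear catalogue, which is the feature Bush's prototypes were built to demonstrate.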
One of the people who developed Bush's original idea
was Doug Engelbart. He is now best known for inventing the mouse
as a computer interface device. Quite coincidentally, it seems, he
was inspired by the brief description of the MEMEX that he came across,
at the end of the Second World War, in one of the many published profiles of Bush.[xxviii]
It remained with him as a generalised idea, but it was not until 1962
that he introduced it into data management and computing. Although
both Bush and Engelbart might be considered pioneers of associative
indexing, what neither of them anticipated was that a revolutionary
idea for information management designed for specialist scientists
would subsequently be crucial in the growth of a popular enthusiasm
for personal computing. Nor did they envisage that it would become
the preferred modality for the twentieth-century flâneur, surfing the virtual arcades of the superhighway.
In the early 1960s, however, Ted Nelson, a media analyst,
was able to make that conceptual leap. He saw in the concept of MEMEX
the egalitarian ideal that was in tune with popular culture and the
democratic aspirations of the times. Nelson is generally credited
with the invention of the word 'hypertext' and he proposed a particular
use for Bush's original idea in a project he named (with characteristic
ambition) Xanadu.[xxix] Whereas a professional like Vannevar Bush was concerned
with maintaining the growth of scientific knowledge through narrowing
access to relevant data, Nelson's project was concerned with wide
public use of computer technology through associative linkage devices.
His imaginative concept was to establish a world wide network of information
centres to provide interactive access to all the scientific data and
creative literature that could be encoded. At these centres, which
would be as ubiquitous as Laundromats, hypertext interfaces would
enable users to both retrieve data and add material of their own,
which could then be accessed by others. The extent to which Xanadu
was ever a realisable project, or simply a platform for visionary
engagement with a new technology, is something of a question, but
as it became a technical and economic possibility it was increasingly
hampered by issues of copyright and intellectual property. Details
of the system of data verification and credit payments to authors
overtook the vision of his proposal for an international information-rich
culture. By the time it appeared that Xanadu might finally satisfy
the lawyers and accountants, the political decision to enable wider
public access to the Internet had already been taken. Ironically,
the debate about intellectual property was sidelined by the enthusiasm
and generosity of the surfing community and his project was completely
eclipsed. What remains of his (and Bush's) work, however, is a commitment
to hypertext (and hypermedia) as a data management tool for ordinary
people.
Hypertext is, as the name suggests, a system for accessing textual material associatively, and it began as a microfilm technology. It was not immediately obvious that other kinds of information, like pictures, sounds and movies, might be managed in this way. The extension of Nelson's original idea to hypermedia was also not a computer-based technology but a videodisc project developed in the Media Lab at MIT. By linking a computer-driven interface and a domestic laser disc
player, images could be displayed at will on a screen at rates that
provided the illusion of movement. One of the most developed experiments
in this area was called The Aspen Movie Map. In this project
photographs were burned onto a laser disc and, when interfaced with
a simple PC, the material was accessed interactively so that an image
map of the whole of the town of Aspen, Colorado, including the interior
of buildings, could be viewed. Rapid retrieval times, which are standard
in laser disc technology, meant that by using a joystick, the operator
could travel through the landscape at speeds of 110 kilometres per
hour. Like MEMEX, the Aspen Movie Map was intended as an in-house
project at MIT. Interactive laser disc technology has hardly been
exploited in the public domain, except as a superior playback device
for television and in arcade games. Apart from one spectacular military
use (at Entebbe airport) and a few pioneering attempts in education, little has developed in this area. But laser disc technology, linked
to a computer, suggested possibilities for navigating and controlling
different kinds of information. The diffusion of the idea that data
could be stored in formats that were independent of the modality of
representation, introduced ideas to computing which appealed to amateur
enthusiasts rather than large business organisations and government
institutions.
In 1987 Macintosh responded to this new concept of
computing and launched a simple hypermedia programme called HyperCard.
In an inspired marketing coup they "bundled" it free with
their machines so that in effect it became public domain software.
This turned out to be a brilliant strategy for launching a culture
of 'third party' (independent) Apple software developers. Small scale
speculative programmers with no more than a reasonable machine and
creative flair 'invented' uses for HyperCard, ranging from address
books and stock control packages to interactive interfaces for laser
disc players. In proposing the visionary Xanadu project Ted Nelson
had correctly intuited a consensus in the programming community around the idea of shared intellectual property, but Macintosh's HyperCard software provided a concrete resource to express this ideal in informal, low-capital software development. The enthusiasm for it confirmed the
willingness of a global community, particularly the group most enthusiastic
about electronics and computing, to sideline professional ambition
in the pursuit of a vaguely articulated social imperative that computers
would be good for democracy. For Macintosh it meant that its machines
and its operating system acquired a brand identity that became strongly
identified with a creative community and a healthy suspicion of corporate
exploitation. As a result of this effective low-cost research and development strategy, large software houses felt sufficiently confident in Macintosh's consumer base to invest capital in ambitious projects
to devise applications which commercial designers could use. Despite
the failing fortunes of the company and its diminishing market share
in recent years, this unorthodox approach to technological development
has ensured that Macintosh still remains the preferred platform for
desktop publishing (DTP), creative design, and multimedia uses. One consequence is that hypermedia (as an associative cataloguing tool) is currently the preferred standard for Internet data management, computer-based educational products like encyclopaedias, and on-line
catalogues in museums.
The facility with which computers can apparently cope
with this form of storage and retrieval gives the impression of a
certain technological inevitability. It is as though personal computing
and hypermedia are somehow synonymous. However, but for a series of
accidental meetings of scientists, the technological 'excess' of rapid
retrieval times in the domestic laser disc machine, and the brilliant
marketing coup of placing HyperCard in the public domain, the design
of microcomputer software and the particular use of the machines might be very different. There was not a single 'big idea' that led
to hypermedia, but like many of the nineteenth century achievements
in the natural sciences, the development of computer software has
been the outcome of the steady accretion of the efforts of individuals
and small groups of programmers, who were essentially working on an
amateur basis (in the field of computer science at least), and who
were willing to freely share their findings. This is contrary to the
generally accepted 'Romance' histories that explain technological
change in terms of the dedicated and visionary genius of individuals
like Bill Gates.
The computer, like almost every invention, has an
accepted 'Romance' history to account for its origins. Many authorities regard the nineteenth-century mathematician Charles Babbage as the father
of computing, and his Analytical Engine as a mechanical progenitor
of the machines we know today.[xxx]
More teleological histories like those by Vernon Pratt or John Cohen
trace the ancestry of intelligent machines back to the abacus and
further.[xxxi]
Together such accounts lead to the impression that computers are the
inevitable outcome of a long established desire to build machines
that could replicate human intelligence. One of the earliest realisations
of the electronic computer, however, the ENIAC, was not thought of
in such general terms, and its subsequent development was no less
accidental than the convergence of the telephone and PC, the use
of the Internet as a popular telematic technology, or Hypermedia as
an international standard for data management.
The ENIAC was a calculating machine originally designed
to meet a quite specific problem in producing firing tables for the
American artillery during the Second World War. The task of compiling
a complete set of data for each new gun was so time consuming that
it threatened to delay the strategic objectives of the war effort.
The ENIAC (Electronic Numerical Integrator and Calculator), as the
name suggests, was developed as a massive adding machine for the American
military. The intention was that the multitude of repetitive calculations
necessary to precisely predict the trajectory of a shell from each
individual weapon under variable conditions could be undertaken with
sufficient speed and accuracy to keep up with the output from the
arsenals. It used a circuitry of valves and relays to perform certain
calculations and, although in principle it worked well, it was technically
very vulnerable to component failure. J. Presper Eckert, a leading
figure in the development team, suggested a practical solution based
on an understanding of the ENIAC not as a single large machine, but
as an accretion of small independent parts working together to execute
a single complex task. He proposed the engineering concept of modular
circuits that could be temporarily removed and repaired whilst the
ENIAC was running without necessarily affecting the work in hand.
Some technical difficulties remained unresolved by the armistice but
the logic of his design suggested that a perfected machine might be
possible.[xxxii]
Although the cessation of hostilities made further development less
pressing, the project was close to successful completion and represented
a huge investment in time and intellectual energy. For these reasons
alone it was thought to be worth pursuing, but perhaps more important
for what was to follow, the limited success of the ENIAC stimulated visions of what such technology might mean for the conduct of civilian life in the post-war period.
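The scale of the problem is easier to appreciate with a worked example. The sketch below, again in Python, is a deliberate simplification: it ignores air resistance, wind and shell spin, which the ballistic models the ENIAC actually ran had to include, but it shows the pattern of the work: the same calculation repeated row after row for every combination of gun, charge and elevation.

# A minimal, drag-free sketch of the kind of repetitive arithmetic a firing
# table required. The real ballistic models were far more elaborate, but the
# pattern is the same: one calculation repeated for row after row of
# launch conditions.

import math

G = 9.81  # gravitational acceleration, m/s^2


def range_and_flight_time(muzzle_velocity, elevation_deg):
    """Return (range in metres, time of flight in seconds) for one table row."""
    theta = math.radians(elevation_deg)
    flight_time = 2 * muzzle_velocity * math.sin(theta) / G
    shell_range = muzzle_velocity * math.cos(theta) * flight_time
    return shell_range, flight_time


if __name__ == "__main__":
    # One row per elevation angle: a human 'computer' worked through hundreds
    # of such rows for every combination of gun, charge and shell.
    for elevation in range(15, 80, 5):
        r, t = range_and_flight_time(muzzle_velocity=500.0, elevation_deg=elevation)
        print(f"elevation {elevation:2d} deg  range {r/1000:6.2f} km  flight {t:5.1f} s")

It was exactly this repetition, multiplied across every weapon and set of conditions, that made the task such a pressing candidate for electronic automation.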
Further development work on the ENIAC, however, required
substantial financial support from public funds. Since the necessity
of making calculations for firing tables had all but disappeared,
and there was little popular enthusiasm for arithmetic, it was necessary
to show the ENIAC as something more exciting than a 'number cruncher'.
Illuminated ping-pong balls that flashed on and off were added to
the casing to 'show' it at work. These had no real function except
to provide a simple analogy for the invisible electrical processes
that made complex calculations possible. It was a brilliant publicity
strategy that gave the illusion of a logical process going on in the otherwise inscrutable banks of valves glowing lethargically. It
also provided an enduring trope for machine intelligence that remains
with us today. In both science fiction movies and contemporary product
design small light emitting diodes (LEDs) have become almost mandatory
features that often have no function other than to mark the
willingness of a machine to co-operate with human efforts to make
it work.
The task that the ENIAC was given for its public debut
was equally inspired and proved to have a lasting impact. To show
its power to a lay audience, the computer was used to calculate the
trajectory of a shell that took 30 seconds to reach its target. This
calculation took only 20 seconds; in effect the machine intelligence
got to the target ten seconds before the real shell.[xxxiii]
This demonstration suggested that a developed computer would not simply
be a super efficient calculator which analysed data and confirmed
empirical evidence, but could make evaluations of possible events
faster than they had actually happened. Funding was forthcoming for
the ENIAC project for a further five years but yielded little of consequence; more significant, however, was the public curiosity about computers that it stimulated. The imaginary possibilities of
artificial life, at least as old as the Pygmalion myth, had acquired
added impetus from various nineteenth-century inventions (including
the cinema). Suddenly, in the mid twentieth century, these had a
new objectification in the twinkling circuit boards of the ENIAC.
At its most fantastic the idea of an intelligent machine furnished
both dystopian science fiction fantasies like those expressed in pulp
fiction and Hollywood 'B' pictures, and competing visions of utopian
idealism. In the latter, the drudgery of repetitive tasks, the last
burden of nature, might be consigned to machines that could enhance
human existence. At its most grounded, however, the particular collapse
of time and space that the ENIAC demonstrated excited the creative
imaginations of amateur scientists, artists and business people, who
wanted to have access to the technology to explore its social and
economic possibilities.
As a consequence of the visionary potential of computer
technology it became a focus for diverse interest groups that comprised
unusual mixes of people. Philosophers, poets and bankers - the traditional
habitués of the nineteenth-century salon - formed the constituency
of the many amateur computer science clubs that sprang up in the 1960s.
At these clubs the collective imperative of building a small computer
for experimental purposes overrode individual ambition and social
and professional hierarchies. Students, professors and technicians,
as well as those who were 'just interested' shared their experience
and showed their latest achievements at the regular meetings. The
best known of these clubs is possibly the Homebrew Computer Club
at Stanford. It was at this club that Steve Wozniak modestly showed
a prototype machine, the Apple I, as a contribution to the shared
project of developing a small personal computer. Its development as
both a product and a computing concept was facilitated with the help
and encouragement of the more flamboyant (but no less ingenious) Steve
Jobs, whose chief interest was in computer games. Together they eventually
produced a machine and marketed it as the Apple II (including a
colour version), and although it was by no means the first or only
personal computer available, its design caught the mood of the amateur
constituency of computer clubs and it found a market. The extraordinary
popular enthusiasm for the Apple II not only laid the financial foundations
of the Apple Macintosh but also seemed to confirm that, aside from being
a sophisticated calculating machine, in the hands of imaginative
people the computer could become something else (even if, in the
late 1970s, no one was quite sure what that 'something else' might
be).
The Apple II and the many other amateur home computers
that were built in the late 1970s and early 1980s were different from
mainframes. They were not scaled down models of the mainframe, in
the sense that the Walkman is a miniature version of a tape recorder,
but more like fellow travellers with different objectives and different
applications. These small machines (known as microcomputers) shared
the engineering concepts of modularity and open architecture, and the
spirit of adventure, that had turned ideas about computing into the ENIAC.
Like the large scale laboratory prototype, microcomputers were built
from a bricolage of off-the-shelf and ad hoc electronic components
put together in new combinations. But their appeal was to a constituency
searching for new understandings of what computer technology meant.
The Apple II was a machine for the computer enthusiast and the experimental
programmer. It was built by and for people with open minds about the
future use of computer technology. To broaden the constituency of
users required the microcomputer to do something more than provide
a platform for programming. Cringely suggests that the compelling
application that achieved this for the Apple II was the spreadsheet
software, invented by Dan Bricklin, called VisiCalc.[xxxiv]
The particular brilliance of the application was that the outcome
of one changed accounting parameter in a project could be automatically
processed and expressed in terms of profits or losses. Since this
was a simple process that could be undertaken in relative privacy
there were no constraints on what financial fantasy might be modelled.
It saved professional accountants and entrepreneurs hundreds of hours
in the preparation of reports and business plans, and could run with
very modest technical means. VisiCalc confirmed what the ENIAC debut
had already suggested, that computers allowed imaginary scenarios
to be played out as if they were real. It provided a hard headed commercial
justification for investing in a microcomputer without entirely disavowing
the imaginative spirit of adventure that had launched these little
machines.
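To make the 'what if' principle concrete, the following is a minimal sketch, in Python rather than in VisiCalc's own cell notation, of the recalculation idea described above: change one accounting parameter and every dependent figure, down to the profit line, is recomputed automatically. The variable names and numbers are invented purely for illustration and are not drawn from VisiCalc itself.

def project_profit(unit_price, units_sold, unit_cost, overheads):
    # Each line plays the role of a dependent spreadsheet cell.
    revenue = unit_price * units_sold
    cost_of_sales = unit_cost * units_sold
    return revenue - cost_of_sales - overheads

# The original business plan (all figures hypothetical).
print(project_profit(unit_price=10.0, units_sold=500, unit_cost=6.0, overheads=1200))  # 800.0

# Change a single parameter - the unit price - and the profit
# is re-expressed at once, which was the appeal of the spreadsheet.
print(project_profit(unit_price=11.0, units_sold=500, unit_cost=6.0, overheads=1200))  # 1300.0

The point of the sketch is not the arithmetic, which is trivial, but that the user never recalculates anything by hand: the model can be re-run endlessly, in private, on the most fanciful assumptions.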
The emergence of the personal computer from the collaborative
atmosphere of campus clubs and the apparent ease with which committed
and talented individuals could shape products, and even the future
of the industry, gave rise to a carefree Bohemian optimism vastly
different to the corporate logic that dominated mainframe computing.
The large corporate computers were invariably used as accounting
and inventory machines. Individual access was on a grace and favour
'downtime' basis when the main work of financial control had been
done. Outputs were typically sheets of printed textual and numerical
data, which were often collected from a central printing resource (or
even sent by post). The microcomputer on the other hand was available
on demand to the individual and processed text and high quality images
on a screen, which could be manipulated, stored and even distributed
without ever being committed to paper. Furthermore, when hard copy
was required, the screen display closely matched the final output.
This feature alone transformed some publishing operations from a high
investment institutional base to a decentralised 'cottage' industry.
Since that time, and with the innovation of hypermedia data management
software (the poet's equivalent of VisiCalc), networking and games,
the personal computer has become understood, at least in the popular
domain, as a quite different machine from its corporate ancestors.
Even though the production of microcomputers and software is virtually
a monopoly industry, some personal computers still retain a suggestion
of bohemian independence from large corporate institutions. To
be sure the personal computer was unthinkable without the scientific
research that produced prototypes like the ENIAC, but nothing could
be further from the number crunching scientific and military machines
than the personal microcomputer that is now so common in homes and
offices.
The amateur enthusiasm for computing in the 1970s
not only created a demand and a market for microcomputers but also
stimulated new scientific research. Miniaturisation, for example,
essential to the development of powerful domestic machines, eliminated
the very problem that the ENIAC had been developed to solve. Nowadays most ballistic
missiles have an on-board microcomputer that does the work of firing
tables whilst it is in flight. In less applied research, low power
machines opened up projects in artificial intelligence to wider groups
of scientists who re-focused attention on more biological and quotidian
models of the mind. In the 1980s, the hitherto unchallenged predominance of linear
processing solutions in artificial intelligence broke down under the
sheer burden of data that the human mind appears to cope with
at any given moment. Parallel processing or connectionist models,
which had briefly been regarded as credible by scientists in the 1960s,
were resurrected as a consequence of a new constituency of interest
in the field.[xxxv] According to Daniel Dennett and others there were
quite specific cultural determinants for this.[xxxvi] Among other things, he points out that in a period
of relative freedom and hedonism, and of a general preference among the
younger generation for a West Coast lifestyle (beach culture, surfing
and transcendental experiences), artificial intelligence research
re-focused attention onto the 'wetware' of the brain. The connectionist
models looked at simple organisms such as insects, whereas linear
programming solutions had leant towards 'expert systems' (computer
programmes which could in some sense stand in for high-level professional
expertise like medical diagnosis). The connectionists developed small programmes
which processed data simultaneously through associative networks. This
strand of artificial intelligence research has proved enduring and
has most recently been integrated with the earlier linear models.
In related experiments very basic instructions were shown to produce
intelligent behaviour in swarms of identically programmed robots.
A host of simple prototypes, with single chips on board, responded
to an imperative to avoid collision, for example, with intelligent
wall following behaviour. This work has proved fruitful, and a new
area of research, known as Behavioural AI, has opened up.
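By way of illustration, the following toy sketch in Python mimics the kind of rule described above: a single, identically programmed imperative (avoid collision by keeping a wall to the right) which, iterated step by step, yields what looks like purposeful wall-following behaviour. The grid, the sensing model and the starting position are all invented for illustration and do not reproduce any particular robot.

# A toy grid world: '#' is a wall, '.' is free floor.
GRID = [
    "#########",
    "#.......#",
    "#.###...#",
    "#.#.....#",
    "#.......#",
    "#########",
]

# Headings: north, east, south, west as (row, column) steps.
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def blocked(pos, heading):
    row, col = pos[0] + DIRS[heading][0], pos[1] + DIRS[heading][1]
    return GRID[row][col] == "#"

def step(pos, heading):
    # Rule 1: if the cell to the right is free, turn right (hug the wall).
    right = (heading + 1) % 4
    if not blocked(pos, right):
        heading = right
    # Rule 2: otherwise turn left until the way ahead is clear.
    while blocked(pos, heading):
        heading = (heading - 1) % 4
    return (pos[0] + DIRS[heading][0], pos[1] + DIRS[heading][1]), heading

pos, heading = (1, 1), 1  # start in the top-left corner, facing east
for _ in range(20):
    pos, heading = step(pos, heading)
    print(pos)  # the agent traces the inside of the walls

Nothing in these rules mentions walls, rooms or routes; the apparently intelligent path is an emergent property of a very simple local imperative, which is precisely the claim made for Behavioural AI.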
In artificial intelligence, as in many fields of computer
based research, the various contributions of the professional and
the amateur are hard to disentangle. This, however, is not the view
of the computer industry nor of many high-profile commentators.
'Romance' histories of brilliant, gifted, and sometimes lucky individuals
who are agents in the inevitable progress of technology tend to dominate
the most visible accounts. The idea of an inevitable technological
progress oversimplifies the complexity of the causality of technological
change. It predisposes commentary to future-watching in which evidence
is often difficult to disentangle from speculation. In popular literature,
'hip' magazines with unconventional typography suggest that the
changes in the technology are too overwhelming and too fast for ordinary
people to understand.[xxxvii]
Culture, they suggest, is changed by technology. For many theorists
this technological determinism is problematic since it renders the
ordinary mortal ineffective in the construction of what is, after
all, a shared culture. Furthermore, the resolution of difficult social
problems requires no action on the part of the individual, since they
can be consigned to the pending file to await a new invention. Robert
Cringely's Accidental Empires, by contrast, does define
the limits of its methodology and consequently begins to offer an
explanation for what might be happening, based on a history of the
major players in Silicon Valley over the last three decades. But both
he and George Gilder provide a view of digital media that is selective
in its historical evidence insofar as the mass of people who use these
machines, and give them meaning, are rendered as passive consumers,
which, in view of the weight of evidence, is difficult to sustain.
They ignore the creative exploration of the uses of particular
technologies that the innovators have, for whatever reason, overlooked.
Nor do they recognise the impact of reflective practitioners outside
the business community whose interests and vision are informed by
different cultural imperatives. This image of the consumer and designer
as disenfranchised may be good for the rapid turnover of products
necessary to sustain investment in research and development programmes,
but if, through repetition, it becomes uncritically accepted, the
intellectual space for reflective practice and individual intervention
in technology is significantly curtailed. In the long term (as we
saw with the CD Rom) the potential for new modes of expression and
representation can simply atrophy.
The view of the history of the microcomputer set out
in this article suggests that it was not inevitable that the ENIAC
would eventually lead to the microcomputer, or that programming
would yield a new profession, which fused many fundamental graphic
design skills with computer science. Or even that the activity of
software design would be concerned principally with the development
of multimedia authoring programmes which these new professionals could
use. The histories of the telephone network, associative databases
and the microcomputer, outlined here, cover just some of the many
little machines that have been brought together to form a particular
information and entertainment apparatus that we now understand as
the microcomputer. There are others, such as the keyboard, the cathode
ray tube, the micro switch, the electro-acoustic loudspeaker, the
numerical calculator and so forth, whose accretion into a single machine
was neither anticipated nor inevitable. Their convergence was not
planned or even intuited independently by gifted individuals. Rather,
they were combined, through a broad constituency of imaginative and
social processes, which did not cease with the innovation of a product.
A crucial contribution to the development of the microcomputer was
the active and insistent intervention of the enthusiast who challenged
the professional interpretation of the machine. Had the technological
development of the computer remained with scientists, corporate developers,
and military strategists, it is possible that today it would still
be understood as a centralised database and number cruncher for government
institutions and multinational companies. Academic access and amateur
use would possibly only extend to some form of grace and favour timeshare
facility on a mainframe, or a public rental system, such as Ted Nelson
envisaged with the Xanadu project as a sort of information Laundromat.
It also seems unlikely that scientific projects concerning artificial
intelligence and artificial life would be so committed to connectionist
approaches grounded in biological models of the human being.
The computer industry may prefer to present itself
as startlingly new and unprecedented, but many features of its emergence
can be seen as a re-circulation of another period in which inventions
were enthusiastically received by amateurs who actively participated
in the projects of the scientist. In the nineteenth century, many
technologies that were the product of corporate and military ambitions
like the phonograph, the X-ray and the movie camera, were taken over
by amateurs and demonstrators who developed not only new applications
for these devices, but new inventions, and, above all, new scientific
knowledge. The cultural framework of the first cinema, for example,
was not one of unsophisticated awe and hysterical distraction. The
audiences did not duck for cover as most histories would have it,
but responded with curiosity and astonishment at the technology.[xxxviii]
Its popular fascination was founded on intellectual goals that were
shared between the audience, professional science, and an emerging
class of technologists. The various meanings that were given to the
cinema by its inventors, even the various permutations of the little
machines which constitute the basic cinematic apparatus, emerged from
the struggles between the conflicting interests of these groups.[xxxix] On the one hand a professional class attempted to
control the discourse and meaning of scientific enquiry whilst on
the other a lay public insisted on participation in the process. This
polemical opposition was marked out as early as 1850 in The Working
Man's Friend and Family Instructor which offered the following
thoughts about scientists and scientific enquiry:
Every person must have right or wrong thoughts, and there is no reason why a hedger and ditcher, or a scavenger, should not have correct opinions and knowledge as a prince or nobleman. Working men and working women have naturally the same minds or souls as lords or ladies, or queens ... if any one could have analysed or cut to pieces the soul of Lord Bacon, or Sir Isaac Newton, and that of a chimney-sweeper, it would have been found that both were made of the same divine material.[xl]
The critical tone of this comment is reflected in
popular science journals throughout the rest of the century, but by
the early 1900s professional scientists had established institutional
frameworks that systematically excluded lay participation. Experimental
practice progressively insisted on sophisticated instrumentation that
eliminated the human observer. Some of these instruments, like the
phonograph, the X-ray, and the cinematographe, found their way into
the public domain as spectacular entertainments at theatres and fairgrounds.
Even then they were often reinterpreted; the phonograph, which was
designed as a business device, became a music machine, the X-ray was
a huge attraction until the health risk became clear, and the cinematographe,
which had its scientific origins in projects as diverse as the study
of movement for military purposes and three dimensional and colour
photography, was transformed into a mass entertainment.
The cinema, like personal computing, was not the
inevitable outcome of a technology but the mediation of a scientific
apparatus by audiences and producers negotiating economic and social
imperatives. Exhibitors listened to the audiences as they left the
séance and bought films accordingly. Producers (who were sometimes
also exhibitors) often came from scientific backgrounds and were sensitive
to the tensions between professionals and amateurs in the ongoing
struggle for the meaning of science and technology.[xli][xli]
They often responded to a perceived appetite for the visualisation
of a popular criticism of science and technology by concentrating
on the empirical reality of the appearance of the exotic. For the
first decade of cinema there were many fantasy films, but the majority
were, by far, of a documentary nature (so-called actuality films)
which showed the world as it appeared to ordinary people - and usually
in colour. Many showed scientific and surgical procedures with an
awesome honesty and often, in their brutality, an implicit criticism.
The appetite for such material provided the revenues to build cinemas
as social institutions that provided audiences and protocols for yet
another use for the cinematographe, that is, the profoundly unscientific
function of telling fantastic stories, many of which rehearsed the
confrontation with the determinist world view of the professional scientist.
Even after a century of cinema, successful mainstream
movies retain strong traces of this discourse engine in sci-fi movies,
the mad scientist genres of the Cold War, as well as contemporary
Hollywood blockbusters like Terminator, Jurassic Park, Strange Days,
Mission Impossible, and Independence Day. And, of course,
Twister , which rehearses the polemic that was aired one hundred
and fifty years ago in The Working Man's Friend and Family Instructor.
It depicts theoretical science as generally corrupt or corrupting,
whilst field work, and especially the unassuming action of amateurs,
is shown as honest and effective. In the film relatively ordinary
people are opposed by funded scientists supported by a fleet of vans
packed with instruments. This is a movie executive-produced by Steven Spielberg,
a man who, evidently, knows the power of audiences better than most directors.
Twister is a film about a science that was not possible until
the advent of the VCR and domestic video camera. Before 1980 only
a few films of tornadoes existed. Since then, knowledge of these meteorological
events has grown exponentially as ordinary people filmed them with
their home movie equipment. The film is a celebration of the power
of ordinary people to control science, technology and knowledge in
an entertainment medium which has a long history of polemical opposition
to scientific elitism.
If popular culture and blockbuster films can be admitted
as evidence, then there is some suggestion that the market-led technological
determinism, which is used to explain digital technologies, has a
dubious currency. The current criteria for hardware development are
photographic image quality, ever greater storage and faster clock
speeds. These impinge on the design concept of consumer products.
Consequently interactive CD titles designed for the popular market,
for example, must work at the leading edge of the technology. Data transfer
times, image quality, and screen refresh rates have to match the
latest microcomputers and the entertainment model of the cinema. They
seldom work quite as well as they should, they are slow, and, far too
often, they do not work at all and crash the system. This means that
frequently CD Roms have a provisional air to them, get bad word-of-mouth
publicity, and sell few copies. In contrast the network of networks,
which we call the Internet, is often frustrating and irregular, yet
it has an enormous constituency of active participants trying to make
it work in unscientific ways. It is a hit-and-miss technology that
uses an ugly and burdensome language called HTML, slow screen refresh
rates and low resolution, but it has captured the popular imagination
in ways that have taken industry and dedicated media analysts like
Ted Nelson by surprise.
The Internet's origins are somewhat nineteenth century.
It is a military technology that has been appropriated by popular culture
and transformed in ways that its inventors could not have envisaged.
As with cinema in the early years, the Internet is an extraordinarily
heterogeneous collection of entertainment and information, determined,
in part, by its audience. It provides a space for an imaginary interaction
and nearly anything is possible. It is widely used for finding and
retrieving specific information, but also as a vehicle for simply
cruising through dataspace with the impulsive curiosity of
the nineteenth-century flâneur. To surf the 'net' is to witness the
amazing generosity of a large community of participants who are prepared
to expend resources realising personal imaginary worlds and making
them freely available. There is currently little room for commercial
production, nor does it seem likely that without a major reinterpretation
of the technology as a passive entertainment medium there ever will
be. Although much of the information that is available is banal and
unreliable, the Internet challenges the authority of high science
by recuperating it (perhaps sometimes crassly) into the public domain
for pleasure. What may finally tip the balance in favour of convergence
is not the issue of corporate strategy, the intellectual dissipation
of television or the usefulness of computing, but the interpretation
of television and network computing as low-resolution representations
that allow for popular participation in their meaning.
A more sophisticated and representative understanding of how technologies
acquire meaning needs to be developed. The issue of whether there
will be a convergence of television and the PC, or whether indeed television
is dead, will not finally be resolved by rhetoric and assertion alone.
As seems clear in the history of both personal computing and the cinema
(to take two examples), what a machine is must be negotiated within
a complex network of different interpretations that includes those
of the user.[xlii] The personal computer is a machine that has been
developed in a culture which, in some respects, considers scientific
knowledge to be the rightful domain of both the layman and the professional.
The recent growth of the ecology movement, forcibly claiming control
of technological development, provides undeniable evidence of this.
As a casual survey of Internet traffic shows, the meaning of the
bricolage that we call the microcomputer is still volatile. Investors,
designers and media gurus might do well to remember that when writing
the history of popular computing the user is made of the same divine
material as Bill Gates.