Why I Was Such a Pain in the Ass in Grad School

Below is the text of something I wrote in the winter of 2014, my last year in graduate school.

 

The Various Things, Internetual and Otherwise, I’ve Been Reading and/or Thinking about Lately

Under consideration: James Gleick’s The Information, Raymond Tallis on Jacques Lacan, John Gardner’s reactionary faff, Dana Spiotta’s collage novels, and some other stuff.

 

1. Because the digital world is lonely and deracinating and alienating, and also because beyond that I am an introvert and find face-to-face interactions with people exhausting, I spend a lot of time alone. Because I spend a lot of time alone and am an introvert and am alienated and deracinated and lonely, I have come to be a denizen of a variety of online “communities”, viz, websites whereat lonely, deracinated, alienated introverts can gather and discuss things without having to look one another in the eye; usually these begin as single-serving websites, focussed on something specific, and become broader: the one I have spent the most time at, in my life, is Baseball Think Factory, which was originally a gathering spot for data-minded baseball enthusiasts — sports “geeks”, we were, which seemed paradoxical in 2002 but now feels totally normal and intuitive, since the geeks have taken over the world — but has since become a freewheeling society of (almost exclusively) dudes, complete with friendships, rivalries, enemies, politics, and entertainments; though baseball is still the most-discussed subject there, the most riotous arguments always erupt over real-world politics, with the majority trending left-libertarian and a vocal minority standing athwart history shouting “STOP!”

2. I’m going to make a distinction here, and it’s going to be important in a minute, and I want it near the top so nobody will miss it: there is a big difference between “data” and “information”. The superhuman geek-god Nate Silver might call one “noise” and the other “the signal”; what I think it really means is that you can get a lot of input these days, but not all of it means what you think it means — or anything at all. The most dangerous mental bias in the data age is probably apophenia: the human tendency to detect patterns in random data. The classic example is how we perceive there to be a face in the geologic forms on the moon, though of course there isn’t: the data, in this case, is the image of the moon; the information the mind wants to find there is the shape of a face. The information that is actually there is the history of the galaxy — if you know how to read it. Anyway, the Man in the Moon is harmless enough, but apophenia becomes dangerous when we are presented with an overwhelming amount of data about the world and the universe and start drawing implausible conclusions: that 9/11 was an inside job, for instance, or that global climate change is a natural process unaided by human inputs (or simply doesn’t exist because it snowed yesterday).
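(If you want the point made concretely — and what follows is just a toy of my own, not anything out of Silver or Gleick, every name in it invented for illustration — cook up enough purely random datastreams and some pair of them will always look like they’re marching in lockstep. The “pattern” is in the beholder, not the data.)

```python
import random

# Pearson correlation of two equal-length lists of numbers.
def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)

# 1,000 pairs of 20-point series of pure noise: there is no signal anywhere.
strongest = max(
    abs(correlation([random.random() for _ in range(20)],
                    [random.random() for _ in range(20)]))
    for _ in range(1000)
)
print(f"strongest 'pattern' found in pure noise: r = {strongest:.2f}")
# Run enough comparisons and a few will look strikingly correlated --
# data in abundance, information nowhere.
```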

3. An example: the word beautiful contains a very great deal of data, but essentially no information, because if a person says another person is beautiful, we really have no idea what they mean by it: do they like tall, dark and handsome? Rough and rugged? Do they have a ginger fetish? Pretty much anything you could call beautiful — a face, a body, a landscape, a sunset, a night sky, a dream, an idea — is going to present this problem.

4. Anyway, the reason I started off talking about the internet is because there is a common practice on internet forums when someone else elegantly expresses an opinion that you share but don’t feel up to articulating: you quote the entirety of what they have said, and then follow it up with a simple, one-word sentence fragment: “This.” When I sat down to write this little thing, I kind of wanted to quote the entirety of Jonathan Lethem’s essay on postmodernism and The Man Who Shot Liberty Valance, and then type underneath: “This.”

5. It has become axiomatic, in this post-post-(post)-modernist age, that identity is composed and not innate and so on and so forth. I find this to be immensely troubling and ultimately kind of preposterous. It seems to me to be a kind of existential overreaction to the terrors of fascism & colonialism: because for so long the powerful presumed there was an innate quality in being white & Xian that gave them the right to do whatever they wanted to those who were not white &/or Xian, a lot of thinkers freaked out and decided that there was nothing innate about people at all and that the very concept of innateness was dangerous. And that’s understandable, because racialism or whatever you want to call it is a ridiculous and provably false set of ideas; equally, however, it is provably false that identity consists only of inputs. There is a unique processor somewhere in a human brain that causes similar inputs to output different people; it’s not that this is totally immutable or intractable — I am in a lather to assure you that I do not believe in the concept of a soul — but that there is a core to any person’s being that will cause them to compose themselves in a certain way, which has very little to do with culture or language, and may have something to do with genetics, though it’s of course important to note that the way this breaks down runs against the assumptions of race or class superiority that drove several generations of human thought. Pretending as though this isn’t true because it bothers us renders nobody a useful service. The British neuroscientist and philosopher Raymond Tallis writes in his crushing review of Jacques Lacan & Co.: A History of Psychoanalysis in France:

Future historians trying to account for the institutionalized fraud that goes under the name of ‘Theory’ will surely accord a central place to the influence of the French psychoanalyst Jacques Lacan. He is one of the fattest spiders at the heart of the web of muddled not-quite-thinkable-thoughts and evidence-free assertions of limitless scope which practitioners of theorrhoea have woven into their version of the humanities. Much of the dogma central to contemporary Theory came from him: that the signifier dominates over the signified; that the world of words creates the world of things; that the ‘I’ is a fiction based upon an Oedipalised negotiation of the transition from mirror to symbolic stages; and so on.

This.

6. There is something assaultive about living in the age of data. James Gleick’s magisterial The Information, which is sort of almost a biography of data, is subtitled, A History, a Theory, a Flood, and that feels right: god, there’s just so much of it. Climate data, the home / road offensive splits of the Seattle Mariners, the likely-voter adjustment in a Real Clear Media poll of Ohio voters, the live birth rate in Iran. Those are just the datastreams I, personally, have waded into over the last few days. It’s hard not to feel overwhelmed, and equally it’s hard to trust yourself to organize all of it, at least if you’re constantly aware of your own mental biases, which trend apopheniac.

The first “postmodernism” that requires a new name is our sense—I’m taking it for granted that you share it—that the world, as presently defined by the advent of global techno-capitalism, the McLuhanesque effects of electronic media, and the long historical postludes of the transformative theories, movements, and traumas of the twentieth century, isn’t a coherent or congenial home for human psyches. — Jonathan Lethem, “Postmodernism as Liberty Valance: Notes on an Execution”

This.

7. Have you ever stood in a swimming pool at such a depth that you had to tilt your head back and look at the sky in order to breathe, and even then water kept getting in your mouth and you found yourself wondering if this was a good idea and if maybe it was too late to pull the abort switch?

8. If one were to have an interest in watching a novelist grapple with the feeling that modern life overwhelms identities, it might be valuable to read Dana Spiotta’s Eat the Document, which is about a woman who was once the sort of bullshit-hippie-idiot-terrorist that I’m glad ceased to be in vogue ten years before I was born: you know, the white children of privilege farting about with pseudo-Marxism and blowing people up in the name of something, though just what has never really been clear to me. I guess it’s because I’m a GenXer and was born jaded that I find this kind of thing hard to sympathize with, but it strikes me as dangerously stupid to assume that there’s an ideology that’s going to cure society’s ills, and even stupider to assume that Marxism is it, but then I’m getting off track and anyway the principal narrator of Eat the Document has gone underground and sold out or bought in or whatever ridiculous thoughtcrime growing up is meant to be, and her past comes back to haunt her. Unlike the Lethem essay or the Tallis review or whatever, I cannot simply quote a passage of the book and say “this, I believe this,” because on some level the book buys into a concept of authenticity that I just don’t believe in, i.e., it seems to me that people are what they do and the attitude they hold when they do it or the, I don’t know, cultural background of their upbringing or whatever other fundamentally irrelevant data you want to bring into the equation doesn’t really matter. On the back cover, in my notes on the book, I wrote, The fetishization of authenticity results from a pointed anxiety about one’s own lack of it. It’s a kind of conservatism. So that — that’s what I think about that, I guess.

9. And then I made the mistake of reading John Gardner’s On Moral Fiction, which, wow, what a piece of shit that book is. Some of this reaction has to do with the fact that Gardner was a small-time novelist taking arrogant pot-shots at people who were vastly superior writers (anyone who dismisses Kurt Vonnegut out of hand, especially when allegedly thinking about how to deal with morality in fiction, pretty much goes straight to the bottom of my shit list). But more of it has to do with the intellectual straitjacket that Gardner tries to fit on society, dismissing postmodernists as glib and “commercial”, which I guess might have some merit, but then when he talks about what a book or work of art or whatever is supposed to get up to, he makes these vast generalizations that stand on a foundation of pure hot air, to wit: “True art is by its nature moral. We recognize true art by its careful, thoroughly honest search for and analysis of values.” Do I really have to explain why this is total faff? Aside from the fact that it’s reactionary and dangerously dismissive of what one might call the great polyphony of world tradition, it’s also the kind of big, baseless assertion that lies in the crumbly fundaments of religions (of which, by the way, Gardner was dismissive, somehow not quite seeing the irony there). This is the kind of thing that sounds stern and brave but is really just an odoriferous belch in the face of the challenges of modern life. It also precludes what I think of as the diagnostic role of some art in society: I once had a conversation with a woman I was dating about the state of the newspaper industry, which was then in the early stages of the death throes that continue to drag down a few papers a year; she wanted to know what I thought about, well, that, and I gave her my honest opinion: that the age of print media was basically over and in a few years we would live in a thunderous echo chamber made up mostly of tiny niche websites that would help us live in the self-organized feedback loops of corrupt data. She wanted to know, given such a dire prognosis, what I thought should be done about it. And I said, Nothing. It’s going to happen and we can’t stop it. This made her very angry and she said there was no point in having an opinion about something if you don’t have an opinion about how to change it for the better, which I find to be a completely ridiculous way of looking at the world, which I told her, and then I said, Sometimes the patient just has terminal cancer. She really didn’t like that. A few months later she moved to California and I didn’t go with her.

10. It is necessary these days to have one’s perimeter well-defended against bad data. There was recently a story that raced around social media among a certain stripe of conservative, which concerned a former Marine attending a college course taught by an atheistical professor who blasphemed loudly and demanded to know where God was to strike him down. The Marine then gets up, assaults his professor, and says, “God’s busy looking after our men and women who are out defending our freedoms, so I stood in for Him” [sic]. The sickly irony aside, this story is obviously a lie, and I suspect that many people who shared it around didn’t have any illusions as to its factuality. But it confirmed the way they thought about the world: professors and atheists bad; soldiers and Xians good. It may not have been factual, but it was true, as far as they were concerned. Encountering this on the Twitter and Facebook feeds of my more conservative relatives drove me crazy; but more pernicious by far, at least in the Life of Liberal Joseph, is the mirror image of that story, one in which the atheist is tolerant and triumphant, and the right-wing macho man is served justice. Such stories exist, I am certain of it. But I may be too blinded by my biases to properly ferret them out.

11. Liberals loved Nate Silver as long as he was reassuring them that Mitt Romney wasn’t much of a threat to Barack Obama. Those feelings have become much more complex now that he seems to think the Senate will flip red this fall.

12. Change your passwords. They know.

13. Yeah, but who are they?

14. THE SYSTEM IS BLINKING RED THE SYSTEM IS BLINKING THE SYSTEM IS THE SYSTEM

15. What was I driving at? Oh, right — there does seem to be a semi-radical consensus going around that the way we live now is somehow difficult to take, in a way that it didn’t use to be. I’m not sure I find that particularly persuasive (living in the age of data is certainly not worse than living, for instance, in wartime Europe, or Soviet Russia during the famines, or really medieval anywhere), but does it seem to anybody else that we are all somehow far from home? I think I’m comfortable stipulating that the way we live is qualitatively difficult in a different way, in that there is a shattered, unfocussed, drowning quality to day-to-day life (combined with a stultifying unstimulated stillness of the body); but I’m not prepared to say that life is more difficult to tolerate than it used to be. I think the feeling I’m trying to describe, which is nebulous and which I don’t completely understand myself, comes from an intolerable clash: there is a core identity to each of us, and it is struggling to combat &/or process an oceanic amount of input in order to fashion a self. We live in postmodern times but do not possess a postmodern I, in the convenient, destabilized, meaningless, ultimately quite wrong way that Lacan and his many acolytes, students, scholars, fellow-travelers and dipshits would have us believe.

16. But what does this mean? Should you read The Information? Yes, I suppose you probably should, if my experience of it — that it was accessible, fascinating, and completely full of thoughts that seemed new to me — is one that can be generalized. Should you read Eat the Document? That’s a more complicated question. Eat the Document is about how the shredded remains of a life story cannot be completely disregarded or disposed of, though its execution sometimes seems more intellectually sound than — what’s the word? — oh, satisfying, that old critic’s saw. It’s a collage of voices and sorts of documents, which is interesting; but ultimately Spiotta fails properly to inhabit all the voices she’s telling her story in: a teenage boy writes in much the same way that his 40-something mother does, and it’s a problem. Her next novel, Stone Arabia, avoids this, sort of, by narrating itself in a weird amalgam of the first and third persons, so the voice makes more sense, but the book ends arbitrarily and is probably a hundred pages too short for its own good (not a common complaint in this day and age, but there you have it). Should you read On Moral Fiction? Yes, if you’re an aesthetic reactionary, or maybe if you’re participating in a bit of ancestor-slaying like what I’m doing here; otherwise, no, of course not, it’s a silly book, overfull of generalizations about what art is and what it’s for and why one should pursue it and . . . I don’t know, it didn’t make a lot of sense to me. Maybe if I annoy you, if after reading this little bit of post-post-(post)-modernist dithering you find everything I have to say stupid and silly and objectionable (an entirely reasonable reaction, if you ask me, as I am a bit of a pain in the ass), then it’s the book for you, and you should go off and buy it and read it and write modernist, realist, moralist fiction that my friends and I can sneer at and write dismissive reviews of and then send links to said reviews to one another via our Twitter accounts and then you guys can review what we write and poke fun and nobody will listen to anybody else and we’ll all just live as one big unhappy family in a choking atmosphere of self-arranged, self-reinforcing data that can mean whatever we want it to mean. I don’t know, guys. Who am I to tell you what to do?

Of Computing.

In continuing to put off part two of my DFW essay, I offer some thoughts on computers & computing. Follow links for footnotes.

***

1. What Is a Computer For?

The first computer I ever owned was a little black-and-white Macintosh that my mother brought home from some far-off electronics store in the suburbs. It was about the size of a fish tank set on its side,[1] and had a screen not much larger than a grown man’s hand. I was thirteen. I took it into my bedroom and set it up on a spare dining chair that had been mouldering in the darkest corners of the basement. Then, for a few weeks, it sat, unused: what is a computer for? I had learned to type on an electric typewriter as a child. I had played rudimentary games on an Apple IIe at school. They had keyboards and screens. I remember wishing I could attach an antenna to it and watch TV.

Eventually I discovered a few games hidden deep in its files, simple things like Cannon Fodder, which involved an extremely abstract simulacrum of wartime artillery — no more than blips floating across the screen, a game that boiled down to an intuitive sense of geometry. I procured ClarisWorks, that long-ago ancestor of Pages, the word processor I’m using now, some nineteen years later, to write this. That’s what a computer was for: it was a video game system and a typewriter that you could keep on a wood-backed chair next to your bed. I never crunched a number with it, and certainly never communicated with a human being beyond the bounds of my bedroom using it, except inasmuch as I might have written and then printed out a letter that I later put in the mail. Its chief bonus over a typewriter was that I didn’t have to keep track of reams of paper if I wrote a story on it.

The Online Etymology Dictionary entry for “computer” reads as follows:

computer (n.)

1640s, “one who calculates,” agent noun from compute. Meaning “calculating machine” (of any type) is from 1897; in modern use, “programmable digital electronic computer” (1945; theoretical from 1937, as Turing machine). ENIAC (1946) is usually considered the first.

There’s more, but that’s the relevant bit. ENIAC is a hyperlink which leads to a curious entry in the Online Etymology Dictionary (“acronym from ‘electronic numeral integrator and computer,’ device built 1946 at the University of Pennsylvania …” — it contains no actual etymological data of value, and beyond that defines a word not in use as anything other than a proper noun for a single instantiation of a device). At the bottom of the entry for “computer” is the following citation: “WASHINGTON (AP) — A New York Congressman says the use of computers to record personal data on individuals, such as their credit background, ‘is just frightening to me.’ [news article, March 17, 1968]”.

So what is a computer for? When it was a human being, it was a person who did mathematical operations. The initial serious gestures at taking this activity out of fallible human hands were not actually given the same name; Charles Babbage, who labored at many curious pursuits, designed a theoretical “Difference Engine” whose purpose was to produce arithmetical tables free of the errors that plagued the hand-calculated kind — but before long, in the mechanized, optimistic Victorian age, the device began to seem real enough that it no longer wanted a strange, capitalized name, and it became the same mundane thing a man with a slate and an abacus was. The first successful automated computing machines were built, not coincidentally, in the early 1940s — Germany had the programmable Z3, the British had Alan Turing’s code-breaking Bombe, and the Americans were hard at work on ENIAC. These computers did math, usually in the service of breaking military codes or calculating artillery tables. They computed, quite literally.

In order to do this, they had to have memory — to perform the several steps of a complicated calculation, a computer had to be able to “remember”, so to speak, the steps that had come before and the ones that were to come. By 1968, the memory function had become at least as important as the mathematical one — and, just as computers had been weaponized as targeting systems, code makers, and code breakers, the memory now became something weird and frightening, a threat to privacy. The nature of a computer was changing, growing: memory was more than math.
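(A trivial modern sketch of what that “remembering” amounts to — mine, not anything from the 1940s: even grinding out a single logarithm by series means holding a running total and the previous term between steps.)

```python
# ln(x) via the series 2 * (y + y^3/3 + y^5/5 + ...), with y = (x-1)/(x+1).
# Each pass through the loop depends on state carried over from the last one:
# the machine has to "remember" where it is in the calculation.
def natural_log(x, terms=60):
    y = (x - 1) / (x + 1)
    total = 0.0          # remembered running sum
    power = y            # remembered current odd power of y
    for k in range(terms):
        total += power / (2 * k + 1)
        power *= y * y
    return 2 * total

print(natural_log(10.0))   # ~2.302585, i.e. ln(10)
```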

By 1993, when my parents gave me a (probably used) Macintosh Classic, memory had grown cheap, and our credit histories were stored on computers all right — but memory had been miniaturized, and brought into the home, and again the nature of the computer was changing: the memory machine had become a toy and a tool; it was not just a storage device, but something that interacted with a user, and had graphical outputs that made much more than math possible. Underlying this somewhere was still a programmable electronic machine that operated on a mathematical principle, but my little Macintosh had very little to do with Babbage’s Difference Engine, in size, shape, purpose, or use.

2. Of Copying.

Charles Babbage was a wealthy polymath, the Lucasian Professor of Mathematics at Cambridge, a train enthusiast, a compulsive cataloguer and taxonomist,[2] and probably the smartest man in the western world for a while in the 19th century. He was also, as noted above, an early advocate of the idea that automated mathematical calculations could remove human error from arithmetical tables, thereby increasing both safety and productivity for anyone who relied on such tables, train conductors included. Steampunk novels, including The Difference Engine by William Gibson and Bruce Sterling, often posit an alternate history in which Babbage’s Difference Engine, which proved to be too cumbersome and pricey for Victorian manufacture, was successfully built, spurring the kind of explosion in computing that occurred after Turing and others built their code machines.

Mathematics was not the only place where Babbage saw the utility of automation.[3] One of the places that his bizarre compulsion for cataloguing manifested itself was in the abstruse 1832 tome On the Economy of Machinery and Manufactures. In the chapter “Of Copying”, which is about “COPYING, taken in its most extensive sense” [viz., inclusive of things like bullets, bedframes and boxes], he delineates a theory of labor that fits nicely in the path that runs from Adam Smith to Karl Marx. He notices that, in such copying work, “the instrument or tool actually producing the work, shall cost five or even ten thousand times the price of each individual specimen of its power”, and seems to be trying to answer the question: how could this possibly be a workable economic model? How can we pay for an expensive machine by making things that we sell cheaply?

The answer is twofold: efficiency, and time. Both are encapsulated in a somewhat throwaway line at the end of his description of the printing of calico “by Cylinders”: “A piece of calico twenty-eight yards in length rolls through this press and is printed in four or five minutes”. Efficiency: a great deal of material can be processed, and so can be sold cheaply because sold in bulk, and thereby still pay for itself. Time: it can be done in four or five minutes, leaving a laborer with time to make more calico — or to go do something else, something inventive and important. (Or, no doubt, to cause a street nuisance.) Whereas many previous theories of labor operated on the assumption that there was a finite amount of work available to be done in the world, most increase in wealth coming from land and husbandry, Babbage imagined a universe in which economies expanded to meet the ever-multiplying abilities of their laborers and inventors. This has its antecedents in Smith’s The Wealth of Nations, in which the great dour man of the dismal science noticed, among other things, “[t]hat the industry which is carried on in towns is, every where in Europe, more advantageous than that which is carried on in the country”.
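(If you want the arithmetic made embarrassingly literal — the figures below are invented for illustration, not Babbage’s own: price the press at ten thousand pieces of calico and let it run all day.)

```python
# Toy answer to Babbage's question, with made-up numbers.
press_cost_in_pieces = 10_000      # the press costs 10,000x the price of one piece
minutes_per_piece = 5              # "printed in four or five minutes"
working_minutes_per_day = 10 * 60  # a ten-hour day at the press

pieces_per_day = working_minutes_per_day // minutes_per_piece
days_to_match_press_price = press_cost_in_pieces / pieces_per_day

print(f"{pieces_per_day} pieces a day")
print(f"output worth the price of the press after ~{days_to_match_press_price:.0f} working days")
# Ignoring labour and materials, a machine costing ten thousand times its
# product matches its own price in a few months of steady running:
# efficiency (volume) and time, just as Babbage says.
```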

I mention Smith here because he spends a fair amount of time in The Wealth of Nations (published in 1776) arguing in favor of a public education system, to combat an ill he sees as resultant from the very same manufacture that Babbage would, fifty-odd years later, so exhaustively catalogue:

In the progress of the division of labour, the employment of the far greater part of those who live by labour … comes to be confined to a few very simple operations… . The man whose whole life is spent in performing a few simple operations, of which the effects too are, perhaps, always the same … has no occasion to exert his understanding, or to exercise his invention.

The efficiencies of manufacture make city work dull, and this results in a dull populace. The time afforded by manufacture, however, allows for the education of the masses. Babbage begs Smith’s question in “Of Copying”.

Babbage spends a fair portion of “Of Copying” considering the kind of copying that feels familiar to a modern reader: the copying and distribution of text. This is, I think, not a coincidence: the copying of texts (and arithmetical tables) was an information technology, and a tool for education; Gutenberg, though he went broke, invented mass communication on a scale before incomprehensible, by figuring out how to copy texts quickly. The efficiencies of manufacture, public education for laborers, the incipient literacy of the working classes: information and its distribution, in the age of Babbage, were becoming hugely important.

Information and its distribution. What would a modern Babbage, in writing his “Of Copying”, come to say about the copying of text? Because something weird has happened to copying: the vanguard in Babbage’s day — movable type — has become déclassé. The very fact that it makes many, many copies, copies which take up space, and can be lost, and cost money to make and maintain, and take time to deliver,[4] is now seen as a drawback. Instead, a descendant of Babbage’s theoretical Difference Engine has made it so that, in fact, there need only be one copy of many texts, one copy that can be accessed, in theory, infinitely, by infinitely many people with computers: in order for “Of Copying” to be available in public education, one need not print it out thousands upon thousands of times, but simply copy it once into Google Books. The efficiencies have been turned on their heads.

3. ARPANET: Mind as Internet.

Not long after that New York congressman quoted in the Online Etymology Dictionary entry for “computer” expressed his dismay at the thought of computers as giant, intrusive memory machines, Science and Technology Magazine published an article by J.C.R. Licklider and Robert W. Taylor entitled, “The Computer as a Communication Device”. It begins with a bold assertion: “In a few years, men will be able to communicate more effectively through a machine than face to face”. They then go on to detail a meeting “held through a computer”, in which “the group accomplished with the aid of a computer what normally might have taken a week.” No matter that they were all in the same room; they might well have been on opposite sides of the planet.

Taylor was, at the time, director of the computer research program at the U.S. Defense Department’s Advanced Research Projects Agency [ARPA]. Licklider was a psychologist and electrical engineer, Taylor’s predecessor at ARPA, and the first head of ARPA’s Behavioral Sciences and Command and Control (read: real-time decision-making) programs. The team that Licklider assembled would be nicknamed the Intergalactic Computer Network, dedicated largely to what they called “communications engineering”, and to a radical new idea of what communication meant, and what it did:

We believe that communicators have to do something nontrivial with the information they send and receive. And we believe that we are entering a technological age in which we will be able to interact with the richness of living information — not merely in the passive way that we have become accustomed to using books and libraries, but as active participants in an ongoing process, bringing something to it through our interaction with it, and not simply receiving something from it by our connection to it.

“The richness of living information”: they believed that computers were not memory banks, but minds, or at the very least a physical extension of human minds. Though the idea of information as being alive and powerful has now become conventional, at the time it may have been difficult to conceptualize: information came, and would continue to come for most people and for some time, in an inert form, as ink on a page, with all the connotations of permanence and authority that that form still holds.

They were talking about interactive computing, and they were inventing ARPAnet, the precursor to the internet that went live in late 1969. Licklider was long gone by the time the switch was thrown, and Taylor would soon leave too, for Xerox PARC, which invented the graphic interface appropriated by Steve Jobs and now used by nearly every computer on the face of the planet. But together they had begun yet another quantum shift in what a “computer” was.

What’s a computer for? In 1968, in the popular imagination, it was a giant machine that calculated telemetry, plotted routes for nuclear missiles, and maybe — just maybe — collected information of a sort that we weren’t totally sure we wanted collected all in one place. The popular imagination, as it always has in this area, lagged: computers had been memory machines for a long time. And it isn’t that they ceased to be memory machines the day Licklider and Taylor had their staff meeting in a room with a bunch of networked computers, collaborating to make sure that they were thinking similar things — what they call “modelling” in their paper. But they became something else, too. When people could remotely access that memory, change it and leave it different, computers suddenly became wildly more powerful: they not only stored our thoughts for us, but projected them outward into the world, opened them up for discussion. The possibilities were endless.

4. The Information.

In the 1992 film Sneakers, Robert Redford and Ben Kingsley play aging former hackers, men who were in on the ground floor of those networks that Taylor and Licklider were writing about and inventing. Twenty years on, they have discovered themselves in conflict. In the climactic scene, Kingsley’s villainous character, Cosmo, delivers a thundering bad-guy-speech: “There’s a war out there, old friend. A world war. And it’s not about who’s got the most bullets. It’s about who controls the information. What we see and hear, how we work, what we think — it’s all about the information!”

This film came out about a year before I got my first computer, and though I loved it dearly — I must have seen it seven or eight times in my teenage years — I had no way of conceptualizing what that climactic speech was about. Who controls the information? Information is controllable? Even after I got a computer, it didn’t make a hell of a lot of sense to me. That thing didn’t have any information on it. It had a game called Cannon Fodder. Eventually it had a bunch of half-finished short stories. I wasn’t sure any of that qualified as information.[5] And how could you have a war over any of it?

Years later, as a sophomore in college, I first plugged a computer into a fast internet line, and I understood. At Pomona College in the late 1990s, there was an excess of access to what we then called the Information Superhighway:[6] we had a T3 line, enough bandwidth for a huge company in the business of web hosting, dedicated to 1500 undergraduates with nothing better to do than download music and send IMs. There wasn’t that much to do on the internet in those days: there was no YouTube, no Facebook, and yet there was so much — and without Google, it all seemed undifferentiated, hard to find, difficult to access. How did we hear about new websites in those days, without Reddit and Twitter? I honestly cannot remember.

But still. The deluge: within weeks I had found a baseball chat board, where I learned a great deal — and more than that, probably above and beyond anything that Licklider or Taylor could have imagined back when they were operating in obscurity (if not secret), I made friends and enemies. There were strange outcroppings of internetual rock, dissertations on the subjects of role playing games I had never heard of, fake people telling obvious lies for reasons unknown, libertarians living in socialist Scandinavia, and more pictures of naked women than I ever could have imagined before I plugged in. James Gleick writes in his book The Information that many people considered early telegraphs — done visually, without electricity, but literally with smoke and mirrors — “a nervous system for the earth”. It seems astounding to say so: controlled and funded by Napoleon, the telegraph network was not “an instrument of knowledge or of riches, but … an instrument of power”. Is not the internet, and all its appurtenances, a much better model of a nervous system — an instrument of knowledge and wealth and power and … well, one hesitates to call videos of cats falling into bathtubs any of those things, but I suppose it is information of a variety.

What is a computer for? What does it do? I am not entirely convinced that the modern computer is anywhere near enough like its ancestors to be considered the same thing — the laptop I’m writing on now has little other than a binary system of storing information in common with Alan Turing’s Bombe, which did only math and occupied an entire room. Though Turing was always at pains to assert that the inherent essence of computing was not in the electricity but in the logic, it does seem that on some fundamental level things changed — possibly more than once — in the interim.

Turing’s Bombe and the Z3 and the ENIAC did math. They were programmable — they ran according to a system of rules and gave consistent results[7] — but what they did, in essence, was compute logarithms and probabilities. They had memory, but it lived in punch cards and vacuum tubes, easily lost, fundamentally no better as a system of organizing information than typing it onto a sheet, paper cuts, ink-stained fingers and all.

The computers that provoked anxiety in the congressman from New York, they did math, too — but they were memory machines, able to store and organize information in a way that was fundamentally different to how it was done in a file cabinet: this memory was searchable, accessible, and though it still had to go on disks it no longer involved a paper cut. Perhaps that’s what troubled the congressman from New York: not that someone might someday know his credit score, or those of his constituents; but that it would be stored in an automaton, and easily found — not forgotten. Such a process increases the alienation in the world. It renders human judgement increasingly moot.

The little Macintosh with the Cannon Fodder? By then we are so far removed from a giant, overheating machine designed to break codes that one begins to wonder if we’re really still dealing with the same beast as before. There are some principles in common, but no purposes that I can discern. Once we turn on the network — once the computers are talking to each other, and the people using them are talking through them — then, I believe, we have reached a point at which it is not really possible to assert with a straight face that what we have on our hands is really the same thing at all. I suppose it comes down to a philosophical question: what defines a thing, the rules by which it was built, or the use to which it is put? Because my computer does not do math. I use it to manipulate the rich store of living information, and to create more.