Red Team

In the course of my recent researches, I’ve come across a term that I rather like: red teaming. It refers to a practice in US wargames and intelligence, whereby a group of experts and spies will be broken into a blue team and a red team, with the blue team taking on the role of the US military and apparatuses of state, and the red team taking on the role of the enemy. Some of this is about trying to estimate enemy tactics, but its most vital function, from what I’ve read, is to highlight flaws in your own. Red teamers have to be creative thinkers, highly knowledgeable, and willing to detach themselves from the classic my-side biases they usually operate under. Most of the greatest intelligence failures in US history — notably 9/11 and Pearl Harbor — were, in some sense, failures of red teaming: we didn’t know where the weaknesses in our defenses were, or, inasmuch as we did, we weren’t worrying about them.*

*Another classic red team activity is that of the white hat hacker, who breaks through the defenses of a company or government department in an attempt to highlight their weaknesses. At the beginning of the Robert Redford classic Sneakers, Redford’s ragtag crew of techies are performing this kind of function for banks.

This term spoke to me because — if you’ll forgive the impertinence — I sometimes think I’m a born red-teamer, which can be frustrating in an age of curated information silos, unchecked motivated reasoning, and rampant confirmation bias. I’ve long thought of this as a form of contrarianism, though it’s really not that: I hold fairly standard lefty views on things like the welfare state, social justice, and the value of a polyethnic, polyphonous society. It’s that my instinct, when presented with people I agree with, is to ferret out the hypocrisies and weaknesses of their arguments — arguments which are, after all, often my own arguments. I realize that this might sound like a kind of bragging, and maybe it is; but in reality, this is not something I did, exactly. It’s just a habit of mind. I could spend a lot of time analyzing where it came from, psychologically, but it’s not that interesting, even to me. Suffice it to say that my brothers are both natural red-teamers, too, so it probably comes from our childhood and/or genetics, somehow.

Maybe this is just self-flattery, but I’ve come to believe that a lack of red-teaming is really plaguing liberal (or progressive, ugh, what a shitty word) thought these days. Everybody from the campus speech police to the activist base of the Democratic Party is suffering from a problem where their ideas aren’t troubleshot by smart people; in an environment where political ideas have become conflated with cultural identities, it seems to me that it’s really hard to have the kind of cross-political discussion that results in understanding where your arguments fall apart. This leads to an assumption that our ideas are inevitable, or obvious, or incontrovertible. (And before you get on the they’re-worse-than-us-about-this horse, I’m sure the right has this problem, too. But I’m not a conservative red-teamer, I’m a liberal red-teamer, so I don’t care about that.) It feeds self-righteousness, and I honestly think it’s part of the vicious cycle whereby politics and identity became conflated in the first place.

Let’s take, for example, the concept of privilege. I’m not going to sit here and tell you that the behaviors, incentives, and cultural forces that currently go by the name of privilege on the left, and in academia, don’t exist; that strikes me as patently absurd. Of course they do. The problem is that the term privilege is completely destructive. The word was loaded long before it became a byword of the social justice movement; being told that you were privileged was tantamount to being told that you were weak, that you had never earned a thing, that you were, in short, the bad guy in the story. That was before it got larded up with a bunch of complex stuff about race and gender. The word is judgmental; the word is mean. It makes people feel attacked. And that’s why going on and on about privilege is of extremely little value.

I can just hear your voice, dear blue-teamer, as you groan. I understand that instinct. There’s some blend of I don’t really care about hurting a bunch of white people’s feelings and Of course you think that, you’re a white dude in there. I’m not here to plead for the left to be nicer to white men, or at least not chiefly. I’m here to ask you what the term privilege achieves. Because I would posit that what it very distinctly does not achieve is an erosion of privilege, or the conversion of the privileged to liberal ideals. Instead, it (A) increases factionalism, and (B) alienates those who could be allies. This is about what it does in the mind of the person who wields the term, as well as the mind of the person at whom it is wielded.

As a for-instance, think about the reaction among some people on the left to the tragic case of Otto Warmbier, the American college student who was detained in North Korea, and recently died shortly after being released from that nation’s custody. Inasmuch as people were paying attention (I’ll confess that I wasn’t, really, and wouldn’t have been able to remember his name until he was released last week), most people expressed shock, and horror, along with incredulity about the North Korean government’s explanation for why Warmbier was detained (that he had stolen a government propaganda poster). But there was a distinct strain of thought among the insufferably woke segment of the left that basically said this: Warmbier’s white male privilege had led him to believe he could get away with anything, and this was his just deserts. (See Alyssa Rosenberg’s roundup for a decent compendium of a few of those reactions. And before you start attacking Rosenberg’s politics, dear blue-teamer, remember that she was an early product of that notorious incubator of reactionary politics known as . . . ThinkProgress.) This is the use of the privilege frame to reinforce the ugly politics of identity and difference. People who gloried in Warmbier’s arrest and sentence could not have known much more about him than that he was white, male, American, a member of a fraternity, and a student at the University of Virginia — one of America’s best public universities, no doubt, but also a classic stronghold of segregation and patriarchy. But to weaponize the idea of privilege in this way is, in fact, unfair, stupid, destructive, and ugly. Whether or not Warmbier was a member of a fraternity at a conservative school, the only thing that matters is that he’s a person, and whether or not it was his privilege that caused him to feel empowered to pull down a poster (and I’d argue that [A] we have no idea if he actually did that, and [B] teenagers of all races and genders are prone to doing things like that), it is unacceptable to dehumanize him using the privilege frame. And yet this kind of thing happens a lot. It’s rarely this egregious, but because privilege is a word that was so loaded to begin with, it often ends up as a tool of dehumanization.

The other idea is a little squidgier, and if your instinctive feeling is that you don’t care about offending a few white dudes, then you won’t cotton to it. But as a member of the red team, I have to tell you that there are a lot of intelligent people who could be made into allies, except for the fact that they feel attacked, and ultimately alienated, when the word privilege starts getting thrown around. The word feels like an attack. Hell, the word often is an attack, dressed in a posture of defense. And there are (at least) two things to be remembered here: (1) that human behavior is ruled by cognitive experience, not objective fact, and so if someone is told they’re privileged when they feel like they’re the opposite, they’re almost certainly going to react with hostility; and (2) that, shifting demographics aside, there are an awful lot of white men in this country, and if you call yourself progressive, i.e., what you seek is progress, you will get farther by describing the truths behind the concept of privilege without making white men feel under siege. I’m not trying to blackmail anyone into conceding to the will of the majority, or the historically powerful; I’m not asking people to kowtow to historical elites — I’m asking people to think about gains and losses, allies and enemies, and basic humanity. You should not willfully make enemies of those who might be allies. To do so by swinging around the club of privilege willy-nilly is just dumb. Sorry, blue team. This is one of your weaknesses.

Anyway, I hope you see what I’m saying. I am in no way contending that men, or white people, have not been systematically advantaged by cultural, social, and legal forces, more or less since the founding of the republic. I’m not here to tell you that affirmative action is evil or that we should stop putting people of color in action movies or positions of power or any of that stuff. I’m red-teaming this idea. I’m trying to see where its weak spots are, so we can make our arguments better.

There are a lot of ideas on the left that could use a little constructive red-teaming, by the way. I hadn’t actually intended to make this whole post about privilege, as a term, but as usual I got away from myself. I’d say that the idea that mounting more “progressive” candidates in House races, as a way to appeal to “the base”, is the way forward for Congressional Democrats, is definitely one of them. A nationwide $15 minimum wage is another. Allergy to globalism and free trade is yet another. The assumption that the implementation of a social welfare state would be win-win, if only we could get greedy Republicans to leave office. That the white working class is the key to the future of the Democratic Party. (That one, in particular, strikes me as not only deserving of a little red-teaming, but of total destruction.) The list goes on.

Ugh. Now I’m tired. I wrote this whole thing in less than an hour. ZZZzzzzzzZZZzzzzzz . . . sorry. I don’t have the energy for an artful conclusion.

Of Computing.

In continuing to put off part two of my DFW essay, I offer some thoughts on computers & computing. Follow links for footnotes.

***

1. What Is a Computer For?

The first computer I ever owned was a little black-and-white Macintosh that my mother brought home from some far-off electronics store in the suburbs. It was about the size of a fish tank set on its side,[1] and had a screen not much larger than a grown man’s hand. I was thirteen. I took it into my bedroom and set it up on a spare dining chair that had been mouldering in the darkest corners of the basement. Then, for a few weeks, it sat, unused: what is a computer for? I had learned to type on an electric typewriter as a child. I had played rudimentary games on an Apple IIe at school. They had keyboards and screens. I remember wishing I could attach an antenna to it and watch TV.

Eventually I discovered a few games hidden deep in its files, simple things like Cannon Fodder, which involved an extremely abstract simulacrum of wartime artillery — no more than blips floating across the screen, a game that boiled down to an intuitive sense of geometry. I procured ClarisWorks, that long-ago ancestor of Pages, the word processor I’m using now, some nineteen years later, to write this. That’s what a computer was for: it was a video game system and a typewriter that you could keep on a wood-backed chair next to your bed. I never crunched a number with it, and certainly never communicated with a human being beyond the bounds of my bedroom using it, except inasmuch as I might have written and then printed out a letter that I later put in the mail. Its chief bonus over a typewriter was that I didn’t have to keep track of reams of paper if I wrote a story on it.

The Online Etymology Dictionary entry for “computer” reads as follows:

computer (n.)

1640s, “one who calculates,” agent noun from compute. Meaning “calculating machine” (of any type) is from 1897; in modern use, “programmable digital electronic computer” (1945; theoretical from 1937, as Turing machine). ENIAC (1946) is usually considered the first.

There’s more, but that’s the relevant bit. ENIAC is a hyperlink which leads to a curious entry in the Online Etymology Dictionary (“acronym from ‘electronic numeral integrator and computer,’ device built 1946 at the University of Pennsylvania …” — it contains no actual etymological data of value, and beyond that defines a word not in use as anything other than a proper noun for a single instantiation of a device). At the bottom of the entry for “computer” is the following citation: “WASHINGTON (AP) — A New York Congressman says the use of computers to record personal data on individuals, such as their credit background, ‘is just frightening to me.’ [news article, March 17, 1968]”.

So what is a computer for? When it was a human being, it was a person who did mathematical operations. The initial serious gestures at taking this activity out of fallible human hands were not actually given the same name; Charles Babbage, who labored at many curious pursuits, designed a theoretical “Difference Engine” whose purpose was to produce accurate arithmetical tables to coordinate trains — but before long, in the mechanized, optimistic Victorian age, the device began to seem real enough that it no longer wanted a strange, capitalized name, and it became the same mundane thing a man with a slate and an abacus was. The first successful programmable and fully automated computers were built, not coincidentally, in the early 1940s — Germany had the Z3, the British had Alan Turing’s Bombe, the Americans were hard at work on ENIAC. These computers did math, usually in the service of making or breaking military codes. They computed, quite literally.

In order to do this, they had to have memory — to perform the several steps of a complicated calculation, a computer had to be able to “remember”, so to speak, the steps that had come before and the ones that were to come. By 1968, the memory function had become at least as important as the mathematical one — and, just as computers had been weaponized as targeting systems, code makers, and code breakers, the memory now became something weird and frightening, a threat to privacy. The nature of a computer was changing, growing: memory was more than math.

By 1993, when my parents gave me a (probably used) Macintosh Classic, memory had grown cheap, and our credit histories were stored on computers all right — but memory had been miniaturized, and brought into the home, and again the nature of the computer was changing: the memory machine had become a toy and a tool; it was not just a storage device, but something that interacted with a user, and had graphical outputs that made much more than math possible. Underlying this somewhere was still a programmable electronic machine that operated on a mathematical principle, but my little Macintosh had very little to do with Babbage’s Difference Engine, in size, shape, purpose, or use.

2. Of Copying.

Charles Babbage was a wealthy polymath, the Lucasian Professor of Mathematics at Cambridge, a train enthusiast, a compulsive cataloguer and taxonomist,[2] and probably the smartest man in the western world for a while in the 19th century. He was also, as noted above, an early advocate of the idea that automated mathematical calculations could remove human error in arithmetical tables, thereby increasing both safety and productivity for train conductors, among other things. Steampunk novels, including William Gibson and Bruce Sterling’s The Difference Engine, often posit an alternate history in which Babbage’s Difference Engine, which proved to be too cumbersome and pricey for Victorian manufacture, was successfully built, spurring the kind of explosion in computing that occurred after Turing and others built their code machines.

Mathematics was not the only place where Babbage saw the utility of automation.[3] One of the places where his bizarre compulsion for cataloguing manifested itself was in the abstruse 1832 tome On the Economy of Machinery and Manufactures. In the chapter “Of Copying”, which is about “COPYING, taken in its most extensive sense” [viz., inclusive of things like bullets, bedframes and boxes], he delineates a theory of labor that fits nicely in the path that runs from Adam Smith to Karl Marx. He notices that, in such copying work, “the instrument or tool actually producing the work, shall cost five or even ten thousand times the price of each individual specimen of its power”, and seems to be trying to answer the question: how could this possibly be a workable economic model? How can we pay for an expensive machine by making things that we sell cheaply?

The answer is twofold: efficiency, and time. Both are encapsulated in a somewhat throwaway line at the end of his description of the printing of calico “by Cylinders”: “A piece of calico twenty-eight yards in length rolls through this press and is printed in four or five minutes”. Efficiency: a great deal of material can be processed, and so can be sold cheaply because sold in bulk, and thereby still pay for itself. Time: it can be done in four or five minutes, leaving a laborer with time to make more calico — or to go do something else, something inventive and important. (Or, no doubt, to cause a street nuisance.) Whereas many previous theories of labor operated on the assumption that there was a finite amount of work available to be done in the world, most increase in wealth coming from land and husbandry, Babbage imagined a universe in which economies expanded to meet the ever-multiplying abilities of their laborers and inventors. This has its antecedents in Smith’s The Wealth of Nations, in which the great dour man of the dismal science noticed, among other things, “[t]hat the industry which is carried on in towns is, every where in Europe, more advantageous than that which is carried on in the country”.
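To put rough numbers on that logic (the figures here are mine, invented purely for illustration; Babbage gives no such worked example in the passage), the arithmetic of copying reduces to a simple break-even calculation:

break-even copies = cost of the machine ÷ (price of each copy − cost of making each copy)

Say the cylinder press costs £5,000 and each piece of calico sells for a pound, fifteen shillings of which goes to cloth, dye, and the man minding the press. That leaves five shillings a piece toward the machine, so the press pays for itself after 20,000 pieces; at four or five minutes per twenty-eight-yard roll, that is roughly 1,500 hours of printing, well under a year of steady running. The expensive tool earns its keep precisely because it is almost never idle.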

I mention Smith here because he spends a fair amount of time in The Wealth of Nations (published in 1776) arguing in favor of a public education system, to combat an ill he sees as resulting from the very same manufacture that Babbage would, fifty-odd years later, so exhaustively catalogue:

In the progress of the division of labour, the employment of the far greater part of those who live by labour … comes to be confined to a few very simple operations… . The man whose whole life is spent in performing a few simple operations, of which the effects too are, perhaps, always the same … has no occasion to exert his understanding, or to exercise his invention.

The efficiencies of manufacture make city work dull, and this results in a dull populace. The time afforded by manufacture, however, allows for the education of the masses. Babbage begs Smith’s question in “Of Copying”.

Babbage spends a fair portion of “Of Copying” considering the kind of copying that feels familiar to a modern reader: the copying and distribution of text. This is, I think, not a coincidence: the copying of texts (and arithmetical tables) was an information technology, and a tool for education; Gutenberg, though he went broke, invented mass communication on a scale before incomprehensible, by figuring out how to copy texts quickly. The efficiencies of manufacture, public education for laborers, the incipient literacy of the working classes: information and its distribution, in the age of Babbage, were becoming hugely important.

Information and its distribution. What would a modern Babbage, in writing his “Of Copying”, come to say about the copying of text? Because something weird has happened to copying: the vanguard in Babbage’s day — movable type — has become déclassé. The very fact that it makes many, many copies, copies which take up space, and can be lost, and cost money to make and maintain, and take time to deliver,[4] is now seen as a drawback. Instead, a descendant of Babbage’s theoretical Difference Engine has made it so that, in fact, there need only be one copy of many texts, one copy that can be accessed, in theory, infinitely, by infinitely many people with computers: in order for “Of Copying” to be available in public education, one need not print it out thousands upon thousands of times, but simply copy it once into Google Books. The efficiencies have been turned on their heads.

3. ARPANET: Mind as Internet.

Not long after that New York congressman quoted in the Online Etymology Dictionary entry for “computer” expressed his dismay at the thought of computers as giant, intrusive memory machines, Science and Technology Magazine published an article by J.C.R. Licklider and Robert W. Taylor entitled, “The Computer as a Communication Device”. It begins with a bold assertion: “In a few years, men will be able to communicate more effectively through a machine than face to face”. They then go on to detail a meeting “held through a computer”, in which “the group accomplished with the aid of a computer what normally might have taken a week.” No matter that they were all in the same room; they might well have been on opposite sides of the planet.

Taylor was, at the time, director of the computer research program at the U.S. Defense Department’s Advanced Research Projects Agency [ARPA]. Licklider was a psychologist and electrical engineer and Taylor’s predecessor at ARPA, where he had headed the Behavioral Sciences and Command and Control (read: real-time decision-making) departments. The team that Licklider assembled would be nicknamed the Intergalactic Computer Group, dedicated largely to what they called “communications engineering”, and to a radical new idea of what communication meant, and what it did:

We believe that communicators have to do something nontrivial with the information they send and receive. And we believe that we are entering a technological age in which we will be able to interact with the richness of living information — not merely in the passive way that we have become accustomed to using books and libraries, but as active participants in an ongoing process, bringing something to it through our interaction with it, and not simply receiving something from it by our connection to it.

“The richness of living information”: they believed that computers were not memory banks, but minds, or at the very least a physical extension of human minds. Though the idea of information as being alive and powerful has now become conventional, at the time it may have been difficult to conceptualize: information came, and would continue to come for most people and for some time, in an inert form, as ink on a page, with all the connotations of permanence and authority that that form still holds.

They were talking about interactive computing, and they were inventing ARPANET, the precursor to the internet, which went live at the end of 1969. Licklider would be gone by the time the switch was thrown, and Taylor would soon leave for Xerox, which invented the graphic interface appropriated by Steve Jobs and now used by nearly every computer on the face of the planet. But they had begun yet another quantum shift in what a “computer” was.

What’s a computer for? In 1968, in the popular imagination, it was a giant machine that calculated telemetry, plotted routes for nuclear missiles, and maybe — just maybe — collected information of a sort that we weren’t totally sure we wanted collected all in one place. The popular imagination, as it always has in this area, lagged: computers had been memory machines for a long time. And it isn’t that they ceased to be memory machines the day Licklider and Taylor had their staff meeting in a room with a bunch of networked computers, collaborating to make sure that they were thinking similar things — what they call “modelling” in their paper. But they became something else, too. When people could remotely access that memory, change it and leave it different, computers suddenly became wildly more powerful: they not only stored our thoughts for us, but projected them outward into the world, opened them up for discussion. The possibilities were endless.

4. The Information.

In the 1992 film Sneakers, Robert Redford and Ben Kingsley play aging former hackers, men who were in on the ground floor of those networks that Taylor and Licklider were writing about and inventing. Twenty years on, they have discovered themselves in conflict. In the climactic scene, Kingsley’s villainous character, Cosmo, delivers a thundering bad-guy-speech: “There’s a war out there, old friend. A world war. And it’s not about who’s got the most bullets. It’s about who controls the information. What we see and hear, how we work, what we think — it’s all about the information!”

This film came out about a year before I got my first computer, and though I loved it dearly — I must have seen it seven or eight times in my teenage years — I had no way of conceptualizing what that climactic speech was about. Who controls the information? Information is controllable? Even after I got a computer, it didn’t make a hell of a lot of sense to me. That thing didn’t have any information on it. It had a game called Cannon Fodder. Eventually it had a bunch of half-finished short stories. I wasn’t sure any of that qualified as information.[5] And how could you have a war over any of it?

Years later, as a sophomore in college, I first plugged a computer into a fast internet line, and I understood. At Pomona College in the late 1990s, there was an excess of access to what we then called the Information Superhighway:[6] we had a T3 line, enough bandwidth for a huge company in the business of web hosting, dedicated to 1500 undergraduates with nothing better to do than download music and send IMs. There wasn’t that much to do on the internet in those days: there was no YouTube, no Facebook, and yet there was so much — and without Google, it all seemed undifferentiated, hard to find, difficult to access. How did we hear about new websites in those days, without Reddit and Twitter? I honestly cannot remember.

But still. The deluge: within weeks I had found a baseball chat board, where I learned a great deal — and more than that, probably above and beyond anything that Licklider or Taylor could have imagined back when they were operating in obscurity (if not secret), I made friends and enemies. There were strange outcroppings of internetual rock, dissertations on the subjects of role playing games I had never heard of, fake people telling obvious lies for reasons unknown, libertarians living in socialist Scandinavia, and more pictures of naked women than I ever could have imagined before I plugged in. James Gleick writes in his book The Information that many people considered early telegraphs — done visually, without electricity, but literally with smoke and mirrors — “a nervous system for the earth”. It seems astounding to say so: controlled and funded by Napoleon, the telegraph network was not “an instrument of knowledge or of riches, but … an instrument of power”. Is not the internet, and all its appurtenances, a much better model of a nervous system — an instrument of knowledge and wealth and power and … well, one hesitates to call videos of cats falling into bathtubs any of those things, but I suppose it is information of a variety.

What is a computer for? What does it do? I am not entirely convinced that the modern computer is anywhere near enough like its ancestors to be considered the same thing — the laptop I’m writing on now has little other than a binary system of storing information in common with Alan Turing’s Bombe, which did only math and occupied an entire room. Though Turing was always at pains to assert that the inherent essence of computing was not in the electricity but in the logic, it does seem that on some fundamental level things changed — possibly more than once — in the interim.

Turing’s Bombe and the Z3 and the ENIAC did math. They were programmable — they ran according to a system of rules and gave consistent results[7] — but what they did, in essence, was compute logarithms and probabilities. They had memory, but it lived in punch cards and vacuum tubes, easily lost, fundamentally no better as a system of organizing information than typing it onto a sheet, paper cuts, ink-stained fingers and all.

The computers that provoked anxiety in the congressman from New York, they did math, too — but they were memory machines, able to store and organize information in a way that was fundamentally different to how it was done in a file cabinet: this memory was searchable, accessible, and though it still had to go on disks it no longer involved a paper cut. Perhaps that’s what troubled the congressman from New York: not that someone might someday know his credit score, or those of his constituents; but that it would be stored in an automaton, and easily found — not forgotten. Such a process increases the alienation in the world. It renders human judgement increasingly moot.

The little Macintosh with the Cannon Fodder? One grows so far removed from a giant, overheating machine designed to break codes that one begins to wonder if we’re really still dealing with the same beast as before. There are some principles in common, but no purposes that I can discern. Once we turn on the network — once the computers are talking to each other, and the people using them talking through them, then, I believe, we have reached a point at which it is not really possible to assert with a straight face that what we have on our hands is really the same thing at all. I suppose it comes down to a philosophical question: what defines a thing, the rules by which it was built, or the use to which it is put? Because my computer does not do math. I use it to manipulate the rich store of living information, and to create more.