Digital

In the twenty-first century, we tend to associate the word “digital” with computation, but its origins hark back to ancient times. The term derives from digitus in classical Latin, meaning “finger,” and, later, from digit, which refers both to whole numbers less than ten and to fingers or toes. Digital procedures long predate the development of electronic computers, and we might understand a number of earlier devices or systems to operate by digital principles. For instance, the abacus is a simple digital calculator dating from 300 BC, while Morse code and Braille represent more recent digital practices. What each of these examples has in common—from fingers to digital computers—is a particular use of the digital to refer to discrete elements or to separate numbers. This focus on the discrete and the separate is central to the functioning of today’s digital electronics, which, at a basic level, operate by distinguishing between two values, zero and one. While the digital predates computation, today the two terms are closely linked, and the adjective “digital” is typically a shorthand for the binary systems that underpin computation. Thus, we are living through a “digital revolution,” are at risk of an increasing “digital divide,” and are plugged into “digital devices” that play “digital audio” and store our “digital photographs.” Some of us practice the “digital humanities.” The slippage between the digital and computation seems so complete that it is easy to assume that the two terms are synonymous.

Computers have not always been digital. In the early decades of modern computation, from the 1940s through the 1960s (and as we moved from mechanical to electrical machines), scientists were developing both analog and digital computers. Analog computers derived from earlier devices such as the slide rule. While the abacus used discrete beads to represent individual digits, the slide rule displayed a continuous scale. On an analog clock, time sweeps smoothly around a circular face; a digital clock represents time via discrete numbers, not as a continuous flow. Electronic analog computers functioned by analogy; that is to say, they built models of the problem to be solved and usually worked with continuous values rather than with the discrete binary states of digital computation. They converted the relationships between a problem’s variables into analogous relationships between electrical quantities (such as current and voltage). They were often used (and still are) to simulate dynamic processes such as flight and to model the physical world. Digital computers work differently. They process digital data as discrete units called bits, the zeroes and ones of binary code. A transistor in a digital computer has two states, on or off; a capacitor in an analog computer represents a continuous variable. The digital privileges the discrete and the modular; the analog represents continuity. As humans, we perceive the world analogically, as a series of continuous gradations of color, sound, and taste.
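
The contrast drawn here between continuous measurement and discrete counting is, at bottom, the logic of analog-to-digital conversion. The short Python sketch below (an illustration added for clarity, not an example from the essay’s sources) samples a smoothly varying signal and quantizes each sample into three bits, turning continuous gradation into the discrete zeroes and ones described above:

```python
import math

def digitize(value, bits=3, lo=-1.0, hi=1.0):
    """Quantize a continuous value into one of 2**bits discrete levels."""
    levels = 2 ** bits
    clamped = max(lo, min(hi, value))          # clamp to the representable range
    level = min(levels - 1, int((clamped - lo) / (hi - lo) * levels))
    return format(level, f"0{bits}b")          # the level as a binary string

# An "analog" signal: a continuous sine wave observed at discrete instants.
for step in range(8):
    t = step / 8
    analog = math.sin(2 * math.pi * t)         # continuous gradation
    print(f"t={t:.3f}  analog={analog:+.3f}  digital={digitize(analog)}")
```

The output makes the distinction legible: the analog column varies smoothly, while the digital column can occupy only eight discrete states.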

Historians of computation typically narrate the transition from analog to digital computing as a story of efficiency and progress. Such evolutionary accounts suggest that digital machines win out because they are more precise, have greater storage capacities, and are better general-purpose machines. These teleological schemes can make it hard to understand the many cultural, economic, and historical forces in play during any period of technological change. Much recent scholarship has attended to the specificity of the digital, defining its key features and forms (Wardrip-Fruin and Montfort 2003). Lev Manovich observes in his important book The Language of New Media (2001) that digital media can be described mathematically, are modular, and are programmable, that is, subject to algorithmic manipulation. He proposes that media and cultural studies should turn to computer science to understand the digital. General histories of computers and much new media theory tend toward evolutionary or formalist explanations for the emergence of the digital as the dominant computational paradigm, but we might also understand the shift as cultural and historical along a number of registers.

Instead of asking “what is the digital?” American studies and cultural studies might shift focus and ask, “how did the digital emerge as a dominant paradigm within contemporary culture?” Why, if we experience the world analogically, did we come to privilege machines that represent the world through very different methods? Scholars have begun to answer this question by highlighting the ways in which the move from analog to digital computing promoted notions of “universal” disembodied information while also concealing the computer’s own operations from view (Chun 2011; Fuller 2008; Galloway 2004; Hayles 2012; Lanier 2010). The ascendancy of digital computation exists in tight feedback loops with the rise of new forms of political organization post–World War II—including neoliberalism, a mode of economic organization that encourages strong private property rights, expansive free markets, and corporate deregulation—as well as with the rise of modern genetics (Chun 2011).

During this period, early developments in digital computing were also intertwined with shifting racial codes. The introduction of digital computer operating systems at midcentury installed an extreme logic of modularity and seriality that “black-boxed” knowledge in a manner quite similar to emerging logics of racial visibility and racism, the covert modes of racial formation described by sociologists Michael Omi and Howard Winant (1986/1994). An operating system such as UNIX (crucial to the development of digital computing) works by removing context and decreasing complexity; it privileges the modular and the discrete. Early computers, from the 1940s to the 1960s, had complex, interdependent designs that were premodular, but the subsequent development of digital computers and software depended on the modularity of UNIX and of languages such as C and C++. We can see at work here the basic contours of an approach to the world that separates object from subject, cause from effect, context from code. We move from measuring to counting and from infinite variation to the discrete digit. We move from the slide rule, which allowed the user to see problem, process, and answer all at once, to the digital calculator, which separated input from output, problem from answer. There is something particular to the very forms of the digital that encourages just such a separation (McPherson 2012a).
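
The modular, black-boxed logic described above can be glimpsed in miniature in any pipeline of small routines. The hypothetical Python fragment below (a schematic sketch added here, not McPherson’s own example or UNIX itself) composes three stages in the manner of a UNIX pipeline; each stage knows only its own input and output, and the caller sees problem and answer but never process or context:

```python
def read_words(text):
    # Stage 1: split raw text into tokens; it knows nothing of later stages.
    return text.lower().split()

def drop_short(words, minimum=4):
    # Stage 2: filter the stream, with no knowledge of where it came from.
    return [w for w in words if len(w) >= minimum]

def count(words):
    # Stage 3: reduce to a single number; process and context stay hidden.
    return len(words)

# Problem in, answer out; every intervening step is "black-boxed"
# behind a narrow interface, the separation the essay describes.
print(count(drop_short(read_words("from infinite variation to discrete digit"))))
```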

We may live in a digital age, and the privileged among us might feel closely connected to our digital devices, but the sensations we feel as we touch our keyboards and screens are analog feelings, rich in continuous input and gradations of the sensory. We must remember that the digital is embedded in an analog world even as it increasingly shapes what is possible within that world. “Digital” emerges from and references particular histories, and these histories have consequences. By examining how these histories came to be, we will better understand and, perhaps, shape our present.
