"Technology has been turned against humanity's very survival"


"Today, the revolutionary subject is under siege and in doubt: beset by an algorithmically fragmented reality and an intensely managed digital control."

In Revolutionary Mathematics, Justin Joque carefully extricates the logic of capitalist production from the abstract shape of the digital algorithm, debunking the notion that the algorithmic systems that now govern our everyday lives are objective arbiters of truth rather than the latest subjective machinations of our political economy, with all its extant violence and division. In this crisis of knowledge and epistemological alienation, Joque proposes that the greatest pitfalls of machine learning may also hold the key to its greatest promises.

REVOLUTIONARY MATHEMATICS is now 30% off on our website 

In a short article in Wired, Chris Anderson, a Silicon Valley evangelist made rich by his undying faith in machine learning technology, spells out the theoretical implications of this development. “Today companies like Google, which have grown up in an era of massively abundant data, don’t have to settle for wrong models,” he writes. “Indeed, they don’t have to settle for models at all.” He continues, repeating a realization that Hegel noted long before modern computation: at a certain point a change in quantity becomes a change in quality. 

The Petabyte Age is different because more is different . . . At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later. For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn’t pretend to know anything about the culture and conventions of advertising—it just assumed that better data, with better analytical tools, would win the day. And Google was right. Google’s founding philosophy is that we don’t know why this page is better than that one: If the statistics of incoming links say it is, that’s good enough.
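Anderson's gloss on Google's founding philosophy — ranking pages purely by the statistics of incoming links, with no model of why one page is better than another — can be sketched in a few lines. This is a hedged toy illustration, not Google's actual algorithm: the link graph and page names below are invented, and raw in-link counting stands in for PageRank, which weights links recursively rather than merely counting them.

```python
from collections import Counter

# A toy web: each (source, target) pair is one hyperlink.
# The graph itself is an invented example.
links = [
    ("a", "c"), ("b", "c"), ("d", "c"),
    ("a", "b"), ("c", "b"),
    ("b", "a"),
]

# "If the statistics of incoming links say it is, that's good enough":
# rank pages solely by how many links point at them, with no model of
# content, culture, or causality.
in_links = Counter(target for _, target in links)
ranking = sorted(in_links, key=in_links.get, reverse=True)

print(ranking)  # pages ordered by raw in-link count
```

The point of the sketch is what is absent: nothing in the computation knows anything about the pages themselves, which is exactly the "dimensionally agnostic" stance the passage describes.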

A “good enough” probabilistic world ousts those traditional Enlightenment models and theories of causality. While it is beyond doubt that the efficacy of machine learning, at least in certain applications, is impressive, these technologies and their ideological commitments are not as direct or unmediated as they are often made out to be. What Anderson presents as a direct apperception of the real is, in fact, mediated through decisions about how to construct models, how to avoid overfitting, and how to understand what those results tell us about the world.
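The mediation this passage points to — the decisions about model construction and overfitting that stand between data and "direct apperception" — is easy to make concrete. In the sketch below, everything is an invented illustration (the data, the noise level, the two polynomial degrees): the same points yield very different accounts of the world depending on a single human modeling choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: a simple linear trend plus noise.
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(scale=0.1, size=x.size)

# The "same data" supports different models depending on a human
# decision: here, the degree of the fitted polynomial.
errors = {}
for degree in (1, 9):
    coeffs = np.polyfit(x, y, degree)
    fitted = np.polyval(coeffs, x)
    errors[degree] = float(np.mean((fitted - y) ** 2))

# A degree-9 curve threads through all ten points (near-zero training
# error) yet misrepresents the underlying linear trend — a textbook
# case of overfitting. Which result counts as "the truth in the data"
# is decided by the modeler, not by the data alone.
print(errors)
```

The degree-9 fit always "wins" on training error; whether that win means anything is precisely the kind of interpretive decision Anderson's rhetoric conceals.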

The possible advantage to this correlational approach, as informatics scholar Geoffrey Bowker argues, “is that it avoids funneling our findings through vapid stereotypes. Thus, in molecular biology, most scientists do not believe in the categories of ethnicity—and are content to assign genetic clusters to diseases without passing through ethnicity (e.g., Kaposi’s sarcoma as initially a Jewish disease).” Yet, while many social categories are founded on questionable premises—race, gender, value, and so on—the histories and impacts of these categories are still highly important because “the world is structured in such a way as to make the categories have real consequences.” Although these categories are problematic, to do away with them completely would leave us unable to identify these real consequences.

Media theorist Wendy Hui Kyong Chun’s recent work on the concept of homophily—the love of the same—is especially insightful in this regard. Focusing on the use of network science as an analytical and computational technique, she argues that these systems push people toward those who are the same, creating digital spaces that are segregated along the multiple and intersecting vectors of social existence. And while these digital systems are more mobile and fluid than earlier forms, she states, “there are no random initial conditions.” These digital systems are built in societies with long and complicated histories of racial and gendered violence. Thus, it was “the rise of the modern concept of race during the era of Enlightenment; its centrality to colonization and slavery; its seeming zenith during the era of eugenics; [and] its transformations after World War II” and beyond that set the initial conditions for these systems and determine the social world they tend to reproduce.

Despite what Anderson claims, mathematics routed through machine learning is still, in the end, a form of mediated abstraction, and thus fundamentally intertwined with the development of our social, economic and political situation. These machine learning systems function in a way that is analogous—and, as we shall see, metaphysically tied—to capitalism: they move the locus of social domination from the material world into the abstract one of capital and probability, yet they do not oust history. In fact, because their aim is only to predict, they actively reproduce it. While these systems make some categories more fluid and open, they simultaneously work to solidify extant social systems: capitalism, racism, patriarchy and imperialism, among others. And they do so in ways that are potentially more insidious and harder to resist, presenting their outputs as objective facts. It is thus necessary to account for the metaphysical force of this objectification—something we can ascertain only by tracing the ways in which probability and statistics function socially and economically.