We are not actually doing what we say we do publicly.
– Internal Review at Facebook, 2019
In an era of rampant platforming, the deplatformed have emerged as a kind of internet legend, conspicuous by their absence. Over the course of the last five years or so, a nonrandom sample of relatively open-and-shut cases has become synonymous with deplatforming, a phenomenon as newfangled as it is misunderstood.
Like the deplatformed, who have become representative of platforming “done right,” often used to frame Silicon Valley as a site of creative destruction, the demonetized have become representative of platforming “gone awry,” often used to frame Big Tech as a force that is destroying creativity. Yet, beneath the various headlines of villainy and victimhood, both narratives revolve around the same protagonist: the platform, whose role is certainly different in degree, but not necessarily in kind.
In 1965, Doug Stewart published a paper entitled “A Platform with Six Degrees of Freedom,” which described and diagrammed the mechanics of an aircraft simulator – an inside look at an outward-facing machine. To this day, platforms are constructed according to the logic and logistics of visibility. In the words of media scholar Tarleton Gillespie: “Platforms are intricate, algorithmically-managed visibility machines.”
In light of this definition, the demonetized become but another casualty of deplatforming: creators whose accounts were not killed, but maimed. When no longer framed as an on/off binary, but rather a high/low spectrum of visibility, deplatforming becomes the natural and necessary flipside of platforming: the black mirror of the interfacial regime.
Like intergalactic space, the farthest reaches of infrastructure are often hidden from view: a horizontal no man’s land of plausible deniability. There is no angle from which one can capture the metro system within a single frame. As infrastructures of scale and speed hitherto unseen, platforms have come to preside over vast swathes of gray area: social, political, legal and moral. Akin to the stealthy and liminal rise of Uber, which baked its business model into the cracks of transport infrastructures around the world, the MATAMA collective – Meta, Alphabet, Twitter, Amazon, Microsoft, Apple – has come to rely on the unphotographable aspect of infrastructure.
Where’s the line? That’s for MATAMA to decide, undecide and redecide behind closed doors. Like mobile phones, which constitute the focus of Michael Arnold’s treatise on the phenomenology of technology, platforms are Janus-faced: “not reducible to a direction or valence tipped with a single arrowhead, but better understood as a conflation of tangential implications, at least some of which can be read as ironically situated and paradoxically self-contradicting phenomena.” Interpretative flexibility is no accident, but an insurance policy.
Deplatformings are hypervisible events: often the final straw for public-facing platforms, for whom infrastructural invisibility is central to operations. With regard to the cases of Laura Loomer, Milo Yiannopoulos, Alex Jones, and (when the time was ripe) Donald Trump, the decision to deplatform outright was rather straightforward from a business perspective: each creator had become an edge case, whose virality had become “unfriendly” to core audiences, core advertisers and, most damningly, core infrastructure. Times are a-changin’, however.
In the words of P.D. James: “It’s taken governments a long time to realize that you don’t need to manipulate unwelcome news. Just don’t show it.” It has taken platforms less time to realize that, oftentimes, you do not need to delete content. Just show less of it. Rather than deleting individual nodes, which draws direct attention to what Benedict Singleton has called the “invisible director-of-affairs lurking in the figurative ‘offstage’ of the everyday,” deplatforming has taken to weakening networks, jamming pathways, slowing traffic, so as to hide action and reaction simultaneously. Why wait for Adolf Hitler to become a problem, so goes the logic, when one can smother baby Hitler in the crib? It is a form of predictive policing, Minority Report-esque, whose methods are only growing more strategic, more systemic and more silent over time.
The mid-twentieth century Hollywood blacklist was deplatforming in analog form: a means of reducing the cultural visibility and viability of supposed Communists and alleged sympathizers. In the twenty-first century, however, now that cultural production has flipped from what Clay Shirky has labeled a filter-then-publish to a publish-then-filter model, the blacklist has become the norm. Across the new mediascape, whitelisting has become the gatekeeping strategy du jour: a subtle reframing of content moderation from the demotion of “inappropriate content” to the promotion of “trusted creators.”
On Facebook, for example, the XCheck system reroutes the content of high-profile celebrities and public figures into VIP booths full of careful, charitable, human moderators. Moreover, as one member of Facebook’s Mistakes Prevention Team stated, “VIP lists continue to grow”: a growth that amounts to an effective backsliding of content moderation for the blacklisted majority, who must fend for themselves against the acontextual approximations of machine learning. On YouTube, meanwhile, post hoc whitelisting has become a feature of the appeals process, whereby channels with over 10,000 subscribers automatically skip the queue.
For a long time, the existence of shadowbanning was considered little more than Instagram folklore: a figment of the imagination of the cultural fringe. “Shadowbanning is essentially Instagram’s way of policing the community without overt censorship,” Taylor Lorenz wrote; yet Instagram has never explicitly acknowledged the existence of the practice. In a since-deleted Facebook post, however, Instagram admitted that a large segment of users may have experienced problems with posts not surfacing, though it stopped short of offering an explanation.
On Twitter, meanwhile, shadowbanning (under a variety of euphemisms) is conducted in the cold light of day: “Tweets that are popular are likely to be interesting and should be higher ranked; Tweets from bad-faith actors who intend to manipulate or divide the conversation should be ranked lower.” In emphasizing that the platform does not “shadowban,” but may “limit Tweet visibility,” rendering certain categories of borderline content harder (but not impossible) to find, Twitter makes a distinction without a difference. For users, running afoul of Twitter’s “Rules and Policies” can trigger a cascade of visibility-limiting effects, which range from “Downranking Tweets in replies” and “Making Tweets ineligible for amplification in Top Search results” to “Excluding Tweets and/or accounts in email or in-product recommendations.” Not life or death for the blue-checked Twitterati, yet often lights-out for those seeking to be seen.
Like Maurice Merleau-Ponty and the rest of humankind, creators encounter the world through invisibilities. “YouTube’s unfeeling, opaque and shifting algorithms” are an inexorable part of the lived experience of content creators, writes Amanda Hess. Across the MATAMA universe, Taina Bucher concurs, “there is not so much a ‘threat of visibility’ as there is a ‘threat of invisibility.’” For creators who work close to the edge of platforms, however, skirting the outer limits of outsourced and overworked moderators, overcautious algorithms and overfitted databases, one can certainly become too visible to the wrong pair of eyes (human or otherwise).
“A raised level surface on which people or things can stand.” While the Oxford English Dictionary definition of platform is not exactly threatening, the supportive aspect of the word fails to tell the other side of the story. For while the center of platforms affords standing, the edge of platforms affords a spectrum of (mis)behavior – nudging, shaking, shadowboxing, dizzying, spooking, staring, menacing – whose effects on the ecosystem are visible and explicit, despite the invisibility and implicitness of the cause. “It is my deep-seated conviction that public service is not encouraged nor promoted by placing the sword of Damocles over the heads of broadcasters at renewal time,” Senator John O. Pastore said in a speech to the National Association of Broadcasters in 1969. In 2022, there is only renewal time: dematerialized and decentralized, the sword of Damocles can strike anyone, anywhere, anytime.
Along the MATAMA border, deplatforming has fostered an omertà-like system of implicit disincentives and informal constraints: an unwritten rulebook, known to creators and moderators alike, whose implicit assumptions and axiomatics form the basis of “High Quality Principles” and “trusted creators” on YouTube, as well as cryptic equivalents for Facebook and others. “It is not the iron fist of repression, but the velvet glove of seduction that is the real problem,” wrote Glenn Loury in 1994, touching upon the bidirectionality of (dis)incentive structures, which smooth the transition from the systematic to the systemic, the explicit to the implicit, the top-down to the bottom-up.
For as the ambiguities of algorithms become part of the collective stock of knowledge, habits and routines concretize: a new normal takes root, which not only celebrates the status quo, but camouflages existing alternatives. The fact that Apple will not autocorrect sensitive words like “abortion” has little consequence in and of itself, yet strange things happen at scale. As Philip W. Anderson famously wrote, more is different. Like spirals of silence, the asymmetrical effects of deplatforming could reverberate throughout the creator economy for years to come, precipitating a systemic return to the mean – which is to say, the mainstream.
Within the first quarter of the twenty-first century, social media platforms have become part of the cultural furniture: seamless and seen less. Though fueled in part by genuine dreams and ambition, a large chunk of the creator economy comes from the burn and churn of e-waste: the byproducts of false hope and false rope. It is a soft form of stochastic terrorism, which simultaneously (dis)incentivizes edgework. The infrastructural edge becomes a retractable ledge, which can be pulled back at the slightest whisper of public or market pressure. To paraphrase economist Rolfe Winkler: the margins take all the risk, the mainstream takes all the reward.
For platforms, who would prefer both the margins and the masses remain in the dark, deplatforming has become a meal best served silently. In a leaked speech from 2015, however, Dick Costolo, the former CEO of Twitter, said the quiet part out loud: “We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them” [Italics added]. If platforming can be defined as a process whereby platforms become invisible and end-users become visible, perhaps deplatforming can reveal the opposite: a conceptual lens through which, to borrow the words of Benjamin Bratton, the “creeping spread of cyber-empire” can be glimpsed “through the slits of a Guy Fawkes mask.”