Not Evenly Distributed
I have a pet doomer theory that the scattered individuals who have reportedly suddenly fallen into LLM-induced psychosis are merely the first victims of a generalised semantic collapse already in motion, which will inevitably drive everybody round the bend. Time and again I’m drawn back to David Langford’s BLIT, a short story in which a magazine listing for a computer program that plots out a fractal happens, if focussed on the right region of the fractal, to produce an image — “the Parrot” — that obliterates the minds of human beings who see it. It’s a terrific coincidence (if it is a coincidence) that Emily Bender et al chose the name “stochastic parrots” to describe LLMs. What if, my doomer theory goes, all of us who have interacted with LLMs have already started gazing at the parrot, and it’s just a matter of time before our minds, too, begin to unravel?
I like, in Langford’s story, the detail that the deadly image wasn’t constructed deliberately, but discovered through a generative process that unfolded a region of mathematical space no-one had ever looked at before. It resonates especially for me because I vividly remember typing in magazine programs to plot the Mandelbrot set on a BBC Micro, and being astonished by the patterns that (excruciatingly slowly) winked into life across its low-resolution display. As I suggested a while back, Google’s word2vec showed us something about the statistical organisation of human language, its associational infrastructure, that was like taking a first low-resolution look at a fractal we are now seeing in much more detail. I think this gives the right causal motive for my doomer theory: the LLM wasn’t fashioned as a weapon to destroy minds, but stumbled across as a cognitive limit, the materialisation of an always-already there structure — that of the vast semiotic sea we all swim in — that we are just not equipped to metabolise. As we interact with the chat interface we are watching the Parrot etch itself, spiral by spiral1, onto the glowing screen of our souls.
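(For anyone who never typed in one of those listings: what they implemented was the escape-time iteration, which the sketch below reproduces in Python rather than BBC BASIC. The grid size and iteration cap are arbitrary illustrative choices, not anything recovered from an actual magazine program.)

# A low-resolution escape-time plot of the Mandelbrot set, in the spirit of
# those typed-in magazine listings. Width, height and the iteration cap are
# arbitrary illustrative values.
WIDTH, HEIGHT, MAX_ITER = 64, 24, 40

def escape_time(c: complex, max_iter: int) -> int:
    """Iterate z -> z*z + c; return the step at which |z| exceeds 2,
    or max_iter if it never does (i.e. c is treated as in the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

if __name__ == "__main__":
    for row in range(HEIGHT):
        # Map the character grid onto the region of the complex plane
        # conventionally used for the full set: roughly -2..1 by -1.2..1.2.
        im = 1.2 - 2.4 * row / (HEIGHT - 1)
        line = ""
        for col in range(WIDTH):
            re = -2.0 + 3.0 * col / (WIDTH - 1)
            n = escape_time(complex(re, im), MAX_ITER)
            # Points that never escape print dark; the rest are shaded
            # by how quickly they escape.
            line += "#" if n == MAX_ITER else " .:-=+*"[min(n, 6)]
        print(line)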
Well, it’s probably not true — or not exactly true — but I think that as Lovecraftian hyperbole it does what Lovecraftian hyperbole tends to do, which is to concentrate epistemic anxiety into ontological panic, giving a face (or a mass of writhing tentacles) to an underlying destabilisation of what we think we know about reality. So, what’s being destabilised? What does that feel like?
Slopvestigation
“Slopvestigation” is not my coinage. I first saw “slopvestigating” used by Bluesky user fack.bluesky.social, to ridicule another user’s claims that images attached to posts ostensibly asking for money to be sent to desperate Palestinians in Gaza were AI-generated. I assume the imputation was that this was paranoid overfitting, analogous to that of “transvestigators” who scrutinise images of celebrities looking for signs that they may be secretly trans. Transvestigators, who in some cases subscribe to an elaborate conspiracy theory about transition being a rite of passage into the Baphomet-worshipping Elite, evidently have a lot of difficulty with the fact that the human sexual phenotype doesn’t divide cleanly into discrete male and female physiognomies, such that you could reliably tell when someone was “really” male or female by the shape of their jawline, or the way their earlobes hang, or what have you. In transvestigation an overstimulated demand for certainty is set in motion by anxiety about what the increased social visibility of trans people illustrates about the underlying diversity and complexity of human sexual expression.
So, by analogy, the slopvestigator wants there to be ways you can reliably determine whether an image or a piece of text was generated by an LLM, and is misled by their anxieties around an increasingly uncertain reality into making comically overconfident false-positive identifications. The use of em-dashes is apparently a significant tell (I started using them in place of hyphens to separate sub-clauses a couple of years ago, because
kept nagging me to). I myself start to get suspicious when I see certain kinds of phrasing. That’s not paranoia — it’s heightened perceptiveness. Any time I see a sentence like the preceding one, my heart sinks a little. It’s not that humans never write sentences like that, it’s just that LLMs seem to write sentences like that all the time.

But there are no wholly reliable tells, and it would not surprise me if human authors were already imitating LLM stylings, much as human vocalists have learned to imitate the characteristically glitchy articulation of heavily autotuned singing. I did so myself, in the previous paragraph, to illustrate a point. A “signature move” is always repeatable, imitable, forgeable as signatures themselves are forgeable. Tonight, Matthew, I’m going to be a statistical model of a very large textual corpus. The unsettling fact is that we have, in the LLM, an unprecedentedly capable player of The Imitation Game, and this makes an issue of the inherent instability of textual provenance, of the chain of authorship and ownership that connects written marks and digital traces to the practical and ethical stakes of human symbolic activity. I began this section with a time-honoured ethical move, assigning authorship (or at least first observed usage) to a coinage I was taking up; but that coinage itself depends for its intelligibility on a proliferating web of preceding usage, a whole discourse with which the less zealously Online may be blissfully unfamiliar. (I’ve also, in this paragraph, been borrowing heavily from Derrida’s Signature Event Context, without — until now — directly crediting it).
The worry then is not only that AI-generated “slop” will flood all channels of cultural production, driving out higher-quality stuff, but that the apparatus of attribution will break down, becoming unable to bear the ethical weight we load onto it. This anxiety builds on pre-existing concerns about the effects of anonymity and pseudonymity in public forums, where swarms of impersonators (“bots”, or co-ordinated low-wage humans acting on bad faith orders) are already weaponised to corrupt and derail the discourse. But I tend to see this less as things going from bad to worse, and more as evidence that the apparatus of attribution has always had to function — and has, in fact, always more or less functioned — in a context where its role is regulative and ceremonial rather than decisive and controlling. Who was Homer? Who wrote Shakespeare’s plays? Who is Daisy Meadows2?
A person’s gender — their socially and symbolically mediated and co-constituted sexed being — does not reliably trace back to a prior “biological” reality, but is already part of the unfolding complexity of their lived biography. Transvestigation attempts to discern the Creator’s authenticating signature in minutiae of physiognomy and deportment, because to do so would definitively settle the matter of what is what and who is whom. In the field of written marks, of instances of “communication”, the practical reality is dispersal, repetition, mimicry, mutation, proliferation. LLMs coalesce a captive subset of all this activity into a model from which they then generate new instances. We may feel, melancholically, that in doing so they sever the chain connecting symbolic production to communicative intent, and so seek through slopvestigation to establish definitive criteria for identifying when that chain still holds. But the point is that all human symbolic action already involves derivation from prior models, pattern recognition and re-activation, across an unbounded multiplicity of situations. Prompting an LLM to produce text which you then put into circulation is itself a communicative act; it is still as meaningful as ever to ask “who did this, and why?”.
Throw in a dissonance
It is a matter of some moment whether an image of a suffering person can be trusted as testifying to the real suffering of a real person, because it matters whether people are actually suffering and whether our ethical response to such suffering is being honestly or dishonestly elicited. “AI slop” facilitates dishonest testimony, the fabrication of word and image intended to manipulate and defraud, but it neither invented nor holds a monopoly on the exercise of such dishonesty. LLM generation might equally be used with honest intent, for example by a non-native speaker of a language seeking to articulate their plight cogently, or to produce dramatic visual depictions of real predicaments to stir the moral imaginations of onlookers. (Would you accept as testimony, and be appropriately morally engaged by, a parent’s hand-drawn sketch of their injured child?)
It has always been important to human beings to have ways — which have varied historically and situationally — to signal authenticity, to tilt the balance of plausibility away from “cheaply faked” by demonstrating effort or invention. (This may be why “hand-drawn sketch” feels like a more reliable bearer of testimonial weight than “AI-generated .jpg”). Whether or not they are reliably identifying, the generic characteristics of LLM-generated prose prepare the field for new kinds of demonstration of originality, new forms of semantic proof of work.
I observed the other day that “LLMs can’t schizopost” — that is, that LLMs’ functional drive to seek semantic coherence makes them fundamentally bad at the sort of improvisatory fragmentation and delight in non sequiturs which often characterise Gen-Z memeing. The schizopost itself references a state of mind — in thrall to a private world of sense that manifests as public nonsense — that may not actually be the poster’s: it’s staged cognitive disorder, a performance of distress. That is how one says “oh my gawwwd” to the world nowadays. But it’s also how one signals that one is not an NPC, a bearer of affectively flat signal without interiority or real worldly stakes. Today it is dissonance — not shock-value, but semantic incongruity and gnomic inaccessibility — that signifies depth.
One of my favourite things in China Miéville’s Iron Council is the Teshi sorcerer Spiral Jacobs, who takes the guise of a harmless itinerant and roams around drawing spirals on the walls of New Crobuzon, which unbeknownst to its inhabitants are summoning sigils for a hyperdemon that will presently consume the city. He becomes a bit of a cult figure; people start copying the spirals, unaware that in doing so they are merely hastening their doom. Again: isn’t it all a bit like that?
“Daisy Meadows” is the nom de plume under which the many authors of the many, many books in the “Rainbow Fairies” series ply their seemingly boundless trade.