A little over a year ago I sat down with my friend Tareq, a perspicacious man, to talk about where my own career in technology might go next. Among the useful advice he had to offer was an open-ended instruction: figure out what your stance on AI is going to be. We are now at a moment when everyone working in technology has to place a bet, one way or another, on how LLMs are going to impact what we do, how we do it, and whether we can expect to go on getting paid for it. It will be advantageous to give a credible appearance of foresight, since strategy-makers at the highest levels are asking themselves what they should do next. It is not even necessary to be completely right, at least in the short term, but now is the time to form a clear position.
I have taken my time, and I have taken more of it thinking about the phenomenology of LLM use and the epistemology of LLMs’ model construction than I have trying out new tools and being alternately wowed and disappointed by their capabilities. I have no opinion on the relative power or reliability of GPT-4o and Gemini, for example. I’ve spent a lot of time pushing at the limits of what ChatGPT can do, across a range of activities, but without focussing on “outputs” such as lines of code, replies to emails, pull request descriptions or other things of that kind. I felt it was necessary to go deep and broad; I also had an intuition, from quite early on, that the value of LLMs would not show itself in the ability to replace human labour in generating such outputs. (Then again, I don’t think that the true value of human labour is to be found in the metric-satisfying properties of its outputs either.)
My focus has instead been on understanding what LLMs are “for us”, how introducing one into the loop of my own symbolic activity — reasoning, writing, evaluating statements and arguments — changed how I thought, and how I thought about how I thought. Has it changed my relationship to language, to code? Has it enhanced or diminished my ability to reason independently? Has it affected my sense of purpose, of meaningfulness in what I do?
Here’s my position: I think both sides of the current “will AI replace humans?” debate are wrong, and the debate itself is fundamentally ill-framed.
On one side, there is what I’ll call an AI-deflationist argument, which asserts that LLMs are both wholly derivative of human intellectual élan (“plagiarism machines”) and endemically unreliable in their simulation of thought. They consume authenticity and output “slop”, a poor facsimile of nourishing human communication that nobody with any taste could possibly want. On the other side, there is the AI-booster argument, which asserts that we are in the early days of an explosion in automated intelligence that already closely matches and will soon wholly outpace human productivity. So what if vibe-coded software lacks the good taste of hand-crafted artisanal human code — if the eventual maintainer is also going to be an AI, there’s no need to build in fancy affordances like a consistent architectural style or even human-readable variable names.
In effect, the present debate positions AI as either a degraded substitute for human thought or a turbocharged successor. The bet we are then asked to place is on which of these will turn out to be the case.
In scenarios premised on the substitute model, AI makes everything it touches worse, terminally degrades the capabilities of organisations, ruins art and culture for perhaps a decade, and is eventually knocked off its perch by a return of the human. The fired engineers are rehired to fix the mess the vibe-coding middle-managers have made; media audiences tire of AI-generated prolefeed and demand rises for verifiable authenticity. It’s a fad, albeit a potentially harmful one; perhaps it will do secular damage to the position of labour, which will be difficult to undo, and perhaps that’s what the C-suite is really leveraging it to accomplish. A lot of this is quite plausible, but it also presupposes that nobody ever finds a way to use AI that actually improves anything at all; and that is at odds, I think, with what people are already using LLMs to do.
In scenarios premised on the successor model, AI lawyers and oncologists are actually a thing, and you can command a team of AI agents to build any software you can imagine, to analyse competitors’ product strategies and devise marketing campaigns, and so on. It’s funny how many of these fantasies assume that existing business roles and relationships will continue to exist under such conditions. To whom are you marketing your product, if purchasing decisions are being made by something that is aloofly impervious to bullshit marketing rhetoric and, in any case, equally capable of generating it itself?
I do not see the LLM as a plausible successor of this kind, and while the idea of artificial general intelligence has its allure, any near-future scenario premised on its realisation strikes me as a vehicle for fraud. But that doesn’t mean I think we should default to the substitute model either, although it usefully highlights the damage that successful perpetration of that fraud might inflict on our ability to get things done (or really enjoy any television programme ever again).
I have instead come to see the LLM as a symbolic prosthesis, a way for human beings who navigate the world using language to explore the things they mean when they say what they say, and enhance their expressive and reasoning power by using the LLM to see around corners in their own thinking. We can use it to connect the things we know and believe to a vastly broader context of perception and opinion, and provided we do not mistake its resynthesis of that context for a veridical statement of truth about the world, we can use it to widen our outlook and steer enquiry along new paths. These are remarkable capabilities. They are not without challenges and dangers, especially in a pedagogical setting where students may use the LLM to synthesise whole assignments rather than stimulate and dialogically situate their own thinking, but that does not mean they are without use even there. (I have been submitting this essay paragraph by paragraph to ChatGPT as I have been writing it, and have taken note of its comments and suggestions. It sometimes has a heavy hand in suggesting enhancements and alterations, but I’m stubborn: what you’re reading here represents, for better or worse, my own choice of words.)
Within this framing, I’m broadly optimistic that human beings will absorb the shock of the novel kind of technical object the LLM presents, adapt it to their purposes, and fit it into their lives and work as an assistive technology. Learning to do this, steering away from real dangers in the process, and stabilising a culturally metabolised sense of what the LLM is “for” will likely take many years and much human effort. My considered stance on AI is this: it’s a good time to pitch in.