In my previous post, I discussed a specific scientific study that I felt was exceptionally shoddy. I'm going to go a little further today and explain why it has stuck in my mind.
Synchronicity
"Gentlemen: when two separate events occur simultaneously, pertaining to the same object of inquiry, we must always pay strict attention."
- FBI Special Agent Dale Cooper, Twin Peaks
Perhaps it was just that my mother gave me The Celestine Prophecy at a vulnerable age, but I've always paid attention to coincidences. They tend to occupy my mind more than chance occurrences should. If two separate people mention the same book to me, I'll read it. If two separate people mention the same film, I'll watch it. That sort of thing.
This Biggest Loser thing happened several years ago. And, to perhaps add some clarity, the thing that bothered me was not so much that I felt the study itself was flawed, but that most people seemed to accept it uncritically, without questioning it or even discussing it all that much.
Recently, I watched a great video on YouTube by HBomberGuy (Harry Brewis) about the anti-vaccination movement and disgraced former doctor Andrew Wakefield.
His video owed a lot to Brian Deer's investigative journalism, and one thing that Brewis repeatedly calls attention to is that, in many cases, Deer didn't have to dig particularly deep to uncover malfeasance. He just had to read what was there, take good notes, and research what he found. This was a matter of one person who read carefully and asked good questions.
And these turned out to be shockingly rare qualities in journalism, first British and later American, as Wakefield's theories spread. The media repeated his claims largely breathlessly and uncritically.
I watched this video around the same time that I started to become increasingly concerned about LLMs. Don't get me wrong; I love ChatGPT. I think it's amazing. I love GitHub Copilot too. It's an incomplete technology, but we'll be able to slot other technologies into it (as is already happening with plugins, Wolfram Alpha, etc.), and the potential is enormous.
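To make "slotting other technologies in" a little more concrete, here's a minimal sketch of the tool-use pattern that plugins rely on, as I understand it. Everything here (call_llm, the TOOLS table, the TOOL:&lt;name&gt;:&lt;input&gt; convention) is a hypothetical stand-in, not any real plugin API:

```python
# A minimal sketch of the tool-use pattern: the model emits a structured
# tool request, a dumb dispatcher runs the tool, and the result is fed
# back in for a final answer. All names here are hypothetical.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call. This canned fake always asks for
    the calculator first, just so the dispatch loop below can run."""
    if "Tool result:" in prompt:
        result = prompt.split("Tool result: ", 1)[1].splitlines()[0]
        return f"It works out to {result}."
    return "TOOL:calculator:6*7"

# Each "slotted-in technology" is just a function the model can ask for.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only
    "lookup": lambda term: f"(pretend encyclopedia entry for {term!r})",
}

def answer(question: str) -> str:
    reply = call_llm(
        "Answer the question, or reply TOOL:<name>:<input> if you need help.\n"
        f"Question: {question}"
    )
    if reply.startswith("TOOL:"):
        _, name, tool_input = reply.split(":", 2)
        result = TOOLS[name](tool_input)
        # Feed the tool's output back so the model can finish the job.
        reply = call_llm(f"Question: {question}\nTool result: {result}\nAnswer:")
    return reply

print(answer("What is six times seven?"))  # -> "It works out to 42."
```

The point of the pattern is that the model doesn't have to know everything; it just has to know when to ask something that does.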
However, the internet isn't just Wikipedia; it's also spam, link farming, CSAM, and SEO. How do LLMs fit into this? Disgustingly. Most of our tools can be used against us as well.
Lossy Abstractions
In Future Shock, Alvin Toffler discussed a future that moved so quickly that we would not be able to keep up. It's been literal decades since I read the book, so I might be misremembering, but I don't think he accounted for the possibility that we would have so many tools: tools for everything.
And that is largely what the internet has become, for most people. A tool for finding recipes. A tool for buying cat food. A tool for learning how to do squats better. A tool for finding a guy to repair the drywall.
And a tool for finding a tool to write my blog on, so that I don't actually have to program anything myself, or worry about my hosting, or my security, or anything like that.
Not that there's anything intrinsically wrong with any of these things (including the last one; this current project notwithstanding, I had a LiveJournal and a DeadJournal back in the early '00s, and used Angelfire and Geocities years before that). I'm not looking down on anyone for using the internet. That ease of use is a big reason why it's so marvelous.
But I think this is a lossy abstraction. Something is lost when we don't produce our own food, when we don't get to the grocery store under our own power, when we can flick a switch and have powerful electric light at no apparent cost. These things separately and together lead to a loss of mindfulness, I think.
Again, I'm not making a moral statement here. There are so many people alive that we simply could not engage in homesteading without massive famine, even if we all collectively wanted to. And I don't want to. And I can't make my own arthritis medicine, or make my child's underwear, or whatever goes into my wife's moisturizer.
(I'm skeptical of the idea that making your own X, or growing your own Y, somehow makes you more of a man or whatever. I think people are highly selective about what they DIY, and then let it go to their heads.)
Something can be a lossy abstraction but still valuable. Nevertheless, the lossiness persists. And when something requires less effort, and less thought, then less effort and less thought go into it. And perhaps less strength and less wisdom come out.
Large Language Model Centipede
Just for an example, let's take the job application process (there's a code sketch of the resulting loop after the list).
Artificial intelligence:
- generates the job description, based on the job descriptions everyone else has been using
- generates the applicant's résumé, tailored to match the job description
- generates the applicant's cover letter, based on the job description and the cover letters everyone else has been using
- determines, based on the job description, résumé, and cover letter, whether the applicant deserves an interview
- generates interview questions so that the interviewer doesn't have to think of any
- conducts a mock interview so that the interviewee may practice for the interview
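To make the centipede concrete, here's a toy sketch of that loop, with each stage consuming an earlier stage's output. generate() is a hypothetical stand-in for whatever model is in fashion; here it just echoes its prompt so the whole pipeline can run end to end:

```python
# A toy sketch of the hiring centipede: every stage is a model call whose
# input is a previous stage's model output. No human appears in the loop.

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; echoes the prompt's first line."""
    return f"[synthetic text for: {prompt.splitlines()[0]}]"

def hiring_pipeline(other_postings: list[str], career_history: str) -> bool:
    job_description = generate(
        "Write a job description like everyone else's.\n" + "\n".join(other_postings)
    )
    resume = generate(
        "Tailor this career history to the posting.\n"
        f"{career_history}\n{job_description}"
    )
    cover_letter = generate(
        "Write a cover letter for the posting, given the résumé.\n"
        f"{job_description}\n{resume}"
    )
    # The same kind of model that wrote the documents now judges them.
    verdict = generate(
        "Should we interview this applicant? Answer YES or NO.\n"
        f"{job_description}\n{resume}\n{cover_letter}"
    )
    # With this echo-fake, the verdict parrots its own prompt (which
    # contains "YES"), so everyone gets an interview. Fitting, somehow.
    return "YES" in verdict.upper()

print(hiring_pipeline(["Rockstar ninja 10x engineer wanted"], "ten years of drywall repair"))
```

Note that nothing in the loop ever checks whether a human read any of it.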
I assume it's only a matter of time until the interviewer and interviewee are both replaced by AI. Eventually the entire process will be so automated that a jobseeker will receive a laptop in the mail and only then realize that they have been hired for a job their AI agent applied for based on their experience and expressed salary range.
And I see that happening, well, basically anywhere we use the internet at present.
I did not invent this insight, of course; people have been warning for some time about the biases demonstrated in artificial decision-making. This is ultimately little more than cloud-scale redlining.
But I'm not here to discuss AI as a tool for automatically enforcing and perpetuating systemic injustice (although it obviously will). I'm just here to talk about how it synergizes with the Dead Internet Theory, general enshittification of the internet (and of the human race), and how this brings us a step or two closer to being the human race as envisioned by WALL•E.
function void *doWhatIWant(x, y)
So, as established previously, we already don't bother to think if we can help it. And now we have general availability of tools that appear to think, as long as you don't look too closely, which is basically what the majority of mass media has been doing for decades at this point. (And again, that's all we need a lot of the time, and that's fine.)
In Ender's Game, "Locke" and "Demosthenes" guided public policy by dominating public discourse; at the time, the idea that treaties and laws might be informed, let alone decided, by internet trolls probably seemed bizarre and improbable. I think that idea ceased to be laughable some time in the past seven or eight years.
I think, though, that this might not be a substantial political menace moving forward. It'll always be there, but I think there's a kind of novelty required for fake news that LLMs will struggle with. After all, LLMs produce only a vague sort of consensus-quality thought and writing. Spinning up conspiracies about JFK Jr. faking his death, etc., seems a bit beyond them at present. (Leaving open the possibility that I will eat my words later.)
I'm not sure if I have conclusions, or a point. I might just be directionlessly disquieted. Or this might just be a criticism of society where I don't really see a clear path forward. I see a warrior religion there, a fire spreading across the universe with the Atreides green-and-black banner waving at the head of fanatic legions drunk on spice liquor, all marked by the hawk symbol from the shrine of my father's skull, but I also see thiccc Pikachu, etc etc etc.