Blog Credo

The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.

H.L. Mencken

Thursday, July 24, 2025

Enshittification And AI

Paul Krugman has been writing about Cory Doctorow's thesis of "Enshittification." Basically, the model is that tech companies build a platform that generates "network effects" - the more people who use it, the better it is. The early days of Facebook, when it was so awesome to reconnect with people you hadn't seen in years or decades, are a great example of network effects.

Once you get enough users, you "enshittify" by exploiting your captured customer base - higher fees, more ads, more and more ads, worse customer service and experience. The general pissed off-edness of America in 2025 is at least in part a product of people being drawn together and then hating the platforms and experiences that bring them together. This dynamic, posits Krugman, has fueled the backlash against Big Tech.

Now, I don't think every Big Tech executive has become Elon Musk or Peter Thiel. There's an interesting read by an actual scientist about the delusions of tech billionaires. Basically, the ecosystem of Silicon Valley is geared towards the Start-Up, whereby you hype some new platform or tech, collect your IPO millions, and then coast on to your next project. The problem, according to Adam Becker, is that most of their ideas (especially Musk's and those like him) are just laughable, derived from adolescent infatuation with science fiction.

Silicon Valley is not about science, it's about venture capitalism. Elon Musk is not a scientist, inventor, or engineer; he's a VC guy with just enough scientific knowledge to create shit like the Cybertruck.

Which brings me to AI.

AI is a great example of the Silicon Valley hype machine. Becker:

There’s also no particular reason to believe that the kinds of machines that we are building now and calling “AI” are sufficiently similar to the human brain to be able to do what humans do. Calling the systems that we have now “AI” is a kind of marketing tool. You can see that if you think about the deflation in the term that’s occurred just in the last 30 years. When I was a kid, calling something “AI” meant Commander Data from Star Trek, something that can do what humans do. Now, AI is, like, really good autocomplete. 

That’s not to say that it would never be possible to build an artificial machine that does what humans do, but there’s no reason to think that these can and a lot of reason to think that they can’t. And the self-improvement thing is kind of silly, right? It’s like saying, “Oh, you can become an infinitely good brain surgeon by doing brain surgery on the brain surgery part of your brain.” 

As an educator, I find AI deeply troubling. As a high-powered search engine... OK. There is something nice about searching for something like "How to grow strawberries" and getting a decent summary of the conventional wisdom about how to make a wee strawberry patch.

However, AI is pretty much just a hyper-powered predictive text machine. Right now, I could ask it to write an essay on a given prompt ("By 1876, who had won the argument about the future of America, Thomas Jefferson or Alexander Hamilton?") and I could get a B+ essay.

Sam Altman and other AI acolytes suggest that AI will be far more impactful. As Becker summarizes:

 [Altman] said something like, "Oh, global warming is a really serious problem, but if we have a super-intelligent AI, then we can ask it, 'Hey, how do you build a lot of renewable energy? And hey, how do you build a lot of carbon capture systems? And hey, how do we build them at scale cheaply and quickly?' And then it would solve global warming." What Sam Altman is saying is that his plan for solving global warming is to build a machine that nobody knows how to build and can't even define and then ask it for three wishes.

You can see the toxic tech positivity at work there. Then you get a combination of Groupthink and financial FOMO, where everyone jumps on the hype-train.

Meanwhile, in the real world, students are using AI to cheat and short-circuit their learning, consuming massive amounts of energy to do so.

What will happen, if enshittification continues, is that people will become more and more reliant on AI for basic tasks that might otherwise have required them to acquire real-life skills (academic or otherwise). Then, once companies have captured people into AI dependency, they will exploit them for profit.

The only reason to believe that is that it's what they have always done before.