Why ChatGPT won't replace work

I personally find it hilarious that we Americans only started losing our collective minds when automation came for white-collar jobs. Every day I see someone saying "ChatGPT will replace writers and developers. Stable Diffusion will replace artists. What will these people (that we never really cared for in the first place) do now that their jobs are obsolete? Look, it even wrote me a haiku about Ticketmaster Corp. v. Tickets.com, Inc.!"

Screenshot of ChatGPT: I ask it to "Summarize the court case 'Ticketmaster Corp. v. Tickets.com, Inc.' in the form of a haiku." It responds, "Ticketmaster sues, Tickets.com fights back hard, Linking was the cause."

"Automation" has been coming for a decent amount of people's jobs for a few years now. It's just that the techies and cryptobros in the industry didn't really value the livelihoods of the people it was replacing. What difference does it make to a hustle-lifestyle influencer who will work until they die if the McDonald's they go to has one less human? Or if a machine at the Amazon warehouse that packages their Steve Jobs cosplay happened to take their order instead of an underpaid teacher of mine? They want efficiency, they want less overhead, if it means they can behave on Twitter like they're Steve Jobs' kid who bought a little too hard into a pyramid scheme.

John Oliver of Last Week Tonight did an interesting episode on how Amazon has been quietly (and sometimes extremely proudly) automating away workers' jobs, not out of concern for them but out of concern for their "efficiency." If Amazon can make money by forcing humans to work in warehouses alongside two-ton robots, by all means they will do so and scream in their shareholders' ears about it.

I don't have the investigative journo chops that the people over at HBO have. But I do have a decent background in tech, being the token tech kid in every classroom I've been in for the last ten years.

Everyone who says "ChatGPT will replace us all!" either:

  • Doesn't know anything about artificial intelligence,
  • Sees that people are interested in all things AI and wants to capitalize with a scary headline, or
  • Has a bridge to sell you.

ChatGPT is one of many large language models. These are exactly what they sound like.

  • Large: expansive, diverse.
  • Language: good at writing.
  • Model: an algorithm.

LLMs have existed for years, and AI researcher Janelle Shane's blog AI Weirdness was there before they blew up. ChatGPT is really just a better version of the sort of thing she's been researching for years. Check out her book, by the way; it's really cool and explains a lot about how AI works:

Book: You Look Like a Thing and I Love You, by Janelle Shane

To put it simply (and according to the first result in a Google search for "How do large language models work"), large language models are very good at modeling how languages work. They're trained on enormous amounts of text to predict which word is most likely to come next. To ChatGPT or Bard or whatever, English is just a really large map of terms, definitions, and the statistical links between them. I honestly don't understand all of it.
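
Here's a toy sketch of the idea in Python. To be clear, this is not how ChatGPT actually works (real LLMs are giant neural networks trained over subword tokens, and the corpus and function names here are made up for illustration), but it captures the spirit of "a map of words and links" producing plausible text with zero understanding:

```python
import random
from collections import defaultdict

# A made-up, tiny "corpus." Real models train on terabytes of text.
corpus = (
    "the court ruled that deep linking was not infringement "
    "the court ruled that the case could proceed "
    "ticketmaster sued tickets.com over deep linking"
).split()

# The "model": a map from each word to the words seen right after it.
chain = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    chain[prev_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Produce plausible-looking text by walking the word map."""
    words = [start]
    for _ in range(length):
        followers = chain.get(words[-1])
        if not followers:  # dead end: this word was never followed by anything
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))
# e.g. "the court ruled that deep linking was not infringement"
# It reads like English. The program has no idea what a court is.
```

Scale that idea up by a few billion parameters and you get something that can write a haiku about a court case without knowing what a court, a case, or a haiku is.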

But what I do know is that they don't really know anything. Sure, they can understand what a user is asking to a certain degree, but they know about as much about what you're asking as I know about baseball. My sibling used to play it, so I have background knowledge. But I don't understand why the rules are the way they are. I kinda know them, but I don't understand them. I do know the language in which they're written, though, and I can follow them upon reading. ChatGPT only knows how to write code because there's so much of it freely available on the internet. It can only explain abstract concepts because there's so much information already online about them. It isn't smart. Someone had to be way smarter than it could ever be and explain those concepts in a human-readable way for it to be able to summarize them.

When any large language model is explaining an abstract concept, it isn't thinking. It's just using words that it thinks work well with the concept, based on a massive virtual mindmap of words and concepts. It ain't smart. It's just good with language.

It's already a meme: people use AI to turn bullet points into five-hundred-word emails, only for the recipient to use the same tool to summarize them back into bullet points. So I think AI will turn into a kind of assistant. I wish people would stop talking about the whole "second brain" approach, but in reality it's the most likely scenario. Imagine being replaced by an AI that, at best, needs everything it does corrected. It won't be replacing America's precious upper-middle class; it'll just let them do more work for the same cost.

But I have a bone to pick with every Twitter influencer who has claimed that AI will replace artists and developers. The fact that they even believe such a thing is possible proves that they just don't understand the field. They did this with crypto a short while ago, and with Software-as-a-Service products before that. They're either ignorant, or they're scaremongering people into buying into their thing that'll disappear the moment someone closes an API. Please, don't give them attention. These techfluencers want to be the next Steve Jobs, but until they have one of those moments, don't give them money or user data. Not even an email address.

If they do end up replacing me, I'll see you on the non-robot internet. Listen to the Vergecast; it's awesome and hilarious, Nilay Patel is a gift to this earth, and goodbye.

P.S., check out my work on The Verdict. I've got some cool stuff over there too.
