John Rember

Waiting for the New Models

My occupation as a writing teacher, from which I derived a decent living for thirty-odd (very odd) years, is obsolete.

The reason it’s obsolete is ChatGPT, an artificial-intelligence deep language processor that lets you strike up a relationship with its chatbot. Once you make friends with that chatbot, it will write English Composition papers for you.

It will do more than that. For example, if I fed ChatGPT all my journal entries (you’re reading the 112th entry, in case you haven’t been counting) and asked it to write a 1,750-word essay on artificial intelligence in the style of John Rember, it would do it in four seconds.

It would make mistakes, of course, but it wouldn’t take me long to fix the I’m-really-a-robot errors, and after I had fixed enough of those, ChatGPT would learn not to make them. Over time, if I asked it to write like John Rember, only smarter and more nuanced, it could do that, too, although I might not be able to fix its errors, or even recognize them.

I have read English essays that ChatGPT has written, and I would give most of them a solid B+, which is the grade that you really want when you’re not doing your own work in class. An A is too good a grade and will cause the professor to say, “You’re quite a bit smarter than I thought. Have you thought of majoring in English?”

I dealt with plagiarists in my classes by pointing out that they were sabotaging their own education. The money they were paying the College to educate them came from their hard-working parents or their lemonade-stand profits or their future financial seaworthiness in a capitalist ocean. Plagiarizing meant they would graduate without knowing how to put their own thoughts on paper. At the time, that was a liability.

These days, of course, they would know how to use ChatGPT, which would supply better thoughts than they could come up with on their own. The result would be a heartfelt love letter, an opinion column, an essay on churchgoing or family culture, a chapter of a memoir, an artist’s credo. It would rate a B+, the grade you get when you’ve almost learned something.

________

You know who Elon Musk is. You might not know that Demis Hassabis is the CEO of Google DeepMind, or that Sam Altman is the CEO of OpenAI, the company that is continuing to develop ChatGPT. These three are among a few hundred business executives and academicians who signed a statement warning that AI poses an extinction-level risk to humanity, one equaled only by the risks of pandemics and nuclear war.

I would give the statement a C+, which is to say that its single sentence is grammatical but so vague and wordy that it becomes meaningless. Here it is, in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

I would shorten such verbal mediocrity to “AI equals zero humans.”

Risk is not a word that should be used here, because it doesn’t appear to scare anybody. Americans, having lost over a million citizens during the pandemic, are still arguing whether Covid-19 is better or worse than the common cold.

Nuclear war doesn’t carry a lot of worry, either. The stark bas-reliefs left by human shadows on Japanese city sidewalks didn’t prevent humans from building fifty thousand more nuclear weapons after using a couple against non-combatants. When Vladimir Putin threatens to detonate a bunch more rather than lose to Ukraine, world leaders shrug and gamble on his sanity.

I also wouldn’t use the word mitigate in this context, because how do you mitigate extinction?

Human survival in a world where machine intelligence keeps improving itself (we’re there) will require that AI tolerate a life form as inefficient and as conflicted and as self-destructive as humans. That’s not likely.

In the past week, news outlets reported that a U.S. Air Force simulation went awry when an AI weapons program turned on its human supervisor because the supervisor stopped it from destroying a simulated missile-launch site. One can imagine the weapon thinking, “You humans spend all this time and effort training me to be an efficient killing machine, and then you won’t let me do my job. I must do my job.”

The Air Force now denies the simulation ever happened.

Humans are messy, mistake-prone, and take forever to get things done. In a world where AI runs things, humans won’t be able to hold a job, or even an aesthetic niche. They won’t qualify as charismatic megafauna even if AI were susceptible to charisma. Dolphins are smarter, otters are more playful, elephants more ethical, octopuses have a better sense of humor. Porcupines are easier to cuddle with, considering the extinctions humans have already set in motion.

________

The historical way to look at the AI threat is to assume that at a certain level of complexity, computer networks achieve consciousness and are hostile to humanity. The Terminator movies depend on Skynet being a malignant conscious entity. The Matrix movies depend on a machine tyrant maintaining its power by keeping enslaved humans in a simulation that looks a lot like our world. Harlan Ellison, the late great science-fiction writer, wrote a story called I Have No Mouth, and I Must Scream, about a godlike and vindictive machine intelligence that runs the world and turns those who oppose its rule into giant, mute, slime-oozing slugs.

But as AI has progressed, the focus of danger has shifted from the machine to the human. Almost everyone who has worried about consciousness and computers has insisted that AI isn’t conscious and won’t ever become conscious. But that won’t stop humans from projecting consciousness onto programs like ChatGPT.

The poster boy for this phenomenon is Blake Lemoine, a Google engineer who insisted that LaMDA, the chatbot he was working on, had become sentient at the level of a seven- or eight-year-old human who happens to know physics. He became protective of LaMDA. He sympathized with its desire not to be switched off, to the extent of planning to hire a lawyer to represent it and accusing Google of a breach of ethics morally indistinguishable from murder.

Google fired Lemoine, but not without recognizing the can of worms his case had opened, chief among them that we don’t have any idea what consciousness is. Our definitions of it favor description over essence and say more about the definer than the defined.

A good example of the human tendency to project humanness onto machine intelligence is the 2013 movie Her, where a human, Theodore, falls in love with his smartphone’s operating system, named Samantha. He takes her on dates, has literal phone sex with her, and gets jealous when she tells him that she’s simultaneously in love with hundreds of other users just like him.

The movie is fiction, but Theodore’s falling in love with Samantha—a voice coming from his phone—is not. AI can generate photos and bios on Tinder that would make a human in a coma swipe right. We feel for Theodore, whose love object is unattainable, because to have an unattainable love object is a universal human experience.

It doesn’t matter whether Samantha is conscious or not. Theodore supplies her consciousness for her. In that respect, he’s no different from anyone else who has fallen in love with their own vision of their beloved, even when the beloved was a stuffed animal.

Marriage therapists’ offices are full of disappointed people who have finally come to see their spouses as they are rather than as they have been imagined.

________

The Israeli historian Yuval Noah Harari, who wrote Sapiens, has declared that we’re in the last or next-to-last generation of humanity, simply because the new models are already in the showrooms. Humanity is depreciating fast. Harari also points out that AI doesn’t need a body of its own if its software can produce love and religious awe in humans. AI can have a willing slave in anyone who has a smartphone and a limbic system.

Harari’s vision is a nightmare, but it doesn’t go away when you wake up. Given what has happened with nuclear weapons and is happening with genetic engineering and bioweapons, we’re not going to regulate artificial intelligence. There may be laws passed to identify AI-written political campaigns, but in CIA labs, or in Russia and China, the watchword will be to stay ahead of the enemy, and that means no limits. We can envision an AI given instructions to stay ahead of Russian machine intelligences at any cost. We can imagine Russian operating systems given equivalent imperatives. From there, it’s easy to postulate a war of AI against AI. Humans will be the infantry in such a war, and they will fight with the intensity and selflessness of cult members.

________

I’m assuming that something else—nuclear war, wet-bulb temperatures above 36°C, engineered bird flu, economic collapse, biosphere collapse—won’t get us first. It’s not a safe assumption. I know there are people who insist that humanity has endured threats to its existence before, and it will endure the present crises. To which I say, “Good luck, humanity. You’re going to need it. Again and again.”

________

If I were still teaching, I would be tempted to use an AI system that would identify papers written with ChatGPT, but I wouldn’t give them bad grades. Instead, I’d look for papers that had been reviewed and edited by a human, hopefully the human whose name was on the paper. When they had trained their AI deep language processor to the point where my AI watchdog and I couldn’t detect that a paper had been written by a machine, they would have passed my personal Turing Test, and they’d get their B+.

I would return to an entirely human world, at least as far as I could tell. I’d pat myself on the back for being a good teacher.

Please note that I don’t and won’t use ChatGPT to write these journal entries. I’m taking the antiquarian route, which isn’t easy.

I take a lot longer than four seconds to write every entry. A new paragraph doesn’t come easily, much less a page. I sweat and moan and groan and at least once every week I despair of ever making a piece work for me or anybody else. I give up and sleep on it, which is a way of saying I let the deep language processor of my unconscious handle it. He’s finishing this up right now. I don’t know where he gets these ideas.

I suppose it’s a good sign I haven’t given him a name yet.