When, in the entirety of human history, has there ever been a group of people so willing to relinquish their autonomy?
They say we homo sapiens are set apart by our intellect. Our advanced language systems, our prefrontal cortexes, and our ability to “do” science put us at the top of the food chain, no contest. The sustained progression of feats of human capacity (fire, modern medicine, Curb Your Enthusiasm) has led us, in turn, to the latest frontier: Artificial Intelligence.
As invention goes, AI is becoming increasingly “normal” in everyday life. And, as human nature goes, once we build our world around these inventions, we are quick to forget how to function without them.
A few weeks ago, most of the Iberian Peninsula experienced a complete blackout. The energy grids had failed. Trains stalled and traffic lights stopped working, payment via “tap” and “chip” was no longer possible, refrigerators became boxes in which to store rotting food, and as phone and computer batteries dwindled, their deaths loomed, imminent and indefinite; there were no functioning power outlets to bring them back to life again.
In typical Spanish fashion, many experiencing the blackout simply went outside to enjoy the day off work; the power came back about nine hours later, and if anything it was, for most, a good story to tell. But it raised a question for me (and inspired a certain new kind of motivation to store cash under the mattress). If (when) shit inevitably does hit the fan in a real, tangible way, how am I going to be okay?
My mother, somewhat of a compulsive doomscroller, bless her, often repeated slightly ominous advice to me growing up. One of her favorites is that, in the apocalypse, the most important things you need are the three H’s.
Heart: goodness, kindness (self-explanatory).
Hands: the ability to make and do things (be it shelter, food, weaponry).
Head: knowledge, intelligence, brainpower (or at least not total stupidity).
I worry that the AI arms race will coincide with the race over who can lose their head the fastest.
The week after the blackout, I settled down at a cafe in Paris which sells itself on a letter-writing gimmick. You can write and address a letter to yourself, decorate it with stickers and markers and kisses, and the cafe will send it back to you in 1 or 5 or 10 years. As I was finishing up, a couple settled in next to me, and the girlfriend squealed with delight: “ChatGPT knows me so well. I asked it what to write about and it said, ‘write about where you are in your life right now, your job transition, your relationship with Joel.’”
It took everything in me not to look at Joel like, dude, run.
There isn’t enough time to go through exactly what alarmed me about that statement, which was everything: that she was so visibly pleased by the response, that she was personifying a chatbot, that it knew exactly what was going on in her life, down to her boyfriend’s name. But what struck me most was the fact that she had asked ChatGPT at all; that she genuinely couldn’t come up with the idea to talk about her life on her own.
I’ve written at length about my frustration with addictive algorithms, the cooking down of our attention spans, the submission to brainrot, and, most distressingly, our refusal to challenge these very recent and rapid developments in the slightest. We accelerate these trends by relying on AI to service our every need, and moreover, by not even trying to think of anything on our own.
That the largely unregulated AI industry and the AI software itself are environmentally destructive, susceptible to Holocaust-denying code, frequently mis- or dis-informed, and prone to confirming biases with what Sam Altman calls a “sycophant-y personality” should be enough for the wider public to agree that AI, or at least its current iteration, holds more net negatives for society than positives. But the most disconcerting, in my opinion at least, is the implication that it makes us dumb.
There is a surplus of evidence, albeit mostly anecdotal, documenting people’s (especially students’) compulsive use of AI and the erosion of our capacity to think for ourselves, let alone to develop critical thought.
Have we learned nothing from a lifetime of watching dystopian sci-fi movies?
Pixar’s WALL-E (2008) depicted humans in the year 2805 who had spent their entire lives immobile, screens constantly in front of their faces; Best Picture nominee Her (2013) featured a romantic relationship between a man and his Operating System; cult classic Idiocracy (2006) satirized the gradual dumbing down of society: when a man of average intelligence wakes up from a government-induced slumber 500 years in the future, he is venerated as a genius.
Others have compared reading AI-generated summaries rather than parsing a text to the plot of Fahrenheit 451. It’s become commonplace to instruct a chatbot to write an email instead of taking the time to draft it oneself. I must concede that it is tempting to shove innocuous busywork onto the computer. But once we stop exercising these kinds of abilities, it is only a short time before they become obsolete altogether.
Before typing and texting, we had pens and paper, and after the expansion of compulsory public education in lieu of child labor, we seemed to arrive at a sweet spot in which widespread literacy coincided with longhand documentation by default. Teaching students to write legible cursive and print was fundamental. For this reason, my grandmother and her contemporaries have exceptionally impressive penmanship, the scrawl of their throwaway grocery lists beautiful enough to be framed; today, school children can hardly write in cursive, let alone read it. Casual calligraphy, so to speak, is a lost art.
What happens when our kids continue to turn in essays in which the only thing they wrote was the prompt?
It’s a meme at this point that, should you ever need an important surgery or legal help, you should find a doctor or lawyer who graduated before 2020. But in addition to intellect, I worry we also run the risk of failing to exercise the soft interpersonal skills that writing emails requires. There’s nothing quite like searching “professional ways to tell someone to do their job” and putting it into your own words.
Lately, since Google Gemini has infiltrated my every mindless question without my consent, I’ve started putting “-ai” at the end of my searches. It blocks the automated AI response, and instead I’m directed to skim a Wikipedia page, or search around on my own; what I’ve found is that, more often than not, the first thing I would have read through Gemini is wrong. Blind faith in AI, based on the understanding that it has access to all the world’s knowledge, leads to embarrassing errors and the even more nefarious risk of disseminating false information when not fact-checked by a human.
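For the curious, the trick is just Google’s minus operator: appending “-ai” excludes the term “ai” from results, and, as a side effect of using any search operator, the AI Overview panel currently doesn’t appear. A minimal sketch of how one might automate it (the `no_ai_search_url` helper is my own hypothetical name, and the panel-suppressing behavior is undocumented and could change at any time):

```python
from urllib.parse import urlencode


def no_ai_search_url(query: str) -> str:
    """Build a Google search URL with '-ai' appended to the query.

    The trailing '-ai' uses Google's exclusion operator; at the time
    of writing, any search operator also suppresses the AI Overview.
    This is observed behavior, not a documented API.
    """
    return "https://www.google.com/search?" + urlencode({"q": query + " -ai"})


# Example: an AI-free search for blackout survival tips.
print(no_ai_search_url("how long does a fridge stay cold in a blackout"))
```

The same idea works manually, of course: just type the “-ai” yourself.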
If we can’t do anything, if we’re constantly misinformed, if we can’t think for ourselves, what does that make us? Easy to control. Easy to manipulate. Did you really think the ultra wealthy tech bros of the world had your best interest at heart?
Anyways, it should be obvious that this type of intelligence is exactly what it says it is: artificial.
this post was so validating for me. lately i've been feeling like ai is literally ruining my entire life despite the fact that i don't use chatgpt or any of those ai apps. group mates i worked with in college kept getting ai to write their stuff. the most frustrating part wasn't necessarily the ai work itself (still quite frustrating though), but the fact that the work wasn't actually hard at all. the teachers gave us slide summaries and you really only had to go through them once to get a grasp of the content. the worst part is there's no line of thought in ai writing, or proper elaboration, and accepting that is so scary. the idea of being this prone to propaganda this willingly is awful man
While I agree that AI is making us dumb, it's easy to be cynical when half the population already has a child-like grasp on reality.