The More Young People Use AI, the More They Hate It

It's been almost three years since Silicon Valley started aggressively pushing large language model-based chatbots like ChatGPT as the supposedly inevitable future of everything, and no group has felt the pressure quite like Gen Z. As with many tech trends before it, young people are among the biggest adopters of AI chatbot tools. But contrary to the tales spun by tech companies, polling data shows that Gen Z students and workers are a big part of a wider cultural backlash. Even as they use these tools, vast swaths of young people are deeply skeptical of, and even resentful toward, the AI-centric future they feel is being forced on them.

Far from the stereotype of lazy people looking for shortcuts, Gen Zers have voiced some of the loudest objections to generative AI use. Meg Aubuchon, a 27-year-old art teacher, says her response has been to avoid chatbot tools entirely. "It just makes me want to dig my heels into a career where I never have to use AI, even if that's a career that isn't going to pay as well," Aubuchon says. Emerging into an increasingly brutal job market, young people face a contradiction: they are told these tools will eliminate millions of jobs, yet they must use them to avoid falling behind. They are the first generation of adults to navigate a world flooded with generative AI slop, all while fearing its documented environmental toll, its role in spreading disinformation, and its effects on emotional well-being.

"The part that feels scariest to me is the human impact, because it impacts people on an individual level and how they relate to other people," said Aubuchon. Sharon Freystaetter, 25, worked as a cloud infrastructure engineer at a major Silicon Valley company but left right as AI hype started to take off, citing ethical concerns and environmental impacts. Now a food service worker, she avoids chatbots and disables AI features whenever possible. "I think everyone in my immediate peer group is not using AI and is actively against it, besides my friends who are essentially mandated to use it," Freystaetter said. "Suddenly everything was saying 'You need to use AI to get this job' in the requirements."

Fears that chatbots are wrecking critical thinking are common among young adults. According to a Harvard-Gallup study, 74 percent of young adults in the U.S. use a chatbot at least once a month. At the same time, 79 percent "expressed concern that AI makes people lazier," and 65 percent said using chatbots "promotes instant gratification, not real understanding," preventing critical engagement with ideas. In a more recent Gallup poll, Gen Z's opinion of AI tools hit a new low: only 18 percent say they are hopeful about the technology, down from 27 percent last year. The share of Gen Z workers who think AI's risks outweigh its benefits has risen to almost 50 percent. And while 56 percent say the tools help them work faster, eight in 10 admit that using AI in this way makes actual learning more difficult.

To make matters worse, many university students are seeing school administrations shoehorn AI into higher education and pen multimillion-dollar deals with AI companies. Young people are also graduating into a job market made difficult to navigate as AI automation tools arbitrarily filter out job applications. Alex Hanna, the director of research at the Distributed AI Research Institute (DAIR), says the inundation of AI hype is driving resentment. "Universities are hearing from employers that they want students who know how to use these tools," Hanna said. "This is not because the tools actually have shown much value-add — they want Gen Z to show them where the value-add is."

AI companies and universities are taking an "integrate first, find use cases later" approach that essentially recruits students as marketing for the AI industry. At Arizona State University, the administration is using a tool to automatically synthesize professors' lectures into bite-sized materials. Last month, the editorial board of the University of Pennsylvania's student newspaper published a scathing piece criticizing the integration of AI into the curriculum. The authors wrote that AI "can only degrade" education. "Schools are some of the only places we have left to explore and wrestle with human thought," the students wrote. "AI is now corrupting those few sacred spaces."

The Oberlin College Luddite Club similarly rejected initiatives to experiment with AI-centric education, stating it would "jettison our student body down a lazy, irredeemable tunnel of intellectual destruction." The fear of a permanent loss of critical thinking skills is backed up by data. A study from the MIT Media Lab found that EEG scans showed decreased brain activity in people writing essays using AI tools. This process, known as "cognitive offloading," can diminish skepticism and the ability to discern truth from deception.

Gen Z seems hyper-aware of the tools' limitations, from "hallucinations" to emotional hazards. "Gen Z is more realistic about what the tools actually can do," Hanna said. "They can handle text-based work that they don't want to do, but they are often rather savvy about their limits." This is true even among those who find the tools useful. Emma Gottlieb, who works in technical sales, says she uses AI to sift through technical documents but knows better than to take outputs at face value. "I wouldn't say it's a significant time-saver, but I think it's just like fast food — it's easy, it's cheap, and it's there."

Another explanation for Gen Z's stance is that AI use has become culturally toxic. Many young people find it fake and uncool, especially when used to circumvent the creative process. Lacking clear rules, AI use also causes distrust among peers. According to a University of Pittsburgh study, students viewed the use of AI tools as a "red flag" that causes them to "think less" of their peers. Hanna suggests a more critical approach is necessary — one that focuses on the material conditions pressuring young people to use these tools. "Why do they feel compelled to use them? What material conditions do they face at school?" Hanna asks.

Looking forward, there is concern for the generations that follow, who may lose the chance to develop healthy relationships with technology when it becomes mandatory. "These are the kids who are growing up with AI integrated into everything," Freystaetter said. "They grow up not knowing that they should be critical of it, and that they're being influenced by it."