"These days, everyone sounds a bit the same": how AI is changing university classes

EDITOR'S NOTE |  The author is a third-year student at Yale University in New Haven, Connecticut, and discussed her experience using artificial intelligence in the classroom with classmates while writing this article.

At this stage of her final year at Yale University, Amanda knows that many of her classmates are using artificial intelligence chatbots to write papers and perform other academic tasks.

However, she began to notice something unusual in her smaller seminars. Her classmates would sit behind their laptops with well-structured discussion points and arguments, but the conversations that followed often lacked depth, whatever the discipline.

In one of the classes, "the conversation stopped and I looked to my left, where I saw someone furiously typing on their laptop, asking (a chatbot) the question that the professor had just asked about the text," Amanda recalled to CNN.

Amanda and two other students, Jessica and Sophia, attend Yale University. The three young women requested anonymity for fear of reprisals from classmates and professors, so CNN agreed to change their names in this article.

Amanda admitted she was taken aback. Until that day, she hadn't realized that her classmates were using text-generating systems in class and sharing the output. Now she notices the impact this trend is having on discussions among university students.

Nowadays, the young woman added, "everything sounds a bit the same." The student emphasized that, during her first year of college, she sat in seminars where everyone had something different to add and that, although the students supported each other's ideas, "they approached the topics from different perspectives and offered distinct comments."

As artificial intelligence becomes increasingly integrated into education, teachers and researchers are finding that this technology may be eroding students' capacity for original thought and expression.

A study published in March in the scientific journal Trends in Cognitive Sciences revealed that large language models are systematically homogenizing human expression and thought across three dimensions: language, perspective, and reasoning. Both students and teachers report witnessing the effects of this trend in the classroom.

And this makes many students seem to have the same voice.

Why do students use AI in the classroom?

Jessica, a senior at Yale, revealed to CNN that she uses artificial intelligence every day in class. In an economics seminar where the professor asks students surprise questions, the young woman noticed that, at the beginning of the class, "you could see everyone inserting each of the PDFs" into a chatbot.

The university student also uses these tools when she has difficulty putting her thoughts into words. Often she has a concept she wants to comment on, but doesn't know how to formulate the sentence on her own, so she asks the program "to make it more cohesive".

A spokesperson for Yale University explained that students "continue to test the use of AI in classes," assuring that the institution is aware of the ways in which the technology is used in a teaching context, including those described in this article.

To support learning and engagement, the same source told CNN that there is a broader trend of teachers designing courses with limited or no use of laptops, "prioritizing printed materials, original thinking, and direct interaction with colleagues and teachers."

Thomas Chatterton Williams, visiting professor of humanities and principal investigator at the Hannah Arendt Centre at Bard College, has witnessed the impact of students' decisions to use these tools.

According to Williams, who is also a non-resident researcher at the American Enterprise Institute, a public policy think tank, students' reliance on artificial intelligence has paradoxically raised classroom discussion of difficult concepts to a generally higher level, but it has also tended to crowd out "more unusual, eccentric, and original thinking."

The academic expressed his deepest concern about the possibility that many brilliant young people will never find their own voice. In fact, he fears that a surprising number of these students "will not even fully appreciate the merit of authorship and owning a point of view."

Jessica admitted that she had become lazier since she started using these systems to help her with her university courses.

The young woman reflected on how much work she had stopped doing, feeling that her work ethic had "completely diminished since high school."

Why does AI make people sound the same?

Large language models are trained to predict the next most likely word from a statistical point of view, taking into account everything that preceded it. This explanation is offered by Zhivar Sourati, a doctoral student at the University of Southern California and lead author of the study on the subject.

The data with which these models are trained disproportionately represent dominant languages and ideas, so the answers to users' questions "naturally reflect a narrow and biased slice of human experience," the researchers wrote in their study. The result is "a narrowing of the conceptual space in which the models write, speak, and reason."

Technology-induced homogenization occurs in three dimensions: language, perspective, and reasoning strategies, the authors explained. This happens because algorithms tend to reproduce what researchers classify as viewpoints of Western, educated, industrialized, wealthy, and democratic societies, even when they receive explicit instructions to represent other identities.

One possible consequence, Sourati warned, is that dominant language and perspectives may come to be seen as more credible and "more socially correct," marginalizing other approaches. A similar phenomenon is observed in reasoning, where the popular technique of guiding the machine through step-by-step logical thought may be stifling more intuitive, culturally specific, and creative ways of solving a problem.

The University of Southern California researcher explained that when a group of people repeatedly interacts with these systems, their creativity is "flattened compared to the same group without AI assistance."

This flattening is raising concerns in educational institutions at all levels.

When faced with open-ended and subjective questions, without a single correct answer, teachers could expect a wide range of reactions. However, if all students use technological tools, their answers may become more polished, but they end up falling into a mere handful of similar categories. They thus lose the diversity of thought that classroom debates aim to foster.

Zhivar Sourati's biggest fear is that homogenization is affecting those who are precisely developing the ability to generate new ideas creatively. The researcher warns that if university students continue to rely on technology instead of developing their own thought processes, "they won't even learn to think for themselves and have their own perspectives."

Morteza Dehghani, a professor of psychology and computer science at the University of Southern California, shared that he has heard of people using chatbots to decide who to vote for in elections, something he considers "quite frightening."

The study's co-author highlighted the consequences of this situation, stressing that if the population loses diversity in its thinking or falls into intellectual laziness, "it is obvious that this will profoundly affect our society."

Sophia, a third-year student at Yale, believes that her anthropology classmates are using these platforms to draft outlines of what to say in class because they feel insecure about what they don't know.

The university student added that creativity is declining because "we've lost the ability to make connections."

Morteza Dehghani agrees that if people continue to delegate their reasoning to technology, communities will lose creative innovation and the ability to critique dominant ideas or even political candidates.

The researchers noted that as more people use artificial models to write and think, these results are reabsorbed into human discourse and, ultimately, into the data used to train the next generation of algorithms. Thus, homogenization continues to worsen.

The professor warned that, by transferring our reasoning to these models, "we can easily be persuaded by what they tell us."

In the field of education, Dehghani expressed concern about a generation of students who are learning and being tutored by chatbots. The academic argued that these young people will be more homogeneous in how they think and write, "so this will have long-term influences."

People are not learning to reason

Sophia, who is trying to resist the use of technology in college, believes that her classmates are downplaying their own thinking in favor of using "really fancy words".

The young woman confessed that she preferred to admit to the teacher that she didn't know what they were talking about. The student warned that, even if all the texts were entered into an intelligent program, the machine wouldn't have "our past experiences that make us critical thinkers."

Amanda agreed with this view, recalling that in the past people had much more to say because they felt truly connected to the subject matter. The student noted that now classroom discussions don't delve into the issues, concluding that much of this has to do with chatbots, "but also with the fact that there's no longer as much desire to create a personal connection with the study material."

Disappointed, the university student added that it was boring to be in a class where everyone has the same thing to say and where "nobody wants to go deeper or contradict what is directly stated in the text or the standard."

Daniel Buck, a researcher at the American Enterprise Institute and former English teacher for seven years in primary and secondary schools, expressed concern that students were circumventing the cognitive effort required to participate in classroom discussions and complete homework.

According to this academic, much of the learning happens in the tedious details and the difficulty. The researcher explained that young people only retain what they have invested time in consciously processing. Thus, if a student delegates thinking to a machine, they may be able to reproduce an argument in class, but they will not have developed the underlying skills to apply that knowledge in other situations.

Daniel Buck drew a clear distinction between the new artificial intelligence models and the shortcut technology that preceded them: SparkNotes, the popular literary summary site. The former teacher recalled that educators could easily tell when students were using the site's chapter summaries instead of reading the works themselves.

Artificial intelligence is a far more powerful version of those old summary sites, "capable of answering any question you ask it," the researcher compared. Whereas the older tools offered a fixed set of analyses, AI responds to whatever a teacher asks, making it much harder to tell when the reasoning is not the students' own.

The difference lies in how people think. Morteza Dehghani clarified that, instead of being used merely as a reference, like books or search engines, this technology acts as an active participant in "problem-solving and perspective-taking."

Thomas Chatterton Williams asserted that what we are witnessing now is fundamentally different from other periods of homogenization of expression and thought. The professor argued that if even professional writers are finding it extremely difficult to resist outsourcing the arduous task of debating words and ideas, he doesn't see how "younger generations, who haven't experienced a world before highly sophisticated, on-demand AI-generated writing, will be able to do so, at least not on this scale."

Daniel Buck fears that university students will graduate without having built relationships with professors and without the habit of sustained cognitive effort, which will lead to future difficulties in solving real-world problems.

The former professor confessed the enormous pleasure he takes in reading original essays written by his students. Buck emphasized that, even when the argumentation isn't as solid as he would like, it's gratifying to see young people beginning to think for themselves, to analyze and to think critically, a process he compared to seeing his own children walk for the first time, "where they stumble and fall, and that's fantastic, keep doing it."

Reading and interacting with students' original ideas in class helps teachers understand how they think and express themselves.

The academic pointed to an often-overlooked interpersonal exchange, in which students get to know one another and their teacher and come to trust the teacher's assessment, and lamented that "this is also lost when everything starts to be done through artificial intelligence."

How teachers are circumventing AI

Sun-Joo Shin, a philosophy professor at Yale, considered it a major task for anyone involved in teaching to "continue exploring ways to ensure that students maintain critical and creative thinking in the current technological age."

The teacher acknowledged that we are experiencing an interesting and exciting transition, expressing her desire for her students to understand the subject matter, a wish that remains unchanged regardless of the emergence of these innovations. At the same time, she wants them to use this fantastic tool to their advantage and not become victims of it. The great dilemma for an educator, the teacher observed, lies in discovering how to help or force students "to learn the subject matter and to think creatively, without abandoning technological tools, but also without copying them."

Until the first semester of the 2024 academic year, Sun-Joo Shin was not concerned that algorithms would affect students' understanding in her mathematical logic course. She had tested the problem sets with the existing models, and they proved incapable of solving the problems.

However, since then, technology has been catching up, so programs are now able to answer questions "quite well" when university students enter their course notes and study materials. Given this, the professor began to consider additional requirements for the course, far beyond simply submitting answers at home.

The academic argued that, after all, "it would be extremely unfair to give good grades to answers given by artificial intelligence."

Yale University provides guidelines on the use of these tools, for both students and professors. One of the institution's websites states that "the use of generative AI is subject to the individual policies of each discipline." The university encourages faculty to adapt the model guidelines to their teaching objectives, warning that "AI detection tools are unreliable and currently lack any support."

The university provides standard policies for different types of classes, such as creative writing seminars or mid-level theoretical classes in the sciences. These guidelines range from discouraging its use, with explicit rules on when AI cannot be activated, to allowing its use only as a source of ideas. They also include the possibility of encouraging the use of technology in research papers, although the submission of texts generated entirely by chatbots is expressly prohibited.

Daniel Buck warns that no assignment sent home can be guaranteed to be the student's own work. To counter such workarounds, educators are returning to reading texts aloud in class, assigning handwritten essays, and giving "paper and pen assessments."

Knowledge testing in class often comes in the form of surprise quizzes. The researcher illustrated that a student who asks a program for a summary of a book, instead of reading it in its entirety, may grasp the general outline, but there is a strong probability that a specific detail required in the test will not be included in that summary.

The researcher noted that anyone who has read the text will find the test very easy, adding that if they haven't, "there will be no way to disguise the gap."

Professor Sun-Joo Shin took drastic measures, implementing "a rather significant change" in the requirements for her two logic classes. While maintaining the sets of exercises in class, she reduced their weight in the final grades. Currently, these worksheets are evaluated solely based on their completion, with students receiving a performance assessment instead of a numerical grade.

Using the exercises completed at home as a question bank, the professor began administering two midterm tests and a final exam, always in person. She specified that some questions are taken from previous exercises, others undergo slight modifications, "and there are also questions that require identifying an error in a proof or filling in gaps in exercises that have already been solved in other classes."

In her computability and logic course, the professor explained that she has been applying oral and individualized tests for several years, as well as requiring presentations to the class long before the era of generative technologies, a method that has "proven to be very positive." Now, exams, oral tests, and theoretical presentations have a significantly greater weight in the final grade, to the detriment of homework.

Professor Thomas Chatterton Williams arrived at a similar point through a different approach. He decided to move all writing assignments into the classroom, making them spontaneous exercises. At the end of the semester, he evaluates the classes through oral exams.

In an email, the professor confessed that he could not give students any writing assignment with the necessary confidence without seeing them transfer their ideas to paper by hand, in his own presence. The academic views this reality as a terrible but strictly necessary loss, concluding that "the temptation and availability of technology are too great."

Interference in the education of others

Although teachers can limit the role of machines in assessments, it is equally important that university students have the intention and willingness to reduce their technological dependence while learning, especially since this attitude affects the education of their fellow students.

Student Amanda described the situation as frustrating because, although she tries to distance herself from these types of platforms, she confessed she couldn't "stop others from using them." The university student emphasized that the fact that others use them also affects her education, taking away "the value of the two hours of my seminar."

Basil Ghezzi, a first-year student at Bard College who actively avoids using these tools in her studies, is particularly concerned about the environmental costs associated with the models. As an alternative, she encourages classmates to lean on the living resources already around them.

The university student urged her classmates to speak with their tutors, talk to their professors, and reach out to the people around them, encouraging them to have "meaningful conversations with the people in their lives."

Still, not everyone has an all-or-nothing approach to these systems. Morteza Dehghani revealed that he writes in bullet points to capture his own ideas and then asks the machine to find flaws in his work.

The professor hopes that more companies will invest in systems capable of generating variety and reflecting the plurality of thought in today's society. For now, however, Dehghani suggests that people resist using artificial intelligence in the generation and ideation of reasoning.

The University of Southern California researcher issued a clear warning on the subject. In his view, artificial intelligence models should act as mere collaborators and "should not be agents that do everything on our behalf."
