[The View from 30,000 Feet is an occasional entry that offers a big-picture view of some of the topics covered on this blog.]
Many observers of higher education would argue that even when colleges and universities respond promptly to critical changes in the education market, they often respond wrongheadedly. Such is the argument made by Michael Clune in his recent essay in The Atlantic on the academy's response to the leaps and bounds being made in artificial intelligence. The opening lines of Clune's essay set the stage: "After three years of doing essentially nothing to address the rise of generative AI, colleges are now scrambling to do too much. Over the summer, Ohio State University, where I teach, announced a new initiative promising to 'embed AI education into the core of every undergraduate curriculum, equipping students with the ability to not only use AI tools, but to understand, question and innovate with them—no matter their major.' Similar initiatives are being rolled out at other universities, including the University of Florida and the University of Michigan. Administrators understandably want to 'future proof' their graduates at a time when the workforce is rapidly transforming. But such policies represent a dangerously hasty and uninformed response to the technology. Based on the available evidence, the skills that future graduates will most need in the AI era—creative thinking, the capacity to learn new things, flexible modes of analysis—are precisely those that are likely to be eroded by inserting AI into the educational process."

From here Clune argues that the academy's response to advances in AI should turn on the answers to two questions: "What abilities do students need to thrive in a world of automation? And does the incorporation of AI into education actually provide those abilities?" Clune argues that students should be equipped with the ability to query AI, to critically analyze its responses (which includes assessing their weaknesses and inaccuracies), and to fold new information into their existing knowledge base.
As Clune puts it, "The automation of routine cognitive tasks . . . places greater emphasis on creative human thinking. Students must be able to envision new solutions, make unexpected connections, and judge when a novel concept is likely to be fruitful. Finally, students must be comfortable and adept at grasping new concepts. This requires a flexible intelligence, driven by curiosity. Perhaps this is why the unemployment rate for recent art-history graduates is half that of recent computer-science grads." Clune's essay then turns to a second question: "Will a radically new form of AI-infused education develop these skills?" As he points out, a growing body of research suggests that it will not. "Professors with the most experience teaching students to use technology believe that no one yet understands how to integrate AI into curricula without risking terrible educational consequences," Clune adds. Clune's summary of the problem relates directly to the extensive discussion in which Turner College faculty engaged during the November meeting. As his essay summarizes, "We don't have good evidence that the introduction of AI early in college helps students acquire the critical- and creative-thinking skills they need to flourish in an ever more automated workplace, and we do have evidence that the use of these tools can erode those skills. This is why initiatives—such as those at Ohio State and Florida—to embed AI in every dimension of the curriculum are misguided. Before repeating the mistakes of past technology-literacy campaigns, we should engage in cautious and reasoned speculation about the best ways to prepare our students for this emerging world."

Clune's prescription for integrating AI into higher education is generally straightforward. As he writes, "The most responsible way for colleges to prepare students . . . is to teach AI skills only after building a solid foundation of basic cognitive ability and advanced disciplinary knowledge. The first two to three years of university education should encourage students to develop their minds by wrestling with complex texts, learning how to distill and organize their insights in lucid writing, and absorbing the key ideas and methods of their chosen discipline. These are exactly the skills that will be needed in the new workforce. Only by patiently learning to master a discipline do we gain the confidence and capacity to tackle new fields. Classroom discussions [plus] long hours of closely studying difficult material will help students acquire that magic key to the world of AI: asking a good question. After having acquired this foundation, in students' final year . . . AI tools can be integrated into a sequence of courses leading to senior capstone projects. Then students can benefit from AI's capacity to streamline and enhance the research process. By this point, students will (hopefully) possess the foundational skills required to use—rather than be used by—automated tools."