The Harvard Gazette | What if AI could help students learn, not just do assignments for them?
Professors find promise in ‘tutor bots’ that offer more flexible, individual, interactive attention in addition to live teaching: “...students in the AI-supported classroom reported significantly better engagement with the course and more motivation to learn…with an AI tutor, she added, students can go at their own pace and ask as many questions as they like, any time they like — without fear of being judged. And that’s the most important aspect of the tutor bot, the researchers said — to make sure it prompts students to think and ask questions, rather than do the thinking for them.”
The Atlantic | Colleges Are Preparing to Self-Lobotomize
The skills that students will need in an age of automation are precisely those that are eroded by inserting AI into the educational process.
Kudos to Mita for sharing this article, which connects well with the MIT study on the cognitive debt incurred by using AI for writing tasks, referred to by James Campbell in the previous AI BEAT. The article is helpful in that it discusses ways to mitigate some of these effects. The author also draws attention to the larger issue they are observing: "...the current push to integrate AI into all aspects of curricula is proceeding without proper attention to these safeguards, or sufficient research into AI’s impact on most fields of study."
JCAM’s thoughts on MIT AI research
James Campbell: In this MIT study, researchers used EEG monitors to track the neural engagement of groups of students during essay-writing tasks. Some participants were allowed to use LLMs (large language models such as ChatGPT), some were only allowed to use a search engine, some were "brain only," and others were asked to switch from one mode to another. The study found that "cognitive activity scaled down in relation to external tool use" and that "LLM users consistently underperformed at neural, linguistic, and behavioral levels" over the course of the 4-month study.
LEVR’s thoughts on NY Times article
Rebecca Levere: Some sobering advice from the Vice Provost at NYU responsible for helping faculty and students adapt to digital tools. A couple of passages that stood out to me:
“Our A.I. strategy had assumed that encouraging engaged uses of A.I. — telling students they could use software like ChatGPT to generate practice tests to quiz themselves, explore new ideas or solicit feedback — would persuade students to forgo the lazy uses. It did not.
We cannot simply redesign our assignments to prevent lazy A.I. use. (We’ve tried.) If you ask students to use A.I. but critique what it spits out, they can generate the critique with A.I. If you give them A.I. tutors trained only to guide them, they can still use tools that just supply the answers. And detectors are too prone to false accusations of cheating and too poor at catching lightly edited output for professors to rely on them.”
(And the TL;DR: the ‘hated but only real solution’ to the AI cheating crisis is to have students write or speak in front of you, without tech.)
Stanford University’s Generative AI for Education Hub Research Study Repository
Unsure if your use of GenAI is backed by the peer-reviewed literature? Why not test your lesson idea against the 700 vetted papers in the Research Study Repository of Stanford University’s Generative AI for Education Hub. Go to ChatGPT and enter a prompt along the lines of: “...our school is struggling with middle school math outcomes. What does the Research Study Repository at Stanford’s Generative AI for Education Hub say about GenAI tools that teachers can use to enhance numeracy skills among middle school students?”
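For readers who would rather script this kind of lookup than type it into the ChatGPT app, here is a minimal Python sketch. It assumes the official openai package (v1+) and an OPENAI_API_KEY set in your environment; the model name is a placeholder, and note that a bare API call may lack the web browsing the ChatGPT app uses to consult the repository, so treat the output as illustrative rather than authoritative.

# Minimal sketch: send the repository query to an OpenAI model.
# Assumptions: the official `openai` Python package (v1+) is installed
# and OPENAI_API_KEY is set in the environment. The model name below is
# a placeholder, and without web browsing the model may answer from its
# training data rather than the live Stanford repository.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Our school is struggling with middle school math outcomes. "
    "What does the Research Study Repository at Stanford's Generative AI "
    "for Education Hub say about GenAI tools that teachers can use to "
    "enhance numeracy skills among middle school students?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute any current model
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)

Swapping in a different lesson idea is just a matter of editing the prompt string, which makes it easy to run the same literature check across several planned units in one sitting.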