thank you for your service here. 🙏 i have been side-eyeing posts about this for weeks, but haven't been able to look into it yet. the thing about students not being able to quote from their own essays when using an LLM seems so obvious to me, and also so obviously not a sign of brain damage... learning just takes time and repetition that you don't get when you copy and paste? refocusing the education and AI convo on economics is soo sorely needed in the face of all this moral panic about the kids not being able to read anymore!
I saw all the hubbub about this and felt it smelled funny. Turns out I was right: once again, the mainstream media takes a scientific study, omits all the extremely important conditions and constraints, and just runs with a ~scary~ conclusion.
I don't have the chops to comment on the actual study itself, but I can probe the apparent conclusion that LLMs worsen educational attainment. The question "Does using them make students 'worse'?" implicitly assumes all was well before the arrival of the nefarious LLM and that students were learning bigly from these assignments. Which is so preposterous an assumption that I can barely even. Rote, unengaging, unrewarding assignments have always been met with students going about them in as efficient a fashion as possible. Back in my day, the late 90s/early 00s, that meant copying summaries of books, to name just one thing (I vividly remember the consternation about the Internet making this easier to do).
If you don't give people a reason to give a fuck by making tasks rewarding, engaging, etc., it's no wonder they'll go about them in the way that's most energy efficient? That's a far more difficult conversation though, as you rightly point out, as it touches on far deeper causes in our socio-political processes that most people either don't want to acknowledge or are oblivious to. It's much easier to blame LLMs for it. As people blamed Wikipedia before that, and Google before that, and probably publications summarizing books/literature before that.
To share a personal anecdote: I viscerally loathed those assignments in school, the English language does not have the words to properly express just how deeply I abhorred doing them, and I did whatever I could to get through them while expending as little effort as possible. And yet... I like writing. I regularly just throw hundreds, if not thousands, of words onto the page. Because I want to. Because I enjoy it. Because communicating some thought or observation I have clearly tickles my brainulum something fierce. Something the educational system did nothing to foster, and yet here I am.
It definitely felt like preparation for the corporate world, now that I think about it.
Speaking of motivation, I think it really says something that the researchers requested people (or at least, those publishing the findings) not use terms like "stupid/dumb" or "brain damage/rot," and these were the exact words so many people first reached for when discussing this study. Even supposed progressives and pro-disability-rights people have been saying stuff like this! Degeneration theory is back with a veneer of concerns about ethics and education, and it's kind of despair-inducing TBH! I'm glad you wrote about it, as someone with a bit of a bigger platform and more clout than the one (1) post I saw debunking the claims people were running with (shout out to ms-demeanor on tumblr!)
Well done. I have one on deck about this that I have been weighing whether or not to post. I think you covered it more extensively. I hesitate because I don't want to dump on the study, which is not saying the same thing as the various viral posts/threads on X, The Everything App™️, are saying.
I ~do~ have frustration with the scope, and possibly the motive, of the study, and with what exactly it informs if they are only measuring brain activity during LLM use, but that could prove to be a component of something important later. Who knows! Still, the limited scope tees up sensational nonsense, and that has been annoying.
Post it!! I think this conversation desperately needs more nuance, exploration, and questioning. And yes, I think what the researchers examined is worthwhile, but it's also hardly groundbreaking -- different parts of the brain "light up" during different activities, water is wet. It is still useful to document exactly which neural bands are activated, because we still know so little about the brain in general.
Agreed. Posted it! https://petercoffin.substack.com/p/ai-is-rotting-your-brain-and-other
Holy F. This is one of the most intelligent yet understandable essays about this topic I've read. Thank you for sharing it with us. I'll be passing it along to many of my fearful colleagues.
I really appreciate your work and the clarity you bring to complicated issues, especially around neurodivergence and social justice. That said, I think there’s a blind spot here around cognitive equity.
Framing AI tools as something that takes “the struggle and reward out of writing” unintentionally reinforces a very narrow idea of what valid creative labor looks like. For many of us—especially those with executive dysfunction, trauma blocks, or non-linear processing (especially GLPs)—AI isn’t replacing our voice. It’s helping us access it.
If we’re serious about inclusive communication and dismantling ableist expectations, we also need to respect the tools that allow more of us to participate in dialogue, expression, and meaning-making. That doesn’t cheapen writing. It broadens who gets to be part of it.
I also am concerned about this issue and thinking about it is difficult for me. I am not at all anti-AI in any overarching way but I deplore many aspects of the current AI industry. I see AI slop everywhere now and feel both appalled and baffled by what to do about it. I worry about what seems like ill-considered adoption of AI in many sectors, including K-12 education.
I deal with cognitive issues from ADD and CPTSD. If I were working now, I expect I would be using AI writing tools, as friends with similar challenges do. And like them, I would feel troubled about doing so. I would love to read more nuanced discussions about how we can, as individuals and a society, balance the various potentials and harms these tools afford in ways that are transparent and inclusive.
Thanks for your comment and to Devon for the article.
Thanks for this conversation. I think there are many technological tools that disabled people have basically been forced into using by circumstance that are also a terrible force in the world. Grubhub and Amazon same-day delivery come to mind. I will always want to acknowledge all the facets in play here -- the lack of accessibility and assistance that motivates use of these tools, and the effects they have in practice on the environment, workforce, etc. Can LLMs be a completely neutral tool? In a completely different economic context, probably?
Yes. I also am physically disabled and I feel conflicted every time I get a grocery delivery or pharmacy package. Ugh
I appreciate your thoughtfulness here, and I agree it’s important to name the environmental and labor costs of any tool. That said, I’m not sure comparing LLMs to services like Grubhub quite holds—cognition is not a convenience, it’s core to access. For many disabled and neurodivergent people, LLMs are not just tools of efficiency but participation. While economic context deeply shapes how AI is deployed, I think we also have to account for its intrapersonal, interpersonal, and cultural implications—independent of, though entangled with, those economic structures.
I wrote that it is injudicious use of AI that can cheapen writing, as I also don't believe it always does or innately has to. As another commenter brought up, the very nature of how a lot of college writing assignments are written and graded already cheapens writing, long before AI became a part of the picture!
A thousand times yes to this. I have more capacity now for art, reading, and dialogue, because I’m not using every single ounce of energy on tedious tasks that frankly have little to zero impact in the workplace.
I can totally understand the concern about the rise of this new technology. It also reminds me of the late ’90s conversations around Google and the expansion of the internet as a whole. Some of them founded, some of them not. I will state that, on a personal level, for someone with learning disabilities it’s been a godsend to be able to have a document reviewed for grammar, flow, and readability, so that I don’t need to spend hours going over something multiple times only to miss stuff anyway. This then gives me the capacity for the projects that require nuanced and thoughtful work.