Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on Apple Podcasts, Google Play or Stitcher.
Techno-optimists have high hopes for how AI will improve learning. But what’s the merit of the “bull case”, and what are the technology’s risks? To think through those questions, Michael and Diane sit down with Ben Riley of Cognitive Resonance, a “think and do” tank dedicated to improving decisions using cognitive science. They evaluate the cases made for AI, unpack its potential hazards, and discuss how schools can prepare for it.
Listen to the episode below. A full transcript follows.
Diane Tavenner: Hi there, I’m Diane, and what you’re about to hear is a conversation Michael and I recorded with our guest, Ben Riley. It’s part of our series exploring the potential impact of AI in education, where we’re interviewing optimists and skeptics.
Here are two things from the episode that I keep thinking about:
First, our conversations are starting to make me wonder if AI is going to disrupt the model of education we’ve had for so long, as I think Ben perhaps fears, or if it’s actually going to strengthen and reinforce our existing models of the schoolhouse with classrooms filled with a teacher and students.
The second thing that struck me, and that I keep thinking about, is that the one case Ben sees for AI potentially being beneficial is directly related to his own work and interest in understanding the brain and how learning occurs. To be fair, there’s a theme emerging across all the conversations we’re having: people see value in the thing that they value themselves. Perhaps that’s an artifact of these early stages, and who knows, but it’s making me curious.
And speaking of curious, a reflection I’m having after talking with Ben is about the process of change. Ben is a really well-reasoned, thoughtful skeptic of AI’s utility in education, and he comes to his views at least partially from using AI. I would consider myself much more of an optimist, and yet I’m finding myself a little bit annoyed right now that every time I want to write an email or join a meeting or send a text or make a phone call, I’ve got AI pretty intrusively jumping in to try to help me. It’s really got me thinking about the very human process of change, which is one of the many reasons I’m looking forward to sensemaking conversations with Michael after all of these thought-provoking interviews.
In the interim, we’d both love to hear your thoughts and reflections. So please do share. But for now, I hope you enjoy this conversation on Class Disrupted.
Michael Horn: Hey, Diane. It is good to see you again.
Diane Tavenner: You too. And I’m really excited to be back. Coming off of our last conversation around AI and education, I’m even more excited about what we’re going to be learning in this series. And I think today will be no exception in really stretching our minds and our thinking about the possibilities, the limitations, and the potential harms of AI and its intersection with education.
Michael Horn: Yeah, I think that’s right, Diane. And to help us think through these questions today, we’re bringing someone on the show that I think both of us have known for quite a long time. His name is Ben Riley. He founded Deans for Impact in, I believe, 2014, a nonprofit that connects cognitive science to teacher training. Ben stepped aside a couple years ago and has most recently founded Cognitive Resonance, a “think and do tank,” in its words, and a consultancy whose focus is on this very topic of AI and learning, which makes Ben the perfect guest for us today. So, Ben, welcome.
Ben Riley: Thanks so much for having me. We’ll see if you still think I’m the perfect guest by the end of it, but I appreciate being invited to speak to both of you.
Ben Riley’s Journey to the Work
Michael Horn: Absolutely. Well, before we get into the series of questions that we’ve been asking our guests, we’d love you to share with the audience how you got so deep into AI. I will confess, to give folks background, I’ve actually been an editor on a couple of the pieces you’ve submitted to Education Next on AI, and I found them super intriguing. And then somehow I had no idea that you had created this entire life for yourself around AI and education. You have some language on your site that I think is really interesting, where you say the purpose is to influence how people think about generative AI systems through the lens of cognitive science. You believe that will make AI more intelligible and less mysterious, which will help influence what people do with it in the years to come. And then you write that you see it as a useful tool, but one with strengths and limitations that are predictable, and that we have to understand those if we want to harness it, in essence. So how and why did you make this your focus?
Ben Riley: Yeah. Well, thank you for clearly having read the website, cognitiveresonance.net, or the Substack, Build Cognitive Resonance. In many ways, the organization reflects my own personal journey, because several years ago I started to become aware that something was happening in the world of AI. At the time it was called deep learning; that was the phrase that was starting to emerge. And to be completely candid, my focus has always been, and in some ways still very much is, on how human cognition works. Artificial intelligence is considered one of the disciplines within cognitive science, along with psychology, neuroscience, linguistics, and philosophy; it’s an interdisciplinary field. And for me, quite honestly, AI was sort of this thing happening somewhere over there that I kept a loose eye on. I got in touch with someone named Gary Marcus at the time, and we’ll come back to Gary in a second, and just said, hey, Gary, can you explain deep learning to me, what it is and what’s going on? That began the conversation. And then, quite frankly, I just kind of squirreled it away and didn’t think much about it.

Then, like it did for all of us, ChatGPT came into our lives. And I was stunned. I was completely stunned when I first sat down and started using it. And what really irked me was that I didn’t understand it. I was like, I don’t get how this is doing what it’s doing, so I am now going to try to figure out how it’s doing what it’s doing. And that is not easy. At least it wasn’t easy for me, and I don’t think it’s easy even now for those who might have spent their entire lives on this, much less those of us who are coming in late in the game and just trying to make sense of this new technology in our lives. What I was able to draw upon was both the things that I do know and have learned over the last decade-plus about human cognition, and, frankly, a lot of relationships I have with people in cognitive science broadly. I just started having a bunch of conversations, doing a bunch of reading, and really trying to build a mental model of what’s taking place with these tools, and with large language models specifically. And when I finished all that, I thought, well, geez, that took a lot of work. Maybe it would be helpful to try to pass this along and bring others into the conversation. So that’s really the thesis of Cognitive Resonance.
AI’s Educational Upside
Diane Tavenner: Ben, everything you just described is so consistent with my experience of you over the years, the conversations we’ve had, and my perception of what you care about. And I’m so glad you brought it together in that way, because I’ll be honest, when I heard, wait, Ben is doing AI?, that didn’t totally land with me. So what I’m hearing from you makes me super curious for this conversation, because I’m not getting the vibe that you’re a total AI skeptic, and I’m not getting the vibe that you’re a total cheerleader. I’m guessing we’re gonna have a really nuanced conversation here. So let’s start with kind of that polar question and see where we go. Can you make the argument for us of how AI is going to positively impact education? I’m not saying it has to be your argument, but can you stand one up based on what you’ve learned? What’s the best case to be made for AI positively impacting education?
Ben Riley: Yeah. So this is what people are now calling steel manning, right? Can you steel man the argument that you may not agree with? I had a law school professor who taught me that the best way to write a good legal brief is to take the other side’s best argument, make it even better than they can make it, and then defeat it. You all gave me this question in advance, and I’ve been thinking about it since. I don’t know if I can make one best case, so what I want to do is make three cases, which I think are the positive bull cases.

Number one, and this should be familiar to both of you because we’ve been having this debate for nearly a decade, is personalized learning: a dream deferred, but now it can be real. When we said we were going to use big data analytics to figure out how to teach kids exactly what they want to know, when they need to know it, what we really meant was that we needed large language models that could do that. And now, lo and behold, we have that tool. As Dan Meyer likes to joke, it can harness the power of a thousand suns. It’s got all of the knowledge that’s ever been put into some sort of data form that can be scraped from the Internet or from other sources; the companies don’t always disclose what those sources are, but nonetheless, there’s a lot of data going into these models, which use somewhat mysterious processes of autoregression and backpropagation. We can go as deep as you want into the weeds on some of those terms. But doing that, we can actually, finally, give kids an incredibly intelligent, incredibly patient, incredibly, some would even say loving, some have said that, tutor. And we can do it at scale, and we can probably do it cheaply. And boom, Benjamin Bloom’s dream, two sigma gains, it’s happening finally. Call that the personalized maximization argument.

Argument number two is the AI-as-fundamental-utility argument. The argument here is something along the lines of: look, this is a big deal technologically, in the same way the Internet or the computer is a big deal technologically, and it’s going to become ubiquitous in our society the same way the computer or the Internet has. We don’t even know all the many ways in which it’s going to be woven into the fabric of our existence, but that includes our education system. And so some benefits will accrue as a result of its many powers. Okay, so that’s the utility argument.

The third argument would say something like this: the process of education is fundamentally the process of trying to change mental states in kids. And frankly, it doesn’t have to be kids, but we’ll just talk about it from teachers to students.
Michael Horn: Sure.
Ben Riley: And there are some really big challenges with that, when you distill it down to the act of trying to make a kid think about something. One of the challenges is that we cannot see inside their head, so the process of what’s taking place, cognition or not, is opaque to us. That’s number one. Number two, experiments are really, really hard. They’re not impossible, but you can’t do the sort of experiments that you can do in other realms of life, partly for ethical reasons, but also, frankly, for scientific and technical reasons, because, again, we can’t see what’s happening in the head. Even when you run an experiment, you’re getting approximations of what’s happening inside the head. Some would then say: well, now we have something that is kind of like a mind, and we can kind of, emphasis on kind of, see inside it. And we definitely can run experiments on it in a way that doesn’t implicate the same ethical concerns. That argument, and I’ll call it the cognitive argument, human and artificial, would say that we can use this tool to better help us understand ourselves. In some ways it might help us by being similar to what’s happening with us, but in other ways it might help us by being different and showing those differences. So those are the three arguments that I see.
Evaluating the Case for AI
Diane Tavenner: Yeah. Super interesting. Thank you for making those cases. Which, if any, of them do you actually believe? Now I’m curious about your opinion, and why.
Ben Riley: Yeah. So I have bad news for you. The first one, the personalized maximization dream, is going to fail for the same reason that, I’d like to say, I predicted personalization using big data analytics would fail. We could spend the entire podcast with me unpacking why that is; I’m not going to do that, so I’ll limit it to just two arguments.

The first would be that these tools fundamentally lack a theory of mind. That’s the term cognitive scientists use for the capacity we humans have to imagine the mental states of another. These tools can’t do that. There’s some dispute in the literature, and some researchers will say, well, if you run these sorts of tests, maybe they’re kind of capable of it. I’m not buying it. I don’t think it’s true, and there’s plenty of evidence on the other side saying they just don’t have that capacity. Fundamentally, what they’re doing is making predictions about what text to produce. They’re not imagining the mental state of the user who’s typing things into them. Number two, I would say, is that it obviously misses out on a huge part of the cultural aspect of why we have education institutions and the relationships that we form. The claim that students are going to want to engage with and learn from digitized tutors, the likes of which Khan Academy and others are putting out, I think is woefully misguided and runs counter to literally thousands, if not hundreds of thousands, of years of human history. Okay, so number one, doomed.

Number two is, to me, kind of a “so what,” right? I used the example of computers and the Internet as ubiquitous technologies that AI might join. So let’s say that’s true; let’s say that comes to pass. So what? We have the Internet now, we have computers now, we’ve had both of these things for decades, and they have not, I would argue, radically transformed education outcomes. The ways in which technologies like this become utilities in our lives transform our day-to-day existence. But just because a technology is useful or relevant in some way or form does not mean, emphasis, does not mean, that it is somehow useful for education purposes and for improving cognitive ability. Absent a theory as to the ways these tools are going to do that, whether or not they become ubiquitous background technologies is kind of a “so what” for me.

Number three, the cognitive argument, that this tool could be a useful example and non-example of human cognition, I have a great deal of sympathy for, and I am very curious about. A lot has changed just within linguistics in the last several years in terms of how we conceptualize what these tools are doing and what that says about how we think and deploy language for our own purposes. We may have just scratched the surface with that. The new models that are getting released, the quote-unquote reasoning models, have a lot of similarities in their functionality to things in cognitive science like worked examples, and why those are useful in helping people learn. A worked example is something that lays the steps out for a student: here, think about this, then think about this, then think about this. Well, it turns out that if you tell a large language model, do this, then do this, then do this, or just sort of program it to do that, its capabilities improve.
So, you know, without sounding too much like I’m high on my own supply, this is the Cognitive Resonance enterprise. It’s sort of to say: okay, let’s put this in front of us, and instead of focusing so much on using it as a means to an end, let’s study it as an end unto itself, as an artificial mind, quote unquote, and see what we can learn from that.
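To make the “do this, then do this” idea concrete, here is a minimal sketch of stepwise prompting of the kind Ben describes, assuming the official OpenAI Python client; the model name, question, and prompt wording are illustrative placeholders, not anything prescribed in the episode:

```python
# Illustrative only: compares a bare prompt with a "worked example"-style
# stepwise prompt. Assumes the official OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

question = (
    "A notebook and a pen cost $1.10 together. "
    "The notebook costs $1.00 more than the pen. What does the pen cost?"
)

def ask(prompt: str) -> str:
    # Send a single user message and return the model's text reply.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Bare prompt: the model predicts an answer directly.
print(ask(question))

# Stepwise prompt: lay the steps out, as a worked example does for a student.
print(ask(question + "\nFirst write the two equations, then solve them step by step, then state the answer."))
```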
Michael Horn: Super interesting, Ben, on that one. I’m just thinking about an article I read literally this morning about where AI falls short of mimicking, you know, the true neural networks, if you will, in our brains. So I’m pondering that one now. I guess, before we go to the outright skeptic take, if you will, I’m curious about other things that you think AI won’t help with, beyond what you just listed in terms of this broad notion of personalizing learning, or AI as utility, and the “so what” question. Are there other things where people are making claims that AI is really going to advance the ball, and you’re like, I just don’t see that as a useful application for it?
Ben Riley: Well, you know, we launched into this conversation and we didn’t define what we’re talking about when we talk about AI, right?
Michael Horn: There’s different streams of it. Yep.
Ben Riley: Yeah. And I think that when I’m talking about AI, at least in this context thus far, I’m talking about generative AI, mostly large language models, but it includes any version of generative AI that is, in essence, pulling a large amount of data together and then trying to make predictions based on it, using an autoregressive process, or diffusion in the case of imagery. It’s essentially aggregating what’s out there and, as a result of that aggregation, producing something that relates to it. If you’re talking about anything beyond that, who knows? There are just so many varied use cases. I was mentioning this off air, but I’ll say it now on air: there’s a great book, AI Snake Oil, written by a couple of academics at Princeton, which talks about predictive AI, which they put in a separate category from generative AI, and they’re very skeptical about any of those uses.

My fundamental issue is with the big claim, right? Unbelievably, Sam Altman, the CEO of OpenAI, just a few days ago declared that we’ve already figured out how to create artificial general intelligence, that it’s a solved problem, and now we’re on to superintelligence. I think people should be very, very skeptical of that claim. There are a lot of reasons why I would say that, which, again, could eat up the entire podcast, but I’ll just give you one. What we now know is true about human thought, I think, from a scientific perspective, is that it does not depend on language. Language is a tool that we use to communicate our thoughts. If that’s true, and I would argue that in humans it is almost unassailably true, and I can give you the evidence for why we think that, then it would be very strange if we could recreate all of the intelligence that humans possess simply by creating something like a large language model and using all of the power of all the Nvidia chips to harness what’s in that knowledge.

Now, what people will say, and frankly this is where all the billions and the leading thinkers are headed, is: okay, we can only go so far with language, so how about we try to do it for other cognitive capacities? Can we create neurosymbolic AI, as it’s called, that is as powerful as generative AI with large language models, and start to piece this together in the same way that various cognitive capacities may be pieced together in our own brain, then loop that together and call it intelligence? To which I say: well, good luck. I mean, honestly, good luck. There’s no reason to think that just because we’ve done it with large language models, we’re going to have the same sort of breakthroughs in the other spaces. So I don’t know if this fundamentally answers your question, Michael, but I would say you can have progress in this one dimension, and it can actually be quite fascinating and interesting, but I would urge people to slow down in thinking that it means all of science and humanity, and these huge questions around whether we will ever be able to fully emulate the human mind, have suddenly been solved.
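For readers who want a feel for the “autoregressive process” Ben refers to, here is a toy sketch: a bigram model built from a few made-up words of text that repeatedly feeds its own prediction back in as context. It is a deliberately tiny stand-in for what large language models do at vastly greater scale, not a description of any real system:

```python
# Toy autoregression: predict each next word from the previous one, then feed
# the prediction back in as the new context. Real LLMs run the same loop with
# neural networks over long contexts; the "training data" here is invented.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    out = [start]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:  # no observed continuation; stop
            break
        words, counts = zip(*options.items())
        # Sample the next word in proportion to how often it followed the last one.
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```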
The Skeptical Take
Diane Tavenner: Yeah. Wow. So fascinating. I have so many things coming to me right now, including my long journey and experience with people who make extraordinary, you know, claims and then kind of make the work a little bit challenging for the rest of us who are actually doing it behind them. But let’s turn now, we’re kind of steering in that direction anyway, and go all the way in on the skeptical take. I feel confident you’ve got some good material here for us. What is AI going to hurt, specifically in education? Let’s start there. How is it going to do harm?
Ben Riley: Yeah, well, I don’t think we should use the hypothetical or the future; let’s talk about what it’s harming right now. The big danger right now is that it’s a tool of cognitive automation. What it does, fundamentally, is offer you an off-ramp from the sort of effortful thinking that we typically want students doing in order to build the knowledge they will carry in their heads and use for the rest of their lives. And this is so fundamentally misunderstood. It was misunderstood when Google and the Internet were starting to become a thing. You would hear well-meaning people in education say: well, why do we need to teach it if you can Google it? That was a thing many people said, put up on slides. And look, it makes sense if you don’t spend any time with cognitive science and you don’t spend any time thinking about how we think. So I don’t want to throw those people too far under the bus, but just a little, because now we know. This is as scientifically established as anything else: our ability to understand new ideas in the world comes from the existing knowledge that we have in our heads. That is the bedrock principle of cognitive science, as I like to describe it.

So suddenly we have this tool that says: to the extent you need to express whether or not you have done this thinking, let me do that for you. This tool exists in order to solve for that problem, and guess what? It is very much solving for that problem. I think the most stunning fact I have heard in the last year is that OpenAI says the majority of its users are students. The majority. Now, I don’t know what the numerator and denominator are for that, and I’m talking to some folks trying to figure that out. But at the OpenAI education conference, Lea Crusey, who some of you may know from her time at Coursera, got up and said, and I think they were happy about this, that their usage in the Philippines jumped 90% when the school year started. What are those kids using it for? Yeah, you know, what are those kids using it for? We need to stop pretending that this isn’t a real issue. And people sort of go, well, it’s plagiarism, you could always plagiarize. And it’s like: not exactly. I think talking about it in the context of plagiarism both overstates and understates the case. Because, again, the real issue here is that we will lose sight of what the education process is really about. We already have, I think, too many students and too much of the system oriented around getting the right answer, producing the output. Teachers make this mistake, unfortunately, too often, and a lot of folks in the system make this mistake: we just want to see the outcome, and we are not thinking about the process, which is what really matters, building that knowledge over time. And you’ve got now, and I literally sometimes lose sleep over this, a generation of students whose first experience of school was profoundly messed up because of the pandemic. And right on top of that, we have introduced this tool that can be used as a way of offloading effortful thinking.
And I don’t think we have any idea what the consequences are going to be for that cohort of students and the potentially dramatic deficiencies in the quality of education they will have been provided. That’s one big harm.

There’s another, I mean, there are many others, but there’s another that I’ll highlight here too. I don’t know if either of you watched, I imagine you did, the introduction of ChatGPT’s multimodal system last year, which included the Khan family, Sal Khan and his son Imran. I thought it was fascinating, and it speaks again to the number of users who are students, that OpenAI chose Sal and his son to debut that major product. If you watch that video closely, and you should, you’ll see something that I think is worth paying attention to, which is that at multiple points they interrupt the multimodal tutor they’re talking to. And why not, right? It’s not a life form. It doesn’t have feelings. We know that; it’s a robot, to a degree. I don’t think we’ve really grappled with the implications of introducing something human-like into an education system and then telling students, who are still learning how to interact with other humans, which is another part of education, you know what, it’s okay to behave basically however you want with this tool. The norms and the ways in which schools inculcate values, how it is we relate to one another, could be profoundly affected in ways that we haven’t even begun to imagine, except in the realm of science fiction. And I think it’s worth looking at science fiction and how we tell these stories. I don’t know if either of you watched HBO’s Westworld, particularly the first season, before the show went off the rails. But if you watch the...
Diane Tavenner: Season one was a little intense, too.
Ben Riley: Season one was intense, but it was good. I thought it was good. But it was haunting. And one of the things that was haunting about it, for those who haven’t watched the show, is that it’s filled with cyborgs who are quasi-sentient, and people come to this amusement park set in the old west, and what can you do? You can kill them. You can kill them, and people do that, or worse.
Diane Tavenner: Right, yeah. Well, talk about the other bad thing.
Ben Riley: Right, right. I mean, but, you know, but it’s sort of like the fact that we now can imagine that sort of thing being a future where you could like humans, but not. The philosopher Daniel Dennett, who passed away, talked about the profound dangers of counterfeiting humanity. And I think that’s the sort of concern that is just almost not even being discussed at any real level as we start to see this tool infect the education system.
AI’s Impact on How We Think
Michael Horn: I suspect that’s going to be something we revisit a few times in this series. But you’ve just done a couple things there. One, you’ve, I think, articulately explained how a lot of the bad behavior we’ve seen on social media could actually get exacerbated, not through deepfakes per se, but in terms of how we relate to one another. And you’ve also answered another question I’ve had, which is that I can’t remember a consumer technology where education has been the featured use case in almost every single demo, repeatedly. You may have just answered that as well. I’m curious about a different question, because I know you and Bror Saxberg have had a back-and-forth about whether certain things AI might be harming will be less relevant in the future. He loves to cite the Aristotle story, right? About how we’re not going to be memorizing Homeric-length poems anymore, and maybe that’s okay because it freed up working memory for other things. I’m curious to get your reflection on that conversation, because I think Diane and I would strongly agree that replacing effortful thinking, thinking you can just have people not grapple with knowledge and build mental models and things like that, is going to have a clearly detrimental impact. But are there things where you’d say, this is going to hurt a skill, but that skill may be less relevant because of how we’ll accomplish work in the future? I don’t know your take on that.
Ben Riley: Yeah, I don’t think you’ll like my answer, but I’m going to give you my honest answer.
Michael Horn: I don’t know that I have an opinion. Like, I’m just curious.
Ben Riley: Yeah, I mean, I’m not a futurist, and I’ve made very few predictions ever in my life, at least professionally. One of the few that I did make was that I thought personalized learning was a bad idea in education. And I’d be curious, whether in this conversation or another, if you two, reflecting back on that, would say: actually, knowing what we know now, there were reasons to be skeptical of it.

I used to like to quote Jeff Bezos, with all the caveats right now around Jeff Bezos and anybody from big tech, and I’m annoyed at the turn he seems to have taken. But he has said something that I think is relevant. He’s asked all the time, you know, what’s going to change in the future and how to prepare for that. And he says that’s the wrong question: the thing you should plan around is what’s not going to change. He said, when I started Amazon, I knew that people wanted stuff, they wanted variety, they wanted it cheap, and they wanted it fast, and as far as I could tell, that wasn’t going to change. People weren’t going to say, I want to spend more, or, take longer to get it to me. Once you have the things that won’t change, build around those.

So I said it earlier, and I’ll say it again. The thing that’s not going to change is our cognitive architecture, which is the product of certainly hundreds of thousands, if not millions, of years of biological evolutionary processes. It is further, I think, the product of thousands, tens of thousands, of years of cultural evolution. We now have digital technologies that can affect that culture. So I am not contending that our cognitive architecture is some sort of immutable thing, far from it. But it would suggest that what we should do is, A, not plan around changes that we can’t possibly imagine, but B, maybe more importantly, and I would say this to both of you, not try to push for that future. We should fundamentally be small-c, very small-c, conservative about these things, because we don’t know. I don’t know how much time the cognitive transitions back in Socrates and Aristotle’s day took, but they took place, my strong hunch is, not so much as the product of any deliberate choice, but through a sort of social conversation about the ways in which we should talk to one another. And it was clearly the case that writing things down proved to be valuable in many dimensions. It may prove to be the case that having this tool is very valuable in many dimensions. But let time and experience sort that out, rather than trying to predict it.
What Schools Can Do To Prepare
Diane Tavenner: Super helpful. I love where you’re taking us, which is into actual schools, and I appreciate that you’re saying, let’s talk about what’s actually happening right now. That is where my heart and work always are: in real schools. So given what you’re articulating about what’s actually happening right now in schools, what do schools need to do to mitigate the challenges you just described, to recognize this as a reality that maybe can’t be put back in the box? And I’ll say that with a caveat, because I’m also reading in the last day or two that people are declaring they’ve won the cell phone war and cell phones will be out of schools pretty soon. So maybe you actually believe it’s possible to put it back in the box in schools. But what’s the impact on schools, and what do they do, literally right now, given what you’re saying is already happening?
Ben Riley: Yeah. Great questions, all of them. Thank you for bringing up the cell phone example, because I cite that often, even before there was this wave, at the international level, national level, state by state, district by district, of suddenly saying: these tools of distraction aren’t great for the experience of going to school and concentrating on what the teacher is trying to impart through the act of teaching. So we can take control of this. It’s not easy, but nothing is inevitable. People always say, well, you can’t put it back in the box. Sure, AI will exist, but how do we behave toward it? What ethics and norms do we try to impart around it? These are all choices we get to make.

I like a phrase I’m borrowing from someone named Josh Brake, who’s a professor at Harvey Mudd. He has a wonderful Substack called, I think it’s just, The Absent-Minded Professor, and he writes a lot about AI in education. His phrase is: you have to engage with it, but that doesn’t mean integrate it. And Diane, you kept saying schools; I just think it’s teachers, educators, who need to engage with it. That can still mean that the answer, after you engage with it, is: no, not for me, and also, no, not for my students. I think that’s a perfectly acceptable thing to say. And look, maybe the students won’t follow it, but you’ve done what you can, and that is all you can do. There’s a teacher out there I’m desperately trying to get in touch with, Chanea Bond; she teaches here in Texas. She made waves on Twitter a while back by saying: look, I’ve just banned it for my kids, because it’s not good for their thinking. People were like, what? And she was like, yeah, no, it’s not good; it’s interfering with their thinking, so I’ve banned it. That’s a perfectly reasonable answer.

I also think that once you start to understand it at a basic level, and I’m not talking about getting a PhD in backpropagation and artificial neural networks, just starting to understand it, you’ll start to understand why it’s actually quite untrustworthy and fallible, and that if you think everything it’s telling you is going to be accurate, you have another think coming. One of the things in the workshops that I’ve led that I’ve been very satisfied by is when people come out the other side and say: yeah, okay, so this thing isn’t reasoning, and it’s not an all-knowing oracle. Once you have that knowledge, once you’ve demystified it a bit, I think it gets a lot easier to grapple with it and make your own choices and decisions about how you want to use it. I will say that right now, in the education discourse, things are way out of balance between the hype and enthusiasm on one side and the, hey, pump the brakes, or at least, have you thought about this, on the other. And if you’ll forgive me, again, you know, it’s a free resource.
If you go to cognitiveresonance.net, we’ve put out a document called The Education Hazards of Generative AI, which tries, in very bite-size and hopefully accessible form, to say: here are the things you really need to think about, and some cautionary notes, across a number of dimensions, whether you’re using it for tutoring, for material creation, or for feedback on student work. There’s a lot you need to be thinking about and aware of. One of the things that frustrates me is that I see a lot of enthusiasts, and this ranges from nonprofits to the companies that make these tools, saying: well, teachers, fundamentally, it all falls to you. If this thing is not factual, or it hallucinates, it’s your job to fact-check it. And it’s like: come on. A, that’s never going to happen, and B, it’s not fair to put that on educators and just wipe your hands clean. So I do think this is something we’re still going to have to sort through at a societal level, as well as within schools and at the level of individual teachers. And ultimately, students are going to have to exercise some agency themselves about the choices they make around whether and how to use it at all.
What We’re Reading and Watching
Diane Tavenner: I’m so appreciative of this idea of agency. That’s certainly a place I’ve always been, and it’s core to my values and beliefs as an educator: the importance of agency, not only for educators but for young people themselves. I love that. This is such a rich conversation, and we could go on and on, but I feel like maybe we leave it there: real people, real teachers, real students, real agency. I’m so grateful for everything you brought up; there’s so much to think about. And now we’re gonna pester you for one last thought. Michael and I have this ritual where, at the end of every episode, we share what we’ve been reading, watching, or listening to. We try to push ourselves to go outside of our day jobs, and sometimes we seep back into the work because it’s so compelling. So we want to invite you, if you have thoughts for us, to share them.
Ben Riley: So I told you I had a weird one for you here. I was just in New Orleans, and when I was in high school, for reasons that I won’t go into in detail here, my family got really into the Kennedy assassination, and the movie JFK by Oliver Stone came out. I don’t know whether either of you have watched that film in a long time. It’s an incredible movie. It’s also filled with lies and untruths, much like a large language...
Michael Horn: I think we watched it in high school, but keep talking.
Ben Riley: Yeah. Yeah. Well, the reason I bring it up is because Lee Harvey Oswald lived in New Orleans in the summer of 1963, and that movie is based on the case that was brought by the New Orleans district attorney, a guy named Jim Garrison. There are a bunch of real-life people who are in that movie, or portrayed in that movie. And I just started to think about accidents of history, where you could be, you know, a person of relative obscurity, as far as anyone broadly paying attention to your life, and all of a sudden something happens and you become this focus of study. And trust me when I tell you that every single person who had any connection with Lee Harvey Oswald in his life has become an object of study, and books have been written. So what I’m trying to do, and this is very bizarre, I know, is think about and understand what it is like for people in that situation. What it is like to suddenly have your story told and no longer have control of it. And this isn’t supposed to be work related, but in a way I think it does connect back, because it goes to the fact that these tools are taking a lot of human-created knowledge and sort of reappropriating it for their own purposes. We haven’t touched on that, and I don’t think we need to now. But there are a lot of artists who feel a profound sense of loss because of what’s happening in our society today. That’s another thing worth thinking about.
Diane Tavenner: Wow, you’re right. I didn’t see that one coming, but it’s fascinating. Thank you for sharing it. I, unfortunately, am not going to stray from work today; I can’t help myself. Three of my very good friends, Aylon Samouha, Jeff Wetzler, and Jenee Henry Wood, have recently released a book called Extraordinary Learning for All. It’s really the story of how they work closely with communities, in a really profound and inclusive way, on the design of their schools. I’m deep in that, I’ve been involved in that work for a long time, and I think it’s a really powerful inspiration-slash-how-to guide for how communities can take agency over their schools, own them, figure out what they want, what matters, and what they need, and design accordingly.
Michael Horn: I was gonna say, Jeff has now appeared twice in a row in our book recs, I think, on episodes or something like that. So love that. Diane, I’ll wrap up by going completely outside of the conversation today, though, Ben, you may say it actually relates as well, because I’ve been binging season two of Shrinking. I loved season one, and season two, with the exception of a couple episodes in the middle, has been no exception. I’m really, really enjoying it so far. And I suppose you could connect that back to...
Ben Riley: What is Shrinking? I don’t know what it is.
Michael Horn: Okay, it’s basically about three therapists in a practice, one of whom is grappling with a deep personal tragedy. And Harrison Ford is outrageously hilarious. Yeah.
Diane Tavenner: So good. It’s so good. Okay, well, I’m gonna tag on to your out-of-work one and say yes, we love Shrinking as well.
Michael Horn: Perfect. Perfect. All right, we’ll leave it there. Ben, huge thanks for joining us. And for all of you tuning in, huge thanks for listening. We look forward to your thoughts and comments on this conversation and to continuing to learn together. Thank you, as always, for joining us on Class Disrupted.