Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on Apple Podcasts, Google Play or Stitcher.
On this episode, Diane and Michael welcome guest Julia Freeland Fisher, the director of education research at the Clayton Christensen Institute. The conversation explores the potential and challenges AI presents in the educational landscape. Julia shares her insights on the importance of using AI to enhance personalized learning experiences and facilitate real-world connections for students. She also voices her concerns about AI's impact on human connection, emphasizing the risk of AI replacing genuine interpersonal relationships.
Listen to the episode below. A full transcript follows.
Diane Tavenner: Hey there, I’m Diane, and what you’re about to hear is a conversation Michael and I recorded with our guest Julia Freeland Fisher as part of our series exploring the potential impact of AI in education. This is where we’re interviewing optimists and skeptics, and I really enjoy talking with Julia and keep thinking about a few key ideas from the conversation. First, Julia’s expertise related to social networks gives her a really important perspective on AI and the potential for it to either harm or help with social networking, which is such a critical factor in career and life opportunities for young people. She was really compelling in talking about how real experiences matter, and I think you’re going to enjoy listening to her talk about how using AI to create what she calls infrastructure in digital experiences could enable young people to build social networks. Infrastructure is in contrast to sort of chatbots or agents, which are a really different experience. The conversation caused me to deeply reflect on my own social network, how I created it, and how I use it, and how complex it is. And at the same time, I’m thinking a lot about a handful of young people I know and what their social network is currently, and how AI may or may not be interrupting them building the social networks that they need and will depend on in the future. And then I’m also thinking about that for me and my age and stage, and what does that mean? It’s been a fascinating rabbit hole that I’m really hopeful will yield some positive impacts on the product I’m building in the future, and how my behaviors as the leader of a company sort of evolve and respond to this moment in time. All of that to say, I truly cannot wait to thoroughly think through all of these ideas with Michael, but until then, I think you’ll really enjoy this conversation we had with Julia.
Diane Tavenner: Hey, Michael.
Michael Horn: Hey, Diane. Good to see you.
Diane Tavenner: You too. So, Michael, since we started this miniseries on AI and have begun interviewing all these really interesting people, I've started to notice AI literally everywhere in my life. And so I remembered something about this from my psychology days, and I had a conversation with, yes, GPT about this to try to make sense of what's going on. And it turns out there is a particular psychological phenomenon that is going on here. I think I'm going to pronounce this correctly: it's the Baader-Meinhof phenomenon. It's also known as the frequency illusion.
And basically what happens is when you learn something new, you think about it, you focus on it, and then you start noticing it everywhere. And this is the result of two cognitive biases. So the first one is selective attention bias, where your brain is now sort of primed to notice the thing you’ve been thinking about, so you pay more attention to it in your environment. And then the second is confirmation bias. You know, once you notice it repeatedly, you interpret this as evidence that it’s suddenly more common, even though its frequency hasn’t actually changed. I don’t know about you, I find this fascinating how our brain sort of filters and amplifies information based on what we’re focused on. And so, yeah, that’s happening to me. Just to illustrate my confirmation bias.
I'm actually going to say out loud that I do think AI is everywhere. And I'm betting the person we are about to talk to today might feel the same, because a lot of her recent work is on AI.
Michael Horn: Well, I think you have nailed the lead in, Diane. That's a perfect segue as well for today's guest, who I suspect is nodding along, excited to have her: Julia Freeland Fisher. I'll say up front, I'm really excited about this one because Julia has been a longtime colleague and friend of mine. I hired her at the Christensen Institute as a research fellow. And then when I left full time, just about a decade ago, she stepped in as my successor. It was like a version 2.5 or 3.0 or something like that. We jumped ahead several generations.
So it was terrific. And I couldn't be more thrilled, frankly, about the work that she's done since, because she's really elevated the important topic of social capital into the education conversation. She's, frankly, taught me a ton along the way. The book she wrote a few years ago, if you want to sort of catch up on her work, is Who You Know. But most recently, she published some really interesting research about AI in education titled Navigation and Guidance in the Age of AI, which we'll link to in the show notes. I'm sure we're going to get into that and much more. But first, Julia, before we do that, just welcome to Class Disrupted. Great to see you.
Julia Freeland Fisher: Thank you. So honored to be here with both of you.
Michael Horn: Well, we hope you'll still feel that way by the end. But before we get into a series of questions we have for you, actually, let's table set a little bit and share with the audience: how did you get so deep into this topic of AI itself? Because, as we said, you've been researching social capital for several years now in education. You've thought a lot about the role of technology in that equation, clearly. And you've thought a lot about how schools perhaps should redesign themselves to become more permeable, if you will, to the outside world. But why AI, and what's been the scope of your research around it?
Reimagining EdTech for Human Connectivity
Julia Freeland Fisher: Yeah, absolutely. So I think, you know, historically I was sort of obsessed with the concept of, and I'm putting this in air quotes, edtech that connects. I've been really disheartened, but still optimistic that there's a long runway of innovation if we were to start to think about education technology not just in service of content delivery or productivity or assessment, but also in service of connection: that young people could overcome the boundaries of their existing networks, that they could connect with peers and professionals who shared their interests, that there's just so much possibility if we started to do in the classroom what many of us do in our working lives, using technology to connect across time and space. So I've been studying that for a long time, and it has been a small but mighty market, certainly not something that has grown significantly, and that has made me painfully aware of just how much the ed tech market ignores connection as part of the value proposition of school. And so enter AI, and we'll get into this more. But you know, for all of its sort of fantastic productivity upside and intelligence, the piece of AI that I've been paying attention to is the tendency to anthropomorphize it and to make it human-like, to make it capable of mimicking human emotion and empathy and conversation. Because what I see unfolding, and this is not inevitable, it just has to do with how the market absorbs it, is a true possibility of disrupting human connection as we know it, because we don't value it to the level the market ought to.
And because the technology has suddenly taken this dramatic turn towards human-like behavior, affect, tone, etc. So I'm just fascinated by that. And I want those of us inside of education, I want parents, to be awake to this dimension of the technology that, like, was maybe lurking but wasn't really dominant in the edtech old days, the sort of version one of ed tech, where we weren't giving these tools the same sort of voice and emotion that I'm seeing now. So that's a little bit of it. But I want to, you know, at various conferences I've been labeled a pessimist and a doomer. I really want to come to this conversation as a realist. Like, I work for the Clayton Christensen Institute for Disruptive Innovation. I am not anti-technology. I am worried about the market conditions inside of which the technology is evolving.
Diane Tavenner: Well, Julia, I'm so glad we started there, to, like, just ground everyone in the work you're doing and how you think about it. And I'm going to give you your moment to sort of be the realist. Let's start with just inviting you to sort of make the steel man argument in favor of AI in education. Like, in your mind, what's the best possible scenario for AI in education from your perspective, given your work, you know, even as a mother? You know, like, what's the best possible outcome we could reach?
Rethinking Personalized Learning Potential
Julia Freeland Fisher: Yeah, I want to first just describe how surreal it is to have Michael B. Horn and Diane Tavenner asking me that question. Like, I'm chatting with two luminaries that I've learned so much from in thinking about the potential of tech to really personalize learning. And I know that term gets overused and now it's maybe out of fashion, but, like, it's a little absurd that I would be providing an answer to you on that. But here I go anyways. I think, just quickly, the key is the potential to scale a system of personalized content, experiences and support, and thinking about those three things actually as kind of separate strands or value propositions. The adaptive content and assessment piece may be the most obvious, the most familiar sort of evolution on top of how we've talked about ed tech in the past. But I'm actually probably equally or more excited about the possibility of seamless infrastructure to support a mastery based system that also gets students connected to new people and learning in the real world.
And it's infrastructure doing that. It's not the AI talking to the student that's doing that. And I'm not sure how much investment we're seeing there; you guys may know more than I do. But that's kind of my vision. The more time I spend with the tech, the more I see how much of that could actually be feasible in a way that felt out of reach even 10 years ago. I think we all had sort of dreams of that, but the tech was a little bit clunky. You know, it could create a pathway, but the idea of flexible pathways that actually were adaptive in real world contexts felt a little more out of reach.
Diane Tavenner: So let's stay here for just a minute, Julia, because I want to make sure people really understand what you're saying by infrastructure. We've had dialogue around this, and by the way, I'm working on it, you know, I'm working on it. Got one person in your corner. We're getting closer and closer. But, like, we've had a bunch of conversations about sort of chatbots or agents or things like that. And when you're talking infrastructure, that's kind of in contrast to the experience that I think most people are having right now. So just illuminate that a little bit for us. Like, make it so everyone can visualize what you mean.
Julia Freeland Fisher: Yeah. Let me name two pieces of infrastructure, one of which I know Michael has featured in some of his work, and another of which I'm not sure if you guys have talked about. So one is a tool called Protopia. It's used in higher education; its founder is Max Leisten, I believe. And the tool that Max has built, you know, he partners with alumni engagement offices. And the way the tool works is students can go onto their career services website and ask a question, and based on the content of the question, Max's tool will comb through the alumni directory of that school, find the alum who is best suited to answer the question, and email them directly. There's not a clunky app that you have to go through. If they answer that student's question, fine. If not, it will go to the next best alum to answer the question.
So that's infrastructure. It's behind the scenes, it's facilitating an opportunity for learning. And in this case, obviously, I'm highlighting it because it's facilitating connection as well. But it's sort of doing the behind the scenes manual work that is not, like, high quality human work, but is necessary if you want a system where students are moving beyond just a singular predetermined path and actually having opportunities or conversations beyond it. The other one I want to highlight, which I actually think is illustrative of why this is exciting and also why I'm, like, a little bit getting labeled the doomer, is a tool called Project LEO that spun out of Da Vinci Schools in Los Angeles. And it's designed to create bespoke individualized projects aligned with the principles of project based learning, based on students' interests and their, like, ikigai, which is that Japanese Venn diagram thing.
What was so exciting in the initial version of this tool is that not only did students get a personalized project aligned to their interests, one that also aligned to the teacher's sort of core content that they were trying to hit on, but it would also connect them to a working professional who would give feedback on their project. Now, as they've rolled out the product, the demand, or the willingness to pay, for that last feature has been quite limited. So it's not currently sort of part of the main product. And I say that to say, like, infrastructure for project based learning, that's exciting to me. Right? It's been perennially hard to scale project based learning that's interest based. Diane, this is, like, again, absurd for me to explain this to you, but that's really exciting, right, that it doesn't sit on a teacher's desk to have to create 25 unique projects.
I would like, though, to see the market mature in a way where demand for that last mile connection out to the real world is also there, and people are willing to pay a premium for real world experience. So those are just two examples of, like, the behind the scenes creation of stuff that students then do. It's not necessarily a student facing adaptive tool, which I'm not totally down on; like, I think there's a place for that. But that's the infrastructure conversation.
Diane Tavenner: Super helpful.
AI: Pessimistic and Realistic Concerns
Michael Horn: Yeah, yeah. So Julia, you've painted that picture of what could be, and frankly a layer of AI that's much more invisible, I think, facilitating these sorts of interactions, experiences, connections and so forth. I'd love you to now take the flip side. And you said you've been labeled a pessimist, so maybe it's, I was gonna say, give us the skeptical take, but maybe it's the realistic take. But let me ask it in this way, a little bit more directed, because we want this part of the conversation to be about what you fear AI is going to hurt, and how. And although I'm sure you could also offer, like, a real, you know, sort of steel man argument here as well, I think that your research has a lot to say around what you're seeing and what implications might mean that we ought to be wary, or at least on guard, right now.
Julia Freeland Fisher: Yeah. So there's two things I want to name here, one of which I could go on and on about, which is human connection. So let me say the first one briefly, which is that I'm worried about AI harming the concept of experiential learning. And then we'll get to human connection. The concept of experiential learning is so exciting to me. It's what I want for my kids. It's what I want for all kids. And as much as I think I just described two examples of infrastructure that could get us there, I think the market is much bigger for simulated experiences than actual experiences.
And I think a lot of the hype around AI is, like, these bots can simulate anything. They can be anyone. You can be pretending to talk to fill in the blank. And yes, that may be a context to develop skills in a more applied way, but it's not real experience. And I'm worried about that for two reasons. One, I think that you run the risk of young people becoming accustomed to sort of synthetic interaction. But two, if you look at what employers are demanding of entry level work, it is experience, not just skills. And Ryan Craig has written a lot about this, the experience gap.
As AI actually chips away at entry level work, higher ed needs to step in and actually prepare students in new ways. But the piece of that I think we're not paying attention to in the education conversation is that it actually requires true experiential learning, not just simulated skills, not sort of performance tasks. And at least from what I'm seeing, and Diane, I'm right where you were at the beginning of the episode, like, I'm just reading all of this stuff through my little doomer lens now. But I just think there's so much more hype, partly because employers are willing to pay for, like, simulation experience stuff in the L&D market. There's much more hype around simulation than around what it would take to scale true experiential learning, by which I mean learning skills in an applied context with other humans. Yeah, so that's my number one.
But now that was, like, not my real rant. My real rant is, I actually think, Michael, something you've probably thought more about than I have.
Michael Horn: So yeah, let’s hear number two then.
Threat to Human Connection
Julia Freeland Fisher: Okay, so number two, what I think it could hurt is human connection. And I want to put this in the context of what I said initially around bots being anthropomorphized. And this is happening across many different pockets of both the consumer and ed tech markets. I think we should be way more worried about the consumer applications. So we're talking here about romantic companion apps like Replika and Character AI, where people in general, and young people included, are being drawn into parasocial relationships with bots that emulate, and can even exceed, sort of human behaviors in meeting those users' emotional needs. That is emerging against the backdrop of a long standing loneliness epidemic, which is a lagging indicator of our underinvestment in human connection. And inside of schools, it's emerging against the backdrop of what I have observed over the past decade of my research: a lot of sentiment about relationships, but very little strategy, very few metrics guiding whether students are actually connected, very little budget dedicated to human connection, again, as a value proposition in its own right. And so it's really, and Michael taught me this, right, Michael taught me disruptive innovation theory.
It is a classic disruption story in that loneliness is providing a foothold in the market for these bots to take hold. And there is very little stopping their upward march in the market. There is very little to hinder their growth, because we as a society have basically said, go get less lonely on your own, like, go solve this loneliness thing by yourself. Which is ironic at best and really dangerous at worst. So that's my big concern. Again, I don't think ed tech is going to be the straw that breaks the camel's back. Like, if we asked what technology most affected young people's lives over the last 20 years, I'm sure some of our colleagues would like it to be, like, Khan Academy, but I think many of us would agree, like, no, it was commercial tech.
Michael Horn: Yeah, sure, yeah. In particular. Well, so let's stay on that, because I think you've raised two very interesting challenges, and the consumer piece. I mean, we also know from schools right now that frankly what plays out in the consumer space impacts how engaged teens are and so forth.
AI’s Impact on Human Learning
Michael Horn: In the school experience as well. So I think something that has been on both Diane's and my minds around the AI conversation is, of what AI hurts, what will still be relevant, if you will, in the future. Right. And how much is this about replacing outdated structures? I'm going to guess that you think real human relationships and social capital and the like will still be important in the future. I'm hoping you're going to tell me that, but I guess I'd love you to play with this theme a little bit and get a little bit more nuanced. So take the experiential learning piece, right? If we're offering simulations as an entry point to give someone information of, hey, is this something you want to explore more, as an entry point to then get something different, you know, is that a bad thing? Or, like, where's the slippery slope? And where is it really chipping away at something that's fundamentally what makes us human and that we ought to really be concerned about handing over to AI?
Julia Freeland Fisher: Yeah, totally. I mean, let's look at the upsides real quick, both on the experiential and the human connection fronts. Like, on the experiential front, these simulations are a way to scale practice, which we know, again, we use the shorthand of skills, but actually we should always be talking about skills and practice. And so I don't want to claim that, like, simulated practice is a bad thing. It's great cross training for, like, developing skills. I think I just worry that the market is so blunt that it treats that as the outcome of interest, versus applied skills plus human connections. On the human connection front, you know, I've been looking at the navigation and guidance space, and there's really two stories emerging. On the one hand, we have the potential to disrupt the social capital advantage that has perpetuated opportunity gaps, by giving students from all sorts of backgrounds access to resources, information and guidance that otherwise often travels through inherited networks.
So that's huge, right? Like, democratizing access to information and advice is not something that we should devalue in some sentimental name of, like, preserving human connection. The slippery slope piece of it, though, right, is that what I found in my research, at least based on our interviews with the supply side, is that the demand side really treats navigation and guidance as an information gap, not a connection gap. And we know that an estimated half of jobs and internships come through personal connections. So if you just use AI to solve the information gap piece, you're not doing the last mile work of actually addressing opportunity gaps. You're improving things, sort of; it's like a rising tide lifts all boats, but the gaps are still going to be there if you don't get the social connection piece right.
So that’s where I’m very wary of these like self help bots that, you know, tout democratizing access and opportunity but are actually sending the wrong message to young people about just how social the opportunity equation in America is.
Diane Tavenner: Yeah. Oh, I could not agree more. Literally. Okay, let’s, let’s take a little bit of a turn here, Julia, you probably can guess this if you don’t know it. One of the things I do for fun in my spare time is imagine the designs of new schools that I would be excited about teaching in or my child would be excited to go to. And so let’s go there for a minute. Like if you had a magic wand, you could design the school to look any way you wanted to, presumably using this new technology we have.
What parts of AI could you take advantage of, and, you know, what would you avoid because it's not going to work well? And, like, what would that actually look like in a school?
Julia Freeland Fisher: Yeah, again, maybe I'll stick with the relationship theme, partly because I'm, like, Diane, you just tell me your answer and I'll copy it, as, like, the school designer in this conversation. And there's a lot of people in the field who I trust more to sort of think about the, like, whole school design. But when I think about, like, how do we design a deeply connected school experience for young people in the age of AI, I think there's three kind of main things I'm looking at, and most of them are infrastructure, just to be clear. One is infrastructure to support high touch webs of support for each and every kid. It's very clear in the youth development literature that young people don't just need one caring adult, even though for some reason people grabbed onto that term and it has stuck.
Young people need webs of support, and those webs are most effective when the people in them are connected to one another. This is research from John Zaff and Shannon Varga at BU. It's informed really great models like City Connects and BARR, but those are expensive to run, and the data systems to actually make them highly responsive, and even predictive of what a young person needs, just, like, don't really exist. So that's number one, high touch webs of support. The second, though, is more diversified networks aligned with students' interests. And that's what we found in our own evaluations of, particularly, career connected learning efforts at the high school level that are trying to expand students' options. Young people were least likely to report that they were connected to people who shared their interests. And so I think there's a ton of opportunity there, again, to, like, use AI to detect young people's interests, to,
Conversations and Confidence in Networking
Julia Freeland Fisher: Michael, to your point, to do some front end exploration of, like, future possible selves. Diane, I know you're thinking a ton about this, but then to build the middleware so that you are starting to have conversations with people who share those interests. And maybe the best unit to think about there is conversations, not relationships. These don't have to be long lasting connections necessarily. But how is the high school experience a constant stream of conversations with other humans? And then lastly, you know, there is one place I'm interested in these self help bots, and I know I'm giving them that sort of derisive term, and it's on purpose; I think we need to be wary of them. But something we see time and again when it comes to building and deepening and diversifying young people's networks is that confidence is really the moderating variable. You can teach young people communication skills. You can do these kinds of surface level, here's how to write a professional email things. But confidence makes or breaks whether they go out and mobilize networks on their own, whether they even start having new types of conversations with people they already know.
And I do think that's, like, a little wedge in the system where these self help bots could make a difference. A couple providers playing in that space now: Climb Together, Kindred, Backers. These are all sort of startups that I think are keying into, like, what if AI could de-risk help seeking or reaching out, which for an adolescent can be, like, so daunting. So those are a couple thoughts, with those pieces in the background, so that high school, and I'm thinking mostly of high school, is, like, an inherently networked experience. It's not just if you are outgoing or wear your ambitions on your sleeve or do an independent study, but for every student.
Diane Tavenner: Yeah, that's so fascinating. You know, quick personal anecdote here: I'm stunned at how reluctant sort of the younger generation is to ever make a phone call. Literally, they don't call people. It's not a thing. And you know, my son worked on the campaign, the presidential campaign, and he had a quota of 175 phone calls a day. And he actually thinks, and I agree with him, that this is one of his greatest skill sets now, after, like, month after month of doing that. That ability to just talk to people is so missing in our world right now in that generation. So that really resonates with me.
Let's do one more, if you're okay. I'd love to zoom out, because I know, given the work that you do, you're influencing people, how they're thinking about policy and procedure and, you know, all of those things. Like, what's on your mind in this moment in time? What are you telling people that they should be looking at, thinking about, you know, wary of, promoting, in terms of policy, procedure, and, you know, you pick the level, whatever.
Julia Freeland Fisher: Yeah, well, I'll riff on your last point about your son to answer that initially, Diane, which is something that came out in our research time and again. And this was talking to founders like yourself who are incredibly thoughtful about the design of their products and services. And time and again, and you were not one of them, Diane, because you are not pro chatbot, at least in what you're currently building. But time and again, folks would bring up, and again, this is in the guidance and advising space, you know, sometimes students would rather tell a chatbot something than a human. And it's a safe space, and it's a place where there's less risk involved.
Exploring Student Reliance on Chatbots
Julia Freeland Fisher: And I came away from that research being like, is that a feature or a bug? Like, how are we internalizing the fact that students don't want to talk to humans? And what is that a reflection of? And so I think that's number one. Like, what I hope at this sort of ecosystem level is that people start thinking, if students want to be talking to chatbots like that, let's actually interrogate that a little bit more. I think the second piece is around really starting to come up with language, and some markers, of what I'm calling pro social technology. So again, I don't think AI is inevitably going to disrupt human connection. But if bots are not trained to nudge students into the real world offline, if bots are actually trained to keep students engaged, if consumer tech, right, is making money on engagement, that is all moving in an antisocial direction. And I just think we need more language around that. Because, like, I was in an off the record chat with someone who recently left one of the big AI companies. And, you know, everyone's worried about, like, national security and China and things that I know I should also be worried about, while I'm, like, lying awake about AI companions.
But, you know, I said to him, like, what about the fact that these are being anthropomorphized and, like, encroaching on what we sort of hold dear as human? He was like, yeah, everyone working in industry is, like, creeped out by that, but has no idea what to do about it. And that was revealing, right? There's a real prisoner's dilemma here. Like, there's a creep factor, but it's, like, bullet seven on slide four. No one's really as worried about it as I think we should be. So that's number two. And then the last thing is really much more parent facing.
Like, I think whether or not you agree with the, like, moral panic, Jonathan Haidt stuff around cell phones over the past year, he's tapped into parent anxiety that, I'm like, is the right anxiety in some ways, around screen time and addiction. But, like, we're not even talking about what's coming. And, you know, if you think social media was designed to appeal to our deeply wired need to connect, AI companions are that on steroids. And so, I am not myself, like, a parent organizer; that's not who I was born to be, though I wish it was. But I'm hoping that there will be more conversations around parent organizing that, just, like, doesn't create barriers to innovation. This is the tightrope we need to walk, right: not shutting down the tech, but being super aware that, like, we have seen this movie before.
Michael Horn: Yeah.
Julia Freeland Fisher: So those are my big three.
Diane Tavenner: Well, I got carried away there, Michael. Any other questions you want to ask before I take.
Michael Horn: I think we asked the right questions. This has been fascinating.
Diane Tavenner: Okay, good. Yeah, I couldn’t help myself. I so appreciate your thoughts, Julia. And we’re going to ask you for one more. So we always invite our guests to join in our sort of end-of-show ritual, where we share what we’re reading, listening to, and watching. You know, we try to do it outside of work, but we often, you know, regress back into work. But we’d love to hear what’s been up for you lately?
Julia Freeland Fisher: Yeah, so I just finished this, like, breathtakingly beautiful book called Nobody Will Tell You This But Me by Bess Kalb. It’s a memoir about her grandmother, and it’s done really beautifully. It’s written as if her grandmother is talking to her. Like, the form she chose is just stunning. And yeah, intergenerational connection is, like, one of the most beautiful things, and it was beautifully done. And I was actually thinking about it when I was.
And then I’ll stop talking, I promise. But I was listening to you guys’ last episode on AI, and you were talking about NotebookLM. And, like, putting a chapter of a book into that, and just how much of the texture, the brilliance of what she did, would be lost listening to these, like, TED Talk-adjacent fake voices riffing on it. And, like, our kids deserve to live in nuance and to detect it. And, like, how do we. Anyways, that book in particular is just such a beautiful, like, only-a-human-could-have-written-it book. And I know all sorts of people in Silicon Valley will debate me on that, but. Highly recommend.
Diane Tavenner: Yeah, for sure. I love that recommendation. I’m working on planning a dinner called Generations Over Dinner and so that might be a fun.
Julia Freeland Fisher: Oh, my gosh, check it out. It’s beautiful.
Diane Tavenner: So I might add that in there. I will. Okay, what’s up for me right now? Well, I’m gonna stick with my biases that I introduced at the top of the show and say that we just finished the second season of Foundation, which is a series on, I forget, one of those. I don’t know. It’s on something. Anyway, it’s based on the writings of Isaac Asimov. You can tell how good I am at TV. Not very.
And yes, one of the big plot lines is all about AI. There’s no doubt about it. And so I’m seeing that literally everywhere. I will say it’s for me, having not read the books, unlike my kiddos, it’s a little bit hard. It’s a lot going on there. It’s hard to follow. I don’t remember everything. I was glad I had some guides, human, actual guides, sort of coaching me through it, and it came together for me at the end and felt worthwhile. So it’s certainly beautifully done and well acted and. And all of that. How about you, Michael?
Michael Horn: This may be my entree, Diane, into it, because I’ve struggled with the books. Sal Khan has actually tried on a couple of occasions, and I just cannot get into them. So I like that. I will also stay with biases, but on a totally different front. I feel like I’m going to stereotype myself here, or everyone listening is going to be like, yep, that’s Michael. So I just recently finished The Master: The Long Run and Beautiful Game of Roger Federer by Christopher Clarey. My tennis fandom, I think, has continually come out on this podcast recently. Beautifully organized book. Really enjoyed it. I will say, there are Rafael Nadal people and there are Roger Federer people. I’m a Nadal, Pete Sampras sort of vintage person. But I was really glad I read the book, gained a deeper appreciation of Federer, and frankly picked up some tips that I wish I had known much earlier in my professional career, from practices he would employ with everyone around the tournaments he showed up at, not the playing itself, which was not something I expected. So we can take that offline later. But it’s all about relationships, it turns out.
Diane Tavenner: So you have me curious now. I wasn’t expecting to be curious afterwards.
Michael Horn: But it’s all about relationships. It comes back to Julia’s thesis. And with that, a thank you, Julia, for joining us and taking us through this fascinating conversation that we’re going to be reflecting on for a while, I know. And thank you to all of you, our listeners. And we’ll see you next time on Class Disrupted.