Shaping the future before it shapes us

I’ve worked closely with colleagues in Silicon Valley throughout my career. These interactions always surface new ideas, and the level of confidence in predictions typically starts strong and only gets stronger. This time felt different. During a visit to Silicon Valley last week, I repeatedly heard the following as a preface to a prediction, and I can’t say I’ve ever heard it before from my most techno-optimistic colleagues: “I could be wrong, but …”

A few innocent words, but a rhetorical hedge that suggests even the most confident among us understand that the AI era is pretty, pretty complicated.

I was there to attend the Annual AI+Education Summit 2025, hosted by Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) and the Stanford Accelerator for Learning. The theme—Human-Centered AI for a Thriving Learning Ecosystem—framed discussions that were both urgent and inspiring. AI is not just on the horizon; it is actively reshaping the educational landscape. Our responsibility is to ensure this transformation augments human potential rather than diminishes it.

The summit brought together leading researchers, educators and policymakers to explore AI’s role in personalizing learning, empowering educators and bridging educational divides. The pace of change is staggering—today, half of students use AI tools at least weekly, both inside and outside the classroom. Institutions must act now to shape AI’s role in education intentionally rather than reactively.

The Power of Collective Action in Higher Education

One of the key messages from the summit was that no single institution, company, innovator or researcher can tackle this challenge alone. A coordinated effort across higher education is essential to ensure AI serves students, faculty and society in equitable and effective ways.

At the University of Michigan, we have seen firsthand how faculty innovators are experimenting with generative AI to enhance teaching and learning. Our most recent call for proposals at the Center for Academic Innovation resulted in a diverse set of AI-enhanced teaching and learning projects designed to explore AI’s potential across disciplines, from medical education to humanities. These projects demonstrate not only how AI can enrich classroom experiences but also how it can deepen engagement, personalize learning and extend human creativity. We are helping faculty translate emerging technologies into meaningful applications, creating impactful learning experiences on campus and beyond.

Organizations like U-M’s Center for Academic Innovation and Stanford’s HAI and the Stanford Accelerator for Learning play a critical role in leading this work—through experimentation, research and convening communities of practice. Without spaces to explore AI’s potential responsibly, without research to test its effectiveness and without convenings to align efforts, the future of AI in education would be left to chance rather than deliberate innovation.

Michigan’s work is part of a broader movement. Across higher education, institutions are launching AI-driven initiatives to explore the role of AI in teaching, learning and research. One example is the California State University system, which recently announced a partnership with OpenAI to explore AI’s potential across its 23 campuses. This initiative, like many others, underscores the need for systemwide efforts to develop responsible and scalable AI solutions.

These efforts—faculty-led experiments at Michigan, large-scale system initiatives like CSU’s, and global convenings like Stanford’s AI+Education Summit—demonstrate the range of approaches to AI in education. Stanford’s summit, in particular, highlighted outstanding faculty-led experiments exploring AI’s role in augmenting learning, fostering creativity and addressing challenges in equitable access to technology. These initiatives reinforce the importance of institutional collaboration in shaping the future of AI in education. But the big question remains: How do we shape AI’s role in education to serve our preferred future rather than react to an imposed one?

5 Key Takeaways From the AI+Education Summit

  1. AI is transforming education, but its role must be purposeful.

AI is already reshaping how students learn and how educators teach. We must ensure AI serves as a tool for augmentation rather than automation. How do we steer away from optimizing automation and toward optimizing AI’s ability to augment human creativity, problem-solving and collaboration?

  2. Faculty innovation is leading the way—with institutional support.

Some of the most compelling AI applications in education are emerging from faculty-led experimentation. Universities must create conditions for responsible innovation by investing in faculty training, providing resources for experimentation and developing ethical frameworks that support AI integration while prioritizing student learning. We need to understand what’s working for whom and be ready to quickly invest further in the most impactful efforts.

  3. AI ethics and governance must be at the forefront.

AI’s potential to amplify biases and exacerbate inequities is well documented. Institutions must focus on governance, transparency and bias mitigation to ensure AI benefits all learners. Without clear institutional leadership, regulation will fill the void. Can we build governance frameworks that protect learners and help them to flourish while also fostering innovation and global competitiveness and security?

  4. AI literacy is urgent—but we lack consensus on what it means.

There is universal agreement that students, educators and institutions need to accelerate AI literacy. However, what constitutes AI literacy remains unclear. Should AI literacy be about technical proficiency? Ethical responsibility? Practical applications? Probably all of the above—but the right balance is elusive. I could be wrong, but if we don’t actively shape this now, we may find that AI literacy is defined for us in ways that don’t align with our values. Definitions vary, but there is broad consensus that we need highly accessible and scalable opportunities for anyone to acquire AI literacy—and soon.

  5. We need a shared vision for AI in education.

The AI+Education Summit made it clear that AI’s impact should be shaped by the collective choices of educators, institutions and policymakers. Without a shared vision, the future will be dictated by market forces alone. Speakers at the conference described the future they want to see: one that designs for the widest range of learners to support human flourishing, strengthens the essential relationship between teachers and students, and works for everyone—practically, equitably and responsibly.

Institutions have taken very different approaches to AI: some have banned it, restricting its use until clearer guidelines emerge; others have embraced it, fostering a culture of experimentation and innovation; still others have adopted a wait-and-see stance, uncertain about how AI will ultimately shape higher education. Perhaps all of these strategies have their merits. Or maybe in a few years we’ll look back and realize the most effective approach was something we haven’t even considered yet. I could be wrong—but that’s precisely why we need a wide range of perspectives shaping this conversation now.

Questions for Our Growing AI-in-Education Community

As institutions embrace AI, we should ask ourselves:

  • How can we ensure AI enhances equity and access rather than reinforcing existing disparities?
  • How do we ensure AI supports human creativity and critical thinking rather than replacing them?
  • How do we balance experimentation with the need for institutional policies that safeguard students and educators?
  • What models of collaboration—between institutions, industry and policymakers—can accelerate responsible AI adoption in higher education?
  • How can institutions maintain trust with learners and faculty as AI adoption accelerates?
  • What does a thriving, AI-enhanced learning ecosystem look like in five years? How do we get there?

The AI+Education Summit reinforced that we are not passive observers of AI’s impact on education—we are active participants in shaping its trajectory. The work happening at Stanford, Michigan, CSU and across the broader higher ed community signals a growing recognition that AI is not just another technology to integrate but a transformational force that demands intentionality, collaboration and vision.

Yet, it would be a collective failure if we simply make it easy for students to offload critical thinking. AI must not become a shortcut that undermines the cognitive skills we seek to develop in our learners and citizens.

Now is the time for institutions and individuals to come together, share knowledge and create our preferred future for AI in education. We don’t have all the answers, and some of today’s best ideas may prove incomplete or even misguided. It feels like there is little time for passive observation. AI’s role in education will be defined—either by us or for us. Let’s build the future we prefer—because if we don’t, well … I could be wrong, but I doubt we’ll like the alternative.

James DeVaney is special adviser to the president, associate vice provost for academic innovation and the founding executive director of the Center for Academic Innovation at the University of Michigan.
