Published on November 7, 2025 1:10 AM GMT
This is a continuation of my previous post, and will be discussing the second day targeted towards educators, “Learning in the age of AI.” There were around 70 attendees, near double the previous day, of which it seemed roughly 40% K12 teachers, 30% administrators/curriculum creators and 20% college professors, with a few “others,” a couple students and me.
If the first day of the AI Innovation Summit had one main question (how can attendees use AI to enhance their business), the second day had three.
1. How can teachers use AI to enhance their teaching?
2. How do teachers deal with student use of AI?
3. How does the existence of AI change what students need to be taught?
Of these, only the first has relatively easy answers. That, combined with the more general tendency among teacher-types to equivocate, to say "It's more about asking the right questions than having the right answers," and to seek agreement rather than get into specific metrics, made the whole day feel significantly more open-ended and less "alright, here was the takeaway."[1] It's not my preferred style of engagement, but I was glad to be there and engage. One benefit of the day was that I got to hear more people voice their perspectives on the changes AI is bringing.
Keynote Address: Another Innovative Initiative: AI in Education
This talk was given by two leaders of the Minnesota Generative AI Alliance for Education (MNGAIA), a community organization set to become a nonprofit at the beginning of 2026. The group describes itself as "a coalition dedicated to the ethical and effective integration of AI into education, ensuring that humans remain at the forefront of decision-making and learning."
Their talk is a little difficult to summarize; it jumped around and even included some audience participation, but I'd split it into three main points: 1) AI is a scary new technology, but many now-normal technologies were scary when they were new. 2) What's actually important to us, and does AI change it all that much? 3) Their MNGAIA organization: how it started and what it's done so far.
For their first point, they compared AI to many other technologies, putting it in a line of progress from scribes to the printing press to personal computers, and then AI. Plato worried that the rise of books would lead to forgetfulness, but we're all happy that books exist now. There wasn't much engagement with how AI could or would be qualitatively different from those earlier technologies; they generally kept to the "AI as tool" framing. It's natural for people to feel anxiety about change, but it's also not really up to teachers to stop or significantly affect the change. AI is happening to us and with us, and people should generally try to work within that change rather than against it.
The second point drifted into the "hard to take a concrete position" mode often seen at this conference. The word "human-centered" was thrown around many times: we need to build AIs and systems that serve human values, and we should promote good things like lifelong learning, adaptability, and human relationships. I agree, but there was far too much insisting that we need to make it human-centered and not enough talk about how that actually happens.
As for the third point: they started as a group of teachers and administrators holding a monthly call to discuss AI and related issues, and have grown into a full-fledged organization that hosted its own AI-related summit back in June. They host a discussion forum, have testified in front of the MN Senate, do some research, and offer resources and tools to MN educators. These resources include various policy documents and guiding principles for AI use in the classroom, which I may look into later.
Breakout Session One: The Human Advantage: Equipping Learners for an AI-Enabled World.
This session covered the work being done in one high school, mainly on developing "irreplaceable human skills." The presenter's focus was very much on preparing students to find jobs in the future, and she cited the World Economic Forum's 2025 Future of Jobs Report, which states that while 92 million jobs are likely to be displaced by current trends, 170 million new jobs will be created this decade. The question is how to prepare students for jobs we don't yet know about, to which the presenter's answer was her school's six human-centered competencies: character, communication, citizenship, creativity, collaboration, and critical thinking. These all seem like good things, but they weren't well defined in the time we had, and of course I'm not convinced that humans will continue to be better than AI at these (or even are currently better in some cases). She then had us do an activity where we used AI to help make a short presentation on how to develop one of these skills and then went around the room, which I didn't get much out of. Thus ended the session.
Breakout Session Two: Launching the Institute of Applied AI.
This session was led by MSUM's Institute of Applied AI Executive Director, and essentially explained its reason for existence and its future plans and goals. It's very much "we think AI will shift what skills are valuable, and we need to be a college that can prepare our graduates for the workforce." To do this, they plan to host AI workshops, appoint faculty fellows with expertise in AI, give AI tool demonstrations, do other readiness work, partner with regional companies to see what they're seeking from graduates, and more, including hopefully some sort of micro-credentialing in the future. These all seem like interesting ideas that would be good in a very slow takeoff world. In a world with faster takeoff, I'm skeptical of how much will be relevant, though I'd also expect them to react more quickly to a fast takeoff than a university without such infrastructure. The Institute is still in its infancy, having begun this past spring, and it's only projected to be fully implemented in 2027-2028, but I plan to keep an eye on what it's doing and where it's going.
Breakout Session Three: Supporting and Preparing Students in the Emerging Age of AI.
This session had the most 'meat', with three leaders at MSUM answering real questions about current AI teaching issues. Their first discussion was about how college students are different now than in the past. Whether due to the pandemic, more high school students going to college, or other factors, students are less prepared for a college curriculum than they would have been in the past. As one put it: "students were college-ready, now we have to be student-ready."
Moving on to the job market, they acknowledged that the job market is difficult right now, and they don't know yet what the many jobs created by AI will look like. They see two main paths forward: either students (and others) find ways to upskill and prepare for these new jobs, or they end up un- or underemployed. Colleges, which need to prepare students for jobs to justify their tuition, need to keep up with industry trends and stay closely connected to employers.
Speaking of which, how are employers hiring right now? According to the panel, there's a shift underway from experience- and credential-based hiring to more skill-based hiring. This is one reason behind the Institute's desire to create micro-credential programs: to provide proof of useful skills in job-seeking. Colleges also need to treat possible industry training programs as potential competitors.
However, multiple people on the panel also consider it important that colleges remain more than just job-skills training programs, and that they should focus on developing students' soft skills, harking back to a somewhat traditional view of higher education, though here for the purpose of improving students' ability to find jobs in an uncertain future. I really wonder how you measure or track the development of these soft skills, but that's a question for another day.
Breakout Session Four: Critical AI Literacy Roundtable.
The point of this session was to define "Critical AI Literacy," mostly by contrasting it with "AI literacy" and "technological literacy." My notes for this session weren't the greatest, mostly because it involved more active participation than the others, but also because I didn't find it very helpful. The most interesting part to me was some discussion of the conflict educators feel around AI use. According to one attendee, the students she sees tend to be fairly polarized about using AI: some will use it all the time even when teachers would rather they not, and some don't want to use AI at all, generally for moral or environmental reasons. The question she raised was how to deal with these students. Educators want their students to be able to hold moral positions and stick to them under pressure, but they also want those students to be able to get a good job after graduation, and that may require AI skills. We didn't have a good answer to this. I overall think we should talk to people about their concerns (I think the AI environmental issue is hugely overblown, mostly thanks to the work done by Andy Masley) and do our best to present the situation to them honestly.
Ending Critique:
The main way I thought it was lacking: nobody seems to feel the AGI. People aren't making plans for living with AI that surpasses us in nearly every way; they're figuring out how to make things work with AI in its current state. They see it as a tool and aren't imagining it becoming something other than that. I know it's hard to plan for future eventualities with really high weirdness, but I do think it's important to project the future advancement of AI when discussing incoming high school freshmen starting to make career plans.
[1] If you've read Spiral Dynamics (which I wouldn't exactly endorse, but it's an interesting map of the territory), it was aggressively Green vMeme.