It’s easy to conjure a mental image of a science fiction-inspired dystopian future when we hear cautionary tales from the classroom about ChatGPT completing homework assignments, anecdotes of the chatbot fooling a human into bypassing bot controls in its stead by claiming to be a person with a visual impairment, or even an admission from the OpenAI CEO that he’s “a little bit scared” of his own technology.
But what if it isn’t all robot-overlord doom and gloom? While the implications can indeed be “scary” if we don’t take the time to keep up with and critically think about new technology, there’s ample opportunity to learn. Educators, in particular, are poised to support students in developing critical skills that can help them navigate AI and make responsible, informed decisions about putting it to use. This work necessitates an adjustment in practice and behaviors, but it’s something we need to contend with – as we learned at The Learning Accelerator (TLA) while taking on a recent project.
The Skills We Needed to Solve Our Own “AI Problem”
Last month, our team worked alongside a national education funder to better understand how educators access online resources and implement them in practice. We opened up a public survey to collect data and invited interested respondents to dive deeper with us in focus groups. The response we received was instant – and curious. Within hours, we were flooded with relevant, intelligent responses to our questions. Taken in isolation, each response could have feasibly come from a real-life educator detailing their experiences and needs around digital resources. But looking at them side-by-side, we spotted a number of “tells” that sounded the alarms.
The timing of the responses initially struck us as suspicious: several arrived within minutes of one another and seemed to have been phrased by the same source.
“It is important that students can understand the material.”
“It is important that teachers have the time to use the resources.”
“It is important that resources work with my curriculum.”
Could these have been written by a real educator? Absolutely. But was it likely, given the identical sentence stems and the rapid-fire submission of these responses one after another? Probably not. Sorting through the seemingly endless pile of suspiciously similar responses required our team to use “soft skills” that students can apply across situations and modalities. Students need teachers to equip them with these in-demand skills in order to make sense of a world where generative AI can produce responses that fool us into thinking they’re human.
How educators can support students: In sorting out the valid responses from the chatbot fakes, we tapped into critical thinking and problem-solving skills. We know that AI can make efficient work of the prompts we provide it, but a keen human eye is needed to evaluate the quality, coherence, relevance, and applicability of the information it produces. Educators can emphasize the steps students will need to take in order to properly analyze AI outputs through reasoning, collecting evidence, hypothesis-testing, and asking targeted questions.
In addition, educators can support students in deepening their skills for adaptability and decision-making. Students will need to know how to take the sometimes disparate (or potentially incorrect or outdated) pieces of information generated by AI to adapt and make sense of them. They will need to make choices about how to best fit what they’ve generated with the assistance of AI into the problems they’re trying to solve.
Our team noticed that the “suspicious” answers tended to provide a large amount of detail, often about the same ideas and topics but sometimes in different orders. When we asked respondents to describe the factors most pressing to them in selecting resources, many responses ran multiple paragraphs long. “Relevance,” one respondent would say, then proceed to define what they meant by relevance, followed by another noun and a definition, then another.
How educators can support students: A lack of creativity and variation in the responses were significant weaknesses in the chatbot responses. One place where AI excels is serving as a “jumping-off point.” Students can use these tools to generate ideas that help to better structure or inspire deeper, more creative thinking. Educators can encourage students to develop further nuance and expand basic ideas into larger endeavors by tapping into their own creativity skills.
In TLA’s project, respondents interested in joining our focus groups were prompted to share their role and school. While the bot-generated answers correctly identified a school, its location, and a teaching role, our team couldn’t find records of these respondents as real staff or faculty at any of the named institutions. We wanted a strong, representative sample of educators – one that accounted for identities, equity, and a balance of opinions from practitioners at various levels – so we had to do our part by inviting real-life educators into these groups.
How educators can support students: Working responsibly with AI will require students to bring integrity into their work. Students will need to grapple with the ethical and equity-based considerations that come along with the use of AI. Educators can encourage students to consider when the right time is to use AI, who the technology benefits (and who it harms), and how these tools can be used to create positive change.
AI is Already Here!
While our “AI problem” wasn’t one we expected to deal with, it got us thinking about the ways AI has already spread into our daily lives and the immediate implications for K-12 education. Although our team had to contend with the nuisance the technology introduced, there are exciting benefits to working with AI. Workers are automating more mundane processes so they can devote their mental energy to deeper work. Teachers are using chatbots to discover new ways to personalize learning for the students in their classrooms. Students are using these tools to explore their interests and better understand content they may struggle with.
Despite school districts’ attempts to ban ChatGPT, interested students will inevitably find ways to use AI in their work. Rather than rail against a tool that is already part of many students’ routines, we encourage educators to instead pose important considerations to their students and help them develop the skills they’ll need to effectively and responsibly integrate AI into their work.
This blog was the final installment in a series on AI and K-12 education. Catch up on the blog series here:
- Can AI Solve the Uniquely Human Challenges Facing Educators Today?
- AI’s Role in the Future of Innovation in Education
- Generative AI in Schools: Leaders Ask for Guidance
- Innovating While Leading Schools is Hard — Lessons from Using ChatGPT
Have a learning or idea about generative AI tools in education that you’d like to share? Tag us on Twitter, we’d love to hear from you: @LearningAccel