AI and Agency at Embark, Part 1

The Limitations of AI Chatbots

I recently presented at the Association for Middle Level Education annual conference. While there, I attended multiple sessions about AI, knowing I needed all the help I could get on the topic. To give you some context: I am a 40-year-old career educator who cares deeply about research and writing skills. I am also a moderate Luddite. I’m on Instagram but not TikTok. I read the New York Times online but never scroll through Reddit. I know enough to recognize that AI will soon impact my learners and their work, regardless of my desires. So, I dutifully attended a handful of sessions at AMLE, only to walk away disappointed. I had hoped the sessions would help me guide learners’ responsible use of AI as a tool for learning. Instead, the sessions focused on two main themes:

  • How to determine whether students are using AI to cheat, and

  • How educators can use AI to work more efficiently, with resources for having AI draft rubrics, lesson plans, unit outlines, emails to parents, and feedback on student work.

Students think and act for themselves to positively shape their learning and lives.
— Embark's definition of agency

I left the conference with a better understanding of AI but no concrete strategies for supporting my learners’ use of it. Fortunately, I teach at Embark, where we believe that learners’ capabilities are practically limitless when they are given the proper instruction, support, and guidance. Since others hadn’t helped me understand how youth can responsibly use AI, I decided that Embark learners would do the work themselves. So began a fantastic journey into AI and learner agency.

Exploring AI’s Uses and Limitations

My first goal was to have learners understand that AI technology is not new; with the arrival of AI chatbots, it has simply become far more capable. We watched a great short film by Mozilla titled “What is Artificial Intelligence?” We discussed Netflix’s recommendation engine, which uses an algorithm to predict your next favorite movie, and Google’s Smart Compose, a machine-learning feature that offers word suggestions and sentence-completion options. These examples helped learners better understand how AI works.

I then had learners give ChatGPT their version of the prompt: write a persuasive/informative essay explaining [insert a topic you care about here]. Learners were immediately impressed with how quickly ChatGPT composed a multi-paragraph essay. Then I asked them to actually read the essay and tell me what critical feedback they would give a friend who had genuinely composed it. It didn’t take long for them to come up with two critical points:

  1. The essay was repetitive and contained many general statements but little specific information.

  2. Their friend, ChatGPT, did not cite any sources. In fact, it gives you a rather blunt answer if you ask it to cite the sources it used.

ChatGPT 3.5’s kind refusal to cite its sources.

At this point, we referenced a great Student Guide for AI Use created by AI for Education, which introduced learners to many common uses and limitations of AI chatbots. With this baseline in mind, I asked learners to return to their ChatGPT-generated essay and fact-check its information and claims.

Learners’ Discoveries

Through this exercise, learners developed an acute understanding of ChatGPT’s limitations. No matter how hard they tried, it would not provide them with sources or a list of research links, or even incorporate information from specific websites they provided. In doing this work, learners proved to themselves that AI chatbots are not useful for conducting research or writing an essay from start to finish. As Dixon wrote in his reflection on this lesson, “The limitations of the ChatGPT are citing sources, explaining the ideas in the essay, and getting up-to-date information.” Walden stated that AI “does not provide examples. Examples are key in writing, and having no examples in essays will have a negative impact.” Chiara brought up a further challenge presented by ChatGPT’s lack of sources: “you also can't check to see if it is biased information because it won't cite its sources.”

Learners also discovered the challenges presented by ChatGPT 3.5’s limited access to recent source material. (The free version can only access information published before January 2022.) Max asked it to write a persuasive essay explaining why the Denver Nuggets are the best basketball team in America. ChatGPT's essay omitted the crucial fact that the Nuggets won the 2023 NBA Finals because it didn’t have access to that information. Another learner encountered the same limitation from a different angle. Hazel asked ChatGPT to create a list of Christmas gifts to purchase for her mother. Once Hazel provided the AI with personal details about her mother, like her age, background, and likes, it gave her specific suggestions. However, Hazel found the information so dated that it was unhelpful. “It doesn't have recent information, so it doesn't know what is popular or in style. Because of this, the gifts it talks about aren't in style anymore,” Hazel wrote in her reflection.

The limitations of the ChatGPT are citing sources, explaining the ideas in the essay, and getting up-to-date information.
— Dixon, 7th-grade learner

Others learned firsthand that when ChatGPT cannot find enough relevant information to answer a question, it hallucinates, or makes up information that sounds good. When Natalia prompted it to write a persuasive essay detailing her fantastic dancing abilities, it made up all kinds of stuff. The essay described many impressive but generic qualities a skilled dancer would likely have, but none of them was true for Natalia. She does not have “unwavering dedication and passion for the art form” or an “innate sense of timing and coordination.”

Because learners were given permission, trust, and time to explore the capabilities of ChatGPT, they uncovered a wealth of limitations in the technology. This experience also opened lines of communication about AI. Rather than viewing the technology as a way to cheat and using it in secret, learners developed an understanding of how to use it appropriately. Hazel wrote that AI “could be useful…when making an essay because it can make an outline.” Maddie wrote that “AI can be helpful if you are trying to improve an essay…. It can help you find better words, and shorten run-on sentences.”

Like what you learned here? Stay tuned for the next installment in the AI and Agency at Embark series, Exploring AI Chatbots’ Uses.

Carissa Solomon