UMD Team Uses AI to Enhance Mnemonic Learning

Whether it’s memorizing vocabulary for a test or learning new foreign language phrases, we’ve all used keyword mnemonics at some point.

A mnemonic is a memory technique that helps people learn and remember a new word by associating it with something that’s easy to recall, and then determining how the two ideas are linked. For example, one might say that melancholy sounds like melody, suggesting the idea of a slow, sad song, which connects it to the meaning of melancholy as a feeling of sadness.

But helpful as they are, crafting mnemonics can be tedious and frustrating. What if there was a way to make using this classic technique easier?

Enter SMART—an AI-driven keyword mnemonic generator created by University of Maryland researchers in the Computational Linguistics and Information Processing (CLIP) Lab. Designed to simplify mnemonic generation for students, SMART was showcased at the Conference on Empirical Methods in Natural Language Processing (EMNLP) in Miami.

“I enjoy working with mnemonics—they’re fun and effective,” says Nishant Balepur, a second-year Ph.D. student in computer science who is leading the project. “When I was studying for the GRE [the admissions test for graduate school], I had to learn hundreds of new vocabulary terms—a painfully tedious process.”

Mnemonics made the process more engaging, Balepur recalls, but coming up with them on his own was difficult. Soon thereafter, large language models (LLMs) like ChatGPT entered the picture, and Balepur realized they might be the perfect tool to help students like himself generate mnemonics.

Working with collaborators that included researchers from Yale University and George Washington University, Balepur developed SMART using a two-step LLM-training process: supervised fine-tuning, which trains the model on input-output pairs (vocabulary terms matched with mnemonics), followed by preference tuning, which refines the model’s outputs based on user feedback.
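
For readers curious what such a pipeline looks like in practice, here is a minimal sketch: supervised fine-tuning on term-mnemonic pairs, followed by a DPO-style preference-tuning loss. The toy data, stand-in numbers, and function names below are illustrative only and are not drawn from SMART’s actual implementation.

```python
# Illustrative sketch of a two-stage recipe: supervised fine-tuning on
# (term, mnemonic) pairs, then DPO-style preference tuning on feedback.
# All data, numbers, and stand-in tensors are made up for demonstration.
import torch
import torch.nn.functional as F

# Stage 1 data: vocabulary terms paired with target mnemonics.
sft_pairs = [
    ("melancholy", "melancholy sounds like 'melody', a slow, sad song."),
    ("gregarious", "gregarious sounds like 'Greg's area', a spot full of friends."),
]

# Stage 2 data: for each prompt, a mnemonic students preferred (chosen)
# versus one they did not (rejected).
pref_triples = [
    {
        "prompt": "Write a keyword mnemonic for 'melancholy'.",
        "chosen": "melancholy ~ melody: a slow, sad song, i.e. sadness.",
        "rejected": "melancholy: just memorize that it means sad.",
    },
]

def sft_loss(logits: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """Standard next-token cross-entropy used during supervised fine-tuning."""
    return F.cross_entropy(logits.view(-1, logits.size(-1)), target_ids.view(-1))

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss: push the policy to rank the chosen
    mnemonic above the rejected one, relative to a frozen reference model."""
    logratios = (policy_chosen_logp - ref_chosen_logp) \
              - (policy_rejected_logp - ref_rejected_logp)
    return -F.logsigmoid(beta * logratios).mean()

# Stand-in tensors in place of a real language model's outputs.
toy_logits = torch.randn(1, 4, 10)             # (batch, sequence, vocab)
toy_targets = torch.randint(0, 10, (1, 4))     # token ids of a target mnemonic
print(f"toy fine-tuning loss:       {sft_loss(toy_logits, toy_targets).item():.3f}")

toy_dpo = dpo_loss(
    policy_chosen_logp=torch.tensor([-12.0]),
    policy_rejected_logp=torch.tensor([-15.0]),
    ref_chosen_logp=torch.tensor([-13.0]),
    ref_rejected_logp=torch.tensor([-14.0]),
)
print(f"toy preference-tuning loss: {toy_dpo.item():.3f}")
```

In a real pipeline, each stage would run over thousands of examples with a full language model; the toy tensors here only show the shape of the computation.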

The researchers’ analysis revealed a fascinating insight.

“We found there was a discrepancy between expressed preferences, what students thought helped them learn better, and observed preferences, which are the actual learning outcomes,” explains Balepur.

To bridge this gap, Balepur and his team used Bayesian modeling—an advanced statistical approach—to account for, and balance, both types of preferences. This was where the team’s work diverged from prior research in the field. Instead of relying solely on what students said helped them, the researchers determined which mnemonics actually helped students learn better.
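
As a rough illustration of the idea (and not the team’s actual model), one can treat a mnemonic’s helpfulness as a quantity with a Beta-distributed belief, fold in expressed preferences as weaker evidence, and let observed recall outcomes carry more weight:

```python
# Generic illustration (not the team's actual model) of Bayesian updating that
# balances two evidence sources: expressed preferences (what students say helps)
# and observed outcomes (whether students actually recalled the word later).
from dataclasses import dataclass

@dataclass
class HelpfulnessBelief:
    alpha: float = 1.0  # pseudo-counts of "helpful" evidence
    beta: float = 1.0   # pseudo-counts of "not helpful" evidence

    def add_expressed(self, thumbs_up: int, thumbs_down: int, weight: float = 0.5):
        """Fold in stated preferences, down-weighted because they can
        diverge from real learning outcomes."""
        self.alpha += weight * thumbs_up
        self.beta += weight * thumbs_down

    def add_observed(self, recalled: int, forgot: int, weight: float = 1.0):
        """Fold in observed outcomes, e.g. results of a later recall quiz."""
        self.alpha += weight * recalled
        self.beta += weight * forgot

    @property
    def mean_helpfulness(self) -> float:
        """Posterior mean of the Beta(alpha, beta) belief."""
        return self.alpha / (self.alpha + self.beta)

belief = HelpfulnessBelief()
belief.add_expressed(thumbs_up=8, thumbs_down=2)  # students liked the mnemonic
belief.add_observed(recalled=3, forgot=5)         # but recall results were weaker
print(f"estimated helpfulness: {belief.mean_helpfulness:.2f}")
```

In this toy example, students liked the mnemonic, but weaker recall results pull the estimate back down, which is the kind of balancing act Balepur describes.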

As a result, Balepur says, SMART can design mnemonics that are both engaging and educationally effective.

“This approach has helped us build a model that’s as good as, but cheaper and more efficient than, GPT-4, the LLM that powers ChatGPT,” he says.

But as powerful as SMART and other LLMs are, they’re not without their limitations. In their published work, the team noted that a human language expert they worked with was better than both SMART and GPT-4 at crafting high-quality mnemonics.

“Creating a mnemonic may sound simple, but it involves a lot of creativity and common sense, areas where AI still has room for improvement,” Balepur explains.

The team is determined to address the model’s blind spots and hopes to build on these findings to make the mnemonics SMART generates even more memorable.

Amid the project’s many challenges, Balepur acknowledges the valuable support he’s received from his advisers and other team members, whose expertise in a diverse range of areas strengthened the project’s impact.

Balepur is co-advised by Professor of Computer Science Jordan Boyd-Graber and Assistant Professor of Computer Science Rachel Rudinger, who both have appointments in the University of Maryland Institute for Advanced Computer Studies (UMIACS) and are core members of the CLIP Lab. Boyd-Graber is also a member of the Institute for Trustworthy AI in Law & Society (TRAILS).

Balepur also expressed appreciation for UMIACS’ role in supporting the project.

“If it weren’t for UMIACS’ computational resources, our work wouldn’t have been possible,” he says. “The staff’s responsiveness was essential in keeping our project on track.”

—Story by Aleena Haroon, UMIACS communications group

###

"A SMART Mnemonic Sounds like 'Glue Tonic': Mixing LLMs with Student Feedback to Make Mnemonic Learning Stick" was presented at the Conference on Empirical Methods in Natural Language Processing (EMNLP) in Miami. It is one of one of 21 papers co-authored by CLIP researchers that were accepted to EMNLP this year.

