CLIP Researchers Create Computational Models to Explore How Adults Learn New Languages
There are myriad benefits to learning a new language—from conversing with people from other backgrounds, to easing international travel, to advancing your career. But acquiring a new language as an adult is not always easy, particularly if a person is trying to distinguish phonetic sounds not often heard in their native language.
With funding from the National Science Foundation (NSF), researchers in the Computational Linguistics and Information Processing (CLIP) Laboratory at the University of Maryland are exploring this phenomenon, using computational modeling to investigate learning mechanisms that can help listeners adapt their speech perception of a new language.
Naomi Feldman, an associate professor of linguistics with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), is principal investigator of the $496K grant.
Feldman is overseeing five students in the CLIP Lab who are heavily involved in the project, including Craig Thorburn, a fourth-year doctoral student in linguistics, and Saahiti Potluri, an undergraduate double majoring in applied mathematics and finance.
For their initial work, the researchers are taking a closer look at the specific difficulties native Japanese speakers face when learning English.
It is often difficult for adults to alter the speech categories they have used since childhood, particularly when it comes to non-native or unfamiliar speech sounds. For example, native English speakers can easily distinguish between the “r” and “l” sounds, a contrast that native Japanese speakers are not accustomed to hearing.
Feldman’s research team is developing two types of computational models based on adult perceptual learning data: probabilistic cue weighting models, which are designed to capture fast, trial-by-trial changes in listeners’ reliance on different parts of the speech signal; and reinforcement learning models, which are designed to capture longer term, implicit perceptual learning of speech sounds. Thorburn and Potluri are working on the latter models.
With guidance from Feldman, the two researchers are exploring a reward-based mechanism that research suggests is particularly effective in helping adults acquire difficult sound contrasts when learning a second language.
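The story does not spell out how the team's models are implemented, but the general idea of reward-driven perceptual learning can be illustrated with a small sketch. The toy example below assumes a synthetic one-dimensional acoustic cue for an “r”/“l”-like contrast and a simple perceptron-style update driven only by reward feedback; the cue values, learning rate, and reward scheme are invented for illustration and are not the CLIP team's actual model.

```python
# Illustrative sketch only: a toy reward-driven learner for a two-category
# sound contrast (e.g., /r/ vs. /l/). All numbers here are invented.
import random

random.seed(0)

CATEGORIES = ["r", "l"]

def sample_trial():
    """Draw a synthetic acoustic cue value for one listening trial."""
    true_cat = random.choice(CATEGORIES)
    # Toy assumption: /r/-like tokens have a lower cue value than /l/-like tokens.
    mean = 0.3 if true_cat == "r" else 0.7
    cue = random.gauss(mean, 0.15)
    return cue, true_cat

def choose(weights, cue):
    """Score each category with a simple linear rule and pick the higher-scoring one."""
    scores = {c: weights[c]["bias"] + weights[c]["slope"] * cue for c in CATEGORIES}
    return max(scores, key=scores.get)

def train(n_trials=5000, lr=0.05):
    """Learn category weights from reward alone, without explicit labels."""
    weights = {c: {"bias": 0.0, "slope": 0.0} for c in CATEGORIES}
    correct = 0
    for _ in range(n_trials):
        cue, true_cat = sample_trial()
        guess = choose(weights, cue)
        reward = 1.0 if guess == true_cat else -1.0
        # Reinforcement-style update: nudge the chosen category's score up
        # after a reward and down after a punishment, scaled by the cue.
        weights[guess]["bias"] += lr * reward
        weights[guess]["slope"] += lr * reward * cue
        correct += guess == true_cat
    return weights, correct / n_trials

if __name__ == "__main__":
    weights, accuracy = train()
    print(f"accuracy over training: {accuracy:.2f}")
```

Over thousands of trials like these, the learner's accuracy climbs well above chance, which loosely mirrors the implicit, feedback-driven learning the researchers describe.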
“We're trying to uncover the precise mechanism that makes learning so effective in this paradigm,” Thorburn says. “This appears to be a situation in which people are able to change what they learned as an infant, something we refer to as having plasticity—the ability of the brain to adapt—in one’s representations. If we can pin down what is happening in this experiment, then we might be able to understand what causes plasticity more generally.”
Potluri says that the powerful computational resources provided by UMIACS are critical to the project, noting that the model they are working with goes through hundreds of audio clips and “learns” over thousands of trials.
“The lab's servers can run these experiments in a matter of hours, whereas with less computational power it would literally take days to run a single experiment,” she says. “After running the model, we also need to analyze the massive datasets generated by the trials, and they are easier to store and manipulate, without running into memory issues, on the lab's servers.”
Potluri says it was her interest in learning languages and a desire to get involved in linguistics research that drew her to apply to work in CLIP as an undergraduate. Although she had very little previous coursework in the subject, she and Feldman found that the NSF-funded project was a great area for her to exercise her knowledge of math while gaining new skills.
Feldman says the complementary skill sets of Thorburn and Potluri make them a good team to assist on the project.
“Craig and Saahiti have interests that are very interdisciplinary—spanning everything from language science to computer science to applied math—which makes them a perfect fit for research that uses computational models to study how people learn language,” she says. “Their collaborative work has already proven to be very impressive, and I am glad to have them on our team.”
Original story by Melissa Brachfeld at the University of Maryland Institute for Advanced Computer Studies
Published October 29, 2021