What is wordform recognition?
But they don’t just have to remember individual words by themselves. Another part of wordform recognition is the ability to slice words out of fluent speech as distinct, meaningful sounds. In real life, there’s no physical space or time between words when someone is speaking to you: it’s one continuous flow of sound. Segmentation is simply grouping certain sounds in speech together as one word and separating them from the rest of the speech stream. Take this sentence for example:

thequickbrownfoxjumpsoverthelazydog
To someone who doesn’t speak or read English, that will look like a long string of nonsense. But to experienced English readers, the letters group together into familiar words and form a coherent sentence. We use different sensory processes for written, spoken, and signed language, but we do much the same thing whether we read, hear speech sounds, or see signs, and babies need to get good at this very early in their language journey.
Both phonetic representation and word segmentation build infants’ wordform recognition ability. As with most things, practice makes perfect: the more times we hear a word, the stronger our expectation of how it should sound, and the faster we are able to recognize it (Zevin, 2009). Later in development, as infants learn the meanings of words, they match those meanings to the recognized collections of sounds stored in their lexicon (Carbajal, Peperkamp, & Tsuji, 2021). This process becomes faster and easier as their word comprehension improves.
How do we know if babies can do this?
To know if an adult knows a word, we can just ask. With babies, not so much. To test whether an infant can recognize a wordform like “dog” (remember our phonemes, /dɔg/), researchers use discrimination studies to see if the infant can tell the difference between familiar and unfamiliar words. The idea is that if the infant already has a phonetic representation of familiar words, they should treat those words differently from unfamiliar ones.
One study design used to do this is the Head-Turn Preference Procedure. As the name suggests, this procedure uses the direction the infant turns their head to measure a difference in preference between two types of stimuli (familiar vs. unfamiliar words) (Nelson, 1995).
One nice thing about babies is that we can count on them to do certain things. For instance, if there’s a blinking light in a dark room, they’re likely to look toward it, just as grownups are. Taking advantage of this, the method flashes a light on one side of the baby and plays a sound for as long as the baby keeps looking at that light. This very basic setup lets us know whether babies can tell certain types of sounds apart (e.g., trumpets vs. saxophones, foreign speech vs. native-language speech, or known vs. unknown wordforms). If babies show a significant difference in how long they look at the light between the two types of sounds overall, it tells us that they have spotted the difference, and thus recognized the wordforms of the familiar words.
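The logic of that looking-time comparison can be sketched as a toy analysis in Python. The numbers below are invented for illustration (they are not from any real study): each pair is one infant’s average looking time, in seconds, during familiar-word versus unfamiliar-word trials, and a simple paired t statistic summarizes whether the group looks reliably longer for one type of sound.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical looking times (seconds), one value per infant:
# how long each baby kept looking at the light during
# familiar-word trials vs. unfamiliar-word trials.
familiar   = [8.2, 7.5, 9.1, 6.8, 8.9, 7.7, 8.4, 9.0]
unfamiliar = [6.1, 6.9, 7.2, 5.8, 6.5, 7.0, 6.3, 6.8]

# Paired differences: positive values mean longer looks
# during familiar-word trials for that infant.
diffs = [f - u for f, u in zip(familiar, unfamiliar)]

# Classic paired t statistic: mean difference divided by its
# standard error. A large |t| across infants suggests the group
# treats familiar and unfamiliar wordforms differently.
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

print(f"mean difference: {mean(diffs):.2f} s, t = {t:.2f}")
```

In a real study the inference would be done with proper statistics software and many more infants, but the core idea is the same: a consistent, non-zero difference in looking time is what licenses the claim that babies recognize the familiar wordforms.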
Samantha is a junior at Duke double-majoring in Neuroscience and Linguistics with a minor in Psychology. She is specifically interested in how music and language impact someone’s perception of their surroundings. Having grown up speaking multiple languages and playing various instruments, she would love to know how that has impacted her cognitive development and helped her navigate the world. Most of her free time is spent drawing and playing Minecraft. The rest is spent napping with her cats and listening to music.