Anna Corke learns just what a birdbrain can do.
A Fairchild conference room, containing mostly graduate students and professors, stroked its collective chin as Dr. Sarah Woolley, a relaxed brunette dressed in brown and green, began her surprisingly understandable lecture, “Natural Sound Processing in the Song Bird Brain,” by showing photographs of a Bengalese finch, a song sparrow, and a zebra finch. First she played the three species’ songs, then explained that although the songs come from closely related species, they are quite distinct in structure and content. More importantly, a bird’s ability to sing these species-specific songs (think different human languages) depends on the bird’s auditory perception. Dr. Woolley’s interest is in how bird brains process sound, what the results reveal about avian singing ability, and what the broader neurological implications of those results might be.
Clicking to the next slide, Woolley explained that during early development, birds must first memorize the song of an adult of the same sex, then practice vocalizing to match the memorized song. Finally, they must stabilize a mature song of their own through improvisation. A past experiment in which birds were temporarily deafened showed that birds store learned songs in their brains after adolescence. Woolley then posed a question: How is song actually coded in the auditory system?
The next slide showed complex diagrams of sound frequencies and neuron spike trains, obtained by attaching an electrode to neurons in the opened brain of an anesthetized finch. As if unaware that the room was full of lifelong scientists, Woolley assured her audience that she would explain exactly how to read and interpret the diagrams. What followed was a tutorial in spectro-temporal receptive fields (STRFs), which show the specific components of a song to which a neuron responds. Spectral features describe a sound’s tonal frequencies, its pitch content; temporal features describe how the sound’s intensity, or amplitude, changes over time.
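For readers who want a more concrete picture of an STRF, the sketch below shows one common way such a field can be estimated from this kind of recording: averaging the slice of spectrogram that immediately precedes each spike (a spike-triggered average). This is a generic Python illustration, not Woolley’s actual analysis pipeline; the array names (spectrogram, spike_times) and the window length are hypothetical.

```python
import numpy as np

def estimate_strf(spectrogram, spike_times, window_bins=50):
    """Rough spike-triggered-average estimate of a spectro-temporal
    receptive field (STRF).

    spectrogram : 2-D array, shape (n_freq_bands, n_time_bins)
        Log-amplitude spectrogram of the stimulus (the song).
    spike_times : 1-D array of integer time-bin indices at which the
        neuron fired.
    window_bins : number of time bins of stimulus history to average
        before each spike.

    Returns an array of shape (n_freq_bands, window_bins).
    """
    n_freq, n_time = spectrogram.shape
    strf = np.zeros((n_freq, window_bins))
    n_used = 0
    for t in spike_times:
        # Only use spikes with a full window of stimulus history.
        if window_bins <= t < n_time:
            strf += spectrogram[:, t - window_bins:t]
            n_used += 1
    if n_used > 0:
        strf /= n_used
    # Subtract each frequency band's average level so that only
    # stimulus features deviating from the mean stand out.
    strf -= spectrogram.mean(axis=1, keepdims=True)
    return strf
```

Peaks in the resulting map mark the frequencies and time lags that reliably precede a spike; that, in essence, is what the diagrams on the slide conveyed.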
To clarify the difference between the spectral and temporal sides of a sound, Woolley played altered recordings of a human voice. The first, reduced to the sound’s temporal components, was like an old man with his mouth too close to a microphone, crackling with static. The second, carrying only the spectral components, sounded like a woman reciting her vowels with her tongue sticking out: more tonal, but equally incomprehensible. The unedited clip revealed a young man saying, “The radio was playing too loudly.”
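As a loose illustration of those two dimensions, rather than a reconstruction of the clips she played, the snippet below separates an audio signal into its temporal envelope (how loudness rises and falls over time) and its spectral profile (how much energy sits at each frequency). The file name in the commented usage is hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

def temporal_envelope(signal):
    """Temporal side of a sound: how its amplitude rises and falls
    over time, regardless of which frequencies are present."""
    return np.abs(hilbert(signal.astype(float)))

def spectral_profile(signal):
    """Spectral side of a sound: how much energy sits at each
    frequency, regardless of when it occurs."""
    return np.abs(np.fft.rfft(signal.astype(float)))

# Hypothetical usage with any mono speech recording:
#   rate, speech = scipy.io.wavfile.read("radio_sentence.wav")
#   env = temporal_envelope(speech)   # keeps the rhythm and loudness contour
#   spec = spectral_profile(speech)   # keeps the pitch and timbre information
```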
STRF data revealed that bird neurons respond only to temporal modulations in songs (variations in amplitude over time) and not to their spectral counterparts. This result perplexed Woolley, since she had found the opposite pattern in her other test subjects, bats.
At the end of the lecture Woolley speculated about where her research might go next. In addition to studying the coevolution of auditory and vocal systems and differences in auditory coding and behavior, Woolley has been working with adolescent birds. One gray-bearded scientist asked whether she had examined the STRFs of baby birds’ neurons to determine whether they show the same perceptual responses as their parents. Though she hasn’t reached any firm conclusions, Woolley has tried forcing chicks to learn a different species’ song. So far she’s found that the birds cannot physically produce a complete song of any species but their own. Bird languages might be genetic to some extent.
Woolley expressed her confusion over her current findings—they weren’t what she expected after past research on bats’ brains—but also said that her research was beneficial because bird song was “a good way to study how complex perception works.” Bird song, as a form of animal communication, is easy to study. The plethora of varied song cultures from different species provides an enormous bank of data. And understanding bird brains could certainly reveal something about our own psychology. The other professors present nodded, complacent in their lab coats, as if this inkling were enough.