Guest post: Language on the left?
The human brain is split into two halves, the left and the right hemisphere. But to what extent are language functions found mainly in one hemisphere, and why might this be? In the first in a series of posts from scientist bloggers, Professor Sophie Scott describes how there are two sides to language in the brain.
Speech and language were the first ‘functions’ to be associated with particular brain areas. In 1861, Paul Broca described a brain area that appeared to be very important in speech production. The area, in the posterior third of the left inferior frontal gyrus, came to be known as ‘Broca’s area’. Broca identified it by studying the brain of one of his patients, known as Tan, who had a brain lesion and could say only the word, “Tan” (in addition to a few, more colourful, words).
In 1874, Carl Wernicke described lesions in the left temporal lobe (figure 2) that were associated with problems in the perception of spoken language. Both of these findings were incredibly influential, partly because acquired language problems still typically follow damage to the left side of the brain.
As one might imagine, 150 years of neuropsychology and neuroscience have enabled us to refine some of our ideas about the neural basis of speech perception and production. One big aspect of this has been to show that the right hemisphere is significantly involved in the perception and production of speech. This contribution can, however, be quite complex. For example, in normal, conversational speech production, the region corresponding to Broca’s area in the right hemisphere is actively suppressed (Blank et al, 2003).
I am interested in what might modulate this suppression. For example, when those with the language disorder Broca’s aphasia recover speech production, this is associated with increased activation in the right hemisphere Broca’s area. But is right Broca’s area associated with other kinds of speech change? We continuously modulate our voices to adapt to our environment, for instance, both in terms of acoustics and social factors. We unconsciously change our voices when we speak in a noisy room, and this change can be quite specific to different kinds of background noise (Cooke and Lu, 2010). We also change our voices a lot depending on to whom we are talking – people tell me I talk exactly like my mother after I’ve been speaking with her on the phone, but I can’t tell that I am doing it. In my lab, we are just starting to identify the brain systems involved in this kind of modulation, and it will be interesting to see to what extent and how right hemisphere mechanisms are involved in these changes.
In speech perception, things have become yet more complex. While linguistic information in speech is processed in the left temporal lobe regions that Wernicke described, we also see very strong activation of the right temporal lobe in response to spoken language. This seems to reflect the fact that, when we speak, we produce an incredibly complex sound – human speech is probably the most complex sound made by a single sound source. And when we speak we express information not just about the words we say, but also about who we are, where we come from, how old we are, how well we are, whether we are a man or a woman, as well as our mood.
Functional imaging studies indicate that the right temporal lobe areas are especially concerned with processing many of these other aspects of the information in our voices. Thus, the recognition of a speaker is associated with right temporal lobe areas – patients who cannot recognise a speaker by their voice typically have lesions in this area.
Of course, in normal speech perception these factors interact – studies have shown that we very rapidly adapt to speaker-specific idiosyncrasies, but we don’t retune our whole speech perception network (Eisner and McQueen, 2006). Thus, if I hear Jonathan Ross describing a “wed wobin” I am not surprised when the person he’s talking to says “yes, a red robin”. These speaker-specific mechanisms are strong – we understand people better if we are familiar with their voice. This suggests that, though the right and left temporal lobes might be sensitive to different aspects of the speech signal, they work together to enable us to understand speech.
The elephant in the room is why linguistic representations and processes are so associated with the brain’s left hemisphere in the first place. The left lateralisation of language is seen in 96 per cent of right-handed people, and is still there in 73 per cent of left-handed people (Knecht et al, 2000). It is there for men and women equally. People whose language centres are not in their left hemisphere have them in their right hemisphere: there is no evidence for people who have an intermediate, more equally divided representation of language across the left and right sides of the brain. And if the language-dominant hemisphere is damaged, the non-dominant hemisphere can take over function. Does this mean that the non-dominant hemisphere still performs linguistic functions in some low-key way? Or that it can adapt following damage to the brain (or perhaps even that it is released from some form of suppression)?
Ideas about why the linguistic aspects of speech perception are left lateralised tend to be a bit circular – one prominent argument is that the left hemisphere is good at processing the acoustic properties of speech. But quite apart from whether the evidence supports this, why should there be such differences between the left and the right sides of the brain to start with? Is speech perception on the left because speech production is left lateralised? Possibly, but that still leaves the question of why speech production would be left lateralised.
Perhaps by focusing on language we are looking in the wrong place altogether. There are other kinds of asymmetry between the hemispheres. Damage to the right hemisphere can lead to lasting problems with orienting attention in space and time, such as in left spatial neglect, where people do not pay attention to the left side of space, don’t talk to people who stand on that side of them, don’t eat food on that side of their plate and so on. People with left hemisphere damage can show the opposite pattern of ignoring the right-hand side of their world, but this is considered less common and patients tend to recover quickly.
This suggests that there may be differences in how attentional mechanisms operate in the two hemispheres. Along different lines, my UCL colleague Professor Tim Shallice comes to this problem through his research into high-level thought and monitoring systems. He thinks of the left hemisphere as the side that is good at categorising and classifying objects in the world. Language is a good model of a system in which categorisation and classification are critical processes. I suggest that, in order to understand the lateralisation of language in the human brain, we should keep other, non-linguistic processes in mind.
Sophie Scott is a Professor of Cognitive Neuroscience at UCL and a Wellcome Trust Senior Research Fellow in Basic Biomedical Science. She blogs at Listening In.