Watch your language

27 Apr, 2009
Dr Mairéad MacSweeney and colleagues are conducting research into language processing in the brain, with a focus on deaf people and sign language.

Chrissie Giles meets Dr Mairéad MacSweeney, whose research into sign language and the brain is shedding light on how the brain processes language in both hearing and deaf people, as well as highlighting the importance of gaining language skills early in life.

Most of us are born into a world of noise, and even before birth the sounds of our mothers’ surroundings filter through to us ‘in utero’. But for people born profoundly deaf, there is no sound. They, and their brains, develop in a silent world.

How does a brain that never receives auditory input process language, and what impact does deafness have on cognitive development in general? These questions are among those being investigated by Dr Mairéad MacSweeney, a researcher at the Institute of Cognitive Neuroscience, University College London.

“Most of what we know about language development comes from studying spoken language,” Dr MacSweeney says. “By looking at sign language we can ask if it’s language ‘per se’ that’s activating particular brain areas, or if the activation depends on the modality of the language – in other words, whether it’s spoken or signed.” She and her colleagues use brain imaging to investigate which parts of people’s brains are activated as they process language.

There is considerable evidence that the brains of deaf people and of hearing people process language in broadly similar ways. For example, Dr MacSweeney and colleagues have shown that deaf native signers (people who use sign language as their first language) recruit the main language centres in the brain – Broca’s area and Wernicke’s area – when they’re watching sign language, just as hearing people do when they watch and listen to someone speaking.

There are also some differences in brain activation between the two groups. Dr MacSweeney explains: “There’s a lot more movement in sign language than in watching someone speak, so the parts of the brain sensitive to movement are more active when you’re watching someone sign than when you’re watching someone speak.”

Studies of profoundly deaf people can also shed light on plasticity – the ability of the brain to reorganise its function under changed circumstances. In this case, the researchers are investigating what happens to the auditory cortex (the part of the brain that processes sounds) if it doesn’t receive auditory inputs early in life. They have shown that, in people born profoundly deaf, some parts of the secondary auditory cortex are used to process visual input, including sign language.

Rhyme and reason

Although British Sign Language (BSL) and English seem very different, signed languages share key linguistic features with spoken languages. In one study, Dr MacSweeney and colleagues looked at phonological awareness. This is knowledge about the internal structure of a spoken word – for example, knowing that ‘cat’ consists of ‘C-A-T’ and that it rhymes with ‘bat’.

Profoundly deaf people can’t rely on hearing words to tell whether they rhyme. Dr MacSweeney and colleagues showed deaf people pictures of objects and asked them to judge whether the objects’ names rhymed. Surprisingly, the deaf participants performed better than chance – they got more answers right than guesswork alone would allow – and they appeared to use a part of the brain called Broca’s area (involved in speech production) more than hearing people did.

Why would this be? Well, to test whether words rhyme, hearing people conjure up an ‘auditory image’, something akin to hearing the words in your head. The researchers think that deaf people, unable to do this, instead rely more on the articulatory component (i.e. how you say something).

Broadly speaking, though, the brain seems to process phonology the same way in sign language and spoken language, for native users at least, adding to the evidence that the brain deals with all kinds of language in a very similar way.

In development

Dr MacSweeney’s work covers not only the brain pathways involved in processing sign language, but also how being born deaf can affect a person’s development.

Some 95 per cent of deaf people are born to hearing parents. This can be a shock to parents, who may have limited experience communicating with deaf people, and little or no knowledge of sign language. Children in this situation may not have full access to sign language and, even with a cochlear implant or hearing aid, won’t be able to access spoken language fully. This can lead to what researchers call impoverished early language exposure.

In some circumstances, deaf children might not be fully exposed to sign language until they start school – long after their peers have become fluent in a language. “Language is so critical to everything you do. If you don’t have a good, robust early language then all kinds of other skills are going to suffer as a result,” says Dr MacSweeney.

Cochlear implants can help deaf children to gain access to spoken language, and one school of thought suggests that children with cochlear implants should not be taught sign language, in case it ‘conflicts’ with the acquisition of spoken language – something Dr MacSweeney disagrees with. “We know that children can cope in bilingual situations very well,” she says. For her, the use of BSL alongside spoken English is no different from a household where parents speak two different languages to their children.

Early diagnosis of deafness can also help children to gain exposure to an accessible language, and steps have been taken in the UK to make this possible. Since March 2006, all babies in the UK have been offered a hearing screen within a few days of birth – a policy introduced on the back of research led by Professor Colin Kennedy at the University of Southampton and funded by the Wellcome Trust. Yet there is some evidence that, even after early identification of hearing loss, there can be gaps in the follow-up services provided.

Research suggests that learning a language late can impair a deaf child’s subsequent cognitive development. On average, a deaf 16-year-old leaves school with a reading age of 10-11 years, despite having a non-verbal IQ no lower than average. “Hearing people tend to have around five years’ experience of spoken language before they are faced with learning to read that language. Profoundly deaf people don’t. This means that they have very little to map the written language on to – making reading a very difficult task,” says Dr MacSweeney.

However, her research suggests that learning any language early could boost a child’s ability to read. She and her colleagues found that deaf native signers tend to be better readers than deaf non-native signers, suggesting that acquiring a language before learning to read – even a language, such as BSL, that is different from the one being read – helps children to read better.

Previously supported by a Wellcome Trust Advanced Training Fellowship, Dr MacSweeney is now using her Research Career Development Fellowship to investigate further the link between learning sign language and reading English. She is also continuing to study the neurobiology of sign language – research that will boost our understanding of how the brain processes language, whether it’s spoken out loud or expressed through actions of the hands and body.

Further reading

MacSweeney M et al. The signing brain: the neurobiology of sign language processing. Trends Cogn Sci 2008;12(11):432-40.
MacSweeney M et al. Phonological processing in deaf signers and the impact of age of first language acquisition. Neuroimage 2008;40(3):1369-79.
Waters D et al. Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus. Neuroimage 2007;35(3):1287-302.

Research in context

Deafness Research UK Chief Executive Vivienne Michael says: “We need a far better understanding of both the effectiveness and the longer-term implications of the various communication choices available to deaf children and their families. Dr MacSweeney’s research is vital in this respect but also fascinating in that it sheds light on the nature of language itself, which could have profound implications for hearing and non-hearing people alike.”

Read the signs

It is a common misconception that British Sign Language (BSL) is, basically, spoken English with gestures. Lip-reading, reading and writing are all visual derivatives of spoken English, but BSL is, in fact, totally independent of spoken language.

Other facts about sign language:

  • Unlike spoken language, which is expressed vocally and perceived by ear, sign language is conveyed in a visual-spatial manner, relying on actions of the hands, upper body and face.
  • Deaf communities around the world use sign languages that are mutually unintelligible – e.g. BSL is used in the UK, while American Sign Language is used in the USA.
  • Signed languages share many linguistic features with spoken language, including those related to syntax (the rules for constructing sentences), semantics (the study of meaning) and phonology (the internal structure of words and signs).
One Comment

  1. Elizabeth Molnar
    16 Jul, 2013 1:04 am

    An interesting paper: when I worked with the hearing children of parents who were deaf from various causes, I found that their acquisition of grammatical tense was delayed relative to children from hearing families. Every sentence was in the present tense until after school entry.
