How may the structure of a new linguistic community shape language emergence and change? The 1817 founding of the US's first enduring school for the deaf, the American School for the Deaf (ASD) in Hartford, Connecticut, heralded profound changes in the lives of deaf North Americans. We report the demographics of the early signing community at ASD through quantitative analyses of the 1,700 students who attended the school during its first fifty years. The majority were adolescents, with adults also well represented. Prior to 1845, children under age eight were absent. We consider two groups of students who may have made important linguistic contributions to this early signing community: students with deaf relatives and students from Martha's Vineyard. We conclude that adolescents played a crucial role in forming the New England signing community. Young children may have pushed the emergence of ASL, but likely did so at home in deaf families, not at ASD.
Research on spoken languages has shown that response particles may indicate the truth of a previous utterance or the polarity of the response. In responses to negative antecedents, the two functions come apart and particles become ambiguous. We present the first quantitative study on response strategies in sign languages by discussing data from a production experiment in German Sign Language (Deutsche Gebärdensprache; DGS). The results indicate that DGS does not exploit the potential of simultaneous manual and nonmanual strategies to disambiguate responses. Still, the type of articulator influences the choice of response element. We propose an optimality-theoretic model to account for the role of articulator type, the disambiguation potential, and the morphosyntax of response elements in DGS.
In a number of signed languages, the distinction between nouns and verbs is evident in the morphophonology of the signs themselves. Here we use a novel elicitation paradigm to investigate the systematicity, emergence, and development of the noun-verb distinction (qua objects vs. actions) in an established sign language, American Sign Language (ASL), an emerging sign language, Nicaraguan Sign Language (NSL), and in the precursor to NSL, Nicaraguan homesigns. We show that a distinction between nouns and verbs is marked (by utterance position and movement size) and thus present in all groups—even homesigners, who have invented their systems without a conventional language model. However, there is also evidence of emerging crosslinguistic variation in whether a base hand is used to mark the noun-verb contrast. Finally, variation in how movement repetition and base hand are used across Nicaraguan groups offers insight into the pressures that influence the development of a linguistic system. Specifically, early signers of NSL use movement repetition and base hand in ways similar to homesigners but different from signers who entered the NSL community more recently, suggesting that intergenerational transmission to new learners (not just sharing a language with a community) plays a key role in the development of these devices. These results bear not only on the importance of the noun-verb distinction in human communication, but also on how this distinction emerges and develops in a new (sign) language.
The challenge of supporting literacy among deaf children is as much linguistic as educational, since a major stumbling block can be the lack of a firm first language foundation. It is critical to meet this challenge, given the range of serious negative correlates to illiteracy. Students on the campuses of Gallaudet University and Swarthmore College collaborate to address this issue in an inter-institutional course in which we make bimodal-bilingual videobooks designed for enjoyable shared reading activities between deaf children and their caretakers. These videobooks bring a good signing model into the home and help develop a range of essential preliteracy skills.
Typologically, the world's languages vary in how they express universal quantification and negative quantification. In patterns of concord, a single distributive or negative meaning is expressed redundantly on multiple morphological items. Sign languages, too, show semantic variation, but, surprisingly, this variation populates a specific corner of the full typological landscape. When we focus on manual signs, sign languages systematically have distributive concord but tend not to have negative concord in its canonical form. Here, I explain these typological facts as the reflection of an abstract, iconic bias. Recent work on distributive concord and negative concord has proposed that the phenomena can be explained in relation to the discourse referents they make available. The use of space in sign language also invites iconic inferences about the referents introduced in discourse. I show that these iconic inferences coincide with the meaning of distributive concord but contradict the meaning of negative concord. The sign language typology is thus explained based on what is easy and hard to represent in space.
The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature of linguistic diversity in spatial encoding in sign languages has not been rigorously investigated by systematic crosslinguistic comparison. Here, we compare locative expression in two unrelated sign languages, Turkish Sign Language (Türk İşaret Dili, TİD) and German Sign Language (Deutsche Gebärdensprache, DGS), focusing on the expression of FIGURE-GROUND (e.g. cup on table) and FIGURE-FIGURE (e.g. cup next to cup) relationships in a discourse context. In addition to similarities, we report qualitative and quantitative differences between the sign languages in the formal devices used (i.e. unimanual vs. bimanual; simultaneous vs. sequential) and in the degree of iconicity of the spatial devices. Our results suggest that sign languages may display more diversity in the spatial domain than has been previously assumed, and in a way more comparable with the diversity found in spoken languages. The study contributes to a more comprehensive understanding of how space gets encoded in language.
This is the first comparative analysis of prosody in deaf, native-signing children (ages 5;0-8;5) and adults whose first language is American Sign Language (ASL). The goals of this study are to describe the distribution of prosodic cues during acquisition, to determine which cues across age groups are most predictive in determining clausal and prosodic boundaries, and to ascertain how much isomorphy there is in ASL between syntactic and prosodic units. The results show that all cues are acquired compositionally, and that the prosodic patterns in child and adult ASL signers exhibit important differences regarding specific cues; however, in all groups the manual cues are more predictive of prosodic boundaries than nonmanual markers. This is evidence for a division of labor between the cues that are produced to mark constituents and those that contribute to semantic and pragmatic meaning. There is also more isomorphy in adults than in children, so these results add to the debates about isomorphy, suggesting that while there is clear autonomy between prosody and syntax, productions exhibiting nonisomorphy are relatively rare overall.
How do sensory experiences shape the words we learn first? Most studies of language have focused on hearing children learning spoken languages, making it challenging to know how sound and language modality might contribute to language learning. This study investigates how perceptual and semantic features influence early vocabulary acquisition in deaf children learning American Sign Language and hearing children learning spoken English. Using vocabulary data from parent-report inventories, we analyzed 214 nouns common to both languages to compare the types of meanings associated with earlier Age of Acquisition. Results revealed that while children in both groups were earlier to acquire words that were more strongly related to the senses, the specific types of sensory meaning varied by language modality. Hearing children learned words with sound-related features earlier than other words, while deaf children learned words with visual and touch-related features earlier. This suggests that the easiest words to learn are words with meanings that children can experience first-hand, which varies based on children’s own sensory access and experience. Studying the diverse ways children acquire language, in this case deaf children, is key to developing language learning theories that reflect all learners.
Within the domains of morphosyntax and syntax, sign languages have been shown to share many interesting properties with spoken languages. At the same time, it has been demonstrated that sign languages are not a homogeneous group. Rather, they differ from each other structurally and, what is more, the attested differences often align with typological patterns that have been identified based on the study of spoken languages. In this chapter, we offer a discussion of selected syntactic phenomena that have been studied from a cross-modal perspective, drawing data from a wide variety of sign languages. We address linearization issues (e.g., constituent order), wh-questions, and various types of complex sentences, and for each of these topics we evaluate to what extent it is shaped by the affordances of the visual-spatial modality. The general picture that emerges is that sign languages – notwithstanding certain modality-specific characteristics – generally exhibit structural complexity and variation fully on a par with spoken languages. This, in turn, strongly suggests that formal models developed for spoken languages can and should be applied to sign languages.
Early language development has rarely been studied in hearing children with deaf parents who are exposed to both a spoken and a signed language (bimodal bilinguals). This study presents longitudinal data of early communication and vocabulary development in a group of 31 hearing infants exposed to British Sign Language (BSL) and spoken English, at 6 months, 15 months, 24 months and 7 years, in comparison with monolinguals (exposed to English) and unimodal bilinguals (exposed to two spoken languages). No differences were observed in early communication or vocabulary development between bimodal bilinguals and monolinguals, but greater early communicative skills in infancy were found in bimodal bilinguals compared to unimodal bilinguals. Within the bimodal bilingual group, BSL and English vocabulary sizes were positively related. These data provide a healthy picture of early language acquisition in those learning a spoken and signed language simultaneously from birth.
Describe the challenges children face in learning language; understand key features of child language development; explain the strategies children use to learn sounds, words, and grammar.
This chapter provides a tour of several additional forms of human language communication apart from spoken language. Visual speech (which also contributes to audiovisual speech) requires not only visual cortex, but regions such as posterior temporal sulcus which may help integrate signals across modality. Nonverbal communication, including productions such as crying or laughter, relates to activity in the superior temporal lobes but also in other regions including the cingulate cortex and insula. Reading and the ability to decode written language highlight portions of the visual system, including the ventral occipitotemporal cortex (often referred to as the visual word form area, or VWFA). Learning to read is a complex process that involves written language, knowledge of speech sounds, and motivation. Co-speech gestures are present in children’s language development and can convey semantic information alongside spoken language; integration of such semantic gestures involves left inferior frontal gyrus and premotor cortex.
We present an overview of constructional approaches to signed languages, beginning with a brief history and the pioneering work of William C. Stokoe. We then discuss construction morphology as an alternative to prior analyses of sign structure that posited a set of non-compositional lexical signs and a distinct set of classifier signs. Instead, signs are seen as composed of morphological schemas containing both specific and schematic aspects of form and meaning. Grammatical construction approaches are reviewed next, including the marking of argument structure on verbs in American Sign Language (ASL). Constructional approaches have been applied to the issue of the relation between sign and gesture across a variety of expressions. This work often concludes that signs and gesture interact in complex ways. In the final section, we present an extended discussion of several grammatical and discourse phenomena using a constructional analysis based on Cognitive Grammar. The data come from Argentine Sign Language (LSA) and include pointing constructions, agreement constructions, antecedent-anaphor relations, and constructions presenting point of view in reported narrative.
In this chapter we consider aspects of phonology for bimodal bilinguals, whose languages span distinct modalities (spoken/signed/written). As for other bilinguals, the primary issues concern the representation of the phonology for each language individually, ways that the phonological representations interact with each other (in grammar and in processing), and the development of the two phonologies, for children developing as simultaneous bilinguals or for learners of a second language in a second modality. Research on these topics has been sparse, and some of them have hardly been explored at all. Findings so far indicate that despite the modality difference between their two languages, phonological interactions still occur for bimodal bilinguals, providing crucial data for linguistic theories about the locus and mechanisms of such interactions, and important practical implications for language learners.
The history and parameters of specialized dictionaries are of necessity somewhat vague, as they invite examination of the role and content of a dictionary. This chapter looks at works ranging from hard-word dictionaries to treatises on communicating via naval flags and telegraphic code books, all of which might be categorized as specialized dictionaries.
Even though the word has been around for over one thousand years, bitch has proven that an old dog can be taught new tricks. Over the centuries, bitch has become a linguistic chameleon with many different meanings and uses. Bitch has become a shape-shifter too, morphing into modern slang spellings like biatch, biznatch, and betch. Bitch is a versatile word. It can behave like a noun, an adjective, a verb, or an interjection, while it also makes a cameo appearance in lots of idioms. Bitch can be a bitch of a word. Calling someone a bitch once seemed to be a pretty straightforward insult, but today – after so many variations, reinventions, and attempts to reclaim the word – it’s not always clear what bitch really means. Nowadays, the word appears in numerous other languages too, from Arabic and Japanese to Spanish and Zulu. This chapter takes a look at bitch in the present day, and beyond.
Phonology has generally been neglected as a nexus of philosophical interest despite certain debates within the field both inviting and needing philosophical reflection. Yet, the few who have attempted such inquiry have noted something special about the field and its target. On the one hand, it shares formal and structural aspects with syntax. On the other, it seems to require more literal interpretation in terms of components such as hierarchy and sequential ordering. In this chapter, the nature of the phoneme, the theoretical centrepiece of traditional phonology, comes under scrutiny. The notion, as well as the field itself, is extended to other modalities, such as sign, in accordance with the contemporary trajectory of the field. This extension, and the connections with language and gesture in general, open up the possibility of a philosophical action theory with phonology as its basis. Motor and action theory have been proffered recently in connection with syntax, with little success. However, it’s argued that phonology serves as a better point of comparison. The chapter discusses a range of issues from autosegmental phonology, feature grammar, and sign language, to gestural grammar, motor cognition, and recent 4E approaches to cognition.
Recurrent gestures are stabilized forms that embody a practical knowledge of dealing with different communicative, interactional, cognitive, and affective tasks. They are often derived from practical actions and engage in semantic and pragmatic meaning-making. They occupy a place between spontaneous (singular) gestures and emblems on a continuum of increasing stabilization. The chapter reconstructs the beginnings of research on recurrent gestures and illuminates different disciplinary perspectives that have explored processes of their emergence and stabilization, as well as facets of their communicative potential. The early days of recurrent-gesture research focused on the identification of single specimens and on the refinement of descriptive methods. In recent years, their role in self-individuation, their social role, and their relationship to signs of sign language have become a focus of interest. The chapter explores the individual, the linguistic, and the cultural side of recurrent gestures. Recurrent gestures are introduced as sedimented individual and social practices, as revealing the linguistic potential of gestures, and as a type that forms culturally shared repertoires.
The classical approach to gesture and sign language analysis focuses on the forms and locations of the hands. This constitutes an external point of view on the gesturing subject. The kinesiological approach presented in this chapter looks at gesture from the inside out, at how it is produced, taking a first-person perspective. This involves a physiological description of the parts of the body that are moving (the segments) and the joints at which they can move (providing the degrees of freedom of movement). This type of analysis allows one to distinguish proper movement of a segment from displacement caused by movement of another segment. Movement is classified according to muscular properties such as flexion versus extension, abduction versus adduction, exterior versus interior rotation, and supination versus pronation. The propagation of movement in the body is considered in terms of its flow across connected segments, from more proximal to more distal segments or vice versa. These distinctions differentiate functions of gestures (e.g. showing that you don’t care vs. expressing negation) and meanings of signs in a sign language.
Gestures associated with negation have become a well-defined area for gesture studies research. The chapter offers an overview of this area, identifies distinct empirical lines of enquiry, and highlights their contribution to aspects of linguistic and embodiment theory. After relating a surge of interest in this topic to the notion of recurrent gestures (but not restricted to it), the chapter offers a visualization of the widespread geographical coverage of studies of gestures associated with negation, then distils a set of common observations concerning the form, organizational properties, and functions of such gestures. This area of research is then further thematized by exploring distinct chains of studies that have adopted linguistic, cognitive-semantic, functional, psycholinguistic, comparative, and cultural perspectives to analyze the gestural expression of negation. Studies of gestures associated with negation are shown to have played a vital role in shaping understandings of the multimodality of grammar, the embodiment of cognition, and the relations between gestures and sign.