A short summary of the NONMANUAL project:
In addition to the hands, sign languages use positions and movements of other articulators (the body, the head, the mouth, the eyebrows, the eyes, and the eyelids) to convey lexical, grammatical, and prosodic information. This linguistic use of the nonmanual articulators is known as nonmanuals. Contrary to current assumptions in the field of sign linguistics, this project hypothesizes that all sign languages draw on the same basic universal building blocks (nonmanual movements), but that languages differ in how they combine these building blocks both sequentially and simultaneously, as well as in the regularity, frequency, and alignment properties of the nonmanuals.
To test this hypothesis, the project will investigate the formal properties of nonmanuals in five geographically, historically, and socially diverse sign languages, using data from published naturalistic corpora of those languages, Computer Vision to extract measurements of the movement of nonmanual articulators, and the statistical techniques of Non-linear Mixed Effects Modelling and Functional Data Analysis for a quantitative comparison of dynamic nonmanual contours. This will result in the first quantitative formal typology of nonmanuals grounded in naturalistic corpus data. The novel methodology proposed in this project requires testing, adjustment, and development, which constitutes an important component of the project; the resulting methodological pipeline will be a secondary output, enabling large-scale, reliable quantitative research on nonmanuals in the future.
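As a rough illustration of the kind of quantitative contour comparison described above, the following is a minimal Python sketch, not the project's actual pipeline: toy "eyebrow height over time" contours (which in the real project would come from Computer Vision measurements) are resampled onto a shared time grid, averaged per language, and compared with a simple discretised L2 distance. All names and data here are hypothetical, and the resampling/averaging stands in for the far richer Functional Data Analysis and mixed-effects machinery the project would use.

```python
# Hypothetical sketch: comparing dynamic nonmanual contours across languages.
# Each contour is a list of (time, value) samples, e.g. eyebrow height.

def resample(contour, grid):
    """Linearly interpolate a (time, value) contour onto the points in `grid`.

    Assumes the contour's times are sorted and span the whole grid.
    """
    out = []
    for t in grid:
        # find the pair of samples bracketing time t and interpolate
        for (t0, v0), (t1, v1) in zip(contour, contour[1:]):
            if t0 <= t <= t1:
                w = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                out.append(v0 + w * (v1 - v0))
                break
    return out

def mean_contour(contours, grid):
    """Pointwise mean of several contours after resampling to `grid`."""
    resampled = [resample(c, grid) for c in contours]
    return [sum(vs) / len(vs) for vs in zip(*resampled)]

def l2_distance(a, b):
    """Discretised L2 distance between two contours on the same grid."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Toy contours for two hypothetical sign languages (times in [0, 1]).
lang_a = [[(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)],
          [(0.0, 0.1), (0.4, 0.9), (1.0, 0.1)]]
lang_b = [[(0.0, 0.0), (0.8, 1.0), (1.0, 0.5)]]

grid = [i / 10 for i in range(11)]
d = l2_distance(mean_contour(lang_a, grid), mean_contour(lang_b, grid))
print(round(d, 3))
```

In a realistic setting the contours would first be time-normalised or registered (aligning peaks across signers and tokens) before any averaging, which is exactly the kind of alignment question Functional Data Analysis is designed to handle.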
Finally, the established typology of the formal properties of nonmanuals in the five sign languages will serve as a basis for a cross-modal comparison between nonmanuals and prosody/intonation in spoken languages, in order to separate truly universal features of the human linguistic capacity from effects of the visual vs. auditory modality.