Analyzing Literary Texts in Lithuanian Sign Language
We have recently published a short paper titled "Analyzing Literary Texts in Lithuanian Sign Language with Computer Vision: A Proof of Concept".
Previous research has found that literary texts in sign languages exhibit a number of features that distinguish them from non-literary texts, including phonetic features such as hand movement amplitude, symmetry, and enhanced use of nonmanual markers (facial expressions and body movements). However, this line of research has largely relied on manual annotation and observation.
We use Computer Vision (MediaPipe Holistic) to automatically extract measurements from video recordings in order to quantify such phonetic features. The data set comprises 2D video recordings of five short literary pieces in Lithuanian Sign Language that were recorded by their authors (the originals) and then retold by the same authors in a non-literary form (as “prose” retellings). We extract landmark coordinates from the videos and measure the following features: variation in hand movement amplitude; symmetry as a comparative measure of the activity of the two hands; sideward body leans; and eyebrow movement. We also discuss the preprocessing steps needed before the measurements can be used, namely filtering and normalization by body size. Comparing the original literary pieces with their retellings on these features, we find that the originals clearly have more and larger hand movements, larger sideward body leans, and more eyebrow movement.
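To give a flavor of the preprocessing described above, here is a minimal sketch of normalizing landmark coordinates by body size, filtering out jitter, and measuring amplitude variation. The landmark data here is synthetic and the shoulder-width normalization and moving-average filter are illustrative assumptions, not the paper's exact pipeline; in practice the coordinates would come from MediaPipe Holistic.

```python
import numpy as np

# Hypothetical per-frame (x, y) pixel coordinates for the right wrist and the
# two shoulders, as they might come out of MediaPipe Holistic (synthetic here).
rng = np.random.default_rng(0)
n_frames = 100
right_wrist = np.cumsum(rng.normal(0, 5, size=(n_frames, 2)), axis=0) + 300
left_shoulder = np.tile([250.0, 200.0], (n_frames, 1))
right_shoulder = np.tile([350.0, 200.0], (n_frames, 1))

# Normalization by body size: divide pixel coordinates by shoulder width so
# that signers filmed at different distances become comparable.
shoulder_width = np.linalg.norm(right_shoulder - left_shoulder, axis=1)
wrist_norm = right_wrist / shoulder_width[:, None]

# Simple moving-average filter to suppress landmark jitter before measuring.
def smooth(traj, k=5):
    kernel = np.ones(k) / k
    return np.column_stack(
        [np.convolve(traj[:, i], kernel, mode="valid") for i in range(traj.shape[1])]
    )

wrist_smooth = smooth(wrist_norm)

# Amplitude variation: standard deviation of the smoothed, normalized
# position, one value per axis (horizontal, vertical).
amplitude_var = wrist_smooth.std(axis=0)
print(amplitude_var)
```

A comparison like the one in the paper would compute such values per recording and contrast originals against retellings.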
For example, consider the following figures, which show more (and more varied) horizontal and vertical movement of the right hand, and more eyebrow movement, in the originals.
However, we do not find clear symmetry differences between the originals and the retellings. The study is a proof of concept for the use of Computer Vision in the phonetic analysis of sign languages, including in the context of literary analysis.
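As an illustration of what "symmetry as a comparative measure of the activity of the two hands" could look like, here is a hedged sketch: per-hand activity as total wrist path length, combined into a ratio in [0, 1] where 1 means equally active hands. The trajectories are synthetic and the metric is an assumption for illustration, not necessarily the paper's exact formula.

```python
import numpy as np

# Synthetic wrist trajectories for the two hands (frames x 2 coordinates).
rng = np.random.default_rng(1)
left = np.cumsum(rng.normal(0, 1, size=(200, 2)), axis=0)
right = np.cumsum(rng.normal(0, 1, size=(200, 2)), axis=0)

def activity(traj):
    # Total path length of the wrist trajectory across frames.
    return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()

a_left, a_right = activity(left), activity(right)

# Symmetry ratio: 1.0 when both hands move equally, closer to 0 when one
# hand dominates.
symmetry = min(a_left, a_right) / max(a_left, a_right)
print(symmetry)
```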
Read the full paper here: https://ceur-ws.org/Vol-3431/paper5.pdf