Truedepth Measurements of Facial Expressions: Sensitivity to the Angle Between Camera and Face.
Lyke Esselink, Marloes Oomen, & Floris Roelofsen.
In 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pages 1–5, 2023.
@inproceedings{esselink:23b,
  author = {Esselink, Lyke and Oomen, Marloes and Roelofsen, Floris},
  booktitle = {2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
  title = {Truedepth Measurements of Facial Expressions: Sensitivity to the Angle Between Camera and Face},
  year = {2023},
  pages = {1--5},
  doi = {10.1109/ICASSPW59220.2023.10193107},
  url = {https://doi.org/10.1109/ICASSPW59220.2023.10193107},
  keywords = {Facial expressions, Depth-sensing cameras},
  abstract = {Facial expressions play an important role in communication, especially in sign languages. Linguistic analysis of the exact contribution of facial expressions, as well as the creation of realistic conversational avatars, especially sign language avatars, requires accurate measurements of the facial expressions of humans while engaged in linguistic interaction. Several recent projects have employed a TrueDepth camera to make such measurements. The present paper investigates how reliable this technique is. In particular, we consider the extent to which the obtained measurements are affected by the angle between the camera and the face. Overall, we find that there are generally significant, and often rather substantial differences between measurements from different angles. However, when the measured facial features are highly activated, measurements from different angles are generally strongly correlated.}
}
Exploring automatic text-to-sign translation in a healthcare setting.
Lyke Esselink, Floris Roelofsen, Jakub Dotlačil, Shani Mende-Gillings, Maartje de Meulder, Nienke Sijm, & Anika Smeijers.
2023. Publication forthcoming in Universal Access in the Information Society.
@unpublished{esselink:23d,
  author = {Esselink, Lyke and Roelofsen, Floris and Dotla{\v{c}}il, Jakub and Mende-Gillings, Shani and Meulder, Maartje de and Sijm, Nienke and Smeijers, Anika},
  title = {Exploring automatic text-to-sign translation in a healthcare setting},
  year = {2023},
  keywords = {manuscript},
  url = {https://www.signlab-amsterdam.nl/publications/Esselink22.pdf},
  abstract = {Communication between healthcare professionals and deaf patients has been particularly challenging during the COVID-19 pandemic. We have explored the possibility to automatically translate phrases that are frequently used in the diagnosis and treatment of hospital patients, in particular phrases related to COVID-19, from Dutch or English to Dutch Sign Language (NGT). The prototype system we developed displays translations either by means of pre-recorded videos featuring a deaf human signer (for a limited number of sentences) or by means of animations featuring a computer-generated signing avatar (for a larger, though still restricted number of sentences). We evaluated the comprehensibility of the signing avatar, as compared to the human signer. We found that, while individual signs are recognized correctly when signed by the avatar almost as frequently as when signed by a human, sentence comprehension rates and clarity scores for the avatar are substantially lower than for the human signer. We identify a number of concrete limitations of the JAsigning avatar engine that underlies our system. Namely, the engine currently does not offer sufficient control over mouth shapes, the relative speed and intensity of signs in a sentence (prosody), and transitions between signs. These limitations need to be overcome in future work for the engine to become usable in practice.},
  note = {Publication forthcoming in Universal Access in the Information Society.}
}
First steps towards a procedure for annotating non-manual markers in sign languages.
Marloes Oomen, Lyke Esselink, Tobias de Ronde, & Floris Roelofsen.
In Suet-Ying Lam, & Satoru Ozaki, editors, NELS53 Proceedings, 2023. Publication forthcoming in NELS 2023 proceedings.
@incollection{oomen:23b,
  author = {Marloes Oomen and Lyke Esselink and Tobias de Ronde and Floris Roelofsen},
  year = {2023},
  title = {First steps towards a procedure for annotating non-manual markers in sign languages},
  booktitle = {NELS53 Proceedings},
  editor = {Suet-Ying Lam and Satoru Ozaki},
  url = {https://www.signlab-amsterdam.nl/publications/NELS53_Proceedings_Oomen.pdf},
  abstract = {We report on the development, application, and evaluation of a procedure for annotating non-manual markers (NMM) in experimentally obtained sign language data. We also share resources to enable other researchers investigating NMM in sign languages or multimodal communication to utilize the annotation protocol we developed in their own research.},
  note = {Publication forthcoming in NELS 2023 proceedings.}
}
Distractor-Based Evaluation of Sign Spotting.
Natalie Hollain, Martha Larson, & Floris Roelofsen.
In 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pages 1–5, 2023.
@inproceedings{hollain:23a,
  author = {Hollain, Natalie and Larson, Martha and Roelofsen, Floris},
  booktitle = {2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
  title = {Distractor-Based Evaluation of Sign Spotting},
  year = {2023},
  pages = {1--5},
  doi = {10.1109/ICASSPW59220.2023.10193484},
  url = {https://doi.org/10.1109/ICASSPW59220.2023.10193484},
  keywords = {Sign spotting, Evaluation, Distractors},
  abstract = {Sign spotting is a subtask of sign language processing in which we determine when a given target sign occurs in a given sign sequence. This paper proposes a method for evaluating sign spotting systems, which we argue to be more reflective of the degree to which a system would satisfy the user’s requirements in practice than previously proposed evaluation methods. To deal with an incomplete ground truth, we introduce the concept of distractors: signs which are similar to the target sign according to a given distance measure. We assume that the performance of a sign spotting model when distinguishing a given target sign from the associated distractors will reflect the performance of the model on the complete ground truth. We develop a sign spotting model to demonstrate our evaluation method.}
}
Analyzing the Potential of Linguistic Features for Sign Spotting: A Look at Approximative Features.
Natalie Hollain, Martha Larson, & Floris Roelofsen.
In Proceedings of the Second International Workshop on Automatic Translation for Signed and Spoken Languages, pages 1–10, Tampere, Finland, June 2023. European Association for Machine Translation.
@inproceedings{hollain:23b,
  title = {Analyzing the Potential of Linguistic Features for Sign Spotting: A Look at Approximative Features},
  author = {Hollain, Natalie and Larson, Martha and Roelofsen, Floris},
  booktitle = {Proceedings of the Second International Workshop on Automatic Translation for Signed and Spoken Languages},
  month = jun,
  year = {2023},
  address = {Tampere, Finland},
  publisher = {European Association for Machine Translation},
  url = {https://aclanthology.org/2023.at4ssl-1.1},
  pages = {1--10},
  abstract = {Sign language processing is the field of research that aims to recognize, retrieve, and spot signs in videos. Various approaches have been developed, varying in whether they use linguistic features and whether they use landmark detection tools or not. Incorporating linguistics holds promise for improving sign language processing in terms of performance, generalizability, and explainability. This paper focuses on the task of sign spotting and aims to expand on the approximative linguistic features that have been used in previous work, and to understand when linguistic features deliver an improvement over landmark features. We detect landmarks with Mediapipe and extract linguistically relevant features from them, including handshape, orientation, location, and movement. We compare a sign spotting model using linguistic features with a model operating on landmarks directly, finding that the approximate linguistic features tested in this paper capture some aspects of signs better than the landmark features, while they are worse for others.}
}
A Crosslinguistic Database for Combinatorial and Semantic Properties of Attitude Predicates.
Deniz Özyıldız, Ciyang Qing, Floris Roelofsen, Maribel Romero, & Wataru Uegaki.
In Proceedings of the 5th Workshop on Research in Computational Linguistic Typology and Multilingual NLP, pages 65–75, 2023.
@inproceedings{ozyildiz2023crosslinguistic,
  title = {A Crosslinguistic Database for Combinatorial and Semantic Properties of Attitude Predicates},
  author = {{\"O}zy{\i}ld{\i}z, Deniz and Qing, Ciyang and Roelofsen, Floris and Romero, Maribel and Uegaki, Wataru},
  booktitle = {Proceedings of the 5th Workshop on Research in Computational Linguistic Typology and Multilingual {NLP}},
  pages = {65--75},
  year = {2023},
  url = {https://aclanthology.org/2023.sigtyp-1.7/},
  abstract = {We introduce a cross-linguistic database for attitude predicates, which references their combinatorial (syntactic) and semantic properties. Our data allows assessment of cross-linguistic generalizations about attitude predicates as well as discovery of new typological/cross-linguistic patterns. This paper motivates empirical and theoretical issues that our database will help to address, the sample predicates and the properties that it references, as well as our design and methodological choices. Two case studies illustrate how the database can be used to assess validity of cross-linguistic generalizations.}
}
Focused NPIs in statements and questions.
Sunwoo Jeong, & Floris Roelofsen.
Journal of Semantics, 40(1): 1–68, 2023.
@article{jeong2023focused,
  title = {Focused NPIs in statements and questions},
  author = {Jeong, Sunwoo and Roelofsen, Floris},
  journal = {Journal of Semantics},
  volume = {40},
  number = {1},
  pages = {1--68},
  year = {2023},
  publisher = {Oxford University Press},
  url = {https://academic.oup.com/jos/article-pdf/40/1/1/50326592/ffac014.pdf},
  abstract = {Negative Polarity Items (NPIs) with emphatic prosody such as ANY or EVER, and minimizers such as lift a finger or sleep a wink are known to generate particular contextual inferences that are absent in the case of non-emphatic NPIs such as unstressed any or ever. It remains an open question, however, what the exact status of these inferences is and how they come about. In this paper, we analyze these cases as NPIs bearing focus, and examine the interaction between focus semantics and the lexical semantics of NPIs across statements and questions.}
}
Quexistentials and Focus.
Kees Hengeveld, Sabine Iatridou, & Floris Roelofsen.
Linguistic Inquiry, 54(3): 571–624, June 2023.
@article{10.1162/ling_a_00441,
  author = {Hengeveld, Kees and Iatridou, Sabine and Roelofsen, Floris},
  title = {Quexistentials and Focus},
  journal = {Linguistic Inquiry},
  volume = {54},
  number = {3},
  pages = {571--624},
  year = {2023},
  month = jun,
  issn = {0024-3892},
  doi = {10.1162/ling_a_00441},
  url = {https://doi.org/10.1162/ling_a_00441},
  eprint = {https://direct.mit.edu/ling/article-pdf/54/3/571/2136750/ling_a_00441.pdf},
  abstract = {Many languages have words that can be interpreted either as question words or as existentials. We call such words quexistentials. It has been claimed in the literature (e.g., Haida 2007) that, across languages, quexistentials are (a) always focused on their interrogative interpretation and (b) never focused on their existential interpretation. We refer to this as the quexistential-focus biconditional. The article makes two contributions. The first is that we offer a possible explanation for one direction of the biconditional: the fact that quexistentials are generally contrastively focused on their interrogative use. We argue that this should be seen as a particular instance of an even more general fact—namely, that interrogative words (quexistential or not) are always contrastively focused—and propose an account for this fact. The second contribution of the article concerns the other direction of the biconditional. We present evidence that, at least at face value, suggests that focus on a quexistential does not necessarily preclude an existential interpretation. Specifically, we show that it is possible for Dutch wat to be interpreted existentially even when it is focused. We attempt to explain this phenomenon.}
}