2023 (24)

Morphology in sign languages: Theoretical issues and typological contrasts. Roland Pfau & Markus Steinbach. In Peter Ackema, Sabrina Bendjaballah, Eulàlia Bonet & Antonio Fábregas (eds.), The Wiley Blackwell companion to morphology, pages 1–37. Oxford: Wiley Blackwell, 2023.

@incollection{pfau:23,
  title = {Morphology in sign languages: Theoretical issues and typological contrasts},
  author = {Pfau, Roland and Steinbach, Markus},
  publisher = {Oxford: Wiley Blackwell},
  booktitle = {The Wiley Blackwell companion to morphology},
  editor = {Ackema, Peter and Bendjaballah, Sabrina and Bonet, Eulàlia and Fábregas, Antonio},
  pages = {1--37},
  year = {2023},
  url = {https://www.signlab-amsterdam.nl/publications/Pfau_Steinbach_2023_Wiley.pdf},
  doi = {10.1002/9781119693604.morphcom048},
  keywords = {Morphology}
}

Negative Concord in Sign Language of the Netherlands: a journey through a corpus. Cindy van Boven, Marloes Oomen, Roland Pfau & Lotte Rusch. In Advances in sign language corpus linguistics, pages 30–65. John Benjamins Publishing Company, 2023.

@incollection{boven:23a,
  title = {Negative Concord in Sign Language of the Netherlands: a journey through a corpus},
  author = {Boven, Cindy van and Oomen, Marloes and Pfau, Roland and Rusch, Lotte},
  publisher = {John Benjamins Publishing Company},
  booktitle = {Advances in sign language corpus linguistics},
  pages = {30--65},
  year = {2023},
  url = {https://doi.org/10.1075/scl.108.02van},
  doi = {10.1075/scl.108.02van},
  keywords = {Corpus NGT, Cross-modal typology, Doubling, Negation, Negative Concord, Variation},
  abstract = {In a Negative Concord (NC) configuration, two negative elements co-occur in a clause but the polarity of that clause still remains negative. NC involving two manual negators has been observed in various sign languages, but relevant examples are usually presented in the context of broader investigations on negation in a particular sign language. Also, examples are not usually drawn from natural discourse. In this chapter, we offer the first comprehensive study on NC in a single sign language, namely, the Sign Language of the Netherlands (NGT), based entirely on corpus data. We find that NC is attested in NGT, but that it is optional and rather infrequent. First, our contribution is of a typological nature, as we distinguish different types of NC and compare our corpus-based results with those reported for other signed and spoken languages. Second, we describe in detail our “journey”, that is, our search procedure and inclusion criteria, thereby offering methodological guidelines for future endeavors.}
}

Nominal plurals in Sign Language of the Netherlands: Accounting for allomorphy and variation. Cindy van Boven, Silke Hamann & Roland Pfau. Glossa: a journal of general linguistics, 8(1): 1–47, 2023.

@article{boven:23b,
  title = {Nominal plurals in Sign Language of the Netherlands: Accounting for allomorphy and variation},
  author = {Boven, Cindy van and Hamann, Silke and Pfau, Roland},
  journal = {Glossa: a journal of general linguistics},
  volume = {8},
  number = {1},
  pages = {1--47},
  year = {2023},
  url = {https://www.glossa-journal.org/article/id/9686/},
  doi = {10.16995/glossa.9686},
  keywords = {Allomorphy, Corpus, Plural reduplication, Sign Language of the Netherlands, Stochastic OT, Variation},
  abstract = {In both signed and spoken languages, reduplication is a common process in the formation of morphologically complex structures, expressing, e.g., plurality and certain aspectual meanings. A framework in which spoken language reduplication has been formalized frequently is Optimality Theory (OT). While an important attribute of OT-constraints is their universality, to date, the question to what extent such constraints are modality-independent, and thus work for sign language reduplication as well, remains largely unanswered. In the present study, we offer the first OT-formalization of plural reduplication in Sign Language of the Netherlands (NGT). The NGT-data reveal that this language features different plural allomorphs, the choice of which depends on phonological properties of the base noun. However, we also identify variation, e.g., all noun types allow for zero marking. In our formalization, we aim to introduce constraints that are maximally modality-independent, using constraint types that have previously been proposed for spoken language reduplication. Our formalization is the first to take into account base-reduplicant faithfulness for a sign language, and also the first to account for variation in sign language data by employing stochastic OT, whereby some noise is added to the ranking value of each constraint at evaluation time. Evaluating the modality-(in)dependence of our proposed account suggests that the types of constraints we employ as well as the evaluation in the spirit of stochastic OT are not specific to a modality, while the featural implementation is inevitably modality-dependent.}
}

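The abstract of the entry above describes evaluation in the spirit of stochastic OT, where noise is added to each constraint's ranking value at evaluation time so that a single grammar can produce variable outputs (for instance, reduplicated versus zero-marked plurals). A minimal Python sketch of that general mechanism follows; the constraint names, ranking values, and violation profiles are invented for illustration and are not taken from the paper.

import random

# Hypothetical constraints with ranking values (higher = ranked higher on average).
ranking_values = {
    "REALIZE-PLURAL": 100.0,  # plural meaning must be overtly marked
    "*REDUPLICATE": 96.0,     # avoid reduplicated forms
    "MAX-BR": 94.0,           # base-reduplicant faithfulness
}

# Hypothetical candidates with their constraint-violation counts.
candidates = {
    "reduplicated plural": {"REALIZE-PLURAL": 0, "*REDUPLICATE": 1, "MAX-BR": 0},
    "zero-marked plural":  {"REALIZE-PLURAL": 1, "*REDUPLICATE": 0, "MAX-BR": 0},
}

def evaluate(noise_sd=2.0):
    """One stochastic OT evaluation: perturb ranking values with Gaussian noise,
    derive a strict constraint hierarchy, and let it pick the winning candidate."""
    noisy = {c: v + random.gauss(0.0, noise_sd) for c, v in ranking_values.items()}
    hierarchy = sorted(noisy, key=noisy.get, reverse=True)
    survivors = list(candidates)
    for constraint in hierarchy:
        best = min(candidates[c][constraint] for c in survivors)
        survivors = [c for c in survivors if candidates[c][constraint] == best]
        if len(survivors) == 1:
            break
    return survivors[0]

# Over many evaluations, closely ranked constraints yield variable winners.
winners = [evaluate() for _ in range(10000)]
for form in candidates:
    print(form, round(winners.count(form) / len(winners), 2))

Because the ranking values of the competing constraints are close, repeated evaluations select both candidates at different rates, which is how stochastic OT models the kind of variation the paper reports.
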
Subject agreement in control and modal constructions in Russian Sign Language: Implications for the hierarchy of person features. Evgeniia Khristoforova. Sign Language & Linguistics, 26(1): 64–116, 2023.

@article{Khristoforova:23a,
  author = {Khristoforova, Evgeniia},
  title = {Subject agreement in control and modal constructions in Russian Sign Language: Implications for the hierarchy of person features},
  journal = {Sign Language \& Linguistics},
  year = {2023},
  volume = {26},
  number = {1},
  pages = {64--116},
  doi = {10.1075/sll.21005.khr},
  url = {https://www.jbe-platform.com/content/journals/10.1075/sll.21005.khr},
  publisher = {John Benjamins},
  issn = {1387-9316},
  type = {Journal Article},
  keywords = {Sign language, Person feature, Finiteness, Control clause, Complement clause, Russian Sign Language, Verbal agreement},
  abstract = {The present research combines three fields of inquiry in sign language linguistics: verbal agreement, person features, and syntactic complexity. These topics have previously been addressed in isolation, but little is known about their interaction. This study attempts to fill this gap by investigating subject agreement in complement clauses in Russian Sign Language. By means of corpus investigation and grammaticality judgments, I found that subject agreement in clausal complements of the control predicates TRY, LOVE, WANT, BEGIN, and modal CAN may be deficient – in particular, it can be reduced to the forms identical to first-person marking even in the case of a third-person subject controller. Deficient subject agreement in complement clauses is thus reminiscent of non-finite verbal forms in spoken languages. I further argue that the choice of first-person forms in deficient agreement reveals a default status of first person in sign languages, which is consistent with proposals regarding the modality-specific properties of first-person reference in these languages.}
}

The Implicational Complementation Hierarchy and size restructuring in the Sign Language of the Netherlands. Evgeniia Khristoforova. In Suet-Ying Lam & Satoru Ozaki (eds.), Proceedings of the Fifty-Third Annual Meeting of the North East Linguistic Society, volume 2, page 119. 2023.

@incollection{Khristoforova:23b,
  author = {Khristoforova, Evgeniia},
  year = {2023},
  title = {The Implicational Complementation Hierarchy and size restructuring in the Sign Language of the Netherlands},
  booktitle = {Proceedings of the Fifty-Third Annual Meeting of the North East Linguistic Society},
  editor = {Suet-Ying Lam and Satoru Ozaki},
  url = {https://www.signlab-amsterdam.nl/publications/Khristoforova_NELS53_2023.pdf},
  volume = {2},
  pages = {119}
}

Indexicals under role shift in Sign Language of the Netherlands. Evgeniia Khristoforova & David Blunier. FEAST. Formal and Experimental Advances in Sign language Theory, 5: 63–75, 2023.

@article{Khristoforova:23c,
  title = {Indexicals under role shift in Sign Language of the Netherlands},
  author = {Khristoforova, Evgeniia and Blunier, David},
  journal = {FEAST. Formal and Experimental Advances in Sign language Theory},
  volume = {5},
  pages = {63--75},
  year = {2023},
  url = {https://raco.cat/index.php/FEAST/article/view/422436},
  doi = {10.31009/FEAST.i5.06}
}

Syntactic functions of nonmanuals in Russian Sign Language. Svetlana Burkova, Evgeniia Khristoforova & Vadim Kimmelman. In Advances in sign language corpus linguistics, pages 90–122. John Benjamins Publishing Company, 2023.

@incollection{Burkova:23,
  title = {Syntactic functions of nonmanuals in Russian Sign Language},
  author = {Burkova, Svetlana and Khristoforova, Evgeniia and Kimmelman, Vadim},
  publisher = {John Benjamins Publishing Company},
  booktitle = {Advances in sign language corpus linguistics},
  pages = {90--122},
  year = {2023},
  url = {https://doi.org/10.1075/scl.108.04bur},
  doi = {10.1075/scl.108.04bur},
  keywords = {Russian Sign Language, Nonmanuals, Questions, Topics, Conditionals, Concessives},
  abstract = {This chapter presents the Russian Sign Language (RSL) Corpus and demonstrates its capabilities as a research tool by summarizing three corpus-based studies primarily focused on syntactic functions of nonmanual markers. The first study considers question marking in regular wh-questions and in question-answer pairs. It shows that the two constructions have very different nonmanual markers. The second study analyzes marking of topics in RSL, and shows that nonmanual markers of topics are typologically common, but are infrequent in naturalistic corpus data. The third study investigates conditional and concessive constructions in RSL. It demonstrates that these constructions make extensive and frequent use of nonmanual markers, but that no single marker is specialized for the function of expressing conditional or concessive meaning. Instead, complex combinations of multiple markers are employed in these constructions. All three studies also contribute to sign language typology by providing novel descriptions of syntactic and discourse phenomena in RSL.}
}

Agent-Backgrounding in Sign Language of the Netherlands: A Corpus Investigation. Anna-Lina Mörking. Bachelor's thesis, University of Amsterdam, 2023.

@unpublished{morking:23,
  title = {Agent-Backgrounding in Sign Language of the Netherlands: A Corpus Investigation},
  author = {M{\"o}rking, Anna-Lina},
  year = {2023},
  url = {https://www.signlab-amsterdam.nl/publications/Morking_thesis.pdf},
  keywords = {bachelorsthesis},
  abstract = {In agent-backgrounding constructions the causer of a linguistic event is pushed out of the focus, that is, it is backgrounded. In this thesis, for the first time, agent-backgrounding strategies were researched in Sign Language of the Netherlands (NGT). This is a corpus-based investigation of impersonalisation strategies such as impersonal uses of personal pronouns (e.g. you), dedicated referentially deficient pronouns (e.g. someone), and valency-reducing operations (e.g. passive constructions). The results confirm that many of the same strategies are used in NGT as in other spoken and sign languages. These findings are highly relevant, as agent-backgrounding has not previously been researched in NGT, so our study fills an important gap in the literature.},
  note = {Bachelor's thesis, University of Amsterdam.}
}

Analyzing the ZELF and other reflexive constructions in Sign Language of the Netherlands from a Functional Discourse Grammar perspective: a corpus-based typological and theoretical study. Sybil Vachaudez. Master's thesis, University of Amsterdam, 2023.

@mastersthesis{vachaudez:23,
  title = {Analyzing the ZELF and other reflexive constructions in Sign Language of the Netherlands from a Functional Discourse Grammar perspective: a corpus-based typological and theoretical study},
  author = {Vachaudez, Sybil},
  school = {University of Amsterdam},
  year = {2023},
  url = {https://www.signlab-amsterdam.nl/publications/Vachaudez_thesis.pdf},
  abstract = {This paper presents the first corpus-based study of reflexivity in Sign Language of the Netherlands (henceforth, NGT) and the first study of reflexivity in a sign language from a Functional Discourse Grammar perspective. Importantly, seven reflexive constructions were identified in NGT: i) ZELF constructions with a pronominal pointing sign, ii) ZELF constructions without a pronominal sign, iii) constructions with a reflexivized agreeing verb, iv) constructions with a reflexivized agreeing verb and a pronominal pointing sign, v) EIGEN constructions, vi) constructions with a pronominal pointing sign and vii) constructions with object omission. I argue that the first five constitute specialized reflexive constructions and show that the Functional Discourse Grammar model can successfully account for reflexivity in sign languages and that NGT possesses all three types of reflexive constructions proposed by the model: two-place reflexives, one-place reflexives, and mixed reflexives. Furthermore, two non-reflexive uses of ZELF were found: the possessive use and the anticausative use. I conclude by locating the NGT data within the landscape of reflexivity in both signed and spoken languages, commenting on the cognitive saliency of reflexivity and event participant structure, and raising questions for future research. The data for this study comes from Corpus NGT.}
}

Giving a sign to the next generation: a corpus study of the grammaticalization of GIVE in Sign Language of the Netherlands (NGT). Vivianne Sylvana Joosten. Master's thesis, University of Amsterdam, 2023.

@mastersthesis{joosten:23,
  title = {Giving a sign to the next generation: a corpus study of the grammaticalization of GIVE in Sign Language of the Netherlands (NGT)},
  author = {Joosten, Vivianne Sylvana},
  school = {University of Amsterdam},
  year = {2023},
  url = {https://www.signlab-amsterdam.nl/publications/Joosten_thesis.pdf},
  abstract = {The transfer verb GIVE is a fruitful base for metaphorical extension and grammaticalization across languages. Previous work by Bos (1996/2016) and Couvee and Pfau (2018) has shown that the sign GIVE is used beyond its underlying concrete transfer meaning in Sign Language of the Netherlands (NGT) as well. In this corpus study, I find GIVE in NGT to be used in a prototypical concrete transfer meaning in fewer than 30\% of the instances. Other uses attested in the corpus include (i) abstract transfer of linguistic type items such as INFORMATION, (ii) light verb use as well as serial verb use where GIVE marks a RECIPIENT rather than describing the transfer action, (iii) use in a causative construction, and (iv) passive auxiliary use. I propose two different grammaticalization paths of GIVE in NGT, and evaluate NGT GIVE in a typological context. Extensions of GIVE all have a RECIPIENT-focus in NGT and are comparable to the extensions of GIVE found in other languages, both spoken and signed.}
}

Distractor-Based Evaluation of Sign Spotting. Natalie Hollain, Martha Larson & Floris Roelofsen. In 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pages 1–5, 2023.

@inproceedings{hollain:23a,
  author = {Hollain, Natalie and Larson, Martha and Roelofsen, Floris},
  booktitle = {2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
  title = {Distractor-Based Evaluation of Sign Spotting},
  year = {2023},
  pages = {1--5},
  doi = {10.1109/ICASSPW59220.2023.10193484},
  url = {https://doi.org/10.1109/ICASSPW59220.2023.10193484},
  keywords = {Sign spotting, Evaluation, Distractors},
  abstract = {Sign spotting is a subtask of sign language processing in which we determine when a given target sign occurs in a given sign sequence. This paper proposes a method for evaluating sign spotting systems, which we argue to be more reflective of the degree to which a system would satisfy the user’s requirements in practice than previously proposed evaluation methods. To deal with an incomplete ground truth, we introduce the concept of distractors: signs which are similar to the target sign according to a given distance measure. We assume that the performance of a sign spotting model when distinguishing a given target sign from the associated distractors will reflect the performance of the model on the complete ground truth. We develop a sign spotting model to demonstrate our evaluation method.}
}

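As a rough illustration of the distractor idea summarised in the abstract above: for each target sign, the evaluation keeps only the vocabulary items most similar to it under some distance measure and checks how often the spotting model scores the target above those hard negatives. The Python sketch below is hypothetical; the distance function, scores, and scoring interface are placeholders, not the authors' implementation.

from typing import Callable, Dict, List

def select_distractors(target: str,
                       vocabulary: List[str],
                       distance: Callable[[str, str], float],
                       k: int = 5) -> List[str]:
    """Distractors = the k signs most similar to the target under `distance`."""
    others = [sign for sign in vocabulary if sign != target]
    return sorted(others, key=lambda sign: distance(target, sign))[:k]

def distractor_accuracy(target: str,
                        distractors: List[str],
                        spotting_score: Callable[[str], float]) -> float:
    """Fraction of distractors scored below the target for a given video segment.

    `spotting_score(sign)` stands in for the model's confidence that `sign`
    occurs in the segment; higher means more likely present."""
    target_score = spotting_score(target)
    return sum(spotting_score(d) < target_score for d in distractors) / len(distractors)

# Toy usage with a made-up distance (shared letters) and made-up model scores.
vocab = ["HOUSE", "HOME", "HORSE", "TREE", "WATER"]
letter_distance = lambda a, b: -len(set(a) & set(b))
scores: Dict[str, float] = {"HOUSE": 0.9, "HOME": 0.4, "HORSE": 0.7, "TREE": 0.1, "WATER": 0.2}
distractors = select_distractors("HOUSE", vocab, letter_distance, k=2)
print(distractors, distractor_accuracy("HOUSE", distractors, scores.get))
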
Analyzing the Potential of Linguistic Features for Sign Spotting: A Look at Approximative Features. Natalie Hollain, Martha Larson & Floris Roelofsen. In Proceedings of the Second International Workshop on Automatic Translation for Signed and Spoken Languages, pages 1–10, Tampere, Finland, June 2023. European Association for Machine Translation.

@inproceedings{hollain:23b,
  title = {Analyzing the Potential of Linguistic Features for Sign Spotting: A Look at Approximative Features},
  author = {Hollain, Natalie and Larson, Martha and Roelofsen, Floris},
  booktitle = {Proceedings of the Second International Workshop on Automatic Translation for Signed and Spoken Languages},
  month = {jun},
  year = {2023},
  address = {Tampere, Finland},
  publisher = {European Association for Machine Translation},
  url = {https://aclanthology.org/2023.at4ssl-1.1},
  pages = {1--10},
  abstract = {Sign language processing is the field of research that aims to recognize, retrieve, and spot signs in videos. Various approaches have been developed, varying in whether they use linguistic features and whether they use landmark detection tools or not. Incorporating linguistics holds promise for improving sign language processing in terms of performance, generalizability, and explainability. This paper focuses on the task of sign spotting and aims to expand on the approximative linguistic features that have been used in previous work, and to understand when linguistic features deliver an improvement over landmark features. We detect landmarks with Mediapipe and extract linguistically relevant features from them, including handshape, orientation, location, and movement. We compare a sign spotting model using linguistic features with a model operating on landmarks directly, finding that the approximate linguistic features tested in this paper capture some aspects of signs better than the landmark features, while they are worse for others.}
}

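The abstract above mentions extracting approximate handshape, orientation, location, and movement features from detected landmarks. The Python sketch below illustrates what such features could look like, assuming hand and upper-body landmarks are already available as (x, y, z) arrays; the landmark indices follow MediaPipe's layout, but both the indices and the feature definitions are assumptions for illustration, not the paper's actual feature set.

import numpy as np

def approximate_features(hand: np.ndarray, pose: np.ndarray) -> dict:
    """Rough handshape / orientation / location proxies for one frame.

    hand: (21, 3) hand landmarks; wrist = 0, fingertips = 4, 8, 12, 16, 20 (assumed).
    pose: (33, 3) body landmarks; shoulders = 11 and 12 (assumed).
    """
    wrist = hand[0]
    fingertips = hand[[4, 8, 12, 16, 20]]
    shoulder_mid = pose[[11, 12]].mean(axis=0)
    scale = np.linalg.norm(pose[11] - pose[12]) + 1e-8  # shoulder width for normalisation

    # Handshape proxy: fingertip-to-wrist distances (extended vs. closed fingers).
    handshape = np.linalg.norm(fingertips - wrist, axis=1) / scale
    # Orientation proxy: unit normal of the palm plane (wrist, index base, pinky base).
    palm_normal = np.cross(hand[5] - wrist, hand[17] - wrist)
    orientation = palm_normal / (np.linalg.norm(palm_normal) + 1e-8)
    # Location proxy: wrist position relative to the midpoint between the shoulders.
    location = (wrist - shoulder_mid) / scale

    return {"handshape": handshape, "orientation": orientation, "location": location}

def movement(prev_hand: np.ndarray, hand: np.ndarray) -> np.ndarray:
    """Movement proxy: frame-to-frame displacement of the wrist."""
    return hand[0] - prev_hand[0]

Per-frame feature vectors of this kind could then be stacked over time and compared against a template for the target sign, which is one plausible way a spotting model might consume them.
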
Towards Explainable Sign Spotting Systems: an Exploration of Approximative Linguistic Features and Evaluation Methods. Natalie Hollain. Master's thesis, Radboud University, 2023.

@mastersthesis{hollain:23c,
  title = {Towards Explainable Sign Spotting Systems: an Exploration of Approximative Linguistic Features and Evaluation Methods},
  author = {Hollain, Natalie},
  school = {Radboud University},
  year = {2023},
  url = {https://www.signlab-amsterdam.nl/publications/Natalie-Hollain-thesis.pdf},
  abstract = {This research project carried out an initial exploration into how sign spotting, the task of detecting when a target sign occurs in a given video, can be performed in a more explainable manner. Explainability demands that a system is correct, robust and interpretable to humans [1], [2]. Inspired by domain knowledge being used to increase the explainability and interpretability of systems in other domains [3], [4], we investigate the possibility of using a knowledge-based approach for sign spotting. One manner in which knowledge about sign language can be incorporated into sign language systems is through linguistic insights. Current sign spotting systems typically do not make use of such knowledge [5], thus limiting their interpretability. Similarly, evaluation methods for sign spotting do not draw on linguistic knowledge, resulting in a lack of explainability since they fail to robustly estimate model performance given an incomplete ground truth. Updates to the known ground truth, in particular the addition of challenging sign annotations, can significantly alter the estimated performance. Moreover, current evaluations do not reflect user expectations for sign spotting systems because spottings are allowed to occur after a relevant segment has already started. Users thus have to put in effort, such as rewinding the video, to watch the full relevant segment, which was found to not reflect what users expect [6]. The goal of this thesis is to address these limitations using a knowledge-based approach. We incorporate linguistic knowledge about sign language into a sign spotting system and evaluation method. We aim to enhance explainability by enabling a sign spotting analysis based on linguistic insights. Furthermore, we develop linguistic features to ensure our model uses knowledge-based inputs as the basis for its decision-making. In this way, we hope to increase the explainability of current methods. To address the need for explainable sign spotting systems, we implemented features for a sign spotting model that approximate the basic phonological properties of signs, including handshape, orientation, location and movement [7], [8]. Our features are extracted from landmarks, which are keypoints in the body, such as the fingertips and shoulders, that we detected using a landmark detection tool. As far as we are aware, we are the first to implement a sign spotting model which extracts such features from landmarks. By taking into account the four basic phonological properties, we aim to create explainable sign representations for our model to encode. As a result, it is possible to perform a failure analysis for our model that is facilitated by the linguistic features. To address the need for explainable evaluation methods for sign spotting, we developed an evaluation that is rooted in the concept of tolerance to irrelevance (TTI) [9]. TTI builds on the assumption that users, given an entry point in a video or audio stream, keep watching or listening until their tolerance to irrelevant content is reached. Through this means, our evaluation method reflects the effort it takes for users to use a sign spotting system. However, TTI, like existing sign spotting evaluations, relies on a full ground truth to reliably determine a model’s performance, which may not be available for a sign spotting dataset. We address this limitation by using a novel approach that uses only the most challenging known cases to assess our model performance. These hardest cases are called distractors, which we define as those signs that are most similar to the target sign based on a distance measure. In our work, we develop a novel linguistic distance measure to determine the similarity between signs. Through the usage of these distractors, we estimate the performance for the full ground truth based solely on the hardest cases from the known ground truth, and assume that this makes our performance estimation robust to the addition of new annotations. We validated this assumption by investigating the effects of updates to the annotations on the performance estimates by our distractor-based evaluation compared to a baseline evaluation that uses random, as opposed to hard, cases. Our results show that the distractor-based evaluation provides a more conservative estimate of the performance of a model and is comparably robust to changes in the annotations compared to the baseline. We validated our linguistic features using an empirical analysis, where we compared the effectiveness of a non-linguistic baseline that used landmarks directly to a model using our more explainable, linguistically motivated features that are extracted from landmarks. Moreover, we investigated whether a combination of linguistic and baseline landmark features would result in better performance. The conditions were compared through the use of our distractor-based evaluation. We determined that the combination of the features provided the best performance at the cost of linguistic representativeness.}
}

Biased polar question forms in Sign Language of the Netherlands (NGT). Marloes Oomen & Floris Roelofsen. FEAST. Formal and Experimental Advances in Sign language Theory, 5: 156–168, 2023.

@article{oomen:23a,
  author = {Oomen, Marloes and Roelofsen, Floris},
  title = {Biased polar question forms in Sign Language of the Netherlands (NGT)},
  journal = {FEAST. Formal and Experimental Advances in Sign language Theory},
  volume = {5},
  pages = {156--168},
  year = {2023},
  url = {https://raco.cat/index.php/FEAST/article/view/422450},
  doi = {10.31009/FEAST.i5.13},
  keywords = {Sign Language of the Netherlands (NGT), polar questions, bias, headshake},
  abstract = {We identify several polar question forms in Sign Language of the Netherlands (NGT) through a production experiment in which we manipulate two types of biases: (i) the prior expectations of the person asking the question, and (ii) the evidence available in the immediate context of utterance. Our analysis in the present paper focuses on forms involving headshake. We find that in some cases headshake expresses negation, as expected, but in other cases it fulfils another function, namely, it is part of a sentence-final phrase either expressing uncertainty or signalling a request for a response from the addressee, or possibly both at the same time. We further observe that each question form has a distinct ‘bias profile’, indicating a certain combination of prior expectations and contextual evidence. Besides these empirical findings, our study also makes a methodological contribution: our experimental design could be used in future work to identify polar question forms with different bias profiles in sign languages other than NGT, as well as visual cues accompanying polar questions with different bias profiles in spoken languages.}
}

Some properties of neg-raising in three sign languages. Marloes Oomen, Mirko Santoro & Carlo Geraci. FEAST. Formal and Experimental Advances in Sign language Theory, 5: 145–155, 2023.

@article{oomen:23c,
  author = {Oomen, Marloes and Santoro, Mirko and Geraci, Carlo},
  title = {Some properties of neg-raising in three sign languages},
  journal = {FEAST. Formal and Experimental Advances in Sign language Theory},
  volume = {5},
  pages = {145--155},
  year = {2023},
  url = {https://raco.cat/index.php/FEAST/article/view/422449},
  doi = {10.31009/FEAST.i5.12},
  keywords = {Neg-raising, Negative Polarity Items, Negative quantifiers, Headshake},
  abstract = {Neg-raising, the phenomenon whereby a negation in the main clause of a complex construction is interpreted as if belonging to the embedded clause, has been intensively studied in spoken languages. The same cannot be said for sign languages. In this paper, we investigate the properties of Neg-raising constructions in three sign languages: French Sign Language, Italian Sign Language, and Sign Language of the Netherlands. We report on two syntactic tests we applied to disambiguate Neg-raising and non-Neg-raising readings, showing that Neg-raising constructions have similar properties in the three sign languages that we studied, as well as in comparable constructions in spoken languages. We also discuss some intricate headshake spreading patterns we found in Neg-raising constructions in Sign Language of the Netherlands, a non-manual dominant sign language.}
}

Language Prejudice and Language Structure: On Missing and Emerging Conjunctions in Libras and Other Sign Languages. Angélica Rodrigues & Roland Pfau. In Gladis Massini-Cagliari, Rosane Andrade Berlinck & Angelica Rodrigues (eds.), Understanding Linguistic Prejudice: Critical Approaches to Language Diversity in Brazil, pages 157–185. Springer International Publishing, 2023.

@incollection{Rodrigues:23,
  author = {Rodrigues, Ang{\'e}lica and Pfau, Roland},
  editor = {Massini-Cagliari, Gladis and Berlinck, Rosane Andrade and Rodrigues, Angelica},
  title = {Language Prejudice and Language Structure: On Missing and Emerging Conjunctions in Libras and Other Sign Languages},
  booktitle = {Understanding Linguistic Prejudice: Critical Approaches to Language Diversity in Brazil},
  year = {2023},
  publisher = {Springer International Publishing},
  pages = {157--185},
  isbn = {978-3-031-25806-0},
  doi = {10.1007/978-3-031-25806-0_10},
  url = {https://doi.org/10.1007/978-3-031-25806-0_10},
  abstract = {Sign languages, just like many indigenous languages, have traditionally been subject to language prejudice and suppression, based in no small part on invalid claims about their structural complexity. In this chapter, we aim to contribute to the discussion on language prejudice by broadening the scope to include deaf communities and their sign languages. We review and reject claims according to which languages can be classified as ``primitive'' or ``evolved'' based on the absence of certain grammatical categories. One such grammatical category is conjunctions, and we investigate in some detail their absence or presence in sign languages. We present examples from various sign languages which show that -- just as in spoken languages -- conjunctions may emerge through diachronic processes of borrowing and grammaticalization. Focusing on data from Brazilian Sign Language, we then demonstrate that both these processes are at play in the language and that manual conjunctions marking disjunctive, adversative, conditional, and causal relations have entered the lexicon. Our hope is that these findings will contribute -- albeit modestly -- to the status of Brazilian Sign Language.}
}

The indefinite-interrogative affinity in sign languages: the case of Catalan Sign Language. Raquel Veiga Busto, Floris Roelofsen & Alexandra Navarrete González. In Proceedings of the 4th Workshop on Inquisitiveness Below and Beyond the Sentence Boundary (InqBnB4), pages 50–60, 2023. Association for Computational Linguistics.

@inproceedings{veigabusto:23a,
  title = {The indefinite-interrogative affinity in sign languages: the case of Catalan Sign Language},
  author = {Veiga Busto, Raquel and Roelofsen, Floris and Navarrete González, Alexandra},
  booktitle = {Proceedings of the 4th Workshop on Inquisitiveness Below and Beyond the Sentence Boundary (InqBnB4)},
  year = {2023},
  pages = {50--60},
  publisher = {Association for Computational Linguistics},
  url = {https://iwcs2023.loria.fr/files/2023/06/inqbnb4_proceedings.pdf},
  abstract = {Prior studies on spoken languages have shown that indefinite and interrogative pronouns may be formally very similar. Our research aims to understand if sign languages exhibit this type of affinity. This paper presents an overview of the phenomenon and reports on the results of two studies: a cross-linguistic survey based on a sample of 30 sign languages and an empirical investigation conducted with three deaf consultants of Catalan Sign Language (LSC). Our research shows that, in sign languages, certain signs have both existential and interrogative readings and it identifies the environments that make existential interpretations available in LSC.}
}

\n \n\n \n \n \n \n \n \n Person and Number. An Empirical Study of Catalan Sign Language Pronouns.\n \n \n \n \n\n\n \n Raquel Veiga Busto.\n\n\n \n\n\n\n De Gruyter Mouton, Berlin, Boston, 2023.\n \n\n\n\n
\n\n\n\n \n \n \"PersonPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@book{veigabusto:23b,\ntitle = {Person and Number. An Empirical Study of Catalan Sign Language Pronouns},\nauthor = {Veiga Busto, Raquel},\npublisher = {De Gruyter Mouton},\naddress = {Berlin, Boston},\nurl = {https://doi.org/10.1515/9783110988956},\ndoi = {10.1515/9783110988956},\nisbn = {9783110988956},\nyear = {2023},\nkeywords={Catalan Sign Language, Llengua de Signes Catalana, Person (Grammar), Number (Grammar)},\nabstract={Person and number are two basic grammatical categories. However, they have not yet been exhaustively documented in many sign languages. This volume presents a thorough description of the form and interpretation of person and number in Catalan Sign Language (LSC) personal pronouns. This is the first book exploring together the two categories (and their interaction) in a sign language.\nBuilding on a combination of elicitation methods and corpus data analysis, this book shows that person and number are encoded through a set of distinctive phonological features: person is formally marked through spatial features, and number by the path specifications of the sign. Additionally, this study provides evidence that the same number marker might have a different semantic import depending on the person features with which it is combined.\nResults of this investigation contribute fresh data to cross-linguistic studies on person and number, which are largely based on evidence from spoken language only. Furthermore, while this research identifies a number of significant differences with respect to prior descriptions of person and number in other sign languages, it also demonstrates that, from a typological standpoint, the array of distinctions that LSC draws within each category is not exceptional.}\n}\n\n
\n
\n\n\n
\n Person and number are two basic grammatical categories. However, they have not yet been exhaustively documented in many sign languages. This volume presents a thorough description of the form and interpretation of person and number in Catalan Sign Language (LSC) personal pronouns. This is the first book exploring together the two categories (and their interaction) in a sign language. Building on a combination of elicitation methods and corpus data analysis, this book shows that person and number are encoded through a set of distinctive phonological features: person is formally marked through spatial features, and number by the path specifications of the sign. Additionally, this study provides evidence that the same number marker might have a different semantic import depending on the person features with which it is combined. Results of this investigation contribute fresh data to cross-linguistic studies on person and number, which are largely based on evidence from spoken language only. Furthermore, while this research identifies a number of significant differences with respect to prior descriptions of person and number in other sign languages, it also demonstrates that, from a typological standpoint, the array of distinctions that LSC draws within each category is not exceptional.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Intérprete de lengua de signos.\n \n \n \n \n\n\n \n Raquel Veiga Busto.\n\n\n \n\n\n\n In Sheila Queralt Estévez (coord.)., editor(s), Lingüistas de hoy. Profesiones para el siglo XXI, pages 117–122. Editorial Síntesis, 2023.\n \n\n\n\n
\n\n\n\n \n \n \"IntérpretePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@incollection{veigabusto:23c,\n  title = {Intérprete de lengua de signos},\n  author = {Veiga Busto, Raquel},\n  publisher = {Editorial Síntesis},\n  booktitle = {Lingüistas de hoy. Profesiones para el siglo XXI},\n  editor={Queralt Estévez (coord.), Sheila},\n  pages = {117--122},\n  year = {2023},\n  url = {https://www.sintesis.com/claves%20de%20la%20ling%C3%BC%C3%ADstica-200/ling%C3%BCistas%20de%20hoy-ebook-3113.html}\n}
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2022\n \n \n (17)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Negation and Negative Concord in Georgian Sign Language.\n \n \n \n \n\n\n \n Roland Pfau, Tamar Makharoblidze, & Hedde Zeijlstra.\n\n\n \n\n\n\n Frontiers in Psychology, 13(734845). 2022.\n Special issue \"Sign language research sixty years later: current and future perspectives\".\n\n\n\n
\n\n\n\n \n \n \"NegationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 4 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{pfau:22,\n  title = {Negation and Negative Concord in Georgian Sign Language},\n  author = {Pfau, Roland and Makharoblidze, Tamar and Zeijlstra, Hedde},\n  journal = {Frontiers in Psychology},\n  volume = {13},\n  number = {734845},\n  year = {2022},\n  publisher = {Frontiers Media SA},\n  doi = {10.3389/fpsyg.2022.734845},\n  url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9333067/},\n  abstract = {Negation is a topic that has received considerable attention ever since the early days of sign language linguistics; also, it is one of the grammatical domains that has given the impetus for sign language typology. In this paper, we offer a typological and theoretical contribution to the study of sign language negation. As for the typological side, we add Georgian Sign Language (GESL) to the pool of languages investigated. Our description reveals that GESL displays a number of typologically unusual features: a considerable number of negative particles, including emphatic, prohibitive, and tense-specific particles; specialized negative modals; and a wide range of possibilities for Negative Concord (NC) involving two manual negative signs, including a unique tense-specific instance of NC. Most of the patterns we report -- available negative particles, their clausal position, and NC possibilities -- are clearly different from those attested in spoken Georgian. As for the theoretical contribution, we investigate how the highly complex GESL negation system compares to existing taxonomies of NC and Double Negation systems, and we conclude that GESL aligns with certain languages that have been classified as atypical NC languages.},\n  note = {Special issue "Sign language research sixty years later: current and future perspectives".}\n}\n\n
\n
\n\n\n
\n Negation is a topic that has received considerable attention ever since the early days of sign language linguistics; also, it is one of the grammatical domains that has given the impetus for sign language typology. In this paper, we offer a typological and theoretical contribution to the study of sign language negation. As for the typological side, we add Georgian Sign Language (GESL) to the pool of languages investigated. Our description reveals that GESL displays a number of typologically unusual features: a considerable number of negative particles, including emphatic, prohibitive, and tense-specific particles; specialized negative modals; and a wide range of possibilities for Negative Concord (NC) involving two manual negative signs, including a unique tense-specific instance of NC. Most of the patterns we report – available negative particles, their clausal position, and NC possibilities – are clearly different from those attested in spoken Georgian. As for the theoretical contribution, we investigate how the highly complex GESL negation system compares to existing taxonomies of NC and Double Negation systems, and we conclude that GESL aligns with certain languages that have been classified as atypical NC languages.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Emergence or Grammaticalization? The Case of Negation in Kata Kolok.\n \n \n \n \n\n\n \n Hannah Lutzenberger, Roland Pfau, & Connie de Vos.\n\n\n \n\n\n\n Languages, 7(1): 23. 2022.\n \n\n\n\n
\n\n\n\n \n \n \"EmergencePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{lutzenberger:22,\n  title = {Emergence or Grammaticalization? The Case of Negation in Kata Kolok},\n  author = {Lutzenberger, Hannah and Pfau, Roland and Vos, Connie de},\n  journal = {Languages},\n  volume = {7},\n  number = {1},\n  pages = {23},\n  year = {2022},\n  publisher = {MDPI},\n  doi = {10.3390/languages7010023},\n  url = {https://www.mdpi.com/2226-471X/7/1/23/htm},\n  abstract = {Typological comparisons have revealed that signers can use manual elements and/or a non-manual marker to express standard negation, but little is known about how such systematic marking emerges from its gestural counterparts as a new sign language arises. We analyzed 1.73 h of spontaneous language data, featuring six deaf native signers from generations III-V of the sign language isolate Kata Kolok (Bali). These data show that Kata Kolok cannot be classified as a manual dominant or non-manual dominant sign language since both the manual negative sign and a side-to-side headshake are used extensively. Moreover, the intergenerational comparisons indicate a considerable increase in the use of headshake spreading for generation V which is unlikely to have resulted from contact with Indonesian Sign Language varieties. We also attest a specialized negative existential marker, namely, tongue protrusion, which does not appear in co-speech gesture in the surrounding community. We conclude that Kata Kolok is uniquely placed in the typological landscape of sign language negation, and that grammaticalization theory is essential to a deeper understanding of the emergence of grammatical structure from gesture.}\n}\n\n
\n
\n\n\n
\n Typological comparisons have revealed that signers can use manual elements and/or a non-manual marker to express standard negation, but little is known about how such systematic marking emerges from its gestural counterparts as a new sign language arises. We analyzed 1.73 h of spontaneous language data, featuring six deaf native signers from generations III-V of the sign language isolate Kata Kolok (Bali). These data show that Kata Kolok cannot be classified as a manual dominant or non-manual dominant sign language since both the manual negative sign and a side-to-side headshake are used extensively. Moreover, the intergenerational comparisons indicate a considerable increase in the use of headshake spreading for generation V which is unlikely to have resulted from contact with Indonesian Sign Language varieties. We also attest a specialized negative existential marker, namely, tongue protrusion, which does not appear in co-speech gesture in the surrounding community. We conclude that Kata Kolok is uniquely placed in the typological landscape of sign language negation, and that grammaticalization theory is essential to a deeper understanding of the emergence of grammatical structure from gesture.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Active Learning for Multilingual Fingerspelling Corpora.\n \n \n \n \n\n\n \n Shuai Wang, & Eric Nalisnick.\n\n\n \n\n\n\n In Adaptive Experimental Design and Active Learning in the Real World, 2022. \n \n\n\n\n
\n\n\n\n \n \n \"ActivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 3 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{wang:22,\n  title = {Active Learning for Multilingual Fingerspelling Corpora},\n  author = {Wang, Shuai and Nalisnick, Eric},\n  booktitle = {Adaptive Experimental Design and Active Learning in the Real World},\n  year = {2022},\n  url = {https://www.signlab-amsterdam.nl/publications/Active_Learning_for_Multilingual_Fingerspelling_Corpora.pdf},\n  abstract = {We apply active learning to help with data scarcity problems in sign languages. In particular, we perform a novel analysis of the effect of pre-training. Since many sign languages are linguistic descendants of French sign language, they share hand configurations, which pre-training can hopefully exploit. We test this hypothesis on American, Chinese, German, and Irish fingerspelling corpora. We do observe a benefit from pre-training, but this may be due to visual rather than linguistic similarities.}\n}\n\n
\n
\n\n\n
\n We apply active learning to help with data scarcity problems in sign languages. In particular, we perform a novel analysis of the effect of pre-training. Since many sign languages are linguistic descendants of French sign language, they share hand configurations, which pre-training can hopefully exploit. We test this hypothesis on American, Chinese, German, and Irish fingerspelling corpora. We do observe a benefit from pre-training, but this may be due to visual rather than linguistic similarities.\n
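The pre-training plus active-learning setup described above can be illustrated with a small pool-based sketch. Everything in it is a hypothetical stand-in rather than the authors' implementation: the synthetic features, the logistic-regression classifier, and the uncertainty-sampling query rule are assumptions made for the example.

# Minimal sketch of pool-based active learning with simulated pre-training.
# Synthetic features stand in for fingerspelling handshape images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_corpus(n, n_classes=26, dim=64, shift=0.0):
    """Synthetic stand-in for one fingerspelling corpus."""
    y = rng.integers(0, n_classes, size=n)
    centers = rng.normal(size=(n_classes, dim)) + shift
    return centers[y] + rng.normal(size=(n, dim)), y

X_src, y_src = make_corpus(2000)               # "source" corpus used for pre-training
X_pool, y_pool = make_corpus(1000, shift=0.3)  # unlabelled target pool (labels hidden)
X_test, y_test = make_corpus(500, shift=0.3)

labelled = list(rng.choice(len(X_pool), size=10, replace=False))  # small seed set
unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

for rnd in range(5):
    # Pre-training is simulated by fitting on source data plus the labelled target data.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(np.vstack([X_src, X_pool[labelled]]),
            np.concatenate([y_src, y_pool[labelled]]))

    # Uncertainty sampling: query the pool items with the lowest top-class probability.
    probs = clf.predict_proba(X_pool[unlabelled])
    ranked = np.argsort(probs.max(axis=1))
    query = [unlabelled[i] for i in ranked[:20]]

    labelled.extend(query)                     # an annotator would label these items
    unlabelled = [i for i in unlabelled if i not in query]
    print(f"round {rnd}: {len(labelled)} labels, test accuracy {clf.score(X_test, y_test):.3f}")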
\n\n\n
\n\n\n \n\n\n
\n \n\n \n \n \n \n \n \n Co-designing a Sign Language Avatar for Railway Announcements.\n \n \n \n \n\n\n \n Britt van Gemert.\n\n\n \n\n\n\n Master's thesis, Radboud University, 2022.\n \n\n\n\n
\n\n\n\n \n \n \"Co-designingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@mastersthesis{gemert:22,\n  title = {Co-designing a Sign Language Avatar for Railway Announcements},\n  author = {Gemert, Britt van},\n  school = {Radboud University},\n  year = {2022},\n  url = {https://www.signlab-amsterdam.nl/publications/van_Gemert.pdf},\n  abstract = {Public transport organisations, such as Dutch railway operator Nederlandse Spoorwegen (NS), generally communicate important messages through spoken announcements in the train or station. As a result, Deaf people miss crucial information. Because of varying levels of reading proficiency, textual information is not always a solution. Moreover, video footage is not scalable and will not yield satisfactory results when individual video parts need an update. Virtual sign language avatars can provide consistent, anonymous and scalable translations. For the development of a comprehensible and fluent sign language avatar, collaboration between experts in language technology, avatar technology and sign language is fundamental. Unfortunately, inclusive approaches are not evident in existing technologies. To address these problems, we present a case study on co-designing a sign language avatar for the automatic translation of railway announcements into Sign Language of the Netherlands (NGT). For the initial design, video sign language translations of NS announcements created by the Dutch Sign Center were annotated and used for the translation basis. A scripted keyframe animation technique for the JASigning avatar engine makes it possible to efficiently create many variants of a given template, without expensive equipment. Three iterative co-design sessions took place with an interdisciplinary research team including deaf sign language experts. Simultaneously, hearing passengers from NS were consulted for additional remarks on the sign language avatar. Subsequently, multiple phrase variants of the system were evaluated within a diverse focus group audience to account for demographic differences. Combining the various disciplines led to mayor adjustments of the avatar (manual movements, facial expressions, mouthing, grammar, transitions between signs, camera angle and speed). With this case study, we aimed to effectively and inclusively develop sign language avatar technology. More research and focus groups are necessary to ensure high quality translations (improved mouthings, facial expressions and prosody), resulting in enhanced comprehensibility and natural appearance of the avatar.}\n}\n\n
\n
\n\n\n
\n Public transport organisations, such as Dutch railway operator Nederlandse Spoorwegen (NS), generally communicate important messages through spoken announcements in the train or station. As a result, Deaf people miss crucial information. Because of varying levels of reading proficiency, textual information is not always a solution. Moreover, video footage is not scalable and will not yield satisfactory results when individual video parts need an update. Virtual sign language avatars can provide consistent, anonymous and scalable translations. For the development of a comprehensible and fluent sign language avatar, collaboration between experts in language technology, avatar technology and sign language is fundamental. Unfortunately, inclusive approaches are not evident in existing technologies. To address these problems, we present a case study on co-designing a sign language avatar for the automatic translation of railway announcements into Sign Language of the Netherlands (NGT). For the initial design, video sign language translations of NS announcements created by the Dutch Sign Center were annotated and used for the translation basis. A scripted keyframe animation technique for the JASigning avatar engine makes it possible to efficiently create many variants of a given template, without expensive equipment. Three iterative co-design sessions took place with an interdisciplinary research team including deaf sign language experts. Simultaneously, hearing passengers from NS were consulted for additional remarks on the sign language avatar. Subsequently, multiple phrase variants of the system were evaluated within a diverse focus group audience to account for demographic differences. Combining the various disciplines led to major adjustments of the avatar (manual movements, facial expressions, mouthing, grammar, transitions between signs, camera angle and speed). With this case study, we aimed to effectively and inclusively develop sign language avatar technology. More research and focus groups are necessary to ensure high-quality translations (improved mouthings, facial expressions and prosody), resulting in enhanced comprehensibility and natural appearance of the avatar.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Using avatar animation software for sign language synthesis.\n \n \n \n \n\n\n \n Roel de Jeu.\n\n\n \n\n\n\n 2022.\n Bachelor's thesis, University of Amsterdam.\n\n\n\n
\n\n\n\n \n \n \"UsingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 11 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@unpublished{Jeu:22,\n  title = {Using avatar animation software for sign language synthesis},\n  author = {Jeu, Roel de},\n  year =  {2022},\n  url = {https://dspace.uba.uva.nl/server/api/core/bitstreams/5f83a355-f4c2-4bd6-b263-885a9d6a6df6/content},\n  keywords = {bachelorsthesis},\n  abstract = {Deaf people can have a reading and writing delay due to the fact that the sign language of their country is their first language. Most hearing people do not understand sign language. This causes a language barrier to exist between deaf and hearing people. While tools like Google Translate support many languages, not a single sign language is included. This research reduces the language barrier between deaf and hearing people by developing a synthesis tool for sign language. This research focuses on evaluating different gaming and Virtual Reality avatar animation software to create signing avatars used for the synthesis tool. It also looks at converting motion capture data in to animations for the signing avatars. This conversion is implemented fully for one avatar. The resulting animation shows some unnatural movement but looks promising.},\n  note = {Bachelor's thesis, University of Amsterdam.}\n}\n\n
\n
\n\n\n
\n Deaf people can have a reading and writing delay due to the fact that the sign language of their country is their first language. Most hearing people do not understand sign language. This causes a language barrier to exist between deaf and hearing people. While tools like Google Translate support many languages, not a single sign language is included. This research reduces the language barrier between deaf and hearing people by developing a synthesis tool for sign language. This research focuses on evaluating different gaming and Virtual Reality avatar animation software to create signing avatars used for the synthesis tool. It also looks at converting motion capture data into animations for the signing avatars. This conversion is implemented fully for one avatar. The resulting animation shows some unnatural movement but looks promising.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Mouth visualizations on modern avatars as support for speech comprehension for the Dutch language and NGT.\n \n \n \n \n\n\n \n Loes Gennissen.\n\n\n \n\n\n\n 2022.\n Bachelor's thesis, University of Amsterdam.\n\n\n\n
\n\n\n\n \n \n \"MouthPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 6 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@unpublished{Gennissen:22,\n  title = {Mouth visualizations on modern avatars as support for speech comprehension for the Dutch language and NGT},\n  author = {Gennissen, Loes},\n  year = {2022},\n  url = {https://dspace.uba.uva.nl/bitstreams/a4d4666b-27aa-4a7f-99f8-e13621d2b91d/download},\n  keywords = {bachelorsthesis},\n  abstract = {This thesis explores the effect of mouth visualizations on modern avatars as support for speech comprehension. The mouth visualizations of the avatars were acquired by two state-of-the-art facial visualization methods in the field of deep learning and computer vision. An initial exploration was performed to inspect the possibilities and advantages of the two methods. Furthermore, the two methods were tested on their applicability to the Dutch language and the Dutch Sign Language (NGT). To analyze the effect of different kinds of avatars on speech comprehensibility, a survey experiment was conducted to gather information on the performance and opinions of participants.},\n  note = {Bachelor's thesis, University of Amsterdam.}\n}\n\n
\n
\n\n\n
\n This thesis explores the effect of mouth visualizations on modern avatars as support for speech comprehension. The mouth visualizations of the avatars were acquired by two state-of-the-art facial visualization methods in the field of deep learning and computer vision. An initial exploration was performed to inspect the possibilities and advantages of the two methods. Furthermore, the two methods were tested on their applicability to the Dutch language and the Dutch Sign Language (NGT). To analyze the effect of different kinds of avatars on speech comprehensibility, a survey experiment was conducted to gather information on the performance and opinions of participants.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Use of a modern avatar for sign language synthesis: Movement tracking of manual gestures.\n \n \n \n \n\n\n \n Merel Atia.\n\n\n \n\n\n\n 2022.\n Bachelor's thesis, University of Amsterdam.\n\n\n\n
\n\n\n\n \n \n \"UsePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 12 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@unpublished{Atia:22,\n  title = {Use of a modern avatar for sign language synthesis: Movement tracking of manual gestures},\n  author = {Atia, Merel},\n  year = {2022},\n  url = {https://dspace.uba.uva.nl/bitstreams/7a1dbcc9-2e6f-428f-8e13-ee65c504399e/download},\n  keywords = {bachelorsthesis},\n  abstract = {It is of great importance to make Dutch Sign Language (NGT) accessible to hearing people in order to improve the relationship between the deaf and hearing communities. We want to achieve this by creating an application that converts Dutch text to NGT performed by an avatar. To achieve this, an avatar must be created that is able to perform the different components of speaking sign language. In this research, the approach will be to let the avatar adapt motions performed by real live NGT speakers using a motion tracker. The results of previous research showed that gestures with specific finger positions, mainly when fingers overlap, were difficult to track. This research solves this problem by recording a NGT speaker from different views, and combining the results using a rigid body transformation. This way, the number of detected outliers are reduced significantly. The addition of cameras provides more insight into depth usage and estimation of the z-axis. It enables to detect bodyparts that are blocked from the main camera, which leads to more accurate tracking of gestures.},\n  note = {Bachelor's thesis, University of Amsterdam.}\n}\n\n\n
\n
\n\n\n
\n It is of great importance to make Dutch Sign Language (NGT) accessible to hearing people in order to improve the relationship between the deaf and hearing communities. We want to achieve this by creating an application that converts Dutch text to NGT performed by an avatar. To achieve this, an avatar must be created that is able to perform the different components of speaking sign language. In this research, the approach will be to let the avatar adapt motions performed by real live NGT speakers using a motion tracker. The results of previous research showed that gestures with specific finger positions, mainly when fingers overlap, were difficult to track. This research solves this problem by recording an NGT speaker from different views, and combining the results using a rigid body transformation. This way, the number of detected outliers is reduced significantly. The addition of cameras provides more insight into depth usage and estimation of the z-axis. It makes it possible to detect body parts that are blocked from the main camera, which leads to more accurate tracking of gestures.\n
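The rigid body transformation used to combine tracking results from different camera views can be estimated with the standard Kabsch (orthogonal Procrustes) procedure. The sketch below is only an illustration of that step on made-up keypoints and a made-up camera pose; it is not the code from the thesis.

# Minimal sketch: estimate a rigid-body transform (rotation R, translation t) that maps
# keypoints seen by a side camera into the main camera's frame, then use it to fill in
# joints the main camera could not track. Data here are synthetic.
import numpy as np

def kabsch(P, Q):
    """Find R, t minimising ||R @ p_i + t - q_i|| over corresponding 3D points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(1)
main_view = rng.normal(size=(21, 3))              # e.g. 21 hand joints in one frame

# The side camera sees the same joints in its own frame (rotated, shifted, noisy).
a = np.deg2rad(40.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
side_view = (np.linalg.inv(R_true) @ (main_view - np.array([0.2, 0.0, 1.5])).T).T
side_view += rng.normal(scale=0.005, size=side_view.shape)

visible = np.arange(0, 15)                        # joints both cameras tracked reliably
R, t = kabsch(side_view[visible], main_view[visible])

occluded = np.arange(15, 21)                      # joints blocked in the main view
recovered = (R @ side_view[occluded].T).T + t
print("mean recovery error:", np.linalg.norm(recovered - main_view[occluded], axis=1).mean())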
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Learning to introduce referents in narration is resilient to the effects of late sign language exposure.\n \n \n \n\n\n \n C Gür, & Beyza Sümer.\n\n\n \n\n\n\n Sign Language & Linguistics, 25(2). 2022.\n \n\n\n\n
\n
@article{gur:22,\n  title = {Learning to introduce referents in narration is resilient to the effects of late sign language exposure},\n  author = {G{\\"u}r, C and S{\\"u}mer, Beyza},\n  journal = {Sign Language \\& Linguistics},\n  year = {2022},\n  volume = {25},\n  number = {2}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cross-modal investigation of event component omissions in language development: a comparison of signing and speaking children.\n \n \n \n \n\n\n \n Beyza Sümer, & Aslı Özyürek.\n\n\n \n\n\n\n Language, Cognition and Neuroscience,1–17. 2022.\n \n\n\n\n
\n\n\n\n \n \n \"Cross-modalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{sumer:22,\n  title = {Cross-modal investigation of event component omissions in language development: a comparison of signing and speaking children},\n  author = {S{\\"u}mer, Beyza and {\\"O}zy{\\"u}rek, Asl{\\i}},\n  publisher = {Taylor \\& Francis},\n  journal = {Language, Cognition and Neuroscience},\n  pages = {1--17},\n  year = {2022},\n  doi = {10.1080/23273798.2022.2042336},\n  url = {https://www.tandfonline.com/doi/pdf/10.1080/23273798.2022.2042336?needAccess=true},\n  abstract = {Language development research suggests a universal tendency for children to be under- informative in narrating motion events by omitting components such as Path, Manner or Ground. However, this assumption has not been tested for children acquiring sign language. Due to the affordances of the visual-spatial modality of sign languages for iconic expression, signing children might omit event components less frequently than speaking children. Here we analysed motion event descriptions elicited from deaf children (4–10 years) acquiring Turkish Sign Language (TİD) and their Turkish-speaking peers. While children omitted all types of event components more often than adults, signing children and adults encoded more Path and Manner in TİD than their peers in Turkish. These results provide more evidence for a general universal tendency for children to omit event components as well as a modality bias for sign languages to encode both Manner and Path more frequently than spoken languages.}\n}\n\n
\n
\n\n\n
\n Language development research suggests a universal tendency for children to be under-informative in narrating motion events by omitting components such as Path, Manner or Ground. However, this assumption has not been tested for children acquiring sign language. Due to the affordances of the visual-spatial modality of sign languages for iconic expression, signing children might omit event components less frequently than speaking children. Here we analysed motion event descriptions elicited from deaf children (4–10 years) acquiring Turkish Sign Language (TİD) and their Turkish-speaking peers. While children omitted all types of event components more often than adults, signing children and adults encoded more Path and Manner in TİD than their peers in Turkish. These results provide more evidence for a general universal tendency for children to omit event components as well as a modality bias for sign languages to encode both Manner and Path more frequently than spoken languages.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Late sign language exposure does not modulate the relation between spatial language and spatial memory in deaf children and adults.\n \n \n \n \n\n\n \n Dilay Z Karadöller, Beyza Sümer, Ercenur Ünal, & Aslı Özyürek.\n\n\n \n\n\n\n Memory & Cognition,1–19. 2022.\n \n\n\n\n
\n\n\n\n \n \n \"LatePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{karadoller:22,\n  title = {Late sign language exposure does not modulate the relation between spatial language and spatial memory in deaf children and adults},\n  author = {Karad{\\"o}ller, Dilay Z and S{\\"u}mer, Beyza and {\\"U}nal, Ercenur and {\\"O}zy{\\"u}rek, Asl{\\i}},\n  publisher = {Springer},\n  journal = {Memory \\& Cognition},\n  pages = {1--19},\n  year = {2022},\n  doi = {10.3758/s13421-022-01281-7},\n  url = {https://link.springer.com/content/pdf/10.3758/s13421-022-01281-7.pdf},\n  abstract = {Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulate subsequent memory for spatial relations. To do so, we compared spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults but not children differed from their native-signing counterparts in the type of spatial language they used. However, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language use, this does not predict subsequent memory for spatial relations. We discuss the implications of these findings based on the theories concerning the correspondence between spatial language and cognition as related or rather independent systems.}\n}\n\n
\n
\n\n\n
\n Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulate subsequent memory for spatial relations. To do so, we compared spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults but not children differed from their native-signing counterparts in the type of spatial language they used. However, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language use, this does not predict subsequent memory for spatial relations. We discuss the implications of these findings based on the theories concerning the correspondence between spatial language and cognition as related or rather independent systems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Compensation in Verbal and Nonverbal Communication after Total Laryngectomy.\n \n \n \n \n\n\n \n Marise Neijman, Femke Hof, Noelle Oosterom, Roland Pfau, Bertus van Rooy, Rob J.J.H. van Son, & Michiel M.W.M. van den Brekel.\n\n\n \n\n\n\n In Proc. Interspeech 2022, pages 3613–3617, 2022. \n \n\n\n\n
\n\n\n\n \n \n \"CompensationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{neijman:22,\n  author = {Neijman, Marise and Hof, Femke and Oosterom, Noelle and Pfau, Roland and Rooy, Bertus van and Son, Rob J.J.H. van and Brekel, Michiel M.W.M. van den},\n  title = {{Compensation in Verbal and Nonverbal Communication after Total Laryngectomy}},\n  year = {2022},\n  booktitle = {Proc. Interspeech 2022},\n  pages = {3613--3617},\n  doi = {10.21437/Interspeech.2022-369},\n  url = {https://www.isca-speech.org/archive/interspeech_2022/neijman22_interspeech.html},\n  abstract = {Total laryngectomy is a major surgical procedure with life-changing consequences. As a result of the surgery, the upper and lower airways are disconnected, the natural voice is lost, and patients breathe through a tracheostoma in the neck. Tracheoesophageal speech is the most common speech rehabilitation technique. Due to the lack of air volume, and the amount of muscle tension in the esophagus, some patients may suffer from a hyper- or hypo-tonic voice, resulting in less intelligible speech. To communicate as intelligibly as possible, patients likely adapt their verbal and nonverbal communication to their physical disabilities. The current study aimed to explore the compensation techniques in verbal and nonverbal communication after total laryngectomy focusing on the complexity of grammar and the use of co-speech gestures. We analyzed previously obtained interviews of eight laryngectomized women on the syntactic complexity in speech and the use and type of co-speech gestures. Results were compared with analyses of productions by healthy controls. We found that laryngectomized women reduce the syntactic complexity of their speech, and use nonverbal gestures in their communication. Further research is needed with systematically obtained data and more suitable match-groups. Index Terms: total laryngectomy, communication, speech, co-speech gestures, grammar, compensation}\n}\n\n
\n
\n\n\n
\n Total laryngectomy is a major surgical procedure with life-changing consequences. As a result of the surgery, the upper and lower airways are disconnected, the natural voice is lost, and patients breathe through a tracheostoma in the neck. Tracheoesophageal speech is the most common speech rehabilitation technique. Due to the lack of air volume, and the amount of muscle tension in the esophagus, some patients may suffer from a hyper- or hypo-tonic voice, resulting in less intelligible speech. To communicate as intelligibly as possible, patients likely adapt their verbal and nonverbal communication to their physical disabilities. The current study aimed to explore the compensation techniques in verbal and nonverbal communication after total laryngectomy focusing on the complexity of grammar and the use of co-speech gestures. We analyzed previously obtained interviews of eight laryngectomized women on the syntactic complexity in speech and the use and type of co-speech gestures. Results were compared with analyses of productions by healthy controls. We found that laryngectomized women reduce the syntactic complexity of their speech, and use nonverbal gestures in their communication. Further research is needed with systematically obtained data and more suitable matched groups. Index Terms: total laryngectomy, communication, speech, co-speech gestures, grammar, compensation\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n From improvisation to learning: how naturalness and systematicity shape language evolution.\n \n \n \n \n\n\n \n Yasamin Motamedi, Lucie Wolters, Danielle Naegeli, Simon Kirby, & Marieke Schouwstra.\n\n\n \n\n\n\n Cognition, 228. 2022.\n \n\n\n\n
\n\n\n\n \n \n \"FromPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{motamedi:22,\n  title = {From improvisation to learning: how naturalness and systematicity shape language evolution},\n  author = {Motamedi, Yasamin and Wolters, Lucie and Naegeli, Danielle and Kirby, Simon and Schouwstra, Marieke},\n  publisher = {Elsevier},\n  journal = {Cognition},\n  volume = {228},\n  year = {2022},\n  doi = {10.1016/j.cognition.2022.105206},\n  url = {https://reader.elsevier.com/reader/sd/pii/S0010027722001949?token=30EF48857FED119FAFE86A8780B2A3F46374F7A756DC93BE40B355F59199CFD7D202EA7569D0ED9CB6DB465E74E8B841&originRegion=eu-west-1&originCreation=20220928152802},\n  abstract = {Silent gesture studies, in which hearing participants from different linguistic backgrounds produce gestures to communicate events, have been used to test hypotheses about the cognitive biases that govern cross-linguistic word order preferences. In particular, the differential use of SOV and SVO order to communicate, respectively, extensional events (where the direct object exists independently of the event; e.g., girl throws ball) and intensional events (where the meaning of the direct object is potentially dependent on the verb; e.g., girl thinks of ball), has been suggested to represent a natural preference, demonstrated in improvisation contexts. However, natural languages tend to prefer systematic word orders, where a single order is used regardless of the event being communicated. We present a series of studies that investigate ordering preferences for SOV and SVO orders using an online forced-choice experiment, where English-speaking participants select orders for different events i) in the absence of conventions and ii) after learning event-order mappings in different frequencies in a regularisation experiment. Our results show that natural ordering preferences arise in the absence of conventions, replicating previous findings from production experiments. In addition, we show that participants regularise the input they learn in the manual modality in two ways, such that, while the preference for systematic order patterns increases through learning, it exists in competition with the natural ordering preference, that conditions order on the semantics of the event. Using our experimental data in a computational model of cultural transmission, we show that this pattern is expected to persist over generations, suggesting that we should expect to see evidence of semantically-conditioned word order variability in at least some languages.}\n}\n\n
\n
\n\n\n
\n Silent gesture studies, in which hearing participants from different linguistic backgrounds produce gestures to communicate events, have been used to test hypotheses about the cognitive biases that govern cross-linguistic word order preferences. In particular, the differential use of SOV and SVO order to communicate, respectively, extensional events (where the direct object exists independently of the event; e.g., girl throws ball) and intensional events (where the meaning of the direct object is potentially dependent on the verb; e.g., girl thinks of ball), has been suggested to represent a natural preference, demonstrated in improvisation contexts. However, natural languages tend to prefer systematic word orders, where a single order is used regardless of the event being communicated. We present a series of studies that investigate ordering preferences for SOV and SVO orders using an online forced-choice experiment, where English-speaking participants select orders for different events i) in the absence of conventions and ii) after learning event-order mappings in different frequencies in a regularisation experiment. Our results show that natural ordering preferences arise in the absence of conventions, replicating previous findings from production experiments. In addition, we show that participants regularise the input they learn in the manual modality in two ways, such that, while the preference for systematic order patterns increases through learning, it exists in competition with the natural ordering preference, that conditions order on the semantics of the event. Using our experimental data in a computational model of cultural transmission, we show that this pattern is expected to persist over generations, suggesting that we should expect to see evidence of semantically-conditioned word order variability in at least some languages.\n
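The computational model of cultural transmission mentioned at the end is not spelled out in the abstract, but the general idea can be sketched as an iterated-learning chain: each generation estimates P(order | event type) from the previous generation's productions, with a regularisation bias, and then produces the data the next generation learns from. The starting probabilities, sample size, and bias strength below are assumptions for illustration, not values from the paper.

# Minimal iterated-learning sketch: semantically conditioned word order is transmitted
# across generations, each of which regularises what it observes. Parameters are made up.
import random

random.seed(1)

N_UTTERANCES = 200   # productions each generation observes per event type
GENERATIONS = 15
BIAS = 1.5           # >1 pushes estimates towards the nearer extreme (regularisation)

def regularise(p, bias=BIAS):
    """Map an observed proportion to a slightly more extreme estimate."""
    return p ** bias / (p ** bias + (1 - p) ** bias)

# Improvisation-like starting point: P(SVO | event type) differs by semantics.
probs = {"extensional": 0.35, "intensional": 0.75}

for gen in range(GENERATIONS):
    probs = {event: regularise(sum(random.random() < p for _ in range(N_UTTERANCES)) / N_UTTERANCES)
             for event, p in probs.items()}
    print(f"gen {gen + 1:2d}  P(SVO|extensional)={probs['extensional']:.2f}  "
          f"P(SVO|intensional)={probs['intensional']:.2f}")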
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modelling the Emergence of Linguistic Conventions for Word Order: The Roles of Semantics, Structural Priming, and Population Structure.\n \n \n \n \n\n\n \n Loı̈s Dona, & Marieke Schouwstra.\n\n\n \n\n\n\n In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 44, 2022. \n \n\n\n\n
\n\n\n\n \n \n \"ModellingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 3 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{dona:22,\n  title = {Modelling the Emergence of Linguistic Conventions for Word Order: The Roles of Semantics, Structural Priming, and Population Structure},\n  author = {Dona, Lo{\\"\\i}s and Schouwstra, Marieke},\n  booktitle = {Proceedings of the Annual Meeting of the Cognitive Science Society},\n  volume = {44},\n  number = {44},\n  year = {2022},\n  url = {https://escholarship.org/content/qt5dp2600s/qt5dp2600s.pdf?t=reckca&v=lg},\n  abstract = {We used agent-based modelling to study the emergence of linguistic conventions for basic word order (the order of subject, object and verb) in different populations. As a starting point, we take word order variation based on semantic properties, as observed in improvised gesture experiments. In our first simulation we explore the relative contributions of two pressures, one for semantically conditioned variation, and the other structural priming (which takes place when two individuals engage in communication), and show that a relatively increasing influence of structural priming best explains an increase in word order regularity. Next we implement a larger simulation, investigating how properties of the population affect regularization of word order. Our models compare population sizes with different population densities, and show that the speed of regularization in languages is heavily influenced by population density, and population size has little effect.}\n}\n\n
\n
\n\n\n
\n We used agent-based modelling to study the emergence of linguistic conventions for basic word order (the order of subject, object and verb) in different populations. As a starting point, we take word order variation based on semantic properties, as observed in improvised gesture experiments. In our first simulation we explore the relative contributions of two pressures, one for semantically conditioned variation, and the other structural priming (which takes place when two individuals engage in communication), and show that a relatively increasing influence of structural priming best explains an increase in word order regularity. Next we implement a larger simulation, investigating how properties of the population affect regularization of word order. Our models compare population sizes with different population densities, and show that the speed of regularization in languages is heavily influenced by population density, and population size has little effect.\n
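As a rough illustration of this kind of agent-based setup, the sketch below lets agents condition word order on event semantics while structural priming pulls them towards the order they last heard; partners are drawn from a fully connected population. The parameter values and update rules are assumptions for the example, not the authors' model.

# Minimal agent-based sketch: semantically conditioned word order plus structural priming
# in repeated dyadic interactions. Illustrative only; all parameters are invented.
import random

random.seed(0)

N_AGENTS = 50
PRIMING = 0.3        # weight of the order the speaker last heard
ROUNDS = 2000

# Each agent stores P(SVO | event type); events are "extensional" or "intensional".
agents = [{"extensional": 0.3, "intensional": 0.7} for _ in range(N_AGENTS)]
last_heard = [None] * N_AGENTS

def produce(agent, event, heard):
    p_svo = agent[event]
    if heard is not None:                        # structural priming
        p_svo = (1 - PRIMING) * p_svo + PRIMING * (1.0 if heard == "SVO" else 0.0)
    return "SVO" if random.random() < p_svo else "SOV"

for _ in range(ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    event = random.choice(["extensional", "intensional"])
    order = produce(agents[speaker], event, last_heard[speaker])
    last_heard[hearer] = order
    # The hearer shifts its conditional preference a little towards what it heard.
    agents[hearer][event] = 0.95 * agents[hearer][event] + 0.05 * (order == "SVO")

for event in ("extensional", "intensional"):
    mean_p = sum(agent[event] for agent in agents) / N_AGENTS
    print(f"P(SVO | {event}) after {ROUNDS} interactions: {mean_p:.2f}")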
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cross-cultural differences in the emergence of referential strategies in artificial sign languages.\n \n \n \n \n\n\n \n Danielle Naegeli, David Peeters, Emiel Krahmer, Marieke Schouwstra, Yasamin Motamedi, & Connie De Vos.\n\n\n \n\n\n\n In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 44, 2022. \n \n\n\n\n
\n\n\n\n \n \n \"Cross-culturalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{naegeli:22,\n  title = {Cross-cultural differences in the emergence of referential strategies in artificial sign languages},\n  author = {Naegeli, Danielle and Peeters, David and Krahmer, Emiel and Schouwstra, Marieke and Motamedi, Yasamin and De Vos, Connie},\n  booktitle = {Proceedings of the Annual Meeting of the Cognitive Science Society},\n  volume = {44},\n  number = {44},\n  year = {2022},\n  url = {https://escholarship.org/content/qt7861289g/qt7861289g.pdf?t=recks3&v=lg},\n  abstract = {While the grammatical use of space for referential strategies is attested across many sign languages of Western Europe, Kata Kolok, a rural sign language used in Northern Bali, has not developed anaphoric pointing in space nor agreement verbs (Engberg-Pedersen, 1993; Liddell, 2003; de Vos, 2012). To find out whether such typological differences can be explained by differences in the respective co-speech gesture systems, this preregistered study is collecting data for a cross-cultural comparison. Building on Motamedi et al. (2021), we are conducting studies in Bali and the Netherlands using an iterated learning silent gesture paradigm in which hearing people communicate transitive events using only gestures. Preliminary data indeed suggest that our Balinese participants do not employ space the same way our Dutch participants do. We will present comparative analyses and evaluate the role of co-speech gesture systems in sign language emergence in the lab or when evolving from spontaneous interaction.}\n}\n\n
\n
\n\n\n
\n While the grammatical use of space for referential strategies is attested across many sign languages of Western Europe, Kata Kolok, a rural sign language used in Northern Bali, has not developed anaphoric pointing in space nor agreement verbs (Engberg-Pedersen, 1993; Liddell, 2003; de Vos, 2012). To find out whether such typological differences can be explained by differences in the respective co-speech gesture systems, this preregistered study is collecting data for a cross-cultural comparison. Building on Motamedi et al. (2021), we are conducting studies in Bali and the Netherlands using an iterated learning silent gesture paradigm in which hearing people communicate transitive events using only gestures. Preliminary data indeed suggest that our Balinese participants do not employ space the same way our Dutch participants do. We will present comparative analyses and evaluate the role of co-speech gesture systems in sign language emergence in the lab or when evolving from spontaneous interaction.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Investigating Word Order Emergence: Constraints From Cognition and Communication.\n \n \n \n \n\n\n \n Marieke Schouwstra, Danielle Naegeli, & Simon Kirby.\n\n\n \n\n\n\n Frontiers in Psychology,1855. 2022.\n \n\n\n\n
\n\n\n\n \n \n \"InvestigatingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{schouwstra:22,\n  title = {Investigating Word Order Emergence: Constraints From Cognition and Communication},\n  author = {Schouwstra, Marieke and Naegeli, Danielle and Kirby, Simon},\n  publisher = {Frontiers},\n  journal = {Frontiers in Psychology},\n  pages = {1855},\n  year = {2022},\n  doi = {10.3389/fpsyg.2022.805144},\n  url = {https://www.frontiersin.org/articles/10.3389/fpsyg.2022.805144/full},\n  abstract = {How do cognitive biases and mechanisms from learning and use interact when a system of language conventions emerges? We investigate this question by focusing on how transitive events are conveyed in silent gesture production and interaction. Silent gesture experiments (in which participants improvise to use gesture but no speech) have been used to investigate cognitive biases that shape utterances produced in the absence of a conventional language system. In this mode of communication, participants do not follow the dominant order of their native language (e.g., Subject-Verb-Object), and instead condition the structure on the semantic properties of the events they are conveying. An important source of variability in structure in silent gesture is the property of reversibility. Reversible events typically have two animate participants whose roles can be reversed (girl kicks boy). Without a syntactic/conventional means of conveying who does what to whom, there is inherent unclarity about the agent and patient roles in the event (by contrast, this is less pressing for non-reversible events like girl kicks ball). In experiment 1 we test a novel, fine-grained analysis of reversibility. Presenting a silent gesture production experiment, we show that the variability in word order depends on two factors (properties of the verb and properties of the direct object) that together determine how reversible an event is. We relate our experimental results to principles from information theory, showing that our data support the “noisy channel” account of constituent order. In experiment 2, we focus on the influence of interaction on word order variability for reversible and non-reversible events. We show that when participants use silent gesture for communicative interaction, they become more consistent in their usage of word order over time, however, this pattern less pronounced for events that are classified as strongly non-reversible. We conclude that full consistency in word order is theoretically a good strategy, but word order use in practice is a more complex phenomenon.}\n}\n\n
\n
\n\n\n
\n How do cognitive biases and mechanisms from learning and use interact when a system of language conventions emerges? We investigate this question by focusing on how transitive events are conveyed in silent gesture production and interaction. Silent gesture experiments (in which participants improvise to use gesture but no speech) have been used to investigate cognitive biases that shape utterances produced in the absence of a conventional language system. In this mode of communication, participants do not follow the dominant order of their native language (e.g., Subject-Verb-Object), and instead condition the structure on the semantic properties of the events they are conveying. An important source of variability in structure in silent gesture is the property of reversibility. Reversible events typically have two animate participants whose roles can be reversed (girl kicks boy). Without a syntactic/conventional means of conveying who does what to whom, there is inherent unclarity about the agent and patient roles in the event (by contrast, this is less pressing for non-reversible events like girl kicks ball). In experiment 1 we test a novel, fine-grained analysis of reversibility. Presenting a silent gesture production experiment, we show that the variability in word order depends on two factors (properties of the verb and properties of the direct object) that together determine how reversible an event is. We relate our experimental results to principles from information theory, showing that our data support the “noisy channel” account of constituent order. In experiment 2, we focus on the influence of interaction on word order variability for reversible and non-reversible events. We show that when participants use silent gesture for communicative interaction, they become more consistent in their usage of word order over time; however, this pattern is less pronounced for events that are classified as strongly non-reversible. We conclude that full consistency in word order is theoretically a good strategy, but word order use in practice is a more complex phenomenon.\n
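The "noisy channel" account referred to here can be given a toy rendering: if individual gestures can be lost in transmission, a verb-medial order keeps agent and patient recoverable in reversible events, because a surviving noun's position relative to the verb still signals its role, whereas in a verb-final order a lone noun is ambiguous. The sketch below works out only that general intuition on made-up numbers; it is not the analysis carried out in the paper.

# Toy noisy-channel comparison of SOV vs SVO: each word is lost with probability Q,
# and we ask how often the agent/patient roles remain recoverable. Illustrative only.
from itertools import product

Q = 0.2  # probability that any one word is lost in transmission

def recoverable(order, kept, reversible):
    """Can the listener still tell who did what to whom?"""
    if not reversible:
        return True                        # semantics disambiguates (girl kicks ball)
    nouns = [w for w in kept if w in ("A", "P")]
    if len(nouns) == 2:
        return True                        # convention fixes the mapping
    if len(nouns) == 1 and "V" in kept and order == ("A", "V", "P"):
        return True                        # position relative to the verb marks the role
    return False

for order, label in [(("A", "P", "V"), "SOV"), (("A", "V", "P"), "SVO")]:
    for reversible in (True, False):
        expected = 0.0
        for losses in product([True, False], repeat=3):
            p = 1.0
            kept = []
            for word, lost in zip(order, losses):
                p *= Q if lost else 1 - Q
                if not lost:
                    kept.append(word)
            expected += p * recoverable(order, kept, reversible)
        kind = "reversible" if reversible else "non-reversible"
        print(f"{label}, {kind} event: P(roles recoverable) = {expected:.3f}")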
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Person and number in Catalan Sign Language pronouns.\n \n \n \n\n\n \n Raquel Veiga Busto.\n\n\n \n\n\n\n Sign Language and Linguistics. 2022.\n \n\n\n\n
\n
@article{veigabusto:22,\ntitle = {Person and number in Catalan Sign Language pronouns},\nauthor = {Veiga Busto, Raquel},\njournal = {Sign Language and Linguistics},\nyear = {2022},\ndoi = {10.1075/sll.00069.vei},\nabstract = {The category of person encodes the semantic distinction between discourse roles, and number specifies the numerosity of the referents. While these are two basic grammatical categories of human languages, they have not yet been documented in detail in many sign languages. This dissertation provides the first in-depth analysis of person and number in Catalan Sign Language (LSC) pronouns. To a lesser extent, the form and interpretation of number in nouns is also addressed. The dissertation takes a descriptive approach, but it also offers formal arguments to account for the interpretation of person and number markers. Besides, the results are contrasted with prior findings from signed and spoken languages to determine whether LSC complies with the tendencies documented in the two categories across the world’s languages.\nDrawing on a combination of elicited and corpus data analysis, this study shows that person and number in LSC are formally marked through a set of distinctive phonological features: person is encoded through spatial features, and number by the path specifications of the sign. Further, this thesis demonstrates that person marking has an influence on the meaning of certain number morphemes. Thus, the same morphological operation might yield different interpretations depending on the person features with which it is combined.}\n}\n\n
\n
\n\n\n
\n The category of person encodes the semantic distinction between discourse roles, and number specifies the numerosity of the referents. While these are two basic grammatical categories of human languages, they have not yet been documented in detail in many sign languages. This dissertation provides the first in-depth analysis of person and number in Catalan Sign Language (LSC) pronouns. To a lesser extent, the form and interpretation of number in nouns is also addressed. The dissertation takes a descriptive approach, but it also offers formal arguments to account for the interpretation of person and number markers. Besides, the results are contrasted with prior findings from signed and spoken languages to determine whether LSC complies with the tendencies documented in the two categories across the world’s languages. Drawing on a combination of elicited and corpus data analysis, this study shows that person and number in LSC are formally marked through a set of distinctive phonological features: person is encoded through spatial features, and number by the path specifications of the sign. Further, this thesis demonstrates that person marking has an influence on the meaning of certain number morphemes. Thus, the same morphological operation might yield different interpretations depending on the person features with which it is combined.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2021\n \n \n (26)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Phonological restrictions on nominal pluralization in Sign Language of the Netherlands: evidence from corpus and elicited data.\n \n \n \n \n\n\n \n Cindy van Boven.\n\n\n \n\n\n\n Folia Linguistica. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"PhonologicalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{boven:21a,\n  title = {Phonological restrictions on nominal pluralization in Sign Language of the Netherlands: evidence from corpus and elicited data},\n  author = {Boven, Cindy van},\n  publisher = {De Gruyter Mouton},\n  journal = {Folia Linguistica},\n  year = {2021},\n  doi = {10.1515/flin-2021-2039},\n  url = {https://www.degruyter.com/document/doi/10.1515/flin-2021-2039/html},\n  abstract = {This study focuses on nominal pluralization in Sign Language of the Netherlands (NGT). The aim is to offer a comprehensive description of nominal pluralization processes in the language, based on both corpus data and elicited data, taking into account potential phonological restrictions. The results reveal that NGT nouns can undergo several pluralization processes, the main ones being simple reduplication (i.e., repeating the noun sign at one location) and sideward reduplication (i.e., repeating the noun sign while moving the hand sideward). The choice of pluralization process depends on phonological properties of the base noun: (i) nouns that are body-anchored or involve a complex movement undergo simple reduplication; (ii) nouns articulated at the lateral side of the signing space undergo sideward reduplication; (iii) nouns articulated on the midsagittal plane can undergo both simple and sideward reduplication. Strikingly, the data show considerable variation, and all types of nouns can be zero-marked, that is, plural marking on the noun is not obligatory. The results further suggest that all nouns can undergo at least one type of reduplication. Thus, while phonological properties of the base noun influence the type of reduplication, they do not block reduplication altogether. Plural reduplication in NGT is therefore less constrained than has been reported for other sign languages, where certain noun types cannot undergo reduplication. This shows that reduplication – despite being iconically motivated – is subject to language-specific grammatical constraints.}\n}\n\n
\n
\n\n\n
\n This study focuses on nominal pluralization in Sign Language of the Netherlands (NGT). The aim is to offer a comprehensive description of nominal pluralization processes in the language, based on both corpus data and elicited data, taking into account potential phonological restrictions. The results reveal that NGT nouns can undergo several pluralization processes, the main ones being simple reduplication (i.e., repeating the noun sign at one location) and sideward reduplication (i.e., repeating the noun sign while moving the hand sideward). The choice of pluralization process depends on phonological properties of the base noun: (i) nouns that are body-anchored or involve a complex movement undergo simple reduplication; (ii) nouns articulated at the lateral side of the signing space undergo sideward reduplication; (iii) nouns articulated on the midsagittal plane can undergo both simple and sideward reduplication. Strikingly, the data show considerable variation, and all types of nouns can be zero-marked, that is, plural marking on the noun is not obligatory. The results further suggest that all nouns can undergo at least one type of reduplication. Thus, while phonological properties of the base noun influence the type of reduplication, they do not block reduplication altogether. Plural reduplication in NGT is therefore less constrained than has been reported for other sign languages, where certain noun types cannot undergo reduplication. This shows that reduplication – despite being iconically motivated – is subject to language-specific grammatical constraints.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Habituals in Sign Language of the Netherlands: A corpus-based study.\n \n \n \n \n\n\n \n Cindy van Boven, & Marloes Oomen.\n\n\n \n\n\n\n Linguistics in Amsterdam, 14(1): 160–184. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"HabitualsPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{boven:21c,\n  title = {Habituals in Sign Language of the Netherlands: A corpus-based study},\n  author = {Boven, Cindy van and Oomen, Marloes},\n  journal = {Linguistics in Amsterdam},\n  volume = {14},\n  number = {1},\n  pages = {160--184},\n  year = {2021},\n  url = {https://www.researchgate.net/profile/Marloes-Oomen/publication/349685298_Habituals_in_Sign_Language_of_the_Netherlands_A_corpus-based_study/links/603cad7d4585158939d99668/Habituals-in-Sign-Language-of-the-Netherlands-A-corpus-based-study.pdf},\n  abstract = {In this corpus-based study on habituals in Sign Language of the Netherlands (NGT), we investigate the manual and non-manual marking of habituality in naturalistic data. We show that both reduplication of the predicate and adverbials with a habitual flavor are used in habitual contexts, but both of these manual markers appear to be optional. As for non-manual markers, even more variation is attested; left-to-right head and body movements and narrowed eyes are the most frequently occurring non-manuals in habitual contexts but are by no means obligatory. The findings contrast with the results reported in two previous studies on habituals in NGT (Hoiting & Slobin 2001; Oomen 2016), which can be partially explained by the fact that these studies used elicitation methods. As such, the present study underscores the importance of using a combination of different methods in investigating linguistic phenomena.}\n}\n\n
\n
\n\n\n
\n In this corpus-based study on habituals in Sign Language of the Netherlands (NGT), we investigate the manual and non-manual marking of habituality in naturalistic data. We show that both reduplication of the predicate and adverbials with a habitual flavor are used in habitual contexts, but both of these manual markers appear to be optional. As for non-manual markers, even more variation is attested; left-to-right head and body movements and narrowed eyes are the most frequently occurring non-manuals in habitual contexts but are by no means obligatory. The findings contrast with the results reported in two previous studies on habituals in NGT (Hoiting & Slobin 2001; Oomen 2016), which can be partially explained by the fact that these studies used elicitation methods. As such, the present study underscores the importance of using a combination of different methods in investigating linguistic phenomena.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Question-answer pairs in Russian Sign Language: a corpus study.\n \n \n \n \n\n\n \n Evgeniia Khristoforova, & Vadim Kimmelman.\n\n\n \n\n\n\n In volume 4, pages 101–112, 2021. FEAST. Formal and Experimental Advances in Sign language Theory\n \n\n\n\n
\n\n\n\n \n \n \"Question-answerPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{khristoforova:21,\n  title = {Question-answer pairs in Russian Sign Language: a corpus study},\n  author = {Khristoforova, Evgeniia and Kimmelman, Vadim},\n  publisher = {FEAST. Formal and Experimental Advances in Sign language Theory},\n  volume = {4},\n  pages = {101--112},\n  issn = {2565-1781},\n  year = {2021},\n  doi = {10.31009/FEAST.i4.08},\n  url = {https://www.signlab-amsterdam.nl/publications/khristoforova21.pdf},\n  abstract = {We describe basic morphosyntactic and semantic properties of question-answer pairs (QAPs) collected from the online corpus of Russian Sign Language (RSL). We identified two classes of QAPs: classical and discourse QAPs, which are different in the semantic relation between the question and answer parts. We discovered that non-manual marking and word order in both types of QAPs are different from other constructions involving wh-signs, namely regular questions and free relative clauses. Guided by the similarity between non-manual marking of QAPs and role shift marking, we hypothesize about a possible grammaticalization process connecting the two constructions.}\n}\n\n
\n
\n\n\n
\n We describe basic morphosyntactic and semantic properties of question-answer pairs (QAPs) collected from the online corpus of Russian Sign Language (RSL). We identified two classes of QAPs: classical and discourse QAPs, which are different in the semantic relation between the question and answer parts. We discovered that non-manual marking and word order in both types of QAPs are different from other constructions involving wh-signs, namely regular questions and free relative clauses. Guided by the similarity between non-manual marking of QAPs and role shift marking, we hypothesize about a possible grammaticalization process connecting the two constructions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Information structure: Theoretical perspectives.\n \n \n \n \n\n\n \n Vadim Kimmelman, & Roland Pfau.\n\n\n \n\n\n\n In The Routledge Handbook of Theoretical and Experimental Sign Language Research, pages 591–613. Routledge, 2021.\n \n\n\n\n
\n\n\n\n \n \n \"InformationPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@incollection{kimmelman:21,\n  title = {Information structure: Theoretical perspectives},\n  author = {Kimmelman, Vadim and Pfau, Roland},\n  publisher = {Routledge},\n  booktitle = {The Routledge Handbook of Theoretical and Experimental Sign Language Research},\n  pages = {591--613},\n  year = {2021},\n  url = {http://vadimkimmelman.com/papers/Kimmelman%20Pfau%202021%20IS.pdf},\n  abstract = {This chapter discusses the terminology commonly used in the information structure literature: in particular, topic, focus, contrast, and emphasis. An important component of our discussion is the impact of the visual-gestural modality on the syntactic and prosodic encoding of information structure. Kimmelman argued that in RSL and NGT, doubling is also used for information structure-related functions, but proposed that the functions of doubling are better described as foregrounding. Information structure is a field of linguistics covered in numerous books and articles. Information structure in sign languages has also been investigated almost from the first days of sign linguistics; however, as is often the case, most of the available studies focus on a very small number of sign languages, and among these, American Sign Language is the one most prominently represented. The chapter focuses on theoretical research; it also discusses the few available experimental or psycholinguistic studies on information structure in sign languages.}\n}\n\n
\n
\n\n\n
\n This chapter discusses the terminology commonly used in the information structure literature: in particular, topic, focus, contrast, and emphasis. An important component of our discussion is the impact of the visual-gestural modality on the syntactic and prosodic encoding of information structure. Kimmelman argued that in RSL and NGT, doubling is also used for information structure-related functions, but proposed that the functions of doubling are better described as foregrounding. Information structure is a field of linguistics covered in numerous books and articles. Information structure in sign languages has also been investigated almost from the first days of sign linguistics; however, as is often the case, most of the available studies focus on a very small number of sign languages, and among these, American Sign Language is the one most prominently represented. The chapter focuses on theoretical research; it also discusses the few available experimental or psycholinguistic studies on information structure in sign languages.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A descriptive grammar of Sign Language of the Netherlands.\n \n \n \n \n\n\n \n Ulrika Klomp.\n\n\n \n\n\n\n Ph.D. Thesis, 2021.\n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 4 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@phdthesis{klomp:21,\n  title = {A descriptive grammar of Sign Language of the Netherlands},\n  author = {Klomp, Ulrika},\n  year = {2021},\n  publisher = {Netherlands Graduate School of Linguistics {(LOT)}},\n  abstract = {Sign Language of the Netherlands (Nederlandse Gebarentaal, NGT) is a minority language in the Netherlands, but has only recently gained legal recognition of this status. It is estimated that 60,000 people in the Netherlands sign NGT, of whom 10,000 are early onset deaf signers. This book is the first comprehensive descriptive grammar of NGT. It offers a detailed description of its phonology, morphology, and selected aspects of its syntax. Whenever possible, the linguistic phenomena are illustrated by naturalistic corpus data. The grammatical description is complemented by a brief overview of the socio-historical background of NGT and of the Dutch sign language community. The European SIGN-HUB project, of which this dissertation is a result, hosts a platform where a digital version of this grammar can be found, along with video fragments providing illustrations of many of the linguistic characteristics addressed in the book: www.sign-hub.eu/grammar. This resource may be used for cross-linguistic research, for the development of NGT acquisition materials, and as a reference work.},\n  url = {https://www.researchgate.net/profile/Ulrika-Klomp-2/publication/351188644_A_descriptive_grammar_of_Sign_Language_of_the_Netherlands/links/60bdca06458515218f9a1558/A-descriptive-grammar-of-Sign-Language-of-the-Netherlands.pdf}\n}\n\n
\n
\n\n\n
\n Sign Language of the Netherlands (Nederlandse Gebarentaal, NGT) is a minority language in the Netherlands, but has only recently gained legal recognition of this status. It is estimated that 60,000 people in the Netherlands sign NGT, of whom 10,000 are early onset deaf signers. This book is the first comprehensive descriptive grammar of NGT. It offers a detailed description of its phonology, morphology, and selected aspects of its syntax. Whenever possible, the linguistic phenomena are illustrated by naturalistic corpus data. The grammatical description is complemented by a brief overview of the socio-historical background of NGT and of the Dutch sign language community. The European SIGN-HUB project, of which this dissertation is a result, hosts a platform where a digital version of this grammar can be found, along with video fragments providing illustrations of many of the linguistic characteristics addressed in the book: www.sign-hub.eu/grammar. This resource may be used for cross-linguistic research, for the development of NGT acquisition materials, and as a reference work.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iconicity as a mediator between verb semantics and morphosyntactic structure: a corpus-based study on verbs in German Sign Language.\n \n \n \n \n\n\n \n Marloes Oomen.\n\n\n \n\n\n\n Ph.D. Thesis, 2021.\n \n\n\n\n
\n\n\n\n \n \n \"IconicityPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@phdthesis{oomen:20b,\n  title = {Iconicity as a mediator between verb semantics and morphosyntactic structure: a corpus-based study on verbs in German Sign Language},\n  author = {Oomen, Marloes},\n  publisher = {Netherlands Graduate School of Linguistics {(LOT)}},\n  volume = {24},\n  number = {1},\n  pages = {132--141},\n  year = {2021},\n  abstract = {In many sign languages around the world, some verbs can express grammatical agreement with not just one but two arguments, while other verbs do not express agreement at all. Moreover, and rather curiously, there is a remarkable degree of semantic overlap across sign languages between verbs that possess agreement properties. It has been suggested that iconicity has some part to play in this: in sign languages, there is the potential for aspects of verb meaning to be iconically represented in a verb’s form. In this dissertation, I investigate how semantics and morphosyntactic structure interact in constructions containing verbs with varying agreement properties in German Sign Language (DGS), using naturalistic dialogues between signers from the DGS Corpus as the primary data source. I show that certain semantic properties – also known to govern transitivity marking in spoken languages – are predictive of verb type in DGS, where indeed systematic iconic mappings play a mediating role. The results enable the formulation of cross-linguistic predictions about the interplay between verb semantics and verb type in sign languages. A subsequent analysis of a range of morphosyntactic properties of different verb types leads up to the conclusion that even ‘plain’ verbs, in fact, grammatically agree with their arguments. This in turn motivates a unified syntactic analysis in terms of agreement of constructions with verbs that do and do not overtly express it, thus presenting a novel solution to the typological puzzle that supposedly only verbs of a (partially) semantically definable subset agree in DGS and other sign languages.},\n  url = {https://www.researchgate.net/profile/Marloes-Oomen/publication/341178832_Iconicity_as_a_mediator_between_verb_semantics_and_morphosyntactic_structure_A_corpus-based_study_on_verbs_in_German_Sign_Language/links/5eb26e6f92851cbf7fa9492f/Iconicity-as-a-mediator-between-verb-semantics-and-morphosyntactic-structure-A-corpus-based-study-on-verbs-in-German-Sign-Language.pdf}\n}\n\n
\n
\n\n\n
\n In many sign languages around the world, some verbs can express grammatical agreement with not just one but two arguments, while other verbs do not express agreement at all. Moreover, and rather curiously, there is a remarkable degree of semantic overlap across sign languages between verbs that possess agreement properties. It has been suggested that iconicity has some part to play in this: in sign languages, there is the potential for aspects of verb meaning to be iconically represented in a verb’s form. In this dissertation, I investigate how semantics and morphosyntactic structure interact in constructions containing verbs with varying agreement properties in German Sign Language (DGS), using naturalistic dialogues between signers from the DGS Corpus as the primary data source. I show that certain semantic properties – also known to govern transitivity marking in spoken languages – are predictive of verb type in DGS, where indeed systematic iconic mappings play a mediating role. The results enable the formulation of cross-linguistic predictions about the interplay between verb semantics and verb type in sign languages. A subsequent analysis of a range of morphosyntactic properties of different verb types leads up to the conclusion that even ‘plain’ verbs, in fact, grammatically agree with their arguments. This in turn motivates a unified syntactic analysis in terms of agreement of constructions with verbs that do and do not overtly express it, thus presenting a novel solution to the typological puzzle that supposedly only verbs of a (partially) semantically definable subset agree in DGS and other sign languages.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iconicity and Verb Agreement: A Corpus-Based Syntactic Analysis of German Sign Language.\n \n \n \n \n\n\n \n Marloes Oomen.\n\n\n \n\n\n\n Volume 15 De Gruyter Mouton, 2021.\n \n\n\n\n
\n\n\n\n \n \n \"IconicityPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@book{oomen:21,\n  title = {Iconicity and Verb Agreement: A Corpus-Based Syntactic Analysis of German Sign Language},\n  author = {Oomen, Marloes},\n  publisher = {De Gruyter Mouton},\n  volume = {15},\n  year = {2021},\n  url = {https://books.google.nl/books?hl=en&lr=&id=hw5QEAAAQBAJ&oi=fnd&pg=PP13&ots=HiMEkpbZjP&sig=K47YaIrlIk4oedkZASJt1gFNNcM&redir_esc=y#v=onepage&q&f=false},\n  keywords = {textbook}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Our Lives–Our Stories: Life Experiences of Elderly Deaf People.\n \n \n \n \n\n\n \n Roland Pfau, Asli Göksel, & Jana Hosemann.\n\n\n \n\n\n\n Volume 14 De Gruyter Mouton, 2021.\n \n\n\n\n
\n\n\n\n \n \n \"OurPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@book{pfau:21a,\n  title = {Our Lives--Our Stories: Life Experiences of Elderly Deaf People},\n  author = {Pfau, Roland and G{\\"o}ksel, Asli and Hosemann, Jana},\n  volume = {14},\n  year = {2021},\n  keywords = {textbook},\n  publisher = {De Gruyter Mouton},\n  abstract = {Sign languages are non-written languages. Given that the use of digital media and video recordings in documenting sign languages started only some 30 years ago, the life stories of Deaf elderly signers born in the 1930s-1940s have – except for a few scattered fragments in film – not been documented and are therefore under serious threat of being lost. The chapters compiled in this volume document important aspects of past and present experiences of elderly Deaf signers across Europe, as well as in Israel and the United States. Issues addressed include (i) historical events and how they were experienced by Deaf people, (ii) issues of identity and independence, (iii) aspects of language change, (iv) experiences of suppression and discrimination. The stories shared by elderly signers reveal intriguing, yet hidden, aspects of Deaf life. On the negative side, these include experiences of the Deaf in Nazi Germany and occupied countries and harsh practices in educational settings, to name a few. On the positive side, there are stories of resilience and vivid memories of school years and social and professional life. In this way, the volume contributes in a significant way to the preservation of the cultural and linguistic heritage of Deaf communities and sheds light on lesser known aspects against an otherwise familiar background.},\n  doi = {10.1515/9783110701906},\n  url = {https://www.degruyter.com/document/doi/10.1515/9783110701906/html}\n}\n\n
\n
\n\n\n
\n Sign languages are non-written languages. Given that the use of digital media and video recordings in documenting sign languages started only some 30 years ago, the life stories of Deaf elderly signers born in the 1930s-1940s have – except for a few scattered fragments in film – not been documented and are therefore under serious threat of being lost. The chapters compiled in this volume document important aspects of past and present experiences of elderly Deaf signers across Europe, as well as in Israel and the United States. Issues addressed include (i) historical events and how they were experienced by Deaf people, (ii) issues of identity and independence, (iii) aspects of language change, (iv) experiences of suppression and discrimination. The stories shared by elderly signers reveal intriguing, yet hidden, aspects of Deaf life. On the negative side, these include experiences of the Deaf in Nazi Germany and occupied countries and harsh practices in educational settings, to name a few. On the positive side, there are stories of resilience and vivid memories of school years and social and professional life. In this way, the volume contributes in a significant way to the preservation of the cultural and linguistic heritage of Deaf communities and sheds light on lesser known aspects against an otherwise familiar background.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Much more than a treasure: the life stories of elderly Deaf people.\n \n \n \n \n\n\n \n Roland Pfau, Aslı Göksel, & Jana Hosemann.\n\n\n \n\n\n\n In Our Lives–Our Stories: Life Experiences of Elderly Deaf People, volume 14, pages 1–15. De Gruyter Mouton, 2021.\n \n\n\n\n
\n\n\n\n \n \n \"MuchPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@incollection{pfau:21b,\n  title = {Much more than a treasure: the life stories of elderly Deaf people},\n  author = {Pfau, Roland and G{\\"o}ksel, Asl{\\i} and Hosemann, Jana},\n  booktitle = {Our Lives--Our Stories: Life Experiences of Elderly Deaf People},\n  publisher = {De Gruyter Mouton},\n  volume = {14},\n  pages = {1--15},\n  year = {2021},\n  doi = {10.1515/9783110701906-001},\n  url = {https://www.degruyter.com/document/doi/10.1515/9783110701906-001/html}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Pink sign: Identity challenges, choices, and changes among elderly Deaf homosexuals in the Netherlands.\n \n \n \n \n\n\n \n Roland Pfau, Annemieke van Kampen, & Menno Harterink.\n\n\n \n\n\n\n In Our Lives–Our Stories: Life Experiences of Elderly Deaf People, volume 14, pages 129–167. De Gruyter Mouton, 2021.\n \n\n\n\n
\n\n\n\n \n \n \"PinkPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@incollection{pfau:21c,\n  title = {Pink sign: Identity challenges, choices, and changes among elderly Deaf homosexuals in the Netherlands},\n  author = {Pfau, Roland and Kampen, Annemieke van and Harterink, Menno},\n  booktitle = {Our Lives--Our Stories: Life Experiences of Elderly Deaf People},\n  publisher = {De Gruyter Mouton},\n  volume = {14},\n  pages = {129--167},\n  year = {2021},\n  doi = {10.1515/9783110701906-001},\n  url = {https://www.degruyter.com/document/doi/10.1515/9783110701906-006/html}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Number in Sign Languages.\n \n \n \n \n\n\n \n Roland Pfau, & Markus Steinbach.\n\n\n \n\n\n\n In The Oxford Handbook of Grammatical Number, pages 644–660. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"NumberPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@incollection{pfau:21d,\n  title = {Number in Sign Languages},\n  author = {Pfau, Roland and Steinbach, Markus},\n  booktitle = {The Oxford Handbook of Grammatical Number},\n  pages = {644--660},\n  year = {2021},\n  doi = {10.1093/oxfordhb/9780198795858.013.31},\n  url = {https://watermark.silverchair.com/303222023.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAsQwggLABgkqhkiG9w0BBwagggKxMIICrQIBADCCAqYGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMAM4265XWzaPig-A1AgEQgIICd4bTyMHkxH5x94H-3xgHeMC-KCtNdBW0tnBEJ16xiMm57KFcg7pgUnG8rqAhJYSBE3rX0vGkrTnDxt-lMawtzO5h7h-rx2lOIDnKzuByTbc4Tn9fi0TvuGjSqWqbFF7Tv8eU-2UmFcTbU28iHrp64ZGBpMY-8rxAdBW1K-sadkMVh0tzKgrfx76KZaAG2W_EnBX1xfdIW9hMaRrTeJ64rFAaKifrL8FrQ7hkyibqqEwaIMAsNkOE6nxzCRW8uZ8iSdKJMwAdIF5UYt9yvbjmavELriA2fzxYthELFPCfkH6L8wT10B0_DgKVwURP_SzyXrnSwKuIQspmpi9rxtIAPrPPvtULyka9B4mJN3_5bUmWCmHBIzg6dQMjoo9VnbE3LK3scgAr1QsUaZJZOcVMBJWbU9C70m60rItcNpvSBTYurjRjorX6cL9LIlXaTYHKauEyfRibi-5Mrxjya1uVY4j-oruZWuqQdXbtYl2ZiorXaUsHQFbYvuk2LLdlfjgRgpzkwdU-WEjY825ZZJzGjAnrSTovMk62RRKqzpHbzax3okEvARsZvc4u4i5CJOuCmR1UxtPtwwIVC2ruaiakCmpCh2BLx-VbOTmlqHoVLk6cn7-Uq_1T7N3fAKfDyvmH3FSmPl8trwFUSuAM2EHykj93r-Vgc4Fre7WWBFgX7uXv2GTnzHWzXPAUeCoqiqJyivNnkqwXSRkB9mMc2D7TR-76btmjAjnyMMLQ7sN0VuC0TdKPobDMd5gG3FYxZOlIWrTQ5orMacdav7NJNb_zLIhJMDiCtSQiyNIu-EXGCpQ8EFYAZsouLrwWE-QUkCFznZIc9L5Xtr0}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Asymmetry and contrast: Coordination in Sign Language of the Netherlands.\n \n \n \n \n\n\n \n Katharina Hartmann, Roland Pfau, & Iris Legeland.\n\n\n \n\n\n\n Glossa: a journal of general linguistics, 6(1): 1–33. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"AsymmetryPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{hartmann:21,\n  title = {Asymmetry and contrast: Coordination in Sign Language of the Netherlands},\n  author = {Hartmann, Katharina and Pfau, Roland and Legeland, Iris},\n  journal = {Glossa: a journal of general linguistics},\n  volume = {6},\n  number = {1},\n  year = {2021},\n  pages = {1--33},\n  doi = {10.16995/glossa.5872},\n  url = {https://www.glossa-journal.org/article/id/5872/},\n  abstract = {This paper investigates coordination in Sign Language of the Netherlands (NGT). We offer an account for a typologically unusual coordination pattern found in this language. We show that the conjuncts of a coordinated structure in NGT may violate a constraint governing coordinated structures in spoken languages, which we refer to as the ‘Parallel Structure Constraint’. The violation consists in asymmetric fronting in the second conjunct of a coordinated structure. We argue that a violation of the Parallel Structure Constraint is acceptable in NGT in order to express a contrast across the conjuncts. Hence asymmetric reordering in the second conjunct is a strategy that allows signers to obtain the desired strength of marking when in situ marking is insufficient.},\n  publisher={Open Library of Humanities}\n}\n\n
\n
\n\n\n
\n This paper investigates coordination in Sign Language of the Netherlands (NGT). We offer an account for a typologically unusual coordination pattern found in this language. We show that the conjuncts of a coordinated structure in NGT may violate a constraint governing coordinated structures in spoken languages, which we refer to as the ‘Parallel Structure Constraint’. The violation consists in asymmetric fronting in the second conjunct of a coordinated structure. We argue that a violation of the Parallel Structure Constraint is acceptable in NGT in order to express a contrast across the conjuncts. Hence asymmetric reordering in the second conjunct is a strategy that allows signers to obtain the desired strength of marking when in situ marking is insufficient.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The Routledge Handbook of Theoretical and Experimental Sign Language Research.\n \n \n \n \n\n\n \n Josep Quer, Roland Pfau, & Annika Herrmann.\n\n\n \n\n\n\n Routledge, 2021.\n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@book{quer:21,\n  title = {The Routledge Handbook of Theoretical and Experimental Sign Language Research},\n  author = {Quer, Josep and Pfau, Roland and Herrmann, Annika},\n  publisher = {Routledge},\n  year = {2021},\n  doi = {10.4324/9781315754499},\n  url = {https://www.taylorfrancis.com/books/edit/10.4324/9781315754499/routledge-handbook-theoretical-experimental-sign-language-research-josep-quer-roland-pfau-annika-herrmann},\n  keywords = {textbook},\n  abstract = {The Routledge Handbook of Theoretical and Experimental Sign Language Research bridges the divide between theoretical and experimental approaches to provide an up-to-date survey of key topics in sign language research. With 29 chapters written by leading and emerging scholars from around the world, this Handbook covers the following key areas: On the theoretical side, all crucial aspects of sign language grammar studied within formal frameworks such as Generative Grammar;   On the experimental side, theoretical accounts are supplemented by experimental evidence gained in psycho- and neurolinguistic studies;  On the descriptive side, the main phenomena addressed in the reviewed scholarship are summarized in a way that is accessible to readers without previous knowledge of sign languages. Each chapter features an introduction, an overview of existing research, and a critical assessment of hypotheses and findings. The Routledge Handbook of Theoretical and Experimental Sign Language Research is key reading for all advanced students and researchers working at the intersection of sign language research, linguistics, psycholinguistics, and neurolinguistics.}\n}\n\n
\n
\n\n\n
\n The Routledge Handbook of Theoretical and Experimental Sign Language Research bridges the divide between theoretical and experimental approaches to provide an up-to-date survey of key topics in sign language research. With 29 chapters written by leading and emerging scholars from around the world, this Handbook covers the following key areas: On the theoretical side, all crucial aspects of sign language grammar studied within formal frameworks such as Generative Grammar;   On the experimental side, theoretical accounts are supplemented by experimental evidence gained in psycho- and neurolinguistic studies;  On the descriptive side, the main phenomena addressed in the reviewed scholarship are summarized in a way that is accessible to readers without previous knowledge of sign languages. Each chapter features an introduction, an overview of existing research, and a critical assessment of hypotheses and findings. The Routledge Handbook of Theoretical and Experimental Sign Language Research is key reading for all advanced students and researchers working at the intersection of sign language research, linguistics, psycholinguistics, and neurolinguistics.\n
\n\n\n
\n\n\n \n\n\n \n\n\n \n\n\n
\n \n\n \n \n \n \n \n \n Use of a modern avatar for sign language synthesis: Visualisation of non-manual NGT gestures.\n \n \n \n \n\n\n \n Alon Shilo.\n\n\n \n\n\n\n 2021.\n Bachelor's thesis, University of Amsterdam.\n\n\n\n
\n\n\n\n \n \n \"UsePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 3 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@unpublished{Shilo:21,\n  title = {{Use of a modern avatar for sign language synthesis: Visualisation of non-manual NGT gestures}},\n  author = {Shilo, Alon},\n  year = {2021},\n  url = {https://dspace.uba.uva.nl/bitstreams/fc79c65e-e35c-411e-bced-052a55ed8300/download},\n  keywords = {bachelorsthesis},\n  abstract = {This thesis explores an implementation for visualising the translation from Dutch sentences to Sign Language of the Netherlands (NGT). This is one of three theses, each focusing on different parts of the avatar, which are eventually combined into one program. For this thesis, the goal is to implement a part of the non-manual side of NGT to create a proof of concept. The non-manual body parts of NGT include the head, face, shoulders and upper body. NGT is a complex language. Therefore, it will take more time to completely visualize NGT using a new modern avatar, but this thesis provides a foundation for further research and implementation of the non-manual gestures of NGT.},\n  note = {Bachelor's thesis, University of Amsterdam.}\n}\n\n
\n
\n\n\n
\n This thesis explores an implementation for visualising the translation from Dutch sentences to Sign Language of the Netherlands (NGT). This is one of three theses, each focusing on different parts of the avatar, which are eventually combined into one program. For this thesis, the goal is to implement a part of the non-manual side of NGT to create a proof of concept. The non-manual body parts of NGT include the head, face, shoulders and upper body. NGT is a complex language. Therefore, it will take more time to completely visualize NGT using a new modern avatar, but this thesis provides a foundation for further research and implementation of the non-manual gestures of NGT.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Use of a modern avatar for sign language synthesis: Hand animations and the signing space.\n \n \n \n \n\n\n \n Matthijs van de Vijver.\n\n\n \n\n\n\n 2021.\n Bachelor's thesis, University of Amsterdam.\n\n\n\n
\n\n\n\n \n \n \"UsePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@unpublished{Vijver:21,\n  title = {Use of a modern avatar for sign language synthesis: Hand animations and the signing space},\n  author = {Vijver, Matthijs van de},\n  year = {2021},\n  url = {https://dspace.uba.uva.nl/bitstreams/128a1797-658b-49c2-8562-9cd1d3f81872/download},\n  keywords = {bachelorsthesis},\n  abstract = {For deaf people, the written and spoken language of their country is often their second language. The first language for most deaf people is the sign language of their country. Because deaf people often have a reading and writing delay and not many hearing people speak sign language, a barrier is formed between deaf people who speak sign language and hearing people who do not. To overcome this problem, ’signing avatars’ have been researched in the past decade: 3D computer models that can perform sign language. For Dutch Sign Language and many other sign languages, the animation techniques that are used for this are deprecated. This research is focused on using modern techniques for animating these signing avatars. A model was built that shows potential in using these modern techniques to perform signs that are described in a dedicated markup language.},\n  note = {Bachelor's thesis, University of Amsterdam.}\n}\n\n\n
\n
\n\n\n
\n For deaf people, the written and spoken language of their country is often their second language. The first language for most deaf people is the sign language of their country. Because deaf people often have a reading and writing delay and not many hearing people speak sign language, a barrier is formed between deaf people who speak sign language and hearing people who do not. To overcome this problem, ’signing avatars’ have been researched in the past decade: 3D computer models that can perform sign language. For Dutch Sign Language and many other sign languages, the animation techniques that are used for this are deprecated. This research is focused on using modern techniques for animating these signing avatars. A model was built that shows potential in using these modern techniques to perform signs that are described in a dedicated markup language.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modern avatar simulation for sign language synthesis: Hand configurations of the SiGML.\n \n \n \n \n\n\n \n Mark van Hofwegen.\n\n\n \n\n\n\n 2021.\n Bachelor's thesis, University of Amsterdam.\n\n\n\n
\n\n\n\n \n \n \"ModernPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@unpublished{Hofwegen:21,\n  title = {Modern avatar simulation for sign language synthesis: Hand configurations of the SiGML},\n  author = {Hofwegen, Mark van},\n  year = {2021},\n  url = {https://dspace.uba.uva.nl/bitstreams/473c469e-4102-4133-9e53-e377263d5782/download},\n  keywords = {bachelorsthesis},\n  abstract = {Loss of hearing at a young age has devastating effects on the development of a child. One of these effects is a learning deficit. This learning deficit can be strongly mitigated by introducing sign language as soon as deafness is diagnosed. However, most deaf children are born to hearing parents who cannot speak Dutch Sign Language (NGT). Language acquisition of NGT is difficult for Dutch-speaking people, since NGT has its own grammar and vocabulary. The main purpose of this thesis is to find out if modern avatar simulation techniques can be used to create a virtual sign language avatar to aid the learning process of NGT. Results show that modern software is applicable, but three main components are needed by the software in order to do so: a phonetic representation of the sign, a compatible rig with as many control points as needed by the phonetic representation, and animation software to alter the orientation/position of the control points.},\n  note = {Bachelor's thesis, University of Amsterdam.}\n}\n\n\n
\n
\n\n\n
\n Loss of hearing at a young age has devastating effects on the development of a child. One of these effects is a learning deficit. This learning deficit can be strongly mitigated by introducing sign language as soon as deafness is diagnosed. However, most deaf children are born to hearing parents who cannot speak Dutch Sign Language (NGT). Language acquisition of NGT is difficult for Dutch-speaking people, since NGT has its own grammar and vocabulary. The main purpose of this thesis is to find out if modern avatar simulation techniques can be used to create a virtual sign language avatar to aid the learning process of NGT. Results show that modern software is applicable, but three main components are needed by the software in order to do so: a phonetic representation of the sign, a compatible rig with as many control points as needed by the phonetic representation, and animation software to alter the orientation/position of the control points.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture.\n \n \n \n \n\n\n \n Dilay Z Karadoller, Beyza Sümer, Ercenur Ünal, & Aslı Özyürek.\n\n\n \n\n\n\n In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 43, 2021. \n \n\n\n\n
\n\n\n\n \n \n \"SpatialPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{karadoller:21a,\n  title = {Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture},\n  author = {Karadoller, Dilay Z and S{\\"u}mer, Beyza and {\\"U}nal, Ercenur and {\\"O}zy{\\"u}rek, Asl{\\i}},\n  booktitle = {Proceedings of the Annual Meeting of the Cognitive Science Society},\n  volume = {43},\n  number = {43},\n  year = {2021},\n  issn = {1069-7977},\n  url = {https://escholarship.org/content/qt4vp063gj/qt4vp063gj.pdf?t=qwi375&v=lg},\n  abstract = {There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy and whether and how language modality of these encodings modulates memory accuracy differently. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and later tested for their memory accuracy of these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions in sign, speech, or speech-plus-gesture did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.}\n}\n\n
\n
\n\n\n
\n There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy and whether and how language modality of these encodings modulates memory accuracy differently. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and later tested for their memory accuracy of these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions in sign, speech, or speech-plus-gesture did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effects and non-effects of late language exposure on spatial language development: Evidence from deaf adults and children.\n \n \n \n \n\n\n \n Dilay Z Karadöller, Beyza Sümer, & Aslı Özyürek.\n\n\n \n\n\n\n Language Learning and Development, 17(1): 1–25. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"EffectsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{karadoller:21b,\n  title = {Effects and non-effects of late language exposure on spatial language development: Evidence from deaf adults and children},\n  author = {Karad{\\"o}ller, Dilay Z and S{\\"u}mer, Beyza and {\\"O}zy{\\"u}rek, Asl{\\i}},\n  publisher = {Taylor \\& Francis},\n  journal = {Language Learning and Development},\n  volume = {17},\n  number = {1},\n  pages = {1--25},\n  year = {2021},\n  doi = {10.1080/15475441.2020.1823846},\n  url = {https://www.tandfonline.com/doi/pdf/10.1080/15475441.2020.1823846?needAccess=true},\n  abstract = {Late exposure to the first language, as in the case of deaf children with hearing parents, hinders the production of linguistic expressions, even in adulthood. Less is known about the development of language soon after language exposure and if late exposure hinders all domains of language in children and adults. We compared late signing adults and children (MAge = 8;5) 2 years after exposure to sign language, to their age-matched native signing peers in expressions of two types of locative relations that are acquired in certain cognitive-developmental order: view-independent (IN-ON-UNDER) and view-dependent (LEFT-RIGHT). Late signing children and adults differed from native signers in their use of linguistic devices for view-dependent relations but not for view-independent relations. These effects were also modulated by the morphological complexity. Hindering effects of late language exposure on the development of language in children and adults are not absolute but are modulated by cognitive and linguistic complexity.}\n}\n\n
\n
\n\n\n
\n Late exposure to the first language, as in the case of deaf children with hearing parents, hinders the production of linguistic expressions, even in adulthood. Less is known about the development of language soon after language exposure and if late exposure hinders all domains of language in children and adults. We compared late signing adults and children (MAge = 8;5) 2 years after exposure to sign language, to their age-matched native signing peers in expressions of two types of locative relations that are acquired in certain cognitive-developmental order: view-independent (IN-ON-UNDER) and view-dependent (LEFT-RIGHT). Late signing children and adults differed from native signers in their use of linguistic devices for view-dependent relations but not for view-independent relations. These effects were also modulated by the morphological complexity. Hindering effects of late language exposure on the development of language in children and adults are not absolute but are modulated by cognitive and linguistic complexity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The effects of iconicity and conventionalisation on word order preferences.\n \n \n \n \n\n\n \n Yasamin Motamedi, Lucie Wolters, Marieke Schouwstra, & Simon Kirby.\n\n\n \n\n\n\n 2021.\n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@misc{motamedi:21a,\n\ttitle = {The effects of iconicity and conventionalisation on word order preferences},\n\tauthor = {Motamedi, Yasamin and Wolters, Lucie and Schouwstra, Marieke and Kirby, Simon},\n  publisher = {PsyArXiv},\n  year = {2021},\n  doi = {10.31234/osf.io/u5amg},\n\turl = {https://doi.org/10.31234/osf.io/u5amg},\n  abstract = {Of the 6 possible orderings of the 3 main constituents of language (subject, verb and object), two — SOV and SVO — are predominant cross-linguistically. Previous research using the silent gesture paradigm, in which hearing participants produce or respond to gestures without speech, has shown that different factors such as reversibility, salience and animacy can affect the preferences for different orders. Here, we test whether participants’ preferences for orders that are conditioned on the semantics of the event change depending on i) the iconicity of individual gestural elements and ii) the prior knowledge of a conventional lexicon. Our findings demonstrate the same preference for semantically-conditioned word order found in previous studies, specifically that SOV and SVO are preferred differentially for different types of events. We do not find that iconicity of individual gestures affects participants’ ordering preferences; however, we do find that learning a lexicon leads to a stronger preference for SVO-like orders overall. Finally, we compare our findings from English speakers, using an SVO-dominant language, with data from speakers of an SOV-dominant language, Turkish. We find that, while learning a lexicon leads to an increase in SVO preference for both sets of participants, this effect is mediated by language background and event type, suggesting that an interplay of factors together determine preferences for different ordering patterns. Taken together, our results support a view of word order as a gradient phenomenon responding to multiple biases.}\n}\n\n
\n
\n\n\n
\n Of the 6 possible orderings of the 3 main constituents of language (subject, verb and object), two — SOV and SVO — are predominant cross-linguistically. Previous research using the silent gesture paradigm, in which hearing participants produce or respond to gestures without speech, has shown that different factors such as reversibility, salience and animacy can affect the preferences for different orders. Here, we test whether participants’ preferences for orders that are conditioned on the semantics of the event change depending on i) the iconicity of individual gestural elements and ii) the prior knowledge of a conventional lexicon. Our findings demonstrate the same preference for semantically-conditioned word order found in previous studies, specifically that SOV and SVO are preferred differentially for different types of events. We do not find that iconicity of individual gestures affects participants’ ordering preferences; however, we do find that learning a lexicon leads to a stronger preference for SVO-like orders overall. Finally, we compare our findings from English speakers, using an SVO-dominant language, with data from speakers of an SOV-dominant language, Turkish. We find that, while learning a lexicon leads to an increase in SVO preference for both sets of participants, this effect is mediated by language background and event type, suggesting that an interplay of factors together determine preferences for different ordering patterns. Taken together, our results support a view of word order as a gradient phenomenon responding to multiple biases.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n I Know You Know I’m Signaling: Novel gestures are designed to guide observers’ inferences about communicative goals.\n \n \n \n \n\n\n \n Amanda Royka, Marieke Schouwstra, Simon Kirby, & Julian Jara-Ettinger.\n\n\n \n\n\n\n 2021.\n \n\n\n\n
\n\n\n\n \n \n \"IPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@misc{royka:21,\n  title = {I Know You Know I’m Signaling: Novel gestures are designed to guide observers’ inferences about communicative goals},\n  author = {Royka, Amanda and Schouwstra, Marieke and Kirby, Simon and Jara-Ettinger, Julian},\n  publisher = {PsyArXiv},\n  year = {2021},\n  doi = {10.31234/osf.io/2h5vu},\n  url = {psyarxiv.com/2h5vu},\n  abstract = {For a gesture to be successful, observers must recognize its communicative purpose. Are communicators sensitive to this problem and do they try to ease their observer’s inferential burden? We propose that people shape their gestures to help observers easily infer that their movements are meant to communicate. Using computational models of recursive goal inference, we show that this hypothesis predicts that gestures ought to reveal that the movement is inconsistent with the space of non-communicative goals in the environment. In two gesture-design experiments, we find that people spontaneously shape communicative movements in response to the distribution of potential instrumental goals, ensuring that the movement can be easily differentiated from instrumental action. Our results show that people are sensitive to the inferential demands that observers face. As a result, people actively work to help ensure that the goal of their communicative movement is understood.}\n}\n\n
\n
\n\n\n
\n For a gesture to be successful, observers must recognize its communicative purpose. Are communicators sensitive to this problem and do they try to ease their observer’s inferential burden? We propose that people shape their gestures to help observers easily infer that their movements are meant to communicate. Using computational models of recursive goal inference, we show that this hypothesis predicts that gestures ought to reveal that the movement is inconsistent with the space of non-communicative goals in the environment. In two gesture-design experiments, we find that people spontaneously shape communicative movements in response to the distribution of potential instrumental goals, ensuring that the movement can be easily differentiated from instrumental action. Our results show that people are sensitive to the inferential demands that observers face. As a result, people actively work to help ensure that the goal of their communicative movement is understood.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The emergence of systematic argument distinctions in artificial sign languages.\n \n \n \n \n\n\n \n Yasamin Motamedi, Kenny Smith, Marieke Schouwstra, Jennifer Culbertson, & Simon Kirby.\n\n\n \n\n\n\n Journal of Language Evolution, 6(2): 77–98. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{motamedi:21b,\n  title = {The emergence of systematic argument distinctions in artificial sign languages},\n  author = {Motamedi, Yasamin and Smith, Kenny and Schouwstra, Marieke and Culbertson, Jennifer and Kirby, Simon},\n  publisher = {Oxford University Press},\n  journal = {Journal of Language Evolution},\n  volume = {6},\n  number = {2},\n  pages = {77--98},\n  year = {2021},\n  doi = {10.1093/jole/lzab002},\n  url = {https://academic.oup.com/jole/article/6/2/77/6303767},\n  abstract = {Word order is a key property by which languages indicate the relationship between a predicate and its arguments. However, sign languages use a number of other modality-specific tools in addition to word order such as spatial agreement, which has been likened to verbal agreement in spoken languages, and role shift, where the signer takes on characteristics of propositional agents. In particular, data from emerging sign languages suggest that, though some use of a conventional word order can appear within a few generations, systematic spatial modulation as a grammatical feature takes time to develop. We experimentally examine the emergence of systematic argument marking beyond word order, investigating how artificial gestural systems evolve over generations of participants in the lab. We find that participants converge on different strategies to disambiguate clause arguments, which become more consistent through the use and transmission of gestures; in some cases, this leads to conventionalized iconic spatial contrasts, comparable to those found in natural sign languages. We discuss how our results connect with theoretical issues surrounding the analysis of spatial agreement and role shift in established and newly emerging sign languages, and the possible mechanisms behind its evolution.}\n}\n\n
Regularisation, systematicity and naturalness in a silent gesture learning task.
Yasamin Motamedi, Lucie Wolters, Danielle Naegeli, Marieke Schouwstra, & Simon Kirby.
In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 43, 2021.

@inproceedings{motamedi:21c,
  title = {Regularisation, systematicity and naturalness in a silent gesture learning task},
  author = {Motamedi, Yasamin and Wolters, Lucie and Naegeli, Danielle and Schouwstra, Marieke and Kirby, Simon},
  booktitle = {Proceedings of the Annual Meeting of the Cognitive Science Society},
  volume = {43},
  number = {43},
  year = {2021},
  url = {https://escholarship.org/content/qt8xf3216h/qt8xf3216h.pdf?t=qwi3pz&v=lg},
  abstract = {Typological analysis of the world’s languages shows that, of the 6 possible basic word orders, SOV and SVO orders are predominant, a preference supported by experimental studies in which participants improvise gestures to describe events. Silent gesture studies have also provided evidence for natural ordering patterns, where SOV and SVO orders are used selectively depending on the semantics of the event, a finding recently supported by data from natural sign languages. We present an artificial language learning task using gesture to ask to what extent preferences for natural ordering patterns, in addition to biases for regular languages, are at play during learning in the manual modality.}
}

Constituent order in silent gesture reflects the perspective of the producer.
Fiona Kirton, Simon Kirby, Kenny Smith, Jennifer Culbertson, & Marieke Schouwstra.
Journal of Language Evolution, 6(1): 54–76. 2021.

@article{kirton:21,
  title = {Constituent order in silent gesture reflects the perspective of the producer},
  author = {Kirton, Fiona and Kirby, Simon and Smith, Kenny and Culbertson, Jennifer and Schouwstra, Marieke},
  publisher = {Oxford University Press},
  journal = {Journal of Language Evolution},
  volume = {6},
  number = {1},
  pages = {54--76},
  year = {2021},
  doi = {10.1093/jole/lzaa010},
  url = {https://academic.oup.com/jole/article/6/1/54/6179035},
  abstract = {Understanding the relationship between human cognition and linguistic structure is a central theme in language evolution research. Numerous studies have investigated this question using the silent gesture paradigm in which participants describe events using only gesture and no speech. Research using this paradigm has found that Agent–Patient–Action (APV) is the most commonly produced gesture order, regardless of the producer’s native language. However, studies have uncovered a range of factors that influence ordering preferences. One such factor is salience, which has been suggested as a key determiner of word order. Specifically, humans, who are typically agents, are more salient than inanimate objects, so tend to be mentioned first. In this study, we investigated the role of salience in more detail and asked whether manipulating the salience of a human agent would modulate the tendency to express humans before objects. We found, first, that APV was less common than expected based on previous literature. Secondly, salience influenced the relative ordering of the patient and action, but not the agent and patient. For events involving a non-salient agent, participants typically expressed the patient before the action and vice versa for salient agents. Thirdly, participants typically omitted non-salient agents from their descriptions. We present details of a novel computational solution that infers the orders participants would have produced had they expressed all three constituents on every trial. Our analysis showed that events involving salient agents tended to elicit AVP; those involving a non-salient agent were typically described with APV, modulated by a strong tendency to omit the agent. We argue that these findings provide evidence that the effect of salience is realized through its effect on the perspective from which a producer frames an event.}
}

2020 (16)

Learning to use space: A study into the SL2 acquisition process of adult learners of Sign Language of the Netherlands.
Eveline Boers-Visker.
Ph.D. Thesis, Netherlands Graduate School of Linguistics (LOT), 2020.

@phdthesis{boers:21,
  title = {Learning to use space: A study into the SL2 acquisition process of adult learners of Sign Language of the Netherlands},
  author = {Boers-Visker, Eveline},
  publisher = {Netherlands Graduate School of Linguistics {(LOT)}},
  year = {2020},
  url = {https://www.lotpublications.nl/Documents/569_fulltext.pdf},
  abstract = {This dissertation addresses the acquisition of Sign Language of the Netherlands (Nederlandse Gebarentaal, NGT) in adult learners with a spoken language background. These learners acquire a new language in a new modality, the visual-spatial modality, which differs from the oral-auditory modality of their native language, Dutch. One of the modality-specific linguistic features attested in signed languages, but not in spoken languages, is the use of space to express grammatical and topographical relations. Our knowledge of the acquisition of linguistic devices related to the use of space (e.g., pointing signs, agreement verbs, classifier predicates and signs marked for location), and of appropriate pedagogical practices to teach these structures, is very limited. This thesis contributes to filling this gap by improving our understanding of processes underlying the acquisition of these devices in L2-learners of NGT, and by investigating whether certain pedagogical practices, which have been shown to be effective for L2-learners of a spoken language, would facilitate the acquisition of these devices. Four studies were carried out. The first three studies, in which we analyze (semi-)natural and elicited production data of novel NGT learners who were followed longitudinally, serve as basis for the fourth study, in which we investigate whether learners benefit from pedagogical interventions aimed at focusing their attention on the form-meaning mappings of one of the devices under investigation, agreement verb forms. This dissertation provides valuable information for practitioners in the field, and adds to our understanding of the intersecting fields of sign language linguistics, second language acquisition and pedagogy, as well as gesture studies.}
}

Space oddities: The acquisition of agreement verbs by L2 learners of Sign Language of the Netherlands.
Eveline Boers-Visker, & Roland Pfau.
The Modern Language Journal, 104(4): 757–780. 2020.

@article{boers:20,
  title = {Space oddities: The acquisition of agreement verbs by L2 learners of Sign Language of the Netherlands},
  author = {Boers-Visker, Eveline and Pfau, Roland},
  publisher = {Wiley Online Library},
  journal = {The Modern Language Journal},
  volume = {104},
  number = {4},
  pages = {757--780},
  year = {2020},
  doi = {10.1111/modl.12676},
  url = {https://onlinelibrary.wiley.com/doi/full/10.1111/modl.12676},
  abstract = {This article reports the results of the first longitudinal study that systematically investigates the acquisition of verb agreement by hearing learners of a sign language. During a 2-year period, 14 novel learners of Sign Language of the Netherlands (NGT) with a spoken language background performed an elicitation task 15 times. Seven deaf native signers and NGT teachers performed the same task to serve as a benchmark group. The results obtained show that for some learners, the verb agreement system of NGT was difficult to master, despite numerous examples in the input. As compared to the benchmark group, learners tended to omit agreement markers on verbs that could be modified, did not always correctly use established locations associated with discourse referents, and made characteristic errors with respect to properties that are important in the expression of agreement (movement and orientation). The outcomes of the study are of value to practitioners in the field, as they are informative with regard to the nature of the learning process during the first stages of learning a sign language.}
}

Fill the gap: A novel test to elicit nominal plurals in Sign Language of the Netherlands.
Cindy van Boven.
In FEAST. Formal and Experimental Advances in Sign Language Theory, volume 3, pages 56–67, 2020.

@inproceedings{boven:21b,
  title = {Fill the gap: A novel test to elicit nominal plurals in Sign Language of the Netherlands},
  author = {Boven, Cindy van},
  publisher = {FEAST. Formal and Experimental Advances in Sign Language Theory},
  volume = {3},
  pages = {56--67},
  year = {2020},
  doi = {10.31009/FEAST.i3.05},
  url = {https://www.signlab-amsterdam.nl/publications/boven21b.pdf},
  abstract = {The present study introduces a novel gap-filling test to elicit plural nouns in Sign Language of the Netherlands (NGT). As of yet, nominal plurals in NGT have not been described in detail, as eliciting plural nouns is not without challenges. In previous research on NGT (Zwitserlood and Nijhof 1999), native signers were asked to describe pictures of plural objects. However, when describing pictures, the signers automatically also expressed the spatial distribution of the objects depicted on the stimulus picture, using localization. As a consequence, it remains unclear what ‘pure’ plurals – without localization – look like. The goal of our gap-filling task is to disentangle pluralization from localization: participants are asked to insert plural nouns in signed sentence contexts where the spatial distribution of the referents is irrelevant. After piloting the task, five deaf native signers participated. The task succeeded in eliciting pure plural forms that were not spatially distributed, and the results show that NGT optionally employs reduplication to mark the pure plural of nouns. We conclude that our gap-filling task successfully controls for localization, targeting the desired structure without using written language. In future studies, the gap-filling task can be applied to other sign languages, targeting also other construction types.}
}

Argument structure of classifier predicates in Russian Sign Language.
Vadim Kimmelman, Roland Pfau, & Enoch O Aboh.
Natural Language & Linguistic Theory, 38(2): 539–579. 2020.

@article{kimmelman:20,
  title = {Argument structure of classifier predicates in Russian Sign Language},
  author = {Kimmelman, Vadim and Pfau, Roland and Aboh, Enoch O},
  publisher = {Springer},
  journal = {Natural Language \& Linguistic Theory},
  volume = {38},
  number = {2},
  pages = {539--579},
  year = {2020},
  doi = {10.1007/s11049-019-09448-9},
  url = {https://link.springer.com/content/pdf/10.1007/s11049-019-09448-9.pdf},
  abstract = {We analyze classifier predicates in Russian Sign Language (RSL) using a combination of naturalistic corpus and elicited data in order to determine their argument structure, and to test the generalization, based on research on other sign languages, that there is a clear relation between argument structure and classifier type (Benedicto and Brentari 2004). We propose that whole-entity classifier predicates are intransitive unaccusative, and that body-part classifier predicates are optionally transitive. Contrary to previous research on other sign languages, we argue that handling classifier predicates in RSL describe complex events with two subevents: one of handling, and one of movement, which are not necessarily causally connected. We further suggest that the ‘moving legs’ classifier predicate in RSL also describes a complex event consisting of two subevents. To account for these facts, we develop a formal analysis of classifier predicates in RSL. Specifically, we argue that whole-entity and body-part classifier handshapes are agreement markers, while handling classifier handshapes as well as the ‘moving legs’ classifier handshape represent an argument in combination with a verbal root. This casts doubt on the observation made in the literature that classifiers straightforwardly determine the argument structure of classifier predicates, since different classifiers in RSL represent different grammatical phenomena. In addition, we show that event structures associated with some classifier predicates are more complex than those associated with monoclausal structures in spoken languages.}
}

Syntax of relativization in Russian Sign Language: Basic features.
Evgeniia Khristoforova, & Vadim Kimmelman.
Voprosy Jazykoznanija, 6: 48–65. 2020.

@article{Khristoforova:20,
  title = {Syntax of relativization in Russian Sign Language: Basic features},
  author = {Khristoforova, Evgeniia and Kimmelman, Vadim},
  journal = {Voprosy Jazykoznanija},
  volume = {6},
  pages = {48--65},
  year = {2020},
  url = {https://doi.org/10.31857/0373-658X.2020.6.48-65},
  doi = {10.31857/0373-658X.2020.6.48-65},
  keywords = {Relative constructions, Russian Sign Language, sign languages},
  abstract = {This paper provides a first syntactic description of relativization in Russian Sign Language (RSL). We collected production data from nine signers performing a picture-based task. The signers produced 88 instances of relative constructions with the head noun being the subject or direct object in the main clause and in the relative clause. We found that RSL has head-external (postnominal) relative clauses, head-internal relative clauses, and double-headed relative clauses. Relative clauses might also be extra-posed to the sentence-final position or fronted. The main clause may be doubled, so that a part of it is repeated after the relative clause. Relative clauses might contain optional relative elements WHICH and INDEX, in clause-initial or clause-final position, or in both positions, and the two elements can co-occur. Finally, we found that relative clauses are nearly always prosodically separate from the main clause. The most frequent non-manual markers in relative construction are eye blinks; in addition, head leans and turns, eyebrow raise, and squints are sometimes used. However, no marker is specialized for marking the relative clause itself: they either are simply markers of boundaries of prosodic units (eye blinks), or they have some other functions (which we cannot fully identify yet). We conclude that RSL generally fits patterns found in other spoken and signed languages. However, we also observe specific differences, especially in the domain of non-manual marking.}
}

From meaning to form and back in American Sign Language verbal classifier morphemes.
Vanja de Lint.
Word Structure, 13(1): 69–101. 2020.

@article{lint:20,
  title = {From meaning to form and back in American Sign Language verbal classifier morphemes},
  author = {Lint, Vanja de},
  publisher = {Edinburgh University Press},
  journal = {Word Structure},
  volume = {13},
  number = {1},
  pages = {69--101},
  year = {2020},
  doi = {10.3366/word.2020.0160},
  url = {https://www.euppublishing.com/doi/10.3366/word.2020.0160},
  abstract = {In a seminal paper, Benedicto & Brentari (2004) present a theoretical proposal in which they analyze American Sign Language (ASL) classifier morphemes as instantiations of functional heads F1 and F2 that determine the external or internal position of the argument that lands in their specifier through a structural agreement relation. It has served as a ground for several follow-up studies investigating argument structure in sign language classifier constructions. However, their proposal requires both theoretical amendment and empirical corroboration. In this paper, I critically assess the proposal by Benedicto & Brentari (2004) and provide empirical support for a modified version.}
}

Spatial verbs are demonstration verbs.
Marloes Oomen.
Revista Linguística, 16(3): 227–249. 2020.

@article{oomen:20a,
  title = {Spatial verbs are demonstration verbs},
  author = {Oomen, Marloes},
  journal = {Revista Lingu{\'\i}stica},
  volume = {16},
  number = {3},
  pages = {227--249},
  year = {2020},
  doi = {10.31513/linguistica.2020.v16n3a36966},
  url = {https://www.researchgate.net/profile/Marloes-Oomen/publication/349591032_Spatial_verbs_are_demonstration_verbs/links/60379931299bf1cc26edcac2/Spatial-verbs-are-demonstration-verbs.pdf},
  abstract = {The literature has been divided over the question of whether spatial verbs should be subsumed into a single verb class with agreement verbs. The main point of contention has been that, even if the nature of the elements that these verb types agree with differs, the morphosyntactic mechanism, i.e. a path movement, appears to be the same. Contributing to this debate, this corpus-based study scrutinizes the morphosyntactic properties of a set of spatial verbs in German Sign Language (DGS). It is shown that spatial verbs display striking variability in where they begin and end their movement in space. They may align with locations or person loci, but often they simply mark arbitrary locations, which may convey meaningful yet less specific information about the (direction of) movement of a referent relative to the signer. Furthermore, null subjects are found to occur remarkably often in constructions with spatial verbs, despite the absence of systematic subject marking on the verb itself. These results stand in contrast with those reported for regular agreement verbs in DGS (OOMEN, 2020), and thus provide support for a distinction between the two types. It is proposed that spatial verbs in DGS involve a demonstration component (cf. DAVIDSON, 2015) which ensures the recoverability of referents involved in the event denoted by the verb, thus loosening the restrictions on both agreement marking and subject drop that apply to regular agreement verbs. As such, spatial verbs are argued to be somewhere in between conventionalized lexical verbs and classifier predicates.}
}

Multidimensionality in Sign Language Synthesis: Translation of Dutch into Sign Language of the Netherlands.
Adriana Corsel.
2020. Bachelor's thesis, University of Amsterdam.

@unpublished{Corsel:20,
  title = {{Multidimensionality in Sign Language Synthesis: Translation of Dutch into Sign Language of the Netherlands}},
  author = {Corsel, Adriana},
  year = {2020},
  url = {https://dspace.uba.uva.nl/server/api/core/bitstreams/0bb6b586-7133-49ad-b722-2cfc08555dcc/content},
  keywords = {bachelorsthesis},
  abstract = {This paper explores the implementation of a translator to sign language, using the JASigning avatar software. Specifically, it addresses the translation of Dutch into Sign Language of the Netherlands (NGT). As Dutch has been translated to a textual version of NGT in previous research, this research translated from textual NGT to synthesised NGT, the output signed by an avatar. This is one of three papers, each focusing on a different aspect of the translator: Lexical Resources, the Signing Space, and Multidimensionality. This paper pertains to the latter, and thus attempts to implement multiple dimensions into the avatar: the manual and non-manual component, which are both necessary to properly articulate a sign. The non-manual component of a sign is, for example, a facial expression, a head shake, or the posture of the signer. Specifically, it attempts to implement the non-manual markers paired with interrogative and negative constructs in NGT. Although the scope of this project is limited, it could provide a decent foundation for further research.},
  note = {Bachelor's thesis, University of Amsterdam.}
}

The signing space for the synthesis of directional verbs in NGT.
Shani Mende-Gillings.
2020. Bachelor's thesis, University of Amsterdam.

@unpublished{Mende-Gillings:20,
  title = {{The signing space for the synthesis of directional verbs in NGT}},
  author = {Mende-Gillings, Shani},
  year = {2020},
  url = {https://dspace.uba.uva.nl/server/api/core/bitstreams/6d3b7a8d-3927-4032-8837-a91659a48a9d/content},
  keywords = {bachelorsthesis},
  abstract = {Hearing parents who have deaf children are often put in a difficult position. They most likely know no sign language and resources for learning are limited and expensive. Research into sign language translation and synthesis has increased in the past two decades, due to a growing interest into making public spaces and services more accessible to Deaf people. This research looks at the potential of a translator from Dutch to Dutch sign language (NGT) using a sign language avatar. In particular the focus lies on the synthesis of directional verbs, a critical grammatical component of sign language. The implementation is very limited and therefore the sentences it produces are not always comprehensible and the signs do not always look natural. It does, however, also demonstrate the potential of a sign language avatar as a tool for learning sign language.},
  note = {Bachelor's thesis, University of Amsterdam.}
}

Learn Sign Online! A proposal for an online platform to learn Dutch Sign Language.
Jasmijn Bleijlevens.
2020. Bachelor's thesis, Amsterdam University College.

@unpublished{Bleijlevens:20,
  title = {{Learn Sign Online! A proposal for an online platform to learn Dutch Sign Language}},
  author = {Bleijlevens, Jasmijn},
  year = {2020},
  url = {https://www.signlab-amsterdam.nl/publications/Bleijlevens-2020.pdf},
  keywords = {bachelorsthesis},
  abstract = {This project studies the possibilities of designing an online platform for acquiring Dutch Sign Language (NGT), specifically focused on parents of deaf children. Children need a lot of (diverse) language input to lay a groundwork for their language development and it is thus important that people in their surroundings speak a language the child can interpret. Over the past decades, many different distance learning technologies have been developed for second language acquisition. This research will look into NGT and discuss all the grammatical topics which should be explained to people wanting to learn the language. Then, different technologies will be analysed to determine which would suit learning NGT best. Lastly, the two research topics will be combined and a design for a platform is proposed.},
  note = {Bachelor's thesis, Amsterdam University College.}
}

No effects of modality in development of locative expressions of space in signing and speaking children.
Beyza Sümer, & Aslı Özyürek.
Journal of Child Language, 47(6): 1101–1131. 2020.

@article{sumer:20,
  title = {No effects of modality in development of locative expressions of space in signing and speaking children},
  author = {S{\"u}mer, Beyza and {\"O}zy{\"u}rek, Asl{\i}},
  publisher = {Cambridge University Press},
  journal = {Journal of Child Language},
  volume = {47},
  number = {6},
  pages = {1101--1131},
  year = {2020},
  doi = {10.1017/S0305000919000928},
  url = {https://www.cambridge.org/core/services/aop-cambridge-core/content/view/BBC33DFE646C026A6821511B4CD3CD68/S0305000919000928a.pdf/no-effects-of-modality-in-development-of-locative-expressions-of-space-in-signing-and-speaking-children.pdf},
  abstract = {Linguistic expressions of locative spatial relations in sign languages are mostly visually motivated representations of space involving mapping of entities and spatial relations between them onto the hands and the signing space. These are also morphologically complex forms. It is debated whether modality-specific aspects of spatial expressions modulate spatial language development differently in signing compared to speaking children. In a picture description task, we compared the use of locative expressions for containment, support, and occlusion relations by deaf children acquiring Turkish Sign Language and hearing children acquiring Turkish (age 3;5–9;11). Unlike previous reports suggesting a boosting effect of iconicity, and/or a hindering effect of morphological complexity of the locative forms in sign languages, our results show similar developmental patterns for signing and speaking children's acquisition of these forms. Our results suggest the primacy of cognitive development guiding the acquisition of locative expressions by speaking and signing children.}
}

Iconicity in spatial language guides visual attention: A comparison between signers’ and speakers’ eye gaze during message preparation.
Francie Manhardt, Aslı Özyürek, Beyza Sümer, Kimberley Mulder, Dilay Z Karadöller, & Susanne Brouwer.
Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(9): 1735. 2020.

@article{manhardt:20,
  title = {Iconicity in spatial language guides visual attention: A comparison between signers’ and speakers’ eye gaze during message preparation.},
  author = {Manhardt, Francie and {\"O}zy{\"u}rek, Asl{\i} and S{\"u}mer, Beyza and Mulder, Kimberley and Karad{\"o}ller, Dilay Z and Brouwer, Susanne},
  publisher = {American Psychological Association},
  journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
  volume = {46},
  number = {9},
  pages = {1735},
  year = {2020},
  doi = {10.1037/xlm0000843},
  url = {https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxlm0000843},
  abstract = {To talk about space, spoken languages rely on arbitrary and categorical forms (e.g., left, right). In sign languages, however, the visual–spatial modality allows for iconic encodings (motivated form-meaning mappings) of space in which form and location of the hands bear resemblance to the objects and spatial relations depicted. We assessed whether the iconic encodings in sign languages guide visual attention to spatial relations differently than spatial encodings in spoken languages during message preparation at the sentence level. Using a visual world production eye-tracking paradigm, we compared 20 deaf native signers of Sign-Language-of-the-Netherlands and 20 Dutch speakers’ visual attention to describe left versus right configurations of objects (e.g., “pen is to the left/right of cup”). Participants viewed 4-picture displays in which each picture contained the same 2 objects but in different spatial relations (lateral [left/right], sagittal [front/behind], topological [in/on]) to each other. They described the target picture (left/right) highlighted by an arrow. During message preparation, signers, but not speakers, experienced increasing eye-gaze competition from other spatial configurations. This effect was absent during picture viewing prior to message preparation of relational encoding. Moreover, signers’ visual attention to lateral and/or sagittal relations was predicted by the type of iconicity (i.e., object and space resemblance vs. space resemblance only) in their spatial descriptions. Findings are discussed in relation to how “thinking for speaking” differs from “thinking for signing” and how iconicity can mediate the link between language and human experience and guides signers’ but not speakers’ attention to visual aspects of the world.}
}

The emergence of word order conventions: improvisation, interaction and transmission.
Marieke Schouwstra, Kenny Smith, & Simon Kirby.
PsyArXiv, 2020.

@misc{schouwstra:20,
  title = {The emergence of word order conventions: improvisation, interaction and transmission},
  author = {Schouwstra, Marieke and Smith, Kenny and Kirby, Simon},
  publisher = {PsyArXiv},
  year = {2020},
  doi = {10.31234/osf.io/wdfu2},
  url = {psyarxiv.com/wdfu2},
  abstract = {When people improvise to convey information by using only gesture and no speech (‘silent gesture’), they show language-independent word order preferences: SOV for extensional events (e.g., boy-ball-throw), but SVO for intensional events (e.g., boy-search-ball). Real languages tend not to condition word order on this kind of semantic distinction but instead use the same order irrespective of event type. Word order therefore exemplifies a contrast between naturalness in improvisation and conventionalised regularity in linguistic systems. We present an experimental paradigm in which initially-improvised silent gesture is both used for communication and culturally transmitted through artificial generations of lab participants. In experiments 1 and 2 we investigate the respective contributions of communicative interaction and cultural transmission on natural word order behaviour. We show that both interaction and iterated learning lead to a simplification of the word order regime, and the way in which this unfolds over time is surprisingly similar under the two mechanisms. The resulting dominant word order is mostly SVO, the order of the native language of our participants. In experiment 3, we manipulate the frequency of different semantic event types, and show that this can allow SOV order, rather than SVO order, to conventionalise. Taken together, our experiments demonstrate that where pressures for naturalness and regularity are in conflict, naturalness will give way to regularity as word order becomes conventionalised through repeated usage.}
}

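The abstract above describes how initially variable word-order behaviour becomes regularised when silent gesture is used in interaction and transmitted across generations of participants. As a minimal illustration of the general iterated-learning idea that transmission can amplify a majority pattern, the Python sketch below runs a toy chain in which each simulated learner estimates the proportion of one order from the previous generation's output and then produces slightly exaggerated data for the next. The binary SVO/SOV choice, the regularisation rule, and all parameter values are assumptions of this sketch, not the authors' experimental design or analysis.

import random

def transmit(p_svo, n_utterances=50, regularisation=1.5, generations=10):
    """Toy iterated-learning chain for a binary word-order choice (SVO vs. SOV).

    Each generation observes the previous generation's utterances, estimates
    the proportion of SVO, then produces data with that proportion pushed
    towards the nearer extreme (a crude stand-in for a regularisation bias).
    """
    history = [p_svo]
    for _ in range(generations):
        data = [random.random() < p_svo for _ in range(n_utterances)]
        estimate = sum(data) / n_utterances
        # Exaggerate whichever order is already in the majority.
        p_svo = estimate ** regularisation / (
            estimate ** regularisation + (1 - estimate) ** regularisation)
        history.append(p_svo)
    return history

random.seed(1)
# Start from a mixed, semantically conditioned regime (60% SVO overall).
print([round(p, 2) for p in transmit(0.6)])

Running the chain from a mixed starting point typically drifts towards one dominant order within a handful of generations, which is the qualitative regularisation pattern the experiments probe with human participants.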
Do all aspects of learning benefit from iconicity? Evidence from motion capture.
Asha Sato, Marieke Schouwstra, Molly Flaherty, & Simon Kirby.
Language and Cognition, 12(1): 36–55. 2020.

@article{sato:20,
  title = {Do all aspects of learning benefit from iconicity? Evidence from motion capture},
  author = {Sato, Asha and Schouwstra, Marieke and Flaherty, Molly and Kirby, Simon},
  publisher = {Cambridge University Press},
  journal = {Language and Cognition},
  volume = {12},
  number = {1},
  pages = {36--55},
  year = {2020},
  doi = {10.1017/langcog.2019.37},
  url = {https://www.cambridge.org/core/services/aop-cambridge-core/content/view/55EDF990ED0E81A85100F8F01988B7C2/S1866980819000371a.pdf/do-all-aspects-of-learning-benefit-from-iconicity-evidence-from-motion-capture.pdf},
  abstract = {Recent work suggests that not all aspects of learning benefit from an iconicity advantage (Ortega, 2017). We present the results of an artificial sign language learning experiment testing the hypothesis that iconicity may help learners to learn mappings between forms and meanings, whilst having a negative impact on learning specific features of the form. We used a 3D camera (Microsoft Kinect) to capture participants’ gestures and quantify the accuracy with which they reproduce the target gestures in two conditions. In the iconic condition, participants were shown an artificial sign language consisting of congruent gesture–meaning pairs. In the arbitrary condition, the language consisted of non-congruent gesture–meaning pairs. We quantified the accuracy of participants’ gestures using dynamic time warping (Celebi et al., 2013). Our results show that participants in the iconic condition learn mappings more successfully than participants in the arbitrary condition, but there is no difference in the accuracy with which participants reproduce the forms. While our work confirms that iconicity helps to establish form–meaning mappings, our study did not give conclusive evidence about the effect of iconicity on production; we suggest that iconicity may only have an impact on learning forms when these are complex.}
}

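The abstract above quantifies how faithfully participants reproduce target gestures by applying dynamic time warping (DTW) to motion-capture data. The Python sketch below shows a standard DTW distance between two 3D joint trajectories; the choice of a single tracked joint, the Euclidean frame distance, and the toy data are assumptions made for illustration, not the paper's actual pipeline.

import numpy as np

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping distance between two gesture trajectories.

    traj_a, traj_b: arrays of shape (T, 3) holding x/y/z positions of a
    tracked joint over time (the joint choice and preprocessing are
    assumptions of this sketch, not taken from the paper).
    """
    n, m = len(traj_a), len(traj_b)
    # cost[i, j] = minimal accumulated cost of aligning traj_a[:i] with traj_b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])  # Euclidean frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in traj_b
                                 cost[i, j - 1],      # skip a frame in traj_a
                                 cost[i - 1, j - 1])  # align the two frames
    return cost[n, m]

# Toy usage: compare a participant's reproduction against the target gesture.
target = np.cumsum(np.random.randn(60, 3), axis=0)           # hypothetical 60-frame target
reproduction = target[::2] + 0.05 * np.random.randn(30, 3)   # faster, noisier copy
print(dtw_distance(target, reproduction))

Because DTW aligns the two trajectories before summing distances, a reproduction that is merely faster or slower than the target is penalised far less than one with a different movement shape, which is what makes it a reasonable accuracy measure for gesture form.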
From the world to word order: deriving biases in noun phrase order from statistical properties of the world.
Jennifer Culbertson, Marieke Schouwstra, & Simon Kirby.
Language, 96(3): 696–717. 2020.

@article{culbertson:20,
  title = {From the world to word order: deriving biases in noun phrase order from statistical properties of the world},
  author = {Culbertson, Jennifer and Schouwstra, Marieke and Kirby, Simon},
  publisher = {Linguistic Society of America},
  journal = {Language},
  volume = {96},
  number = {3},
  pages = {696--717},
  year = {2020},
  doi = {10.1353/lan.2020.0045},
  url = {https://www.linguisticsociety.org/sites/default/files/08_96.3Culbertson.pdf},
  abstract = {The world’s languages exhibit striking diversity. At the same time, recurring linguistic patterns suggest the possibility that this diversity is shaped by features of human cognition. One well-studied example is word order in complex noun phrases (like these two red vases). While many orders of these elements are possible, a subset appear to be preferred. It has been argued that this ordering reflects a single underlying representation of noun phrase structure, from which preferred orders are straightforwardly derived (e.g. Cinque 2005). Building on previous experimental evidence using artificial language learning (Culbertson & Adger 2014), we show that these preferred orders arise not only in existing languages, but also in improvised sequences of gestures produced by English speakers. We then use corpus data from a wide range of languages to argue that the hypothesized underlying structure of the noun phrase might be learnable from statistical features relating objects and their properties conceptually. Using an information-theoretic measure of strength of association, we find that adjectival properties (e.g. red) are on average more closely related to the objects they modify (e.g. wine) than numerosities are (e.g. two), which are in turn more closely related to the objects they modify than demonstratives are (e.g. this). It is exactly those orders which transparently reflect this—by placing adjectives closest to the noun, and demonstratives farthest away—that are more common across languages and preferred in our silent gesture experiments. These results suggest that our experience with objects in the world, combined with a preference for transparent mappings from conceptual structure to linear order, can explain constraints on noun phrase order.}
}

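The abstract above relies on an information-theoretic measure of how strongly demonstratives, numerals, and adjectives associate with the nouns they modify in corpus data. One common measure of that kind is pointwise mutual information (PMI); the Python sketch below computes it from modifier–noun co-occurrence counts. The toy counts and the use of PMI specifically are illustrative assumptions rather than the paper's exact corpus procedure.

import math
from collections import Counter

def pmi(pair_counts):
    """Pointwise mutual information for (modifier, noun) pairs.

    pair_counts: Counter mapping (modifier, noun) -> co-occurrence count.
    Returns a dict mapping each pair to log2( p(m, n) / (p(m) * p(n)) ).
    """
    total = sum(pair_counts.values())
    mod_counts, noun_counts = Counter(), Counter()
    for (mod, noun), c in pair_counts.items():
        mod_counts[mod] += c
        noun_counts[noun] += c
    return {
        (mod, noun): math.log2((c / total) /
                               ((mod_counts[mod] / total) * (noun_counts[noun] / total)))
        for (mod, noun), c in pair_counts.items()
    }

# Hypothetical counts: the adjective is more selective about its noun than the
# numeral or demonstrative is, so its PMI with the noun it favours comes out higher.
counts = Counter({("red", "wine"): 40, ("red", "vase"): 5,
                  ("two", "wine"): 12, ("two", "vase"): 12,
                  ("this", "wine"): 20, ("this", "vase"): 20})
for pair, score in sorted(pmi(counts).items(), key=lambda kv: -kv[1]):
    print(pair, round(score, 2))

On such counts, the adjective–noun pairs score higher than numeral–noun or demonstrative–noun pairs, mirroring the paper's claim that adjectives are conceptually closest to the nouns they modify and therefore tend to be placed closest to the noun.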