HeadTalk, HandTalk and the corpus: towards a framework for multi-modal, multi-media corpus development. Knight, D.; Evans, D.; Carter, R.; and Adolphs, S. Corpora, 4:1-32, 2009.
In this paper, we address a number of key methodological challenges and concerns faced by linguists in the development of a new generation of corpora: the multi-modal, multi-media corpus, which combines video, audio and textual records of naturally occurring discourse. We contextualise these issues with reference to a research project that is currently developing such a corpus: the ESRC-funded Understanding New Digital Records for e-Social Science (DReSS) project based at the University of Nottingham. (For further information, results and publications related to the project, please refer to the main DReSS website at: http://web.mac.com/andy.crabtree/NCeSS_Digital_Records_Node/Welcome.html.) This paper primarily explores questions of corpus functionality, identifying the problems we faced in making multi-modal corpora 'usable' for further research. We focus on the need for new methods for categorising and marking up multiple streams of data, using the coding of head nods and hand gestures as examples. We also consider the challenges faced when integrating and representing the data in a functional corpus tool, to allow for further synthesis and analysis. Here, we also underline some of the ethical challenges faced in the development of this tool, exploring the issues involved both in the collection of data and in the future distribution of video corpora to the wider research community.
@article{knight_headtalk_2009,
	Author = {Knight, Dawn and Evans, David and Carter, Ronald and Adolphs, Svenja},
	Year = {2009},
	Doi = {10.3366/E1749503209000203},
	Journal = {Corpora},
	Keywords = {corpus, CV citation, language resources, multimodality},
	Pages = {1--32},
	Title = {HeadTalk, HandTalk and the corpus: towards a framework for multi-modal, multi-media corpus development},
	Url = {http://dx.doi.org/10.3366/E1749503209000203},
	Volume = {4},
	Abstract = {In this paper, we address a number of key methodological challenges and concerns faced by linguists in the development of a new generation of corpora: the multi-modal, multi-media corpus, which combines video, audio and textual records of naturally occurring discourse. We contextualise these issues with reference to a research project that is currently developing such a corpus: the ESRC-funded Understanding New Digital Records for e-Social Science (DReSS) project based at the University of Nottingham. (For further information, results and publications related to the project, please refer to the main DReSS website at: http://web.mac.com/andy.crabtree/NCeSS\_Digital\_Records\_Node/Welcome.html.) This paper primarily explores questions of corpus functionality, identifying the problems we faced in making multi-modal corpora `usable' for further research. We focus on the need for new methods for categorising and marking up multiple streams of data, using the coding of head nods and hand gestures as examples. We also consider the challenges faced when integrating and representing the data in a functional corpus tool, to allow for further synthesis and analysis. Here, we also underline some of the ethical challenges faced in the development of this tool, exploring the issues involved both in the collection of data and in the future distribution of video corpora to the wider research community.}}