@inproceedings{Bell_2016,
  author    = {Bell, Eamonn and Pugin, Laurent},
  title     = {Approaches to Handwritten Conductor Annotation Extraction in Musical Scores},
  booktitle = {DLfM 2016. Proceedings of the 3rd International Workshop on Digital Libraries for Musicology},
  editor    = {Fields, Ben and Page, Kevin},
  series    = {ACM International Conference Proceeding Series},
  publisher = {{Association for Computing Machinery}},
  address   = {New York, NY},
  year      = {2016},
  pages     = {33--36},
  isbn      = {978-1-4503-4751-8},
  doi       = {10.1145/2970044.2970053},
  abstract  = {Conductor copies of musical scores are typically rich in handwritten annotations. Ongoing archival efforts to digitize orchestral conductors' scores have made scanned copies of hundreds of these annotated scores available in digital formats. The extraction of handwritten annotations from digitized printed documents is a difficult task for computer vision, with most approaches focusing on the extraction of handwritten text. However, conductors' annotation practices provide us with at least two affordances, which make the task more tractable in the musical domain. First, many conductors opt to mark their scores using colored pencils, which contrast with the black and white print of sheet music. Consequently, we show promising results when using color separation techniques alone to recover handwritten annotations from conductors' scores. We also compare annotated scores to unannotated copies and use a printed sheet music comparison tool to recover handwritten annotations as additions to the clean copy. We then investigate the use of both of these techniques in a combined method, which improves the results of the color separation technique. These techniques are demonstrated using a sample of orchestral scores annotated by professional conductors of the New York Philharmonic. Handwritten annotation extraction in musical scores has applications to the systematic investigation of score annotation practices by performers, annotator attribution, and to the interactive presentation of annotated scores, which we briefly discuss.}
}

@misc{Byrd_2016,
  author       = {Byrd, Donald A. and Isaacson, Eric},
  title        = {A Music Representation Requirement Specification for Academia},
  year         = {2016},
  originalyear = {2003},
  url          = {http://homes.soic.indiana.edu/donbyrd/Papers/MusicRepReqForAcad.doc},
  note         = {Revised version of the 2003 paper in Computer Music Journal}
}

@article{Crawford_2016,
  author   = {Crawford, Tim and Lewis, Richard},
  title    = {Review: Music Encoding Initiative},
  journal  = {Journal of the American Musicological Society},
  volume   = {69},
  number   = {1},
  pages    = {273--285},
  year     = {2016},
  doi      = {10.1525/jams.2016.69.1.273},
  url      = {https://jams.ucpress.edu/content/69/1/273.full.pdf},
  abstract = {It will not have escaped the notice of many readers of this Journal that a number of ambitious projects in historical musicology with a major IT component have received generous grant funding in recent years. Underpinning each of these projects is the music-encoding standard known as the Music Encoding Initiative (MEI). […] Clearly MEI is here to stay. In this report we aim to give a sketch of its main features, which potentially enable new modes of music research, and a hint of its impact on the discipline of musicology.}
}

@mastersthesis{Destandau_2016,
  author   = {Destandau, Marie},
  title    = {La MEI dans tous ses {\'e}tats. La Music Encoding Initiative, de l'encodage aux usages},
  school   = {{Universit{\'e} de Lille 3}},
  address  = {Lille, France},
  year     = {2016},
  type     = {Master's thesis},
  url      = {http://www.pas-sages.org/_preview/master/memoireMEI-2016-09-15-5.pdf},
  abstract = {A l'heure du numérique, les pratiques musicales se transforment et les objets qui les véhiculent aussi. Dans ce contexte, ce mémoire étudie la façon dont la Music Encoding Initiative, un format d'encodage pour la musique notée, interagit avec les usages. Il montre que la définition du modèle de description suppose une bonne connaissance du domaine qu'il représente, et un positionnement clair ; que les évolutions du modèle pour s'adapter à de nouvelles pratiques questionnent sa cohérence ; mais que cette flexibilité est pourtant indispensable car c'est elle qui rend le modèle vivant et permet de fédérer autour de lui une communauté, qui invente à son tour de nouvelles applications.}
}

@techreport{Duguid_2016,
  author      = {Duguid, Timothy},
  title       = {MuSO. Aggregation and Peer Review in Music. NEH White Paper},
  institution = {{Texas A{\&}M University}},
  address     = {College Station, TX},
  year        = {2016},
  date        = {2016-08-31},
  url         = {http://oaktrust.library.tamu.edu/bitstream/handle/1969.1/157548/NEH-White-Paper.pdf}
}

@inproceedings{Laplante_2016,
  author    = {Laplante, Audrey and Fujinaga, Ichiro},
  title     = {Digitizing Musical Scores: Challenges and Opportunities for Libraries},
  booktitle = {DLfM 2016. Proceedings of the 3rd International Workshop on Digital Libraries for Musicology},
  editor    = {Fields, Ben and Page, Kevin},
  series    = {ACM International Conference Proceeding Series},
  publisher = {{Association for Computing Machinery}},
  address   = {New York, NY},
  year      = {2016},
  pages     = {45--48},
  isbn      = {978-1-4503-4751-8},
  doi       = {10.1145/2970044.2970055},
  abstract  = {Musical scores and manuscripts are essential resources for music theory research. Although many libraries are digitizing such documents from their collections, these online resources are dispersed and the functionalities for exploiting their content remain limited. In this paper, we present a qualitative study based on interviews with librarians on the challenges libraries of all types face when they wish to digitize musical scores. In the light of a literature review on the role libraries can play in supporting digital humanities research, we conclude by briefly discussing the opportunities new technologies for optical music recognition and computer-aided music analysis could create for libraries.}
}

@incollection{LeblondMartin_2016b,
  author    = {{Leblond Martin}, Sylvaine},
  title     = {Musiques orales, leur notation musicale et l'encodage num{\'e}rique MEI – Music Encoding Initiative – de cette notation},
  booktitle = {Musiques orales, notations musicales et encodages num{\'e}riques},
  editor    = {{Leblond Martin}, Sylvaine},
  publisher = {{Les {\'E}ditions de l'Immat{\'e}riel}},
  address   = {Paris},
  year      = {2016},
  pages     = {220--243},
  isbn      = {979-1091636049}
}

@article{McAulay_2016,
  author   = {McAulay, Karen},
  title    = {Show Me a Strathspey. Taking Steps to Digitize Tune Collections},
  journal  = {Reference Reviews},
  volume   = {30},
  number   = {7},
  pages    = {1--6},
  year     = {2016},
  issn     = {0950-4125},
  doi      = {10.1108/RR-03-2015-0073},
  abstract = {\textit{Purpose} The present paper describes an Arts and Humanities Research Council (AHRC) research project into Scottish fiddle music and the important considerations of music digitization, access and discovery in designing the website that will be one of the project's enduring outcomes. \textit{Design/methodology/approach} The paper is a general review of existing online indices to music repertoires and some of the general problems associated with selecting metadata and indexing such material and is a survey of the various recent and contemporary projects into the digital encoding of musical notation for online use. \textit{Findings} The questions addressed during the design of the Bass Culture project database serve to highlight the importance of cooperation between musicologists, information specialists and computer scientists, and the benefits of having researchers with strengths in more than one of these disciplines. The Music Encoding Initiative proves an effective means of providing digital access to the Scottish fiddle tune repertoire. \textit{Originality/value} The digital encoding of music notation is still comparatively cutting-edge; the Bass Culture project is thus a useful exemplar for interdisciplinary collaboration between musicologists, information specialists and computer scientists, and it addresses issues which are likely to be applicable to future projects of this nature.}
}

@incollection{Pugin_2016,
  author    = {Pugin, Laurent},
  title     = {Encodage de documents musicaux avec la MEI},
  booktitle = {Musiques orales, notations musicales et encodages num{\'e}riques},
  editor    = {{Leblond Martin}, Sylvaine},
  publisher = {{Les {\'E}ditions de l'Immat{\'e}riel}},
  address   = {Paris},
  year      = {2016},
  pages     = {162--175},
  isbn      = {979-1091636049}
}

@proceedings{Roland_2016,
  editor    = {Roland, Perry and Kepper, Johannes},
  title     = {{Music Encoding Conference Proceedings 2013 and 2014}},
  publisher = {{Bavarian State Library (BSB)}},
  year      = {2016},
  url       = {http://nbn-resolving.de/urn:nbn:de:bvb:12-babs2-0000007812},
  abstract  = {Conference proceedings of the Music Encoding Conferences 2013 and 2014, with a foreword by Perry D. Roland and Johannes Kepper}
}

@inproceedings{Viglianti_2016,
  author    = {Viglianti, Raffaele},
  title     = {The Music Addressability API},
  booktitle = {DLfM 2016. Proceedings of the 3rd International Workshop on Digital Libraries for Musicology},
  editor    = {Fields, Ben and Page, Kevin},
  series    = {ACM International Conference Proceeding Series},
  publisher = {{Association for Computing Machinery}},
  address   = {New York, NY},
  year      = {2016},
  pages     = {57--60},
  isbn      = {978-1-4503-4751-8},
  doi       = {10.1145/2970044.2970056},
  abstract  = {This paper describes an Application Programming Interface (API) for addressing music notation on the web regardless of the format in which it is stored. This API was created as a method for addressing and extracting specific portions of music notation published in machine-readable formats on the web. Music notation, like text, can be ``addressed'' in new ways in a digital environment, allowing scholars to identify and name structures of various kinds, thus raising such questions as how can one virtually ``circle'' some music notation? How can a machine interpret this ``circling'' to select and retrieve the relevant music notation? The API was evaluated by: 1) creating an implementation of the API for documents in the Music Encoding Initiative (MEI) format; and by 2) remodelling a dataset of music analysis statements from the Du Chemin: Lost Voices project (Haverford College) by using the API to connect the analytical statements with the portion of notation they refer to. Building this corpus has demonstrated that the Music Addressability API is capable of modelling complex analytical statements containing references to music notation.}
}

@inproceedings{Weigl_2016,
  author    = {Weigl, David M. and Page, Kevin},
  title     = {Dynamic Semantic Notation. Jamming Together Music Encoding and Linked Data},
  booktitle = {Proceedings of the 17th International Society for Music Information Retrieval Conference, ISMIR 2016, New York City, United States, August 7-11, 2016},
  editor    = {Mandel, Michael I. and Devaney, Johanna and Turnbull, Douglas and Tzanetakis, George},
  year      = {2016},
  isbn      = {978-0-692-75506-8},
  url       = {https://wp.nyu.edu/ismir2016/wp-content/uploads/sites/2294/2016/08/weigl-dynamic.pdf},
  note      = {Late-Breaking Session},
  abstract  = {The Music Encoding Initiative (MEI) provides a framework for expressing musical notation that enables the identification (via XML identifiers), and thus addressing, of score elements at various levels of granularity (e.g. individual systems, measures, or notes). Verovio, an open-source MEI renderer that produces beautiful SVG renditions of the score, retains the MEI identifiers and element hierarchy in the produced output, enabling dynamic interactivity with score elements through a web browser. We present a demonstrator that combines these capabilities with semantic technologies including RDF, JSON-LD, SPARQL, and the Open Annotation data model, anchoring into the musical notation by using the MEI XML IDs as fragment identifiers to enable the fine-grained incorporation of musical notation within a web of Linked Data. This fusing of music and semantics affords the creation of rich Digital Music Objects supporting contemporary music consumption and performance.}
}

@inproceedings{Zitellini_2016,
  author    = {Zitellini, Rodolfo and Pugin, Laurent},
  title     = {Representing Atypical Music Notation Practices. An Example with Late 17th Century Music},
  booktitle = {Proceedings of the Second International Conference on Technologies for Music Notation and Representation, TENOR 2016, Cambridge, UK, May 27--29, 2016},
  editor    = {Hoadley, Richard and Fober, Dominique and Nash, Chris},
  publisher = {{Anglia Ruskin University}},
  address   = {Cambridge, UK},
  year      = {2016},
  pages     = {71--76},
  isbn      = {978-0-9931461-1-4},
  url       = {http://tenor2016.tenor-conference.org/papers/10_Zitellini_tenor2016.pdf},
  abstract  = {From the 17th century to the first decades of the 18th century music notation slowly loses all its mensural influences, becoming virtually identical to what we would consider common modern notation. During these five decades of transformation composers did not just suddenly abandon older notation styles, but they used them alongside ones that would eventually become the standard. Void notation, black notation and uncommon tempi were all mixed together. The scholar preparing modern editions of this music is normally forced to normalise all these atypical notations as many software applications do not support them natively. This paper demonstrates the flexibility of the coding scheme proposed by the Music Encoding Initiative (MEI), and of Verovio, a visualisation library designed for it. The modular approach of these tools means that particular notation systems can be added easily while maintaining compatibility with other encoded notations.}
}