<h1>Hammers and nails</h1>
<p><i>Posted 23 April 2012 on the PARADISEC blog: https://www.paradisec.org.au/blog/2012/04/hammers-and-nails/</i></p>
<p>Back in the old days when some of us were younger and starting out on our language documentation and description careers (for me in 1972, as described in <a href="http://www.paradisec.org.au/blog/2012/04/yet-another-40-years-on/">this blog post</a>), the world was pretty much analogue and we didn&#8217;t have digital hardware or software to think about.</p>
<p>Back then recordings were made with reel-to-reel tape recorders, like the <a href="http://en.wikipedia.org/wiki/Uher_(brand)">Uher Report</a>, or, if you had really fancy kit, a <a href="http://en.wikipedia.org/wiki/Nagra">Nagra</a>. For those of us working in Australia on Aboriginal languages, you could archive your tapes at the Australian Institute of Aboriginal Studies (AIAS), as it then was, later the Australian Institute of Aboriginal and Torres Strait Islander Studies (<a href="http://www.aiatsis.gov.au/">AIATSIS</a>). They would copy your tapes onto their archive masters and return the originals to you; all you, as a depositor, had to do was fill in tape deposit sheets. You were supplied with a book of these, alternately white and green, with a sheet of carbon paper to be placed between them. For each tape you completed a white sheet listing basic metadata and a summary of the contents of the tape, tore off the white copies (keeping the green carbon copy), and submitted them to the AIAS archive. In addition, the Institute encouraged the preparation of tape audition sheets, on which the content of the tapes was summarised alongside time codes (in minutes and seconds) counted from the beginning of the tape. Sometimes these were created by the depositor and sometimes by the resident linguist (at that time <a href="http://www.adelaide.edu.au/directory/peter.sutton">Peter Sutton</a>).</p>
<p>So, if you wanted to find out where in your stack of tapes you could find Story X by Speaker Y, you simply looked at the deposit sheets and/or the audition sheets.</p>
<p>Alas, those days are gone and we are in the digital world, where our experience is mediated by software interfaces that can fool us into seeing the world the way the interface presents it. For language documenters, Toolbox is often the software tool of analytical choice (along with ELAN) ((ELAN is a tool designed for time-aligned transcription and annotation of media files, and is also widely used by language documenters, bringing with it its own kind of window on the world that I do not discuss here)) for the processing and value-adding analysis and annotation of recordings. As I claimed in a <a href="http://www.paradisec.org.au/blog/2012/04/is-toolbox-the-linguistic-equivalent-of-nietzsches-typewriter/">previous post</a>, the existence of Toolbox means that for many documenters annotational value-adding means only interlinear glossing, and alternatives such as overview or summary annotation (like the old tape audition sheets) are not part of their tool set. I have two pieces of evidence for this:</p>
<ol>
<li>The Endangered Languages Archive (ELAR) at SOAS has so far received around 100 deposits comprising roughly 800,000 files. Many of these deposits are made up entirely of media files (plus basic cataloguing metadata), with no textual representation of the content of the files beyond a short description in the cataloguing metadata. When asked about annotations, depositors typically respond that they &#8220;are working on transcription and glossing&#8221; but, because of the time needed, cannot provide anything now. They do not seem to consider an alternative, namely time-coded overview annotation, which can (and probably should) be done for all the media files, only some of which would then be selected and given priority for interlinear glossing. Why? One reason might be that there is no dedicated software tool designed and set up to do this in an easy and simple manner (interestingly, a tool that can be so used, and that produces nice time-coded XML output, is Transcriber, though it is generally thought of as a tool for transcription annotation only &#8212; it also lacks a &#8220;reader mode&#8221; that would allow easy viewing and searching across a set of overview annotations created with it);</li>
<li>During training courses and public presentations over the past couple of years I have been warning that current approaches to language documentation risk the creation of &#8220;data dumps&#8221; (which I have also called &#8220;data middens&#8221;), because researchers are not well trained in corpus and workflow management and additionally suffer from ILGB, or &#8220;interlinear gloss blindness&#8221;, which drives them to see textual value-adding annotation in terms of the interlinear glossing paradigm. ((There may be a separate further dimension to be concerned about that results from the shift from analogue to digital <i>hardware</i>, rather than being a software issue. In the old days tapes were expensive, and junior researchers in particular only had access to a rationed supply and therefore had to think seriously about <i>what</i> and <i>how much</i> to record. Today, with digital storage being so cheap and easy to use (especially for copying and file transfer), there is a temptation to &#8220;record everything&#8221; on multiple machines (one or more video cameras plus one or more audio recorders) and not write much down because &#8220;you can always listen to it later&#8221;. This can easily and quickly give rise to megabytes of files to be managed and processed. I saw this temptation among the students taking my Fieldmethods course this year &#8212; after a few sessions of working with the consultant this way, they learned about the pain that comes from having to search through hours of digital recordings for which they had few fieldnotes or metadata annotations.)) The most recent example of such a presentation was during last month&#8217;s <a href="http://www.hrelp.org/events/workshops/eldp2012_3/index.html">grantee training course</a> at SOAS (the PowerPoint slides from my presentation are available on <a href="http://www.slideshare.net/pkaustin">Slideshare</a>). All but one of the grantees attending the training had never heard of, or considered creating, overview summary annotation before launching (selectively) into transcription and interlinear glossing of their recordings.</li>
</ol>
<p>I may be wrong about the source of the current ILGB, and perhaps Toolbox is not (solely) to blame, but I do believe that it plays a part in a narrowing of conceptual thinking about annotation in language documentation, and hence in the behaviour of language documenters.</p>
<p><b>NB</b>: Thanks to Andrew Garrett for his comments on my previous post, which caused me to think more deeply about these issues and to attempt to explicate and exemplify them more clearly here.</p>