When is a linguist’s work done and dusted?

There has been an interesting discussion on the LINGTYP linguistic typology list over the past week about publishing fieldwork data (archived here). David Gil argued that:

One’s collection of transcribed texts constitutes a set of complete objects, each of which could (if there were a willing publisher) stand alone as an electronic or hardcopy publication. Barring the discovery and correction of errata, once the text is transcribed, that’s it, it’s done.


I responded that:

From my experience, and that of other researchers I have spoken to, understanding/analysis of a given “text” (in the sense of an inscription of a particular linguistic performance) evolves over time and is not “fixed” at any point, not even the point where it is “published” (in whatever version). Secondly, textual annotation (of which the ‘traditional’ interlinear format is but one particular type) is hypertextual and, these days, multimedia in nature – this is hardly a new insight – see this Kairos article [JHS: apologies to earlier readers – an html error cut the link] for a discussion of the hypertextual nature of annotation in the Talmudic tradition. Developments in Web 2.0 publishing also mean that multiple annotations of texts by multiple (distributed) contributors are now possible.

I pointed out that there are two excellent papers dealing with these topics that will be coming out in Language Documentation and Description Vol 4 to be published at SOAS later this month:

  • Nick Evans and Hans-Jürgen Sasse ‘Searching for meaning in the Library of Babel: field semantics and problems of digital archiving’
  • Anthony C. Woodbury ‘On thick translation in linguistic documentation’

Both papers emphasise the ongoing, contingent, interpretive, hermeneutical quality of the documentation of languages, especially meaning in texts.
David Gil’s reply to my intervention included the following:

My claim is merely that with respect to texts, there still exists a kind of basic intuitive level of transcription plus annotation — comprising things such as orthographic transcription, phonetic transcription, interlinear gloss, free translation into English (or some other language) — that, once accomplished, provides a natural point at which the text may be published. Even if one chooses to add or amend things later.

Now, maybe my experience doing fieldwork and analysis over the past 35 years is unusual, but I have real doubts about the existence of such a ‘basic intuitive level of transcription plus annotation’, and real reservations about there being “a natural point at which the text may be published”. Transcription and analysis of texts, and their publication, involve a range of decisions taken at a particular point in time about what to include or not include. And we often publish for pragmatic reasons, such as there being an opportunity to do so, or because we are moving on to other projects, or because we need the publication for a job or promotion. In addition, my experience in preparing my Jiwarli text corpus for publication (eventually published by Tokyo University of Foreign Studies in 2006 – email me at pa2 AT soas.ac.uk if you want a free copy) was that serious editing in collaboration with my native-speaker consultant had to take place before he would approve publication of my transcriptions and translations of the stories we had recorded together. This included: deletion of repetitions and false starts, revisions to word order, insertion of contextual material, and elimination of loan words.
Looking back 11 years later I can imagine a whole set of different decisions that I would now make at various points about all areas of the published texts, from transcription to morpheme-by-morpheme glossing to running translation. What I think I understand about Jiwarli has been constantly changing over the years as I do more work on it and other languages.
Some researchers also seem to believe that there is a kind of endpoint when a whole language documentation is ‘done and dusted’, as they say here in the UK. A draft paper entitled “Adequacy of Documentation” was discussed at the meeting of the Linguistic Society of America Committee on Endangered Languages and their Preservation in Los Angeles in January. In that paper, the authors wonder about “the conditions that must be met for a language to be considered adequately documented”. They suggest it is possible to measure “how far along one has come in documenting a language” and to determine “how far there is to go” by using an “accounting function of analysis”. So:

How do we know when we’ve gotten all the phonology? When we’ve done the phonological analysis and our non-directed elicitation isn’t producing any new phonology. How do we know when we’ve gotten all the morphology? When we’ve done the morphological analysis, when our non-directed elicitation isn’t producing any new forms, and when – crucially for inflected languages – we have elicited all the implicit inflected forms that didn’t happen to come up in non-directed elicitation.

They recognise, however, that work on the lexicon and texts is very different but still insist that:

even in the more open-ended aspects of syntax and lexicon, we know we are coming to an endpoint when new constructions and lexical items become rarer and rarer in non-directed elicitation

Unfortunately, this begs the question of what we mean by “rarer and rarer”: is 1 million words of annotated text like the Brown Corpus enough to judge? What about 100 million like the British National Corpus? Or the billion word Oxford English Corpus?
Now, researchers studying lesser-known languages may never record and analyse textual corpora of this size, but I wonder if it is only exhaustion or death that brings a “natural point” at which the linguist’s work is done.

4 thoughts on “When is a linguist’s work done and dusted?”

  1. re “rarer and rarer” and diminishing returns in text. I agree (or maybe I’m atypical too) – sure, transcription becomes easier, and the number of new words per text declines as the number of textual recordings increases, but I’ve never reached the point, even with 80 hours or so of Bardi recordings, where I’ve felt that the payoff in new vocab wasn’t worth transcribing the text in full. There’s always new vocab, new complex predicates, and new senses of previously discovered words (and this from a language that I know reasonably well and can hold a conversation in). I think my transcribed corpus is about 40,000 words at this point, and I’m about a third to half the way through. But there’s always new and interesting stuff, even in the old and interesting stuff…

  2. It’s also worth noting that a transcribed text, particularly in an endangered language, can have endless applications for education/applied linguistics/language revitalisation. There are some texts that we rehash again and again and that are eternally useful, with each incarnation offering something of value… I don’t see an endpoint there.
    Plus there are other texts where, every time I think we’ve nailed it, the author/speaker wants to change this word or that word, or remembers a better word… surely any so-called endpoint is always going to be arbitrary. Or am I missing the point here?

  3. No Wamut, that is precisely the point — I was suggesting that publication and archiving of data take place at relatively arbitrary points in time, and that what’s in the corpus represents the linguist’s understanding at that time. One problem is that published (or archival) versions of material can become canonical, and information from such ‘canonical texts’ can get used in secondary academic writings or typological research and become reified as representing “the language” or “the grammar of X”. I’m just suggesting: “there’s no such thing”.
