In a few weeks’ time, reports and PowerPoint slides on the ELIIP workshop will be up on the ELIIP website for discussion.
I took away memories of the beauty of the mountains and salt lakes, the strange comfortableness of bison, and a slight increase in knowledge about the Latter-day Saints – how can one not feel sympathetic to the nineteenth-century Welsh Mormon who set sail for Zion equipped with an English and Welsh dictionary?
There’s a lively group of people at the University of Utah working on Native American languages (from Brazil north to Ojibway). One project that especially struck me was a Shoshone outreach program. Several Shoshone were at the ELIIP workshop. Last year 10 Shoshone high school students came to the Center for a six-week summer camp, funded by a donation from a local mining company. In the program they learned some Shoshone language, as well as crafts, from Shoshone elders. The students also worked as paid interns on language documentation and on preparing language-learning material in Shoshone. It was a great introduction, not only to language documentation but to university life generally. What a good idea!
Back to the workshop. Yes, we need something like ELIIP – a list of endangered languages with information about them and pointers to other sources about them. But it won’t work unless it is aimed at more than just linguists. And it must point to rich information. And it must be inclusive. And it must be simple to use. And, since there is very little money around, it must be designed to keep maintenance costs as low as possible.
Summing up, I’d say the workshop allowed various ideas to gel about what the one-stop shop for languages would look like. I thought the most important were:
- Avoid duplication. A lot of work has already gone into collecting material. Don’t waste it.
- Data-freshness. People will be drawn to the site if they believe that the data is fresh, rich and reliable.
- …comes at a cost. Whatever’s built has to be updatable and maintainable at minimal cost; maintaining links – even with a web crawler – is beyond many sites.
- Buy-in. If it’s to work, lots of communities, archives and linguists need to be able to add material easily and to feel that it belongs to all of us.
- Simple interface for searching AND for uploading. This means paying for good design and testing with a range of users. Maybe there’ll be several interfaces for different types of user.
- Wish-things
- There was a strong swell of opinion in favour of digital archives where people could deposit digital data files and update information easily.
- Snapshots in time. People will want to know what a language was like 10 or 20 years ago – how many speakers it had, whether children spoke it, and so on (see the sketch just after this list).
- Localisation. How do we translate the material into other languages for countries where outreach on the importance of helping speakers keep their languages is really needed? Spanish, Chinese, Russian, pidgins and French may be the main lingua francas for some of these areas.
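To make the snapshots idea concrete, here is a minimal sketch (in Python, purely illustrative – the field names and the placeholder code "und" are mine, not anything the workshop agreed on) of what one dated record of a language’s vitality might hold:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class VitalitySnapshot:
    """One dated observation of a language's situation (illustrative only)."""
    language_code: str                         # ISO 639-3 code ("und" = undetermined)
    observed: date                             # when the figures were reported
    speaker_estimate: Optional[int] = None
    children_learning: Optional[bool] = None   # intergenerational transmission
    source: str = ""                           # who reported the figures, and where

# A language's history is then just a list of snapshots, so "what was it
# like 20 years ago?" becomes a lookup by date rather than a guess.
history = [
    VitalitySnapshot("und", date(1990, 1, 1), speaker_estimate=2000,
                     children_learning=True, source="hypothetical survey"),
    VitalitySnapshot("und", date(2010, 1, 1), speaker_estimate=800,
                     children_learning=False, source="hypothetical survey"),
]
```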
A divide was proposed by Gary Simons between curated web services (where people create data and people manage that data) – like Wikipedia – and aggregating web services (where automatic harvesters gather data from archives, libraries and so on) – like Google. I think the consensus was that we needed both – linking to information that is out there, and filling in the gaps. The OLAC metadata standard (http://www.language-archives.org/OLAC/metadata.html) is one example of the kind of record such harvesters already work with.
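For the aggregating side, here is a minimal sketch (again in Python, purely illustrative) of what a harvester does: it asks an archive’s OAI-PMH endpoint for records and pulls out a couple of fields. The endpoint URL is a placeholder, I’m assuming the archive serves OLAC records under the usual "olac" metadata prefix, and a real harvester would also follow resumption tokens to page through the whole archive.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url, metadata_prefix="olac"):
    """Yield (identifier, title) pairs from a single ListRecords response."""
    query = urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix})
    with urllib.request.urlopen(f"{base_url}?{query}") as response:
        tree = ET.parse(response)
    for record in tree.iter(f"{OAI}record"):
        identifier = record.findtext(f"{OAI}header/{OAI}identifier")
        # OLAC metadata extends Dublin Core, so a dc:title element is usually present.
        title = record.findtext(f".//{DC}title", default="(no title)")
        yield identifier, title

if __name__ == "__main__":
    # Hypothetical endpoint - substitute a real archive's OAI base URL.
    for identifier, title in harvest("http://archive.example.org/oai"):
        print(identifier, "-", title)
```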