SLAAASh Project

This project is being conducted jointly between researchers at Gallaudet University (Deborah Chen Pichler, Julie Hochgesang, and others) and the University of Connecticut. SLAAASh is short for “Sign Language Acquisition, Annotation, Archiving and Sharing,” though sometimes we instead use “SLAASh” to refer to our overall goal to “annotate, archive, and share” sign language video data from all ages. The ASL Signbank project is part of SLAAASh, so check out the ASL Signbank page as well!

Language data usually needs to be transcribed to be machine-readable and usable for research. This is especially true for sign language data collected on video. Large corpora exist for many other languages and are shared between labs and projects for research use; yet no large shared corpus exists for ASL. Our goal is to prepare a corpus of previously collected ASL acquisition data to share with other researchers, so that more researchers can conduct studies of ASL acquisition and use.

The video data consists of child ASL data from the CLESS project previously conducted by our lab. Because the data was originally collected for another purpose, one of our initial goals is to request consent for further data sharing from all past participants and to seek ethically sound, community-supported practices for decision making. Our lab has held focus groups for input from various stakeholders: potential participants, family members, and other Deaf community members. Our primary concern is the protection of individual rights, with research an important but secondary concern.

SLAAASh Data:

Child    Sessions   Age begin   Age end   Time observed (hrs:mins)   Est. gloss tokens   Est. child utterances
ABY         79      1;04.22     3;04.07            73:43                  130,000                16,600
JIL         83      1;07.03     3;07.09            79:16                  119,000                17,800
NED         44      1;05.28     4;01.28            40:00                   60,000                 9,000
SAL         18      1;07.18     2;10.01            17:11                   23,000                 3,900
Total      224         —           —              210:10                  332,000                47,300

Previous annotation of the ASL video data needs standardization in order to be usable by a wider audience. The ASL Signbank’s lexicon of ID glosses greatly facilitates this. (See the project page for more information.)
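To illustrate what ID-gloss standardization does, here is a minimal sketch: legacy annotation labels are mapped onto a single canonical ID gloss drawn from a lexicon like the ASL Signbank. The mapping entries and labels below are invented for illustration, not actual Signbank entries.

```python
# Hypothetical sketch of ID-gloss standardization. Each legacy label
# used in older annotation files is mapped to one canonical ID gloss;
# labels already in canonical form pass through unchanged.

ID_GLOSS_MAP = {
    "MOM": "MOTHER",    # invented example mappings,
    "MOMMY": "MOTHER",  # not actual Signbank entries
    "IX_1": "IX-1",
}

def standardize(gloss):
    """Return the canonical ID gloss, falling back to the original label."""
    return ID_GLOSS_MAP.get(gloss, gloss)

print([standardize(g) for g in ["MOMMY", "EAT", "IX_1"]])
# ['MOTHER', 'EAT', 'IX-1']
```

In practice the mapping lives in the Signbank lexicon itself, so every annotator converges on the same label for the same sign.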


Our current day-to-day focus is on converting old annotation files to the new system, completing annotation of previously unfinished files, and conducting basic analyses of the data, including vocabulary counts, MLU (mean length of utterance), and IPSyn analyses. We plan to release each data set as it is prepared. We are already sharing tools such as the ASL Signbank as open source.
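For readers unfamiliar with MLU: it is the average number of units (for sign data, gloss tokens) per utterance. The sketch below is a minimal illustration of the arithmetic, not our actual analysis pipeline; the sample utterances are invented.

```python
# Minimal sketch of an MLU (mean length of utterance) calculation over
# glossed utterances, where each utterance is a list of gloss tokens.
# Illustration only, not the lab's actual analysis pipeline.

def mlu(utterances):
    """Return mean length of utterance in gloss tokens."""
    if not utterances:
        return 0.0
    total_tokens = sum(len(u) for u in utterances)
    return total_tokens / len(utterances)

sample = [
    ["MOTHER", "EAT"],           # 2 tokens
    ["IX-1", "WANT", "COOKIE"],  # 3 tokens
    ["NO"],                      # 1 token
]
print(mlu(sample))  # 2.0
```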


We have been developing an ASL-IPSyn measure building on the previously established IPSyn (Index of Productive Syntax) for English. Our ASL-IPSyn builds on work by many collaborators, research assistants, and students over a number of years. If you use ASL-IPSyn and would like to share your anonymized data, please contact us (diane.lillo-martin( at )uconn.edu). We will keep a database of scores to use for comparisons. Our current ASL-IPSyn scoresheet and scoring instruction guide are available for download and use at this link. Contact our lab manager with any issues with the download.
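For orientation, IPSyn-style measures credit each scoresheet item for up to two distinct exemplars found in the sample (0, 1, or 2 points per item) and sum the credits into a total. The sketch below shows that aggregation step only; the item names are invented for illustration and are not taken from the actual ASL-IPSyn scoresheet.

```python
# Hedged sketch of IPSyn-style score aggregation: each scoresheet item
# earns 1 point per distinct exemplar attested in the language sample,
# capped at 2 points per item; the total is the sum across items.
# Item names are hypothetical, not the real ASL-IPSyn scoresheet.

def ipsyn_total(exemplar_counts, max_per_item=2):
    """Sum per-item credit, capping each item at max_per_item points."""
    return sum(min(count, max_per_item) for count in exemplar_counts.values())

counts = {
    "N1: single noun sign": 5,  # many exemplars, capped at 2 points
    "V2: verb + argument": 1,   # one exemplar, 1 point
    "Q1: wh-question": 0,       # not attested, 0 points
}
print(ipsyn_total(counts))  # 3
```

The real measure of course turns on identifying the structures in annotated utterances; the scoresheet and instruction guide linked above describe the items themselves.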


Publications

Chen Pichler, Deborah, Hochgesang, Julie, Simons, Doreen, and Lillo-Martin, Diane. (2016). Community Input on Re-consenting for Data Sharing. In Eleni Efthimiou, Stavroula-Evita Fotinea, Thomas Hanke, Julie Hochgesang, Jette Kristoffersen & Johanna Mesch (Eds.), Workshop Proceedings: 7th Workshop on the Representation and Processing of Sign Languages: Corpus Mining, 29-34. pdf

Hochgesang, Julie, Pascual Villanueva, Pedro, Mathur, Gaurav, and Lillo-Martin, Diane. (2010). Building a Database while Considering Research Ethics in Sign Language Communities. Proceedings of the 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies; 7th Language Resources and Evaluation Conference. pdf

Presentations

Chen Pichler, Deborah, Gökgöz, Kadir & Lillo-Martin, Diane (2018). Points to self by Deaf, hearing and Coda children. International Conference on Sign Language Acquisition; Istanbul, Turkey; June 28, 2018. pdf

Goodwin, Corina & Lillo-Martin, Diane (2018). Aspects of Sign Input to Deaf children of Deaf parents. International Conference on Sign Language Acquisition; Istanbul, Turkey; June 27. pdf

Lillo-Martin, Diane & Chen Pichler, Deborah (2018). It’s not all ME, ME, ME: Revisiting the Acquisition of ASL Pronouns. Formal and Experimental Advances in Sign Language Theory (FEAST); Ca ‘Foscari University, Venice; June 18, 2018. pdf

Hochgesang, J.A. (2017). Ethics of working with signed language communities. Invited workshop lecture for “SIGN8 International Conference for Sign Language Users”. Florianópolis, SC, Brazil, Universidade Federal de Santa Catarina (UFSC), October 9-12.

Becker, Amelia. (2017). “Selected finger combinations in American Sign Language: frequency, acquisition, and markedness.” CL2017 Pre-Conference Workshop 3: Corpus-based approaches to sign language linguistics: Into the second decade. University of Birmingham, July 24. pdf

Lillo-Martin, Diane, Goodwin, Corina & Prunier, Lee (2017). ASL-IPSyn: A new measure of grammatical development. Poster presentation, Boston University Conference on Language Development (BUCLD). Boston, MA; November 2017. pdf

Lillo-Martin, Diane, Prunier, Lee, Hochgesang, Julie, and Chen Pichler, Deborah. (2017). Sign Language Acquisition: Annotation, Archiving and Sharing – Status Report. Poster presented at the 8th UConn Language Fest. pdf

Chen Pichler, D., J. Hochgesang, D. Simons & D. Lillo-Martin. (2016). Community Input on Re-consenting for Data Sharing. Presented at the 7th Workshop on the Representation and Processing of Sign Languages: Corpus Mining. LREC, Portorož, Slovenia, May 28.

Chen Pichler, Deborah, Hochgesang, Julie, Simons, Doreen, and Lillo-Martin, Diane. (2016). Reconsenting for Data Sharing. Poster presented at the 12th International Conference of Theoretical Issues in Sign Language Research. La Trobe University, Melbourne, Australia, January 4-7. pdf

Research reported here was supported in part by the National Institute on Deafness and other Communication Disorders of the National Institutes of Health under Award Number R01DC013578. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.