- LDC at NEALLT 2011
- 2008/2010 NIST Metrics for Machine Translation (MetricsMaTr) GALE Evaluation Set
- NIST/USF Evaluation Resources for the VACE Program – Meeting Data Training Set Part 1
Spring 2011 LDC Data Scholarship Recipients
LDC is pleased to announce the student recipients of the Spring 2011 LDC Data Scholarship program! The LDC Data Scholarship program provides university students with access to LDC data at no cost. Students were asked to complete an application consisting of a proposal describing their intended use of the data, as well as a letter of support from their thesis adviser. LDC received many solid applications from both undergraduate and graduate students attending universities across the globe. After careful deliberation, we have chosen eight proposals to support. These students will receive no-cost copies of LDC data:
Roberto Aceves - Monterrey Institute of Technology and Superior Studies, ITESM (Mexico), graduate student, Computer Science. Roberto has been awarded a copy of the Speech in Noisy Environments (SPINE) database for his research in automatic speech recognition in noisy environments.
Daniel Escobar - Monterrey Institute of Technology and Superior Studies, ITESM (Mexico), graduate student, Mechatronics and Automation. Daniel has been awarded a copy of Switchboard-2 and NIST SRE for designing a parallel joint factor analysis architecture for a speaker verification system.
Erhan Guven - The George Washington University (USA), graduate student, Computer Science. Erhan has been awarded a copy of Emotional Prosody (LDC2002S28) for his work in extracting speaker emotional state from spectrograms.
Anup Kolya - Jadavpur University (India), graduate student, Computer Science and Engineering. Anup has been awarded a copy of ACE 2005 English SpatialML Annotations (LDC2008T03), ACE Time Normalization (TERN) 2004 English Evaluation Data V1.0 (LDC2010T18), and ACE Time Normalization (TERN) 2004 English Training Data v1.0 (LDC2005T07) for his research in temporal information extraction.
Benjamín Martínez Elizalde - Monterrey Institute of Technology and Superior Studies, ITESM (Mexico), graduate student, Computer Science. Benjamín has been awarded a copy of Switchboard-2 and NIST SRE to support his research in speaker verification modeling.
Hanan Waer - Newcastle University (UK), graduate student, Educational and Applied Linguistics. Hanan has been awarded a copy of CALLHOME Egyptian Arabic Transcripts (LDC97T19), CALLHOME Egyptian Arabic Transcripts Supplement (LDC2002T38), and Egyptian Colloquial Arabic Lexicon (LDC99L22) for her research in comparing Arabic/English code switching in everyday Arabic conversation and academic discourse.
Muhua Zhu - Northeastern University (China), graduate student, Natural Language Processing. Muhua has been awarded a copy of Chinese Treebank 7.0 (LDC2010T07) to support the development of a high-accuracy Chinese parser.
Vignesh Kalaiselvan, Ganapathy Raman Kasi, Preetham Samue, Ramsrinivas Anantharamakrishnan, and Sathyanarayan Jeevan - Amrita Vishwa Vidyapeetham University (India), undergraduate students, Electronics and Communication Engineering. The group has been awarded CALLHOME Speech, Transcripts, and Lexicon in Egyptian Arabic and German for their research in deriving robust features for multilingual acoustic modeling.
Please join us in congratulating our student winners! The next LDC Data Scholarship program is scheduled for the Fall 2011 semester.
LDC will be exhibiting at the upcoming NEALLT (North East Association for Language Learning Technology) conference, which will be held at the University of Pennsylvania from 1-3 April 2011. NEALLT is the regional chapter of the International Association for Language Learning Technology and works to improve language instruction through the use of technology.
LDC’s Dr. Mohamed Maamouri will discuss how resources developed and distributed by LDC can aid language education in his presentation “Incorporating Resources and New Technologies in Language Education” on Saturday, April 2 (Session 9: 4:00-4:20 pm, Cohen G17). The presentation will highlight the LDC Arabic Reading Enhancement Tool, designed to support the development of reading skills for learning Arabic as a first and second language.
We hope to see you there!
(1) 2008/2010 NIST Metrics for Machine Translation (MetricsMaTr) GALE Evaluation Set (LDC2011T05) is a package containing source data, reference translations, machine translations and associated human judgments used in the NIST 2008 and 2010 MetricsMaTr evaluations. The package was compiled by researchers at NIST, making use of Arabic and Chinese broadcast, newswire and web data and reference translations collected and developed by LDC for Phase 2 and Phase 2.5 of the DARPA GALE program.
NIST MetricsMaTr is a series of research challenge events for machine translation (MT) metrology, promoting the development of innovative MT metrics that correlate highly with human assessments of MT quality. Participants submit their metrics to NIST (National Institute of Standards and Technology). NIST runs those metrics on held-back test data for which it has human quality assessments, then calculates correlations between the automatic metric scores and the human assessments. Specifically, the goals of MetricsMaTr are: to inform other MT technology evaluation campaigns and conferences with regard to improved metrology; to establish an infrastructure that encourages the development of innovative metrics; to build a diverse community that will bring new perspectives to MT metrology research; and to provide a forum for MT metrology discussion and for establishing future directions of MT metrology.
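The scoring step described above is essentially a correlation computation. As an illustrative sketch only (this is not NIST's actual evaluation code, and the segment scores below are hypothetical), the segment-level Pearson correlation between an automatic metric's scores and human adequacy judgments could be computed like this:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-segment data: an automatic metric's scores and
# 7-point human adequacy judgments for the same five segments.
metric_scores = [0.42, 0.35, 0.71, 0.28, 0.55]
adequacy = [4, 3, 6, 2, 5]

r = pearson(metric_scores, adequacy)  # close to 1.0 means the metric tracks human judgment
```

A metric whose scores correlate highly with such judgments across many segments is the kind of submission the challenge aims to encourage; the actual evaluations also report document- and system-level correlations.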
The first MetricsMaTr challenge was held in 2008; the development data from the 2008 program is available from LDC as 2008 NIST Metrics for Machine Translation (MetricsMATR08) Development Data (LDC2009T05). The MetricsMaTr10 evaluation plan is included in this release.
This release contains 149 documents with corresponding reference translations (Arabic-to-English and Chinese-to-English), system translations and human assessments. The human assessments include the following: Adequacy7 (a 7-point scale for judging the meaning of a system translation with respect to the reference translation); Adequacy Yes/No (whether the given system segment meant essentially the same as the reference translation); Preference (the judges' preference between two candidate translations when compared to a human reference translation); and HTER (Human-targeted Translation Edit Rate: the number of human edits required to make a system translation convey the same meaning as the reference translation).
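To make the HTER assessment concrete, here is a toy sketch of an edit-rate score: the word-level edit distance between a system output and its human-edited ("targeted") reference, normalized by reference length. This is a simplification for illustration only; the real HTER computation uses the TER tool, which also counts block shifts, and the example strings are invented.

```python
def word_edit_distance(hyp, ref):
    """Levenshtein distance over word tokens (insertions, deletions, substitutions)."""
    h, r = hyp.split(), ref.split()
    # dist[i][j] = minimum edits to turn h[:i] into r[:j]
    dist = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        dist[i][0] = i
    for j in range(len(r) + 1):
        dist[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            sub = 0 if h[i - 1] == r[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # delete a hypothesis word
                             dist[i][j - 1] + 1,        # insert a reference word
                             dist[i - 1][j - 1] + sub)  # substitute (or match)
    return dist[len(h)][len(r)]

def hter_like(system, targeted_ref):
    """Edits needed to match the targeted reference, per reference word."""
    return word_edit_distance(system, targeted_ref) / len(targeted_ref.split())

# Invented example: three words must be inserted into a six-word reference.
score = hter_like("the cat sat", "the cat sat on the mat")  # 3 edits / 6 words = 0.5
```

Lower scores indicate that less human post-editing was needed, so a good automatic metric should correlate with low HTER on the same segments.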
2008/2010 NIST Metrics for Machine Translation (MetricsMaTr) GALE Evaluation Set is distributed via web download.
2011 Subscription Members will automatically receive two copies of this corpus on disc. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$250.
(2) NIST/USF Evaluation Resources for the VACE Program – Meeting Data Training Set Part 1 (LDC2011V01) was developed by researchers at the Department of Computer Science and Engineering, University of South Florida (USF), Tampa, Florida and the Multimodal Information Group at the National Institute of Standards and Technology (NIST). It contains approximately fifteen hours of meeting room video data collected in 2001 and 2002 at NIST's Meeting Data Collection Laboratory and annotated for the 2005 face, person and hand detection and tracking tasks of the VACE (Video Analysis and Content Extraction) program.
The VACE program was established to develop novel algorithms for automatic video content extraction, multi-modal fusion, and event understanding. During VACE Phases I and II, the program made significant progress in the automated detection and tracking of moving objects including faces, hands, people, vehicles and text in four primary video domains: broadcast news, meetings, street surveillance, and unmanned aerial vehicle motion imagery. Initial results were also obtained on automatic analysis of human activities and understanding of video sequences.
Three performance evaluations were conducted under the auspices of the VACE program between 2004 and 2007. The 2005 evaluation was administered by USF in collaboration with NIST and guided by an advisory forum including the evaluation participants.
NIST's Meeting Data Collection Laboratory is designed to collect corpora to support research, development and evaluation in meeting recognition technologies. It is equipped to look and sound like a conventional meeting space. The data collection facility includes five Sony EVI-D30 video cameras, four of which have stationary views of a center conference table with a fixed focus and viewing angle, plus an additional "floating" camera used to focus on particular participants, the whiteboard or the conference table, depending on the meeting forum. The data was captured in a NIST-internal file format; the video was then extracted from that format and encoded using the MPEG-2 standard in NTSC format.
NIST/USF Evaluation Resources for the VACE Program – Meeting Data Training Set Part 1 is distributed on eight DVD-ROMs.
2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2500.