Friday, June 17, 2011

LDC June 2011 Newsletter




ACL has returned to North America and LDC is taking this opportunity to interact with top HLT researchers in beautiful Portland, OR. LDC’s exhibition table will feature information on new developments at the consortium and will also be the go-to point for exciting new, green giveaways.

LDC’s Seth Kulick will be presenting research on ‘Using Derivation Trees for Treebank Error Detection’ (S-66) during Monday’s evening poster session (20 June, 6.00 – 8.30 pm). The abstract for this paper, coauthored by LDCers Ann Bies and Justin Mott, is as follows:

This work introduces a new approach to checking treebank consistency. Derivation trees based on a variant of Tree Adjoining Grammar are used to compare the annotation of word sequences based on their structural similarity. This overcomes the problems of earlier approaches based on using strings of words rather than tree structure to identify the appropriate contexts for comparison. We report on the result of applying this approach to the Penn Arabic Treebank and how this approach leads to high precision of error detection.
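The full method described in the paper relies on derivation-tree fragments from a Tree Adjoining Grammar variant; purely as an illustration of the underlying idea of consistency checking, the sketch below groups identical word sequences by the bracketing assigned to them and flags sequences that receive more than one analysis. The data structures and bracketing strings are hypothetical and are not drawn from the paper or the Penn Arabic Treebank.

    from collections import defaultdict

    def find_inconsistencies(annotated_spans):
        """annotated_spans: iterable of (word_sequence, structure) pairs.
        Returns word sequences that were annotated with more than one structure."""
        by_words = defaultdict(set)
        for words, structure in annotated_spans:
            by_words[tuple(words)].add(structure)
        # A sequence carrying two or more distinct structures is a candidate error.
        return {words: structs for words, structs in by_words.items() if len(structs) > 1}

    # Hypothetical toy input: the same three-word sequence bracketed two different ways.
    spans = [
        (("in", "the", "end"), "(PP (IN in) (NP (DT the) (NN end)))"),
        (("in", "the", "end"), "(ADVP (IN in) (DT the) (NN end))"),
    ]
    print(find_inconsistencies(spans))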

We hope to see you there.

LDC is now on your favorite Social Networks (Facebook, LinkedIn and RSS, oh my!)

Over the past few months, LDC has responded to requests from the community to increase our online presence. We are happy to announce that LDC now has its very own Facebook page, LinkedIn profile (independent of the University of Pennsylvania) and Blog, which provides an RSS feed for LDC newsletters. Please visit LDC on our various profiles and let us know what you think!


New Publications

(1) 2006 NIST Spoken Term Detection Development Set was compiled by researchers at NIST (National Institute of Standards and Technology) and contains eighteen hours of Arabic, Chinese and English broadcast news, English conversational telephone speech and English meeting room speech used in NIST's 2006 Spoken Term Detection (STD) evaluation. The STD initiative is designed to facilitate research and development of technology for retrieving information from archives of speech data with the goals of exploring promising new ideas in spoken term detection, developing advanced technology incorporating these ideas, measuring the performance of this technology and establishing a community for the exchange of research results and technical insights.
The 2006 STD task was to find all of the occurrences of a specified term (a sequence of one or more words) in a given corpus of speech data. The evaluation was intended to develop technology for rapidly searching very large quantities of audio data. Although the evaluation used modest amounts of data, it was structured to simulate the very large data situation and to make it possible to extrapolate the speed measurements to much larger data sets. Therefore, systems were implemented in two phases: indexing and searching. In the indexing phase, the system processes the speech data without knowledge of the terms. In the searching phase, the system uses the terms, the index, and optionally the audio to detect term occurrences.
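As a minimal sketch of that two-phase structure, the following toy code builds an index from hypothetical time-marked ASR transcripts of the audio (file id, start time, word) and then searches it for a multi-word term; real STD systems index richer recognition output, but the indexing/searching split is the same.

    from collections import defaultdict

    def build_index(tokens):
        """Indexing phase: processes the data once, with no knowledge of the terms."""
        transcripts = defaultdict(list)
        for file_id, start, word in sorted(tokens):
            transcripts[file_id].append((start, word.lower()))
        return transcripts

    def search(index, term):
        """Searching phase: returns (file_id, start_time) for each putative occurrence."""
        words = term.lower().split()
        hits = []
        for file_id, entries in index.items():
            sequence = [w for _, w in entries]
            for i in range(len(sequence) - len(words) + 1):
                if sequence[i:i + len(words)] == words:
                    hits.append((file_id, entries[i][0]))
        return hits

    idx = build_index([("bn_001", 12.4, "spoken"), ("bn_001", 12.9, "term"),
                       ("bn_001", 13.3, "detection")])
    print(search(idx, "term detection"))   # [('bn_001', 12.9)]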

The development corpus consists of three data genres: broadcast news (BN), conversational telephone speech (CTS) and conference room meetings (CONFMTG). The broadcast news material was collected in 2001 by LDC's broadcast collection system from the following sources: ABC (English), China Broadcasting System (Chinese), China Central TV (Chinese), China National Radio (Chinese), China Television System (Chinese), CNN (English), MSNBC/NBC (English), Nile TV (Arabic), Public Radio International (English) and Voice of America (Arabic, Chinese, English). The CTS data was taken from the Switchboard data sets (e.g., Switchboard-2 Phase 1 LDC98S75, Switchboard-2 Phase 2 LDC99S79) and the Fisher corpora (e.g., Fisher English Training Speech Part 1 LDC2004S13), also collected by LDC. The conference room meeting material consists of goal-oriented, small group round table meetings and was collected in 2001, 2004 and 2005 by NIST, the International Computer Science Institute (Berkeley, California), Carnegie Mellon University (Pittsburgh, PA) and Virginia Polytechnic Institute and State University (Blacksburg, VA) as part of the AMI corpus project.

Each BNews recording is a 1-channel, PCM-encoded, 16 kHz SPHERE-formatted file. CTS recordings are 2-channel, u-law encoded, 8 kHz SPHERE-formatted files. The CONFMTG files contain a single recorded channel.
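SPHERE files begin with a plain-ASCII header, so basic properties of each recording can be inspected without special tooling. The sketch below assumes the standard NIST_1A header layout (a magic line, a header-size line, then "name -type value" records ending at end_head); field names vary by corpus, and the file name shown is hypothetical.

    def read_sphere_header(path):
        with open(path, "rb") as f:
            f.readline()                              # magic line, b"NIST_1A"
            header_size = int(f.readline().strip())   # total header size in bytes, e.g. 1024
            f.seek(0)
            header = f.read(header_size).decode("ascii", errors="replace")
        fields = {}
        for line in header.splitlines()[2:]:
            if not line.strip():
                continue
            if line.strip() == "end_head":
                break
            name, _type, value = line.split(None, 2)
            fields[name] = value
        return fields

    # hdr = read_sphere_header("example.sph")   # hypothetical file name
    # print(hdr.get("sample_rate"), hdr.get("channel_count"), hdr.get("sample_coding"))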

2006 NIST Spoken Term Detection Development Set is distributed on 1 DVD-ROM. 2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$800.

*

(2) Datasets for Generic Relation Extraction (reACE) was developed at The University of Edinburgh, Edinburgh, Scotland. It consists of English broadcast news and newswire data originally annotated for the ACE (Automatic Content Extraction) program to which the Edinburgh Regularized ACE (reACE) mark-up has been applied.

The Edinburgh relation extraction (RE) task aims to identify useful information in text (e.g., PersonW works for OrganisationX, GeneY encodes ProteinZ) and to recode it in a format such as a relational database or RDF triple store (a database for the storage and retrieval of Resource Description Framework (RDF) metadata) that can be more effectively used for querying and automated reasoning. A number of resources have been developed for training and evaluation of automatic systems for RE in different domains. However, comparative evaluation is impeded by the fact that these corpora use different markup formats and different notions of what constitutes a relation.
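As a toy illustration of the kind of recoding the task targets, the snippet below writes the text's PersonW/OrganisationX example as RDF-style triples in an N-Triples-like form; the namespace URI is a placeholder, not part of reACE.

    EX = "http://example.org/"   # hypothetical namespace

    def to_triple(subject, predicate, obj):
        return (EX + subject, EX + predicate, EX + obj)

    triples = [
        to_triple("PersonW", "worksFor", "OrganisationX"),
        to_triple("GeneY", "encodes", "ProteinZ"),
    ]
    for s, p, o in triples:
        print(f"<{s}> <{p}> <{o}> .")   # one triple per line, N-Triples style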

reACE solves this problem by converting data to a common document type using token standoff and including detailed linguistic markup while maintaining all information in the original annotation. The subsequent re-annotation process normalizes the two data sets so that they comply with a notion of relation that is intuitive, simple and informed by the semantic web.

The data in this corpus consists of newswire and broadcast news material from ACE 2004 Multilingual Training Corpus (LDC2005T09) and ACE 2005 Multilingual Training Corpus (LDC2006T06). This material has been standardized for evaluation of multi-type RE across domains.

Annotation includes (1) a refactored version of the original data to a common XML document type; (2) linguistic information from LT-TTT (a system for tokenizing text and adding markup) and MINIPAR (an English parser); and (3) a normalized version of the original RE markup that complies with a shared notion of what constitutes a relation across domains.
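To give a flavor of how token-standoff relation markup is consumed, the sketch below resolves a relation's token references back to the underlying words. The element and attribute names are invented for illustration and do not reflect the actual reACE document type.

    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <document>
      <tokens>
        <token id="t1">John</token><token id="t2">works</token>
        <token id="t3">for</token><token id="t4">Acme</token>
      </tokens>
      <relation type="employment" arg1="t1" arg2="t4"/>
    </document>""")

    tokens = {t.get("id"): t.text for t in doc.find("tokens")}
    for rel in doc.findall("relation"):
        print(rel.get("type"), tokens[rel.get("arg1")], "->", tokens[rel.get("arg2")])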

The data sources represented in the corpus were collected by LDC in 2000 and 2003 and consist of the following: ABC, Agence France Presse, Associated Press, Cable News Network, MSNBC/NBC, New York Times, Public Radio International, Voice of America and Xinhua News Agency.

Datasets for Generic Relation Extraction (reACE) is distributed via web download. 2011 Subscription Members will automatically receive two copies of this corpus on disc. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$800.

*

(3) English Gigaword Fifth Edition is a comprehensive archive of newswire text data that has been acquired over several years by LDC at the University of Pennsylvania. The fifth edition includes all of the contents of English Gigaword Fourth Edition (LDC2009T13) plus new data covering the 24-month period of January 2009 through December 2010.

The seven distinct international sources of English newswire included in this edition are the following:
  • Agence France-Presse, English Service (afp_eng)
  • Associated Press Worldstream, English Service (apw_eng)
  • Central News Agency of Taiwan, English Service (cna_eng)
  • Los Angeles Times/Washington Post Newswire Service (ltw_eng)
  • Washington Post/Bloomberg Newswire Service (wpb_eng)
  • New York Times Newswire Service (nyt_eng)
  • Xinhua News Agency, English Service (xin_eng)
The seven-character codes in parentheses above consist of the three-character source name abbreviation and the three-character language code ("eng"), separated by an underscore ("_"). The three-letter language code conforms to LDC's internal convention based on the ISO 639-3 standard.
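For scripts that need to group files by source or language, the codes split cleanly on the underscore; a trivial sketch:

    def parse_code(code):
        source, lang = code.split("_")        # e.g. "afp_eng" -> ("afp", "eng")
        return {"source": source, "language": lang}

    print(parse_code("afp_eng"))   # {'source': 'afp', 'language': 'eng'}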

Data

The following table sets forth the overall totals for each source. Note that "Total-MB" refers to the quantity of data when unzipped (approximately 26 gigabytes), "Gzip-MB" refers to compressed file sizes as stored on the DVD-ROMs and "K-wrds" refers to the number of whitespace-separated tokens (of all types) after all SGML tags are eliminated:

Source      #Files   Gzip-MB   Total-MB      K-wrds       #DOCs
afp_eng        146      1732       4937      738322     2479624
apw_eng        193      2700       7889     1186955     3107777
cna_eng        144        86        261       38491      145317
ltw_eng        127       651       1694      268088      411032
nyt_eng        197      3280       8938     1422670     1962178
wpb_eng         12        42        111       17462       26143
xin_eng        191       834       2518      360714     1744025
TOTAL         1010      9325      26348     4032686     9876086

English Gigaword Fifth Edition is distributed on 3 DVD-ROMs. 2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$6000.
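For readers who want to reproduce rough counts of their own, the "K-wrds" figures above correspond to whitespace-separated tokens remaining after SGML tags are removed; the sketch below is a simplification (the exact tag handling behind the published counts may differ), and the file name is hypothetical.

    import gzip, re

    def count_kwords(gzip_path):
        with gzip.open(gzip_path, "rt", encoding="utf-8", errors="replace") as f:
            text = re.sub(r"<[^>]+>", " ", f.read())   # strip SGML tags
        return len(text.split()) // 1000               # thousands of tokens

    # print(count_kwords("afp_eng_201001.gz"))   # hypothetical file name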

Monday, May 23, 2011

LDC May 2011 Newsletter


New Publications:

- 2005 NIST Speaker Recognition Evaluation Training Data -

- NIST/USF Evaluation Resources for the VACE Program - Meeting Data Test Set Part 3 -


Early Renewing Members Save Again!


Once again, LDC's early renewal discount program has resulted in significant savings for our members! For Membership Year (MY) 2011, about 120 organizations that renewed membership or joined early received a discount on their membership fees. Taken together, these members saved almost US$70,000! MY 2010 members are reminded that they are still eligible for a 5% discount when renewing. This discount will apply throughout 2011, regardless of time of renewal.

By joining for MY 2011, any organization can take advantage of membership benefits including free membership year data as well as discounts on older LDC corpora. Please visit our Members FAQ for further information.

New Publications

(1) 2005 NIST Speaker Recognition Evaluation Training Data was developed at LDC and NIST (National Institute of Standards and Technology). It consists of 392 hours of conversational telephone speech in English, Arabic, Mandarin Chinese, Russian and Spanish and associated English transcripts used as training data in the NIST-sponsored 2005 Speaker Recognition Evaluation (SRE). The ongoing series of yearly SRE evaluations conducted by NIST is intended to be of interest to researchers working on the general problem of text-independent speaker recognition. To that end, the evaluations are designed to be simple, to focus on core technology issues, to be fully supported and to be accessible to those wishing to participate.

The task of the 2005 SRE evaluation was speaker detection, that is, to determine whether a specified speaker is speaking during a given segment of conversational speech. The task was divided into 20 distinct and separate tests involving one of five training conditions and one of four test conditions.

The speech data consists of conversational telephone speech with "multi-channel" data collected simultaneously from a number of auxiliary microphones. The files are organized into two segment types: 10-second two-channel excerpts (continuous segments from single conversations that are estimated to contain approximately 10 seconds of actual speech in the channel of interest) and 5-minute two-channel conversations.

The speech files are stored as 8-bit u-law speech signals in separate SPHERE files. In addition to the standard header fields, the SPHERE header for each file contains some auxiliary information that includes the language of the conversation and whether the data was recorded over a telephone line.
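For readers unfamiliar with the encoding, each 8-bit u-law sample expands to a linear PCM value via the standard G.711 expansion; a minimal decoder sketch (independent of any LDC- or NIST-specific tooling):

    def ulaw_to_linear(byte):
        u = ~byte & 0xFF                    # u-law bytes are stored complemented
        sign = u & 0x80
        exponent = (u >> 4) & 0x07
        mantissa = u & 0x0F
        magnitude = (((mantissa << 3) + 0x84) << exponent) - 0x84
        return -magnitude if sign else magnitude

    print(ulaw_to_linear(0xFF), ulaw_to_linear(0x00))   # 0 -32124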

English-language word transcripts in .cmt format were produced using an automatic speech recognition (ASR) system and contain error rates in the range of 15-30%.
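Error rates for ASR output are conventionally reported as word error rate (WER): the word-level edit distance between hypothesis and reference divided by the reference length. A small self-contained sketch:

    def wer(reference, hypothesis):
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + sub)   # substitution or match
        return d[len(ref)][len(hyp)] / len(ref)

    print(wer("the cat sat on the mat", "the cat sat mat"))   # 2 errors / 6 words = 0.333...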

2005 NIST Speaker Recognition Evaluation Training Data is distributed on 6 DVD-ROMs. 2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2000.

*

(2) NIST/USF Evaluation Resources for the VACE Program - Meeting Data Test Set Part 3, Linguistic Data Consortium (LDC) catalog number LDC2011V03 and ISBN 1-58563-579-0, was developed by researchers at the Department of Computer Science and Engineering, University of South Florida (USF), Tampa, Florida and the Multimodal Information Group at the National Institute of Standards and Technology (NIST). It contains approximately eleven hours of meeting room video data collected in 2001 and 2002 at NIST's Meeting Data Collection Laboratory and annotated for the VACE (Video Analysis and Content Extraction) 2005 face, person and hand detection and tracking tasks.

The VACE program was established to develop novel algorithms for automatic video content extraction, multi-modal fusion, and event understanding. During VACE Phases I and II, the program made significant progress in the automated detection and tracking of moving objects including faces, hands, people, vehicles and text in four primary video domains: broadcast news, meetings, street surveillance, and unmanned aerial vehicle motion imagery. Initial results were also obtained on automatic analysis of human activities and understanding of video sequences.

Three performance evaluations were conducted under the auspices of the VACE program between 2004 and 2007. The 2005 evaluation was administered by USF in collaboration with NIST and guided by an advisory forum including the evaluation participants. A summary of results of the evaluation can be found in the 2005 VACE results and analysis paper included in this release.

NIST's Meeting Data Collection Laboratory is designed to collect corpora to support research, development and evaluation in meeting recognition technologies. It is equipped to look and sound like a conventional meeting space. The data collection facility includes five Sony EVI-D30 video cameras, four of which have stationary views of a center conference table (one view from each surrounding wall) with a fixed focus and viewing angle, and an additional "floating" camera which is used to focus on particular participants, the whiteboard or the conference table depending on the meeting forum. The data is captured in a NIST-internal file format. The video data was extracted from the NIST format and encoded using the MPEG-2 standard in NTSC format. Further information concerning the video data parameters can be found in the documentation included with this corpus.

NIST/USF Evaluation Resources for the VACE Program - Meeting Data Test Set Part 3 is distributed on 7 DVD-ROMs. 2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2500.

Monday, April 18, 2011

LDC April 2011 Newsletter

Membership Mailbag - Commercial licenses and LDC data-

New Publications:

- Broadcast News Lattices -

- NIST/USF Evaluation Resources for the VACE Program - Meeting Data Training Set Part 2 -

Membership Mailbag - Commercial licenses and LDC data

LDC's Membership office responds to thousands of emailed queries a year, and, over time, we've noticed that some questions tend to crop up with regularity. To address the questions that you, our data users, have asked, we'd like to continue our periodic Membership Mailbag series of newsletter articles. This month, we'll review how to obtain a commercial license to LDC data.

Our non-member research licenses permit non-commercial linguistic education and research use of data. Not-for-profit members and non-members, including non-member commercial organizations, cannot use LDC data to develop or test products for commercialization, nor can they use LDC data in any commercial product or for any commercial purpose. To gain commercial rights to data, an organization must join LDC as a for-profit member. For-profit members gain commercial rights to data from the year joined unless that right is otherwise restricted by a corpus-specific user license. Furthermore, for-profit members can license data for commercial use from closed Membership Years at the Reduced Licensing Fee. If membership is not renewed for the following year, the organization still retains ongoing commercial rights to data licensed as a For-Profit member and any data from the Membership Year. Note that the organization will not have a commercial license to any new data obtained after the Membership Year has ended, unless membership is renewed.

Simply put – organizations that have not signed LDC’s for-profit membership agreement and paid membership fees do not have a commercial license to any LDC data.

In the case of a handful of corpora, such as American National Corpus (ANC) Second Release (LDC2005T35), Buckwalter Arabic Morphological Analyzer Version 2.0 (LDC2004L02), CELEX2 (LDC96L14) and all CSLU corpora, commercial licenses must be obtained separately from the owners of the data even if an organization is a for-profit member. A full list of corpus-specific user licenses can be found on our License Agreements page.

Got a question about LDC data? Forward it to ldc@ldc.upenn.edu. The answer may appear in a future Membership Mailbag article.

New Publications

(1) Broadcast News Lattices was developed by researchers at Microsoft and Johns Hopkins University (JHU) for the Johns Hopkins 2010 Summer Workshop on Speech Recognition with Conditional Random Fields. The lattices were generated using the IBM Attila speech recognition toolkit and were derived from transcripts of approximately 400 hours of English broadcast news recordings. They are intended to be used for training and decoding with Microsoft's Segmental Conditional Random Field (SCRF) toolkit for speech recognition, SCARF.

The goal of the JHU 2010 workshop was to advance the state-of-the-art in core speech recognition by developing new kinds of features for use in a SCRF. The SCRF approach generalizes Conditional Random Fields to operate at the segment level, rather than at the traditional frame level. Every segment is labeled directly with a word. Features are then extracted, each of which measures some form of consistency between the underlying audio and the word hypothesis for a segment. These are combined in a log-linear model (lattice) to produce the posterior probability of a word sequence given the audio.
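Schematically (this is not the SCARF implementation, just the general log-linear form), each candidate word for a segment receives a weighted sum of its feature values, and the scores are normalized into posteriors:

    import math

    def posteriors(hypotheses, weights):
        """hypotheses: {word: {feature_name: value}}; weights: {feature_name: lambda}."""
        scores = {w: sum(weights[f] * v for f, v in feats.items())
                  for w, feats in hypotheses.items()}
        z = sum(math.exp(s) for s in scores.values())          # normalizer
        return {w: math.exp(s) / z for w, s in scores.items()}

    # Hypothetical feature values for two competing word hypotheses on one segment.
    hyps = {"boston": {"acoustic": 1.2, "lm": 0.4}, "bostons": {"acoustic": 0.9, "lm": 0.1}}
    print(posteriors(hyps, {"acoustic": 1.0, "lm": 0.5}))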

Broadcast News Lattices consists of training and test material, the source data for which was taken from various corpora distributed by LDC. The training lattices were derived from the following data sets:

1996 English Broadcast News Speech (HUB4) (LDC97S44); 1996 English Broadcast News Transcripts (HUB4) (LDC97T22) (104 hours)
1997 English Broadcast News Speech (HUB4) (LDC98S71); 1997 English Broadcast News Transcripts (HUB4) (LDC98T28) (97 hours)
TDT4 Multilingual Broadcast News Speech Corpus (LDC2005S11); TDT4 Multilingual Text and Annotations (LDC2005T16) (300 hours)

The test lattices are derived from the English broadcast news material in 2003 NIST Rich Transcription Evaluation Data (LDC2007S10).

The lattices were generated from an acoustic model that included LDA+MLLT, VTLN, fMLLR-based SAT training, fMMI and mMMI discriminative training, and MLLR. The lattices are annotated with a field indicating the results of a second "confirmatory" decoding made with an independent speech recognizer. When there was a correspondence between a lattice link and the 1-best secondary output, the link was annotated with +1. Silence links are denoted with 0 and all others with -1. Correspondence was computed by finding the midpoint of a lattice link and comparing the link label with that of the word in the secondary decoding at that position. Thus, there are some cases where the same word, shifted slightly in time, receives a different confirmation score.
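The midpoint comparison can be pictured with the toy sketch below; the (start, end, label) tuples are illustrative stand-ins, not the lattice format used in this corpus.

    def confirm(link, secondary_best, silence_label="<sil>"):
        """Return +1 if the secondary 1-best word at the link's midpoint matches the
        link label, 0 for silence links and -1 otherwise."""
        start, end, label = link
        if label == silence_label:
            return 0
        midpoint = (start + end) / 2.0
        for w_start, w_end, word in secondary_best:
            if w_start <= midpoint < w_end:
                return 1 if word == label else -1
        return -1

    print(confirm((0.50, 0.90, "news"), [(0.45, 0.95, "news")]))   # 1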

Broadcast News Lattices is distributed via web download.

2011 Subscription Members will automatically receive two copies of this corpus on disc. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1000.

*

(2) NIST/USF Evaluation Resources for the VACE Program - Meeting Data Training Set Part 2 was developed by researchers at the Department of Computer Science and Engineering, University of South Florida (USF), Tampa, Florida and the Multimodal Information Group at the National Institute of Standards and Technology (NIST). It contains approximately fourteen hours of meeting room video data collected in 2001 and 2002 at NIST's Meeting Data Collection Laboratory and annotated for the VACE (Video Analysis and Content Extraction) 2005 face, person and hand detection and tracking tasks.

The VACE program was established to develop novel algorithms for automatic video content extraction, multi-modal fusion, and event understanding. During VACE Phases I and II, the program made significant progress in the automated detection and tracking of moving objects including faces, hands, people, vehicles and text in four primary video domains: broadcast news, meetings, street surveillance, and unmanned aerial vehicle motion imagery. Initial results were also obtained on automatic analysis of human activities and understanding of video sequences. Three performance evaluations were conducted under the auspices of the VACE program between 2004 and 2007. The 2005 evaluation was administered by USF in collaboration with NIST and guided by an advisory forum including the evaluation participants.

NIST's Meeting Data Collection Laboratory is designed to collect corpora to support research, development and evaluation in meeting recognition technologies. It is equipped to look and sound like a conventional meeting space. The data collection facility includes five Sony EVI-D30 video cameras, four of which have stationary views of a center conference table (one view from each surrounding wall) with a fixed focus and viewing angle, and an additional "floating" camera which is used to focus on particular participants, the whiteboard or the conference table depending on the meeting forum. The data is captured in a NIST-internal file format. The video data was extracted from the NIST format and encoded using the MPEG-2 standard in NTSC format. Further information concerning the video data parameters can be found in the documentation included with this corpus.

NIST/USF Evaluation Resources for the VACE Program - Meeting Data Training Set Part 2 is distributed on 8 DVD-ROMs.

2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2500.

Thursday, March 17, 2011

LDC March 2011 Newsletter

- Spring 2011 LDC Data Scholarship Recipients -

- LDC at NEALLT 2011-

New publications:

- 2008/2010 NIST Metrics for Machine Translation (MetricsMaTr) GALE Evaluation Set -

- NIST/USF Evaluation Resources for the VACE Program – Meeting Data Training Set Part 1 -




Spring 2011 LDC Data Scholarship Recipients

LDC is pleased to announce the student recipients of the Spring 2011 LDC Data Scholarship program! The LDC Data Scholarship program provides university students with access to LDC data at no cost. Students were asked to complete an application which consisted of a proposal describing their intended use of the data, as well as a letter of support from their thesis adviser. LDC received many solid applications from both undergraduate and graduate students attending universities across the globe. After careful deliberation, we have chosen eight proposals to support. These students will receive no-cost copies of LDC data:
Roberto Aceves - Monterrey Institute of Technology and Superior Studies, ITESM (Mexico), graduate student, Computer Science. Roberto has been awarded a copy of the Speech in Noisy Environments (SPINE) database for his research in automatic speech recognition in noisy environments.

Daniel Escobar - Monterrey Institute of Technology and Superior Studies, ITESM (Mexico), graduate student, Mechatronics and Automation. Daniel has been awarded a copy of Switchboard-2 and NIST SRE for designing a parallel joint factor analysis architecture for a speaker verification system.

Erhan Guven - The George Washington University (USA), graduate student, Computer Science. Erhan has been awarded a copy of Emotional Prosody (LDC2002S28) for his work in extracting speaker emotional state from spectrograms.

Anup Kolya - Jadavpur University (India), graduate student, Computer Science and Engineering. Anup has been awarded a copy of ACE 2005 English SpatialML Annotations (LDC2008T03), ACE Time Normalization (TERN) 2004 English Evaluation Data V1.0 (LDC2010T18), and ACE Time Normalization (TERN) 2004 English Training Data v 1.0 (LDC2005T07) for his research in temporal information extraction.

Benjamín Martínez Elizalde - Monterrey Institute of Technology and Superior Studies, ITESM (Mexico), graduate student, Computer Science. Benjamín has been awarded a copy of Switchboard-2 and NIST SRE to support his research in speaker verification modeling.

Hanan Waer - Newcastle University (UK), graduate student, Educational and Applied Linguistics. Hanan has been awarded a copy of CALLHOME Egyptian Arabic Transcripts (LDC97T19), CALLHOME Egyptian Arabic Transcripts Supplement (LDC2002T38), and Egyptian Colloquial Arabic Lexicon (LDC99L22) for her research in comparing Arabic/English code switching in everyday Arabic conversation and academic discourse.

Muhua Zhu - Northeastern University (China), graduate student, Natural Language Processing. Muhua has been awarded a copy of Chinese Treebank 7.0 (LDC2010T07) to support the development of a high-accuracy Chinese parser.

Vignesh Kalaiselvan, Ganapathy Raman Kasi, Preetham Samue, Ramsrinivas Anantharamakrishnan, and Sathyanarayan Jeevan - Amrita Vishwa Vidyapeetham University (India), undergraduate students, Electronics and Communication Engineering - the group has been awarded CALLHOME Speech, Transcripts, and Lexicon in Egyptian Arabic and German for their research in deriving robust features for multilingual acoustic modeling.

Please join us in congratulating our student winners! The next LDC Data Scholarship program is scheduled for the Fall 2011 semester.


LDC at NEALLT 2011

LDC will be exhibiting at the upcoming NEALLT (North East Association for Language Learning Technology) conference, which will be held at the University of Pennsylvania from 1-3 April 2011. NEALLT is the regional chapter of the International Association for Language Learning Technology and works to improve language instruction through the use of technology.

LDC’s Dr Mohamed Maamouri will discuss how resources developed and distributed by LDC can aid language education in his presentation “Incorporating Resources and New Technologies in Language Education” on Saturday, April 2 (Session 9: 4.00-4.20 pm, Cohen G17). The presentation will highlight the LDC Arabic Reading Enhancement Tool, designed to support the development of reading skills for learning Arabic as a first and second language.

We hope to see you there!

New Publications

(1) 2008/2010 NIST Metrics for Machine Translation (MetricsMaTr) GALE Evaluation Set (LDC2011T05) is a package containing source data, reference translations, machine translations and associated human judgments used in the NIST 2008 and 2010 MetricsMaTr evaluations. The package was compiled by researchers at NIST, making use of Arabic and Chinese broadcast, newswire and web data and reference translations collected and developed by LDC for Phase 2 and Phase 2.5 of the DARPA GALE program.

NIST MetricsMaTr is a series of research challenge events for machine translation (MT) metrology, promoting the development of innovative MT metrics that correlate highly with human assessments of MT quality. Participants submit their metrics to NIST (National Institute of Standards and Technology). NIST runs those metrics on certain held-back test data for which it has human assessments measuring quality and then calculates correlations between the automatic metric scores and the human assessments. Specifically, the goals of MetricsMaTr are: to inform other MT technology evaluation campaigns and conferences with regard to improved metrology; to establish an infrastructure that encourages the development of innovative metrics; to build a diverse community that will bring new perspectives to MT metrology research; and to provide a forum for MT metrology discussion and for establishing future directions of MT metrology.

The first MetricsMaTr challenge was held in 2008; the development data from the 2008 program is available from LDC, 2008 NIST Metrics for Machine Translation (MetricsMATR08) Development Data LDC2009T05. The MetricsMaTr10 evaluation plan is included in this release.

This release contains 149 documents with corresponding reference translations (Arabic-to-English and Chinese-to-English), system translations and human assessments. The human assessments include the following: Adequacy7 (a 7-point scale for judging the meaning of a system translation with respect to the reference translation); Adequacy Yes/No (whether the given system segment means essentially the same thing as the reference translation); Preference (the judges' preference between two candidate translations when compared to a human reference translation); and HTER (Human-targeted Translation Edit Rate, the number of human edits needed to give a system translation the same meaning as a reference translation).
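As a reminder of what the evaluation measures, the core computation is a correlation between automatic metric scores and the corresponding human judgments; the sketch below uses Pearson's r on hypothetical numbers (the evaluation plan in this release specifies the statistics and levels of analysis actually used).

    import math

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    metric_scores = [0.31, 0.45, 0.52, 0.60]   # hypothetical segment-level metric scores
    adequacy      = [3, 4, 5, 6]               # hypothetical 7-point adequacy judgments
    print(round(pearson(metric_scores, adequacy), 3))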

2008/2010 NIST Metrics for Machine Translation (MetricsMaTr) GALE Evaluation Set is distributed via web download.

2011 Subscription Members will automatically receive two copies of this corpus on disc. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$250.

*


(2) NIST/USF Evaluation Resources for the VACE Program – Meeting Data Training Set Part 1 (LDC2011V01) was developed by researchers at the Department of Computer Science and Engineering, University of South Florida (USF), Tampa, Florida and the Multimodal Information Group at the National Institute of Standards and Technology (NIST). It contains approximately fifteen hours of meeting room video data collected in 2001 and 2002 at NIST's Meeting Data Collection Laboratory and annotated for the VACE (Video Analysis and Content Extraction Program) 2005 face, person and hand detection and tracking tasks.

The VACE program was established to develop novel algorithms for automatic video content extraction, multi-modal fusion, and event understanding. During VACE Phases I and II, the program made significant progress in the automated detection and tracking of moving objects including faces, hands, people, vehicles and text in four primary video domains: broadcast news, meetings, street surveillance, and unmanned aerial vehicle motion imagery. Initial results were also obtained on automatic analysis of human activities and understanding of video sequences.

Three performance evaluations were conducted under the auspices of the VACE program between 2004 and 2007. The 2005 evaluation was administered by USF in collaboration with NIST and guided by an advisory forum including the evaluation participants.

NIST's Meeting Data Collection Laboratory is designed to collect corpora to support research, development and evaluation in meeting recognition technologies. It is equipped to look and sound like a conventional meeting space. The data collection facility includes five Sony EVI-D30 video cameras, four of which have stationary views of a center conference table with a fixed focus and viewing angle, and an additional "floating" camera which is used to focus on particular participants, whiteboard or conference table depending on the meeting forum. The data is captured in a NIST-internal file format. The video data was extracted from the NIST format and encoded using the MPEG-2 standard in NTSC format.

NIST/USF Evaluation Resources for the VACE Program – Meeting Data Training Set Part 1 is distributed on 8 DVD-ROMs.

2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2500.





Ilya Ahtaridis
Membership Coordinator
--------------------------------------------------------------------
Linguistic Data Consortium Phone: 1 (215) 573-1275
University of Pennsylvania Fax: 1 (215) 573-2175
3600 Market St., Suite 810 ldc@ldc.upenn.edu
Philadelphia, PA 19104 USA http://www.ldc.upenn.edu