Monday, July 15, 2013

LDC July 2013 Newsletter


New publications:
Chinese Proposition Bank 3.0
GALE Arabic-English Parallel Aligned Treebank -- Broadcast News Part 1


Fall 2013 Data Scholarship Program


Applications are now being accepted for the Fall 2013 LDC Data Scholarship program through September 16, 2013, 11:59 PM EST! The LDC Data Scholarship program provides university students with access to LDC data at no cost.

This program is open to students pursuing undergraduate or graduate studies at an accredited college or university. LDC Data Scholarships are not restricted to any particular field of study; however, students must demonstrate a well-developed research agenda and a bona fide inability to pay. The selection process is highly competitive.

The application consists of two parts:

(1) Data Use Proposal. Applicants must submit a proposal describing their intended use of the data. The proposal should state which data the student plans to use and how the data will benefit their research project as well as information on the proposed methodology or algorithm.

Applicants should consult the LDC Corpus Catalog for a complete list of data distributed by LDC. Owing to certain restrictions, a handful of LDC corpora are available only to members of the Consortium. Applicants are advised to select at most two databases.

(2) Letter of Support. Applicants must submit one letter of support from their thesis adviser or department chair. The letter must confirm that the department or university lacks the funding to pay the full Non-member Fee for the data and verify the student's need for data.

For further information on application materials and program rules, please visit the LDC Data Scholarship page.

Students can email their applications to the LDC Data Scholarship program. Decisions will be sent by email from the same address.

The deadline for the Fall 2013 program is Monday, September 16, 2013, 11:59PM EST.

New Publications

(1) Chinese Proposition Bank 3.0 is a continuation of the Chinese Proposition Bank project, which aims to create a corpus of text annotated with information about basic semantic propositions. Chinese Proposition Bank 3.0 adds predicate-argument annotation on 187,731 words from Chinese Treebank 7.0 (LDC2010T07). The data sources comprise newswire, magazine articles, various broadcast news and broadcast conversation programming, web newsgroups and weblogs. LDC has also released Chinese Proposition Bank 1.0 (LDC2005T23) and Chinese Proposition Bank 2.0 (LDC2008T07).

This release contains the predicate-argument annotation of 173,206 verb instances and 14,525 noun instances. The annotation of nouns is limited to nominalizations that have a corresponding verb. The general annotation guidelines and the lexical guidelines (called frame files) for each verbal and nominal predicate are also included in this release. Below are some statistics about the corpus.
  • Total propositions for verbs - 173,206
  • Total propositions for nouns - 14,525
  • Total verbs framed - 24,642
  • Total framesets - 26,467
  • Verbs with multiple framesets - 1,337
  • Average framesets per verb - 1.07
  • Total nouns framed - 1,421
  • Total noun framesets - 1,528
  • Nouns with multiple framesets - 48
  • Average framesets per noun - 1.08
Chinese Proposition Bank 3.0 is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

*

(2) GALE Arabic-English Parallel Aligned Treebank -- Broadcast News Part 1 was developed by LDC and contains 115,826 tokens of word aligned Arabic and English parallel text with treebank annotations. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

Parallel aligned treebanks are treebanks annotated with morphological and syntactic structures aligned at the sentence level and the sub-sentence level. Such data sets are useful for natural language processing and related fields, including automatic word alignment system training and evaluation, transfer-rule extraction, word sense disambiguation, translation lexicon extraction and cultural heritage and cross-linguistic studies. With respect to machine translation system development, parallel aligned treebanks may improve system performance with enhanced syntactic parsers, better rules and knowledge about language pairs and reduced word error rate.

In this release, the source Arabic data was translated into English. Arabic and English treebank annotations were performed independently. The parallel texts were then word aligned. The material in this corpus corresponds to a portion of the Arabic treebanked data in Arabic Treebank - Broadcast News v1.0 (LDC2012T07).

The source data consists of Arabic broadcast news programming collected by LDC in 2005 and 2006 from Alhurra, Aljazeera and Dubai TV. All data is encoded as UTF-8. A count of files, words, tokens and segments is below.

Language   Files   Words    Tokens    Segments
Arabic     28      89,213   115,826   4,824

Note: Word count is based on the untokenized Arabic source; token count is based on the ATB-tokenized Arabic source.

The purpose of the GALE word alignment task was to find correspondences between words, phrases or groups of words in a set of parallel texts. Arabic-English word alignment annotation consisted of the following tasks:
  • Identifying different types of links: translated (correct or incorrect) and not translated (correct or incorrect)
  • Identifying sentence segments not suitable for annotation, e.g., blank segments, incorrectly-segmented segments, segments with foreign languages
  • Tagging unmatched words attached to other words or phrases
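
To make the annotation scheme concrete, here is a minimal sketch of how such links might be represented and tallied in code. The field names and link-type labels are hypothetical illustrations, not the official GALE annotation format, which is documented with the corpus.

```python
# Hypothetical representation of word-alignment links, for illustration only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class AlignmentLink:
    src_indices: tuple   # token positions in the Arabic sentence
    tgt_indices: tuple   # token positions in the English translation
    link_type: str       # e.g. "translated-correct", "not-translated"

links = [
    AlignmentLink((0,), (0, 1), "translated-correct"),  # one-to-many link
    AlignmentLink((3,), (), "not-translated"),          # unmatched source word
]

# Tally links by type, as a simple annotation quality-control pass might.
print(Counter(link.link_type for link in links))
```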
GALE Arabic-English Parallel Aligned Treebank -- Broadcast News Part 1 is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

Monday, June 17, 2013

LDC June 2013 Newsletter

High School students use LDC data

New publications:
GALE Phase 2 Chinese Broadcast Conversation Parallel Text Part 1
Greybeard
Manually Annotated Sub-Corpus Third Release (MASC)


High School students use LDC data

A team of students at Thomas Jefferson High School for Science and Technology in Alexandria, VA, USA, has used an LDC database to develop a device to help autistic children recognize emotions. The team was funded by a grant from the Lemelson-MIT InvenTeam Initiative. InvenTeams are groups of high school students, teachers, and mentors that receive grants of up to US$10,000 each to invent technological solutions to real-world problems.

The team set out to invent an emotive aid in the form of a bracelet that uses a computational algorithm to extract emotional signatures from speech and display expressed emotions in real time during a conversation. Potential beneficiaries include children with autism, Asperger’s syndrome, or similar conditions that impair the ability to detect emotion. The algorithm employed machine learning and neural network-based techniques to improve accuracy and efficiency relative to current methods.

The students used speech samples from the LDC database Emotional Prosody Speech and Transcripts (LDC2002S28), as well as the Berlin Database of Emotional Speech, for training and testing their algorithm. Although the samples proved too small to produce an algorithm with a high degree of accuracy, the team's algorithm did demonstrate some success. The students will present their results at EurekaFest at MIT in June.

LDC thanks the InvenTeam’s teacher, Mark Hannum, and group leader, Suhas Gondi, for contributing to this article.
  
New publications

(1) GALE Phase 2 Chinese Broadcast Conversation Parallel Text Part 1 was developed by LDC. Along with other corpora, the parallel text in this release comprised training data for Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program. This corpus contains Chinese source text and corresponding English translations selected from broadcast conversation (BC) data collected by LDC in 2006 and 2007 and transcribed by LDC or under its direction.

This release includes 21 source-translation document pairs, comprising 146,082 characters of Chinese source text and its English translation. Data is drawn from seven distinct Chinese programs broadcast in 2006 and 2007 from the following sources -- China Central TV, a national and international broadcaster in Mainland China and Phoenix TV, a Hong Kong-based satellite television station. Broadcast conversation programming is generally more interactive than traditional news broadcasts and includes talk shows, interviews, call-in programs and roundtable discussions. The programs in this release focus on current events topics.

The data was transcribed by LDC staff and/or transcription vendors under contract to LDC in accordance with Quick Rich Transcription guidelines developed by LDC. Transcribers indicated sentence boundaries in addition to transcribing the text. Data was manually selected for translation according to several criteria, including linguistic features, transcription features and topic features. The transcribed and segmented files were then reformatted into a human-readable translation format and assigned to translation vendors. Translators followed LDC's Chinese-to-English translation guidelines. Bilingual LDC staff performed quality control procedures on the completed translations.

GALE Phase 2 Chinese Broadcast Conversation Parallel Text Part 1 is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

(2) Greybeard was developed by LDC and comprises approximately 590 hours of English telephone conversation speech collected by LDC in October and November 2008. The goal was to record new telephone conversations among subjects who had participated in one or more previous LDC telephone collections, from Switchboard-1 (1991) through the Mixer studies (2006).

A total of 172 subjects were enrolled in the Greybeard collection, all of whom had participated in one of the following:
  • Switchboard-1 (LDC97S62) 1991-1992: 2 subjects
  • Switchboard-2 (LDC98S75, LDC99S79, LDC2002S06) 1996-1997: 16 subjects
  • Mixer 1 and 2 2003-2005: 103 subjects
  • Mixer 3 2006: 51 subjects
Most Greybeard participants completed 12 calls. Some subjects completed up to 24 calls. Calls were made or received via an automatic operator system at LDC which connected two participants and announced a topic for discussion. 

This release consists of 4,680 calls -- the complete set of calls recorded during the Greybeard collection (1,098 calls) as well as all calls from the legacy collections that involved the Greybeard speakers.

The audio from each call was captured digitally by the operator system and stored in a separate file as raw mu-law sample data. As the recordings were uploaded daily from the robot operator to network disk storage, automated processes reformatted the audio into a 2-channel SPHERE-format file for each conversation and queued the recordings for manual audit to verify speaker identification and to check other aspects of the recording. 
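
As an illustration of the decoding step involved in that reformatting, the sketch below converts raw G.711 mu-law bytes to 16-bit linear PCM. This is generic reference code for the standard mu-law expansion, not LDC's actual collection pipeline.

```python
# Generic G.711 mu-law decoding sketch; not LDC pipeline code.
import struct

def ulaw_byte_to_pcm16(u: int) -> int:
    """Decode one G.711 mu-law byte to a signed 16-bit linear sample."""
    u = ~u & 0xFF                    # mu-law bytes are stored complemented
    t = ((u & 0x0F) << 3) + 0x84     # rebuild mantissa and add the bias
    t <<= (u & 0x70) >> 4            # apply the exponent segment
    return (0x84 - t) if (u & 0x80) else (t - 0x84)

def decode_ulaw(raw: bytes) -> bytes:
    """Convert a buffer of raw mu-law samples to little-endian 16-bit PCM."""
    return struct.pack("<%dh" % len(raw), *(ulaw_byte_to_pcm16(b) for b in raw))

# 0xFF is mu-law silence and decodes to 0.
print(decode_ulaw(bytes([0xFF, 0x00])))
```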

Auditors provided impressionistic judgments on overall audio quality, presence of background noise and cross-channel echo and any other technical difficulty with the call, in addition to confirming the speaker-ID on each channel.

Greybeard is distributed on five DVDs. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

(3) Manually Annotated Sub-Corpus Third Release (MASC) was developed as part of The American National Corpus project and consists of approximately 500,000 words of contemporary American English written and spoken data annotated for a wide variety of linguistic phenomena. 

The MASC project was established to address, to the extent possible, many of the obstacles to the creation of large-scale, robust, multiply-annotated corpora of English covering a wide range of genres of written and spoken language data. The project provides appropriate data and annotations to serve as the base for a community-wide annotation effort, together with an infrastructure that enables the incorporation of contributed annotations into a single, usable format that can then be analyzed as it is or transduced to any of a variety of other formats. Further information about the project is available at the MASC website.

The source texts were drawn from the open portion of the American National Corpus Second Release, and from the Language Understanding Annotation Corpus.  MASC Third Release includes the contents of MASC First Release (LDC2010T22) (82,000 words) which is also available from LDC. There is no second release.

All data in this release was annotated for logical structure (paragraph, headings, etc.), token and sentence boundaries, part of speech and lemma, shallow parse (noun and verb chunks) and named entities (person, organization, location and date). Portions of the corpus were also annotated for FrameNet frames (40k full text), Penn Treebank syntax (82k) and opinion (50k). 

Manually Annotated Sub-Corpus Third Release is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may request this data by submitting a signed copy of the LDC User Agreement for Non-members. This data is available at no cost.

Thursday, May 16, 2013

LDC May 2013 Newsletter

 
New publications:
GALE Arabic-English Parallel Aligned Treebank -- Newswire
MADCAT Phase 2 Training Set


LDC at ICASSP 2013

LDC will be at ICASSP 2013, the world’s largest and most comprehensive technical conference focused on signal processing and its applications. The event will be held May 26-31, and we look forward to interacting with members of this community at our exhibit table and during our poster and paper presentations:
  • Tuesday, May 28, 15:30-17:30, Poster Area D
    ARTICULATORY TRAJECTORIES FOR LARGE-VOCABULARY SPEECH RECOGNITION
    Authors: Vikramjit Mitra, Wen Wang, Andreas Stolcke, Hosung Nam, Colleen Richey, Jiahong Yuan (LDC), Mark Liberman (LDC)
  • Tuesday, May 28, 16:30-16:50, Room 2011
    SCALE-SPACE EXPANSION OF ACOUSTIC FEATURES IMPROVES SPEECH EVENT DETECTION
    Authors: Neville Ryant, Jiahong Yuan, Mark Liberman (all LDC)
  • Wednesday, May 29, 15:20-17:20, Poster Area D
    USING MULTIPLE VERSIONS OF SPEECH INPUT IN PHONE RECOGNITION
    Authors: Mark Liberman (LDC), Jiahong Yuan (LDC), Andreas Stolcke, Wen Wang, Vikramjit Mitra

Please look for LDC’s exhibit at Booth #53 in the Vancouver Convention Centre. We hope to see you there!


Early renewing members save on fees

To date just over 100 organizations have joined for Membership Year (MY) 2013.   For the sixth straight year, LDC's early renewal discount program has resulted in significant savings for our members.  Organizations that renewed membership or joined early for MY2013 saved over US$50,000! MY 2012 members are still eligible for a 5% discount when renewing for MY2013. This discount will apply throughout 2013.

Organizations joining LDC can take advantage of membership benefits including free membership year data as well as discounts on older LDC corpora. For-profit members can use most LDC data for commercial applications. Please visit our Members FAQ for further information.

Commercial use and LDC data

Has your company obtained an LDC database as a non-member? For-profit organizations are reminded that an LDC membership is a prerequisite for obtaining a commercial license to almost all LDC databases. Non-member organizations, including non-member for-profit organizations, cannot use LDC data to develop or test products for commercialization, nor can they use LDC data in any commercial product or for any commercial purpose. LDC data users should consult corpus-specific license agreements for limitations on the use of certain corpora. In the case of a small group of corpora such as American National Corpus (ANC) Second Release (LDC2005T35), Buckwalter Arabic Morphological Analyzer Version 2.0 (LDC2004L02), CELEX2 (LDC96L14) and all CSLU corpora, commercial licenses must be obtained separately from the owners of the data even if an organization is a for-profit member.

New publications

(1) GALE Arabic-English Parallel Aligned Treebank -- Newswire (LDC2013T10) was developed by LDC and contains 267,520 tokens of word aligned Arabic and English parallel text with treebank annotations. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

Parallel aligned treebanks are treebanks annotated with morphological and syntactic structures aligned at the sentence level and the sub-sentence level. Such data sets are useful for natural language processing and related fields, including automatic word alignment system training and evaluation, transfer-rule extraction, word sense disambiguation, translation lexicon extraction and cultural heritage and cross-linguistic studies. With respect to machine translation system development, parallel aligned treebanks may improve system performance with enhanced syntactic parsers, better rules and knowledge about language pairs and reduced word error rate.

In this release, the source Arabic data was translated into English. Arabic and English treebank annotations were performed independently. The parallel texts were then word aligned. The material in this corpus corresponds to the Arabic treebanked data appearing in Arabic Treebank: Part 3 v 3.2 (LDC2010T08) (ATB) and to the English treebanked data in English Translation Treebank: An-Nahar Newswire (LDC2012T02).

The source data consists of Arabic newswire from the Lebanese publication An-Nahar collected by LDC in 2002. All data is encoded as UTF-8. A count of files, words, tokens and segments is below.

Language   Files   Words     Tokens    Segments
Arabic     364     182,351   267,520   7,711

Note: Word count is based on the untokenized Arabic source and token count is based on the ATB-tokenized Arabic source.

The purpose of the GALE word alignment task was to find correspondences between words, phrases or groups of words in a set of parallel texts. Arabic-English word alignment annotation consisted of the following tasks:
  • Identifying different types of links: translated (correct or incorrect) and not translated (correct or incorrect)
  • Identifying sentence segments not suitable for annotation, e.g., blank segments, incorrectly-segmented segments, segments with foreign languages
  • Tagging unmatched words attached to other words or phrases
GALE Arabic-English Parallel Aligned Treebank -- Newswire is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

*

(2) MADCAT Phase 2 Training Set (LDC2013T09) contains all training data created by LDC to support Phase 2 of the DARPA MADCAT (Multilingual Automatic Document Classification Analysis and Translation) Program. The data in this release consists of handwritten Arabic documents, scanned at high resolution and annotated for the physical coordinates of each line and token. Digital transcripts and English translations of each document are also provided, with the various content and annotation layers integrated in a single MADCAT XML output.

The goal of the MADCAT program is to automatically convert foreign text images into English transcripts. MADCAT Phase 2 data was collected from Arabic source documents in three genres: newswire, weblog and newsgroup text. Arabic speaking scribes copied documents by hand, following specific instructions on writing style (fast, normal, careful), writing implement (pen, pencil) and paper (lined, unlined). Prior to assignment, source documents were processed to optimize their appearance for the handwriting task, which resulted in some original source documents being broken into multiple pages for handwriting. Each resulting handwritten page was assigned to up to five independent scribes, using different writing conditions. 

The handwritten, transcribed documents were checked for quality and completeness, then each page was scanned at a high resolution (600 dpi, greyscale) to create a digital version of the handwritten document. The scanned images were then annotated to indicate the physical coordinates of each line and token. Explicit reading order was also labeled, along with any errors produced by the scribes when copying the text. The annotation results in GEDI XML output files (gedi.xml), which include ground truth annotations and source transcripts.

The final step was to produce a unified data format that takes multiple data streams and generates a single MADCAT XML output file with all required information. The resulting madcat.xml file has these distinct components: (1) a text layer that consists of the source text, tokenization and sentence segmentation, (2) an image layer that consists of bounding boxes, (3) a scribe demographic layer that consists of scribe ID and partition (train/test) and (4) a document metadata layer.
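
As a sketch of how one might consume such a file, the fragment below walks a MADCAT-style XML tree with Python's standard library. The element and attribute names are hypothetical placeholders; consult the corpus documentation for the actual schema.

```python
# Illustrative only: the element/attribute names below ("token", "zone",
# "token_id", etc.) are assumptions, not the documented MADCAT XML schema.
import xml.etree.ElementTree as ET

root = ET.parse("example.madcat.xml").getroot()   # hypothetical file name

# Text layer: map token IDs to their source text.
tokens = {t.get("id"): t.text for t in root.iter("token")}

# Image layer: bounding boxes keyed by the token they cover.
boxes = {z.get("token_id"): (z.get("x"), z.get("y"), z.get("w"), z.get("h"))
         for z in root.iter("zone")}

print(len(tokens), "tokens,", len(boxes), "bounding boxes")
```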

This release includes 27,814 annotation files in both GEDI XML and MADCAT XML formats (gedi.xml and madcat.xml) along with their corresponding scanned image files in TIFF format.

MADCAT Phase 2 Training Set is distributed on six DVD-ROMs. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

Wednesday, May 15, 2013

LDC TextPenn Project: Call for Participation

LDC's new TextPenn project will collect and annotate text messaging and chat data in English, Egyptian Arabic and Chinese. We are currently recruiting participants to donate their existing text messages and/or participate in new conversations. Participants who contribute at least 50 messages are entered into a weekly drawing to win $300.

You can learn more about the project or sign up to participate at https://textpenn.ldc.upenn.edu/textpenn.

Monday, April 15, 2013

LDC April 2013 Newsletter



  Checking in with LDC Data Scholarship Recipients

The LDC Data Scholarship program provides college and university students with access to LDC data at no cost. Students are asked to complete an application which consists of a proposal describing their intended use of the data, as well as a letter of support from their thesis adviser. LDC introduced the Data Scholarship program during the Fall 2010 semester. Since that time, more than thirty individual students and student research groups have been awarded no-cost copies of LDC data for their research endeavors. Here is an update on the work of a few of the student recipients:
  • Leili Javadpour - Louisiana State University (USA), Engineering Science. Leili was awarded a copy of BBN Pronoun Coreference and Entity Type Corpus (LDC2005T33) and Message Understanding Conference (MUC) 7 (LDC2001T02) for her work in pronominal anaphora resolution. Leili's research involves a learning approach for pronominal anaphora resolution in unstructured text. She evaluated her approach on the BBN Pronoun Coreference and Entity Type Corpus and obtained encouraging results of 89%. In this approach, machine learning is applied to a set of new features selected from other computational linguistics research. Leili's future plans involve evaluating the approach on Message Understanding Conference (MUC) 7 as well as on other genres of annotated text such as stories and conversation transcripts.
  • Olga Nickolaevna Ladoshko - National Technical University of Ukraine “KPI” (Ukraine), graduate student, Acoustics and Acoustoelectronics. Olga was awarded copies of NTIMIT (LDC93S2) and STC-TIMIT 1.0 (LDC2008S03) for her research in automatic speech recognition for Ukrainian. Olga used NTIMIT in the first phase of her research; one problem she investigated was the influence of telephone communication channels on the reliability of phoneme recognition, using different parametrization types and configurations of HTK-based speech recognition systems. The second phase involves using NTIMIT to test an algorithm for detecting voice in non-stationary noise. Her future work with STC-TIMIT 1.0 will include an experiment to develop an improved speech recognition algorithm, allowing for increased accuracy under noisy conditions.
  • Genevieve Sapijaszko - University of Central Florida (USA), PhD candidate, Electrical and Computer Engineering. Genevieve was awarded a copy of TIMIT Acoustic-Phonetic Continuous Speech Corpus (LDC93S1) and YOHO Speaker Verification (LDC94S16) for her work in digital signal processing. Her experiment used VQ (vector quantization) and Euclidean distance to recognize a speaker's identity by extracting features of the speech signal with the following methods: RCC, MFCC, MFCC + ΔMFCC, LPC, LPCC, PLPCC and RASTA PLPCC. Based on the results, in a noise-free environment MFCC (at an average of 94%) is the best feature extraction method when used in conjunction with the VQ model. The addition of ΔMFCC showed no significant improvement to the recognition rate. When comparing three phrases of differing length, the longer two phrases had very similar recognition rates, but the shorter phrase, at 0.5 seconds, had a noticeably lower recognition rate across methods. When comparing recognition time, MFCC was also faster than the other methods. Genevieve and her research team concluded that MFCC in a noise-free environment was the best method in terms of recognition rate and recognition time (a toy sketch of this MFCC-plus-VQ pipeline appears after this list).
  • John Steinberg - Temple University (USA), MS candidate, Electrical and Computer Engineering. John was awarded a copy of CALLHOME Mandarin Chinese Lexicon (LDC96L15) and CALLHOME Mandarin Chinese Transcripts (LDC96T16) for his work in speech recognition. John used the CALLHOME Mandarin lexicon and transcripts to investigate the integration of Bayesian nonparametric techniques into speech recognition systems. These techniques are able to detect the underlying structure of the data and theoretically generate better acoustic models than typical parametric approaches such as HMMs. His work investigated using one such model, Dirichlet process mixtures, in conjunction with three variational Bayesian inference algorithms for acoustic modeling. The scope of his work was limited to a phoneme classification problem, since John's goal was to determine the viability of these algorithms for acoustic modeling.

    One goal of his research group is to develop a speech recognition system that is robust to variations in the acoustic channel. The group is also interested in building acoustic models that generalize well across languages. For these reasons, both CALLHOME English and CALLHOME Mandarin data were used to help determine whether these new Bayesian nonparametric models were prone to any language-specific artifacts. These two languages, though phonetically very different, did not yield significantly different performances. Furthermore, one variational inference algorithm, accelerated variational Dirichlet process mixtures (AVDPM), was found to perform well on extremely large data sets.
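
The MFCC-plus-VQ recognizer described in Genevieve's item above can be sketched in a few lines: train one vector-quantization codebook per speaker, then identify a test utterance by the codebook with the lowest average distortion. The library calls (librosa, SciPy) are real; the file names and the 16-codeword codebook size are placeholders, not details from the study.

```python
# Toy MFCC + vector-quantization speaker identification sketch.
import numpy as np
import librosa
from scipy.cluster.vq import kmeans, vq

def mfcc_frames(path):
    """Return an (n_frames, 13) array of MFCC feature vectors."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T.astype(np.float64)

# One codebook per enrolled speaker, built from training speech.
codebooks = {spk: kmeans(mfcc_frames(path), 16)[0]
             for spk, path in [("alice", "alice_train.wav"),
                               ("bob", "bob_train.wav")]}

def identify(path):
    feats = mfcc_frames(path)
    # vq() returns (codeword indices, per-frame distortions); a lower mean
    # distortion means the codebook models these frames better.
    return min(codebooks, key=lambda spk: vq(feats, codebooks[spk])[1].mean())

print(identify("unknown.wav"))
```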

New publications

(1) GALE Phase 2 Chinese Broadcast Conversation Speech (LDC2013S04) was developed by LDC and comprises approximately 120 hours of Chinese broadcast conversation speech collected in 2006 and 2007 by LDC and the Hong Kong University of Science and Technology (HKUST), Hong Kong, during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program. Corresponding transcripts are released as GALE Phase 2 Chinese Broadcast Conversation Transcripts (LDC2013T08).

Broadcast audio for the GALE program was collected at the Philadelphia, PA USA facilities of LDC and at three remote collection sites: HKUST (Chinese) Medianet, Tunis, Tunisia (Arabic) and MTC, Rabat, Morocco (Arabic). The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources for a total of over 30,000 hours of collected broadcast audio over the life of the program.

The broadcast conversation recordings in this release feature interviews, call-in programs and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Mainland China, Anhui Province; China Central TV (CCTV), a national and international broadcaster in Mainland China; Hubei TV, a regional broadcaster in Mainland China, Hubei Province; and Phoenix TV, a Hong Kong-based satellite television station. A table showing the number of programs and hours recorded from each source is contained in the readme file. 

This release contains 202 audio files presented in Waveform Audio File format (.wav), 16000 Hz single-channel 16-bit PCM. Each file was audited by a native Chinese speaker following Audit Procedure Specification Version 2.0 which is included in this release. The broadcast auditing process served three principal goals: as a check on the operation of the broadcast collection system equipment by identifying failed, incomplete or faulty recordings; as an indicator of broadcast schedule changes by identifying instances when the incorrect program was recorded; and as a guide for data selection by retaining information about the genre, data type and topic of a program. 
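
The stated format is easy to verify programmatically; the sketch below checks a file against it with Python's standard wave module, as a first automated sanity pass of the kind that might precede manual auditing. The file name is a placeholder.

```python
# Check that a WAV file matches the stated format: 16000 Hz, mono, 16-bit PCM.
import wave

with wave.open("example.wav", "rb") as w:
    assert w.getframerate() == 16000, "expected 16000 Hz sample rate"
    assert w.getnchannels() == 1, "expected single-channel audio"
    assert w.getsampwidth() == 2, "expected 16-bit samples"
    print("duration (s):", w.getnframes() / w.getframerate())
```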

GALE Phase 2 Chinese Broadcast Conversation Speech is distributed on four DVD-ROMs. 2013 Subscription Members will automatically receive two copies of this data. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.
*

(2) GALE Phase 2 Chinese Broadcast Conversation Transcripts (LDC2013T08) was developed by LDC and contains transcriptions of approximately 120 hours of Chinese broadcast conversation speech collected in 2006 and 2007 by LDC and the Hong Kong University of Science and Technology (HKUST), Hong Kong, during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program. Corresponding audio data is released as GALE Phase 2 Chinese Broadcast Conversation Speech (LDC2013S04).

The source broadcast conversation recordings feature interviews, call-in programs and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Mainland China, Anhui Province; China Central TV (CCTV), a national and international broadcaster in Mainland China; Hubei TV, a regional broadcaster in Mainland China, Hubei Province; and Phoenix TV, a Hong Kong-based satellite television station.

The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 1,523,373 tokens. The transcripts were created with the LDC-developed transcription tool, XTrans, a multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings.
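
As a rough sketch of reading such a file: TDF is plain tab-separated text, so Python's standard csv module suffices. The column positions assumed below are illustrative guesses; the authoritative field list ships with the corpus documentation.

```python
# Illustrative TDF reader; the assumed column layout (file, channel, start,
# end, speaker, ..., transcript) is a guess, not the documented spec.
import csv

with open("transcript.tdf", encoding="utf-8", newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        if not row or row[0].startswith(";;") or len(row) < 5:
            continue   # skip comment/header lines and short rows
        start, end, speaker, text = row[2], row[3], row[4], row[-1]
        print(f"[{start}-{end}] {speaker}: {text}")
```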

The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC’s quick transcription guidelines (QTR) and quick rich transcription specification (QRTR) both of which are included in the documentation with this release. QTR transcription consists of quick (near-)verbatim, time-aligned transcripts plus speaker identification with minimal additional mark-up. It does not include sentence unit annotation. QRTR annotation adds structural information such as topic boundaries and manual sentence unit annotation to the core components of a quick transcript. Files with QTR as part of the filename were developed using QTR transcription. Files with QRTR in the filename indicate QRTR transcription.

GALE Phase 2 Chinese Broadcast Conversation Transcripts is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) NIST 2008-2012 Open Machine Translation (OpenMT) Progress Test Sets (LDC2013T07) was developed by the NIST Multimodal Information Group. This release contains the evaluation sets (source data and human reference translations), DTD, scoring software, and evaluation plans for the Arabic-to-English and Chinese-to-English progress test sets for the NIST OpenMT 2008, 2009, and 2012 evaluations. The test data remained unseen between evaluations and was reused unchanged each time. The package was compiled, and scoring software was developed, at NIST, making use of Chinese and Arabic newswire and web data and reference translations collected and developed by LDC.

The objective of the OpenMT evaluation series is to support research in, and help advance the state of the art of, machine translation (MT) technologies -- technologies that translate text between human languages. Input may include all forms of text. The goal is for the output to be an adequate and fluent translation of the original. 

The MT evaluation series started in 2001 as part of the DARPA TIDES (Translingual Information Detection, Extraction and Summarization) program. Beginning with the 2006 evaluation, the evaluations have been driven and coordinated by NIST as NIST OpenMT. These evaluations provide an important contribution to the direction of research efforts and the calibration of technical capabilities in MT. The OpenMT evaluations are intended to be of interest to all researchers working on the general problem of automatic translation between human languages. To this end, they are designed to be simple, to focus on core technology issues and to be fully supported. For more general information about the NIST OpenMT evaluations, please refer to the NIST OpenMT website.

This evaluation kit includes a single Perl script (mteval-v13a.pl) that may be used to produce a translation quality score for one (or more) MT systems. The script works by comparing the system output translation with a set of (expert) reference translations of the same source text. Comparison is based on finding sequences of words in the reference translations that match word sequences in the system output translation.
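
The matching idea at the core of such scoring is clipped n-gram precision. The sketch below shows that core in miniature; the real mteval-v13a.pl additionally handles document structure, multiple references, and brevity penalties, so this is an illustration of the principle rather than a reimplementation.

```python
# Clipped n-gram precision, the core of BLEU-style MT scoring.
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def clipped_precision(system, reference, n):
    sys_counts, ref_counts = ngrams(system, n), ngrams(reference, n)
    # Each system n-gram is credited at most as many times as it
    # appears in the reference ("clipping").
    matched = sum(min(c, ref_counts[g]) for g, c in sys_counts.items())
    return matched / max(1, sum(sys_counts.values()))

sys_out = "the cat sat on the mat".split()
ref = "the cat was sitting on the mat".split()
print([round(clipped_precision(sys_out, ref, n), 2) for n in (1, 2, 3)])
```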

This release contains 2,748 documents with corresponding source and reference files, the latter containing four independent human reference translations of the source data. The source data comprises Arabic and Chinese newswire and web data collected by LDC in 2007. The table below displays statistics by source, genre, documents, segments and source tokens.

Source    Genre      Documents   Segments   Source Tokens
Arabic    Newswire   84          784        20,039
Arabic    Web Data   51          594        14,793
Chinese   Newswire   82          688        26,923
Chinese   Web Data   40          682        19,112

NIST 2008-2012 Open Machine Translation (OpenMT) Progress Test Sets is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

Friday, March 15, 2013

LDC March 2013 Newsletter

LDC’s 20th Anniversary: Concluding a Year of Celebration

New publications:
1993-2007 United Nations Parallel Text
GALE Chinese-English Word Alignment and Tagging Training Part 4 -- Web



LDC’s 20th Anniversary: Concluding a Year of Celebration

We’ve enjoyed celebrating our 20th Anniversary over the past year (April 2012 - March 2013) and would like to review some highlights before its close.

Our 2012 User Survey, circulated early in 2012, included a special Anniversary section in which respondents were asked to reflect on their opinions of, and dealings with, LDC over the years. We were humbled by the response. Multiple users mentioned that they would not be able to conduct their research without LDC and its data. For a full list of survey testimonials, please click here.

LDC also developed its first-ever timeline (initially published in the April 2012 Newsletter) marking significant milestones in the consortium’s founding and growth.

In September, we hosted a 20th Anniversary Workshop that brought together many friends and collaborators to discuss the present and future of language resources.

Throughout the year, we conducted several interviews of long-time LDC staff members to document their unique recollections of LDC history and to solicit their opinions on the future of the Consortium. These interviews are available as podcasts on the LDC Blog.

As our Anniversary year draws to a close, one task remains -- to thank all of LDC’s past, present and future members and other friends of the Consortium for their loyalty and for their contributions to the community. LDC would not exist if not for its supporters. The variety of relationships that LDC has built over the years is a direct reflection of the vitality, strength and diversity of the community. We thank you all and hope that we continue to serve your needs in our third decade and beyond.


For a last treat, please visit LDC’s newly-launched YouTube channel to enjoy this video montage of the LDC staff interviews featured in the podcast series.

Thank you again for your continued support!

New publications

(1) 1993-2007 United Nations Parallel Text was developed by Google Research. It consists of United Nations (UN) parliamentary documents from 1993 through 2007 in the official languages of the UN: Arabic, Chinese, English, French, Russian, and Spanish. 

UN parliamentary documents are available from the UN Official Document System (UN ODS). UN ODS, in its main UNDOC database, contains the full text of all types of UN parliamentary documents. It has complete coverage dating from 1993 and variable coverage before that. Documents exist in one or more of the official languages of the UN: Arabic, Chinese, English, French, Russian, and Spanish. UN ODS also contains a large number of German documents, marked with the language "other", but these are not included in this dataset.

LDC has released parallel UN parliamentary documents in English, French and Spanish spanning the period 1988-1993, UN Parallel Text (Complete) (LDC94T4A).

The data is presented as raw text and word-aligned text. There are 673,670 raw text documents and 520,283 word-aligned documents. The raw text is very close to what was extracted from the original word processing documents in UN ODS (e.g., Word, WordPerfect, PDF), converted to UTF-8 encoding. The word-aligned text was normalized, tokenized, aligned at the sentence level, further broken into sub-sentential chunk pairs, and then aligned at the word level. The sentence, chunk, and word alignment operations were performed separately for each individual language pair.
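
As a flavor of the first of those alignment stages only, the toy scorer below pairs sentences by length similarity, in the spirit of classic length-based sentence aligners. It is purely an illustration, not the pipeline Google Research actually used.

```python
# Toy length-based score for candidate sentence pairs; lower is better.
def length_score(src: str, tgt: str) -> float:
    """Penalize sentence pairs whose word counts are dissimilar."""
    ls, lt = len(src.split()), len(tgt.split())
    return abs(ls - lt) / max(ls, lt, 1)

pairs = [("The meeting was adjourned.", "La séance est levée."),
         ("The meeting was adjourned.",
          "Le rapport du Secrétaire général sur la situation.")]
for src, tgt in pairs:
    print(round(length_score(src, tgt), 2), "|", src, "=>", tgt)
```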

1993-2007 United Nations Parallel Text is distributed on three DVD-ROMs. 2013 Subscription Members will automatically receive two copies of this data provided they have completed the UN Parallel Text Corpus User Agreement. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

*

(2) GALE Chinese-English Word Alignment and Tagging Training Part 4 -- Web was developed by LDC and contains 158,387 tokens of word aligned Chinese and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program. 

Some approaches to statistical machine translation include the incorporation of linguistic knowledge in word aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations by using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags are designed in the tagging scheme to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation. 

This release consists of Chinese source web data (newsgroup, weblog) collected by LDC between 2005 and 2010. The distribution by files, words, character tokens and segments appears below:

Language   Files   Words     CharTokens   Segments
Chinese    1,224   105,591   158,387      4,836

Note that all token counts are based on the Chinese data only. One token is equivalent to one character and one word is equivalent to 1.5 characters.

The Chinese word alignment tasks consisted of the following components: 

  • Identifying, aligning, and tagging 8 different types of links
  • Identifying, attaching, and tagging local-level unmatched words
  • Identifying and tagging sentence/discourse-level unmatched words
  • Identifying and tagging all instances of Chinese 的 (DE) except when they were a part of a semantic link.

GALE Chinese-English Word Alignment and Tagging Training Part 4 -- Web is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc.  2013 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

Thursday, March 7, 2013

LDC Timeline: 1992 - 2012

LDC Timeline – Two Decades of Milestones
 
April 15, 2012 marked the “official” 20th anniversary of LDC’s founding. As our Anniversary year draws to a close, LDC would like to share with the blogging community a brief timeline of some significant milestones.

  • 1992: The University of Pennsylvania is chosen as the host site for LDC in response to a call for proposals issued by DARPA; the mission of the new consortium is to operate as a specialized data publisher and archive guaranteeing widespread, long-term availability of language resources. DARPA provides seed money with the stipulation that LDC become self-sustaining within five years. Mark Liberman assumes duties as LDC’s Director with a staff that grows to four, including Jack Godfrey, the Consortium’s first Executive Director.
  • 1993: LDC’s catalog debuts. Early releases include benchmark data sets such as TIMIT, TIPSTER, CSR and Switchboard, shortly followed by the Penn Treebank. 
  • 1994: LDC and NIST (the National Institute of Standards and Technology) enter into a Cooperative R&D Agreement that provides the framework for the continued collaboration between the two organizations.
  • 1995: Collection of conversational telephone speech and broadcast programming and transcription commences. LDC begins its long and continued support for NIST common task evaluations by providing custom data sets for participants. Membership and data license fees prove sufficient to support LDC operations, satisfying the requirement that the Consortium be self-sustaining.
  • 1996: The Lexicon Development Project, under the direction of Dr. Cynthia McLemore, begins releasing pronouncing lexicons in Mandarin, German, Egyptian Colloquial Arabic, Spanish, Japanese and American English. By 1997, all are published.
  • 1997: LDC announces LDC Online, a searchable index of newswire and speech data with associated tools to compute n-gram models, mutual information and other analyses.
  • 1998: LDC adds annotation to its task portfolio. Christopher Cieri joins LDC as Executive Director and develops the annotation operation.
  • 1999: Steven Bird joins LDC; the organization begins to develop tools and best practices for general use. The Annotation Graph Toolkit results from this effort.
  • 2000: LDC expands its support of common task evaluations from providing corpora to coordinating language resources across the program. Early examples include the DARPA TIDES, EARS and GALE programs.
  • 2001: The Arabic treebank project begins.
  • 2002: LDC moves to its current facilities at 3600 Market Street, Philadelphia with a full-time staff of approximately 40 persons.
  • 2004: LDC introduces the Standard and Subscription membership options, allowing members to choose whether to receive all or a subset of the data sets released in a membership year.
  • 2005: LDC makes task specifications and guidelines available through its projects web pages.
  • 2008: LDC introduces programs that provide discounts for continuing members and those who renew early in the year.
  • 2010: LDC inaugurates the Data Scholarship program for students with a demonstrable need for data.
  • 2012: LDC’s staff of 50 full-time and 196 part-time employees supports ongoing projects and operations, which include collecting, developing and archiving data, data annotation, tool development, sponsored-project support and multiple collaborations with various partners. The general catalog contains over 500 holdings in more than 50 languages. Over 85,000 copies of more than 1,300 titles have been distributed to over 3,200 organizations in 70 countries.