Tuesday, February 18, 2014

LDC February 2014 Newsletter

Spring 2014 LDC Data Scholarship recipients
Membership fee savings and publications pipeline
New LDC website enhancements coming soon

New publications:

Spring 2014 LDC Data Scholarship recipients
LDC is pleased to announce the student recipients of the Spring 2014 LDC Data Scholarship program!  This program provides university students with access to LDC data at no cost. Students were asked to complete an application which consisted of a proposal describing their intended use of the data, as well as a letter of support from their thesis adviser. We received many solid applications and have chosen two proposals to support. The following students will receive no-cost copies of LDC data:
  • Skye Anderson ~ Tulane University (USA), BA candidate, Linguistics.  Skye has been awarded a copy of LDC Standard Arabic Morphological Analyzer (SAMA) Version 3.1 for her work in author profiling.

  • Hao Liu ~ University College London (UK), PhD candidate, Speech, Hearing and Phonetic Sciences.  Hao has been awarded a copy of Switchboard-1 Release 2, and NXT Switchboard Annotations for his work in prosody modeling.

Membership fee savings and publications pipeline
Members can still save on 2014 membership fees, but time is running out. Any organization which joins or renews membership for 2014 through Monday, March 3, 2014, is entitled to a 5% discount. Organizations which held membership for MY2013 can receive a 10% discount on fees provided they renew prior to March 3, 2014.

Planned publications for this year include:
  • 2009 NIST Language Recognition Evaluation ~ development data from VOA broadcast and CTS telephone speech in target and non-target languages.
  • ETS Corpus of Non-Native Written English ~ contains 1100 essays written for a college-entrance test sampled from eight prompts (i.e., topics) with score levels (low/medium/high) for each essay.
  • GALE data ~ including Word Alignment, Broadcast Speech & Transcripts, Parallel Text, Parallel Aligned Treebanks in Arabic, Chinese, and English.

  • Hispanic Accented English ~ contains approximately 30 hours of spontaneous speech and read utterances from non-native speakers of English with corresponding transcripts.
  • Multi-Channel Wall Street Journal Audio-Visual Corpus (MC-WSJ-AV) ~ a re-recording of portions of the WSJCAM0 corpus using a number of microphones under three recording conditions, resulting in 18-20 channels of audio per recording.
  • TAC KBP Reference Knowledge Base ~ TAC KBP aims to develop and evaluate technologies for building and populating knowledge bases (KBs) about named entities from unstructured text.  KBP systems must either populate an existing reference KB, or else build a KB from scratch. The reference KB is based on an October 2008 snapshot of English Wikipedia and contains a set of entities, each with a canonical name and title for the Wikipedia page, an entity type, an automatically parsed version of the data from the infobox in the entity's Wikipedia article, and a stripped version of the text of the Wiki article.
  • USC-SFI MALACH Interviews and Transcripts Czech ~ developed by The University of Southern California's Shoah Foundation Institute (USC-SFI) and the University of West Bohemia as part of the MALACH (Multilingual Access to Large Spoken ArCHives) Project. It contains approximately 143 hours of interviews from 420 interviewees along with transcripts and other documentation.

New LDC website enhancements coming soon
Look for LDC’s new website enhancements in the coming weeks. We've revamped our membership services to make it easier than ever for you to manage your membership and access data more quickly.


New publications
(1) GALE Arabic-English Parallel Aligned Treebank -- Broadcast News Part 2 was developed by LDC and contains 141,058 tokens of word aligned Arabic and English parallel text with treebank annotations. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

Parallel aligned treebanks are treebanks annotated with morphological and syntactic structures aligned at the sentence level and the sub-sentence level. Such data sets are useful for natural language processing and related fields, including automatic word alignment system training and evaluation, transfer-rule extraction, word sense disambiguation, translation lexicon extraction and cultural heritage and cross-linguistic studies. With respect to machine translation system development, parallel aligned treebanks may improve system performance with enhanced syntactic parsers, better rules and knowledge about language pairs and reduced word error rate.

In this release, the source Arabic data was translated into English. Arabic and English treebank annotations were performed independently. The parallel texts were then word aligned. The material in this corpus corresponds to a portion of the Arabic treebanked data in Arabic Treebank - Broadcast News v1.0 (LDC2012T07).

The source data consists of Arabic broadcast news programming collected by LDC in 2007 and 2008. All data is encoded as UTF-8. A count of files, words, tokens and segments is below.

Language    Files    Words      Tokens     Segments
Arabic      31       110,690    141,058    7,102

The purpose of the GALE word alignment task was to find correspondences between words, phrases or groups of words in a set of parallel texts. Arabic-English word alignment annotation consisted of the following tasks:
  • Identifying different types of links: translated (correct or incorrect) and not translated (correct or incorrect)
  • Identifying sentence segments not suitable for annotation, e.g., blank segments, incorrectly-segmented segments, segments with foreign languages
  • Tagging unmatched words attached to other words or phrases
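As a toy illustration of what a word alignment annotation encodes, source-to-target correspondences can be modeled as index pairs between the tokens of the two sentences. The tokens and links below are invented for the example; this is not the GALE annotation file format.

```python
# Hypothetical example of word alignment as index pairs between
# parallel sentences (illustration only, not the GALE format).
source = ["hadha", "kitab", "jadid"]            # romanized source tokens (invented)
target = ["this", "is", "a", "new", "book"]

# Each link maps a source-token index to a target-token index.
# "Not translated" words simply have no link (here: "is", "a").
links = [(0, 0), (1, 4), (2, 3)]

aligned = [(source[i], target[j]) for i, j in links]
linked_targets = {j for _, j in links}
unlinked_target = [t for j, t in enumerate(target) if j not in linked_targets]

print(aligned)          # [('hadha', 'this'), ('kitab', 'book'), ('jadid', 'new')]
print(unlinked_target)  # ['is', 'a']
```

Unlinked words like these are exactly what the tagging tasks above classify (e.g., as "not translated" or attached to a neighboring phrase).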
GALE Arabic-English Parallel Aligned Treebank -- Broadcast News Part 2 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*
(2) King Saud University Arabic Speech Database was developed by King Saud University and contains 590 hours of recorded Arabic speech from male and female speakers. The utterances include read and spontaneous speech. The recordings were conducted in varied environments representing quiet and noisy settings.

The corpus was designed principally for speaker recognition research. The speech sources are sentences, word lists, prose and question and answer sessions. Read speech text includes the following:
  • Sets of sentences devised to cover allophones of each phoneme, phonetic balance, and differentiation of accents.
  • Word lists developed to minimize missing phonemes and to represent nasals, fricatives, commonly used words, and numbers.
  • Two paragraphs, one from the Quran and another from a book, selected because they included all letters of the alphabet and were easy to read.
Spontaneous speech was captured through question and answer sessions between participants and project team members. Speakers responded to questions on general topics such as the weather and food.

Each speaker was recorded in three different environments: a sound proof room, an office, and a cafeteria. The recordings were collected via microphone and mobile phone and averaged between 16-19 minutes. The data was verified for missing recordings, problems with the recording system or errors in the recording process.

King Saud University Arabic Speech Database is distributed on one hard disk.
2014 Subscription Members will receive a copy of this data provided that they have completed the User License Agreement. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) NIST 2012 Open Machine Translation (OpenMT) Progress Test Five Language Source was developed by the NIST Multimodal Information Group. This release contains the evaluation sets (source data and human reference translations), DTD, scoring software, and evaluation plan for the OpenMT 2012 test for Arabic, Chinese, Dari, Farsi, and Korean to English on a parallel data set. The set is based on a subset of the Arabic-to-English and Chinese-to-English progress tests from the OpenMT 2008, 2009 and 2012 evaluations with new source data created by humans based on the English reference translation. The package was compiled, and scoring software was developed, at NIST, making use of newswire and web data and reference translations developed by the Linguistic Data Consortium and the Defense Language Institute Foreign Language Center.

The objective of the OpenMT evaluation series is to support research in, and help advance the state of the art of, machine translation (MT) technologies -- technologies that translate text between human languages. Input may include all forms of text. The goal is for the output to be an adequate and fluent translation of the original. The 2012 task included the evaluation of five language pairs: Arabic-to-English, Chinese-to-English, Dari-to-English, Farsi-to-English and Korean-to-English in two source data styles. For general information about the NIST OpenMT evaluations, refer to the NIST OpenMT website.

This evaluation kit includes a single Perl script (mteval-v13a.pl) that may be used to produce a translation quality score for one (or more) MT systems. The script works by comparing the system output translation with a set of (expert) reference translations of the same source text. Comparison is based on finding sequences of words in the reference translations that match word sequences in the system output translation.
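The core statistic behind this kind of scoring is clipped n-gram precision: how many word sequences in the system output also occur in the reference, with each n-gram credited at most as often as it appears in the reference. The sketch below shows that statistic in isolation for a single reference; it is a simplified illustration, not a reimplementation of mteval-v13a.pl (which combines several n-gram orders and a brevity penalty into a BLEU score).

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision of a candidate translation against one
    reference: matched n-grams (clipped to reference counts) divided by
    the number of candidate n-grams. Simplified single-reference sketch."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    total = sum(cand.values())
    if total == 0:
        return 0.0
    matches = sum(min(count, ref[gram]) for gram, count in cand.items())
    return matches / total

sys_out = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(ngram_precision(sys_out, ref, 1))  # 5 of 6 unigrams match -> 0.833...
print(ngram_precision(sys_out, ref, 2))  # 3 of 5 bigrams match -> 0.6
```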

This release consists of 20 files, four for each of the five languages, presented in XML with an included DTD. The four files are source and reference data in the following two styles:
  • English-true: an English-oriented translation; the text must read well in English and must not use idiomatic expressions carried over from the foreign language to convey meaning, unless absolutely necessary.
  • Foreign-true: a translation as close as possible to the foreign language, as if the text had originated in that language.
NIST 2012 Open Machine Translation (OpenMT) Progress Test Five Language Source is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

Wednesday, January 15, 2014

LDC January 2014 Newsletter

LDC Membership Discounts for MY 2014 Still Available

New publications:


LDC Membership Discounts for MY 2014 Still Available

If you are considering joining LDC for Membership Year 2014 (MY2014), there is still time to save on membership fees. Any organization which joins or renews membership for 2014 through Monday, March 3, 2014, is entitled to a 5% discount on membership fees.  Organizations which held membership for MY2013 can receive a 10% discount on fees provided they renew prior to March 3, 2014.  For further information on pricing, please view our Invitation to Join for Membership Year 2014 announcement or contact LDC.

New Publications

(1) CALLFRIEND Farsi Second Edition Speech was developed by LDC and consists of approximately 42 hours of telephone conversation (100 recordings) among native Farsi speakers. The calls were recorded in 1995 and 1996 as part of the CALLFRIEND collection, a project designed primarily to support research in automatic language identification. One hundred native Farsi speakers living in the continental United States each made a single telephone call, lasting up to 30 minutes, to a family member or friend living in the United States.

This release represents all calls from the collection. LDC released recordings from 60 calls without transcripts in 1996 as CALLFRIEND Farsi (LDC96S50) after 20 of those calls were used as evaluation data in the first NIST Language Recognition Evaluation (LRE).

Corresponding transcripts are available in CALLFRIEND Farsi Second Edition Transcripts (LDC2014T01).

All recordings involved domestic calls routed through LDC’s automated telephone collection platform and were stored as 2-channel (4-wire), 8 kHz mu-law samples taken directly from the public telephone network via a T-1 circuit. Each audio file is a FLAC-compressed MS-WAV (RIFF) format audio file containing 2-channel, 8 kHz, 16-bit PCM sample data.
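Once decompressed from FLAC, each file is an ordinary RIFF/WAV stream with those parameters. The sketch below builds a tiny synthetic file with the same layout (2 channels, 8 kHz, 16-bit PCM) using only the standard library, just to make the format concrete; it does not use actual corpus data.

```python
# Build and re-read a synthetic 2-channel, 8 kHz, 16-bit PCM WAV stream,
# matching the parameters of the (decompressed) CALLFRIEND audio files.
import io
import struct
import wave

buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(2)      # two sides of the call on separate channels
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(8000)   # 8 kHz telephone-bandwidth audio
    silence = struct.pack("<2h", 0, 0)   # one all-zero stereo frame
    w.writeframes(silence * 8000)        # one second of 2-channel silence

buf.seek(0)
with wave.open(buf, "rb") as r:
    params = (r.getnchannels(), r.getsampwidth(), r.getframerate())
    duration = r.getnframes() / r.getframerate()

print(params)    # (2, 2, 8000)
print(duration)  # 1.0 second
```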

This release includes speaker information, including gender, the number of speakers on each channel and call duration.

CALLFRIEND Farsi Second Edition Speech is distributed on one DVD-ROM.

2014 Subscription Members will automatically receive two copies of this data. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(2) CALLFRIEND Farsi Second Edition Transcripts was developed by LDC and consists of transcripts for approximately 42 hours of telephone conversation (100 recordings) among native Farsi speakers. The calls were recorded in 1995 and 1996 as part of the CALLFRIEND collection, a project designed primarily to support research in automatic language identification. One hundred native Farsi speakers living in the continental United States made a single telephone call, lasting up to 30 minutes, to a family member or friend living in the United States.

Corresponding speech data is available as CALLFRIEND Farsi Second Edition Speech (LDC2014S01).

Transcripts are presented in three formats: romanized transcripts (*asc.txt), Arabic-script transcripts (*ntv.txt) and both romanized and Arabic forms in a simple XML format (*.xml). For the *.txt files, the four main fields on each line (start-offset, end-offset, speaker-label, transcript-text) are separated by tabs. Each file begins with a single comment line containing the file_id string. This is followed immediately by the list of time-stamped segments, in order according to their start-offset values, with no blank lines. The XML form of the transcripts contains both Arabicized and romanized forms for Farsi words.
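A reader for the *.txt layout described above only needs to peel off the comment line and split each remaining line on tabs. The snippet below is a minimal sketch of that; the sample content (file id and romanized Farsi lines) is invented for illustration and is not taken from the corpus.

```python
# Minimal parser for the tab-delimited transcript layout described above:
# one comment line with the file_id, then time-stamped segments with four
# tab-separated fields. Sample text is hypothetical, not corpus data.
def parse_transcript(text):
    lines = text.strip().split("\n")
    file_id = lines[0]  # first line: comment containing the file_id string
    segments = []
    for line in lines[1:]:
        start, end, speaker, words = line.split("\t", 3)
        segments.append((float(start), float(end), speaker, words))
    return file_id, segments

sample = (";; fla_0123\n"
          "0.00\t2.45\tA\tsalam , chetori ?\n"
          "2.45\t4.10\tB\tkhubam , merci .\n")
file_id, segs = parse_transcript(sample)
print(file_id)     # ;; fla_0123
print(segs[0])     # (0.0, 2.45, 'A', 'salam , chetori ?')
```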

CALLFRIEND Farsi Second Edition Transcripts is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc. 2014 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee. 

Tuesday, December 17, 2013

LDC December 2013 Newsletter


Spring 2014 LDC Data Scholarship Program - deadline approaching
LDC to close for Winter Break

New publications:




Spring 2014 LDC Data Scholarship Program - deadline approaching 


The deadline for the Spring 2014 LDC Data Scholarship Program is right around the corner. Student applications are being accepted now through January 15, 2014, 11:59PM EST. The LDC Data Scholarship program provides university students with access to LDC data at no cost. This program is open to students pursuing both undergraduate and graduate studies in an accredited college or university. LDC Data Scholarships are not restricted to any particular field of study; however, students must demonstrate a well-developed research agenda and a bona fide inability to pay.

Students will need to complete an application which consists of a data use proposal and letter of support from their adviser.  For further information on application materials and program rules, please visit the LDC Data Scholarship page.


Students can email their applications to the LDC Data Scholarships program. Decisions will be sent by email from the same address.



LDC to close for Winter Break

LDC will be closed from Wednesday, December 25, 2013 through Wednesday, January 1, 2014 in accordance with the University of Pennsylvania Winter Break Policy. Our offices will reopen on Thursday, January 2, 2014. Requests received for membership renewals and corpora during the Winter Break will be processed at that time.
Best wishes for a happy holiday season!


New publications


GALE Chinese-English Word Alignment and Tagging -- Broadcast Training Part 1 was developed by LDC and contains 179,842 tokens of word aligned Chinese and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.


Some approaches to statistical machine translation include the incorporation of linguistic knowledge in word aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations by using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags are designed in the tagging scheme to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation. 


This release consists of Chinese source broadcast conversation (BC) and broadcast news (BN) programming collected by LDC in 2005 - 2007. 


The Chinese word alignment tasks consisted of the following components:

  • Identifying, aligning, and tagging 8 different types of links
  • Identifying, attaching, and tagging local-level unmatched words
  • Identifying and tagging sentence/discourse-level unmatched words
  • Identifying and tagging all instances of Chinese 的 (DE) except when they were a part of a semantic link.

GALE Chinese-English Word Alignment and Tagging -- Broadcast Training Part 1 is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

*

Maninkakan Lexicon was developed by LDC and contains 5,834 entries of the Maninkakan language presented as a Maninkakan-English lexicon and a Maninkakan-French lexicon. It is the second publication in an ongoing LDC project to build an electronic dictionary of four Mandekan languages: Mawukakan, Maninkakan, Bambara and Jula. These are Eastern Manding languages in the Mande Group of the Niger-Congo language family. LDC released a Mawukakan Lexicon (LDC2005L01) in 2005.


More information about LDC’s work in the languages of West Africa and the challenges those languages present for language resource development can be found here.


Maninkakan is written using Latin script, Arabic script and the NKo alphabet. This lexicon is presented using a Latin-based transcription system because the Latin alphabet is familiar to the majority of Mandekan language speakers and because it is expected to facilitate the work of researchers interested in this resource.


The dictionary is provided in two formats, Toolbox and XML. Toolbox is a version of the widely used SIL Shoebox program adapted to display Unicode.  The Toolbox files are provided in two fonts, Arial and Doulous SIL. The Arial files should display using the Arial font which is standard on most operating systems. Doulous SIL, available as a free download, is a robust font that should display all characters without issue. Users should launch Toolbox using the *.prj files in the Arial or Doulous_SIL folders.


Maninkakan Lexicon is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.
*

The ARRAU (Anaphora Resolution and Underspecification) Corpus of Anaphoric Information was developed by the University of Essex and the University of Trento. It contains annotations of multi-genre English texts for anaphoric relations with information about agreement and explicit representation of multiple antecedents for ambiguous anaphoric expressions and discourse antecedents for expressions which refer to abstract entities such as events, actions and plans. 


The source texts in this release include task-oriented dialogues from the TRAINS-91 and TRAINS-93 corpora (the latter released through LDC, TRAINS Spoken Dialog Corpus LDC95S25), narratives from the English Pear Stories, articles from the Wall Street Journal portions of the Penn Treebank (Treebank-2 LDC95T7) and the RST Discourse Treebank LDC2002T07,  and the Vieira/Poesio Corpus which consists of training and test files from Treebank-2 and RST Discourse Treebank.


The texts were annotated using the ARRAU guidelines which treat all noun phrases (NPs) as markables. Different semantic roles are recognized by distinguishing between referring expressions (that update or refer to a discourse model), and non-referring ones (including expletives, predicative expressions, quantifiers, and coordination). A variety of linguistic features were also annotated, including morphosyntactic agreement, grammatical function, semantic type (person, animate, concrete, action, time, other abstract) and genericity. The annotation was carried out using the MMAX2 annotation tool which allows text units to be marked at different levels. 


The files in MMAX format have been organized so that they can be visualized using the MMAX2 tool or directly used as input/output for the BART toolkit which performs automatic coreference resolution including all necessary preprocessing steps.


The ARRAU Corpus of Anaphoric Information is distributed via web download.

2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.
 


Monday, November 18, 2013

LDC November 2013 Newsletter



Invitation to Join for Membership Year 2014 
Spring 2014 LDC Data Scholarship Program
LDC to Close for Thanksgiving Break


        New publications:

Chinese Treebank 8.0 
CSC Deceptive Speech 



Invitation to Join for Membership Year (MY) 2014
 
Membership Year (MY) 2014 is open for joining. We would like to invite all current and previous members of LDC to renew their membership as well as welcome new organizations to join the Consortium. For MY2014, LDC is pleased to maintain membership fees at last year’s rates – membership fees will not increase.  Additionally, LDC will extend discounts on membership fees to members who keep their membership current and who join early in the year.

The details of our early renewal discounts for MY2014 are as follows:

·   Organizations who joined for MY2013 will receive a 5% discount when renewing. This discount will apply throughout 2014, regardless of time of renewal. MY2013 members renewing before Monday, March 3, 2014 will receive an additional 5% discount, for a total 10% discount off the membership fee.

·    New members as well as organizations who did not join for MY2013, but who held membership in any of the previous MYs (1993-2012), will also be eligible for a 5% discount provided that they join/renew before March 3, 2014.

Not-for-Profit/US Government

Standard US$2400 (MY 2014 Fee)
              US$2280 (with 5% discount)*
              US$2160 (with 10% discount)**

Subscription US$3850 (MY 2014 Fee)
                    US$3658 (with 5% discount)*
                    US$3465 (with 10% discount)**

For-Profit
Standard US$24000 (MY 2014 Fee)
               US$22800 (with 5% discount)*
               US$21600 (with 10% discount)**


Subscription US$27500 (MY 2014 Fee)
                    US$26125 (with 5% discount)*
                    US$24750 (with 10% discount)**

*  For new members, MY2013 Members renewing for MY2014, and any previous year Member who renews before March 3, 2014

** For MY2013 Members renewing before March 3, 2014
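The discounted figures above are simply the MY2014 base fees reduced by 5% or 10%, rounded to the nearest dollar. A quick sanity check:

```python
# Verify the discounted membership fees listed above: each figure is the
# MY2014 base fee less 5% or 10%, rounded to the nearest dollar.
base_fees = {
    "Not-for-Profit Standard":     2400,
    "Not-for-Profit Subscription": 3850,
    "For-Profit Standard":        24000,
    "For-Profit Subscription":    27500,
}
for name, fee in base_fees.items():
    d5 = round(fee * 95 / 100)    # 5% discount
    d10 = round(fee * 90 / 100)   # 10% discount
    print(f"{name}: US${fee} -> US${d5} (5%), US${d10} (10%)")
```

Running this reproduces every discounted amount in the fee schedule (e.g., US$3850 becomes US$3658 at 5% and US$3465 at 10%).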

Publications for MY2014 are still being planned; here are the working titles of data sets we intend to provide:


2009 NIST Language Recognition Evaluation
Callfriend Farsi Speech and Transcripts
GALE data -- all phases and genres
Hispanic-English Speech
MADCAT Phase 4 Training
MALACH Czech ASR
NIST OpenMT Five Language Progress Set

In addition to receiving new publications, current year members of LDC also enjoy the benefit of licensing older data at reduced costs; current year for-profit members may use most data for commercial applications.


Spring 2014 LDC Data Scholarship Program

Applications are now being accepted through Wednesday, January 15, 2014, 11:59PM EST for the Spring 2014 LDC Data Scholarship program! The LDC Data Scholarship program provides university students with access to LDC data at no cost. During previous program cycles, LDC has awarded no-cost copies of LDC data to over 35 individual students and student research groups.

This program is open to students pursuing both undergraduate and graduate studies in an accredited college or university. LDC Data Scholarships are not restricted to any particular field of study; however, students must demonstrate a well-developed research agenda and a bona fide inability to pay. The selection process is highly competitive.

The application consists of two parts:

(1) Data Use Proposal. Applicants must submit a proposal describing their intended use of the data. The proposal should state which data the student plans to use and how the data will benefit their research project as well as information on the proposed methodology or algorithm.

Applicants should consult the LDC Catalog for a complete list of data distributed by LDC. Due to certain restrictions, a handful of LDC corpora are restricted to members of the Consortium. Applicants are advised to select a maximum of one to two datasets; students may apply for additional datasets during the following cycle once they have completed processing of the initial datasets and have published or presented the work in some juried venue.

(2) Letter of Support. Applicants must submit one letter of support from their thesis adviser or department chair. The letter must verify the student's need for data and confirm that the department or university lacks the funding to pay the full Non-member Fee for the data or to join the Consortium.

For further information on application materials and program rules, please visit the LDC Data Scholarship page.

Students can email their applications to the LDC Data Scholarship program. Decisions will be sent by email from the same address.

The deadline for the Spring 2014 program cycle is January 15, 2014, 11:59PM EST.

LDC to Close for Thanksgiving Break

LDC will be closed on Thursday, November 28, 2013 and Friday, November 29, 2013 in observance of the US Thanksgiving Holiday.  Our offices will reopen on Monday, December 2, 2013.

New publications

Chinese Treebank 8.0 consists of approximately 1.5 million words of annotated and parsed text from Chinese newswire, government documents, magazine articles, various broadcast news and broadcast conversation programs, web newsgroups and weblogs.

The Chinese Treebank project began at the University of Pennsylvania in 1998, continued at the University of Colorado and then moved to Brandeis University. The project’s goal is to provide a large, part-of-speech tagged and fully bracketed Chinese language corpus. The first delivery, Chinese Treebank 1.0, contained 100,000 syntactically annotated words from Xinhua News Agency newswire. It was later corrected and released in 2001 as Chinese Treebank 2.0 (LDC2001T11) and consisted of approximately 100,000 words. The LDC released Chinese Treebank 4.0 (LDC2004T05), an updated version containing roughly 400,000 words, in 2004. A year later, LDC published the 500,000 word Chinese Treebank 5.0 (LDC2005T01). Chinese Treebank 6.0 (LDC2007T36), released in 2007, consisted of 780,000 words. Chinese Treebank 7.0 (LDC2010T08), released in 2010, added new annotated newswire data, broadcast material and web text to the approximate total of one million words. Chinese Treebank 8.0 adds new annotated data from newswire, magazine articles and government documents.

There are 3,007 text files in this release, containing 71,369 sentences, 1,620,561 words, and 2,589,848 characters (hanzi or foreign). The data is provided in UTF-8 encoding, and the annotation has Penn Treebank-style labeled brackets. Details of the annotation standard can be found in the segmentation, POS-tagging and bracketing guidelines included in the release. The data is provided in four formats: raw text, word-segmented, POS-tagged, and syntactically bracketed. All files were automatically verified and manually checked.
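Penn Treebank-style labeled brackets are nested s-expressions, so a few lines of code suffice to read them into a tree. The sketch below is a minimal illustration; the input tree is an invented example with English-gloss terminals, not actual corpus data.

```python
# Minimal reader for Penn Treebank-style labeled brackets like those in
# the syntactically bracketed format (example tree is invented).
def parse_tree(s):
    """Parse '(LABEL child ...)' brackets into nested (label, children) tuples."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def read(i):
        assert tokens[i] == "("
        label = tokens[i + 1]
        i += 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = read(i)
            else:
                child, i = tokens[i], i + 1
            children.append(child)
        return (label, children), i + 1

    tree, _ = read(0)
    return tree

tree = parse_tree("(IP (NP (NN example)) (VP (VV works)))")
print(tree[0])     # IP
print(tree[1][0])  # ('NP', [('NN', ['example'])])
```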

Chinese Treebank 8.0 is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.


*


CSC Deceptive Speech was developed by Columbia University, SRI International and University of Colorado Boulder. It consists of 32 hours of audio interview from 32 native speakers of Standard American English (16 male, 16 female) recruited from the Columbia University student population and the community. The purpose of the study was to distinguish deceptive speech from non-deceptive speech using machine learning techniques on extracted features from the corpus. 

The participants were told that they were participating in a communication experiment which sought to identify people who fit the profile of the top entrepreneurs in America. To this end, the participants performed tasks and answered questions in six areas. They were later told that they had received low scores in some of those areas and did not fit the profile. The subjects then participated in an interview where they were told to convince the interviewer that they had actually achieved high scores in all areas and that they did indeed fit the profile. The task of the interviewer was to determine how he thought the subjects had actually performed, and he was allowed to ask them any questions other than those that were part of the performed tasks. For each question from the interviewer, subjects were asked to indicate whether the reply was true or contained any false information by pressing one of two pedals hidden from the interviewer under a table.

Interviews were conducted in a double-walled sound booth and recorded to digital audio tape on two channels using Crown CM311A Differoid headworn close-talking microphones, then downsampled to 16 kHz before processing.

The interviews were orthographically transcribed by hand using the NIST EARS transcription guidelines. Labels for local lies were obtained automatically from the pedal-press data and hand-corrected for alignment, and labels for global lies were annotated during transcription based on the known scores of the subjects versus their reported scores. The orthographic transcription was force-aligned using the SRI telephone speech recognizer adapted for full-bandwidth recordings. There are several segmentations associated with the corpus: the implicit segmentation of the pedal presses, derived semi-automatically sentence-like units (EARS SLASH-UNITS or SUs) which were hand labeled, intonational phrase units and the units corresponding to each topic of the interview.

CSC Deceptive Speech is distributed on 1 DVD-ROM. 2013 Subscription Members will automatically receive two copies of this data provided they have completed and returned the User License Agreement for CSC Deceptive Speech (LDC2013S09). 2013 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

Wednesday, October 16, 2013

LDC October 2013 Newsletter

Fall 2013 LDC Data Scholarship Recipients

     New publications:




Fall 2013 LDC Data Scholarship Recipients

LDC is pleased to announce the student recipients of the Fall 2013 LDC Data Scholarship program. This program provides university and college students with access to LDC data at no cost. Students were asked to complete an application which consisted of a proposal describing their intended use of the data, as well as a letter of support from their thesis adviser. We received many solid applications and have chosen six proposals to support. The following students will receive no-cost copies of LDC data:
Shamama Afnan - Clemson University (USA), MS candidate, Electrical Engineering.  Shamama has been awarded a copy of 2008 NIST Speaker Recognition Training and Test data for her work in speaker recognition.
Seyedeh Firoozabadi - University of Connecticut (USA), PhD candidate, Biomedical Engineering.  Seyedeh has been awarded a copy of TIDIGITS and TI-46 Word for her work in speech recognition.
Lei Liu - Beijing Foreign Studies University (China), PhD candidate, Foreign Language Education.  Lei has been awarded a copy of Treebank-3 and Prague Czech-English Dependency Treebank 2.0 for his work in parsing.
Monisankha Pal - Indian Institute of Technology, Kharagpur (India), PhD candidate, Electronics and Electrical Communication Engineering.  Monisankha has been awarded a copy of CSR-I (WSJ0) and CSR-II (WSJ1) for his work in speaker recognition.
Sachin Pawar - Indian Institute of Technology, Bombay (India), PhD candidate, Computer Science and Engineering.  Sachin has been awarded a copy of ACE 2004 Multilingual Training Corpus for his work in named-entity recognition.
Sergio Silva - Federal University of Rio Grande do Sul (Brazil), MS candidate, Computer Science.  Sergio has been awarded a copy of 2004 and 2005 Spring NIST Rich Transcription data for his work in diarization.

New publications


(1) GALE Phase 2 Chinese Broadcast News Speech was developed by LDC and comprises approximately 126 hours of Mandarin Chinese broadcast news speech collected in 2006 and 2007 by LDC and the Hong Kong University of Science and Technology (HKUST), Hong Kong, during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program.
Corresponding transcripts are released as GALE Phase 2 Chinese Broadcast News Transcripts (LDC2013T20).
Broadcast audio for the GALE program was collected at LDC's Philadelphia, PA USA facilities and at three remote collection sites: HKUST (Chinese), Medianet (Tunis, Tunisia) (Arabic), and MTC (Rabat, Morocco) (Arabic). The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources for a total of over 30,000 hours of collected broadcast audio over the life of the program.
The broadcast conversation recordings in this release feature news broadcasts focusing principally on current events from the following sources: Anhui TV, a regional television station in Mainland China, Anhui Province; China Central TV (CCTV), a national and international broadcaster in Mainland China; and Phoenix TV, a Hong Kong-based satellite television station. 


This release contains 248 audio files presented in FLAC-compressed Waveform Audio File format (.flac), 16000 Hz single-channel 16-bit PCM. Each file was audited by a native Chinese speaker following Audit Procedure Specification Version 2.0 which is included in this release. The broadcast auditing process served three principal goals: as a check on the operation of the broadcast collection system equipment by identifying failed, incomplete or faulty recordings, as an indicator of broadcast schedule changes by identifying instances when the incorrect program was recorded, and as a guide for data selection by retaining information about a program's genre, data type and topic.
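With fixed PCM parameters like these (16,000 Hz, one channel, 16-bit samples), the duration of a decoded recording follows directly from its byte count. A small helper (illustrative only, with these parameters as defaults) makes the arithmetic explicit:

```python
def pcm_duration_seconds(num_bytes, sample_rate=16000, channels=1,
                         bytes_per_sample=2):
    """Duration of raw PCM audio in seconds.

    At 16 kHz mono 16-bit, each second of audio occupies
    16000 * 1 * 2 = 32,000 bytes of decoded PCM.
    """
    return num_bytes / (sample_rate * channels * bytes_per_sample)
```

This is the decoded size; the FLAC files on disc are smaller because FLAC is a lossless compressed container for the same PCM samples.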

GALE Phase 2 Chinese Broadcast News Speech is distributed on 2 DVD-ROM. 2013 Subscription Members will automatically receive two copies of this data. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.
       
 *
                                                                             
(2) GALE Phase 2 Chinese Broadcast News Transcripts was developed by LDC and contains transcriptions of approximately 110 hours of Chinese broadcast news speech collected in 2006 and 2007 by LDC and the Hong Kong University of Science and Technology (HKUST), Hong Kong, during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program.

Corresponding audio data is released as GALE Phase 2 Chinese Broadcast News Speech (LDC2013S08).
The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 1,593,049 tokens. The transcripts were created with the LDC-developed transcription tool, XTrans, a multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings. 
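A tab-delimited transcript file of this kind is straightforward to read programmatically. The sketch below uses a hypothetical column layout (file, channel, start, end, speaker, transcript) purely for illustration; the actual TDF field order is specified in the release documentation:

```python
import csv
import io

def read_tdf(text, columns=("file", "channel", "start", "end",
                            "speaker", "transcript")):
    """Parse tab-delimited transcript lines into dicts.

    The column names here are illustrative; consult the release
    documentation for the actual TDF field layout.
    """
    rows = []
    for fields in csv.reader(io.StringIO(text), delimiter="\t"):
        # Skip blank lines and ';;'-prefixed metadata/comment lines.
        if not fields or fields[0].startswith(";;"):
            continue
        row = dict(zip(columns, fields))
        row["start"] = float(row["start"])
        row["end"] = float(row["end"])
        rows.append(row)
    return rows
```

Because the files are UTF-8 encoded, they should be opened with an explicit `encoding="utf-8"` when reading from disk.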


The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC’s quick transcription guidelines (QTR) and quick rich transcription specification (QRTR) both of which are included in the documentation with this release. QTR transcription consists of quick (near-)verbatim, time-aligned transcripts plus speaker identification with minimal additional mark-up. It does not include sentence unit annotation. QRTR annotation adds structural information such as topic boundaries and manual sentence unit annotation to the core components of a quick transcript.
GALE Phase 2 Chinese Broadcast News Transcripts is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.


 *

(3) OntoNotes Release 5.0 is the final release of the OntoNotes project, a collaborative effort between BBN Technologies, the University of Colorado, the University of Pennsylvania and the University of Southern California's Information Sciences Institute. The goal of the project was to annotate a large corpus comprising various genres of text (news, conversational telephone speech, weblogs, usenet newsgroups, broadcast, talk shows) in three languages (English, Chinese, and Arabic) with structural information (syntax and predicate argument structure) and shallow semantics (word sense linked to an ontology and coreference).


OntoNotes Release 5.0 contains the content of earlier releases -- OntoNotes Release 1.0 LDC2007T21, OntoNotes Release 2.0 LDC2008T04, OntoNotes Release 3.0 LDC2009T24 and OntoNotes Release 4.0 LDC2011T03 -- and adds source data and/or additional annotations for newswire (News), broadcast news (BN), broadcast conversation (BC), telephone conversation (Tele) and web data (Web) in English and Chinese and newswire data in Arabic. Also contained is English pivot text (Old Testament and New Testament text). This cumulative publication consists of 2.9 million words.


The OntoNotes project built on two time-tested resources, following the Penn Treebank for syntax and the Penn PropBank for predicate-argument structure. Its semantic representation includes word sense disambiguation for nouns and verbs, with some word senses connected to an ontology, and coreference. 


Documents describing the annotation guidelines and the routines for deriving various views of the data from the database are included in the documentation directory of this release. The annotation is provided both in separate text files for each annotation layer (Treebank, PropBank, word sense, etc.) and in the form of an integrated relational database (ontonotes-v5.0.sql.gz) with a Python API to provide convenient cross-layer access. 
OntoNotes Release 5.0 is distributed on 1 DVD-ROM. 2013 Subscription Members will automatically receive two copies of this data. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data at no charge subject to shipping and handling fees.

Tuesday, September 17, 2013

LDC September 2013 Newsletter


New LDC Website Coming Soon
LDC Spoken Language Sampler - 2nd Release

     New publications:

GALE Phase 2 Arabic Broadcast Conversation Speech Part 2
GALE Phase 2 Arabic Broadcast Conversation Transcripts Part 2
Semantic Textual Similarity (STS) 2013 Machine Translation



New LDC Website Coming Soon

Look for LDC's new website in the coming weeks. We've revamped the design and site plan to make it easier than ever to find what you're looking for. The features you use the most -- the catalog, new corpus releases and user login -- will be a short click away. We expect the LDC website to be occasionally unavailable for a few days at the end of September as we make the switch and thank you in advance for your understanding.
LDC Spoken Language Sampler - 2nd Release

The LDC Spoken Language Sampler – 2nd Release is now available.  It contains speech and transcript samples from recent releases and is available at no cost.  Follow the link above to the catalog page to download and browse.

New publications:

(1) GALE Phase 2 Arabic Broadcast Conversation Speech Part 2 was developed by LDC and comprises approximately 128 hours of Arabic broadcast conversation speech collected in 2007 by LDC as part of the DARPA GALE (Global Autonomous Language Exploitation) Program. The data was collected at LDC’s Philadelphia, PA USA facilities and at three remote collection sites. The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources for a total of over 30,000 hours of collected broadcast audio over the life of the program.

LDC's local broadcast collection system is highly automated, easily extensible and robust and capable of collecting, processing and evaluating hundreds of hours of content from several dozen sources per day. The broadcast material is served to the system by a set of free-to-air (FTA) satellite receivers, commercial direct satellite systems (DSS) such as DirecTV, direct broadcast satellite (DBS) receivers, and cable television (CATV) feeds. The mapping between receivers and recorders is dynamic and modular; all signal routing is performed under computer control, using a 256x64 A/V matrix switch. Programs are recorded in a high bandwidth A/V format and are then processed to extract audio, to generate keyframes and compressed audio/video, to produce time-synchronized closed captions (in the case of North American English) and to generate automatic speech recognition (ASR) output. 

The broadcast conversation recordings in this release feature interviews, call-in programs and round table discussions focusing principally on current events from several sources. This release contains 141 audio files presented in .wav, 16000 Hz single-channel 16-bit PCM. Each file was audited by a native Arabic speaker following Audit Procedure Specification Version 2.0 which is included in this release.
GALE Phase 2 Arabic Broadcast Conversation Speech Part 2 is distributed on 2 DVD-ROM.

2013 Subscription Members will automatically receive two copies of this data.  2013 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(2) GALE Phase 2 Arabic Broadcast Conversation Transcripts Part 2 was developed by LDC and contains transcriptions of approximately 128 hours of Arabic broadcast conversation speech collected in 2007 by LDC, MediaNet, Tunis, Tunisia and MTC, Rabat, Morocco during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) program. The source broadcast conversation recordings feature interviews, call-in programs and round table discussions focusing principally on current events from several sources. 

The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 763,945 tokens. The transcripts were created with the LDC-developed transcription tool, XTrans, a multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings. 

The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC’s quick transcription guidelines (QTR) and quick rich transcription specification (QRTR) both of which are included in the documentation with this release. QTR transcription consists of quick (near-)verbatim, time-aligned transcripts plus speaker identification with minimal additional mark-up. It does not include sentence unit annotation. QRTR annotation adds structural information such as topic boundaries and manual sentence unit annotation to the core components of a quick transcript.
GALE Phase 2 Arabic Broadcast Conversation Transcripts Part 2 is distributed via web download.
2013 Subscription Members will automatically receive two copies of this data on disc.  2013 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.



*

(3) Semantic Textual Similarity (STS) 2013 Machine Translation was developed as part of the STS 2013 Shared Task, which was held in conjunction with *SEM 2013, the second joint conference on lexical and computational semantics organized by the ACL (Association for Computational Linguistics) interest groups SIGLEX and SIGSEM. It comprises one text file containing 750 English sentence pairs translated from Arabic and Chinese newswire and web data sources.

The goal of the Semantic Textual Similarity (STS) task was to create a unified framework for the evaluation of semantic textual similarity modules and to characterize their impact on natural language processing (NLP) applications. STS measures the degree of semantic equivalence between two text passages. The task allows for an extrinsic evaluation of multiple semantic components that have historically been evaluated independently, without characterization of their impact on NLP applications. More information is available at the STS 2013 Shared Task homepage.

The source data is Arabic and Chinese newswire and web data collected by LDC that was translated and used in the DARPA GALE (Global Autonomous Language Exploitation) program and in several NIST Open Machine Translation evaluations. Of the 750 sentence pairs, 150 pairs are from the GALE Phase 5 collection and 600 pairs are from NIST 2008-2012 Open Machine Translation (OpenMT) Progress Test Sets (LDC2013T07).

The data was built to identify semantic textual similarity between two short text passages. The corpus is comprised of two tab delimited sentences per line. The first sentence is a translation and the second sentence is a post-edited translation. Post-editing is a process to improve machine translation with a minimum of manual labor. The gold standard similarity values and other STS datasets can be obtained from the STS homepage, linked above. 
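Given the two-sentences-per-line layout described above, the pairs can be read with a simple split on the first tab. The token-overlap function below is a toy similarity baseline included only for illustration; it is not the official STS scoring, and the gold-standard values come from the STS homepage:

```python
def read_sts_pairs(lines):
    """Each line holds two tab-separated sentences: a machine
    translation and its post-edited version."""
    return [tuple(line.rstrip("\n").split("\t", 1))
            for line in lines if line.strip()]

def token_overlap(s1, s2):
    """Toy baseline: Jaccard overlap of lowercase whitespace tokens.
    Real STS systems use far richer semantic features."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0
```

Since post-editing makes minimal changes to a translation, most pairs in this corpus would score high under even a crude measure like this, which is what makes the dataset a useful probe of fine-grained similarity.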

Semantic Textual Similarity (STS) 2013 Machine Translation is distributed via web download.
2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may request this data by submitting a signed copy of the LDC User Agreement for Non-Members.  This data is available at no cost.