Tuesday, January 20, 2015

LDC 2015 January Newsletter

LDC Membership Discounts for MY 2015 Still Available

New publications:


LDC Membership Discounts for MY 2015 Still Available
If you are considering joining LDC for Membership Year 2015 (MY2015), there is still time to save on membership fees. Any organization which joins or renews membership for 2015 through Monday, March 2, 2015, is entitled to a 5% discount on membership fees.  Organizations which held membership for MY2014 can receive a 10% discount on fees provided they renew prior to March 2, 2015.  For further information on planned publications for MY2015, please visit the LDC website or contact LDC.

New publications

GALE Phase 2 Arabic Broadcast News Speech Part 2 was developed by LDC and comprises approximately 170 hours of Arabic broadcast news speech collected in 2007 by LDC, MediaNet (Tunis, Tunisia) and MTC (Rabat, Morocco) during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program. Corresponding transcripts are released as GALE Phase 2 Arabic Broadcast News Transcripts Part 2 (LDC2015T01).

Broadcast audio for the GALE program was collected at LDC’s Philadelphia, PA USA facilities and at three remote collection sites: Hong Kong University of Science and Technology, Hong Kong (Chinese), Medianet (Tunis, Tunisia) (Arabic), and MTC (Rabat, Morocco) (Arabic). The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources for a total of over 30,000 hours of collected broadcast audio over the life of the program.

The broadcast recordings in this release feature news programs focusing principally on current events from the following sources: Abu Dhabi TV, a television station based in Abu Dhabi, United Arab Emirates; Al Alam News Channel, based in Iran; Aljazeera, a regional broadcaster located in Doha, Qatar; Al Ordiniyah, a national broadcast station in Jordan; Dubai TV, based in Dubai, United Arab Emirates; Al Iraqiyah, a television network based in Iraq; Kuwait TV, a national television station based in Kuwait; Lebanese Broadcasting Corporation, a Lebanese television station; Nile TV, a broadcast programmer based in Egypt; Saudi TV, a national television station based in Saudi Arabia; and Syria TV, the national television station in Syria.

This release contains 204 audio files presented in FLAC-compressed Waveform Audio File format (.flac), 16000 Hz single-channel 16-bit PCM. Each file was audited by a native Arabic speaker following Audit Procedure Specification Version 2.0 which is included in this release. The broadcast auditing process served three principal goals: as a check on the operation of the broadcast collection system equipment by identifying failed, incomplete or faulty recordings; as an indicator of broadcast schedule changes by identifying instances when the incorrect program was recorded; and as a guide for data selection by retaining information about a program’s genre, data type and topic.
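As an illustrative aside, the audio specification above (16000 Hz, single-channel, 16-bit PCM) amounts to a simple parameter check against each file's header. The sketch below is hypothetical and not part of the LDC release; the dictionary keys are assumed names, not the Audit Procedure Specification schema:

```python
# Sketch: validate that a recording's parameters match the corpus
# specification (16 kHz, single channel, 16-bit PCM).
# The parameter names below are illustrative, not LDC's actual schema.

EXPECTED = {"sample_rate": 16000, "channels": 1, "bits_per_sample": 16}

def check_recording(params: dict) -> list:
    """Return a list of mismatches between a file's parameters and the spec."""
    return [
        f"{key}: expected {want}, got {params.get(key)}"
        for key, want in EXPECTED.items()
        if params.get(key) != want
    ]

# A conforming file produces no mismatches:
print(check_recording({"sample_rate": 16000, "channels": 1, "bits_per_sample": 16}))  # []
```

A stereo 44.1 kHz file, for example, would produce two mismatch messages instead of an empty list.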

GALE Phase 2 Arabic Broadcast News Speech Part 2 is distributed on 3 DVD-ROMs.

2015 Subscription Members will automatically receive two copies of this corpus.  2015 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

GALE Phase 2 Arabic Broadcast News Transcripts Part 2 was developed by LDC and contains transcriptions of approximately 170 hours of Arabic broadcast news speech collected in 2007 by LDC, MediaNet (Tunis, Tunisia) and MTC (Rabat, Morocco) during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) program. Corresponding audio data is released as GALE Phase 2 Arabic Broadcast News Speech Part 2 (LDC2015S01).

The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 920,730 tokens. The transcripts were created with the LDC-developed transcription tool, XTrans, a multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings.
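A minimal sketch of reading such a tab-delimited (TDF) transcript is shown below. The column names here are illustrative assumptions; the actual TDF field layout is defined in the corpus documentation:

```python
import csv
import io

# Sketch: parse a tab-delimited (TDF) transcript line into a dict.
# FIELDS is an assumed, simplified layout for illustration only.

SAMPLE = "bcn_2007.tdf\t0\t12.34\t15.01\tspeaker1\tmarhaba\n"
FIELDS = ["file", "channel", "start", "end", "speaker", "transcript"]

def read_tdf(text: str):
    """Return one dict per row, keyed by the illustrative field names."""
    reader = csv.reader(io.StringIO(text), delimiter="\t")
    return [dict(zip(FIELDS, row)) for row in reader]

segments = read_tdf(SAMPLE)
print(segments[0]["speaker"])  # speaker1
```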

The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC's quick transcription guidelines (QTR) and quick rich transcription specification (QRTR) both of which are included in the documentation with this release. QTR transcription consists of quick (near-)verbatim, time-aligned transcripts plus speaker identification with minimal additional mark-up. It does not include sentence unit annotation. QRTR annotation adds structural information such as topic boundaries and manual sentence unit annotation to the core components of a quick transcript. Files with QTR as part of the filename were developed using QTR transcription. Files with QRTR in the filename indicate QRTR transcription.

GALE Phase 2 Arabic Broadcast News Transcripts Part 2 is distributed via web download.

2015 Subscription Members will automatically receive two copies of this corpus.  2015 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

SenSem (Sentence Semantics) Databank was developed by GRIAL, the Linguistic Applications Inter-University Research Group that includes the following Spanish institutions: the Universitat Autonoma de Barcelona, the Universitat de Barcelona, the Universitat de Lleida and the Universitat Oberta de Catalunya. It contains syntactic and semantic annotation for over 35,000 sentences, approximately one million words of Spanish and approximately 700,000 words of Catalan translated from the Spanish. GRIAL's work focuses on resources for applied linguistics, including lexicography, translation and natural language processing.

Each sentence in SenSem Databank was labeled according to the verb sense it exemplifies, the type of complement it takes (arguments or adjuncts) and the syntactic category and function. Each argument was also labeled with a semantic role. Further information about the SenSem project can be obtained from the GRIAL website.

The Spanish source data includes texts from news journals (30,000 sentences) and novels (5,299 sentences). Those sentences represent around 1,000 different verb meanings that correspond to the 250 most frequent Spanish verbs. Verb frequencies were retrieved from a quantitative analysis of around 13 million words.

The Catalan corpus was developed by translating the news journal portion of the Spanish data set, resulting in a resource of over 700,000 words, from which 391,267 words were annotated. Sentences were automatically translated and manually post-edited; some were re-annotated for sentence complements. Semantic information was the same for both languages. The Catalan sentences represent close to 1,300 different verbs.

SenSem Databank is distributed via web download.

2015 Subscription Members will automatically receive two copies of this corpus on disc.  2015 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee. This data is made available to LDC not-for-profit members and all non-members under the Creative Commons Attribution-Noncommercial Share Alike 3.0 license and to LDC for-profit members under the terms of the For-Profit Membership Agreement.

Monday, December 15, 2014

LDC 2014 December Newsletter

Renew your LDC membership today

Spring 2015 LDC Data Scholarship Program - deadline approaching

Reduced fees for Treebank-2 and Treebank-3 

LDC to close for Winter Break

New publications:

Renew your LDC membership today

Membership Year 2015 (MY2015) discounts are available for those who keep their membership current and join early in the year. Check here for further information including our planned publications for MY2015.

Now is also a good time to consider joining LDC for the current and open membership years, MY2014 and MY2013. MY2014 offers members an impressive 37 publications which include UN speech data, 2009 NIST LRE test set, 2007 ACE multilingual data, and multi-channel WSJ audio. MY2013 remains open through the end of the 2014 calendar year and its publications include Mixer 6 speech, Greybeard, UN parallel text and CSC Deceptive Speech as well as updates to Chinese Treebank and Chinese Proposition Bank. For full descriptions of these data sets, visit our Catalog.

Spring 2015 LDC Data Scholarship Program - deadline approaching
The deadline for the Spring 2015 LDC Data Scholarship Program is right around the corner! Student applications are being accepted now through January 15, 2015, 11:59PM EST. The LDC Data Scholarship program provides university students with access to LDC data at no cost. This program is open to students pursuing both undergraduate and graduate studies in an accredited college or university. LDC Data Scholarships are not restricted to any particular field of study; however, students must demonstrate a well-developed research agenda and a bona fide inability to pay.

Students will need to complete an application which consists of a data use proposal and letter of support from their adviser. For further information on application materials and program rules, please visit the LDC Data Scholarship page.

Students can email their applications to the LDC Data Scholarships program. Decisions will be sent by email from the same address.

Reduced fees for Treebank-2 and Treebank-3
Treebank-2 (LDC95T7) and Treebank-3 (LDC99T42) are now available to non-members at reduced fees, US$1500 for Treebank-2 and US$1700 for Treebank-3, reductions of 52% and 47%, respectively.

LDC to close for Winter Break
LDC will be closed from December 25, 2014 through January 2, 2015 in accordance with the University of Pennsylvania Winter Break Policy. Our offices will reopen on January 5, 2015. Requests received for membership renewals and corpora during the Winter Break will be processed at that time.

Best wishes for a relaxing holiday season!

New publications

(1) Benchmarks for Open Relation Extraction was developed by the University of Alberta and contains annotations for approximately 14,000 sentences from The New York Times Annotated Corpus (LDC2008T19) and Treebank-3 (LDC99T42). This corpus was designed to contain benchmarks for the task of open relation extraction (ORE), along with sample extractions from ORE methods and evaluation scripts for computing a method's precision and recall.

ORE attempts to extract all relations described in a corpus without relying on relation-specific training data. The traditional approach to relation extraction requires substantial training effort for each relation of interest. That can be impractical for massive collections such as those found on the web. Open relation extraction offers an alternative by extracting unseen relations as they come. It does not require training data for any particular relation, making it suitable for applications that require a large (or even unknown) number of relations. Results published in ORE literature are often not comparable due to the lack of reusable annotations and differences in evaluation methodology. The goal of this benchmark data set is to provide annotations that are flexible and can be used to evaluate a wide range of methods.

Binary and n-ary relations were extracted from the text sources. Sentences were annotated for binary relations manually and automatically. In the manual sentence annotation, two entities and a trigger (a single token indicating a relation) were identified for the relation between them, if one existed. A window of tokens allowed to be in a relation was specified; that included modifiers of the trigger and prepositions connecting triggers to their arguments. For each sentence annotated with two entities, a system must extract a string representing the relation between them. The evaluation method deemed an extraction as correct if it contained the trigger and allowed tokens only. The automatic annotator identified pairs of entities and a trigger of the relation between them; the evaluation script for that experiment deemed an extraction correct if it contained the annotated trigger. For n-ary relations, sentences were annotated with one relation trigger and all of its arguments. An extracted argument was deemed correct if it was annotated in the sentence.
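The binary-relation scoring rule described above can be sketched as follows. This is a simplified illustration (whitespace tokenization, exact string matching), not the evaluation script released with the corpus:

```python
# Sketch of the scoring rule for manually annotated binary relations:
# an extracted relation string is deemed correct if it contains the
# annotated trigger and uses only tokens from the allowed window.

def is_correct(extraction: str, trigger: str, allowed: set) -> bool:
    """Simplified check: trigger present, and every token is allowed."""
    tokens = extraction.split()
    return trigger in tokens and all(t in allowed for t in tokens)

allowed = {"was", "born", "in"}
print(is_correct("was born in", "born", allowed))       # True
print(is_correct("famously born in", "born", allowed))  # False (extra token)
```

Precision and recall over a test set would then follow from counting correct extractions against the total extracted and the total annotated, respectively.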

Benchmarks for Open Relation Extraction is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data provided they have completed the user agreement.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.
*

(2) Fisher and CALLHOME Spanish--English Speech Translation was developed at Johns Hopkins University and contains English reference translations and speech recognizer output (in various forms) that complement the LDC Fisher Spanish (LDC2010T04) and CALLHOME Spanish audio and transcript releases (LDC96T17). Together, they make a four-way parallel text dataset representing approximately 38 hours of speech, with defined training, development, and held-out test sets.

The source data are the Fisher Spanish and CALLHOME Spanish corpora developed by LDC, comprising transcribed telephone conversations between (mostly native) Spanish speakers in a variety of dialects. The Fisher Spanish data set consists of 819 transcribed conversations on an assortment of provided topics primarily between strangers, resulting in approximately 160 hours of speech aligned at the utterance level, with 1.5 million tokens. The CALLHOME Spanish corpus comprises 120 transcripts of spontaneous conversations primarily between friends and family members, resulting in approximately 20 hours of speech aligned at the utterance level, with just over 200,000 words (tokens) of transcribed text.

Translations were obtained by crowdsourcing using Amazon's Mechanical Turk, after which the data was split into training, development, and test sets. The CALLHOME data set defines its own data splits, organized into train, devtest, and evltest, which were retained here. For the Fisher material, four data splits were produced: a large training section and three test sets. These test sets correspond to portions of the data where four translations exist.

Fisher and CALLHOME Spanish--English Speech Translation is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) GALE Phase 3 Chinese Broadcast Conversation Speech Part 1 was developed by LDC and is comprised of approximately 126 hours of Mandarin Chinese broadcast conversation speech collected in 2007 by LDC and Hong Kong University of Science and Technology (HKUST), Hong Kong, during Phase 3 of the DARPA GALE (Global Autonomous Language Exploitation) Program.

Corresponding transcripts are released as GALE Phase 3 Chinese Broadcast Conversation Transcripts Part 1 (LDC2014T28).

Broadcast audio for the GALE program was collected at LDC’s Philadelphia, PA USA facilities and at three remote collection sites: HKUST (Chinese), Medianet (Tunis, Tunisia) (Arabic), and MTC (Rabat, Morocco) (Arabic). The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources for a total of over 30,000 hours of collected broadcast audio over the life of the program. HKUST collected Chinese broadcast programming using its internal recording system and a portable broadcast collection platform designed by LDC and installed at HKUST in 2006.

The broadcast conversation recordings in this release feature interviews, call-in programs, and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Anhui Province, China; Beijing TV, a national television station in China; China Central TV (CCTV), a Chinese national and international broadcaster; Hubei TV, a regional broadcaster in Hubei Province, China; and Phoenix TV, a Hong Kong-based satellite television station.

This release contains 217 audio files presented in FLAC-compressed Waveform Audio File format (.flac), 16000 Hz single-channel 16-bit PCM. Each file was audited by a native Chinese speaker following Audit Procedure Specification Version 2.0 which is included in this release. The broadcast auditing process served three principal goals: as a check on the operation of the broadcast collection system equipment by identifying failed, incomplete or faulty recordings, as an indicator of broadcast schedule changes by identifying instances when the incorrect program was recorded, and as a guide for data selection by retaining information about a program’s genre, data type and topic.

GALE Phase 3 Chinese Broadcast Conversation Speech Part 1 is distributed on 2 DVD-ROM.

2014 Subscription Members will automatically receive two copies of this data.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(4) GALE Phase 3 Chinese Broadcast Conversation Transcripts Part 1 was developed by LDC and contains transcriptions of approximately 126 hours of Chinese broadcast conversation speech collected in 2007 by LDC and Hong Kong University of Science and Technology (HKUST), Hong Kong, during Phase 3 of the DARPA GALE (Global Autonomous Language Exploitation) Program.

Corresponding audio data is released as GALE Phase 3 Chinese Broadcast Conversation Speech Part 1 (LDC2014S09).

The source broadcast conversation recordings feature interviews, call-in programs and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Anhui Province, China; Beijing TV, a national television station in China; China Central TV (CCTV), a Chinese national and international broadcaster; Hubei TV, a regional television station in Hubei Province, China; and Phoenix TV, a Hong Kong-based satellite television station.

The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 1,556,904 tokens. The transcripts were created with the LDC-developed transcription tool, XTrans, a multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings. XTrans is available from https://www.ldc.upenn.edu/language-resources/tools/xtrans.

The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC's quick transcription guidelines (QTR) and quick rich transcription specification (QRTR) both of which are included in the documentation with this release. QTR transcription consists of quick (near-) verbatim, time-aligned transcripts plus speaker identification with minimal additional mark-up. It does not include sentence unit annotation. QRTR annotation adds structural information such as topic boundaries and manual sentence unit annotation to the core components of a quick transcript. Files with QTR as part of the filename were developed using QTR transcription. Files with QRTR in the filename indicate QRTR transcription.

GALE Phase 3 Chinese Broadcast Conversation Transcripts Part 1 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

Monday, November 17, 2014

LDC 2014 November Newsletter

Fall 2014 Data Scholarship Recipients

Invitation to Join for Membership Year (MY) 2015

Spring 2015 Data Scholarship Program

LDC is now on Twitter

LDC closed for Thanksgiving Break

New publications:

Fall 2014 Data Scholarship Recipients
LDC is pleased to announce the student recipients of the Fall 2014 LDC Data Scholarship program.  The following students will receive no-cost copies of LDC data:
Mohammed Abumatar ~ University of Jordan (Jordan), BSc candidate, Computer Engineering.  Mohammed has been awarded a copy of MADCAT Phase 1-3 Training Data for his work in handwriting recognition.

Ramy Baly ~ American University of Beirut (Lebanon), PhD candidate, Electrical and Computer Engineering.  Ramy has been awarded a copy of Arabic Treebank Parts 1-3 for his work in opinion mining.

Abbas Khosravanai ~ Amirkabir University of Technology (Iran), PhD candidate, Computer Engineering.  Abbas has been awarded a copy of 2008 NIST Speaker Recognition for his work in robust speaker recognition.

Phuc Nguyen ~ University of North Texas (USA), PhD candidate, Computer Science and Engineering.  Phuc has been awarded a copy of Message Understanding Conference (MUC) 7 for his work in named entity recognition.

Invitation to Join for Membership Year (MY) 2015
Membership Year (MY) 2015 is open for joining.  We would like to invite all current and previous members of LDC to renew their membership as well as welcome new organizations to join the Consortium.  For MY2015, LDC is pleased to maintain membership fees at last year’s rates – membership fees will not increase.  Additionally, LDC will extend discounts on membership fees to members who keep their membership current and who join early in the year.

The details of our early renewal discounts for MY2015 are as follows:

Organizations who joined for MY2014 will receive a 10% discount when renewing before March 2, 2015. After March 2, 2015, MY2014 members are eligible for a 5% discount when renewing through the end of the year.

New members as well as organizations who did not join for MY2014, but who held membership in any of the previous MYs (1993-2013), will also be eligible for a 5% discount provided that they join/renew before March 2, 2015.

Publications for MY2015 are still being planned; we expect to release the following:

  • CIEMPIESS - Mexican Spanish radio broadcast audio and transcripts   
  • GALE Phase 3 and 4 data – all tasks and languages   
  • Mandarin Chinese Phonetic Segmentation and Tone Corpus - phonetic segmentation and tone labels   
  • RATS Speech Activity Detection  – multilanguage audio for robust speech detection and language identification
  • SEAME - Mandarin-English code-switching speech
  • SenSem Spanish and Catalan Lexicon and Databank - sentence semantics and verbal lexicons

Spring 2015 Data Scholarship Program
Applications are now being accepted through Thursday, January 15, 2015, 11:59PM EST for the Spring 2015 LDC Data Scholarship program. The LDC Data Scholarship program provides university students with access to LDC data at no cost. During previous program cycles, LDC has awarded no-cost copies of LDC data to over 40 individual students and student research groups. This program is open to students pursuing both undergraduate and graduate studies in an accredited college or university. LDC Data Scholarships are not restricted to any particular field of study; however, students must demonstrate a well-developed research agenda and a bona fide inability to pay.

The application consists of two parts:

(1) Data Use Proposal. Applicants must submit a proposal describing their intended use of the data. The proposal should state which data the student plans to use and how the data will benefit their research project as well as information on the proposed methodology or algorithm.

(2) Letter of Support. Applicants must submit one letter of support from their thesis adviser or department chair. The letter must verify the student's need for data and confirm that the department or university lacks the funding to pay the full non-member fee for the data or to join the Consortium.

For further information on application materials and program rules, please visit the LDC Data Scholarship page.

Students can email their applications to the LDC Data Scholarship program. Decisions will be sent by email from the same address.

The deadline for the Spring 2015 program cycle is January 15, 2015, 11:59PM EST.

LDC is now on Twitter
LDC now has a Twitter feed. Start following us today for updates on new corpora releases and the latest LDC news.

LDC closed for Thanksgiving Break
LDC will be closed on Thursday, November 27, 2014 and Friday, November 28, 2014 in observance of the US Thanksgiving Holiday.  Our offices will reopen on Monday, December 1, 2014.


New publications

(1) Boulder Lies and Truth was developed at the University of Colorado Boulder and contains approximately 1,500 elicited English reviews of hotels and electronics for the purpose of studying deception in written language. Reviews were collected by crowdsourcing with Amazon Mechanical Turk.

Each review was required to be original and was checked for plagiarism against the web. Reviews were annotated with respect to the following three dimensions:

  • Domain: Electronics (e.g., iPhone) or Hotels
  • Sentiment: Positive or Negative
  • Truth Value:

a) Truthful: a review about an object known by the writer reflecting the real sentiment of the writer toward the object of the review

b) Opposition: A review about an object known by the writer reflecting the opposite sentiment of the writer toward the object of the review (i.e., if the writer liked the object they were asked to write a negative review; if the writer did not like the object, they were asked to write a positive review)

c) Deceptive (i.e., fabricated): a review written about an object not known by the writer either positive or negative in sentiment; the objects reviewed were provided via a URL from the tasks in (a) and (b)

Each review was judged a total of 30 times: (1) 10 times to evaluate its perceived quality (on a range from 1-5); (2) 10 times with judgments about its perceived truthfulness (e.g., truthful or somehow deceptive, a lie or a fabrication); and (3) 10 times for its perceived sentiment (i.e., star rating).
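For illustration only, the 30 judgments per review could be aggregated into a per-review summary as below. The function and field names are hypothetical, not part of the released data:

```python
from collections import Counter
from statistics import mean

# Sketch: summarize the 30 crowd judgments per review described above
# (10 quality scores on 1-5, 10 truthfulness labels, 10 star ratings).
# Names are illustrative; shortened lists are used here for brevity.

def summarize(quality: list, truth_votes: list, stars: list) -> dict:
    """Collapse per-review judgments into mean scores and a majority label."""
    return {
        "mean_quality": mean(quality),
        "majority_truth": Counter(truth_votes).most_common(1)[0][0],
        "mean_stars": mean(stars),
    }

print(summarize([4, 5, 4], ["truthful", "deceptive", "truthful"], [4, 4, 5]))
```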

Boulder Lies and Truth is distributed via web download.

2014 Subscription Members will receive two copies of this data on disc, provided they have completed the user license agreement.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  This data is available at no-cost for non-members under the same user license agreement.

*

(2) GALE Chinese-English Word Alignment and Tagging -- Broadcast Training Part 2 was developed by LDC and contains 65,069 tokens of word aligned Chinese and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

Some approaches to statistical machine translation include the incorporation of linguistic knowledge in word aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations by using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags are designed in the tagging scheme to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation.

This release consists of Chinese source broadcast conversation (BC) programming collected by LDC in 2008.

The Chinese word alignment tasks consisted of the following components:

  • Identifying, aligning, and tagging eight different types of links
  • Identifying, attaching, and tagging local-level unmatched words
  • Identifying and tagging sentence/discourse-level unmatched words
  • Identifying and tagging all instances of Chinese 的 (DE) except when they were part of a semantic link

GALE Chinese-English Word Alignment -- Broadcast Training Part 2 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) GALE Phase 2 Chinese Web Parallel Text was developed by LDC. Along with other corpora, the parallel text in this release comprised training data for Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program. This corpus contains Chinese source text and corresponding English translations selected from weblog and newsgroup data collected by LDC and translated by LDC or under its direction.

This release includes 46 source-translation document pairs, comprising 66,779 tokens of translated data. Data is drawn from four Chinese weblog and newsgroup sources.

Data was manually selected for translation according to several criteria, including linguistic features and topic features. The files were formatted into a human-readable translation format and assigned to translation vendors. Translators followed LDC's Chinese to English translation guidelines. Bilingual LDC staff performed quality control procedures on the completed translations.

GALE Phase 2 Chinese Web Parallel Text is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

Thursday, October 16, 2014

LDC 2014 October Newsletter

LDC at NWAV 43 

LDC Data Scholarship Update 

New publications:
Chinese Discourse Treebank 0.5 
GALE Arabic-English Word Alignment -- Broadcast Training Part 2 
United Nations Proceedings Speech

LDC at NWAV 43 

LDC will be exhibiting at the 43rd New Ways of Analyzing Variation Conference (NWAV 43), held this year October 23-26 in Chicago, Illinois. Please stop by our table in the Old Town Room on the third floor of the Hilton to learn more about the most recent developments at the Consortium and to check out our latest giveaways. As always, LDC will post conference updates via our Facebook page. We hope to see you in Chicago!

LDC Data Scholarship Update

LDC received many solid applications for the Fall 2014 LDC Data Scholarship Program.  We are in the process of reviewing submissions and will announce recipients soon. The LDC Data Scholarship program provides university students with access to LDC data at no cost. Students were asked to complete an application which consisted of a proposal describing their intended use of the data, as well as a letter of support from their thesis adviser.

Data use proposals in this cycle included a range of research interests, from opinion mining to deceptive speech classification.

New publications

(1) Chinese Discourse Treebank 0.5 was developed at Brandeis University as part of the Chinese Treebank Project and consists of approximately 73,000 words of Chinese newswire text annotated for discourse relations. It follows the lexically grounded approach of the Penn Discourse Treebank (PDTB) (LDC2008T05) with adaptations based on the linguistic and statistical characteristics of Chinese text. Discourse relations are lexically anchored by discourse connectives (e.g., because, but, therefore), which are viewed as predicates that take abstract objects such as propositions, events and states as their arguments. Along with PDTB-style schemes for English, Turkish, Hindi and Czech, Chinese Discourse Treebank provides an additional perspective on how the PDTB approach can be extended for cross-lingual annotation of discourse relations.

Data was selected from the newswire material in Chinese Treebank 8.0 (LDC2013T21), specifically, from Xinhua News Agency stories. There are approximately 5,500 annotation instances. Following the PDTB format, each annotation instance consists of 27 vertical bar delimited fields. The fields specify the attributes of the discourse relation as a whole, as well as the attributes of its two arguments. Not all fields are filled in this release. Filled fields are indicated by a pair of angle brackets; the remaining fields are placeholders for future releases.
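A rough sketch of reading such a record, assuming only what the paragraph above states (27 vertical-bar-delimited fields, filled fields wrapped in angle brackets, empty fields as placeholders); the sample content is invented for illustration:

```python
# Sketch: split a PDTB-style annotation line into its 27 fields,
# unwrapping angle-bracketed (filled) fields and mapping empty
# placeholder fields to None. Field semantics are illustrative.

def parse_record(line: str) -> list:
    fields = line.split("|")
    return [f[1:-1] if f.startswith("<") and f.endswith(">") else None
            for f in fields]

# An invented record: two filled fields followed by 25 placeholders.
sample = "|".join(["<Explicit>", "<relation-arg>"] + [""] * 25)
record = parse_record(sample)
print(len(record), record[0])  # 27 Explicit
```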

Chinese Discourse Treebank 0.5 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(2) GALE Arabic-English Word Alignment -- Broadcast Training Part 2 was developed by LDC and contains 215,923 tokens of word-aligned Arabic and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program. Some approaches to statistical machine translation incorporate linguistic knowledge into word-aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations using minimum-match and attachment annotation approaches. The tagging scheme defines a set of word tags and alignment link tags to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation.

This release consists of Arabic source broadcast news and broadcast conversation data collected by LDC from 2007-2009. The Arabic word alignment tasks consisted of the following components:

Normalizing tokenized tokens as needed

Identifying different types of links

Identifying sentence segments not suitable for annotation

Tagging unmatched words attached to other words or phrases
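One common way to represent the links produced by a task like this is as pairs of source and target token indices, each carrying a link tag. The sketch below is a minimal illustration of that idea; the tag names and indices are invented and do not reflect the corpus's actual tag inventory.

```python
# A word alignment as a mapping from (source_index, target_index)
# link pairs to a link tag (tag names here are illustrative only).
alignment = {
    (0, 1): "SEM",  # a translated content-word link
    (1, 0): "FUN",  # a function-word link
}

def aligned_targets(alignment, source_index):
    """Return the sorted target indices linked to a source token."""
    return sorted(t for (s, t) in alignment if s == source_index)

print(aligned_targets(alignment, 0))  # [1]
```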

GALE Arabic-English Word Alignment – Broadcast Training Part 2 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) United Nations Proceedings Speech was developed by the United Nations (UN) and contains approximately 8,500 hours of recorded proceedings in the six official UN languages, Arabic, Chinese, English, French, Russian and Spanish. The data was recorded in 2009-2012 from sessions 64-66 of the General Assembly (GA) and First Committee (FC) (Disarmament and International Security), and meetings 6434-6763 of the Security Council.

Recordings were made using a customized system following daily internally circulated instructions from the Meetings Management Section. Most of the subjects and information related to a particular meeting or session are published in the UN Journal.

Data is presented as mp3 or flac-compressed wav; all files are 16-bit, single-channel, at either 22,050 Hz or 8,000 Hz, and are organized by committee and session number, then by language. The folder labeled "Floor" holds the feed from the microphone used by the particular speaker; those files may include other languages, for instance when the speaker's language was not among the six official UN languages.
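Given that directory layout, an index of the audio by language folder can be built with a simple walk. This is a sketch under the assumptions above; the example path components (committee and session names) are illustrative, not taken from the corpus.

```python
import os
from collections import defaultdict

def index_audio(root):
    """Walk a committee/session/language directory tree and group
    mp3 and flac files by the name of their immediate parent folder
    (the language folder, or "Floor" for the floor feed)."""
    by_language = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith((".mp3", ".flac")):
                language = os.path.basename(dirpath)
                by_language[language].append(os.path.join(dirpath, name))
    return by_language
```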

United Nations Proceedings Speech is distributed on one hard drive.

2014 Subscription Members will receive one copy of this data, provided they have completed the user license agreement.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

Monday, September 22, 2014

LDC 2014 September Newsletter


LDC at Interspeech 2014, Singapore

New publications:


LDC at Interspeech 2014, Singapore

LDC is off to Singapore to participate in Interspeech 2014. This year’s conference will be held from September 14-18 at Singapore’s Max Atria at the Expo Center. Please stop by LDC’s exhibition booth to learn more about recent developments at the Consortium and new publications. LDC will continue to post conference updates via our Facebook page. We hope to see you there!   
 
New publications

(1) ACE 2007 Multilingual Training Corpus was developed by LDC and contains the complete set of Arabic and Spanish training data for the 2007 Automatic Content Extraction (ACE) technology evaluation, specifically, Arabic and Spanish newswire data and Arabic weblogs annotated for entities and temporal expressions. The objective of the ACE program was to develop automatic content extraction technology to support automatic processing of human language in text form from a variety of sources including newswire, broadcast programming and weblogs. In the 2007 evaluation, participants were tested on system performance for the recognition of entities, values, temporal expressions, relations, and events in Chinese and English and for the recognition of entities and temporal expressions in Arabic and Spanish. LDC's work in the ACE program is described in more detail on the LDC ACE project pages.

The Arabic data is composed of newswire (60%) published in October 2000-December 2000 and weblogs (40%) published during the period November 2004-February 2005. The Spanish data set consists entirely of newswire material from multiple sources published in January 2005-April 2005. A document pool was established for each language based on genre and epoch requirements. Humans reviewed the pool to select individual documents suitable for ACE annotation, such as documents that were representative of their genre and contained targeted ACE entity types. One annotator completed the entity and temporal expression (TIMEX2) markup in the first pass annotation. This work was reviewed in the second pass by a senior annotator. TIMEX2 values were normalized by an annotator specifically trained for that task.

The table below describes the amount of data included in the current release and its annotation status. Corpus content for each language and data type is represented in the three stages of annotation: first pass annotation (1P), second pass annotation (2P) and TIMEX2 normalization and additional quality control (NORM).

Arabic

         Words                        Files
         1P       2P       NORM      1P    2P    NORM
NW       58,015   58,015   58,015    257   257   257
WL       40,338   40,338   40,338    121   121   121
Total    98,353   98,353   98,353    378   378   378

Spanish

         Words                        Files
         1P       2P       NORM      1P    2P    NORM
NW       100,401  100,401  100,401   352   352   352
Total    100,401  100,401  100,401   352   352   352

For a given document, there is a source .sgm file together with the .ag.xml and .apf.xml annotation files in each of the three directories "1p", "2p" and "timex2norm". In other words, for each newswire story or weblog entry, the three annotation directories each contain an identical copy of the source text (SGML .sgm file) along with distinct versions of the associated annotations (XML .ag.xml and .apf.xml files and plain text .tab files). All files are presented in UTF-8.
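The file layout just described can be expressed as a small path-building helper. This is a sketch that only constructs the expected paths from the stated naming conventions; the document ID in the usage example is hypothetical.

```python
import os

ANNOTATION_DIRS = ("1p", "2p", "timex2norm")

def collect_document(corpus_root, doc_id):
    """Build the expected paths for one document: the shared .sgm
    source (identical in all three directories, so taken from "1p")
    plus each stage's .ag.xml and .apf.xml annotation files."""
    files = {"source": os.path.join(corpus_root, "1p", doc_id + ".sgm")}
    for stage in ANNOTATION_DIRS:
        files[stage] = [
            os.path.join(corpus_root, stage, doc_id + ext)
            for ext in (".ag.xml", ".apf.xml")
        ]
    return files
```

For a hypothetical ID such as `XIN20041101.0001`, this yields one `.sgm` path and two annotation paths per stage.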

ACE 2007 Multilingual Training Corpus is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*
(2) GALE Arabic-English Word Alignment -- Broadcast Training Part 1 was developed by LDC and contains 267,257 tokens of word-aligned Arabic and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

Some approaches to statistical machine translation incorporate linguistic knowledge into word-aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations using minimum-match and attachment annotation approaches. The tagging scheme defines a set of word tags and alignment link tags to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation.

This release consists of Arabic source broadcast news and broadcast conversation data collected by LDC from 2007-2009. The distribution by genre, words, tokens and segments appears below:

Language   Genre   Files   Words     Tokens    Segments
Arabic     BC      231     79,485    103,816   4,114
Arabic     BN      92      131,789   163,441   7,227
Totals             323     211,274   267,257   11,341

Note that word count is based on the untokenized Arabic source, and token count is based on the tokenized Arabic source.

The Arabic word alignment tasks consisted of the following components:
Normalizing tokenized tokens as needed
Identifying different types of links
Identifying sentence segments not suitable for annotation
Tagging unmatched words attached to other words or phrases

GALE Arabic-English Word Alignment -- Broadcast Training Part 1 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*
(3) GALE Phase 2 Chinese Newswire Parallel Text Part 2 was developed by LDC. Along with other corpora, the parallel text in this release comprised training data for Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program. This corpus contains 117,895 tokens of Chinese source text and corresponding English translations selected from newswire data collected by LDC in 2007 and translated by LDC or under its direction.

This release includes 177 source-translation document pairs, comprising 117,895 tokens of translated data. Data is drawn from four distinct Chinese newswire sources: China News Service, Guangming Daily, People's Daily and People's Liberation Army Daily.

Data was manually selected for translation according to several criteria, including linguistic features and topic features. The files were formatted into a human-readable translation format and assigned to translation vendors. Translators followed LDC's Chinese to English translation guidelines. Bilingual LDC staff performed quality control procedures on the completed translations.

Source data and translations are distributed in TDF format. TDF files are tab-delimited files containing one segment of text along with meta information about that segment. Each field in the TDF file is described in TDF_format.text. All data are encoded in UTF-8.
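A tab-delimited format like TDF can be loaded with the standard library's csv module. The sketch below is illustrative only: the column names passed in are assumptions, and the authoritative field list is the one given in TDF_format.text inside the corpus.

```python
import csv

def read_tdf(path, columns):
    """Read a tab-delimited TDF file as a list of dicts, pairing
    each tab-separated field with a caller-supplied column name."""
    with open(path, encoding="utf-8") as handle:
        reader = csv.reader(handle, delimiter="\t")
        return [dict(zip(columns, row)) for row in reader]

# Hypothetical usage; real column names come from TDF_format.text:
# rows = read_tdf("file.tdf", ["file", "channel", "start", "end", "text"])
```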

GALE Phase 2 Chinese Newswire Parallel Text Part 2 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.