
Monday, March 16, 2015

LDC 2015 March Newsletter

Spring 2015 LDC Data Scholarship recipients

2001 HUB5 English Evaluation update

New publications:
_________________________________________________________________________

Spring 2015 LDC Data Scholarship recipients

Congratulations to the recipients of LDC's Spring 2015 data scholarships:

Christopher Kotfila ~ State University of New York, Albany (USA), PhD Candidate, Informatics. Christopher has been awarded copies of Message Understanding Conference and ACE 2005 SpatialML for his work in named entity extraction.  
  
Ilia Markov ~ National Polytechnic University (Mexico), PhD candidate, Computer Science. Ilia has been awarded a copy of the ETS Corpus of Non-Native Written English for his work in native language identification.

Matthew Nelson ~ Georgia State University (USA), MA candidate, Applied Linguistics. Matthew has been awarded a copy of TIMIT and Nationwide Speech for his work in speaker perception.   

Meladianos Polykarpos ~ Athens University of Economics and Business (Greece), PhD candidate, Informatics. Meladianos has been awarded a copy of TDT5 Text and Topics/Annotations for his work in information retrieval.  

Benjamin Schloss ~ Pennsylvania State University (USA), PhD candidate, Psychology. Benjamin has been awarded a copy of the ETS Corpus of Non-Native Written English for his work in semantics.

For program information visit the Data Scholarship page.

2001 HUB5 English Evaluation update
2001 HUB5 English Evaluation (LDC2002S13) now includes corresponding transcriptions.  The transcripts are available as part of the web download for this data.  Additionally, all HUB5 English catalog entries have been updated to reflect LDC's current standards for documentation and metadata.

New publications:

(1) GALE Chinese-English Parallel Aligned Treebank -- Training was developed by LDC and contains 229,249 tokens of word aligned Chinese and English parallel text with treebank annotations. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

The Chinese source data was translated into English. Chinese and English treebank annotations were performed independently. The parallel texts were then word aligned. The material in this release corresponds to portions of the Chinese treebanked data in Chinese Treebank 6.0 (LDC2007T36) (CTB), OntoNotes 3.0 (LDC2009T24) and OntoNotes 4.0 (LDC2011T03).

This release consists of Chinese source broadcast programming (China Central TV, Phoenix TV), newswire (Xinhua News Agency) and web data collected by LDC. The distribution by genre, words, character tokens, treebank tokens and segments appears below:

Genre    Files   Words     CharTokens   CTBTokens   Segments
bc       10      57,571    86,356       60,270      3,328
nw       172     64,337    96,505       57,722      2,092
wb       86      30,925    46,388       31,240      1,321
Total    268     152,833   229,249      149,232     6,741

Note that all token counts are based on the Chinese data only. One token is equivalent to one character and one word is equivalent to 1.5 characters.
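As a sanity check, the per-genre counts above can be totaled programmatically; the numbers below are copied directly from the table, and the 1.5 characters-per-word relationship can be verified the same way:

```python
# Per-genre counts from the table: (files, words, char_tokens, ctb_tokens, segments)
counts = {
    "bc": (10, 57571, 86356, 60270, 3328),
    "nw": (172, 64337, 96505, 57722, 2092),
    "wb": (86, 30925, 46388, 31240, 1321),
}

totals = tuple(sum(genre[i] for genre in counts.values()) for i in range(5))
print(totals)  # matches the Total row: (268, 152833, 229249, 149232, 6741)

# One word is roughly 1.5 characters: 152,833 words * 1.5 ~ 229,249 character tokens
assert abs(totals[1] * 1.5 - totals[2]) < 1
```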

The Chinese word alignment task consisted of the following components:
  • Identifying, aligning, and tagging eight different types of links
  • Identifying, attaching, and tagging local-level unmatched words
  • Identifying and tagging sentence/discourse-level unmatched words
  • Identifying and tagging all instances of Chinese 的 (DE) except when they were a part of a semantic link
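The link-based annotation described above can be pictured with a minimal data structure. The class name, field names, and tag value below are illustrative assumptions for the sketch, not the release's actual file format:

```python
from dataclasses import dataclass

@dataclass
class AlignmentLink:
    """One word-alignment link between Chinese and English tokens."""
    zh_indices: tuple   # positions of the Chinese tokens in the link
    en_indices: tuple   # positions of the English tokens in the link
    link_type: str      # one of the eight link-type tags (value is hypothetical)

# A one-to-many link: a single Chinese token aligned to two English tokens.
link = AlignmentLink(zh_indices=(3,), en_indices=(4, 5), link_type="semantic")
print(len(link.en_indices))  # 2
```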
This release contains nine types of files:
  • Chinese raw source files
  • English raw translation files
  • Chinese character tokenized files
  • Chinese CTB tokenized files
  • English tokenized files
  • Chinese treebank files
  • English treebank files
  • character-based word alignment files
  • CTB-based word alignment files

GALE Chinese-English Parallel Aligned Treebank -- Training is distributed via web download.  2015 Subscription Members will automatically receive two copies of this corpus on disc.  2015 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(2) GALE Phase 3 and 4 Arabic Broadcast Conversation Parallel Text was developed by LDC. Along with other corpora, the parallel text in this release comprised training data for Phases 3 and 4 of the DARPA GALE (Global Autonomous Language Exploitation) Program. This corpus contains Modern Standard Arabic source text and corresponding English translations selected from broadcast conversation data collected by LDC between 2006 and 2008 and transcribed and translated by LDC or under its direction.

GALE Phase 3 and 4 Arabic Broadcast Conversation Parallel Text includes 55 source-translation document pairs, comprising 280,535 words of Arabic source text and its English translation. Data is drawn from 22 distinct Arabic programs broadcast between 2006 and 2008. Broadcast conversation programming is generally more interactive than traditional news broadcasts and includes talk shows, interviews, call-in programs and roundtables.

The files in this release were transcribed by LDC staff and/or transcription vendors under contract to LDC in accordance with the Quick Rich Transcription guidelines developed by LDC. Transcribers indicated sentence boundaries in addition to transcribing the text. The transcribed and segmented files were reformatted into a human-readable translation format and assigned to translation vendors. Translators followed LDC's Arabic to English translation guidelines. Bilingual LDC staff performed quality control procedures on the completed translations.

GALE Phase 3 and 4 Arabic Broadcast Conversation Parallel Text is distributed via web download.  2015 Subscription Members will automatically receive two copies of this corpus on disc.  2015 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) Mandarin-English Code-Switching in South-East Asia was developed by Nanyang Technological University and Universiti Sains Malaysia in Singapore and Malaysia, respectively. It is comprised of approximately 192 hours of Mandarin-English code-switching speech from 156 speakers with associated transcripts.

Code-switching refers to the practice of shifting between languages or language varieties during conversation. This corpus focuses on the shift between Mandarin and English by Malaysian and Singaporean speakers. Speakers engaged in unscripted conversations and interviews. In the conversational speech segments, two speakers conversed freely with each other. The interviews consisted of questions from an interviewer and answers from an interviewee; only the interviewee's speech was recorded. Topics discussed include hobbies, friends, and daily activities.

The speakers were gender-balanced (49.7% female, 50.3% male) and between 19 and 33 years of age. Over 60% of the speakers were Singaporean; the rest were Malaysian.

The speech recordings were conducted in a quiet room using several microphones and recording devices. Details about the recording conditions are contained in the documentation provided with this release. The audio files in this corpus are 16 kHz, 16-bit recordings in FLAC-compressed wav format, between 20 and 120 minutes in length.
Selected segments of the audio recordings were transcribed. Most of those segments contain code-switching utterances.

Mandarin-English Code-Switching in South-East Asia is distributed on two DVD-ROM.  2015 Subscription Members will automatically receive two copies of this corpus.  2015 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

Monday, December 15, 2014

LDC 2014 December Newsletter

Renew your LDC membership today

Spring 2015 LDC Data Scholarship Program - deadline approaching

Reduced fees for Treebank-2 and Treebank-3 

LDC to close for Winter Break

New publications:

Renew your LDC membership today

Membership Year 2015 (MY2015) discounts are available for those who keep their membership current and join early in the year. Check here for further information including our planned publications for MY2015.

Now is also a good time to consider joining LDC for the current and open membership years, MY2014 and MY2013. MY2014 offers members an impressive 37 publications which include UN speech data, 2009 NIST LRE test set, 2007 ACE multilingual data, and multi-channel WSJ audio. MY2013 remains open through the end of the 2014 calendar year and its publications include Mixer 6 speech, Greybeard, UN parallel text and CSC Deceptive Speech as well as updates to Chinese Treebank and Chinese Proposition Bank. For full descriptions of these data sets, visit our Catalog.

Spring 2015 LDC Data Scholarship Program - deadline approaching
The deadline for the Spring 2015 LDC Data Scholarship Program is right around the corner! Student applications are being accepted now through January 15, 2015, 11:59PM EST. The LDC Data Scholarship program provides university students with access to LDC data at no cost. This program is open to students pursuing both undergraduate and graduate studies in an accredited college or university. LDC Data Scholarships are not restricted to any particular field of study; however, students must demonstrate a well-developed research agenda and a bona fide inability to pay.

Students will need to complete an application which consists of a data use proposal and letter of support from their adviser. For further information on application materials and program rules, please visit the LDC Data Scholarship page.

Students can email their applications to the LDC Data Scholarships program. Decisions will be sent by email from the same address.

Reduced fees for Treebank-2 and Treebank-3
Treebank-2 (LDC95T7) and Treebank-3 (LDC99T42) are now available to non-members at reduced fees, US$1500 for Treebank-2 and US$1700 for Treebank-3, reductions of 52% and 47%, respectively.

LDC to close for Winter Break
LDC will be closed from December 25, 2014 through January 2, 2015 in accordance with the University of Pennsylvania Winter Break Policy. Our offices will reopen on January 5, 2015. Requests received for membership renewals and corpora during the Winter Break will be processed at that time.

Best wishes for a relaxing holiday season!

New publications

(1) Benchmarks for Open Relation Extraction was developed by the University of Alberta and contains annotations for approximately 14,000 sentences from The New York Times Annotated Corpus (LDC2008T19) and Treebank-3 (LDC99T42). This corpus was designed to contain benchmarks for the task of open relation extraction (ORE), along with sample extractions from ORE methods and evaluation scripts for computing a method's precision and recall.

ORE attempts to extract as many relations as described in a corpus without relying on relation-specific training data. The traditional approach to relation extraction requires substantial training effort for each relation of interest. That can be impractical for massive collections such as those found on the web. Open relation extraction offers an alternative by extracting unseen relations as they appear. It does not require training data for any particular relation, making it suitable for applications that require a large (or even unknown) number of relations. Results published in the ORE literature are often not comparable due to the lack of reusable annotations and differences in evaluation methodology. The goal of this benchmark data set is to provide annotations that are flexible and can be used to evaluate a wide range of methods.

Binary and n-ary relations were extracted from the text sources. Sentences were annotated for binary relations manually and automatically. In the manual sentence annotation, two entities and a trigger (a single token indicating a relation) were identified for the relation between them, if one existed. A window of tokens allowed to be in a relation was specified; that included modifiers of the trigger and prepositions connecting triggers to their arguments. For each sentence annotated with two entities, a system must extract a string representing the relation between them. The evaluation method deemed an extraction as correct if it contained the trigger and allowed tokens only. The automatic annotator identified pairs of entities and a trigger of the relation between them; the evaluation script for that experiment deemed an extraction correct if it contained the annotated trigger. For n-ary relations, sentences were annotated with one relation trigger and all of its arguments. An extracted argument was deemed correct if it was annotated in the sentence.
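The correctness criterion and precision/recall computation described above can be sketched as follows. The function names and the representation of annotations are illustrative assumptions, not the evaluation scripts shipped with the corpus:

```python
def is_correct(extraction, trigger, allowed_tokens):
    """An extracted relation string is correct if it contains the
    annotated trigger and uses only tokens from the allowed window."""
    tokens = extraction.split()
    return trigger in tokens and all(t in allowed_tokens for t in tokens)

def precision_recall(extractions, annotations):
    """extractions: {sentence_id: relation string};
    annotations: {sentence_id: (trigger, allowed token set)}."""
    correct = sum(
        1 for sid, rel in extractions.items()
        if sid in annotations and is_correct(rel, *annotations[sid])
    )
    p = correct / len(extractions) if extractions else 0.0
    r = correct / len(annotations) if annotations else 0.0
    return p, r

# Toy data: sentence 1's extraction contains the trigger and only allowed
# tokens (correct); sentence 2's extraction is missing the trigger (incorrect).
gold = {1: ("founded", {"was", "founded", "by"}),
        2: ("works", {"works", "at"})}
pred = {1: "was founded by",
        2: "is employed by"}
print(precision_recall(pred, gold))  # (0.5, 0.5)
```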

Benchmarks for Open Relation Extraction is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data provided they have completed the user agreement.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.
*

(2) Fisher and CALLHOME Spanish--English Speech Translation was developed at Johns Hopkins University and contains English reference translations and speech recognizer output (in various forms) that complement the LDC Fisher Spanish (LDC2010T04) and CALLHOME Spanish audio and transcript releases (LDC96T17). Together, they make a four-way parallel text dataset representing approximately 38 hours of speech, with defined training, development, and held-out test sets.

The source data are the Fisher Spanish and CALLHOME Spanish corpora developed by LDC, comprising transcribed telephone conversations between (mostly native) Spanish speakers in a variety of dialects. The Fisher Spanish data set consists of 819 transcribed conversations on an assortment of provided topics primarily between strangers, resulting in approximately 160 hours of speech aligned at the utterance level, with 1.5 million tokens. The CALLHOME Spanish corpus comprises 120 transcripts of spontaneous conversations primarily between friends and family members, resulting in approximately 20 hours of speech aligned at the utterance level, with just over 200,000 words (tokens) of transcribed text.

Translations were obtained by crowdsourcing using Amazon's Mechanical Turk, after which the data was split into training, development, and test sets. The CALLHOME data set defines its own data splits, organized into train, devtest, and evltest, which were retained here. For the Fisher material, four data splits were produced: a large training section and three test sets. These test sets correspond to portions of the data where four translations exist.

Fisher and CALLHOME Spanish--English Speech Translation is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) GALE Phase 3 Chinese Broadcast Conversation Speech Part 1 was developed by LDC and is comprised of approximately 126 hours of Mandarin Chinese broadcast conversation speech collected in 2007 by LDC and Hong Kong University of Science and Technology (HKUST) during Phase 3 of the DARPA GALE (Global Autonomous Language Exploitation) Program.

Corresponding transcripts are released as GALE Phase 3 Chinese Broadcast Conversation Transcripts Part 1 (LDC2014T28).

Broadcast audio for the GALE program was collected at LDC’s Philadelphia, PA USA facilities and at three remote collection sites: HKUST (Chinese), Medianet (Tunis, Tunisia) (Arabic), and MTC (Rabat, Morocco) (Arabic). The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources for a total of over 30,000 hours of collected broadcast audio over the life of the program. HKUST collected Chinese broadcast programming using its internal recording system and a portable broadcast collection platform designed by LDC and installed at HKUST in 2006.

The broadcast conversation recordings in this release feature interviews, call-in programs, and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Anhui Province, China; Beijing TV, a national television station in China; China Central TV (CCTV), a Chinese national and international broadcaster; Hubei TV, a regional broadcaster in Hubei Province, China; and Phoenix TV, a Hong Kong-based satellite television station.

This release contains 217 audio files presented in FLAC-compressed Waveform Audio File format (.flac), 16000 Hz single-channel 16-bit PCM. Each file was audited by a native Chinese speaker following Audit Procedure Specification Version 2.0 which is included in this release. The broadcast auditing process served three principal goals: as a check on the operation of the broadcast collection system equipment by identifying failed, incomplete or faulty recordings, as an indicator of broadcast schedule changes by identifying instances when the incorrect program was recorded, and as a guide for data selection by retaining information about a program’s genre, data type and topic.

GALE Phase 3 Chinese Broadcast Conversation Speech Part 1 is distributed on 2 DVD-ROM.

2014 Subscription Members will automatically receive two copies of this data.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(4) GALE Phase 3 Chinese Broadcast Conversation Transcripts Part 1 was developed by LDC and contains transcriptions of approximately 126 hours of Chinese broadcast conversation speech collected in 2007 by LDC and Hong Kong University of Science and Technology (HKUST) during Phase 3 of the DARPA GALE (Global Autonomous Language Exploitation) Program.

Corresponding audio data is released as GALE Phase 3 Chinese Broadcast Conversation Speech Part 1 (LDC2014S09).

The source broadcast conversation recordings feature interviews, call-in programs and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Anhui Province, China; Beijing TV, a national television station in China; China Central TV (CCTV), a Chinese national and international broadcaster; Hubei TV, a regional television station in Hubei Province, China; and Phoenix TV, a Hong Kong-based satellite television station.

The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 1,556,904 tokens. The transcripts were created with the LDC-developed transcription tool, XTrans, a multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings. XTrans is available at https://www.ldc.upenn.edu/language-resources/tools/xtrans.

The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC's quick transcription guidelines (QTR) and quick rich transcription specification (QRTR) both of which are included in the documentation with this release. QTR transcription consists of quick (near-) verbatim, time-aligned transcripts plus speaker identification with minimal additional mark-up. It does not include sentence unit annotation. QRTR annotation adds structural information such as topic boundaries and manual sentence unit annotation to the core components of a quick transcript. Files with QTR as part of the filename were developed using QTR transcription. Files with QRTR in the filename indicate QRTR transcription.
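Since the transcripts are tab-delimited UTF-8 text, they can be read with standard tooling. The column subset below is an illustrative assumption; consult the corpus documentation for the authoritative TDF field inventory:

```python
import csv
import io

# Hypothetical column subset, for illustration only.
FIELDS = ["file", "channel", "start", "end", "speaker", "transcript"]

def read_tdf(text):
    """Yield one dict per transcript segment."""
    for row in csv.reader(io.StringIO(text), delimiter="\t"):
        rec = dict(zip(FIELDS, row))
        rec["start"], rec["end"] = float(rec["start"]), float(rec["end"])
        yield rec

# Toy two-segment transcript (contents are invented).
sample = (
    "showA\t1\t12.34\t15.10\tspeaker1\tfirst utterance\n"
    "showA\t1\t15.10\t18.02\tspeaker2\tsecond utterance\n"
)
segments = list(read_tdf(sample))
speech_seconds = sum(r["end"] - r["start"] for r in segments)
print(round(speech_seconds, 2))  # 5.68
```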

GALE Phase 3 Chinese Broadcast Conversation Transcripts Part 1 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

Thursday, October 16, 2014

LDC 2014 October Newsletter

LDC at NWAV 43 

LDC Data Scholarship Update 

New publications:
Chinese Discourse Treebank 0.5 
GALE Arabic-English Word Alignment -- Broadcast Training Part 2 
United Nations Proceedings Speech
________________________________________________________________

LDC at NWAV 43 

LDC will be exhibiting at the 43rd New Ways of Analyzing Variation Conference (NWAV 43)  held this year October 23-26 in Chicago, Illinois. Please stop by our table in the Old Town Room on the third floor of the Hilton to learn more about the most recent developments at the Consortium and to check out our latest giveaways. As always, LDC will post conference updates via our Facebook page. We hope to see you in Chicago!

LDC Data Scholarship Update

LDC received many solid applications for the Fall 2014 LDC Data Scholarship Program.  We are in the process of reviewing submissions and will announce recipients soon. The LDC Data Scholarship program provides university students with access to LDC data at no cost. Students were asked to complete an application which consisted of a proposal describing their intended use of the data, as well as a letter of support from their thesis adviser.

Data use proposals in this cycle included a range of research interests from opinion mining tagging to deceptive speech classification.

New publications

(1) Chinese Discourse Treebank 0.5 was developed at Brandeis University as part of the Chinese Treebank Project and consists of approximately 73,000 words of Chinese newswire text annotated for discourse relations. It follows the lexically grounded approach of the Penn Discourse Treebank (PDTB) (LDC2008T05) with adaptations based on the linguistic and statistical characteristics of Chinese text. Discourse relations are lexically anchored by discourse connectives (e.g., because, but, therefore), which are viewed as predicates that take abstract objects such as propositions, events and states as their arguments. Along with PDTB-style schemes for English, Turkish, Hindi and Czech, Chinese Discourse Treebank provides an additional perspective on how the PDTB approach can be extended for cross-lingual annotation of discourse relations.

Data was selected from the newswire material in Chinese Treebank 8.0 (LDC2013T21), specifically, from Xinhua News Agency stories. There are approximately 5,500 annotation instances. Following the PDTB format, each annotation instance consists of 27 vertical bar delimited fields. The fields specify the attributes of the discourse relation as a whole, as well as the attributes of its two arguments. Not all fields are filled in this release. Filled fields are indicated by a pair of angle brackets; the remaining fields are place holders for future releases.
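A record in that layout can be decoded mechanically: split on the vertical bar, treat angle-bracketed fields as filled, and keep empty fields as placeholders. A minimal sketch, using an invented toy record rather than real annotation values:

```python
def parse_record(line):
    """Split one 27-field, vertical-bar delimited annotation instance.
    Filled fields are wrapped in angle brackets; empty placeholder
    fields come back as None."""
    fields = line.split("|")
    assert len(fields) == 27, "each annotation instance has 27 fields"
    return [f[1:-1] if f.startswith("<") and f.endswith(">") else None
            for f in fields]

# Toy record: only the first field is filled.
record = "|".join(["<Explicit>"] + [""] * 26)
parsed = parse_record(record)
print(parsed[0], parsed.count(None))  # Explicit 26
```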

Chinese Discourse Treebank 0.5 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(2) GALE Arabic-English Word Alignment -- Broadcast Training Part 2 was developed by LDC and contains 215,923 tokens of word aligned Arabic and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program. Some approaches to statistical machine translation include the incorporation of linguistic knowledge in word aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations by using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags are designed in the tagging scheme to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation.

This release consists of Arabic source broadcast news and broadcast conversation data collected by LDC from 2007-2009. The Arabic word alignment task consisted of the following components:

  • Normalizing tokenized tokens as needed
  • Identifying different types of links
  • Identifying sentence segments not suitable for annotation
  • Tagging unmatched words attached to other words or phrases

GALE Arabic-English Word Alignment – Broadcast Training Part 2 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) United Nations Proceedings Speech was developed by the United Nations (UN) and contains approximately 8,500 hours of recorded proceedings in the six official UN languages, Arabic, Chinese, English, French, Russian and Spanish. The data was recorded in 2009-2012 from sessions 64-66 of the General Assembly (GA) and First Committee (FC) (Disarmament and International Security), and meetings 6434-6763 of the Security Council.

Recordings were made using a customized system following daily internally circulated instructions from the Meetings Management Section. Most of the subjects and information related to a particular meeting or session are published in a UN Journal which can be found here.

Data is presented as either mp3 or FLAC-compressed wav; files are 16-bit, single-channel, at either 22,050 Hz or 8,000 Hz, and are organized by committee and session number, then by language. The folder labeled "Floor" indicates the microphone used by the particular speaker. Those files may include other languages, for instance, if the speaker's language was not among the six official UN languages.

United Nations Proceedings Speech is distributed on one hard drive.

2014 Subscription Members will receive one copy of this data, provided they have completed the user license agreement.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

Friday, August 15, 2014

LDC 2014 August Newsletter


Fall 2014 LDC Data Scholarship program- September 15 deadline approaching
Neural Engineering Data Consortium publishes first release

New publications:



Fall 2014 LDC Data Scholarship program- September 15 deadline approaching!
Student applications for the Fall 2014 LDC Data Scholarship program are being accepted now through Monday, September 15, 2014, 11:59PM EST.  The LDC Data Scholarship program provides university students with access to LDC data at no cost.  This program is open to students pursuing both undergraduate and graduate studies in an accredited college or university. LDC Data Scholarships are not restricted to any particular field of study; however, students must demonstrate a well-developed research agenda and a bona fide inability to pay.  

Students will need to complete an application which consists of a data use proposal and letter of support from their adviser.  For further information on application materials and program rules, please visit the LDC Data Scholarship page.

Applicants can email their materials to the LDC Data Scholarship program. Decisions will be sent by email from the same address.

Neural Engineering Data Consortium publishes first release
The Neural Engineering Data Consortium (NEDC) has announced its first release, the Temple University Hospital Electroencephalogram (TUH EEG) corpus. The TUH EEG corpus is a database of over 20,000 EEG recordings which will aid the development of technology to automatically interpret EEG scans. NEDC, directed by Professors Iyad Obeid and Joe Picone of Temple University, Philadelphia, PA USA, designs, collects and distributes data and resources in support of neural engineering research.

NEDC is surveying community needs to help set priorities for future effort. You can complete the survey here.

New publications
(1) GALE Phase 2 Arabic Broadcast News Speech Part 1 was developed by LDC and is comprised of approximately 165 hours of Arabic broadcast news speech collected in 2006 and 2007 by LDC, MediaNet, Tunis, Tunisia and MTC, Rabat, Morocco during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program. Corresponding transcripts are released as GALE Phase 2 Arabic Broadcast News Transcripts Part 1 (LDC2014T17).

Broadcast audio for the GALE program was collected at LDC’s Philadelphia, PA USA facilities and at three remote collection sites: Hong Kong University of Science and Technology, Hong Kong (Chinese), Medianet (Tunis, Tunisia) (Arabic), and MTC (Rabat, Morocco) (Arabic). The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources for a total of over 30,000 hours of collected broadcast audio over the life of the program.

The broadcast recordings in this release feature news programs focusing principally on current events from the following sources: Abu Dhabi TV, a television station based in Abu Dhabi, United Arab Emirates; Al Alam News Channel, based in Iran; Alhurra, a U.S. government-funded regional broadcaster; Aljazeera, a regional broadcaster located in Doha, Qatar; Dubai TV, a broadcast station in the United Arab Emirates; Al Iraqiyah, an Iraqi television station; Kuwait TV, a national broadcast station in Kuwait; Lebanese Broadcasting Corporation, a Lebanese television station; Nile TV, a broadcast programmer based in Egypt; Saudi TV, a national television station based in Saudi Arabia; and Syria TV, the national television station in Syria.

This release contains 200 audio files presented in FLAC-compressed Waveform Audio File format (.flac), 16000 Hz single-channel 16-bit PCM. Each file was audited by a native Arabic speaker following Audit Procedure Specification Version 2.0 which is included in this release. The broadcast auditing process served three principal goals: as a check on the operation of the broadcast collection system equipment by identifying failed, incomplete or faulty recordings; as an indicator of broadcast schedule changes by identifying instances when the incorrect program was recorded; and as a guide for data selection by retaining information about a program’s genre, data type and topic.

GALE Phase 2 Arabic Broadcast News Speech Part 1 is distributed on three DVD-ROM.

2014 Subscription Members will automatically receive two copies of this data.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*
(2) GALE Phase 2 Arabic Broadcast News Transcripts Part 1 was developed by LDC and contains transcriptions of approximately 165 hours of Arabic broadcast news speech collected in 2006 and 2007 by LDC, MediaNet, Tunis, Tunisia and MTC, Rabat, Morocco during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) program. Corresponding audio data is released as GALE Phase 2 Arabic Broadcast News Speech Part 1 (LDC2014S07).

The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 897,868 tokens. The transcripts were created with the LDC-developed transcription tool, XTrans, a multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings. 

The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC's quick transcription guidelines (QTR) and quick rich transcription specification (QRTR) both of which are included in the documentation with this release. QTR transcription consists of quick (near-)verbatim, time-aligned transcripts plus speaker identification with minimal additional mark-up. It does not include sentence unit annotation. QRTR annotation adds structural information such as topic boundaries and manual sentence unit annotation to the core components of a quick transcript. Files with QTR as part of the filename were developed using QTR transcription. Files with QRTR in the filename indicate QRTR transcription.

GALE Phase 2 Arabic Broadcast News Transcripts Part 1 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*
(3) TAC KBP Reference Knowledge Base was developed by LDC in support of the NIST-sponsored TAC-KBP evaluation series. It is a knowledge base built from English Wikipedia articles and their associated infoboxes and covers over 800,000 entities.

TAC (Text Analysis Conference) is a series of workshops organized by NIST (the National Institute of Standards and Technology) to encourage research in natural language processing and related applications by providing a large test collection, common evaluation procedures, and a forum for researchers to share their results. TAC's KBP track (Knowledge Base Population) encourages the development of systems that can match entities mentioned in natural texts with those appearing in a knowledge base and extract novel information about entities from a document collection and add it to a new or existing knowledge base.

Consult the LDC TAC-KBP project page for further information about LDC's resource development for the TAC-KBP program.

The source data, Wikipedia infoboxes and articles, was taken from an October 2008 snapshot of Wikipedia.

TAC KBP Reference Knowledge Base contains a set of entities, each with a canonical name and title for its Wikipedia page, an entity type, an automatically parsed version of the data from the infobox in the entity's Wikipedia article, and a stripped version of the article text. Each entity is assigned one of four types: PER (person), ORG (organization), GPE (geo-political entity) or UKN (unknown). All data files are presented as UTF-8 encoded XML.
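Entries in a knowledge base of this shape can be processed with any standard XML parser. A minimal sketch using a made-up entity record (the element and attribute names below are invented for illustration and are not the corpus's actual schema):

```python
import xml.etree.ElementTree as ET

# A hypothetical entity record; the real KB's schema may differ.
record = """
<entity id="E0000001" type="GPE">
  <name>Philadelphia</name>
  <wiki_title>Philadelphia</wiki_title>
  <wiki_text>Philadelphia is the largest city in Pennsylvania.</wiki_text>
</entity>
"""

# The four entity types defined by the corpus.
VALID_TYPES = {"PER", "ORG", "GPE", "UKN"}

elem = ET.fromstring(record)
etype = elem.get("type")
assert etype in VALID_TYPES
print(elem.get("id"), etype, elem.findtext("name"))
```

A loop of this kind over the KB's XML files is the usual starting point for entity-linking systems that match document mentions against the 800,000+ entries.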

TAC KBP Reference Knowledge Base is distributed on one DVD-ROM.


2014 Subscription Members will automatically receive two copies of this data.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

Thursday, May 15, 2014

LDC May 2014 Newsletter

LDC at LREC 2014

New publications:
GALE Arabic-English Word Alignment Training Part 2 -- Newswire  
Hispanic-English Database  
HyTER Networks of Selected OpenMT08/09 Progress Set Sentences  



LDC at LREC 2014
LDC will attend the 9th Language Resources and Evaluation Conference (LREC 2014), hosted by ELRA, the European Language Resources Association. The conference will be held in Reykjavik, Iceland from May 26-31 and features a broad range of sessions on language resources and human language technology research. Ten LDC staff members will present current work on topics including the Language Application Grid project, collecting natural SMS and chat conversations in multiple languages, incorporating alternate translations into English translation treebanks, supporting HLT research with degraded audio data, developing an Egyptian Arabic Treebank and more.

Following the conference, LDC's papers and posters will be available on LDC's Papers Page.


New publications:
(1) GALE Arabic-English Word Alignment Training Part 2 -- Newswire was developed by LDC and contains 162,359 tokens of word aligned Arabic and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

Some approaches to statistical machine translation include the incorporation of linguistic knowledge in word aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations by using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags are designed in the tagging scheme to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation.
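One common way to represent annotations like these in code is as a list of links, each connecting source-token indices to target-token indices and carrying a tag. A minimal sketch (the tokens and the tag values below are invented for illustration; the corpus defines its own tag inventory):

```python
from dataclasses import dataclass


@dataclass
class AlignmentLink:
    """One word-alignment link between Arabic and English tokens."""
    src: tuple  # indices into the Arabic token sequence
    tgt: tuple  # indices into the English token sequence
    tag: str    # alignment-type tag -- values here are illustrative

arabic = ["ktb", "alwld", "rsalp"]  # romanized placeholder tokens
english = ["the", "boy", "wrote", "a", "letter"]

links = [
    AlignmentLink(src=(0,), tgt=(2,), tag="SEM"),    # ktb   <-> wrote
    AlignmentLink(src=(1,), tgt=(0, 1), tag="SEM"),  # alwld <-> the boy
    AlignmentLink(src=(2,), tgt=(3, 4), tag="SEM"),  # rsalp <-> a letter
]

# In this toy example every English token is covered by exactly one link.
covered = sorted(i for link in links for i in link.tgt)
print(covered)  # [0, 1, 2, 3, 4]
```

Representing minimum translation units as index sets like this is what lets one-to-many and many-to-many relations (such as "alwld" aligning to "the boy") be expressed alongside simple one-to-one links.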

This release consists of Arabic source newswire collected by LDC in 2004-2006 and 2008. The distribution by genre, files, words, character tokens and segments appears below:

Language   Genre   Files   Words     CharTokens   Segments
Arabic     NW      1,126   112,318   162,359      5,349

Note that word count is based on the untokenized Arabic source, and token count is based on the tokenized Arabic source.

The Arabic word alignment tasks consisted of the following components:
  • Identifying and correcting incorrect tokenization
  • Identifying different types of links
  • Identifying sentence segments not suitable for annotation, such as those that were blank, incorrectly segmented or contained other languages
  • Tagging unmatched words attached to other words or phrases
GALE Arabic-English Word Alignment Training Part 2 -- Newswire is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.


*

(2) Hispanic-English Database contains approximately 30 hours of English and Spanish conversational and read speech with transcripts (24 hours) and metadata collected from 22 non-native English speakers between 1996 and 1998. The corpus was developed by Entropic Research Laboratory, Inc., a developer of speech recognition and speech synthesis software toolkits that was acquired by Microsoft in 1999.

Participants were adult native speakers of Spanish as spoken in Central America and South America who resided in the Palo Alto, California area, had lived in the United States for at least one year and demonstrated a basic ability to understand, read and speak English. They read a total of 2200 sentences, 50 each in Spanish and English per speaker. The Spanish sentence prompts were a subset of the materials in LATINO-40 Spanish Read News, and the English sentence prompts were taken from the TIMIT database. Conversations were task-oriented, drawing on exercises similar to those used in English second language instruction and designed to engage the speakers in collaborative, problem-solving activities.

Read speech was recorded on two wideband channels with a Shure SM10A head-mounted microphone in a quiet laboratory environment. The conversational speech was recorded simultaneously on four channels, two of which were used to place phone calls to each subject in two separate offices and to record each subject's incoming speech into a separate file. The audio was originally saved in the Entropic Audio (ESPS) format at a 16 kHz sampling rate with 16-bit samples. Audio files were converted from ESPS to FLAC-compressed .wav files; the ESPS headers were removed and are presented in this release as *.hdr files that include demographic and technical data.

Transcripts were developed with the Entropic Annotator tool and are time-aligned with speaker turns. The transcription conventions were based on those used in the LDC Switchboard and CALLHOME collections. Transcript files are denoted with a .lab extension.

Hispanic-English Database is distributed on one DVD-ROM.

2014 Subscription Members will automatically receive two copies of this data. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) HyTER Networks of Selected OpenMT08/09 Progress Set Sentences was developed by SDL and contains HyTER (Hybrid Translation Edit Rate) networks for 102 selected source Arabic and Chinese sentences from OpenMT08 and OpenMT09 Progress Set data. HyTER is an evaluation metric based on large reference networks created by an annotation tool that allows users to develop an exponential number of correct translations for a given sentence. Reference networks can be used as a foundation for developing improved machine translation evaluation metrics and for automating the evaluation of human translation efficiency.

The source material comprises Arabic and Chinese newswire and web data collected by LDC in 2007. Annotators created meaning-equivalent annotations under three annotation protocols. In the first protocol, native speakers of the foreign language built English networks starting from foreign language sentences. In the second, native English speakers built English networks from the best translation of a foreign language sentence, as identified by NIST (the National Institute of Standards and Technology). In the third, native English speakers again built English networks starting from the best translation, but also had access to three additional, independently produced human translations. Networks created by different annotators for each sentence were combined and evaluated.
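A reference network of this kind is essentially a lattice (a directed acyclic graph) over word choices, which is how a compact structure can encode an exponential number of correct translations. A minimal sketch of counting the distinct translations encoded by a toy lattice (the representation below is invented for illustration; the corpus has its own network format):

```python
from functools import lru_cache

# A toy lattice: each node maps to its outgoing (phrase, next_node) edges.
# "S" is the start node and "E" the end node.
lattice = {
    "S": [("the", "A"), ("this", "A")],
    "A": [("treaty", "B"), ("agreement", "B"), ("accord", "B")],
    "B": [("was signed", "E"), ("was ratified", "E")],
    "E": [],
}


@lru_cache(maxsize=None)
def count_paths(node: str) -> int:
    """Number of distinct word sequences from `node` to the end node."""
    if node == "E":
        return 1
    return sum(count_paths(nxt) for _, nxt in lattice[node])


print(count_paths("S"))  # 12 distinct references (2 * 3 * 2)
```

Even this four-node toy encodes 12 references; with longer sentences and more alternatives per span, the number of encoded translations grows multiplicatively, which is the property HyTER exploits for evaluation.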

This release includes the source sentences and four human reference translations produced by LDC in XML format, along with five machine translation system outputs representing a variety of system architectures and performance, and the human post-edited output of those systems also presented in XML.

HyTER Networks of Selected OpenMT08/09 Progress Set Sentences is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.