Monday, December 15, 2014

LDC 2014 December Newsletter

Renew your LDC membership today

Spring 2015 LDC Data Scholarship Program - deadline approaching

Reduced fees for Treebank-2 and Treebank-3 

LDC to close for Winter Break

New publications:

Renew your LDC membership today

Membership Year 2015 (MY2015) discounts are available for those who keep their membership current and join early in the year. Check here for further information, including our planned publications for MY2015.

Now is also a good time to consider joining LDC for the current and open membership years, MY2014 and MY2013. MY2014 offers members an impressive 37 publications, including UN speech data, the 2009 NIST LRE test set, 2007 ACE multilingual data, and multi-channel WSJ audio. MY2013 remains open through the end of the 2014 calendar year; its publications include Mixer 6 speech, Greybeard, UN parallel text, and CSC Deceptive Speech, as well as updates to Chinese Treebank and Chinese Proposition Bank. For full descriptions of these data sets, visit our Catalog.

Spring 2015 LDC Data Scholarship Program - deadline approaching
The deadline for the Spring 2015 LDC Data Scholarship Program is right around the corner! Student applications are being accepted now through January 15, 2015, 11:59PM EST. The LDC Data Scholarship program provides university students with access to LDC data at no cost. This program is open to students pursuing both undergraduate and graduate studies in an accredited college or university. LDC Data Scholarships are not restricted to any particular field of study; however, students must demonstrate a well-developed research agenda and a bona fide inability to pay.

Students will need to complete an application which consists of a data use proposal and letter of support from their adviser. For further information on application materials and program rules, please visit the LDC Data Scholarship page.

Students can email their applications to the LDC Data Scholarships program. Decisions will be sent by email from the same address.

Reduced fees for Treebank-2 and Treebank-3
Treebank-2 (LDC95T7) and Treebank-3 (LDC99T42) are now available to non-members at reduced fees, US$1500 for Treebank-2 and US$1700 for Treebank-3, reductions of 52% and 47%, respectively.

LDC to close for Winter Break
LDC will be closed from December 25, 2014 through January 2, 2015 in accordance with the University of Pennsylvania Winter Break Policy. Our offices will reopen on January 5, 2015. Requests received for membership renewals and corpora during the Winter Break will be processed at that time.

Best wishes for a relaxing holiday season!

New publications

(1) Benchmarks for Open Relation Extraction was developed by the University of Alberta and contains annotations for approximately 14,000 sentences from The New York Times Annotated Corpus (LDC2008T19) and Treebank-3 (LDC99T42). This corpus was designed to contain benchmarks for the task of open relation extraction (ORE), along with sample extractions from ORE methods and evaluation scripts for computing a method's precision and recall.

ORE attempts to extract all relations described in a corpus without relying on relation-specific training data. The traditional approach to relation extraction requires substantial training effort for each relation of interest, which can be impractical for massive collections such as those found on the web. Open relation extraction offers an alternative by extracting previously unseen relations as they are encountered. It does not require training data for any particular relation, making it suitable for applications that require a large (or even unknown) number of relations. Results published in the ORE literature are often not comparable due to the lack of reusable annotations and differences in evaluation methodology. The goal of this benchmark data set is to provide annotations that are flexible enough to evaluate a wide range of methods.

Binary and n-ary relations were extracted from the text sources. Sentences were annotated for binary relations both manually and automatically. In the manual annotation, annotators identified two entities in each sentence and, if a relation existed between them, a trigger (a single token indicating that relation). A window of tokens allowed to appear in a relation was also specified, including modifiers of the trigger and prepositions connecting the trigger to its arguments. For each sentence annotated with two entities, a system must extract a string representing the relation between them; the evaluation method deemed an extraction correct if it contained the trigger and only allowed tokens. The automatic annotator identified pairs of entities and a trigger of the relation between them; the evaluation script for that experiment deemed an extraction correct if it contained the annotated trigger. For n-ary relations, sentences were annotated with one relation trigger and all of its arguments. An extracted argument was deemed correct if it was annotated in the sentence.
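
To make the scoring rule concrete, here is a minimal Python sketch of the binary-relation evaluation described above. The function and field names are illustrative assumptions, not the corpus's actual evaluation scripts.

    # Minimal sketch of the binary-relation scoring rule: an extraction is
    # correct if it contains the annotated trigger and uses only tokens from
    # the allowed window. Names are illustrative, not the release's scripts.
    def is_correct(extraction_tokens, trigger, allowed_tokens):
        return (trigger in extraction_tokens
                and all(t in allowed_tokens for t in extraction_tokens))

    def precision_recall(extractions, gold):
        # extractions: one token list (or None) per annotated sentence
        # gold: list of (trigger, allowed_tokens) pairs
        correct = sum(1 for ext, (trig, allowed) in zip(extractions, gold)
                      if ext is not None and is_correct(ext, trig, allowed))
        predicted = sum(1 for ext in extractions if ext is not None)
        return (correct / predicted if predicted else 0.0,
                correct / len(gold) if gold else 0.0)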

Benchmarks for Open Relation Extraction is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data provided they have completed the user agreement. 2014 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.
*

(2) Fisher and CALLHOME Spanish--English Speech Translation was developed at Johns Hopkins University and contains English reference translations and speech recognizer output (in various forms) that complement the LDC Fisher Spanish (LDC2010T04) and CALLHOME Spanish audio and transcript releases (LDC96T17). Together, they make a four-way parallel text dataset representing approximately 38 hours of speech, with defined training, development, and held-out test sets.

The source data are the Fisher Spanish and CALLHOME Spanish corpora developed by LDC, comprising transcribed telephone conversations between (mostly native) Spanish speakers in a variety of dialects. The Fisher Spanish data set consists of 819 transcribed conversations on an assortment of provided topics, primarily between strangers, resulting in approximately 160 hours of speech aligned at the utterance level, with 1.5 million tokens. The CALLHOME Spanish corpus comprises 120 transcripts of spontaneous conversations, primarily between friends and family members, resulting in approximately 20 hours of speech aligned at the utterance level, with just over 200,000 words (tokens) of transcribed text.

Translations were obtained by crowdsourcing using Amazon's Mechanical Turk, after which the data was split into training, development, and test sets. The CALLHOME data set defines its own data splits, organized into train, devtest, and evltest, which were retained here. For the Fisher material, four data splits were produced: a large training section and three test sets. These test sets correspond to portions of the data where four translations exist.

Fisher and CALLHOME Spanish--English Speech Translation is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) GALE Phase 3 Chinese Broadcast Conversation Speech Part 1 was developed by LDC and comprises approximately 126 hours of Mandarin Chinese broadcast conversation speech collected in 2007 by LDC and the Hong Kong University of Science and Technology (HKUST), Hong Kong, during Phase 3 of the DARPA GALE (Global Autonomous Language Exploitation) Program.

Corresponding transcripts are released as GALE Phase 3 Chinese Broadcast Conversation Transcripts Part 1 (LDC2014T28).

Broadcast audio for the GALE program was collected at LDC’s Philadelphia, PA USA facilities and at three remote collection sites: HKUST (Chinese), Medianet (Tunis, Tunisia) (Arabic), and MTC (Rabat, Morocco) (Arabic). The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources for a total of over 30,000 hours of collected broadcast audio over the life of the program. HKUST collected Chinese broadcast programming using its internal recording system and a portable broadcast collection platform designed by LDC and installed at HKUST in 2006.

The broadcast conversation recordings in this release feature interviews, call-in programs, and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Anhui Province, China; Beijing TV, a national television station in China; China Central TV (CCTV), a Chinese national and international broadcaster; Hubei TV, a regional broadcaster in Hubei Province, China; and Phoenix TV, a Hong Kong-based satellite television station.

This release contains 217 audio files presented in FLAC-compressed Waveform Audio File format (.flac), 16000 Hz single-channel 16-bit PCM. Each file was audited by a native Chinese speaker following Audit Procedure Specification Version 2.0, which is included in this release. The broadcast auditing process served three principal goals: as a check on the operation of the broadcast collection system equipment, by identifying failed, incomplete, or faulty recordings; as an indicator of broadcast schedule changes, by identifying instances when the incorrect program was recorded; and as a guide for data selection, by retaining information about a program’s genre, data type, and topic.
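
For those planning to process the audio, here is a minimal sketch of loading one of these files, assuming the third-party soundfile library and a placeholder filename:

    # Minimal sketch: decode one FLAC file from this release.
    # Assumes the third-party 'soundfile' library; filename is a placeholder.
    import soundfile as sf

    data, rate = sf.read("example_program.flac")
    assert rate == 16000                 # release audio is 16 kHz, mono
    print(len(data) / rate, "seconds of audio")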

GALE Phase 3 Chinese Broadcast Conversation Speech Part 1 is distributed on 2 DVD-ROM.

2014 Subscription Members will automatically receive two copies of this data.  2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(4) GALE Phase 3 Chinese Broadcast Conversation Transcripts Part 1 was developed by LDC and contains transcriptions of approximately 126 hours of Chinese broadcast conversation speech collected in 2007 by LDC and the Hong Kong University of Science and Technology (HKUST), Hong Kong, during Phase 3 of the DARPA GALE (Global Autonomous Language Exploitation) Program.

Corresponding audio data is released as GALE Phase 3 Chinese Broadcast Conversation Speech Part 1 (LDC2014S09).

The source broadcast conversation recordings feature interviews, call-in programs and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Anhui Province, China; Beijing TV, a national television station in China; China Central TV (CCTV), a Chinese national and international broadcaster; Hubei TV, a regional television station in Hubei Province, China; and Phoenix TV, a Hong Kong-based satellite television station.

The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 1,556,904 tokens. The transcripts were created with XTrans, an LDC-developed multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings. XTrans is available at https://www.ldc.upenn.edu/language-resources/tools/xtrans.
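
A minimal sketch of iterating over one of these tab-delimited transcript files follows; the column layout mentioned in the comment is an assumption for illustration, so consult the documentation included with the release for the actual field inventory.

    # Minimal sketch: read a UTF-8, tab-delimited (TDF) transcript file.
    # The column layout is an assumption; see the release documentation.
    import csv

    with open("example_transcript.tdf", encoding="utf-8", newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            # A row typically carries fields such as file, channel, start
            # and end times, speaker, and the transcribed text.
            print(row)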

The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC's quick transcription guidelines (QTR) and quick rich transcription specification (QRTR), both of which are included in the documentation with this release. QTR transcription consists of quick (near-)verbatim, time-aligned transcripts plus speaker identification, with minimal additional mark-up; it does not include sentence unit annotation. QRTR annotation adds structural information, such as topic boundaries and manual sentence unit annotation, to the core components of a quick transcript. Files with QTR in the filename were produced using QTR transcription; files with QRTR in the filename were produced using QRTR annotation.
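
As a trivial application of that naming convention, a hedged sketch that sorts transcript files by transcription style (the directory name and extension are placeholders):

    # Minimal sketch: classify transcript files by transcription style
    # from the filename convention described above. Paths are placeholders.
    from pathlib import Path

    for path in sorted(Path("transcripts").rglob("*.tdf")):
        name = path.name.upper()
        style = "QRTR" if "QRTR" in name else ("QTR" if "QTR" in name else "unknown")
        print(path.name, style)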

GALE Phase 3 Chinese Broadcast Conversation Transcripts Part 1 is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc.  2014 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.

Thursday, May 15, 2014

LDC May 2014 Newsletter

LDC at LREC 2014

New publications:
GALE Arabic-English Word Alignment Training Part 2 -- Newswire  
Hispanic-English Database  
HyTER Networks of Selected OpenMT08/09 Progress Set Sentences  



LDC at LREC 2014
LDC will attend the 9th Language Resources and Evaluation Conference (LREC 2014), hosted by ELRA, the European Language Resources Association. The conference will be held in Reykjavik, Iceland from May 26-31 and features a broad range of sessions on language resources and human language technology research. Ten LDC staff members will be presenting current work on topics including the Language Application Grid project, collecting natural SMS and chat conversations in multiple languages, incorporating alternate translations into English translation treebanks, supporting HLT research with degraded audio data, developing an Egyptian Arabic Treebank, and more.

Following the conference, LDC’s presented papers and posters will be available on LDC’s Papers Page.


New publications
(1) GALE Arabic-English Word Alignment Training Part 2 -- Newswire was developed by LDC and contains 162,359 tokens of word-aligned Arabic and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

Some approaches to statistical machine translation incorporate linguistic knowledge into word-aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations using minimum-match and attachment annotation approaches. The tagging scheme defines a set of word tags and alignment link tags to describe these translation units and relations, adding contextual, syntactic, and language-specific features to the alignment annotation.
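
As a rough illustration of the two-layer scheme, the following hedged Python sketch shows how an aligned sentence pair with link tags might be represented in memory; the classes and the example tag value are hypothetical, not the corpus's actual file format.

    # Hypothetical in-memory representation of the two-layer scheme
    # (alignment links plus tags); not the corpus's actual file format.
    from dataclasses import dataclass, field

    @dataclass
    class AlignmentLink:
        source_indices: list      # positions of Arabic tokens in the unit
        target_indices: list      # positions of English tokens in the unit
        link_tag: str             # describes the translation relation

    @dataclass
    class AlignedSentence:
        source_tokens: list
        target_tokens: list
        links: list = field(default_factory=list)

    pair = AlignedSentence(["ktAb"], ["book"],
                           [AlignmentLink([0], [0], "semantic")])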

This release consists of Arabic source newswire collected by LDC in 2004 - 2006 and 2008. The distribution by genre, files, words, character tokens and segments appears below:

  Language   Genre   Files   Words     CharTokens   Segments
  Arabic     NW      1,126   112,318   162,359      5,349

Note that word count is based on the untokenized Arabic source, and token count is based on the tokenized Arabic source.
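
A quick illustration of that distinction, using whitespace splitting and an invented clitic-segmentation example (purely illustrative; LDC's counting scripts are not described here):

    # Illustrative only: a 'word' is one whitespace-delimited unit of the
    # untokenized source; tokenization splits off clitics, raising the count.
    untokenized = "wAlktAb"           # "and the book" as one written word
    tokenized = "w+ Al+ ktAb"         # the same word after clitic splitting
    print(len(untokenized.split()))   # 1 word
    print(len(tokenized.split()))     # 3 tokens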

The Arabic word alignment tasks consisted of the following components:
  • Identifying and correcting tokenization errors
  • Identifying different types of links
  • Identifying sentence segments not suitable for annotation, such as those that were blank, incorrectly segmented, or contained other languages
  • Tagging unmatched words attached to other words or phrases
GALE Arabic-English Word Alignment Training Part 2 -- Newswire is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.


*

(2) Hispanic-English Database contains approximately 30 hours of English and Spanish conversational and read speech, with transcripts covering 24 of those hours, plus metadata, collected from 22 non-native English speakers between 1996 and 1998. The corpus was developed by Entropic Research Laboratory, Inc., a developer of speech recognition and speech synthesis software toolkits that was acquired by Microsoft in 1999.

Participants were adult native speakers of Spanish as spoken in Central and South America who resided in the Palo Alto, California area, had lived in the United States for at least one year, and demonstrated a basic ability to understand, read, and speak English. Each speaker read 100 sentences, 50 in Spanish and 50 in English, for a total of 2,200 read sentences. The Spanish sentence prompts were a subset of the materials in LATINO-40 Spanish Read News, and the English sentence prompts were taken from the TIMIT database. Conversations were task-oriented, drawing on exercises similar to those used in English second language instruction and designed to engage the speakers in collaborative, problem-solving activities.

Read speech was recorded on two wideband channels with a Shure SM10A head-mounted microphone in a quiet laboratory environment. Conversational speech was recorded simultaneously on four channels; two of those channels were used to place phone calls to the subjects, seated in two separate offices, and to record each side's incoming speech into a separate file. The audio was originally saved in the Entropic Audio (ESPS) format with a 16 kHz sampling rate and 16-bit samples. For this release, the audio files were converted from ESPS to FLAC-compressed .wav files; the ESPS headers, which include demographic and technical data, were removed and are presented as *.hdr files.
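
A minimal sketch for pairing each converted audio file with its metadata header, assuming the file extensions described above and a placeholder corpus location:

    # Minimal sketch: pair each .wav file with the *.hdr file holding its
    # demographic and technical metadata. The corpus path is a placeholder.
    from pathlib import Path

    root = Path("hispanic_english_database")
    for wav in sorted(root.rglob("*.wav")):
        hdr = wav.with_suffix(".hdr")
        if hdr.exists():
            print(wav.name, "->", hdr.name)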

Transcripts were developed with the Entropic Annotator tool and are time-aligned with speaker turns. The transcription conventions were based on those used in the LDC Switchboard and CALLHOME collections. Transcript files are denoted with a .lab extension.

Hispanic-English Database is distributed on 1 DVD-ROM.

2014 Subscription Members will automatically receive two copies of this data. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.

*

(3) HyTER Networks of Selected OpenMT08/09 Progress Set Sentences was developed by SDL and contains HyTER (Hybrid Translation Edit Rate) networks for 102 selected source Arabic and Chinese sentences from OpenMT08 and OpenMT09 Progress Set data. HyTER is an evaluation metric based on large reference networks created by an annotation tool that allows users to develop an exponential number of correct translations for a given sentence. Reference networks can be used as a foundation for developing improved machine translation evaluation metrics and for automating the evaluation of human translation efficiency.

The source material is comprised of Arabic and Chinese newswire and web data collected by LDC in 2007. Annotators created meaning-equivalent annotations under three annotation protocols. In the first protocol, foreign language native speakers built English networks starting from foreign language sentences. In the second, English native speakers built English networks from the best translation of a foreign language sentence as identified by NIST (National Institute of Standards and Technology). In the third protocol, English native speakers built English networks starting from the best translation, but those annotators also had access to three additional, independently produced human translations. Networks created by different annotators for each sentence were combined and evaluated.
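
To give a feel for network-based scoring, here is a hedged conceptual sketch that takes the minimum word-level edit distance between a hypothesis and any path through a tiny invented reference network. Real HyTER networks encode exponentially many paths and are scored with dynamic programming rather than enumeration.

    # Conceptual sketch only: score a hypothesis against a tiny reference
    # network by minimum edit distance over enumerated paths. Real HyTER
    # networks are far too large for enumeration.
    from itertools import product

    def edit_distance(a, b):
        d = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            prev, d[0] = d[0], i
            for j, y in enumerate(b, 1):
                prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                       prev + (x != y))
        return d[-1]

    # Each slot lists interchangeable alternatives (an invented network).
    network = [["the"], ["officials", "authorities"], ["met", "gathered"]]
    hyp = "the officials have met".split()
    print(min(edit_distance(hyp, list(path))
              for path in product(*network)))   # -> 1 (one extra word)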

This release includes the source sentences and four human reference translations produced by LDC, in XML format, along with five machine translation system outputs representing a variety of system architectures and performance levels, and the human post-edited output of those systems, also in XML.

HyTER Networks of Selected OpenMT08/09 Progress Set Sentences is distributed via web download.

2014 Subscription Members will automatically receive two copies of this data on disc. 2014 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for a fee.