Thursday, October 18, 2012

LDC October 2012 Newsletter


New publications:
LDC2012T20
LDC2012T18




Fall 2012 LDC Data Scholarship Recipients
LDC is pleased to announce the student recipients of the Fall 2012 LDC Data Scholarship program!  This program provides university and college students with access to LDC data at no cost. Students were asked to complete an application consisting of a proposal describing their intended use of the data, as well as a letter of support from their thesis adviser. We received many solid applications and have chosen six proposals to support. The following students will receive no-cost copies of LDC data:
Jaffar Atwan - National University of Malaysia (Malaysia), PhD candidate, Information Science and Technology.  Jaffar has been awarded a copy of Arabic Newswire Part 1 (LDC2001T55) for his work in information retrieval.

Sarath Chandar - Indian Institute of Technology, Madras (India), MS candidate, Computer Science and Engineering.  Sarath has been awarded a copy of Treebank-3 (LDC99T42) for his work in grammar induction.

Kuruvachan K. George - Amrita Vishwa Vidyapeetham (India), PhD candidate, Electrical and Computer Engineering.  Kuruvachan has been awarded a copy of Fisher English Part 2 (LDC2005S13/T19) and 2008 NIST Speaker Recognition Evaluation data (LDC2011S05/07/08/11) for his work in speaker recognition.
Eduardo Motta - Pontifícia Universidade Católica do Rio de Janeiro (Brazil), PhD candidate, Information Sciences.  Eduardo has been awarded a copy of English Web Treebank (LDC2012T13) for his work in machine learning.
Genevieve Sapijaszko - University of Central Florida (USA), PhD candidate, Electrical and Computer Engineering.  Genevieve has been awarded a copy of the TIMIT Acoustic-Phonetic Continuous Speech Corpus (LDC93S1) and YOHO Speaker Verification (LDC94S16) for her work in digital signal processing.

John Steinberg - Temple University (USA), MS candidate, Electrical and Computer Engineering.  John has been awarded a copy of CALLHOME Mandarin Chinese Lexicon (LDC96L15) and CALLHOME Mandarin Chinese Transcripts (LDC96T16) for his work in speech recognition.
LDC Exhibiting at NWAV 41
LDC will be exhibiting at the 41st New Ways of Analyzing Variation Conference (NWAV 41) in late October. This marks the fifth time that LDC has been an NWAV exhibitor and we are proud to show our continued support of the sociolinguistic research community.
The conference runs from October 25-28 and the exhibition hall will be open from October 26-28, 2012. Please stop by to say hello!

LDC 20th Anniversary Workshop Wrap-up
In early September, LDC hosted a workshop entitled “The Future of Language Resources” in celebration of our 20th anniversary. Visit the Program page to browse speaker abstracts and to access PDFs of the presentations. Thanks to the speakers and attendees for making the workshop a success!

LDC 20th Anniversary Podcasts
To further celebrate our 20th Anniversary, LDC is interviewing long-time staff members for their unique perspectives on the Consortium’s growth and evolution over the past two decades. The first interview podcast debuts this month and features Dave Graff, LDC’s Lead Programmer. Visit the LDC blog to access the podcast.
Other podcasts will be published via the LDC blog, so stay tuned to that space.

Language Resource Wiki
The Language Resource Wiki catalogs data, software, descriptive grammars and other resources for a variety of languages, especially those with a paucity of generally available research resources. The wiki currently has resource listings for Bengali, Berber, Breton, Ewe, Greek (Ancient), Indonesian, Hindi, Latin, Panjabi, Pashto, Sorani (Central Kurdish), Russian, Tagalog, Tamil, and Urdu, and for the following Sign Languages: American, British, Catalan, Dutch, Flemish, German, Japanese, New Zealand, Polish, Spanish, and Swiss German. LDC is actively seeking editors knowledgeable in these and other languages to develop and maintain the pages, which are readable by anyone but writable only by editors.

New publications
(1) GALE Chinese-English Word Alignment and Tagging Training Part 2 -- Newswire was developed by LDC and contains 169,080 tokens of word-aligned Chinese and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.
Some approaches to statistical machine translation include the incorporation of linguistic knowledge in word-aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations by using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags are designed in the tagging scheme to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation.
The Chinese word alignment tasks consisted of the following components:
-Identifying, aligning, and tagging 8 different types of links
-Identifying, attaching, and tagging local-level unmatched words
-Identifying and tagging sentence/discourse-level unmatched words
-Identifying and tagging all instances of Chinese 的 (DE) except when they were a part of a semantic link.
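The alignment annotation can be pictured as sets of source and target token indices joined by typed links. Below is a minimal Python sketch of such a representation; the sentence pair, the simple one-to-one links and the tag names ("SEM" for semantic links, "FUN" for function links) are invented for illustration and do not reflect the corpus's actual file format or tag inventory.

```python
# Illustrative only: a toy word-aligned sentence pair with link tags.
chinese = ["这", "是", "一个", "例子"]        # source tokens ("this is an example")
english = ["this", "is", "an", "example"]    # target tokens

# Each link joins a set of source indices to a set of target indices
# and carries a link-type tag (tag names assumed, not the GALE set).
alignment = [
    {"src": [0], "tgt": [0], "tag": "SEM"},   # 这 <-> this
    {"src": [1], "tgt": [1], "tag": "SEM"},   # 是 <-> is
    {"src": [2], "tgt": [2], "tag": "FUN"},   # 一个 <-> an
    {"src": [3], "tgt": [3], "tag": "SEM"},   # 例子 <-> example
]

for link in alignment:
    src = " ".join(chinese[i] for i in link["src"])
    tgt = " ".join(english[j] for j in link["tgt"])
    print(f"{src} <-> {tgt} [{link['tag']}]")
```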
GALE Chinese-English Word Alignment and Tagging Training Part 2 -- Newswire is distributed via web download. 2012 Subscription Members will automatically receive two copies of this data on disc. 2012 Standard Members may request a copy as part of their 16 free membership corpora. 
*

(2) GALE Phase 2 Arabic Broadcast News Parallel Text was developed by LDC. Along with other corpora, the parallel text in this release comprised training data for Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program. This corpus contains Modern Standard Arabic source text and corresponding English translations selected from broadcast news (BN) data collected by LDC between 2005 and 2007 and transcribed by LDC or under its direction.
GALE Phase 2 Arabic Broadcast News Parallel Text includes seven source-translation pairs, comprising 29,210 words of Arabic source text and its English translation. Data is drawn from six distinct Arabic programs broadcast between 2005 and 2007 from Abu Dhabi TV, based in Abu Dhabi, United Arab Emirates; Al Alam News Channel, based in Iran; Aljazeera, a regional broadcast programmer based in Doha, Qatar; Dubai TV, based in Dubai, United Arab Emirates; and Kuwait TV, a national television station based in Kuwait. The BN programming in this release focuses on current events topics. 
The files in this release were transcribed by LDC staff and/or transcription vendors under contract to LDC in accordance with the Quick Rich Transcription guidelines developed by LDC. Transcribers indicated sentence boundaries in addition to transcribing the text. Data was manually selected for translation according to several criteria, including linguistic features, transcription features and topic features. The transcribed and segmented files were then reformatted into a human-readable translation format and assigned to translation vendors. Translators followed LDC's Arabic to English translation guidelines. Bilingual LDC staff performed quality control procedures on the completed translations.
GALE Phase 2 Arabic Broadcast News Parallel Text is distributed via web download. 2012 Subscription Members will automatically receive two copies of this data on disc. 2012 Standard Members may request a copy as part of their 16 free membership corpora. 

Thursday, October 11, 2012

LDC 20th Anniversary Podcasts: David Graff

As part of our 20th Anniversary celebrations, LDC is conducting interviews of long-time staff members for their unique perspectives on the Consortium's growth and evolution over the past two decades and for some insights into the future. We expect to make these interviews available as audio, video and text. The interviews are conducted by John Vogel, LDC part-time staffer, musician and video artist.

We begin with a series of podcasts. The first podcast features David Graff, LDC's Lead Programmer. Dave has been at LDC since its first days as a small organization occupying one of the many offices in the University of Pennsylvania's Williams Hall. Dave has been involved in many aspects of LDC's work over the years; he currently designs tools that support corpus creation, annotation and quality assessment and has a direct role in the production of most LDC publications.

We hope you enjoy Dave's reflections on life at LDC.

Click here for Dave's podcast.

Monday, September 17, 2012

LDC September 2012 Newsletter

New publications
LDC2012T16
LDC2012T15



The Future of Language Resources: LDC 20th Anniversary Workshop Summary 
Thanks to the members, friends and staff who made our 20th Anniversary Workshop (September 6-7) a fruitful and fun experience. The speakers -- from academia, industry and government -- engaged participants and provoked discussion with their talks about the ways in which language resources contribute to research in language-related fields and other disciplines and with their insights into the future. The result was much food for thought as we enter our third decade.
Visit the workshop page for the proceedings and to learn more about the event.
English Treebanking at LDC
As part of our 20th anniversary celebration, the coming newsletters will include features that provide an overview of the broad range of LDC’s activities. This month, we'll examine English treebanking efforts at LDC. The English treebanking team is led by Ann Bies, Senior Research Coordinator. The association of treebanks with LDC began with the publication of the original Penn English Treebank (Treebank-2) in 1995. Since that time the need for new varieties of English treebank data has continued to grow, and LDC has expanded its expertise to address new research challenges. This includes the development of treebanked data for additional domains, including conversational speech and web text, as well as the creation of parallel treebank data.
Speech data presents unique challenges, such as disfluencies and hesitations, that are not found in edited text. Penn Treebank contains conversational speech data from the Switchboard telephone collection which has been tagged, disfluency-annotated, and parsed. LDC’s more recent publication, English CTS Treebank with Structural Metadata, builds on that annotation and includes new data. The development of that corpus was motivated by the need to have both structural metadata and syntactic structure annotated in order to support work on speech parsing and structural event detection. The annotation involved a two-pass approach to annotating metadata, speech effects and syntactic structure in transcribed conversational speech: separately annotating for structural metadata, or structural events, and for syntactic structure. The two annotations were then combined into a single aligned representation.
Also recently, LDC has undertaken complex syntactic annotation of data collected over the web. Since most parsers are trained using newswire, they achieve better accuracy on similar heavily edited texts. LDC, through a gift from Google Inc., developed English Web Treebank to improve parsing, translation and information extraction on unedited domains, such as blogs, newsgroups, and consumer reviews. LDC’s annotation guidelines were adapted to handle unique features of web text such as inconsistent punctuation and capitalization as well as the increased use of slang, technical jargon and ungrammatical sentences.
LDC and its research partners are also involved in the creation of parallel treebanks used for word alignment tasks.  Parallel treebanks are annotated morphological and syntactic structures that are aligned at sentence as well as sub-sentence levels. These resources are used for improving machine translation quality. To create such treebanks, English files (translated from the source Arabic or Chinese) are first automatically part-of-speech tagged and parsed and then hand-corrected at each stage. The quality control process consists of a series of specific searches for over 100 types of potential inconsistency and parser or annotation error. Parallel treebank data in the LDC catalog includes the English Translation Treebank: An Nahar Newswire, whose files are parallel with those in Arabic Treebank: Part 3 v 3.2.
English treebanking at LDC is ongoing; new titles are in progress and will be added to our catalog.


New publications
(1) GALE Chinese-English Word Alignment and Tagging Training Part 1 -- Newswire and Web was developed by LDC and contains 150,068 tokens of word-aligned Chinese and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program. This release consists of Chinese source newswire and web data (newsgroup, weblog) collected by LDC in 2008.
Some approaches to statistical machine translation include the incorporation of linguistic knowledge in word-aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations by using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags are designed in the tagging scheme to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation.
The Chinese word alignment tasks consisted of the following components: 
-Identifying, aligning, and tagging 8 different types of links
-Identifying, attaching, and tagging local-level unmatched words
-Identifying and tagging sentence/discourse-level unmatched words
-Identifying and tagging all instances of Chinese 的 (DE) except when they were a part of a semantic link.
GALE Chinese-English Word Alignment and Tagging Training Part 1 -- Newswire and Web is distributed via web download. 2012 Subscription Members will automatically receive two copies of this data on CD. 2012 Standard Members may request a copy as part of their 16 free membership corpora.  
*
(2) MADCAT Phase 1 Training Set contains all training data created by LDC to support Phase 1 of the DARPA MADCAT Program. The data in this release consists of handwritten Arabic documents scanned at high resolution and annotated for the physical coordinates of each line and token. Digital transcripts and English translations of each document are also provided, with the various content and annotation layers integrated in a single MADCAT XML output. 
The goal of the MADCAT program is to automatically convert foreign text images into English transcripts. MADCAT Phase 1 data was collected by LDC from Arabic source documents in three genres: newswire, weblog and newsgroup text. Arabic-speaking "scribes" copied documents by hand, following specific instructions on writing style (fast, normal, careful), writing implement (pen, pencil) and paper (lined, unlined). Prior to assignment, source documents were processed to optimize their appearance for the handwriting task, which resulted in some original source documents being broken into multiple "pages" for handwriting. Each resulting handwritten page was assigned to up to five independent scribes, using different writing conditions.
The handwritten, transcribed documents were checked for quality and completeness, then each page was scanned at high resolution (600 dpi, greyscale) to create a digital version of the handwritten document. The scanned images were then annotated to indicate the physical coordinates of each line and token. Explicit reading order was also labeled, along with any errors produced by the scribes when copying the text.
The final step was to produce a unified data format that takes multiple data streams and generates a single XML output file which contains all required information. The resulting XML file has these distinct components: a text layer that consists of the source text, tokenization and sentence segmentation; an image layer that consists of bounding boxes; a scribe demographic layer that consists of scribe ID and partition (train/test); and a document metadata layer. This release includes 9693 annotation files in MADCAT XML format (.madcat.xml) along with their corresponding scanned image files in TIFF format.
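As a rough illustration of how one might walk such a layered file with standard tools, here is a hedged Python sketch. The element and attribute names below are assumptions for illustration only, not the released MADCAT XML schema; consult the corpus documentation for the actual element inventory.

```python
# A sketch only: element/attribute names are assumed, not the real schema.
import xml.etree.ElementTree as ET

def summarize(path):
    root = ET.parse(path).getroot()
    # Text layer: print each source-text token (assumed tag name).
    for token in root.iter("token"):
        print("token:", token.get("id"), token.text)
    # Image layer: print each bounding box (assumed tag/attribute names).
    for box in root.iter("bounding-box"):
        print("box:", box.get("x"), box.get("y"),
              box.get("width"), box.get("height"))

# summarize("document_0001.madcat.xml")   # hypothetical file name
```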
MADCAT Phase 1 Training Set is distributed on two DVD-ROMs. 2012 Subscription Members will automatically receive two copies of this data. 2012 Standard Members may request a copy as part of their 16 free membership corpora.

Thursday, August 16, 2012

LDC August 2012 Newsletter


New publications:

LDC2012T13 English Web Treebank
LDC2012T14 GALE Phase 2 Arabic Broadcast Conversation Parallel Text Part 2
LDC2012T12 Spanish TimeBank 1.0


LDC and Google Collaboration Results in New Syntactically-Annotated Language Resources
Google Inc. and the Linguistic Data Consortium (LDC) have collaborated to develop new syntactically-annotated language resources that enable computers to better understand human language. The project, funded through a gift from Google in 2010, has resulted in the development of the English Web Treebank (LDC2012T13), containing over 250,000 words of weblogs, newsgroups, email, reviews and question-answers manually annotated for syntactic structure. This resource will allow language technology researchers to develop and evaluate the robustness of parsing methods in various new web domains. It was used in the 2012 shared task on parsing English web text for the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL) which took place at NAACL-HLT in Montreal on June 8, 2012. The English Web Treebank is available to the research community through LDC’s Catalog.

Natural language processing (NLP) is a field of computational linguistic research concerned with the interactions between human language and computers. Parsing is a discipline within NLP in which computers analyze text and determine its syntactic structure. While syntactic parsing is already practically useful, Google funded this effort to help the research community develop better parsers for web text. The web texts collected and annotated by LDC provide new, diverse data for training parsing systems.

Google chose LDC for this work based on the Consortium’s experience in developing and creating syntactic annotations, also known as treebanks. Treebanks are critically important to parsing research since they provide human-analyzed sentence structures that facilitate training and testing scenarios in NLP research. This work extends the existing relationship between LDC and Google.  LDC has published four other Google-developed data sets in the past six years: English, Chinese, Japanese and European language n-grams used principally for language modeling.
 
 
 The Future of Language Resources: LDC 20th Anniversary Workshop
 
LDC’s 20th Anniversary Workshop is rapidly approaching! The event will take place on the University of Pennsylvania’s campus on September 6-7, 2012.
 
Workshop themes include: the developments in human language technologies and associated resources that have brought us to our current state; the language resources required by the technical approaches taken and the impact of these resources on HLT progress; the applications of HLT and resources to other disciplines including law, medicine, economics, the political sciences and psychology; the impact of HLTs and related technologies on linguistic analysis and novel approaches in fields as widespread as phonetics, semantics, language documentation, sociolinguistics and dialect geography; and the impact of any of these developments on the ways in which language resources are created, shared and exploited and on the specific resources required.
 
Please read more here.

Fall 2012 LDC Data Scholarship Program

Applications are now being accepted through September 17, 2012, 11:59PM EST for the Fall 2012 LDC Data Scholarship program! The LDC Data Scholarship program provides university students with access to LDC data at no cost. During previous program cycles, LDC has awarded no-cost copies of LDC data to over 20 individual students and student research groups.

This program is open to students pursuing both undergraduate and graduate studies in an accredited college or university. LDC Data Scholarships are not restricted to any particular field of study; however, students must demonstrate a well-developed research agenda and a bona fide inability to pay. The selection process is highly competitive.

The application consists of two parts:

(1) Data Use Proposal. Applicants must submit a proposal describing their intended use of the data. The proposal should state which data the student plans to use and how the data will benefit their research project as well as information on the proposed methodology or algorithm.

Applicants should consult the LDC Corpus Catalog for a complete list of data distributed by LDC. Due to licensing restrictions, a handful of LDC corpora are available only to members of the Consortium. Applicants are advised to select a maximum of one to two datasets; students may apply for additional datasets during the following cycle once they have completed processing of the initial datasets and published or presented work in a juried venue.

(2) Letter of Support. Applicants must submit one letter of support from their thesis adviser or department chair. The letter must confirm that the department or university lacks the funding to pay the full Non-member Fee for the data and verify the student's need for data.

For further information on application materials and program rules, please visit the LDC Data Scholarship page.

Students can email their applications to the LDC Data Scholarship program. Decisions will be sent by email from the same address.

The deadline for the Fall 2012 program cycle is September 17, 2012, 11:59PM EST.

Spotlight on HAVIC

As part of our 20th anniversary celebration, the coming newsletters will include features that provide an overview of the broad range of LDC’s activities. To begin, we'll examine the Heterogeneous Audio Visual Internet Collection (HAVIC), one of the many projects handled by LDC’s Collection/Annotation Group led by Senior Associate Director Stephanie Strassel.

Under the supervision of Senior Research Coordinator Amanda Morris, the HAVIC team is developing a large corpus of unconstrained multimedia data drawn from user-generated videos on the web and annotated for a variety of features. The HAVIC corpus has been designed with an eye toward providing increased challenges for both acoustic and video processing technologies, focusing on multi-dimensional variation inherent in user-generated content. Over the past three years the corpus has provided training, development and test data for the NIST TRECVID Multimedia Event Detection (MED) Evaluation Track, whose goal is to assemble core detection technologies into a system that can search multimedia recordings for user-defined events based on pre-computed metadata.

For each MED evaluation, LDC and NIST have collaborated to define many new events, including things like “making a cake” or “assembling a shelter”. Each event requires an Event Kit, consisting of a textual description of the event’s properties along with a few exemplar videos depicting the event. A large team of LDC data scouts searches for videos that contain each event, along with videos that are only indirectly or superficially related to defined events plus background videos that are unrelated to any defined event. After finding suitable content, data scouts label each video for a variety of features including the presence of audio, visual or text evidence that a particular event has occurred. This work is done using LDC’s AScout framework, consisting of a browser plug-in, a database backend and processing scripts that together permit data scouts to efficiently search for videos, annotate the multimedia content, and initiate download and post-processing of the data. Collected data is converted to MPEG-4 format, with H.264 video encoding and AAC audio encoding, and the original video resolution and audio/video bitrates are retained.
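The conversion step can be approximated with a standard ffmpeg invocation, sketched below in Python. This is not LDC's actual post-processing code; the file names are hypothetical, and matching the source bitrates exactly would additionally require probing the input (for example with ffprobe). Because no scaling filter is given, the original resolution is preserved.

```python
# A hedged sketch of re-encoding a downloaded video to MPEG-4 with
# H.264 video and AAC audio; requires ffmpeg on the system path.
import subprocess

def to_mp4(src, dst):
    subprocess.run(
        ["ffmpeg", "-i", src,
         "-c:v", "libx264",   # H.264 video encoding
         "-c:a", "aac",       # AAC audio encoding
         dst],
        check=True)

# to_mp4("clip_raw.flv", "clip.mp4")   # hypothetical file names
```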

To date, LDC has collected and labeled well over 100,000 videos as part of the HAVIC Project, and the corpus will ultimately comprise thousands of hours of labeled data. Look for portions of the corpus to appear among LDC’s future releases.

New publications
(1) English Web Treebank was developed by the Linguistic Data Consortium (LDC) with funding through a gift from Google Inc. It consists of over 250,000 words of English weblogs, newsgroups, email, reviews and question-answers manually annotated for syntactic structure and is designed to allow language technology researchers to develop and evaluate the robustness of parsing methods in those web domains.

This release contains 254,830 word-level tokens and 16,624 sentence-level tokens of web text in 1174 files annotated for sentence- and word-level tokenization, part-of-speech, and syntactic structure. The data is roughly evenly divided across five genres: weblogs, newsgroups, email, reviews, and question-answers. The files were manually annotated following the sentence-level tokenization guidelines for web text and the word-level tokenization guidelines developed for English treebanks in the DARPA GALE project. Only text from the subject line and message body of posts, articles, messages and question-answers was collected and annotated.
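For readers new to treebanks, the sketch below shows how a Penn Treebank-style bracketed parse of the kind used in LDC English treebanks can be loaded with the nltk package; the sample sentence is an invented example, not corpus data.

```python
# Parse an invented Penn Treebank-style bracketed tree (requires nltk).
from nltk.tree import Tree

parse = Tree.fromstring(
    "(S (NP (PRP I)) (VP (VBD loved) (NP (DT this) (NN phone))) (. .))")

print(parse.leaves())   # ['I', 'loved', 'this', 'phone', '.']
print(parse.pos())      # [('I', 'PRP'), ('loved', 'VBD'), ...]
parse.pretty_print()    # ASCII rendering of the tree
```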

English Web Treebank is distributed via web download. 2012 Subscription Members will receive two copies of this data on disc. 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data by completing the LDC User Agreement for Non-members. The agreement can be faxed to +1 215 573 2175 or scanned and emailed to this address. The first fifty copies of this publication are being made available at no charge.
*

(2) GALE Phase 2 Arabic Broadcast Conversation Parallel Text Part 2 was developed by LDC. Along with other corpora, the parallel text in this release comprised training data for Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program. This corpus contains Modern Standard Arabic source text and corresponding English translations selected from broadcast conversation (BC) data collected by LDC between 2004 and 2007 and transcribed by LDC or under its direction.

GALE Phase 2 Arabic Broadcast Conversation Parallel Text Part 2 includes 29 source-translation document pairs, comprising 169,488 words of Arabic source text and its English translation. Data is drawn from eight distinct Arabic programs broadcast between 2004 and 2007 from Aljazeera, a regional broadcast programmer based in Doha, Qatar; and Nile TV, an Egyptian broadcaster. The programs in this release focus on current events topics.

The files in this release were transcribed by LDC staff and/or transcription vendors under contract to LDC in accordance with the Quick Rich Transcription guidelines developed by LDC. Transcribers indicated sentence boundaries in addition to transcribing the text. Data was manually selected for translation according to several criteria, including linguistic features, transcription features and topic features. The transcribed and segmented files were then reformatted into a human-readable translation format and assigned to translation vendors. Translators followed LDC's Arabic to English translation guidelines. Bilingual LDC staff performed quality control procedures on the completed translations.

GALE Phase 2 Arabic Broadcast Conversation Parallel Text Part 2 is distributed via web download. 2012 Subscription Members will receive two copies of this data on disc. 2012 Standard Members may request a copy as part of their 16 free membership corpora.
*

(3) Spanish TimeBank 1.0 was developed by researchers at Barcelona Media and consists of Spanish texts in the AnCora corpus annotated with temporal and event information according to the TimeML specification language.

Spanish TimeBank 1.0 contains stand-off annotations for 210 documents with over 75,800 tokens (including punctuation marks) and 68,000 tokens (excluding punctuation). The source documents are news stories and fiction from the AnCora corpus.

The AnCora corpus is the largest multilayer annotated corpus of Spanish and Catalan. AnCora contains 400,000 words in Spanish and 275,000 words in Catalan. The AnCora documents are annotated on many linguistic levels including structure, syntax, dependencies, semantics and pragmatics. That information is not included in this release, but it can be mapped to the present annotations. The corpus is freely available from the Centre de Llenguatge i Computació (CLiC).

Spanish TimeBank 1.0 is distributed by web download. 2012 Subscription Members will receive two copies of this data on disc. 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data by completing the LDC User Agreement for Non-members. The agreement can be faxed to +1 215 573 2175 or scanned and emailed to this address. The publication is being made available at no charge.

LDC and Google Collaboration Results in New Syntactically-Annotated Language Resources



Philadelphia, PA; Mountain View, CA, August 16, 2012 (443 words)

Google Inc. (NASDAQ: GOOG) and the Linguistic Data Consortium (LDC) at the University of Pennsylvania have collaborated to develop new syntactically-annotated language resources that enable computers to better understand human language. The project, funded through a gift from Google in 2010, has resulted in the development of the English Web Treebank LDC2012T13, containing over 250,000 words of weblogs, newsgroups, email, reviews and question-answers manually annotated for syntactic structure. This resource will allow language technology researchers to develop and evaluate the robustness of parsing methods in various new web domains. It was used in the 2012 shared task on parsing English web text for the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL), https://sites.google.com/site/sancl2012/, which took place at NAACL-HLT in Montreal on June 8, 2012. The English Web Treebank is available to the research community through LDC’s Catalog, http://www.ldc.upenn.edu/Catalog/.
Natural language processing (NLP) is a field of computational linguistic research concerned with the interactions between human language and computers. Parsing is a discipline within NLP in which computers analyze text and determine its syntactic structure. While syntactic parsing is already practically useful, Google funded this effort to help the research community develop better parsers for web text. The web texts collected and annotated by LDC provide new, diverse data for training parsing systems.
Google chose LDC for this work based on the Consortium’s experience in developing and creating syntactic annotations, also known as treebanks. Treebanks are critically important to parsing research since they provide human-analyzed sentence structures that facilitate training and testing scenarios in NLP research. This work extends the existing relationship between LDC and Google.  LDC has published four other Google-developed data sets in the past six years: English, Chinese, Japanese and European language n-grams used principally for language modeling.
Google is an industry-leading multinational organization headquartered in Mountain View, CA that develops Internet-based services and products and whose research includes work on NLP technologies. LDC is hosted by the University of Pennsylvania and was founded in 1992 by LDC Director, Dr. Mark Y. Liberman, Christopher H. Browne Distinguished Professor of Linguistics at the University of Pennsylvania. LDC is a nonprofit consortium that produces and distributes linguistic resources to researchers, technology developers and universities around the globe. The Penn Treebank, developed at the University of Pennsylvania over 20 years ago, is distributed by LDC and continues to be an important resource for the NLP community. 
The Google collections, as well as all other LDC data publications, can be found in the LDC Catalog, www.ldc.upenn.edu/Catalog, which contains over 500 holdings.

-30-

Media Contact
Marian Reed
Marketing Coordinator
Linguistic Data Consortium
+1.215.898.2561


Wednesday, July 18, 2012

LDC July 2012 Newsletter

 
New publications:



LDC2012T10 Catalan TimeBank 1.0

LDC 20th Anniversary Workshop 

LDC announces its 20th Anniversary Workshop on Language Resources, to be held in Philadelphia on September 6-7, 2012. The event will commemorate our anniversary, reflect on the beginning of language data centers and address the future of language resources. 

Workshop themes will include: the developments in human language technologies and associated resources that have brought us to our current state; the language resources required by the technical approaches taken and the impact of these resources on HLT progress; the applications of HLT and resources to other disciplines including law, medicine, economics, the political sciences and psychology; the impact of HLTs and related technologies on linguistic analysis and novel approaches in fields as widespread as phonetics, semantics, language documentation, sociolinguistics and dialect geography; and finally, the impact of any of these developments on the ways in which language resources are created, shared and exploited and on the specific resources required.

Stay tuned for further details.
New publications 

(1) American English Nickname Collection was developed by Intelius, Inc. and is a compilation of American English nickname-to-given-name mappings based on information in US government records, public web profiles and financial and property reports. This corpus is intended as a tool for the quantitative study of nickname usage in the United States, such as in demographic and sociological studies.

The American English Nickname Collection contains 331,237 distinct mappings encompassing millions of names. The data was collected and processed through a record linkage pipeline. The steps in the pipeline were (1) data cleaning, (2) blocking, (3) pairwise linkage and (4) clustering. In the cleaning step, material was categorized, processed to remove junk and spam records and normalized to an approximately common representation. The blocking step grouped records by shared properties to determine which record pairs should be examined by the pairwise linker as potential duplicates. The linkage step assigned a score to record pairs using a supervised pairwise machine learning model. The clustering step combined record pairs into connected components and further partitioned each connected component to remove inconsistent pairwise links. As a result, the input records were partitioned into disjoint sets called profiles, where each profile corresponds to a single person.
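A toy sketch of the blocking, pairwise-linkage and clustering stages may help make the pipeline concrete. In the real pipeline the pairwise score came from a supervised model; here a trivial stand-in score, an assumed threshold and three fabricated records are used purely for illustration.

```python
# Toy record-linkage pipeline: blocking -> pairwise scoring -> clustering.
from itertools import combinations
from collections import defaultdict

records = [
    {"id": 1, "first": "william", "last": "smith"},
    {"id": 2, "first": "bill",    "last": "smith"},
    {"id": 3, "first": "mary",    "last": "jones"},
]

# Blocking: only records sharing a blocking key (here, last name)
# are compared pairwise.
blocks = defaultdict(list)
for rec in records:
    blocks[rec["last"]].append(rec)

def score(a, b):
    # Stand-in for the supervised pairwise model described above.
    return 1.0 if a["last"] == b["last"] else 0.0

pairs = [(a["id"], b["id"])
         for block in blocks.values()
         for a, b in combinations(block, 2)
         if score(a, b) >= 0.5]          # assumed link threshold

# Clustering: connected components over linked pairs form "profiles".
parent = {rec["id"]: rec["id"] for rec in records}
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x
for a, b in pairs:
    parent[find(a)] = find(b)

profiles = defaultdict(list)
for rec in records:
    profiles[find(rec["id"])].append(rec["id"])
print(list(profiles.values()))   # [[1, 2], [3]]
```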

The material is presented in the form of a comma-delimited text file. Each line contains a first name, a nickname or alias, its conditional probability and its frequency. The conditional probability for each nickname is derived from the base data using an algorithm that calculates both the probability that an alias refers to a given name and a threshold below which the mapping is most likely an error. This threshold eliminates typographic errors and other noise from the data.
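Reading the distributed format is straightforward; below is a short Python sketch over two fabricated rows in the stated column order (first name, nickname/alias, conditional probability, frequency). The probabilities, counts and cut-off are invented; with the real corpus you would open the data file instead of the StringIO.

```python
# Read nickname mappings from the comma-delimited format described above.
import csv
import io

sample = io.StringIO(         # fabricated rows for illustration
    "william,bill,0.24,103948\n"
    "margaret,peggy,0.05,20114\n"
)

for first, alias, prob, freq in csv.reader(sample):
    if float(prob) >= 0.10:   # assumed cut-off for illustration
        print(f"{alias} -> {first} (p={prob}, n={freq})")
```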

American English Nickname Collection is distributed via web download. 2012 Subscription Members will receive two copies of this data on disc provided that they have submitted a completed copy of the User License Agreement for American English Nickname Collection (LDC2012T11). 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data by completing the User License Agreement for American English Nickname Collection (LDC2012T11). The agreement can be faxed to +1 215 573 2175 or scanned and emailed to ldc@ldc.upenn.edu. The collection is being made available at no charge.

*

(2) Arabic Treebank - Broadcast News v1.0 was developed at LDC. It consists of 120 transcribed Arabic broadcast news stories with part-of-speech, morphology, gloss and syntactic tree annotation in accordance with the Penn Arabic Treebank (PATB) Morphological and Syntactic Annotation Guidelines. The ongoing PATB project supports research in Arabic-language natural language processing and human language technology development. 

This release contains 432,976 source tokens before clitics were split, and 517,080 tree tokens after clitics were separated for treebank annotation. The source materials are Arabic broadcast news stories collected by LDC during the period 2005-2008 from the following sources: Abu Dhabi TV, Al Alam News Channel, Al Arabiya, Al Baghdadya TV, Al Fayha, Alhurra, Al Iraqiyah, Aljazeera, Al Ordiniyah, Al Sharqiyah, Dubai TV, Kuwait TV, Lebanese Broadcasting Corp., Oman TV, Radio Sawa, Saudi TV and Syria TV. The transcripts were produced by LDC.

Arabic Treebank - Broadcast News v1.0 is distributed via web download. 2012 Subscription Members will receive two copies of this data on disc. 2012 Standard Members may request a copy as part of their 16 free membership corpora.
*

(3) Catalan TimeBank 1.0 was developed by researchers at Barcelona Media and consists of Catalan texts in the AnCora corpus annotated with temporal and event information according to the TimeML specification language.

TimeML is a schema for annotating eventualities and time expressions in natural language as well as the temporal relations among them, thus facilitating the extraction, representation and exchange of temporal information. Catalan TimeBank 1.0 is annotated on three levels, marking events, time expressions and event metadata. The TimeML annotation scheme was tailored to the specifics of the Catalan language. Temporal relations in Catalan present distinctions of verbal mood (e.g., indicative, subjunctive, conditional) and grammatical aspect (e.g., imperfective) which are absent in English.
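To make the annotation scheme concrete, here is a tiny TimeML-style fragment, in English for readability, parsed with the Python standard library. EVENT and TIMEX3 are elements of the public TimeML specification, but the sentence and attribute values are invented, and note that the annotations in this corpus are stand-off rather than inline.

```python
# Parse an invented inline TimeML-style fragment.
import xml.etree.ElementTree as ET

fragment = """
<s>The company <EVENT eid="e1" class="OCCURRENCE">announced</EVENT>
its results on <TIMEX3 tid="t1" type="DATE" value="2000-03-15">March
15, 2000</TIMEX3>.</s>
"""
root = ET.fromstring(fragment)
for elem in root.iter():
    if elem.tag in ("EVENT", "TIMEX3"):
        print(elem.tag, elem.attrib, elem.text)
```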

Catalan TimeBank 1.0 contains stand-off annotations for 210 documents with over 75,800 tokens (including punctuation marks) and 68,000 tokens (excluding punctuation). The source documents are from the EFE news agency, the ACN Catalan news agency and the Catalan version of the El Periódico newspaper, and span the period from January to December 2000.

The AnCora corpus is the largest multilayer annotated corpus of Spanish and Catalan. AnCora contains 400,000 words in Spanish and 275,000 words in Catalan. The AnCora documents are annotated on many linguistic levels including structure, syntax, dependencies, semantics and pragmatics. That information is not included in this release, but it can be mapped to the present annotations. The corpus is freely available from the Centre de Llenguatge i Computació (CLiC).

Catalan TimeBank 1.0 is distributed by web download. 2012 Subscription Members will receive two copies of this data on disc. 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data by completing the LDC User Agreement for Non-members. The agreement can be faxed to +1 215 573 2175 or scanned and emailed to ldc@ldc.upenn.edu. The collection is being made available at no charge.

Monday, June 18, 2012

LDC June 2012 Newsletter

LDC at LREC 2012

LDC attended the 8th International Conference on Language Resources and Evaluation (LREC 2012), hosted by ELRA, the European Language Resources Association. The conference was held in Istanbul, Turkey and featured a broad range of sessions on language resources and human language technology research. Fourteen LDC staff members presented current work on a wide range of topics, including handwriting recognition, word alignment, treebanks, machine translation and information retrieval, as well as initiatives for synchronizing metadata practices in sociolinguistic data collection.

The LDC Papers page now includes research papers presented at LREC 2012.  Most papers are available for download in PDF format; presentation slides and posters are available for several papers as well. On the Papers page, you can read about LDC's role in resource creation to support handwriting recognition and translation technology (Song et al. 2012). LDC is developing resources to support two research programs: Multilingual Automatic Document Classification Analysis and Translation (MADCAT) and Open Handwriting Recognition and Translation (OpenHaRT). To support these programs, LDC is collecting handwritten samples of pre-processed Arabic and Chinese data that had previously been translated into English. To date, LDC has collected and annotated over 225,000 handwriting images.

Additionally, you can learn about LDC's efforts to collect and annotate very large corpora of user-contributed content in multiple languages (Garland et al. 2012). For the Broad Operational Language Translation (BOLT) program, LDC is developing resources to support genre-independent machine translation and information retrieval systems. In the current phase of BOLT, LDC is collecting and annotating threaded posts from online discussion forums, targeting at least 500 million words each in three languages: English, Chinese, and Egyptian Arabic. A portion of the data undergoes manual, multi-layered linguistic annotation.

As we mark LDC's 20th anniversary, we will feature the work behind these LREC papers as well as other ongoing research in upcoming newsletters.

New publications

(1) Arabic-Dialect/English Parallel Text was developed by Raytheon BBN Technologies (BBN), LDC and Sakhr Software and contains approximately 3.5 million tokens of Arabic dialect sentences and their English translations. 

The data in this corpus consists of Arabic web text as follows:

1. Filtered automatically from large Arabic text corpora harvested from the web by LDC. The LDC corpora consisted largely of weblog and online user group text and amounted to around 350 million Arabic words. Documents that contained a large percentage of non-Arabic or Modern Standard Arabic (MSA) words were eliminated. A list of dialect words was manually selected by culling through the Levantine Fisher (LDC2005S07, LDC2005T03, LDC2007S02 and LDC2007T04) and Egyptian CALLHOME speech corpora (LDC97S45, LDC2002S37, LDC97T19 and LDC2002T38) distributed by LDC. That list was then used to retain documents that contained a certain number of matches (a minimal sketch of this filtering step follows the list below). The resulting subset of the web corpora contained around four million words. Documents were automatically segmented into passages using formatting information from the raw data.

2. Manually harvested by Sakhr Software from Arabic dialect web sites.
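Below is the sketch referenced in item 1: a minimal Python illustration of retaining documents that contain at least a minimum number of matches against a dialect word list. The word list, threshold and sample documents are invented for illustration, not those actually used by BBN/LDC.

```python
# Toy dialect-word filter: keep documents with enough dialect matches.
dialect_words = {"هيك", "مش", "ليش"}   # tiny stand-in word list
MIN_MATCHES = 2                        # assumed threshold

def keep(document):
    tokens = document.split()
    matches = sum(1 for t in tokens if t in dialect_words)
    return matches >= MIN_MATCHES

docs = ["مش عارف ليش هيك صار", "هذا نص بالفصحى فقط"]
print([keep(d) for d in docs])   # [True, False]
```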

Dialect classification and sentence segmentation, as needed, and translation into English were performed by BBN through Amazon's Mechanical Turk. Arabic annotators from Mechanical Turk classified filtered passages as being either MSA or one of four regional dialects: Egyptian, Levantine, Gulf/Iraqi or Maghrebi. An additional "General" dialect option was allowed for ambiguous passages. The classification was applied to whole passages rather than individual sentences. Only the passages labeled Levantine and Egyptian were further processed. The segmented Levantine and Egyptian sentences were then translated. Annotators were instructed to translate completely and accurately and to transliterate Arabic names. They were also provided with examples. All segments of a passage were presented in the same translation task to provide context.
Arabic-Dialect/English Parallel Text is distributed via web download. 2012 Subscription Members will automatically receive two copies of this data on disc. 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2250.
*

(2) Prague Czech-English Dependency Treebank (PCEDT) 2.0 was developed by the Institute of Formal and Applied Linguistics at Charles University in Prague, Czech Republic. It is a corpus of Czech-English parallel resources translated, aligned and manually annotated for dependency structure, semantic labeling, argument structure, ellipsis and anaphora resolution. This release updates Prague Czech-English Dependency Treebank 1.0 (LDC2004T25) by adding English newswire texts so that it now contains over two million words in close to 100,000 sentences. 

The principal new material in PCEDT 2.0 is the inclusion of the entire Wall Street Journal data from Treebank-3 (LDC99T42). Not included from PCEDT 1.0 are the Reader's Digest material, the Czech monolingual corpus and the English-Czech dictionary. Each section is enhanced with comprehensive manual linguistic annotation in the style of the Prague Dependency Treebank 2.0 (LDC2006T01). The main features of this annotation style are:
-dependency structure of the content words and coordinating and similar structures (function words are attached as their attribute values)
-semantic labeling of content words and types of coordinating structures
-argument structure, including an argument structure ("valency") lexicon for both languages
-ellipsis and anaphora resolution
This annotation style is called tectogrammatical annotation, and it constitutes the tectogrammatical layer in the corpus. Please consult the PCEDT website for more information and documentation.

Prague Czech-English Dependency Treebank (PCEDT) 2.0 is distributed on one DVD. 2012 Subscription Members will automatically receive two copies of this data. 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$100.