LDC at ICASSP 2013
LDC will be at ICASSP 2013, the world’s
largest and most comprehensive technical conference focused on
signal processing and its applications. The event will be held
May 26-31, and we look forward to interacting with members of
this community at our exhibit table and during our poster and
paper presentations:
Tuesday, May 28, 15:30 - 17:30, Poster Area D
ARTICULATORY TRAJECTORIES FOR LARGE-VOCABULARY SPEECH RECOGNITION
Authors: Vikramjit Mitra, Wen Wang, Andreas Stolcke, Hosung Nam, Colleen Richey, Jiahong Yuan (LDC), Mark Liberman (LDC)
Tuesday, May 28, 16:30 - 16:50, Room 2011
SCALE-SPACE EXPANSION OF ACOUSTIC FEATURES IMPROVES SPEECH EVENT DETECTION
Authors: Neville Ryant, Jiahong Yuan, Mark Liberman (all LDC)
Wednesday, May 29, 15:20 - 17:20, Poster Area D
USING MULTIPLE VERSIONS OF SPEECH INPUT IN PHONE RECOGNITION
Authors: Mark Liberman (LDC), Jiahong Yuan (LDC), Andreas Stolcke, Wen Wang, Vikramjit Mitra
Please look for LDC’s exhibition at Booth #53
in the Vancouver Convention Centre. We hope to see you there!
Early renewing members save on fees
To date, just over 100 organizations have joined for Membership Year (MY) 2013. For the sixth straight year, LDC's early renewal discount program has resulted in significant savings for our members: organizations that renewed their membership or joined early for MY2013 saved over US$50,000! MY2012 members are still eligible for a 5% discount when renewing for MY2013, and this discount will apply throughout 2013.
Organizations joining LDC can take advantage of membership benefits including free membership year data as well as discounts on older LDC corpora. For-profit members can use most LDC data for commercial applications. Please visit our Members FAQ for further information.
Commercial use and LDC data
Has your company obtained an LDC database as a non-member? For-profit organizations are reminded that an LDC membership is a prerequisite for obtaining a commercial license to almost all LDC databases. Non-member organizations, including for-profit non-members, cannot use LDC data to develop or test products for commercialization, nor can they use LDC data in any commercial product or for any commercial purpose. LDC data users should consult corpus-specific license agreements for limitations on the use of certain corpora. For a small group of corpora, such as American National Corpus (ANC) Second Release (LDC2005T35), Buckwalter Arabic Morphological Analyzer Version 2.0 (LDC2004L02), CELEX2 (LDC96L14) and all CSLU corpora, commercial licenses must be obtained separately from the data owners even if an organization is a for-profit member.
New publications
(1) GALE Arabic-English Parallel Aligned Treebank -- Newswire (LDC2013T10) was developed by LDC and contains 267,520 tokens of word-aligned Arabic and English parallel text with treebank annotations. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program. Parallel aligned treebanks are treebanks annotated with morphological and syntactic structures aligned at the sentence level and the sub-sentence level. Such data sets are useful for natural language processing and related fields, including automatic word alignment system training and evaluation, transfer-rule extraction, word sense disambiguation, translation lexicon extraction, and cultural heritage and cross-linguistic studies. With respect to machine translation system development, parallel aligned treebanks may improve system performance through enhanced syntactic parsers, better rules and knowledge about language pairs, and reduced word error rates.
In this release, the source Arabic data was
translated into English. Arabic and English treebank annotations
were performed independently. The parallel texts were then word
aligned. The material in this corpus corresponds to the Arabic
treebanked data appearing in Arabic Treebank: Part 3 v 3.2 (LDC2010T08)
(ATB) and to the English treebanked data in English Translation
Treebank: An-Nahar Newswire (LDC2012T02).
The source data consists of Arabic newswire
from the Lebanese publication An Nahar collected by LDC in 2002.
All data is encoded as UTF-8. A count of files, words, tokens and
segments is below.
Language | Files | Words   | Tokens  | Segments
Arabic   | 364   | 182,351 | 267,520 | 7,711
Note: The word count is based on the untokenized Arabic source, and the token count is based on the ATB-tokenized Arabic source; the token count is higher because ATB tokenization splits clitics (conjunctions, prepositions and pronominal suffixes) from their host words.
The purpose of the GALE word alignment task was to find correspondences between words, phrases or groups of words in a set of parallel texts. Arabic-English word alignment annotation consisted of the following tasks (a minimal code sketch follows the list):
Identifying different types of links: translated (correct or incorrect) and not translated (correct or incorrect)
Identifying sentence segments not suitable for annotation, e.g., blank segments, incorrectly-segmented segments, segments with foreign languages
Tagging unmatched words attached to other words or phrases
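To make the link categories concrete, here is a minimal Python sketch of how a single alignment record might be represented in downstream tooling. The class, field and category names are hypothetical illustrations, not part of the GALE deliverable format.

    from dataclasses import dataclass, field
    from enum import Enum

    class LinkType(Enum):
        # The four link categories described above (names are illustrative).
        TRANSLATED_CORRECT = "translated-correct"
        TRANSLATED_INCORRECT = "translated-incorrect"
        NOT_TRANSLATED_CORRECT = "not-translated-correct"
        NOT_TRANSLATED_INCORRECT = "not-translated-incorrect"

    @dataclass
    class AlignmentLink:
        """One link between Arabic and English token spans (illustrative)."""
        arabic_tokens: list   # indices into the ATB-tokenized Arabic segment
        english_tokens: list  # indices into the English segment
        link_type: LinkType
        tags: list = field(default_factory=list)  # tags for unmatched attached words

    # Example: Arabic token 3 aligned to English tokens 5 and 6,
    # judged a correct translation.
    link = AlignmentLink(arabic_tokens=[3], english_tokens=[5, 6],
                         link_type=LinkType.TRANSLATED_CORRECT)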
GALE Arabic-English Parallel Aligned Treebank
-- Newswire is distributed via web download. 2013 Subscription Members will automatically
receive two copies of this data on disc. 2013 Standard Members may
request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.
*
(2) MADCAT Phase 2 Training Set (LDC2013T09) contains all training data created
by LDC to support Phase 2 of the DARPA MADCAT (Multilingual
Automatic Document Classification Analysis and
Translation) program. The data in this release consists of
handwritten Arabic documents, scanned at high resolution and
annotated for the physical coordinates of each line and token.
Digital transcripts and English translations of each document are
also provided, with the various content and annotation layers
integrated in a single MADCAT XML output.
The goal of the MADCAT program is to
automatically convert foreign text images into English
transcripts. MADCAT Phase 2 data was collected from Arabic source
documents in three genres: newswire, weblog and newsgroup text.
Arabic-speaking scribes copied documents by hand, following
specific instructions on writing style (fast, normal, careful),
writing implement (pen, pencil) and paper (lined, unlined). Prior
to assignment, source documents were processed to optimize their
appearance for the handwriting task, which resulted in some
original source documents being broken into multiple pages for
handwriting. Each resulting handwritten page was assigned to up to
five independent scribes, each using different writing conditions.
The handwritten, transcribed documents were
checked for quality and completeness, then each page was scanned
at a high resolution (600 dpi, greyscale) to create a digital
version of the handwritten document. The scanned images were then
annotated to indicate the physical coordinates of each line and
token. Explicit reading order was also labeled, along with any
errors produced by the scribes when copying the text. The
annotation process yielded GEDI XML output files (gedi.xml), which
include ground truth annotations and source transcripts.
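For orientation, the following Python sketch shows one way to pull zone coordinates out of a gedi.xml file using only the standard library. The element and attribute names (DL_ZONE, col, row, width, height) follow common GEDI conventions but are assumptions here; verify them against the corpus documentation before use.

    # Minimal sketch, assuming GEDI-style DL_ZONE elements with pixel
    # attributes col/row/width/height; names should be checked against
    # the corpus documentation (namespaces may also apply).
    import xml.etree.ElementTree as ET

    def iter_zones(gedi_path):
        """Yield (zone_id, x, y, width, height) for each annotated zone."""
        root = ET.parse(gedi_path).getroot()
        for zone in root.iter("DL_ZONE"):
            yield (zone.get("id"),
                   int(zone.get("col")), int(zone.get("row")),
                   int(zone.get("width")), int(zone.get("height")))

    # Usage (hypothetical file name):
    for zone_id, x, y, w, h in iter_zones("example.gedi.xml"):
        print(zone_id, x, y, w, h)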
The final step was to produce a unified data
format that takes multiple data streams and generates a single
MADCAT XML output file with all required information. The
resulting madcat.xml file has these distinct components: (1) a
text layer that consists of the source text, tokenization and
sentence segmentation, (2) an
image layer that consists of bounding boxes, (3) a scribe
demographic layer that consists of scribe ID and partition
(train/test) and (4) a document metadata layer.
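As a rough illustration of that layered structure, here is a hypothetical in-memory representation in Python; the class and field names are invented for this sketch and do not reflect the actual madcat.xml schema.

    from dataclasses import dataclass, field

    @dataclass
    class BoundingBox:
        # Pixel coordinates of one token or line on the scanned page.
        x: int
        y: int
        width: int
        height: int

    @dataclass
    class MadcatDocument:
        """Illustrative container mirroring the four madcat.xml layers."""
        # (1) text layer: source text, tokenization, sentence segmentation
        tokens: list = field(default_factory=list)
        sentences: list = field(default_factory=list)  # lists of token indices
        # (2) image layer: one bounding box per token
        boxes: list = field(default_factory=list)
        # (3) scribe demographic layer
        scribe_id: str = ""
        partition: str = ""  # "train" or "test"
        # (4) document metadata layer
        metadata: dict = field(default_factory=dict)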
This release includes 27,814 annotation files
in both GEDI XML and MADCAT XML formats (gedi.xml and madcat.xml)
along with their corresponding scanned image files in TIFF format.
MADCAT Phase 2 Training Set is distributed on
six DVD-ROMs. 2013 Subscription Members will automatically
receive two copies of this data on disc. 2013 Standard Members may
request a copy as part of their 16 free membership corpora. Non-members may license this data for a fee.