Context-sensitive syntactic source-reordering by statistical transduction
| Authors | |
|---|---|
| Publication date | 2011 |
| Book title | Proceedings of the 5th International Joint Conference on Natural Language Processing (IJCNLP'11): Chiang Mai, Thailand, November 8-13, 2011 |
| Event | The 5th International Joint Conference on Natural Language Processing (IJCNLP'11) |
| Pages (from-to) | 38-46 |
| Publisher | Asian Federation of Natural Language Processing |
| Organisations | |
| Abstract | How well can a phrase translation model perform if we permute the source words to fit target word order as perfectly as word alignment might allow? And how well would it perform if we limit the allowed permutations to ITG-like tree-transduction operations on the source parse tree? First, we contribute oracle results showing great potential for performance improvement by source reordering, ranging from 1.5 to 4 BLEU points depending on the language pair. Although less pronounced, the potential of tree-based source reordering is also significant. Our second contribution is a source-reordering model that works with two kinds of tree transductions: one permutes the order of sibling subtrees under a node, and the other first deletes layers in the parse tree in order to exploit sibling permutation at the remaining levels. The statistical parameters of the model we introduce concern individual tree transductions conditioned on contextual features of the tree resulting from all preceding transductions. Experiments in translating from English to Spanish/Dutch/Chinese show significant improvements of 0.6/1.2/2.0 BLEU points, respectively. |
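The two tree transductions named in the abstract can be illustrated on a toy parse tree. This is a minimal sketch, not the authors' implementation: the `(label, children)` tree encoding and all function names here are hypothetical, and the statistical model conditioning each transduction on contextual features is omitted.

```python
# Illustrative sketch of the two source-reordering transductions
# (sibling permutation and layer deletion) on a toy parse tree.
# Trees are (label, children) tuples; a leaf has an empty child list.

def permute_children(tree, order):
    """Reorder the sibling subtrees under a node (ITG-like permutation)."""
    label, children = tree
    return (label, [children[i] for i in order])

def delete_layer(tree, child_index):
    """Delete one internal layer: splice a child's own children into
    the child list of `tree`, exposing new sibling permutations."""
    label, children = tree
    _, grandchildren = children[child_index]
    new_children = children[:child_index] + grandchildren + children[child_index + 1:]
    return (label, new_children)

def yield_leaves(tree):
    """Read the (reordered) source sentence off the tree's frontier."""
    label, children = tree
    if not children:
        return [label]
    return [w for c in children for w in yield_leaves(c)]

# Toy example: move an English VP's object NP in front of the verb.
vp = ("VP", [("V", [("saw", [])]),
             ("NP", [("DT", [("the", [])]), ("N", [("dog", [])])])])

reordered = permute_children(vp, [1, 0])        # NP before V
print(yield_leaves(reordered))                  # ['the', 'dog', 'saw']

# Deleting the NP layer first allows permutations of V, DT, N directly.
flattened = delete_layer(vp, 1)
print(yield_leaves(permute_children(flattened, [2, 1, 0])))  # ['dog', 'the', 'saw']
```

The permuted source yield would then be fed to a standard phrase-based translation model, which is the setup the oracle experiments in the paper evaluate.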
| Document type | Conference contribution |
| Language | English |