Publication date: Available online 30 June 2016
Source: Computer Speech & Language
Author(s): Kangil Kim, Eun-Jin Park, Jong-hun Shin, Oh-Woog Kwon, Young-Kil Kim
A widely used automatic translation approach, phrase-based statistical machine translation, learns a probabilistic translation model composed of phrases from a large parallel corpus, together with a large language model. The translation model is often enormous because of the many combinations of source and target phrases, which restricts its application in limited computing environments. Entropy-based pruning addresses this issue by reducing the model size while retaining translation quality. To reduce the size safely, the method detects redundant components by evaluating the relative entropy between the models before and after pruning those components. This method is effective in the literature, but we have observed that it can be improved further by adjusting the divergence distribution determined by the relative entropy. From preliminary experiments, we derive two factors that limit the pruning efficiency of entropy-based pruning. The first is the proportion of pairs composing the translation model with respect to their translation probability and its estimate. The second is the exponential increase of the divergence for pairs with a low translation probability and estimate. To control these factors, we propose divergence-based fine pruning, which uses a divergence metric to adapt the curvature change of the boundary conditions for pruning, together with Laplace smoothing. In practical translation tasks for the English-Spanish and English-French language pairs, this method shows a statistically significant improvement in efficiency: up to 50% and on average 12% more pruning than entropy-based pruning at the same translation quality.
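To make the pruning idea concrete, here is a minimal Python sketch of per-pair entropy-based pruning with a Laplace-smoothed estimate, in the spirit of what the abstract describes. The function name, data layout, threshold rule, and smoothing constant are illustrative assumptions, not the authors' implementation: each pair's contribution to the relative entropy between the full and pruned models is scored as p(src,tgt) * log(p / p'), and pairs whose contribution falls below a threshold are pruned.

```python
import math

def entropy_based_prune(phrase_table, estimate, joint, threshold, alpha=0.0):
    """Keep only phrase pairs whose removal would noticeably change the model.

    phrase_table: dict (src, tgt) -> p(tgt | src), the full translation model
    estimate:     dict (src, tgt) -> p'(tgt | src), the probability the pruned
                  model would assign (e.g. composed from shorter phrases)
    joint:        dict (src, tgt) -> p(src, tgt), weighting each pair's
                  contribution to the model-level relative entropy
    threshold:    prune a pair when its divergence contribution is below this
    alpha:        illustrative Laplace smoothing constant added to the estimate;
                  it damps the exponential growth of the divergence for pairs
                  with a very low estimate (alpha=0 gives plain entropy pruning)
    """
    kept = {}
    for pair, p in phrase_table.items():
        # Smoothed estimate of what the pruned model would assign to this pair.
        q = estimate.get(pair, 0.0) + alpha
        # Per-pair contribution to the relative entropy D(p || p').
        divergence = joint[pair] * math.log(p / q)
        # A small contribution means pruning the pair barely changes the model.
        if divergence >= threshold:
            kept[pair] = p
    return kept
```

As a usage sketch, calling the function with a toy table such as `entropy_based_prune({("la casa", "the house"): 0.9}, {("la casa", "the house"): 0.85}, {("la casa", "the house"): 0.01}, threshold=1e-3, alpha=0.01)` keeps or drops the pair depending on how closely the composed estimate reproduces the full model's probability; the smoothing constant controls how harshly low-estimate pairs are penalized.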