Adaptation
Philipp Koehn
27 October 2020

Adaptation
• Better quality when the system is adapted to a task
• Domain adaptation: adapt to a specific domain, e.g., information technology
• Some training data is more relevant than other data
• May also adapt to a specific user (personalization)
• May optimize for a specific document or sentence

domains

Domain
• Definition: a collection of text with similar topic, style, level of formality, etc.
• Practically: a corpus that comes from a specific source

Example: available parallel corpora on the OPUS web site (Italian–English)

Differences in Corpora
Medical: Abilify is a medicine containing the active substance aripiprazole. It is available as 5 mg, 10 mg, 15 mg and 30 mg tablets, as 10 mg, 15 mg and 30 mg orodispersible tablets (tablets that dissolve in the mouth), as an oral solution (1 mg/ml) and as a solution for injection (7.5 mg/ml).
Software localization: Default GNOME Theme OK People
Literature: There was a slight noise behind her and she turned just in time to seize a small boy by the slack of his roundabout and arrest his flight.
Law: Corrigendum to the Interim Agreement with a view to an Economic Partnership Agreement between the European Community and its Member States, of the one part, and the Central Africa Party, of the other part.
Religion: This is The Book free of doubt and involution, a guidance for those who preserve themselves from evil and follow the straight path.
News: The Facebook page of a leading Iranian cartoonist, Mana Nayestani, was hacked on Tuesday, 11 September 2012, by pro-regime hackers who call themselves "Soldiers of Islam".
Movie subtitles: We're taking you to Washington, D.C. Do you know where the prisoner was transported to? Uh, Washington. Okay.
Twitter: Thank u @Starbucks & @Spotify for celebrating artists who #GiveGood with a donation to @BTWFoundation, and to great organizations by @Metallica and @ChanceTheRapper! Limited edition cards available now at Starbucks!

Dimensions
Topic: The subject matter of the text, such as politics or sports.
Modality: How was this text originally created? Is it written text or transcribed speech, and if speech, is it a formal presentation or an informal dialogue full of incomplete and ungrammatical sentences?
Register: Level of politeness. In some languages, this is very explicit, such as the use of the informal Du or the formal Sie for the personal pronoun you in German.
Intent: Is the text a statement of fact, an attempt to persuade, or communication between multiple parties?
Style: Is it a terse informal text, or is it full of emotional and flowery language?

Dimensions
• In reality, no clear information about these dimensions
• For example: Wikipedia
– spans a whole range of topics
– fairly consistent in modality and style
• Practical goal: enforce a certain level of politeness
• Probably
– European Parliament proceedings: more polite
– movie subtitles: less polite

Impact of Domain
• Different word meanings
– bat in baseball
– bat in a wildlife report
• Different style
– What's up, dude?
– Good morning, sir.
Diverse Problem
• Data may differ narrowly or drastically
• The amounts of relevant and less relevant data differ
• Data may be split by domain or mixed
• Data may differ in quality
• Each corpus may be relatively homogeneous or heterogeneous
• May need to adapt on the fly
⇒ Different methods may apply, experimentation needed

Multiple Domain Scenario
• Multiple collections of data, clearly identified, e.g., sports, information technology, finance, law, ...
• Train a specialized model for each domain
• Route test sentences to the appropriate model (using a classifier if the domain is not known; see the sketch at the end of this part)
• Probabilistic assignment is also possible

In/Out Domain Scenario
• Optimize the system for just one domain
• Available data
– small amounts of in-domain data
– large amounts of out-of-domain data
• Need to balance both data sources

Why Use Out-of-Domain Data?
• In-domain data is much more valuable
• But: gaps
– a word to be translated may not occur
– a word to be translated may not occur with the correct translation
• Motivation
– out-of-domain data may fill these gaps
– but be careful not to drown out the in-domain data

S4 Taxonomy of Adaptation Effects [Carpuat, Daumé, Fraser, Quirk, 2012]
• Seen: Never seen this word before (news to medical: diabetes mellitus)
• Sense: Never seen this word used in this way (news to technical: monitor)
• Score: The wrong output is scored higher (news to medical: manifest)
• Search: Decoding/search erred

Adaptation Effects
German source: Verfahren und Anlage zur Durchführung einer exothermen Gasphasenreaktion an einem heterogenen partikelförmigen Katalysator
Human reference translation: Method and system for carrying out an exothermic gas phase reaction on a heterogeneous particulate catalyst
General model translation: Procedures and equipment for the implementation of an exothermen gas response response to a heterogeneous particle catalytic converter
In-domain (chemistry patents) model translation: Method and system for carrying out an exothermic gas phase reaction on a heterogeneous particulate catalyst
• Stylistic differences (e.g., method, system vs. procedures, equipment)
• Word sense (e.g., catalyst vs. catalytic converter)
• Better language coverage (e.g., exothermic gas phase reaction vs. exothermen gas response response)
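To make the routing step of the multiple-domain scenario concrete, here is a minimal sketch in Python using scikit-learn; the example sentences, domain labels, and the models dictionary of per-domain translation functions are hypothetical placeholders, not part of the original slides. The classifier's probability vector also provides the soft domain assignment mentioned above.

```python
# A minimal sketch of routing input sentences to domain-specific models,
# assuming per-domain training corpora and translation models already exist.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled sentences drawn from the clearly identified domain corpora (toy data)
train_sentences = ["the match ended in a draw", "the court dismissed the appeal"]
train_domains = ["sports", "law"]

# Character n-gram features are robust for short, noisy sentences
classifier = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(train_sentences, train_domains)

def route(sentence, models):
    """Pick the domain-specific model with the highest predicted probability.

    models: dict mapping domain name -> translation function (placeholder).
    """
    probs = classifier.predict_proba([sentence])[0]  # soft domain assignment
    domain = classifier.classes_[probs.argmax()]
    return models[domain](sentence)
```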
mixture models

Combine Data
• Pool the in-domain and out-of-domain data and train a single combined-domain model
• Too biased towards the out-of-domain data
• May flag translation options with indicator feature functions

Interpolate Data
• Oversample the in-domain data, then train a combined-domain model on the in-domain and out-of-domain data together

Interpolate Models
• Train an in-domain model and an out-of-domain model separately and interpolate them

Domain-Aware Training
• Train a model on all domains
• Indicate the domain for each input sentence
• Domain token
– append a domain token to each input sentence
– label the training data
– label the test data
• Neural machine translation models
– the domain token will have a word embedding
– the attention model will rely on the domain token as needed

Unknown Domain at Test Time
• Domain of the input sentence is unknown
• Classifier: predict the domain of the input sentence
– predict a domain token
– augment the input sentence with it
• Probability distribution over domains
– sentences may not fall neatly into one of our pre-defined domains
– e.g., a rule violation in sports → SPORTS, LAW
– encode the soft domain assignment in a vector
– may also be used to label the training data

Fine-Grained Domains: Personalization
• Thousands of domains
– machine translation system personalized for individual translators
– machine translation system optimized for authors/speakers
• The domain token/classification idea does not scale well
• Not much data for each domain

Fine-Grained Domains: Personalization
• Only influence the word prediction layer
• Recall: the output word distribution t_i is a softmax given
– the previous hidden state s_{i−1}
– the previous output word embedding E y_{i−1}
– the input context c_i
t_i = softmax( W (U s_{i−1} + V E y_{i−1} + C c_i) + b )
• More generally, prediction given some conditioning vector z_i
t_i = softmax( W z_i + b )
• Add an additional bias term β_p specific to a person p
t_i = softmax( W z_i + b + β_p )

Topic Models
• Cluster the corpus by topic: Latent Dirichlet Allocation (LDA)
• Train separate sub-models for each topic
• For an input sentence, detect its topic (or topic distribution)

Latent Dirichlet Allocation (LDA)
• Formalized as a graphical model
• Sentences belong to a fixed number of topics
• Model
– predicts a distribution over topics
– predicts words based on each topic
• For instance, typical topics
– European, political, policy, interests, ...
– crisis, rate, financial, monetary, ...
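As a sketch of the LDA-based topic detection just described, the following Python snippet uses scikit-learn's LatentDirichletAllocation; the toy corpus, the number of topics, and the test sentence are illustrative assumptions.

```python
# A minimal sketch: fit LDA on a corpus, then get a topic distribution
# for a new input sentence, which can select (or weight) sub-models.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "the european policy serves political interests",
    "the financial crisis pushed the monetary rate up",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(corpus)

# Fit an LDA model with a fixed number of topics
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Soft topic assignment for a new sentence: a distribution over topics
topic_distribution = lda.transform(vectorizer.transform(["interest rate policy"]))
print(topic_distribution)  # one row per sentence, summing to 1, e.g. [[0.7 0.3]]
```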
Sentence Embeddings
• Sentence embeddings
– simple method: average of the embeddings of the words in the sentence
– ongoing research on more complex methods
• Cluster sentences into topics: k-means clustering
– randomly generate centroids (vectors in the sentence embedding space)
– assign each sentence to its closest centroid
– re-compute each centroid as the center of the embeddings of its assigned sentences
– iterate
• Input sentence to be translated
– assign it to a topic, based on proximity to the centroids
– translate it with the topic-specific model

subsampling

Sentence Selection
• Select out-of-domain sentence pairs that are similar to the in-domain data
• Various methods
• Goal 1: Increase coverage (fill gaps)
• Goal 2: Get data that matches the in-domain content, style, etc.

Moore-Lewis
• Build two language models
– one on the out-of-domain data
– one on the in-domain data
• Score each sentence with both
• Sub-select sentence pairs with p_IN(f) − p_OUT(f) > τ (a code sketch follows at the end of this part)

Modified Moore-Lewis
• Two sets of language models
– in-domain and out-of-domain on the source language
– in-domain and out-of-domain on the target language
• Add the scores

Subsampling with POS
• Replace rare words with part-of-speech tags
an earthquake in Port-au-Prince ⇒ an earthquake in NNP
• Works better [Axelrod et al., WMT 2015]
• Is it all about style, not key terminology?
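The Moore-Lewis selection above can be sketched as follows, assuming two pre-trained n-gram language models in ARPA format and the kenlm Python bindings; the file names, the placeholder parallel data, and the threshold value are illustrative assumptions.

```python
# A minimal sketch of Moore-Lewis data selection: keep out-of-domain
# sentence pairs whose per-word log-probability is higher under the
# in-domain language model than under the out-of-domain one.
import kenlm

lm_in = kenlm.Model("in_domain.arpa")        # placeholder path
lm_out = kenlm.Model("out_of_domain.arpa")   # placeholder path

def moore_lewis_score(sentence):
    """Per-word difference of log-probabilities under the two models."""
    num_words = len(sentence.split()) + 1  # +1 for the end-of-sentence token
    return (lm_in.score(sentence) - lm_out.score(sentence)) / num_words

# Placeholder out-of-domain parallel data: (source, target) pairs
out_of_domain_pairs = [
    ("il gatto dorme", "the cat sleeps"),
    ("la corte respinge il ricorso", "the court dismisses the appeal"),
]

tau = -0.5  # threshold, tuned on held-out data
selected = [(src, tgt) for src, tgt in out_of_domain_pairs
            if moore_lewis_score(src) > tau]
```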
Coverage-Based Methods
• Problem with subsampling sentences based on similarity: not much new is added
• Original goal: increase coverage with out-of-domain data → coverage-based selection

Basic Approach
• Score each candidate sentence pair s_i to be added, based on a word-based score
1/|s_i| Σ_{w ∈ s_i} score(w, s_1,..,i−1)
• Simple word score: check if the word w occurred in the previously added sentences s_1, ..., s_{i−1}
score(w, s_1,..,i−1) = 0 if w ∈ s_1, ..., s_{i−1}; 1 otherwise
• Add the sentence with the highest score

Scoring N-Grams
• Compute coverage of n-grams, not just words
1/(|s_i| × N) Σ_{n=0}^{N−1} Σ_{w_{j,...,j+n} ∈ s_i} score(w_{j,...,j+n}, s_1,..,i−1)

Feature Decay
• Not hard 0/1 scoring
• Decaying function based on frequency
score(w, s_1,..,i−1) = frequency(w, s_1,..,i−1) e^{−λ frequency(w, s_1,..,i−1)}
• May also consider the frequency of n-grams in the raw corpus (avoid overfitting to rare n-grams)

Instance Weighting
• So far: either include a sentence pair or not
• Now: weight each sentence pair based on relevance
• Use the same scoring metrics as previously used for filtering
• Scale the learning rate by the relevance score

fine tuning

Fine-Tuning
• First train the system on out-of-domain data (or: all available data)
• Stop at convergence
• Then continue training on the in-domain data

Catastrophic Forgetting
• Fine-tuning may overfit to the in-domain data (catastrophic forgetting)
• Two goals
– do well on in-domain data
– maintain quality on out-of-domain data
• Meeting both also makes the model more robust on in-domain data

Updating Only Some Model Parameters
• Too many parameters, too little in-domain data
• Update only some parameters
– weights for the decoder state progression
– the output word prediction softmax
– the output word embeddings

Adaptation Parameters
• Leave the general model parameters fixed
• Learning hidden unit contribution (LHUC) layer
– learn scaling values in a narrow range (say, a factor between 0 and 2)
a(ρ) = 2 / (1 + e^{−ρ})
– scale the values of the decoder state s
s_LHUC = a(ρ) ◦ s
• Can be easily turned off
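A minimal numpy sketch of the LHUC scaling just described; the state vector and its dimensionality are illustrative placeholders.

```python
# LHUC: learned per-unit scaling factors in (0, 2) applied to the decoder state.
import numpy as np

def lhuc(s, rho):
    """Scale decoder state s elementwise by a(rho) = 2 / (1 + exp(-rho))."""
    a = 2.0 / (1.0 + np.exp(-rho))
    return a * s  # elementwise (Hadamard) product

s = np.array([0.5, -1.2, 0.3])  # decoder state (toy values)
rho = np.zeros_like(s)          # learned per-unit parameters; rho = 0 gives a = 1,
print(lhuc(s, rho))             # i.e., the adaptation is effectively turned off
```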
Regularized Training Objective
• Stated goal: do not diverge too far from the original model
• Default training objective: reduce the error on word predictions, i.e., increase the probability t_i[y_i] given to the correct output word y_i at time step i
cost = −log t_i[y_i]
• Measurement of the difference to the general model's prediction t_i^BASE
cost_REG = − Σ_{y ∈ V} t_i^BASE[y] log t_i[y]
• Combine both training objectives (a code sketch appears at the end of these notes)
(1 − α) cost + α cost_REG
• The balancing factor α can be used to trade off in-domain and out-of-domain quality

Document-Level Adaptation
• Computer-aided translation: a translator post-edits draft machine translations
• The corrected translations provide additional training data (translated sentences)
• Incrementally update the model as the translator works through the document

Sentence-Level Adaptation
• Adapt the model to each sentence to be translated
• Find the most similar sentence in the parallel corpus (fuzzy match)
• Retrieve it and its translation
• Adapt the model with this sentence pair

Curriculum Training
• Recall: a relevance score for each sentence pair
• Training epochs
– start with all data (100%)
– train only on somewhat relevant data (50%)
– train only on relevant data (25%)
– train only on very relevant data (10%)
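The regularized training objective above can be written down directly; this numpy sketch assumes toy probability distributions over a three-word vocabulary.

```python
# Combined objective for regularized fine-tuning: cross-entropy on the correct
# word plus a term that keeps the adapted distribution t close to the general
# model's distribution t_base.
import numpy as np

def regularized_cost(t, t_base, y, alpha):
    """(1 - alpha) * (-log t[y]) + alpha * (-sum_y t_base[y] * log t[y])"""
    cost = -np.log(t[y])                    # error on the correct word y
    cost_reg = -np.sum(t_base * np.log(t))  # divergence from the general model
    return (1 - alpha) * cost + alpha * cost_reg

t = np.array([0.7, 0.2, 0.1])       # adapted model's output distribution (toy)
t_base = np.array([0.5, 0.3, 0.2])  # general model's output distribution (toy)
print(regularized_cost(t, t_base, y=0, alpha=0.3))
```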