2021
Lex Rosetta: Transfer of Predictive Models Across Languages, Jurisdictions, and Legal Domains
ŠAVELKA, Jaromír, Hannes WESTERMANN, Karim BENYEKHLEF, Charlotte S. ALEXANDER, Jayla C. GRANT et al.
Basic information
Original title
Lex Rosetta: Transfer of Predictive Models Across Languages, Jurisdictions, and Legal Domains
Authors
ŠAVELKA, Jaromír (203 Czech Republic, guarantor), Hannes WESTERMANN, Karim BENYEKHLEF, Charlotte S. ALEXANDER, Jayla C. GRANT, David RESTREPO AMARILES, Rajaa El HAMDANI, Sébastien MEEÙS, Aurore TROUSSEL, Michal ARASZKIEWICZ, Kevin D. ASHLEY (840 United States), Alexandra ASHLEY, Karl BRANTING, Mattia FALDUTI, Matthias GRABMAIR, Jakub HARAŠTA (203 Czech Republic, home institution), Tereza NOVOTNÁ (203 Czech Republic, home institution), Elizabeth TIPPETT and Shiwanni JOHNSON
Edition
1st ed. New York, Eighteenth International Conference on Artificial Intelligence and Law: Proceedings of the Conference, pp. 129-138, 10 pp., 2021
Publisher
ACM
Other information
Language
English
Type of result
Article in proceedings
Field
50501 Law
Publisher's country
United States
Confidentiality
is not subject to a state or trade secret
Form of publication
electronic version "online"
Links
RIV code
RIV/00216224:14220/21:00121820
Organizational unit
Faculty of Law
ISBN
978-1-4503-8526-8
Keywords in English
multi-lingual sentence embeddings; transfer learning; domain adaptation; adjudicatory decisions; document segmentation; annotation
Tags
Flags
International significance, Reviewed
Changed: 14 Apr 2022, 16:25, Mgr. Petra Georgala
Abstract
In the original
In this paper, we examine the use of multi-lingual sentence embeddings to transfer predictive models for functional segmentation of adjudicatory decisions across jurisdictions, legal systems (common and civil law), languages, and domains (i.e., contexts). Mechanisms for utilizing linguistic resources outside of their original context have significant potential benefits in AI & Law because differences between legal systems, languages, or traditions often block wider adoption of research outcomes. We analyze the use of Language-Agnostic Sentence Representations in sequence labeling models using Gated Recurrent Units (GRUs) that are transferable across languages. To investigate transfer between different contexts we developed an annotation scheme for functional segmentation of adjudicatory decisions. We found that models generalize beyond the contexts on which they were trained (e.g., a model trained on administrative decisions from the US can be applied to criminal law decisions from Italy). Further, we found that training the models on multiple contexts increases robustness and improves overall performance when evaluating on previously unseen contexts. Finally, we found that pooling the training data from all the contexts enhances the models' in-context performance.
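The abstract describes sequence labeling over language-agnostic sentence embeddings with GRUs. Below is a minimal illustrative sketch of that kind of architecture, not the authors' code: it assumes PyTorch, pre-computed 1024-dimensional sentence embeddings (as produced, for example, by the LASER toolkit), and a placeholder set of functional types; all names and hyperparameters are hypothetical.

```python
# Illustrative sketch only: a bidirectional GRU sequence labeler that assigns
# a functional type to each sentence of a decision, given its pre-computed
# language-agnostic sentence embedding. Label set and sizes are placeholders.
import torch
import torch.nn as nn

EMB_DIM = 1024      # LASER sentence embeddings are 1024-dimensional
NUM_TYPES = 4       # hypothetical functional types, e.g. heading / background / analysis / outcome

class SentenceSequenceLabeler(nn.Module):
    def __init__(self, emb_dim: int = EMB_DIM, hidden: int = 256, num_types: int = NUM_TYPES):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_types)

    def forward(self, sent_embeddings: torch.Tensor) -> torch.Tensor:
        # sent_embeddings: (batch, n_sentences, emb_dim)
        states, _ = self.gru(sent_embeddings)     # (batch, n_sentences, 2 * hidden)
        return self.classifier(states)            # per-sentence type logits

# Because only the embedding step sees raw text, a model trained on decisions
# in one language or jurisdiction can, in principle, be applied to another.
model = SentenceSequenceLabeler()
doc = torch.randn(1, 30, EMB_DIM)                 # one decision with 30 embedded sentences
predicted_types = model(doc).argmax(dim=-1)       # functional type per sentence
```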
Related projects
CZ.02.2.69/0.0/0.0/19_073/0016943, MU internal code (CEP code: EF19_073/0016943)