Benchmarks
This page documents the performance of the most popular contemporary NLP systems for Polish.
Single-word lemmatization and morphological analysis
System name and URL | Approach | Main publication | License | P | R | F
Morfeusz (http://sgjp.pl/morfeusz/) | – | Woliński, M. (2006). Morfeusz — a practical tool for the morphological analysis of Polish. In M.A. Kłopotek, S.T. Wierzchoń, K. Trojanowski (eds.) Proceedings of the International IIS:IIPWM 2006 Conference, pp. 511–520. | 2-clause BSD | % | % | %
Morfologik (https://github.com/morfologik/) | – | Miłkowski M. (2010). Developing an open-source, rule-based proofreading tool. Software: Practice and Experience, 40(7):543–566. | – | % | % | %
LemmaPL (http://zil.ipipan.waw.pl/LemmaPL) | dictionary-based rules and heuristics | Kobyliński Ł. (unpublished) | GPL | % | % | %
Multi-word lemmatization
System name and URL | Approach | Main publication | License | Accuracy
– | rule-based | Degórski, Ł. (2012). Towards the lemmatisation of Polish nominal syntactic groups using a shallow grammar. In P. Bouvry, M.A. Kłopotek, F. Leprevost, M. Marciniak, A. Mykowiecka, H. Rybiński (eds.) Security and Intelligent Information Systems, Lecture Notes in Computer Science vol. 7053, pp. 370–378, Springer-Verlag Berlin Heidelberg. | ? | 82.90%
– | automatic generation of lemmatization rules using CRF | Radziszewski A. (2013). Learning to lemmatise Polish noun phrases. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013), Volume 1: Long Papers. ACL, pp. 701–709. | GPL | 80.70%
– | automatic generation of lemmatization rules from a corpus | Abramowicz W., Filipowska A., Małyszko J., Wagner T. (2015). Lemmatization of Multi-Word Entity Names for Polish Language Using Rules Automatically Generated Based on the Corpus Analysis. In Z. Vetulani, J. Mariani (eds.) Human Language Technologies as a Challenge for Computer Science and Linguistics, Fundacja Uniwersytetu im. A. Mickiewicza, pp. 540–544, Poznań. | ? | 82.10%
Polem (https://github.com/CLARIN-PL/Polem) | dictionary-based rules and heuristics | Marcińczuk M. (2017). Lemmatization of Multi-word Common Noun Phrases and Named Entities in Polish. In Proceedings of Recent Advances in Natural Language Processing, pp. 483–491. | GPL | 97.99%
Disambiguated POS tagging
The comparisons are performed using plain text as input and report the accuracy lower bound (Acc_lower) metric proposed by Radziszewski and Acedański (2012). The metric penalizes all segmentation changes relative to the gold standard and treats such tokens as misclassified. Furthermore, we report separate metric values for known and unknown words to assess the performance of the guesser modules built into the taggers; these are denoted Acc^K_lower for known and Acc^U_lower for unknown words.
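A minimal sketch of how Acc_lower can be computed, assuming the gold standard and the tagger output are flat lists of (form, tag) pairs over the same text; this is only an illustration of the definition above, not the evaluation code used for the table below.

{{{#!python
# Tokens whose segmentation differs from the gold standard count as misclassified,
# as in Radziszewski and Acedanski (2012).
def acc_lower(gold, system):
    """gold, system: lists of (form, tag) tuples; returns accuracy in [0, 1]."""
    def spans(tokens):
        # Map each token to its character span in the concatenated text, so tokens
        # can be matched on identical boundaries regardless of tokenization.
        result, offset = {}, 0
        for form, tag in tokens:
            result[(offset, offset + len(form))] = tag
            offset += len(form)
        return result

    gold_spans, system_spans = spans(gold), spans(system)
    correct = sum(
        1
        for span, tag in gold_spans.items()
        if span in system_spans and system_spans[span] == tag
    )
    return correct / len(gold)  # resegmented tokens are counted as errors

# Example: one segmentation mismatch and one tagging error -> 1 of 3 gold tokens correct.
gold = [("Ala", "subst"), ("ma", "fin"), ("kota", "subst")]
system = [("Ala", "subst"), ("ma", "imps"), ("kot", "subst"), ("a", "subst")]
print(acc_lower(gold, system))  # 0.333...
}}}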
The experiments have been performed on the manually annotated part of the National Corpus of Polish v. 1.1 (1M tokens). A ten-fold cross-validation procedure has been followed: the methods are re-evaluated ten times, each time selecting one of the ten parts of the corpus for testing and the remaining parts for training the taggers. The reported results are averages over the ten training and testing runs. Each tagger and each tagger ensemble has been trained and tested on the same set of cross-validation folds, so the results are directly comparable. Each of the training folds has been reanalyzed according to the procedure described in Radziszewski (2013), using the Maca toolkit (Radziszewski and Śniatowski 2011). The idea of morphological reanalysis of the gold-standard data is to let the trained tagger see input similar to what it will receive at tagging time. The training data is first turned into plain text and analyzed with the same mechanism that the tagger will use during the actual tagging process. The output of the analyzer is then synchronized with the original gold-standard data using the original tokenization. Tokens with changed segmentation are taken intact from the gold standard. For tokens whose segmentation did not change during morphological analysis, the produced interpretations are compared with the original, and a token is marked as an unknown word if the analyzer did not produce the correct interpretation. Maca has been run with the morfeusz-nkjp-official configuration, which uses the Morfeusz SGJP analyzer (Woliński 2006) and no guesser module.
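The reanalysis step can be illustrated with the following sketch; the analyzer output is a placeholder (Maca/Morfeusz are not invoked here) and the tag representation is simplified.

{{{#!python
# For a gold token whose segmentation the analyzer did not change, the analyzer
# returns a set of candidate tags; the token keeps its gold tag but is flagged as
# an unknown word when that set misses the correct tag (this feeds Acc^U_lower).
def reanalyze(gold_token, analyzer_tags):
    """gold_token: (form, gold_tag); analyzer_tags: set of tags, or None if resegmented."""
    form, gold_tag = gold_token
    if analyzer_tags is None:           # segmentation changed: keep gold data intact
        return form, gold_tag, False
    unknown = gold_tag not in analyzer_tags
    return form, gold_tag, unknown

print(reanalyze(("kota", "subst:sg:acc:m2"), {"subst:sg:gen:m2"}))  # flagged as unknown
}}}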
Tagger efficiency was compared by measuring the training and tagging times of each method on the same machine. A 1.1M-token set was used for both the training and the tagging stages. The total processing time includes model loading/saving and other I/O operations (e.g. reading and writing the tokens).
System name and URL | Approach | Main publication | License | Acc_lower | Acc^K_lower | Acc^U_lower | Training time | Tagging time
OpenNLP (http://zil.ipipan.waw.pl/OpenNLP) | MaxEnt model | Kobyliński Ł., Kieraś W. (2016). Part of Speech Tagging for Polish: State of the Art and Future Perspectives. In Proceedings of the 17th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2016), Konya, Turkey. | GPL | 87.24% | 88.02% | 62.05% | 11095 s | 362 s
Pantera (http://zil.ipipan.waw.pl/PANTERA) | rule-based adapted Brill tagger | Acedański S. (2010). A Morphosyntactic Brill Tagger for Inflectional Languages. In H. Loftsson, E. Rögnvaldsson, S. Helgadóttir (eds.) Advances in Natural Language Processing, LNCS 6233, pp. 3–14, Springer. | GPL 3 | 88.95% | 91.22% | 15.19% | 2624 s | 186 s
WMBT (http://nlp.pwr.wroc.pl/redmine/projects/wmbt/wiki) | memory-based | Radziszewski A., Śniatowski T. (2011). A memory-based tagger for Polish. In Z. Vetulani (ed.) Proceedings of the 5th Language and Technology Conference (LTC 2011), pp. 556–560, Poznań, Poland. | – | 90.33% | 91.26% | 60.25% | 548 s | 4338 s
WCRFT (http://nlp.pwr.wroc.pl/redmine/projects/wcrft/wiki) | tiered, CRF-based | Radziszewski A. (2013). A Tiered CRF Tagger for Polish. In R. Bembenik, Ł. Skonieczny, H. Rybiński, M. Kryszkiewicz, M. Niezgódka (eds.) Intelligent Tools for Building a Scientific Information Platform, pp. 215–230, Springer Berlin Heidelberg. | LGPL 3.0 | 90.76% | 91.92% | 53.18% | 27242 s | 420 s
Concraft (http://zil.ipipan.waw.pl/Concraft) | mutually dependent CRF layers | Waszczuk J. (2012). Harnessing the CRF complexity with domain-specific constraints. The case of morphosyntactic tagging of a highly inflected language. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pp. 2789–2804, Mumbai, India. | 2-clause BSD | 91.07% | 92.06% | 58.81% | 26675 s | 403 s
PoliTa (http://zil.ipipan.waw.pl/PoliTa) | voting ensemble | Kobyliński Ł. (2014). PoliTa: A multitagger for Polish. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, S. Piperidis (eds.) Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014), pp. 2949–2954, Reykjavík, Iceland, ELRA. | GPL | 92.01% | 92.91% | 62.81% | N/A | N/A
Toygger (http://mozart.ipipan.waw.pl/~kkrasnowska/PolEval/src/SCWAD-tagger/) | bi-LSTM | Krasnowska-Kieraś K. (2017). Morphosyntactic disambiguation for Polish with bi-LSTM neural networks. In Zygmunt Vetulani and Patrick Paroubek (eds.) Proceedings of the 8th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 367–371, Poznań, Poland. Fundacja Uniwersytetu im. Adama Mickiewicza w Poznaniu. | GPL | 92.01% | 92.91% | 62.81% | N/A | N/A
KRNNT (https://github.com/kwrobel-nlp/krnnt) | RNN | Wróbel K. (2017). KRNNT: Polish Recurrent Neural Network Tagger. In Zygmunt Vetulani and Patrick Paroubek (eds.) Proceedings of the 8th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 386–391, Poznań, Poland. Fundacja Uniwersytetu im. Adama Mickiewicza w Poznaniu. | LGPL 3.0 | 93.72% | 94.43% | 69.03% | 245.9 s (GeForce GTX 1050M) | N/A
Dependency parsing
Dependency parsing systems are trained and tested either on the Polish Dependency Bank (PDB) or on PDB-UD, i.e. PDB converted to the Universal Dependencies format (license CC BY-NC-SA 4.0). The evaluation script of the CoNLL 2018 shared task (conll18_ud_eval.py) is used for evaluation.
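For reference, a minimal sketch of invoking the evaluation script from Python; it assumes conll18_ud_eval.py has been downloaded from the shared task site, and gold.conllu and system.conllu are hypothetical local file names.

{{{#!python
import subprocess

# Run the CoNLL 2018 evaluation script on a gold and a system CoNLL-U file.
result = subprocess.run(
    ["python", "conll18_ud_eval.py", "gold.conllu", "system.conllu", "-v"],
    capture_output=True, text=True, check=True,
)
# With -v (verbose) the script prints per-metric rows such as UPOS, XPOS, UFeats,
# AllTags, Lemmas, UAS, LAS, CLAS, MLAS and BLEX, as reported in the table below.
print(result.stdout)
}}}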
Parser | Model | Data | External Embedding | UPOS F1 | XPOS F1 | UFeats F1 | AllTags F1 | Lemmas F1 | UAS F1 | LAS F1 | CLAS F1 | MLAS F1 | BLEX F1
UDPipe (https://github.com/ufal/udpipe) | MSP | PDB-UD | FastText (https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.pl.300.vec.gz) | 97.34 | 88.62 | 89.16 | 88.03 | 96.08 | 87.41 | 83.94 | 80.43 | 69.98 | 76.49
SP (syntactic prediction) -- prediction of labelled dependency trees
MSP (morphosyntactic prediction) -- prediction of morphosyntactic features (i.e. LEMMA, UPOS, XPOS, FEATS) and labelled dependency trees
Related publications
- Wróblewska A. and Rybak P. (2019). Dependency parsing of Polish. Poznań Studies in Contemporary Linguistics, 55(2):305–337. (Note: please contact the first author to get a copy of this article.)
- Wróblewska A. (2014). Polish Dependency Parser Trained on an Automatically Induced Dependency Bank. Ph.D. dissertation, Institute of Computer Science, Polish Academy of Sciences, Warsaw.
- Wróblewska A. (2018). Results of the PolEval 2018 Competition: Dependency Parsing Shared Task. In Ogrodniczuk M. and Kobyliński Ł. (eds.) Proceedings of the PolEval 2018 Workshop, pp. 11–24. Institute of Computer Science, Polish Academy of Sciences.
- Wróblewska A. (2018). Extended and Enhanced Polish Dependency Bank in Universal Dependencies Format. In Proceedings of the Universal Dependencies Workshop 2018 (UDW 2018).
Shallow parsing
System name and URL | Approach | Main publication | License | P | R | F
Spejd (http://zil.ipipan.waw.pl/Spejd) | rule-based | Buczyński A., Przepiórkowski A. (2009). Spejd: A shallow processing and morphological disambiguation tool. In Z. Vetulani, H. Uszkoreit (eds.) Human Language Technology: Challenges of the Information Society, LNCS 5603, pp. 131–141. Springer-Verlag, Berlin. | GPL 3 | % | % | %
Word sense disambiguation
A manually annotated subcorpus of the National Corpus of Polish, comprising 3889 texts and 1217822 segments, was used for training and testing (cross-validation); it contained 34186 occurrences of multi-sense words (1–7 senses). A simple heuristic selecting the most frequent sense resulted in 78.3% accuracy. The table presents results of leave-one-out evaluation with individual selection for each ambiguous word (as described in the publication).
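A toy sketch of the most-frequent-sense baseline mentioned above (the 78.3% figure); the data format and the example lemma are illustrative, not the NKJP sense annotation format.

{{{#!python
from collections import Counter, defaultdict

def train_mfs(pairs):
    """pairs: iterable of (lemma, sense); returns lemma -> most frequent sense."""
    counts = defaultdict(Counter)
    for lemma, sense in pairs:
        counts[lemma][sense] += 1
    return {lemma: c.most_common(1)[0][0] for lemma, c in counts.items()}

# At test time each ambiguous word simply receives the sense seen most often
# for its lemma in the training data.
model = train_mfs([("zamek", "building"), ("zamek", "building"), ("zamek", "lock")])
print(model["zamek"])  # "building"
}}}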
System name and URL | Approach | Main publication | License | Accuracy
WSDDE (http://zil.ipipan.waw.pl/WSDDE) | machine learning | Kopeć M., Młodzki R., Przepiórkowski A. (2012). Automatyczne znakowanie sensami słów. In A. Przepiórkowski, M. Bańko, R.L. Górski, B. Lewandowska-Tomaszczyk (eds.) Narodowy Korpus Języka Polskiego, pp. 209–224. Wydawnictwo Naukowe PWN, Warsaw. | GPL 3 | 91.46%
Named entity recognition
The table below reflects the PolEval 2018 Named Entity Recognition task plus several new systems made available more recently; see the training data (the 1M NKJP corpus) and the testing data with the evaluation script.
System name and URL | Approach | Main publication | Weighted F1
NER model for SpacyPL (https://github.com/ipipan/spacy-pl#user-content-named-entity-recognizer) | – | Tuora R., Kobyliński Ł. (2019). Integrating Polish Language Tools and Resources in spaCy. Proceedings of PP-RAI 2019 Conference, pp. 210–214. | 0.8752
Per group LSTM-CRF with C. S. E. | Per group LSTM-CRF with Contextual String Embeddings | – | 0.851
PolDeepNer (https://github.com/CLARIN-PL/PolDeepNer) | BiDirectional GRU/LSTM with CRF | Marcińczuk M., Kocoń J., Gawor M. (2018). Recognition of Named Entities for Polish-Comparison of Deep Learning and Conditional Random Fields Approaches. Proceedings of PolEval 2018 Workshop. | 0.866
Liner2 (https://github.com/CLARIN-PL/Liner2) | CRF | Marcińczuk M., Kocoń J., Oleksy M. (2017). Liner2 — a Generic Framework for Named Entity Recognition. Proceedings of the 6th Workshop on Balto-Slavic Natural Language Processing. | 0.810
OPI Z3 | ??? | – | 0.793
joint | ??? | – | 0.780
disjoint | ??? | – | 0.779
via ner | LSTM+CRF | – | 0.756
NERF (http://zil.ipipan.waw.pl/Nerf) + polimorf | CRF | Savary A., Waszczuk J., Przepiórkowski A. (2010). Towards the annotation of named entities in the National Corpus of Polish. Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010), pp. 3622–3629. ELRA. See also Chapters 9 and 13 in the NKJP book (in Polish). | 0.739
NERF (http://zil.ipipan.waw.pl/Nerf) | CRF | as above | 0.735
kner sep | ??? | – | 0.733
Poleval2k18 | ??? | – | 0.719
KNER | RNN/CNN + CRF | – | 0.711
simple ner | BiDirectional GRU | – | 0.636
Sentiment analysis
System name and URL | Approach | Main publication | License | Accuracy
Sentipejd (http://zil.ipipan.waw.pl/Sentipejd) | dictionary + rules | Buczyński A., Wawer A. (2008). Shallow parsing in sentiment analysis of product reviews. In S. Kübler, J. Piskorski, A. Przepiórkowski (eds.) Proceedings of the LREC 2008 Workshop on Partial Parsing: Between Chunking and Deep Parsing, pp. 14–18, Marrakech, ELRA. | API usage | 0.782*
* measured at the document level with a C5.0 classifier
The systems below have been evaluated on PolEval 2017 test data, using the Polish sentiment treebank version 1.0.
System name and URL | Approach | Main publication | License | Accuracy
Tree-LSTM-NR (https://github.com/norbertryc/poleval) | Tree-LSTM | Ryciak N. (2017). Polish Language Sentiment Analysis with Tree-Structured Long Short-Term Memory Network. In Z. Vetulani and P. Paroubek (eds.) Proceedings of the 8th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 402–405, Poznań, Poland. Fundacja Uniwersytetu im. Adama Mickiewicza w Poznaniu. | Open source | 0.795
Alan Turing climbs a tree (https://github.com/tomekkorbak/treehopper) | Tree-LSTM | Żak P., Korbak T. (2017). Fine-tuning Tree-LSTM for phrase-level sentiment classification on a Polish dependency treebank. In Z. Vetulani and P. Paroubek (eds.) Proceedings of the 8th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 392–396, Poznań, Poland. Fundacja Uniwersytetu im. Adama Mickiewicza w Poznaniu. | Open source | 0.805
Tree-LSTM-ML (https://github.com/michal-lew/tree-lstm) | Tree-LSTM | Lew M., Pęzik P. (2017). A Sequential Child-Combination Tree-LSTM Network for Sentiment Analysis. In Z. Vetulani and P. Paroubek (eds.) Proceedings of the 8th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 397–401, Poznań, Poland. Fundacja Uniwersytetu im. Adama Mickiewicza w Poznaniu. | Open source | 0.768
Mention detection
Precision, recall and F-measure are calculated on Polish Coreference Corpus data with two alternative mention detection scores (a small sketch follows the list):
- EXACT: score of exact boundary matches (an automatic and a manual mention match if they have exactly the same boundaries; in other words, they consist of the same tokens)
- HEAD: score of head matches (we reduce the automatic and the manual mentions to their single head tokens and compare them).
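A small illustration of the two matching regimes, with mentions represented as token-index tuples plus a head index (an assumed, simplified representation rather than the Polish Coreference Corpus format):

{{{#!python
def exact_match(gold_mention, system_mention):
    # EXACT: the two mentions must cover exactly the same tokens.
    return gold_mention["tokens"] == system_mention["tokens"]

def head_match(gold_mention, system_mention):
    # HEAD: both mentions are reduced to their single head token before comparison.
    return gold_mention["head"] == system_mention["head"]

gold = {"tokens": (3, 4, 5), "head": 4}
system = {"tokens": (4, 5), "head": 4}
print(exact_match(gold, system), head_match(gold, system))  # False True
}}}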
System name and URL | Approach | Main publication | License | EXACT P | EXACT R | EXACT F | HEAD P | HEAD R | HEAD F
MentionDetector (http://zil.ipipan.waw.pl/MentionDetector) | rule-based | Ogrodniczuk M., Głowińska K., Kopeć M., Savary A., Zawisławska M. (2015). Coreference in Polish: Annotation, Resolution and Evaluation, chapter 10.6. Walter De Gruyter. | CC BY 3 | 70.11% | 68.13% | 69.10% | 90.07% | 88.21% | 89.12%
MentionStat | statistical | Ogrodniczuk M. (2019). Automatyczne wykrywanie nominalnych zależności referencyjnych w języku polskim. Wydawnictwa Uniwersytetu Warszawskiego. | CC BY 3 | 74.34% | 69.41% | 71.79% | 92.27% | 90.21% | 91.23%
Coreference resolution
As there is still no consensus on a single best coreference resolution metric, the CoNLL measure (the average of MUC, B3 and CEAFE F1) is used to rank systems, following the CoNLL-2011 shared task approach, with two calculation strategies for EXACT mention borders and semantic HEADs (the TRANSFORM alignment is sketched after the list):
- INTERSECT: consider only correct system mentions (i.e. the intersection between gold and system mentions)
- TRANSFORM: unify system and gold mention sets using the following procedure for twinless mentions (without a corresponding mention in the second set):
- insert twinless gold mentions into system mention set as singletons
- remove twinless singleton system mentions
- insert twinless non-singleton system mentions into the gold set as singletons.
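A sketch of the TRANSFORM alignment, under the assumption that mentions are identified by hashable ids and that "singleton" refers to the system response before alignment; this illustrates the three steps above and is not the official scorer.

{{{#!python
def transform(gold, system):
    """gold, system: dicts mapping mention id -> entity (cluster) id."""
    gold, system = dict(gold), dict(system)
    twinless_gold = set(gold) - set(system)      # mentions missing from the system set
    twinless_system = set(system) - set(gold)    # mentions missing from the gold set
    # 1. insert twinless gold mentions into the system mention set as singletons
    for m in twinless_gold:
        system[m] = ("singleton", m)
    for m in twinless_system:
        cluster_size = sum(1 for entity in system.values() if entity == system[m])
        if cluster_size == 1:
            # 2. remove twinless singleton system mentions
            del system[m]
        else:
            # 3. insert twinless non-singleton system mentions into the gold set as singletons
            gold[m] = ("singleton", m)
    return gold, system

gold = {"m1": "e1", "m2": "e1", "m3": "e2"}
system = {"m1": "e1", "m4": "e3"}                # m4 is a twinless singleton
print(transform(gold, system))
}}}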
The results are produced on Polish Coreference Corpus data with 10-fold cross-validation.
System name and URL | Approach | Main publication | License | GOLD | EXACT INTERSECT | EXACT TRANSFORM | HEAD INTERSECT | HEAD TRANSFORM
Ruler (http://zil.ipipan.waw.pl/Ruler) | rule-based | Ogrodniczuk M., Kopeć M. (2011). End-to-end coreference resolution baseline system for Polish. In Z. Vetulani (ed.) Proceedings of the 5th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 167–171, Poznań, Poland. | CC BY 3 | 74.10% | 78.86% | 68.88% | 77.05% | 72.19%
Bartek5 (http://zil.ipipan.waw.pl/Bartek) | statistical | Kopeć M., Ogrodniczuk M. (2012). Creating a Coreference Resolution System for Polish. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012), pp. 192–195, ELRA. | CC BY 3 | 80.50% | 82.71% | 73.96% | 80.00% | 74.82%
BartekS2 (http://zil.ipipan.waw.pl/Bartek) | sieve-based | Nitoń B., Ogrodniczuk M. (2017). Multi-Pass Sieve Coreference Resolution System for Polish. In J. Gracia, F. Bond, J. P. McCrae, P. Buitelaar, Ch. Chiarcos, S. Hellmann (eds.) Proceedings of the 1st Conference on Language, Data and Knowledge (LDK 2017). | CC BY 3 | 80.70% | 82.49% | 73.21% | 80.85% | 75.55%
Corneferencer (http://zil.ipipan.waw.pl/Corneferencer) | neural | Nitoń B., Morawiecki P., Ogrodniczuk M. (2018). Deep neural networks for coreference resolution for Polish. In N. Calzolari, K. Choukri, C. Cieri, T. Declerck, S. Goggi, K. Hasida, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, S. Piperidis, T. Tokunaga (eds.) Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018), pp. 395–400, ELRA. | CC BY 3 | 80.59% | 82.73% | 73.39% | 80.89% | 76.03%
Hybrid | hybrid | Ogrodniczuk M. (2019). Automatyczne wykrywanie nominalnych zależności referencyjnych w języku polskim. Wydawnictwa Uniwersytetu Warszawskiego. | CC BY 3 | 81.09% | 82.54% | 73.17% | 81.04% | 75.74%
Summarization
The table presents results of evaluation on the Polish Summaries Corpus (154 texts, each with 5 abstractive summaries, 20% of the word count of the original document). ROUGE covers several metrics; the following variants are used here:
- ROUGE_n, which counts n-gram co-occurrences between the reference (gold) summaries and the system summary,
- ROUGE-M_n, i.e. the ROUGE_n score calculated for each text using, as reference, the single manual summary that gives the highest score.
The evaluation tool source code is available online; a toy ROUGE_n computation is sketched below.
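A toy illustration of ROUGE_n (n-gram recall against a set of reference summaries) and ROUGE-M_n as defined above; tokenization, stemming and the exact options of the evaluation tool are omitted.

{{{#!python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(references, system, n):
    # N-gram recall: matched n-grams summed over references / total reference n-grams.
    total_overlap = total_ref = 0
    for ref in references:
        ref_ngrams, sys_ngrams = ngrams(ref, n), ngrams(system, n)
        total_overlap += sum((ref_ngrams & sys_ngrams).values())
        total_ref += sum(ref_ngrams.values())
    return total_overlap / total_ref if total_ref else 0.0

def rouge_m_n(references, system, n):
    # ROUGE-M_n: take the single reference summary that gives the highest score.
    return max(rouge_n([ref], system, n) for ref in references)

refs = [["ala", "ma", "kota"], ["ala", "ma", "psa"]]
print(rouge_n(refs, ["ala", "ma", "kota"], 2), rouge_m_n(refs, ["ala", "ma", "kota"], 2))
}}}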
System name and URL | Approach | Main publication | License | ROUGE_1 | ROUGE_2 | ROUGE_3 | ROUGE-M_1 | ROUGE-M_2 | ROUGE-M_3
PolSum (http://las.aei.polsl.pl/PolSum/#/Home) | ? | Ciura M., Grund D., Kulików S., Suszczańska N. (2004). A System to Adapt Techniques of Text Summarizing to Polish. In Okatan A. (ed.) International Conference on Computational Intelligence, pp. 117–120, Istanbul, Turkey. International Computational Intelligence Society. | ? | ? % | ? % | ? % | ? % | ? % | ? %
Lakon (http://www.cs.put.poznan.pl/dweiss/research/lakon/) | sentence extraction relying on one of: positional heuristics, word frequency features, lexical chains information | Dudczak A. (2007). Zastosowanie wybranych metod eksploracji danych do tworzenia streszczeń tekstów prasowych dla języka polskiego. MSc thesis, Poznań Technical University. | 3-clause BSD | 55.4% | 20.8% | 14.4% | 62.9% | 33.3% | 27.4%
Summarizer (http://clip.ipipan.waw.pl/Summarizer) | sentence extraction with machine learning | Świetlicka J. (2010). Metody maszynowego uczenia w automatycznym streszczaniu tekstów. MSc thesis, Warsaw University. | GNU GPL 3.0 | 58.0% | 22.6% | 16.1% | 65.4% | 35.8% | 29.8%
Open Text Summarizer (https://github.com/neopunisher/Open-Text-Summarizer) | word frequency based sentence extraction | – | GNU GPL 2.0 | 51.3% | 13.6% | 9.0% | 58.5% | 22.5% | 17.9%
Emily-C (http://git.nlp.ipipan.waw.pl/summarization/emily) | sentence extraction with machine learning | Kopeć M. (2015). Coreference-based Content Selection for Automatic Summarization of Polish News. ITRIA 2015. Selected Problems in Information Technologies, pp. 23–46. | GNU GPL 3.0 | 52.9% | 15.1% | 10.0% | 58.8% | 24.2% | 19.2%
Emily-S (http://git.nlp.ipipan.waw.pl/summarization/emily) | sentence extraction with machine learning | as above (Kopeć M. 2015) | GNU GPL 3.0 | 53.0% | 15.4% | 10.4% | 59.4% | 24.7% | 20.1%
Nicolas (http://zil.ipipan.waw.pl/Nicolas) | sentence extraction with machine learning | Kopeć M. (2018). Summarization of Polish Press Articles Using Coreference. PhD dissertation. | GNU GPL 3.0 | 59.1% | 24.2% | 17.5% | 67.9% | 39.8% | 33.9%
TextRank (https://github.com/summanlp/textrank) | unsupervised, graph-based sentence extraction | Barrios F., López F., Argerich L., Wachenchauzer R. (2015). Variations of the Similarity Function of TextRank for Automated Summarization. arXiv:1602.03606. | MIT | 56.5% | 19.2% | 12.8% | 63.3% | 30.0% | 23.8%
Language models
Given a set of sentences in Polish in random order, each with original punctuation, the goal of the task is to create a language model for Polish. See the segmented training data and segmented testing data at poleval.pl. Evaluation is based on perplexity; see the instructions on OOV rate and perplexity.

System name and URL | Main publication | Perplexity
ULMFiT-SP-PL (https://github.com/n-waves/poleval2018) | Kardas M., Howard J., Czapla P. (2018). Universal Language Model Fine-Tuning with Subword Tokenization for Polish. | 117.6705
AGH-UJ (https://github.com/kwrobel-nlp/lm/) | – (see the released model) | 146.7082
PocoLM Order 6 | – | 208.6297
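For orientation, a minimal sketch of perplexity computation over a tokenized test set; the probability function is a placeholder, not one of the listed systems, and OOV handling follows the task instructions rather than this sketch.

{{{#!python
import math

def perplexity(sentences, token_prob):
    """sentences: lists of tokens; token_prob(history, token) -> probability."""
    log_prob_sum = token_count = 0
    for sentence in sentences:
        for i, token in enumerate(sentence):
            log_prob_sum += math.log(token_prob(sentence[:i], token))
            token_count += 1
    # Perplexity is the exponential of the negative average log-probability per token.
    return math.exp(-log_prob_sum / token_count)

# A uniform toy model over a 1000-word vocabulary gives perplexity 1000.
print(perplexity([["ala", "ma", "kota"]], lambda history, token: 1 / 1000))
}}}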