= Benchmarks =
This page documents the performance of the most popular contemporary NLP systems for Polish.
== Morphological analysis ==
|| System name and URL || Approach || Main publication || License || P || R || F ||
||  ||  || Woliński, M. ''Morfeusz — a practical tool for the morphological analysis of Polish''. In: Kłopotek, M.A., Wierzchoń, S.T., Trojanowski, K. (eds.) Proceedings of the International IIS: IIPWM’06 Conference, pp. 511–520, Wisła, Poland, 2006. ||  || % || % || % ||
||  ||  || Miłkowski M. ''Developing an open-source, rule-based proofreading tool''. Software: Practice and Experience, 40(7):543–566, 2010. ||  || % || % || % ||
== POS tagging ==
The comparisons are performed using plain text as input, reporting the accuracy lower bound (Acc,,lower,,) metric proposed by Radziszewski and Acedański (''Taggers gonna tag: an argument against evaluating disambiguation capacities of morphosyntactic taggers''. In Proceedings of TSD 2012, LNCS, Springer Verlag). The metric penalizes all segmentation changes with respect to the gold standard and treats such tokens as misclassified. Furthermore, we report separate metric values for known and unknown words to assess the performance of the guesser modules built into the taggers; these are denoted Acc^K^,,lower,, for known and Acc^U^,,lower,, for unknown words.
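As an illustration, the lower-bound scoring can be sketched as follows (a minimal sketch, not the reference implementation; the token-list representation and the tag values are made up for the example):

```python
def acc_lower(gold, tagged):
    """Accuracy lower bound: a gold token counts as correct only if the
    tagger kept its segmentation and assigned the right tag; tokens whose
    segmentation changed are counted as misclassified."""
    def spans(tokens):
        # Map each token's character span to its tag, so segmentation
        # changes show up as missing spans.
        out, pos = {}, 0
        for text, tag in tokens:
            out[(pos, pos + len(text))] = tag
            pos += len(text)
        return out

    gold_spans, tagged_spans = spans(gold), spans(tagged)
    correct = sum(1 for span, tag in gold_spans.items()
                  if tagged_spans.get(span) == tag)
    return correct / len(gold_spans)

# The tagger merged two gold tokens into one, so both merged tokens are
# penalized even though the remaining token is tagged correctly.
gold = [("nie", "qub"), ("ma", "fin"), ("go", "ppron3")]
tagged = [("niema", "fin"), ("go", "ppron3")]
print(acc_lower(gold, tagged))  # 1 of 3 gold tokens counted as correct
```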
The experiments were performed on the manually annotated part of the National Corpus of Polish v. 1.1 (1M tokens). A ten-fold cross-validation procedure was followed: the methods were re-evaluated ten times, each time selecting one of the ten parts of the corpus for testing and the remaining parts for training the taggers. The reported results are averages over the ten training and testing runs. Each tagger and each tagger ensemble was trained and tested on the same set of cross-validation folds, so the results are directly comparable. Each of the training folds was reanalyzed, according to the procedure described in (Radziszewski A. ''A Tiered CRF Tagger for Polish''. In R. Bembenik, Ł. Skonieczny, H. Rybiński, M. Kryszkiewicz, M. Niezgódka (eds.) Intelligent Tools for Building a Scientific Information Platform, Springer Verlag, 2013.), using the Maca toolkit (Radziszewski A., Śniatowski T. ''Maca – a configurable tool to integrate Polish morphological data''. In Proceedings of the 2nd International Workshop on Free/Open-Source Rule-Based Machine Translation, 2011). The idea of a morphological reanalysis of the gold-standard data is to let the trained tagger see input similar to what it will receive in the tagging phase. The training data is first converted to plain text and analyzed with the same mechanism that the tagger will use during the actual tagging process. The output of the analyzer is then synchronized with the original gold-standard data, using the original tokenization. Tokens with changed segmentation are taken intact from the gold standard. For tokens whose segmentation did not change during morphological analysis, the produced interpretations are compared with the original; a token is marked as an unknown word if the correct interpretation was not produced by the analyzer.
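The final step of the reanalysis, marking unknown words, can be sketched like this (a simplified sketch assuming interpretations are looked up by a token's orthographic form; tokens with changed segmentation, which the procedure takes intact from the gold standard, are omitted here):

```python
def mark_unknown(gold_tokens, analyses):
    """For each gold token whose segmentation survived reanalysis, check
    whether the analyzer produced the correct interpretation among its
    candidates; if not, the token is marked as an unknown word.

    `gold_tokens` is a list of (orth, gold_tag) pairs; `analyses` maps an
    orthographic form to the set of interpretations the analyzer returned.
    """
    marked = []
    for orth, gold_tag in gold_tokens:
        candidates = analyses.get(orth, set())
        unknown = gold_tag not in candidates
        marked.append((orth, gold_tag, unknown))
    return marked

# "kota" is covered by the analyzer; "XYZ" is not, so it becomes an
# unknown word for the purposes of the Acc^U metric.
gold = [("kota", "subst:sg:gen:m2"), ("XYZ", "ign")]
analyses = {"kota": {"subst:sg:gen:m2", "subst:sg:acc:m2"}}
print(mark_unknown(gold, analyses))
```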
In our experiments, Maca was run with the morfeusz-nkjp-official configuration, which uses the Morfeusz SGJP analyzer (Woliński, M. ''Morfeusz — a practical tool for the morphological analysis of Polish''. In: Kłopotek, M.A., Wierzchoń, S.T., Trojanowski, K. (eds.) Proceedings of the International IIS: IIPWM’06 Conference, pp. 511–520, Wisła, Poland, 2006) and no guesser module.
|| System name and URL || Approach || Main publication || License || Acc,,lower,, || Acc^K^,,lower,, || Acc^U^,,lower,, ||
|| [[http://zil.ipipan.waw.pl/PANTERA|Pantera]] || rule-based adapted Brill tagger || Acedański S. ''A Morphosyntactic Brill Tagger for Inflectional Languages''. In H. Loftsson, E. Rögnvaldsson, S. Helgadóttir (eds.) Advances in Natural Language Processing, LNCS 6233, pp. 3–14, Springer, 2010. || GPL 3 || 88.95% || 91.22% || 15.19% ||
||  || hybrid (multiclassifier) rule-based || Piasecki M., Wardyński A. ''Multiclassifier Approach to Tagging of Polish''. In Proceedings of 1st International Symposium Advances in Artificial Intelligence and Applications, 2006. ||  || % || % || % ||
||  || memory-based || Radziszewski A., Śniatowski T. ''A memory-based tagger for Polish''. In Proceedings of LTC 2011. ||  || 90.33% || 91.26% || 60.25% ||
||  || mutually dependent CRF layers || Waszczuk J. ''Harnessing the CRF complexity with domain-specific constraints. The case of morphosyntactic tagging of a highly inflected language''. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pp. 2789–2804, Mumbai, India, 2012. ||  || 91.07% || 92.06% || 58.81% ||
||  || tiered, CRF-based || Radziszewski A. ''A Tiered CRF Tagger for Polish''. In R. Bembenik, Ł. Skonieczny, H. Rybiński, M. Kryszkiewicz, M. Niezgódka (eds.) Intelligent Tools for Building a Scientific Information Platform, Springer Verlag, 2013. ||  || % || % || % ||
||  || voting ensemble || Kobyliński Ł. ''PoliTa: A multitagger for Polish''. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, S. Piperidis (eds.) Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014), pp. 2949–2954, Reykjavík, Iceland, ELRA, 2014. ||  || % || % || % ||
== Constituency parsing ==
|| System name and URL || Approach || Main publication || License || P || R || F ||
|| Spejd || rule-based || Buczyński A., Przepiórkowski A. Spejd: A shallow processing and morphological disambiguation tool. In Z. Vetulani, H. Uszkoreit (eds.) Human Language Technology: Challenges of the Information Society, LNCS 5603, pp. 131–141. Springer-Verlag, Berlin, 2009. || GPL 3 || % || % || % ||
== Dependency parsing ==
|| System name and URL || Approach || Main publication || License || LAS || UAS ||
||  || trained on the extended version of the Polish dependency treebank with MaltParser || Wróblewska A. ''Polish Dependency Parser Trained on an Automatically Induced Dependency Bank''. PhD dissertation, Institute of Computer Science, Polish Academy of Sciences, Warsaw, 2014. ||  || 84% || 89% ||
||  || trained on the same data with MateParser ||  ||  || 89% || 93% ||
||  ||  ||  ||  || % || % ||
== Deep parsing ==
== Word sense disambiguation ==
|| System name and URL || Approach || Main publication || License || P || R || F ||
||  || machine-learning ||  ||  || % || % || % ||
== Named entity recognition ==
== Sentiment analysis ==
|| System name and URL || Approach || Main publication || License || P || R || F ||
||  ||  ||  ||  || % || % || % ||
== Mention detection ==
Precision, recall and F-measure are calculated on the Polish Coreference Corpus data with two alternative mention detection scores:
 - EXACT: score of exact boundary matches (an automatic and a manual mention match if they have exactly the same boundaries; in other words, they consist of the same tokens)
 - HEAD: score of head matches (we reduce the automatic and the manual mentions to their single head tokens and compare them).
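Both scores reduce to ordinary set-based precision, recall and F-measure once mentions are represented accordingly. A minimal sketch (the mention spans and head indices below are made-up example data, not corpus values):

```python
def prf(system, gold):
    """Precision / recall / F1 over two sets of mentions."""
    tp = len(system & gold)
    p = tp / len(system) if system else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# EXACT: mentions are compared by their full token-boundary spans,
# so the (5, 7) system mention fails to match the (5, 6) gold one.
gold_exact = {(0, 3), (5, 6), (8, 12)}
sys_exact = {(0, 3), (5, 7), (8, 12)}
print(prf(sys_exact, gold_exact))

# HEAD: the same mentions reduced to single head-token indices;
# here the boundary disagreement disappears and all three match.
gold_head = {1, 5, 10}
sys_head = {1, 5, 10}
print(prf(sys_head, gold_head))
```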
|| System name and URL || Approach || Main publication || License |||||| EXACT |||||| HEAD ||
||  ||  ||  ||  || P || R || F || P || R || F ||
||  || Collects mention candidates from available sources: morphosyntactic, shallow parsing, named entity and/or zero anaphora detection tools || Ogrodniczuk M., Głowińska K., Kopeć M., Savary A., Zawisławska M. ''Coreference in Polish: Annotation, Resolution and Evaluation'', chapter 10.6. Walter De Gruyter, 2015. || CC BY 3 || 66.79% || 67.21% || 67.00% || 88.29% || 89.41% || 88.85% ||
== Coreference resolution ==
As there is still no consensus on a single best coreference resolution metric, the CoNLL measure is used (the average of the MUC, B^3^ and CEAFE F-measure values). For end-to-end systems the CoNLL-2011 shared task-based approach is used, so two result calculation strategies are presented:
 - INTERSECT: consider only correct system mentions (i.e. the intersection between gold and system mentions)
 - TRANSFORM: unify the system and gold mention sets using the following procedure for twinless mentions (mentions without a corresponding mention in the other set):
  - insert twinless gold mentions into the system mention set as singletons
  - remove twinless singleton system mentions
  - insert twinless non-singleton system mentions into the gold set as singletons.
The results are produced on the Polish Coreference Corpus data.
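The two pieces described above can be sketched schematically, assuming clusters are represented as frozensets of hypothetical mention identifiers (an illustration, not the evaluation code behind the reported numbers):

```python
def conll_score(muc_f, b3_f, ceafe_f):
    """CoNLL measure: plain average of the MUC, B3 and CEAF-E F-measures."""
    return (muc_f + b3_f + ceafe_f) / 3

def transform(sys_clusters, gold_clusters):
    """TRANSFORM unification of the system and gold mention sets."""
    gold_mentions = {m for c in gold_clusters for m in c}
    sys_mentions = {m for c in sys_clusters for m in c}
    twinless_gold = gold_mentions - sys_mentions
    twinless_sys = sys_mentions - gold_mentions

    # 1. Insert twinless gold mentions into the system set as singletons.
    new_sys = set(sys_clusters) | {frozenset([m]) for m in twinless_gold}
    # 2. Remove twinless singleton system mentions.
    new_sys = {c for c in new_sys
               if not (len(c) == 1 and set(c) <= twinless_sys)}
    # 3. Insert twinless non-singleton system mentions into the gold set
    #    as singletons.
    in_nonsingleton = {m for c in sys_clusters if len(c) > 1 for m in c}
    new_gold = set(gold_clusters) | {frozenset([m])
                                     for m in in_nonsingleton & twinless_sys}
    return new_sys, new_gold

# Toy example: "x" and "y" are twinless system mentions, "b" and "c"
# are twinless gold mentions.
sys_clusters = {frozenset({"a", "x"}), frozenset({"y"})}
gold_clusters = {frozenset({"a", "b"}), frozenset({"c"})}
print(transform(sys_clusters, gold_clusters))
```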
|| System name and URL || Approach || Main publication || License || GOLD || EXACT INTERSECT || EXACT TRANSFORM || HEAD INTERSECT || HEAD TRANSFORM ||
||  || rule-based || Ogrodniczuk M., Kopeć M. ''End-to-end coreference resolution baseline system for Polish''. In Z. Vetulani (ed.), Proceedings of the 5th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 167–171, Poznań, Poland, 2011. || CC BY 3 || 73.40% || 78.54% || 66.55% || 76.27% || 70.11% ||
||  || statistical || Kopeć M., Ogrodniczuk M. ''Creating a Coreference Resolution System for Polish''. In Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012, pp. 192–195, ELRA. || CC BY 3 || 78.41% || 80.86% || 68.96% || 78.58% || 72.15% ||
== Summarization ==
|| System name and URL || Approach || Main publication || License || P || R || F ||
||  ||  ||  ||  || % || % || % ||
||  ||  ||  ||  || % || % || % ||