Benchmarks

This page documents the performance of the most popular contemporary NLP systems for Polish.

Single-word lemmatization and morphological analysis

| System name and URL | Approach | Main publication | License | P | R | F |
|---|---|---|---|---|---|---|
| Morfeusz | | Woliński, M. (2006). Morfeusz — a practical tool for the morphological analysis of Polish. In M.A. Kłopotek, S.T. Wierzchoń, K. Trojanowski (eds.) Proceedings of the International IIS:IIPWM 2006 Conference, pp. 511–520. | 2-clause BSD | % | % | % |
| Morfologik | | Miłkowski M. (2010). Developing an open-source, rule-based proofreading tool. Software: Practice and Experience, 40(7):543–566. | | % | % | % |
| LemmaPL | dictionary-based rules and heuristics | Kobyliński Ł. (unpublished) | GPL | % | % | % |

Multi-word lemmatization

| System name and URL | Approach | Main publication | License | Accuracy |
|---|---|---|---|---|
| | rule-based | Degórski, Ł. (2012). Towards the lemmatisation of Polish nominal syntactic groups using a shallow grammar. In P. Bouvry, M.A. Kłopotek, F. Leprevost, M. Marciniak, A. Mykowiecka, H. Rybiński (eds.) Security and Intelligent Information Systems, Lecture Notes in Computer Science vol. 7053, pp. 370–378, Springer-Verlag Berlin Heidelberg. | ? | 82.90% |
| | automatic generation of lemmatization rules using CRF | Radziszewski A. (2013). Learning to lemmatise Polish noun phrases. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013), Volume 1: Long Papers. ACL, pp. 701–709. | GPL | 80.70% |
| | automatic generation of lemmatization rules from a corpus | Abramowicz W., Filipowska A., Małyszko J., Wagner T. (2015). Lemmatization of Multi-Word Entity Names for Polish Language Using Rules Automatically Generated Based on the Corpus Analysis. In Z. Vetulani, J. Mariani (eds.) Human Language Technologies as a Challenge for Computer Science and Linguistics, Fundacja Uniwersytetu im. A. Mickiewicza, pp. 540–544, Poznań. | ? | 82.10% |
| Polem | dictionary-based rules and heuristics | Marcińczuk M. (2017). Lemmatization of Multi-word Common Noun Phrases and Named Entities in Polish. In Proceedings of Recent Advances in Natural Language Processing, pp. 483–491. | GPL | 97.99% |

Disambiguated POS tagging

The comparisons are performed using plain text as input, reporting the accuracy lower bound (Acc_lower) metric proposed by Radziszewski and Acedański (2012). The metric penalizes all segmentation changes with regard to the gold standard and treats such tokens as misclassified. Furthermore, we report separate metric values for known and unknown words to assess the performance of the guesser modules built into the taggers; these are denoted AccK_lower for known and AccU_lower for unknown words.
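The metric can be summarized in a short sketch. The following Python fragment is a minimal illustration, not the official evaluation code; it assumes both tokenizations are given as character-offset spans over the same plain text, and all names are illustrative:

```python
def acc_lower(gold_tokens, sys_tokens):
    """Accuracy lower bound: tokens whose segmentation changed count as errors.

    gold_tokens, sys_tokens: lists of (start_offset, end_offset, tag) triples
    computed over the same underlying plain text.
    """
    sys_by_span = {(s, e): tag for s, e, tag in sys_tokens}
    correct = 0
    for start, end, gold_tag in gold_tokens:
        # A token counts as correct only if the system produced exactly the
        # same span (identical segmentation) with the correct tag; any token
        # with changed segmentation is penalized as misclassified.
        if sys_by_span.get((start, end)) == gold_tag:
            correct += 1
    return correct / len(gold_tokens)
```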

The experiments have been performed on the manually annotated part of the National Corpus of Polish v. 1.1 (1M tokens). A ten-fold cross-validation procedure has been followed: the methods are re-evaluated ten times, each time selecting one of ten parts of the corpus for testing and the remaining parts for training the taggers. The reported results are averages over the ten training and testing runs. Each tagger and each tagger ensemble has been trained and tested on the same set of cross-validation folds, so the results are directly comparable. Each of the training folds has been reanalyzed according to the procedure described in the WCRFT paper (Radziszewski 2013), using the Maca toolkit (Radziszewski and Śniatowski 2011). The idea of a morphological reanalysis of the gold-standard data is to let the trained tagger see input similar to what it will encounter in the tagging phase. The training data is first turned into plain text and analyzed using the same mechanism that the tagger will use during the actual tagging process. The output of the analyzer is then synchronized with the original gold-standard data, using the original tokenization. Tokens with changed segmentation are taken from the gold standard intact. For tokens whose segmentation did not change during morphological analysis, the produced interpretations are compared with the original; a token is marked as an unknown word if the analyzer has not produced the correct interpretation. Maca has been run with the morfeusz-nkjp-official configuration, which uses the Morfeusz SGJP analyzer (Woliński 2006) and no guesser module.
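A minimal sketch of the synchronization step might look as follows (the data structures and the `analyzer` interface are assumptions for illustration; the actual procedure is implemented in the Maca/WCRFT toolchain):

```python
def reanalyze(gold_sentence, analyzer):
    """Synchronize analyzer output with gold-standard tokens.

    gold_sentence: list of (orth, gold_interpretations) pairs.
    analyzer(orth) -> set of interpretations produced for that form,
    or None when the analyzer segments the text differently.
    Returns (orth, interpretations, is_unknown) triples.
    """
    reanalyzed = []
    for orth, gold_interps in gold_sentence:
        produced = analyzer(orth)
        if produced is None:
            # Segmentation changed: keep the gold-standard token intact.
            reanalyzed.append((orth, gold_interps, False))
        else:
            # Mark as unknown if the correct interpretation is missing
            # from the analyzer output.
            unknown = not (set(gold_interps) & produced)
            reanalyzed.append((orth, produced, unknown))
    return reanalyzed
```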

Tagger efficiency was compared by measuring the training and tagging times of each method on the same machine. The same 1.1M-token set was used for both the training and tagging stages. The total processing time includes model loading/saving and other I/O operations (e.g. reading and writing the tokens).

| System name and URL | Approach | Main publication | License | Acc_lower | AccK_lower | AccU_lower | Training time | Tagging time |
|---|---|---|---|---|---|---|---|---|
| OpenNLP | MaxEnt model | Kobyliński Ł., Kieraś W. (2016). Part of Speech Tagging for Polish: State of the Art and Future Perspectives. In Proceedings of the 17th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2016), Konya, Turkey. | GPL | 87.24% | 88.02% | 62.05% | 11095 s | 362 s |
| Pantera | rule-based adapted Brill tagger | Acedański S. (2010). A Morphosyntactic Brill Tagger for Inflectional Languages. In H. Loftsson, E. Rögnvaldsson, S. Helgadóttir (eds.) Advances in Natural Language Processing, LNCS 6233, pp. 3–14, Springer. | GPL 3 | 88.95% | 91.22% | 15.19% | 2624 s | 186 s |
| WMBT | memory-based | Radziszewski A., Śniatowski T. (2011). A memory-based tagger for Polish. In Z. Vetulani (ed.) Proceedings of the 5th Language and Technology Conference (LTC 2011), pp. 556–560, Poznań, Poland. | | 90.33% | 91.26% | 60.25% | 548 s | 4338 s |
| WCRFT | tiered, CRF-based | Radziszewski A. (2013). A Tiered CRF Tagger for Polish. In R. Bembenik, Ł. Skonieczny, H. Rybiński, M. Kryszkiewicz, M. Niezgódka (eds.) Intelligent Tools for Building a Scientific Information Platform, pp. 215–230, Springer Berlin Heidelberg. | LGPL 3.0 | 90.76% | 91.92% | 53.18% | 27242 s | 420 s |
| Concraft | mutually dependent CRF layers | Waszczuk J. (2012). Harnessing the CRF complexity with domain-specific constraints. The case of morphosyntactic tagging of a highly inflected language. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pp. 2789–2804, Mumbai, India. | 2-clause BSD | 91.07% | 92.06% | 58.81% | 26675 s | 403 s |
| PoliTa | voting ensemble | Kobyliński Ł. (2014). PoliTa: A multitagger for Polish. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, S. Piperidis (eds.) Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014), pp. 2949–2954, Reykjavík, Iceland, ELRA. | GPL | 92.01% | 92.91% | 62.81% | N/A | N/A |
| Toygger | bi-LSTM | Krasnowska-Kieraś K. (2017). Morphosyntactic disambiguation for Polish with bi-LSTM neural networks. In Z. Vetulani and P. Paroubek (eds.) Proceedings of the 8th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 367–371, Poznań, Poland. Fundacja Uniwersytetu im. Adama Mickiewicza w Poznaniu. | GPL | 92.01% | 92.91% | 62.81% | N/A | N/A |
| KRNNT | RNN | Wróbel K. (2017). KRNNT: Polish Recurrent Neural Network Tagger. In Z. Vetulani and P. Paroubek (eds.) Proceedings of the 8th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 386–391, Poznań, Poland. Fundacja Uniwersytetu im. Adama Mickiewicza w Poznaniu. | LGPL 3.0 | 93.72% | 94.43% | 69.03% | 245.9 s (GeForce GTX 1050M) | N/A |

Dependency parsing

Dependency parsing systems are trained and tested either on the Polish Dependency Bank (PDB) or on PDB-UD, i.e. PDB converted to the Universal Dependencies format (licensed CC BY-NC-SA 4.0).

| Parser | Model | Data | External embedding | Evaluation script | UPOS F1 | XPOS F1 | UFeats F1 | AllTags | Lemmas F1 | UAS F1 | LAS F1 | CLAS F1 | MLAS F1 | BLEX F1 | SLAS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| COMBO | COMBO-SP | PDB | FastText | poleval | | | | | | 93.08 | 89.41 | 88.64 | 88.22 | 88.64 | |
| COMBO-pytorch | COMBO2-SP | PDB | HerBERT-base | poleval | | | | | | 94.99 | 91.74 | 91.04 | 90.60 | 91.04 | |
| COMBO-pytorch | COMBO2-SP | PDB | HerBERT-large | poleval | | | | | | 95.29 | 92.03 | 91.24 | 90.83 | 91.24 | |
| COMBO | COMBO-MSP | PDB | FastText | poleval | 94.26 | 94.26 | 94.13 | 93.49 | 97.29 | 91.13 | 86.33 | 85.33 | 78.74 | 82.86 | |
| COMBO-pytorch | COMBO2-MSP | PDB | HerBERT-base | poleval | 99.02 | 96.55 | 96.69 | 96.23 | 97.75 | 94.10 | 90.66 | 90.13 | 86.02 | 87.84 | |
| COMBO-pytorch | COMBO2-MSP | PDB | HerBERT-large | poleval | 99.08 | 96.78 | 96.92 | 96.47 | 97.77 | 94.40 | 90.99 | 90.32 | 86.39 | 88.01 | |
| COMBO | COMBO-SMSP | PDB | FastText | poleval | 98.47 | 93.96 | 93.72 | 92.98 | 97.29 | 91.24 | 86.43 | 85.53 | 78.80 | 83.03 | 72.12 |
| COMBO-pytorch | COMBO2-SMSP | PDB | HerBERT-base | poleval | 99.04 | 96.54 | 96.64 | 96.19 | 97.64 | 94.15 | 90.64 | 90.04 | 85.94 | 87.61 | 85.71 |
| UDPipe | UDPipe-MSP | PDB-UD | FastText | conll | 97.34 | 88.62 | 89.16 | 88.03 | 96.08 | 87.41 | 83.94 | 80.43 | 69.98 | 76.49 | |
| UDPipe | UDPipe-MSP | PDB-UD | FastText | poleval | 97.30 | 88.61 | 89.00 | 87.96 | 96.04 | 87.41 | 80.75 | 75.48 | 65.92 | 71.83 | |
| COMBO | COMBO-MSP | PDB-UD | FastText | conll | 98.56 | 94.62 | 94.63 | 93.39 | 97.58 | 93.88 | 91.61 | 89.37 | 82.48 | 86.66 | |
| COMBO | COMBO-MSP | PDB-UD | FastText | poleval | 98.56 | 94.62 | 94.45 | 93.27 | 97.58 | 93.88 | 88.85 | 85.04 | 78.55 | 82.60 | |
| COMBO-pytorch | COMBO2-MSP | PDB-UD | HerBERT-base | conll | 99.02 | 96.42 | 96.73 | 95.73 | 97.77 | 96.07 | 94.52 | 93.17 | 88.41 | 90.30 | |
| COMBO-pytorch | COMBO2-MSP | PDB-UD | HerBERT-base | poleval | 99.02 | 96.42 | 96.64 | 95.67 | 97.77 | 96.07 | 92.19 | 89.52 | 85.04 | 86.83 | |
| COMBO-pytorch | COMBO2-MSP | PDB-UD | HerBERT-large | conll | 99.07 | 96.73 | 97.01 | 96.04 | 97.83 | 95.85 | 94.27 | 92.84 | 88.45 | 90.09 | |
| COMBO-pytorch | COMBO2-MSP | PDB-UD | HerBERT-large | poleval | 99.07 | 96.73 | 96.91 | 95.99 | 97.83 | 95.85 | 91.93 | 89.16 | 84.94 | 86.55 | |
| COMBO | COMBO-SMSP | PDB-UD | FastText | poleval | 98.48 | 94.64 | 94.36 | 93.14 | 97.43 | 93.81 | 88.84 | 85.08 | 78.44 | 82.43 | 69.64 |
| COMBO-pytorch | COMBO2-SMSP | PDB-UD | HerBERT-base | poleval | 98.99 | 96.42 | 96.64 | 95.67 | 97.74 | 95.95 | 92.07 | 89.28 | 84.72 | 86.60 | 84.32 |

  • SP (syntactic prediction) -- prediction of labelled dependency trees
  • MSP (morphosyntactic prediction) -- prediction of morphosyntactic features (i.e. LEMMA, UPOS, XPOS, FEATS) and labelled dependency trees
  • SMSP (semantic and morphosyntactic prediction) -- prediction of morphosyntactic features (i.e. LEMMA, UPOS, XPOS, FEATS), labelled dependency trees and semantic roles
  • conll -- the script (conll18_ud_eval.py) of the CoNLL 2018 shared task is used for evaluation (only UD labels are taken into account)
  • poleval -- the script (poleval2018_cykle.py) of the PolEval 2018 shared task on dependency parsing is used for evaluation (UD labels and Polish-specific sublabels are taken into account)
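For reference, the two headline attachment scores in the table above can be computed from aligned gold and system trees roughly as follows. This is a simplified sketch assuming identical tokenization; the official conll18_ud_eval.py script additionally aligns system and gold tokens and handles multiword tokens:

```python
def attachment_scores(gold, system):
    """Compute UAS and LAS over aligned token lists.

    gold, system: lists of (head_index, dependency_label) pairs,
    one entry per token, in the same order.
    """
    assert len(gold) == len(system)
    uas_hits = las_hits = 0
    for (g_head, g_label), (s_head, s_label) in zip(gold, system):
        if g_head == s_head:
            uas_hits += 1          # correct head attachment
            if g_label == s_label:
                las_hits += 1      # correct head and label
    n = len(gold)
    return uas_hits / n, las_hits / n
```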

Shallow parsing

| System name and URL | Approach | Main publication | License | P | R | F |
|---|---|---|---|---|---|---|
| Spejd | rule-based | Buczyński A., Przepiórkowski A. (2009). Spejd: A shallow processing and morphological disambiguation tool. In Z. Vetulani, H. Uszkoreit (eds.) Human Language Technology: Challenges of the Information Society, LNCS 5603, pp. 131–141. Springer-Verlag, Berlin. | GPL 3 | % | % | % |

Word sense disambiguation

A manually annotated subcorpus of the National Corpus of Polish, with 3889 texts and 1,217,822 segments, was used for training and testing (cross-validation); it contained 34,186 occurrences of multi-sense words (1–7 senses each). A simple heuristic selecting the most frequent sense results in 78.3% accuracy. The table presents the results of leave-one-out evaluation with individual treatment of each ambiguous word (as described in the publication).
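As a point of reference, the most-frequent-sense baseline mentioned above can be sketched as follows (the data structures are hypothetical; the real experiments used the NCP sense annotation):

```python
from collections import Counter

def mfs_baseline(train, test):
    """Most-frequent-sense baseline for word sense disambiguation.

    train, test: lists of (lemma, sense) pairs, one per occurrence of an
    ambiguous word. Each test occurrence is labelled with the sense that
    is most frequent for its lemma in the training data.
    """
    counts = {}
    for lemma, sense in train:
        counts.setdefault(lemma, Counter())[sense] += 1
    correct = sum(
        1 for lemma, sense in test
        if counts.get(lemma) and counts[lemma].most_common(1)[0][0] == sense
    )
    return correct / len(test)
```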

| System name and URL | Approach | Main publication | License | Accuracy |
|---|---|---|---|---|
| WSDDE | machine learning | Kopeć M., Młodzki R., Przepiórkowski A. (2012). Automatyczne znakowanie sensami słów [Automatic word sense annotation]. In A. Przepiórkowski, M. Bańko, R.L. Górski, B. Lewandowska-Tomaszczyk (eds.) Narodowy Korpus Języka Polskiego, pp. 209–224. Wydawnictwo Naukowe PWN, Warsaw. | GPL 3 | 91.46% |

Named entity recognition

The table below reflects the PolEval 2018 Named Entity Recognition task, plus several new systems made available more recently; see the training data (the 1M NKJP corpus) and the testing data with the evaluation script.

| System name and URL | Approach | Main publication | Weighted F1 |
|---|---|---|---|
| NER model for SpacyPL | | Tuora R., Kobyliński Ł. (2019). Integrating Polish Language Tools and Resources in spaCy. In Proceedings of the PP-RAI 2019 Conference, pp. 210–214. | 0.8752 |
| Per group LSTM-CRF with C. S. E. | per-group LSTM-CRF with Contextual String Embeddings | | 0.851 |
| PolDeepNer | bidirectional GRU/LSTM with CRF | Marcińczuk M., Kocoń J., Gawor M. (2018). Recognition of Named Entities for Polish — Comparison of Deep Learning and Conditional Random Fields Approaches. In Proceedings of the PolEval 2018 Workshop. | 0.866 |
| Liner2 | CRF | Marcińczuk M., Kocoń J., Oleksy M. (2017). Liner2 — a Generic Framework for Named Entity Recognition. In Proceedings of the 6th Workshop on Balto-Slavic Natural Language Processing. | 0.810 |
| OPI Z3 | ??? | | 0.793 |
| joint | ??? | | 0.780 |
| disjoint | ??? | | 0.779 |
| via ner | LSTM+CRF | | 0.756 |
| NERF + polimorf | CRF | Savary A., Waszczuk J., Przepiórkowski A. (2010). Towards the annotation of named entities in the National Corpus of Polish. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010), pp. 3622–3629. ELRA. See also Chapters 9 and 13 in the NKJP book (in Polish). | 0.739 |
| NERF | CRF | | 0.735 |
| kner sep | ??? | | 0.733 |
| Poleval2k18 | ??? | | 0.719 |
| KNER | RNN/CNN + CRF | | 0.711 |
| simple ner | bidirectional GRU | | 0.636 |

Sentiment analysis

| System name and URL | Approach | Main publication | License | Accuracy |
|---|---|---|---|---|
| Sentipejd | dictionary + rules | Buczyński A., Wawer A. (2008). Shallow parsing in sentiment analysis of product reviews. In S. Kübler, J. Piskorski, A. Przepiórkowski (eds.) Proceedings of the LREC 2008 Workshop on Partial Parsing: Between Chunking and Deep Parsing, pp. 14–18, Marrakech, ELRA. | API usage | 0.782* |

* measured at the document level with a C5.0 classifier

The systems below have been evaluated on the PolEval 2017 test data, using the Polish sentiment treebank version 1.0.

| System name and URL | Approach | Main publication | License | Accuracy |
|---|---|---|---|---|
| Tree-LSTM-NR | Tree-LSTM | Ryciak N. (2017). Polish Language Sentiment Analysis with Tree-Structured Long Short-Term Memory Network. In Z. Vetulani and P. Paroubek (eds.) Proceedings of the 8th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 402–405, Poznań, Poland. Fundacja Uniwersytetu im. Adama Mickiewicza w Poznaniu. | Open source | 0.795 |
| Alan Turing climbs a tree | Tree-LSTM | Żak P., Korbak T. (2017). Fine-tuning Tree-LSTM for phrase-level sentiment classification on a Polish dependency treebank. In Z. Vetulani and P. Paroubek (eds.) Proceedings of the 8th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 392–396, Poznań, Poland. Fundacja Uniwersytetu im. Adama Mickiewicza w Poznaniu. | Open source | 0.805 |
| Tree-LSTM-ML | Tree-LSTM | Lew M., Pęzik P. (2017). A Sequential Child-Combination Tree-LSTM Network for Sentiment Analysis. In Z. Vetulani and P. Paroubek (eds.) Proceedings of the 8th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 397–401, Poznań, Poland. Fundacja Uniwersytetu im. Adama Mickiewicza w Poznaniu. | Open source | 0.768 |

Mention detection

Precision, recall and F-measure are calculated on the Polish Coreference Corpus data with two alternative mention detection scores (illustrated in the sketch after the list):

  • EXACT: score of exact boundary matches (an automatic and a manual mention match if they have exactly the same boundaries; in other words, they consist of the same tokens)
  • HEAD: score of head matches (we reduce the automatic and the manual mentions to their single head tokens and compare them).
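A minimal sketch of the two matching regimes (the data structures and the `span`/`head` attributes are hypothetical; the official scorer operates on Polish Coreference Corpus annotations):

```python
def mention_prf(gold, system, key):
    """Precision/recall/F1 for mention detection.

    gold, system: lists of mentions; key maps a mention to its comparison
    key, e.g. its full token-boundary span for EXACT, or its single head
    token for HEAD.
    """
    gold_keys = {key(m) for m in gold}
    sys_keys = {key(m) for m in system}
    hits = len(gold_keys & sys_keys)
    p = hits / len(sys_keys) if sys_keys else 0.0
    r = hits / len(gold_keys) if gold_keys else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# EXACT: mentions match iff they span exactly the same tokens.
# exact_scores = mention_prf(gold, system, key=lambda m: m.span)
# HEAD: mentions are reduced to their head tokens before comparison.
# head_scores = mention_prf(gold, system, key=lambda m: m.head)
```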

| System name and URL | Approach | Main publication | License | EXACT P | EXACT R | EXACT F | HEAD P | HEAD R | HEAD F |
|---|---|---|---|---|---|---|---|---|---|
| MentionDetector | rule-based | Ogrodniczuk M., Głowińska K., Kopeć M., Savary A., Zawisławska M. (2015). Coreference in Polish: Annotation, Resolution and Evaluation, chapter 10.6. Walter De Gruyter. | CC BY 3 | 70.11% | 68.13% | 69.10% | 90.07% | 88.21% | 89.12% |
| MentionStat | statistical | Ogrodniczuk M. (2019). Automatyczne wykrywanie nominalnych zależności referencyjnych w języku polskim [Automatic detection of nominal referential dependencies in Polish]. Wydawnictwa Uniwersytetu Warszawskiego. | CC BY 3 | 74.34% | 69.41% | 71.79% | 92.27% | 90.21% | 91.23% |

Coreference resolution

As there is still no consensus on a single best coreference resolution metric, the CoNLL measure (the average of MUC, B3 and CEAFE F1) is used to rank systems, following the CoNLL-2011 shared task, with two calculation strategies applied to both EXACT mention borders and semantic HEADs (see the sketch after the list):

  • INTERSECT: consider only correct system mentions (i.e. the intersection between gold and system mentions)
  • TRANSFORM: unify the system and gold mention sets using the following procedure for twinless mentions (mentions without a corresponding mention in the other set):
    1. insert twinless gold mentions into the system mention set as singletons
    2. remove twinless singleton system mentions
    3. insert twinless non-singleton system mentions into the gold set as singletons.
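A sketch of the TRANSFORM unification step (a set-based simplification with hypothetical data structures; the real scorer works on Polish Coreference Corpus files):

```python
def transform(gold_mentions, sys_mentions, sys_cluster_size):
    """Unify system and gold mention sets for scoring.

    gold_mentions, sys_mentions: sets of (hashable) mentions.
    sys_cluster_size: dict mapping a system mention to the size of its
    system cluster. Returns the adjusted (gold, system) mention sets.
    """
    twinless_gold = gold_mentions - sys_mentions
    twinless_sys = sys_mentions - gold_mentions

    # 1. Twinless gold mentions enter the system set as singletons.
    sys_out = sys_mentions | twinless_gold
    gold_out = set(gold_mentions)

    for m in twinless_sys:
        if sys_cluster_size.get(m, 1) == 1:
            # 2. Twinless singleton system mentions are removed.
            sys_out.discard(m)
        else:
            # 3. Twinless non-singleton system mentions enter the gold
            #    set as singletons.
            gold_out.add(m)
    return gold_out, sys_out
```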

The results are produced on the Polish Coreference Corpus data, with 10-fold cross-validation.

| System name and URL | Approach | Main publication | License | GOLD | EXACT INTERSECT | EXACT TRANSFORM | HEAD INTERSECT | HEAD TRANSFORM |
|---|---|---|---|---|---|---|---|---|
| Ruler | rule-based | Ogrodniczuk M., Kopeć M. (2011). End-to-end coreference resolution baseline system for Polish. In Z. Vetulani (ed.) Proceedings of the 5th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 167–171, Poznań, Poland. | CC BY 3 | 74.10% | 78.86% | 68.88% | 77.05% | 72.19% |
| Bartek5 | statistical | Kopeć M., Ogrodniczuk M. (2012). Creating a Coreference Resolution System for Polish. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012), pp. 192–195, ELRA. | CC BY 3 | 80.50% | 82.71% | 73.96% | 80.00% | 74.82% |
| BartekS2 | sieve-based | Nitoń B., Ogrodniczuk M. (2017). Multi-Pass Sieve Coreference Resolution System for Polish. In J. Gracia, F. Bond, J. P. McCrae, P. Buitelaar, Ch. Chiarcos, S. Hellmann (eds.) Proceedings of the 1st Conference on Language, Data and Knowledge (LDK 2017). | CC BY 3 | 80.70% | 82.49% | 73.21% | 80.85% | 75.55% |
| Corneferencer | neural | Nitoń B., Morawiecki P., Ogrodniczuk M. (2018). Deep neural networks for coreference resolution for Polish. In N. Calzolari, K. Choukri, C. Cieri, T. Declerck, S. Goggi, K. Hasida, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, S. Piperidis, T. Tokunaga (eds.) Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018), pp. 395–400, ELRA. | CC BY 3 | 80.59% | 82.73% | 73.39% | 80.89% | 76.03% |
| Hybrid | hybrid | Ogrodniczuk M. (2019). Automatyczne wykrywanie nominalnych zależności referencyjnych w języku polskim [Automatic detection of nominal referential dependencies in Polish]. Wydawnictwa Uniwersytetu Warszawskiego. | CC BY 3 | 81.09% | 82.54% | 73.17% | 81.04% | 75.74% |

Summarization

The table presents the results of evaluation on the Polish Summaries Corpus (154 texts, each with 5 abstractive summaries of 20% of the original document's word count). ROUGE covers several metrics, with the following variants used here:

  • ROUGE-n, which counts n-gram co-occurrences between the reference (gold) summaries and the system summary,

  • ROUGE-Mn, the ROUGE-n score calculated for each text using the single manual summary which gives the highest score as the reference.

The evaluation tool source code is available here.
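A minimal sketch of the ROUGE-n co-occurrence computation (a simplified recall-oriented variant; the actual evaluation tool linked above may differ in tokenization and averaging, and ROUGE-Mn then takes, per text, the maximum ROUGE-n over the manual summaries):

```python
from collections import Counter

def rouge_n(reference, candidate, n):
    """Recall-oriented ROUGE-n: fraction of reference n-grams also
    present in the candidate summary, with clipped counts.

    reference, candidate: lists of tokens.
    """
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))

    ref, cand = ngrams(reference), ngrams(candidate)
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0
```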

| System name and URL | Approach | Main publication | License | ROUGE-1 | ROUGE-2 | ROUGE-3 | ROUGE-M1 | ROUGE-M2 | ROUGE-M3 |
|---|---|---|---|---|---|---|---|---|---|
| PolSum | ? | Ciura M., Grund D., Kulików S., Suszczańska N. (2004). A System to Adapt Techniques of Text Summarizing to Polish. In A. Okatan (ed.) International Conference on Computational Intelligence, pp. 117–120, Istanbul, Turkey. International Computational Intelligence Society. | ? | ?% | ?% | ?% | ?% | ?% | ?% |
| Open Text Summarizer | word-frequency-based sentence extraction | -- | GNU GPL 2.0 | 51.3% | 13.6% | 9.0% | 58.5% | 22.5% | 17.9% |
| Emily-C | sentence extraction with machine learning | Kopeć M. (2015). Coreference-based Content Selection for Automatic Summarization of Polish News. In ITRIA 2015: Selected Problems in Information Technologies, pp. 23–46. | GNU GPL 3.0 | 52.9% | 15.1% | 10.0% | 58.8% | 24.2% | 19.2% |
| Emily-S | sentence extraction with machine learning | | GNU GPL 3.0 | 53.0% | 15.4% | 10.4% | 59.4% | 24.7% | 20.1% |
| TextRank | unsupervised, graph-based sentence extraction | Barrios F., López F., Argerich L., Wachenchauzer R. (2015). Variations of the Similarity Function of TextRank for Automated Summarization. arXiv:1602.03606. | MIT | 56.5% | 19.2% | 12.8% | 63.3% | 30.0% | 23.8% |
| Lakon | sentence extraction relying on one of: positional heuristics, word frequency features, lexical chains information | Dudczak A. (2007). Zastosowanie wybranych metod eksploracji danych do tworzenia streszczeń tekstów prasowych dla języka polskiego [Application of selected data mining methods to the summarization of Polish press texts]. MSc thesis, Poznań Technical University. | 3-clause BSD | 55.4% | 20.8% | 14.4% | 62.9% | 33.3% | 27.4% |
| Summarizer | sentence extraction with machine learning | Świetlicka J. (2010). Metody maszynowego uczenia w automatycznym streszczaniu tekstów [Machine learning methods in automatic text summarization]. MSc thesis, Warsaw University. | GNU GPL 3.0 | 58.0% | 22.6% | 16.1% | 65.4% | 35.8% | 29.8% |
| Nicolas | sentence extraction with machine learning | Kopeć M. (2018). Summarization of Polish Press Articles Using Coreference. PhD dissertation. | GNU GPL 3.0 | 59.1% | 24.2% | 17.5% | 67.9% | 39.8% | 33.9% |

Language models

Given a set of sentences in Polish in random order, each with original punctuation, the goal of the task is to create a language model for Polish. Please see the segmented training data and segmented testing data. Evaluation is based on perplexity; please see the instructions on OOV rate and perplexity.
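Perplexity can be computed from a model's per-token log-probabilities roughly as follows (a minimal sketch; the exact treatment of OOV tokens follows the instructions linked above):

```python
import math

def perplexity(log_probs):
    """Perplexity from natural-log token probabilities.

    log_probs: list of ln P(token | context) for every token in the test
    data (OOV tokens excluded or handled per the task instructions).
    """
    avg_neg_log_likelihood = -sum(log_probs) / len(log_probs)
    return math.exp(avg_neg_log_likelihood)
```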

| System name and URL | Main publication | Perplexity |
|---|---|---|
| ULMFiT-SP-PL | Kardas M., Howard J., Czapla P. (2018). Universal Language Model Fine-Tuning with Subword Tokenization for Polish. | 117.6705 |
| AGH-UJ | Please see the model. | 146.7082 |
| PocoLM Order 6 | | 208.6297 |