
= Benchmarks =

This page documents the performance of various NLP systems for Polish.

== Lemmatization ==

== POS tagging ==

== Shallow parsing ==

== Dependency parsing ==

== Deep parsing ==

== Word sense disambiguation ==

== Named entity recognition ==

== Sentiment analysis ==

== Mention detection ==

=== Test set ===

Polish Coreference Corpus




Precision, recall and F-measure are calculated with two alternative mention detection scores:
 * EXACT: score of exact boundary matches (an automatic and a manual mention match if they have exactly the same boundaries; in other words, they consist of the same tokens)
 * HEAD: score of head matches (the automatic and the manual mentions are reduced to their single head tokens and compared).
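The two scoring regimes above can be sketched as follows. This is a minimal illustration with made-up spans, not the official corpus scorer; mentions are assumed to be (start, end) token spans with a known head token index.

```python
# Sketch of EXACT vs HEAD mention-detection scoring.
# Hypothetical data and representation; not the official evaluation tool.

def prf(gold, system):
    """Precision, recall and F1 over two sets of comparable items."""
    tp = len(gold & system)
    p = tp / len(system) if system else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# EXACT: a system mention counts only if its token boundaries
# exactly match a gold mention's boundaries.
gold_spans = {(0, 2), (4, 4), (7, 9)}
sys_spans = {(0, 2), (4, 5), (7, 9)}
print("EXACT P/R/F:", prf(gold_spans, sys_spans))

# HEAD: reduce each mention to its single head token and compare those;
# here the mismatched spans (4, 4) and (4, 5) share head token 4.
gold_heads = {1, 4, 8}
sys_heads = {1, 4, 8}
print("HEAD P/R/F:", prf(gold_heads, sys_heads))
```

Under HEAD scoring the boundary mismatch is forgiven, which is why the HEAD figures in the table below are higher than the EXACT ones.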

=== Results ===

||<|2> '''System name''' ||<|2> '''Short description''' ||<|2> '''Main publication''' ||<|2> '''License''' |||||| '''EXACT''' |||||| '''HEAD''' ||
||'''P''' || '''R''' || '''F''' ||'''P''' || '''R''' || '''F''' ||
|| [[http://zil.ipipan.waw.pl/MentionDetector|Mention Detector]] || Collects mention candidates from available sources – morphosyntactic, shallow parsing, named entity and/or zero anaphora detection tools || Ogrodniczuk M., Głowińska K., Kopeć M., Savary A., Zawisławska M. 'Coreference in Polish: Annotation, Resolution and Evaluation', chapter 10.6. Walter De Gruyter, 2015. || CC BY 3 || 66.79% || 67.21% || 67.00% || 88.29% || 89.41% || 88.85% ||




== Coreference resolution ==

=== Test set ===

Polish Coreference Corpus

=== Results ===

|| '''System name''' || '''Short description''' || '''Main publication''' || '''License''' || '''P''' || '''R''' || '''F''' ||
|| Ruler || Rule-based || Ogrodniczuk M., Kopeć M. 'End-to-end coreference resolution baseline system for Polish'. In Z. Vetulani (ed.), Proceedings of the 5th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 167–171, Poznań, Poland, 2011. || CC BY 3 || || || ||
|| Bartek || Statistical || Kopeć M., Ogrodniczuk M. 'Creating a Coreference Resolution System for Polish'. In Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012, pp. 192–195, ELRA. || CC BY 3 || || || ||

== Summarization ==