Diff for "benchmarks"

Differences between revisions 3 and 21 (spanning 18 versions)

 * Revision 3 as of 2016-10-19 22:25:57 (size: 11207)
 * Revision 21 as of 2016-10-27 23:12:22 (size: 3556)
## page was renamed from Benchmarks
This page documents performance of various NLP systems for Polish.
== Test collections ==
* '''Performance measure:''' per token accuracy. (The convention is for this to be measured on all tokens, including punctuation tokens and other unambiguous tokens.)
* '''English'''
** '''Penn Treebank''' ''Wall Street Journal'' (WSJ) release 3 (LDC99T42). The splits of data for this task were not standardized early on (unlike for parsing) and early work uses various data splits defined by counts of tokens or by sections. Most work from 2002 on adopts the following data splits, introduced by Collins (2002):
*** '''Training data:''' sections 0-18
*** '''Development test data:''' sections 19-21
*** '''Testing data:''' sections 22-24
== Morphological analysis ==
* '''French'''
** '''French TreeBank''' (FTB, Abeillé et al., 2003) ''Le Monde'', December 2007 version, 28-tag tagset (CC tagset, Crabbé and Candito, 2008). Classical data split (10-10-80):
*** '''Training data:''' sentences 2471 to 12351
*** '''Development test data:''' sentences 1236 to 2470
*** '''Testing data:''' sentences 1 to 1235
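The accuracy figures in the tables below follow the convention stated above. A minimal sketch of the scoring (the function and data layout are illustrative, not any benchmark's actual evaluation script):

```python
def tagging_accuracy(gold, pred, train_vocab):
    """Per-token accuracy over ALL tokens (punctuation and unambiguous
    tokens included), plus accuracy restricted to unknown words, i.e.
    word forms never seen in the training sections.

    gold, pred: parallel lists of (word, tag) pairs for the test data.
    train_vocab: set of word forms occurring in the training data.
    """
    assert len(gold) == len(pred)
    correct = unk_correct = unk_total = 0
    for (word, g_tag), (_, p_tag) in zip(gold, pred):
        hit = g_tag == p_tag
        correct += hit
        if word not in train_vocab:        # the "Unknown words" column
            unk_total += 1
            unk_correct += hit
    all_acc = correct / len(gold)
    unk_acc = unk_correct / unk_total if unk_total else float("nan")
    return all_acc, unk_acc
```

The two returned values correspond to the "All tokens" and "Unknown words" columns in the result tables.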
 : Morfeusz, Concraft/WCRFT, Spejd, Dependency Parser, TIMEX/Nerf

== POS tagging ==

== Shallow parsing ==

== Dependency parsing ==
== Tables of results ==

=== WSJ ===

{| border="1" cellpadding="5" cellspacing="1" width="100%"
|-
! System name
! Short description
! Main publication
! Software
! Extra Data?***
! All tokens
! Unknown words
! License
|-
| TnT*
| Hidden Markov model
| Brants (2000)
| [http://www.coli.uni-saarland.de/~thorsten/tnt/ TnT]
| No
| 96.46%
| 85.86%
| Academic/research use only ([http://www.coli.uni-saarland.de/~thorsten/tnt/tnt-license.html license])
|-
| MElt
| MEMM with external lexical information
| Denis and Sagot (2009)
| [https://gforge.inria.fr/projects/lingwb/ Alpage linguistic workbench]
| No
| 96.96%
| 91.29%
| CeCILL-C
|-
| GENiA Tagger**
| Maximum entropy cyclic dependency network
| Tsuruoka, et al (2005)
| [http://www.nactem.ac.uk/tsujii/GENIA/tagger/ GENiA]
| No
| 97.05%
| Not available
| Gratis for non-commercial usage
|-
| Averaged Perceptron
| Averaged perceptron discriminative sequence model
| Collins (2002)
| Not available
| No
| 97.11%
| Not available
| Unknown
|-
| Maxent easiest-first
| Maximum entropy bidirectional easiest-first inference
| Tsuruoka and Tsujii (2005)
| [http://www-tsujii.is.s.u-tokyo.ac.jp/~tsuruoka/postagger/ Easiest-first]
| No
| 97.15%
| Not available
| Unknown
|-
| SVMTool
| SVM-based tagger and tagger generator
| Giménez and Márquez (2004)
| [http://www.lsi.upc.es/~nlp/SVMTool/ SVMTool]
| No
| 97.16%
| 89.01%
| LGPL 2.1
|-
| LAPOS
| Perceptron based training with lookahead
| Tsuruoka, Miyao, and Kazama (2011)
| [http://www.logos.t.u-tokyo.ac.jp/~tsuruoka/lapos/ LAPOS]
| No
| 97.22%
| Not available
| MIT
|-
| Morče/COMPOST
| Averaged Perceptron
| Spoustová et al. (2009)
| [http://ufal.mff.cuni.cz/compost COMPOST]
| No
| 97.23%
| Not available
| Non-free ([http://ufal.mff.cuni.cz/compost/register.php academic-only])
|-
| Morče/COMPOST
| Averaged Perceptron
| Spoustová et al. (2009)
| [http://ufal.mff.cuni.cz/compost COMPOST]
| Yes
| 97.44%
| Not available
| Unknown
|-
| Stanford Tagger 1.0
| Maximum entropy cyclic dependency network
| Toutanova et al. (2003)
| [http://nlp.stanford.edu/software/tagger.shtml Stanford Tagger]
| No
| 97.24%
| 89.04%
| GPL v2+
|-
| Stanford Tagger 2.0
| Maximum entropy cyclic dependency network
| Manning (2011)
| [http://nlp.stanford.edu/software/tagger.shtml Stanford Tagger]
| No
| 97.29%
| 89.70%
| GPL v2+
|-
| Stanford Tagger 2.0
| Maximum entropy cyclic dependency network
| Manning (2011)
| [http://nlp.stanford.edu/software/tagger.shtml Stanford Tagger]
| Yes
| 97.32%
| 90.79%
| GPL v2+
|-
| LTAG-spinal
| Bidirectional perceptron learning
| Shen et al. (2007)
| [http://www.cis.upenn.edu/~xtag/spinal/ LTAG-spinal]
| No
| 97.33%
| Not available
| Unknown
|-
| SCCN
| Semi-supervised condensed nearest neighbor
| Søgaard (2011)
| [http://cst.dk/anders/scnn/ SCCN]
| Yes
| 97.50%
| Not available
| Unknown
|-
| CharWNN
| MLP with Neural Character Embeddings
| dos Santos and Zadrozny (2014)
| Not available
| No
| 97.32%
| 89.86%
| Unknown
|-
| structReg
| CRFs with structure regularization
| Sun (2014)
| Not available
| No
| 97.36%
| Not available
| Unknown
|-
| BI-LSTM-CRF
| Bidirectional LSTM-CRF Model
| Huang et al. (2015)
| Not available
| No
| 97.55%
| Not available
| Unknown
|-
| NLP4J
| Dynamic Feature Induction
| Choi (2016)
| [https://github.com/emorynlp/nlp4j NLP4J]
| Yes
| 97.64%
| 92.03%
| Apache 2
|}

(*) TnT: Accuracy is as reported by Giménez and Márquez (2004) for the given test collection. Brants (2000) reports 96.7% token accuracy and 85.5% unknown word accuracy on a 10-fold cross-validation of the Penn WSJ corpus.

(**) GENiA: Results are for models trained and tested on the given corpora (to be comparable to other results). The distributed GENiA tagger is trained on a mixed training corpus and gets 96.94% on WSJ, and 98.26% on GENiA biomedical English.

(***) Extra data: Whether system training exploited (usually large amounts of) extra unlabeled text, such as by semi-supervised learning, self-training, or using distributional similarity features, beyond the standard supervised training data.

=== FTB ===

{| border="1" cellpadding="5" cellspacing="1" width="100%"
|-
! System name
! Short description
! Main publication
! Software
! Extra Data?***
! All tokens
! Unknown words
! License
|-
| Morfette
| Perceptron with external lexical information*
| Chrupała et al. (2008), Seddah et al. (2010)
| [http://sites.google.com/site/morfetteweb/ Morfette]
| No
| 97.68%
| 90.52%
| New BSD
|-
| SEM
| CRF with external lexical information*
| Constant et al. (2011)
| [http://www.univ-orleans.fr/lifo/Members/Isabelle.Tellier/SEM.html SEM]
| No
| 97.7%
| Not available
| "GNU"(?)
|-
| MElt
| MEMM with external lexical information*
| Denis and Sagot (2009)
| [https://gforge.inria.fr/projects/lingwb/ Alpage linguistic workbench]
| No
| 97.80%
| 91.77%
| CeCILL-C
|}

(*) External lexical information from the Lefff lexicon (Sagot 2010, [https://gforge.inria.fr/frs/?group_id=482 Alexina project])

== References ==

* Brants, Thorsten. 2000. [http://acl.ldc.upenn.edu/A/A00/A00-1031.pdf TnT -- A Statistical Part-of-Speech Tagger]. ''6th Applied Natural Language Processing Conference''.

* Chrupała, Grzegorz, Dinu, Georgiana and van Genabith, Josef. 2008. [http://www.lrec-conf.org/proceedings/lrec2008/pdf/594_paper.pdf Learning Morphology with Morfette]. ''LREC 2008''.

* Collins, Michael. 2002. [http://people.csail.mit.edu/mcollins/papers/tagperc.pdf Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms]. ''EMNLP 2002''.

* Constant, Matthieu, Tellier, Isabelle, Duchier, Denys, Dupont, Yoann, Sigogne, Anthony, and Billot, Sylvie. 2011. [http://www.lirmm.fr/~lopez/TALN2011/Longs-TALN+RECITAL/Tellier_taln11_submission_54.pdf Intégrer des connaissances linguistiques dans un CRF : application à l'apprentissage d'un segmenteur-étiqueteur du français] (Integrating linguistic knowledge into a CRF: application to learning a segmenter-tagger for French). ''TALN 2011''.

* Denis, Pascal and Sagot, Benoît. 2009. [http://alpage.inria.fr/~sagot/pub/paclic09tagging.pdf Coupling an annotated corpus and a morphosyntactic lexicon for state-of-the-art POS tagging with less human effort]. ''PACLIC 2009''.

* Giménez, J., and Márquez, L. 2004. [http://www.lsi.upc.es/~nlp/SVMTool/lrec2004-gm.pdf SVMTool: A general POS tagger generator based on Support Vector Machines]. ''Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC'04)''. Lisbon, Portugal.

* Manning, Christopher D. 2011. Part-of-Speech Tagging from 97% to 100%: Is It Time for Some Linguistics? In Alexander Gelbukh (ed.), Computational Linguistics and Intelligent Text Processing, 12th International Conference, CICLing 2011, Proceedings, Part I. Lecture Notes in Computer Science 6608, pp. 171--189. Springer.

* Seddah, Djamé, Chrupała, Grzegorz, Çetinoglu, Özlem and Candito, Marie. 2010. [http://aclweb.org/anthology-new/W/W10/W10-1410.pdf Lemmatization and Lexicalized Statistical Parsing of Morphologically Rich Languages: the Case of French]. ''SPMRL 2010 (NAACL 2010 workshop)''.

* Shen, L., Satta, G., and Joshi, A. 2007. [http://acl.ldc.upenn.edu/P/P07/P07-1096.pdf Guided learning for bidirectional sequence classification]. ''Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL 2007)'', pages 760-767.

* Søgaard, Anders. 2011. Semi-supervised condensed nearest neighbor for part-of-speech tagging. The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT). Portland, Oregon.

* Spoustová, Drahomíra "Johanka", Jan Hajič, Jan Raab and Miroslav Spousta. 2009. Semi-supervised Training for the Averaged Perceptron POS Tagger. ''Proceedings of the 12th EACL'', pages 763-771.

* Toutanova, K., Klein, D., Manning, C. D., and Singer, Y. 2003. [http://nlp.stanford.edu/kristina/papers/tagging.pdf Feature-rich part-of-speech tagging with a cyclic dependency network]. ''Proceedings of HLT-NAACL 2003'', pages 252-259.

* Tsuruoka, Yoshimasa, Yuka Tateishi, Jin-Dong Kim, Tomoko Ohta, John McNaught, Sophia Ananiadou, and Jun'ichi Tsujii. 2005. [http://www-tsujii.is.s.u-tokyo.ac.jp/~tsuruoka/papers/pci05.pdf Developing a Robust Part-of-Speech Tagger for Biomedical Text]. ''Advances in Informatics: 10th Panhellenic Conference on Informatics'', LNCS 3746, pp. 382-392.

* Tsuruoka, Yoshimasa, Yusuke Miyao, and Jun'ichi Kazama. 2011. [http://aclweb.org/anthology-new/W/W11/W11-0328.pdf Learning with Lookahead: Can History-Based Models Rival Globally Optimized Models?]. ''Proceedings of the Fifteenth Conference on Computational Natural Language Learning (CoNLL 2011)'', pp. 238-246.

* Tsuruoka, Yoshimasa and Jun'ichi Tsujii. 2005. "[http://www-tsujii.is.s.u-tokyo.ac.jp/~tsuruoka/papers/emnlp05bidir.pdf Bidirectional Inference with the Easiest-First Strategy for Tagging Sequence Data]", ''Proceedings of HLT/EMNLP 2005'', pp. 467-474.

* Sun, Xu. 2014. [http://papers.nips.cc/paper/5643-structure-regularization-for-structured-prediction.pdf Structure Regularization for Structured Prediction]. ''Neural Information Processing Systems (NIPS)'', pp. 2402-2410.

* dos Santos, Cicero and Zadrozny, Bianca. 2014. [http://jmlr.org/proceedings/papers/v32/santos14.pdf Learning character-level representations for part-of-speech tagging]. ''Proceedings of the 31st International Conference on Machine Learning (ICML), JMLR: W&CP volume 32''.

* Huang, Z. H., Xu, W., and Yu, K. 2015. [http://arxiv.org/abs/1508.01991 Bidirectional LSTM-CRF Models for Sequence Tagging]. ''arXiv:1508.01991''.

* Jinho D. Choi. 2016. "[https://aclweb.org/anthology/N/N16/N16-1031.pdf Dynamic Feature Induction: The Last Gist to the State-of-the-Art]", Proceedings of NAACL 2016.

== See also ==
* [[POS Induction (State of the art)]]
* [[Part-of-speech tagging]]
* [[State of the art]]
== Deep parsing ==
[[Category:State of the art]]

== Word sense disambiguation ==

== Named entity recognition ==



== Sentiment analysis ==


== Mention detection ==

Precision, recall and F-measure are calculated on [[http://clip.ipipan.waw.pl/PCC|Polish Coreference Corpus]] data with two alternative mention detection scores:
 * EXACT: score of exact boundary matches (an automatic and a manual mention match if they have exactly the same boundaries; in other words, they consist of the same tokens)
 * HEAD: score of head matches (we reduce the automatic and the manual mentions to their single head tokens and compare them).
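The two scores can be sketched as follows (a minimal illustration, assuming each mention carries token-index boundaries and a precomputed head index; these field names are illustrative, not the PCC data format):

```python
def prf(n_matched, n_sys, n_gold):
    """Precision, recall and F-measure from match counts."""
    p = n_matched / n_sys if n_sys else 0.0
    r = n_matched / n_gold if n_gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def mention_scores(gold, system):
    """Score mention detection under the EXACT and HEAD criteria.

    gold, system: lists of mentions, each a dict with token-index
    fields 'start', 'end' (span boundaries) and 'head' (head token).
    """
    g_exact = {(m["start"], m["end"]) for m in gold}
    s_exact = {(m["start"], m["end"]) for m in system}
    g_head = {m["head"] for m in gold}
    s_head = {m["head"] for m in system}
    exact = prf(len(g_exact & s_exact), len(s_exact), len(g_exact))
    head = prf(len(g_head & s_head), len(s_head), len(g_head))
    return exact, head
```

A system mention with a slightly wrong boundary but the right head thus counts as a miss under EXACT and a hit under HEAD, which is why the HEAD figures in the table below are markedly higher.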

||<|2> '''System name''' ||<|2> '''Short description''' ||<|2> '''Main publication''' ||<|2> '''License''' |||||| '''EXACT''' |||||| '''HEAD''' ||
||<:> '''P''' ||<:> '''R''' ||<:> '''F''' ||<:> '''P''' ||<:> '''R''' ||<:> '''F''' ||
|| [[http://zil.ipipan.waw.pl/MentionDetector|Mention Detector]] || Collects mention candidates from available sources – morphosyntactical, shallow parsing, named entity and/or zero anaphora detection tools || Ogrodniczuk M., Głowińska K., Kopeć M., Savary A., Zawisławska M. ''Coreference in Polish: Annotation, Resolution and Evaluation'', chapter 10.6. Walter De Gruyter, 2015. || CC BY 3 || 66.79% || 67.21% || 67.00% || 88.29% || 89.41% || 88.85% ||


== Coreference resolution ==


As there is still no consensus on a single best coreference resolution metric, the CoNLL measure is used (the average of the MUC, B³ and CEAF-E F-measure values). For end-to-end systems, the CoNLL-2011 shared task approach is followed, so two result calculation strategies are presented:
 * INTERSECT: consider only correct system mentions (i.e. the intersection between gold and system mentions)
 * TRANSFORM: unify the system and gold mention sets using the following procedure for twinless mentions (mentions without a counterpart in the other set):
  1. insert twinless gold mentions into the system mention set as singletons
  1. remove twinless singleton system mentions
  1. insert twinless non-singleton system mentions into the gold set as singletons.
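The unification steps above can be sketched as follows (a rough illustration under an assumed representation in which each side is a list of chains of mention ids, and a mention is twinless when its id appears on only one side; this is not the actual PCC tooling interface):

```python
def transform(gold_chains, sys_chains):
    """TRANSFORM-style unification of gold and system mention sets."""
    gold_ids = {m for ch in gold_chains for m in ch}
    sys_ids = {m for ch in sys_chains for m in ch}

    new_gold = [list(ch) for ch in gold_chains]
    new_sys = [list(ch) for ch in sys_chains]

    # step 1: insert twinless gold mentions into the system set
    # as singletons
    for m in gold_ids - sys_ids:
        new_sys.append([m])

    out_sys = []
    for ch in new_sys:
        # step 2: remove twinless singleton system mentions
        # (singletons added in step 1 are in gold_ids, so they stay)
        if len(ch) == 1 and ch[0] in sys_ids and ch[0] not in gold_ids:
            continue
        out_sys.append(ch)
        # step 3: insert twinless non-singleton system mentions
        # into the gold set as singletons
        if len(ch) > 1:
            for m in ch:
                if m not in gold_ids:
                    new_gold.append([m])
    return new_gold, out_sys
```

After unification both sides cover the same mention ids, so the CoNLL average can be computed as in the gold-mention setting.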

The results are produced on [[http://clip.ipipan.waw.pl/PCC|Polish Coreference Corpus]] data.

|| '''System name''' || '''Short description''' || '''Main publication''' || '''License''' || '''GOLD''' || '''EXACT INTERSECT''' || '''EXACT TRANSFORM''' || '''HEAD INTERSECT''' || '''HEAD TRANSFORM''' ||
|| [[http://zil.ipipan.waw.pl/Ruler|Ruler]] || Rule-based || Ogrodniczuk M., Kopeć M. ''End-to-end coreference resolution baseline system for Polish''. In Z. Vetulani (ed.), Proceedings of the 5th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pp. 167–171, Poznań, Poland, 2011. || CC BY 3 || 73.40% || 78.54% || 66.55% || 76.27% || 70.11% ||
|| [[http://zil.ipipan.waw.pl/Bartek|Bartek&#160;3]] || Statistical || Kopeć M., Ogrodniczuk M. ''Creating a Coreference Resolution System for Polish''. In Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012, pp. 192–195, ELRA. || CC BY 3 || 78.41% || 80.86% || 68.96% || 78.58% || 72.15% ||


== Summarization ==
