  * **PukWaC**: the same as ukWaC, but with a further layer of annotation added, i.e. a full dependency parse. The parsing was performed with the [[http://maltparser.org/|MaltParser]].

  * **ukWaC**: a 2 billion word corpus constructed from the Web, limiting the crawl to the **.uk** domain and using medium-frequency words from the [[http://www.natcorp.ox.ac.uk/|BNC]] as seeds. The corpus was POS-tagged and lemmatized with the [[http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/|TreeTagger]]. The tagset is available {{:tagsets:ukwac_tagset.txt|here}}; more information can be found in this {{:papers:wacky_2008.pdf|paper}}.

  * **WaCkypedia_EN**: a 2009 dump of the English Wikipedia (about 800 million tokens), in the same format as PukWaC, including POS/lemma information as well as a full dependency parse (parsing performed with the [[http://maltparser.org/|MaltParser]]). The texts were extracted from the dump and cleaned using the [[http://medialab.di.unipi.it/wiki/Wikipedia_extractor|Wikipedia extractor]].
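Corpora of this family are typically distributed in a one-token-per-line "vertical" format, with tab-separated annotation columns (token, POS tag, lemma; the parsed corpora add dependency columns) and sentences wrapped in ''<s>''...''</s>'' tags. The sketch below shows one way to group such lines into sentences; the exact column order and the sample tokens are illustrative assumptions, not a specification of the distributed files.

```python
def parse_vertical(lines):
    """Group tab-separated token lines into sentences.

    Assumes the common vertical layout: one token per line with at
    least three columns (token, POS, lemma), sentences delimited by
    <s> and </s> markers; other SGML-style tags are skipped.
    """
    sentences, current = [], []
    for line in lines:
        line = line.strip()
        if line == "<s>":
            current = []            # start a fresh sentence
        elif line == "</s>":
            sentences.append(current)
        elif line and not line.startswith("<"):
            token, pos, lemma = line.split("\t")[:3]
            current.append((token, pos, lemma))
    return sentences

# Illustrative input (hypothetical tokens, not taken from the corpora):
sample = [
    "<s>",
    "The\tDT\tthe",
    "corpus\tNN\tcorpus",
    "</s>",
]
print(parse_vertical(sample))
# → [[('The', 'DT', 'the'), ('corpus', 'NN', 'corpus')]]
```

Keeping only the first three columns lets the same loop read both the plain POS/lemma corpora and the dependency-parsed ones, which carry extra columns per token.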
===== Italian =====

  * **itWaC**: a 2 billion word corpus constructed from the Web, limiting the crawl to the **.it** domain and using medium-frequency words from the [[http://sslmit.unibo.it/repubblica|Repubblica]] corpus and basic Italian vocabulary lists as seeds. The corpus was POS-tagged with the [[http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/|TreeTagger]] using this [[http://sslmit.unibo.it/~baroni/collocazioni/itwac.tagset.txt|tagset]], and lemmatized using the [[http://sslmit.unibo.it/morphit|Morph-it!]] lexicon; more information is available {{:papers:wacky_2008.pdf|here}}.

  * semantically and syntactically annotated **Italian Wikipedia**: