TenTen Corpus Family


The TenTen Corpus Family is a set of comparable web text corpora, i.e. collections of texts that have been crawled from the World Wide Web and processed to match the same standards. These corpora are made available through the Sketch Engine corpus manager. There are TenTen corpora for more than 35 languages. Their target size is 10 billion (10^10) words per language, which gave rise to the corpus family's name.
In the creation of the TenTen corpora, data crawled from the World Wide Web are processed with natural language processing tools developed by the Natural Language Processing Centre at the Faculty of Informatics at Masaryk University and by the Lexical Computing company.

Corpus linguistics

In corpus linguistics, a text corpus is a large and structured collection of texts that are stored and processed electronically. Corpora are used for hypothesis testing about languages, for validating linguistic rules, and for studying the frequency distribution of words within a language.
Electronically processed corpora can be searched quickly. Text processing procedures such as tokenization, part-of-speech tagging and word-sense disambiguation enrich corpus texts with detailed linguistic information, which makes it possible to narrow a search to particular parts of speech, word sequences or a specific part of the corpus.
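As an illustration of this kind of enrichment, the following hedged Python sketch tokenizes a short sample text, tags every token with its part of speech and then restricts a frequency count to nouns only. It uses the NLTK library purely for demonstration; it is not the tooling used to build the TenTen corpora.

    # Illustrative only: tokenization, part-of-speech tagging and a search
    # narrowed to one part of speech, using NLTK (not the TenTen toolchain).
    from collections import Counter

    import nltk

    nltk.download("punkt", quiet=True)                       # tokenizer model
    nltk.download("averaged_perceptron_tagger", quiet=True)  # POS tagger model

    text = "Corpora are collections of texts. A corpus can hold billions of words."

    tokens = nltk.word_tokenize(text)   # split the raw text into word tokens
    tagged = nltk.pos_tag(tokens)       # attach a part-of-speech tag to each token

    # Narrow the "search" to nouns only and inspect their frequency distribution.
    noun_frequencies = Counter(word.lower() for word, tag in tagged if tag.startswith("NN"))
    print(noun_frequencies.most_common(5))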
The first text corpora were created in the 1960s, such as the one-million-word Brown Corpus of American English. Over time, many further corpora were produced, and work also began on larger corpora and on languages other than English. This development was linked to the emergence of corpus creation tools that help achieve larger sizes, wider coverage and cleaner data.

Production of TenTen corpora

The procedure by which TenTen corpora are produced is based on the creators' earlier research into building web corpora and processing them.
First, a large amount of text data is downloaded from the World Wide Web by the dedicated SpiderLing web crawler. The downloaded texts then undergo cleaning with the jusText tool, which removes non-textual material such as navigation links, headers and footers from the HTML source code of web pages, so that only complete sentences of running text are preserved. Finally, the ONION tool is applied to remove duplicate portions of text, which occur naturally on the World Wide Web as a result of practices such as quoting, citing and copying.
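The cleaning step can be sketched with jusText's Python interface. The example below fetches a single page (the URL is a placeholder) and keeps only the paragraphs that jusText does not classify as boilerplate; in the actual pipeline this kind of filtering is applied at crawl scale to the pages downloaded by SpiderLing.

    # Sketch of the jusText cleaning step on one web page; the URL is a placeholder
    # and the single-page fetch stands in for the large-scale SpiderLing crawl.
    import requests
    import justext

    response = requests.get("https://example.org/")
    paragraphs = justext.justext(response.content, justext.get_stoplist("English"))

    for paragraph in paragraphs:
        if not paragraph.is_boilerplate:   # skip navigation links, headers, footers, ...
            print(paragraph.text)          # keep only running text in full sentences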

TenTen corpora data structure

TenTen corpora follow a common metadata structure. Metadata is contained in structural attributes that relate to individual documents and paragraphs in the corpus. Some TenTen corpora may feature additional, corpus-specific attributes.
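In Sketch Engine corpora, such structural attributes are typically stored on the start tags of document and paragraph structures in the so-called vertical format. The Python sketch below reads document-level attributes from a tiny made-up sample; the attribute names used here (url, crawl_date) are hypothetical examples and do not claim to reproduce the actual TenTen attribute set.

    # Hedged sketch: extracting document-level structural attributes from a tiny,
    # made-up sample in vertical format. The attribute names (url, crawl_date)
    # are hypothetical examples, not the actual TenTen metadata set.
    import re

    sample_vertical = [
        '<doc url="https://example.org/page" crawl_date="2018-10-01">',
        '<p>',
        'Tokens', 'appear', 'one', 'per', 'line', '.',
        '</p>',
        '</doc>',
    ]

    doc_tag = re.compile(r'<doc\s+(.*?)>')
    attr = re.compile(r'(\w+)="([^"]*)"')

    for line in sample_vertical:
        match = doc_tag.match(line)
        if match:
            attributes = dict(attr.findall(match.group(1)))
            print(attributes)   # {'url': 'https://example.org/page', 'crawl_date': '2018-10-01'}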

Document attributes

List of TenTen corpora

The following corpora can be accessed through the Sketch Engine as of October 2018: