SemEval


SemEval is an ongoing series of evaluations of computational semantic analysis systems; it evolved from the Senseval word sense evaluation series. The evaluations are intended to explore the nature of meaning in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.
This series of evaluations provides a mechanism for characterizing in more precise terms exactly what is necessary to compute meaning. As such, the evaluations provide an emergent mechanism for identifying the problems and solutions of computing with meaning. The exercises have evolved to articulate more of the dimensions involved in our use of language. They began with apparently simple attempts to identify word senses computationally and have evolved to investigate the interrelationships among the elements in a sentence, relations between sentences, and the nature of what we are saying.
The purpose of the SemEval and Senseval exercises is to evaluate semantic analysis systems. "Semantic analysis" refers to a formal analysis of meaning, and "computational" refers to approaches that in principle support effective implementation.
The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the fourth workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.
Prompted by the creation of the *SEM conference, the SemEval community decided to hold the evaluation workshops yearly in association with *SEM. It was also decided that not every evaluation task would be run every year; for example, none of the WSD tasks were included in the SemEval-2012 workshop.

History

Early evaluation of algorithms for word sense disambiguation

From the earliest days, assessing the quality of word sense disambiguation algorithms was primarily a matter of intrinsic evaluation, and "almost no attempts had been made to evaluate embedded WSD components". Only recently have extrinsic evaluations begun to provide some evidence for the value of WSD in end-user applications. Until about 1990, discussions of the sense disambiguation task focused mainly on illustrative examples rather than comprehensive evaluation. The early 1990s saw the beginning of more systematic and rigorous intrinsic evaluations, including more formal experimentation on small sets of ambiguous words.

Senseval to SemEval

In April 1997, Martha Palmer and Marc Light organized a workshop entitled Tagging with Lexical Semantics: Why, What, and How? in conjunction with the Conference on Applied Natural Language Processing. At the time, there was a clear recognition that manually annotated corpora had revolutionized other areas of NLP, such as part-of-speech tagging and parsing, and that corpus-driven approaches had the potential to revolutionize automatic semantic analysis as well. Kilgarriff recalled that there was “a high degree of consensus that the field needed evaluation,” and several practical proposals by Resnik and Yarowsky kicked off a discussion that led to the creation of the Senseval evaluation exercises.

SemEval's three-, two-, or one-year cycle

After SemEval-2010, many participants felt that the 3-year cycle was too long a wait. Many other shared tasks, such as those at the Conference on Natural Language Learning (CoNLL) and the Recognizing Textual Entailment (RTE) challenges, run annually. For this reason, the SemEval coordinators gave task organizers the opportunity to choose between a 2-year and a 3-year cycle. Votes within the SemEval community favored the 3-year cycle.
Although the votes favored a 3-year cycle, the organizers and coordinators settled on splitting the SemEval tasks into two evaluation workshops, triggered by the introduction of the new *SEM conference. The SemEval organizers thought it appropriate to associate the event with the *SEM conference and to co-locate the SemEval workshop with it. They received very positive responses about the association with the yearly *SEM conference, and eight tasks were willing to switch to 2012; thus SemEval-2012 and SemEval-2013 were born. The current plan is to move to a yearly SemEval schedule associated with the *SEM conference, although not every task needs to run every year.

List of Senseval and SemEval Workshops

The framework of the SemEval/Senseval evaluation workshops emulates the Message Understanding Conferences and other evaluation workshops run by ARPA.
Stages of SemEval/Senseval evaluation workshops
  1. Firstly, all likely participants were invited to express their interest and participate in the exercise design.
  2. A timetable towards a final workshop was worked out.
  3. A plan for selecting evaluation materials was agreed upon.
  4. 'Gold standards' for the individual tasks were acquired; human annotations were typically treated as the gold standard against which the precision and recall of computer systems are measured. These gold standards are what the computational systems strive to reproduce. In WSD tasks, human annotators were set the task of generating a set of correct WSD answers (a minimal scoring sketch follows this list).
  5. The gold standard materials, without answers, were released to participants, who then had a short time to run their programs over them and return their sets of answers to the organizers.
  6. The organizers then scored the answers and the scores were announced and discussed at a workshop.
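For concreteness, the sketch below shows how precision and recall can be computed against such a gold standard. The instance identifiers, sense keys, and the score helper are hypothetical and simplified; real Senseval/SemEval scorers additionally handle multiple acceptable senses and weighted answers.

```python
# Minimal sketch of scoring WSD answers against a gold standard.
# Both dictionaries map instance IDs to sense labels; wrong attempts hurt
# precision, while unattempted instances hurt recall.

def score(system_answers: dict, gold_standard: dict):
    attempted = {i: s for i, s in system_answers.items() if i in gold_standard}
    correct = sum(1 for i, s in attempted.items() if s == gold_standard[i])
    precision = correct / len(attempted) if attempted else 0.0
    recall = correct / len(gold_standard) if gold_standard else 0.0
    return precision, recall

# Hypothetical instance IDs and WordNet-style sense keys, for illustration only.
gold = {"art.40001": "art%1:06:00::", "art.40002": "art%1:09:00::"}
sys_out = {"art.40001": "art%1:06:00::"}  # the system skipped the second instance
print(score(sys_out, gold))  # (1.0, 0.5)
```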

Semantic evaluation tasks

Senseval-1 and Senseval-2 focused on evaluating WSD systems for major languages for which corpora and computerized dictionaries were available. Senseval-3 looked beyond the lexeme and started to evaluate systems addressing wider areas of semantics, such as semantic roles and logic form transformation, and it explored the performance of semantic analysis in machine translation.
As the range of computational semantic systems grew beyond the coverage of WSD, Senseval evolved into SemEval, where more aspects of computational semantic systems are evaluated.

Overview of Issues in Semantic Analysis

The SemEval exercises provide a mechanism for examining issues in the semantic analysis of texts. The topics of interest fall short of the logical rigor found in formal computational semantics; instead, the aim is to identify and characterize the kinds of issues relevant to human understanding of language. The primary goal is to replicate human processing by means of computer systems. The tasks are developed by individuals and groups to deal with identifiable issues, as they take on some concrete form.
The first major area in semantic analysis is the identification of the intended meaning at the word level. This is word-sense disambiguation. The tasks in this area include lexical sample and all-word disambiguation, multi- and cross-lingual disambiguation, and lexical substitution. Given the difficulties of identifying word senses, other tasks relevant to this topic include word-sense induction, subcategorization acquisition, and evaluation of lexical resources.
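A common reference point in these disambiguation tasks is the most-frequent-sense baseline. The sketch below is one illustrative way to implement it with NLTK's WordNet interface (assuming NLTK and its WordNet data are installed); it is not an official task baseline.

```python
# Most-frequent-sense baseline: pick the first WordNet synset for a lemma,
# since NLTK lists synsets roughly in order of sense frequency in SemCor.
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def mfs_baseline(lemma: str, pos: str = "n"):
    synsets = wn.synsets(lemma, pos=pos)
    return synsets[0] if synsets else None

print(mfs_baseline("bank"))  # Synset('bank.n.01'), the 'sloping land' sense
```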
The second major area in semantic analysis is the understanding of how different sentence and textual elements fit together. Tasks in this area include semantic role labeling, semantic relation analysis, and coreference resolution. Other tasks in this area look at more specialized issues of semantic analysis, such as temporal information processing, metonymy resolution, and sentiment analysis. The tasks in this area have many potential applications, such as information extraction, question answering, document summarization, machine translation, construction of thesauri and semantic networks, language modeling, paraphrasing, and recognizing textual entailment. In each of these potential applications, how the different types of semantic analysis contribute remains the most outstanding research issue.
For example, in the word sense induction and disambiguation task, there are three separate phases:
  1. In the training phase, evaluation task participants were asked to use a training dataset to induce the sense inventories for a set of polysemous words. The training dataset consisted of a set of polysemous nouns/verbs and the sentence instances in which they occurred. No resources other than morphological and syntactic Natural Language Processing components, such as morphological analyzers, part-of-speech taggers, and syntactic parsers, were allowed.
  2. In the testing phase, participants were provided with a test set for the disambiguation subtask, using the sense inventory induced in the training phase.
  3. In the evaluation phase, the answers from the testing phase were evaluated in both a supervised and an unsupervised framework.
The unsupervised evaluation for WSI considered two measures: the V-Measure and the paired F-Score. The supervised evaluation followed that of the SemEval-2007 WSI task.
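As an illustration of the unsupervised setting, the sketch below computes both measures for a toy clustering of instances against gold sense labels. The V-Measure comes from scikit-learn, while the paired F-Score function is a simplified reading of the task definition rather than the official scorer.

```python
# Toy unsupervised WSI evaluation: V-Measure and paired F-Score.
from itertools import combinations
from sklearn.metrics import v_measure_score

def pairs(labels):
    """All unordered instance pairs that share a cluster or sense label."""
    return {(i, j) for i, j in combinations(range(len(labels)), 2)
            if labels[i] == labels[j]}

def paired_f_score(gold, induced):
    gold_pairs, sys_pairs = pairs(gold), pairs(induced)
    if not gold_pairs or not sys_pairs:
        return 0.0
    p = len(gold_pairs & sys_pairs) / len(sys_pairs)   # precision over pairs
    r = len(gold_pairs & sys_pairs) / len(gold_pairs)  # recall over pairs
    return 2 * p * r / (p + r) if p + r else 0.0

gold    = [1, 1, 2, 2, 2]   # gold sense labels for five instances
induced = [0, 0, 1, 0, 1]   # induced cluster labels for the same instances
print(v_measure_score(gold, induced))  # harmonic mean of homogeneity and completeness
print(paired_f_score(gold, induced))   # 0.5 for this toy example
```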

Senseval and SemEval tasks overview

The table below reflects the growth of the workshops from Senseval to SemEval and gives an overview of which areas of computational semantics were evaluated throughout the Senseval/SemEval workshops.
Workshop | No. of Tasks | Areas of study | Languages of Data Evaluated
Senseval-1 | 3 | Word Sense Disambiguation - Lexical Sample WSD tasks | English, French, Italian
Senseval-2 | 12 | Word Sense Disambiguation - Lexical Sample, All Words, Translation WSD tasks | Czech, Dutch, English, Estonian, Basque, Chinese, Danish, Italian, Japanese, Korean, Spanish, Swedish
Senseval-3 | 16 | Logic Form Transformation, Machine Translation Evaluation, Semantic Role Labelling, WSD | Basque, Catalan, Chinese, English, Italian, Romanian, Spanish
SemEval-2007 | 19 | Cross-lingual, Frame Extraction, Information Extraction, Lexical Substitution, Lexical Sample, Metonymy, Semantic Annotation, Semantic Relations, Semantic Role Labelling, Sentiment Analysis, Time Expression, WSD | Arabic, Catalan, Chinese, English, Spanish, Turkish
SemEval-2010 | 18 | Coreference, Cross-lingual, Ellipsis, Information Extraction, Lexical Substitution, Metonymy, Noun Compounds, Parsing, Semantic Relations, Semantic Role Labeling, Sentiment Analysis, Textual Entailment, Time Expressions, WSD | Catalan, Chinese, Dutch, English, French, German, Italian, Japanese, Spanish
SemEval-2012 | 8 | Common Sense Reasoning, Lexical Simplification, Relational Similarity, Spatial Role Labelling, Semantic Dependency Parsing, Semantic and Textual Similarity | Chinese, English
SemEval-2013 | 14 | Temporal Annotation, Sentiment Analysis, Spatial Role Labeling, Noun Compounds, Phrasal Semantics, Textual Similarity, Response Analysis, Cross-lingual Textual Entailment, BioMedical Texts, Cross- and Multilingual WSD, Word Sense Induction, Lexical Sample | Catalan, French, German, English, Italian, Spanish
SemEval-2014 | 10 | Compositional Distributional Semantics, Grammar Induction for Spoken Dialogue Systems, Cross-Level Semantic Similarity, Sentiment Analysis, L2 Writing Assistant, Supervised Semantic Parsing, Clinical Text Analysis, Semantic Dependency Parsing, Sentiment Analysis in Twitter, Multilingual Semantic Textual Similarity | English, Spanish, French, German, Dutch
SemEval-2015 | 18 | Text Similarity and Question Answering, Time and Space, Sentiment, Word Sense Disambiguation and Induction, Learning Semantic Relations | English, Spanish, Arabic, Italian
SemEval-2016 | 14 | Textual Similarity and Question Answering, Sentiment Analysis, Semantic Parsing, Semantic Analysis, Semantic Taxonomy |
SemEval-2017 | 12 | Semantic comparison for words and texts; Detecting sentiment, humor, and truth; Parsing semantic structures |
SemEval-2018 | 12 | Affect and Creative Language in Tweets, Coreference, Information Extraction, Lexical Semantics, Reading Comprehension and Reasoning |

The Multilingual WSD task was introduced at the SemEval-2013 workshop. The task is aimed at evaluating Word Sense Disambiguation systems in a multilingual scenario. Unlike similar tasks, such as cross-lingual WSD or the multilingual lexical substitution task, in which no fixed sense inventory is specified, Multilingual WSD uses BabelNet as its sense inventory. Prior to the development of BabelNet, a bilingual lexical-sample WSD evaluation task was carried out at SemEval-2007 on Chinese-English bitexts.
The Cross-lingual WSD task was introduced in the SemEval-2007 evaluation workshop and re-proposed in the SemEval-2013 workshop. To ease the integration of WSD systems into other Natural Language Processing applications, such as Machine Translation and multilingual Information Retrieval, the cross-lingual WSD evaluation task was introduced as a language-independent and knowledge-lean approach to WSD. The task is an unsupervised Word Sense Disambiguation task for English nouns by means of parallel corpora. It follows the lexical-sample variant of the classic WSD task, restricted to 20 polysemous nouns.
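The sketch below illustrates the underlying idea of using translations in a word-aligned parallel corpus as sense labels; the tiny English-Dutch sentence pairs, the alignment format, and the function name are invented for illustration and do not reproduce the actual task data or scorer.

```python
# Sketch: collect translation candidates for an ambiguous English noun from
# word-aligned English-Dutch sentence pairs. In cross-lingual WSD, such
# translation distributions play the role of a fixed sense inventory.
from collections import Counter

def translation_distribution(target_noun, aligned_pairs):
    """aligned_pairs: (english_tokens, dutch_tokens, links) triples, where
    links is a list of (english_index, dutch_index) word alignments."""
    counts = Counter()
    for en, nl, links in aligned_pairs:
        for i, j in links:
            if en[i].lower() == target_noun:
                counts[nl[j].lower()] += 1
    return counts

corpus = [
    (["the", "coach", "arrived"], ["de", "bus", "kwam", "aan"], [(0, 0), (1, 1), (2, 2)]),
    (["the", "coach", "trains", "us"], ["de", "trainer", "traint", "ons"], [(0, 0), (1, 1), (2, 2), (3, 3)]),
]
print(translation_distribution("coach", corpus))  # Counter({'bus': 1, 'trainer': 1})
```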
It is worth noting that SemEval-2014 had only two multilingual/cross-lingual tasks: the L2 Writing Assistant task, a cross-lingual task covering English, Spanish, German, French and Dutch, and the Multilingual Semantic Textual Similarity task, which evaluated systems on English and Spanish texts.

Areas of evaluation

The major tasks in semantic evaluation include the following areas of natural language processing. This list is expected to grow as the field progresses.
The following areas of study have been covered by tasks from Senseval-1 through SemEval-2017:
Bioinformatics / Clinical Text Analysis
Common Sense Reasoning
Coreference Resolution
Noun Compounds
Ellipsis
Grammar Induction
Keyphrase Extraction
Lexical Simplification
Lexical Substitution
Lexical Complexity
Metonymy
Paraphrases
Question Answering
Relational Similarity
Rumour and Veracity
Semantic Parsing
Semantic Relation Identification
Semantic Role Labeling
Semantic Similarity
Sentiment Analysis
Spatial Role Labelling
Taxonomy Induction/Enrichment
Textual Entailment
Temporal Annotation
Twitter Analysis
Word Sense Disambiguation
Word Sense Induction

Types of Semantic Annotation

SemEval tasks have created many types of semantic annotation, each with its own schema. In SemEval-2015, the organizers decided to group tasks into several tracks, according to the type of semantic annotation each task aims to produce. The types of semantic annotation involved in the SemEval workshops are listed below:
  1. Learning Semantic Relations
  2. Question Answering
  3. Semantic Parsing
  4. Semantic Taxonomy
  5. Sentiment Analysis
  6. Text Similarity
  7. Time and Space
  8. Word Sense Disambiguation and Induction
A task's track allocation is flexible; a task may develop into its own track. For example, the taxonomy evaluation task in SemEval-2015 fell under the Learning Semantic Relations track, whereas SemEval-2016 had a dedicated Semantic Taxonomy track with a new Semantic Taxonomy Enrichment task.