Academic studies about Wikipedia


Ever since Wikipedia was a few years old, there have been numerous academic studies of Wikipedia in peer-reviewed publications. This research can be grouped into two categories: the first analyzes the production and reliability of the encyclopedic content, while the second investigates social aspects, such as usage and administration. Such studies are greatly facilitated by the fact that Wikipedia's database can be downloaded without help from the site owner.

Content

Production

A minority of editors produce the majority of persistent content

In a landmark peer-reviewed paper, also mentioned in The Guardian, a team of six researchers from the University of Minnesota measured the relationship between editors' edit counts and their ability to convey their writing to Wikipedia readers, measured in terms of persistent word views (PWVs): the number of times a word introduced by an edit is viewed. The accounting method is best described in the authors' own words: "each time an article is viewed, each of its words is also viewed. When a word written by editor X is viewed, he or she is credited with one PWV." The number of times an article was viewed was estimated from the web server logs.
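The PWV bookkeeping described above can be illustrated with a short sketch. This is not the researchers' implementation: it assumes that word-level authorship has already been resolved for every revision (the hard part of the real method) and uses made-up view counts.

```python
from collections import defaultdict

def persistent_word_views(revisions, views):
    """Credit persistent word views (PWVs) to editors.

    revisions -- chronological list of revisions, each a list of
                 (word, author) pairs; word-level authorship is assumed
                 to be resolved already.
    views     -- views[i] is the estimated number of page views served
                 while revision i was the live version.
    Returns {author: total PWVs credited}.
    """
    pwv = defaultdict(int)
    for rev, n_views in zip(revisions, views):
        for _word, author in rev:
            # each view of the article counts as one view of every word,
            # credited to whoever introduced that word
            pwv[author] += n_views
    return dict(pwv)

# tiny illustrative example with hypothetical editors and view counts
revs = [
    [("wiki", "alice"), ("rocks", "alice")],
    [("wiki", "alice"), ("rocks", "alice"), ("hard", "bob")],
]
print(persistent_word_views(revs, views=[10, 5]))   # {'alice': 30, 'bob': 5}
```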
The researchers analyzed 25 trillion PWVs attributable to registered users in the interval from September 1, 2002, to October 31, 2006. At the end of this period, the top 10% of editors were credited with 86% of PWVs, the top 1% with about 70%, and the top 0.1% with 44% of PWVs, i.e. nearly half of Wikipedia's "value" as measured in this study. The top 10 editors contributed only 2.6% of PWVs, and only three of them were in the top 50 by edit count. From the data, the study authors derived the following relationship:
The study also analyzed the impact of bots on content. By edit count, bots dominate Wikipedia; 9 of the top 10 and 20 of the top 50 are bots. In contrast, in the PWV ranking only two bots appear in the top 50, and none in the top 10.
Based on the steady growth in the influence of the top 0.1% of editors by PWV, the study concluded unequivocally:

Work distribution and social strata

A peer-reviewed paper noted the "social stratification in the Wikipedia society" due to the "admins class". The paper suggested that such stratification could be beneficial in some respects but recognized a "clear subsequent shift in power among levels of stratification" due to the "status and power differentials" between administrators and other editors.
Analyzing the entire edit history of the English Wikipedia up to July 2006, the same study determined that the influence of administrator edits on content has steadily diminished since 2003, when administrators performed roughly 50% of total edits, to 2006, when only 10% of edits were performed by administrators. This happened despite the fact that the average number of edits per administrator increased more than fivefold during the same period. The authors labeled this phenomenon the "rise of the crowd". An analysis that used the number of words edited as its metric, instead of the number of edit actions, showed a similar pattern. Because the admin class is somewhat arbitrary with respect to the number of edits, the study also considered a breakdown of users into categories based on the number of edits performed. The results for "elite users", i.e. users with more than 10,000 edits, were broadly in line with those obtained for administrators, except that "the number of words changed by elite users has kept up with the changes made by novice users, even though the number of edits made by novice users has grown proportionally faster". The elite users were attributed about 30% of the changes for 2006. The study concludes:

Reliability

A conference paper in argumentation theory assessed whether trust in Wikipedia is based on epistemic or pragmatic merits. While readers may not be able to assess the actual knowledge and expertise of the authors of a given article, they can assess the contributors' passion for the project and the communicative design through which that passion is made manifest, and these may provide a reason for trust.
In detail, the author argued that Wikipedia cannot be trusted on the basis of individual expertise, collective knowledge, or past experience of reliability. The anonymity of contributors prevents assessment of their knowledge, and makes it unlikely that this will change. Editing Wikipedia may largely be confined to an elite group of editors, without aggregating the "wisdom of the crowd", which in some cases lowers the quality of an article anyway. Personal experiences and empirical studies, confirmed by incidents such as the Seigenthaler biography controversy, point to the conclusion that Wikipedia is not generally reliable. Hence, these epistemic factors do not justify consulting Wikipedia.
The author then proposed a rationale for trusting Wikipedia based on pragmatic values, which can be summarized roughly into two factors. First, the size of and activity around Wikipedia indicate that editors are deeply committed to providing the world with knowledge. Second, the transparent development of policies, practices, institutions, and technologies, together with these conspicuous, massive efforts, addresses the concerns one might have in trusting Wikipedia. The concerns raised include defining what knowledge is provided, preventing distorted contributions from people who do not share the same commitment, repairing editing damage, and controlling and improving article quality.

Geography

A study conducted by the Oxford Internet Institute showed that, as of 2009, the geotagged Wikipedia articles across all language editions covered about half a million places on Earth. However, the geographic distribution of articles was highly uneven: most articles were written about North America, Europe, and East Asia, with very little coverage of large parts of the developing world, including most of Africa.

Natural language processing

The textual content and the structured hierarchy of Wikipedia have become an important knowledge source for researchers in natural language processing and artificial intelligence. In 2007, researchers at the Technion – Israel Institute of Technology developed a technique called Explicit Semantic Analysis, which uses the world knowledge contained in English Wikipedia articles. Conceptual representations of words and texts are created automatically and used to compute the similarity between words and between texts.
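The core idea of Explicit Semantic Analysis, representing a text as a vector of affinities to Wikipedia articles ("concepts") and comparing texts in that concept space, can be sketched as follows. This is a minimal illustration, not the Technion implementation; the three toy "concept articles" stand in for real Wikipedia article texts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# toy stand-ins for Wikipedia concept articles
concept_articles = {
    "Computer science": "algorithms computation software programming data",
    "Biology": "cells organisms evolution genetics species",
    "Music": "melody rhythm harmony instruments composition",
}

vectorizer = TfidfVectorizer()
concept_matrix = vectorizer.fit_transform(concept_articles.values())

def esa_vector(text):
    """Map a text to its affinity with each concept article."""
    return cosine_similarity(vectorizer.transform([text]), concept_matrix)

def esa_similarity(text_a, text_b):
    """Semantic relatedness = cosine similarity in concept space."""
    return cosine_similarity(esa_vector(text_a), esa_vector(text_b))[0, 0]

print(esa_similarity("programming algorithms", "software and data"))   # high
print(esa_similarity("programming algorithms", "melody and rhythm"))   # low
```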
Researchers at the Ubiquitous Knowledge Processing Lab use the linguistic and world knowledge encoded in Wikipedia and Wiktionary to automatically create linguistic knowledge bases similar to expert-built resources such as WordNet. Strube and Ponzetto created an algorithm to identify relationships among words by traversing the English Wikipedia categorization scheme, and concluded that Wikipedia had created "a taxonomy able to compete with WordNet on linguistic processing tasks."
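The raw material for such category-based approaches is the category graph itself, which can be retrieved through Wikipedia's public MediaWiki API. The sketch below simply walks subcategories a couple of levels deep; it is not Strube and Ponzetto's algorithm, only one way of obtaining the links their method traverses.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def subcategories(category, limit=50):
    """Return the titles of direct subcategories of `category`."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Category:{category}",
        "cmtype": "subcat",
        "cmlimit": limit,
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=30).json()
    return [m["title"] for m in data["query"]["categorymembers"]]

def walk(category, depth=2):
    """Breadth-limited traversal of the categorization scheme."""
    frontier, seen = [category], {category}
    for _ in range(depth):
        nxt = []
        for cat in frontier:
            for title in subcategories(cat):
                name = title.removeprefix("Category:")
                if name not in seen:
                    seen.add(name)
                    nxt.append(name)
        frontier = nxt
    return seen

print(sorted(walk("Linguistics", depth=1))[:10])
```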

Critiques of content fields

Health information

Health information on the English Wikipedia is popularly accessed because search engine result pages frequently deliver links to Wikipedia articles. Independent assessments have been undertaken of the quality of the health information provided on Wikipedia and of who is accessing it. The number and demographics of people who seek health information on Wikipedia, the scope of health information on Wikipedia, and the quality of that information have all been studied. There are drawbacks to using Wikipedia as a source of health information.

Social aspects

Demographics

A 2007 study by Hitwise, reproduced in Time magazine, found that visitors to Wikipedia are almost equally split 50/50 male/female, but that 60% of edits are made by male editors.
WikiWarMonitor, part of ICTeCollective, a project supported by the European Commission's CORDIS FP7 FET-Open programme, has published the following studies:
A 2011 paper titled "Edit wars in Wikipedia", presented at the IEEE Third International Conference on Social Computing (available via IEEE Xplore), reported a new way to measure how controversial a Wikipedia article is, and verified the measure against six Indo-European language editions, including English.
A 2012 paper in PLoS ONE, using cumulative data from 32 language editions of Wikipedia, reported that, based on an analysis of circadian activity patterns, the shares of contributions to the English Wikipedia from North America and from the Europe-Far East-Australia region are almost equal, whereas the European-Far Eastern-Australian share rises to 75% for the Simple English Wikipedia. The research also covers other demographic analyses of the editions in other languages.
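The circadian-pattern idea behind that analysis can be sketched as follows: bin edit timestamps (in UTC) by hour of day, so that the shape of the activity curve reflects where in the world most contributors are awake. The sketch assumes ISO-formatted timestamps from an edit history dump and omits the study's actual regional attribution step.

```python
from collections import Counter
from datetime import datetime

def hourly_activity(timestamps):
    """Return the share of edits falling in each UTC hour (0-23)."""
    hours = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)
    total = sum(hours.values())
    return [hours.get(h, 0) / total for h in range(24)]

# hypothetical edit timestamps
edits = ["2012-03-01T14:05:00", "2012-03-01T15:30:00", "2012-03-02T02:10:00"]
print(hourly_activity(edits))
```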
A 2013 Letter in Physical Review Letters reported a generic model of social dynamics in a collaborative environment involving opinions, conflicts, and consensus, with a specific analogy to Wikipedia: "a peaceful article can suddenly become controversial when more people get involved in its editing."
A 2014 book chapter titled "The Most Controversial Topics in Wikipedia: A Multilingual and Geographical Analysis" analysed the volume of editing of articles in various language versions of Wikipedia in order to establish the most controversial topics in different languages and groups of languages. For the English version, the top three most controversial articles were George W. Bush, Anarchism, and Muhammad. Topics causing the most controversy in other languages included Croatia, Ségolène Royal, Chile, and Homosexuality.

Policies and guidelines

A descriptive study that analyzed English language Wikipedia's policies and guidelines up to September 2007 identified a number of key statistics:
Even a short policy like "ignore all rules" was found to have generated a lot of discussion and clarifications:
The study sampled the expansion of some key policies since their inception:
The number for "deletion" was considered inconclusive, however, because the policy had been split into several sub-policies.

Power plays

A 2007 joint peer-reviewed study conducted by researchers from the University of Washington and HP Labs examined how policies are employed and how contributors work towards consensus by quantitatively analyzing a sample of active talk pages. Using a November 2006 English Wikipedia database dump, the study focused on 250 talk pages in the tail of the distribution: 0.3% of all talk pages, but containing 28.4% of all talk page revisions, and more significantly, containing 51.1% of all links to policies. From the sampled pages' histories, the study examined only the months with high activity, called critical sections—sets of consecutive months where both article and talk page revisions were significant in number.
The study defined and calculated a measure of policy prevalence. A critical section was considered policy-laden if its policy factor was at least twice the average. Articles were tagged with 3 indicator variables:
All possible levels of these three factors yielded 8 sampling categories. The study intended to analyze 9 critical sections from each sampling category, for a total of 72, but only 69 critical sections could be selected because only 6 articles were simultaneously featured, controversial, and policy-laden.
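The policy-ladenness test can be sketched as follows, with one explicit assumption: the exact definition of the "policy factor" is not reproduced here, so the sketch treats it as policy links per talk-page revision within a critical section.

```python
def policy_factor(policy_links, talk_revisions):
    # assumed definition: policy links per talk-page revision
    return policy_links / talk_revisions if talk_revisions else 0.0

def policy_laden_sections(sections):
    """sections: list of (policy_links, talk_revisions) per critical section.
    A section counts as policy-laden if its factor is at least twice
    the average factor across all sections."""
    factors = [policy_factor(p, r) for p, r in sections]
    average = sum(factors) / len(factors)
    return [f >= 2 * average for f in factors]

# hypothetical counts for three critical sections
print(policy_laden_sections([(2, 40), (30, 50), (1, 60)]))   # [False, True, False]
```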
The study found that policies were by no means consistently applied. Illustrative of its broader findings, the report presented the following two extracts from Wikipedia talk pages in obvious contrast:
Claiming that such ambiguities easily give rise to power plays, the study identified, using the methods of grounded theory, 7 types of power plays:
Due to lack of space, the study detailed only the first four types of power plays, those exercised merely by interpreting policy. A fifth power-play category was also analyzed; it consisted of blatant violations of policy that were forgiven because the contributor was valued for his contributions despite his lack of respect for the rules.

Article scope

The study considers that Wikipedia's policies are ambiguous on scoping issues. The following vignette is used to illustrate the claim:
The study gives the following interpretation for the heated debate:

Prior consensus

The study remarks that in Wikipedia consensus is never final, and what constitutes consensus can change at any time. The study finds that this temporal ambiguity is fertile ground for power plays, and places the generational struggle over consensus in the larger picture of the struggle for article ownership:
The study uses the following discussion snippet to illustrate this continuous struggle:

Power of interpretation

A vignette illustrated how administrators overrode consensus and deleted personal accounts of users/patients suffering from an anonymized illness. The administrator's intervention happened as the article was being nominated to become a featured article.

Legitimacy of contributor

This type of power play is illustrated by a contributor, labeled U24 in the study, who draws on his past contributions to argue against another contributor who is accusing him of being unproductive and disruptive:

Explicit vie for ownership

The study finds that there are contributors who consistently and successfully violate policy without sanction:

Obtaining administratorship

In 2008, researchers from Carnegie Mellon University devised a probit model of English Wikipedia editors who had successfully passed a request for adminship (RfA). Using only Wikipedia metadata, including the text of edit summaries, their model was 74.8% accurate in predicting successful candidates.
The paper observed that despite protestations to the contrary, "in many ways election to admin is a promotion, distinguishing an elite core group from the large mass of editors." Consequently, the paper used policy capture—a method that compares nominally important attributes to those that actually lead to promotion in a work environment.
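A probit model of this kind can be fitted with standard tools; the sketch below uses synthetic candidate metadata and illustrative feature names rather than the paper's actual specification, whose estimated effects are summarized in the table that follows.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
# hypothetical candidate metadata (illustrative features only)
X = np.column_stack([
    rng.poisson(3, n),          # previous RfA attempts
    rng.normal(20, 8, n),       # months since first edit
    rng.normal(5, 2, n),        # thousands of article edits
    rng.normal(1, 0.5, n),      # thousands of policy edits
])
# synthetic outcome: policy edits help, repeat attempts hurt
latent = -0.5 * X[:, 0] + 0.02 * X[:, 1] + 0.1 * X[:, 2] + 1.0 * X[:, 3]
y = (latent + rng.normal(size=n) > 0).astype(int)

model = sm.Probit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())
# marginal effects play the role of the per-unit percentages reported below
print(model.get_margeff().summary())
```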
The overall success rate for promotion decreased from 75% in 2005 to 53% in 2006 and 42% in 2007. This sharp increase in the failure rate was attributed to a higher standard that recently promoted administrators had to meet, and was supported by anecdotal evidence from another recent study quoting some early admins who expressed doubt that they would pass muster if their election were held recently. In light of these developments the study argued that:
Factor                                              2006–2007   pre–2006
Each previous RfA attempt                           -14.7%      -11.1%
Each month since first edit                         0.4%        –
Every 1000 article edits                            1.8%        –
Every 1000 Wikipedia policy edits                   19.6%       –
Every 1000 WikiProject edits                        17.1%       –
Every 1000 article talk edits                       6.3%        15.4%
Each Arb/mediation/wikiquette edit                  -0.1%       -0.2%
Each diversity score point                          2.8%        3.7%
Each percentage of … in edit summaries              0.2%        0.2%
Each percentage of human-written edit summaries     0.5%        0.4%
Each "thank" in edit summaries                      0.3%        –
Each "POV" indication in edit summaries             0.1%        –
Each edit in Admin attention/noticeboard            -0.1%       –

Perhaps contrary to expectations, "running" for administrator multiple times is detrimental to a candidate's chance of success: each subsequent attempt has a 14.8% lower chance of success than the previous one. Length of participation in the project makes only a small contribution to the chance of a successful RfA.
Another significant finding of the paper is that one Wikipedia policy edit or WikiProject edit is worth roughly ten article edits. A related observation is that candidates with experience in multiple areas of the site stood a better chance of election. This was measured by the diversity score, a simple count of the number of areas the editor had participated in. The paper divided Wikipedia into 16 areas: article, article talk, articles/categories/templates for deletion, deletion review, etc. For instance, a user who has edited articles, her own user page, and posted once at deletion review would have a diversity score of 3. Making a single edit in any additional region of Wikipedia correlated with a 2.8% increased likelihood of success in gaining administratorship.
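Computed as described, the diversity score is just a count of distinct areas edited; the sketch below uses illustrative area labels standing in for the paper's 16 regions.

```python
def diversity_score(edits):
    """edits: iterable of (area, page_title) tuples for one candidate."""
    return len({area for area, _title in edits})

# hypothetical candidate matching the worked example above
candidate_edits = [
    ("article", "Photosynthesis"),
    ("article", "Chlorophyll"),
    ("user page", "User:Example"),
    ("deletion review", "DRV/example"),
]
print(diversity_score(candidate_edits))   # 3
```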
Making minor edits also helped, although the study authors consider that this may be because minor edits correlate with experience. In contrast, each edit to an Arbitration or Mediation Committee page, or to a wikiquette alerts page, all of which are venues for dispute resolution, decreased the likelihood of success by 0.1%. Posting messages to administrator noticeboards had a similarly deleterious effect. The study interpreted this as evidence that editors involved in escalating or protracting conflicts lower their chances of becoming administrators.
Saying "thanks" or variations thereof in edit summaries, and pointing out point-of-view issues, were of minor benefit, contributing 0.3% and 0.1% respectively to a candidate's chances in 2006–2007, but neither reached statistical significance before then.
A few factors were found to be irrelevant or marginal at best:
The study suggests that some of the 25% unexplained variability in outcomes may be due to factors that were not measured, such as quality of edits or participation in off-site coordination, such as the secret mailing list reported in The Register. The paper concludes:
Subsequent research by another group probed the sensemaking activities of individuals during their contributions to RfA decisions. This work established that decisions about RfA candidates are based on a shared interpretation of evidence in the wiki and on histories of prior interactions.

Wikipedia in education

Despite the reluctance of teachers to use Wikipedia as a basis for classroom work, it has been found that writing students' use of Wikipedia improves their interest in the learning process, their investment in their work output, their progress in their learning and personal development, and their opportunities for local and international collaboration.

Machine learning

Automated semantic knowledge extraction using machine learning algorithms is used to "extract machine-processable information at a relatively low complexity cost". DBpedia uses structured content extracted from the infoboxes of Wikipedia articles in different languages by machine learning algorithms to create a resource of linked data for the Semantic Web.
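The resulting linked data can be consumed through DBpedia's public SPARQL endpoint. The query below is only an illustration of that access path, not part of the extraction pipeline or of any study described above.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# DBpedia's public SPARQL endpoint
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?abstract WHERE {
        dbr:Wikipedia dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    # print the first 200 characters of the English abstract
    print(row["abstract"]["value"][:200])
```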

Wikipedia view statistics and human behavior

In a study published in PLoS ONE, Taha Yasseri of the Oxford Internet Institute and his colleagues from Central European University showed that the page view statistics of articles about movies correlate well with the movies' box office revenue. They developed a mathematical model to predict box office takings by analysing the page view counts as well as the number of edits and unique editors of the Wikipedia pages on the movies. Although this model was developed against the English Wikipedia for movies, the language-independent methods can be generalized to other languages and to other kinds of products beyond movies.
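The flavor of such a model can be conveyed by a minimal least-squares sketch that regresses revenue on page views, edits, and unique editors. The numbers are synthetic and the functional form is an assumption; it is not the published model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_movies = 50
# synthetic pre-release Wikipedia activity and box-office figures
views   = rng.uniform(1e4, 1e6, n_movies)
edits   = rng.uniform(10, 500, n_movies)
editors = rng.uniform(5, 200, n_movies)
revenue = 50 * views + 1e4 * editors + rng.normal(0, 1e6, n_movies)

# ordinary least squares: revenue ~ views + edits + editors
X = np.column_stack([np.ones(n_movies), views, edits, editors])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
predicted = X @ coef

corr = np.corrcoef(predicted, revenue)[0, 1]
print("fitted coefficients:", coef)
print("correlation between predicted and actual revenue:", round(corr, 3))
```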
In a work published in Scientific Reports in 2013, Helen Susannah Moat, Tobias Preis and colleagues demonstrated a link between changes in the number of views of English Wikipedia articles relating to financial topics and subsequent large US stock market moves.