Digital library


A digital library, also called a digital repository or digital collection, is an online database of digital objects that can include text, still images, audio, video, digital documents, or other digital media formats. Objects can consist of digitized content, such as print materials or photographs, as well as born-digital content, such as word processor files or social media posts. In addition to storing content, digital libraries provide means for organizing, searching, and retrieving the content contained in the collection.
Digital libraries can vary immensely in size and scope, and can be maintained by individuals or organizations. The digital content may be stored locally, or accessed remotely via computer networks. As information retrieval systems, digital libraries can also exchange information with one another through interoperability, which supports their long-term sustainability.

History

The early history of digital libraries is not well documented, but several key thinkers are connected to the emergence of the concept. Predecessors include Paul Otlet and Henri La Fontaine's Mundaneum, an attempt begun in 1895 to gather and systematically catalogue the world's knowledge, in the hope of bringing about world peace. These visions of the digital library were largely realized a century later during the great expansion of the Internet, when millions of individuals gained access to books and document search on the World Wide Web.
Vannevar Bush and J.C.R. Licklider are two contributors who advanced this idea into the technology of their day. Bush had supported research that led to the bomb that was dropped on Hiroshima. After seeing the disaster, he wanted to create a machine that would show how technology can lead to understanding instead of destruction. This machine would include a desk with two screens, switches and buttons, and a keyboard; he named it the "Memex". With it, individuals would be able to access stored books and files at rapid speed. In 1956, the Ford Foundation funded Licklider to analyze how libraries could be improved with technology. Almost a decade later, his book Libraries of the Future set out his vision: a system that would use computers and networks to make human knowledge accessible for human needs, with feedback handled automatically for machine purposes. This system contained three components, the corpus of knowledge, the question, and the answer, and Licklider called it a procognitive system.
Early projects centered on the creation of an electronic card catalogue known as Online Public Access Catalog. By the 1980s, the success of these endeavors resulted in OPAC replacing the traditional card catalog in many academic, public and special libraries. This permitted libraries to undertake additional rewarding co-operative efforts to support resource sharing and expand access to library materials beyond an individual library.
An early example of a digital library is the Education Resources Information Center, a database of education citations, abstracts and texts that was created in 1964 and made available online through DIALOG in 1969.
In 1994, digital libraries became widely visible in the research community due to a $24.4 million NSF-managed program supported jointly by DARPA's Intelligent Integration of Information program, NASA, and NSF itself. Successful research proposals came from six U.S. universities: Carnegie Mellon University, the University of California-Berkeley, the University of Michigan, the University of Illinois, the University of California-Santa Barbara, and Stanford University. Articles from the projects summarized their progress at the halfway point in May 1996. The Stanford research, by Sergey Brin and Larry Page, led to the founding of Google.
Early attempts at creating a model for digital libraries included the DELOS Digital Library Reference Model and the 5S Framework.

Terminology

The term digital library was first popularized by the NSF/DARPA/NASA Digital Libraries Initiative in 1994. With the availability of computer networks, information resources are expected to remain distributed and be accessed as needed, whereas in Vannevar Bush's essay As We May Think they were to be collected and kept within the researcher's Memex.
The term virtual library was initially used interchangeably with digital library, but is now primarily used for libraries that are virtual in other senses. In the early days of digital libraries, there was discussion of the similarities and differences among the terms digital, virtual, and electronic.
A distinction is often made between content that was created in digital form, known as born-digital, and information that has been converted from a physical medium, such as paper, through digitization. Not all electronic content is in digital data format. The term hybrid library is sometimes used for libraries that hold both physical and electronic collections; for example, American Memory is a digital library within the Library of Congress, an institution that also maintains extensive physical holdings.
Some important digital libraries also serve as long term archives, such as arXiv and the Internet Archive. Others, such as the Digital Public Library of America, seek to make digital information from various institutions widely accessible online.

Types of digital libraries

Institutional repositories

Many academic libraries are actively involved in building institutional repositories of the institution's books, papers, theses, and other works which can be digitized or were 'born digital'. Many of these repositories are made available to the general public with few restrictions, in accordance with the goals of open access, in contrast to the publication of research in commercial journals, where publishers often limit access rights. Institutional, truly free, and corporate repositories are sometimes referred to as digital libraries. Institutional repository software is designed for archiving, organizing, and searching a library's content. Widely used solutions include the open-source DSpace and EPrints, the commercial Digital Commons, and the Fedora Commons-based systems Islandora and Samvera.

National library collections

Legal deposit is often covered by copyright legislation and sometimes by laws specific to legal deposit, and requires that one or more copies of all material published in a country be submitted for preservation in an institution, typically the national library. Since the advent of electronic documents, legislation has had to be amended to cover the new formats, such as the 2016 amendment to the Copyright Act 1968 in Australia.
Since then, various types of electronic depositories have been built. The British Library's Publisher Submission Portal and the German model at the Deutsche Nationalbibliothek have one deposit point for a network of libraries, but public access is available only in the reading rooms of those libraries. The Australian National edeposit system has the same features, but also allows remote access by the general public to most of the content.

Digital archives

Physical archives differ from physical libraries in several ways. Traditionally, archives are defined as:
  1. Containing primary sources of information rather than the secondary sources found in a library.
  2. Having their contents organized in groups rather than individual items.
  3. Having unique contents.
The technology used to create digital libraries is even more revolutionary for archives since it breaks down the second and third of these general rules. In other words, "digital archives" or "online archives" will still generally contain primary sources, but they are likely to be described individually rather than in groups or collections. Further, because they are digital, their contents are easily reproducible and may indeed have been reproduced from elsewhere. The Oxford Text Archive is generally considered to be the oldest digital archive of academic physical primary source materials.
Archives differ from libraries in the nature of the materials held. Libraries collect individual published books and serials, or bounded sets of individual items. The books and journals held by libraries are not unique, since multiple copies exist and any given copy will generally prove as satisfactory as any other. The material in archives and manuscript libraries, by contrast, consists of "the unique records of corporate bodies and the papers of individuals and families".
A fundamental characteristic of archives is that they have to keep the context in which their records were created, and the network of relationships between them, in order to preserve their informative content and provide understandable and useful information over time. This fundamental characteristic resides in their hierarchical organization, which expresses the context by means of the archival bond.
Archival descriptions are the fundamental means to describe, understand, retrieve, and access archival material. At the digital level, archival descriptions are usually encoded using the Encoded Archival Description (EAD) XML format. EAD is a standardized electronic representation of archival description that makes it possible to provide union access to detailed archival descriptions and resources in repositories distributed throughout the world.
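As an illustration, the following is a minimal sketch of reading an EAD 2002 finding aid with Python's standard library. The file name is hypothetical, the namespace shown is the EAD 2002 one (some real finding aids omit it entirely), and the elements queried are common but not guaranteed to be present:

```python
import xml.etree.ElementTree as ET

# EAD 2002 documents commonly declare this namespace; some omit it.
NS = {"ead": "urn:isbn:1-931666-22-9"}

tree = ET.parse("finding_aid.xml")  # hypothetical local EAD file
root = tree.getroot()

# Collection-level title from the top of the archival description.
title = root.find(".//ead:archdesc/ead:did/ead:unittitle", NS)
print("Collection:", title.text if title is not None else "unknown")

# Walk the <c> components, whose nesting carries the hierarchy that
# expresses the archival bond between records.
for component in root.iter(f"{{{NS['ead']}}}c"):
    unittitle = component.find("ead:did/ead:unittitle", NS)
    if unittitle is not None and unittitle.text:
        print("-", unittitle.text.strip())
```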
Given the importance of archives, a dedicated formal model, called NESTOR, built around their peculiar constituents, has been defined. NESTOR is based on the idea of expressing the hierarchical relationships between objects through the inclusion property between sets, in contrast to the binary relation between nodes exploited by the tree.
NESTOR has been used to formally extend the 5S model to define a digital archive as a specific case of digital library able to take into consideration the peculiar features of archives.
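To make the contrast concrete, here is a minimal sketch, not drawn from the NESTOR literature itself, of encoding the same small hierarchy both as a tree of parent pointers and as nested sets, where ancestry becomes plain set inclusion:

```python
# A small fonds -> series -> item hierarchy encoded two ways.

# 1. Tree: a binary parent-child relation between nodes.
parent = {
    "series_A": "fonds",
    "series_B": "fonds",
    "item_1": "series_A",
    "item_2": "series_B",
}

# 2. Nested sets in the spirit of NESTOR: each node maps to the set of
#    leaf records it transitively contains, so "X contains Y" becomes
#    set inclusion instead of a walk over parent pointers.
nested = {
    "fonds": {"item_1", "item_2"},
    "series_A": {"item_1"},
    "series_B": {"item_2"},
    "item_1": {"item_1"},
    "item_2": {"item_2"},
}

def is_ancestor(a: str, b: str) -> bool:
    """True if node a contains node b, expressed purely by set inclusion."""
    return a != b and nested[b] <= nested[a]

print(is_ancestor("fonds", "series_A"))     # True
print(is_ancestor("series_A", "series_B"))  # False
```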

Features of digital libraries

The advantages of digital libraries as a means of easily and rapidly accessing books, archives and images of various types are now widely recognized by commercial interests and public bodies alike.
Traditional libraries are limited by storage space; digital libraries have the potential to store much more information, simply because digital information requires very little physical space to contain it. As such, the cost of maintaining a digital library can be much lower than that of a traditional library. A physical library must spend large sums of money paying for staff, book maintenance, rent, and additional books. Digital libraries may reduce or, in some instances, do away with these fees. Both types of library require cataloging input to allow users to locate and retrieve material. Digital libraries may be more willing to adopt innovations in technology, providing users with improvements in electronic and audio book technology as well as presenting new forms of communication such as wikis and blogs; conventional libraries may consider that providing online access to their OPAC catalog is sufficient. An important advantage of digital conversion is increased accessibility to users. Digital libraries also increase availability to individuals who may not be traditional patrons of a library, due to geographic location or organizational affiliation.
There are a number of software packages for use in general digital libraries. Institutional repository software, which focuses primarily on the ingest, preservation, and access of locally produced documents, particularly locally produced academic outputs, forms a distinct category. Digital library software may also be proprietary, as is the case at the Library of Congress, which uses Digiboard and CTS to manage digital content.
Digital libraries designed and implemented so that computer systems and software can make use of the information when it is exchanged are referred to as semantic digital libraries. Semantic libraries are also used to socialize with different communities across a mass of social networks. DjDL is one type of semantic digital library. Keyword-based and semantic search are the two main types of search; the semantic search provides a tool that creates groups for the augmentation and refinement of keyword-based searches. The conceptual knowledge used in DjDL is centered on two forms: the subject ontology and the set of concept search patterns based on that ontology. The three types of ontologies associated with this search are bibliographic ontologies, community-aware ontologies, and subject ontologies.

Metadata

In traditional libraries, the ability to find works of interest is directly related to how well they were cataloged. While cataloging electronic works digitized from a library's existing holdings may be as simple as copying or moving a record from the print to the electronic form, complex and born-digital works require substantially more effort. To handle the growing volume of electronic publications, new tools and technologies have to be designed to allow effective automated semantic classification and searching. While full-text search can be used for some items, there are many common catalog searches which cannot be performed using full text alone.
Most digital libraries provide a search interface which allows resources to be found. These resources are typically deep web resources since they frequently cannot be located by search engine crawlers. Some digital libraries create special pages or sitemaps to allow search engines to find all their resources. Digital libraries frequently use the Open Archives Initiative Protocol for Metadata Harvesting to expose their metadata to other digital libraries, and search engines like Google Scholar, Yahoo! and Scirus can also use OAI-PMH to find these deep web resources.
There are two general strategies for searching a federation of digital libraries: distributed searching and searching previously harvested metadata.
Distributed searching typically involves a client sending multiple search requests in parallel to a number of servers in the federation. The results are gathered, duplicates are eliminated or clustered, and the remaining items are sorted and presented back to the client. Protocols like Z39.50 are frequently used in distributed searching. A benefit of this approach is that the resource-intensive tasks of indexing and storage are left to the respective servers in the federation. A drawback is that the search mechanism is limited by the differing indexing and ranking capabilities of each database, making it difficult to assemble a combined result consisting of the most relevant items found.
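A minimal sketch of this fan-out pattern in Python follows; the endpoint URLs and the JSON response shape (records with "id" and "title" keys) are assumptions, and a real federation would more likely speak a protocol such as Z39.50 or SRU:

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical search endpoints for the libraries in the federation.
SERVERS = [
    "https://library-a.example.org/search",
    "https://library-b.example.org/search",
]

def query(server: str, q: str) -> list[dict]:
    # Assumes each server returns a JSON list of {"id": ..., "title": ...}.
    with urlopen(f"{server}?{urlencode({'q': q})}", timeout=10) as resp:
        return json.load(resp)

def federated_search(q: str) -> list[dict]:
    # Send the query to every server in parallel.
    with ThreadPoolExecutor() as pool:
        result_sets = pool.map(lambda s: query(s, q), SERVERS)
    # Merge and de-duplicate by identifier; ranking across servers is
    # the hard part noted above, since each server scores differently.
    seen, merged = set(), []
    for records in result_sets:
        for rec in records:
            if rec["id"] not in seen:
                seen.add(rec["id"])
                merged.append(rec)
    return sorted(merged, key=lambda r: r["title"])
```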
Searching over previously harvested metadata involves searching a locally stored index of information that has previously been collected from the libraries in the federation. When a search is performed, the search mechanism does not need to make connections with the digital libraries it is searching; it already has a local representation of the information. This approach requires the creation of an indexing and harvesting mechanism which operates regularly, connecting to all the digital libraries and querying the whole collection in order to discover new and updated resources. OAI-PMH is frequently used by digital libraries to allow their metadata to be harvested. A benefit of this approach is that the search mechanism has full control over indexing and ranking algorithms, possibly allowing more consistent results. A drawback is that harvesting and indexing systems are more resource-intensive and therefore expensive.
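For illustration, here is a minimal harvesting sketch using only the Python standard library. The endpoint URL is a placeholder, but the verb, metadataPrefix, and resumptionToken parameters are defined by the OAI-PMH specification:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode
from urllib.request import urlopen

OAI = "http://www.openarchives.org/OAI/2.0/"
DC = "http://purl.org/dc/elements/1.1/"
BASE_URL = "https://repository.example.org/oai"  # placeholder endpoint

def harvest(base_url: str):
    """Yield Dublin Core titles, following resumption tokens page by page."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        with urlopen(f"{base_url}?{urlencode(params)}") as resp:
            root = ET.fromstring(resp.read())
        for title in root.iter(f"{{{DC}}}title"):
            yield title.text
        token = root.find(f".//{{{OAI}}}resumptionToken")
        if token is None or not (token.text or "").strip():
            break  # no further pages to harvest
        # Subsequent requests carry only the verb and the token.
        params = {"verb": "ListRecords", "resumptionToken": token.text}
```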

Digital preservation

Digital preservation aims to ensure that digital media and information systems remain interpretable into the indefinite future. Each necessary component of this must be migrated, preserved, or emulated. Typically, lower levels of systems are emulated, bit-streams are preserved, and operating systems are emulated as virtual machines. Only where the meaning and content of digital media and information systems are well understood is migration possible, as is the case for office documents. However, at least one organization, the WiderNet Project, has created an offline digital library, the eGranary, by reproducing materials on a 6 TB hard drive. Instead of a bit-stream environment, the digital library contains a built-in proxy server and search engine so the digital materials can be accessed using an Internet browser; the materials are not, however, preserved for the future. The eGranary is intended for use in places or situations where Internet connectivity is very slow, non-existent, unreliable, unsuitable, or too expensive.
In the past few years, procedures for digitizing books at high speed and comparatively low cost have improved considerably, with the result that it is now possible to digitize millions of books per year. Google's book-scanning project is also working with libraries to offer digitized books, pushing the digitized-book realm forward.

Copyright and licensing

Digital libraries are hampered by copyright law because, unlike with traditional printed works, the laws of digital copyright are still being formed. The republication of material on the web by libraries may require permission from rights holders, and there is a conflict of interest between libraries and publishers who may wish to create online versions of their acquired content for commercial purposes. In 2010, it was estimated that twenty-three percent of books in existence were created before 1923 and thus out of copyright. Of those printed after this date, only five percent were still in print as of 2010. Thus, approximately seventy-two percent of books were neither out of copyright nor in print, and so were not available to the public.
There is a dilution of responsibility that occurs as a result of the distributed nature of digital resources. Complex intellectual property matters may become involved, since digital material is not always owned by a library; in many cases the content is public domain or self-generated only. Some digital libraries, such as Project Gutenberg, work to digitize out-of-copyright works and make them freely available to the public. An estimate has been made of the number of distinct books still extant in library catalogues from 2000 BC to 1960.
The fair use provisions under the Copyright Act of 1976 provide specific guidelines for the circumstances under which libraries are allowed to copy digital resources. The four factors that constitute fair use are the purpose of the use, the nature of the work, the amount or substantiality used, and the market impact.
Some digital libraries acquire a license to lend their resources. This may involve the restriction of lending out only one copy at a time for each license, and applying a system of digital rights management for this purpose.
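A rough sketch of this one-copy-per-license rule follows; the names and structure are illustrative, and real systems enforce the limit through digital rights management rather than a simple counter:

```python
class LicensedTitle:
    """Tracks concurrent loans of an e-book against the licenses held."""

    def __init__(self, licenses: int):
        self.licenses = licenses  # copies the library has paid for
        self.on_loan = 0

    def check_out(self) -> bool:
        # Only one simultaneous loan per license is permitted.
        if self.on_loan < self.licenses:
            self.on_loan += 1
            return True
        return False  # patron joins a hold queue instead

    def check_in(self) -> None:
        self.on_loan = max(0, self.on_loan - 1)

book = LicensedTitle(licenses=2)
assert book.check_out() and book.check_out()
assert not book.check_out()  # a third patron must wait for a return
```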
The Digital Millennium Copyright Act of 1998 was created in the United States to deal with the introduction of digital works. It incorporates two treaties from 1996 and criminalizes attempts to circumvent measures which limit access to copyrighted materials. The Act provides an exemption for nonprofit libraries and archives, allowing up to three copies to be made, one of which may be digital; these copies may not, however, be made public or distributed on the web. Further, it allows libraries and archives to copy a work if its format becomes obsolete.
Copyright issues persist; as a result, proposals have been put forward suggesting that digital libraries be exempted from copyright law. Although this would be very beneficial to the public, it could have a negative economic effect, and authors might be less inclined to create new works.
Another issue that complicates matters is the desire of some publishing houses to restrict the use of digital materials such as e-books purchased by libraries. Whereas with printed books the library owns the book until it can no longer be circulated, publishers want to limit the number of times an e-book can be checked out before the library must repurchase it. HarperCollins, for example, began licensing use of each e-book copy for a maximum of 26 loans; this affects only the most popular titles and has no practical effect on others, and after the limit is reached the library can repurchase access rights at a lower cost than the original price. While from a publishing perspective this sounds like a good balance between library lending and protection against a feared decrease in book sales, libraries are not set up to monitor their collections in this way. They acknowledge patrons' increased demand for digital materials and the desire of digital libraries to expand to include best sellers, but publisher licensing may hinder the process.

Recommendation systems

Many digital libraries offer recommender systems to reduce information overload and help their users discover relevant literature. Examples of digital libraries offering recommender systems include IEEE Xplore, Europeana, and GESIS Sowiport. These recommender systems mostly use content-based filtering, but other approaches, such as collaborative filtering and citation-based recommendations, are also used. Beel et al. report more than 90 different recommendation approaches for digital libraries, presented in more than 200 research articles.
Typically, digital libraries develop and maintain their own recommender systems based on existing search and recommendation frameworks such as Apache Lucene or Apache Mahout. However, there are also some recommendation-as-a-service providers specializing in offering recommender systems for digital libraries as a service.
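As a sketch of the content-based approach mentioned above, and not any particular library's system, the following uses scikit-learn's TF-IDF vectorizer and cosine similarity over a toy catalogue:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy catalogue; a digital library would use titles, abstracts, subjects.
docs = [
    "digital preservation of archival records",
    "metadata harvesting with OAI-PMH",
    "recommender systems for scholarly literature",
    "content-based filtering for digital libraries",
]

# Represent each item as a TF-IDF vector over its description.
vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)

def recommend(doc_index: int, top_n: int = 2) -> list[int]:
    """Return indices of the items most similar to the given one."""
    scores = cosine_similarity(vectors[doc_index], vectors).ravel()
    scores[doc_index] = -1.0  # never recommend the item itself
    return scores.argsort()[::-1][:top_n].tolist()

print(recommend(3))  # items closest to the content-based filtering doc
```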

Drawbacks of digital libraries

Digital libraries, or at least their digital collections, have also brought their own problems and challenges, and many large-scale digitisation projects perpetuate them.

Future development

Large-scale digitization projects are underway at Google, the Million Book Project, and the Internet Archive. With continued improvements in book handling and presentation technologies, such as optical character recognition, and the development of alternative depositories and business models, digital libraries are rapidly growing in popularity. Just as libraries have ventured into audio and video collections, so have digital libraries such as the Internet Archive. The Google Books project won a court victory allowing it to proceed with its book-scanning project, which had been halted by the Authors Guild. This helped open the road for libraries to work with Google to better reach patrons who are accustomed to computerized information.
According to Larry Lannom, Director of Information Management Technology at the nonprofit Corporation for National Research Initiatives, "all the problems associated with digital libraries are wrapped up in archiving." He goes on to state, "If in 100 years people can still read your article, we'll have solved the problem." Daniel Akst, author of The Webster Chronicle, proposes that "the future of libraries — and of information — is digital." Peter Lyman and Hal Varian, information scientists at the University of California, Berkeley, estimate that "the world's total yearly production of print, film, optical, and magnetic content would require roughly 1.5 billion gigabytes of storage." Therefore, they believe that "soon it will be technologically possible for an average person to access virtually all recorded information."
Collection development and content selection decisions for libraries' electronic resources typically involve various qualitative and quantitative methods. In the 2020s, libraries have expanded their use of open-source data analysis tools such as the non-profit Unpaywall Journals, which combines several of these methods.