Wget


GNU Wget is a computer program that retrieves content from web servers. It is part of the GNU Project. Its name derives from World Wide Web and get. It supports downloading via HTTP, HTTPS, and FTP.
Its features include recursive download, conversion of links for offline viewing of local HTML, and support for proxies. It appeared in 1996, coinciding with the boom in the Web's popularity, and quickly came into wide use among Unix users and was distributed with most major Linux distributions. Written in portable C, Wget can be easily installed on any Unix-like system. Wget has been ported to Microsoft Windows, Mac OS X, OpenVMS, HP-UX, MorphOS and AmigaOS. Since version 1.14 Wget has been able to save its output in the web archiving standard WARC format.
It has been used as the basis for graphical programs such as GWget for the GNOME Desktop.

History

Wget descends from an earlier program named Geturl by the same author, the development of which commenced in late 1995. The name changed to Wget after the author became aware of an earlier Amiga program named GetURL, written by James Burton in AREXX.
Wget filled a gap in the inconsistent web-downloading software available in the mid-1990s. No single program could reliably use both HTTP and FTP to download files. Existing programs either supported FTP or were written in Perl, which was not yet ubiquitous. While Wget was inspired by features of some of the existing programs, it supported both HTTP and FTP and could be built using only the standard development tools found on every Unix system.
At that time many Unix users struggled behind extremely slow university and dial-up Internet connections, leading to a growing need for a downloading agent that could deal with transient network failures without assistance from the human operator.
In 2010, Chelsea Manning used Wget to download 250,000 U.S. diplomatic cables and 500,000 Army reports, which were sent to WikiLeaks and came to be known as the Iraq War logs and Afghan War logs.

Features

Robustness

Wget has been designed for robustness over slow or unstable network connections. If a download does not complete due to a network problem, Wget will automatically try to continue the download from where it left off, and repeat this until the whole file has been retrieved. It was one of the first clients to make use of the then-new Range HTTP header to support this feature.
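For instance, an interrupted transfer can be resumed and retried until it finishes; a minimal sketch (the URL and file name are illustrative):

# Resume a partial download (-c) and keep retrying until it completes (-t 0 means unlimited tries)
wget -c -t 0 http://www.example.com/large-file.iso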

Recursive download

Wget can optionally work like a web crawler by extracting resources linked from HTML pages and downloading them in sequence, repeating the process recursively until all the pages have been downloaded or a maximum recursion depth specified by the user has been reached. The downloaded pages are saved in a directory structure resembling that on the remote server. This "recursive download" enables partial or complete mirroring of web sites via HTTP. Links in downloaded HTML pages can be adjusted to point to locally downloaded material for offline viewing. When performing this kind of automatic mirroring of web sites, Wget supports the Robots Exclusion Standard.
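A sketch of such a recursive retrieval with a depth limit and local link conversion (hostname and depth are illustrative):

# Download example.com two levels deep and rewrite links in the saved
# HTML so that they point to the locally downloaded copies
wget -r -l 2 --convert-links http://www.example.com/
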
Recursive download works with FTP as well, where Wget issues the LIST command to find which additional files to download, repeating this process for directories and files under the one specified in the top URL. Shell-like wildcards are supported when the download of FTP URLs is requested.
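A sketch of an FTP download using shell-like wildcards (server and path are illustrative; the URL is quoted so the shell does not expand the wildcard itself):

# Fetch every .tar.gz file in a remote FTP directory
wget 'ftp://ftp.example.com/pub/software/*.tar.gz'
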
When downloading recursively over either HTTP or FTP, Wget can be instructed to inspect the timestamps of local and remote files, and download only the remote files newer than the corresponding local ones. This allows easy mirroring of HTTP and FTP sites, but is considered inefficient and more error-prone when compared to programs designed for mirroring from the ground up, such as rsync. On the other hand, Wget doesn't require special server-side software for this task.
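A sketch of refreshing an existing local copy using timestamps (hostname illustrative):

# Repeat a recursive download, but fetch only remote files that are
# newer than the corresponding local ones (-N enables timestamping)
wget -r -N http://www.example.com/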

Non-interactiveness

Wget is non-interactive in the sense that, once started, it does not require user interaction and does not need to control a TTY, being able to log its progress to a separate file for later inspection. Users can start Wget and log off, leaving the program unattended. By contrast, most graphical or text user interface web browsers require the user to remain logged in and to manually restart failed downloads, which can be a great hindrance when transferring a lot of data.
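A sketch of such an unattended run (URL and log file name are illustrative):

# Go to the background immediately (-b) and write all progress messages
# to download.log (-o) so the session can be inspected later
wget -b -o download.log http://www.example.com/large-archive.tar.gz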

Portability

Written in a highly portable style of C with minimal dependencies on third-party libraries, Wget requires little more than a C compiler and a BSD-like interface to TCP/IP networking. Designed as a Unix program invoked from the Unix shell, the program has been ported to numerous Unix-like environments and systems, including Microsoft Windows via Cygwin, and Mac OS X. It is also available as a native Microsoft Windows program as one of the GnuWin packages.

Other features

Basic usage

Typical usage of GNU Wget consists of invoking it from the command line, providing one or more URLs as arguments.

# Download the title page of example.com to a file
# named "index.html".
wget http://www.example.com/


# Download Wget's source code from the GNU ftp site.
wget ftp://ftp.gnu.org/pub/gnu/wget/wget-latest.tar.gz

More complex usage includes automatic download of multiple URLs into a directory hierarchy.

# Download *.gif from a website
wget -e robots=off -r -l 1 --no-parent -A.gif ftp://www.example.com/dir/


# Download the title page of example.com, along with
# the images and style sheets needed to display the page, and convert the
# URLs inside it to refer to locally available content.
wget -p -k http://www.example.com/


# Download the entire contents of example.com
wget -r -l 0 http://www.example.com/

Advanced examples

Download a mirror of the errata for a book you just purchased, follow all local links recursively, and make the files suitable for offline viewing. Use a random wait of up to 5 seconds between file downloads and log the results to "myLog.log". When a download fails, retry up to 7 times, waiting up to 14 seconds between retries.

wget -t 7 -w 5 --waitretry=14 --random-wait -m -k -K -e robots=off \
     http://www.oreilly.com/catalog/upt3/errata/ -o ./myLog.log

Collect only the specific links listed line by line in the local file "my_movies.txt". Use a random wait of 0 to 33 seconds between files, and throttle the bandwidth to 512 kilobytes per second. When a download fails, retry up to 22 times, waiting up to 48 seconds between retries. Send no tracking user agent or HTTP referer to the restrictive site and ignore robot exclusions. Place all the captured files in the local "movies" directory and write the access results to the local file "my_movies.log". This is useful for downloading specific sets of files without hogging the network:
wget -t 22 --waitretry=48 --wait=33 --random-wait --referer="" --user-agent="" \
     --limit-rate=512k -e robots=off -o ./my_movies.log -P ./movies -i ./my_movies.txt
Instead of an empty referer and user-agent, use real ones that do not cause an “ERROR: 403 Forbidden” message from a restrictive site. It is also possible to create a .wgetrc file that holds some default values; a minimal example appears after the cookie commands below. To get around cookie-tracked sessions:

# Using Wget to download content protected by referer and cookies.
# 1. Get a base URL and save its cookies in a file.
# 2. Get protected content using stored cookies.
wget --cookies=on --keep-session-cookies --save-cookies=cookie.txt http://first_page
wget --referer=http://first_page --cookies=on --load-cookies=cookie.txt \
     --keep-session-cookies --save-cookies=cookie.txt http://second_page
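
As noted above, a .wgetrc file can hold default values so that they do not have to be repeated on every command line. A minimal sketch, with purely illustrative values taken from the examples in this section:

# ~/.wgetrc -- default settings read by Wget at startup (values are examples only)
tries = 22
wait = 33
random_wait = on
limit_rate = 512k
robots = off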

Mirror a site, converting CGI, ASP or PHP pages and the like to HTML for offline browsing:

# Mirror website to a static copy for local browsing.
# This means all links will be changed to point to the local files.
# Note that --html-extension will convert any CGI, ASP or PHP generated files to HTML.
wget --mirror -w 2 -p --html-extension --convert-links -P $ http://www.yourdomain.com

Authors and copyright

GNU Wget was written by Hrvoje Nikšić with contributions by many other people, including Dan Harkless, Ian Abbott, and Mauro Tortonesi. Significant contributions are credited in the AUTHORS file included in the distribution, and all remaining ones are documented in the changelogs, also included with the program. Wget is currently maintained by Giuseppe Scrivano, Tim Rühsen and Darshit Shah.
The copyright to Wget belongs to the Free Software Foundation, whose policy is to require copyright assignments for all non-trivial contributions to GNU software.

License

GNU Wget is distributed under the terms of the GNU General Public License, version 3 or later, with a special exception that allows distribution of binaries linked against the OpenSSL library. The text of the exception follows:

Additional permission under GNU GPL version 3 section 7
If you modify this program, or any covered work, by linking or combining it with the OpenSSL project's OpenSSL library, containing parts covered by the terms of the OpenSSL or SSLeay licenses, the Free Software Foundation grants you additional permission to convey the resulting work. Corresponding Source for a non-source form of such a combination shall include the source code for the parts of OpenSSL used as well as that of the covered work.

It is expected that the exception clause will be removed once Wget is modified to also link with the GnuTLS library.
Wget's documentation, in the form of a Texinfo reference manual, is distributed under the terms of the GNU Free Documentation License, version 1.2 or later. The man page usually distributed on Unix-like systems is automatically generated from a subset of the Texinfo manual and falls under the terms of the same license.

Development

Wget is developed in an open fashion, with most design decisions discussed on the public mailing list followed by users and developers. Bug reports and patches are relayed to the same list.

Release

When a sufficient number of features or bug fixes accumulate during development, Wget is released to the general public via the GNU FTP site and its mirrors. Because the project is run entirely by volunteers, there is no external pressure to issue a release, nor are there enforceable release deadlines.
Releases are numbered as versions of the form major.minor[.revision], such as Wget 1.11 or Wget 1.8.2. An increase of the major version number represents large and possibly incompatible changes in Wget's behavior or a radical redesign of the code base. An increase of the minor version number designates addition of new features and bug fixes. A new revision indicates a release that, compared to the previous revision, only contains bug fixes. Revision zero is omitted, meaning that for example Wget 1.11 is the same as 1.11.0. Wget does not use the odd-even release number convention popularized by Linux.

Popular references

Wget makes an appearance in the 2010 Columbia Pictures motion picture release, The Social Network. The lead character, loosely based on Facebook co-founder Mark Zuckerberg, uses Wget to aggregate student photos from various Harvard University housing-facility directories.

Notable releases

The following releases represent notable milestones in Wget's development. Features listed next to each release are edited for brevity and do not constitute comprehensive information about the release, which is available in the NEWS file distributed with Wget.

GWget

GWget is a free software graphical user interface for Wget. It is developed by David Sedeño Fernández and is part of the GNOME project. GWget supports all of the main features that Wget does, as well as parallel downloads.

Cliget

Cliget is an open-source Firefox add-on downloader that uses Curl, Wget and Aria2. It is developed by Zaid Abdulla.

wget2

GNU Wget2 is currently under development. It includes many improvements over Wget; in particular, Wget2 often downloads much faster than Wget 1.x owing to its support for newer protocols and techniques such as HTTP/2 and parallel connections.