How to Use curl to Download Files From the Linux Command Line

The Linux curl command can do a whole lot more than download files. Find out what curl is capable of, and when you should use it instead of wget.

curl vs. wget: What's the Difference?

People often struggle to identify the relative strengths of the wget and curl commands. The commands do have some functional overlap. They can each retrieve files from remote locations, but that's where the similarity ends.

wget is a fantastic tool for downloading content and files. It can download files, web pages, and directories. It contains intelligent routines to traverse links in web pages and recursively download content across an entire website. It is unsurpassed as a command-line download manager.

curl satisfies an altogether different need. Yes, it can retrieve files, but it cannot recursively navigate a website looking for content to retrieve. What curl actually does is let you interact with remote systems by making requests to those systems, and retrieving and displaying their responses to you. Those responses might well be web page content and files, but they can also contain data provided via a web service or API as a result of the "question" asked by the curl request.

And curl isn’t restricted to web sites. curl helps over 20 protocols, together with HTTP, HTTPS, SCP, SFTP, and FTP. And arguably, attributable to its superior dealing with of Linux pipes, curl could be extra simply built-in with different instructions and scripts.

The author of curl has a webpage that describes the differences he sees between curl and wget.

Installing curl

Of the computers used to research this article, Fedora 31 and Manjaro 18.1.0 already had curl installed. curl had to be installed on Ubuntu 18.04 LTS. On Ubuntu, run this command to install it:

sudo apt-get install curl

The curl Version

The --version option makes curl report its version. It also lists all the protocols that it supports.

curl --version
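If you only want part of that report, the usual text tools can pick it apart — a small sketch:

```shell
# The first line of the report carries the version number;
# the "Protocols:" line lists everything this build can speak.
curl --version | head -n 1
curl --version | grep '^Protocols:'
```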

Retrieving a Web Page

If we point curl at a web page, it will retrieve it for us.

curl https://www.bbc.com

But its default action is to dump it to the terminal window as source code.

Beware: If you don't tell curl you want something stored as a file, it will always dump it to the terminal window. If the file it's retrieving is a binary file, the outcome can be unpredictable. The shell may try to interpret some of the byte values in the binary file as control characters or escape sequences.

Saving Data to a File

Let's tell curl to redirect the output into a file:

curl https://www.bbc.com > bbc.html

This time we don't see the retrieved information; it is sent straight to the file for us. Because there is no terminal window output to display, curl outputs a set of progress information.

It didn't do this in the previous example because the progress information would have been scattered throughout the web page source code, so curl automatically suppressed it.

In this example, curl detects that the output is being redirected to a file and that it is safe to generate the progress information.

The information provided is:

  • % Total: The total amount to be retrieved.
  • % Received: The percentage and actual values of the data retrieved so far.
  • % Xferd: The percentage and actual amount sent, if data is being uploaded.
  • Average Speed Dload: The average download speed.
  • Average Speed Upload: The average upload speed.
  • Time Total: The estimated total duration of the transfer.
  • Time Spent: The elapsed time so far for this transfer.
  • Time Left: The estimated time left for the transfer to complete.
  • Current Speed: The current transfer speed for this transfer.

Because we redirected the output from curl to a file, we now have a file called "bbc.html."

Double-clicking that file will open your default browser so that it displays the retrieved web page.

Note that the address in the browser address bar is a local file on this computer, not a remote website.

We don't have to redirect the output to create a file. We can create a file by using the -o (output) option, and telling curl to create the file. Here we're using the -o option and providing the name of the file we wish to create, "bbc.html."

curl -o bbc.html https://www.bbc.com
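If you would rather not see the progress figures at all, the -s (silent) option suppresses them; adding -S brings error messages back if something goes wrong. A sketch:

```shell
# Same download, but with the progress meter suppressed.
# -S makes curl still report errors despite the silence.
curl -sS -o bbc.html https://www.bbc.com
```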

Using a Progress Bar To Monitor Downloads

To have the text-based download information replaced by a simple progress bar, use the -# (progress bar) option.

curl -# -o bbc.html https://www.bbc.com

Restarting an Interrupted Download

It's easy to restart a download that has been terminated or interrupted. Let's start a download of a sizeable file. We'll use the latest Long Term Support build of Ubuntu 18.04. We're using the --output option to specify the name of the file we wish to save it as: "ubuntu18043.iso."

curl --output ubuntu18043.iso http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso

The download starts and works its way toward completion.

If we forcibly interrupt the download with Ctrl+C, we're returned to the command prompt, and the download is abandoned.

To restart the download, use the -C (continue at) option. This causes curl to restart the download at a specified point, or offset, within the target file. If you use a hyphen - as the offset, curl will look at the portion of the file that has already been downloaded and determine the correct offset to use by itself.

curl -C - --output ubuntu18043.iso http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso

The download is restarted. curl reports the offset at which it is restarting.

Retrieving HTTP Headers

With the -I (head) option, you can retrieve the HTTP headers only. This is the same as sending the HTTP HEAD command to a web server.

curl -I www.twitter.com

This command retrieves information only; it doesn't download any web pages or files.

Downloading Multiple URLs

Using xargs we can download multiple URLs at once. Perhaps we want to download a series of web pages that make up a single article or tutorial.

Copy these URLs to an editor and save the list to a file called "urls-to-download.txt." We can use xargs to treat the content of each line of the text file as a parameter, which it will feed to curl in turn.
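The file itself is just one URL per line. The addresses below are hypothetical placeholders to show the shape, not the article's actual list:

```shell
# Create urls-to-download.txt with one (placeholder) URL per line.
cat > urls-to-download.txt << 'EOF'
https://example.com/tutorial/part-one.html
https://example.com/tutorial/part-two.html
https://example.com/tutorial/part-three.html
EOF
```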


This is the command we need to use to have xargs pass these URLs to curl one at a time:

xargs -n 1 curl -O < urls-to-download.txt

Note that this command uses the -O (remote file) output command, which uses an uppercase "O." This option causes curl to save the retrieved file with the same name that the file has on the remote server.

The -n 1 option tells xargs to treat each line of the text file as a single parameter.
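You can watch what -n 1 does by substituting echo for curl — each line of the file arrives as its own separate command invocation (a toy sketch, no downloading involved):

```shell
# Two fake "URLs", one per line.
printf 'url-one\nurl-two\n' > /tmp/xargs-demo.txt
# xargs runs 'echo would-fetch' once per line, just as it would
# run 'curl -O' once per URL in the real command.
xargs -n 1 echo would-fetch < /tmp/xargs-demo.txt
```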

When you run the command, you'll see multiple downloads start and finish, one after the other.

Checking in the file browser shows that the multiple files have been downloaded. Each one bears the name it had on the remote server.

RELATED: How to Use the xargs Command on Linux

Downloading Files From an FTP Server

Using curl with a File Transfer Protocol (FTP) server is easy, even if you have to authenticate with a username and password. To pass a username and password with curl, use the -u (user) option, and type the username, a colon ":", and the password. Don't put a space before or after the colon.
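curl also accepts credentials embedded in the URL itself (ftp://user:password@host/); -u is usually preferable because it keeps the password out of the URL. The host and credentials below are placeholders, not a real server:

```shell
# These two forms authenticate identically (hypothetical host):
curl -u alice:secret ftp://ftp.example.com/
curl ftp://alice:secret@ftp.example.com/
```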

This is a free-for-testing FTP server hosted by Rebex. The test FTP site has a pre-set username of "demo", and the password is "password." Don't use this type of weak username and password on a production or "real" FTP server.

curl -u demo:password ftp://test.rebex.net

curl figures out that we're pointing it at an FTP server, and returns a list of the files that are present on the server.

The only file on this server is a "readme.txt" file, 403 bytes in length. Let's retrieve it. Use the same command as a moment ago, with the filename appended to it:

curl -u demo:password ftp://test.rebex.net/readme.txt

The file is retrieved, and curl displays its contents in the terminal window.

In almost all cases, it will be more convenient to have the retrieved file saved to disk for us, rather than displayed in the terminal window. Once more, we can use the -O (remote file) output command to have the file saved to disk, with the same filename that it has on the remote server.

curl -O -u demo:password ftp://test.rebex.net/readme.txt

The file is retrieved and saved to disk. We can use ls to check the file details. It has the same name as the file on the FTP server, and it is the same length, 403 bytes.

ls -hl readme.txt

RELATED: How to Use the FTP Command on Linux

Sending Parameters to Remote Servers

Some remote servers will accept parameters in requests that are sent to them. The parameters might be used to format the returned data, for example, or they may be used to select the exact data that the user wants to retrieve. It is often possible to interact with web application programming interfaces (APIs) using curl.

As a simple example, the ipify website has an API that can be queried to ascertain your external IP address.

curl https://api.ipify.org

By adding the format parameter to the command, with the value of "json", we can again request our external IP address, but this time the returned data will be encoded in the JSON format.

curl https://api.ipify.org?format=json
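The JSON reply can be unpacked in the same pipeline. Here's a rough sed sketch that pulls out the "ip" field; a dedicated JSON tool such as jq would be more robust:

```shell
# Extract the value of the "ip" field from ipify's JSON response.
curl -s 'https://api.ipify.org?format=json' | sed -E 's/.*"ip":"([^"]+)".*/\1/'
```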

Right here’s one other instance that makes use of a Google API. It returns a JSON object describing a guide. The parameter you should present is the Worldwide Customary Guide Quantity (ISBN) variety of a guide. You will discover these on the again cowl of most books, often beneath a barcode. The parameter we’ll use right here is “0131103628.”

curl https://www.googleapis.com/books/v1/volumes?q=isbn:0131103628

The returned data is comprehensive.

Sometimes curl, Sometimes wget

If I wanted to download content from a website and have the tree structure of the website searched recursively for that content, I'd use wget.

If I wanted to interact with a remote server or API, and possibly download some files or web pages, I'd use curl. Especially if the protocol was one of the many that wget doesn't support.

About Dave McKay
