Wget is a command-line tool built for one thing: getting files from the Internet when conditions are bad.
Where other tools assume your connection is stable, wget assumes it will fail. It retries automatically. It resumes interrupted downloads. It waits politely between requests so servers don't block you. It runs in the background while you do other things.
Wget assumes your download will fail. That paranoia is its gift.
The Simplest Case
Download a file:
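```
# example.com and file.iso are placeholders; substitute the URL you actually want
wget https://example.com/file.iso
```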
The file saves to your current directory. If the download fails, wget retries. If your connection drops partway through, add -c and wget picks up where it left off:
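```
# -c continues from the bytes already on disk
wget -c https://example.com/file.iso
```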
That -c flag embodies wget's philosophy: networks are unreliable, but your download should still complete.
Downloading in Hostile Conditions
Resume Interrupted Downloads
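A typical invocation, with a placeholder URL:

```
wget -c https://example.com/large-file.iso
```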
Run this after a failed download. Wget checks how much you have, asks the server for the rest, and continues. No wasted bandwidth.
Retry Until It Works
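A sketch for a flaky connection, again with a placeholder URL:

```
wget --tries=0 --waitretry=10 https://example.com/large-file.iso
```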
--tries=0 means infinite retries. --waitretry=10 waits 10 seconds between attempts. Walk away. Come back to a completed download.
Background Downloads
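To start a download and detach from it (placeholder URL):

```
wget -b https://example.com/large-file.iso
```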
Wget forks to the background and logs progress to wget-log. Check on it with tail -f wget-log.
Limit Bandwidth
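For example, assuming a cap of 500 KB/sec:

```
wget --limit-rate=500k https://example.com/large-file.iso
```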
Don't saturate your connection. This caps wget at 500 KB/sec.
Recursive Downloads
Wget's real power: downloading entire directory structures.
Download Everything Under a Path
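A sketch, with example.com/docs/ standing in for the path you want:

```
wget --recursive --no-parent https://example.com/docs/
```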
This follows every link under /docs/ and downloads what it finds. --no-parent prevents wget from wandering up to the parent directory.
Mirror an Entire Website
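One common combination, with example.com as the placeholder site:

```
wget --mirror --convert-links --page-requisites --no-parent https://example.com/
```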
- --mirror: recursive with infinite depth, plus timestamping for efficiency
- --convert-links: rewrite links to work offline
- --page-requisites: grab CSS, images, JavaScript, everything needed to render pages
- --no-parent: stay within the target directory
You get a complete offline copy.
Filter by File Type
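For example, to keep only PDF and EPUB files from a documentation tree (the extensions and URL here are just an illustration):

```
wget --recursive --no-parent --accept=pdf,epub https://example.com/docs/
```

--accept keeps only the listed extensions; --reject works the other way and skips them.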
Be Polite to Servers
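For example, combining a pause with the rate cap from earlier (placeholder URL):

```
wget --wait=2 --limit-rate=500k --recursive --no-parent https://example.com/docs/
```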
--wait=2 pauses 2 seconds between requests. Combined with rate limiting, this prevents you from hammering the server—and getting blocked.
Downloading Multiple Files
Create a file with URLs, one per line:
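For example, a file called urls.txt (any name works):

```
https://example.com/file1.iso
https://example.com/file2.iso
https://example.com/file3.iso
```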
Then:
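```
# reads urls.txt from above and downloads each URL in order
wget -i urls.txt
```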
Wget downloads each in sequence, with its usual retry logic.
Authentication
Basic Auth
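A sketch with placeholder credentials and URL:

```
# note: the password is visible in your shell history and process list
wget --user=alice --password=secret https://example.com/protected/report.pdf
```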
Or prompt for the password:
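```
# wget asks for the password interactively instead of taking it on the command line
wget --user=alice --ask-password https://example.com/protected/report.pdf
```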
Cookie-Based Sessions
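One common pattern, sketched with placeholder URLs and form fields (the field names depend on the site's login form): log in once, save the session cookies, then reuse them.

```
# log in and save the session cookies
wget --save-cookies cookies.txt --keep-session-cookies \
     --post-data 'username=alice&password=secret' \
     https://example.com/login

# reuse the saved cookies for authenticated downloads
wget --load-cookies cookies.txt https://example.com/members/report.pdf
```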
Common Options Reference
| Option | Purpose |
|---|---|
| -O filename | Save with specific filename |
| -c | Continue/resume partial download |
| -b | Run in background |
| -q | Quiet (no output) |
| -i file | Read URLs from file |
| --tries=N | Retry N times (0 = infinite) |
| --limit-rate=N | Limit bandwidth (e.g., 500k) |
| --recursive | Follow links and download |
| --mirror | Full site mirror with timestamps |
| --convert-links | Make links work offline |
| --no-parent | Don't ascend to parent directories |
| --accept=ext | Only download these extensions |
| --reject=ext | Skip these extensions |
| -N | Only download if newer than local |
Scripting with Wget
Check if download succeeded:
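```
# placeholder URL; -q suppresses output, so the exit code is all you get
wget -q https://example.com/file.iso
status=$?
if [ "$status" -ne 0 ]; then
    echo "wget failed with exit code $status" >&2
    exit "$status"
fi
```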
Key exit codes:
- 0: Success
- 4: Network failure
- 5: SSL error
- 6: Authentication failure
- 8: Server returned error
Configuration File
Store defaults in ~/.wgetrc:
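A minimal example; the values are illustrative, not recommendations:

```
# ~/.wgetrc
tries = 0
continue = on
wait = 2
limit_rate = 500k
```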
Now every wget command uses these settings unless overridden.
When to Use Wget vs. Curl
Use wget for:
- Downloading files, especially large ones
- Mirroring websites for offline use
- Batch downloads from URL lists
- Anything that might fail and need automatic retry
Use curl for:
- API testing and development
- Custom HTTP methods (PUT, DELETE, PATCH)
- Uploading data
- Examining response headers
Wget downloads. Curl transfers. Two tools, different jobs.