Here’s a mini-tutorial on how to easily download all the files linked on a web page with a single terminal command, thanks to Wget.
Imagine you have found a very interesting website, with many free and public resources. For example, an image bank, video game assets, documents, PDFs, or whatever.
You would like to download them all. Well, you could right-click / save each one, but… it’s not our style to click a button 500 times 😉.
Normally, in this case, you would look for some application to help you download them. However, it’s not necessary. We can solve it very easily with our old friend Wget.
Wget is a command-line tool that allows us to make web requests easily. Although there are many more complex solutions, Wget is a very quick and easy-to-use option if all we want to do is download files.
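In its simplest form, you just pass it a URL and it saves the file to the current folder. For example (the URL here is just a placeholder):

wget https://www.website.com/files/manual.pdf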
In Linux you can use Wget directly, because most distributions come with it preinstalled. In Windows you can install a native build, such as GNU Wget 1.21.4 for Windows, or, even easier, use WSL.
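If you’re not sure whether it’s already available, you can check the installed version from the terminal:

wget --version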
We simply have to use this command, for example:
wget --no-parent -A pdf -r http://www.website.com/url
In this example,
- The -r option downloads recursively, following links into all the subfolders of the url folder.
- The -A pdf option accepts only files with the .pdf extension and discards everything else.
- The --no-parent option tells Wget not to climb into folders above the one you specified.
Of course, you can configure and play with the parameters to adapt them to your needs.
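For instance, here is a slightly fuller sketch (the URL and the downloads folder name are placeholders): -l 2 limits the recursion to two levels, -A pdf,zip accepts both extensions, --wait=1 pauses one second between requests to go easy on the server, and -P downloads saves everything into a local downloads folder.

wget -r -l 2 -A pdf,zip --no-parent --wait=1 -P downloads http://www.website.com/url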
Keep in mind that this will only work for files that are publicly accessible on the Internet. If a file is protected behind some security mechanism, such as a captcha, it won’t work.
That’s how easy it is to download all the files from a website using a single console command, without needing to write a script or use a third-party program. See you next time!

