A Bash DIY data extraction tool
Data Collector
With some simple Bash commands, you can gather, parse, and filter text data into CSV files ready for your favorite statistical application.
If your research involves pulling large amounts of text data from the Internet, you can gather and process that data from the command line with a few simple Bash commands and turn it into a CSV file for your favorite statistical application, such as SPSS or R, or for import into a MySQL table. In this article, I will show how to accomplish this with a project that examines the Romanian university dropout rate.
The data I need comes from 97 universities. For confidentiality reasons, chances are slim that I can get access to each university's database, but I can obtain that information legally from their websites. (However, keep in mind that many websites have licenses that prohibit web scraping. This article does not attempt to address copyright and other legal issues related to this practice. See the site's permissions page and consult the applicable laws for your jurisdiction.) To gather my data, I could search for the word abandon (Romanian for dropout) on each of the 97 websites, but that would be tedious. Furthermore, each website may use a different content management system (CMS), so my search might not return the desired results. Instead, an easier option is to download all 97 websites in their entirety and recursively search their text content on my local hard drive. Linux lets you do this with the command shown in Listing 1.
Listing 1
Downloading Websites
wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf http://www.address.ro
Retrieving Data
In Listing 1, wget is a command-line utility in Linux and other POSIX-compliant operating systems used to download files from servers. It can be used as a mass downloader, and you can specify exactly which type of files you want downloaded and which type of files wget should disregard.
In the case of an interruption, the --continue option (-c) allows wget to continue where it left off without re-retrieving the data it has already copied once access is granted again. This can save you a lot of time and helps ensure the process won't loop due to frequent connection glitches.
The --verbose option (-v), used in conjunction with --progress=bar, lets you watch what the command does in real time on the Linux command line. This is helpful if you want to follow wget's download progress and look out for possible errors.
The --connect-timeout option is set to 30 seconds, which means that if no TCP connection can be established with the target server within 30 seconds, wget stops trying. Similarly, --waitretry is set to 61 seconds, which means that wget waits 61 seconds before again trying to retrieve a file it failed to download. The 61 seconds are necessary because some servers experience temporary disconnections or limit the number of downloads to just a few per minute. With this setting, wget waits a little over a minute before trying to download the file again; this method usually works, although it is time-consuming.
The --force-directories option, which is very important for gathering research data, ensures that you get an exact replica of the website you are downloading, one that maintains the same directory structure as the original.
--ignore-length helps wget download the target website successfully. Some servers send bogus Content-Length headers that make wget think a file has not been fully retrieved, so wget tries to download it again. The --ignore-length option ignores the Content-Length header and thus circumvents the problem.
--recursive (-r) forces wget to enter each subdirectory and retrieve every file in that subdirectory, not just the main branch. The -l 7 option specifies that wget should go up to seven levels deep in the server's directory structure to gather data.
Naturally, you want to store an exact copy of the target web server locally. However, any hyperlink present in a downloaded HTML or PHP file will still point to the online server and not the corresponding copy stored on the local drive. To fix this and make all the links point to their corresponding local counterparts, use --convert-links, which results in a browsable copy of the website that you can read with your favorite web browser, even without an Internet connection.
Excluding Non-Text Content
Websites also contain non-text content, like image files, other file types (MP3, ZIP, RAR, or PPT), and video formats (AVI, MP4, MKV, or VOB). Since I am only interested in text-based information, I want to exclude these types of files, which has the added benefit of speeding up the entire process and conserving bandwidth. To do this, use the -R option followed by a comma-delimited list of extensions. For example, if I wish to discard image files, I would use the following:
-R gif,jpg,svg,png
Because the extension matching is case-sensitive, you will want to modify this by including capitalized extensions as well:
-R gif,GIF,jpg,JPG,svg,SVG,png,PNG
You can do the same for any other file extension, especially those representing potentially large files:
-R avi,AVI,mpg,MPG,mp4,MP4,mkv,MKV,vob,VOB,iso,ISO,zip,ZIP,rar,RAR,tar,TAR
Exclude everything not representing text content, including archives, video files, ISO images, pictures, and Microsoft Office documents.
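If maintaining both lowercase and uppercase variants by hand becomes unwieldy, you can generate the reject list with a short snippet. This is only a sketch; the extension list and the REJECT variable name are my own choices, not part of Listing 1:
# Build a case-insensitive reject list from a lowercase extension list
ext="gif,jpg,png,svg,pdf,avi,mpg,mp4,mkv,vob,iso,zip,rar,tar"
REJECT="$ext,$(echo "$ext" | tr '[:lower:]' '[:upper:]')"
wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R "$REJECT" http://www.address.ro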
Downloading the Websites
Finally, at the end of the wget command, add the target website's address to tell wget which site you want cloned. When dealing with several addresses, you can paste one such command line per address into a Bash script (one after the other), make the script executable, and run it. When wget finishes with one website, it passes on to the next.
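Such a script might look like the following sketch; the university addresses shown here are placeholders, not the real URLs:
#!/bin/bash
# Mirror each target website in turn; each wget call starts only
# after the previous one has finished
wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf http://www.university1.ro
wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf http://www.university2.ro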
Since we are dealing with 97 websites, the easiest way is to keep all the URLs in one text file, each on a separate line, and use the short command shown in Listing 2 to tell wget where to find each address. This way, you only have to launch wget once instead of 97 times.
Listing 2
Downloading Multiple Websites
wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf $(<addresses.txt)
The separate file addresses.txt should contain only the target servers' web addresses, each on its own line, without any special characters, quotes, or spaces.
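The $(<addresses.txt) construct hands all the addresses to a single wget invocation. If you would rather process the file one address at a time, for example to log each site separately, a simple loop works just as well. This is a sketch under the same assumptions as Listing 2:
#!/bin/bash
# Read addresses.txt line by line and mirror each site in turn
while read -r url; do
    [ -z "$url" ] && continue   # skip empty lines
    wget -cv --progress=bar --connect-timeout=30 --force-directories --ignore-length -r -l 7 --convert-links --waitretry=61 -R gif,jpg,png,svg,pdf "$url"
done < addresses.txt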
The time the download takes may vary depending on your Internet connection's speed, your CPU speed, and the available RAM. Also, be aware that some websites might be down or might even block you. Many websites do try to block scraping, whether because they are protecting their business interests or because downloading tens of thousands of files, with just as many server requests, might be seen by the server or its administrator as a potential denial-of-service (DoS) attack. Consequently, your IP might be blocked at some point. As a reference point, launching 10 such concurrent scripts on a Linux machine running at 800MHz with 256MB of RAM took six days to download the 97 Romanian university websites, all on a 1GB Internet connection. Better hardware greatly improves the process. In the end, 392,868 files were retrieved, totaling 108,447,924,224 characters.
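With the sites mirrored locally, the recursive search for abandon mentioned earlier becomes a grep job, and its results can be written straight to a CSV file for the statistical package. The directory pattern and the output filename below are assumptions based on how --force-directories names the local copies:
# Count occurrences of "abandon" per downloaded site and write a CSV
echo "site,hits" > abandon.csv
for dir in www.*; do
    hits=$(grep -rio "abandon" "$dir" | wc -l)
    echo "$dir,$hits" >> abandon.csv
done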