Scraping the web for data
More City Data
As quick and dirty as it is, the Wikipedia city scraper did its job pretty well. Unfortunately, I needed additional data for each city that was not available on the Wikipedia pages: an email contact address for each city and demographic information. Eventually, I found another portal [3] that published that data, plus other data I did not need. Figure 6 shows the section of that portal that contains the email contact; Figure 7 shows the page with the demographic data.
Listing 6 (again omitting lines 1-28 shown in Listing 1) shows the scraper I wrote to extract the additional data. Since it has the same basic structure as Listing 4, I'll only outline its main parts, leaving the details as an exercise for the reader. This website provides one single list of all the cities as a sequence of 164 numbered pages, whose URLs have the format https://www.comuniecitta.it/comuni-italiani?pg=N. The loop starting in line 3 loads those pages one at a time and then collects the URLs of the individual cities' pages from the first table it finds (line 9). When the script loads a city page, the demobox section in lines 17 to 24 extracts the demographic data, and lines 26 to 29 detect and print all the email addresses on the page. The result, again, is a CSV text file with one row per city and fields separated by pipe characters (Listing 7). At this point, the outputs of the two city-scraping scripts can be easily merged, with the Bash join command or another script, into one single database with all the data in one coherent format. Since this task is not limited to web scraping, I leave it as an exercise for the reader.
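The overall structure just described can be sketched roughly as follows. Listing 6 itself is not reproduced here, so the function names, the demobox class selector, and the assumption that the index table contains absolute city URLs are all mine, not the article's:

```python
# Sketch of the paginated scraper described in the text: walk the 164
# index pages, follow each city link, and extract demographics plus
# every email address found on the page. All selectors are assumptions.
import re

import requests
from bs4 import BeautifulSoup

# Simple pattern for spotting email addresses in page text
EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+\.[\w.]+')

def city_links(page_number):
    """Return the city URLs listed in the first table of one index page."""
    url = f'https://www.comuniecitta.it/comuni-italiani?pg={page_number}'
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')
    table = soup.find('table')  # the first table on the page
    # Assumes the links are absolute; relative links would need urljoin()
    return [a['href'] for a in table.find_all('a', href=True)]

def scrape_city(url):
    """Extract demographic text and email addresses from one city page."""
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')
    demo = soup.find(class_='demobox')  # assumed class name
    demographics = demo.get_text(' ', strip=True) if demo else ''
    emails = sorted(set(EMAIL_RE.findall(soup.get_text())))
    return demographics, emails

def scrape_all():
    """Print one pipe-separated row per city, as in Listing 7."""
    for page in range(1, 165):  # 164 numbered index pages
        for link in city_links(page):
            demographics, emails = scrape_city(link)
            print('|'.join([link, demographics] + emails))
```

Calling scrape_all() fetches every page in sequence; for a site this size, you would want to add a short pause between requests.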
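If you prefer a script to the Bash join command, a minimal merge might look like this sketch, which assumes both scrapers put the city name in the first pipe-separated field (the file names are placeholders):

```python
# Minimal sketch: join two pipe-separated CSV files on their first column.
import csv

def merge_city_files(wiki_path, demo_path, out_path):
    """Append the fields from demo_path to the matching rows of wiki_path."""
    # Index the second file by city name (its first column)
    with open(demo_path, newline='', encoding='utf-8') as f:
        extra = {row[0]: row[1:] for row in csv.reader(f, delimiter='|')}
    with open(wiki_path, newline='', encoding='utf-8') as f, \
         open(out_path, 'w', newline='', encoding='utf-8') as out:
        writer = csv.writer(out, delimiter='|')
        for row in csv.reader(f, delimiter='|'):
            # Cities missing from the second file keep their original fields
            writer.writerow(row + extra.get(row[0], []))
```

The Bash equivalent would sort both files and feed them to join -t '|', which requires its inputs to be sorted on the join field.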
Listing 6
Email/Demographic Information Scraper
Listing 7
Sample Output from comuniecitta.it
Conclusions
The official Beautiful Soup documentation contains additional information, but with these examples, you now know enough to use it productively. If you decide to do large-scale web scraping, I recommend checking out how to use shared proxies. You should also set your User-Agent headers, possibly changing their value at random intervals, as follows:
myheader = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) ...
Add "headers=myheader" to the parameters of your get(url) calls (for details, see the documentation). This will make your requests look as if they were coming from several normal web browsers, in different locations, instead of one voracious script. Happy scraping!
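That rotation can be sketched as below, assuming the requests library; the User-Agent strings in the list are illustrative examples, not values prescribed by any browser vendor:

```python
# Sketch: pick a different plausible User-Agent at random for each request.
import random

import requests

# Illustrative desktop-browser strings; extend or replace as needed
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36',
    'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15',
]

def polite_get(url):
    """Fetch url with a randomly chosen User-Agent header."""
    myheader = {'User-Agent': random.choice(USER_AGENTS)}
    return requests.get(url, headers=myheader)
```

Replacing every direct get(url) call with polite_get(url) varies the header automatically across the whole run.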
Infos
[1] Beautiful Soup: http://www.crummy.com/software/BeautifulSoup/bs4/doc/
[2] Micro-encyclopedia project: http://stop.zona-m.net/2017/12/5000-concepts-for-europe-a-book-proposal/
[3] Italian Municipalities and Cities: http://www.comuniecitta.it