Aggregating data with Portia

Route Planner

You can use the Crawling slider to manage Slybot's behavior. Here you define whether Slybot observes the nofollow rules set by the webmaster (Respect Nofollow). If you want the crawler to ignore certain pages, add their patterns under Configure follow and exclude patterns. Slybot only follows links that match the regular expression in the top box and ignores any link that matches the regular expression in the bottom box. If you check Overlay blocked links, the preview on the left shows which links the spider currently would or would not follow (Figure 3).

Figure 3: Slybot would follow all the links highlighted in green but ignore the ones highlighted in red.
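Under the hood, these settings end up in the spider's JSON definition. The following sketch shows what the relevant fields might look like; the field names follow older Portia/Slybot versions, and the URL and patterns are invented for illustration:

{
  "start_urls": ["http://www.example.com/"],
  "respect_nofollow": true,
  "follow_patterns": [".*/articles/.*"],
  "exclude_patterns": [".*/login.*"]
}

A link is followed only if it matches one of the follow patterns and none of the exclude patterns.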

Portia keeps track of the page sections you marked earlier, highlighted in light blue, in a template. The Extraction slider lists all existing templates. Clicking on one of the identifiers opens the corresponding template in selection mode, where you can add more selections in the usual way. You can also edit the individual selected elements in the Annotations slider.
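Internally, a template is an annotated copy of the page: each marked element carries extra markup that maps it to an output field. As a rough sketch (the attribute name and JSON layout are assumptions based on older Portia versions), an annotated headline might look like this:

<h1 data-scrapy-annotate='{"annotations": {"content": "title"}}'>
  Some article headline
</h1>

Here, content means the element's text content is extracted and stored in the hypothetical field title.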

Sometimes you can only roughly select the desired text with the mouse; in that case, you can refine the selection in the Extractors slider using a regular expression. Clicking on New extractor adds the expression to the corresponding field; Slybot applies it to the selected text and returns only the matching part.
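Conceptually, an extractor works like a regular expression applied after the fact to the selected text. This minimal Python sketch mimics the behavior; the sample text and pattern are made up:

import re

# Rough selection as the annotation delivers it
selected = "Published on 12 March 2014 by the editors"

# Extractor pattern as you might enter it in the Extractors slider
pattern = re.compile(r"\d{1,2} \w+ \d{4}")

match = pattern.search(selected)
if match:
    print(match.group(0))  # -> 12 March 2014

Only the matching portion ends up in the scraped item.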

Family Growth

Portia refers to the now complete configuration as a spider. Slybot later uses the settings stored in it to harvest pages that have the same structure as the one currently on view. A spider usually applies to a specific domain; you can use the web application to create more spiders. Portia groups these in projects. For example, you could create a project named News that contains one spider with all the settings for theregister.co.uk and another for slashdot.org (Figure 4).

Figure 4: The News project has two spiders. You can click on the name to rename the project in this view.

By default, Portia creates a project named new_project. To see the spiders in a project, click on the project name in the top left corner. The sidebar then lists the spiders; clicking on a spider shows the settings it stores. You can add a new spider by clicking on the project name and then browsing to a new URL. By default, each spider is named after its domain. The sidebar lists all your projects when you click on the Home icon.

Endurance

Portia stores all your projects in subdirectories below slyd/data/projects; the directory names are also the project names. To see which spiders live in new_project, type this at the command line:

portiacrawl slyd/data/projects/new_project
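Each project directory is self-contained. Assuming the default layout (file names can vary between Portia versions), new_project could look something like this once you have created a spider:

new_project/
  project.json
  items.json
  extractors.json
  spiders/
    en.wikipedia.com.json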

Now, to finally grab the information off the sites, you simply call portiacrawl with the required spider:

portiacrawl slyd/data/projects/new_project en.wikipedia.com

The output in JSON format is sent to standard output; the -o parameter redirects the data to a file:

portiacrawl slyd/data/projects/new_project en.wikipedia.com \
  -o output.txt
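Once the crawl finishes, you can process the file like any other JSON document. This short Python sketch assumes the feed is a single JSON array and that the items contain a field named title; both depend on your spider and Scrapy version:

import json

# Read the items that portiacrawl wrote with -o
with open("output.txt") as f:
    items = json.load(f)

print(len(items), "items harvested")
for item in items:
    print(item.get("title"))  # "title" is a made-up field name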

portiacrawl is really just a wrapper script for Slybot. Its documentation can be found online [4].
