An indexing search engine with Nutch and Solr

Indexed

If you want to prevent the search engine server from accessing the Internet, you can block its outbound traffic with a firewall and define a very short HTTP timeout. The crawler will still find external URLs in the documents, but its attempts to reach them will fail, so the external content never makes it into the database.
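On a Linux host, a minimal sketch of this firewall approach could look like the following; the address 10.0.0.10 for the intranet web server is an assumption, and you may additionally want to lower Nutch's http.timeout property so the blocked fetch attempts fail quickly:

# Assumption: the intranet web server is 10.0.0.10; adjust to your network
iptables -A OUTPUT -p tcp -d 10.0.0.10 --dport 80 -j ACCEPT
# Drop all other outbound HTTP/HTTPS so external fetches simply time out
iptables -A OUTPUT -p tcp --dport 80 -j DROP
iptables -A OUTPUT -p tcp --dport 443 -j DROP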

For a cleaner approach, you can use the regex-urlfilter.txt file in Nutch's conf directory. The regex-urlfilter.txt file lets you define exceptions. (Nutch already has some default rules that prevent the crawler from reading unnecessary files such as CSS files or images.)

The following rule

-^(http|https)://www.wikipedia.com

stops Nutch from following links to http://www.wikipedia.com. It makes even more sense to define a whitelist and thus only permit individual servers:

+^(http|https)://intranet.company.local

Or you can specify multiple addresses using regular expressions:

+^http://([a-z0-9\-A-Z]*\.)*company\.local/([a-z0-9\-A-Z]*\/)*

The important thing is to correct the last line, which defines the general policy:

# accept anything else
+.

and replace it with:

# deny anything else
-.

You need to deny all unspecified URLs for the list to act as a whitelist.
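Putting the pieces together, the tail end of a whitelist-style conf/regex-urlfilter.txt could look roughly like this (the host names are placeholders; Nutch's default rules for skipping images, CSS files, and the like remain above these lines):

# allow only the internal servers
+^(http|https)://intranet.company.local
+^http://([a-z0-9\-A-Z]*\.)*company\.local/([a-z0-9\-A-Z]*\/)*

# deny anything else
-.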

On Your Marks

Everything is set up and the search can start; all you need to do is tell the crawler the starting point of its journey. Create a subdirectory below /opt/nutch/ and a file containing the seed URLs:

mkdir /opt/nutch/urls
echo "http://intranetserver.company.local" > /opt/nutch/urls/seed.txt

Any number of URLs is permissible, one per line.
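A seed.txt that starts the crawl on several internal servers (the host names here are only examples) might look like this:

http://intranetserver.company.local
http://wiki.company.local/docs/
http://fileserver.company.local/public/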

Then start the crawler:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/jre/
/opt/nutch/bin/crawl /opt/nutch/urls/ /opt/nutch/IntranetCrawler/ http://localhost:8080/solr/ 10

The first parameter in the crawl command specifies the directory containing the seed.txt file.

Nutch runs fetcher processes that load and parse the discovered content. The /opt/nutch/IntranetCrawler parameter specifies the directory in which Nutch stores this content. Next comes the address, including the port, of the Solr server to which Nutch sends the results.

The number 10 at the end states the number of crawler runs. Depending on the pages it finds and the search depth, it can take some time for the command to complete. For initial tests, you might prefer a value of 1 or 2.

When the fetcher downloads and parses the results, it typically finds more links to more content. These links end up in its link database. On the next run, the crawler reads these URLs too and hands them over to the fetcher processes. This to-do list with links for the crawler grows very quickly at the start, because the crawler can only process a certain amount of content during each run.

Nutch breaks down the discovered links into segments, which it processes one by one. Because a segment only holds a certain number of links, the crawler may create new segments while it is still processing the first one; any new links it finds along the way end up in those new segments.

The fact that the script has stopped running does not mean the crawler has found all the content; some content may have been left for later fetcher runs.
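You can see the crawler's data structures on disk: the crawl directory from the example contains the crawldb, the linkdb, and one subdirectory per segment, named after the time it was created. A listing might look roughly like this (the timestamps are, of course, examples):

ls /opt/nutch/IntranetCrawler/
crawldb  linkdb  segments
ls /opt/nutch/IntranetCrawler/segments/
20140212103421  20140212104517  20140212110003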

After the first run, however, the content should all be available for displaying in the Solr web interface: http://<Searchserver>:8080/solr/admin/ (see also Figure 1).

Figure 1: The Apache Solr admin interface is deliberately simple.
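If you prefer the command line, you can also check the index by sending a query directly to Solr's select handler with curl; the following sketch assumes the server placeholder and the field names (content, title, url) used elsewhere in this article:

curl "http://<Searchserver>:8080/solr/select/?q=content:linux&rows=5&wt=json&indent=on"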

How often the crawler runs (every night, once a month, or on the weekend) is a question of data volume and your need for up-to-date results. The important thing is that Nutch only finds data to which a link points, whether from a website or from an already indexed document. As far as the crawler is concerned, non-linked documents virtually don't exist, unless they show up in FTP or HTTP directory listings.
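A cron job is the obvious way to automate recurring runs. The following crontab entry is only a sketch; it assumes a hypothetical wrapper script /opt/nutch/run-crawl.sh that exports JAVA_HOME and calls the crawl command shown above:

# /opt/nutch/run-crawl.sh (hypothetical wrapper):
#   export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/jre/
#   /opt/nutch/bin/crawl /opt/nutch/urls/ /opt/nutch/IntranetCrawler/ http://localhost:8080/solr/ 10
# crontab entry: start the crawl every night at 2:30am
30 2 * * * /opt/nutch/run-crawl.sh >> /var/log/nutch-crawl.log 2>&1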

Querying with jQuery

Admins typically integrate the Solr search directly into an existing intranet portal. Solr provides an HTTP API for this purpose that handles access to the index and returns the search results. DIY queries with jQuery are a useful solution: Listing 4 shows the HTML code for a simple web page with the necessary jQuery scripts (see Figure 2).

Listing 4

jQuery Query

<html>
 <head>
  <title>Example Search</title>
 </head>
 <body>
  <h3>Simple Search Engine</h3>
  Search: <input id="query" />
  <button id="search">Search</button> (Example: "content:foobar"; "url:bar"; "title:foo")
  <hr/>
  <div id="results">
  </div>
 </body>
 <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
 <script>
  /* Render the response from Solr into the results div */
  function on_data(data) {
   $('#results').empty();
   var docs = data.response.docs;
   $.each(docs, function(i, item) {
    var contentpart;
    if (item.content.length > 400)
     contentpart = item.content.substring(0, 400);
    else
     contentpart = item.content;
    $('#results').prepend($(
     '<strong>' + item.title + '</strong><br/>' +
     '<a href="' + item.url + '" target="_blank">' + item.url + '</a>' +
     '<br/><div style="font-size:80%;">' + contentpart + '</div><hr/>'));
   });
   var total = 'Found pages: ' + docs.length + '<hr/>';
   $('#results').prepend('<div>' + total + '</div>');
  }

  /* Send the query string to the Solr select handler */
  function on_search() {
   var query = $('#query').val();
   if (query.length == 0) {
    return;
   }
   var solrServer = 'http://SEARCHSERVER:8080/solr';
   var url = solrServer + '/select/?q=' + encodeURIComponent(query) +
    '&version=2.2&start=0&rows=50&indent=on&wt=json&callback=?&json.wrf=on_data';
   $.getJSON(url);
  }

  function on_ready() {
   $('#search').click(on_search);
   /* Hook enter to search */
   $('body').keypress(function(e) {
    if (e.keyCode == '13') {
     on_search();
    }
   });
  }

  $(document).ready(on_ready);
 </script>
</html>
Figure 2: Thanks to jQuery, the search engine can be quickly and easily integrated into your own websites.

Figure 3 shows the XML response the server transfers for a simple search request against the Solr back end.

Figure 3: The jQuery script builds the HTML page from a response like this.

The user has many options in the search query. For example, you can rank the page title higher than the content: the statement content:(linux) title:(linux)^1.5 gives a match in the title one and a half times more weight than a match in the document body. You can also search for pages that contain the word "Linux" but not the word "Debian," and in this case you might still want to give the title preferential treatment:

content:(linux -debian) title:(linux -debian)^1.5

Logical ANDs are easily achieved with a simple plus sign; OR-ing is the default; thus, content:(+linux +debian) searches for Linux and Debian.

Without the plus signs, the Solr-Nutch duo would show you any documents containing Linux or Debian. Quotes let you search for complete phrases, for example content:"Linux Live USB Stick".
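Because every query is just an HTTP request, you can test the syntax with curl before wiring it into the web page. This sketch sends the URL-encoded, weighted query from above to the same select handler the jQuery script uses (SEARCHSERVER is again a placeholder):

curl "http://SEARCHSERVER:8080/solr/select/?q=content%3A(linux%20-debian)%20title%3A(linux%20-debian)%5E1.5&rows=10&wt=json&indent=on"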

These simple forms of user interaction give you a good idea of the potential this approach offers jQuery and web programmers. What you still need for a professional-looking search engine front end are forms and point-and-click queries, as well as input validation and the ability to highlight the search terms in the results.
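For the highlighting part, Solr's built-in highlighting component does much of the work. A sketch of a query that asks Solr to return highlighted snippets from the content field might look like this (the markup passed in hl.simple.pre and hl.simple.post is just an example):

curl "http://SEARCHSERVER:8080/solr/select/?q=content:linux&wt=json&hl=true&hl.fl=content&hl.simple.pre=%3Cb%3E&hl.simple.post=%3C%2Fb%3E"

The response then contains an extra highlighting section that the jQuery script could display instead of the raw content excerpt.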


The Author

Markus Feilner is a Linux specialist from Regensburg, Germany. He has worked with the free operating system since 1994 as an author, trainer, consultant, and journalist. The Conch Diplomat, Minister of the Universal Life Church, and Jedi Knight leads SUSE's documentation team in Nuremberg.

Sebastian Mogilowski is a computer scientist from Regensburg, Germany, who enjoys angling in his free time. He has been working part time since 2005 at a large data center, while also working as a freelance Linux administrator, consultant, and author. He focuses on open source system management, virtualization, and home automation.


