Big Data search engine for full-text strings and photos with radius search

Elastic Hits

Article from Issue 162/2014
The Elasticsearch full-text search engine quickly finds expressions even in huge text collections. With a few tricks, you can even locate photos that have been shot in the vicinity of a reference image.

While I was looking for a search engine the other day that would crawl logs quickly, I came across Elasticsearch [1], an Apache Lucene-based full-text search engine that includes all sorts of extra goodies.

On the download page, the open source project offers the usual tarball and a Debian package. The 1.0.0.RC2 pre-release version, which was the latest when this issue went to press, can be installed without any problems on Ubuntu by typing sudo dpkg --install *.deb. The Debian package includes a convenient boot script. When called by root at the command line as follows,

"#" /etc/init.d/elasticsearch start

the script fires up the Elasticsearch server on the default port of 9200.

Polyglot or Perl

Most hands-on Elasticsearch tutorials on the web use the REST interface to communicate with the server via HTTP. Figure 1 shows a GET request on the running server, which displays its status.

Figure 1: After starting, the daemon responds to API requests on port 9200.

Several REST clients in multiple languages can be used to feed in data and query it later. The CPAN Elasticsearch module is the official Perl client. Note, however, that Elasticsearch (current version 1.03) [2] is the successor to the obsolete ElasticSearch module (with an uppercase S) [3]. This was an unfortunate choice of name by the CPAN author, if only because the old version still resides on CPAN and pops up before the new one when you search on search.cpan.org.
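To see the client in action, a few lines of Perl suffice. The following snippet is a minimal sketch (not one of the article's listings): it connects to the local node and fetches the same status information that the REST GET request in Figure 1 returns.

use strict;
use warnings;
use Elasticsearch;   # the official CPAN client [2]

  # talk to the default node on localhost:9200
my $es   = Elasticsearch->new( nodes => 'localhost:9200' );
my $info = $es->info();   # same data as the GET request in Figure 1
print "Elasticsearch $info->{version}{number} is up\n";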

As a useful sample application for an Elasticsearch full-text search, I chose a keyword search of all Perl columns previously published in Linux Magazine. The manuscripts of more than 130 articles in this series can be found in a Git repository below my home directory; the script in Listing 1 [4] sends all the recursively found text files via the REST interface to the running Elasticsearch server for indexing. The command

"$" fs-index ~/git/articles

took just a few minutes. A second call, with the disk cache warmed up, whizzed by in just 30 seconds. A subsequent search for the seemingly unlikely word "balcony" then returns results within a fraction of a second:

"$" fs-search balcony
/home/mschilli/git/articles/water/t.pnd
/home/mschilli/git/articles/gimp/t.pnd

The files found in the index reveal that I have only used the word "balcony" in two issues thus far: once in August 2008 in an article about a Perl interface for the GIMP image editor, in which I manipulated a photo shot from my own balcony [5]; and in April 2007, when I described an automatic irrigation system for my balcony plants [6].
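The matching fs-search script is not reprinted here, but a query against the index could look roughly like the following sketch; the content and file fields are the ones fs-index stores, while everything else is just an assumption about how such a script might be put together.

use strict;
use warnings;
use Elasticsearch;

my $es = Elasticsearch->new();

  # full-text match query against the content field of the fs index
my $results = $es->search(
    index => "fs",
    body  => { query => { match => { content => "balcony" } } },
);

  # print the path stored under "file" for every hit
for my $hit ( @{ $results->{hits}{hits} } ) {
    print "$hit->{_source}{file}\n";
}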

Listing 1

fs-index

 

Fuzzy Search

Elasticsearch is not case-sensitive, but it does not perform stemming out of the box: the indexer does not realize that "balconies" is the plural of "balcony" and returns no results in this case. Unfortunately, Elasticsearch sometimes takes things a little too far with the fuzzy search and presents matches that are not real matches, simply because the words start with the same string. Apart from that, the search function finds a needle in a haystack – and quickly.
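How far these near matches reach depends on the query type; with the Perl client, a match query can be given an explicit fuzziness setting that caps the permitted edit distance. The following sketch only illustrates the mechanism – it is not necessarily the query that fs-search uses.

use strict;
use warnings;
use Elasticsearch;

my $es = Elasticsearch->new();

  # match query with fuzzy matching; an edit distance of 2 lets
  # "balcony" also match slightly misspelled variants
my $results = $es->search(
    index => "fs",
    body  => {
        query => {
            match => {
                content => { query => "balcony", fuzziness => 2 },
            },
        },
    },
);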

Line 10 of the fs-index script in Listing 1 accepts the search directory handed over to the script at the command line; it then calls the Elasticsearch class constructor. If the search queries do not return the desired results and you wonder why, you can twist the constructor's arm:

my $es = Elasticsearch->new(
    trace_to => ['File','log']
);

It will then output all the commands sent to the Elasticsearch server in Curl format in the log file. Using cut and paste, puzzled developers can then begin to gradually understand what's going on under the hood.

Elasticsearch stores the text data in an index, which is named fs in this example (as in "filesystem") in line 8. If the index already exists, the delete() method deletes it in line 17. The surrounding eval block tacitly fields any errors – for example, if the index does not exist yet because this is the very first call to fs-index.
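In the Perl client, deleting an index goes through the indices namespace; the delete-inside-eval idiom described here looks roughly like the following sketch (the exact formulation in Listing 1 may differ).

use strict;
use warnings;
use Elasticsearch;

my $es  = Elasticsearch->new();
my $idx = "fs";   # index name, as in "filesystem"

  # drop a leftover index from an earlier run; the eval block swallows
  # the error the request raises on the very first call, when no
  # index of that name exists yet
eval { $es->indices->delete( index => $idx ); };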

Heavyweights and Binaries Excluded

The find() function from the File::Find module starts to dig through the directories on the hard disk as of line 21, starting with the base directory passed in at the command line. Line 25 ignores any binary files, and additional tests also exclude anything that is not a regular file or is larger than 100,000 bytes. The slurp() function from the CPAN Sysadm::Install module then reads the content of any file worth keeping into memory, which the index() method in line 30 feeds to the database under the content keyword. The name of the file also ends up there under the keyword file.
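Put together, the directory walk and the indexing call might look roughly like the following sketch. The content and file field names follow the description above; the document type name and the exact order of the file tests are assumptions.

use strict;
use warnings;
use File::Find;
use Sysadm::Install qw( slurp );
use Elasticsearch;

my ( $dir ) = @ARGV;
my $es = Elasticsearch->new();

find( sub {
      # skip anything that is not a regular file, exceeds 100,000
      # bytes, or looks like a binary
    return if !-f $_ or -s $_ > 100_000 or -B $_;

    my $content = slurp( $_ );   # file content into memory

    $es->index(
        index => "fs",
        type  => "file",   # document type; Elasticsearch 1.x still uses types
        body  => {
            file    => $File::Find::name,   # full path of the document
            content => $content,            # the text that gets indexed
        },
    );
}, $dir );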
