Creating virtual clusters with Rocks

In the Rocks

© Mattias Löw, Fotolia

Article from Issue 98/2009

Rocks offers an easy solution for clustering with virtual machines.

Rocks is a Linux distribution and cluster management system that allows for rapid deployment of Linux clusters on physical hardware or virtual Xen containers. A Rocks cluster [1] is easy to deploy, and it offers all the benefits of virtualization for the cluster member nodes. With a minimum of two physical machines, Rocks allows for simple and rapid cluster deployment and management, freeing the cluster administrator to focus on supporting grid computing and the distributed applications that make clustering an attractive option.

Included in the standard Rocks distribution are various open source high-performance distributed and parallel computing tools, such as Sun's Grid Engine [2], OpenMPI [3], and Condor. This powerful collection of advanced features is one reason why NASA, the NSA, IBM Austin Research Lab, the U.S. Navy, MIT, Harvard, and Johns Hopkins University are all using Rocks for some of their most intensive applications.
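To give an idea of the workloads these tools target, the following minimal MPI program is a sketch of the kind of job you could compile and run across a Rocks cluster once the OpenMPI roll is in place (the file name hello_mpi.c is just an illustration):

/* hello_mpi.c - minimal MPI "hello world" */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                   /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes */
    MPI_Get_processor_name(name, &name_len);  /* node the process runs on */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

Assuming OpenMPI's compiler wrapper and launcher are on your path, mpicc hello_mpi.c -o hello_mpi builds the program and mpirun -np 4 ./hello_mpi starts four copies of it; with a hostfile or a batch system such as SGE, those copies are spread across the cluster's compute nodes.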

Why Virtualize a Cluster?

The arguments for deploying virtual clusters are the same arguments that justify any virtualization solution: flexibility, ease of management, and efficient hardware resource utilization. For example, in an environment in which 64-bit and 32-bit operating systems must run simultaneously, virtualization is a much more efficient solution than attempting to support two separate hardware platforms in a single cluster.

Pre-Installation Tasks

Before installing the cluster, make sure all of the necessary components are readily available. Rocks clusters can be set up in many different ways, with a variety of network configurations, and Rocks can be installed within virtual containers (VM containers) or directly on physical hardware. The example provided in this article assumes that you have at least two physical machines for deploying a front-end node and at least one VM container. The front-end node requires at least 1GB of RAM, and the VM container should have at least 4GB of RAM (Rocks requires a minimum of 1GB).

It is essential to ensure that the hardware is supported by the Rocks OS distribution. The Rocks OS is based on CentOS, so make sure your hardware appears on the CentOS/Red Hat hardware compatibility list. The general rule of thumb is to use widely supported commodity hardware, especially when selecting network and graphics adapters.

The basic Rocks network configuration assumes the presence of a public network and a private network for the VM container and its compute nodes. The front-end node should have two network interface cards, and the compute nodes require at least one card to connect to the private compute node network. You will also need a switch that connects the various VM containers to the front-end node. See Figure 1 for a sample Rocks network configuration.

Preparing the Installation

Insert the Rocks DVD (or boot CD) and boot the system from the CD/DVD drive. If you are using CDs, insert the Rocks Kernel/Boot CD first. Rocks will prompt for the various rolls. In the Rocks lexicon, a roll is a collection of software intended for a specific task. A base configuration requires the Kernel/Boot, Base, Web Server, and OS rolls 1 and 2, as well as the Xen roll for cluster virtualization support.

The base configuration is not very exciting on its own, so research the rolls that are available [4] and include whichever distributed and grid computing rolls you need to really have fun with Rocks. Sun Grid Engine (SGE), Torque, and the high-performance computing (HPC) roll are good starting points for making the most of a Rocks cluster.

A splash screen will prompt for a boot mode. To boot into the front-end installation, type frontend and press Enter. If you do not do this within a few seconds, the Rocks installer will boot into a compute node installation; if that happens, reboot the system and type frontend at the prompt before it boots automatically again.

Once the Rocks install CD boots, it will attempt to contact a DHCP server; if it cannot obtain a lease on both network interfaces, it will prompt for a network configuration. Most likely, eth0 will get a lease, but eth1 (the private cluster network) will not have a DHCP server on it. In this case, either run a DHCP server on the private network as well or select manual configuration and enter the IPv4 address, gateway, and name server by hand. Once network connectivity is established, select OK to continue with the front-end installation.

A screen that says "Welcome to Rocks" appears, letting you launch the installation from the DVD, the CDs, or the network. The simplest approach is to download the DVD in advance and install from it, because it contains most of the rolls and software packages offered on the Rocks site.

With the Rocks installation DVD in the drive, click CD/DVD Based Roll, then select the rolls you want to install. A base Rocks system consists of the kernel, OS, web server, and base rolls. To configure a virtual cluster, the Xen roll is also required (Figure 2).

Select the recommended rolls and click "Submit." The selected rolls will then appear on the left of the installation screen. Clicking Next begins the installation.

The cluster information you enter identifies the cluster if it is registered with rocksclusters.org. Various prompts then ask for configuration details, such as the network settings for eth0 and eth1, the root password, the time zone, and the partition scheme.
