Creating virtual clusters with Rocks

Running the Installation

The installer will boot, and you will see a message box that says the pre-installation scripts are running. Then the installer will format the filesystem, and a graphical installer very similar to the Red Hat Enterprise Linux graphical installation will appear with images of the pyramids and Uncle Sam encouraging clustering fans to register their cluster on behalf of the National Science Foundation (Figure 3).

Note: If you selected a network installation, the rolls will be downloaded from the Internet before the installation proceeds. At a certain point during the installation, the installation media should eject automatically; remove it when it does. Otherwise, monitor the installation closely, because if the media is left in the drive, the computer can boot from it again and cycle into another installation.

Once the installation is complete, the system will reboot automatically, and a blue CentOS 5 login screen will appear. Now that you have deployed the front-end controller, you can begin the installation of the various VM containers and compute nodes.

Installing the VM Container

The compute nodes do all of the work and serve as the individual systems within the "supercomputer" you are building, and you can set up compute nodes on any number of physical machines. A compute cluster could be 500 servers in a large data center, or it could be two machines sitting under a desk. To harness the benefits of virtualization in a cluster, this setup uses a virtual machine container to host the compute nodes; deploying compute nodes on a Xen VM container allows for rapid deployment and easy management of cluster compute nodes.

To start deploying cluster compute nodes, log into the front-end node and open a terminal. The first time a terminal window is opened, a prompt will ask you to create an SSH key file. Press Enter to accept the defaults, unless you prefer to protect the SSH key with a passphrase.

Now type insert-ethers at the terminal command line and select VM Container under Choose Appliance Type. A message box titled Inserted Appliances will appear, and the MAC address of the VM container (which is the physical machine itself) should show up once the machine is booted.
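
At the command line, the step looks like the following. The plain invocation always works; the --appliance flag is an assumption that some Rocks releases accept for preselecting the appliance type and skipping the menu, so check your version's insert-ethers usage output before relying on it:

# insert-ethers
# insert-ethers --appliance "VM Container"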

Make sure the VM container supports network booting (PXE) and is set to network boot before any other boot method. If the system does not support PXE booting, boot the VM container manually from the Kernel roll CD or the Rocks DVD by inserting the installation media and letting the machine boot from it.

Keep in mind that if no DHCP server is reachable on the compute node network, compute nodes will time out at boot while waiting for an IP address assignment. If a large cluster is provisioned across multiple network segments, make sure each segment is served by DHCP, because waiting out the boot timeouts simply takes too long.
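
In a standard Rocks installation, the front end itself serves DHCP on the private cluster network. As a quick sanity check (this is ordinary CentOS 5 service handling, not a Rocks-specific command), you can confirm the DHCP daemon is running on the front end:

# service dhcpd status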

Note: If you have a managed switch, Power Unit, or NAS appliance on the private compute cluster network, remember to configure these devices before installing VM containers or compute nodes. Verify the MAC addresses in the insert-ethers console as devices are booted for installation.

The VM container will show up in the Inserted Appliances section with the MAC address of the container. An asterisk (*) will appear within the parentheses once the VM container requests its Kickstart configuration from the front-end node; that configuration includes the default rolls selected during the initial front-end installation.

Now relax and let Rocks do all the work. The beautiful thing about Rocks is that it makes provisioning a cluster very simple: the front-end node performs fully automated Kickstart-based installations, deploying the installation packages to the VM container. Now is another good time for a coffee break. When you return, you should find a brand new VM container ready for you to deploy Xen-based compute nodes. The default name for the first VM container is vm-container-0-0.

Creating and Installing Virtual Compute Nodes

Now that the VM container is installed, you need to create Xen compute nodes in the VM container to perform the work for the cluster. Any number of VM containers can be created and mixed with non-virtual compute nodes. Booting a virtual compute node into a usable, active state takes only a few commands executed from the front end.

To create a compute node virtual machine, execute the following command on the front end:

# rocks add host vm vm-container-0-0 membership="Compute"

To set various virtual machine configuration parameters, such as memory size, disk size, and network information, you can pass optional arguments to the add command. The default memory size for virtual machines created with this command is 1,024 megabytes.
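
As a hypothetical sketch, adding a node with a larger memory allotment might look like the following. The mem= attribute name and the trailing help argument are assumptions that can vary between Rocks releases, so consult your version's documentation for the exact attribute list:

# rocks add host vm help
# rocks add host vm vm-container-0-0 membership="Compute" mem=2048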

Once the command is executed, the configuration for a new virtual compute node will be added to the Rocks database, and the following output will appear:

Added VM on node "vm-container-0-0" slice "1" with vm_name "compute-0-0-1"

When you try to create the Xen virtual machine with the preceding command, an error message might appear stating that there is not enough memory on the machine. In that case, make sure the VM container has enough physical memory for the host itself as well as for the guest virtual machines. As a last resort, you can free memory for guests by lowering the dom0-min-mem parameter in the /etc/xen/xend-config.sxp configuration file, which sets the minimum memory reserved for the Xen control domain (dom0).
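
For illustration, the relevant line in /etc/xen/xend-config.sxp uses Xen's s-expression syntax; the 256MB value shown here is an assumed example, not a recommendation:

(dom0-min-mem 256)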

Now that the virtual machine has been assigned to a physical VM container, you execute the installation of the Xen VM on the target container by typing rocks create host vm compute-0-0-1. This command reads the configuration from the Rocks database and automatically starts provisioning the Xen virtual machine on the target container. By typing rocks-console compute-0-0-1, you can view the progress of the installation.
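
Typed at the front-end prompt, those two steps are:

# rocks create host vm compute-0-0-1
# rocks-console compute-0-0-1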

Once you have created the virtual machine, you can boot it by running this command from the front-end node:

# rocks start host vm compute-0-0-1

If the virtual machine starts successfully, the terminal will output the following:

Using config file "/etc/xen/rocks/compute-0-0-1"
Started domain compute-0-0-1

Now repeat the process of associating a virtual machine with a physical machine, then creating and starting the VM, as many times as you like, up to the limit of free physical memory on the VM container. The cluster is now ready to do the bidding of the mad scientist who brought it to life.
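
If you are adding several nodes, a small shell loop on the front end keeps the repetition bearable. This is a hypothetical sketch: each rocks add host vm call reserves the next free slice on the container and assigns a name such as compute-0-0-2 automatically, so list the new names before creating and starting them:

# for i in 1 2 3; do rocks add host vm vm-container-0-0 membership="Compute"; done
# rocks list host vm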

Quick Command Reference

Stop a cluster compute node:

# rocks stop host vm compute-x-x-x

Start a cluster compute node:

# rocks start host vm compute-x-x-x

Query information about all nodes:

# rocks list host

Query information about virtual compute nodes:

# rocks list host vm

