Building high-performance clusters with LAM/MPI
Many applications in engineering, oil exploration, simulation, and scientific research require the power of parallel computation, which is why developers continue to use LAM/MPI to build HPC applications.
Although the next-generation Open MPI implementation includes many new features that are not present in LAM/MPI, LAM/MPI has a very large base of users who are quite happy with its reliability, scalability, and performance.
In parallel computation scenarios, the main objective is often to reduce the total wall clock execution time rather than simply the CPU time. Because so many different factors come into play, you cannot expect a linear improvement in performance just by adding more and more nodes.
One of the most important factors is the inherent parallelism present in the code (i.e., how well the problem is broken into pieces for parallel execution). From an infrastructure point of view, several other factors also contribute to performance.
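A classic way to quantify this limit is Amdahl's law (a standard textbook result, not anything specific to LAM/MPI): if a fraction p of the run time can be parallelized and the rest is serial, the best possible speedup on N nodes is

S(N) = \frac{1}{(1 - p) + p/N}

For example, with 90 percent of the work parallelizable (p = 0.9), 16 nodes yield a speedup of only about 6.4, and even infinitely many nodes cannot push the speedup past 1/(1 - p) = 10.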
In most LAM/MPI cluster implementations, the client nodes communicate with each other constantly through the MPI layer, so it is important to have a fast, dedicated network between nodes (e.g., bonded Gigabit Ethernet interfaces).
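As a rough sketch (the interface names eth1/eth2 and the address are placeholders, and 802.3ad mode assumes an LACP-capable switch), two Gigabit interfaces can be bonded into one logical link with the standard Linux bonding driver:

# Load the bonding driver in 802.3ad (LACP) mode, checking link state every 100ms
modprobe bonding mode=802.3ad miimon=100
# Bring up the bond interface, then attach the physical NICs to it
ifconfig bond0 192.168.10.1 netmask 255.255.255.0 up
ifenslave bond0 eth1 eth2

For a production setup, you would make this persistent in your distribution's network configuration rather than running the commands by hand.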
Also, it is a good idea to put this private communication network on a separate VLAN so that no other traffic can degrade performance.
If your application performs any kind of data mining (often the case for commercial implementations of LAM/MPI), disk I/O on the master and client nodes also affects performance. Because of the nature of parallel execution, the source data for data mining (or the executables, in simpler implementations) must be available to all nodes for simultaneous read and write operations.
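For example, on an NFS file server, an export line like the following in /etc/exports (the /data path and the subnet are placeholders for your shared directory and private cluster network) gives every node read/write access to the same data; activate it with exportfs -a:

/data 192.168.10.0/255.255.255.0(rw,sync,no_subtree_check)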
If you are using SAN-based external disks along with NFS, tuning the NFS parameters can yield a noticeable performance improvement. If you are using a NAS storage subsystem and the NFS/CIFS protocols to make shared data sources available to all nodes for simultaneous read/write access, it is highly recommended that you dedicate a separate VLAN and Ethernet interface on each node to disk I/O from the NAS subsystem, so that storage traffic is isolated from MPI traffic.
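On the client side, the NFS block sizes and transport are the usual tuning knobs. A mount command along these lines (the server name nas01 and the sizes are examples; the optimal rsize/wsize values depend on your network and NFS server) is a reasonable starting point:

# Larger read/write blocks over TCP, with hard, interruptible retries
mount -t nfs -o rsize=32768,wsize=32768,hard,intr,tcp nas01:/data /data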
Finally, cluster filesystems (such as GFS, GPFS, and Veritas) can also help speed up disk I/O for large LAM/MPI implementations.
- LAM/MPI website: http://www.lam-mpi.org/
- C3: http://www.csm.ornl.gov/torc/C3/
- C3 download: http://www.csm.ornl.gov/torc/C3/C3softwarepage.shtml
- LAM/MPI download page: http://www.lam-mpi.org/7.1/download.php
- LAM runtime in Debian: http://packages.debian.org/lenny/lam-runtime
- Open MPI: http://www.open-mpi.org
- LAM/MPI User's Guide: http://www.lam-mpi.org/download/files/7.1.4-user.pdf
- Openshaw, Stan, and Ian Turton. High Performance Computing and the Art of Parallel Programming. ISBN: 0415156920
- Lafferty, Edward L., et al. Parallel Computing: An Introduction. ISBN: 0815513291