Parallel Programming with OpenMP

OpenMP Hands On

To use OpenMP in your own programs, you need a computer with more than one CPU or a multi-core CPU, and an OpenMP-capable compiler. GNU compilers from version 4.2 onward support OpenMP. Also, the Sun compiler for Linux is free [2], and the Intel compiler is free for non-commercial use [3].

Listing 5 shows an OpenMP version of the classic Hello World program. To enable OpenMP, pass the -fopenmp option when launching GCC. Listing 8 shows the commands for building the program along with the output.

Listing 5

Hello, World
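A minimal version of the program might look like this; it is a sketch consistent with the output in Listing 8, and details such as the use of omp_get_max_threads() are assumptions:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* every thread in the team executes the parallel region */
#pragma omp parallel
    printf("Hello World from thread %d\n", omp_get_thread_num());

    /* serial code again: only the initial thread runs this;
       omp_get_max_threads() reports the team size that
       OMP_NUM_THREADS requested */
    printf("There are %d threads\n", omp_get_max_threads());
    return 0;
}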

 

Listing 8

Building Hello World

$ gcc -Wall -fopenmp helloworld.c
$ export OMP_NUM_THREADS=4
[...]
$ ./a.out
Hello World from thread 3
Hello World from thread 0
Hello World from thread 1
Hello World from thread 2
There are 4 threads

If you are using the Sun compiler, the compiler option is -xopenmp. With the Intel compiler, the option is -openmp. The Intel compiler even notifies the programmer if something has been parallelized (Listing 9).

Listing 9

Notification
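The exact wording varies between compiler versions, but the remark looks roughly like the following; the file name, line, and column here are hypothetical:

$ icc -openmp pi-openmp.c
pi-openmp.c(13): (col. 1) remark: OpenMP DEFINED LOOP WAS PARALLELIZED.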

 

Benefits?

For an example of a performance boost with OpenMP, I'll look at a test that calculates pi [4] using the Gregory-Leibniz series (Listing 7 and Figure 5). This method is by no means the most efficient way to calculate pi; however, the goal here is not to be efficient but to get the CPUs to work hard.

Listing 7

Calculating Pi
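The variable names and the iteration count in this sketch are assumptions. The Gregory-Leibniz series approximates pi as pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..., and the reduction(+:sum) clause gives every thread a private partial sum that OpenMP adds up when the loop ends:

#include <stdio.h>

int main(void)
{
    const long n = 1000000000;  /* assumed iteration count */
    double sum = 0.0;
    long i;

    /* each thread processes a chunk of the iterations and
       accumulates its own private copy of sum */
#pragma omp parallel for reduction(+:sum)
    for (i = 0; i < n; i++)
        sum += (i % 2 == 0 ? 1.0 : -1.0) / (2.0 * i + 1.0);

    printf("Pi = %f\n", 4.0 * sum);
    return 0;
}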

 

Parallelizing the for() loop with OpenMP does boost performance (Listing 6). The program runs twice as fast with two CPUs as with one, because more or less the whole calculation can be parallelized.

Listing 6

Parallel Pi

$ gcc -Wall -fopenmp -o pi-openmp pi-openmp.c
$ export OMP_NUM_THREADS=1 ; time ./pi-openmp
Pi = 3.141593
real    0m31.435s
user    0m31.430s
sys     0m0.004s
$ export OMP_NUM_THREADS=2 ; time ./pi-openmp
Pi = 3.141593
real    0m15.792s
user    0m31.414s
sys     0m0.012s

If you monitor the program with the top tool, you will see that both CPUs really are working hard and that the pi-openmp program does use 200 percent CPU power.

This effect will not be quite as pronounced for every problem; in some cases, a large proportion of the program has to run serially. Your two CPUs will not be a big help in such a case, and the performance boost will be less significant. Amdahl's Law [5] (see the "Amdahl's Law" box for an explanation) applies here.

Amdahl's Law

"Speedup" describes the factor by which a program can be accelerated with parallelization. In an ideal case, program execution with N processors would take just 1/N of the time required by a serial program. This ideal case is known as linear speedup. In the real world, linear speedup often is impossible to achieve because some parts of a program do not particularly lend themselves to parallelization.

Given the part of a program that supports parallelization, P (thus, 1 - P is the non-parallelizable part), and the number of processors available, N, the maximum speedup is calculated by the formula in Figure 6: S(N) = 1 / ((1 - P) + P/N).

If the serial part of the program (1 - P) is 1/4, the speedup cannot be greater than 4, no matter how many processors you use: as N grows, S(N) approaches 1/(1 - P) = 4.
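To get a feel for these numbers, you can evaluate the formula for a few processor counts. This small sketch (not from the article) assumes a program that is 75 percent parallelizable:

#include <stdio.h>

int main(void)
{
    const double P = 0.75;  /* parallelizable fraction */
    int n;

    /* Amdahl's Law: S(N) = 1 / ((1 - P) + P / N) */
    for (n = 1; n <= 1024; n *= 4)
        printf("N = %4d  speedup = %.2f\n",
               n, 1.0 / ((1.0 - P) + P / n));
    return 0;
}

Even with 1,024 processors, the speedup stays just below 4.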

Glossary

SMP

Symmetric multi-processor system. All of the machine's CPUs can access the shared main memory – in contrast to cluster systems, in which separate machines exchange data over the wire. OpenMP is suitable for parallel programming on SMP systems.

Thread

One popular definition of a thread is a "lightweight process." A Unix process has a separate memory area and various resources assigned to it, such as environment variables, network connections, or device access. A thread shares memory and certain other resources with other threads in a process. This reduces the management overhead compared with processes and facilitates switching between threads. Pressing Shift+H in the top tool enables and disables the thread display.
