Keep control of goroutines with a Context construct

Simpler with Context

To wrap this common synchronization pattern into an easily deployed package, the Go standard library provides the context package and its Context type [1]. Servers in Google's data centers use contexts heavily: they often have to call many goroutines to compile the results for incoming user requests. If this takes too long, the main function handling the request must be able to reach all the goroutines that are still running, forcing them to immediately abandon their current work, release any allocated resources, and terminate.

The context's Done() method returns an open channel from which the worker bees try to read. As in the previous example, however, no message ever arrives on this channel. Instead, when it's time for the final whistle, the main program calls the cancel() function handed back by the context package, which closes the channel hidden inside the context. The blocked readers suddenly all receive a value from the closed channel; they field this and interpret it as a signal to close up shop.
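This plumbing relies on a basic channel property worth seeing in isolation: a read from a closed channel never blocks, but returns immediately with the element type's zero value. The following minimal sketch (the helper function's name is mine, not from the listings) demonstrates the effect:

```go
package main

import "fmt"

// closedChannelYields closes a channel and then reads from it.
// The read does not block: it returns the zero value immediately,
// and the second return value reports false for a closed channel.
// This is the wake-up mechanism the context package builds on.
func closedChannelYields() bool {
	done := make(chan struct{})
	close(done) // the "final whistle"

	_, ok := <-done // returns at once; ok == false
	return ok
}

func main() {
	fmt.Println(closedChannelYields()) // prints "false"
}
```

Because every reader of the closed channel unblocks at the same time, a single close() broadcasts the stop signal to any number of goroutines at once, without the sender having to know how many there are.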

Listing 2 imports the standard library's context package in line 4. The context.WithCancel() function extends a basic context created by context.Background() and returns two things: a context object in ctx and a cancel() function, which the programmer calls later on (in line 17 of Listing 2) to send the signal to end the party and turn off the lights.

Listing 2


01 package main
02
03 import (
04   "context"
05   "fmt"
06   "time"
07 )
08
09 func main() {
10   ctx, cancel := context.WithCancel(context.Background())
11
12   work(ctx)
13   work(ctx)
14   work(ctx)
15
16   time.Sleep(3 * time.Second)
17   cancel()
18   time.Sleep(3 * time.Second)
19 }
20
21 func work(ctx context.Context) {
22   go func() {
23     for i := 0; i < 10; i++ {
24       fmt.Printf("%d\n", i)
25
26       select {
27       case <-ctx.Done():
28         fmt.Printf("Ok. I quit.\n")
29         return
30       case <-time.After(time.Second):
31       }
32     }
33   }()
34 }

Worker bees use ctx.Done() to extract the channel to be monitored from the context and insert a case statement with a read operation into their select loops; they then use this mechanism to receive control commands from their caller. After compilation, the output from Listing 2 looks exactly like Figure 1 and exhibits the same behavior. This is not surprising, since the context implementation uses the same internal infrastructure.

Inside Google's Brain

In Google's data centers, all worker functions use a context variable as their first parameter; this controls any premature termination that may be necessary. However, it also helps pass down payloads of received requests, such as the name of the authenticated user or credentials for subsystems. In this way, all subsystems across all API boundaries support certain standard functions such as timeouts, cleanup signals due to unsolvable problems, or simply convenient access to request-scoped key/value pairs.

Brake and Lose

Listing 3 shows what this kind of server function could look like in a practical use case. It retrieves four URLs: the Google, Facebook, and Amazon splash pages, along with a fourth, artificially slowed down URL. Line 23 calls the AOL website through a delay service that stalls the response for 5,000 milliseconds to make the client think it has a lame Internet connection.

Listing 3


01 package main
02
03 import (
04   "context"
05   "fmt"
06   "net/http"
07   "time"
08 )
09
10 type Resp struct {
11   rcode int
12   url   string
13 }
14
15 func main() {
16   ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
17   defer cancel()
18
19   urls := []string{
20     "https://www.google.com",
21     "https://www.facebook.com",
22     "https://www.amazon.com",
23     "https://slowwly.robertomurray.co.uk/delay/5000/url/https://www.aol.com",
24   }
25
26   results := make(chan Resp)
27
28   for _, url := range urls {
29     chkurl(ctx, url, results)
30   }
31
32   for range urls {
33     resp := <-results
34     fmt.Printf("Received: %d %s\n", resp.rcode, resp.url)
35   }
36 }
37
38 func chkurl(ctx context.Context, url string, results chan Resp) {
39   fmt.Printf("Fetching %s\n", url)
40   httpch := make(chan int)
41
42   go func() {
43     // async url fetch
44     go func() {
45       resp, err := http.Get(url)
46       if err != nil {
47         httpch <- 500
48       } else {
49         httpch <- resp.StatusCode
50       }
51     }()
52
53     select {
54     case result := <-httpch:
55       results <- Resp{
56         rcode: result, url: url}
57     case <-ctx.Done():
58       fmt.Printf("Timeout!!\n")
59       results <- Resp{
60         rcode: 501, url: url}
61     }
62   }()
63 }

The main program now goes through these URLs in the for loop starting in line 28 and passes each one to the chkurl() function, along with a context variable and a results channel. The latter returns the workers' results to the main program in the form of Resp structures. This data type, defined starting in line 10, stores the URL obtained along with the request's HTTP return code.

The chkurl() function processes the requests asynchronously: it launches the goroutine starting in line 42 for the time-consuming retrieval over the web and therefore returns to the main program quickly. The results bubble up later via the results channel, where the for loop starting in line 32 collects them and outputs the URLs along with their numeric response codes.
