Gimp and neural networks

Matching Paint Job

Lead Image © Orlando Rosu, 123RF.com

Article from Issue 193/2016

Deep learning isn't just for industrial automation tasks. With a little help from Gimp and some special neural network tools, you can add color to your old black and white images.

Neural networks (NN) don't just play the traditional Japanese board game Go better than the best human player; they can also solve more practical tasks. For example, a project from Japan colors old black-and-white photos with the help of a neural network – and without asking the user to get involved with the image editing.

Researchers at Waseda University in Tokyo used a database that contains several objects to train a neural model to correctly recognize objects in images and fill them with appropriate color information. Using this model, the network then identifies the individual parts of the image, say, trees and people, and assigns matching colors.

The Waseda team presented this deep learning tool at the SIGGRAPH 2016 computer graphics conference [1]; you will find the code for their photo-coloring tool on GitHub [2]. The university website [3] provides a research paper on the subject [4], as well as some sample images.

Neural networks consist of many layers that gradually filter out information. For an image, that information might consist of brightness, edges, and shadows. At the end, the network identifies specific, complex objects. Siri, Google Now, and Cortana use the same principle for speech recognition.

The problem with a conventional neural network is that each layer can make mistakes, which it then passes on to the next layer. The type of neural network used by the tool described in this article, a convolutional neural network (CNN) [5], has some built-in ways of limiting the effects of such errors.

CNN versus NN

The concept for CNNs comes from biology, although it is not the human brain that serves as a template, but the visual cortex of cats. The convolutional layers take the spatial structure of the objects into account.

CNNs differ from conventional neural networks in the way signals flow between neurons. In a classic NN, the signals typically pass along the input-output channel in one direction, without the ability to iterate through a loop. CNNs take a different approach: The areas on which the neurons operate overlap and are arranged with offsets. Each layer contributes to a more reliable overall result, thus improving the detection rate. The network can identify an object even if it appears in a position different from the one defined by the training templates.
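
To see what this means in practice, the following Torch snippet (purely illustrative and not part of the colorization project) creates a single convolutional layer whose 3x3 filters slide across the input with a stride of 1, so that neighboring neurons look at overlapping patches of the image:

require 'nn'

-- one input plane, eight 3x3 filters, stride 1, padding 1:
-- neighboring output neurons see overlapping 3x3 regions of the input
local conv = nn.SpatialConvolution(1, 8, 3, 3, 1, 1, 1, 1)
local input = torch.rand(1, 32, 32)   -- a single-channel 32x32 "image"
local output = conv:forward(input)
print(output:size())                  -- 8x32x32: one response map per filter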

Deep learning makes it possible for a computer to identify the objects in an image. This procedure works even when the object on a screen is significantly changed compared to the training model, say, because it has a different background or because the viewing angle of the object or the lighting conditions have changed [6].

CNNs cope very well with tasks that require visual recognition, but the result depends on the quality and amount of training data, as you will see in the sample pictures later in this article.

The model shown in Figure 1 essentially consists of four parallel, interconnected networks. The low-level features network recognizes the corners and edges of an image in high resolution. Its data ends up in the global features network, which sends it through four convolutional layers and then through three fully connected layers, each of which links every neuron of one layer with all of the neurons of the next.

Figure 1: Put simply, the model used is a network that can be divided into four functional networks.

The result is a global, 256-dimensional vector representation of the image. The mid-level features network, on the other hand, extracts textures from the data supplied by the low-level features network.

The results of the global and mid-level features networks are then combined in the fusion layer; thanks to vectorization, the results are independent of the image resolution. Finally, the colorization network adds the color information (chrominance) to the luminance and restores the resolution of the source image.

The end-to-end network thus brings together global and local properties of images and processes them at any resolution. If the global features network suggests that the shot was taken outdoors, the local features network then tends to focus on natural colors.
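
To give you a feel for how such a model is assembled in Torch, the following Lua sketch builds a heavily simplified version of the structure from Figure 1. The layer counts and sizes are my own assumptions for illustration; the real network is defined in the project code on GitHub [2].

require 'nn'

-- Low-level features: strided convolutions that pick up edges and corners
local lowLevel = nn.Sequential()
lowLevel:add(nn.SpatialConvolution(1, 64, 3, 3, 2, 2, 1, 1)):add(nn.ReLU())
lowLevel:add(nn.SpatialConvolution(64, 128, 3, 3, 1, 1, 1, 1)):add(nn.ReLU())
lowLevel:add(nn.SpatialConvolution(128, 256, 3, 3, 2, 2, 1, 1)):add(nn.ReLU())

-- Global features: more convolutions, then three fully connected layers
-- that end in the 256-dimensional global image vector
local globalNet = nn.Sequential()
globalNet:add(nn.SpatialConvolution(256, 512, 3, 3, 2, 2, 1, 1)):add(nn.ReLU())
globalNet:add(nn.SpatialConvolution(512, 512, 3, 3, 2, 2, 1, 1)):add(nn.ReLU())
globalNet:add(nn.SpatialConvolution(512, 512, 3, 3, 2, 2, 1, 1)):add(nn.ReLU())
globalNet:add(nn.View(512 * 7 * 7))
globalNet:add(nn.Linear(512 * 7 * 7, 1024)):add(nn.ReLU())
globalNet:add(nn.Linear(1024, 512)):add(nn.ReLU())
globalNet:add(nn.Linear(512, 256)):add(nn.ReLU())

-- Mid-level features: convolutions that keep the spatial layout (textures)
local midLevel = nn.Sequential()
midLevel:add(nn.SpatialConvolution(256, 256, 3, 3, 1, 1, 1, 1)):add(nn.ReLU())

local x = torch.rand(1, 224, 224)        -- a 224x224 grayscale input
local shared = lowLevel:forward(x)       -- 256x56x56 low-level feature maps
print(globalNet:forward(shared):size())  -- the 256-dimensional global vector
print(midLevel:forward(shared):size())   -- 256x56x56 mid-level texture maps
-- The fusion layer would tile the global vector over every position of the
-- mid-level maps; the colorization network then predicts the chrominance.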

Not Only Gray Theory

You can use the software from the Japanese researchers and the GIMP image-processing tool to colorize black-and-white images. You'll need a powerful computer with a reasonably recent graphics card.

In my test, I used Ubuntu 16.04 with Gnome. (The Japanese team used Ubuntu 14.04 with Gnome.) To follow the examples, you need to install Git, GIMP, and the Lua package manager, LuaRocks:

sudo apt-get install git gimp luarocks

With only marginally more effort, you can then install Torch [7], the deep learning library used by Facebook, among others. Torch is written in Lua [8] and is available under a BSD-style license. The Torch library provides algorithms for deep learning, and LuaRocks makes it easy to add the packages it needs.

Because Torch uses C backends and a MATLAB-like environment, it is perfect for scientific projects. Torch also includes packages for optimization, graphical models, and image processing. The associated nn package builds neural networks and equips them with various capabilities.
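
As a quick taste of the nn package, the following toy script (it has nothing to do with the colorization model) builds a small fully connected network and pushes a random input through it:

require 'nn'

local net = nn.Sequential()
net:add(nn.Linear(10, 5))   -- fully connected layer: 10 inputs, 5 outputs
net:add(nn.Tanh())          -- non-linearity
net:add(nn.Linear(5, 2))    -- second layer: 5 inputs, 2 outputs

print(net:forward(torch.rand(10)))   -- the network's output for a random input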

You can clone Torch from GitHub; you then need to execute the included install scripts:

git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch
bash install-deps
./install.sh

This step adds Torch to the PATH variable in your .bashrc; you will want to restart Bash at this point. Now you need to install some Lua packages on the computer:

luarocks install nn
luarocks install image
luarocks install nngraph
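
A short Lua script is an easy way to verify that the three packages are in place; save the following lines as, say, check.lua (the name is arbitrary) and run them with th check.lua:

-- check.lua: make sure the required Torch packages load
require 'nn'
require 'image'
require 'nngraph'
print('nn, image, and nngraph loaded successfully')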

The next step is to set up the actual coloring software. You can download it from GitHub [2] using git clone; I then used the supplied download_model.sh script to fetch the pre-trained model on my machine:

cd ~
git clone https://github.com/satoshiiizuka/siggraph2016_colorization.git
cd siggraph2016_colorization
./download_model.sh

For my first attempt, I copied the test1.jpg image to the siggraph2016_colorization folder. The test image is a scan of a photo measuring 638x638 pixels; I had trimmed it to a square shape because the neural network was trained on square images. Then I handed it over to the colorization script:

th colorize.lua test1.jpg test1_color1.jpg

The first results, which are not entirely convincing, are shown in Figure 2. This uninspiring outcome is probably due to the fact that the CNN processes images at a size of 224x224 pixels. Also, my image template does not really consist of grayscale values.

Figure 2: A first colorization run is not very effective; the image is still not optimized for the colorization software.
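
Instead of trimming and scaling by hand (or in GIMP, as described next), you can also let the image package installed earlier do the preparatory work. The following sketch, which I will call prepare.lua (the file names are my own choices), crops the scan to a centered square and scales it down to 224x224 pixels before you hand it to colorize.lua:

-- prepare.lua: crop a scan to a centered square and scale it to 224x224
require 'image'

local img = image.load('test1.jpg')       -- tensor: channels x height x width
local h, w = img:size(2), img:size(3)
local s = math.min(h, w)                  -- edge length of the centered square
local y1 = math.floor((h - s) / 2)
local x1 = math.floor((w - s) / 2)
local square = image.crop(img, x1, y1, x1 + s, y1 + s)
image.save('test1_small.jpg', image.scale(square, 224, 224))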

For my next attempt, I used GIMP to convert the image to grayscale (Image | Mode | Grayscale); this change was not visible in the image. I then reduced the image to 224x224 pixels, but without converting it to grayscale; this step affected the resolution, but at least it improved the color scheme. Finally, I combined the two steps, converting the image to grayscale and scaling it down to 224x224 pixels (Figure 3). But how do I transfer the significantly better color information to pictures with a higher resolution?

Figure 3: Grayscale and scaled to 224x224 pixels: Better colorization – bad resolution. GIMP to the rescue.

New Layer

GIMP lets you decompose an image into the Lab color model [9]. GIMP divides the image into three layers: an L layer for the luminance, an a layer for the hues between green and red, and a b layer for the colors between blue and yellow [10].

The idea is to decompose both the large and the small image into this color space, scale the a and b layers of the small image to the resolution of the larger image, and transfer them to it. When you put the layers back together, you get the larger picture with the color information from the smaller image.
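
Incidentally, this step can also be scripted with the Torch image package. The following sketch is only an outline: it assumes that the package's rgb2lab and lab2rgb converters are available, that the large original is stored as an RGB image, and that the file names are made up for the example. The GIMP route, described next, achieves the same thing interactively.

-- combine the L channel of the large image with the upscaled a/b channels
-- of the small colorized image (file names are placeholders)
require 'image'

local small = image.rgb2lab(image.load('test1_color_small.jpg'))
local large = image.rgb2lab(image.load('test1_large.jpg'))
local h, w = large:size(2), large:size(3)

local ab = image.scale(small[{{2, 3}}], w, h)  -- upscale the color channels
local lab = torch.cat(large[{{1, 1}}], ab, 1)  -- keep the original luminance
image.save('test1_color_large.jpg', image.lab2rgb(lab))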

In GIMP, you first open the small colorized image and the large picture. Then select Colors | Components | Decompose; in the dialog that opens, choose LAB as the color model (Figure 4). Make sure the option for decomposing the image into layers is selected.

Figure 4: The small image in the original and, next to it, the Lab color layers extracted by GIMP (top left).

In the next step, activate the a layer of the small image, right-click it, and select Scale Layer to scale it up to the resolution of the large image. Then copy the layer with Ctrl+C, switch to the large image, and add the copied layer in the Layers dialog.

Pressing the anchor button at the bottom of the Layers dialog lets you embed the floating selection; you then need to repeat this procedure with the b layer. Finally, use Colors | Components | Recompose to put the color layers back together. The result: an image at a higher resolution with the color information from the smaller image. For comparison, Figure 5 once again shows the grayscale image as a starting point.

Figure 5: From a grayscale image, the Torch-based colorization software makes a color photo, though not perfect.


