# Hot on the Tracks

## Artificial intelligence detects mileage patterns

On the basis of training data in the form of daily car mileage, Mike Schilli's AI program tries to identify patterns in driving behavior and make forecasts.

New releases in the deep learning category are currently springing up from publishing companies like mushrooms out of the ground, with "neural networks" here and "decision trees" there. Armed with brand new open source tools such as TensorFlow or SciKits, even the average consumer can treat their home PC to a helping of artificial intelligence (AI). What could be more obvious than feeding your trusted home Linux box with acquired data and checking whether, by applying new AI techniques, it can then predict the future from historical values?

#### Simply Linear

As discussed in a previous issue of this column, I have an Automatic adapter in my car that collects driving data via the OBD-II port and uploads it to a web service via the mobile phone network [1]. Scripts then use the REST API to retrieve the data and can thus determine exactly when the car was driven and where it went.

For example, it is easy to retrieve the daily mileage and output it as a CSV file (Figure 1) or plot the odometer data graphically over a time axis for an entire year (Figure 2).

Apart from a few outliers, the linear course of the mileage readings suggests that the car travels a considerable number of miles almost every day. If someone wants to know the probable mileage for July next year, a mathematically capable person could calculate the future mileage relatively quickly with the help of the rule of three – hopefully remembered from high school days.
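As a quick sanity check, that rule-of-three estimate boils down to a few lines of Python; the odometer readings and day numbers below are made up for illustration, not taken from the real data:

```python
# Rule-of-three extrapolation between two hypothetical odometer readings.

def extrapolate(day1, miles1, day2, miles2, target_day):
    """Linearly extrapolate the odometer reading for target_day."""
    miles_per_day = (miles2 - miles1) / (day2 - day1)
    return miles1 + (target_day - day1) * miles_per_day

# 10,000 miles on day 0, 16,000 miles on day 300 -> 20 miles/day:
print(extrapolate(0, 10000, 300, 16000, 500))  # 20000.0
```

The AI approach described below arrives at the same kind of straight line, but lets the machine find the slope and offset itself.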

But what about today's AI programs? How complex would it be to feed the historical driving data to a script and let it learn the odometer history to generate accurate forecasts in the future?

#### Still Witchcraft?

Nowadays, AI tools still have a long way to go to reach something resembling human intelligence; they still require you to define the framework precisely before the computer sees anything at all. If, however, the linear progression of the curve is known, you can choose an AI tool for linear regression, and suddenly your application may turn some heads for actually looking pretty intelligent.

TensorFlow, a hot AI framework from Google, helps at a relatively high abstraction level by feeding in data and letting a chosen model learn behavior until it's ready to evaluate its performance later. Because AI tools rely to quite a large extent on linear algebra and matrices for computations, math tools such as Python's *pandas* library help a great deal. TensorFlow for Python 3 is easily installed on Ubuntu with the Python module installer:

```
pip3 install tensorflow
```

The same applies to *pandas* and other modules.

Incidentally, during the install on my system, the TensorFlow engine output a slew of obnoxious deprecation warnings whenever it was called, but I silenced them by setting the `TF_CPP_MIN_LOG_LEVEL` environment variable to a value of `3`.
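From within a Python script, the variable can be set before TensorFlow is imported; a minimal sketch:

```python
import os

# Silence TensorFlow's C++ log output (3 = errors only);
# this must happen before tensorflow is imported.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
```

Alternatively, `export TF_CPP_MIN_LOG_LEVEL=3` in the shell has the same effect for everything launched from that session.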

#### AI Feed

TensorFlow expects the mathematical equations for operating a model as *nodes* in a graph; it fills them with parameters in *sessions* and executes them either on a single PC or in parallel on entire clusters of machines at the data center. In this particular car mileage use case, Listing 1 [2] defines the straight-line equation for the linear model in line 23 as:

```
Y = X * W + b
```

Listing 1: linreg.py

The variable `X` here is the input value for the simulation; it provides the date and time value for which the process computes the mileage `Y` as the output. The parameters `W` (weight) and `b` (bias) multiply `X` and add an offset to the result; during the training session, they are adjusted so that `Y` corresponds as closely as possible to the actual mileage at time `X`.

For this purpose, lines 16 to 17 define the variables `X` and `Y` as `placeholder`, and lines 19 to 20 define the parameters `W` and `b` as `Variable`, initializing them with random values from the `random` component of the `numpy` module. Line 14 loads the two columns `date` and `miles` for every record from the CSV file into a *pandas* dataframe in one fell swoop; imagine a kind of database table with two columns.
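A minimal sketch of such a one-call CSV load, with a few invented sample records standing in for the real Automatic export (Listing 1 reads them from a file instead):

```python
import io
import pandas as pd

# Invented sample records; the real data comes from the Automatic export.
csv_data = io.StringIO(
    "date,miles\n"
    "2017-01-02,10012\n"
    "2017-01-03,10057\n"
    "2017-01-04,10101\n"
)

df = pd.read_csv(csv_data)   # one call pulls in both columns
print(df.columns.tolist())   # ['date', 'miles']
print(len(df))               # 3
```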

The actual training session is orchestrated by the optimizer in lines 39 and 40; it uses a gradient descent procedure to fit the straight-line equation to the individual points in the training data by modifying the parameters `W` and `b` until the `cost` (or error) calculation defined in lines 27 and 28 drops to a minimum. This cost function again uses TensorFlow semantics to compute the mean square deviation of all training data points from the straight line defined by `W` and `b`.
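The mean square deviation that the `cost` node expresses in TensorFlow semantics can be written out in plain NumPy; the sample points below are invented and lie exactly on the line y = 2x + 1:

```python
import numpy as np

def cost(w, b, x, y):
    """Mean squared deviation of the points (x, y) from the line w*x + b."""
    return np.mean((x * w + b - y) ** 2)

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])    # exactly y = 2x + 1

print(cost(2.0, 1.0, x, y))      # 0.0 -- this line fits perfectly
print(cost(1.0, 0.0, x, y))      # positive -- a worse fit costs more
```

Gradient descent does nothing more than nudge `w` and `b` downhill on this cost surface.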

In the TensorFlow session, starting in line 45, the `for` loop iterates across all 2,000 training runs, as set in line 11, and calculates the value of the `cost` function every 250 passes to keep the user at the command-line prompt entertained. For training the model, however, only the call to `run` in line 49 is relevant: it feeds in how the current `X` value maps to a known `Y`; the optimizer then evaluates the formula in line 23 in the background, computes the result, and in turn modifies the parameters based on the computed value of the `cost` function. After 2,000 cycles, Figure 3 shows that the value of `W` has reached a steady state at `6491`, with `b` at `32838`.
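TensorFlow hides the gradient computation behind its optimizer node; purely for illustration, the same training loop can be written out by hand in NumPy. The synthetic data, learning rate, and random seed below are assumptions for this sketch, not values from Listing 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the mileage data: y = 6.5*x + 3 plus noise.
x = np.linspace(0.0, 1.0, 50)
y = 6.5 * x + 3.0 + rng.normal(0.0, 0.1, x.size)

w, b = rng.normal(), rng.normal()  # random start, as in Listing 1
rate = 0.5                         # learning rate (an assumption)

for step in range(2000):
    pred = x * w + b               # the model Y = X * W + b
    # Gradients of the mean squared error with respect to w and b:
    grad_w = 2.0 * np.mean((pred - y) * x)
    grad_b = 2.0 * np.mean(pred - y)
    w -= rate * grad_w
    b -= rate * grad_b
    if step % 250 == 0:
        print(f"step {step:4d}  cost {np.mean((pred - y) ** 2):.4f}")

print(w, b)  # settles close to the true 6.5 and 3.0
```

Just as in Figure 3, the printed cost drops quickly at first and then flattens out as the parameters reach their steady state.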
