Backdoors in Machine Learning Models
Preparation
The example in this article uses PyTorch, which, along with TensorFlow, is one of the most popular deep-learning frameworks. PyTorch provides an easy-to-understand API and lets you write clean, uncluttered code that simply feels like Python. To get started, install the required Python packages with the following command:
pip install torch torchvision
Then download the MNIST data set and create an instance of the MNIST class from the Torchvision package. Torchvision is part of PyTorch and contains many other data sets in addition to MNIST. Listing 1 shows which arguments are passed to the class. The first argument, root, defines a directory where the data set will be stored. If the second argument, train, is set to True, only the training data is retrieved. The third argument, download, is used to download the data set. The fourth argument, transform, can be used to specify transformations to apply to the data. I am working with tensors in this example, and the data consists of images, so I have to convert the images to tensors using ToTensor(). I will use the same approach to load the data set used to validate the model; the only difference is that I need to set train to False instead of True.
Listing 1
Loading the MNIST Data Set
mnist_training = torchvision.datasets.MNIST(
    root='.data',
    train=True,
    download=True,
    transform=torchvision.transforms.ToTensor()
)
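The validation data is loaded with the same call, only with train set to False. The variable name mnist_validation is my own choice for this sketch, not something prescribed by Torchvision:

# Same pattern as Listing 1; train=False retrieves the
# 10,000 validation images instead of the 60,000 training images.
mnist_validation = torchvision.datasets.MNIST(
    root='.data',
    train=False,
    download=True,
    transform=torchvision.transforms.ToTensor()
)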
Computing the Model
The next step is to create a function that computes a model for a data set; this function can be seen in Listing 2. Lines 2 to 13 encode the architecture of the CNN, which is very simple. The first layer is a convolutional layer, followed by a pooling layer, with the widely used ReLU acting as the activation function. This sequence repeats before ending in two linear layers that represent a classical neural network: a hidden layer and an output layer.
Listing 2
Computing the Model
01 def create_model(dataset):
02     model = torch.nn.Sequential(
03         nn.Conv2d(1, 16, 5, 1),
04         nn.ReLU(),
05         nn.MaxPool2d(2, 2),
06         nn.Conv2d(16, 32, 5, 1),
07         nn.ReLU(),
08         nn.MaxPool2d(2, 2),
09         nn.Flatten(),
10         nn.Linear(32*4*4, 512),
11         nn.ReLU(),
12         nn.Linear(512, 10)
13     )
14
15     opt = torch.optim.Adam(model.parameters(), 0.001)
16     loss_fn = torch.nn.CrossEntropyLoss()
17     loader = torch.utils.data.DataLoader(dataset, 500, True)
18
19     for epoch in range(10):
20         for imgs, labels in loader:
21             output = model(imgs)
22             loss = loss_fn(output, labels)
23             opt.zero_grad()
24             loss.backward()
25             opt.step()
26         print(f"Epoch {epoch}, Loss {loss.item()}")
27
28     return model
Lines 15 to 17 select an optimizer (Adam, in this case) and a loss function (CrossEntropyLoss, in this case) and create an instance of DataLoader. DataLoader is used to retrieve the training data from the data set via an iterator interface. The data set is specified as the first argument. In each iteration, DataLoader delivers a batch of training data; the second argument defines the size of the batch. In this case, each iteration provides 500 examples. If you set the third argument to True, the data is randomly shuffled beforehand.
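As a quick check (my own sketch, not part of Listing 2, assuming mnist_training from Listing 1), you can pull a single batch from such a DataLoader and inspect its dimensions; each MNIST image is a 1x28x28 tensor:

import torch

loader = torch.utils.data.DataLoader(mnist_training, 500, True)
imgs, labels = next(iter(loader))  # fetch one batch of 500 examples
print(imgs.shape)    # torch.Size([500, 1, 28, 28])
print(labels.shape)  # torch.Size([500])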
Lines 19 to 26 train the model step by step. They iterate 10 times (line 19) over the complete data set (line 20). For each batch obtained in this way, the parameters of the model are optimized so that the model improves step by step. To do this, you first need to calculate the output that the model returns for the current batch (line 21). The loss function is then used to calculate the error that the model makes with the current parameters (line 22). In simple terms, this is the difference between the output that the model provides and the correct values (labels). Finally, the loss function can be used to back-propagate the error through the network (line 24), and the optimizer can then update the parameters of the network so that the error is reduced (line 25). For this to work, the gradients must first be set to zero (line 23). Additional technical details are not important for this example.
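One detail worth tracing is the 32*4*4 in line 10 of Listing 2: each 5x5 convolution trims four pixels off each side length, and each pooling layer halves it, so the 28x28 input shrinks to 24, 12, 8, and finally 4 pixels across 32 channels. The following sketch (mine, not part of the listing) confirms this by pushing a dummy image through the convolutional part:

import torch
import torch.nn as nn

# Convolution/pooling stack from lines 3 to 8 of Listing 2
features = nn.Sequential(
    nn.Conv2d(1, 16, 5, 1), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(16, 32, 5, 1), nn.ReLU(), nn.MaxPool2d(2, 2)
)
dummy = torch.zeros(1, 1, 28, 28)  # one MNIST-sized image
print(features(dummy).shape)       # torch.Size([1, 32, 4, 4])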
Accuracy of the Model
Calling the create_model() function with the training data returns a model that recognizes handwritten digits with about 99 percent accuracy in less than two minutes on a current CPU. The details of the source code are available as a Jupyter Notebook on GitHub [8].
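The evaluation code itself is not printed here; a minimal sketch, assuming the validation set was loaded as mnist_validation in the same way as Listing 1 but with train=False, could look like this:

model = create_model(mnist_training)

loader = torch.utils.data.DataLoader(mnist_validation, 500)
correct = 0
with torch.no_grad():  # no gradients are needed for evaluation
    for imgs, labels in loader:
        preds = model(imgs).argmax(dim=1)  # most likely digit per image
        correct += (preds == labels).sum().item()
print(f"Accuracy: {correct / len(mnist_validation):.4f}")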