A bag o' tricks from the Perlmeister
Infrastructure as Code
To make sure that code not only works in the development environment, but also for the users in their worlds, the release process must ensure two things. First, the generated artifact may only pick up source code from the Git repository and must not rely on local files; this guarantees that the build can always be reproduced. Second, before the product is launched into the wild, it has to pass the accompanying test suite, which simulates the end user's view.
Professional developers use build servers for this. Driven by tools like Jenkins or similar, they automatically wake up if new sources appear in the Git repository, grab the new code, start the build, run the tests, and, if successful, put together an artifact such as a tarball or an RPM package, which they then upload to the distribution server in one fell swoop. Open source projects often use Travis CI [3] for this: an excellent build hosting provider that sets up a build server for a GitHub project at the push of a button and is happy with simple three-line configurations that live alongside the source code.
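Those configurations really can be that small. A minimal .travis.yml for a Perl module might look like the following sketch (the Perl versions listed are only examples); Travis CI then installs the module's dependencies and runs its test suite on each listed interpreter whenever new commits arrive:

```yaml
language: perl
perl:
  - "5.24"
  - "5.22"
```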
A virtual environment, such as a Vagrant VM [4] provisioned with Ansible, or a Docker container [5], which generates artifacts and runs tests, is equally fine for home use. If all goes well, the release is made, and the cpan-upload script from the CPAN::Upload CPAN module then uploads a tarball created by make tardist to CPAN.
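To authenticate against the CPAN upload server, cpan-upload reads the uploader's PAUSE credentials from a .pause file in the user's home directory; a minimal version (with placeholder values, of course) looks like this:

```
user MYPAUSEID
password my-secret-password
```

Since the file contains a plaintext password, it should be readable only by its owner.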
Listing 1 [6] shows a Docker configuration that produces a clean room based on the latest Ubuntu distro. A call to Docker's build command picks up the local Dockerfile, pulls in the lean Ubuntu base image from the Docker mothership, and adds more layers to it according to each RUN instruction in the file:
Listing 1 Dockerfile
01 FROM ubuntu
02
03 RUN apt-get -y update
04 RUN apt-get -y install cpanminus
05 RUN apt-get -y install make
06 RUN apt-get -y install libwww-perl
docker build -t testimg .
The statements in the Dockerfile tell Docker to run apt-get update to point Ubuntu's package manager to the latest repository versions, and to install packages for build support, such as make. Later calls to the same build command will reuse the previously created content from the cache, as long as the lines in the Dockerfile haven't changed according to a checksum comparison.
The build script in Listing 2 first runs the Docker build command, which creates a new image, and then invokes the run command, which launches a container based on the image. The -v option makes the host's source directory for the module available inside the container below /mybuild for read and write access.
Listing 2 Build Script
01 #!/usr/local/bin/perl -w
02 use strict;
03 use Sysadm::Install qw(:all);
04 use FindBin qw( $Bin );
05 use Path::Tiny;
06
07 my $tag = "build";
08 my $dir = path( "$Bin/.." )->realpath;
09
10 sysrun "docker", "build", "-t", $tag, ".";
11
12 sysrun qw( docker run --rm --name buildc -v ),
13   "$dir:/mybuild", $tag, "bash", "-c",
14   "cd /mybuild; perl Makefile.PL; make test; make tardist";
Because the build script is checked into the module's Git repository as adm/build, Perl's FindBin module first identifies its absolute location. The module code resides in the parent directory, so Path::Tiny appends .. to the script's location and then collapses the result into a minimal absolute path using realpath.
Line 13 calls bash as a command in the container; it uses the -c option to pass it a string with the typical Perl-style triple jump of perl Makefile.PL; make test; make tardist. This in turn puts together the distribution tarball from the module code under clean-room conditions. Subsequent build steps should copy the tarball to new clean rooms and test whether it can be installed and used, as this is not automatically the case, especially if it needs more modules from CPAN at run time.
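Such an installation test can reuse the Docker approach from Listing 1. The following Dockerfile sketch (the tarball name is made up for illustration) copies the freshly built tarball into a pristine Ubuntu image and asks cpanminus to install it; cpanm pulls any runtime dependencies from CPAN and aborts the image build if the module fails to install cleanly:

```
FROM ubuntu
RUN apt-get -y update
RUN apt-get -y install cpanminus make gcc
COPY My-Module-0.01.tar.gz /tmp/
RUN cpanm /tmp/My-Module-0.01.tar.gz
```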
Automatically Error-Free
The important thing is that each level of the build process runs automatically and immediately pulls the emergency ripcord if unexpected events occur. The automatic part is essential because human operators tire easily and start making mistakes when they continuously repeat the same steps. Broken releases are often the consequence, causing embarrassment and user frustration. If you do invest time in automating the build process, you will learn to enjoy the ability to push a button after making a change to the code, before heading off for lunch, in the assurance that everything will follow a tried and trusted path.
Tagging Releases
To later determine which state of the source tree a release is based on, the build process needs to mark the status in Git, usually with a tag that contains the release number:
git tag release_1.01
git push --tags origin
If origin refers to the remote Git repository, the second command, push with the --tags argument, ensures that locally applied tags in the Git repository get copied to GitHub for everyone to see. If you later want to reproduce bugs present in previous releases, you can restore the source code's historical state at the time of the release in question by checking it out with
git checkout -b testbug release_1.01
The new testbug branch then contains the status quo at the time of authoring.
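The round trip is easy to verify in a throwaway repository. The following shell sketch (file names and commit messages are invented) tags a commit, adds a later change, and then checks out the tagged state on a testbug branch, which restores the file as it was at release time:

```shell
# Create a scratch repository for the demonstration
dir=$(mktemp -d)
cd "$dir"
git init -q

# First release: commit a file and tag the state
echo "release content" > Module.pm
git add Module.pm
git -c user.name=demo -c user.email=demo@example.com commit -qm "release 1.01"
git tag release_1.01

# Development continues after the release
echo "newer content" > Module.pm
git -c user.name=demo -c user.email=demo@example.com commit -qam "post-release work"

# Check out the tagged state on a new branch
git checkout -q -b testbug release_1.01
cat Module.pm    # the file is back in its release-time state
```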