Make a camera for lenticular photography
Wiggle Time
You can take lenticular images with a homemade camera to re-create the "wiggle" pictures of your childhood.
Lenticular images store multiple exposures in the same area. Animation is achieved by tilting the image. Another application creates a spatial appearance without special viewing aids (autostereoscopy). The digital version of this often shows up on social media as a "wigglegram."
Lenticular Cameras
On the consumer market, lenticular cameras have been sold under names like ActionSampler. More than 40 years ago, the four-lens Nimslo (later marketed as the Nishika) appeared, followed by Fuji's eight-lens Rensha Cardia in 1991. Unlike the Nishika, which fires all of its shutters at once, the Fuji exposed the 35mm film sequentially. Even today, these analog shots are still very popular on Instagram and the like.
One way to create a multilens digital recording system is to combine a Raspberry Pi with a Camarray HAT [1] (hardware attached on top) by ArduCam [2]. The camera I build in this article uses four Sony IMX519 sensors spaced 4cm apart (Figure 1). After the first exposure, you move the device by half the camera spacing, which gives you eight shots of the subject at equal intervals with a total of 32 megapixels (MP).
Lenticular Technology
The predecessors of today's lenticular screens are corrugated and lamellar screens that take two and three displayable images, respectively. Unlike the planar image strips of their predecessors, the lens screens commonly used today are transparent films of semi-cylindrical strips that show multiple images simultaneously [3]. Depending on the viewer's angle of view, the left eye sees something different than the right eye, and the viewer perceives the view as three-dimensional (Figure 2).
The lenses differ in terms of thickness and radius of curvature; the resolution is stated in lines per inch (lpi). Image-change, animation, zoom, and morphing effects can be achieved with horizontally arranged image strips. To separate spatial images, you need vertical image strips; the input images are encoded strip by strip to match the lens pitch and are printed on a self-adhesive foil or as mirror images on the reverse side of the foil. For example, a 60 lpi screen with eight interlaced images calls for strips of 1/480 inch (about 0.05mm), so the printer needs to resolve at least 480dpi.
You achieve the spatial effect by interlacing the individual images, which the lens screen then separates again for the viewer. You are not restricted to two images; you can interlace a whole series. For static scenes, you move the camera step by step; alternatively, you can use camera technology with multiple lenses, which is also the way to capture dynamic scenes. StereoPhoto Maker [4] is freeware for preparing such image series. If you want to look more closely into wigglegrams, the Triaxes 3DMasterKit [5] software is worth a look.
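The strip-by-strip encoding itself is easy to reproduce in a few lines of code. The following Python sketch is only an illustration of the principle, assuming eight equally sized input frames (the file names are hypothetical) and a print resolution that is exactly eight times the lens pitch, so that each lenticule receives one pixel column from every frame:

#!/usr/bin/env python3
# Minimal lenticular interlacing sketch: one pixel column from each
# frame in turn, so the frames are nested across the lens pitch.
from PIL import Image

frames = [Image.open(f"shot_{i}.jpg") for i in range(8)]  # hypothetical names
width, height = frames[0].size
n = len(frames)

out = Image.new("RGB", (width, height))
for x in range(width):
    column = frames[x % n].crop((x, 0, x + 1, height))
    out.paste(column, (x, 0))

out.save("lenticular_master.png")

In practice, dedicated tools like the programs mentioned above also handle resampling to the exact lens pitch and pitch calibration, which is why they are the more convenient route.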
Four-Lens DIY Camera
As the control unit, I add the ArduCam Camarray HAT to a Raspberry Pi 4B. Pivariety is the manufacturer's product line that extends the standard Raspberry Pi camera support with sensors that act as Video for Linux version 2 (V4L2) devices. The HAT drives four Sony IMX519 sensors over the Camera Serial Interface (CSI) and is configured over the I2C bus. Each sensor offers 16MP, but depending on how they are addressed, two or four sensors share a single 16MP frame. The sensors can be operated in autofocus mode or focused manually. The field of view is 80 degrees horizontally, and the focus range starts at about 8cm.
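Because the Pivariety driver presents the whole array as a single V4L2 camera, an ordinary libcamera capture is all it takes to grab a combined frame. The following sketch assumes a working Picamera2 installation on top of ArduCam's camera stack; the requested size is the full stitched frame and may need adjusting to whatever resolutions the driver actually reports:

#!/usr/bin/env python3
# Capture one combined frame from the Camarray HAT with Picamera2.
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_still_configuration(main={"size": (4656, 3496)})
picam2.configure(config)
picam2.start()
picam2.capture_file("quad_frame.jpg")  # all active sensors in one image
picam2.stop()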
On the move, a 5V power bank supplies the unit with power. 3D-printed components form the case. The camera boards sit side by side in the supplied brackets. Of course, you can't align a setup like this with single-pixel accuracy, but you don't need to, because the application software can compensate later. The housing is designed so that you can shift the entire lens board by half the camera spacing. In this way, the data for a lenticular image can be assembled from eight exposures, each 2cm apart, over a base of 14cm.
The multicamera adapter connects to the four sensor boards over ribbon cables and to the computer's CSI interface. Three spacer screws provide the mechanical connection to the Raspberry Pi; the 5V supply comes through the GPIO pins. How you arrange the cameras is entirely up to you. The boards each come in a small case and are installed 40mm apart; without the housings, the minimum spacing drops to 24mm. The software sees the sensors as a single camera that delivers one combined frame. By setting the corresponding I2C parameters, you can enable one, two, or four sensors; the cameras always share the available resolution.
The default is Quadro mode, set with

i2cset -y 10 0x24 0x00

In this mode, the resolution of each sub-image is restricted to a maximum of 2328x1746 pixels, and synchronization is in pairs at frame level. With a different parameter value, the HAT switches to dual mode, where each of the two active sensors delivers 2328x3496 pixels, which the application later stretches to two times 4656x3496 pixels. You may already be familiar with this kind of horizontal image compression from stereoscopy.
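Once a combined frame has been saved, the four sub-images in Quadro mode are simply the quadrants of the stitched frame. The short Pillow sketch below cuts them apart for further processing; the file name is the hypothetical one from the capture example, and the script assumes the sensors are tiled two by two in the combined image:

#!/usr/bin/env python3
# Split a stitched Quadro-mode frame into its four sub-images.
from PIL import Image

frame = Image.open("quad_frame.jpg")
w, h = frame.size
tile_w, tile_h = w // 2, h // 2  # e.g., 2328x1746 per camera

for row in range(2):
    for col in range(2):
        box = (col * tile_w, row * tile_h,
               (col + 1) * tile_w, (row + 1) * tile_h)
        frame.crop(box).save(f"cam_{row * 2 + col}.jpg")

Which tile belongs to which RX port depends on how the ribbon cables are routed (see the cabling order in Figure 3).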
The images on the right of Figure 3 were taken from close up and therefore clearly reveal the camera layout and cabling (from left to right: RX2, RX3, RX1, and RX0). Despite the convenient autofocus mode, don't forget the manual focus options: especially at close range, manual focus opens up some interesting photographic possibilities (Figure 4).
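If you want to experiment with manual focus from your own scripts, Picamera2 exposes the focus settings as ordinary camera controls. The sketch below assumes that ArduCam's IMX519 driver reports the standard AfMode and LensPosition controls; the lens position of 10 dioptres (roughly a 10cm focus distance) is just an example for close-up work:

#!/usr/bin/env python3
# Switch the sensors to manual focus before capturing a close-up.
import time
from picamera2 import Picamera2
from libcamera import controls

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()
picam2.set_controls({"AfMode": controls.AfModeEnum.Manual,
                     "LensPosition": 10.0})  # dioptres: 1/distance in meters
time.sleep(1)  # give the lenses time to settle
picam2.capture_file("closeup.jpg")
picam2.stop()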