15. API - Arrays

The picamera library provides a set of classes designed to aid in construction of n-dimensional numpy arrays from camera output. In order to avoid adding a hard dependency on numpy to picamera, this module (picamera.array) is not automatically imported by the main picamera package and must be explicitly imported, e.g.:

import picamera
import picamera.array

15.1. PiArrayOutput

class picamera.array.PiArrayOutput(camera, size=None)[source]

Base class for capture arrays.

This class extends io.BytesIO with a numpy array which is intended to be filled when flush() is called (i.e. at the end of capture).

array

After flush() is called, this attribute contains the frame’s data as a multi-dimensional numpy array. This is typically organized with the dimensions (rows, columns, plane). Hence, an RGB image with dimensions x and y would produce an array with shape (y, x, 3).

close()[source]

Disable all I/O operations.

truncate(size=None)[source]

Resize the stream to the given size in bytes (or the current position if size is not specified). This resizing can extend or reduce the current file size. The new file size is returned.

In prior versions of picamera, truncation also changed the position of the stream (because prior versions of these stream classes were non-seekable). This functionality is now deprecated; scripts should use seek() and truncate() as one would with regular BytesIO instances.
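For example, to empty a stream for re-use in the non-deprecated style, seek to the start before truncating (illustrated here with a plain io.BytesIO, which these classes extend):

```python
import io

stream = io.BytesIO()
stream.write(b'first capture data')
# Rewind and truncate to empty the stream for re-use;
# truncate() alone no longer rewinds the position
stream.seek(0)
stream.truncate()
stream.write(b'second capture data')
print(stream.getvalue())  # → b'second capture data'
```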

15.2. PiRGBArray

class picamera.array.PiRGBArray(camera, size=None)[source]

Produces a 3-dimensional RGB array from an RGB capture.

This custom output class can be used to easily obtain a 3-dimensional numpy array, organized (rows, columns, colors), from an unencoded RGB capture. The array is accessed via the array attribute. For example:

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiRGBArray(camera) as output:
        camera.capture(output, 'rgb')
        print('Captured %dx%d image' % (
                output.array.shape[1], output.array.shape[0]))

You can re-use the output to produce multiple arrays by emptying it with truncate(0) between captures:

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiRGBArray(camera) as output:
        camera.resolution = (1280, 720)
        camera.capture(output, 'rgb')
        print('Captured %dx%d image' % (
                output.array.shape[1], output.array.shape[0]))
        output.truncate(0)
        camera.resolution = (640, 480)
        camera.capture(output, 'rgb')
        print('Captured %dx%d image' % (
                output.array.shape[1], output.array.shape[0]))

If you are using the GPU resizer when capturing (with the resize parameter of the various capture() methods), specify the resized resolution as the optional size parameter when constructing the array output:

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    with picamera.array.PiRGBArray(camera, size=(640, 360)) as output:
        camera.capture(output, 'rgb', resize=(640, 360))
        print('Captured %dx%d image' % (
                output.array.shape[1], output.array.shape[0]))
flush()[source]

Does nothing.

15.3. PiYUVArray

class picamera.array.PiYUVArray(camera, size=None)[source]

Produces 3-dimensional YUV & RGB arrays from a YUV capture.

This custom output class can be used to easily obtain a 3-dimensional numpy array, organized (rows, columns, channel), from an unencoded YUV capture. The array is accessed via the array attribute. For example:

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiYUVArray(camera) as output:
        camera.capture(output, 'yuv')
        print('Captured %dx%d image' % (
                output.array.shape[1], output.array.shape[0]))

The rgb_array attribute can be queried for the equivalent RGB array (conversion is performed using the ITU-R BT.601 matrix):

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiYUVArray(camera) as output:
        camera.resolution = (1280, 720)
        camera.capture(output, 'yuv')
        print(output.array.shape)
        print(output.rgb_array.shape)
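The coefficients involved can be illustrated with a standalone numpy sketch. The function below uses the standard video-range BT.601 figures; treat it as an approximation of the library's conversion (the exact rounding may differ):

```python
import numpy as np

def yuv_to_rgb(yuv):
    """Convert a (rows, cols, 3) uint8 YUV array to RGB using the
    video-range ITU-R BT.601 coefficients."""
    yuv = yuv.astype(np.float64)
    y = (yuv[..., 0] - 16.0) * 1.164
    u = yuv[..., 1] - 128.0
    v = yuv[..., 2] - 128.0
    rgb = np.empty_like(yuv)
    rgb[..., 0] = y + 1.596 * v              # red
    rgb[..., 1] = y - 0.392 * u - 0.813 * v  # green
    rgb[..., 2] = y + 2.017 * u              # blue
    return rgb.clip(0, 255).astype(np.uint8)

# A uniform mid-grey frame: Y=126, U=V=128 (zero chrominance)
frame = np.dstack([
    np.full((4, 4), 126, np.uint8),
    np.full((4, 4), 128, np.uint8),
    np.full((4, 4), 128, np.uint8),
])
print(yuv_to_rgb(frame)[0, 0])  # → [128 128 128]
```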

If you are using the GPU resizer when capturing (with the resize parameter of the various capture() methods), specify the resized resolution as the optional size parameter when constructing the array output:

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    with picamera.array.PiYUVArray(camera, size=(640, 360)) as output:
        camera.capture(output, 'yuv', resize=(640, 360))
        print('Captured %dx%d image' % (
                output.array.shape[1], output.array.shape[0]))
flush()[source]

Does nothing.

15.4. PiBayerArray

class picamera.array.PiBayerArray(camera, output_dims=3)[source]

Produces a 3-dimensional RGB array from raw Bayer data.

This custom output class is intended to be used with the capture() method, with the bayer parameter set to True, to include raw Bayer data in the JPEG output. The class strips out the raw data, and constructs a numpy array from it. The resulting data is accessed via the array attribute:

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiBayerArray(camera) as output:
        camera.capture(output, 'jpeg', bayer=True)
        print(output.array.shape)

The output_dims parameter specifies whether the resulting array is three-dimensional (the default, or when output_dims is 3), or two-dimensional (when output_dims is 2). The three-dimensional data is already separated into the three color planes, whilst the two-dimensional variant is not (in which case you need to know the Bayer ordering to accurately deal with the results).
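The relationship between the two layouts can be sketched in plain numpy. The BGGR ordering assumed below is purely illustrative (the actual ordering depends on the module and orientation, as noted above):

```python
import numpy as np

# A 4x4 two-dimensional Bayer mosaic (as with output_dims=2).
# Assume a BGGR ordering for illustration only.
mosaic = np.arange(1, 17, dtype=np.uint16).reshape(4, 4)

# Separate the mosaic into three color planes, as the
# three-dimensional variant does; positions in each plane that
# were not sampled by that color remain zero
planes = np.zeros((4, 4, 3), dtype=np.uint16)
planes[1::2, 1::2, 0] = mosaic[1::2, 1::2]  # red
planes[0::2, 1::2, 1] = mosaic[0::2, 1::2]  # green
planes[1::2, 0::2, 1] = mosaic[1::2, 0::2]  # green
planes[0::2, 0::2, 2] = mosaic[0::2, 0::2]  # blue

print(planes.shape)  # → (4, 4, 3)
```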

Note

Bayer data is usually full resolution, so the resulting array usually has the shape (1944, 2592, 3) with the V1 module, or (2464, 3280, 3) with the V2 module (if two-dimensional output is requested the 3-layered color dimension is omitted). If the camera’s sensor_mode has been forced to something other than 0, then the output will be the native size for the requested sensor mode.

This also implies that the optional size parameter (for specifying a resizer resolution) is not available with this array class.

As the sensor records 10-bit values, the array uses the unsigned 16-bit integer data type.

By default, de-mosaicing is not performed; if the resulting array is viewed it will therefore appear dark and too green (due to the green bias in the Bayer pattern). A trivial weighted-average demosaicing algorithm is provided in the demosaic() method:

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiBayerArray(camera) as output:
        camera.capture(output, 'jpeg', bayer=True)
        print(output.demosaic().shape)

The de-mosaiced data will look more normal when viewed, but will still be of considerably worse quality than the regular camera output (as none of the other usual post-processing steps, such as auto-exposure, white-balance, vignette compensation, and smoothing, have been performed).

Changed in version 1.13: This class now supports the V2 module properly, and handles flipped images, and forced sensor modes correctly.

demosaic()[source]

Perform a rudimentary de-mosaic of self.array, returning the result as a new array. The result of the demosaic is always three dimensional, with the last dimension being the color planes (see output_dims parameter on the constructor).
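A weighted-average scheme of this kind can be sketched as follows. This is a sketch only, under the assumption that unsampled positions in each plane are zero (as in the un-demosaiced array); picamera's actual implementation may differ in its details:

```python
import numpy as np

def weighted_average_demosaic(planes):
    """Average each plane over a 3x3 neighborhood, counting only the
    positions that actually hold a sample. Assumes unsampled positions
    are zero (a genuine zero sample would be miscounted; good enough
    for a sketch)."""
    rows, cols, _ = planes.shape
    mask = (planes > 0).astype(np.float64)
    pad_p = np.pad(planes.astype(np.float64), ((1, 1), (1, 1), (0, 0)))
    pad_m = np.pad(mask, ((1, 1), (1, 1), (0, 0)))
    acc = np.zeros((rows, cols, 3))
    cnt = np.zeros((rows, cols, 3))
    for dy in range(3):
        for dx in range(3):
            acc += pad_p[dy:dy + rows, dx:dx + cols]
            cnt += pad_m[dy:dy + rows, dx:dx + cols]
    return (acc / np.maximum(cnt, 1)).astype(np.uint16)

# A 2x2 fragment with a single red sample of 12; every output cell's
# 3x3 neighborhood sees that one sample
planes = np.zeros((2, 2, 3), dtype=np.uint16)
planes[0, 0, 0] = 12
print(weighted_average_demosaic(planes)[..., 0])
```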

flush()[source]

Does nothing.

15.5. PiMotionArray

class picamera.array.PiMotionArray(camera, size=None)[source]

Produces a 3-dimensional array of motion vectors from the H.264 encoder.

This custom output class is intended to be used with the motion_output parameter of the start_recording() method. Once recording has finished, the class generates a 3-dimensional numpy array organized as (frames, rows, columns) where rows and columns are the number of rows and columns of macro-blocks (16x16 pixel blocks) in the original frames. There is always one extra column of macro-blocks present in motion vector data.

The data-type of the array is an (x, y, sad) structure where x and y are signed 1-byte values, and sad is an unsigned 2-byte value representing the sum of absolute differences of the block. For example:

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiMotionArray(camera) as output:
        camera.resolution = (640, 480)
        camera.start_recording(
              '/dev/null', format='h264', motion_output=output)
        camera.wait_recording(30)
        camera.stop_recording()
        print('Captured %d frames' % output.array.shape[0])
        print('Frames are %dx%d blocks big' % (
            output.array.shape[2], output.array.shape[1]))
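The layout of the array can be sketched without a camera. The structured dtype below matches the (x, y, sad) description above, and the shape arithmetic follows from the macro-block description (one extra column; rows round up when the height is not a multiple of 16):

```python
import numpy as np

# Structured dtype matching the (x, y, sad) description
motion_dtype = np.dtype([
    ('x', 'i1'),    # signed 1-byte horizontal vector
    ('y', 'i1'),    # signed 1-byte vertical vector
    ('sad', 'u2'),  # unsigned 2-byte sum of absolute differences
])

width, height = 640, 480
cols = (width // 16) + 1         # one extra column of macro-blocks
rows = (height + 15) // 16       # rounds up for non-multiples of 16

# A fake 10-frame capture, shaped as PiMotionArray would produce
data = np.zeros((10, rows, cols), dtype=motion_dtype)
data[0, 0, 0] = (3, -4, 100)

# Vector magnitude for frame 0
magnitude = np.sqrt(
    np.square(data[0]['x'].astype(np.float64)) +
    np.square(data[0]['y'].astype(np.float64)))
print(data.shape)        # → (10, 30, 41)
print(magnitude[0, 0])   # → 5.0
```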

If you are using the GPU resizer with your recording, use the optional size parameter to specify the resizer’s output resolution when constructing the array:

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    with picamera.array.PiMotionArray(camera, size=(320, 240)) as output:
        camera.start_recording(
            '/dev/null', format='h264', motion_output=output,
            resize=(320, 240))
        camera.wait_recording(30)
        camera.stop_recording()
        print('Captured %d frames' % output.array.shape[0])
        print('Frames are %dx%d blocks big' % (
            output.array.shape[2], output.array.shape[1]))

Note

This class is not suitable for real-time analysis of motion vector data. See the PiMotionAnalysis class instead.

flush()[source]

Does nothing.

15.6. PiAnalysisOutput

class picamera.array.PiAnalysisOutput(camera, size=None)[source]

Base class for analysis outputs.

This class extends io.IOBase with a stub analyze() method which will be called for each frame output. In this base implementation the method simply raises NotImplementedError.

analyse(array)[source]

Deprecated alias of analyze().

analyze(array)[source]

Stub method for users to override.

writable()[source]

Return whether object was opened for writing.

If False, write() will raise OSError.

15.7. PiRGBAnalysis

class picamera.array.PiRGBAnalysis(camera, size=None)[source]

Provides a basis for per-frame RGB analysis classes.

This custom output class is intended to be used with the start_recording() method when it is called with format set to 'rgb' or 'bgr'. While recording is in progress, the write() method converts incoming frame data into a numpy array and calls the stub analyze() method with the resulting array (this deliberately raises NotImplementedError in this class; you must override it in your descendant class).

Note

If your overridden analyze() method takes longer than a frame interval to run (e.g. 33.333ms when framerate is 30fps), the camera’s effective framerate will be reduced. Furthermore, this doesn’t take into account the overhead of picamera itself, so in practice your method needs to be a bit faster still.

The array passed to analyze() is organized as (rows, columns, channel) where the channels 0, 1, and 2 are R, G, and B respectively (or B, G, R if format is 'bgr').
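While a real subclass must run against the camera, the per-frame computation inside an analyze() override is ordinary numpy. As an illustration, a hypothetical brightness monitor might compute per-channel means like this (the function name and synthetic frame below are illustrative only, not part of the API):

```python
import numpy as np

def mean_channels(frame):
    """Per-channel mean of a (rows, cols, 3) frame: the kind of
    cheap statistic an analyze() override might compute."""
    return frame.reshape(-1, 3).mean(axis=0)

# Synthetic 640x480 pure-red RGB frame (no camera required)
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[:, :, 0] = 255

r, g, b = mean_channels(frame)
print(r, g, b)  # → 255.0 0.0 0.0
```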

15.8. PiYUVAnalysis

class picamera.array.PiYUVAnalysis(camera, size=None)[source]

Provides a basis for per-frame YUV analysis classes.

This custom output class is intended to be used with the start_recording() method when it is called with format set to 'yuv'. While recording is in progress, the write() method converts incoming frame data into a numpy array and calls the stub analyze() method with the resulting array (this deliberately raises NotImplementedError in this class; you must override it in your descendant class).

Note

If your overridden analyze() method takes longer than a frame interval to run (e.g. 33.333ms when framerate is 30fps), the camera’s effective framerate will be reduced. Furthermore, this doesn’t take into account the overhead of picamera itself, so in practice your method needs to be a bit faster still.

The array passed to analyze() is organized as (rows, columns, channel) where channel 0 is Y (luminance), while 1 and 2 are U and V (chrominance) respectively. The chrominance values normally have a quarter of the resolution of the luminance values, but this class resizes all channels to equal resolution for ease of use.

15.9. PiMotionAnalysis

class picamera.array.PiMotionAnalysis(camera, size=None)[source]

Provides a basis for real-time motion analysis classes.

This custom output class is intended to be used with the motion_output parameter of the start_recording() method. While recording is in progress, the write() method converts incoming motion data into numpy arrays and calls the stub analyze() method with the resulting array (this deliberately raises NotImplementedError in this class; you must override it in your descendant class).

Note

If your overridden analyze() method takes longer than a frame interval to run (e.g. 33.333ms when framerate is 30fps), the camera’s effective framerate will be reduced. Furthermore, this doesn’t take into account the overhead of picamera itself, so in practice your method needs to be a bit faster still.

The array passed to analyze() is organized as (rows, columns) where rows and columns are the number of rows and columns of macro-blocks (16x16 pixel blocks) in the original frames. There is always one extra column of macro-blocks present in motion vector data.

The data-type of the array is an (x, y, sad) structure where x and y are signed 1-byte values, and sad is an unsigned 2-byte value representing the sum of absolute differences of the block.

An example of a crude motion detector is given below:

import numpy as np
import picamera
import picamera.array

class DetectMotion(picamera.array.PiMotionAnalysis):
    def analyze(self, a):
        a = np.sqrt(
            np.square(a['x'].astype(np.float64)) +
            np.square(a['y'].astype(np.float64))
            ).clip(0, 255).astype(np.uint8)
        # If there are more than 10 vectors with a magnitude greater
        # than 60, then say we've detected motion
        if (a > 60).sum() > 10:
            print('Motion detected!')

with picamera.PiCamera() as camera:
    with DetectMotion(camera) as output:
        camera.resolution = (640, 480)
        camera.start_recording(
              '/dev/null', format='h264', motion_output=output)
        camera.wait_recording(30)
        camera.stop_recording()

You can use the optional size parameter to specify the output resolution of the GPU resizer, if you are using the resize parameter of start_recording().

15.10. PiArrayTransform

class picamera.array.PiArrayTransform(formats=('rgb', 'bgr', 'rgba', 'bgra'))[source]

A derivative of MMALPythonComponent which eases the construction of custom MMAL transforms by representing buffer data as numpy arrays. The formats parameter specifies the accepted input formats as a sequence of strings (default: ‘rgb’, ‘bgr’, ‘rgba’, ‘bgra’).

Override the transform() method to modify buffers sent to the component, then place it in your MMAL pipeline as you would a normal encoder.

transform(source, target)[source]

This method will be called for every frame passing through the transform. The source and target parameters represent buffers from the input and output ports of the transform respectively. They will be derivatives of MMALBuffer which return a 3-dimensional numpy array when used as context managers. For example:

def transform(self, source, target):
    with source as source_array, target as target_array:
        # Copy the source array data to the target
        target_array[...] = source_array
        # Draw a box around the edges
        target_array[0, :, :] = 0xff
        target_array[-1, :, :] = 0xff
        target_array[:, 0, :] = 0xff
        target_array[:, -1, :] = 0xff
        return False

The target buffer’s meta-data starts out as a copy of the source buffer’s meta-data, but the target buffer’s data starts out uninitialized.