9. API Reference

This package primarily provides the PiCamera class which is a pure Python interface to the Raspberry Pi’s camera module. Various ancillary classes are provided for usage with PiCamera including PiVideoFrame (for holding video frame meta-data), PiCameraCircularIO (for recording video to a ring-buffer), PiEncoder (an abstract base class for camera encoders), and the concrete encoder classes: PiVideoEncoder, PiCookedOneImageEncoder, PiCookedMultiImageEncoder, PiRawOneImageEncoder, and PiRawMultiImageEncoder.

Note

In the documentation below several apparently “private” methods are documented (i.e. methods which have names beginning with an underscore). Most users can ignore these methods; they are intended for those developers that wish to override or extend the encoder implementations used by picamera.

Some may question, given that these methods are intended to be overridden, why they are declared with a leading underscore (which in the Python idiom suggests that these methods are “private” to the class). In the cases where such methods are documented, the author intends these methods to have “protected” status (in the idiom of C++ and Object Pascal). That is to say, they are not intended to be used outside of the declaring class, but are intended to be accessible to, and overridden by, descendant classes.
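The convention can be illustrated with a minimal sketch using hypothetical names (these classes are not part of the picamera API): the underscore-prefixed method is a hook that descendant classes override, while external callers only use the public entry point.

```python
class Encoder:
    def encode(self, data):
        # Public entry point; delegates to the "protected" hook.
        return self._transform(data)

    def _transform(self, data):
        # Protected: intended for override by descendant classes,
        # not for use by external callers.
        return data

class UpperEncoder(Encoder):
    def _transform(self, data):
        # Descendant overrides the hook to change behaviour.
        return data.upper()

print(UpperEncoder().encode("frame"))
```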

9.1. PiCamera

class picamera.PiCamera

Provides a pure Python interface to the Raspberry Pi’s camera module.

Upon construction, this class initializes the camera. The camera_num parameter (which defaults to 0) selects the camera module that the instance will represent. Only the Raspberry Pi compute module currently supports more than one camera, and this class has not yet been tested with more than one module.

The stereo_mode and stereo_decimate parameters configure dual cameras on a compute module for stereoscopic mode. These parameters can only be set at construction time; they cannot be altered later without closing the PiCamera instance and recreating it. The stereo_mode parameter defaults to 'none' (no stereoscopic mode) but can be set to 'side-by-side' or 'top-bottom' to activate a stereoscopic mode. If the stereo_decimate parameter is True, the resolution of the two cameras will be halved so that the resulting image has the same dimensions as if stereoscopic mode were not being used.

Warning

Stereoscopic mode is untested in picamera at this time. If you have the necessary hardware, the author would be most interested to hear of your experiences!

The resolution of the camera is initially set to the display’s resolution unless the display has been disabled (e.g. with tvservice -o) in which case a default of 1280x720 is used.

No preview or recording is started automatically upon construction. Use the capture() method to capture images, the start_recording() method to begin recording video, or the start_preview() method to start live display of the camera’s input.

Several attributes are provided to adjust the camera’s configuration. Some of these can be adjusted while a recording is running, like brightness. Others, like resolution, can only be adjusted when the camera is idle.

When you are finished with the camera, you should ensure you call the close() method to release the camera resources:

camera = PiCamera()
try:
    # do something with the camera
    pass
finally:
    camera.close()

The class supports the context manager protocol to make this particularly easy (upon exiting the with statement, the close() method is automatically called):

with PiCamera() as camera:
    # do something with the camera
    pass
_check_camera_open()

Raise an exception if the camera is already closed.

_check_recording_stopped()

Raise an exception if the camera is currently recording.

_get_image_encoder(camera_port, output_port, format, resize, **options)

Construct an image encoder for the requested parameters.

This method is called by capture() and capture_continuous() to construct an image encoder. The camera_port parameter gives the MMAL camera port that should be enabled for capture by the encoder. The output_port parameter gives the MMAL port that the encoder should read output from (this may be the same as the camera port, but may be different if other component(s) like a splitter have been placed in the pipeline). The format parameter indicates the image format and will be one of:

  • 'jpeg'
  • 'png'
  • 'gif'
  • 'bmp'
  • 'yuv'
  • 'rgb'
  • 'rgba'
  • 'bgr'
  • 'bgra'

The resize parameter indicates the size that the encoder should resize the output to (presumably by including a resizer in the pipeline). Finally, options includes extra keyword arguments that should be passed verbatim to the encoder.

_get_image_format(output, format=None)

Given an output object and an optional format, attempt to determine the requested image format.

This method is used by all capture methods to determine the requested output format. If format is specified as a MIME-type the “image/” prefix is stripped. If format is not specified, then _get_output_format() will be called to attempt to determine format from the output object.

_get_images_encoder(camera_port, output_port, format, resize, **options)

Construct a multi-image encoder for the requested parameters.

This method is largely equivalent to _get_image_encoder() with the exception that the encoder returned should expect to be passed an iterable of outputs to its start() method, rather than a single output object. This method is called by the capture_sequence() method.

All parameters are the same as in _get_image_encoder(). Please refer to the documentation for that method for further information.

_get_output_format(output)

Given an output object, attempt to determine the requested format.

We attempt to determine the filename of the output object and derive a MIME type from the extension. If output has no filename, an error is raised.
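The idea can be sketched with the standard library's mimetypes module. This is an illustrative approximation, not picamera's actual implementation: guess a MIME type from the filename extension and strip the “image/” prefix, raising an error when no image type can be derived.

```python
import mimetypes

def guess_image_format(filename):
    # Derive a capture format from the filename extension via its
    # MIME type, stripping the "image/" prefix as described above.
    mime, _ = mimetypes.guess_type(filename)
    if mime is None or not mime.startswith('image/'):
        raise ValueError('unable to determine format from %r' % filename)
    return mime[len('image/'):]

print(guess_image_format('capture.jpg'))  # jpeg
print(guess_image_format('capture.png'))  # png
```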

_get_ports(from_video_port, splitter_port)

Determine the camera and output ports for given capture options.

See Under the Hood for more information on picamera’s usage of camera, splitter, and encoder ports. The general idea here is that the capture (still) port operates on its own, while the video port is always connected to a splitter component, so requests for a video port also have to specify which splitter port they want to use.

_get_settings()

Returns the current camera settings as an MMAL structure.

This is a utility method for _get_exposure_speed(), _get_analog_gain(), etc. all of which rely on the MMAL_PARAMETER_CAMERA_SETTINGS structure to determine their values.

_get_video_encoder(camera_port, output_port, format, resize, **options)

Construct a video encoder for the requested parameters.

This method is called by start_recording() and record_sequence() to construct a video encoder. The camera_port parameter gives the MMAL camera port that should be enabled for capture by the encoder. The output_port parameter gives the MMAL port that the encoder should read output from (this may be the same as the camera port, but may be different if other component(s) like a splitter have been placed in the pipeline). The format parameter indicates the video format and will be one of:

  • 'h264'
  • 'mjpeg'

The resize parameter indicates the size that the encoder should resize the output to (presumably by including a resizer in the pipeline). Finally, options includes extra keyword arguments that should be passed verbatim to the encoder.

_get_video_format(output, format=None)

Given an output object and an optional format, attempt to determine the requested video format.

This method is used by all recording methods to determine the requested output format. If format is specified as a MIME-type the “video/” or “application/” prefix will be stripped. If format is not specified, then _get_output_format() will be called to attempt to determine format from the output object.

add_overlay(source, size=None, **options)

Adds a static overlay to the preview output.

This method creates a new static overlay using the same rendering mechanism as the preview. Overlays will appear on the Pi’s video output, but will not appear in captures or video recordings. Multiple overlays can exist; each call to add_overlay() returns a new PiOverlayRenderer instance representing the overlay.

The optional size parameter specifies the size of the source image as a (width, height) tuple. If this is omitted or None then the size is assumed to be the same as the camera’s current resolution.

The source must be an object that supports the buffer protocol, with the same length as an image in RGB format (colors represented as interleaved unsigned bytes) of the specified size, after the width has been rounded up to the nearest multiple of 32 and the height has been rounded up to the nearest multiple of 16.

For example, if size is (1280, 720), then source must be a buffer with length 1280 × 720 × 3 bytes, or 2,764,800 bytes (because 1280 is a multiple of 32 and 720 is a multiple of 16, no extra rounding is required). However, if size is (97, 57), then source must be a buffer with length 128 × 64 × 3 bytes, or 24,576 bytes (pixels beyond column 97 and row 57 in the source will be ignored).
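The padding rule above can be sketched as a small helper (illustrative only, not part of the API): round the width up to a multiple of 32 and the height up to a multiple of 16, then multiply by three bytes per RGB pixel.

```python
def overlay_buffer_size(width, height):
    # Round width up to a multiple of 32 and height up to a
    # multiple of 16, as required for overlay source buffers.
    fwidth = (width + 31) // 32 * 32
    fheight = (height + 15) // 16 * 16
    # Three bytes per pixel (interleaved RGB).
    return fwidth * fheight * 3

print(overlay_buffer_size(1280, 720))  # 2764800
print(overlay_buffer_size(97, 57))     # 24576
```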

New overlays default to layer 0, whilst the preview defaults to layer 2. Higher numbered layers obscure lower numbered layers, hence new overlays will be invisible (if the preview is running) by default. You can make the new overlay visible either by making any existing preview transparent (with the alpha property) or by moving the overlay into a layer higher than the preview (with the layer property).

All keyword arguments captured in options are passed onto the PiRenderer constructor. All camera properties except resolution and framerate can be modified while overlays exist. The reason for these exceptions is that the overlay has a static resolution and changing the camera’s mode would require resizing of the source.

capture(output, format=None, use_video_port=False, resize=None, splitter_port=0, **options)

Capture an image from the camera, storing it in output.

If output is a string, it will be treated as a filename for a new file which the image will be written to. Otherwise, output is assumed to be a file-like object and the image data is appended to it (the implementation only assumes the object has a write() method - no other methods will be called).

If format is None (the default), the method will attempt to guess the required image format from the extension of output (if it’s a string), or from the name attribute of output (if it has one). In the case that the format cannot be determined, a PiCameraValueError will be raised.

If format is not None, it must be a string specifying the format that you want the image written to. The format can be a MIME-type or one of the following strings:

  • 'jpeg' - Write a JPEG file
  • 'png' - Write a PNG file
  • 'gif' - Write a GIF file
  • 'bmp' - Write a Windows bitmap file
  • 'yuv' - Write the raw image data to a file in YUV420 format
  • 'rgb' - Write the raw image data to a file in 24-bit RGB format
  • 'rgba' - Write the raw image data to a file in 32-bit RGBA format
  • 'bgr' - Write the raw image data to a file in 24-bit BGR format
  • 'bgra' - Write the raw image data to a file in 32-bit BGRA format
  • 'raw' - Deprecated option for raw captures; the format is taken from the deprecated raw_format attribute

The use_video_port parameter controls whether the camera’s image or video port is used to capture images. It defaults to False which means that the camera’s image port is used. This port is slow but produces better quality pictures. If you need rapid capture up to the rate of video frames, set this to True.

When use_video_port is True, the splitter_port parameter specifies the port of the video splitter that the image encoder will be attached to. This defaults to 0 and most users will have no need to specify anything different. This parameter is ignored when use_video_port is False. See Under the Hood for more information about the video splitter.

If resize is not None (the default), it must be a two-element tuple specifying the width and height that the image should be resized to.

Warning

If resize is specified, or use_video_port is True, Exif metadata will not be included in JPEG output. This is due to an underlying firmware limitation.

Certain file formats accept additional options which can be specified as keyword arguments. Currently, only the 'jpeg' encoder accepts additional options, which are:

  • quality - Defines the quality of the JPEG encoder as an integer ranging from 1 to 100. Defaults to 85. Please note that JPEG quality is not a percentage and definitions of quality vary widely.
  • thumbnail - Defines the size and quality of the thumbnail to embed in the Exif metadata. Specifying None disables thumbnail generation. Otherwise, specify a tuple of (width, height, quality). Defaults to (64, 48, 35).
  • bayer - If True, the raw bayer data from the camera’s sensor is included in the Exif metadata.

Note

The so-called “raw” formats listed above ('yuv', 'rgb', etc.) do not represent the raw bayer data from the camera’s sensor. Rather they provide access to the image data after GPU processing, but before format encoding (JPEG, PNG, etc). Currently, the only method of accessing the raw bayer data is via the bayer parameter described above.

Changed in version 1.0: The resize parameter was added, and raw capture formats can now be specified directly

Changed in version 1.3: The splitter_port parameter was added, and bayer was added as an option for the 'jpeg' format

capture_continuous(output, format=None, use_video_port=False, resize=None, splitter_port=0, burst=False, **options)

Capture images continuously from the camera as an infinite iterator.

This method returns an infinite iterator of images captured continuously from the camera. If output is a string, each captured image is stored in a file named after output after substitution of two values with the format() method. Those two values are:

  • {counter} - a simple incrementor that starts at 1 and increases by 1 for each image taken
  • {timestamp} - a datetime instance

The table below contains several example values of output and the sequence of filenames those values could produce:

output Value                               Filenames                                                                      Notes
'image{counter}.jpg'                       image1.jpg, image2.jpg, image3.jpg, ...
'image{counter:02d}.jpg'                   image01.jpg, image02.jpg, image03.jpg, ...
'image{timestamp}.jpg'                     image2013-10-05 12:07:12.346743.jpg, image2013-10-05 12:07:32.498539.jpg, ...  (1)
'image{timestamp:%H-%M-%S-%f}.jpg'         image12-10-02-561527.jpg, image12-10-14-905398.jpg, ...
'{timestamp:%H%M%S}-{counter:03d}.jpg'     121002-001.jpg, 121013-002.jpg, 121014-003.jpg, ...                            (2)

  1. Note that because timestamp’s default output includes colons (:), the resulting filenames are not suitable for use on Windows. For this reason (and the fact the default contains spaces) it is strongly recommended you always specify a format when using {timestamp}.
  2. You can use both {timestamp} and {counter} in a single format string (multiple times too!) although this tends to be redundant.
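The substitution behaves like a direct call to str.format(); the sketch below reproduces the last table row using a fixed datetime (capture_continuous() performs the equivalent substitution internally with the current time):

```python
from datetime import datetime

# Template combining a timestamp and a zero-padded counter, as in
# the table above.
output = '{timestamp:%H%M%S}-{counter:03d}.jpg'

# Fixed timestamp for reproducibility; capture_continuous() would
# use the time of capture.
ts = datetime(2013, 10, 5, 12, 10, 2)
print(output.format(timestamp=ts, counter=1))  # 121002-001.jpg
```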

If output is not a string, it is assumed to be a file-like object and each image is simply written to this object sequentially. In this case you will likely either want to write something to the object between the images to distinguish them, or clear the object between iterations.

The format, use_video_port, splitter_port, resize, and options parameters are the same as in capture().

If use_video_port is False (the default), the burst parameter can be used to make still port captures faster. Specifically, this prevents the preview from switching resolutions between captures which significantly speeds up consecutive captures from the still port. The downside is that this mode currently has several bugs; the major issue is that if captures are performed too quickly some frames will come back severely underexposed. It is recommended that users avoid the burst parameter unless they absolutely require it and are prepared to work around such issues.

For example, to capture 60 images with a one second delay between them, writing the output to a series of JPEG files named image01.jpg, image02.jpg, etc. one could do the following:

import time
import picamera
with picamera.PiCamera() as camera:
    camera.start_preview()
    try:
        for i, filename in enumerate(camera.capture_continuous('image{counter:02d}.jpg')):
            print(filename)
            time.sleep(1)
            if i == 59:
                break
    finally:
        camera.stop_preview()

Alternatively, to capture JPEG frames as fast as possible into an in-memory stream, performing some processing on each stream until some condition is satisfied:

import io
import time
import picamera
with picamera.PiCamera() as camera:
    stream = io.BytesIO()
    for foo in camera.capture_continuous(stream, format='jpeg'):
        # Truncate the stream to the current position (in case
        # prior iterations output a longer image)
        stream.truncate()
        stream.seek(0)
        if process(stream):
            break

Changed in version 1.0: The resize parameter was added, and raw capture formats can now be specified directly

Changed in version 1.3: The splitter_port parameter was added

capture_sequence(outputs, format='jpeg', use_video_port=False, resize=None, splitter_port=0, burst=False, **options)

Capture a sequence of consecutive images from the camera.

This method accepts a sequence or iterator of outputs each of which must either be a string specifying a filename for output, or a file-like object with a write method. For each item in the sequence or iterator of outputs, the camera captures a single image as fast as it can.

The format, use_video_port, splitter_port, resize, and options parameters are the same as in capture(), but format defaults to 'jpeg'. The format is not derived from the filenames in outputs by this method.

If use_video_port is False (the default), the burst parameter can be used to make still port captures faster. Specifically, this prevents the preview from switching resolutions between captures which significantly speeds up consecutive captures from the still port. The downside is that this mode currently has several bugs; the major issue is that if captures are performed too quickly some frames will come back severely underexposed. It is recommended that users avoid the burst parameter unless they absolutely require it and are prepared to work around such issues.

For example, to capture 3 consecutive images:

import time
import picamera
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture_sequence([
        'image1.jpg',
        'image2.jpg',
        'image3.jpg',
        ])
    camera.stop_preview()

If you wish to capture a large number of images, a list comprehension or generator expression can be used to construct the list of filenames to use:

import time
import picamera
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture_sequence([
        'image%02d.jpg' % i
        for i in range(100)
        ])
    camera.stop_preview()

More complex effects can be obtained by using a generator function to provide the filenames or output objects.

Changed in version 1.0: The resize parameter was added, and raw capture formats can now be specified directly

Changed in version 1.3: The splitter_port parameter was added

close()

Finalizes the state of the camera.

After successfully constructing a PiCamera object, you should ensure you call the close() method once you are finished with the camera (e.g. in the finally section of a try..finally block). This method stops all recording and preview activities and releases all resources associated with the camera; this is necessary to prevent GPU memory leaks.

record_sequence(outputs, format='h264', resize=None, splitter_port=1, **options)

Record a sequence of video clips from the camera.

This method accepts a sequence or iterator of outputs each of which must either be a string specifying a filename for output, or a file-like object with a write method.

The method acts as an iterator itself, yielding each item of the sequence in turn. In this way, the caller can control how long to record to each item by only permitting the loop to continue when ready to switch to the next output.

The format, splitter_port, resize, and options parameters are the same as in start_recording(), but format defaults to 'h264'. The format is not derived from the filenames in outputs by this method.

For example, to record 3 consecutive 10-second video clips, writing the output to a series of H.264 files named clip01.h264, clip02.h264, and clip03.h264 one could use the following:

import picamera
with picamera.PiCamera() as camera:
    for filename in camera.record_sequence([
            'clip01.h264',
            'clip02.h264',
            'clip03.h264']):
        print('Recording to %s' % filename)
        camera.wait_recording(10)

Alternatively, a more flexible method of writing the previous example (which is easier to expand to a large number of output files) is by using a generator expression as the input sequence:

import picamera
with picamera.PiCamera() as camera:
    for filename in camera.record_sequence(
            'clip%02d.h264' % i for i in range(3)):
        print('Recording to %s' % filename)
        camera.wait_recording(10)

More advanced techniques are also possible by utilising infinite sequences, such as those generated by itertools.cycle(). In the following example, recording is switched between two in-memory streams. Whilst one stream is recording, the other is being analysed. The script only stops recording when a video recording meets some criteria defined by the process function:

import io
import itertools
import picamera
with picamera.PiCamera() as camera:
    analyse = None
    for stream in camera.record_sequence(
            itertools.cycle((io.BytesIO(), io.BytesIO()))):
        if analyse is not None:
            if process(analyse):
                break
            analyse.seek(0)
            analyse.truncate()
        camera.wait_recording(5)
        analyse = stream

New in version 1.3.

remove_overlay(overlay)

Removes a static overlay from the preview output.

This method removes an overlay which was previously created by add_overlay(). The overlay parameter specifies the PiRenderer instance that was returned by add_overlay().

split_recording(output, motion_output=None, splitter_port=1, **options)

Continue the recording in the specified output; close existing output.

When called, the video encoder will wait for the next appropriate split point (an inline SPS header), then will cease writing to the current output (and close it, if it was specified as a filename), and continue writing to the newly specified output.

If output is a string, it will be treated as a filename for a new file which the video will be written to. Otherwise, output is assumed to be a file-like object and the video data is appended to it (the implementation only assumes the object has a write() method - no other methods will be called).

The motion_output parameter can be used to redirect the output of the motion vector data in the same fashion as output. If motion_output is None (the default) then motion vector data will not be redirected and will continue being written to the output specified by the motion_output parameter given to start_recording(). Alternatively, if you only wish to redirect motion vector data, you can set output to None and give a new value for motion_output.

The splitter_port parameter specifies which port of the video splitter the encoder you wish to change outputs is attached to. This defaults to 1 and most users will have no need to specify anything different. Valid values are between 0 and 3 inclusive.

Note that unlike start_recording(), you cannot specify format or other options as these cannot be changed in the middle of recording. Only the new output (and motion_output) can be specified. Furthermore, the format of the recording is currently limited to H264, and inline_headers must be True when start_recording() is called (this is the default).

Changed in version 1.3: The splitter_port parameter was added

Changed in version 1.5: The motion_output parameter was added

start_preview(**options)

Displays the preview overlay.

This method starts a camera preview as an overlay on the Pi’s primary display (HDMI or composite). A PiRenderer instance (more specifically, a PiPreviewRenderer) is constructed with the keyword arguments captured in options, and is returned from the method (this instance is also accessible from the preview attribute for as long as the renderer remains active). By default, the renderer will be opaque and fullscreen.

This means the default preview overrides whatever is currently visible on the display. More specifically, the preview does not rely on a graphical environment like X-Windows (it can run quite happily from a TTY console); it is simply an overlay on the Pi’s video output. To stop the preview and reveal the display again, call stop_preview(). The preview can be started and stopped multiple times during the lifetime of the PiCamera object.

All other camera properties can be modified “live” while the preview is running (e.g. brightness).

Note

Because the default preview typically obscures the screen, ensure you have a means of stopping a preview before starting one. If the preview obscures your interactive console you won’t be able to Alt+Tab back to it as the preview isn’t in a window. If you are in an interactive Python session, simply pressing Ctrl+D usually suffices to terminate the environment, including the camera and its associated preview.

start_recording(output, format=None, resize=None, splitter_port=1, **options)

Start recording video from the camera, storing it in output.

If output is a string, it will be treated as a filename for a new file which the video will be written to. Otherwise, output is assumed to be a file-like object and the video data is appended to it (the implementation only assumes the object has a write() method - no other methods will be called).

If format is None (the default), the method will attempt to guess the required video format from the extension of output (if it’s a string), or from the name attribute of output (if it has one). In the case that the format cannot be determined, a PiCameraValueError will be raised.

If format is not None, it must be a string specifying the format that you want the video written to. The format can be a MIME-type or one of the following strings:

  • 'h264' - Write an H.264 video stream
  • 'mjpeg' - Write an M-JPEG video stream
  • 'yuv' - Write the raw video data to a file in YUV420 format
  • 'rgb' - Write the raw video data to a file in 24-bit RGB format
  • 'rgba' - Write the raw video data to a file in 32-bit RGBA format
  • 'bgr' - Write the raw video data to a file in 24-bit BGR format
  • 'bgra' - Write the raw video data to a file in 32-bit BGRA format

If resize is not None (the default), it must be a two-element tuple specifying the width and height that the video recording should be resized to. This is particularly useful for recording video using the full area of the camera sensor (which is not possible without down-sizing the output).

The splitter_port parameter specifies the port of the built-in splitter that the video encoder will be attached to. This defaults to 1 and most users will have no need to specify anything different. If you wish to record multiple (presumably resized) streams simultaneously, specify a value between 0 and 3 inclusive for this parameter, ensuring that you do not specify a port that is currently in use.

Certain formats accept additional options which can be specified as keyword arguments. The 'h264' format accepts the following additional options:

  • profile - The H.264 profile to use for encoding. Defaults to ‘high’, but can be one of ‘baseline’, ‘main’, ‘high’, or ‘constrained’.
  • intra_period - The key frame rate (the rate at which I-frames are inserted in the output). Defaults to None, but can be any 32-bit integer value representing the number of frames between successive I-frames. The special value 0 causes the encoder to produce a single initial I-frame, and then only P-frames subsequently. Note that split_recording() will fail in this mode.
  • inline_headers - When True, specifies that the encoder should output SPS/PPS headers within the stream to ensure GOPs (groups of pictures) are self describing. This is important for streaming applications where the client may wish to seek within the stream, and enables the use of split_recording(). Defaults to True if not specified.
  • sei - When True, specifies the encoder should include “Supplemental Enhancement Information” within the output stream. Defaults to False if not specified.
  • motion_output - Indicates the output destination for motion vector estimation data. When None (the default), motion data is not output. If set to a string, it is assumed to be a filename which should be opened for motion data to be written to. Any other value is assumed to be a file-like object to which motion vector data is to be written (the object must have a write method).

All formats accept the following additional options:

  • bitrate - The bitrate at which video will be encoded. Defaults to 17000000 (17Mbps) if not specified. The maximum value is 25000000 (25Mbps). Bitrate 0 indicates the encoder should not use bitrate control (the encoder is limited by the quality only).
  • quality - Specifies the quality that the encoder should attempt to maintain. For the 'h264' format, use values between 10 and 40 where 10 is extremely high quality, and 40 is extremely low (20-25 is usually a reasonable range for H.264 encoding). For the mjpeg format, use JPEG quality values between 1 and 100 (where higher values are higher quality). Quality 0 is special and seems to be a “reasonable quality” default.
  • quantization - Deprecated alias for quality.
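As a rough guide to choosing a bitrate, the size of an H.264 recording is approximately bitrate × seconds ÷ 8 bytes. The helper below is a back-of-the-envelope sketch (not part of the picamera API) using the default bitrate:

```python
def estimated_size_mb(bitrate=17000000, seconds=60):
    # Approximate recording size in megabytes: bits per second times
    # duration, divided by 8 bits per byte, divided by 1e6.
    return bitrate * seconds / 8 / 1e6

# One minute at the default 17Mbps is roughly 127.5 MB.
print(estimated_size_mb())  # 127.5
```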

Changed in version 1.0: The resize parameter was added, and 'mjpeg' was added as a recording format

Changed in version 1.3: The splitter_port parameter was added

Changed in version 1.5: The quantization parameter was deprecated in favor of quality, and the motion_output parameter was added.

stop_preview()

Hides the preview overlay.

If start_preview() has previously been called, this method shuts down the preview display which generally results in the underlying display becoming visible again. If a preview is not currently running, no exception is raised - the method will simply do nothing.

stop_recording(splitter_port=1)

Stop recording video from the camera.

After calling this method the video encoder will be shut down and output will stop being written to the file-like object specified with start_recording(). If an error occurred during recording and wait_recording() has not been called since the error then this method will raise the exception.

The splitter_port parameter specifies which port of the video splitter the encoder you wish to stop is attached to. This defaults to 1 and most users will have no need to specify anything different. Valid values are between 0 and 3 inclusive.

Changed in version 1.3: The splitter_port parameter was added

wait_recording(timeout=0, splitter_port=1)

Wait on the video encoder for timeout seconds.

It is recommended that this method is called while recording to check for exceptions. If an error occurs during recording (for example out of disk space), an exception will only be raised when the wait_recording() or stop_recording() methods are called.

If timeout is 0 (the default) the function will immediately return (or raise an exception if an error has occurred).

The splitter_port parameter specifies which port of the video splitter the encoder you wish to wait on is attached to. This defaults to 1 and most users will have no need to specify anything different. Valid values are between 0 and 3 inclusive.

Changed in version 1.3: The splitter_port parameter was added

ISO

Retrieves or sets the apparent ISO setting of the camera.

Deprecated since version 1.8: Please use the iso attribute instead.

_still_encoding

Configures the encoding of the camera’s still port.

This attribute controls the encoding of the camera’s still port (see Under the Hood for more information). It is intended for internal use, but may be useful to developers wishing to implement custom encoders.

analog_gain

Retrieves the current analog gain of the camera.

When queried, this property returns the analog gain currently being used by the camera. The value represents the analog gain of the sensor prior to digital conversion. The value is returned as a Fraction instance.

New in version 1.6.

annotate_background

Controls whether a black background is drawn behind the annotation.

The annotate_background attribute is a bool indicating whether or not a black background will be drawn behind the annotation text. The background will appear in all output including image captures and video recording. The default is False.

annotate_frame_num

Controls whether the current frame number is drawn as an annotation.

The annotate_frame_num attribute is a bool indicating whether or not the current frame number is rendered as an annotation, similar to annotate_text. The default is False.

annotate_text

Retrieves or sets a text annotation for all output.

When queried, the annotate_text property returns the current annotation (if no annotation has been set, this is simply a blank string).

When set, the property immediately applies the annotation to the preview (if it is running) and to any future captures or video recording. Strings longer than 255 characters, or strings containing non-ASCII characters will raise a PiCameraValueError. The default value is ''.

awb_gains

Gets or sets the auto-white-balance gains of the camera.

When queried, this attribute returns a tuple of values representing the (red, blue) balance of the camera. The red and blue values are returned as Fraction instances. The values will be between 0.0 and 8.0.

When set, this attribute adjusts the camera’s auto-white-balance gains. The property can be specified as a single value in which case both red and blue gains will be adjusted equally, or as a (red, blue) tuple. Values can be specified as an int, float or Fraction and each gain must be between 0.0 and 8.0. Typical values for the gains are between 0.9 and 1.9. The property can be set while recordings or previews are in progress.

Note

This attribute only has an effect when awb_mode is set to 'off'.

Changed in version 1.6: Prior to version 1.6, this attribute was write-only.
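As a small sketch of working with the queried values (the gain numbers below are hypothetical, shaped like those the camera returns), the Fraction instances support ordinary comparison and convert cleanly to floats:

```python
from fractions import Fraction

# Hypothetical gain values, shaped like those awb_gains returns when queried
red_gain, blue_gain = Fraction(61, 32), Fraction(187, 128)

# The gains are ordinary Fractions: they compare and convert cleanly
assert Fraction(0) <= red_gain <= Fraction(8)
as_floats = (float(red_gain), float(blue_gain))

# To fix the current white balance, one would copy the gains, disable
# auto mode, then assign them back (sketch only; requires a real camera):
#     gains = camera.awb_gains
#     camera.awb_mode = 'off'
#     camera.awb_gains = gains
```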

awb_mode

Retrieves or sets the auto-white-balance mode of the camera.

When queried, the awb_mode property returns a string representing the auto-white-balance setting of the camera. The possible values can be obtained from the PiCamera.AWB_MODES attribute, and are as follows:

  • 'off'
  • 'auto'
  • 'sunlight'
  • 'cloudy'
  • 'shade'
  • 'tungsten'
  • 'fluorescent'
  • 'incandescent'
  • 'flash'
  • 'horizon'

When set, the property adjusts the camera’s auto-white-balance mode. The property can be set while recordings or previews are in progress. The default value is 'auto'.

brightness

Retrieves or sets the brightness setting of the camera.

When queried, the brightness property returns the brightness level of the camera as an integer between 0 and 100. When set, the property adjusts the brightness of the camera. Brightness can be adjusted while previews or recordings are in progress. The default value is 50.

closed

Returns True if the close() method has been called.

color_effects

Retrieves or sets the current color effect applied by the camera.

When queried, the color_effects property either returns None which indicates that the camera is using normal color settings, or a (u, v) tuple where u and v are integer values between 0 and 255.

When set, the property changes the color effect applied by the camera. The property can be set while recordings or previews are in progress. For example, to make the image black and white set the value to (128, 128). The default value is None.

contrast

Retrieves or sets the contrast setting of the camera.

When queried, the contrast property returns the contrast level of the camera as an integer between -100 and 100. When set, the property adjusts the contrast of the camera. Contrast can be adjusted while previews or recordings are in progress. The default value is 0.

crop

Retrieves or sets the zoom applied to the camera’s input.

Deprecated since version 1.8: Please use the zoom attribute instead.

digital_gain

Retrieves the current digital gain of the camera.

When queried, this property returns the digital gain currently being used by the camera. The value represents the digital gain the camera applies after conversion of the sensor’s analog output. The value is returned as a Fraction instance.

New in version 1.6.

drc_strength

Retrieves or sets the dynamic range compression strength of the camera.

When queried, the drc_strength property returns a string indicating the amount of dynamic range compression the camera applies to images.

When set, the attribute adjusts the strength of the dynamic range compression applied to the camera’s output. Valid values are given in the list below:

  • 'off'
  • 'low'
  • 'medium'
  • 'high'

The default value is 'off'. All possible values for the attribute can be obtained from the PiCamera.DRC_STRENGTHS attribute.

New in version 1.6.

exif_tags

Holds a mapping of the Exif tags to apply to captured images.

Note

Please note that Exif tagging is only supported with the jpeg format.

By default several Exif tags are automatically applied to any images taken with the capture() method: IFD0.Make (which is set to RaspberryPi), IFD0.Model (which is set to RP_OV5647), and three timestamp tags: IFD0.DateTime, EXIF.DateTimeOriginal, and EXIF.DateTimeDigitized which are all set to the current date and time just before the picture is taken.

If you wish to set additional Exif tags, or override any of the aforementioned tags, simply add entries to the exif_tags map before calling capture(). For example:

camera.exif_tags['IFD0.Copyright'] = 'Copyright (c) 2013 Foo Industries'

The Exif standard mandates ASCII encoding for all textual values, hence strings containing non-ASCII characters will cause an encoding error to be raised when capture() is called. If you wish to set binary values, use a bytes() value:

camera.exif_tags['EXIF.UserComment'] = b'Something containing\x00NULL characters'

Warning

Binary Exif values are currently ignored; this appears to be a libmmal or firmware bug.

You may also specify datetime values, integer, or float values, all of which will be converted to appropriate ASCII strings (datetime values are formatted as YYYY:MM:DD HH:MM:SS in accordance with the Exif standard).
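A minimal sketch of that conversion (the exif_datetime helper is hypothetical, not part of picamera; the format string matches the Exif standard described above):

```python
from datetime import datetime

# Format a datetime the way textual Exif values are encoded:
# YYYY:MM:DD HH:MM:SS per the Exif standard
def exif_datetime(dt):
    return dt.strftime('%Y:%m:%d %H:%M:%S')

stamp = exif_datetime(datetime(2013, 7, 4, 12, 30, 5))

# Entries like these would be added to camera.exif_tags before capture()
tags = {
    'IFD0.Copyright': 'Copyright (c) 2013 Foo Industries',
    'EXIF.DateTimeOriginal': stamp,
}
```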

The currently supported Exif tags are:

Group Tags
IFD0, IFD1 ImageWidth, ImageLength, BitsPerSample, Compression, PhotometricInterpretation, ImageDescription, Make, Model, StripOffsets, Orientation, SamplesPerPixel, RowsPerStrip, StripByteCounts, XResolution, YResolution, PlanarConfiguration, ResolutionUnit, TransferFunction, Software, DateTime, Artist, WhitePoint, PrimaryChromaticities, JPEGInterchangeFormat, JPEGInterchangeFormatLength, YCbCrCoefficients, YCbCrSubSampling, YCbCrPositioning, ReferenceBlackWhite, Copyright
EXIF ExposureTime, FNumber, ExposureProgram, SpectralSensitivity, ISOSpeedRatings, OECF, ExifVersion, DateTimeOriginal, DateTimeDigitized, ComponentsConfiguration, CompressedBitsPerPixel, ShutterSpeedValue, ApertureValue, BrightnessValue, ExposureBiasValue, MaxApertureValue, SubjectDistance, MeteringMode, LightSource, Flash, FocalLength, SubjectArea, MakerNote, UserComment, SubSecTime, SubSecTimeOriginal, SubSecTimeDigitized, FlashpixVersion, ColorSpace, PixelXDimension, PixelYDimension, RelatedSoundFile, FlashEnergy, SpatialFrequencyResponse, FocalPlaneXResolution, FocalPlaneYResolution, FocalPlaneResolutionUnit, SubjectLocation, ExposureIndex, SensingMethod, FileSource, SceneType, CFAPattern, CustomRendered, ExposureMode, WhiteBalance, DigitalZoomRatio, FocalLengthIn35mmFilm, SceneCaptureType, GainControl, Contrast, Saturation, Sharpness, DeviceSettingDescription, SubjectDistanceRange, ImageUniqueID
GPS GPSVersionID, GPSLatitudeRef, GPSLatitude, GPSLongitudeRef, GPSLongitude, GPSAltitudeRef, GPSAltitude, GPSTimeStamp, GPSSatellites, GPSStatus, GPSMeasureMode, GPSDOP, GPSSpeedRef, GPSSpeed, GPSTrackRef, GPSTrack, GPSImgDirectionRef, GPSImgDirection, GPSMapDatum, GPSDestLatitudeRef, GPSDestLatitude, GPSDestLongitudeRef, GPSDestLongitude, GPSDestBearingRef, GPSDestBearing, GPSDestDistanceRef, GPSDestDistance, GPSProcessingMethod, GPSAreaInformation, GPSDateStamp, GPSDifferential
EINT InteroperabilityIndex, InteroperabilityVersion, RelatedImageFileFormat, RelatedImageWidth, RelatedImageLength
exposure_compensation

Retrieves or sets the exposure compensation level of the camera.

When queried, the exposure_compensation property returns an integer value between -25 and 25 indicating the exposure level of the camera. Larger values result in brighter images.

When set, the property adjusts the camera’s exposure compensation level. Each increment represents 1/6th of a stop. Hence setting the attribute to 6 increases exposure by 1 stop. The property can be set while recordings or previews are in progress. The default value is 0.
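Since each increment is a sixth of a stop, the conversion from a compensation setting to whole stops is simple arithmetic; the helper below is a hypothetical illustration, not part of the picamera API:

```python
# Each exposure_compensation increment represents 1/6th of a stop, so a
# setting converts to stops of exposure adjustment as follows:
def compensation_to_stops(value):
    if not -25 <= value <= 25:
        raise ValueError('exposure_compensation must be between -25 and 25')
    return value / 6

# e.g. camera.exposure_compensation = 6 brightens the image by one stop
```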

exposure_mode

Retrieves or sets the exposure mode of the camera.

When queried, the exposure_mode property returns a string representing the exposure setting of the camera. The possible values can be obtained from the PiCamera.EXPOSURE_MODES attribute, and are as follows:

  • 'off'
  • 'auto'
  • 'night'
  • 'nightpreview'
  • 'backlight'
  • 'spotlight'
  • 'sports'
  • 'snow'
  • 'beach'
  • 'verylong'
  • 'fixedfps'
  • 'antishake'
  • 'fireworks'

When set, the property adjusts the camera’s exposure mode. The property can be set while recordings or previews are in progress. The default value is 'auto'.

exposure_speed

Retrieves the current shutter speed of the camera.

When queried, this property returns the shutter speed currently being used by the camera. If you have set shutter_speed to a non-zero value, then exposure_speed and shutter_speed should be equal. However, if shutter_speed is set to 0 (auto), then you can read the actual shutter speed being used from this attribute. The value is returned as an integer representing a number of microseconds. This is a read-only property.

New in version 1.6.

frame

Retrieves information about the current frame recorded from the camera.

When video recording is active (after a call to start_recording()), this attribute will return a PiVideoFrame tuple containing information about the current frame that the camera is recording.

If multiple video recordings are currently in progress (after multiple calls to start_recording() with different values for the splitter_port parameter), which encoder’s frame information is returned is arbitrary. If you require information from a specific encoder, you will need to extract it from _encoders explicitly.

Querying this property when the camera is not recording will result in an exception.

Note

There is a small window of time when querying this attribute will return None after calling start_recording(). If this attribute returns None, this means that the video encoder has been initialized, but the camera has not yet returned any frames.

framerate

Retrieves or sets the framerate at which video-port based image captures, video recordings, and previews will run.

When queried, the framerate property returns the rate at which the camera’s video and preview ports will operate as a Fraction instance which can be easily converted to an int or float.

Note

For backwards compatibility, a derivative of the Fraction class is actually used which permits the value to be treated as a tuple of (numerator, denominator).

Setting and retrieving framerate as a (numerator, denominator) tuple is deprecated and will be removed in 2.0. Please use a Fraction instance instead (which is just as accurate and also permits direct use with math operators).

When set, the property reconfigures the camera so that the next call to recording and previewing methods will use the new framerate. The framerate can be specified as an int, float, Fraction, or a (numerator, denominator) tuple. The camera must not be closed, and no recording must be active when the property is set.

Note

This attribute, in combination with resolution, determines the mode that the camera operates in. The actual sensor framerate and resolution used by the camera is influenced, but not directly set, by this property. See Camera Modes for more information.
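To sketch why Fraction is used here: a framerate like NTSC's 30000/1001 is stored exactly, where a float would only approximate it, and the instance works directly with math operators (the frame_time_us calculation below is illustrative, not part of the API):

```python
from fractions import Fraction

# NTSC-style framerates survive exactly as Fractions; a float would only
# approximate the value. This is the kind of value framerate accepts and
# returns.
rate = Fraction(30000, 1001)

# Per-frame duration in microseconds; int() truncates the exact result
frame_time_us = int(1_000_000 / rate)
```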

hflip

Retrieves or sets whether the camera’s output is horizontally flipped.

When queried, the hflip property returns a boolean indicating whether or not the camera’s output is horizontally flipped. The property can be set while recordings or previews are in progress. The default value is False.

image_denoise

Retrieves or sets whether denoise will be applied to image captures.

When queried, the image_denoise property returns a boolean value indicating whether or not the camera software will apply a denoise algorithm to image captures.

When set, the property activates or deactivates the denoise algorithm for image captures. The property can be set while recordings or previews are in progress. The default value is True.

New in version 1.7.

image_effect

Retrieves or sets the current image effect applied by the camera.

When queried, the image_effect property returns a string representing the effect the camera will apply to captured video. The possible values can be obtained from the PiCamera.IMAGE_EFFECTS attribute, and are as follows:

  • 'none'
  • 'negative'
  • 'solarize'
  • 'sketch'
  • 'denoise'
  • 'emboss'
  • 'oilpaint'
  • 'hatch'
  • 'gpen'
  • 'pastel'
  • 'watercolor'
  • 'film'
  • 'blur'
  • 'saturation'
  • 'colorswap'
  • 'washedout'
  • 'posterise'
  • 'colorpoint'
  • 'colorbalance'
  • 'cartoon'
  • 'deinterlace1'
  • 'deinterlace2'

When set, the property changes the effect applied by the camera. The property can be set while recordings or previews are in progress, but only certain effects work while recording video (notably 'negative' and 'solarize'). The default value is 'none'.

image_effect_params

Retrieves or sets the parameters for the current effect.

When queried, the image_effect_params property either returns None (for effects which have no configurable parameters, or if no parameters have been configured), or a tuple of numeric values up to six elements long.

When set, the property changes the parameters of the current effect as a sequence of numbers, or a single number. Attempting to set parameters on an effect which does not support parameters, or providing an incompatible set of parameters for an effect will raise a PiCameraValueError exception.

The effects which have parameters, and what combinations those parameters can take is as follows:

Effect Parameters Description
'solarize' yuv, x0, y0, y1, y2 yuv controls whether data is processed as RGB (0) or YUV (1). Input values from 0 to x0 - 1 are remapped linearly onto the range 0 to y0. Values from x0 to 255 are remapped linearly onto the range y1 to y2.
x0, y0, y1, y2 Same as above, but yuv defaults to 0 (process as RGB).
yuv Same as above, but x0, y0, y1, y2 default to 128, 128, 128, 0 respectively.
'colorpoint' quadrant quadrant specifies which quadrant of the U/V space to retain chroma from: 0=green, 1=red/yellow, 2=blue, 3=purple. There is no default; this effect does nothing until parameters are set.
'colorbalance' lens, r, g, b, u, v lens specifies the lens shading strength (0.0 to 256.0, where 0.0 indicates lens shading has no effect). r, g, b are multipliers for their respective color channels (0.0 to 256.0). u and v are offsets added to the U/V plane (0 to 255).
lens, r, g, b Same as above, but u and v default to 0.
lens, r, b Same as above, but g also defaults to 1.0.
'colorswap' dir If dir is 0, swap RGB to BGR. If dir is 1, swap RGB to BRG.
'posterise' steps Control the quantization steps for the image. Valid values are 2 to 32, and the default is 4.
'blur' size Specifies the size of the kernel. Valid values are 1 or 2.
'film' strength, u, v strength specifies the strength of effect. u and v are offsets added to the U/V plane (0 to 255).
'watercolor' u, v u and v specify offsets to add to the U/V plane (0 to 255).
  No parameters indicates no U/V effect.

New in version 1.8.

iso

Retrieves or sets the apparent ISO setting of the camera.

When queried, the iso property returns the ISO setting of the camera, a value which represents the sensitivity of the camera to light. Lower values (e.g. 100) imply less sensitivity than higher values (e.g. 400 or 800). Lower sensitivities tend to produce less “noisy” (smoother) images, but operate poorly in low light conditions.

When set, the property adjusts the sensitivity of the camera. Valid values are between 0 (auto) and 1600. The actual value used when iso is explicitly set will be one of the following values (whichever is closest): 100, 200, 320, 400, 500, 640, 800.

The attribute can be adjusted while previews or recordings are in progress. The default value is 0 which means automatically determine a value according to image-taking conditions.

Note

You can query the analog_gain and digital_gain attributes to determine the actual gains being used by the camera. If both are 1.0 this equates to ISO 100. Please note that this capability requires an up to date firmware (#692 or later).

Note

With iso settings other than 0 (auto), the exposure_mode property becomes non-functional.

Note

Some users on the Pi camera forum have noted that higher ISO values than 800 (specifically up to 1600) can be achieved in certain conditions with exposure_mode set to 'sports' and iso set to 0. It doesn’t appear to be possible to manually request an ISO setting higher than 800, but the picamera library will permit settings up to 1600 in case the underlying firmware permits such settings in particular circumstances.

led

Sets the state of the camera’s LED via GPIO.

If a GPIO library is available (only RPi.GPIO is currently supported), and if the python process has the necessary privileges (typically this means running as root via sudo), this property can be used to set the state of the camera’s LED as a boolean value (True is on, False is off).

Note

This is a write-only property. While it can be used to control the camera’s LED, you cannot query the state of the camera’s LED using this property.

meter_mode

Retrieves or sets the metering mode of the camera.

When queried, the meter_mode property returns the method by which the camera determines the exposure as one of the following strings:

  • 'average'
  • 'spot'
  • 'backlit'
  • 'matrix'

When set, the property adjusts the camera’s metering mode. All modes set up two regions: a center region, and an outer region. The major difference between each mode is the size of the center region. The 'backlit' mode has the largest central region (30% of the width), while 'spot' has the smallest (10% of the width).

The property can be set while recordings or previews are in progress. The default value is 'average'. All possible values for the attribute can be obtained from the PiCamera.METER_MODES attribute.

overlays

Retrieves all active PiRenderer overlays.

If no overlays are currently active, overlays will return an empty iterable. Otherwise, it will return an iterable of PiRenderer instances which are currently acting as overlays. Note that the preview renderer is an exception to this: it is not included as an overlay despite being derived from PiRenderer.

preview

Retrieves the PiRenderer displaying the camera preview.

If no preview is currently active, preview will return None. Otherwise, it will return the instance of PiRenderer which is currently connected to the camera’s preview port for rendering what the camera sees. You can use the attributes of the PiRenderer class to configure the appearance of the preview. For example, to make the preview semi-transparent:

import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    camera.preview.alpha = 128

New in version 1.8.

preview_alpha

Retrieves or sets the opacity of the preview window.

Deprecated since version 1.8: Please use the alpha attribute of the preview object instead.

preview_fullscreen

Retrieves or sets full-screen for the preview window.

Deprecated since version 1.8: Please use the fullscreen attribute of the preview object instead.

preview_layer

Retrieves or sets the layer of the preview window.

Deprecated since version 1.8: Please use the layer attribute of the preview object instead.

preview_window

Retrieves or sets the size of the preview window.

Deprecated since version 1.8: Please use the window attribute of the preview object instead.

previewing

Returns True if the start_preview() method has been called, and no stop_preview() call has been made yet.

Deprecated since version 1.8: Test whether preview is None instead.

raw_format

Retrieves or sets the raw format of the camera’s ports.

Deprecated since version 1.0: Please use 'yuv' or 'rgb' directly as a format in the various capture methods instead.

recording

Returns True if the start_recording() method has been called, and no stop_recording() call has been made yet.

resolution

Retrieves or sets the resolution at which image captures, video recordings, and previews will be captured.

When queried, the resolution property returns the resolution at which the camera will operate as a tuple of (width, height) measured in pixels. This is the resolution that the capture() method will produce images at, and the resolution that start_recording() will produce videos at.

When set, the property reconfigures the camera so that the next call to these methods will use the new resolution. The resolution must be specified as a (width, height) tuple, the camera must not be closed, and no recording must be active when the property is set.

The property defaults to the Pi’s currently configured display resolution unless the display has been disabled (with tvservice -o) in which case it defaults to 1280x720 (720p).

Note

This attribute, in combination with framerate, determines the mode that the camera operates in. The actual sensor framerate and resolution used by the camera is influenced, but not directly set, by this property. See Camera Modes for more information.

rotation

Retrieves or sets the current rotation of the camera’s image.

When queried, the rotation property returns the rotation applied to the image. Valid values are 0, 90, 180, and 270.

When set, the property changes the rotation applied to the camera’s input. The property can be set while recordings or previews are in progress. The default value is 0.

saturation

Retrieves or sets the saturation setting of the camera.

When queried, the saturation property returns the color saturation of the camera as an integer between -100 and 100. When set, the property adjusts the saturation of the camera. Saturation can be adjusted while previews or recordings are in progress. The default value is 0.

sharpness

Retrieves or sets the sharpness setting of the camera.

When queried, the sharpness property returns the sharpness level of the camera (a measure of the amount of post-processing to reduce or increase image sharpness) as an integer between -100 and 100. When set, the property adjusts the sharpness of the camera. Sharpness can be adjusted while previews or recordings are in progress. The default value is 0.

shutter_speed

Retrieves or sets the shutter speed of the camera in microseconds.

When queried, the shutter_speed property returns the shutter speed of the camera in microseconds, or 0 which indicates that the speed will be automatically determined by the auto-exposure algorithm. Faster shutter times naturally require greater amounts of illumination and vice versa.

When set, the property adjusts the shutter speed of the camera, which most obviously affects the illumination of subsequently captured images. Shutter speed can be adjusted while previews or recordings are running. The default value is 0 (auto).

Note

You can query the exposure_speed attribute to determine the actual shutter speed being used when this attribute is set to 0. Please note that this capability requires an up to date firmware (#692 or later).

Note

In later firmwares, this attribute is limited by the value of the framerate attribute. For example, if framerate is set to 30fps, the shutter speed cannot be slower than 33,333µs (1/fps).
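The 1/fps limit above can be sketched as a small helper (hypothetical, not part of the picamera API) that computes the slowest available shutter speed in microseconds for a given framerate:

```python
# With the framerate cap described above, the slowest available shutter
# speed (in microseconds) for a given framerate is roughly 1/fps:
def max_shutter_speed_us(framerate):
    return int(1_000_000 / framerate)

# At 30fps the shutter cannot be slower than about 33,333us; lowering the
# framerate is how one obtains longer exposures.
```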

vflip

Retrieves or sets whether the camera’s output is vertically flipped.

When queried, the vflip property returns a boolean indicating whether or not the camera’s output is vertically flipped. The property can be set while recordings or previews are in progress. The default value is False.

video_denoise

Retrieves or sets whether denoise will be applied to video recordings.

When queried, the video_denoise property returns a boolean value indicating whether or not the camera software will apply a denoise algorithm to video recordings.

When set, the property activates or deactivates the denoise algorithm for video recordings. The property can be set while recordings or previews are in progress. The default value is True.

New in version 1.7.

video_stabilization

Retrieves or sets the video stabilization mode of the camera.

When queried, the video_stabilization property returns a boolean value indicating whether or not the camera attempts to compensate for motion.

When set, the property activates or deactivates video stabilization. The property can be set while recordings or previews are in progress. The default value is False.

Note

The built-in video stabilization only accounts for vertical and horizontal motion, not rotation.

zoom

Retrieves or sets the zoom applied to the camera’s input.

When queried, the zoom property returns a (x, y, w, h) tuple of floating point values ranging from 0.0 to 1.0, indicating the proportion of the image to include in the output (this is also known as the “Region of Interest” or ROI). The default value is (0.0, 0.0, 1.0, 1.0) which indicates that everything should be included. The property can be set while recordings or previews are in progress.
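A common use of this property is a centred digital zoom. The helper below is a hypothetical sketch showing how to derive the (x, y, w, h) tuple the property expects from a zoom factor:

```python
# Compute a centred region of interest for a given zoom factor, as the
# (x, y, w, h) tuple of 0.0-1.0 proportions that the zoom property expects
# (factor 1.0 means no zoom)
def centred_roi(factor):
    w = h = 1.0 / factor
    x = y = (1.0 - w) / 2.0
    return (x, y, w, h)

# e.g. camera.zoom = centred_roi(2.0) would show the central quarter of
# the sensor's field of view
```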

9.2. PiCameraCircularIO

class picamera.PiCameraCircularIO(camera, size=None, seconds=None, bitrate=17000000, splitter_port=1)

A derivative of CircularIO which tracks camera frames.

PiCameraCircularIO provides an in-memory stream based on a ring buffer. It is a specialization of CircularIO which associates video frame meta-data with the recorded stream, accessible from the frames property.

Warning

The class makes a couple of assumptions which will cause the frame meta-data tracking to break if they are not adhered to:

  • the stream is only ever appended to - no writes ever start from the middle of the stream
  • the stream is never truncated (from the right; being ring buffer based, left truncation will occur automatically)

The camera parameter specifies the PiCamera instance that will be recording video to the stream. If specified, the size parameter determines the maximum size of the stream in bytes. If size is not specified (or None), then seconds must be specified instead. This provides the maximum length of the stream in seconds, assuming a data rate in bits-per-second given by the bitrate parameter (which defaults to 17000000, or 17Mbps, which is also the default bitrate used for video recording by PiCamera). You cannot specify both size and seconds.
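The seconds-to-bytes relationship implied by those parameters can be sketched as follows (the helper is illustrative, not part of the class; the actual implementation may round differently):

```python
# The ring buffer size implied by the seconds and bitrate parameters is
# the bitrate (bits per second) times the duration, converted to bytes:
def ring_buffer_size(seconds, bitrate=17000000):
    return seconds * bitrate // 8

# Ten seconds at the default 17Mbps needs roughly 21.25MB of RAM, which
# is worth checking against the Pi's available memory before recording
```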

The splitter_port parameter specifies the port of the built-in splitter that the video encoder will be attached to. This defaults to 1 and most users will have no need to specify anything different. If you do specify something else, ensure it is equal to the splitter_port parameter of the corresponding call to PiCamera.start_recording(). For example:

import picamera

with picamera.PiCamera() as camera:
    with picamera.PiCameraCircularIO(camera, splitter_port=2) as stream:
        camera.start_recording(stream, format='h264', splitter_port=2)
        camera.wait_recording(10, splitter_port=2)
        camera.stop_recording(splitter_port=2)
frames

Returns an iterator over the frame meta-data.

As the camera records video to the stream, the class captures the meta-data associated with each frame (in the form of a PiVideoFrame tuple), discarding meta-data for frames which are no longer fully stored within the underlying ring buffer. You can use the frame meta-data to locate, for example, the first keyframe present in the stream in order to determine an appropriate range to extract.
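The keyframe search mentioned above can be sketched without a camera. The namedtuple and constant below are simplified stand-ins for picamera.PiVideoFrame and PiVideoFrameType.key_frame; real code would iterate stream.frames instead of the sample list:

```python
from collections import namedtuple

# Simplified stand-in for picamera.PiVideoFrame (real frames carry more
# fields); 'key_frame' stands in for PiVideoFrameType.key_frame
Frame = namedtuple('Frame', ('index', 'frame_type', 'position'))
KEY_FRAME = 'key_frame'

def first_keyframe_position(frames):
    # Scan the frame meta-data for the first keyframe; its position marks
    # a sensible start point for extracting data from the ring buffer
    for f in frames:
        if f.frame_type == KEY_FRAME:
            return f.position
    return None

sample_frames = [
    Frame(0, 'frame', 0),
    Frame(1, KEY_FRAME, 4096),
    Frame(2, 'frame', 9000),
]
```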

9.3. CircularIO

class picamera.CircularIO(size)

A thread-safe stream which uses a ring buffer for storage.

CircularIO provides an in-memory stream similar to the io.BytesIO class. However, unlike BytesIO its underlying storage is a ring buffer with a fixed maximum size. Once the maximum size is reached, writing effectively loops round to the beginning of the ring and starts overwriting the oldest content.

The size parameter specifies the maximum size of the stream in bytes. The read(), tell(), and seek() methods all operate equivalently to those in io.BytesIO whilst write() only differs in the wrapping behaviour described above. A read1() method is also provided for efficient reading of the underlying ring buffer in write-sized chunks (or less).

A re-entrant threading lock guards all operations, and is accessible for external use via the lock attribute.

The performance of the class is geared toward faster writing than reading on the assumption that writing will be the common operation and reading the rare operation (a reasonable assumption for the camera use-case, but not necessarily for more general usage).

getvalue()

Return bytes containing the entire contents of the buffer.

read(n=-1)

Read up to n bytes from the stream and return them. As a convenience, if n is unspecified or -1, readall() is called. Fewer than n bytes may be returned if there are fewer than n bytes from the current stream position to the end of the stream.

If 0 bytes are returned, and n was not 0, this indicates end of the stream.

read1(n=-1)

Read up to n bytes from the stream using only a single call to the underlying object.

In the case of CircularIO this roughly corresponds to returning the content from the current position up to the end of the write that added that content to the stream (assuming no subsequent writes overwrote the content). read1() is particularly useful for efficient copying of the stream’s content.

readable()

Returns True, indicating that the stream supports read().

readall()

Read and return all bytes from the stream until EOF, using multiple calls to the stream if necessary.

seek(offset, whence=0)

Change the stream position to the given byte offset. offset is interpreted relative to the position indicated by whence. Values for whence are:

  • SEEK_SET or 0 – start of the stream (the default); offset should be zero or positive
  • SEEK_CUR or 1 – current stream position; offset may be negative
  • SEEK_END or 2 – end of the stream; offset is usually negative

Return the new absolute position.
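Since seek() operates equivalently to io.BytesIO, the three whence modes can be illustrated with BytesIO directly:

```python
import io

# seek() here matches io.BytesIO semantics, so BytesIO serves to
# illustrate the three whence modes and the returned absolute position
stream = io.BytesIO(b'0123456789')
assert stream.seek(3) == 3                # SEEK_SET: absolute offset
assert stream.seek(2, io.SEEK_CUR) == 5   # relative to current position
assert stream.seek(-1, io.SEEK_END) == 9  # relative to end of stream
```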

seekable()

Returns True, indicating the stream supports seek() and tell().

tell()

Return the current stream position.

truncate(size=None)

Resize the stream to the given size in bytes (or the current position if size is not specified). This resizing can extend or reduce the current stream size. In case of extension, the contents of the new file area will be NUL (\x00) bytes. The new stream size is returned.

The current stream position isn’t changed unless the resizing is expanding the stream, in which case it may be set to the maximum stream size if the expansion causes the ring buffer to loop around.

writable()

Returns True, indicating that the stream supports write().

write(b)

Write the given bytes or bytearray object, b, to the underlying stream and return the number of bytes written.

lock

A re-entrant threading lock which is used to guard all operations.

size

Return the maximum size of the buffer in bytes.
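Because CircularIO implements the standard binary stream interface described above, the usual file-like idioms apply to it directly. The sketch below demonstrates those idioms with io.BytesIO as a stand-in (CircularIO itself is only importable where the picamera package is installed); on a Raspberry Pi, picamera.CircularIO(size) can be substituted for the BytesIO constructor:

```python
import io
import shutil

# io.BytesIO stands in for picamera.CircularIO here; both expose the
# same write()/seek()/read()/getvalue() interface documented above.
stream = io.BytesIO()
stream.write(b'hello ')
stream.write(b'world')

# getvalue() returns the entire contents of the buffer
print(stream.getvalue())          # b'hello world'

# seek() relative to the end of the stream, then read the remainder
stream.seek(-5, io.SEEK_END)
print(stream.read())              # b'world'

# read1() makes shutil.copyfileobj() an efficient way to copy the
# stream's content to another file-like object
stream.seek(0)
target = io.BytesIO()
shutil.copyfileobj(stream, target)
print(target.getvalue())          # b'hello world'
```

Note that with a real CircularIO of limited size, writes beyond the buffer's capacity discard the oldest data, so getvalue() returns only the most recent bytes.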

9.4. PiVideoFrameType

class picamera.PiVideoFrameType

This class simply defines constants used to represent the type of a frame in PiVideoFrame.frame_type. Effectively it is a namespace for an enum.

frame

Indicates a predicted frame (P-frame). This is the most common frame type.

key_frame

Indicates an intra-frame (I-frame) also known as a key frame.

sps_header

Indicates an inline SPS/PPS header (rather than picture data) which is typically used as a split point.

motion_data

Indicates the frame is inline motion vector data, rather than picture data.

New in version 1.5.
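Since an SPS/PPS header typically marks a safe split point, a common pattern is to scan frame meta-data for frames of that type. The sketch below uses stand-in constants mirroring this class (on a Raspberry Pi, compare against picamera.PiVideoFrameType directly); the frames list is a hypothetical sequence of frame_type values gathered during a recording:

```python
# Stand-in constants mirroring PiVideoFrameType; the actual values are
# an implementation detail, so always compare against the class
# attributes rather than literal integers.
FRAME, KEY_FRAME, SPS_HEADER, MOTION_DATA = range(4)

# Hypothetical frame_type values observed while recording H.264
frames = [SPS_HEADER, KEY_FRAME, FRAME, FRAME, SPS_HEADER, KEY_FRAME]

# An inline SPS/PPS header marks a safe point at which to split the
# stream (see sps_header above)
split_points = [i for i, ftype in enumerate(frames) if ftype == SPS_HEADER]
print(split_points)  # [0, 4]
```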

9.5. PiVideoFrame

class picamera.PiVideoFrame(index, frame_type, frame_size, video_size, split_size, timestamp)

This class is a namedtuple derivative used to store information about a video frame. It is recommended that you access the information stored by this class by attribute name rather than position (for example: frame.index rather than frame[0]).

index

Returns the zero-based number of the frame. This is a monotonic counter that is simply incremented every time the camera returns a frame-end buffer. As a consequence, this attribute cannot be used to detect dropped frames. Nor does it necessarily represent actual frames; it will be incremented for SPS headers and motion data buffers too.

frame_type

Returns a constant indicating the kind of data that the frame contains (see PiVideoFrameType). Please note that certain frame types contain no image data at all.

frame_size

Returns the size in bytes of the current frame.

video_size

Returns the size in bytes of the entire video up to the current frame. Note that this is unlikely to match the size of the actual file/stream written so far. Firstly this is because the frame attribute is only updated when the encoder outputs the end of a frame, which will cause the reported size to be smaller than the actual amount written. Secondly this is because a stream may utilize buffering which will cause the actual amount written (e.g. to disk) to lag behind the value reported by this attribute.

split_size

Returns the size in bytes of the video recorded since the last call to either start_recording() or split_recording(). For the reasons explained above, this may differ from the size of the actual file/stream written so far.

timestamp

Returns the presentation timestamp (PTS) of the current frame as reported by the encoder. This is represented by the number of microseconds (millionths of a second) since video recording started. As the frame attribute is only updated when the encoder outputs the end of a frame, this value may lag behind the actual time since start_recording() was called.

Warning

Currently, the video encoder occasionally returns “time unknown” values in this field which picamera represents as None. If you are querying this property you will need to check the value is not None before using it.
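Any arithmetic on this attribute should therefore guard against None first. A minimal illustrative helper (pts_seconds is not part of picamera):

```python
def pts_seconds(timestamp):
    """Convert a PiVideoFrame.timestamp (a PTS in microseconds, or
    None when the encoder reports "time unknown") to seconds."""
    if timestamp is None:
        return None
    return timestamp / 1000000

print(pts_seconds(2500000))  # 2.5
print(pts_seconds(None))     # None
```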

Changed in version 1.5: Deprecated header and keyframe attributes and added the new frame_type attribute instead.

header

Contains a bool indicating whether the current frame is actually an SPS/PPS header. Typically it is best to split an H.264 stream so that it starts with an SPS/PPS header.

Deprecated since version 1.5: Please compare frame_type to PiVideoFrameType.sps_header instead.

keyframe

Returns a bool indicating whether the current frame is a keyframe (an intra-frame, or I-frame in MPEG parlance).

Deprecated since version 1.5: Please compare frame_type to PiVideoFrameType.key_frame instead.

position

Returns the zero-based position of the frame in the stream containing it.

9.6. PiEncoder

class picamera.PiEncoder(parent, camera_port, input_port, format, resize, **options)

Base implementation of an MMAL encoder for use by PiCamera.

The parent parameter specifies the PiCamera instance that has constructed the encoder. The camera_port parameter provides the MMAL camera port that the encoder should enable for capture (this will be the still or video port of the camera component). The input_port parameter specifies the MMAL port that the encoder should connect to its input. Sometimes this will be the same as the camera port, but if other components are present in the pipeline (e.g. a splitter), it may be different.

The format parameter specifies the format that the encoder should produce in its output. This is specified as a string and will be one of the following for image encoders:

  • 'jpeg'
  • 'png'
  • 'gif'
  • 'bmp'
  • 'yuv'
  • 'rgb'
  • 'rgba'
  • 'bgr'
  • 'bgra'

And one of the following for video encoders:

  • 'h264'
  • 'mjpeg'

The resize parameter is either None (indicating no resizing should take place), or a (width, height) tuple specifying the resolution that the output of the encoder should be resized to.

Finally, the options parameter specifies additional keyword arguments that can be used to configure the encoder (e.g. bitrate for videos, or quality for images).

The class has a number of attributes:

camera_port

A pointer to the camera output port that needs to be activated and deactivated in order to start/stop capture. This is not necessarily the port that the encoder component’s input port is connected to (for example, in the case of video-port based captures, this will be the camera video port behind the splitter).

encoder

A pointer to the MMAL encoder component, or None if no encoder component has been created (some encoder classes don’t use an actual encoder component, for example PiRawImageMixin).

encoder_connection

A pointer to the MMAL connection linking the encoder’s input port to the camera, splitter, or resizer output port (depending on configuration), if any.

event

A threading.Event instance used to synchronize operations (like start, stop, and split) between the control thread and the callback thread.

exception

If an exception occurs during the encoder callback, this attribute is used to store the exception until it can be re-raised in the control thread.

format

The image or video format that the encoder is expected to produce. This is equal to the value of the format parameter.

input_port

A pointer to the MMAL port that the encoder component’s input port should be connected to.

output_port

A pointer to the MMAL port of the encoder’s output. In the case that no encoder component is created, this should be the camera/component output port responsible for producing data. In other words, this attribute must be set on initialization.

outputs

A mapping of key to (output, opened) tuples where output is a file-like object, and opened is a bool indicating whether or not we opened the output object (and thus whether we are responsible for eventually closing it).

outputs_lock

A threading.Lock instance used to protect access to outputs.

parent

The PiCamera instance that created this PiEncoder instance.

pool

A pointer to a pool of MMAL buffers.

resizer

A pointer to the MMAL resizer component, or None if no resizer component has been created.

resizer_connection

A pointer to the MMAL connection linking the resizer’s input port to the camera or splitter’s output port, if any.

_callback(port, buf)

The encoder’s main callback function.

When the encoder is active, this method is periodically called in a background thread. The port parameter specifies the MMAL port providing the output (typically this is the encoder’s output port, but in the case of unencoded captures may simply be a camera port), while the buf parameter is an MMAL buffer header pointer which can be used to obtain the data to write, along with meta-data about the current frame.

This method must release the MMAL buffer header before returning (failure to do so will cause a lockup), and should recycle buffers if expecting further data (the _callback_recycle() method can be called to perform the latter duty). Finally, this method must set event when the encoder has finished (and should set exception if an exception occurred during encoding).

Developers wishing to write a custom encoder class may find it simpler to override the _callback_write() method, rather than deal with these complexities.

_callback_recycle(port, buf)

Recycles the buffer on behalf of the encoder callback function.

This method is called by _callback() when there is a buffer to recycle (because further output is expected). It is unlikely descendent classes will have a need to override this method, but if they override the _callback() method they may wish to call it.

_callback_write(buf, key=0)

Writes output on behalf of the encoder callback function.

This method is called by _callback() to handle writing to an object in outputs identified by key. The buf parameter is an MMAL buffer header pointer which can be used to obtain the length of data available (buf[0].length), a pointer to the data (buf[0].data) which should typically be used with ctypes.string_at(), and meta-data about the contents of the buffer (buf[0].flags). The method is expected to return a boolean to indicate whether output is complete (True) or whether more data is expected (False).

The default implementation simply writes the contents of the buffer to the output identified by key, and returns True if the buffer flags indicate end of stream. Image encoders will typically override the return value to indicate True on end of frame (as they only wish to output a single image). Video encoders will typically override this method to determine where key-frames and SPS headers occur.
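The contract described above can be sketched with plain Python stand-ins for the MMAL types (the real method receives an MMAL buffer header pointer and reads buf[0].data via ctypes.string_at() and buf[0].flags; the FakeBuffer class and flag value below are purely illustrative):

```python
import io

EOS_FLAG = 0x004  # stand-in for an MMAL end-of-stream buffer flag


class FakeBuffer:
    """Illustrative stand-in for an MMAL buffer header."""
    def __init__(self, data, flags=0):
        self.data = data
        self.flags = flags


class SketchEncoder:
    def __init__(self):
        # maps key -> file-like output, as in the outputs attribute
        self.outputs = {}

    def _callback_write(self, buf, key=0):
        # write the buffer's payload to the output identified by key...
        self.outputs[key].write(buf.data)
        # ...and report whether output is complete: True on
        # end-of-stream, False when more data is expected
        return bool(buf.flags & EOS_FLAG)


enc = SketchEncoder()
enc.outputs[0] = io.BytesIO()
print(enc._callback_write(FakeBuffer(b'frame-data')))        # False
print(enc._callback_write(FakeBuffer(b'tail', EOS_FLAG)))    # True
print(enc.outputs[0].getvalue())                             # b'frame-datatail'
```

A descendent class overriding this method would typically change only the return-value logic (e.g. returning True at end of frame instead of end of stream).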

_close_output(key=0)

Closes the output associated with key in outputs.

Closes the output object associated with the specified key, and removes it from the outputs dictionary (if we didn’t open the object then we attempt to flush it instead).

_create_connections()

Creates all connections between MMAL components.

This method is called to connect the encoder and the optional resizer to the input port provided by the camera. It sets the encoder_connection and resizer_connection attributes as required.

_create_encoder()

Creates and configures the MMAL encoder component.

This method only constructs the encoder; it does not connect it to the input port. The method sets the encoder attribute to the constructed encoder component, and the output_port attribute to the encoder’s output port (or the previously constructed resizer’s output port if one has been requested). Descendent classes extend this method to finalize encoder configuration.

Note

It should be noted that this method is called with the initializer’s option keyword arguments. This base implementation expects no additional arguments, but descendent classes extend the parameter list to include options relevant to them.

_create_pool()

Allocates a pool of MMAL buffers for the encoder.

This method is expected to construct an MMAL pool of buffers for the output_port, and store the result in the pool attribute.

_create_resizer(width, height)

Creates and configures an MMAL resizer component.

This is called when the initializer’s resize parameter is something other than None. The width and height parameters are passed to the constructed resizer. Note that this method only constructs the resizer; it does not connect it to the encoder. The method sets the resizer attribute to the constructed resizer component.

_open_output(output, key=0)

Opens output and associates it with key in outputs.

If output is a string, this method opens it as a filename and keeps track of the fact that the encoder was the one to open it (which implies that _close_output() should eventually close it). Otherwise, output is assumed to be a file-like object and is used verbatim. The opened output is added to the outputs dictionary with the specified key.

close()

Finalizes the encoder and deallocates all structures.

This method is called by the camera prior to destroying the encoder (or more precisely, letting it go out of scope to permit the garbage collector to destroy it at some future time). The method destroys all components that the various create methods constructed and resets their attributes.

start(output)

Starts the encoder object writing to the specified output.

This method is called by the camera to start the encoder capturing data from the camera to the specified output. The output parameter is either a filename, or a file-like object (for image and video encoders), or an iterable of filenames or file-like objects (for multi-image encoders).

stop()

Stops the encoder, regardless of whether it’s finished.

This method is called by the camera to terminate the execution of the encoder. Typically, this is used with video to stop the recording, but can potentially be called in the middle of image capture to terminate the capture.

wait(timeout=None)

Waits for the encoder to finish (successfully or otherwise).

This method is called by the owning camera object to block execution until the encoder has completed its task. If the timeout parameter is None, the method will block indefinitely. Otherwise, the timeout parameter specifies the (potentially fractional) number of seconds to block for. If the encoder finishes successfully within the timeout, the method returns True. Otherwise, it returns False.

active

Returns True if the MMAL encoder exists and is enabled.

9.7. PiVideoEncoder

class picamera.PiVideoEncoder(parent, camera_port, input_port, format, resize, **options)

Encoder for video recording.

This derivative of PiEncoder configures itself for H.264 or MJPEG encoding. It also introduces a split() method which is used by split_recording() and record_sequence() to redirect future output to a new filename or object. Finally, it also extends PiEncoder.start() and PiEncoder._callback_write() to track video frame meta-data, and to permit recording motion data to a separate output object.

_callback_write(buf, key=0)

Extended to implement video frame meta-data tracking, and to handle splitting video recording to the next output when split() is called.

_create_encoder(bitrate=17000000, intra_period=None, profile='high', quantization=0, quality=0, inline_headers=True, sei=False, motion_output=None)

Extends the base _create_encoder() implementation to configure the video encoder for H.264 or MJPEG output.

split(output, motion_output=None)

Called to switch the encoder’s output.

This method is called by split_recording() and record_sequence() to switch the encoder’s output object to the output parameter (which can be a filename or a file-like object, as with start()).

start(output, motion_output=None)

Extended to initialize video frame meta-data tracking.

9.8. PiImageEncoder

class picamera.PiImageEncoder(parent, camera_port, input_port, format, resize, **options)

Encoder for image capture.

This derivative of PiEncoder extends the _create_encoder() method to configure the encoder for a variety of encoded image outputs (JPEG, PNG, etc.).

_create_encoder(quality=85, thumbnail=(64, 48, 35), bayer=False)

Extends the base _create_encoder() implementation to configure the image encoder for JPEG, PNG, etc.

9.9. PiRawMixin

class picamera.PiRawMixin(parent, camera_port, input_port, format, resize, **options)

Mixin class for “raw” (unencoded) output.

This mixin class overrides the initializer of PiEncoder, along with _create_resizer() and _create_encoder() to configure the pipeline for unencoded output. Specifically, it disables the construction of an encoder, and sets the output port to the input port passed to the initializer, unless resizing is required (either for actual resizing, or for format conversion) in which case the resizer’s output is used.

_callback_write(buf, key=0)

Overridden to strip alpha bytes when required.

_create_connections()

Overridden to skip creating an encoder connection; only a resizer connection is required (if one has been configured).

_create_encoder()

Overridden to skip creating an encoder. Instead, this class simply uses the resizer’s port as the output port (if a resizer has been configured) or the specified input port otherwise.

_create_resizer(width, height)

Overridden to configure the resizer’s output with the required encoding.

9.10. PiCookedVideoEncoder

class picamera.PiCookedVideoEncoder(parent, camera_port, input_port, format, resize, **options)

Video encoder for encoded recordings.

This class is a derivative of PiVideoEncoder and only exists to provide naming symmetry with the image encoder classes.

9.11. PiRawVideoEncoder

class picamera.PiRawVideoEncoder(parent, camera_port, input_port, format, resize, **options)

Video encoder for unencoded recordings.

This class is a derivative of PiVideoEncoder and the PiRawMixin class intended for use with start_recording() when it is called with an unencoded format.

Warning

This class creates an inheritance diamond. Take care to determine the MRO of super-class calls.

9.12. PiOneImageEncoder

class picamera.PiOneImageEncoder(parent, camera_port, input_port, format, resize, **options)

Encoder for single image capture.

This class simply extends _callback_write() to terminate capture at frame end (i.e. after a single frame has been received).

9.13. PiMultiImageEncoder

class picamera.PiMultiImageEncoder(parent, camera_port, input_port, format, resize, **options)

Encoder for multiple image capture.

This class extends PiImageEncoder to handle an iterable of outputs instead of a single output. The _callback_write() method is extended to terminate capture when the iterable is exhausted, while PiEncoder._open_output() is overridden to begin iteration and rely on the new _next_output() method to advance output to the next item in the iterable.

_next_output(key=0)

This method moves output to the next item from the iterable passed to start().

9.14. PiRawImageMixin

class picamera.PiRawImageMixin(parent, camera_port, input_port, format, resize, **options)

Mixin class for “raw” (unencoded) image capture.

The _callback_write() method is overridden to manually calculate when to terminate output.

_callback_write(buf, key=0)

Overridden to manually calculate when to terminate capture (see comments in __init__()).

9.15. PiCookedOneImageEncoder

class picamera.PiCookedOneImageEncoder(parent, camera_port, input_port, format, resize, **options)

Encoder for “cooked” (encoded) single image output.

This encoder extends PiOneImageEncoder to include Exif tags in the output.

9.16. PiRawOneImageEncoder

class picamera.PiRawOneImageEncoder(parent, camera_port, input_port, format, resize, **options)

Single image encoder for unencoded capture.

This class is a derivative of PiOneImageEncoder and the PiRawImageMixin class intended for use with capture() (et al) when it is called with an unencoded image format.

Warning

This class creates an inheritance diamond. Take care to determine the MRO of super-class calls.

9.17. PiCookedMultiImageEncoder

class picamera.PiCookedMultiImageEncoder(parent, camera_port, input_port, format, resize, **options)

Encoder for “cooked” (encoded) multiple image output.

This encoder descends from PiMultiImageEncoder but includes no new functionality as video-port based encodes (which is all this class is used for) don’t support Exif tag output.

9.18. PiRawMultiImageEncoder

class picamera.PiRawMultiImageEncoder(parent, camera_port, input_port, format, resize, **options)

Multiple image encoder for unencoded capture.

This class is a derivative of PiMultiImageEncoder and the PiRawImageMixin class intended for use with capture_sequence() when it is called with an unencoded image format.

Warning

This class creates an inheritance diamond. Take care to determine the MRO of super-class calls.

9.19. PiRenderer

class picamera.PiRenderer(parent, layer=0, alpha=255, fullscreen=True, window=None, crop=None, rotation=0, vflip=False, hflip=False)

Base implementation of an MMAL video renderer for use by PiCamera.

The parent parameter specifies the PiCamera instance that has constructed this renderer. The layer parameter specifies the layer that the renderer will inhabit. Higher numbered layers obscure lower numbered layers (unless they are partially transparent). The initial opacity of the renderer is specified by the alpha parameter (which defaults to 255, meaning completely opaque). The fullscreen parameter, which defaults to True, indicates whether the renderer should occupy the entire display. Finally, the window parameter (which only has meaning when fullscreen is False) is a four-tuple of (x, y, width, height) which gives the screen coordinates that the renderer should occupy when it isn’t full-screen.

This base class isn’t directly used by PiCamera, but the two derivatives defined below, PiOverlayRenderer and PiPreviewRenderer, are used to produce overlays and the camera preview respectively.

close()

Finalizes the renderer and deallocates all structures.

This method is called by the camera prior to destroying the renderer (or more precisely, letting it go out of scope to permit the garbage collector to destroy it at some future time).

alpha

Retrieves or sets the opacity of the renderer.

When queried, the alpha property returns a value between 0 and 255 indicating the opacity of the renderer, where 0 is completely transparent and 255 is completely opaque. The default value is 255. The property can be set while recordings or previews are in progress.

crop

Retrieves or sets the area to read from the source.

The crop property specifies the rectangular area that the renderer will read from the source as a 4-tuple of (x, y, width, height). The special value (0, 0, 0, 0) (which is also the default) means to read the entire area of the source. The property can be set while recordings or previews are active.

For example, if the camera’s resolution is currently configured as 1280x720, setting this attribute to (160, 160, 640, 400) will crop the preview to the center 640x400 pixels of the input. Note that this property does not affect the size of the output rectangle, which is controlled with fullscreen and window.

Note

This property only affects the renderer; it has no bearing on image captures or recordings (unlike the zoom property of the PiCamera class).

fullscreen

Retrieves or sets whether the renderer appears full-screen.

The fullscreen property is a bool which controls whether the renderer takes up the entire display or not. When set to False, the window property can be used to control the precise size of the renderer display. The property can be set while recordings or previews are active.

hflip

Retrieves or sets whether the renderer’s output is horizontally flipped.

When queried, the hflip property returns a boolean indicating whether or not the renderer’s output is horizontally flipped. The property can be set while recordings or previews are in progress. The default is False.

Note

This property only affects the renderer; it has no bearing on image captures or recordings (unlike the hflip property of the PiCamera class).

layer

Retrieves or sets the layer of the renderer.

The layer property is an integer which controls the layer that the renderer occupies. Higher valued layers obscure lower valued layers (with 0 being the “bottom” layer). The default value is 2. The property can be set while recordings or previews are in progress.

rotation

Retrieves or sets the current rotation of the renderer.

When queried, the rotation property returns the rotation applied to the renderer. Valid values are 0, 90, 180, and 270.

When set, the property changes the rotation applied to the renderer’s output. The property can be set while recordings or previews are active. The default is 0.

Note

This property only affects the renderer; it has no bearing on image captures or recordings (unlike the rotation property of the PiCamera class).

vflip

Retrieves or sets whether the renderer’s output is vertically flipped.

When queried, the vflip property returns a boolean indicating whether or not the renderer’s output is vertically flipped. The property can be set while recordings or previews are in progress. The default is False.

Note

This property only affects the renderer; it has no bearing on image captures or recordings (unlike the vflip property of the PiCamera class).

window

Retrieves or sets the size of the renderer.

When the fullscreen property is set to False, the window property specifies the size and position of the renderer on the display. The property is a 4-tuple consisting of (x, y, width, height). The property can be set while recordings or previews are active.

9.20. PiOverlayRenderer

class picamera.PiOverlayRenderer(parent, source, size=None, layer=0, alpha=255, fullscreen=True, window=None, crop=None, rotation=0, vflip=False, hflip=False)

Represents an MMAL renderer with a static source for overlays.

This class descends from PiRenderer and adds a static source for the MMAL renderer. The optional size parameter specifies the size of the source image as a (width, height) tuple. If this is omitted or None then the size is assumed to be the same as the parent camera’s current resolution.

The source must be an object that supports the buffer protocol which has the same length as an image in RGB format (colors represented as interleaved unsigned bytes) with the specified size after the width has been rounded up to the nearest multiple of 32, and the height has been rounded up to the nearest multiple of 16.

For example, if size is (1280, 720), then source must be a buffer with length 1280 x 720 x 3 bytes, or 2,764,800 bytes (because 1280 is a multiple of 32 and 720 is a multiple of 16, no extra rounding is required). However, if size is (97, 57), then source must be a buffer with length 128 x 64 x 3 bytes, or 24,576 bytes (pixels beyond column 97 and row 57 in the source will be ignored).
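The required buffer length can be computed by rounding the width up to a multiple of 32 and the height up to a multiple of 16, as described above (the helper below is illustrative; it is not part of picamera):

```python
def overlay_buffer_len(width, height, bytes_per_pixel=3):
    """Length in bytes of an RGB overlay source buffer: width padded
    to a multiple of 32, height padded to a multiple of 16."""
    fwidth = (width + 31) // 32 * 32
    fheight = (height + 15) // 16 * 16
    return fwidth * fheight * bytes_per_pixel

print(overlay_buffer_len(1280, 720))  # 2764800 (no padding required)
print(overlay_buffer_len(97, 57))     # 24576 (padded to 128 x 64)
```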

The layer, alpha, fullscreen, and window parameters are the same as in PiRenderer.

update(source)

Update the overlay with a new source of data.

The new source buffer must have the same size as the original buffer used to create the overlay. There is currently no method for changing the size of an existing overlay (remove and recreate the overlay if you require this).

9.21. PiPreviewRenderer

class picamera.PiPreviewRenderer(parent, source, layer=2, alpha=255, fullscreen=True, window=None, crop=None, rotation=0, vflip=False, hflip=False)

Represents an MMAL renderer which uses the camera’s preview as a source.

This class descends from PiRenderer and adds an MMAL connection to connect the renderer to an MMAL port. The source parameter specifies the MMAL port to connect to the renderer.

The layer, alpha, fullscreen, and window parameters are the same as in PiRenderer.

9.22. PiNullSink

class picamera.PiNullSink(parent, source)

Implements an MMAL null-sink which can be used in place of a renderer.

The parent parameter specifies the PiCamera instance which constructed this null-sink. The source parameter specifies the MMAL port which the null-sink should connect to its input.

The null-sink can act as a drop-in replacement for PiRenderer in most cases, but obviously doesn’t implement attributes like alpha, layer, etc. as it simply dumps any incoming frames. This is also the reason that this class doesn’t derive from PiRenderer like all other classes in this module.

close()

Finalizes the null-sink and deallocates all structures.

This method is called by the camera prior to destroying the null-sink (or more precisely, letting it go out of scope to permit the garbage collector to destroy it at some future time).

9.23. Exceptions

exception picamera.PiCameraWarning

Base class for PiCamera warnings.

exception picamera.PiCameraError

Base class for PiCamera errors.

exception picamera.PiCameraValueError

Raised when an invalid value is fed to a PiCamera object.

exception picamera.PiCameraRuntimeError

Raised when an invalid sequence of operations is attempted with a PiCamera object.

exception picamera.PiCameraClosed

Raised when a method is called on a camera which has already been closed.

exception picamera.PiCameraNotRecording

Raised when stop_recording() or split_recording() are called against a port which has no recording active.

exception picamera.PiCameraAlreadyRecording

Raised when start_recording() or record_sequence() are called against a port which already has an active recording.

exception picamera.PiCameraMMALError(status, prefix='')

Raised when an MMAL operation fails for whatever reason.

picamera.mmal_check(status, prefix='')

Checks the return status of an MMAL call and raises an exception on failure.

The status parameter is the result of an MMAL call. If status is anything other than MMAL_SUCCESS, a PiCameraMMALError exception is raised. The optional prefix parameter specifies a prefix message to place at the start of the exception’s message to provide some context.
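The check-and-raise pattern can be sketched with plain Python stand-ins (the constant, exception class, and helper below are illustrative; they are not picamera's actual implementation):

```python
MMAL_SUCCESS = 0  # stand-in for the MMAL success status code


class SketchMMALError(Exception):
    """Illustrative stand-in for PiCameraMMALError."""


def check(status, prefix=''):
    """Raise if status is anything other than MMAL_SUCCESS, prefixing
    the message with optional context."""
    if status != MMAL_SUCCESS:
        msg = 'MMAL status %d' % status
        raise SketchMMALError('%s: %s' % (prefix, msg) if prefix else msg)


check(MMAL_SUCCESS)  # success: no exception raised
try:
    check(2, prefix='failed to enable port')
except SketchMMALError as e:
    print(e)  # failed to enable port: MMAL status 2
```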