10. API - picamera.camera Module

The camera module defines the PiCamera class, which is the primary interface to the Raspberry Pi’s camera module.

Note

All classes in this module are available from the picamera namespace without having to import picamera.camera directly.

The following classes are defined in the module:

10.1. PiCamera

Note

In the documentation below several apparently “private” methods are documented (i.e. methods which have names beginning with an underscore). Most users can ignore these methods; they are intended for those developers that wish to override or extend the encoder implementations used by picamera.

Some may question, given that these methods are intended to be overridden, why they are declared with a leading underscore (which in the Python idiom suggests that these methods are “private” to the class). In the cases where such methods are documented, the author intends these methods to have “protected” status (in the idiom of C++ and Object Pascal). That is to say, they are not intended to be used outside of the declaring class, but are intended to be accessible to, and overridden by, descendent classes.

class picamera.camera.PiCamera(camera_num=0, stereo_mode='none', stereo_decimate=False, resolution=None, framerate=None, sensor_mode=0, led_pin=None)[source]

Provides a pure Python interface to the Raspberry Pi’s camera module.

Upon construction, this class initializes the camera. The camera_num parameter (which defaults to 0) selects the camera module that the instance will represent. Only the Raspberry Pi compute module currently supports more than one camera, and this class has not yet been tested with more than one module.

The resolution and framerate parameters can be used to specify an initial resolution and framerate. If they are not specified, the framerate will default to 30fps, and the resolution will default to the connected display’s resolution or 1280x720 if no display can be detected (e.g. if the display has been disabled with tvservice -o). If specified, resolution must be a tuple of (width, height), and framerate must be a rational value (integer, float, fraction, etc).

The sensor_mode parameter can be used to force the camera’s initial sensor_mode to a particular value. This defaults to 0 indicating that the sensor mode should be selected automatically based on the requested resolution and framerate. The possible values for this parameter, along with a description of the heuristic used with the default can be found in the Camera Modes section.
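As a sketch of how these construction parameters combine (this requires a Pi with a connected camera module; the 720p30 values are merely illustrative):

```python
from picamera import PiCamera

# Construct with an explicit initial mode: 1280x720 at 30fps.
# sensor_mode=0 (the default) lets the firmware select the sensor
# mode from the requested resolution and framerate.
with PiCamera(resolution=(1280, 720), framerate=30, sensor_mode=0) as camera:
    print(camera.resolution, camera.framerate)
```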

The stereo_mode and stereo_decimate parameters configure dual cameras on a compute module for stereoscopic mode. These parameters can only be set at construction time; they cannot be altered later without closing the PiCamera instance and recreating it. The stereo_mode parameter defaults to 'none' (no stereoscopic mode) but can be set to 'side-by-side' or 'top-bottom' to activate a stereoscopic mode. If the stereo_decimate parameter is True, the resolution of the two cameras will be halved so that the resulting image has the same dimensions as if stereoscopic mode were not being used.

Warning

Stereoscopic mode is untested in picamera at this time. If you have the necessary hardware, the author would be most interested to hear of your experiences!

The led_pin parameter can be used to specify the GPIO pin which should be used to control the camera’s LED via the led attribute. If this is not specified, it should default to the correct value for your Pi platform. You should only need to specify this parameter if you are using a custom DeviceTree blob (this is only typical on the Compute Module platform).

No preview or recording is started automatically upon construction. Use the capture() method to capture images, the start_recording() method to begin recording video, or the start_preview() method to start live display of the camera’s input.

Several attributes are provided to adjust the camera’s configuration. Some of these can be adjusted while a recording is running, like brightness. Others, like resolution, can only be adjusted when the camera is idle.

When you are finished with the camera, you should ensure you call the close() method to release the camera resources:

camera = PiCamera()
try:
    # do something with the camera
    pass
finally:
    camera.close()

The class supports the context manager protocol to make this particularly easy (upon exiting the with statement, the close() method is automatically called):

with PiCamera() as camera:
    # do something with the camera
    pass

Changed in version 1.8: Added stereo_mode and stereo_decimate parameters

Changed in version 1.9: Added resolution, framerate, and sensor_mode parameters

Changed in version 1.10: Added led_pin parameter

_check_camera_open()[source]

Raise an exception if the camera is already closed

_check_recording_stopped()[source]

Raise an exception if the camera is currently recording

_get_annotate_settings()[source]

Returns the current annotation settings as an MMAL structure.

This is a utility method for _get_annotate_text(), _get_annotate_background(), etc. all of which rely on the MMAL_PARAMETER_CAMERA_ANNOTATE_Vn structure to determine their values.

_get_camera_settings()[source]

Returns the current camera settings as an MMAL structure.

This is a utility method for _get_exposure_speed(), _get_analog_gain(), etc. all of which rely on the MMAL_PARAMETER_CAMERA_SETTINGS structure to determine their values.

_get_image_encoder(camera_port, output_port, format, resize, **options)[source]

Construct an image encoder for the requested parameters.

This method is called by capture() and capture_continuous() to construct an image encoder. The camera_port parameter gives the MMAL camera port that should be enabled for capture by the encoder. The output_port parameter gives the MMAL port that the encoder should read output from (this may be the same as the camera port, but may be different if other component(s) like a splitter have been placed in the pipeline). The format parameter indicates the image format and will be one of:

  • 'jpeg'
  • 'png'
  • 'gif'
  • 'bmp'
  • 'yuv'
  • 'rgb'
  • 'rgba'
  • 'bgr'
  • 'bgra'

The resize parameter indicates the size that the encoder should resize the output to (presumably by including a resizer in the pipeline). Finally, options includes extra keyword arguments that should be passed verbatim to the encoder.

_get_image_format(output, format=None)[source]

Given an output object and an optional format, attempt to determine the requested image format.

This method is used by all capture methods to determine the requested output format. If format is specified as a MIME-type the “image/” prefix is stripped. If format is not specified, then _get_output_format() will be called to attempt to determine format from the output object.

_get_images_encoder(camera_port, output_port, format, resize, **options)[source]

Construct a multi-image encoder for the requested parameters.

This method is largely equivalent to _get_image_encoder() with the exception that the encoder returned should expect to be passed an iterable of outputs to its start() method, rather than a single output object. This method is called by the capture_sequence() method.

All parameters are the same as in _get_image_encoder(). Please refer to the documentation for that method for further information.

_get_output_format(output)[source]

Given an output object, attempt to determine the requested format.

We attempt to determine the filename of the output object and derive a MIME type from the extension. If output has no filename, an error is raised.
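The extension-to-MIME-type derivation can be illustrated with the standard mimetypes module. This is a sketch of the logic only, not the library's actual implementation, and guess_image_format is a hypothetical helper name:

```python
import mimetypes

def guess_image_format(filename):
    # Derive a MIME type from the filename extension, then strip
    # the "image/" prefix to obtain a format name such as 'jpeg'.
    mime, _ = mimetypes.guess_type(filename)
    if mime is None or not mime.startswith('image/'):
        raise ValueError('unable to determine format from %r' % filename)
    return mime[len('image/'):]

print(guess_image_format('capture.jpg'))  # jpeg
print(guess_image_format('capture.png'))  # png
```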

_get_ports(from_video_port, splitter_port)[source]

Determine the camera and output ports for given capture options.

See Camera Hardware for more information on picamera’s usage of camera, splitter, and encoder ports. The general idea here is that the capture (still) port operates on its own, while the video port is always connected to a splitter component, so requests for a video port also have to specify which splitter port they want to use.

_get_video_encoder(camera_port, output_port, format, resize, **options)[source]

Construct a video encoder for the requested parameters.

This method is called by start_recording() and record_sequence() to construct a video encoder. The camera_port parameter gives the MMAL camera port that should be enabled for capture by the encoder. The output_port parameter gives the MMAL port that the encoder should read output from (this may be the same as the camera port, but may be different if other component(s) like a splitter have been placed in the pipeline). The format parameter indicates the video format and will be one of:

  • 'h264'
  • 'mjpeg'

The resize parameter indicates the size that the encoder should resize the output to (presumably by including a resizer in the pipeline). Finally, options includes extra keyword arguments that should be passed verbatim to the encoder.

_get_video_format(output, format=None)[source]

Given an output object and an optional format, attempt to determine the requested video format.

This method is used by all recording methods to determine the requested output format. If format is specified as a MIME-type the “video/” or “application/” prefix will be stripped. If format is not specified, then _get_output_format() will be called to attempt to determine format from the output object.

_set_camera_mode(old_mode, new_mode, framerate, resolution)[source]

A utility method for setting a new camera mode, framerate, and resolution.

This method is used by the setters of the resolution, framerate, and sensor_mode properties. It assumes that the camera has already been disabled and will be enabled after being called. The old_mode and new_mode arguments are required to ensure correct operation on older firmwares (specifically that we don’t try to set the sensor mode when both old and new modes are 0 or automatic).

add_overlay(source, size=None, **options)[source]

Adds a static overlay to the preview output.

This method creates a new static overlay using the same rendering mechanism as the preview. Overlays will appear on the Pi’s video output, but will not appear in captures or video recordings. Multiple overlays can exist; each call to add_overlay() returns a new PiOverlayRenderer instance representing the overlay.

The optional size parameter specifies the size of the source image as a (width, height) tuple. If this is omitted or None then the size is assumed to be the same as the camera’s current resolution.

The source must be an object that supports the buffer protocol, and must have the same length as an image in RGB format (colors represented as interleaved unsigned bytes) with the specified size, after the width has been rounded up to the nearest multiple of 32 and the height to the nearest multiple of 16.

For example, if size is (1280, 720), then source must be a buffer with length 1280 × 720 × 3 bytes, or 2,764,800 bytes (because 1280 is a multiple of 32, and 720 is a multiple of 16 no extra rounding is required). However, if size is (97, 57), then source must be a buffer with length 128 × 64 × 3 bytes, or 24,576 bytes (pixels beyond column 97 and row 57 in the source will be ignored).

New overlays default to layer 0, whilst the preview defaults to layer 2. Higher numbered layers obscure lower numbered layers, hence new overlays will be invisible (if the preview is running) by default. You can make the new overlay visible either by making any existing preview transparent (with the alpha property) or by moving the overlay into a layer higher than the preview (with the layer property).

All keyword arguments captured in options are passed onto the PiRenderer constructor. All camera properties except resolution and framerate can be modified while overlays exist. The reason for these exceptions is that the overlay has a static resolution and changing the camera’s mode would require resizing of the source.
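The padding arithmetic above can be sketched as follows; the commented camera lines at the end show how such a buffer might then be supplied to add_overlay() (the layer value of 3 is merely illustrative):

```python
def overlay_buffer_len(width, height):
    # Overlay sources are padded: width up to a multiple of 32,
    # height up to a multiple of 16, at 3 bytes per RGB pixel.
    padded_w = (width + 31) // 32 * 32
    padded_h = (height + 15) // 16 * 16
    return padded_w * padded_h * 3

print(overlay_buffer_len(1280, 720))  # 2764800 (no padding required)
print(overlay_buffer_len(97, 57))     # 24576 (padded to 128x64)

# A matching all-red source buffer could then be supplied as:
# buf = bytes([255, 0, 0]) * (overlay_buffer_len(97, 57) // 3)
# overlay = camera.add_overlay(buf, size=(97, 57), layer=3)
```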

Warning

If too many overlays are added, the display output will be disabled and a reboot will generally be required to restore the display. Overlays are composited “on the fly”. Hence, a real-time constraint exists wherein for each horizontal line of HDMI output, the content of all source layers must be fetched, resized, converted, and blended to produce the output pixels.

If enough overlays exist (where “enough” is a number dependent on overlay size, display resolution, bus frequency, and several other factors making it unrealistic to calculate in advance), this process breaks down and video output fails. One solution is to add dispmanx_offline=1 to /boot/config.txt to force the use of an off-screen buffer. Be aware that this requires more GPU memory and may reduce the update rate.

New in version 1.8.

capture(output, format=None, use_video_port=False, resize=None, splitter_port=0, **options)[source]

Capture an image from the camera, storing it in output.

If output is a string, it will be treated as a filename for a new file which the image will be written to. Otherwise, output is assumed to be a file-like object and the image data is appended to it (the implementation only assumes the object has a write() method - no other methods will be called).

If format is None (the default), the method will attempt to guess the required image format from the extension of output (if it’s a string), or from the name attribute of output (if it has one). In the case that the format cannot be determined, a PiCameraValueError will be raised.

If format is not None, it must be a string specifying the format that you want the image output in. The format can be a MIME-type or one of the following strings:

  • 'jpeg' - Write a JPEG file
  • 'png' - Write a PNG file
  • 'gif' - Write a GIF file
  • 'bmp' - Write a Windows bitmap file
  • 'yuv' - Write the raw image data to a file in YUV420 format
  • 'rgb' - Write the raw image data to a file in 24-bit RGB format
  • 'rgba' - Write the raw image data to a file in 32-bit RGBA format
  • 'bgr' - Write the raw image data to a file in 24-bit BGR format
  • 'bgra' - Write the raw image data to a file in 32-bit BGRA format
  • 'raw' - Deprecated option for raw captures; the format is taken from the deprecated raw_format attribute

The use_video_port parameter controls whether the camera’s image or video port is used to capture images. It defaults to False which means that the camera’s image port is used. This port is slow but produces better quality pictures. If you need rapid capture up to the rate of video frames, set this to True.

When use_video_port is True, the splitter_port parameter specifies the port of the video splitter that the image encoder will be attached to. This defaults to 0 and most users will have no need to specify anything different. This parameter is ignored when use_video_port is False. See Under the Hood for more information about the video splitter.

If resize is not None (the default), it must be a two-element tuple specifying the width and height that the image should be resized to.

Warning

If resize is specified, or use_video_port is True, Exif metadata will not be included in JPEG output. This is due to an underlying firmware limitation.

Certain file formats accept additional options which can be specified as keyword arguments. Currently, only the 'jpeg' encoder accepts additional options, which are:

  • quality - Defines the quality of the JPEG encoder as an integer ranging from 1 to 100. Defaults to 85. Please note that JPEG quality is not a percentage and definitions of quality vary widely.
  • thumbnail - Defines the size and quality of the thumbnail to embed in the Exif metadata. Specifying None disables thumbnail generation. Otherwise, specify a tuple of (width, height, quality). Defaults to (64, 48, 35).
  • bayer - If True, the raw bayer data from the camera’s sensor is included in the Exif metadata.

Note

The so-called “raw” formats listed above ('yuv', 'rgb', etc.) do not represent the raw bayer data from the camera’s sensor. Rather they provide access to the image data after GPU processing, but before format encoding (JPEG, PNG, etc). Currently, the only method of accessing the raw bayer data is via the bayer parameter described above.
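Putting the above together, a minimal still capture might look like this (requires camera hardware; the filename and option values are illustrative):

```python
import time
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1024, 768)
    camera.start_preview()
    # Give the sensor a moment to settle its gain and white balance.
    time.sleep(2)
    # Capture a high-quality JPEG with an embedded Exif thumbnail.
    camera.capture('still.jpg', quality=90, thumbnail=(64, 48, 35))
    camera.stop_preview()
```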

Changed in version 1.0: The resize parameter was added, and raw capture formats can now be specified directly

Changed in version 1.3: The splitter_port parameter was added, and bayer was added as an option for the 'jpeg' format

capture_continuous(output, format=None, use_video_port=False, resize=None, splitter_port=0, burst=False, **options)[source]

Capture images continuously from the camera as an infinite iterator.

This method returns an infinite iterator of images captured continuously from the camera. If output is a string, each captured image is stored in a file named after output after substitution of two values with the format() method. Those two values are:

  • {counter} - a simple incrementor that starts at 1 and increases by 1 for each image taken
  • {timestamp} - a datetime instance

The table below contains several example values of output and the sequence of filenames those values could produce:

  • 'image{counter}.jpg' - image1.jpg, image2.jpg, image3.jpg, ...
  • 'image{counter:02d}.jpg' - image01.jpg, image02.jpg, image03.jpg, ...
  • 'image{timestamp}.jpg' - image2013-10-05 12:07:12.346743.jpg, image2013-10-05 12:07:32.498539.jpg, ... (see note 1)
  • 'image{timestamp:%H-%M-%S-%f}.jpg' - image12-10-02-561527.jpg, image12-10-14-905398.jpg, ...
  • '{timestamp:%H%M%S}-{counter:03d}.jpg' - 121002-001.jpg, 121013-002.jpg, 121014-003.jpg, ... (see note 2)

  1. Note that because timestamp’s default output includes colons (:), the resulting filenames are not suitable for use on Windows. For this reason (and the fact the default contains spaces) it is strongly recommended you always specify a format when using {timestamp}.
  2. You can use both {timestamp} and {counter} in a single format string (multiple times too!) although this tends to be redundant.

If output is not a string, it is assumed to be a file-like object and each image is simply written to this object sequentially. In this case you will likely either want to write something to the object between the images to distinguish them, or clear the object between iterations.

The format, use_video_port, splitter_port, resize, and options parameters are the same as in capture().

If use_video_port is False (the default), the burst parameter can be used to make still port captures faster. Specifically, this prevents the preview from switching resolutions between captures which significantly speeds up consecutive captures from the still port. The downside is that this mode currently has several bugs; the major issue is that if captures are performed too quickly some frames will come back severely underexposed. It is recommended that users avoid the burst parameter unless they absolutely require it and are prepared to work around such issues.

For example, to capture 60 images with a one second delay between them, writing the output to a series of JPEG files named image01.jpg, image02.jpg, etc. one could do the following:

import time
import picamera
with picamera.PiCamera() as camera:
    camera.start_preview()
    try:
        for i, filename in enumerate(camera.capture_continuous('image{counter:02d}.jpg')):
            print(filename)
            time.sleep(1)
            if i == 59:
                break
    finally:
        camera.stop_preview()

Alternatively, to capture JPEG frames as fast as possible into an in-memory stream, performing some processing on each stream until some condition is satisfied:

import io
import time
import picamera
with picamera.PiCamera() as camera:
    stream = io.BytesIO()
    for foo in camera.capture_continuous(stream, format='jpeg'):
        # Truncate the stream to the current position (in case
        # prior iterations output a longer image)
        stream.truncate()
        stream.seek(0)
        if process(stream):
            break

Changed in version 1.0: The resize parameter was added, and raw capture formats can now be specified directly

Changed in version 1.3: The splitter_port parameter was added

capture_sequence(outputs, format='jpeg', use_video_port=False, resize=None, splitter_port=0, burst=False, **options)[source]

Capture a sequence of consecutive images from the camera.

This method accepts a sequence or iterator of outputs each of which must either be a string specifying a filename for output, or a file-like object with a write method. For each item in the sequence or iterator of outputs, the camera captures a single image as fast as it can.

The format, use_video_port, splitter_port, resize, and options parameters are the same as in capture(), but format defaults to 'jpeg'. The format is not derived from the filenames in outputs by this method.

If use_video_port is False (the default), the burst parameter can be used to make still port captures faster. Specifically, this prevents the preview from switching resolutions between captures which significantly speeds up consecutive captures from the still port. The downside is that this mode currently has several bugs; the major issue is that if captures are performed too quickly some frames will come back severely underexposed. It is recommended that users avoid the burst parameter unless they absolutely require it and are prepared to work around such issues.

For example, to capture 3 consecutive images:

import time
import picamera
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture_sequence([
        'image1.jpg',
        'image2.jpg',
        'image3.jpg',
        ])
    camera.stop_preview()

If you wish to capture a large number of images, a list comprehension or generator expression can be used to construct the list of filenames to use:

import time
import picamera
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture_sequence([
        'image%02d.jpg' % i
        for i in range(100)
        ])
    camera.stop_preview()

More complex effects can be obtained by using a generator function to provide the filenames or output objects.

Changed in version 1.0: The resize parameter was added, and raw capture formats can now be specified directly

Changed in version 1.3: The splitter_port parameter was added

close()[source]

Finalizes the state of the camera.

After successfully constructing a PiCamera object, you should ensure you call the close() method once you are finished with the camera (e.g. in the finally section of a try..finally block). This method stops all recording and preview activities and releases all resources associated with the camera; this is necessary to prevent GPU memory leaks.

record_sequence(outputs, format='h264', resize=None, splitter_port=1, **options)[source]

Record a sequence of video clips from the camera.

This method accepts a sequence or iterator of outputs each of which must either be a string specifying a filename for output, or a file-like object with a write method.

The method acts as an iterator itself, yielding each item of the sequence in turn. In this way, the caller can control how long to record to each item by only permitting the loop to continue when ready to switch to the next output.

The format, splitter_port, resize, and options parameters are the same as in start_recording(), but format defaults to 'h264'. The format is not derived from the filenames in outputs by this method.

For example, to record 3 consecutive 10-second video clips, writing the output to a series of H.264 files named clip01.h264, clip02.h264, and clip03.h264 one could use the following:

import picamera
with picamera.PiCamera() as camera:
    for filename in camera.record_sequence([
            'clip01.h264',
            'clip02.h264',
            'clip03.h264']):
        print('Recording to %s' % filename)
        camera.wait_recording(10)

Alternatively, a more flexible method of writing the previous example (which is easier to expand to a large number of output files) is by using a generator expression as the input sequence:

import picamera
with picamera.PiCamera() as camera:
    for filename in camera.record_sequence(
            'clip%02d.h264' % i for i in range(3)):
        print('Recording to %s' % filename)
        camera.wait_recording(10)

More advanced techniques are also possible by utilising infinite sequences, such as those generated by itertools.cycle(). In the following example, recording is switched between two in-memory streams. Whilst one stream is recording, the other is being analysed. The script only stops recording when a video recording meets some criteria defined by the process function:

import io
import itertools
import picamera
with picamera.PiCamera() as camera:
    analyse = None
    for stream in camera.record_sequence(
            itertools.cycle((io.BytesIO(), io.BytesIO()))):
        if analyse is not None:
            if process(analyse):
                break
            analyse.seek(0)
            analyse.truncate()
        camera.wait_recording(5)
        analyse = stream

New in version 1.3.

remove_overlay(overlay)[source]

Removes a static overlay from the preview output.

This method removes an overlay which was previously created by add_overlay(). The overlay parameter specifies the PiRenderer instance that was returned by add_overlay().

New in version 1.8.

split_recording(output, splitter_port=1, **options)[source]

Continue the recording in the specified output; close existing output.

When called, the video encoder will wait for the next appropriate split point (an inline SPS header), then will cease writing to the current output (and close it, if it was specified as a filename), and continue writing to the newly specified output.

If output is a string, it will be treated as a filename for a new file which the video will be written to. Otherwise, output is assumed to be a file-like object and the video data is appended to it (the implementation only assumes the object has a write() method - no other methods will be called).

The motion_output parameter can be used to redirect the output of the motion vector data in the same fashion as output. If motion_output is None (the default) then motion vector data will not be redirected and will continue being written to the output specified by the motion_output parameter given to start_recording(). Alternatively, if you only wish to redirect motion vector data, you can set output to None and give a new value for motion_output.

The splitter_port parameter specifies which port of the video splitter the encoder you wish to change outputs is attached to. This defaults to 1 and most users will have no need to specify anything different. Valid values are between 0 and 3 inclusive.

Note that unlike start_recording(), you cannot specify format or other options as these cannot be changed in the middle of recording. Only the new output (and motion_output) can be specified. Furthermore, the format of the recording is currently limited to H264, and inline_headers must be True when start_recording() is called (this is the default).
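A minimal sketch of splitting a recording across two files (requires camera hardware; the segment filenames and 5-second durations are illustrative):

```python
import picamera

with picamera.PiCamera() as camera:
    # inline_headers must be left at its default of True for
    # split_recording() to find an SPS split point.
    camera.start_recording('segment1.h264')
    camera.wait_recording(5)
    # Close segment1.h264 and continue into segment2.h264 at
    # the next inline SPS header.
    camera.split_recording('segment2.h264')
    camera.wait_recording(5)
    camera.stop_recording()
```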

Changed in version 1.3: The splitter_port parameter was added

Changed in version 1.5: The motion_output parameter was added

start_preview(**options)[source]

Displays the preview overlay.

This method starts a camera preview as an overlay on the Pi’s primary display (HDMI or composite). A PiRenderer instance (more specifically, a PiPreviewRenderer) is constructed with the keyword arguments captured in options, and is returned from the method (this instance is also accessible from the preview attribute for as long as the renderer remains active). By default, the renderer will be opaque and fullscreen.

This means the default preview overrides whatever is currently visible on the display. More specifically, the preview does not rely on a graphical environment like X-Windows (it can run quite happily from a TTY console); it is simply an overlay on the Pi’s video output. To stop the preview and reveal the display again, call stop_preview(). The preview can be started and stopped multiple times during the lifetime of the PiCamera object.

All other camera properties can be modified “live” while the preview is running (e.g. brightness).

Note

Because the default preview typically obscures the screen, ensure you have a means of stopping a preview before starting one. If the preview obscures your interactive console you won’t be able to Alt+Tab back to it as the preview isn’t in a window. If you are in an interactive Python session, simply pressing Ctrl+D usually suffices to terminate the environment, including the camera and its associated preview.
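For example, a preview could be shown for ten seconds, using the returned renderer to make it semi-transparent so the console remains partly visible (requires camera hardware and a connected display; the alpha value is illustrative):

```python
import time
import picamera

with picamera.PiCamera() as camera:
    # start_preview() returns the renderer; its attributes can be
    # adjusted live, e.g. to make the overlay semi-transparent.
    preview = camera.start_preview()
    preview.alpha = 128
    time.sleep(10)
    camera.stop_preview()
```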

start_recording(output, format=None, resize=None, splitter_port=1, **options)[source]

Start recording video from the camera, storing it in output.

If output is a string, it will be treated as a filename for a new file which the video will be written to. Otherwise, output is assumed to be a file-like object and the video data is appended to it (the implementation only assumes the object has a write() method - no other methods will be called).

If format is None (the default), the method will attempt to guess the required video format from the extension of output (if it’s a string), or from the name attribute of output (if it has one). In the case that the format cannot be determined, a PiCameraValueError will be raised.

If format is not None, it must be a string specifying the format that you want the video output in. The format can be a MIME-type or one of the following strings:

  • 'h264' - Write an H.264 video stream
  • 'mjpeg' - Write an M-JPEG video stream
  • 'yuv' - Write the raw video data to a file in YUV420 format
  • 'rgb' - Write the raw video data to a file in 24-bit RGB format
  • 'rgba' - Write the raw video data to a file in 32-bit RGBA format
  • 'bgr' - Write the raw video data to a file in 24-bit BGR format
  • 'bgra' - Write the raw video data to a file in 32-bit BGRA format

If resize is not None (the default), it must be a two-element tuple specifying the width and height that the video recording should be resized to. This is particularly useful for recording video using the full resolution of the camera sensor (which is not possible in H.264 without down-sizing the output).

The splitter_port parameter specifies the port of the built-in splitter that the video encoder will be attached to. This defaults to 1 and most users will have no need to specify anything different. If you wish to record multiple (presumably resized) streams simultaneously, specify a value between 0 and 3 inclusive for this parameter, ensuring that you do not specify a port that is currently in use.

Certain formats accept additional options which can be specified as keyword arguments. The 'h264' format accepts the following additional options:

  • profile - The H.264 profile to use for encoding. Defaults to ‘high’, but can be one of ‘baseline’, ‘main’, ‘high’, or ‘constrained’.
  • intra_period - The key frame rate (the rate at which I-frames are inserted in the output). Defaults to None, but can be any 32-bit integer value representing the number of frames between successive I-frames. The special value 0 causes the encoder to produce a single initial I-frame, and then only P-frames subsequently. Note that split_recording() will fail in this mode.
  • intra_refresh - The key frame format (the way in which I-frames will be inserted into the output stream). Defaults to None, but can be one of ‘cyclic’, ‘adaptive’, ‘both’, or ‘cyclicrows’.
  • inline_headers - When True, specifies that the encoder should output SPS/PPS headers within the stream to ensure GOPs (groups of pictures) are self describing. This is important for streaming applications where the client may wish to seek within the stream, and enables the use of split_recording(). Defaults to True if not specified.
  • sei - When True, specifies the encoder should include “Supplemental Enhancement Information” within the output stream. Defaults to False if not specified.
  • motion_output - Indicates the output destination for motion vector estimation data. When None (the default), motion data is not output. If set to a string, it is assumed to be a filename which will be opened for motion data to be written to. Any other value is assumed to be a file-like object to which motion vector data will be written (the object must have a write method).

All encoded formats accept the following additional options:

  • bitrate - The bitrate at which video will be encoded. Defaults to 17000000 (17Mbps) if not specified. The maximum value is 25000000 (25Mbps). Bitrate 0 indicates the encoder should not use bitrate control (the encoder is limited by the quality only).
  • quality - Specifies the quality that the encoder should attempt to maintain. For the 'h264' format, use values between 10 and 40 where 10 is extremely high quality, and 40 is extremely low (20-25 is usually a reasonable range for H.264 encoding). For the 'mjpeg' format, use JPEG quality values between 1 and 100 (where higher values are higher quality). Quality 0 is special and seems to be a “reasonable quality” default.
  • quantization - Deprecated alias for quality.
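Putting the options above together, a minimal recording sketch might look as follows ('video.h264' and 'motion.data' are hypothetical filenames, and the chosen values merely illustrate the defaults and ranges described above):

```python
import picamera

with picamera.PiCamera(resolution=(1280, 720), framerate=30) as camera:
    camera.start_recording(
        'video.h264',
        format='h264',
        profile='high',              # the default H.264 profile
        intra_period=30,             # one I-frame per second at 30fps
        inline_headers=True,         # required for split_recording()
        bitrate=17000000,            # 17Mbps, the default
        quality=23,                  # within the reasonable 20-25 range
        motion_output='motion.data', # write motion vectors to a file
    )
    camera.wait_recording(10)
    camera.stop_recording()
```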

Changed in version 1.0: The resize parameter was added, and 'mjpeg' was added as a recording format

Changed in version 1.3: The splitter_port parameter was added

Changed in version 1.5: The quantization parameter was deprecated in favor of quality, and the motion_output parameter was added.

stop_preview()[source]

Hides the preview overlay.

If start_preview() has previously been called, this method shuts down the preview display which generally results in the underlying display becoming visible again. If a preview is not currently running, no exception is raised - the method will simply do nothing.

stop_recording(splitter_port=1)[source]

Stop recording video from the camera.

After calling this method the video encoder will be shut down and output will stop being written to the file-like object specified with start_recording(). If an error occurred during recording and wait_recording() has not been called since the error then this method will raise the exception.

The splitter_port parameter specifies which port of the video splitter the encoder you wish to stop is attached to. This defaults to 1 and most users will have no need to specify anything different. Valid values are between 0 and 3 inclusive.

Changed in version 1.3: The splitter_port parameter was added

wait_recording(timeout=0, splitter_port=1)[source]

Wait on the video encoder for timeout seconds.

It is recommended that this method is called while recording to check for exceptions. If an error occurs during recording (for example out of disk space) the recording will stop, but an exception will only be raised when the wait_recording() or stop_recording() methods are called.

If timeout is 0 (the default) the function will immediately return (or raise an exception if an error has occurred).

The splitter_port parameter specifies which port of the video splitter the encoder you wish to wait on is attached to. This defaults to 1 and most users will have no need to specify anything different. Valid values are between 0 and 3 inclusive.

Changed in version 1.3: The splitter_port parameter was added
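The recommended polling pattern can be sketched as follows ('clip.h264' is a hypothetical filename):

```python
import picamera

with picamera.PiCamera() as camera:
    camera.start_recording('clip.h264')
    try:
        # Poll once per second so an error during recording
        # (e.g. out of disk space) raises promptly
        for _ in range(60):
            camera.wait_recording(1)
    finally:
        camera.stop_recording()
```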

ISO

Retrieves or sets the apparent ISO setting of the camera.

Deprecated since version 1.8: Please use the iso attribute instead.

_still_encoding

Configures the encoding of the camera’s still port.

This attribute controls the encoding of the camera’s still port (see Under the Hood for more information). It is intended for internal use, but may be useful to developers wishing to implement custom encoders.

analog_gain

Retrieves the current analog gain of the camera.

When queried, this property returns the analog gain currently being used by the camera. The value represents the analog gain of the sensor prior to digital conversion. The value is returned as a Fraction instance.

New in version 1.6.

annotate_background

Controls what background is drawn behind the annotation.

The annotate_background attribute specifies if a background will be drawn behind the annotation text and, if so, what color it will be. The value is specified as a Color or None if no background should be drawn. The default is None.

Note

For backward compatibility purposes, the value False will be treated as None, and the value True will be treated as the color black. The “truthiness” of the values returned by the attribute is backward compatible although the values themselves are not.

New in version 1.8.

Changed in version 1.10: In prior versions this was a bool value with True representing a black background.

annotate_foreground

Controls the color of the annotation text.

The annotate_foreground attribute specifies, partially, the color of the annotation text. The value is specified as a Color. The default is white.

Note

The underlying firmware does not directly support setting all components of the text color, only the Y’ component of a Y’UV tuple. This is roughly (but not precisely) analogous to the “brightness” of a color, so you may choose to think of this as setting how bright the annotation text will be relative to its background. In order to specify just the Y’ component when setting this attribute, you may choose to construct the Color instance as follows:

camera.annotate_foreground = picamera.Color(y=0.2, u=0, v=0)

New in version 1.10.

annotate_frame_num

Controls whether the current frame number is drawn as an annotation.

The annotate_frame_num attribute is a bool indicating whether or not the current frame number is rendered as an annotation, similar to annotate_text. The default is False.

New in version 1.8.

annotate_text

Retrieves or sets a text annotation for all output.

When queried, the annotate_text property returns the current annotation (if no annotation has been set, this is simply a blank string).

When set, the property immediately applies the annotation to the preview (if it is running) and to any future captures or video recording. Strings longer than 255 characters, or strings containing non-ASCII characters will raise a PiCameraValueError. The default value is ''.

Changed in version 1.8: Text annotations can now be 255 characters long. The prior limit was 32 characters.

annotate_text_size

Controls the size of the annotation text.

The annotate_text_size attribute is an int which determines how large the annotation text will appear on the display. Valid values are in the range 6 to 160, inclusive. The default is 32.

New in version 1.10.

awb_gains

Gets or sets the auto-white-balance gains of the camera.

When queried, this attribute returns a tuple of values representing the (red, blue) balance of the camera. The red and blue values are returned as Fraction instances. The values will be between 0.0 and 8.0.

When set, this attribute adjusts the camera’s auto-white-balance gains. The property can be specified as a single value in which case both red and blue gains will be adjusted equally, or as a (red, blue) tuple. Values can be specified as an int, float or Fraction and each gain must be between 0.0 and 8.0. Typical values for the gains are between 0.9 and 1.9. The property can be set while recordings or previews are in progress.

Note

This attribute only has an effect when awb_mode is set to 'off'.

Changed in version 1.6: Prior to version 1.6, this attribute was write-only.
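A minimal sketch of manual white balance, combining this attribute with awb_mode as described in the note above (the gain values are merely illustrative, chosen from the typical 0.9 to 1.9 range):

```python
import picamera
from fractions import Fraction

with picamera.PiCamera() as camera:
    camera.awb_mode = 'off'   # gains only take effect in 'off' mode
    camera.awb_gains = (Fraction(3, 2), Fraction(4, 3))  # (red, blue)
```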

awb_mode

Retrieves or sets the auto-white-balance mode of the camera.

When queried, the awb_mode property returns a string representing the auto white balance setting of the camera. The possible values can be obtained from the PiCamera.AWB_MODES attribute, and are as follows:

  • 'off'
  • 'auto'
  • 'sunlight'
  • 'cloudy'
  • 'shade'
  • 'tungsten'
  • 'fluorescent'
  • 'incandescent'
  • 'flash'
  • 'horizon'

When set, the property adjusts the camera’s auto-white-balance mode. The property can be set while recordings or previews are in progress. The default value is 'auto'.

Note

AWB mode 'off' is special: this disables the camera’s automatic white balance permitting manual control of the white balance via the awb_gains property.

brightness

Retrieves or sets the brightness setting of the camera.

When queried, the brightness property returns the brightness level of the camera as an integer between 0 and 100. When set, the property adjusts the brightness of the camera. Brightness can be adjusted while previews or recordings are in progress. The default value is 50.

closed

Returns True if the close() method has been called.

color_effects

Retrieves or sets the current color effect applied by the camera.

When queried, the color_effects property either returns None which indicates that the camera is using normal color settings, or a (u, v) tuple where u and v are integer values between 0 and 255.

When set, the property changes the color effect applied by the camera. The property can be set while recordings or previews are in progress. For example, to make the image black and white set the value to (128, 128). The default value is None.
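For instance, the black-and-white setting described above can be sketched as follows ('bw.jpg' is a hypothetical filename):

```python
import picamera

with picamera.PiCamera() as camera:
    camera.color_effects = (128, 128)  # fix U/V for black and white
    camera.capture('bw.jpg')
```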

contrast

Retrieves or sets the contrast setting of the camera.

When queried, the contrast property returns the contrast level of the camera as an integer between -100 and 100. When set, the property adjusts the contrast of the camera. Contrast can be adjusted while previews or recordings are in progress. The default value is 0.

crop

Retrieves or sets the zoom applied to the camera’s input.

Deprecated since version 1.8: Please use the zoom attribute instead.

digital_gain

Retrieves the current digital gain of the camera.

When queried, this property returns the digital gain currently being used by the camera. The value represents the digital gain the camera applies after conversion of the sensor’s analog output. The value is returned as a Fraction instance.

New in version 1.6.

drc_strength

Retrieves or sets the dynamic range compression strength of the camera.

When queried, the drc_strength property returns a string indicating the amount of dynamic range compression the camera applies to images.

When set, the attribute adjusts the strength of the dynamic range compression applied to the camera’s output. Valid values are given in the list below:

  • 'off'
  • 'low'
  • 'medium'
  • 'high'

The default value is 'off'. All possible values for the attribute can be obtained from the PiCamera.DRC_STRENGTHS attribute.

New in version 1.6.

exif_tags

Holds a mapping of the Exif tags to apply to captured images.

Note

Please note that Exif tagging is only supported with the jpeg format.

By default several Exif tags are automatically applied to any images taken with the capture() method: IFD0.Make (which is set to RaspberryPi), IFD0.Model (which is set to RP_OV5647), and three timestamp tags: IFD0.DateTime, EXIF.DateTimeOriginal, and EXIF.DateTimeDigitized which are all set to the current date and time just before the picture is taken.

If you wish to set additional Exif tags, or override any of the aforementioned tags, simply add entries to the exif_tags map before calling capture(). For example:

camera.exif_tags['IFD0.Copyright'] = 'Copyright (c) 2013 Foo Industries'

The Exif standard mandates ASCII encoding for all textual values, hence strings containing non-ASCII characters will cause an encoding error to be raised when capture() is called. If you wish to set binary values, use a bytes() value:

camera.exif_tags['EXIF.UserComment'] = b'Something containing\x00NULL characters'

Warning

Binary Exif values are currently ignored; this appears to be a libmmal or firmware bug.

You may also specify datetime values, integer, or float values, all of which will be converted to appropriate ASCII strings (datetime values are formatted as YYYY:MM:DD HH:MM:SS in accordance with the Exif standard).
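For illustration, the Exif timestamp format described above can be produced in pure Python as follows (this sketch merely demonstrates the string format; picamera performs the conversion automatically when a datetime value is assigned):

```python
from datetime import datetime

# Exif mandates 'YYYY:MM:DD HH:MM:SS' for timestamp values
stamp = datetime(2014, 1, 1, 12, 30, 0).strftime('%Y:%m:%d %H:%M:%S')
print(stamp)  # 2014:01:01 12:30:00
```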

The currently supported Exif tags are:

  • IFD0, IFD1 - ImageWidth, ImageLength, BitsPerSample, Compression, PhotometricInterpretation, ImageDescription, Make, Model, StripOffsets, Orientation, SamplesPerPixel, RowsPerStrip, StripByteCounts, XResolution, YResolution, PlanarConfiguration, ResolutionUnit, TransferFunction, Software, DateTime, Artist, WhitePoint, PrimaryChromaticities, JPEGInterchangeFormat, JPEGInterchangeFormatLength, YCbCrCoefficients, YCbCrSubSampling, YCbCrPositioning, ReferenceBlackWhite, Copyright
  • EXIF - ExposureTime, FNumber, ExposureProgram, SpectralSensitivity, ISOSpeedRatings, OECF, ExifVersion, DateTimeOriginal, DateTimeDigitized, ComponentsConfiguration, CompressedBitsPerPixel, ShutterSpeedValue, ApertureValue, BrightnessValue, ExposureBiasValue, MaxApertureValue, SubjectDistance, MeteringMode, LightSource, Flash, FocalLength, SubjectArea, MakerNote, UserComment, SubSecTime, SubSecTimeOriginal, SubSecTimeDigitized, FlashpixVersion, ColorSpace, PixelXDimension, PixelYDimension, RelatedSoundFile, FlashEnergy, SpatialFrequencyResponse, FocalPlaneXResolution, FocalPlaneYResolution, FocalPlaneResolutionUnit, SubjectLocation, ExposureIndex, SensingMethod, FileSource, SceneType, CFAPattern, CustomRendered, ExposureMode, WhiteBalance, DigitalZoomRatio, FocalLengthIn35mmFilm, SceneCaptureType, GainControl, Contrast, Saturation, Sharpness, DeviceSettingDescription, SubjectDistanceRange, ImageUniqueID
  • GPS - GPSVersionID, GPSLatitudeRef, GPSLatitude, GPSLongitudeRef, GPSLongitude, GPSAltitudeRef, GPSAltitude, GPSTimeStamp, GPSSatellites, GPSStatus, GPSMeasureMode, GPSDOP, GPSSpeedRef, GPSSpeed, GPSTrackRef, GPSTrack, GPSImgDirectionRef, GPSImgDirection, GPSMapDatum, GPSDestLatitudeRef, GPSDestLatitude, GPSDestLongitudeRef, GPSDestLongitude, GPSDestBearingRef, GPSDestBearing, GPSDestDistanceRef, GPSDestDistance, GPSProcessingMethod, GPSAreaInformation, GPSDateStamp, GPSDifferential
  • EINT - InteroperabilityIndex, InteroperabilityVersion, RelatedImageFileFormat, RelatedImageWidth, RelatedImageLength
exposure_compensation

Retrieves or sets the exposure compensation level of the camera.

When queried, the exposure_compensation property returns an integer value between -25 and 25 indicating the exposure level of the camera. Larger values result in brighter images.

When set, the property adjusts the camera’s exposure compensation level. Each increment represents 1/6th of a stop. Hence setting the attribute to 6 increases exposure by 1 stop. The property can be set while recordings or previews are in progress. The default value is 0.
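The 1/6th-stop increments can be expressed as a small helper (the function name is hypothetical, introduced only for illustration):

```python
def compensation_in_stops(level):
    """Convert an exposure_compensation level to stops (1 increment = 1/6 stop)."""
    if not -25 <= level <= 25:
        raise ValueError('exposure_compensation must be between -25 and 25')
    return level / 6

print(compensation_in_stops(6))             # 1.0 (one stop brighter)
print(round(compensation_in_stops(-12), 1)) # -2.0 (two stops darker)
```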

exposure_mode

Retrieves or sets the exposure mode of the camera.

When queried, the exposure_mode property returns a string representing the exposure setting of the camera. The possible values can be obtained from the PiCamera.EXPOSURE_MODES attribute, and are as follows:

  • 'off'
  • 'auto'
  • 'night'
  • 'nightpreview'
  • 'backlight'
  • 'spotlight'
  • 'sports'
  • 'snow'
  • 'beach'
  • 'verylong'
  • 'fixedfps'
  • 'antishake'
  • 'fireworks'

When set, the property adjusts the camera’s exposure mode. The property can be set while recordings or previews are in progress. The default value is 'auto'.

Note

Exposure mode 'off' is special: this disables the camera’s automatic gain control, fixing the values of digital_gain and analog_gain. Please note that these properties are not directly settable, and default to low values when the camera is first initialized. Therefore it is important to let them settle on higher values before disabling automatic gain control otherwise all frames captured will appear black.
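The settle-then-fix procedure described in the note can be sketched as follows ('consistent.jpg' is a hypothetical filename):

```python
import time
import picamera

with picamera.PiCamera(resolution=(1280, 720), framerate=30) as camera:
    camera.iso = 100
    time.sleep(2)                          # let the gains settle on higher values
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'           # fixes analog_gain and digital_gain
    gains = camera.awb_gains
    camera.awb_mode = 'off'
    camera.awb_gains = gains               # fix white balance too
    camera.capture('consistent.jpg')
```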

exposure_speed

Retrieves the current shutter speed of the camera.

When queried, this property returns the shutter speed currently being used by the camera. If you have set shutter_speed to a non-zero value, then exposure_speed and shutter_speed should be equal. However, if shutter_speed is set to 0 (auto), then you can read the actual shutter speed being used from this attribute. The value is returned as an integer representing a number of microseconds. This is a read-only property.

New in version 1.6.

flash_mode

Retrieves or sets the flash mode of the camera.

When queried, the flash_mode property returns a string representing the flash setting of the camera. The possible values can be obtained from the PiCamera.FLASH_MODES attribute, and are as follows:

  • 'off'
  • 'auto'
  • 'on'
  • 'redeye'
  • 'fillin'
  • 'torch'

When set, the property adjusts the camera’s flash mode. The property can be set while recordings or previews are in progress. The default value is 'off'.

Note

You must define which GPIO pins the camera is to use for flash and privacy indicators. This is done within the Device Tree configuration which is considered an advanced topic. Specifically, you need to define pins FLASH_0_ENABLE and optionally FLASH_0_INDICATOR (for the privacy indicator). More information can be found in this recipe.

frame

Retrieves information about the current frame recorded from the camera.

When video recording is active (after a call to start_recording()), this attribute will return a PiVideoFrame tuple containing information about the current frame that the camera is recording.

If multiple video recordings are currently in progress (after multiple calls to start_recording() with different values for the splitter_port parameter), which encoder’s frame information is returned is arbitrary. If you require information from a specific encoder, you will need to extract it from _encoders explicitly.

Querying this property when the camera is not recording will result in an exception.

Note

There is a small window of time when querying this attribute will return None after calling start_recording(). If this attribute returns None, this means that the video encoder has been initialized, but the camera has not yet returned any frames.
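A sketch of inspecting the PiVideoFrame tuple during an active recording ('clip.h264' is a hypothetical filename):

```python
import picamera

with picamera.PiCamera() as camera:
    camera.start_recording('clip.h264')
    camera.wait_recording(2)
    info = camera.frame
    if info is not None:                # None only briefly after starting
        print(info.index, info.timestamp)
    camera.stop_recording()
```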

framerate

Retrieves or sets the framerate at which video-port based image captures, video recordings, and previews will run.

When queried, the framerate property returns the rate at which the camera’s video and preview ports will operate as a Fraction instance which can be easily converted to an int or float.

Note

For backwards compatibility, a derivative of the Fraction class is actually used which permits the value to be treated as a tuple of (numerator, denominator).

Setting and retrieving framerate as a (numerator, denominator) tuple is deprecated and will be removed in 2.0. Please use a Fraction instance instead (which is just as accurate and also permits direct use with math operators).

When set, the property reconfigures the camera so that the next call to recording and previewing methods will use the new framerate. The framerate can be specified as an int, float, Fraction, or a (numerator, denominator) tuple. The camera must not be closed, and no recording must be active when the property is set.

Note

This attribute, in combination with resolution, determines the mode that the camera operates in. The actual sensor framerate and resolution used by the camera is influenced, but not directly set, by this property. See sensor_mode for more information.

The initial value of this property can be specified with the framerate parameter in the PiCamera constructor.
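Since the framerate accepts a Fraction, non-integer rates can be expressed exactly; a brief sketch:

```python
import picamera
from fractions import Fraction

with picamera.PiCamera() as camera:
    # An exact NTSC-style rate; a float such as 29.97 would also be accepted
    camera.framerate = Fraction(30000, 1001)
```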

hflip

Retrieves or sets whether the camera’s output is horizontally flipped.

When queried, the hflip property returns a boolean indicating whether or not the camera’s output is horizontally flipped. The property can be set while recordings or previews are in progress. The default value is False.

image_denoise

Retrieves or sets whether denoise will be applied to image captures.

When queried, the image_denoise property returns a boolean value indicating whether or not the camera software will apply a denoise algorithm to image captures.

When set, the property activates or deactivates the denoise algorithm for image captures. The property can be set while recordings or previews are in progress. The default value is True.

New in version 1.7.

image_effect

Retrieves or sets the current image effect applied by the camera.

When queried, the image_effect property returns a string representing the effect the camera will apply to captured video. The possible values can be obtained from the PiCamera.IMAGE_EFFECTS attribute, and are as follows:

  • 'none'
  • 'negative'
  • 'solarize'
  • 'sketch'
  • 'denoise'
  • 'emboss'
  • 'oilpaint'
  • 'hatch'
  • 'gpen'
  • 'pastel'
  • 'watercolor'
  • 'film'
  • 'blur'
  • 'saturation'
  • 'colorswap'
  • 'washedout'
  • 'posterise'
  • 'colorpoint'
  • 'colorbalance'
  • 'cartoon'
  • 'deinterlace1'
  • 'deinterlace2'

When set, the property changes the effect applied by the camera. The property can be set while recordings or previews are in progress, but only certain effects work while recording video (notably 'negative' and 'solarize'). The default value is 'none'.

image_effect_params

Retrieves or sets the parameters for the current effect.

When queried, the image_effect_params property either returns None (for effects which have no configurable parameters, or if no parameters have been configured), or a tuple of numeric values up to six elements long.

When set, the property changes the parameters of the current effect as a sequence of numbers, or a single number. Attempting to set parameters on an effect which does not support parameters, or providing an incompatible set of parameters for an effect will raise a PiCameraValueError exception.

The effects which have parameters, and the combinations those parameters can take, are as follows:

  • 'solarize' - yuv, x0, y0, y1, y2: yuv controls whether data is processed as RGB (0) or YUV (1). Input values from 0 to x0 - 1 are remapped linearly onto the range 0 to y0. Values from x0 to 255 are remapped linearly onto the range y1 to y2. Alternatively, specify x0, y0, y1, y2 (yuv defaults to 0, i.e. process as RGB), or yuv alone (x0, y0, y1, y2 default to 128, 128, 128, 0 respectively).
  • 'colorpoint' - quadrant: specifies which quadrant of the U/V space to retain chroma from: 0=green, 1=red/yellow, 2=blue, 3=purple. There is no default; this effect does nothing until parameters are set.
  • 'colorbalance' - lens, r, g, b, u, v: lens specifies the lens shading strength (0.0 to 256.0, where 0.0 indicates lens shading has no effect). r, g, b are multipliers for their respective color channels (0.0 to 256.0). u and v are offsets added to the U/V plane (0 to 255). Alternatively, specify lens, r, g, b (u and v default to 0), or lens, r, b (g also defaults to 1.0).
  • 'colorswap' - dir: if dir is 0, swap RGB to BGR. If dir is 1, swap RGB to BRG.
  • 'posterise' - steps: controls the quantization steps for the image. Valid values are 2 to 32, and the default is 4.
  • 'blur' - size: specifies the size of the kernel. Valid values are 1 or 2.
  • 'film' - strength, u, v: strength specifies the strength of the effect. u and v are offsets added to the U/V plane (0 to 255).
  • 'watercolor' - u, v: u and v specify offsets to add to the U/V plane (0 to 255). No parameters indicates no U/V effect.

New in version 1.8.
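A sketch of selecting an effect and then supplying one of its parameter combinations (here the full five-value 'solarize' form):

```python
import picamera

with picamera.PiCamera() as camera:
    camera.image_effect = 'solarize'
    # (yuv, x0, y0, y1, y2): process as RGB, with the default remapping points
    camera.image_effect_params = (0, 128, 128, 128, 0)
```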

iso

Retrieves or sets the apparent ISO setting of the camera.

When queried, the iso property returns the ISO setting of the camera, a value which represents the sensitivity of the camera to light. Lower values (e.g. 100) imply less sensitivity than higher values (e.g. 400 or 800). Lower sensitivities tend to produce less “noisy” (smoother) images, but operate poorly in low light conditions.

When set, the property adjusts the sensitivity of the camera. Valid values are between 0 (auto) and 1600. The actual value used when iso is explicitly set will be one of the following values (whichever is closest): 100, 200, 320, 400, 500, 640, 800.

The attribute can be adjusted while previews or recordings are in progress. The default value is 0 which means automatically determine a value according to image-taking conditions.

Note

You can query the analog_gain and digital_gain attributes to determine the actual gains being used by the camera. If both are 1.0 this equates to ISO 100. Please note that this capability requires an up to date firmware (#692 or later).

Note

With iso settings other than 0 (auto), the exposure_mode property becomes non-functional.

Note

Some users on the Pi camera forum have noted that higher ISO values than 800 (specifically up to 1600) can be achieved in certain conditions with exposure_mode set to 'sports' and iso set to 0. It doesn’t appear to be possible to manually request an ISO setting higher than 800, but the picamera library will permit settings up to 1600 in case the underlying firmware permits such settings in particular circumstances.

led

Sets the state of the camera’s LED via GPIO.

If a GPIO library is available (only RPi.GPIO is currently supported), and if the python process has the necessary privileges (typically this means running as root via sudo), this property can be used to set the state of the camera’s LED as a boolean value (True is on, False is off).

Note

This is a write-only property. While it can be used to control the camera’s LED, you cannot query the state of the camera’s LED using this property.

Warning

There are circumstances in which the camera firmware may override an existing LED setting. For example, in the case that the firmware resets the camera (as can happen with a CSI-2 timeout), the LED may also be reset. If you wish to guarantee that the LED remain off at all times, you may prefer to use the disable_camera_led option in config.txt (this has the added advantage that sudo privileges and GPIO access are not required, at least for LED control).

meter_mode

Retrieves or sets the metering mode of the camera.

When queried, the meter_mode property returns the method by which the camera determines the exposure as one of the following strings:

  • 'average'
  • 'spot'
  • 'backlit'
  • 'matrix'

When set, the property adjusts the camera’s metering mode. All modes set up two regions: a center region, and an outer region. The major difference between each mode is the size of the center region. The 'backlit' mode has the largest central region (30% of the width), while 'spot' has the smallest (10% of the width).

The property can be set while recordings or previews are in progress. The default value is 'average'. All possible values for the attribute can be obtained from the PiCamera.METER_MODES attribute.

overlays

Retrieves all active PiRenderer overlays.

If no overlays are currently active, overlays will return an empty iterable. Otherwise, it will return an iterable of PiRenderer instances which are currently acting as overlays. Note that the preview renderer is an exception to this: it is not included as an overlay despite being derived from PiRenderer.

New in version 1.8.

preview

Retrieves the PiRenderer displaying the camera preview.

If no preview is currently active, preview will return None. Otherwise, it will return the instance of PiRenderer which is currently connected to the camera’s preview port for rendering what the camera sees. You can use the attributes of the PiRenderer class to configure the appearance of the preview. For example, to make the preview semi-transparent:

import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    camera.preview.alpha = 128

New in version 1.8.

preview_alpha

Retrieves or sets the opacity of the preview window.

Deprecated since version 1.8: Please use the alpha attribute of the preview object instead.

preview_fullscreen

Retrieves or sets full-screen for the preview window.

Deprecated since version 1.8: Please use the fullscreen attribute of the preview object instead.

preview_layer

Retrieves or sets the layer of the preview window.

Deprecated since version 1.8: Please use the layer attribute of the preview object instead.

preview_window

Retrieves or sets the size of the preview window.

Deprecated since version 1.8: Please use the window attribute of the preview object instead.

previewing

Returns True if the start_preview() method has been called, and no stop_preview() call has been made yet.

Deprecated since version 1.8: Test whether preview is None instead.

raw_format

Retrieves or sets the raw format of the camera’s ports.

Deprecated since version 1.0: Please use 'yuv' or 'rgb' directly as a format in the various capture methods instead.

recording

Returns True if the start_recording() method has been called, and no stop_recording() call has been made yet.

resolution

Retrieves or sets the resolution at which image captures, video recordings, and previews will be captured.

When queried, the resolution property returns the resolution at which the camera will operate as a tuple of (width, height) measured in pixels. This is the resolution that the capture() method will produce images at, and the resolution that start_recording() will produce videos at.

When set, the property reconfigures the camera so that the next call to these methods will use the new resolution. The resolution must be specified as a (width, height) tuple, the camera must not be closed, and no recording must be active when the property is set.

The property defaults to the Pi’s currently configured display resolution unless the display has been disabled (with tvservice -o) in which case it defaults to 1280x720 (720p).

Note

This attribute, in combination with framerate, determines the mode that the camera operates in. The actual sensor framerate and resolution used by the camera is influenced, but not directly set, by this property. See sensor_mode for more information.

The initial value of this property can be specified with the resolution parameter in the PiCamera constructor.

rotation

Retrieves or sets the current rotation of the camera’s image.

When queried, the rotation property returns the rotation applied to the image. Valid values are 0, 90, 180, and 270.

When set, the property changes the rotation applied to the camera’s input. The property can be set while recordings or previews are in progress. The default value is 0.

saturation

Retrieves or sets the saturation setting of the camera.

When queried, the saturation property returns the color saturation of the camera as an integer between -100 and 100. When set, the property adjusts the saturation of the camera. Saturation can be adjusted while previews or recordings are in progress. The default value is 0.

sensor_mode

Retrieves or sets the input mode of the camera’s sensor.

This is an advanced property which can be used to control the camera’s sensor mode. By default, mode 0 is used which allows the camera to automatically select an input mode based on the requested resolution and framerate. Valid values are currently between 0 and 7. The set of valid sensor modes (along with the heuristic used to select one automatically) are detailed in the Camera Modes section of the documentation.

Note

At the time of writing, setting this property does nothing unless the camera has been initialized with a sensor mode other than 0. Furthermore, some mode transitions appear to require setting the property twice (in a row). This appears to be a firmware limitation.

The initial value of this property can be specified with the sensor_mode parameter in the PiCamera constructor.

New in version 1.9.

sharpness

Retrieves or sets the sharpness setting of the camera.

When queried, the sharpness property returns the sharpness level of the camera (a measure of the amount of post-processing to reduce or increase image sharpness) as an integer between -100 and 100. When set, the property adjusts the sharpness of the camera. Sharpness can be adjusted while previews or recordings are in progress. The default value is 0.

shutter_speed

Retrieves or sets the shutter speed of the camera in microseconds.

When queried, the shutter_speed property returns the shutter speed of the camera in microseconds, or 0 which indicates that the speed will be automatically determined by the auto-exposure algorithm. Faster shutter times naturally require greater amounts of illumination and vice versa.

When set, the property adjusts the shutter speed of the camera, which most obviously affects the illumination of subsequently captured images. Shutter speed can be adjusted while previews or recordings are running. The default value is 0 (auto).

Note

You can query the exposure_speed attribute to determine the actual shutter speed being used when this attribute is set to 0. Please note that this capability requires up-to-date firmware (#692 or later).

Note

In later firmwares, this attribute is limited by the value of the framerate attribute. For example, if framerate is set to 30fps, the shutter speed cannot be slower than 33,333µs (1/fps).

still_stats

Retrieves or sets whether statistics will be calculated from still frames or the prior preview frame.

When queried, the still_stats property returns a boolean value indicating when scene statistics will be calculated for still captures (that is, captures where the use_video_port parameter of capture() is False). When this property is False (the default), statistics will be calculated from the preceding preview frame (this also applies when the preview is not visible). When True, statistics will be calculated from the captured image itself.

When set, the property controls when scene statistics will be calculated for still captures. The property can be set while recordings or previews are in progress. The default value is False.

The advantage of calculating scene statistics from the captured image is that the time between startup and capture is reduced, as only the AGC (automatic gain control) has to converge. The downsides are that processing time for captures increases, and that white balance and gain won’t necessarily match the preview.

New in version 1.9.

vflip

Retrieves or sets whether the camera’s output is vertically flipped.

When queried, the vflip property returns a boolean indicating whether or not the camera’s output is vertically flipped. The property can be set while recordings or previews are in progress. The default value is False.

video_denoise

Retrieves or sets whether denoise will be applied to video recordings.

When queried, the video_denoise property returns a boolean value indicating whether or not the camera software will apply a denoise algorithm to video recordings.

When set, the property activates or deactivates the denoise algorithm for video recordings. The property can be set while recordings or previews are in progress. The default value is True.

New in version 1.7.

video_stabilization

Retrieves or sets the video stabilization mode of the camera.

When queried, the video_stabilization property returns a boolean value indicating whether or not the camera attempts to compensate for motion.

When set, the property activates or deactivates video stabilization. The property can be set while recordings or previews are in progress. The default value is False.

Note

The built-in video stabilization only accounts for vertical and horizontal motion, not rotation.

zoom

Retrieves or sets the zoom applied to the camera’s input.

When queried, the zoom property returns an (x, y, w, h) tuple of floating point values ranging from 0.0 to 1.0, indicating the proportion of the image to include in the output (this is also known as the “Region of Interest” or ROI). The default value is (0.0, 0.0, 1.0, 1.0), which indicates that everything should be included. The property can be set while recordings or previews are in progress.