# 9. API - The PiCamera Class¶

The picamera library contains numerous classes, but the primary one that all users are likely to interact with is PiCamera, documented below. With the exception of the contents of the picamera.array module, all classes in picamera are accessible from the package’s top level namespace. In other words, the following import is sufficient to import everything in the library (excepting the contents of picamera.array):

import picamera


## 9.1. PiCamera¶

class picamera.PiCamera(camera_num=0, stereo_mode='none', stereo_decimate=False, resolution=None, framerate=None, sensor_mode=0, led_pin=None, clock_mode='reset', framerate_range=None)[source]

Provides a pure Python interface to the Raspberry Pi’s camera module.

Upon construction, this class initializes the camera. The camera_num parameter (which defaults to 0) selects the camera module that the instance will represent. Only the Raspberry Pi compute module currently supports more than one camera.

The sensor_mode, resolution, framerate, framerate_range, and clock_mode parameters provide initial values for the sensor_mode, resolution, framerate, framerate_range, and clock_mode attributes of the class (these attributes are all relatively expensive to set individually, hence setting them all upon construction is a speed optimization). Please refer to the attribute documentation for more information and default values.

The stereo_mode and stereo_decimate parameters configure dual cameras on a compute module for stereoscopic mode. These parameters can only be set at construction time; they cannot be altered later without closing the PiCamera instance and recreating it. The stereo_mode parameter defaults to 'none' (no stereoscopic mode) but can be set to 'side-by-side' or 'top-bottom' to activate a stereoscopic mode. If the stereo_decimate parameter is True, the resolution of the two cameras will be halved so that the resulting image has the same dimensions as if stereoscopic mode were not being used.

The led_pin parameter can be used to specify the GPIO pin which should be used to control the camera’s LED via the led attribute. If this is not specified, it should default to the correct value for your Pi platform. You should only need to specify this parameter if you are using a custom DeviceTree blob (this is only typical on the Compute Module platform).

No preview or recording is started automatically upon construction. Use the capture() method to capture images, the start_recording() method to begin recording video, or the start_preview() method to start live display of the camera’s input.

Several attributes are provided to adjust the camera’s configuration. Some of these can be adjusted while a recording is running, like brightness. Others, like resolution, can only be adjusted when the camera is idle.

When you are finished with the camera, you should ensure you call the close() method to release the camera resources:

camera = PiCamera()
try:
    # do something with the camera
    pass
finally:
    camera.close()


The class supports the context manager protocol to make this particularly easy (upon exiting the with statement, the close() method is automatically called):

with PiCamera() as camera:
    # do something with the camera
    pass


Changed in version 1.8: Added stereo_mode and stereo_decimate parameters.

Changed in version 1.9: Added resolution, framerate, and sensor_mode parameters.

Changed in version 1.10: Added led_pin parameter.

Changed in version 1.11: Added clock_mode parameter, and permitted setting of resolution as appropriately formatted string.

Changed in version 1.13: Added framerate_range parameter.

add_overlay(source, size=None, format=None, **options)[source]

Adds a static overlay to the preview output.

This method creates a new static overlay using the same rendering mechanism as the preview. Overlays will appear on the Pi’s video output, but will not appear in captures or video recordings. Multiple overlays can exist; each call to add_overlay() returns a new PiOverlayRenderer instance representing the overlay.

The source must be an object that supports the buffer protocol in one of the supported unencoded formats: 'yuv', 'rgb', 'rgba', 'bgr', or 'bgra'. The format can be specified explicitly with the optional format parameter. If not specified, the method will attempt to guess the format based on the length of source and the size (assuming 3 bytes per pixel for RGB, and 4 bytes for RGBA).

The optional size parameter specifies the size of the source image as a (width, height) tuple. If this is omitted or None then the size is assumed to be the same as the camera’s current resolution.

The length of source must take into account that widths are rounded up to the nearest multiple of 32, and heights to the nearest multiple of 16. For example, if size is (1280, 720), and format is 'rgb', then source must be a buffer with length 1280 × 720 × 3 bytes, or 2,764,800 bytes (because 1280 is a multiple of 32 and 720 is a multiple of 16, no extra rounding is required). However, if size is (97, 57), and format is 'rgb', then source must be a buffer with length 128 × 64 × 3 bytes, or 24,576 bytes (pixels beyond column 97 and row 57 in the source will be ignored).

New overlays default to layer 0, whilst the preview defaults to layer 2. Higher numbered layers obscure lower numbered layers, hence new overlays will be invisible (if the preview is running) by default. You can make the new overlay visible either by making any existing preview transparent (with the alpha property) or by moving the overlay into a layer higher than the preview (with the layer property).

All keyword arguments captured in options are passed onto the PiRenderer constructor. All camera properties except resolution and framerate can be modified while overlays exist. The reason for these exceptions is that the overlay has a static resolution and changing the camera’s mode would require resizing of the source.

Warning

If too many overlays are added, the display output will be disabled and a reboot will generally be required to restore the display. Overlays are composited “on the fly”. Hence, a real-time constraint exists wherein for each horizontal line of HDMI output, the content of all source layers must be fetched, resized, converted, and blended to produce the output pixels.

If enough overlays exist (where “enough” is a number dependent on overlay size, display resolution, bus frequency, and several other factors making it unrealistic to calculate in advance), this process breaks down and video output fails. One solution is to add dispmanx_offline=1 to /boot/config.txt to force the use of an off-screen buffer. Be aware that this requires more GPU memory and may reduce the update rate.

New in version 1.8.

Changed in version 1.13: Added format parameter
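The padding rules above can be captured in a small helper; the following is an illustrative sketch (the `overlay_buffer_len` function is not part of picamera) that computes the buffer length add_overlay() expects for a given size and one of the RGB-family unencoded formats:

```python
def overlay_buffer_len(width, height, fmt):
    """Return the source buffer length add_overlay() expects.

    Widths are padded up to a multiple of 32 and heights to a
    multiple of 16 before multiplying by the bytes-per-pixel of
    the chosen unencoded format ('yuv' is omitted here because
    YUV420 uses 1.5 bytes per pixel).
    """
    bytes_per_pixel = {'rgb': 3, 'bgr': 3, 'rgba': 4, 'bgra': 4}[fmt]
    padded_width = (width + 31) // 32 * 32
    padded_height = (height + 15) // 16 * 16
    return padded_width * padded_height * bytes_per_pixel

print(overlay_buffer_len(1280, 720, 'rgb'))  # 2764800 (no padding needed)
print(overlay_buffer_len(97, 57, 'rgb'))     # 24576 (padded to 128 x 64)
```

These values match the worked examples above; a buffer of exactly this length can then be passed as the source argument to add_overlay().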

capture(output, format=None, use_video_port=False, resize=None, splitter_port=0, bayer=False, **options)[source]

Capture an image from the camera, storing it in output.

If output is a string, it will be treated as a filename for a new file which the image will be written to. If output is not a string, but is an object with a write method, it is assumed to be a file-like object and the image data is appended to it (the implementation only assumes the object has a write method; no other methods are required, but flush will be called at the end of capture if it is present). If output is not a string and has no write method, it is assumed to be a writeable object implementing the buffer protocol. In this case, the image data will be written directly to the underlying buffer (which must be large enough to accept the image data).

If format is None (the default), the method will attempt to guess the required image format from the extension of output (if it’s a string), or from the name attribute of output (if it has one). In the case that the format cannot be determined, a PiCameraValueError will be raised.

If format is not None, it must be a string specifying the format that you want the image output in. The format can be a MIME-type or one of the following strings:

• 'jpeg' - Write a JPEG file
• 'png' - Write a PNG file
• 'gif' - Write a GIF file
• 'bmp' - Write a Windows bitmap file
• 'yuv' - Write the raw image data to a file in YUV420 format
• 'rgb' - Write the raw image data to a file in 24-bit RGB format
• 'rgba' - Write the raw image data to a file in 32-bit RGBA format
• 'bgr' - Write the raw image data to a file in 24-bit BGR format
• 'bgra' - Write the raw image data to a file in 32-bit BGRA format
• 'raw' - Deprecated option for raw captures; the format is taken from the deprecated raw_format attribute

The use_video_port parameter controls whether the camera’s image or video port is used to capture images. It defaults to False which means that the camera’s image port is used. This port is slow but produces better quality pictures. If you need rapid capture up to the rate of video frames, set this to True.

When use_video_port is True, the splitter_port parameter specifies the port of the video splitter that the image encoder will be attached to. This defaults to 0 and most users will have no need to specify anything different. This parameter is ignored when use_video_port is False. See MMAL for more information about the video splitter.

If resize is not None (the default), it must be a two-element tuple specifying the width and height that the image should be resized to.

Warning

If resize is specified, or use_video_port is True, Exif metadata will not be included in JPEG output. This is due to an underlying firmware limitation.

Certain file formats accept additional options which can be specified as keyword arguments. Currently, only the 'jpeg' encoder accepts additional options, which are:

• quality - Defines the quality of the JPEG encoder as an integer ranging from 1 to 100. Defaults to 85. Please note that JPEG quality is not a percentage and definitions of quality vary widely.
• restart - Defines the restart interval for the JPEG encoder as a number of JPEG MCUs. The actual restart interval used will be a multiple of the number of MCUs per row in the resulting image.
• thumbnail - Defines the size and quality of the thumbnail to embed in the Exif metadata. Specifying None disables thumbnail generation. Otherwise, specify a tuple of (width, height, quality). Defaults to (64, 48, 35).
• bayer - If True, the raw bayer data from the camera’s sensor is included in the Exif metadata.

Note

The so-called “raw” formats listed above ('yuv', 'rgb', etc.) do not represent the raw bayer data from the camera’s sensor. Rather they provide access to the image data after GPU processing, but before format encoding (JPEG, PNG, etc). Currently, the only method of accessing the raw bayer data is via the bayer parameter described above.

Changed in version 1.0: The resize parameter was added, and raw capture formats can now be specified directly

Changed in version 1.3: The splitter_port parameter was added, and bayer was added as an option for the 'jpeg' format

Changed in version 1.11: Support for buffer outputs was added.
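As a sketch of the file-like output path described above, capturing into an in-memory stream avoids touching the filesystem. The helper name below is ours, not part of the API; it only relies on the capture() call shape, so on a Pi you would pass a PiCamera instance:

```python
import io

def capture_jpeg_stream(camera, resize=None):
    """Capture one JPEG into an in-memory stream and rewind it.

    `camera` is expected to provide a capture() method compatible
    with PiCamera.capture(); the stream is rewound so callers can
    read the image data back immediately.
    """
    stream = io.BytesIO()
    camera.capture(stream, format='jpeg', resize=resize)
    stream.seek(0)
    return stream
```

Because the helper is camera-agnostic, it can also be exercised off-device with any stand-in object providing a compatible capture() method.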

capture_continuous(output, format=None, use_video_port=False, resize=None, splitter_port=0, burst=False, bayer=False, **options)[source]

Capture images continuously from the camera as an infinite iterator.

This method returns an infinite iterator of images captured continuously from the camera. If output is a string, each captured image is stored in a file named after output after substitution of two values with the format() method. Those two values are:

• {counter} - a simple incrementor that starts at 1 and increases by 1 for each image taken
• {timestamp} - a datetime instance

The table below contains several example values of output and the sequence of filenames those values could produce:

| output Value | Filenames | Notes |
| --- | --- | --- |
| 'image{counter}.jpg' | image1.jpg, image2.jpg, image3.jpg, … | |
| 'image{counter:02d}.jpg' | image01.jpg, image02.jpg, image03.jpg, … | |
| 'image{timestamp}.jpg' | image2013-10-05 12:07:12.346743.jpg, image2013-10-05 12:07:32.498539.jpg, … | (1) |
| 'image{timestamp:%H-%M-%S-%f}.jpg' | image12-10-02-561527.jpg, image12-10-14-905398.jpg, … | |
| '{timestamp:%H%M%S}-{counter:03d}.jpg' | 121002-001.jpg, 121013-002.jpg, 121014-003.jpg, … | (2) |

1. Note that because timestamp’s default output includes colons (:), the resulting filenames are not suitable for use on Windows. For this reason (and the fact the default contains spaces) it is strongly recommended you always specify a format when using {timestamp}.
2. You can use both {timestamp} and {counter} in a single format string (multiple times too!) although this tends to be redundant.
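The substitution itself is ordinary str.format() behaviour, so the patterns can be checked without a camera. For example, with an assumed fixed timestamp (at runtime picamera supplies datetime.now() for {timestamp} and an incrementing int for {counter}):

```python
from datetime import datetime

# A fixed timestamp purely for illustration.
ts = datetime(2013, 10, 5, 12, 10, 2, 561527)

print('image{timestamp:%H-%M-%S-%f}.jpg'.format(timestamp=ts))
# image12-10-02-561527.jpg
print('{timestamp:%H%M%S}-{counter:03d}.jpg'.format(timestamp=ts, counter=1))
# 121002-001.jpg
```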

If output is not a string, but has a write method, it is assumed to be a file-like object and each image is simply written to this object sequentially. In this case you will likely either want to write something to the object between the images to distinguish them, or clear the object between iterations. If output is not a string, and has no write method, it is assumed to be a writeable object supporting the buffer protocol; each image is simply written to the buffer sequentially.

The format, use_video_port, splitter_port, resize, and options parameters are the same as in capture().

If use_video_port is False (the default), the burst parameter can be used to make still port captures faster. Specifically, this prevents the preview from switching resolutions between captures which significantly speeds up consecutive captures from the still port. The downside is that this mode currently has several bugs; the major issue is that if captures are performed too quickly, some frames will come back severely underexposed. It is recommended that users avoid the burst parameter unless they absolutely require it and are prepared to work around such issues.

For example, to capture 60 images with a one second delay between them, writing the output to a series of JPEG files named image01.jpg, image02.jpg, etc. one could do the following:

import time
import picamera
with picamera.PiCamera() as camera:
    camera.start_preview()
    try:
        for i, filename in enumerate(
                camera.capture_continuous('image{counter:02d}.jpg')):
            print(filename)
            time.sleep(1)
            if i == 59:
                break
    finally:
        camera.stop_preview()


Alternatively, to capture JPEG frames as fast as possible into an in-memory stream, performing some processing on each stream until some condition is satisfied:

import io
import time
import picamera
with picamera.PiCamera() as camera:
    stream = io.BytesIO()
    for foo in camera.capture_continuous(stream, format='jpeg'):
        # Truncate the stream to the current position (in case
        # prior iterations output a longer image)
        stream.truncate()
        stream.seek(0)
        if process(stream):
            break


Changed in version 1.0: The resize parameter was added, and raw capture formats can now be specified directly

Changed in version 1.3: The splitter_port parameter was added

Changed in version 1.11: Support for buffer outputs was added.

capture_sequence(outputs, format='jpeg', use_video_port=False, resize=None, splitter_port=0, burst=False, bayer=False, **options)[source]

Capture a sequence of consecutive images from the camera.

This method accepts a sequence or iterator of outputs each of which must either be a string specifying a filename for output, or a file-like object with a write method, or a writeable buffer object. For each item in the sequence or iterator of outputs, the camera captures a single image as fast as it can.

The format, use_video_port, splitter_port, resize, and options parameters are the same as in capture(), but format defaults to 'jpeg'. The format is not derived from the filenames in outputs by this method.

If use_video_port is False (the default), the burst parameter can be used to make still port captures faster. Specifically, this prevents the preview from switching resolutions between captures which significantly speeds up consecutive captures from the still port. The downside is that this mode currently has several bugs; the major issue is that if captures are performed too quickly, some frames will come back severely underexposed. It is recommended that users avoid the burst parameter unless they absolutely require it and are prepared to work around such issues.

For example, to capture 3 consecutive images:

import time
import picamera
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture_sequence([
        'image1.jpg',
        'image2.jpg',
        'image3.jpg',
        ])
    camera.stop_preview()


If you wish to capture a large number of images, a list comprehension or generator expression can be used to construct the list of filenames to use:

import time
import picamera
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture_sequence([
        'image%02d.jpg' % i
        for i in range(100)
        ])
    camera.stop_preview()


More complex effects can be obtained by using a generator function to provide the filenames or output objects.

Changed in version 1.0: The resize parameter was added, and raw capture formats can now be specified directly

Changed in version 1.3: The splitter_port parameter was added

Changed in version 1.11: Support for buffer outputs was added.

close()[source]

Finalizes the state of the camera.

After successfully constructing a PiCamera object, you should ensure you call the close() method once you are finished with the camera (e.g. in the finally section of a try..finally block). This method stops all recording and preview activities and releases all resources associated with the camera; this is necessary to prevent GPU memory leaks.

record_sequence(outputs, format='h264', resize=None, splitter_port=1, **options)[source]

Record a sequence of video clips from the camera.

This method accepts a sequence or iterator of outputs each of which must either be a string specifying a filename for output, or a file-like object with a write method.

The method acts as an iterator itself, yielding each item of the sequence in turn. In this way, the caller can control how long to record to each item by only permitting the loop to continue when ready to switch to the next output.

The format, splitter_port, resize, and options parameters are the same as in start_recording(), but format defaults to 'h264'. The format is not derived from the filenames in outputs by this method.

For example, to record 3 consecutive 10-second video clips, writing the output to a series of H.264 files named clip01.h264, clip02.h264, and clip03.h264 one could use the following:

import picamera
with picamera.PiCamera() as camera:
    for filename in camera.record_sequence([
            'clip01.h264',
            'clip02.h264',
            'clip03.h264']):
        print('Recording to %s' % filename)
        camera.wait_recording(10)


Alternatively, a more flexible method of writing the previous example (which is easier to expand to a large number of output files) is by using a generator expression as the input sequence:

import picamera
with picamera.PiCamera() as camera:
    for filename in camera.record_sequence(
            'clip%02d.h264' % i for i in range(3)):
        print('Recording to %s' % filename)
        camera.wait_recording(10)


More advanced techniques are also possible by utilising infinite sequences, such as those generated by itertools.cycle(). In the following example, recording is switched between two in-memory streams. Whilst one stream is recording, the other is being analysed. The script only stops recording when a video recording meets some criteria defined by the process function:

import io
import itertools
import picamera
with picamera.PiCamera() as camera:
    analyse = None
    for stream in camera.record_sequence(
            itertools.cycle((io.BytesIO(), io.BytesIO()))):
        if analyse is not None:
            if process(analyse):
                break
            analyse.seek(0)
            analyse.truncate()
        camera.wait_recording(5)
        analyse = stream


New in version 1.3.

remove_overlay(overlay)[source]

Removes a static overlay from the preview output.

This method removes an overlay which was previously created by add_overlay(). The overlay parameter specifies the PiRenderer instance that was returned by add_overlay().

New in version 1.8.

request_key_frame(splitter_port=1)[source]

Request the encoder generate a key-frame as soon as possible.

When called, the video encoder running on the specified splitter_port will attempt to produce a key-frame (full-image frame) as soon as possible. The splitter_port defaults to 1. Valid values are between 0 and 3 inclusive.

Note

This method is only meaningful for recordings encoded in the H264 format as MJPEG produces full frames for every frame recorded. Furthermore, there’s no guarantee that the next frame will be a key-frame; this is simply a request to produce one as soon as possible after the call.

New in version 1.11.
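As a sketch of when this is useful (the helper below is illustrative, not part of picamera): a streaming server might request a key-frame every few seconds so late-joining clients can sync. Because the helper only relies on the PiCamera method names, it can also be exercised off-device with a stand-in object:

```python
def record_with_keyframes(camera, output, duration, keyframe_interval):
    """Record H.264 for `duration` seconds, requesting a key-frame
    every `keyframe_interval` seconds so late joiners can sync.

    Note: a request is not a guarantee; the encoder produces the
    key-frame as soon as possible after each call.
    """
    camera.start_recording(output, format='h264')
    try:
        elapsed = 0
        while elapsed < duration:
            camera.wait_recording(keyframe_interval)
            camera.request_key_frame()
            elapsed += keyframe_interval
    finally:
        camera.stop_recording()
```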

split_recording(output, splitter_port=1, **options)[source]

Continue the recording in the specified output; close existing output.

When called, the video encoder will wait for the next appropriate split point (an inline SPS header), then will cease writing to the current output (and close it, if it was specified as a filename), and continue writing to the newly specified output.

The output parameter is treated as in the start_recording() method (it can be a string, a file-like object, or a writeable buffer object).

The motion_output parameter can be used to redirect the output of the motion vector data in the same fashion as output. If motion_output is None (the default) then motion vector data will not be redirected and will continue being written to the output specified by the motion_output parameter given to start_recording(). Alternatively, if you only wish to redirect motion vector data, you can set output to None and give a new value for motion_output.

The splitter_port parameter specifies which port of the video splitter the encoder you wish to change outputs is attached to. This defaults to 1 and most users will have no need to specify anything different. Valid values are between 0 and 3 inclusive.

Note that unlike start_recording(), you cannot specify format or other options as these cannot be changed in the middle of recording. Only the new output (and motion_output) can be specified. Furthermore, the format of the recording is currently limited to H264, and inline_headers must be True when start_recording() is called (this is the default).

Changed in version 1.3: The splitter_port parameter was added

Changed in version 1.5: The motion_output parameter was added

Changed in version 1.11: Support for buffer outputs was added.
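A common use of split_recording() is chunked recording: writing fixed-length segments back to back without dropping frames between files. A minimal sketch (the function name and pattern are ours, not part of the API), assuming inline_headers was left at its default of True:

```python
def record_segments(camera, pattern, segment_seconds, count):
    """Record `count` back-to-back H.264 segments named by `pattern`.

    Splitting happens at the next SPS header, so no frames are lost
    between segments.
    """
    camera.start_recording(pattern % 1, format='h264')
    for i in range(2, count + 1):
        camera.wait_recording(segment_seconds)
        camera.split_recording(pattern % i)
    camera.wait_recording(segment_seconds)
    camera.stop_recording()
```

For example, `record_segments(camera, 'seg%02d.h264', 300, 12)` would produce an hour of video as twelve five-minute files.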

start_preview(**options)[source]

Displays the preview overlay.

This method starts a camera preview as an overlay on the Pi’s primary display (HDMI or composite). A PiRenderer instance (more specifically, a PiPreviewRenderer) is constructed with the keyword arguments captured in options, and is returned from the method (this instance is also accessible from the preview attribute for as long as the renderer remains active). By default, the renderer will be opaque and fullscreen.

This means the default preview overrides whatever is currently visible on the display. More specifically, the preview does not rely on a graphical environment like X-Windows (it can run quite happily from a TTY console); it is simply an overlay on the Pi’s video output. To stop the preview and reveal the display again, call stop_preview(). The preview can be started and stopped multiple times during the lifetime of the PiCamera object.

All other camera properties can be modified “live” while the preview is running (e.g. brightness).

Note

Because the default preview typically obscures the screen, ensure you have a means of stopping a preview before starting one. If the preview obscures your interactive console you won’t be able to Alt+Tab back to it as the preview isn’t in a window. If you are in an interactive Python session, simply pressing Ctrl+D usually suffices to terminate the environment, including the camera and its associated preview.

start_recording(output, format=None, resize=None, splitter_port=1, **options)[source]

Start recording video from the camera, storing it in output.

If output is a string, it will be treated as a filename for a new file which the video will be written to. If output is not a string, but is an object with a write method, it is assumed to be a file-like object and the video data is appended to it (the implementation only assumes the object has a write() method; no other methods are required, but flush will be called at the end of recording if it is present). If output is not a string and has no write method, it is assumed to be a writeable object implementing the buffer protocol. In this case, the video frames will be written sequentially to the underlying buffer (which must be large enough to accept all frame data).

If format is None (the default), the method will attempt to guess the required video format from the extension of output (if it’s a string), or from the name attribute of output (if it has one). In the case that the format cannot be determined, a PiCameraValueError will be raised.

If format is not None, it must be a string specifying the format that you want the video output in. The format can be a MIME-type or one of the following strings:

• 'h264' - Write an H.264 video stream
• 'mjpeg' - Write an M-JPEG video stream
• 'yuv' - Write the raw video data to a file in YUV420 format
• 'rgb' - Write the raw video data to a file in 24-bit RGB format
• 'rgba' - Write the raw video data to a file in 32-bit RGBA format
• 'bgr' - Write the raw video data to a file in 24-bit BGR format
• 'bgra' - Write the raw video data to a file in 32-bit BGRA format

If resize is not None (the default), it must be a two-element tuple specifying the width and height that the video recording should be resized to. This is particularly useful for recording video using the full resolution of the camera sensor (which is not possible in H.264 without down-sizing the output).

The splitter_port parameter specifies the port of the built-in splitter that the video encoder will be attached to. This defaults to 1 and most users will have no need to specify anything different. If you wish to record multiple (presumably resized) streams simultaneously, specify a value between 0 and 3 inclusive for this parameter, ensuring that you do not specify a port that is currently in use.

Certain formats accept additional options which can be specified as keyword arguments. The 'h264' format accepts the following additional options:

• profile - The H.264 profile to use for encoding. Defaults to ‘high’, but can be one of ‘baseline’, ‘main’, ‘extended’, ‘high’, or ‘constrained’.
• level - The H.264 level to use for encoding. Defaults to ‘4’, but can be any H.264 level up to ‘4.2’.
• intra_period - The key frame rate (the rate at which I-frames are inserted in the output). Defaults to None, but can be any 32-bit integer value representing the number of frames between successive I-frames. The special value 0 causes the encoder to produce a single initial I-frame, and then only P-frames subsequently. Note that split_recording() will fail in this mode.
• intra_refresh - The key frame format (the way in which I-frames will be inserted into the output stream). Defaults to None, but can be one of ‘cyclic’, ‘adaptive’, ‘both’, or ‘cyclicrows’.
• inline_headers - When True, specifies that the encoder should output SPS/PPS headers within the stream to ensure GOPs (groups of pictures) are self describing. This is important for streaming applications where the client may wish to seek within the stream, and enables the use of split_recording(). Defaults to True if not specified.
• sei - When True, specifies the encoder should include “Supplemental Enhancement Information” within the output stream. Defaults to False if not specified.
• sps_timing - When True the encoder includes the camera’s framerate in the SPS header. Defaults to False if not specified.
• motion_output - Indicates the output destination for motion vector estimation data. When None (the default), motion data is not output. Otherwise, this can be a filename string, a file-like object, or a writeable buffer object (as with the output parameter).

All encoded formats accept the following additional options:

• bitrate - The bitrate at which video will be encoded. Defaults to 17000000 (17Mbps) if not specified. The maximum value depends on the selected H.264 level and profile. Bitrate 0 indicates the encoder should not use bitrate control (the encoder is limited by the quality only).
• quality - Specifies the quality that the encoder should attempt to maintain. For the 'h264' format, use values between 10 and 40 where 10 is extremely high quality, and 40 is extremely low (20-25 is usually a reasonable range for H.264 encoding). For the mjpeg format, use JPEG quality values between 1 and 100 (where higher values are higher quality). Quality 0 is special and seems to be a “reasonable quality” default.
• quantization - Deprecated alias for quality.

Changed in version 1.0: The resize parameter was added, and 'mjpeg' was added as a recording format

Changed in version 1.3: The splitter_port parameter was added

Changed in version 1.5: The quantization parameter was deprecated in favor of quality, and the motion_output parameter was added.

Changed in version 1.11: Support for buffer outputs was added.
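Putting the pieces together, a fixed-length recording is usually paired with wait_recording() rather than time.sleep() so encoder errors surface promptly. A sketch (the helper name is ours, not part of the API):

```python
def record_for(camera, output, seconds, **options):
    """Record to `output` for `seconds`.

    wait_recording() raises immediately if the encoder hits an
    error (e.g. out of disk space), unlike time.sleep() which
    would hide the error until stop_recording().
    """
    camera.start_recording(output, **options)
    try:
        camera.wait_recording(seconds)
    finally:
        camera.stop_recording()
```

On a Pi this would typically be called as `record_for(camera, 'video.h264', 60, format='h264', quality=23)`.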

stop_preview()[source]

Hides the preview overlay.

If start_preview() has previously been called, this method shuts down the preview display which generally results in the underlying display becoming visible again. If a preview is not currently running, no exception is raised - the method will simply do nothing.

stop_recording(splitter_port=1)[source]

Stop recording video from the camera.

After calling this method the video encoder will be shut down and output will stop being written to the file-like object specified with start_recording(). If an error occurred during recording and wait_recording() has not been called since the error then this method will raise the exception.

The splitter_port parameter specifies which port of the video splitter the encoder you wish to stop is attached to. This defaults to 1 and most users will have no need to specify anything different. Valid values are between 0 and 3 inclusive.

Changed in version 1.3: The splitter_port parameter was added

wait_recording(timeout=0, splitter_port=1)[source]

Wait on the video encoder for timeout seconds.

It is recommended that this method is called while recording to check for exceptions. If an error occurs during recording (for example out of disk space) the recording will stop, but an exception will only be raised when the wait_recording() or stop_recording() methods are called.

If timeout is 0 (the default) the function will immediately return (or raise an exception if an error has occurred).

The splitter_port parameter specifies which port of the video splitter the encoder you wish to wait on is attached to. This defaults to 1 and most users will have no need to specify anything different. Valid values are between 0 and 3 inclusive.

Changed in version 1.3: The splitter_port parameter was added

ISO

Retrieves or sets the apparent ISO setting of the camera.

Deprecated since version 1.8: Please use the iso attribute instead.

analog_gain

Retrieves the current analog gain of the camera.

When queried, this property returns the analog gain currently being used by the camera. The value represents the analog gain of the sensor prior to digital conversion. The value is returned as a Fraction instance.

New in version 1.6.

annotate_background

Controls what background is drawn behind the annotation.

The annotate_background attribute specifies if a background will be drawn behind the annotation text and, if so, what color it will be. The value is specified as a Color or None if no background should be drawn. The default is None.

Note

For backward compatibility purposes, the value False will be treated as None, and the value True will be treated as the color black. The “truthiness” of the values returned by the attribute is backward compatible although the values themselves are not.

New in version 1.8.

Changed in version 1.10: In prior versions this was a bool value with True representing a black background.

annotate_foreground

Controls the color of the annotation text.

The annotate_foreground attribute specifies, partially, the color of the annotation text. The value is specified as a Color. The default is white.

Note

The underlying firmware does not directly support setting all components of the text color, only the Y’ component of a Y’UV tuple. This is roughly (but not precisely) analogous to the “brightness” of a color, so you may choose to think of this as setting how bright the annotation text will be relative to its background. In order to specify just the Y’ component when setting this attribute, you may choose to construct the Color instance as follows:

camera.annotate_foreground = picamera.Color(y=0.2, u=0, v=0)


New in version 1.10.

annotate_frame_num

Controls whether the current frame number is drawn as an annotation.

The annotate_frame_num attribute is a bool indicating whether or not the current frame number is rendered as an annotation, similar to annotate_text. The default is False.

New in version 1.8.

annotate_text

Retrieves or sets a text annotation for all output.

When queried, the annotate_text property returns the current annotation (if no annotation has been set, this is simply a blank string).

When set, the property immediately applies the annotation to the preview (if it is running) and to any future captures or video recording. Strings longer than 255 characters, or strings containing non-ASCII characters will raise a PiCameraValueError. The default value is ''.

Changed in version 1.8: Text annotations can now be 255 characters long. The prior limit was 32 characters.

annotate_text_size

Controls the size of the annotation text.

The annotate_text_size attribute is an int which determines how large the annotation text will appear on the display. Valid values are in the range 6 to 160, inclusive. The default is 32.

New in version 1.10.

awb_gains

Gets or sets the auto-white-balance gains of the camera.

When queried, this attribute returns a tuple of values representing the (red, blue) balance of the camera. The red and blue values are returned as Fraction instances. The values will be between 0.0 and 8.0.

When set, this attribute adjusts the camera’s auto-white-balance gains. The property can be specified as a single value in which case both red and blue gains will be adjusted equally, or as a (red, blue) tuple. Values can be specified as an int, float or Fraction and each gain must be between 0.0 and 8.0. Typical values for the gains are between 0.9 and 1.9. The property can be set while recordings or previews are in progress.

Note

This attribute only has an effect when awb_mode is set to 'off'. Also note that even with AWB disabled, some attributes (specifically still_stats and drc_strength) can cause AWB re-calculations.

Changed in version 1.6: Prior to version 1.6, this attribute was write-only.

awb_mode

Retrieves or sets the auto-white-balance mode of the camera.

When queried, the awb_mode property returns a string representing the auto white balance setting of the camera. The possible values can be obtained from the PiCamera.AWB_MODES attribute, and are as follows:

• 'off'
• 'auto'
• 'sunlight'
• 'cloudy'
• 'shade'
• 'tungsten'
• 'fluorescent'
• 'incandescent'
• 'flash'
• 'horizon'

When set, the property adjusts the camera’s auto-white-balance mode. The property can be set while recordings or previews are in progress. The default value is 'auto'.

Note

AWB mode 'off' is special: this disables the camera’s automatic white balance permitting manual control of the white balance via the awb_gains property. However, even with AWB disabled, some attributes (specifically still_stats and drc_strength) can cause AWB re-calculations.

brightness

Retrieves or sets the brightness setting of the camera.

When queried, the brightness property returns the brightness level of the camera as an integer between 0 and 100. When set, the property adjusts the brightness of the camera. Brightness can be adjusted while previews or recordings are in progress. The default value is 50.

clock_mode

Retrieves or sets the mode of the camera’s clock.

This is an advanced property which can be used to control the nature of the frame timestamps available from the frame property. When this is 'reset' (the default) each frame’s timestamp will be relative to the start of the recording. When this is 'raw', each frame’s timestamp will be relative to the last initialization of the camera.

The initial value of this property can be specified with the clock_mode parameter in the PiCamera constructor, and will default to 'reset' if not specified.

New in version 1.11.

closed

Returns True if the close() method has been called.

color_effects

Retrieves or sets the current color effect applied by the camera.

When queried, the color_effects property either returns None which indicates that the camera is using normal color settings, or a (u, v) tuple where u and v are integer values between 0 and 255.

When set, the property changes the color effect applied by the camera. The property can be set while recordings or previews are in progress. For example, to make the image black and white set the value to (128, 128). The default value is None.

contrast

Retrieves or sets the contrast setting of the camera.

When queried, the contrast property returns the contrast level of the camera as an integer between -100 and 100. When set, the property adjusts the contrast of the camera. Contrast can be adjusted while previews or recordings are in progress. The default value is 0.

crop

Retrieves or sets the zoom applied to the camera’s input.

Deprecated since version 1.8: Please use the zoom attribute instead.

digital_gain

Retrieves the current digital gain of the camera.

When queried, this property returns the digital gain currently being used by the camera. The value represents the digital gain the camera applies after conversion of the sensor’s analog output. The value is returned as a Fraction instance.

New in version 1.6.

drc_strength

Retrieves or sets the dynamic range compression strength of the camera.

When queried, the drc_strength property returns a string indicating the amount of dynamic range compression the camera applies to images.

When set, the attribute adjusts the strength of the dynamic range compression applied to the camera’s output. Valid values are given in the list below:

• 'off'
• 'low'
• 'medium'
• 'high'

The default value is 'off'. All possible values for the attribute can be obtained from the PiCamera.DRC_STRENGTHS attribute.

Warning

Enabling DRC will override fixed white balance gains (set via awb_gains and awb_mode).

New in version 1.6.

exif_tags

Holds a mapping of the Exif tags to apply to captured images.

Note

Please note that Exif tagging is only supported with the jpeg format.

By default several Exif tags are automatically applied to any images taken with the capture() method: IFD0.Make (which is set to RaspberryPi), IFD0.Model (which is set to RP_OV5647), and three timestamp tags: IFD0.DateTime, EXIF.DateTimeOriginal, and EXIF.DateTimeDigitized which are all set to the current date and time just before the picture is taken.

If you wish to set additional Exif tags, or override any of the aforementioned tags, simply add entries to the exif_tags map before calling capture(). For example:

camera.exif_tags['IFD0.Copyright'] = 'Copyright (c) 2013 Foo Industries'


The Exif standard mandates ASCII encoding for all textual values, hence strings containing non-ASCII characters will cause an encoding error to be raised when capture() is called. If you wish to set binary values, use a bytes() value:

camera.exif_tags['EXIF.UserComment'] = b'Something containing\x00NULL characters'


Warning

Binary Exif values are currently ignored; this appears to be a libmmal or firmware bug.

You may also specify datetime, integer, or float values, all of which will be converted to appropriate ASCII strings (datetime values are formatted as YYYY:MM:DD HH:MM:SS in accordance with the Exif standard).
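For instance, to override one of the automatic timestamps, the mandated format can be produced with strftime() (the commented assignment assumes a PiCamera instance named camera):

```python
from datetime import datetime

# Exif timestamps use colons in the date portion.
stamp = datetime(2013, 4, 1, 12, 30, 5).strftime('%Y:%m:%d %H:%M:%S')
# camera.exif_tags['EXIF.DateTimeOriginal'] = stamp
```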

The currently supported Exif tags are:

Group Tags
IFD0, IFD1 ImageWidth, ImageLength, BitsPerSample, Compression, PhotometricInterpretation, ImageDescription, Make, Model, StripOffsets, Orientation, SamplesPerPixel, RowsPerStrip, StripByteCounts, XResolution, YResolution, PlanarConfiguration, ResolutionUnit, TransferFunction, Software, DateTime, Artist, WhitePoint, PrimaryChromaticities, JPEGInterchangeFormat, JPEGInterchangeFormatLength, YCbCrCoefficients, YCbCrSubSampling, YCbCrPositioning, ReferenceBlackWhite, Copyright
EXIF ExposureTime, FNumber, ExposureProgram, SpectralSensitivity, ISOSpeedRatings, OECF, ExifVersion, DateTimeOriginal, DateTimeDigitized, ComponentsConfiguration, CompressedBitsPerPixel, ShutterSpeedValue, ApertureValue, BrightnessValue, ExposureBiasValue, MaxApertureValue, SubjectDistance, MeteringMode, LightSource, Flash, FocalLength, SubjectArea, MakerNote, UserComment, SubSecTime, SubSecTimeOriginal, SubSecTimeDigitized, FlashpixVersion, ColorSpace, PixelXDimension, PixelYDimension, RelatedSoundFile, FlashEnergy, SpatialFrequencyResponse, FocalPlaneXResolution, FocalPlaneYResolution, FocalPlaneResolutionUnit, SubjectLocation, ExposureIndex, SensingMethod, FileSource, SceneType, CFAPattern, CustomRendered, ExposureMode, WhiteBalance, DigitalZoomRatio, FocalLengthIn35mmFilm, SceneCaptureType, GainControl, Contrast, Saturation, Sharpness, DeviceSettingDescription, SubjectDistanceRange, ImageUniqueID
GPS GPSVersionID, GPSLatitudeRef, GPSLatitude, GPSLongitudeRef, GPSLongitude, GPSAltitudeRef, GPSAltitude, GPSTimeStamp, GPSSatellites, GPSStatus, GPSMeasureMode, GPSDOP, GPSSpeedRef, GPSSpeed, GPSTrackRef, GPSTrack, GPSImgDirectionRef, GPSImgDirection, GPSMapDatum, GPSDestLatitudeRef, GPSDestLatitude, GPSDestLongitudeRef, GPSDestLongitude, GPSDestBearingRef, GPSDestBearing, GPSDestDistanceRef, GPSDestDistance, GPSProcessingMethod, GPSAreaInformation, GPSDateStamp, GPSDifferential
EINT InteroperabilityIndex, InteroperabilityVersion, RelatedImageFileFormat, RelatedImageWidth, RelatedImageLength
exposure_compensation

Retrieves or sets the exposure compensation level of the camera.

When queried, the exposure_compensation property returns an integer value between -25 and 25 indicating the exposure level of the camera. Larger values result in brighter images.

When set, the property adjusts the camera’s exposure compensation level. Each increment represents 1/6th of a stop. Hence setting the attribute to 6 increases exposure by 1 stop. The property can be set while recordings or previews are in progress. The default value is 0.
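Since each unit is 1/6 of a stop, a small helper (hypothetical, not part of picamera) can convert whole or fractional stops to a compensation value:

```python
def compensation_for_stops(stops):
    """Convert an exposure adjustment in stops to an
    exposure_compensation value (each unit is 1/6 of a stop).
    The result must lie in the -25 to 25 range the camera accepts."""
    value = int(round(stops * 6))
    if not -25 <= value <= 25:
        raise ValueError('adjustment out of range')
    return value

# camera.exposure_compensation = compensation_for_stops(1)  # +1 stop
```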

exposure_mode

Retrieves or sets the exposure mode of the camera.

When queried, the exposure_mode property returns a string representing the exposure setting of the camera. The possible values can be obtained from the PiCamera.EXPOSURE_MODES attribute, and are as follows:

• 'off'
• 'auto'
• 'night'
• 'nightpreview'
• 'backlight'
• 'spotlight'
• 'sports'
• 'snow'
• 'beach'
• 'verylong'
• 'fixedfps'
• 'antishake'
• 'fireworks'

When set, the property adjusts the camera’s exposure mode. The property can be set while recordings or previews are in progress. The default value is 'auto'.

Note

Exposure mode 'off' is special: this disables the camera’s automatic gain control, fixing the values of digital_gain and analog_gain.

Please note that these properties are not directly settable (although they can be influenced by setting iso prior to fixing the gains), and default to low values when the camera is first initialized. Therefore it is important to let them settle on higher values before disabling automatic gain control otherwise all frames captured will appear black.

exposure_speed

Retrieves the current shutter speed of the camera.

When queried, this property returns the shutter speed currently being used by the camera. If you have set shutter_speed to a non-zero value, then exposure_speed and shutter_speed should be equal. However, if shutter_speed is set to 0 (auto), then you can read the actual shutter speed being used from this attribute. The value is returned as an integer representing a number of microseconds. This is a read-only property.

New in version 1.6.

flash_mode

Retrieves or sets the flash mode of the camera.

When queried, the flash_mode property returns a string representing the flash setting of the camera. The possible values can be obtained from the PiCamera.FLASH_MODES attribute, and are as follows:

• 'off'
• 'auto'
• 'on'
• 'redeye'
• 'fillin'
• 'torch'

When set, the property adjusts the camera’s flash mode. The property can be set while recordings or previews are in progress. The default value is 'off'.

Note

You must define which GPIO pins the camera is to use for flash and privacy indicators. This is done within the Device Tree configuration which is considered an advanced topic. Specifically, you need to define pins FLASH_0_ENABLE and optionally FLASH_0_INDICATOR (for the privacy indicator). More information can be found in this recipe.

New in version 1.10.

frame

Retrieves information about the current frame recorded from the camera.

When video recording is active (after a call to start_recording()), this attribute will return a PiVideoFrame tuple containing information about the current frame that the camera is recording.

If multiple video recordings are currently in progress (after multiple calls to start_recording() with different values for the splitter_port parameter), which encoder’s frame information is returned is arbitrary. If you require information from a specific encoder, you will need to extract it from _encoders explicitly.

Querying this property when the camera is not recording will result in an exception.

Note

There is a small window of time when querying this attribute will return None after calling start_recording(). If this attribute returns None, this means that the video encoder has been initialized, but the camera has not yet returned any frames.

framerate

Retrieves or sets the framerate at which video-port based image captures, video recordings, and previews will run.

When queried, the framerate property returns the rate at which the camera’s video and preview ports will operate as a Fraction instance (which can be easily converted to an int or float). If framerate_range has been set, then framerate will be 0 which indicates that a dynamic range of framerates is being used.

Note

For backwards compatibility, a derivative of the Fraction class is actually used which permits the value to be treated as a tuple of (numerator, denominator).

Setting and retrieving framerate as a (numerator, denominator) tuple is deprecated and will be removed in 2.0. Please use a Fraction instance instead (which is just as accurate and also permits direct use with math operators).

When set, the property configures the camera so that the next call to recording and previewing methods will use the new framerate. Setting this property implicitly sets framerate_range so that the low and high values are equal to the new framerate. The framerate can be specified as an int, float, Fraction, or a (numerator, denominator) tuple. For example, the following definitions are all equivalent:

from fractions import Fraction

camera.framerate = 30
camera.framerate = 30 / 1
camera.framerate = Fraction(30, 1)
camera.framerate = (30, 1) # deprecated


The camera must not be closed, and no recording must be active when the property is set.

Note

This attribute, in combination with resolution, determines the mode that the camera operates in. The actual sensor framerate and resolution used by the camera is influenced, but not directly set, by this property. See sensor_mode for more information.

The initial value of this property can be specified with the framerate parameter in the PiCamera constructor, and will default to 30 if not specified.

framerate_delta

Retrieves or sets a fractional amount that is added to the camera’s framerate for the purpose of minor framerate adjustments.

When queried, the framerate_delta property returns the amount that the camera’s framerate has been adjusted. This defaults to 0 (so the camera’s framerate is the actual framerate used).

When set, the property adjusts the camera’s framerate on the fly. The property can be set while recordings or previews are in progress. Thus the framerate used by the camera is actually framerate + framerate_delta.

Note

Framerate deltas can be fractional, with adjustments as small as 1/256th of an fps possible (finer adjustments will be rounded). With an appropriately tuned PID controller, this can be used to achieve synchronization between the camera framerate and other devices.

If the new framerate demands a mode switch (such as moving between a low framerate and a high framerate mode), currently active recordings may drop a frame. This should only happen when specifying quite large deltas, or when framerate is at the boundary of a sensor mode (e.g. 49fps).

The framerate delta can be specified as an int, float, Fraction or a (numerator, denominator) tuple. For example, the following definitions are all equivalent:

from fractions import Fraction

camera.framerate_delta = 0.5
camera.framerate_delta = 1 / 2 # in python 3
camera.framerate_delta = Fraction(1, 2)
camera.framerate_delta = (1, 2) # deprecated


Note

This property is implicitly reset to 0 when framerate or framerate_range is set. When framerate is 0 (indicating that framerate_range is set), this property cannot be used (there would be little point in making fractional adjustments to the framerate when the framerate itself is variable).

New in version 1.11.

framerate_range

Retrieves or sets a range between which the camera’s framerate is allowed to float.

When queried, the framerate_range property returns a namedtuple() derivative with low and high components (index 0 and 1 respectively) which specify the limits of the permitted framerate range.

When set, the property configures the camera so that the next call to recording and previewing methods will use the new framerate range. Setting this property will implicitly set the framerate property to 0 (indicating that a dynamic range of framerates is in use by the camera).

Note

Use of this property prevents use of framerate_delta (there would be little point in making fractional adjustments to the framerate when the framerate itself is variable).

The low and high framerates can be specified as int, float, or Fraction values. For example, the following definitions are all equivalent:

from fractions import Fraction

camera.framerate_range = (0.16666, 30)
camera.framerate_range = (Fraction(1, 6), 30 / 1)
camera.framerate_range = (Fraction(1, 6), Fraction(30, 1))


The camera must not be closed, and no recording must be active when the property is set.

Note

This attribute, like framerate, determines the mode that the camera operates in. The actual sensor framerate and resolution used by the camera is influenced, but not directly set, by this property. See sensor_mode for more information.

New in version 1.13.

hflip

Retrieves or sets whether the camera’s output is horizontally flipped.

When queried, the hflip property returns a boolean indicating whether or not the camera’s output is horizontally flipped. The property can be set while recordings or previews are in progress. The default value is False.

image_denoise

Retrieves or sets whether denoise will be applied to image captures.

When queried, the image_denoise property returns a boolean value indicating whether or not the camera software will apply a denoise algorithm to image captures.

When set, the property activates or deactivates the denoise algorithm for image captures. The property can be set while recordings or previews are in progress. The default value is True.

New in version 1.7.

image_effect

Retrieves or sets the current image effect applied by the camera.

When queried, the image_effect property returns a string representing the effect the camera will apply to captured video. The possible values can be obtained from the PiCamera.IMAGE_EFFECTS attribute, and are as follows:

• 'none'
• 'negative'
• 'solarize'
• 'sketch'
• 'denoise'
• 'emboss'
• 'oilpaint'
• 'hatch'
• 'gpen'
• 'pastel'
• 'watercolor'
• 'film'
• 'blur'
• 'saturation'
• 'colorswap'
• 'washedout'
• 'posterise'
• 'colorpoint'
• 'colorbalance'
• 'cartoon'
• 'deinterlace1'
• 'deinterlace2'

When set, the property changes the effect applied by the camera. The property can be set while recordings or previews are in progress, but only certain effects work while recording video (notably 'negative' and 'solarize'). The default value is 'none'.

image_effect_params

Retrieves or sets the parameters for the current effect.

When queried, the image_effect_params property either returns None (for effects which have no configurable parameters, or if no parameters have been configured), or a tuple of numeric values up to six elements long.

When set, the property changes the parameters of the current effect as a sequence of numbers, or a single number. Attempting to set parameters on an effect which does not support parameters, or providing an incompatible set of parameters for an effect will raise a PiCameraValueError exception.

The effects which have parameters, and what combinations those parameters can take is as follows:

Effect Parameters Description
'solarize' yuv, x0, y0, y1, y2 yuv controls whether data is processed as RGB (0) or YUV (1). Input values from 0 to x0 - 1 are remapped linearly onto the range 0 to y0. Values from x0 to 255 are remapped linearly onto the range y1 to y2.
x0, y0, y1, y2 Same as above, but yuv defaults to 0 (process as RGB).
yuv Same as above, but x0, y0, y1, y2 default to 128, 128, 128, 0 respectively.
'colorpoint' quadrant quadrant specifies which quadrant of the U/V space to retain chroma from: 0=green, 1=red/yellow, 2=blue, 3=purple. There is no default; this effect does nothing until parameters are set.
'colorbalance' lens, r, g, b, u, v lens specifies the lens shading strength (0.0 to 256.0, where 0.0 indicates lens shading has no effect). r, g, b are multipliers for their respective color channels (0.0 to 256.0). u and v are offsets added to the U/V plane (0 to 255).
lens, r, g, b Same as above, but u and v default to 0.
lens, r, b Same as above, but g also defaults to 1.0.
'colorswap' dir If dir is 0, swap RGB to BGR. If dir is 1, swap RGB to BRG.
'posterise' steps Control the quantization steps for the image. Valid values are 2 to 32, and the default is 4.
'blur' size Specifies the size of the kernel. Valid values are 1 or 2.
'film' strength, u, v strength specifies the strength of effect. u and v are offsets added to the U/V plane (0 to 255).
'watercolor' u, v u and v specify offsets to add to the U/V plane (0 to 255).
No parameters indicates no U/V effect.

New in version 1.8.

iso

Retrieves or sets the apparent ISO setting of the camera.

When queried, the iso property returns the ISO setting of the camera, a value which represents the sensitivity of the camera to light. Lower values (e.g. 100) imply less sensitivity than higher values (e.g. 400 or 800). Lower sensitivities tend to produce less “noisy” (smoother) images, but operate poorly in low light conditions.

When set, the property adjusts the sensitivity of the camera (by adjusting the analog_gain and digital_gain). Valid values are between 0 (auto) and 1600. The actual value used when iso is explicitly set will be one of the following values (whichever is closest): 100, 200, 320, 400, 500, 640, 800.

On the V1 camera module, non-zero ISO values attempt to fix overall gain at various levels. For example, ISO 100 attempts to provide an overall gain of 1.0, ISO 200 attempts to provide overall gain of 2.0, etc. The algorithm prefers analog gain over digital gain to reduce noise.

On the V2 camera module, ISO 100 attempts to produce overall gain of ~1.84, and ISO 800 attempts to produce overall gain of ~14.72 (the V2 camera module was calibrated against the ISO film speed standard).

The attribute can be adjusted while previews or recordings are in progress. The default value is 0 which means automatically determine a value according to image-taking conditions.

Note

Some users on the Pi camera forum have noted that higher ISO values than 800 (specifically up to 1600) can be achieved in certain conditions with exposure_mode set to 'sports' and iso set to 0. It doesn’t appear to be possible to manually request an ISO setting higher than 800, but the picamera library will permit settings up to 1600 in case the underlying firmware permits such settings in particular circumstances.

Note

Certain exposure_mode values override the ISO setting. For example, 'off' fixes analog_gain and digital_gain entirely, preventing this property from adjusting them when set.
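Since an explicitly set value is snapped to the nearest of the levels listed above, a hypothetical helper (not part of picamera) can make that rounding explicit:

```python
# Values the firmware will actually use when iso is set explicitly.
SUPPORTED_ISO = (100, 200, 320, 400, 500, 640, 800)

def closest_supported_iso(value):
    """Snap a requested sensitivity to the nearest supported level."""
    return min(SUPPORTED_ISO, key=lambda s: abs(s - value))

# camera.iso = closest_supported_iso(350)
```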

led

Sets the state of the camera’s LED via GPIO.

If a GPIO library is available (only RPi.GPIO is currently supported), and if the python process has the necessary privileges (typically this means running as root via sudo), this property can be used to set the state of the camera’s LED as a boolean value (True is on, False is off).

Note

This is a write-only property. While it can be used to control the camera’s LED, you cannot query the state of the camera’s LED using this property.

Note

At present, the camera’s LED cannot be controlled on the Pi 3 (the GPIOs used to control the camera LED were re-routed to GPIO expander on the Pi 3).

Warning

There are circumstances in which the camera firmware may override an existing LED setting. For example, in the case that the firmware resets the camera (as can happen with a CSI-2 timeout), the LED may also be reset. If you wish to guarantee that the LED remain off at all times, you may prefer to use the disable_camera_led option in config.txt (this has the added advantage that sudo privileges and GPIO access are not required, at least for LED control).

meter_mode

Retrieves or sets the metering mode of the camera.

When queried, the meter_mode property returns the method by which the camera determines the exposure as one of the following strings:

• 'average'
• 'spot'
• 'backlit'
• 'matrix'

When set, the property adjusts the camera’s metering mode. All modes set up two regions: a center region, and an outer region. The major difference between each mode is the size of the center region. The 'backlit' mode has the largest central region (30% of the width), while 'spot' has the smallest (10% of the width).

The property can be set while recordings or previews are in progress. The default value is 'average'. All possible values for the attribute can be obtained from the PiCamera.METER_MODES attribute.

overlays

Retrieves all active PiRenderer overlays.

If no overlays are currently active, overlays will return an empty iterable. Otherwise, it will return an iterable of PiRenderer instances which are currently acting as overlays. Note that the preview renderer is an exception to this: it is not included as an overlay despite being derived from PiRenderer.

New in version 1.8.

preview

Retrieves the PiRenderer displaying the camera preview.

If no preview is currently active, preview will return None. Otherwise, it will return the instance of PiRenderer which is currently connected to the camera’s preview port for rendering what the camera sees. You can use the attributes of the PiRenderer class to configure the appearance of the preview. For example, to make the preview semi-transparent:

import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    camera.preview.alpha = 128


New in version 1.8.

preview_alpha

Retrieves or sets the opacity of the preview window.

Deprecated since version 1.8: Please use the alpha attribute of the preview object instead.

preview_fullscreen

Retrieves or sets full-screen for the preview window.

Deprecated since version 1.8: Please use the fullscreen attribute of the preview object instead.

preview_layer

Retrieves or sets the layer of the preview window.

Deprecated since version 1.8: Please use the layer attribute of the preview object instead.

preview_window

Retrieves or sets the size of the preview window.

Deprecated since version 1.8: Please use the window attribute of the preview object instead.

previewing

Returns True if the start_preview() method has been called, and no stop_preview() call has been made yet.

Deprecated since version 1.8: Test whether preview is None instead.

raw_format

Retrieves or sets the raw format of the camera’s ports.

Deprecated since version 1.0: Please use 'yuv' or 'rgb' directly as a format in the various capture methods instead.

recording

Returns True if the start_recording() method has been called, and no stop_recording() call has been made yet.

resolution

Retrieves or sets the resolution at which image captures, video recordings, and previews will be captured.

When queried, the resolution property returns the resolution at which the camera will operate as a tuple of (width, height) measured in pixels. This is the resolution that the capture() method will produce images at, and the resolution that start_recording() will produce videos at.

When set, the property configures the camera so that the next call to these methods will use the new resolution. The resolution can be specified as a (width, height) tuple, as a string formatted 'WIDTHxHEIGHT', or as a string containing a commonly recognized display resolution name (e.g. “VGA”, “HD”, “1080p”, etc). For example, the following definitions are all equivalent:

camera.resolution = (1280, 720)
camera.resolution = '1280x720'
camera.resolution = '1280 x 720'
camera.resolution = 'HD'
camera.resolution = '720p'


The camera must not be closed, and no recording must be active when the property is set.

Note

This attribute, in combination with framerate, determines the mode that the camera operates in. The actual sensor framerate and resolution used by the camera is influenced, but not directly set, by this property. See sensor_mode for more information.

The initial value of this property can be specified with the resolution parameter in the PiCamera constructor, and will default to the display’s resolution or 1280x720 if the display has been disabled (with tvservice -o).

Changed in version 1.11: Resolution permitted to be set as a string. Preview resolution added as separate property.

revision

Returns a string representing the revision of the Pi’s camera module. At the time of writing, the string returned is 'ov5647' for the V1 module, and 'imx219' for the V2 module.

rotation

Retrieves or sets the current rotation of the camera’s image.

When queried, the rotation property returns the rotation applied to the image. Valid values are 0, 90, 180, and 270.

When set, the property changes the rotation applied to the camera’s input. The property can be set while recordings or previews are in progress. The default value is 0.

saturation

Retrieves or sets the saturation setting of the camera.

When queried, the saturation property returns the color saturation of the camera as an integer between -100 and 100. When set, the property adjusts the saturation of the camera. Saturation can be adjusted while previews or recordings are in progress. The default value is 0.

sensor_mode

Retrieves or sets the input mode of the camera’s sensor.

This is an advanced property which can be used to control the camera’s sensor mode. By default, mode 0 is used which allows the camera to automatically select an input mode based on the requested resolution and framerate. Valid values are currently between 0 and 7. The set of valid sensor modes (along with the heuristic used to select one automatically) are detailed in the Sensor Modes section of the documentation.

Note

At the time of writing, setting this property does nothing unless the camera has been initialized with a sensor mode other than 0. Furthermore, some mode transitions appear to require setting the property twice (in a row). This appears to be a firmware limitation.

The initial value of this property can be specified with the sensor_mode parameter in the PiCamera constructor, and will default to 0 if not specified.

New in version 1.9.

sharpness

Retrieves or sets the sharpness setting of the camera.

When queried, the sharpness property returns the sharpness level of the camera (a measure of the amount of post-processing to reduce or increase image sharpness) as an integer between -100 and 100. When set, the property adjusts the sharpness of the camera. Sharpness can be adjusted while previews or recordings are in progress. The default value is 0.

shutter_speed

Retrieves or sets the shutter speed of the camera in microseconds.

When queried, the shutter_speed property returns the shutter speed of the camera in microseconds, or 0, which indicates that the speed will be automatically determined by the auto-exposure algorithm. Faster shutter times naturally require greater amounts of illumination and vice versa.

When set, the property adjusts the shutter speed of the camera, which most obviously affects the illumination of subsequently captured images. Shutter speed can be adjusted while previews or recordings are running. The default value is 0 (auto).

Note

You can query the exposure_speed attribute to determine the actual shutter speed being used when this attribute is set to 0. Please note that this capability requires up-to-date firmware (#692 or later).

Note

In later firmwares, this attribute is limited by the value of the framerate attribute. For example, if framerate is set to 30fps, the shutter speed cannot be slower than 33,333µs (1/fps).
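The limit described in the note above is simply the frame period. The calculation can be sketched with a small helper (hypothetical, not part of picamera):

```python
from fractions import Fraction

def max_shutter_speed_us(framerate):
    """Longest shutter speed (in microseconds) permitted at the given
    framerate: the full frame period, i.e. 1/framerate seconds."""
    return int(1_000_000 / Fraction(framerate))

# At 30fps the shutter cannot stay open longer than one frame period:
max_shutter_speed_us(30)   # 33333
```

To use shutter speeds slower than this, lower framerate first (or use framerate_range) before setting shutter_speed.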

still_stats

Retrieves or sets whether statistics will be calculated from still frames or the prior preview frame.

When queried, the still_stats property returns a boolean value indicating when scene statistics will be calculated for still captures (that is, captures where the use_video_port parameter of capture() is False). When this property is False (the default), statistics will be calculated from the preceding preview frame (this also applies when the preview is not visible). When True, statistics will be calculated from the captured image itself.

When set, the property controls when scene statistics will be calculated for still captures. The property can be set while recordings or previews are in progress. The default value is False.

The advantages to calculating scene statistics from the captured image are that time between startup and capture is reduced as only the AGC (automatic gain control) has to converge. The downside is that processing time for captures increases and that white balance and gain won’t necessarily match the preview.

Warning

Enabling the still statistics pass will override fixed white balance gains (set via awb_gains and awb_mode).

New in version 1.9.

timestamp

Retrieves the system time according to the camera firmware.

The camera’s timestamp is a 64-bit integer representing the number of microseconds since the last system boot. When the camera’s clock_mode is 'raw' the values returned by this attribute are comparable to those from the frame timestamp attribute.

vflip

Retrieves or sets whether the camera’s output is vertically flipped.

When queried, the vflip property returns a boolean indicating whether or not the camera’s output is vertically flipped. The property can be set while recordings or previews are in progress. The default value is False.

video_denoise

Retrieves or sets whether denoise will be applied to video recordings.

When queried, the video_denoise property returns a boolean value indicating whether or not the camera software will apply a denoise algorithm to video recordings.

When set, the property activates or deactivates the denoise algorithm for video recordings. The property can be set while recordings or previews are in progress. The default value is True.

New in version 1.7.

video_stabilization

Retrieves or sets the video stabilization mode of the camera.

When queried, the video_stabilization property returns a boolean value indicating whether or not the camera attempts to compensate for motion.

When set, the property activates or deactivates video stabilization. The property can be set while recordings or previews are in progress. The default value is False.

Note

The built-in video stabilization only accounts for vertical and horizontal motion, not rotation.

zoom

Retrieves or sets the zoom applied to the camera’s input.

When queried, the zoom property returns a (x, y, w, h) tuple of floating point values ranging from 0.0 to 1.0, indicating the proportion of the image to include in the output (this is also known as the “Region of Interest” or ROI). The default value is (0.0, 0.0, 1.0, 1.0) which indicates that everything should be included. The property can be set while recordings or previews are in progress.
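To visualize what a given zoom tuple selects, the proportional ROI can be mapped onto pixel coordinates. The helper below is hypothetical (not part of picamera) and assumes simple truncation to whole pixels:

```python
def roi_to_pixels(roi, resolution):
    """Map a (x, y, w, h) zoom tuple of 0.0-1.0 proportions onto a
    pixel rectangle for a given (width, height) resolution."""
    x, y, w, h = roi
    width, height = resolution
    return (int(x * width), int(y * height),
            int(w * width), int(h * height))

# The central quarter of a 1920x1080 frame:
roi_to_pixels((0.25, 0.25, 0.5, 0.5), (1920, 1080))
# (480, 270, 960, 540)
```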

## 9.2. PiVideoFrameType¶

class picamera.PiVideoFrameType[source]

This class simply defines constants used to represent the type of a frame in PiVideoFrame.frame_type. Effectively it is a namespace for an enum.

frame

Indicates a predicted frame (P-frame). This is the most common frame type.

key_frame

Indicates an intra-frame (I-frame) also known as a key frame.

sps_header

Indicates an inline SPS/PPS header (rather than picture data) which is typically used as a split point.

motion_data

Indicates the frame is inline motion vector data, rather than picture data.

New in version 1.5.

## 9.3. PiVideoFrame¶

class picamera.PiVideoFrame(index, frame_type, frame_size, video_size, split_size, timestamp)[source]

This class is a namedtuple() derivative used to store information about a video frame. It is recommended that you access the information stored by this class by attribute name rather than position (for example: frame.index rather than frame[0]).
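The recommended attribute-based access can be illustrated with a plain namedtuple() stand-in. The field names below match PiVideoFrame, but this is only a sketch; the real class also provides derived properties such as complete and position:

```python
from collections import namedtuple

# Stand-in with the same fields as picamera.PiVideoFrame (sketch only).
Frame = namedtuple(
    'PiVideoFrame',
    ('index', 'frame_type', 'frame_size', 'video_size',
     'split_size', 'timestamp'))

f = Frame(index=0, frame_type=1, frame_size=4096,
          video_size=4096, split_size=4096, timestamp=0)
f.index   # preferred over the positional f[0]
```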

index

Returns the zero-based number of the frame. This is a monotonic counter that is simply incremented every time the camera starts outputting a new frame. As a consequence, this attribute cannot be used to detect dropped frames. Nor does it necessarily represent actual frames; it will be incremented for SPS headers and motion data buffers too.

frame_type

Returns a constant indicating the kind of data that the frame contains (see PiVideoFrameType). Please note that certain frame types contain no image data at all.

frame_size

Returns the size in bytes of the current frame. If a frame is written in multiple chunks, this value will increment while index remains static. Query complete to determine whether the frame has been completely output yet.

video_size

Returns the size in bytes of the entire video up to this frame. Note that this is unlikely to match the size of the actual file/stream written so far. This is because a stream may utilize buffering which will cause the actual amount written (e.g. to disk) to lag behind the value reported by this attribute.

split_size

Returns the size in bytes of the video recorded since the last call to either start_recording() or split_recording(). For the reasons explained above, this may differ from the size of the actual file/stream written so far.

timestamp

Returns the presentation timestamp (PTS) of the frame. This represents the point in time that the Pi received the first line of the frame from the camera.

The timestamp is measured in microseconds (millionths of a second). When the camera’s clock mode is 'reset' (the default), the timestamp is relative to the start of the video recording. When the camera’s clock_mode is 'raw', it is relative to the last system reboot. See timestamp for more information.

Warning

Currently, the camera occasionally returns “time unknown” values in this field which picamera represents as None. If you are querying this property you will need to check the value is not None before using it. This happens for SPS header “frames”, for example.
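The None check described in the warning matters whenever timestamps feed arithmetic. A hypothetical helper (not part of picamera) that computes inter-frame intervals while skipping "time unknown" values might look like this:

```python
def frame_intervals_us(timestamps):
    """Intervals in microseconds between consecutive presentation
    timestamps, skipping the None values that SPS header "frames"
    and similar non-picture buffers report."""
    valid = [t for t in timestamps if t is not None]
    return [b - a for a, b in zip(valid, valid[1:])]

# A None from an SPS header "frame" is silently skipped:
frame_intervals_us([0, None, 33333, 66666])   # [33333, 33333]
```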

complete

Returns a bool indicating whether the current frame is complete or not. If the frame is complete then frame_size will not increment any further, and will reset for the next frame.

Changed in version 1.5: Deprecated header and keyframe attributes and added the new frame_type attribute instead.

Changed in version 1.9: Added the complete attribute.

header

Contains a bool indicating whether the current frame is actually an SPS/PPS header. Typically it is best to split an H.264 stream so that it starts with an SPS/PPS header.

Deprecated since version 1.5: Please compare frame_type to PiVideoFrameType.sps_header instead.

keyframe

Returns a bool indicating whether the current frame is a keyframe (an intra-frame, or I-frame in MPEG parlance).

Deprecated since version 1.5: Please compare frame_type to PiVideoFrameType.key_frame instead.

position

Returns the zero-based position of the frame in the stream containing it.

## 9.4. PiResolution¶

class picamera.PiResolution(width, height)[source]

A namedtuple() derivative which represents a resolution with a width and height.

width

The width of the resolution in pixels

height

The height of the resolution in pixels

New in version 1.11.

pad(width=32, height=16)[source]

Returns the resolution padded up to the nearest multiple of width and height which default to 32 and 16 respectively (the camera’s native block size for most operations). For example:

>>> PiResolution(1920, 1080).pad()
PiResolution(width=1920, height=1088)
>>> PiResolution(100, 100).pad()
PiResolution(width=128, height=112)
>>> PiResolution(100, 100).pad(16, 16)
PiResolution(width=112, height=112)
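The rounding behaviour can be sketched in plain Python. This mirrors the documented round-up-to-block-size behaviour; it is not the library's implementation:

```python
def pad(width, height, to_width=32, to_height=16):
    """Round a resolution up to the nearest multiple of the given
    block dimensions (the camera's native block size for most
    operations defaults to 32x16)."""
    return (((width + to_width - 1) // to_width) * to_width,
            ((height + to_height - 1) // to_height) * to_height)

pad(1920, 1080)   # (1920, 1088)
```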

transpose()[source]

Returns the resolution with the width and height transposed. For example:

>>> PiResolution(1920, 1080).transpose()
PiResolution(width=1080, height=1920)


## 9.5. PiFramerateRange¶

class picamera.PiFramerateRange(low, high)[source]

This class is a namedtuple() derivative used to store the low and high limits of a range of framerates. It is recommended that you access the information stored by this class by attribute rather than position (for example: camera.framerate_range.low rather than camera.framerate_range[0]).

low

The lowest framerate that the camera is permitted to use (inclusive). When the framerate_range attribute is queried, this value will always be returned as a Fraction.

high

The highest framerate that the camera is permitted to use (inclusive). When the framerate_range attribute is queried, this value will always be returned as a Fraction.

New in version 1.13.
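The structure of this class can be illustrated with a namedtuple() stand-in, using Fraction values as the documentation describes for queried ranges (a sketch only, not the library's definition):

```python
from collections import namedtuple
from fractions import Fraction

# Stand-in with the same fields as picamera.PiFramerateRange.
FramerateRange = namedtuple('PiFramerateRange', ('low', 'high'))

# A range permitting anything from one frame every 6 seconds up to 30fps:
r = FramerateRange(low=Fraction(1, 6), high=Fraction(30, 1))
r.low   # preferred over the positional r[0]
```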