5. Frequently Asked Questions (FAQ)

5.1. AttributeError: ‘module’ object has no attribute ‘PiCamera’

You’ve named your script picamera.py (or you’ve named some other script in your project picamera.py). If you name a script after a system or third-party package, you will break imports for that package. Delete or rename that script (and any associated .pyc files), and try again.
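
If you’re not sure whether a script is shadowing the library, Python can tell you exactly which file an import resolves to (the path shown here is just an example):

>>> import picamera
>>> picamera.__file__
'/usr/lib/python3/dist-packages/picamera/__init__.py'

If this prints a path inside your own project rather than a system location, that script is the culprit.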

5.2. Can I put the preview in a window?

No. The camera module’s preview system is quite crude: it simply tells the GPU to overlay the preview on the Pi’s video output. The preview has no knowledge of (or interaction with) the X-Windows environment (incidentally, this is why the preview works quite happily from the command line, even without anyone logged in).

That said, the preview area can be resized and repositioned via the window attribute of the preview object. If your program can respond to window repositioning and sizing events you can “cheat” and position the preview within the borders of the target window. However, there’s currently no way to allow anything to appear on top of the preview so this is an imperfect solution at best.
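
For example, assuming your target window occupies a 640x480 area at position (100, 100) on the display (the coordinates are purely illustrative), a non-fullscreen preview can be placed there like so:

import picamera

camera = picamera.PiCamera()
# Render the preview in a fixed area rather than fullscreen
preview = camera.start_preview(fullscreen=False, window=(100, 100, 640, 480))
# Later, in response to a window-move event, reposition the preview
preview.window = (200, 200, 640, 480)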

5.3. Help! I started a preview and can’t see my console!

As mentioned above, the preview is simply an overlay over the Pi’s video output. If you start a preview you may therefore discover you can’t see your console anymore and there’s no obvious way of getting it back. If you’re confident in your typing skills you can try calling stop_preview() by typing “blindly” into your hidden console. However, the simplest way of getting your display back is usually to hit Ctrl+D to terminate the Python process (which should also shut down the camera).

When starting a preview, you may want to set the alpha parameter of the start_preview() method to something like 128. This should ensure that when the preview is displayed, it is partially transparent so you can still see your console.
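
For example:

import picamera

camera = picamera.PiCamera()
# A half-transparent preview; alpha ranges from 0 (invisible) to 255 (opaque)
camera.start_preview(alpha=128)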

5.4. The preview doesn’t work on my PiTFT screen

The camera’s preview system directly overlays the Pi’s output on the HDMI or composite video ports. At this time, it will not operate with GPIO-driven displays like the PiTFT. Some projects, like the Adafruit Touchscreen Camera project, have approximated a preview by rapidly capturing unencoded images and displaying them on the PiTFT instead.
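
A minimal sketch of that approach follows; display_on_tft() is a hypothetical stand-in for whatever blits a raw RGB frame to your particular screen, as that part is device-specific:

import io
import picamera

def display_on_tft(rgb_data):
    # Hypothetical: blit the raw RGB data to your GPIO display here
    pass

with picamera.PiCamera(resolution=(320, 240), framerate=24) as camera:
    stream = io.BytesIO()
    # use_video_port=True trades image quality for capture speed
    for _ in camera.capture_continuous(stream, format='rgb',
                                       use_video_port=True):
        stream.seek(0)
        display_on_tft(stream.read())
        stream.seek(0)
        stream.truncate()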

5.5. How much power does the camera require?

The camera requires 250mA when running. Note that simply creating a PiCamera object means the camera is running (due to the hidden preview that is started to allow the auto-exposure algorithm to run). If you are running your Pi from batteries, you should close() (or destroy) the instance when the camera is not required in order to conserve power. For example, the following code captures 60 images over an hour, but leaves the camera running all the time:

import picamera
import time

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    time.sleep(1) # Camera warm-up time
    for i, filename in enumerate(camera.capture_continuous('image{counter:02d}.jpg')):
        print('Captured %s' % filename)
        # Capture one image a minute
        time.sleep(60)
        if i == 59:
            break

By contrast, this code closes the camera between shots (but can’t use the convenient capture_continuous() method as a result):

import picamera
import time

for i in range(60):
    with picamera.PiCamera() as camera:
        camera.resolution = (1280, 720)
        time.sleep(1) # Camera warm-up time
        filename = 'image%02d.jpg' % i
        camera.capture(filename)
        print('Captured %s' % filename)
    # Capture one image a minute
    time.sleep(59)

Note

Please note the timings in the scripts above are approximate. A more precise example of timing is given in Capturing timelapse sequences.

If you are experiencing lockups or reboots when the camera is active, your power supply may be insufficient. A practical minimum is 1A for running a Pi with an active camera module; more may be required if additional peripherals are attached.

5.6. How can I take two consecutive pictures with equivalent settings?

See the Capturing consistent images recipe.
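
In brief, that recipe lets the camera settle, then fixes the exposure time, disables automatic exposure, and locks the white balance gains before capturing. A condensed sketch:

import time
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    time.sleep(2)  # Let the auto-exposure algorithm settle
    # Fix the exposure time, then disable automatic exposure
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'
    # Lock the white balance gains
    g = camera.awb_gains
    camera.awb_mode = 'off'
    camera.awb_gains = g
    # These captures should now use equivalent settings
    camera.capture_sequence(['image%02d.jpg' % i for i in range(2)])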

5.7. Can I use picamera with a USB webcam?

No. The picamera library relies on libmmal, which is specific to the Pi’s camera module.

5.8. How can I tell what version of picamera I have installed?

The picamera library relies on the setuptools package for installation services. You can use the setuptools pkg_resources API to query which version of picamera is available in your Python environment like so:

>>> from pkg_resources import require
>>> require('picamera')
[picamera 1.2 (/usr/local/lib/python2.7/dist-packages)]
>>> require('picamera')[0].version
'1.2'

If you have multiple versions installed (e.g. from pip and apt-get) they will not all show up in the list returned by the require method. However, the first entry in the list will be the version that import picamera will import.
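
Alternatively, the library exposes its own version number (assuming your installed release provides the __version__ attribute):

>>> import picamera
>>> picamera.__version__
'1.2'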

If you receive the error “No module named pkg_resources”, you need to install the pip utility. This can be done with the following command in Raspbian:

$ sudo apt-get install python-pip

5.9. How come I can’t upgrade to the latest version?

If you are using Raspbian, firstly check that you haven’t got both a PyPI (pip) and an apt (apt-get) installation of picamera installed simultaneously. If you have, one will be taking precedence and it may not be the most up to date version.

Secondly, please understand that while the PyPI release process is entirely automated (so as soon as a new picamera release is announced, it will be available on PyPI), the release process for Raspbian packages is semi-manual. There is typically a delay of a few days after a release before updated picamera packages become accessible in the Raspbian repository.

Users desperate to try the latest version may choose to uninstall their apt based copy (uninstall instructions are provided in the installation instructions), and install using pip instead. However, be aware that keeping a PyPI based installation up to date is a more manual process (sticking with apt ensures everything gets upgraded with a simple sudo apt-get upgrade command).
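
For reference, switching from the apt installation to pip looks something like the following on Raspbian (the package names assume the Python 2 builds; substitute python3-picamera and pip3 for Python 3):

$ sudo apt-get remove python-picamera
$ sudo pip install picamera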

5.10. Why is there so much latency when streaming video?

The first thing to understand is that streaming latency has little to do with the encoding or sending end of things (i.e. the Pi), and much more to do with the playing or receiving end. If the Pi weren’t capable of encoding a frame before the next frame arrived, it wouldn’t be capable of recording video at all (because its internal buffers would rapidly become filled with unencoded frames).

So, why do players typically introduce several seconds worth of latency? The primary reason is that most players (e.g. VLC) are optimized for playing streams over a network. Such players allocate a large (multi-second) buffer and only start playing once this is filled to guard against possible future packet loss.

A secondary reason that all such players allocate at least a couple of frames’ worth of buffering is that the MPEG standard includes certain frame types that require it:

  • I-frames (intra-frames, also known as “key frames”). These frames contain a complete picture and thus are the largest sort of frames. They occur at the start of playback and at periodic points during the stream.
  • P-frames (predicted frames). These frames describe the changes from the prior frame to the current frame, therefore one must have successfully decoded the prior frame in order to decode a P-frame.
  • B-frames (bi-directional predicted frames). These frames describe the changes from the next frame to the current frame, therefore one must have successfully decoded the next frame in order to decode the current B-frame.

B-frames aren’t produced by the Pi’s camera (or, as I understand it, by most real-time recording cameras) as doing so would require buffering yet-to-be-recorded frames before encoding the current one. However, most recorded media (DVDs, Blu-rays, and hence network video streams) do use them, so players must support them. It is simplest to write such a player by assuming that any source may contain B-frames, and buffering at least two frames’ worth of data at all times to make decoding them simpler.

As for the network in between, a slow wifi network may introduce a frame’s worth of latency, but not much more than that. Check the ping time across your network; it’s likely to be less than 30ms, in which case your network cannot account for more than a frame’s worth of latency.

TL;DR: the reason you’ve got lots of latency when streaming video is nothing to do with the Pi. You need to persuade your video player to reduce or forgo its buffering.
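
For example, assuming you are viewing a stream from the Pi with VLC (the address and port here are illustrative), reducing its network cache, which is specified in milliseconds, should noticeably cut the delay:

$ vlc tcp/h264://my_pi_address:8000/ --network-caching=200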

5.11. Why are there more than 20 seconds of video in the circular buffer?

Read the note at the bottom of the Recording to a circular stream recipe. When you set the number of seconds for the circular stream you are setting a lower bound for a given bitrate (which defaults to 17Mbps - the same as the video recording default). If the recorded scene has low motion or complexity the stream can store considerably more than the number of seconds specified.

If you need to copy a specific number of seconds from the stream, see the seconds parameter of the copy_to() method (which was introduced in release 1.11).
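
For example, a minimal sketch (the buffer size and durations here are illustrative):

import picamera

with picamera.PiCamera() as camera:
    stream = picamera.PiCameraCircularIO(camera, seconds=20)
    camera.start_recording(stream, format='h264')
    camera.wait_recording(30)
    camera.stop_recording()
    # Copy at most the last 10 seconds of footage from the buffer
    stream.copy_to('last10.h264', seconds=10)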

Finally, if you specify a different bitrate limit for the stream and the recording, the seconds limit will be inaccurate.

5.12. Can I move the annotation text?

No: the firmware provides no means of moving the annotation text. The only configurable attributes of the annotation are currently color and font size.
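
For example (annotate_foreground accepts a picamera Color; the values here are illustrative):

import picamera

camera = picamera.PiCamera()
camera.annotate_text = 'Hello world!'
camera.annotate_text_size = 64  # defaults to 32
camera.annotate_foreground = picamera.Color('yellow')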

5.13. Why is playback too fast/too slow in VLC/omxplayer/etc.?

The camera’s H264 encoder doesn’t output a full MP4 file (which would contain frames-per-second meta-data). Instead it outputs an H264 NAL stream which just has frame-size and a few other details (but not FPS).

Most players (like VLC) default to 24, 25, or 30 fps. Hence, recordings at 12fps will appear “fast”, while recordings at 60fps will appear “slow”. Your playback client needs to be told what fps to use when playing back (assuming it supports such an option).
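
For example, assuming your build of VLC exposes the H264 demuxer’s frame rate option, a 60fps recording can be played back at the correct speed like so:

$ vlc --demux h264 --h264-fps 60 my_video.h264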

For those wondering why the camera doesn’t output a full MP4 file, consider that the Pi camera’s heritage is mobile phone cameras. In these devices you only want the camera to output the H264 stream so you can mux it with, say, an AAC stream recorded from the microphone input and wrap the result into a full MP4 file.

To convert the H264 NAL stream to a full MP4 file, there are a couple of options. The simplest is to use the MP4Box utility from the gpac package on Raspbian. Unfortunately this only works with files; it cannot accept redirected streams:

$ sudo apt-get install gpac
...
$ MP4Box -add input.h264 output.mp4

Alternatively you can use the console version of VLC to handle the conversion. This is a more complex command line, but a lot more powerful (it’ll handle redirected streams and can be used with a vast array of outputs including HTTP, RTP, etc.):

$ sudo apt-get install vlc
...
$ cvlc input.h264 --play-and-exit --sout \
> '#standard{access=file,mux=mp4,dst=output.mp4}' :demux=h264

Or to read from stdin:

$ raspivid -t 5000 -o - | cvlc stream:///dev/stdin \
> --play-and-exit --sout \
> '#standard{access=file,mux=mp4,dst=output.mp4}' :demux=h264

5.14. Out of resources at full resolution on a V2 module

See Hardware Limits.

5.15. Preview flickers at full resolution on a V2 module

Select a lower resolution for the preview than for captures: either reduce the camera’s resolution property, or pass a preview-specific resolution to start_preview() via its resolution parameter. For example:

from picamera import PiCamera

camera = PiCamera()
camera.resolution = camera.MAX_RESOLUTION
camera.start_preview(resolution=(1024, 768))

5.16. Camera locks up with multiprocessing

The camera firmware is designed to be used by a single process at a time. Attempting to use the camera from multiple processes simultaneously will fail in a variety of ways (from simple errors to the process locking up).

Python’s multiprocessing module creates multiple copies of a Python process (usually via os.fork()) for the purpose of parallel processing. Whilst you can use multiprocessing with picamera, you must ensure that only a single process creates a PiCamera instance at any given time.

The following script demonstrates an approach in which one process owns the camera and disseminates captured frames to the other processes via a Queue:

import os
import io
import time
import multiprocessing as mp
from queue import Empty
import picamera
from PIL import Image

class QueueOutput(object):
    def __init__(self, queue, finished):
        self.queue = queue
        self.finished = finished
        self.stream = io.BytesIO()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            # New frame, put the last frame's data in the queue
            size = self.stream.tell()
            if size:
                self.stream.seek(0)
                self.queue.put(self.stream.read(size))
                self.stream.seek(0)
        self.stream.write(buf)

    def flush(self):
        self.queue.close()
        self.queue.join_thread()
        self.finished.set()

def do_capture(queue, finished):
    with picamera.PiCamera(resolution='VGA', framerate=30) as camera:
        output = QueueOutput(queue, finished)
        camera.start_recording(output, format='mjpeg')
        camera.wait_recording(10)
        camera.stop_recording()

def do_processing(queue, finished):
    while not finished.wait(0.1):
        try:
            stream = io.BytesIO(queue.get(False))
        except Empty:
            pass
        else:
            stream.seek(0)
            image = Image.open(stream)
            # Pretend it takes 0.1 seconds to process the frame; on a quad-core
            # Pi this gives a maximum processing throughput of 40fps
            time.sleep(0.1)
            print('%d: Processing image with size %dx%d' % (
                os.getpid(), image.size[0], image.size[1]))

if __name__ == '__main__':
    queue = mp.Queue()
    finished = mp.Event()
    capture_proc = mp.Process(target=do_capture, args=(queue, finished))
    processing_procs = [
        mp.Process(target=do_processing, args=(queue, finished))
        for i in range(4)
        ]
    for proc in processing_procs:
        proc.start()
    capture_proc.start()
    for proc in processing_procs:
        proc.join()
    capture_proc.join()

5.17. VLC won’t play back MJPEG recordings

MJPEG is a particularly ill-defined format (see “Disadvantages”) which results in compatibility issues between software that purports to produce MJPEG files and software that purports to play them. This is one such case: the Pi’s camera firmware produces an MJPEG file which simply consists of concatenated JPEGs. This is reasonably common among other devices and webcams, and is a nice simple format which makes parsing particularly easy (see Web streaming for an example).
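
To illustrate quite how simple the format is, the following sketch splits such a recording into individual JPEG frames. It assumes every frame begins with the JPEG SOI marker (0xFFD8), which holds for the firmware’s output but not necessarily for every MJPEG variant:

with open('my_recording.mjpeg', 'rb') as f:
    data = f.read()

# Locate the start of each frame via the JPEG SOI marker
SOI = b'\xff\xd8'
starts = []
index = data.find(SOI)
while index != -1:
    starts.append(index)
    index = data.find(SOI, index + 2)

# Slice the data into frames and write each one out
for num, start in enumerate(starts):
    end = starts[num + 1] if num + 1 < len(starts) else len(data)
    with open('frame%04d.jpg' % num, 'wb') as frame:
        frame.write(data[start:end])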

Unfortunately, VLC doesn’t recognize this as a valid MJPEG file: it thinks it’s a single JPEG image and doesn’t bother reading the rest of the file (which is also a reasonable interpretation in the absence of any other information). Thankfully, extra command line switches can be provided to give it a hint that there’s more to read in the file:

$ vlc --demux=mjpeg --mjpeg-fps=30 my_recording.mjpeg