
Raspberry Pi meets PlayStation Eye


Raspberry Pi (Model B) running Raspbian

Final script

Named startcamera (bash script)

#!/bin/bash
while [ 1 ]  # spaces around the 1 are important!
do
gst-launch-0.10 \
oggmux name=mux ! shout2send ip= password=hackme mount=variable.ogg \
alsasrc device="plughw:CameraB404271" ! audio/x-raw-int,rate=8000,channels=1,depth=8 ! queue ! audioresample ! audioconvert ! queue ! vorbisenc quality=0 ! mux. \
v4l2src ! 'video/x-raw-yuv,width=320,height=240,framerate=30/1' ! videorate max-rate=1 ! theoraenc ! mux.
sleep 10
done

We then added the following to /etc/init.d/rc.local:

sudo -u pi -i "/home/pi/startcamera"
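Since startcamera loops forever, running it in the foreground from rc.local would block the rest of the boot sequence. A safer variant (a sketch; the backgrounding and log path are our own suggestion, not part of the original setup) launches it in the background and keeps a log:

```shell
# Background the camera script so boot can continue; keep a log for debugging.
# Goes before the final "exit 0" in rc.local. The log path is illustrative.
sudo -u pi -i /home/pi/startcamera >> /var/log/startcamera.log 2>&1 &
```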


First version with video only that worked:

gst-launch v4l2src ! 'video/x-raw-yuv,width=320,height=240,framerate=30/1' ! videorate max-rate=2 ! theoraenc ! oggmux ! shout2send ip= password=hackme mount=variable.ogg

In getting the video to work, changing the framerate in the "caps" filter after v4l2src had no effect; it seems the camera (we were using a PlayStation Eye) only provides its higher native frame rate (in this case 30 fps). The trick was to add a videorate element that limits the rate to no more than 2 frames per second (otherwise the Pi really chokes).
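One way to see what framerate actually comes out is to try the rate-limited branch locally with the network sink swapped for fakesink; the -v flag makes gst-launch print the caps negotiated on every pad. A sketch (guarded so it is a no-op on machines without GStreamer 0.10):

```shell
# Same rate-limited video branch as above, but ending in fakesink so
# nothing is encoded or streamed; -v prints the negotiated caps.
PIPELINE="v4l2src ! video/x-raw-yuv,width=320,height=240,framerate=30/1 ! videorate max-rate=2 ! fakesink"
if command -v gst-launch-0.10 >/dev/null 2>&1; then
    gst-launch-0.10 -v $PIPELINE
else
    echo "gst-launch-0.10 not installed; pipeline would be: $PIPELINE"
fi
```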

First working pipeline that added audio:

gst-launch-0.10 \
oggmux name=mux ! shout2send ip= password=hackme mount=variable.ogg \
alsasrc device="plughw:CameraB404271" ! audio/x-raw-int,rate=8000,channels=1,depth=8 ! queue ! audioconvert ! \
vorbisenc ! queue ! mux. \
v4l2src  ! 'video/x-raw-yuv,width=320,height=240,framerate=30/1' ! videorate max-rate=4 !  theoraenc ! mux.

Here we tried higher sampling rate / bit depth settings, but unfortunately these made our Pi choke and drop many audio samples. Sending the audio data raw seemed like an idea, but attempts to couple a wavenc with oggmux failed. We also thought about using the "native" rate of the camera's audio (48000 Hz), but this didn't really produce good results either. The final script adds an audioresample in an attempt to lighten the work of the Vorbis encoding (though we didn't test this very thoroughly).


Providing the device name of the camera's built-in microphone was crucial. ALSA's arecord is helpful for finding it:

arecord -l
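While arecord -l lists the cards and devices, arecord -L additionally prints the full PCM names, including the plughw: aliases that map directly onto the device= string used for alsasrc. A sketch (guarded so it degrades gracefully on machines without ALSA):

```shell
# List the plughw PCM names; the card name (e.g. CameraB404271) is what
# goes into alsasrc device="plughw:...". Guarded in case ALSA is absent.
if command -v arecord >/dev/null 2>&1; then
    PLUG_DEVICES=$(arecord -L 2>/dev/null | grep -i '^plughw' || true)
else
    PLUG_DEVICES=""
fi
echo "$PLUG_DEVICES"
```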

Also, it was necessary to explicitly pull in the GStreamer ALSA plugin (as well as the tools):

sudo apt-get install gstreamer0.10-tools gstreamer0.10-plugins-good gstreamer0.10-alsa

(We also installed gstreamer bad and ugly but I don't think they were necessary!)

An important thing when testing GStreamer pipelines on the Pi: because video acceleration doesn't work, any pipeline that tries to use xv will fail, like:

gst-launch videotestsrc ! xvimagesink

Use instead:

gst-launch videotestsrc ! ximagesink

Or avoid X display altogether.
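To avoid X entirely, a pipeline can be sanity-checked headless by ending it in fakesink; giving videotestsrc a bounded num-buffers makes it send EOS after that many frames, so the command exits on its own. A sketch (guarded so it also runs where GStreamer is absent):

```shell
# Headless sanity check: no display needed, exits after 50 buffers.
if command -v gst-launch >/dev/null 2>&1; then
    gst-launch videotestsrc num-buffers=50 ! fakesink && RESULT="ran" || RESULT="failed"
else
    RESULT="skipped (gst-launch not installed)"
fi
echo "$RESULT"
```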