FFmpeg to capture stills from an H.264 stream

ffmpeg, images, video

I'm trying to capture still images from an H.264 wireless IP camera using ffmpeg. I found a similar question here: How can I extract a good quality JPEG image from an H264 video file with ffmpeg?

ffmpeg -y -i rtsp://10.2.69.201:554/ch0_0.h264 -r 10 -f image2 /var/www/camera.jpg

I've implemented it as shown in that example. You can see a sample of the image I'm getting here:

Basically, the issue is that the bottom part of the image is always blocky. If the sky has more detail and clouds, the whole bottom half of the image may be blocky or blurry.

My camera has limited stream options. One of them is the I-frame interval, which can be varied between 25 and 100.

Does anyone have suggestions for how I can get a better image? I also wouldn't mind storing the stream to a video file and extracting a still every 2 minutes. Is that easily done?

Here is the ffmpeg output:

ffmpeg version 1.2.4 Copyright (c) 2000-2013 the FFmpeg developers
built on Oct  3 2013 07:36:02 with gcc 4.8 (Debian 4.8.1-10)
configuration: --prefix=/usr --extra-cflags='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security ' --extra-ldflags='-Wl,-z,relro' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-nonfree --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-libspeex --enable-nonfree --disable-stripping --enable-libvpx --enable-libschroedinger --disable-encoder=libschroedinger --enable-version3 --enable-libopenjpeg --enable-librtmp --enable-avfilter --enable-libfreetype --enable-libvo-aacenc --disable-decoder=amrnb --enable-libvo-amrwbenc --enable-libaacplus --libdir=/usr/lib/i386-linux-gnu --disable-vda --enable-libbluray --enable-libcdio --enable-gnutls --enable-frei0r --enable-openssl --enable-libass --enable-libopus --enable-fontconfig --enable-libpulse --disable-mips32r2 --disable-mipsdspr1 --disab
libavutil      52. 18.100 / 52. 18.100
libavcodec     54. 92.100 / 54. 92.100
libavformat    54. 63.104 / 54. 63.104
libavdevice    54.  3.103 / 54.  3.103
libavfilter     3. 42.103 /  3. 42.103
libswscale      2.  2.100 /  2.  2.100
libswresample   0. 17.102 /  0. 17.102
libpostproc    52.  2.100 / 52.  2.100
[h264 @ 0x867ed80] RTP: missed 1 packets
    Last message repeated 1 times
[h264 @ 0x867ed80] mb_type 34 in I slice too large at 17 18
[h264 @ 0x867ed80] error while decoding MB 17 18
[h264 @ 0x867ed80] concealing 5732 DC, 5732 AC, 5732 MV errors in I frame
[h264 @ 0x867ed80] RTP: missed 1 packets
    Last message repeated 14 times
[rtsp @ 0x867c640] Stream #1: not enough frames to estimate rate; consider increasing probesize
[rtsp @ 0x867c640] Estimating duration from bitrate, this may be inaccurate
Guessed Channel Layout for  Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://10.2.69.201:554/ch0_0.h264':
 Metadata:
title           : H.264 Program Stream, streamed by the LIVE555 Media Server
comment         : ch0_0.h264
 Duration: N/A, start: 0.065833, bitrate: 64 kb/s
Stream #0:0: Video: h264 (Constrained Baseline), yuv420p, 1600x1200, 15.19 tbr, 90k tbn, 180k tbc
Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s 
Output #0, image2, to '/var/www/camera.jpg':
Metadata:
title           : H.264 Program Stream, streamed by the LIVE555 Media Server
comment         : ch0_0.h264
encoder         : Lavf54.63.104
Stream #0:0: Video: mjpeg, yuvj420p, 1600x1200, q=2-31, 200 kb/s, 90k tbn, 10 tbc
Stream mapping:
Stream #0:0 -> #0:0 (h264 -> mjpeg)
Press [q] to stop, [?] for help
[h264 @ 0x8793260] mb_type 34 in I slice too large at 17 18
[h264 @ 0x8793260] error while decoding MB 17 18
[h264 @ 0x8793260] concealing 5732 DC, 5732 AC, 5732 MV errors in I frame 
[image2 @ 0x86d1640] Could not get frame filename number 2 from pattern '/var/www/camera.jpg' (either set updatefirst or use a pattern like %03d within the filename pattern)
av_interleaved_write_frame(): Invalid argument

Best Answer

The problem, as your output hints, is that you are losing RTP packets and with them essential parts of the video. With your command, ffmpeg outputs exactly one image as soon as it has seen the end of the first frame, but some of that frame's data was missing. The decoder tries to conceal the errors in the affected macroblocks, and it can only do so by filling them in from the parts of the image it has already decoded, which produces the artifacts you are seeing.
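One thing worth trying first is to move the RTP transport from UDP to TCP so packets can't simply be dropped on the way. This is only a sketch, assuming your camera supports RTSP interleaved over TCP; -vframes 1 just grabs a single frame, as in your original setup:

ffmpeg -rtsp_transport tcp -y -i rtsp://10.2.69.201:554/ch0_0.h264 -vframes 1 /var/www/camera.jpg

If the camera only streams over UDP, this will fail and you will have to live with the occasional missed packet.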

The FFmpeg Wiki has an example of how to create a thumbnail every x seconds:

ffmpeg -i rtsp://10.2.69.201:554/ch0_0.h264 -f image2 -vf fps=fps=1/120 img%03d.jpg
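If you would rather keep overwriting a single file, as your original command does, the image2 muxer can be told to update one file instead of expecting a numbered pattern. On your 1.2.4 build the option is called updatefirst (which is what the error in your log is pointing at); on current builds it is called update. A sketch using the old option name:

ffmpeg -i rtsp://10.2.69.201:554/ch0_0.h264 -vf fps=1/120 -f image2 -updatefirst 1 /var/www/camera.jpg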

You could, of course, also try to save the stream to a file. In this case, it would stop after 120 seconds:

ffmpeg -i rtsp://10.2.69.201:554/ch0_0.h264 -c:v copy -t 120 stream.mp4
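Since ffmpeg accepts several outputs from one input, recording and the periodic stills can also be combined in a single run. Again only a sketch; drop -t 120 if you want it to keep going until you stop it:

ffmpeg -rtsp_transport tcp -i rtsp://10.2.69.201:554/ch0_0.h264 -map 0:v -c:v copy -t 120 stream.mp4 -map 0:v -vf fps=1/120 -f image2 img%03d.jpg

Note that -map 0:v keeps only the video; the camera's G.711 audio would have to be re-encoded (for example to AAC) to be stored in an MP4 anyway.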

If you can, try to download or compile a recent static build; your ffmpeg is a little old, and you may well be hitting a bug that has already been fixed.
