ffmpeg 3.4.4 can do it directly on Ubuntu 18.04
You likely want to use something like:
sudo apt install ffmpeg
wget -O opengl-rotating-triangle.mp4 https://github.com/cirosantilli/media/blob/master/opengl-rotating-triangle.mp4?raw=true
ffmpeg \
-i opengl-rotating-triangle.mp4 \
-r 15 \
-vf scale=512:-1 \
-ss 00:00:03 -to 00:00:06 \
opengl-rotating-triangle.gif
opengl-rotating-triangle.gif
Image info: 426kB, 45 frames, 512x512 apparent size, coalesced, conversion time on a Lenovo P51: 0.5s.
The above conversion also worked after a ulimit -Sv 1000000
(virtual memory limited to about 1 GB), so it does "not consume huge amounts of memory" like my previous attempts with ImageMagick, which almost killed my machine. A 500 MB limit failed however, because ffmpeg could not even load its shared libraries... time to upgrade your RAM ;-)?
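To reproduce that memory cap, the limit can be applied in a subshell so it does not stick to your interactive shell; a minimal sketch, assuming the same input file as above:

```shell
# Soft-limit virtual memory to ~1 GiB (ulimit -Sv takes KiB) inside a
# subshell, so the parent shell keeps its original limits.
(
  ulimit -Sv 1000000
  ffmpeg \
    -i opengl-rotating-triangle.mp4 \
    -r 15 \
    -vf scale=512:-1 \
    -ss 00:00:03 -to 00:00:06 \
    opengl-rotating-triangle.gif
)
```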
The test data generation procedure is described in this post.
The output has a visible dithering pattern, which is not as visible with the "ffmpeg + convert" method below. We can try to improve the image quality, e.g. by using the palettegen filter:
ffmpeg \
-i opengl-rotating-triangle.mp4 \
-r 15 \
-vf "scale=512:-1,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" \
-ss 00:00:03 -to 00:00:06 \
opengl-rotating-triangle-palettegen.gif
opengl-rotating-triangle-palettegen.gif
Image info: 979K, 45 frames, 512x512 apparent size, coalesced, conversion time on a Lenovo P51: 3.5s.
So we see that:
- the dotting pattern is much less visible now
- GIF size roughly doubled. TODO: why does simply choosing a palette increase the file size? Is it because there are now more colors, so more bits are needed per pixel? How can we inspect each palette?
- generation took about 7x longer, presumably because ffmpeg first scans through the entire video to determine an optimal palette
We could also play with documented palettegen parameters like palettegen=max_colors=16 to reach different size/quality trade-off points.
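One way to explore those trade-off points is to sweep a few palette sizes in a loop; a sketch assuming the same input clip as above (the output filenames here are made up):

```shell
# One GIF per palette size: smaller max_colors should shrink the file
# at the cost of more visible banding.
for n in 16 64 256; do
  filters="scale=512:-1,split[s0][s1];[s0]palettegen=max_colors=$n[p];[s1][p]paletteuse"
  ffmpeg -y \
    -i opengl-rotating-triangle.mp4 \
    -r 15 \
    -vf "$filters" \
    -ss 00:00:03 -to 00:00:06 \
    "triangle-${n}colors.gif"
done
ls -l triangle-*colors.gif  # compare the sizes
```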
Argument breakdown
-ss 00:00:03 -to 00:00:06: start and end times to cut the video from.
No, GIFs are not the best way to pirate distribute videos online.
See also: https://stackoverflow.com/questions/18444194/cutting-the-videos-based-on-start-and-end-time-using-ffmpeg
-vf scale=512:-1: make the output 512 pixels wide, and adjust the height to maintain the aspect ratio.
This is a common use case for images for the web, which tend to have much smaller resolution than video.
If you remove this option, the output GIF has the same resolution as the input video. The original video dimensions can be found for example with ffprobe: https://superuser.com/questions/595177/how-to-retrieve-video-file-information-from-command-line-under-linux/1035178#1035178 and are 1024 x 1024 in our case.
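For completeness, a typical ffprobe one-liner for the dimensions looks like this (a sketch; the flags are standard ffprobe options, and the parsing uses plain POSIX parameter expansion):

```shell
# Print "width,height" of the first video stream, e.g. "1024,1024" here,
# then split it without external tools.
dims=$(ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height -of csv=p=0 \
  opengl-rotating-triangle.mp4)
width=${dims%,*}
height=${dims#*,}
echo "input is ${width}x${height}"
```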
-r 15: sampling FPS. For example, the original video is 30 FPS, so -r 15 means that ffmpeg picks one frame out of every 2 (= 30 / 15). The perceived playback speed is adjusted to match the input, so you won't notice a speedup, only choppier motion.
The input FPS can be found with ffprobe, and the total number of input frames with mediainfo, as explained at: https://superuser.com/questions/84631/how-do-i-get-the-number-of-frames-in-a-video-on-the-linux-command-line/1044894#1044894
I recommend this option because GIFs are usually shown at much smaller resolutions than video, where a lower framerate is less noticeable, so we can drop frames and produce smaller GIFs.
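The frame arithmetic above can be sketched directly in the shell; the 30 FPS input rate and 3 s cut are the values used in this answer:

```shell
in_fps=30    # input framerate, e.g. from ffprobe
out_fps=15   # the -r value
cut_secs=3   # 00:00:03 to 00:00:06
echo "keep one frame out of every $((in_fps / out_fps))"
echo "expect about $((out_fps * cut_secs)) output frames"
```

which matches the 45 frames reported in the image info earlier.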
Video camera footage example
If you want to see the resulting quality for video camera footage from Wikimedia Commons, here is a similar command:
wget https://upload.wikimedia.org/wikipedia/commons/f/f9/STS-132_Liftoff_Space_Shuttle_Atlantis.ogv
ffmpeg -i STS-132_Liftoff_Space_Shuttle_Atlantis.ogv -r 15 -vf scale=512:-1 \
-ss 00:00:17 -to 00:00:22 STS-132_Liftoff_Space_Shuttle_Atlantis.gif
STS-132_Liftoff_Space_Shuttle_Atlantis.gif
Image info: 1.3MB, 75 frames, 512x288 apparent size, coalesced (has minimal effect however, because footage pans slightly from the start), conversion time on a Lenovo P51: 2.3s.
Here is a version with palettegen, cut to only 2 seconds to fit the 2 MiB upload limit:
Image info: 1.5MB, 30 frames, 512x288 apparent size, conversion time on a Lenovo P51: 43s.
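The command for that shorter version is not shown above; it would look something like this (a sketch: the palettegen filter chain is copied from the triangle example, and the 00:00:17 to 00:00:19 cut is my assumption for "only 2 seconds"):

```shell
# 2 s at -r 15 gives about 30 output frames, matching the image info above.
ffmpeg \
  -i STS-132_Liftoff_Space_Shuttle_Atlantis.ogv \
  -r 15 \
  -vf "scale=512:-1,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" \
  -ss 00:00:17 -to 00:00:19 \
  STS-132_Liftoff_Space_Shuttle_Atlantis_palettegen.gif
```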
A more direct:
sudo apt-get install ffmpeg
ffmpeg -i in.mp4 out.gif
also works, but the output GIF will be far larger than the input video, because video formats can compress efficiently across frames with advanced algorithms, while GIF can only do a simple rectangular frame diff.
Pre-18.04: ffmpeg + convert one-liner without intermediate files
Previously, ffmpeg could not handle GIF well. The best I had was something along the lines of:
sudo apt-get install ffmpeg imagemagick
ffmpeg -i opengl-rotating-triangle.mp4 -r 15 -vf scale=512:-1 \
-ss 00:00:03 -to 00:00:06 -f image2pipe -vcodec ppm - |
convert -deconstruct -delay 5 -loop 0 - opengl-rotating-triangle-image-magick.gif
opengl-rotating-triangle-image-magick.gif
Image info: 995kB, 45 frames, 512x512 apparent size, coalesced.
For the Atlantis shuttle footage, the analogous:
ffmpeg -i STS-132_Liftoff_Space_Shuttle_Atlantis.ogv -r 15 -vf scale=512:-1 \
-ss 00:00:17 -to 00:00:22 -f image2pipe -vcodec ppm - |
convert -deconstruct -delay 5 -loop 0 - STS-132_Liftoff_Space_Shuttle_Atlantis_512x.gif
produced better looking output, but the final GIF was considerably larger at 6.2MB, so I can't upload it.
Explanation of some of the arguments:
- -f image2pipe -vcodec ppm -: write a stream of PPM images to stdout instead of to a file
- -deconstruct: make convert store only the regions that changed between consecutive frames
- -delay 5: time to pause between frames, in ticks of 1/100 of a second
- -loop 0: loop the GIF forever
Even if you reduce the height and framerate, the output GIF may still be larger than the video, since "real" non-GIF video formats compress across frames, while GIF only compresses individual frames.
A direct:
convert input.mp4 rpi2-bare-metal-blink.gif
worked, but almost killed my computer because it ran out of memory, and produced an output 100x larger than my 2 s, 1 MB input file. Maybe one day ImageMagick will catch up.
See also: https://superuser.com/questions/556029/how-do-i-convert-a-video-to-gif-using-ffmpeg-with-reasonable-quality
Tested on Ubuntu 17.10.
Gifski
https://gif.ski/
This is another option that was brought to my attention and which claims intelligent algorithms, so let's try it out.
First we need to convert the video to a sequence of images, and then feed those into gifski, e.g.:
sudo snap install gifski
mkdir -p frames
ffmpeg \
-i opengl-rotating-triangle.mp4 \
-r 15 \
-vf scale=512:-1 \
-ss 00:00:03 -to 00:00:06 \
frames/%04d.png
gifski -o opengl-rotating-triangle-gifski.gif frames/*.png
opengl-rotating-triangle-gifski.gif
Image info: 954K, 45 frames, 512x512 apparent size, not coalesced, conversion time on a Lenovo P51: 4.8s.
And the 2 s STS clip:
Image info: 1.6M, 30 frames, 512x288 apparent size, not coalesced, conversion time on a Lenovo P51: 2.8s.
So for me, subjectively, this did not appear to offer a significant benefit over ffmpeg's palettegen.
It is well worth running a deinterlacing video filter during your video encode, and this may very well lessen some of the odd screen effects that you are seeing in your output video. A second thought, unrelated to motion artefact but well worth adding in, is the use of a denoising filter.
1. Deinterlacing:
For FFmpeg the best and fastest choice is yadif, which in the usual quirky geek fashion simply stands for 'Yet Another DeInterlacing Filter'! yadif can be used with no options, or you can specify an option for each of 3 fields:
- mode: The basic interlacing mode to adopt
- parity: The picture field parity assumed for the input interlaced video
- deint: Specify which frames to deinterlace
The safe defaults can be specified on the FFmpeg command line as:
-vf yadif=0:-1:0
If you wish to alter these, all of the deeper detail is contained here:
FFmpeg Filter Documentation: yadif
https://ffmpeg.org/ffmpeg-filters.html#yadif-1
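Plugged into the GIF pipeline from earlier, those safe defaults would look something like this (a sketch; the filenames are placeholders, and yadif is applied before scaling so it sees the full-resolution fields):

```shell
# Deinterlace with yadif's safe defaults, then scale as before.
vf="yadif=0:-1:0,scale=512:-1"
ffmpeg \
  -i input-interlaced.mp4 \
  -r 15 \
  -vf "$vf" \
  -ss 00:00:03 -to 00:00:06 \
  output-deinterlaced.gif
```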
A further deinterlacing filter called mcdeint (motion-compensation deinterlacing) can also be applied, but you may find this painfully slow. A typical command line for use of this filter would be:
-vf yadif=1:-1:0,mcdeint=2:1:10
And again the fine detail of the mcdeint options can be seen in the FFmpeg documentation:
FFmpeg Filter Documentation: mcdeint
https://ffmpeg.org/ffmpeg-filters.html#mcdeint
2. Denoising:
A final thought that may well be worth some experimentation is the use of a denoising filter; although this should not affect motion artefact, it is still a worthwhile addition. Under FFmpeg there are a few choices, but one well worth looking at is nlmeans (denoise frames using the Non-Local Means algorithm). You will need the very latest FFmpeg for this one.
To use this in the easiest command line try the following:
-vf yadif=0:-1:0,nlmeans
There is a hit with nlmeans in terms of encoding time; not as severe a penalty as is seen with mcdeint, but still a consideration...
If you have an older copy of FFmpeg with no access to this newest filter, there is an older denoising filter that can safely be used with trust in the sane defaults:
-vf yadif=0:-1:0,hqdn3d
I note on my own system that hqdn3d is very, very much faster than the newer nlmeans. Better? Well, I suspect that is a debate for another forum :)
And hopefully a combination of any of these thoughts will solve your problem...