The final line of the output log is too generic; you have to look further up. The actual error is this:
[ac3 @ 0x9fdc740] invalid bit rate
The problem is that for some reason avconv wants to encode your MP3 audio to AC3. But when you're only resizing the video, you can simply leave the audio bitstream alone.
Note that resizing and re-encoding will lower your quality drastically. So, unless you really need to, don't do it. Your video will suffer from generation loss.
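One way to sidestep the AC3 error, assuming you really only need to resize, is to copy the audio stream untouched (the in.avi/out.avi names are placeholders):

```shell
# Resize the video but pass the MP3 audio through unchanged,
# avoiding the AC3 re-encode that triggers "invalid bit rate".
avconv -i in.avi -c:a copy -s 640x360 out.avi
```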
If you can, avoid using MPEG-4 Part 2 codecs (Xvid, or the libavcodec-native mpeg4), and use MPEG-4 Part 10 / H.264 codecs instead (e.g., x264). Since H.264 isn't properly supported in AVI containers, we'll use MP4 instead, which should be your container of choice over AVI most of the time.
ffmpeg -i in.avi -c:a copy -c:v libx264 -crf 23 -s:v 640x360 output.mp4
This will copy the audio stream (-c:a copy) and encode the video with x264 (-c:v libx264) at a constant quality of 23 (-crf 23). Use a lower value here for better quality, with sane values ranging from 18 to 28. The output size is set with -s:v.
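If you'd rather preserve the aspect ratio automatically than hard-code both dimensions, the scale filter can stand in for -s:v. A sketch assuming a 640-pixel target width:

```shell
# Scale to 640 px wide; -2 picks a height that keeps the aspect
# ratio and stays divisible by 2 (required by most H.264 encoders).
ffmpeg -i in.avi -c:a copy -c:v libx264 -crf 23 -vf scale=640:-2 output.mp4
```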
I'm using FFmpeg synonymously with Libav here, since the syntax should be the same. I would, however, recommend ditching the default Libav version that ships with Ubuntu and either compiling FFmpeg from source or using a recent static Linux build.
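Grabbing a static build is usually the quickest route. A sketch of the usual steps; the tarball name is an assumption and varies with the build site you download from:

```shell
# Unpack a static FFmpeg build and put it on the PATH
# (tarball name is an example; check your download).
tar xf ffmpeg-release-amd64-static.tar.xz
sudo cp ffmpeg-*-static/ffmpeg ffmpeg-*-static/ffprobe /usr/local/bin/
ffmpeg -version
```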
From my own experience, if you want absolutely no loss in quality, the x265 --lossless switch is what you are looking for.
Not sure about avconv, but the command you typed looks identical to what I do with FFmpeg. In FFmpeg you can pass the parameter like this:
ffmpeg -i INPUT.mkv -c:v libx265 -preset ultrafast -x265-params lossless=1 OUTPUT.mkv
Most x265 switches (options with no value) can be specified like this (except for the CLI-only ones, which are only used with the x265 binary directly).
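Multiple x265 parameters are joined with colons inside a single -x265-params string. A hedged sketch combining two of them (file names are placeholders):

```shell
# Two x265 options in one string, separated by ':':
# lossless mode plus a wider motion-estimation range.
ffmpeg -i INPUT.mkv -c:v libx265 -preset medium \
  -x265-params "lossless=1:merange=44" OUTPUT.mkv
```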
With that out of the way, I'd like to share my experience with x265 encoding. For most videos (be it WMV, or MPEG, or AVC/H.264) I use crf=23. x265 decides the rest of the parameters and usually does a good enough job.
However, before I commit to transcoding a video in its entirety, I often test my settings by converting a small portion of the video in question. Here's an example; suppose an MKV file with stream 0 being video, stream 1 being DTS audio, and stream 2 being a subtitle:
ffmpeg -hide_banner \
-ss 0 \
-i "INPUT.mkv" \
-attach "COVER.jpg" \
-map_metadata 0 \
-map_chapters 0 \
-metadata title="TITLE" \
-map 0:0 -metadata:s:v:0 language=eng \
-map 0:1 -metadata:s:a:0 language=eng -metadata:s:a:0 title="Surround 5.1 (DTS)" \
-map 0:2 -metadata:s:s:0 language=eng -metadata:s:s:0 title="English" \
-metadata:s:t:0 filename="Cover.jpg" -metadata:s:t:0 mimetype="image/jpeg" \
-c:v libx265 -preset ultrafast -x265-params \
crf=22:qcomp=0.8:aq-mode=1:aq-strength=1.0:qg-size=16:psy-rd=0.7:psy-rdoq=5.0:rdoq-level=1:merange=44 \
-c:a copy \
-c:s copy \
-t 120 \
"OUTPUT.HEVC.DTS.Sample.mkv"
Note that the backslashes signal line continuations in a long command; I use them to help me keep track of the various bits of a complex CLI invocation. Before I explain it line by line: the part that converts only a small portion of the video is the second line and the second-to-last line. -ss 0 means seek to second 0 before decoding the input, and -t 120 means stop writing to the output after 120 seconds. You can also use hh:mm:ss or hh:mm:ss.sss time formats.
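The same kind of clip trimmed with timestamp syntax instead of plain seconds might look like this (file names are placeholders):

```shell
# Seek 1m30s into the input and keep two minutes of output;
# hh:mm:ss[.sss] and plain seconds are interchangeable here.
ffmpeg -ss 00:01:30 -i "INPUT.mkv" -t 00:02:00 -c copy "CLIP.mkv"
```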
Now line-by-line:
-hide_banner prevents FFmpeg from showing build information on start. I just don't want to see it when I scroll up in the console;
-ss 0 seeks to second 0 before decoding the input. Note that if this parameter is given after the input file and before the output file, it becomes an output option: it tells ffmpeg to decode and discard the input until x seconds, and then start writing to the output. As an input option it is less accurate (because seeking is not accurate in most container formats), but takes almost no time. As an output option it is very precise but takes a considerable amount of time to decode the whole stream before the specified time, and for testing purposes you don't want to waste that time;
-i "INPUT.mkv": Specify the input file;
-attach "COVER.jpg": Attach a cover art (thumbnail picture, poster, whatever) to the output. The cover art is usually shown in file explorers;
-map_metadata 0: Copy over any and all metadata from input 0, which in the example is just the input;
-map_chapters 0: Copy over chapter info (if present) from input 0;
-metadata title="TITLE": Set the title of the video;
-map 0:0 ...: Map stream 0 of input 0, which means we want the first stream from the input to be written to the output. Since this stream is a video stream, it is the first video stream in the output, hence the stream specifier :s:v:0. Also set its language tag to English;
-map 0:1 ...: Similar to line 8, map the second stream (the DTS audio), and set its language and title (for easier identification when choosing between tracks in players);
-map 0:2 ...: Similar to line 9, except this stream is a subtitle;
-metadata:s:t:0 ...: Set metadata for the cover art. This is required for the mkv container format;
-c:v libx265 ...: Video codec options. The parameter string is so long that I've broken it into two lines. This setting is good for high-quality Blu-ray video (1080p) with minimal banding in gradients (which x265 is notoriously bad at). It is most likely overkill for DVDs, TV shows, and phone videos. This setting is mostly stolen from this Doom9 post;
crf=22:...: Continuation of the video codec parameters. See the forum post mentioned above;
-c:a copy: Copy over the audio;
-c:s copy: Copy over the subtitles;
-t 120: Stop writing to the output after 120 seconds, which gives us a 2-minute clip for previewing transcoding quality;
"OUTPUT.HEVC.DTS.Sample.mkv": Output file name. I tag my file names with the video codec and the primary audio codec.
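As a side-by-side illustration of the -ss placement discussed above (file names are placeholders):

```shell
# Fast but keyframe-accurate: -ss before -i (input option)
ffmpeg -ss 300 -i "INPUT.mkv" -t 120 -c copy "fast.mkv"

# Frame-accurate but slow: -ss after -i (output option);
# re-encoding is needed for exact cuts, so no -c copy here.
ffmpeg -i "INPUT.mkv" -ss 300 -t 120 "exact.mkv"
```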
Whew. This is my first answer so if there is anything I missed please leave a comment. I'm not a video production expert, I'm just a guy who's too lazy to watch a movie by putting the disc into the player.
PS. Maybe this question belongs somewhere else, as it isn't strongly related to Unix & Linux.
Best Answer
For ffmpeg, I always recommend using -sameq. During testing you can create a smaller test source. I assume 420x350 is a lower resolution than the source; try creating a test source matching this to speed up testing, where $testin is a filename with the same extension as $in. ffmpeg should keep the video codec and container the same, but drop the audio stream and lower the resolution. This will speed up testing, since the source video will be a little smaller and you can focus on making the codec conversion work well. I can't find -me_range documented in my ffmpeg. I would focus on playing with different values of -b and -r, and on the use of -sameq, until you get the output file size and quality you want.
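The test-source command itself did not survive in this answer. Based on the description (same codec and container, no audio, lower resolution), it presumably looked something like this sketch, where $in and $testin are the original and test file names:

```shell
# Hypothetical reconstruction: drop the audio (-an), shrink to the
# target resolution, and keep quality with -sameq (long since removed
# from modern ffmpeg; use -qscale or -crf there instead).
ffmpeg -i "$in" -an -s 420x350 -sameq "$testin"
```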