<!DOCTYPE html>
|
|
|
<html lang="en">
|
|
|
<head>
|
|
|
<meta charset="utf-8" />
|
|
|
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
|
|
|
<title>FFmpeg documentation</title>
|
|
|
<link rel="stylesheet" href="bootstrap.min.css" />
|
|
|
<link rel="stylesheet" href="style.min.css" />
|
|
|
|
|
|
<meta name="description" content="FFmpeg FAQ: ">
|
|
|
<meta name="keywords" content="FFmpeg documentation : FFmpeg FAQ: ">
|
|
|
<meta name="Generator" content="texi2html 5.0">
|
|
|
<!-- Created on July 3, 2018 by texi2html 5.0 -->
|
|
|
<!--
|
|
|
texi2html was written by:
|
|
|
Lionel Cons <Lionel.Cons@cern.ch> (original author)
|
|
|
Karl Berry <karl@freefriends.org>
|
|
|
Olaf Bachmann <obachman@mathematik.uni-kl.de>
|
|
|
and many others.
|
|
|
Maintained by: Many creative people.
|
|
|
Send bugs and suggestions to <texi2html-bug@nongnu.org>
|
|
|
|
|
|
-->
|
|
|
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
|
|
|
</head>
|
|
|
<body>
|
|
|
<div class="container">
|
|
|
|
|
|
<h1 class="titlefont">FFmpeg FAQ</h1>
|
|
|
<hr>
|
|
|
<a name="SEC_Top"></a>
|
|
|
|
|
|
<a name="SEC_Contents"></a>
|
|
|
<h1>Table of Contents</h1>
|
|
|
|
|
|
<div class="contents">
|
|
|
|
|
|
<ul class="no-bullet">
|
|
|
<li><a name="toc-General-Questions" href="#General-Questions">1 General Questions</a>
|
|
|
<ul class="no-bullet">
|
|
|
<li><a name="toc-Why-doesn_0027t-FFmpeg-support-feature-_005bxyz_005d_003f" href="#Why-doesn_0027t-FFmpeg-support-feature-_005bxyz_005d_003f">1.1 Why doesn’t FFmpeg support feature [xyz]?</a></li>
|
|
|
<li><a name="toc-FFmpeg-does-not-support-codec-XXX_002e-Can-you-include-a-Windows-DLL-loader-to-support-it_003f" href="#FFmpeg-does-not-support-codec-XXX_002e-Can-you-include-a-Windows-DLL-loader-to-support-it_003f">1.2 FFmpeg does not support codec XXX. Can you include a Windows DLL loader to support it?</a></li>
|
|
|
<li><a name="toc-I-cannot-read-this-file-although-this-format-seems-to-be-supported-by-ffmpeg_002e" href="#I-cannot-read-this-file-although-this-format-seems-to-be-supported-by-ffmpeg_002e">1.3 I cannot read this file although this format seems to be supported by ffmpeg.</a></li>
|
|
|
<li><a name="toc-Which-codecs-are-supported-by-Windows_003f" href="#Which-codecs-are-supported-by-Windows_003f">1.4 Which codecs are supported by Windows?</a></li>
|
|
|
</ul></li>
|
|
|
<li><a name="toc-Compilation" href="#Compilation">2 Compilation</a>
|
|
|
<ul class="no-bullet">
|
|
|
<li><a name="toc-error_003a-can_0027t-find-a-register-in-class-_0027GENERAL_005fREGS_0027-while-reloading-_0027asm_0027" href="#error_003a-can_0027t-find-a-register-in-class-_0027GENERAL_005fREGS_0027-while-reloading-_0027asm_0027">2.1 <code>error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'</code></a></li>
|
|
|
<li><a name="toc-I-have-installed-this-library-with-my-distro_0027s-package-manager_002e-Why-does-configure-not-see-it_003f" href="#I-have-installed-this-library-with-my-distro_0027s-package-manager_002e-Why-does-configure-not-see-it_003f">2.2 I have installed this library with my distro’s package manager. Why does <code>configure</code> not see it?</a></li>
|
|
|
<li><a name="toc-How-do-I-make-pkg_002dconfig-find-my-libraries_003f" href="#How-do-I-make-pkg_002dconfig-find-my-libraries_003f">2.3 How do I make <code>pkg-config</code> find my libraries?</a></li>
|
|
|
<li><a name="toc-How-do-I-use-pkg_002dconfig-when-cross_002dcompiling_003f" href="#How-do-I-use-pkg_002dconfig-when-cross_002dcompiling_003f">2.4 How do I use <code>pkg-config</code> when cross-compiling?</a></li>
|
|
|
</ul></li>
|
|
|
<li><a name="toc-Usage" href="#Usage">3 Usage</a>
|
|
|
<ul class="no-bullet">
|
|
|
<li><a name="toc-ffmpeg-does-not-work_003b-what-is-wrong_003f" href="#ffmpeg-does-not-work_003b-what-is-wrong_003f">3.1 ffmpeg does not work; what is wrong?</a></li>
|
|
|
<li><a name="toc-How-do-I-encode-single-pictures-into-movies_003f" href="#How-do-I-encode-single-pictures-into-movies_003f">3.2 How do I encode single pictures into movies?</a></li>
|
|
|
<li><a name="toc-How-do-I-encode-movie-to-single-pictures_003f" href="#How-do-I-encode-movie-to-single-pictures_003f">3.3 How do I encode movie to single pictures?</a></li>
|
|
|
<li><a name="toc-Why-do-I-see-a-slight-quality-degradation-with-multithreaded-MPEG_002a-encoding_003f" href="#Why-do-I-see-a-slight-quality-degradation-with-multithreaded-MPEG_002a-encoding_003f">3.4 Why do I see a slight quality degradation with multithreaded MPEG* encoding?</a></li>
|
|
|
<li><a name="toc-How-can-I-read-from-the-standard-input-or-write-to-the-standard-output_003f" href="#How-can-I-read-from-the-standard-input-or-write-to-the-standard-output_003f">3.5 How can I read from the standard input or write to the standard output?</a></li>
|
|
|
<li><a name="toc-_002df-jpeg-doesn_0027t-work_002e" href="#g_t_002df-jpeg-doesn_0027t-work_002e">3.6 -f jpeg doesn’t work.</a></li>
|
|
|
<li><a name="toc-Why-can-I-not-change-the-frame-rate_003f" href="#Why-can-I-not-change-the-frame-rate_003f">3.7 Why can I not change the frame rate?</a></li>
|
|
|
<li><a name="toc-How-do-I-encode-Xvid-or-DivX-video-with-ffmpeg_003f" href="#How-do-I-encode-Xvid-or-DivX-video-with-ffmpeg_003f">3.8 How do I encode Xvid or DivX video with ffmpeg?</a></li>
|
|
|
<li><a name="toc-Which-are-good-parameters-for-encoding-high-quality-MPEG_002d4_003f" href="#Which-are-good-parameters-for-encoding-high-quality-MPEG_002d4_003f">3.9 Which are good parameters for encoding high quality MPEG-4?</a></li>
|
|
|
<li><a name="toc-Which-are-good-parameters-for-encoding-high-quality-MPEG_002d1_002fMPEG_002d2_003f" href="#Which-are-good-parameters-for-encoding-high-quality-MPEG_002d1_002fMPEG_002d2_003f">3.10 Which are good parameters for encoding high quality MPEG-1/MPEG-2?</a></li>
|
|
|
<li><a name="toc-Interlaced-video-looks-very-bad-when-encoded-with-ffmpeg_002c-what-is-wrong_003f" href="#Interlaced-video-looks-very-bad-when-encoded-with-ffmpeg_002c-what-is-wrong_003f">3.11 Interlaced video looks very bad when encoded with ffmpeg, what is wrong?</a></li>
|
|
|
<li><a name="toc-How-can-I-read-DirectShow-files_003f" href="#How-can-I-read-DirectShow-files_003f">3.12 How can I read DirectShow files?</a></li>
|
|
|
<li><a name="toc-How-can-I-join-video-files_003f" href="#How-can-I-join-video-files_003f">3.13 How can I join video files?</a></li>
|
|
|
<li><a name="toc-How-can-I-concatenate-video-files_003f" href="#How-can-I-concatenate-video-files_003f">3.14 How can I concatenate video files?</a>
|
|
|
<ul class="no-bullet">
|
|
|
<li><a name="toc-Concatenating-using-the-concat-filter" href="#Concatenating-using-the-concat-filter">3.14.1 Concatenating using the concat <em>filter</em></a></li>
|
|
|
<li><a name="toc-Concatenating-using-the-concat-demuxer" href="#Concatenating-using-the-concat-demuxer">3.14.2 Concatenating using the concat <em>demuxer</em></a></li>
|
|
|
<li><a name="toc-Concatenating-using-the-concat-protocol-_0028file-level_0029" href="#Concatenating-using-the-concat-protocol-_0028file-level_0029">3.14.3 Concatenating using the concat <em>protocol</em> (file level)</a></li>
|
|
|
<li><a name="toc-Concatenating-using-raw-audio-and-video" href="#Concatenating-using-raw-audio-and-video">3.14.4 Concatenating using raw audio and video</a></li>
|
|
|
</ul></li>
|
|
|
<li><a name="toc-Using-_002df-lavfi_002c-audio-becomes-mono-for-no-apparent-reason_002e" href="#Using-_002df-lavfi_002c-audio-becomes-mono-for-no-apparent-reason_002e">3.15 Using ‘<samp>-f lavfi</samp>’, audio becomes mono for no apparent reason.</a></li>
|
|
|
<li><a name="toc-Why-does-FFmpeg-not-see-the-subtitles-in-my-VOB-file_003f" href="#Why-does-FFmpeg-not-see-the-subtitles-in-my-VOB-file_003f">3.16 Why does FFmpeg not see the subtitles in my VOB file?</a></li>
|
|
|
<li><a name="toc-Why-was-the-ffmpeg-_002dsameq-option-removed_003f-What-to-use-instead_003f" href="#Why-was-the-ffmpeg-_002dsameq-option-removed_003f-What-to-use-instead_003f">3.17 Why was the <code>ffmpeg</code> ‘<samp>-sameq</samp>’ option removed? What to use instead?</a></li>
|
|
|
<li><a name="toc-I-have-a-stretched-video_002c-why-does-scaling-does-not-fix-it_003f" href="#I-have-a-stretched-video_002c-why-does-scaling-does-not-fix-it_003f">3.18 I have a stretched video, why does scaling does not fix it?</a></li>
|
|
|
<li><a name="toc-How-do-I-run-ffmpeg-as-a-background-task_003f" href="#How-do-I-run-ffmpeg-as-a-background-task_003f">3.19 How do I run ffmpeg as a background task?</a></li>
|
|
|
<li><a name="toc-How-do-I-prevent-ffmpeg-from-suspending-with-a-message-like-suspended-_0028tty-output_0029_003f" href="#How-do-I-prevent-ffmpeg-from-suspending-with-a-message-like-suspended-_0028tty-output_0029_003f">3.20 How do I prevent ffmpeg from suspending with a message like <em>suspended (tty output)</em>?</a></li>
|
|
|
</ul></li>
|
|
|
<li><a name="toc-Development" href="#Development">4 Development</a>
|
|
|
<ul class="no-bullet">
|
|
|
<li><a name="toc-Are-there-examples-illustrating-how-to-use-the-FFmpeg-libraries_002c-particularly-libavcodec-and-libavformat_003f" href="#Are-there-examples-illustrating-how-to-use-the-FFmpeg-libraries_002c-particularly-libavcodec-and-libavformat_003f">4.1 Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?</a></li>
|
|
|
<li><a name="toc-Can-you-support-my-C-compiler-XXX_003f" href="#Can-you-support-my-C-compiler-XXX_003f">4.2 Can you support my C compiler XXX?</a></li>
|
|
|
<li><a name="toc-Is-Microsoft-Visual-C_002b_002b-supported_003f" href="#Is-Microsoft-Visual-C_002b_002b-supported_003f">4.3 Is Microsoft Visual C++ supported?</a></li>
|
|
|
<li><a name="toc-Can-you-add-automake_002c-libtool-or-autoconf-support_003f" href="#Can-you-add-automake_002c-libtool-or-autoconf-support_003f">4.4 Can you add automake, libtool or autoconf support?</a></li>
|
|
|
<li><a name="toc-Why-not-rewrite-FFmpeg-in-object_002doriented-C_002b_002b_003f" href="#Why-not-rewrite-FFmpeg-in-object_002doriented-C_002b_002b_003f">4.5 Why not rewrite FFmpeg in object-oriented C++?</a></li>
|
|
|
<li><a name="toc-Why-are-the-ffmpeg-programs-devoid-of-debugging-symbols_003f" href="#Why-are-the-ffmpeg-programs-devoid-of-debugging-symbols_003f">4.6 Why are the ffmpeg programs devoid of debugging symbols?</a></li>
|
|
|
<li><a name="toc-I-do-not-like-the-LGPL_002c-can-I-contribute-code-under-the-GPL-instead_003f" href="#I-do-not-like-the-LGPL_002c-can-I-contribute-code-under-the-GPL-instead_003f">4.7 I do not like the LGPL, can I contribute code under the GPL instead?</a></li>
|
|
|
<li><a name="toc-I_0027m-using-FFmpeg-from-within-my-C-application-but-the-linker-complains-about-missing-symbols-from-the-libraries-themselves_002e" href="#I_0027m-using-FFmpeg-from-within-my-C-application-but-the-linker-complains-about-missing-symbols-from-the-libraries-themselves_002e">4.8 I’m using FFmpeg from within my C application but the linker complains about missing symbols from the libraries themselves.</a></li>
|
|
|
<li><a name="toc-I_0027m-using-FFmpeg-from-within-my-C_002b_002b-application-but-the-linker-complains-about-missing-symbols-which-seem-to-be-available_002e" href="#I_0027m-using-FFmpeg-from-within-my-C_002b_002b-application-but-the-linker-complains-about-missing-symbols-which-seem-to-be-available_002e">4.9 I’m using FFmpeg from within my C++ application but the linker complains about missing symbols which seem to be available.</a></li>
|
|
|
<li><a name="toc-I_0027m-using-libavutil-from-within-my-C_002b_002b-application-but-the-compiler-complains-about-_0027UINT64_005fC_0027-was-not-declared-in-this-scope" href="#I_0027m-using-libavutil-from-within-my-C_002b_002b-application-but-the-compiler-complains-about-_0027UINT64_005fC_0027-was-not-declared-in-this-scope">4.10 I’m using libavutil from within my C++ application but the compiler complains about ’UINT64_C’ was not declared in this scope</a></li>
|
|
|
<li><a name="toc-I-have-a-file-in-memory-_002f-a-API-different-from-_002aopen_002f_002aread_002f-libc-how-do-I-use-it-with-libavformat_003f" href="#I-have-a-file-in-memory-_002f-a-API-different-from-_002aopen_002f_002aread_002f-libc-how-do-I-use-it-with-libavformat_003f">4.11 I have a file in memory / a API different from *open/*read/ libc how do I use it with libavformat?</a></li>
|
|
|
<li><a name="toc-Where-is-the-documentation-about-ffv1_002c-msmpeg4_002c-asv1_002c-4xm_003f" href="#Where-is-the-documentation-about-ffv1_002c-msmpeg4_002c-asv1_002c-4xm_003f">4.12 Where is the documentation about ffv1, msmpeg4, asv1, 4xm?</a></li>
|
|
|
<li><a name="toc-How-do-I-feed-H_002e263_002dRTP-_0028and-other-codecs-in-RTP_0029-to-libavcodec_003f" href="#How-do-I-feed-H_002e263_002dRTP-_0028and-other-codecs-in-RTP_0029-to-libavcodec_003f">4.13 How do I feed H.263-RTP (and other codecs in RTP) to libavcodec?</a></li>
|
|
|
<li><a name="toc-AVStream_002er_005fframe_005frate-is-wrong_002c-it-is-much-larger-than-the-frame-rate_002e" href="#AVStream_002er_005fframe_005frate-is-wrong_002c-it-is-much-larger-than-the-frame-rate_002e">4.14 AVStream.r_frame_rate is wrong, it is much larger than the frame rate.</a></li>
|
|
|
<li><a name="toc-Why-is-make-fate-not-running-all-tests_003f" href="#Why-is-make-fate-not-running-all-tests_003f">4.15 Why is <code>make fate</code> not running all tests?</a></li>
|
|
|
<li><a name="toc-Why-is-make-fate-not-finding-the-samples_003f" href="#Why-is-make-fate-not-finding-the-samples_003f">4.16 Why is <code>make fate</code> not finding the samples?</a></li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
|
|
|
</div>
|
|
|
|
|
|
|
|
|
<hr size="6">
|
|
|
<a name="General-Questions"></a>
|
|
|
<h1 class="chapter"><a href="faq.html#toc-General-Questions">1 General Questions</a></h1>
|
|
|
|
|
|
<a name="Why-doesn_0027t-FFmpeg-support-feature-_005bxyz_005d_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Why-doesn_0027t-FFmpeg-support-feature-_005bxyz_005d_003f">1.1 Why doesn’t FFmpeg support feature [xyz]?</a></h2>
|
|
|
|
|
|
<p>Because no one has taken on that task yet. FFmpeg development is
|
|
|
driven by the tasks that are important to the individual developers.
|
|
|
If there is a feature that is important to you, the best way to get
|
|
|
it implemented is to undertake the task yourself or sponsor a developer.
|
|
|
</p>
|
|
|
<a name="FFmpeg-does-not-support-codec-XXX_002e-Can-you-include-a-Windows-DLL-loader-to-support-it_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-FFmpeg-does-not-support-codec-XXX_002e-Can-you-include-a-Windows-DLL-loader-to-support-it_003f">1.2 FFmpeg does not support codec XXX. Can you include a Windows DLL loader to support it?</a></h2>
|
|
|
|
|
|
<p>No. Windows DLLs are not portable, and they are also bloated and often slow.
Moreover, FFmpeg strives to support all codecs natively.
A DLL loader is not conducive to that goal.
|
|
|
</p>
|
|
|
<a name="I-cannot-read-this-file-although-this-format-seems-to-be-supported-by-ffmpeg_002e"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-I-cannot-read-this-file-although-this-format-seems-to-be-supported-by-ffmpeg_002e">1.3 I cannot read this file although this format seems to be supported by ffmpeg.</a></h2>
|
|
|
|
|
|
<p>Even if ffmpeg can read the container format, it may not support all its
|
|
|
codecs. Please consult the supported codec list in the ffmpeg
|
|
|
documentation.
|
|
|
</p>
|
|
|
<a name="Which-codecs-are-supported-by-Windows_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Which-codecs-are-supported-by-Windows_003f">1.4 Which codecs are supported by Windows?</a></h2>
|
|
|
|
|
|
<p>Windows does not support standard formats like MPEG very well, unless you
|
|
|
install some additional codecs.
|
|
|
</p>
|
|
|
<p>The following list of video codecs should work on most Windows systems:
|
|
|
</p><dl compact="compact">
|
|
|
<dt>‘<samp>msmpeg4v2</samp>’</dt>
|
|
|
<dd><p>.avi/.asf
|
|
|
</p></dd>
|
|
|
<dt>‘<samp>msmpeg4</samp>’</dt>
|
|
|
<dd><p>.asf only
|
|
|
</p></dd>
|
|
|
<dt>‘<samp>wmv1</samp>’</dt>
|
|
|
<dd><p>.asf only
|
|
|
</p></dd>
|
|
|
<dt>‘<samp>wmv2</samp>’</dt>
|
|
|
<dd><p>.asf only
|
|
|
</p></dd>
|
|
|
<dt>‘<samp>mpeg4</samp>’</dt>
|
|
|
<dd><p>Only if you have some MPEG-4 codec like ffdshow or Xvid installed.
|
|
|
</p></dd>
|
|
|
<dt>‘<samp>mpeg1video</samp>’</dt>
|
|
|
<dd><p>.mpg only
|
|
|
</p></dd>
|
|
|
</dl>
|
|
|
<p>Note that ASF files often have .wmv or .wma extensions in Windows. It should also
|
|
|
be mentioned that Microsoft claims a patent on the ASF format, and may sue
|
|
|
or threaten users who create ASF files with non-Microsoft software. It is
|
|
|
strongly advised to avoid ASF where possible.
|
|
|
</p>
|
|
|
<p>The following list of audio codecs should work on most Windows systems:
|
|
|
</p><dl compact="compact">
|
|
|
<dt>‘<samp>adpcm_ima_wav</samp>’</dt>
|
|
|
<dt>‘<samp>adpcm_ms</samp>’</dt>
|
|
|
<dt>‘<samp>pcm_s16le</samp>’</dt>
|
|
|
<dd><p>always
|
|
|
</p></dd>
|
|
|
<dt>‘<samp>libmp3lame</samp>’</dt>
|
|
|
<dd><p>If some MP3 codec like LAME is installed.
|
|
|
</p></dd>
|
|
|
</dl>
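<p>As an illustration, here is a minimal sketch that combines codecs from the
lists above to produce an AVI file that should play on a stock Windows system
(file names are placeholders):
</p>
<div class="example">
<pre class="example">ffmpeg -i input.mkv -c:v msmpeg4v2 -c:a pcm_s16le output.avi
</pre></div>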
|
|
|
|
|
|
|
|
|
<a name="Compilation"></a>
|
|
|
<h1 class="chapter"><a href="faq.html#toc-Compilation">2 Compilation</a></h1>
|
|
|
|
|
|
<a name="error_003a-can_0027t-find-a-register-in-class-_0027GENERAL_005fREGS_0027-while-reloading-_0027asm_0027"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-error_003a-can_0027t-find-a-register-in-class-_0027GENERAL_005fREGS_0027-while-reloading-_0027asm_0027">2.1 <code>error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'</code></a></h2>
|
|
|
|
|
|
<p>This is a bug in gcc. Do not report it to us. Instead, please report it to
|
|
|
the gcc developers. Note that we will not add workarounds for gcc bugs.
|
|
|
</p>
|
|
|
<p>Also note that (some of) the gcc developers believe this is not a bug or
|
|
|
not a bug they should fix:
|
|
|
<a href="http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203">http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203</a>.
|
|
|
Then again, some of them do not know the difference between an undecidable
|
|
|
problem and an NP-hard problem...
|
|
|
</p>
|
|
|
<a name="I-have-installed-this-library-with-my-distro_0027s-package-manager_002e-Why-does-configure-not-see-it_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-I-have-installed-this-library-with-my-distro_0027s-package-manager_002e-Why-does-configure-not-see-it_003f">2.2 I have installed this library with my distro’s package manager. Why does <code>configure</code> not see it?</a></h2>
|
|
|
|
|
|
<p>Distributions usually split libraries in several packages. The main package
|
|
|
contains the files necessary to run programs using the library. The
|
|
|
development package contains the files necessary to build programs using the
|
|
|
library. Sometimes, docs and/or data are in a separate package too.
|
|
|
</p>
|
|
|
<p>To build FFmpeg, you need to install the development package. It is usually
|
|
|
called ‘<tt>libfoo-dev</tt>’ or ‘<tt>libfoo-devel</tt>’. You can remove it after the
|
|
|
build is finished, but be sure to keep the main package.
|
|
|
</p>
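<p>For example, on a Debian- or Ubuntu-based system the development package for
libx264 is typically installed with a command like the following (the exact
package name depends on your distribution, so treat this as a sketch):
</p>
<div class="example">
<pre class="example">sudo apt-get install libx264-dev
</pre></div>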
|
|
|
<a name="How-do-I-make-pkg_002dconfig-find-my-libraries_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-do-I-make-pkg_002dconfig-find-my-libraries_003f">2.3 How do I make <code>pkg-config</code> find my libraries?</a></h2>
|
|
|
|
|
|
<p>Somewhere along with your libraries, there is a ‘<tt>.pc</tt>’ file (or several)
|
|
|
in a ‘<tt>pkgconfig</tt>’ directory. You need to set environment variables to
|
|
|
point <code>pkg-config</code> to these files.
|
|
|
</p>
|
|
|
<p>If you need to <em>add</em> directories to <code>pkg-config</code>’s search list
|
|
|
(typical use case: library installed separately), add it to
|
|
|
<code>$PKG_CONFIG_PATH</code>:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">export PKG_CONFIG_PATH=/opt/x264/lib/pkgconfig:/opt/opus/lib/pkgconfig
|
|
|
</pre></div>
|
|
|
|
|
|
<p>If you need to <em>replace</em> <code>pkg-config</code>’s search list
|
|
|
(typical use case: cross-compiling), set it in
|
|
|
<code>$PKG_CONFIG_LIBDIR</code>:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">export PKG_CONFIG_LIBDIR=/home/me/cross/usr/lib/pkgconfig:/home/me/cross/usr/local/lib/pkgconfig
|
|
|
</pre></div>
|
|
|
|
|
|
<p>If you need to know the library’s internal dependencies (typical use: static
|
|
|
linking), add the <code>--static</code> option to <code>pkg-config</code>:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">./configure --pkg-config-flags=--static
|
|
|
</pre></div>
|
|
|
|
|
|
<a name="How-do-I-use-pkg_002dconfig-when-cross_002dcompiling_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-do-I-use-pkg_002dconfig-when-cross_002dcompiling_003f">2.4 How do I use <code>pkg-config</code> when cross-compiling?</a></h2>
|
|
|
|
|
|
<p>The best way is to install <code>pkg-config</code> in your cross-compilation
|
|
|
environment. It will automatically use the cross-compilation libraries.
|
|
|
</p>
|
|
|
<p>You can also use <code>pkg-config</code> from the host environment by
|
|
|
specifying explicitly <code>--pkg-config=pkg-config</code> to <code>configure</code>.
|
|
|
In that case, you must point <code>pkg-config</code> to the correct directories
|
|
|
using the <code>PKG_CONFIG_LIBDIR</code> variable, as explained in the previous entry.
|
|
|
</p>
|
|
|
<p>As an intermediate solution, you can place in your cross-compilation
environment a script that calls the host <code>pkg-config</code> with
<code>PKG_CONFIG_LIBDIR</code> set. That script can look like this:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">#!/bin/sh
|
|
|
PKG_CONFIG_LIBDIR=/path/to/cross/lib/pkgconfig
|
|
|
export PKG_CONFIG_LIBDIR
|
|
|
exec /usr/bin/pkg-config "$@"
|
|
|
</pre></div>
|
|
|
|
|
|
<a name="Usage"></a>
|
|
|
<h1 class="chapter"><a href="faq.html#toc-Usage">3 Usage</a></h1>
|
|
|
|
|
|
<a name="ffmpeg-does-not-work_003b-what-is-wrong_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-ffmpeg-does-not-work_003b-what-is-wrong_003f">3.1 ffmpeg does not work; what is wrong?</a></h2>
|
|
|
|
|
|
<p>Try a <code>make distclean</code> in the ffmpeg source directory before the build.
|
|
|
If this does not help see
|
|
|
(<a href="https://ffmpeg.org/bugreports.html">https://ffmpeg.org/bugreports.html</a>).
|
|
|
</p>
|
|
|
<a name="How-do-I-encode-single-pictures-into-movies_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-do-I-encode-single-pictures-into-movies_003f">3.2 How do I encode single pictures into movies?</a></h2>
|
|
|
|
|
|
<p>First, rename your pictures to follow a numerical sequence.
|
|
|
For example, img1.jpg, img2.jpg, img3.jpg,...
|
|
|
Then you may run:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
|
|
|
</pre></div>
|
|
|
|
|
|
<p>Notice that ‘<samp>%d</samp>’ is replaced by the image number.
|
|
|
</p>
|
|
|
<p>‘<tt>img%03d.jpg</tt>’ means the sequence ‘<tt>img001.jpg</tt>’, ‘<tt>img002.jpg</tt>’, etc.
|
|
|
</p>
|
|
|
<p>Use the ‘<samp>-start_number</samp>’ option to declare a starting number for
|
|
|
the sequence. This is useful if your sequence does not start with
|
|
|
‘<tt>img001.jpg</tt>’ but is still in a numerical order. The following
|
|
|
example will start with ‘<tt>img100.jpg</tt>’:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">ffmpeg -f image2 -start_number 100 -i img%d.jpg /tmp/a.mpg
|
|
|
</pre></div>
|
|
|
|
|
|
<p>If you have a large number of pictures to rename, you can use the
following command to ease the burden. The command, using Bourne
shell syntax, symbolically links all files in the current directory
that match <code>*jpg</code> to the ‘<tt>/tmp</tt>’ directory in the sequence of
‘<tt>img001.jpg</tt>’, ‘<tt>img002.jpg</tt>’ and so on.
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
|
|
|
</pre></div>
|
|
|
|
|
|
<p>If you want to sequence them by oldest modified first, substitute
|
|
|
<code>$(ls -r -t *jpg)</code> in place of <code>*jpg</code>.
|
|
|
</p>
|
|
|
<p>Then run:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg
|
|
|
</pre></div>
|
|
|
|
|
|
<p>The same logic is used for any image format that ffmpeg reads.
|
|
|
</p>
|
|
|
<p>You can also use <code>cat</code> to pipe images to ffmpeg:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg
|
|
|
</pre></div>
|
|
|
|
|
|
<a name="How-do-I-encode-movie-to-single-pictures_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-do-I-encode-movie-to-single-pictures_003f">3.3 How do I encode movie to single pictures?</a></h2>
|
|
|
|
|
|
<p>Use:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">ffmpeg -i movie.mpg movie%d.jpg
|
|
|
</pre></div>
|
|
|
|
|
|
<p>The ‘<tt>movie.mpg</tt>’ used as input will be converted to
|
|
|
‘<tt>movie1.jpg</tt>’, ‘<tt>movie2.jpg</tt>’, etc...
|
|
|
</p>
|
|
|
<p>Instead of relying on file format self-recognition, you may also use
|
|
|
</p><dl compact="compact">
|
|
|
<dt>‘<samp>-c:v ppm</samp>’</dt>
|
|
|
<dt>‘<samp>-c:v png</samp>’</dt>
|
|
|
<dt>‘<samp>-c:v mjpeg</samp>’</dt>
|
|
|
</dl>
|
|
|
<p>to force the encoding.
|
|
|
</p>
|
|
|
<p>Applying that to the previous example:
|
|
|
</p><div class="example">
|
|
|
<pre class="example">ffmpeg -i movie.mpg -f image2 -c:v mjpeg menu%d.jpg
|
|
|
</pre></div>
|
|
|
|
|
|
<p>Beware that there is no "jpeg" codec. Use "mjpeg" instead.
|
|
|
</p>
|
|
|
<a name="Why-do-I-see-a-slight-quality-degradation-with-multithreaded-MPEG_002a-encoding_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Why-do-I-see-a-slight-quality-degradation-with-multithreaded-MPEG_002a-encoding_003f">3.4 Why do I see a slight quality degradation with multithreaded MPEG* encoding?</a></h2>
|
|
|
|
|
|
<p>For multithreaded MPEG* encoding, the encoded slices must be independent,
|
|
|
otherwise thread n would practically have to wait for n-1 to finish, so it’s
|
|
|
quite logical that there is a small reduction of quality. This is not a bug.
|
|
|
</p>
|
|
|
<a name="How-can-I-read-from-the-standard-input-or-write-to-the-standard-output_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-can-I-read-from-the-standard-input-or-write-to-the-standard-output_003f">3.5 How can I read from the standard input or write to the standard output?</a></h2>
|
|
|
|
|
|
<p>Use ‘<tt>-</tt>’ as file name.
|
|
|
</p>
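<p>For example, this sketch reads a file from standard input and writes the
remuxed result to standard output; since there is no output file name, the
output format must be given explicitly with ‘<samp>-f</samp>’:
</p>
<div class="example">
<pre class="example">cat input.mpg | ffmpeg -i - -c copy -f mpegts - > output.ts
</pre></div>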
|
|
|
<a name="g_t_002df-jpeg-doesn_0027t-work_002e"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-_002df-jpeg-doesn_0027t-work_002e">3.6 -f jpeg doesn’t work.</a></h2>
|
|
|
|
|
|
<p>Try <code>-f image2 test%d.jpg</code>.
|
|
|
</p>
|
|
|
<a name="Why-can-I-not-change-the-frame-rate_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Why-can-I-not-change-the-frame-rate_003f">3.7 Why can I not change the frame rate?</a></h2>
|
|
|
|
|
|
<p>Some codecs, like MPEG-1/2, only allow a small number of fixed frame rates.
|
|
|
Choose a different codec with the -c:v command line option.
|
|
|
</p>
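<p>For example, a sketch that re-encodes to MPEG-4, which does not have this
restriction, while setting an arbitrary output frame rate (the values here are
purely illustrative):
</p>
<div class="example">
<pre class="example">ffmpeg -i input.mpg -r 23.976 -c:v mpeg4 output.avi
</pre></div>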
|
|
|
<a name="How-do-I-encode-Xvid-or-DivX-video-with-ffmpeg_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-do-I-encode-Xvid-or-DivX-video-with-ffmpeg_003f">3.8 How do I encode Xvid or DivX video with ffmpeg?</a></h2>
|
|
|
|
|
|
<p>Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4
standard (note that there are many other coding formats that use this
same standard). Thus, use <code>-c:v mpeg4</code> to encode in these formats. The
default fourcc stored in an MPEG-4-coded file will be <code>FMP4</code>. If you want
a different fourcc, use the <code>-vtag</code> option. E.g., <code>-vtag xvid</code> will
force the fourcc <code>xvid</code> to be stored as the video fourcc rather than the
default.
|
|
|
</p>
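<p>Putting this together, a sketch that encodes MPEG-4 video and stores the
‘xvid’ fourcc (file names are placeholders):
</p>
<div class="example">
<pre class="example">ffmpeg -i input.avi -c:v mpeg4 -vtag xvid output.avi
</pre></div>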
|
|
|
<a name="Which-are-good-parameters-for-encoding-high-quality-MPEG_002d4_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Which-are-good-parameters-for-encoding-high-quality-MPEG_002d4_003f">3.9 Which are good parameters for encoding high quality MPEG-4?</a></h2>
|
|
|
|
|
|
<p>Start with <code>-mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2</code>.
Things to try: <code>-bf 2</code>, <code>-flags qprd</code>, <code>-flags mv0</code>, <code>-flags skiprd</code>.
|
|
|
</p>
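<p>For instance, a two-pass sketch using the parameters above (the bitrate is an
arbitrary example value; add the audio options you need):
</p>
<div class="example">
<pre class="example">ffmpeg -i input.avi -c:v mpeg4 -b:v 1500k -mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -bf 2 -pass 1 -an -f avi /dev/null
ffmpeg -i input.avi -c:v mpeg4 -b:v 1500k -mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -bf 2 -pass 2 output.avi
</pre></div>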
|
|
|
<a name="Which-are-good-parameters-for-encoding-high-quality-MPEG_002d1_002fMPEG_002d2_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Which-are-good-parameters-for-encoding-high-quality-MPEG_002d1_002fMPEG_002d2_003f">3.10 Which are good parameters for encoding high quality MPEG-1/MPEG-2?</a></h2>
|
|
|
|
|
|
<p>Start with <code>-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2</code>,
but beware that <code>-g 100</code> might cause problems with some decoders.
Things to try: <code>-bf 2</code>, <code>-flags qprd</code>, <code>-flags mv0</code>, <code>-flags skiprd</code>.
|
|
|
</p>
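<p>As an illustration, a single-pass sketch using those parameters with the
MPEG-2 encoder (the bitrate is an arbitrary example value):
</p>
<div class="example">
<pre class="example">ffmpeg -i input.avi -c:v mpeg2video -b:v 4000k -mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -bf 2 output.mpg
</pre></div>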
|
|
|
<a name="Interlaced-video-looks-very-bad-when-encoded-with-ffmpeg_002c-what-is-wrong_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Interlaced-video-looks-very-bad-when-encoded-with-ffmpeg_002c-what-is-wrong_003f">3.11 Interlaced video looks very bad when encoded with ffmpeg, what is wrong?</a></h2>
|
|
|
|
|
|
<p>You should use <code>-flags +ilme+ildct</code> and maybe <code>-flags +alt</code> for interlaced
material, and try <code>-top 0/1</code> if the result looks really messed up.
|
|
|
</p>
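<p>For example, a sketch encoding interlaced material to MPEG-2 with those flags
(the top-field-first choice is illustrative; adjust it to your source):
</p>
<div class="example">
<pre class="example">ffmpeg -i interlaced.mpg -c:v mpeg2video -flags +ilme+ildct -top 1 output.mpg
</pre></div>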
|
|
|
<a name="How-can-I-read-DirectShow-files_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-can-I-read-DirectShow-files_003f">3.12 How can I read DirectShow files?</a></h2>
|
|
|
|
|
|
<p>If you have built FFmpeg with <code>./configure --enable-avisynth</code>
|
|
|
(only possible on MinGW/Cygwin platforms),
|
|
|
then you may use any file that DirectShow can read as input.
|
|
|
</p>
|
|
|
<p>Just create an "input.avs" text file with this single line ...
|
|
|
</p><div class="example">
|
|
|
<pre class="example">DirectShowSource("C:\path to your file\yourfile.asf")
|
|
|
</pre></div>
|
|
|
<p>... and then feed that text file to ffmpeg:
|
|
|
</p><div class="example">
|
|
|
<pre class="example">ffmpeg -i input.avs
|
|
|
</pre></div>
|
|
|
|
|
|
<p>For ANY other help on AviSynth, please visit the
|
|
|
<a href="http://www.avisynth.org/">AviSynth homepage</a>.
|
|
|
</p>
|
|
|
<a name="How-can-I-join-video-files_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-can-I-join-video-files_003f">3.13 How can I join video files?</a></h2>
|
|
|
|
|
|
<p>To "join" video files is quite ambiguous. The following list explains the
|
|
|
different kinds of "joining" and points out how those are addressed in
|
|
|
FFmpeg. To join video files may mean:
|
|
|
</p>
|
|
|
<ul>
|
|
|
<li>
|
|
|
To put them one after the other: this is called to <em>concatenate</em> them
|
|
|
(in short: concat) and is addressed
|
|
|
<a href="#How-can-I-concatenate-video-files">in this very faq</a>.
|
|
|
|
|
|
</li><li>
|
|
|
To put them together in the same file, to let the user choose between the
|
|
|
different versions (example: different audio languages): this is called to
|
|
|
<em>multiplex</em> them together (in short: mux), and is done by simply
|
|
|
invoking ffmpeg with several ‘<samp>-i</samp>’ options.
|
|
|
|
|
|
</li><li>
|
|
|
For audio, to put all channels together in a single stream (example: two
|
|
|
mono streams into one stereo stream): this is sometimes called to
|
|
|
<em>merge</em> them, and can be done using the
|
|
|
<a href="ffmpeg-filters.html#amerge"><code>amerge</code></a> filter.
|
|
|
|
|
|
</li><li>
|
|
|
For audio, to play one on top of the other: this is called to <em>mix</em>
|
|
|
them, and can be done by first merging them into a single stream and then
|
|
|
using the <a href="ffmpeg-filters.html#pan"><code>pan</code></a> filter to mix
|
|
|
the channels at will.
|
|
|
|
|
|
</li><li>
|
|
|
For video, to display both together, side by side or one on top of a part of
|
|
|
the other; it can be done using the
|
|
|
<a href="ffmpeg-filters.html#overlay"><code>overlay</code></a> video filter.
|
|
|
|
|
|
</li></ul>
|
|
|
|
|
|
<p><a name="How-can-I-concatenate-video-files"></a>
|
|
|
</p><a name="How-can-I-concatenate-video-files_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-can-I-concatenate-video-files_003f">3.14 How can I concatenate video files?</a></h2>
|
|
|
|
|
|
<p>There are several solutions, depending on the exact circumstances.
|
|
|
</p>
|
|
|
<a name="Concatenating-using-the-concat-filter"></a>
|
|
|
<h3 class="subsection"><a href="faq.html#toc-Concatenating-using-the-concat-filter">3.14.1 Concatenating using the concat <em>filter</em></a></h3>
|
|
|
|
|
|
<p>FFmpeg has a <a href="ffmpeg-filters.html#concat"><code>concat</code></a> filter designed specifically for that, with examples in the
|
|
|
documentation. This operation is recommended if you need to re-encode.
|
|
|
</p>
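<p>A sketch joining two files that each contain one video and one audio stream,
re-encoding the result (stream counts and file names are assumptions made for
the example):
</p>
<div class="example">
<pre class="example">ffmpeg -i input1.mp4 -i input2.mp4 \
  -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" output.mp4
</pre></div>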
|
|
|
<a name="Concatenating-using-the-concat-demuxer"></a>
|
|
|
<h3 class="subsection"><a href="faq.html#toc-Concatenating-using-the-concat-demuxer">3.14.2 Concatenating using the concat <em>demuxer</em></a></h3>
|
|
|
|
|
|
<p>FFmpeg has a <a href="ffmpeg-formats.html#concat"><code>concat</code></a> demuxer which you can use when you want to avoid a re-encode and
|
|
|
your format doesn’t support file level concatenation.
|
|
|
</p>
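<p>A sketch of its use: list the segments in a text file, then let the demuxer
read them in order (file names are placeholders; see the concat demuxer
documentation for the exact list syntax):
</p>
<div class="example">
<pre class="example">echo "file 'input1.mp4'" > mylist.txt
echo "file 'input2.mp4'" >> mylist.txt
ffmpeg -f concat -i mylist.txt -c copy output.mp4
</pre></div>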
|
|
|
<a name="Concatenating-using-the-concat-protocol-_0028file-level_0029"></a>
|
|
|
<h3 class="subsection"><a href="faq.html#toc-Concatenating-using-the-concat-protocol-_0028file-level_0029">3.14.3 Concatenating using the concat <em>protocol</em> (file level)</a></h3>
|
|
|
|
|
|
<p>FFmpeg has a <a href="ffmpeg-protocols.html#concat"><code>concat</code></a> protocol designed specifically for that, with examples in the
|
|
|
documentation.
|
|
|
</p>
|
|
|
<p>A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate
|
|
|
video by merely concatenating the files containing them.
|
|
|
</p>
|
|
|
<p>Hence you may concatenate your multimedia files by first transcoding them to
|
|
|
these privileged formats, then using the humble <code>cat</code> command (or the
|
|
|
equally humble <code>copy</code> under Windows), and finally transcoding back to your
|
|
|
format of choice.
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
|
|
|
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
|
|
|
cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg
|
|
|
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
|
|
|
</pre></div>
|
|
|
|
|
|
<p>Additionally, you can use the <code>concat</code> protocol instead of <code>cat</code> or
|
|
|
<code>copy</code> which will avoid creation of a potentially huge intermediate file.
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
|
|
|
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
|
|
|
ffmpeg -i concat:"intermediate1.mpg|intermediate2.mpg" -c copy intermediate_all.mpg
|
|
|
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
|
|
|
</pre></div>
|
|
|
|
|
|
<p>Note that you may need to escape the character "|" which is special for many
|
|
|
shells.
|
|
|
</p>
|
|
|
<p>Another option is usage of named pipes, should your platform support it:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">mkfifo intermediate1.mpg
|
|
|
mkfifo intermediate2.mpg
|
|
|
ffmpeg -i input1.avi -qscale:v 1 -y intermediate1.mpg < /dev/null &
|
|
|
ffmpeg -i input2.avi -qscale:v 1 -y intermediate2.mpg < /dev/null &
|
|
|
cat intermediate1.mpg intermediate2.mpg |\
|
|
|
ffmpeg -f mpeg -i - -c:v mpeg4 -c:a libmp3lame output.avi
|
|
|
</pre></div>
|
|
|
|
|
|
<a name="Concatenating-using-raw-audio-and-video"></a>
|
|
|
<h3 class="subsection"><a href="faq.html#toc-Concatenating-using-raw-audio-and-video">3.14.4 Concatenating using raw audio and video</a></h3>
|
|
|
|
|
|
<p>Similarly, the yuv4mpegpipe format and the raw video and raw audio codecs also
|
|
|
allow concatenation, and the transcoding step is almost lossless.
|
|
|
When using multiple yuv4mpegpipe(s), the first line needs to be discarded
|
|
|
from all but the first stream. This can be accomplished by piping through
|
|
|
<code>tail</code> as seen below. Note that when piping through <code>tail</code> you
|
|
|
must use command grouping, <code>{ ;}</code>, to background properly.
|
|
|
</p>
|
|
|
<p>For example, let’s say we want to concatenate two FLV files into an
|
|
|
output.flv file:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">mkfifo temp1.a
|
|
|
mkfifo temp1.v
|
|
|
mkfifo temp2.a
|
|
|
mkfifo temp2.v
|
|
|
mkfifo all.a
|
|
|
mkfifo all.v
|
|
|
ffmpeg -i input1.flv -vn -f u16le -c:a pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
|
|
|
ffmpeg -i input2.flv -vn -f u16le -c:a pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
|
|
|
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
|
|
|
{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; } &
|
|
|
cat temp1.a temp2.a > all.a &
|
|
|
cat temp1.v temp2.v > all.v &
|
|
|
ffmpeg -f u16le -c:a pcm_s16le -ac 2 -ar 44100 -i all.a \
|
|
|
-f yuv4mpegpipe -i all.v \
|
|
|
-y output.flv
|
|
|
rm temp[12].[av] all.[av]
|
|
|
</pre></div>
|
|
|
|
|
|
<a name="Using-_002df-lavfi_002c-audio-becomes-mono-for-no-apparent-reason_002e"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Using-_002df-lavfi_002c-audio-becomes-mono-for-no-apparent-reason_002e">3.15 Using ‘<samp>-f lavfi</samp>’, audio becomes mono for no apparent reason.</a></h2>
|
|
|
|
|
|
<p>Use ‘<samp>-dumpgraph -</samp>’ to find out exactly where the channel layout is
|
|
|
lost.
|
|
|
</p>
|
|
|
<p>Most likely, it is through <code>auto-inserted aresample</code>. Try to understand
|
|
|
why the converting filter was needed at that place.
|
|
|
</p>
|
|
|
<p>Just before the output is a likely place, as ‘<samp>-f lavfi</samp>’ currently
|
|
|
only supports packed S16.
|
|
|
</p>
|
|
|
<p>Then insert the correct <code>aformat</code> explicitly in the filtergraph,
|
|
|
specifying the exact format.
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">aformat=sample_fmts=s16:channel_layouts=stereo
|
|
|
</pre></div>
|
|
|
|
|
|
<a name="Why-does-FFmpeg-not-see-the-subtitles-in-my-VOB-file_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Why-does-FFmpeg-not-see-the-subtitles-in-my-VOB-file_003f">3.16 Why does FFmpeg not see the subtitles in my VOB file?</a></h2>
|
|
|
|
|
|
<p>VOB and a few other formats do not have a global header that describes
|
|
|
everything present in the file. Instead, applications are supposed to scan
|
|
|
the file to see what it contains. Since VOB files are frequently large, only
|
|
|
the beginning is scanned. If the subtitles happen only later in the file,
|
|
|
they will not be initially detected.
|
|
|
</p>
|
|
|
<p>Some applications, including the <code>ffmpeg</code> command-line tool, can only
|
|
|
work with streams that were detected during the initial scan; streams that
|
|
|
are detected later are ignored.
|
|
|
</p>
|
|
|
<p>The size of the initial scan is controlled by two options: <code>probesize</code>
|
|
|
(default ~5 MB) and <code>analyzeduration</code> (default 5,000,000 µs = 5 s). For
|
|
|
the subtitle stream to be detected, both values must be large enough.
|
|
|
</p>
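<p>For example, a sketch that raises both limits before opening the file (the
values are arbitrary; pick ones large enough to reach the first subtitle
packet):
</p>
<div class="example">
<pre class="example">ffmpeg -probesize 100000000 -analyzeduration 100000000 -i input.vob output.mkv
</pre></div>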
|
|
|
<a name="Why-was-the-ffmpeg-_002dsameq-option-removed_003f-What-to-use-instead_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Why-was-the-ffmpeg-_002dsameq-option-removed_003f-What-to-use-instead_003f">3.17 Why was the <code>ffmpeg</code> ‘<samp>-sameq</samp>’ option removed? What to use instead?</a></h2>
|
|
|
|
|
|
<p>The ‘<samp>-sameq</samp>’ option meant "same quantizer", and made sense only in a
|
|
|
very limited set of cases. Unfortunately, a lot of people mistook it for
|
|
|
"same quality" and used it in places where it did not make sense: it had
|
|
|
roughly the expected visible effect, but achieved it in a very inefficient
|
|
|
way.
|
|
|
</p>
|
|
|
<p>Each encoder has its own set of options to set the quality-vs-size balance.
Use the options for the encoder you are using to set the quality level to a
|
|
|
point acceptable for your tastes. The most common options to do that are
|
|
|
‘<samp>-qscale</samp>’ and ‘<samp>-qmax</samp>’, but you should peruse the documentation
|
|
|
of the encoder you chose.
|
|
|
</p>
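<p>As an illustration, a sketch using a fixed quantizer scale with the MPEG-4
encoder (the value 3 is only an example; lower values mean better quality and
bigger files):
</p>
<div class="example">
<pre class="example">ffmpeg -i input.avi -c:v mpeg4 -qscale:v 3 output.avi
</pre></div>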
|
|
|
<a name="I-have-a-stretched-video_002c-why-does-scaling-does-not-fix-it_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-I-have-a-stretched-video_002c-why-does-scaling-does-not-fix-it_003f">3.18 I have a stretched video, why does scaling does not fix it?</a></h2>
|
|
|
|
|
|
<p>A lot of video codecs and formats can store the <em>aspect ratio</em> of the
|
|
|
video: this is the ratio between the width and the height of either the full
|
|
|
image (DAR, display aspect ratio) or individual pixels (SAR, sample aspect
|
|
|
ratio). For example, EGA screens at resolution 640×350 had 4:3 DAR and 35:48
|
|
|
SAR.
|
|
|
</p>
|
|
|
<p>Most still image processing works with square pixels, i.e. 1:1 SAR, but a lot
of video standards, especially from the analog-to-digital transition era, use
non-square pixels.
|
|
|
</p>
|
|
|
<p>Most processing filters in FFmpeg handle the aspect ratio to avoid
|
|
|
stretching the image: cropping adjusts the DAR to keep the SAR constant,
|
|
|
scaling adjusts the SAR to keep the DAR constant.
|
|
|
</p>
|
|
|
<p>If you want to stretch, or “unstretch”, the image, you need to override the
|
|
|
information with the
|
|
|
<a href="ffmpeg-filters.html#setdar_002c-setsar"><code>setdar or setsar filters</code></a>.
|
|
|
</p>
|
|
|
<p>Do not forget to examine carefully the original video to check whether the
|
|
|
stretching comes from the image or from the aspect ratio information.
|
|
|
</p>
|
|
|
<p>For example, to fix a badly encoded EGA capture, use the following commands,
|
|
|
either the first one to upscale to square pixels or the second one to set
|
|
|
the correct aspect ratio or the third one to avoid transcoding (may not work
|
|
|
depending on the format / codec / player / phase of the moon):
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">ffmpeg -i ega_screen.nut -vf scale=640:480,setsar=1 ega_screen_scaled.nut
|
|
|
ffmpeg -i ega_screen.nut -vf setdar=4/3 ega_screen_anamorphic.nut
|
|
|
ffmpeg -i ega_screen.nut -aspect 4/3 -c copy ega_screen_overridden.nut
|
|
|
</pre></div>
|
|
|
|
|
|
<p><a name="background-task"></a>
|
|
|
</p><a name="How-do-I-run-ffmpeg-as-a-background-task_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-do-I-run-ffmpeg-as-a-background-task_003f">3.19 How do I run ffmpeg as a background task?</a></h2>
|
|
|
|
|
|
<p>While performing operations, ffmpeg normally checks the console input for
entries like "q" to stop and "?" to give help. ffmpeg does not have a way of
|
|
|
detecting when it is running as a background task.
|
|
|
When it checks the console input, that can cause the process running ffmpeg
|
|
|
in the background to suspend.
|
|
|
</p>
|
|
|
<p>To prevent those input checks, allowing ffmpeg to run as a background task,
|
|
|
use the <a href="ffmpeg.html#stdin-option"><code>-nostdin</code> option</a>
|
|
|
in the ffmpeg invocation. This is effective whether you run ffmpeg in a shell
|
|
|
or invoke ffmpeg in its own process via an operating system API.
|
|
|
</p>
|
|
|
<p>As an alternative, when you are running ffmpeg in a shell, you can redirect
|
|
|
standard input to <code>/dev/null</code> (on Linux and Mac OS)
|
|
|
or <code>NUL</code> (on Windows). You can do this redirect either
|
|
|
on the ffmpeg invocation, or from a shell script which calls ffmpeg.
|
|
|
</p>
|
|
|
<p>For example:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">ffmpeg -nostdin -i INPUT OUTPUT
|
|
|
</pre></div>
|
|
|
|
|
|
<p>or (on Linux, Mac OS, and other UNIX-like shells):
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">ffmpeg -i INPUT OUTPUT </dev/null
|
|
|
</pre></div>
|
|
|
|
|
|
<p>or (on Windows):
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">ffmpeg -i INPUT OUTPUT <NUL
|
|
|
</pre></div>
|
|
|
|
|
|
<a name="How-do-I-prevent-ffmpeg-from-suspending-with-a-message-like-suspended-_0028tty-output_0029_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-do-I-prevent-ffmpeg-from-suspending-with-a-message-like-suspended-_0028tty-output_0029_003f">3.20 How do I prevent ffmpeg from suspending with a message like <em>suspended (tty output)</em>?</a></h2>
|
|
|
|
|
|
<p>If you run ffmpeg in the background, you may find that its process suspends.
|
|
|
There may be a message like <em>suspended (tty output)</em>. The question is how
|
|
|
to prevent the process from being suspended.
|
|
|
</p>
|
|
|
<p>For example:
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">% ffmpeg -i INPUT OUTPUT &> ~/tmp/log.txt &
|
|
|
[1] 93352
|
|
|
%
|
|
|
[1] + suspended (tty output) ffmpeg -i INPUT OUTPUT &>
|
|
|
</pre></div>
|
|
|
|
|
|
<p>The message "tty output" notwithstanding, the problem here is that
|
|
|
ffmpeg normally checks the console input when it runs. The operating system
|
|
|
detects this, and suspends the process until you can bring it to the
|
|
|
foreground and attend to it.
|
|
|
</p>
|
|
|
<p>The solution is to use the right techniques to tell ffmpeg not to consult
|
|
|
console input. You can use the
|
|
|
<a href="ffmpeg.html#stdin-option"><code>-nostdin</code> option</a>,
|
|
|
or redirect standard input with <code>< /dev/null</code>.
|
|
|
See FAQ
|
|
|
<a href="#background-task"><em>How do I run ffmpeg as a background task?</em></a>
|
|
|
for details.
|
|
|
</p>
|
|
|
<a name="Development"></a>
|
|
|
<h1 class="chapter"><a href="faq.html#toc-Development">4 Development</a></h1>
|
|
|
|
|
|
<a name="Are-there-examples-illustrating-how-to-use-the-FFmpeg-libraries_002c-particularly-libavcodec-and-libavformat_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Are-there-examples-illustrating-how-to-use-the-FFmpeg-libraries_002c-particularly-libavcodec-and-libavformat_003f">4.1 Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?</a></h2>
|
|
|
|
|
|
<p>Yes. Check the ‘<tt>doc/examples</tt>’ directory in the source
|
|
|
repository, also available online at:
|
|
|
<a href="https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples">https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples</a>.
|
|
|
</p>
|
|
|
<p>Examples are also installed by default, usually in
|
|
|
<code>$PREFIX/share/ffmpeg/examples</code>.
|
|
|
</p>
|
|
|
<p>You may also read the Developers Guide of the FFmpeg documentation. Alternatively,
|
|
|
examine the source code for one of the many open source projects that
|
|
|
already incorporate FFmpeg at (<a href="projects.html">projects.html</a>).
|
|
|
</p>
|
|
|
<a name="Can-you-support-my-C-compiler-XXX_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Can-you-support-my-C-compiler-XXX_003f">4.2 Can you support my C compiler XXX?</a></h2>
|
|
|
|
|
|
<p>It depends. If your compiler is C99-compliant, then patches to support
|
|
|
it are likely to be welcome if they do not pollute the source code
|
|
|
with <code>#ifdef</code>s related to the compiler.
|
|
|
</p>
|
|
|
<a name="Is-Microsoft-Visual-C_002b_002b-supported_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Is-Microsoft-Visual-C_002b_002b-supported_003f">4.3 Is Microsoft Visual C++ supported?</a></h2>
|
|
|
|
|
|
<p>Yes. Please see the <a href="platform.html">Microsoft Visual C++</a>
|
|
|
section in the FFmpeg documentation.
|
|
|
</p>
|
|
|
<a name="Can-you-add-automake_002c-libtool-or-autoconf-support_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Can-you-add-automake_002c-libtool-or-autoconf-support_003f">4.4 Can you add automake, libtool or autoconf support?</a></h2>
|
|
|
|
|
|
<p>No. These tools are too bloated and they complicate the build.
|
|
|
</p>
|
|
|
<a name="Why-not-rewrite-FFmpeg-in-object_002doriented-C_002b_002b_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Why-not-rewrite-FFmpeg-in-object_002doriented-C_002b_002b_003f">4.5 Why not rewrite FFmpeg in object-oriented C++?</a></h2>
|
|
|
|
|
|
<p>FFmpeg is already organized in a highly modular manner and does not need to
|
|
|
be rewritten in a formal object language. Further, many of the developers
|
|
|
favor straight C; it works for them. For more arguments on this matter,
|
|
|
read <a href="http://www.tux.org/lkml/#s15">"Programming Religion"</a>.
|
|
|
</p>
|
|
|
<a name="Why-are-the-ffmpeg-programs-devoid-of-debugging-symbols_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Why-are-the-ffmpeg-programs-devoid-of-debugging-symbols_003f">4.6 Why are the ffmpeg programs devoid of debugging symbols?</a></h2>
|
|
|
|
|
|
<p>The build process creates <code>ffmpeg_g</code>, <code>ffplay_g</code>, etc. which
|
|
|
contain full debug information. Those binaries are stripped to create
|
|
|
<code>ffmpeg</code>, <code>ffplay</code>, etc. If you need the debug information, use
|
|
|
the *_g versions.
|
|
|
</p>
|
|
|
<a name="I-do-not-like-the-LGPL_002c-can-I-contribute-code-under-the-GPL-instead_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-I-do-not-like-the-LGPL_002c-can-I-contribute-code-under-the-GPL-instead_003f">4.7 I do not like the LGPL, can I contribute code under the GPL instead?</a></h2>
|
|
|
|
|
|
<p>Yes, as long as the code is optional and can easily and cleanly be placed
|
|
|
under #if CONFIG_GPL without breaking anything. So, for example, a new codec
|
|
|
or filter would be OK under GPL while a bug fix to LGPL code would not.
|
|
|
</p>
|
|
|
<a name="I_0027m-using-FFmpeg-from-within-my-C-application-but-the-linker-complains-about-missing-symbols-from-the-libraries-themselves_002e"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-I_0027m-using-FFmpeg-from-within-my-C-application-but-the-linker-complains-about-missing-symbols-from-the-libraries-themselves_002e">4.8 I’m using FFmpeg from within my C application but the linker complains about missing symbols from the libraries themselves.</a></h2>
|
|
|
|
|
|
<p>FFmpeg builds static libraries by default. In static libraries, dependencies
|
|
|
are not handled. That has two consequences. First, you must specify the
|
|
|
libraries in dependency order: <code>-lavdevice</code> must come before
|
|
|
<code>-lavformat</code>, <code>-lavutil</code> must come after everything else, etc.
|
|
|
Second, external libraries that are used in FFmpeg have to be specified too.
|
|
|
</p>
|
|
|
<p>An easy way to get the full list of required libraries in dependency order
|
|
|
is to use <code>pkg-config</code>.
|
|
|
</p>
|
|
|
<div class="example">
|
|
|
<pre class="example">c99 -o program program.c $(pkg-config --cflags --libs libavformat libavcodec)
|
|
|
</pre></div>
|
|
|
|
|
|
<p>See ‘<tt>doc/examples/Makefile</tt>’ and ‘<tt>doc/examples/pc-uninstalled</tt>’ for
|
|
|
more details.
|
|
|
</p>
|
|
|
<a name="I_0027m-using-FFmpeg-from-within-my-C_002b_002b-application-but-the-linker-complains-about-missing-symbols-which-seem-to-be-available_002e"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-I_0027m-using-FFmpeg-from-within-my-C_002b_002b-application-but-the-linker-complains-about-missing-symbols-which-seem-to-be-available_002e">4.9 I’m using FFmpeg from within my C++ application but the linker complains about missing symbols which seem to be available.</a></h2>
|
|
|
|
|
|
<p>FFmpeg is a pure C project, so to use the libraries within your C++ application
|
|
|
you need to explicitly state that you are using a C library. You can do this by
|
|
|
encompassing your FFmpeg includes using <code>extern "C"</code>.
|
|
|
</p>
|
|
|
<p>See <a href="http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3">http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3</a>
|
|
|
</p>
|
|
|
<a name="I_0027m-using-libavutil-from-within-my-C_002b_002b-application-but-the-compiler-complains-about-_0027UINT64_005fC_0027-was-not-declared-in-this-scope"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-I_0027m-using-libavutil-from-within-my-C_002b_002b-application-but-the-compiler-complains-about-_0027UINT64_005fC_0027-was-not-declared-in-this-scope">4.10 I’m using libavutil from within my C++ application but the compiler complains about ’UINT64_C’ was not declared in this scope</a></h2>
|
|
|
|
|
|
<p>FFmpeg is a pure C project using C99 math features. In order to enable C++
to use them, you have to append <code>-D__STDC_CONSTANT_MACROS</code> to your CXXFLAGS.
|
|
|
</p>
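<p>For example, a compile command might look like this sketch (the source file
name is a placeholder, and <code>pkg-config</code> is used as in the entry on
missing symbols above):
</p>
<div class="example">
<pre class="example">g++ -D__STDC_CONSTANT_MACROS -c myapp.cpp $(pkg-config --cflags libavutil)
</pre></div>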
|
|
|
<a name="I-have-a-file-in-memory-_002f-a-API-different-from-_002aopen_002f_002aread_002f-libc-how-do-I-use-it-with-libavformat_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-I-have-a-file-in-memory-_002f-a-API-different-from-_002aopen_002f_002aread_002f-libc-how-do-I-use-it-with-libavformat_003f">4.11 I have a file in memory / a API different from *open/*read/ libc how do I use it with libavformat?</a></h2>
|
|
|
|
|
|
<p>You have to create a custom AVIOContext using <code>avio_alloc_context</code>;
|
|
|
see ‘<tt>libavformat/aviobuf.c</tt>’ in FFmpeg and ‘<tt>libmpdemux/demux_lavf.c</tt>’ in MPlayer or MPlayer2 sources.
|
|
|
</p>
|
|
|
<a name="Where-is-the-documentation-about-ffv1_002c-msmpeg4_002c-asv1_002c-4xm_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Where-is-the-documentation-about-ffv1_002c-msmpeg4_002c-asv1_002c-4xm_003f">4.12 Where is the documentation about ffv1, msmpeg4, asv1, 4xm?</a></h2>
|
|
|
|
|
|
<p>see <a href="https://www.ffmpeg.org/~michael/">https://www.ffmpeg.org/~michael/</a>
|
|
|
</p>
|
|
|
<a name="How-do-I-feed-H_002e263_002dRTP-_0028and-other-codecs-in-RTP_0029-to-libavcodec_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-How-do-I-feed-H_002e263_002dRTP-_0028and-other-codecs-in-RTP_0029-to-libavcodec_003f">4.13 How do I feed H.263-RTP (and other codecs in RTP) to libavcodec?</a></h2>
|
|
|
|
|
|
<p>Even though it is peculiar, being network-oriented, RTP is a container like any
|
|
|
other. You have to <em>demux</em> RTP before feeding the payload to libavcodec.
|
|
|
In this specific case please look at RFC 4629 to see how it should be done.
|
|
|
</p>
|
|
|
<a name="AVStream_002er_005fframe_005frate-is-wrong_002c-it-is-much-larger-than-the-frame-rate_002e"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-AVStream_002er_005fframe_005frate-is-wrong_002c-it-is-much-larger-than-the-frame-rate_002e">4.14 AVStream.r_frame_rate is wrong, it is much larger than the frame rate.</a></h2>
|
|
|
|
|
|
<p><code>r_frame_rate</code> is NOT the average frame rate, it is the smallest frame rate
|
|
|
that can accurately represent all timestamps. So no, it is not
|
|
|
wrong if it is larger than the average!
|
|
|
For example, if you have mixed 25 and 30 fps content, then <code>r_frame_rate</code>
|
|
|
will be 150 (it is the least common multiple).
|
|
|
If you are looking for the average frame rate, see <code>AVStream.avg_frame_rate</code>.
|
|
|
</p>
|
|
|
<a name="Why-is-make-fate-not-running-all-tests_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Why-is-make-fate-not-running-all-tests_003f">4.15 Why is <code>make fate</code> not running all tests?</a></h2>
|
|
|
|
|
|
<p>Make sure you have the fate-suite samples, and that the <code>SAMPLES</code> Make variable,
the <code>FATE_SAMPLES</code> environment variable, or the <code>--samples</code>
<code>configure</code> option is set to the right path.
|
|
|
</p>
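<p>For example, a sketch that downloads the samples and then runs the full suite
(the ‘<tt>fate-suite/</tt>’ path is just an example location):
</p>
<div class="example">
<pre class="example">make fate-rsync SAMPLES=fate-suite/
make fate SAMPLES=fate-suite/
</pre></div>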
|
|
|
<a name="Why-is-make-fate-not-finding-the-samples_003f"></a>
|
|
|
<h2 class="section"><a href="faq.html#toc-Why-is-make-fate-not-finding-the-samples_003f">4.16 Why is <code>make fate</code> not finding the samples?</a></h2>
|
|
|
|
|
|
<p>Do you happen to have a <code>~</code> character in the samples path to indicate a
|
|
|
home directory? The value is used in ways where the shell cannot expand it,
|
|
|
causing FATE to not find files. Just replace <code>~</code> by the full path.
|
|
|
</p>
|
|
|
</div>
|
|
|
</body>
|
|
|
</html>