Animation merger
1. Purpose
Animmerger stitches 2D images together into either a static image
or an animation, while attempting to preserve a global frame of
reference (static background). That is, for a movie that follows an actor around (and the background scrolls to follow them), it creates a movie that has a fixed background, and the camera moves around in the scene. It does this with a motion detection algorithm, a set of different pixel methods, and a simulated infinite 2D canvas — a 2D canvas that extends infinitely in all four directions (well, as infinite as 32-bit integers can get…)

2. Pixel methods
The graphics material comes from Super Mario Bros. Mario, Super Mario Bros., and the Nintendo Entertainment System (NES) are registered trademarks of Nintendo of America Inc. But you knew that, right?

2.1. Static methods

2.1.1. AVERAGE
The "average" method produces a "motion blur" effect of the entire
input, reducing it into a single frame.
You can see a faint trace of all animated actors that appeared in the animation. Mario moved very fast so his trace is quite difficult to spot.
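The idea can be sketched as follows. This is an illustrative simplification, not animmerger's actual implementation (the real tool works on an infinite canvas and also supports YUV averaging):

```python
def average_method(frames):
    """Reduce an animation to a single frame by averaging, per pixel
    location, every value that ever appeared there.  frames is a list of
    equally-sized frames; each frame is a list of rows of (r, g, b) tuples."""
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    result = []
    for y in range(height):
        row = []
        for x in range(width):
            sums = [0, 0, 0]
            for frame in frames:
                for c in range(3):
                    sums[c] += frame[y][x][c]
            row.append(tuple(s // n for s in sums))
        result.append(row)
    return result
```

An actor that appears briefly in one location contributes only a small fraction of the sum there, which is why fast movers leave only a faint trace.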
Produced with commandline:

An alternative implementation of "average" is also provided: "tinyaverage" (option -A). It requires less memory to store, but is less accurate to calculate. If you want the color averages to be calculated through the YUV colorspace rather than the RGB colorspace, add the --yuv option (not supported by tinyaverage).

2.1.2. ACTIONAVG

2.1.3. MOSTUSED
The "most used" method produces what might be the background
image for the entire animation.
Note: If there is an actor that sits in a certain location
for a long time, it is also recorded.
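Per pixel location, the method boils down to a frequency count. A minimal sketch (assuming a plain most-frequent-value pick; animmerger's internal bookkeeping differs):

```python
from collections import Counter

def mostused_pixel(history):
    """Pick the value that occurred most often at one pixel location.
    history is the sequence of values seen there, in frame order."""
    return Counter(history).most_common(1)[0][0]
```

This also demonstrates the caveat above: an actor that sits in one spot longer than the background is visible there wins the count, e.g. `mostused_pixel(['bg']*5 + ['actor']*6)` yields `'actor'`.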
Produced with commandline:

2.1.4. LAST
The "last" method is a simpler implementation of the MostUsed
method, simply recording the last pixel value in any location.
Produced with commandline:

2.1.5. FIRST
The "first" method is analogous to "last".
It shows whatever first appeared in a particular pixel location.
The turtles are distorted, because they moved while the screen scrolled.
Produced with commandline:

2.1.6. SOLID
The "solid" method is an experimental light-weight replacement
to the "mostused" method. It simply ignores anything that moves
and retains whatever stays still for the longest time. Unlike "mostused", it does not sum separate appearances together; it only finds the maximum length of consecutive sameness.
As seen here, it has shortcomings, too.
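The difference to "mostused" can be sketched as a longest-consecutive-run search (an illustrative simplification, not animmerger's code):

```python
def solid_pixel(history):
    """Return the value with the longest run of *consecutive* appearances
    at one pixel location.  Unlike "mostused", separate appearances of the
    same value are not summed together."""
    best_val, best_len = history[0], 0
    cur_val, cur_len = history[0], 0
    for v in history:
        if v == cur_val:
            cur_len += 1
        else:
            cur_val, cur_len = v, 1
        if cur_len > best_len:
            best_val, best_len = cur_val, cur_len
    return best_val
```

For the history `['a','b','b','b','a','a']` this picks `'b'` (run of 3), even though `'a'` appears just as many times in total.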
Produced with commandline:

2.1.7. FIRSTNMOST
The "firstnmost" method is analogous to "first" and "mostused";
it chooses the most common pixel of the first N pixel values.
Set N with the --firstlast (-f) option. If N is 0, the first uncommon pixel is chosen instead. If N is negative, the least common values are used rather than the most common.
Most common of first 4:
Most common of first 10:
Most common of first 16:
First uncommon:
Least common of first 10:
Produced with commandline:

2.1.8. LASTNMOST
The "lastnmost" method is analogous to "last" and "mostused";
it chooses the most common pixel of last N pixel values.
Set N with the --firstlast (-f) option. If N is 0, the last uncommon pixel is chosen instead. If N is negative, the least common values are used rather than the most common.
Most common of last 10:
Last uncommon:
Least common of last 10:
Produced with commandline:

2.1.9. LEASTUSED
The "least used" method is analogous to "most used". It can be used to find graphical artifacts and teleporting actors, but for the most part, the output is not very useful.
Produced with commandline:

2.2. Animated methods

2.2.1. CHANGELOG
The "changelog" method records the entire animation (121995 bytes in this example). It takes considerably less disk space (and is faster to load) than the original animation, because now the background does not scroll.
You see some artifacts in the turtle and in Mario when they appear
near the top of the screen. This is because they were behind the
HUD (the text "WORLD 8-2", for instance), which was removed. Here is how the animation looks if the HUD is not removed. (246643 bytes)
Exteriors, i.e. content outside the "current" viewport of the animation
are colored as in the MostUsed pixel method.
Produced with commandline:
The background for ChangeLog is normally generated with the MostUsed method, but it can be
explicitly controlled with the --bgmethod0 and --bgmethod1 options.
2.2.1.1. Motion blur
The changelog method also supports motion blur. Use the --motionblur (-B) option to set it.
Value 0 disables motion blur (default: 0).
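Per pixel location, motion blur can be sketched as an average over a sliding window of the most recent values (an assumption about the exact windowing; animmerger's weighting may differ):

```python
def blurred_pixel(history, blur_length):
    """Value shown for the newest frame at one pixel location: the average
    of the most recent blur_length+1 values seen there.  A blur length of
    0 degenerates to the unblurred, most recent value."""
    window = history[-(blur_length + 1):]
    return sum(window) // len(window)
```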
Blur length 1:
Blur length 4:
Blur length 20:
Produced with commandline:

2.2.2. LOOPINGLOG
The "loopinglog" method records the entire animation,
but reuses existing frames. Use the -l option
to set the loop length in frames. The smaller the value, the shorter the animation is in the number of frames, but the more pronounced the "lemmings" effect becomes.
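The frame reuse itself is simple to illustrate: every input frame lands in an output frame chosen modulo the loop length, so all action recurs within the same short loop — which is exactly what produces the "lemmings" effect of many copies of the same actor marching through the scene:

```python
def output_frame_for(timestamp, loop_length):
    """Map an input frame timestamp to the output frame it is drawn into."""
    return timestamp % loop_length
```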
30 frames (94895 bytes):
10 frames (66738 bytes):
4 frames (40372 bytes):
Produced with commandline:

This mode is also called "loopinglast" (option -s) to differentiate it from "loopingavg". The loopinglog method also supports motion blur. Use the --motionblur (-B) option to set it. Value 0 disables motion blur (default: 0).

2.2.3. LOOPINGAVG
The "loopingavg" method combines the "loopinglog" and "actionavg" methods.
Use the -l option to set the loop length in frames. The most important difference to "loopinglog" is that overlapping action is averaged rather than explicitly choosing one of the acting pixels. It looks slightly better, but may require GIF palette reduction. In comparison, "loopinglog" only uses pixel colors present in the original images.
30 frames (file size depends on selected palette size):
10 frames:
4 frames:
Produced with commandline:
If you want the color averages to be calculated through the YUV colorspace
rather than the RGB colorspace, add the --yuv option.

2.2.3.1. Motion blur
The loopingavg method also supports motion blur.
Use the --motionblur (-B) option to set it.
Value 0 disables motion blur (default: 0).
Loop length 30 frames, blur length 20:
Loop length 30 frames, blur length 20, with YUV calculations:
Loop length 30 frames, blur length 20, with YUV calculations, and diversity-quantized palette of 16 colors:
Loop length 10 frames, blur length 4:
Produced with commandline:

2.3. Summary
*) These numbers are estimates. Actual memory size per pixel depends on the exact selection of pixel methods requested and on memory allocation overhead. Animmerger strives to always select the smallest combination of pixel methods (memory-consumption-wise) that can implement all the requested methods.

3. Masking methods
Masked areas can be removed with a number of different methods.
To best demonstrate them, I added an extra huge mask in the middle of the image. It is best seen in the "black" masking method, below.
These images were produced with this commandline:

3.1. BLACK/BLANK/CENSOR
This method shows clearly which areas were affected by the mask.
Specifically, the HUD, and a huge rectangle,
and a narrower line extending from the very left edge to the very right
edge of the screen at all times, effectively blocking the contents of
the entire scanline from ever being seen.
Animation:

3.2. HOLE/ALPHA/TRANSPARENT
This method is what animmerger does by default. The transparent regions
are simply treated as holes; there is no content on the affected areas.
If the hidden content becomes available when the camera moves, then those
pixels are recorded.
Animation:

3.3. DELOGO/BLUR/INTERPOLATE
This method removes the content with a circular blur pattern. The method
is almost identical to the delogo filter that can be used in
MPlayer
to remove a TV station logo from video. Content that coincides with the
removed part is replaced with interpolated surrounding pixels;
original pixels of the affected area are not sampled.
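A simplified sketch of the interpolation idea follows. It fills each masked pixel from the nearest unmasked pixels in the four axis directions, weighted by inverse distance; this is an assumption for illustration — the actual filter uses a circular pattern and a different weighting:

```python
def interpolate_masked(image, mask):
    """Fill each masked pixel from the nearest unmasked pixels found in
    the four axis directions, weighted by inverse distance.  image is a
    2D list of grayscale values; mask marks the pixels to replace.
    Original pixel values under the mask are never sampled."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            samples = []
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny, d = x + dx, y + dy, 1
                # walk outward until we leave the mask (or the image)
                while 0 <= nx < w and 0 <= ny < h and mask[ny][nx]:
                    nx, ny, d = nx + dx, ny + dy, d + 1
                if 0 <= nx < w and 0 <= ny < h:
                    samples.append((image[ny][nx], 1.0 / d))
            total = sum(wgt for _, wgt in samples)
            out[y][x] = round(sum(v * wgt for v, wgt in samples) / total)
    return out
```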
Animation (palette-reduced and dithered with -Qd,16 in order to make the 1.5 MB GIF file smaller):

3.4. PATTERN/EXTRAPOLATE
The extrapolate filter tries to extrapolate the content of the masked
areas by detecting repeating tile patterns outside the masked area, and
extrapolating those patterns over the masked area.
The results of this method vary a lot from frame to frame,
so it is not very suitable to be used over large unknown areas.
For small areas, it works nicely.
Note that this algorithm is rather slow on large areas like this.
Animation:

4. Color quantization methods
Animmerger can create its output files in GIF or PNG format,
regardless of whether you are creating an animation or not. GIF files, however, are limited to a palette of 256 colors, while it is possible that animmerger creates images with more than 256 colors. Therefore, the colormap must be reduced before the GIF image can be generated. Animmerger supports a number of different color reduction methods, which are listed below. If no method is chosen, libGD's default will be used.

The images in this section were generated by making a 30-frame LoopingAvg animation with a blur length of 20, rendering it with different palettization parameters and picking the 11th frame.
The exact commandline to produce the images was:

Palette reduction methods can be chained in order to take advantage of the different strengths of the different methods, but in this test set, each method was used alone. When palette reduction methods have been explicitly selected, animmerger always uses an ordered-dithering method (crosshatch artifacts) to optimize the rendering. This is better for animation than other methods such as Floyd–Steinberg, because the dithering patterns do not jitter between frames.

4.1. Median-cut (aka. Heckbert)
Heckbert's median-cut quantization method repeatedly splits the palette
into two roughly equal-proportion sections ("low" and "high" parts
along one of the red/green/blue channels) until the desired number of
sections has been generated; the palette is then produced by averaging
the values in each section together. It is good at generating relevant palettes, but at the smallest palette sizes it suffers from a blurring problem.
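A compact sketch of the algorithm (an illustrative version; Heckbert's original and animmerger's implementation differ in details such as the split criterion):

```python
def median_cut(colors, num_entries):
    """Split the color set into num_entries boxes by repeatedly halving
    the box with the widest channel range at its median, then average
    each box into one palette entry.  colors is a list of (r, g, b)."""
    def spread(box, ch):
        vals = [c[ch] for c in box]
        return max(vals) - min(vals)

    boxes = [list(colors)]
    while len(boxes) < num_entries:
        # split the box with the largest spread in any channel
        box = max(boxes, key=lambda b: max(spread(b, ch) for ch in range(3)))
        if len(box) < 2:
            break  # nothing left to split
        ch = max(range(3), key=lambda ch: spread(box, ch))
        box.sort(key=lambda c: c[ch])
        mid = len(box) // 2
        boxes.remove(box)
        boxes += [box[:mid], box[mid:]]
    # average each box into one palette entry
    return [tuple(sum(c[ch] for c in b) // len(b) for ch in range(3))
            for b in boxes]
```

The final averaging step is the source of the blurring problem noted above: at tiny palette sizes, each entry is an average over a very wide box of colors.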
4 colors:

4.2. Diversity
The diversity quantization method alternates between choosing the most popular remaining color in the image for a "seed" and choosing, from the remaining colors, the one that is most distant from any color selected so far. The result is generally a very good representation of the original image's colors. At the smallest palette sizes, the colors are of course off, but the contrast is still sharp.
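The alternation can be sketched as follows (a simplified interpretation of the description above, using squared RGB distance; the real implementation may alternate and measure distance differently):

```python
def diversity_palette(color_counts, size):
    """color_counts maps color -> frequency.  Alternate between a
    popularity pick (most frequent remaining color) and a diversity pick
    (the remaining color farthest from everything chosen so far)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    remaining = dict(color_counts)
    chosen = []
    while remaining and len(chosen) < size:
        if len(chosen) % 2 == 0:   # popularity pick
            pick = max(remaining, key=remaining.get)
        else:                      # diversity pick
            pick = max(remaining,
                       key=lambda c: min(dist2(c, s) for s in chosen))
        chosen.append(pick)
        del remaining[pick]
    return chosen
```

Because every chosen color is one that actually occurs in the image, the contrast stays sharp even when the palette is tiny.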
4 colors:

4.3. Blend-diversity
The blend-diversity method is a variation of the diversity method; after
the colors have been chosen, they are merged together with the remaining
colors that are most similar to the chosen one.
4 colors:

4.4. NeuQuant
The NeuQuant method, developed by Anthony Dekker in 1994, uses a Kohonen
self-organizing neural network to quickly come up with an optimized palette.
It is especially powerful with optimizing smooth gradients, such as the
motion-blur trails in this pictureset.
4 colors:

5. Dithering
Dithering is a technique by which the human eye can be tricked into
perceiving more colors than there actually are, by placing different-colored
pixels next to each other in varying proportions.
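The positional principle behind ordered dithering can be shown with the classic Bayer matrix. This is a generic textbook sketch, not Yliluoma's algorithm: the threshold depends only on the pixel coordinate, which is why the pattern stays fixed between animation frames instead of jittering like error diffusion does.

```python
# Classic 2x2 Bayer threshold matrix, normalized to [0, 1).
BAYER2 = [[0.00, 0.50],
          [0.75, 0.25]]

def ordered_dither_bw(image, matrix=BAYER2):
    """Threshold a grayscale image (values 0..255) to black/white using a
    threshold that depends only on the pixel coordinate."""
    n = len(matrix)
    return [[255 if px / 255.0 > matrix[y % n][x % n] else 0
             for x, px in enumerate(row)]
            for y, row in enumerate(image)]
```

A flat 50% gray comes out as a fixed crosshatch pattern, the same in every frame.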
Animmerger knows a number of different dithering algorithms, including a set devised by Joel Yliluoma. These are ordered, positional, patterned dithering methods that are very well suited for animations, and often even more eye-pleasing than the random noise patterns generated by error-diffusion dithers. Animmerger's dithering can be controlled with the following parameters:
5.1. Gamma correction
Generated with:
# for gamma in 0.1 0.2 0.5 1.0 1.5 2.0 2.2 2.5 3.0 10.0; do
With normal-intensity CGA palette:
(Palette entries: #000, #0A0, #A00, #A50) --gamma=0.1 --gamma=0.2 --gamma=0.5 --gamma=1.0 --gamma=1.5 --gamma=2.0 --gamma=2.2 --gamma=2.5 --gamma=3.0 --gamma=10.0
With EGA palette:
(Relevant palette entries: #000, #555, #AAA, #FFF) --gamma=0.1 --gamma=0.2 --gamma=0.5 --gamma=1.0 --gamma=1.5 --gamma=2.0 --gamma=2.2 --gamma=2.5 --gamma=3.0 --gamma=10.0

Note that animmerger's gamma correction algorithm is somewhat disputable. For instance, although the former example looks good, if we try the same with the EGA palette, where mid-gradient values (0%, 33%, 66% and 100% white) actually exist in the palette, we get the latter, odd-looking result. Conclusion: your mileage may vary.

5.2. Example
To demonstrate dithering, let us consider this example picture.
It has been assigned a customized palette to go with it.
It is a subset (cropped portion) of a larger picture seen on the page where the algorithm is explained in detail, hence the odd inclusion of blue in it.

5.2.1. Dither error spread factor

The error spread factor provides very fine-grained control over the final appearance of the dithered image. Though the upper limit of the value is 1.0, higher values can be used for artistic purposes.

5.2.2. Dither matrix size

The matrix shape directly controls the manner in which the different-color spots are dispersed. The temporal dithering option can be used to improve the perceived quality of colors (at the cost of flickering), and for artistic effects. Unless the --dithcount (--dc) option was given manually, setting the matrix size also sets the dither count (to the size of the matrix, or 32, whichever is smaller).
Note that when making GIF animations, you usually do not want flickering, because it will inflate the file sizes at a very high rate. With H.264 it is perfectly fine, especially if you use the …

5.2.3. Dither candidate count

The candidate count option directly controls how colors are mixed together in the dithering process. A higher value always results in higher quality; however, there is no sense in making the value larger than the matrix size. A combination of a large matrix and a small count can also be used to simulate a small dithering matrix. Also note that the rendering speed is directly proportional to the number of dither color candidates generated. (It also depends on the size of the palette of both input and output images, and on the dither contrast limiter.)

5.2.4. Dither contrast limiter

Specifying 0 for the contrast usually works nicely, especially if the palette is good, but sometimes you will have to put a higher value there. Such situations may arise if the palette contains a combination of colors that, when mixed, produces the exact color required in the input picture, but also a nearby color that is not exact. Without the aid of the contrast option, the ditherer will not find the combination and will just use the nearby color, which might not look as good. Overdoing it, however, will result in a lot of overly sharp local contrast, which mostly looks bad. Animation is shown in the last frame for the sake of demonstration, because it improves the spatial color resolution. Note that using a nonzero --dr with a --gamma that differs from 1.0 is currently broken. Please avoid that combination.

6. Color compare methods
In dithering, a color compare algorithm is used. The same algorithm
is also used in the diversity and blend-diversity quantization options.
Animmerger supports a few different algorithms for comparing colors.
Here are two example truecolor* pictures, and the
web-safe palette.
I quantized it using the websafe palette and dithered using …

These tests intend to show how each color-compare method identifies colors that most closely match the original. Note: I used gamma correction for these images. Consequently, I disabled the --dr option, because these do not mix well together.

Produced with commandline:

*) It is truecolor, but it is also dithered. I found 24-bit RGB inadequate for preventing hard edges in this picture's smooth gradients, so I dithered it for this webpage. The input to these tests was undithered.

6.1. RGB
Three pictures are shown:
The two testcases rendered with this color filter, and the third
is a four-frame average of the preceding picture, showing
exactly which average color perception the dithering was getting at.
6.2. CIE76

CIE L*a*b*, where delta-E is calculated as a simple Euclidean difference: √(ΔL² + Δa² + Δb²). It is fast and very often an improvement over RGB.

6.3. CIE94

CIE L*a*b* with Cab = √(a² + b²), where delta-E is calculated using a much more refined formula (CIE94): √(ΔL² + ΔCab²/SC² + ΔH²/Sh²), with ΔH² = Δa² + Δb² − ΔCab², SC = 1 + 0.048×√(C1ab×C2ab), and Sh = 1 + 0.014×√(C1ab×C2ab).
Note: Animmerger uses the deltaE squared rather than the deltaE itself,
which is why the formula may seem different to what it is in reference
material. (There may still be genuine errors though.)
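The squared CIE76 comparison is simple enough to show directly (the RGB-to-L*a*b* conversion itself is covered in section 6.7):

```python
def delta_e76_squared(lab1, lab2):
    """Squared CIE76 difference between two (L, a, b) triples.  The square
    root is skipped, matching animmerger's use of squared delta-E: for
    picking the *closest* color, comparing squares gives the same winner."""
    dL, da, db = (p - q for p, q in zip(lab1, lab2))
    return dL * dL + da * da + db * db
```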
6.4. CIEDE2000

A CIE L*a*b*-based, extremely complicated formula called CIEDE2000.

6.5. CMC l:c

A CIE L*a*b*-based, quite complicated formula called CMC l:c, with l=1.5 and c=1.0. Interestingly, this operator seemed to have a huge issue with black colors; animmerger has a special workaround for that problem, though the darker green region looks weird too. In general, this appears to be the weakest of all of these operators. Use it only if you are looking for a specific type of special effect.

6.6. BFD l:c

A CIE L*a*b*-based, extremely complicated formula called BFD l:c, with l=1.5 and c=1.0.

6.7. Illuminants
RGB-to-LAB conversions are subject to a lot of perception-based science.
The concept of an "illuminant matrix" plays a significant role here.
Animmerger knows three illuminant matrices:
Animmerger uses illuminant #3 for CIE76, and illuminant #1 for all other CIE based compare methods, because illuminants #2 and #3 have serious issues with blue and purple tones when any other compare method than CIE76 is used (specifically, they suggest that black is the overwhelmingly best substitute for those colors).
Animmerger converts an RGB value into
CIE L*a*b* and CIE L*C*h*
using the following formula:
7. Transformation
Mathematical transformations can be applied to individual pixels
of the resulting image, using the --transform option.
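The mechanism is a per-pixel function applied over the whole canvas. A sketch of the idea (note: animmerger's --transform takes a formula string on the commandline, not a Python function; `warm_tint` is a made-up example transform):

```python
def transform_image(image, fn):
    """Apply a per-pixel function fn(rgb, x, y) -> rgb over the image,
    where image is a 2D list of (r, g, b) tuples."""
    return [[fn(px, x, y) for x, px in enumerate(row)]
            for y, row in enumerate(image)]

def warm_tint(px, x, y):
    """Hypothetical example transform: shift the color tone toward red."""
    r, g, b = px
    return (min(255, r + 30), g, max(0, b - 30))
```

A lens flare would additionally make the transform depend on the x and y coordinates, brightening pixels near the flare center.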
In this example, the overall color tone of the image was changed
and a lens flare effect was added:
Produced with:

8. Caveats

8.1. Parallax motion
Parallax motion is bad. When animating video game content, please ensure that
all background layers are synchronized. Note that this will likely require you
to hack the emulator that is used to produce the video frames.
If different background layers are moving at different speeds with respect to the camera, animmerger will sync into one of them (probably the one that occupies the largest screen area), and the rest will appear to be moving with respect to the chosen background.
Example:
# rm tile-*.gif; animmerger -v -r12x12 -bl -pc -a -4,-3,6,9 pano5/*.png
The palette file was customized by hand: taking a representative snapshot of the movie, then progressively merging near-identical entries in the colormap manually in GIMP until only a minimal set of unique colors/tones remains.

8.2. Flashes, fog and other transparent layers
The image aligning engine is confused by anything that
globally changes the screen brightness. This includes
any pain-red-tinting, white-explosion flashes, fog clouds,
etc. Please try to avoid them.
Example: TODO: Add an example of how image alignment suffers when using the power bomb in Super Metroid.

9. Usage

animmerger v1.6.1 - Copyright (C) 2013 Joel Yliluoma (http://iki.fi/bisqwit/)
Usage: animmerger [<options>] <imagefile> [...]
Merges (stitches) animation frames together.

General options:
 --help, -h
     Short help on usage. Use --longhelp or --fullhelp for more/all options.

Canvas affecting options:
 --method, -p <mode>
     Select pixel type (average/actionavg/mostused/changelog/loopingavg)
     See full help for details.
 --looplength, -l <int>
     Set loop length for the LOOPINGxx modes

Image aligning options:
 --noalign
     Disable automatic image aligner. Useful if you only want to utilize
     the dithering and quantization features of animmerger, or your images
     simply don't happen to form a nice 2D map. This option is identical
     to --forcealign 0-N=0,0 where N=movie length.

Output options:
 --output, -o <filename/pattern>
     Output to given filename. The filename may also be a pattern.
     (See longhelp for details.)
 --quantize, -Q <method>,<num_colors> or <file> or <R>x<G>x<B>[x<I>]
     Reduce/load/synthesize palette. See full help for details.
 --dithmethod, -D <method>[,<method>]
     Select dithering method(s) (ky/y2/floyd). See full help for details.
 --gamma [=<value>]
     Select gamma to use in dithering. Default: 1.0

10. Copying
animmerger has been written by Joel Yliluoma, a.k.a.
Bisqwit, and is distributed under the terms of the GNU General Public License (GPL).

11. Requirements
GNU make and a C++ compiler are required to recompile the source code. libgd is also required.

12. See also

13. Downloading

Downloading help
The most recent source code (bleeding edge) for animmerger can also be downloaded by cloning the Git repository by:
Date (Y-md-Hi)   acc  Size    Name
2015-0815-1349   r--  848010  animmerger-1.6.2-win32.zip
2015-0815-1240   r--  232497  animmerger-1.6.2.tar.bz2
2015-0815-1240   r--  263129  animmerger-1.6.2.tar.gz
2015-0815-1240   r--  204160  animmerger-1.6.2.tar.xz
2013-1111-0126   r--  231471  animmerger-1.6.1.tar.bz2
2013-1111-0126   r--  262477  animmerger-1.6.1.tar.gz
2012-0731-2158   r--  231704  animmerger-1.6.0.3.tar.bz2
2012-0731-2158   r--  264279  animmerger-1.6.0.3.tar.gz
2012-0731-2121   r--  231977  animmerger-1.6.0.2.tar.bz2
2012-0731-2121   r--  264636  animmerger-1.6.0.2.tar.gz
2011-1021-1900   r--  231706  animmerger-1.6.0.1.tar.bz2
2011-1021-1900   r--  264612  animmerger-1.6.0.1.tar.gz
2011-0926-1347   r--  228103  animmerger-1.6.0.tar.bz2
2011-0926-1347   r--  261977  animmerger-1.6.0.tar.gz
2011-0215-1525   r--  209039  animmerger-1.5.0.tar.bz2
2011-0215-1525   r--  249345  animmerger-1.5.0.tar.gz
2010-0907-1236   r--  83227   animmerger-1.4.3.tar.bz2
2010-0907-1236   r--  104606  animmerger-1.4.3.tar.gz
2010-0817-2209   r--  62110   animmerger-1.4.2.tar.bz2
2010-0817-2209   r--  76387   animmerger-1.4.2.tar.gz
2010-0816-1733   r--  58869   animmerger-1.4.1.tar.bz2
2010-0816-1733   r--  71444   animmerger-1.4.1.tar.gz
2010-0810-1703   r--  58273   animmerger-1.4.0.tar.bz2
2010-0810-1703   r--  70938   animmerger-1.4.0.tar.gz
2010-0806-0747   r--  49402   animmerger-1.3.1.tar.bz2
2010-0806-0747   r--  56927   animmerger-1.3.1.tar.gz
2010-0805-1219   r--  49055   animmerger-1.3.0.tar.bz2
2010-0805-1219   r--  56413   animmerger-1.3.0.tar.gz
2010-0805-0927   r--  47818   animmerger-1.2.1.tar.bz2
2010-0805-0927   r--  54692   animmerger-1.2.1.tar.gz
2010-0805-0904   r--  47532   animmerger-1.2.0.1.tar.bz2
2010-0805-0904   r--  54724   animmerger-1.2.0.1.tar.gz
2010-0804-2125   r--  47378   animmerger-1.2.0.tar.bz2
2010-0804-2125   r--  54193   animmerger-1.2.0.tar.gz
2010-0804-0733   r--  45810   animmerger-1.1.3.tar.bz2
2010-0804-0733   r--  52353   animmerger-1.1.3.tar.gz
2010-0802-1242   r--  42991   animmerger-1.1.2.tar.bz2
2010-0802-1242   r--  49233   animmerger-1.1.2.tar.gz
2010-0729-2330   r--  42401   animmerger-1.1.0.tar.bz2
2010-0729-2330   r--  48349   animmerger-1.1.0.tar.gz
2010-0728-2144   r--  35571   animmerger-1.0.1.tar.bz2
2010-0728-2144   r--  39570   animmerger-1.0.1.tar.gz
2010-0728-2137   r--  35492   animmerger-1.0.0.tar.bz2
2010-0728-2137   r--  39554   animmerger-1.0.0.tar.gz