A single picture conveys a lot of information about a scene, but it rarely conveys the scene's true dynamic nature. A video does both but is limited in resolution. Off-the-shelf camcorders can capture video at a resolution of 720 x 480 at 30 fps, which pales in comparison with consumer digital cameras, whose resolution can be as high as 16 megapixels. Video textures would be an ideal way to produce arbitrarily long video sequences, but only if very high resolution camcorders existed. The system of Chuang et al. can generate compelling-looking animated scenes, but it has a major drawback: it requires a considerable amount of manual input. Furthermore, since the animation is specified completely manually, it may not reflect the true scene dynamics. We take a different tack that bridges video textures and the system of Chuang et al.: we use as input a small collection of high-resolution stills that (under-)samples the dynamic scene. This collection offers both high resolution and some indication of the dynamic nature of the scene (assuming the scene's motion has some degree of regularity). We are also motivated by the need for a more practical solution that allows the user to easily generate the animated scene.
In this project, we describe a scene animation system that can easily generate a video or video texture from a small collection of stills (typically, 10 to 20 stills captured within 1 to 2 minutes, depending on the complexity of the scene motion). Our system first compares the input images to compute their pixel differences. It then sorts the sequence of images on the basis of this comparison to generate the image sequence of the video or video texture.
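The comparison-and-sorting step above can be sketched as follows. This is an illustrative minimal version, not our exact implementation: `pixel_difference` and `greedy_sort` are hypothetical names, the distance is a plain sum of absolute per-pixel differences, and the ordering is a simple greedy nearest-neighbor walk starting from an arbitrary frame.

```python
import numpy as np

def pixel_difference(a, b):
    """Sum of absolute per-pixel differences between two equal-size frames."""
    return float(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def greedy_sort(frames, start=0):
    """Order frames by repeatedly appending the unused frame most similar
    to the last one chosen -- one simple way to turn pairwise pixel
    differences into a plausible temporal order."""
    remaining = set(range(len(frames)))
    remaining.discard(start)
    order = [start]
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda i: pixel_difference(frames[last], frames[i]))
        order.append(nxt)
        remaining.discard(nxt)
    return order
```

A greedy walk like this is quadratic in the number of stills, which is acceptable here because the input collection is small (10 to 20 images).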
In more detail, the system first builds a graph that links similar images. It then recovers partial temporal orders among the input images and uses a second-order Markov chain model to generate the image sequence of the video or video texture (Fig. 1). The system is designed to allow the user to easily fine-tune the animation. For example, the user has the option to manually specify regions where animation occurs independently (which we term independent animated regions, or IARs) so that different time instances of each IAR can be used independently. An IAR with large motion variation can further be automatically decomposed into semi-independent animated regions (SIARs) to make the motion appear more natural. The user also has the option to modify the dynamics (e.g., speed up or slow down the motion, or choose different motion parameters) through a simple interface. Finally, all regions are frame-interpolated and feathered at their boundaries to produce the final animation. The user needs only a few minutes of interaction to finish the whole process. In this work, we limit our scope to quasi-periodic motion, i.e., dynamic textures.
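The idea of second-order Markov chain synthesis can be sketched as follows. This is a hedged simplification, not the exact model: `synthesize` is a hypothetical name, the transition weights are an assumed exponential of frame dissimilarity, and the second-order memory is reduced to a penalty on returning to the previous frame so the chain does not simply oscillate between two similar stills.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(dist, length, sigma=1.0):
    """Sample a frame sequence from pairwise dissimilarities dist[i, j].

    The next frame is drawn with probability decreasing in its distance
    from the current frame; the previous frame is down-weighted (a crude
    second-order term) and the current frame is excluded outright.
    """
    n = dist.shape[0]
    seq = [0, 1 % n]  # arbitrary starting pair of frames
    for _ in range(length - 2):
        prev, cur = seq[-2], seq[-1]
        w = np.exp(-dist[cur] / sigma)
        w[cur] = 0.0      # never repeat the same frame
        w[prev] *= 0.1    # discourage A-B-A oscillation
        seq.append(int(rng.choice(n, p=w / w.sum())))
    return seq
```

With a fixed random seed the output is reproducible, which is convenient when the user is iterating on the animation's parameters.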
Our system has two key features. The first is automatic partial temporal order recovery. This recovery algorithm is critical because the original capture order typically does not reflect the true dynamics due to temporal undersampling; as a result, the input images typically have to be sorted. The recovery algorithm automatically suggests orders for subsets of the stills, and these recovered partial orders provide reference dynamics for the animation. The second feature is the system's ability, at the user's request, to automatically decompose an IAR into SIARs and handle the interdependence among them. IAR decomposition can greatly reduce the dependence among the temporal orderings of local samples when the IAR has significant motion variation that would otherwise result in unsatisfactory animation. The system then finds the optimal processing order among the SIARs and imposes soft constraints to maintain motion smoothness across them.
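One way to picture "orders for subsets of stills" is the sketch below. It is an illustrative assumption, not the actual recovery algorithm: frames whose pairwise distance falls below a hypothetical threshold `tau` are grouped into subsets, and each subset is then ordered by the same greedy nearest-neighbor idea, yielding one partial order per subset.

```python
import numpy as np

def recover_partial_orders(dist, tau):
    """Group frames into subsets of mutually reachable similar frames
    (distance below tau), then order each subset greedily. Returns a
    list of per-subset frame orderings (the recovered partial orders)."""
    n = dist.shape[0]
    unvisited = set(range(n))
    orders = []
    while unvisited:
        # grow a connected component under the "similar enough" relation
        seed = unvisited.pop()
        comp, frontier = {seed}, [seed]
        while frontier:
            i = frontier.pop()
            for j in list(unvisited):
                if dist[i, j] < tau:
                    unvisited.discard(j)
                    comp.add(j)
                    frontier.append(j)
        # order the subset by a greedy nearest-neighbor walk
        comp = sorted(comp)
        order, rest = [comp[0]], set(comp[1:])
        while rest:
            nxt = min(rest, key=lambda j: dist[order[-1], j])
            order.append(nxt)
            rest.discard(nxt)
        orders.append(order)
    return orders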
The system is composed of an image loading and processing module, which handles the input to our system; an image comparison and sorting module, which produces the ordered sequence of images without any user interaction; and a video making module, which produces the system's output, a fine-tuned animation.
The basic pipeline of our system is fully automatic. The system first loads the input images and, if needed, applies some processing functions; it then compares the input images and computes their pixel differences. In the second stage, the images are sorted on the basis of this comparison. Finally, the video or video texture is generated. However, a fully automatic process may not produce satisfactory videos or video textures, because the computer has no high-level understanding of the scene. The user therefore has the option of modifying or fixing the dynamics of the animated scene through simple interfaces. All of the steps above are automatic; user-specified operations (A) may be added at the labeled places to improve the visual quality of the video.
The overall design of our system can be divided into several modules: image loading and processing, comparison, sorting, and video making. The image loading module loads the input images, which can be in formats such as BMP, JPEG, GIF, and PNG. The JPEG format is independent of resolution, aspect ratio, and image content, and the quality of the representation is individually configurable: decreasing the quality requirement results in a smaller file size, and image content with a simple structure (monochromatic areas, regular patterns, a small number of colors, etc.) automatically leads to smaller files. The reduction in image quality becomes visible as compression artifacts in the form of a "raster" of 8 x 8 pixel blocks, which first appear within areas of high complexity. A reliable reproduction of color information, e.g. for background colors, is not guaranteed at larger compression rates, and compressing monochromatic images (e.g. b/w graphics, technical drawings, etc.) frequently results in clearly perceptible compression artifacts. For image editing, either professional or otherwise, PNG provides a useful format for the storage of intermediate stages of editing: since PNG's compression is fully lossless and it supports up to 48-bit true color or 16-bit grayscale, saving, restoring, and re-saving an image will not degrade its quality, unlike standard JPEG (even at its highest quality settings). BMP (bitmap) is an uncompressed, bit-mapped graphics format used internally by the Microsoft Windows graphics subsystem (GDI) and commonly used as a simple graphics file format on that platform.
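Loading stills in these formats can be sketched with the Pillow library (an assumption; any image library with similar decoders would do). The `load_images` name and the conversion of every frame to RGB are illustrative choices so that later pixel comparisons operate on a uniform representation.

```python
from pathlib import Path

from PIL import Image  # Pillow, assumed available

SUPPORTED = {".bmp", ".jpg", ".jpeg", ".gif", ".png"}

def load_images(folder):
    """Load every supported still from a folder, in filename order,
    converted to RGB for a uniform pixel representation."""
    frames = []
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() in SUPPORTED:
            frames.append(Image.open(path).convert("RGB"))
    return frames
```

Sorting by filename gives a deterministic initial ordering, which the comparison and sorting module then replaces with the recovered temporal order.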
The processing functions in this module include grayscale conversion, inversion, brightness increase and decrease, zoom in and zoom out, watermarking, etc. Each of these functions is explained in detail in the attached project report.
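Three of these processing functions can be sketched on NumPy arrays as follows. This is a minimal illustration, not our module's exact code: the function names are hypothetical, and the grayscale weights are the common ITU-R BT.601 luminance coefficients.

```python
import numpy as np

def to_grayscale(img):
    """Luminance-weighted grayscale of an H x W x 3 uint8 image."""
    weights = np.array([0.299, 0.587, 0.114])  # BT.601 coefficients
    return (img.astype(np.float64) @ weights).astype(np.uint8)

def invert(img):
    """Photographic negative of a uint8 image."""
    return 255 - img

def adjust_brightness(img, delta):
    """Add delta to every pixel, clipped to the valid uint8 range."""
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)
```

The widening casts (to `float64` and `int16`) avoid the wrap-around that uint8 arithmetic would otherwise produce on over- or underflow.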